
code | faultName | message | severity

F0156 | fltFabricComputeSlotEpMisplacedInChassisSlot | Server, vendor([vendor]), model([model]), serial([serial]) in slot [chassisId]/[slotId] presence: [presence] | warning
F0157 | fltFabricComputeSlotEpServerIdentificationProblem | Problem identifying server in slot [chassisId]/[slotId] | warning
F0169 | fltVnicEtherConfig-failed | Eth vNIC [name], service profile [name] failed to apply configuration | minor
F0170 | fltVnicFcConfig-failed | FC vHBA [name], service profile [name] failed to apply configuration | minor
F0174 | fltProcessorUnitInoperable | Processor [id] on server [chassisId]/[slotId] operability: [operability] | major
F0175 | fltProcessorUnitThermalNonCritical | Processor [id] on server [chassisId]/[slotId] temperature: [thermal]; Processor [id] on server [id] temperature: [thermal] | info
F0176 | fltProcessorUnitThermalThresholdCritical | Processor [id] on server [chassisId]/[slotId] temperature: [thermal]; Processor [id] on server [id] temperature: [thermal] | major
F0177 | fltProcessorUnitThermalThresholdNonRecoverable | Processor [id] on server [chassisId]/[slotId] temperature: [thermal]; Processor [id] on server [id] temperature: [thermal] | critical
F0178 | fltProcessorUnitVoltageThresholdNonCritical | Processor [id] on server [chassisId]/[slotId] voltage: [voltage] | minor
F0179 | fltProcessorUnitVoltageThresholdCritical | Processor [id] on server [chassisId]/[slotId] voltage: [voltage] | major
F0180 | fltProcessorUnitVoltageThresholdNonRecoverable | Processor [id] on server [chassisId]/[slotId] voltage: [voltage] | critical
F0181 | fltStorageLocalDiskInoperable | Local disk [id] on server [chassisId]/[slotId] operability: [operability]; Local disk [id] on server [id] operability: [operability] | major
F0182 | fltStorageItemCapacityExceeded | Disk usage for partition [name] on fabric interconnect [id] exceeded 70% | minor
F0183 | fltStorageItemCapacityWarning | Disk usage for partition [name] on fabric interconnect [id] exceeded 90% | major
F0184 | fltMemoryUnitDegraded | DIMM [location] on server [chassisId]/[slotId] operability: [operability]; DIMM [location] on server [id] operability: [operability] | minor
F0185 | fltMemoryUnitInoperable | DIMM [location] on server [chassisId]/[slotId] operability: [operability]; DIMM [location] on server [id] operability: [operability] | major
F0186 | fltMemoryUnitThermalThresholdNonCritical | DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]; DIMM [location] on server [id] temperature: [thermal] | info
F0187 | fltMemoryUnitThermalThresholdCritical | DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]; DIMM [location] on server [id] temperature: [thermal] | major
F0188 | fltMemoryUnitThermalThresholdNonRecoverable | DIMM [location] on server [chassisId]/[slotId] temperature: [thermal]; DIMM [location] on server [id] temperature: [thermal] | critical
F0189 | fltMemoryArrayVoltageThresholdNonCritical | Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]; Memory array [id] on server [id] voltage: [voltage] | minor
F0190 | fltMemoryArrayVoltageThresholdCritical | Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]; Memory array [id] on server [id] voltage: [voltage] | major
F0191 | fltMemoryArrayVoltageThresholdNonRecoverable | Memory array [id] on server [chassisId]/[slotId] voltage: [voltage]; Memory array [id] on server [id] voltage: [voltage] | critical
F0200 | fltAdaptorUnitUnidentifiable-fru | Adapter [id] in server [id] has unidentified FRU; Adapter [id] in server [chassisId]/[slotId] has unidentified FRU | major
F0203 | fltAdaptorUnitMissing | Adapter [id] in server [id] presence: [presence]; Adapter [id] in server [chassisId]/[slotId] presence: [presence] | warning
F0206 | fltAdaptorUnitAdaptorReachability | Adapter [id]/[id] is unreachable; Adapter [chassisId]/[slotId]/[id] is unreachable | info
F0207 | fltAdaptorHostIfLink-down | Adapter [transport] host interface [id]/[id]/[id] link state: [linkState]; Adapter [transport] host interface [chassisId]/[slotId]/[id]/[id] link state: [linkState] | major
F0209 | fltAdaptorExtIfLink-down | Adapter uplink interface [id]/[id]/[id] link state: [linkState]; Adapter uplink interface [chassisId]/[slotId]/[id]/[id] link state: [linkState] | major
F0276 | fltPortPIoLink-down | [transport] port [portId] on chassis [id] oper state: [operState], reason: [stateQual]; [transport] port [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual] | major
F0277 | fltPortPIoFailed | [transport] port [portId] on chassis [id] oper state: [operState], reason: [stateQual]; [transport] port [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual] | major
F0278 | fltPortPIoHardware-failure | [transport] port [portId] on chassis [id] oper state: [operState], reason: hardware-failure; [transport] port [portId] on fabric interconnect [id] oper state: [operState], reason: hardware-failure | major
F0279 | fltPortPIoSfp-not-present | [transport] port [portId] on chassis [id] oper state: [operState]; [transport] port [portId] on fabric interconnect [id] oper state: [operState] | info
F0282 | fltFabricExternalPcDown | [type] port-channel [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]; [type] port-channel [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual] | major
F0283 | fltDcxVcDown | [transport] VIF [chassisId] / [slotId] [switchId]-[id] down, reason: [stateQual]; [transport] VIF [chassisId] / [id] [switchId]-[id] down, reason: [stateQual] | major
F0291 | fltNetworkElementInoperable | Fabric Interconnect [id] operability: [operability] | critical
F0293 | fltMgmtEntityDegraded | Fabric Interconnect [id], HA Cluster interconnect link failure | major
F0294 | fltMgmtEntityDown | Fabric Interconnect [id], HA Cluster interconnect total link failure | critical
F0304 | fltDcxNsFailed | Server [chassisId]/[slotId] (service profile: [assignedToDn]) virtual network interface allocation failed. Server [id] (service profile: [assignedToDn]) virtual network interface allocation failed. | major
F0305 | fltComputePhysicalInsufficientlyEquipped | Server [id] (service profile: [assignedToDn]) has insufficient number of DIMMs, CPUs and/or adapters; Server [chassisId]/[slotId] (service profile: [assignedToDn]) has insufficient number of DIMMs, CPUs and/or adapters | minor
F0306 | fltComputePhysicalIdentityUnestablishable | Server [id] (service profile: [assignedToDn]) has an invalid FRU; Server [chassisId]/[slotId] (service profile: [assignedToDn]) has an invalid FRU | minor
F0310 | fltComputeBoardPowerError | Motherboard of server [chassisId]/[slotId] (service profile: [assignedToDn]) power: [operPower]; Motherboard of server [id] (service profile: [assignedToDn]) power: [operPower] | major
F0311 | fltComputePhysicalPowerProblem | Server [id] (service profile: [assignedToDn]) oper state: [operState]; Server [chassisId]/[slotId] (service profile: [assignedToDn]) oper state: [operState] | major
F0312 | fltComputePhysicalThermalProblem | Server [id] (service profile: [assignedToDn]) oper state: [operState]; Server [chassisId]/[slotId] (service profile: [assignedToDn]) oper state: [operState] | minor
F0313 | fltComputePhysicalBiosPostTimeout | Server [id] (service profile: [assignedToDn]) BIOS failed power-on self test; Server [chassisId]/[slotId] (service profile: [assignedToDn]) BIOS failed power-on self test | critical
F0314 | fltComputePhysicalDiscoveryFailed | Server [id] (service profile: [assignedToDn]) discovery: [discovery]; Server [chassisId]/[slotId] (service profile: [assignedToDn]) discovery: [discovery] | major
F0315 | fltComputePhysicalAssociationFailed | Service profile [assignedToDn] failed to associate with server [id]; Service profile [assignedToDn] failed to associate with server [chassisId]/[slotId] | critical
F0317 | fltComputePhysicalInoperable | Server [id] (service profile: [assignedToDn]) health: [operability]; Server [chassisId]/[slotId] (service profile: [assignedToDn]) health: [operability] | major
F0318 | fltComputePhysicalUnassignedMissing | Server [id] (no profile) missing; Server [chassisId]/[slotId] (no profile) missing | minor
F0319 | fltComputePhysicalAssignedMissing | Server [id] (service profile: [assignedToDn]) missing; Server [chassisId]/[slotId] (service profile: [assignedToDn]) missing | major
F0320 | fltComputePhysicalUnidentified | Server [id] (service profile: [assignedToDn]) has an invalid FRU: [presence]; Server [chassisId]/[slotId] (service profile: [assignedToDn]) has an invalid FRU: [presence] | minor
F0321 | fltComputePhysicalUnassignedInaccessible | Server [id] (no profile) inaccessible; Server [chassisId]/[slotId] (no profile) inaccessible | warning
F0322 | fltComputePhysicalAssignedInaccessible | Server [id] (service profile: [assignedToDn]) inaccessible; Server [chassisId]/[slotId] (service profile: [assignedToDn]) inaccessible | minor
F0324 | fltLsServerFailed | Service profile [name] failed | major
F0326 | fltLsServerDiscoveryFailed | Service profile [name] discovery failed | major
F0327 | fltLsServerConfigFailure | Service profile [name] configuration failed due to [configQualifier] | major
F0329 | fltLsServerMaintenanceFailed | Service profile [name] maintenance failed | major
F0330 | fltLsServerRemoved | Service profile [name] underlying resource removed | major
F0331 | fltLsServerInaccessible | Service profile [name] cannot be accessed | major
F0332 | fltLsServerAssociationFailed | Service profile [name] association failed for [pnDn] | major
F0334 | fltLsServerUnassociated | Service profile [name] is not associated | warning
F0337 | fltLsServerServer-unfulfilled | Server [pnDn] does not fulfill Service profile [name] due to [configQualifier] | warning
F0367 | fltEtherSwitchIntFIoSatellite-connection-absent | No link between IOM port [chassisId]/[slotId]/[portId] and fabric interconnect [switchId]: [peerSlotId]/[peerPortId] | major
F0368 | fltEtherSwitchIntFIoSatellite-wiring-problem | Invalid connection between IOM port [chassisId]/[slotId]/[portId] and fabric interconnect [switchId]: [peerSlotId]/[peerPortId] | info
F0369 | fltEquipmentPsuPowerSupplyProblem | Power supply [id] in chassis [id] power: [power]; Power supply [id] in fabric interconnect [id] power: [power]; Power supply [id] in fex [id] power: [power]; Power supply [id] in server [id] power: [power] | major
F0371 | fltEquipmentFanDegraded | Fan [id] in Fan Module [tray]-[id] under chassis [id] operability: [operability]; Fan [id] in fabric interconnect [id] operability: [operability]; Fan [id] in fex [id] operability: [operability]; Fan [id] in Fan Module [tray]-[id] under server [id] operability: [operability] | minor
F0373 | fltEquipmentFanInoperable | Fan [id] in Fan Module [tray]-[id] under chassis [id] operability: [operability]; Fan [id] in fabric interconnect [id] operability: [operability]; Fan [id] in fex [id] operability: [operability]; Fan [id] in Fan Module [tray]-[id] under server [id] operability: [operability] | major
F0374 | fltEquipmentPsuInoperable | Power supply [id] in chassis [id] operability: [operability]; Power supply [id] in fabric interconnect [id] operability: [operability]; Power supply [id] in fex [id] operability: [operability]; Power supply [id] in server [id] operability: [operability] | major
F0376 | fltEquipmentIOCardRemoved | [side] IOM [chassisId]/[id] ([switchId]) is removed | critical
F0377 | fltEquipmentFanModuleMissing | Fan module [tray]-[id] in chassis [id] presence: [presence]; Fan module [tray]-[id] in server [id] presence: [presence]; Fan module [tray]-[id] in fabric interconnect [id] presence: [presence] | warning
F0378 | fltEquipmentPsuMissing | Power supply [id] in chassis [id] presence: [presence]; Power supply [id] in fabric interconnect [id] presence: [presence]; Power supply [id] in fex [id] presence: [presence]; Power supply [id] in server [id] presence: [presence] | warning
F0379 | fltEquipmentIOCardThermalProblem | [side] IOM [chassisId]/[id] ([switchId]) operState: [operState] | major
F0380 | fltEquipmentFanModuleThermalThresholdNonCritical | Fan module [tray]-[id] in chassis [id] temperature: [thermal]; Fan module [tray]-[id] in server [id] temperature: [thermal]; Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal] | minor
F0381 | fltEquipmentPsuThermalThresholdNonCritical | Power supply [id] in chassis [id] temperature: [thermal]; Power supply [id] in fabric interconnect [id] temperature: [thermal]; Power supply [id] in server [id] temperature: [thermal] | minor
F0382 | fltEquipmentFanModuleThermalThresholdCritical | Fan module [tray]-[id] in chassis [id] temperature: [thermal]; Fan module [tray]-[id] in server [id] temperature: [thermal]; Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal] | major
F0383 | fltEquipmentPsuThermalThresholdCritical | Power supply [id] in chassis [id] temperature: [thermal]; Power supply [id] in fabric interconnect [id] temperature: [thermal]; Power supply [id] in server [id] temperature: [thermal] | major
F0384 | fltEquipmentFanModuleThermalThresholdNonRecoverable | Fan module [tray]-[id] in chassis [id] temperature: [thermal]; Fan module [tray]-[id] in server [id] temperature: [thermal]; Fan module [tray]-[id] in fabric interconnect [id] temperature: [thermal] | critical
F0385 | fltEquipmentPsuThermalThresholdNonRecoverable | Power supply [id] in chassis [id] temperature: [thermal]; Power supply [id] in fabric interconnect [id] temperature: [thermal]; Power supply [id] in server [id] temperature: [thermal] | critical
F0387 | fltEquipmentPsuVoltageThresholdNonCritical | Power supply [id] in chassis [id] voltage: [voltage]; Power supply [id] in fabric interconnect [id] voltage: [voltage]; Power supply [id] in fex [id] voltage: [voltage]; Power supply [id] in server [id] voltage: [voltage] | minor
F0389 | fltEquipmentPsuVoltageThresholdCritical | Power supply [id] in chassis [id] voltage: [voltage]; Power supply [id] in fabric interconnect [id] voltage: [voltage]; Power supply [id] in fex [id] voltage: [voltage]; Power supply [id] in server [id] voltage: [voltage] | major
F0391 | fltEquipmentPsuVoltageThresholdNonRecoverable | Power supply [id] in chassis [id] voltage: [voltage]; Power supply [id] in fabric interconnect [id] voltage: [voltage]; Power supply [id] in fex [id] voltage: [voltage]; Power supply [id] in server [id] voltage: [voltage] | critical
F0392 | fltEquipmentPsuPerfThresholdNonCritical | Power supply [id] in chassis [id] output power: [perf]; Power supply [id] in fabric interconnect [id] output power: [perf]; Power supply [id] in server [id] output power: [perf] | minor
F0393 | fltEquipmentPsuPerfThresholdCritical | Power supply [id] in chassis [id] output power: [perf]; Power supply [id] in fabric interconnect [id] output power: [perf]; Power supply [id] in server [id] output power: [perf] | major
F0394 | fltEquipmentPsuPerfThresholdNonRecoverable | Power supply [id] in chassis [id] output power: [perf]; Power supply [id] in fabric interconnect [id] output power: [perf]; Power supply [id] in server [id] output power: [perf] | critical
F0395 | fltEquipmentFanPerfThresholdNonCritical | Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]; Fan [id] in fabric interconnect [id] speed: [perf]; Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf] | info
F0396 | fltEquipmentFanPerfThresholdCritical | Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]; Fan [id] in fabric interconnect [id] speed: [perf]; Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf] | info
F0397 | fltEquipmentFanPerfThresholdNonRecoverable | Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]; Fan [id] in fabric interconnect [id] speed: [perf]; Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf] | info
F0398 | fltEquipmentIOCardFirmwareUpgrade | Chassis controller in IOM [chassisId]/[id] ([switchId]) firmware upgrade problem: [upgradeStatus] | major
F0399 | fltEquipmentChassisUnsupportedConnectivity | Current connectivity for chassis [id] does not match discovery policy: [configState] | major
F0400 | fltEquipmentChassisUnacknowledged | Chassis [id] connectivity configuration: [configState] | warning
F0401 | fltEquipmentIOCardUnsupportedConnectivity | IOM [chassisId]/[id] ([switchId]) current connectivity does not match discovery policy: [configState] | major
F0402 | fltEquipmentIOCardUnacknowledged | IOM [chassisId]/[id] ([switchId]) connectivity configuration: [configState] | warning
F0403 | fltEquipmentIOCardPeerDisconnected | IOM [chassisId]/[id] ([switchId]) peer connectivity: [peerCommStatus] | warning
F0404 | fltEquipmentChassisIdentity | Chassis [id] has a mismatch between FRU identity reported by Fabric/IOM vs. FRU identity reported by CMC | critical
F0405 | fltEquipmentIOCardIdentity | [side] IOM [chassisId]/[id] ([switchId]) has a malformed FRU | critical
F0406 | fltEquipmentFanModuleIdentity | Fan Module [tray]-[id] in chassis [id] has a malformed FRU; Fan Module [tray]-[id] in server [id] has a malformed FRU; Fan Module [tray]-[id] in fabric interconnect [id] has a malformed FRU | critical
F0407 | fltEquipmentPsuIdentity | Power supply [id] on chassis [id] has a malformed FRU; Power supply [id] on server [id] has a malformed FRU | critical
F0408 | fltEquipmentChassisPowerProblem | Power state on chassis [id] is [power] | major
F0409 | fltEquipmentChassisThermalThresholdCritical | Thermal condition on chassis [id] cause: [thermalStateQualifier] | major
F0410 | fltEquipmentChassisThermalThresholdNonCritical | Thermal condition on chassis [id] cause: [thermalStateQualifier] | minor
F0411 | fltEquipmentChassisThermalThresholdNonRecoverable | Thermal condition on chassis [id] cause: [thermalStateQualifier] | critical
F0424 | fltComputeBoardCmosVoltageThresholdCritical | Possible loss of CMOS settings: CMOS battery voltage on server [chassisId]/[slotId] is [cmosVoltage]; Possible loss of CMOS settings: CMOS battery voltage on server [id] is [cmosVoltage] | minor
F0425 | fltComputeBoardCmosVoltageThresholdNonRecoverable | Possible loss of CMOS settings: CMOS battery voltage on server [chassisId]/[slotId] is [cmosVoltage]; Possible loss of CMOS settings: CMOS battery voltage on server [id] is [cmosVoltage] | major
F0428 | fltMgmtEntityElection-failure | Fabric Interconnect [id], election of primary managemt instance has failed | critical
F0429 | fltMgmtEntityHa-not-ready | Fabric Interconnect [id], HA functionality not ready | major
F0430 | fltMgmtEntityVersion-incompatible | Fabric Interconnect [id], management services, incompatible versions | critical
F0434 | fltEquipmentFanMissing | Fan [id] in fabric interconnect [id] presence: [presence]; Fan [id] in fex [id] presence: [presence]; Fan [id] in Fan Module [tray]-[id] under server [id] presence: [presence] | warning
F0435 | fltEquipmentIOCardAutoUpgradingFirmware | IOM [chassisId]/[id] ([switchId]) is auto upgrading firmware | major
F0436 | fltFirmwarePackItemImageMissing | [type] image with vendor [hwVendor], model [hwModel] and version [version] is deleted | major
F0440 | fltEtherSwitchIntFIoSatellite-wiring-numbers-unexpected | Chassis discovery policy conflict: Link IOM [chassisId]/[slotId]/[portId] to fabric interconnect [switchId]: [peerSlotId]/[peerPortId] not configured | info
F0451 | fltMgmtEntityManagement-services-failure | Fabric Interconnect [id], management services have failed | critical
F0452 | fltMgmtEntityManagement-services-unresponsive | Fabric Interconnect [id], management services are unresponsive | critical
F0456 | fltEquipmentChassisInoperable | Chassis [id] operability: [operability] | critical
F0458 | fltEtherServerIntFIoHardware-failure | IOM [transport] interface [portId] on chassis [id] oper state: [operState], reason: [stateQual]; Fabric Interconnect [transport] interface [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual]; IOM [transport] interface [portId] on fex [id] oper state: [operState], reason: [stateQual] | major
F0459 | fltDcxVcMgmt-vif-down | IOM [chassisId] / [slotId] ([switchId]) management VIF [id] down, reason [stateQual] | major
F0460 | fltSysdebugMEpLogMEpLogLog | Log capacity on [side] IOM [chassisId]/[id] is [capacity]; Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]; Log capacity on Management Controller on server [id] is [capacity] | info
F0461 | fltSysdebugMEpLogMEpLogVeryLow | Log capacity on [side] IOM [chassisId]/[id] is [capacity]; Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]; Log capacity on Management Controller on server [id] is [capacity] | info
F0462 | fltSysdebugMEpLogMEpLogFull | Log capacity on [side] IOM [chassisId]/[id] is [capacity]; Log capacity on Management Controller on server [chassisId]/[slotId] is [capacity]; Log capacity on Management Controller on server [id] is [capacity] | info
F0463 | fltComputePoolEmpty | server pool [name] is empty | minor
F0464 | fltUuidpoolPoolEmpty | UUID suffix pool [name] is empty | minor
F0465 | fltIppoolPoolEmpty | IP pool [name] is empty | minor
F0466 | fltMacpoolPoolEmpty | MAC pool [name] is empty | minor
F0470 | fltFirmwareUpdatableImageUnusable | backup image is unusable. reason: [operStateQual] | major
F0471 | fltFirmwareBootUnitCantBoot | unable to boot the startup image. End point booted with backup image | major
F0476 | fltFcpoolInitiatorsEmpty | FC pool [purpose] [name] is empty | minor
F0478 | fltEquipmentIOCardInaccessible | [side] IOM [chassisId]/[id] ([switchId]) is inaccessible | critical
F0479 | fltDcxVIfLinkState | Virtual interface [id] link state is down | major
F0480 | fltEquipmentFanModuleDegraded | Fan module [tray]-[id] in chassis [id] operability: [operability]; Fan module [tray]-[id] in server [id] operability: [operability]; Fan module [tray]-[id] in fabric interconnect [id] operability: [operability] | minor
F0481 | fltEquipmentIOCardPost-failure | [side] IOM [chassisId]/[id] ([switchId]) POST failure | major
F0484 | fltEquipmentFanPerfThresholdLowerNonRecoverable | Fan [id] in Fan Module [tray]-[id] under chassis [id] speed: [perf]; Fan [id] in fabric interconnect [id] speed: [perf]; Fan [id] in Fan Module [tray]-[id] under server [id] speed: [perf] | critical
F0502 | fltMemoryUnitIdentityUnestablishable | DIMM [location] on server [chassisId]/[slotId] has an invalid FRU; DIMM [location] on server [id] has an invalid FRU | warning
F0517 | fltComputePhysicalPost-failure | Server [id] POST or diagnostic failure; Server [chassisId]/[slotId] POST or diagnostic failure | major
F0528 | fltEquipmentPsuOffline | Power supply [id] in chassis [id] power: [power]; Power supply [id] in fabric interconnect [id] power: [power]; Power supply [id] in fex [id] power: [power]; Power supply [id] in server [id] power: [power] | warning
F0531 | fltStorageRaidBatteryInoperable | RAID Battery on server [chassisId]/[slotId] operability: [operability]; RAID Battery on server [id] operability: [operability] | major
F0532 | fltSysdebugMEpLogTransferError | Server [chassisId]/[slotId] [type] transfer failed: [operState]; Server [id] [type] transfer failed: [operState] | info
F0533 | fltComputeRtcBatteryInoperable | RTC Battery on server [chassisId]/[slotId] operability: [operability] | major
F0535 | fltMemoryBufferUnitThermalThresholdNonCritical | Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]; Buffer Unit [id] on server [id] temperature: [thermal] | info
F0536 | fltMemoryBufferUnitThermalThresholdCritical | Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]; Buffer Unit [id] on server [id] temperature: [thermal] | major
F0537 | fltMemoryBufferUnitThermalThresholdNonRecoverable | Buffer Unit [id] on server [chassisId]/[slotId] temperature: [thermal]; Buffer Unit [id] on server [id] temperature: [thermal] | critical
F0538 | fltComputeIOHubThermalNonCritical | IO Hub on server [chassisId]/[slotId] temperature: [thermal] | minor
F0539 | fltComputeIOHubThermalThresholdCritical | IO Hub on server [chassisId]/[slotId] temperature: [thermal] | major
F0540 | fltComputeIOHubThermalThresholdNonRecoverable | IO Hub on server [chassisId]/[slotId] temperature: [thermal] | critical
F0543 | fltEquipmentChassisIdentity-unestablishable | Chassis [id] has an invalid FRU | major
F0549 | fltSwVlanPortNsResourceStatus | Vlan-Port Resource exceeded | critical
F0620 | fltFabricVlanPrimaryVlanMissingIsolated | Primary Vlan can not be resolved for isolated vlan [name] | minor
F0621 | fltFabricLanPinGroupEmpty | LAN Pin Group [name] is empty | minor
F0622 | fltFabricSanPinGroupEmpty | SAN Pin Group [name] is empty | minor
F0625 | fltAdaptorExtEthIfMisConnect | Adapter [id] eth interface [id] in server [id] mis-connected | warning
F0626 | fltAdaptorHostEthIfMisConnect | Adapter [id] eth interface [id] in server [id] mis-connected | warning
F0635 | fltPowerBudgetPowerBudgetCmcProblem | Power cap application failed for chassis [id] | major
F0637 | fltPowerBudgetPowerBudgetBmcProblem | Power cap application failed for server [chassisId]/[slotId]; Power cap application failed for server [id] | major
F0640 | fltPowerBudgetPowerBudgetDiscFail | Insufficient power available to discover server [chassisId]/[slotId]; Insufficient power available to discover server [id] | major
F0642 | fltPowerGroupPowerGroupInsufficientBudget | insufficient budget for power group [name] | major
F0643 | fltPowerGroupPowerGroupBudgetIncorrect | admin committed insufficient for power group [name], using previous value [operCommitted] | major
F0670 | fltLicenseInstanceGracePeriodWarning1 | license for [feature] on fabric-interconnect [scope] has entered into the grace period. | warning
F0671 | fltLicenseInstanceGracePeriodWarning2 | license for [feature] on fabric-interconnect [scope] is running in the grace period for more than 10 days | warning
F0672 | fltLicenseInstanceGracePeriodWarning3 | license for [feature] on fabric-interconnect [scope] is running in the grace period for more than 30 days | warning
F0673 | fltLicenseInstanceGracePeriodWarning4 | license for [feature] on fabric-interconnect [scope] is running in the grace period for more than 60 days | warning
F0674 | fltLicenseInstanceGracePeriodWarning5 | license for [feature] on fabric-interconnect [scope] is running in the grace period for more than 90 days | major
F0675 | fltLicenseInstanceGracePeriodWarning6 | license for [feature] on fabric-interconnect [scope] is running in the grace period for more than 119 days | critical
F0676 | fltLicenseInstanceGracePeriodWarning7 | Grace period for [feature] on fabric-interconnect [scope] is expired. Please acquire a license for the same. | critical
F0677 | fltLicenseFileBadLicenseFile | license file [name] on fabric-interconnect [scope] can not be installed | critical
F0678 | fltLicenseFileFileNotDeleted | license file [name] from fabric-interconnect [scope] could not be deleted | critical
F0688 | fltMgmtIfMisConnect | Management Port [id] in server [id] is mis connected | warning
F0689 | fltLsComputeBindingAssignmentRequirementsNotMet | Assignment of service profile [name] to server [pnDn] failed | minor
F0702 | fltEquipmentFexPost-failure | fex [id] POST failure | major
F0703 | fltEquipmentFexIdentity | Fex [id] has a malformed FRU | critical
F0708 | fltAdaptorHostEthIfMissing | Connection to Adapter [id] eth interface [id] in server [id] missing | warning
F0713 | fltPortPIoInvalid-sfp | [transport] port [portId] on chassis [id] role : [ifRole] transceiver type: [xcvrType]; [transport] port [portId] on fabric interconnect [id] role : [ifRole] transceiver type: [xcvrType] | major
F0717 | fltMgmtIfMissing | Connection to Management Port [id] in server [id] is missing | warning
F0727 | fltFabricEthLanPcEpDown | [type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership] | major
F0728 | fltFabricFcSanPcEpDown | [type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership] | major
F0729 | fltEquipmentIOCardThermalThresholdNonCritical | [side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal] | minor
F0730 | fltEquipmentIOCardThermalThresholdCritical | [side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal] | major
F0731 | fltEquipmentIOCardThermalThresholdNonRecoverable | [side] IOM [chassisId]/[id] ([switchId]) temperature: [thermal] | critical
F0733 | fltEquipmentChassisSeeprom-inoperable | Device [id] SEEPROM operability: [seepromOperState] | critical
F0734 | fltFabricFcSanPcEpIncompatibleSpeed | Member [slotId]/[portId] cannot be added to SAN Port-Channel [portId] on fabric interconnect [id], reason: [membership] | major
F0735 | fltFabricFcSanPcIncompatibleSpeed | Cannot set admin speed to the requested value, Speed incompatible with member ports in the port-channel | major
F0736 | fltExtmgmtIfMgmtifdown | Management interface on Fabric Interconnect [id] is [operState] | major
F0740 | fltPowerChassisMemberPowerGroupCapInsufficient | Chassis [id] cannot be capped as group cap is low. Please consider raising the cap. | major
F0741 | fltPowerChassisMemberChassisFirmwareProblem | Chassis [id] cannot be capped as at least one of the CMC or CIMC or BIOS firmware version is less than 1.4. Please upgrade the firmware for cap to be applied. | major
F0742 | fltPowerChassisMemberChassisPsuInsufficient | Chassis [id] cannot be capped as at least two PSU need to be powered | major
F0743 | fltPowerChassisMemberChassisPsuRedundanceFailure | Chassis [id] was configured for redundancy, but running in a non-redundant configuration. | major
F0744 | fltPowerBudgetPowerCapReachedCommit | P-State lowered as consumption hit power cap for server [chassisId]/[slotId]; P-State lowered as consumption hit power cap for server [id] | info
F0747 | fltSysdebugAutoCoreFileExportTargetAutoCoreTransferFailure | Auto core transfer failure at remote server [hostname]:[path] [exportFailureReason] | warning
F0757 | fltFabricMonSpanConfigFail | Configuration for traffic monitor [name] failed, reason: [configFailReason] | major
F0764 | fltPowerBudgetChassisPsuInsufficient | Chassis [id] has had PSU failures. Please correct the problem by checking input power or replace the PSU | major
F0765 | fltPowerBudgetTStateTransition | Blade [chassisId]/[slotId] has been severely throttled. CIMC can recover if budget is redeployed to the blade or by rebooting the blade. If problem persists, please ensure that OS is ACPI compliant; Rack server [id] has been severely throttled. CIMC can recover if budget is redeployed to the blade or by rebooting the blade. If problem persists, please ensure that OS is ACPI compliant | critical
F0766 | fltPowerPolicyPowerPolicyApplicationFail | Insufficient budget to apply no-cap priority through policy [name]. Blades will continue to be capped | minor
F0772 | fltMgmtIfNew | New connection discovered on Management Port [id] in server [id] | warning
F0775 | fltAdaptorExtEthIfMissing | Connection to Adapter [id] eth interface [id] in server [id] missing | warning
F0776 | fltStorageLocalDiskSlotEpUnusable | Local disk [id] on server [serverId] is not usable by the operating system | minor
F0777 | fltFabricEthEstcPcEpDown | [type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership] | major
F0778 | fltEquipmentFexIdentity-unestablishable | Fex [id] has an invalid FRU | major
F0794 | fltEquipmentFanModuleInoperable | Fan module [tray]-[id] in chassis [id] operability: [operability]; Fan module [tray]-[id] in server [id] operability: [operability]; Fan module [tray]-[id] in fabric interconnect [id] operability: [operability] | major
F0795 | fltLsmaintMaintPolicyUnresolvableScheduler | Schedule [schedName] referenced by maintenance policy [name] does not exist | warning
F0796 | fltFabricVsanErrorDisabled | VSAN [name] is [operState] | major
F0797 | fltFabricVsanEpErrorDisabled | [type] Port [slotId]/[portId] on fabric interconnect [switchId] has VSAN [id] in error disabled state; Port channel [portId] on fabric interconnect [switchId] has VSAN [id] in error disabled state | major
F0798 | fltPowerBudgetFirmwareMismatch | Firmware on blade [chassisId]/[slotId] does not allow chassis level power capping. Please consider upgrading to at least 1.4 version | major
F0801 | fltProcessorUnitIdentity-unestablishable | Processor [id] on server [chassisId]/[slotId] has an invalid FRU; Processor [id] on server [id] has an invalid FRU | warning
F0821 | fltIqnpoolPoolEmpty | iqn pool [name] is empty | minor
F0831 | fltFabricDceSwSrvPcEpDown | [type] Member [slotId]/[portId] of Port-Channel [portId] on fabric interconnect [id] is down, membership: [membership] | major
F0832 | fltFabricEpMgrEpTransModeFail | Port constraint violation on switch [id]: [confQual] | critical
F0833 | fltFabricVlanMisconfigured | VLAN [name] is [operState] because of conflicting vlan-id with an fcoe-vlan | critical
F0834 | fltFabricPIoEpErrorMisconfigured | Interface [name] is [operState]. Reason: [operStateReason] | critical
F0835 | fltFabricEthLanEpMissingPrimaryVlan | Primary vlan missing from fabric: [switchId], port: [slotId]/[portId]. | major
F0836 | fltFabricEthLanPcMissingPrimaryVlan | Primary vlan missing from fabric: [switchId], port-channel: [portId]. | major
F0840 | fltVnicEtherPinningMismatch | Hard pinning target for eth vNIC [name], service profile [name] does not have all the required vlans configured | warning
F0841 | fltVnicEtherPinningMisconfig | Hard pinning target for eth vNIC [name], service profile [name] is missing or misconfigured | major
F0842 | fltProcessorUnitDisabled | Processor [id] on server [chassisId]/[slotId] operState: [operState]; Processor [id] on server [id] operState: [operState] | info
F0843 | fltStorageLocalLunInoperable | Local LUN [id] on server [chassisId]/[slotId] operability: [operability]; Local LUN [id] on server [id] operability: [operability] | major
F0844 | fltMemoryUnitDisabled | DIMM [location] on server [chassisId]/[slotId] operState: [operState]; DIMM [location] on server [id] operaState: [operState] | major
F0856 | fltFirmwareBootUnitActivateStatusFailed | Activation failed and Activate Status set to failed. | warning
F0858 | fltFabricInternalPcDown | [type] port-channel [portId] on fabric interconnect [id] oper state: [operState], reason: [stateQual] | major
F0863 | fltMgmtEntityDevice-1-shared-storage-error | device [chassis1], error accessing shared-storage | warning
F0864 | fltMgmtEntityDevice-2-shared-storage-error | device [chassis2], error accessing shared-storage | warning
F0865 | fltMgmtEntityDevice-3-shared-storage-error | device [chassis3], error accessing shared-storage | warning
F0866 | fltMgmtEntityHa-ssh-keys-mismatched | Fabric Interconnect [id], management services, mismatched SSH keys | major
F0867 | fltMgmtPmonEntryUCSM process failure | UCSM process [name] failed on FI [switchId] | critical
F0868 | fltComputeBoardPowerFail | Motherboard of server [chassisId]/[slotId] (service profile: [assignedToDn]) power: [power]; Motherboard of server [id] (service profile: [assignedToDn]) power: [power] | critical
F0869 | fltComputeBoardThermalProblem | Motherboard of server [chassisId]/[slotId] (service profile: [assignedToDn]) thermal: [thermal]; Motherboard of server [id] (service profile: [assignedToDn]) thermal: [thermal] | minor
F0876 | fltVmVifLinkState | Virtual interface [vifId] link is down; reason [stateQual] | minor
F0881 | fltEquipmentPsuPowerSupplyShutdown | Power supply [id] in chassis [id] shutdown reason: [powerStateQualifier] | major
F0882 | fltEquipmentPsuPowerThreshold | Power supply [id] on chassis [id] has exceeded its power threshold; Power supply [id] on server [id] has exceeded its power threshold | critical
F0883 | fltEquipmentPsuInputError | Power supply [id] on chassis [id] has disconnected cable or bad input voltage; Power supply [id] on server [id] has disconnected cable or bad input voltage | critical
F0884 | fltEquipmentSwitchCardPowerOff | Switch card is powered down. | critical
F0885 | fltNetworkElementInventoryFailed | Fabric Interconnect [id] inventory is not complete: [inventoryStatus] | major
F0900 | fltAdaptorUnitExtnUnidentifiable-fru | Adapter extension [id] in server [chassisId]/[slotId] has unidentified FRU | major
F0901 | fltAdaptorUnitExtnMissing | Adapter extension [id] in server [chassisId]/[slotId] presence: [presence] | warning
F0902 | fltEquipmentFexFex-unsupported | Fex [id] with model [model] is unsupported | major
F0903 | fltVnicIScsiConfig-failed | iSCSI vNIC [name], service profile [name] has duplicate iqn name [initiatorName] | major
F16405 | fsmStFailEquipmentIOCardFePresence:CheckLicense | [FSM:STAGE:FAILED|RETRY]: Checking license for chassis [chassisId] (iom [id])(FSM-STAGE:sam:dme:EquipmentIOCardFePresence:CheckLicense) | warning
F16405 | fsmStFailEquipmentIOCardFePresence:Identify | [FSM:STAGE:FAILED|RETRY]: identifying IOM [chassisId]/[id](FSM-STAGE:sam:dme:EquipmentIOCardFePresence:Identify) | warning
F16406 | fsmStFailEquipmentIOCardFeConn:ConfigureEndPoint | [FSM:STAGE:FAILED|RETRY]: configuring management identity to IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureEndPoint) | warning
F16406 | fsmStFailEquipmentIOCardFeConn:ConfigureSwMgmtEndPoint | [FSM:STAGE:FAILED|RETRY]: configuring fabric interconnect [switchId] mgmt connectivity to IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureSwMgmtEndPoint) | warning
F16406 | fsmStFailEquipmentIOCardFeConn:ConfigureVifNs | [FSM:STAGE:FAILED|RETRY]: configuring IOM [chassisId]/[id] ([side]) virtual name space(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureVifNs) | warning
F16406 | fsmStFailEquipmentIOCardFeConn:DiscoverChassis | [FSM:STAGE:FAILED|RETRY]: triggerring chassis discovery via IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:DiscoverChassis) | warning
F16406 | fsmStFailEquipmentIOCardFeConn:EnableChassis | [FSM:STAGE:FAILED|RETRY]: enabling chassis [chassisId] on [side] side(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:EnableChassis) | warning
F16407 | fsmStFailEquipmentChassisRemoveChassis:DisableEndPoint | [FSM:STAGE:FAILED|RETRY]: unconfiguring access to chassis [id] (FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:DisableEndPoint) | warning
F16407 | fsmStFailEquipmentChassisRemoveChassis:UnIdentifyLocal | [FSM:STAGE:FAILED|RETRY]: erasing chassis identity [id] from primary(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:UnIdentifyLocal) | warning
F16407 | fsmStFailEquipmentChassisRemoveChassis:UnIdentifyPeer | [FSM:STAGE:FAILED|RETRY]: erasing chassis identity [id] from secondary(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:UnIdentifyPeer) | warning
F16407 | fsmStFailEquipmentChassisRemoveChassis:Wait | [FSM:STAGE:FAILED|RETRY]: waiting for clean up of resources for chassis [id] (approx. 2 min)(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:Wait) | warning
F16407 | fsmStFailEquipmentChassisRemoveChassis:decomission | [FSM:STAGE:FAILED|RETRY]: decomissioning chassis [id](FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:decomission) | warning
F16408 | fsmStFailEquipmentLocatorLedSetLocatorLed:Execute | [FSM:STAGE:FAILED|RETRY]: setting locator led to [adminState](FSM-STAGE:sam:dme:EquipmentLocatorLedSetLocatorLed:Execute) | warning
F16518 | fsmStFailMgmtControllerExtMgmtIfConfig:Primary | [FSM:STAGE:FAILED|RETRY]: external mgmt interface configuration on primary(FSM-STAGE:sam:dme:MgmtControllerExtMgmtIfConfig:Primary) | warning
F16518 | fsmStFailMgmtControllerExtMgmtIfConfig:Secondary | [FSM:STAGE:FAILED|RETRY]: external mgmt interface configuration on secondary(FSM-STAGE:sam:dme:MgmtControllerExtMgmtIfConfig:Secondary) | warning
F16519 | fsmStFailFabricComputeSlotEpIdentify:ExecuteLocal | [FSM:STAGE:FAILED|RETRY]: identifying a server in [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:FabricComputeSlotEpIdentify:ExecuteLocal) | warning
F16519 | fsmStFailFabricComputeSlotEpIdentify:ExecutePeer | [FSM:STAGE:FAILED|RETRY]: identifying a server in [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:FabricComputeSlotEpIdentify:ExecutePeer) | warning
F16520 | fsmStFailComputeBladeDiscover:BiosPostCompletion | [FSM:STAGE:FAILED|RETRY]: Waiting for BIOS POST completion from CIMC on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BiosPostCompletion) | warning
F16520 | fsmStFailComputeBladeDiscover:BladeBootPnuos | [FSM:STAGE:FAILED|RETRY]: power server [chassisId]/[slotId] on with pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeBootPnuos) | warning
F16520 | fsmStFailComputeBladeDiscover:BladeBootWait | [FSM:STAGE:FAILED|RETRY]: Waiting for system reset on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeBootWait) | warning
F16520 | fsmStFailComputeBladeDiscover:BladePowerOn | [FSM:STAGE:FAILED|RETRY]: power on server [chassisId]/[slotId] for discovery(FSM-STAGE:sam:dme:ComputeBladeDiscover:BladePowerOn) | warning
F16520 | fsmStFailComputeBladeDiscover:BladeReadSmbios | [FSM:STAGE:FAILED|RETRY]: Waiting for SMBIOS table from CIMC on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeReadSmbios) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcConfigPnuOS | [FSM:STAGE:FAILED|RETRY]: provisioning a bootable device with a bootable pre-boot image for server(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcConfigPnuOS) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcInventory | [FSM:STAGE:FAILED|RETRY]: getting inventory of server [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcInventory) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcPreConfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPreConfigPnuOSLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcPreConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPreConfigPnuOSPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcPresence | [FSM:STAGE:FAILED|RETRY]: checking CIMC of server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPresence) | warning
F16520 | fsmStFailComputeBladeDiscover:BmcShutdownDiscovered | [FSM:STAGE:FAILED|RETRY]: Shutdown the server [chassisId]/[slotId]; deep discovery completed(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcShutdownDiscovered) | warning
F16520 | fsmStFailComputeBladeDiscover:ConfigFeLocal | [FSM:STAGE:FAILED|RETRY]: configuring primary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigFeLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:ConfigFePeer | [FSM:STAGE:FAILED|RETRY]: configuring secondary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigFePeer) | warning
F16520 | fsmStFailComputeBladeDiscover:ConfigUserAccess | [FSM:STAGE:FAILED|RETRY]: configuring external user access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigUserAccess) | warning
F16520 | fsmStFailComputeBladeDiscover:HandlePooling | [FSM:STAGE:FAILED|RETRY]: Invoke post-discovery policies on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:HandlePooling) | warning
F16520 | fsmStFailComputeBladeDiscover:NicConfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: configure primary adapter in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicConfigPnuOSLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:NicConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: configure secondary adapter in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicConfigPnuOSPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:NicPresenceLocal | [FSM:STAGE:FAILED|RETRY]: detect mezz cards in [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:NicPresenceLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:NicPresencePeer | [FSM:STAGE:FAILED|RETRY]: detect mezz cards in [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:NicPresencePeer) | warning
F16520 | fsmStFailComputeBladeDiscover:NicUnconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure adapter of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicUnconfigPnuOSLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:NicUnconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure adapter of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicUnconfigPnuOSPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSCatalog | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot catalog to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSCatalog) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSIdent | [FSM:STAGE:FAILED|RETRY]: Identify pre-boot environment agent on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSIdent) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSInventory | [FSM:STAGE:FAILED|RETRY]: Perform inventory of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSInventory) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSPolicy | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot environment behavior policy to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSPolicy) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSScrub | [FSM:STAGE:FAILED|RETRY]: Scrub server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSScrub) | warning
F16520 | fsmStFailComputeBladeDiscover:PnuOSSelfTest | [FSM:STAGE:FAILED|RETRY]: Trigger self-test of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSSelfTest) | warning
F16520 | fsmStFailComputeBladeDiscover:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PreSanitize) | warning
F16520 | fsmStFailComputeBladeDiscover:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:Sanitize) | warning
F16520 | fsmStFailComputeBladeDiscover:SetupVmediaLocal | [FSM:STAGE:FAILED|RETRY]: provisioning a Virtual Media device with a bootable pre-boot image for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SetupVmediaLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:SetupVmediaPeer | [FSM:STAGE:FAILED|RETRY]: provisioning a Virtual Media device with a bootable pre-boot image for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SetupVmediaPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:SolRedirectDisable | [FSM:STAGE:FAILED|RETRY]: Disable Sol Redirection on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SolRedirectDisable) | warning
F16520 | fsmStFailComputeBladeDiscover:SolRedirectEnable | [FSM:STAGE:FAILED|RETRY]: set up bios token on server [chassisId]/[slotId] for Sol redirect(FSM-STAGE:sam:dme:ComputeBladeDiscover:SolRedirectEnable) | warning
F16520 | fsmStFailComputeBladeDiscover:SwConfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: configure primary fabric interconnect in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwConfigPnuOSLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:SwConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: configure secondary fabric interconnect in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwConfigPnuOSPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:SwUnconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure primary fabric interconnect for server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwUnconfigPnuOSLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:SwUnconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure secondary fabric interconnect for server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwUnconfigPnuOSPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:TeardownVmediaLocal | [FSM:STAGE:FAILED|RETRY]: unprovisioning the Virtual Media bootable device for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:TeardownVmediaLocal) | warning
F16520 | fsmStFailComputeBladeDiscover:TeardownVmediaPeer | [FSM:STAGE:FAILED|RETRY]: unprovisioning the Virtual media bootable device for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:TeardownVmediaPeer) | warning
F16520 | fsmStFailComputeBladeDiscover:hagConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent on server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:hagConnect) | warning
F16520 | fsmStFailComputeBladeDiscover:hagDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent for server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:hagDisconnect) | warning
F16520 | fsmStFailComputeBladeDiscover:serialDebugConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent on server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:serialDebugConnect) | warning
F16520 | fsmStFailComputeBladeDiscover:serialDebugDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent for server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:serialDebugDisconnect) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BiosPostCompletion | [FSM:STAGE:FAILED|RETRY]: Waiting for BIOS POST completion from CIMC on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BiosPostCompletion) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcConfigPnuOS | [FSM:STAGE:FAILED|RETRY]: provisioning a bootable device with a bootable pre-boot image for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigPnuOS) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcConfigureConnLocal | [FSM:STAGE:FAILED|RETRY]: Configuring connectivity on CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigureConnLocal) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcConfigureConnPeer | [FSM:STAGE:FAILED|RETRY]: Configuring connectivity on CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigureConnPeer) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcInventory | [FSM:STAGE:FAILED|RETRY]: getting inventory of server [id] via CIMC(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcInventory) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcPreconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPreconfigPnuOSLocal) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcPreconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPreconfigPnuOSPeer) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcPresence | [FSM:STAGE:FAILED|RETRY]: checking CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPresence) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcShutdownDiscovered | [FSM:STAGE:FAILED|RETRY]: Shutdown the server [id]; deep discovery completed(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcShutdownDiscovered) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BmcUnconfigPnuOS | [FSM:STAGE:FAILED|RETRY]: unprovisioning the bootable device for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcUnconfigPnuOS) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BootPnuos | [FSM:STAGE:FAILED|RETRY]: power server [id] on with pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BootPnuos) | warning
F16520 | fsmStFailComputeRackUnitDiscover:BootWait | [FSM:STAGE:FAILED|RETRY]: Waiting for system reset on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BootWait) | warning
F16520 | fsmStFailComputeRackUnitDiscover:ConfigDiscoveryMode | [FSM:STAGE:FAILED|RETRY]: setting adapter mode to discovery for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigDiscoveryMode) | warning
F16520 | fsmStFailComputeRackUnitDiscover:ConfigNivMode | [FSM:STAGE:FAILED|RETRY]: setting adapter mode to NIV for server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigNivMode) | warning
F16520 | fsmStFailComputeRackUnitDiscover:ConfigUserAccess | [FSM:STAGE:FAILED|RETRY]: configuring external user access to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigUserAccess) | warning
F16520 | fsmStFailComputeRackUnitDiscover:HandlePooling | [FSM:STAGE:FAILED|RETRY]: Invoke post-discovery policies on server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:HandlePooling) | warning
F16520 | fsmStFailComputeRackUnitDiscover:NicInventoryLocal | [FSM:STAGE:FAILED|RETRY]: detect and get mezz cards information from [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:NicInventoryLocal) | warning
F16520 | fsmStFailComputeRackUnitDiscover:NicInventoryPeer | [FSM:STAGE:FAILED|RETRY]: detect and get mezz cards information from [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:NicInventoryPeer) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSCatalog | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot catalog to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSCatalog) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSConnStatus | [FSM:STAGE:FAILED|RETRY]: Explore connectivity of server [id] in pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSConnStatus) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSConnectivity | [FSM:STAGE:FAILED|RETRY]: Explore connectivity of server [id] in pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSConnectivity) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSIdent | [FSM:STAGE:FAILED|RETRY]: Identify pre-boot environment agent on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSIdent) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSInventory | [FSM:STAGE:FAILED|RETRY]: Perform inventory of server [id] pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSInventory) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSPolicy | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot environment behavior policy to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSPolicy) | warning
F16520 | fsmStFailComputeRackUnitDiscover:PnuOSScrub | [FSM:STAGE:FAILED|RETRY]: Scrub server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSScrub) | warning
[FSM:STAGE:FAILED|RETRY]: Trigger
self-test of server [id] pre-boot
environment(FSM-
fsmStFailComputeRackUnitDiscover:PnuOSS STAGE:sam:dme:ComputeRackUnitD
F16520 elfTest iscover:PnuOSSelfTest) warning
[FSM:STAGE:FAILED|RETRY]:
Preparing to check hardware
configuration server [id](FSM-
fsmStFailComputeRackUnitDiscover:PreSani STAGE:sam:dme:ComputeRackUnitD
F16520 tize iscover:PreSanitize) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for SMBIOS table from CIMC
on server [id](FSM-
fsmStFailComputeRackUnitDiscover:ReadS STAGE:sam:dme:ComputeRackUnitD
F16520 mbios iscover:ReadSmbios) warning

[FSM:STAGE:FAILED|RETRY]:
Checking hardware configuration
server [id](FSM-
STAGE:sam:dme:ComputeRackUnitD
F16520 fsmStFailComputeRackUnitDiscover:Sanitize iscover:Sanitize) warning

[FSM:STAGE:FAILED|RETRY]: Disable
Sol Redirection on server [id](FSM-
fsmStFailComputeRackUnitDiscover:SolRedi STAGE:sam:dme:ComputeRackUnitD
F16520 rectDisable iscover:SolRedirectDisable) warning

[FSM:STAGE:FAILED|RETRY]: set up
bios token on server [id] for Sol
redirect(FSM-
fsmStFailComputeRackUnitDiscover:SolRedi STAGE:sam:dme:ComputeRackUnitD
F16520 rectEnable iscover:SolRedirectEnable) warning

[FSM:STAGE:FAILED|RETRY]:
configure primary fabric
interconnect for pre-boot
environment(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figPnuOSLocal iscover:SwConfigPnuOSLocal) warning

[FSM:STAGE:FAILED|RETRY]:
configure secondary fabric
interconnect for pre-boot
environment(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figPnuOSPeer iscover:SwConfigPnuOSPeer) warning

[FSM:STAGE:FAILED|RETRY]:
configuring primary fabric
interconnect access to server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figPortNivLocal iscover:SwConfigPortNivLocal) warning

[FSM:STAGE:FAILED|RETRY]:
configuring secondary fabric
interconnect access to server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figPortNivPeer iscover:SwConfigPortNivPeer) warning
[FSM:STAGE:FAILED|RETRY]:
Configuring fabric-interconnect
connectivity to CIMC of server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figureConnLocal iscover:SwConfigureConnLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configuring fabric-interconnect
connectivity to CIMC of server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwCon STAGE:sam:dme:ComputeRackUnitD
F16520 figureConnPeer iscover:SwConfigureConnPeer) warning

[FSM:STAGE:FAILED|RETRY]:
determine connectivity of server [id]
to fabric(FSM-
fsmStFailComputeRackUnitDiscover:SwPnu STAGE:sam:dme:ComputeRackUnitD
F16520 OSConnectivityLocal iscover:SwPnuOSConnectivityLocal) warning

[FSM:STAGE:FAILED|RETRY]:
determine connectivity of server [id]
to fabric(FSM-
fsmStFailComputeRackUnitDiscover:SwPnu STAGE:sam:dme:ComputeRackUnitD
F16520 OSConnectivityPeer iscover:SwPnuOSConnectivityPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfiguring primary fabric
interconnect access to server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwUnc STAGE:sam:dme:ComputeRackUnitD
F16520 onfigPortNivLocal iscover:SwUnconfigPortNivLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfiguring secondary fabric
interconnect access to server [id]
(FSM-
fsmStFailComputeRackUnitDiscover:SwUnc STAGE:sam:dme:ComputeRackUnitD
F16520 onfigPortNivPeer iscover:SwUnconfigPortNivPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Connect to pre-boot environment
agent on server [id](FSM-
fsmStFailComputeRackUnitDiscover:hagCon STAGE:sam:dme:ComputeRackUnitD
F16520 nect iscover:hagConnect) warning

[FSM:STAGE:FAILED|RETRY]:
Disconnect pre-boot environment
agent for server [id](FSM-
fsmStFailComputeRackUnitDiscover:hagDisc STAGE:sam:dme:ComputeRackUnitD
F16520 onnect iscover:hagDisconnect) warning
[FSM:STAGE:FAILED|RETRY]:
Connect to pre-boot environment
agent on server [id](FSM-
fsmStFailComputeRackUnitDiscover:serialD STAGE:sam:dme:ComputeRackUnitD
F16520 ebugConnect iscover:serialDebugConnect) warning

[FSM:STAGE:FAILED|RETRY]:
Disconnect pre-boot environment
agent for server [id](FSM-
fsmStFailComputeRackUnitDiscover:serialD STAGE:sam:dme:ComputeRackUnitD
F16520 ebugDisconnect iscover:serialDebugDisconnect) warning
[FSM:STAGE:FAILED|RETRY]: wait
for connection to be
established(FSM-
fsmStFailComputeRackUnitDiscover:waitFor STAGE:sam:dme:ComputeRackUnitD
F16520 ConnReady iscover:waitForConnReady) warning
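Every fsmStFail* entry in this table shares the same message template: the constant prefix [FSM:STAGE:FAILED|RETRY]:, a short stage description with bracketed placeholders such as [id] or [chassisId]/[slotId], and the failing FSM stage path in parentheses, (FSM-STAGE:sam:dme:<FSM name>:<stage>). As a minimal illustration only, the Python sketch below shows one way to split a rendered message of this shape back into its FSM name, stage, and description; the sample message text and the regular expression are assumptions made for this example, not part of UCS Manager itself.

    import re

    # Hypothetical rendered message, following the template documented in this table:
    # "[FSM:STAGE:FAILED|RETRY]: <description>(FSM-STAGE:sam:dme:<FSM name>:<stage>)"
    sample = ("[FSM:STAGE:FAILED|RETRY]: Scrub server 3"
              "(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSScrub)")

    # Capture the description, FSM name, and failed stage from the message text.
    pattern = re.compile(
        r"\[FSM:STAGE:FAILED\|RETRY\]:\s*(?P<description>.*?)"
        r"\(FSM-STAGE:sam:dme:(?P<fsm>[^:]+):(?P<stage>[^)]+)\)"
    )

    match = pattern.search(sample)
    if match:
        print(match.group("fsm"))                   # ComputeRackUnitDiscover
        print(match.group("stage"))                 # PnuOSScrub
        print(match.group("description").strip())   # Scrub server 3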

[FSM:STAGE:FAILED|RETRY]:
Deploying Power Management
policy changes on chassis [id](FSM-
fsmStFailEquipmentChassisPsuPolicyConfig: STAGE:sam:dme:EquipmentChassisP
F16533 Execute suPolicyConfig:Execute) warning

[FSM:STAGE:FAILED|RETRY]:
Resetting FC persistent bindings on
host interface [dn](FSM-
fsmStFailAdaptorHostFcIfResetFcPersBindin STAGE:sam:dme:AdaptorHostFcIfRes
F16534 g:ExecuteLocal etFcPersBinding:ExecuteLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Resetting FC persistent bindings on
host interface [dn](FSM-
fsmStFailAdaptorHostFcIfResetFcPersBindin STAGE:sam:dme:AdaptorHostFcIfRes
F16534 g:ExecutePeer etFcPersBinding:ExecutePeer) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for BIOS POST completion
from CIMC on server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:BiosPostCompl STAGE:sam:dme:ComputeBladeDiag:
F16535 etion BiosPostCompletion) warning

[FSM:STAGE:FAILED|RETRY]: Power-
on server [chassisId]/[slotId] for
diagnostics environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:BladeBoot BladeBoot) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for system reset on server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:BladeBootWait BladeBootWait) warning
[FSM:STAGE:FAILED|RETRY]: Power
on server [chassisId]/[slotId] for
diagnostics(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:BladePowerOn BladePowerOn) warning

[FSM:STAGE:FAILED|RETRY]: Read
SMBIOS tables on server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:BladeReadSmb STAGE:sam:dme:ComputeBladeDiag:
F16535 ios BladeReadSmbios) warning

[FSM:STAGE:FAILED|RETRY]:
provisioning a bootable device with
a bootable pre-boot image for
server(FSM-
fsmStFailComputeBladeDiag:BmcConfigPnu STAGE:sam:dme:ComputeBladeDiag:
F16535 OS BmcConfigPnuOS) warning

[FSM:STAGE:FAILED|RETRY]: Getting
inventory of server
[chassisId]/[slotId] via CIMC(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:BmcInventory BmcInventory) warning
[FSM:STAGE:FAILED|RETRY]:
Checking CIMC of server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:BmcPresence BmcPresence) warning

[FSM:STAGE:FAILED|RETRY]:
Shutdown server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:BmcShutdown STAGE:sam:dme:ComputeBladeDiag:
F16535 DiagCompleted BmcShutdownDiagCompleted) warning

[FSM:STAGE:FAILED|RETRY]:
Cleaning up server
[chassisId]/[slotId] interface on
fabric A(FSM-
fsmStFailComputeBladeDiag:CleanupServer STAGE:sam:dme:ComputeBladeDiag:
F16535 ConnSwA CleanupServerConnSwA) warning

[FSM:STAGE:FAILED|RETRY]:
Cleaning up server
[chassisId]/[slotId] interface on
fabric B(FSM-
fsmStFailComputeBladeDiag:CleanupServer STAGE:sam:dme:ComputeBladeDiag:
F16535 ConnSwB CleanupServerConnSwB) warning
[FSM:STAGE:FAILED|RETRY]:
Configuring primary fabric
interconnect access to server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:ConfigFeLocal ConfigFeLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configuring secondary fabric
interconnect access to server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:ConfigFePeer ConfigFePeer) warning

[FSM:STAGE:FAILED|RETRY]:
Configuring external user access to
server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:ConfigUserAcc STAGE:sam:dme:ComputeBladeDiag:
F16535 ess ConfigUserAccess) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for debugging for server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:DebugWait DebugWait) warning

[FSM:STAGE:FAILED|RETRY]: Derive
diag config for server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:DeriveConfig DeriveConfig) warning

[FSM:STAGE:FAILED|RETRY]: Disable
server [chassisId]/[slotId] interface
on fabric A after completion of
network traffic tests on fabric
A(FSM-
fsmStFailComputeBladeDiag:DisableServerC STAGE:sam:dme:ComputeBladeDiag:
F16535 onnSwA DisableServerConnSwA) warning

[FSM:STAGE:FAILED|RETRY]: Disable
server [chassisId]/[slotId]
connectivity on fabric B in
preparation for network traffic tests
on fabric A(FSM-
fsmStFailComputeBladeDiag:DisableServerC STAGE:sam:dme:ComputeBladeDiag:
F16535 onnSwB DisableServerConnSwB) warning

[FSM:STAGE:FAILED|RETRY]: Enable
server [chassisId]/[slotId]
connectivity on fabric A in
preparation for network traffic tests
on fabric A(FSM-
fsmStFailComputeBladeDiag:EnableServerC STAGE:sam:dme:ComputeBladeDiag:
F16535 onnSwA EnableServerConnSwA) warning
[FSM:STAGE:FAILED|RETRY]: Enable
server [chassisId]/[slotId]
connectivity on fabric B in
preparation for network traffic tests
on fabric B(FSM-
fsmStFailComputeBladeDiag:EnableServerC STAGE:sam:dme:ComputeBladeDiag:
F16535 onnSwB EnableServerConnSwB) warning

[FSM:STAGE:FAILED|RETRY]:
Evaluating status; diagnostics
completed(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:EvaluateStatus EvaluateStatus) warning

[FSM:STAGE:FAILED|RETRY]: Gather
status of network traffic tests on
fabric A for server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:FabricATrafficT STAGE:sam:dme:ComputeBladeDiag:
F16535 estStatus FabricATrafficTestStatus) warning

[FSM:STAGE:FAILED|RETRY]: Gather
status of network tests on fabric B
for server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:FabricBTrafficT STAGE:sam:dme:ComputeBladeDiag:
F16535 estStatus FabricBTrafficTestStatus) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for collection of diagnostic
logs from server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:GenerateLogW STAGE:sam:dme:ComputeBladeDiag:
F16535 ait GenerateLogWait) warning

[FSM:STAGE:FAILED|RETRY]:
Generating report for server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:GenerateRepor STAGE:sam:dme:ComputeBladeDiag:
F16535 t GenerateReport) warning

[FSM:STAGE:FAILED|RETRY]:
Populate diagnostics catalog to
server [chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:HostCatalog HostCatalog) warning

[FSM:STAGE:FAILED|RETRY]:
Connect to diagnostics environment
agent on server [chassisId]/[slotId]
(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:HostConnect HostConnect) warning
[FSM:STAGE:FAILED|RETRY]:
Disconnect diagnostics environment
agent for server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:HostDisconnec STAGE:sam:dme:ComputeBladeDiag:
F16535 t HostDisconnect) warning

[FSM:STAGE:FAILED|RETRY]:
Identify diagnostics environment
agent on server [chassisId]/[slotId]
(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:HostIdent HostIdent) warning

[FSM:STAGE:FAILED|RETRY]:
Perform inventory of server
[chassisId]/[slotId] in diagnostics
environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:HostInventory HostInventory) warning

[FSM:STAGE:FAILED|RETRY]:
Populate diagnostics environment
behavior policy to server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:HostPolicy HostPolicy) warning

[FSM:STAGE:FAILED|RETRY]: Trigger
diagnostics on server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:HostServerDia STAGE:sam:dme:ComputeBladeDiag:
F16535 g HostServerDiag) warning

[FSM:STAGE:FAILED|RETRY]:
Diagnostics status on server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:HostServerDia STAGE:sam:dme:ComputeBladeDiag:
F16535 gStatus HostServerDiagStatus) warning

[FSM:STAGE:FAILED|RETRY]:
Configure adapter in server
[chassisId]/[slotId] for diagnostics
environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:NicConfigLocal NicConfigLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configure adapter in server
[chassisId]/[slotId] for diagnostics
environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:NicConfigPeer NicConfigPeer) warning
[FSM:STAGE:FAILED|RETRY]:
Retrieve adapter inventory in server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:NicInventoryLo STAGE:sam:dme:ComputeBladeDiag:
F16535 cal NicInventoryLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Retrieve adapter inventory in server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:NicInventoryPe STAGE:sam:dme:ComputeBladeDiag:
F16535 er NicInventoryPeer) warning

[FSM:STAGE:FAILED|RETRY]: Detect
adapter in server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:NicPresenceLo STAGE:sam:dme:ComputeBladeDiag:
F16535 cal NicPresenceLocal) warning

[FSM:STAGE:FAILED|RETRY]: Detect
adapter in server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:NicPresencePe STAGE:sam:dme:ComputeBladeDiag:
F16535 er NicPresencePeer) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure adapter of server
[chassisId]/[slotId] diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:NicUnconfigLo STAGE:sam:dme:ComputeBladeDiag:
F16535 cal NicUnconfigLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure adapter of server
[chassisId]/[slotId] diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:NicUnconfigPe STAGE:sam:dme:ComputeBladeDiag:
F16535 er NicUnconfigPeer) warning

[FSM:STAGE:FAILED|RETRY]: Derive
diag config for server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:RemoveConfig RemoveConfig) warning

[FSM:STAGE:FAILED|RETRY]:
Remove VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:RemoveVMedi STAGE:sam:dme:ComputeBladeDiag:
F16535 aLocal RemoveVMediaLocal) warning
[FSM:STAGE:FAILED|RETRY]:
Remove VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:RemoveVMedi STAGE:sam:dme:ComputeBladeDiag:
F16535 aPeer RemoveVMediaPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Reconfiguring primary fabric
interconnect access to server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:RestoreConfigF STAGE:sam:dme:ComputeBladeDiag:
F16535 eLocal RestoreConfigFeLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Reconfiguring secondary fabric
interconnect access to server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:RestoreConfigF STAGE:sam:dme:ComputeBladeDiag:
F16535 ePeer RestoreConfigFePeer) warning

[FSM:STAGE:FAILED|RETRY]:
Populate diagnostics environment
with a user account to server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:SetDiagUser SetDiagUser) warning

[FSM:STAGE:FAILED|RETRY]: Setup
VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:SetupVMediaL STAGE:sam:dme:ComputeBladeDiag:
F16535 ocal SetupVMediaLocal) warning

[FSM:STAGE:FAILED|RETRY]: Setup
VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:SetupVMediaP STAGE:sam:dme:ComputeBladeDiag:
F16535 eer SetupVMediaPeer) warning

[FSM:STAGE:FAILED|RETRY]: Disable
Sol Redirection on server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:SolRedirectDis STAGE:sam:dme:ComputeBladeDiag:
F16535 able SolRedirectDisable) warning

[FSM:STAGE:FAILED|RETRY]: set up
bios token on server
[chassisId]/[slotId] for Sol
redirect(FSM-
fsmStFailComputeBladeDiag:SolRedirectEna STAGE:sam:dme:ComputeBladeDiag:
F16535 ble SolRedirectEnable) warning
[FSM:STAGE:FAILED|RETRY]: Trigger
network traffic tests on fabric A on
server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:StartFabricATr STAGE:sam:dme:ComputeBladeDiag:
F16535 afficTest StartFabricATrafficTest) warning

[FSM:STAGE:FAILED|RETRY]: Trigger
network tests on fabric B for server
[chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:StartFabricBTr STAGE:sam:dme:ComputeBladeDiag:
F16535 afficTest StartFabricBTrafficTest) warning

[FSM:STAGE:FAILED|RETRY]: Stop
VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:StopVMediaLo STAGE:sam:dme:ComputeBladeDiag:
F16535 cal StopVMediaLocal) warning

[FSM:STAGE:FAILED|RETRY]: Stop
VMedia for server
[chassisId]/[slotId] for diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:StopVMediaPe STAGE:sam:dme:ComputeBladeDiag:
F16535 er StopVMediaPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Configure primary fabric
interconnect in server
[chassisId]/[slotId] for diagnostics
environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:SwConfigLocal SwConfigLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configure secondary fabric
interconnect in server
[chassisId]/[slotId] for diagnostics
environment(FSM-
STAGE:sam:dme:ComputeBladeDiag:
F16535 fsmStFailComputeBladeDiag:SwConfigPeer SwConfigPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure primary fabric
interconnect for server
[chassisId]/[slotId] in diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:SwUnconfigLoc STAGE:sam:dme:ComputeBladeDiag:
F16535 al SwUnconfigLocal) warning
[FSM:STAGE:FAILED|RETRY]:
Unconfigure secondary fabric
interconnect for server
[chassisId]/[slotId] in diagnostics
environment(FSM-
fsmStFailComputeBladeDiag:SwUnconfigPe STAGE:sam:dme:ComputeBladeDiag:
F16535 er SwUnconfigPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure external user access to
server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeDiag:UnconfigUserA STAGE:sam:dme:ComputeBladeDiag:
F16535 ccess UnconfigUserAccess) warning

[FSM:STAGE:FAILED|RETRY]:
Connect to pre-boot environment
agent on server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:serialDebugCo STAGE:sam:dme:ComputeBladeDiag:
F16535 nnect serialDebugConnect) warning

[FSM:STAGE:FAILED|RETRY]:
Disconnect pre-boot environment
agent for server [chassisId]/[slotId]
(FSM-
fsmStFailComputeBladeDiag:serialDebugDis STAGE:sam:dme:ComputeBladeDiag:
F16535 connect serialDebugDisconnect) warning

[FSM:STAGE:FAILED|RETRY]: (FSM-
fsmStFailFabricLanCloudSwitchMode:SwCo STAGE:sam:dme:FabricLanCloudSwit
F16539 nfigLocal chMode:SwConfigLocal) warning

[FSM:STAGE:FAILED|RETRY]: Fabric
interconnect mode configuration to
primary(FSM-
fsmStFailFabricLanCloudSwitchMode:SwCo STAGE:sam:dme:FabricLanCloudSwit
F16539 nfigPeer chMode:SwConfigPeer) warning

[FSM:STAGE:FAILED|RETRY]: (FSM-
fsmStFailFabricSanCloudSwitchMode:SwCo STAGE:sam:dme:FabricSanCloudSwit
F16539 nfigLocal chMode:SwConfigLocal) warning

[FSM:STAGE:FAILED|RETRY]: Fabric
interconnect FC mode configuration
to primary(FSM-
fsmStFailFabricSanCloudSwitchMode:SwCo STAGE:sam:dme:FabricSanCloudSwit
F16539 nfigPeer chMode:SwConfigPeer) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated settings (e.g.
timezone)(FSM-
fsmStFailCommSvcEpUpdateSvcEp:Propoga STAGE:sam:dme:CommSvcEpUpdate
F16576 teEpSettings SvcEp:PropogateEpSettings) warning
[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to management controllers.
(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsLocal sLocal) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to management controllers.
(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsPeer sPeer) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to NICs.(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsToAdaptorsLocal sToAdaptorsLocal) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to NICs.(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsToAdaptorsPeer sToAdaptorsPeer) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to FEXs and IOMs.(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsToFexIomLocal sToFexIomLocal) warning

[FSM:STAGE:FAILED|RETRY]:
propagate updated timezone
settings to FEXs and IOMs.(FSM-
STAGE:sam:dme:CommSvcEpUpdate
fsmStFailCommSvcEpUpdateSvcEp:Propoga SvcEp:PropogateEpTimeZoneSetting
F16576 teEpTimeZoneSettingsToFexIomPeer sToFexIomPeer) warning

[FSM:STAGE:FAILED|RETRY]:
communication service [name]
configuration to primary(FSM-
fsmStFailCommSvcEpUpdateSvcEp:SetEpLoc STAGE:sam:dme:CommSvcEpUpdate
F16576 al SvcEp:SetEpLocal) warning
[FSM:STAGE:FAILED|RETRY]:
communication service [name]
configuration to secondary(FSM-
fsmStFailCommSvcEpUpdateSvcEp:SetEpPe STAGE:sam:dme:CommSvcEpUpdate
F16576 er SvcEp:SetEpPeer) warning

[FSM:STAGE:FAILED|RETRY]: restart
web services in primary(FSM-
STAGE:sam:dme:CommSvcEpRestart
F16577 fsmStFailCommSvcEpRestartWebSvc:local WebSvc:local) warning

[FSM:STAGE:FAILED|RETRY]: restart
web services in secondary(FSM-
STAGE:sam:dme:CommSvcEpRestart
F16577 fsmStFailCommSvcEpRestartWebSvc:peer WebSvc:peer) warning

[FSM:STAGE:FAILED|RETRY]:
external aaa server configuration to
primary(FSM-
STAGE:sam:dme:AaaEpUpdateEp:Se
F16579 fsmStFailAaaEpUpdateEp:SetEpLocal tEpLocal) warning

[FSM:STAGE:FAILED|RETRY]:
external aaa server configuration to
secondary(FSM-
STAGE:sam:dme:AaaEpUpdateEp:Se
F16579 fsmStFailAaaEpUpdateEp:SetEpPeer tEpPeer) warning

[FSM:STAGE:FAILED|RETRY]: keyring
configuration on primary(FSM-
STAGE:sam:dme:PkiEpUpdateEp:Set
F16579 fsmStFailPkiEpUpdateEp:SetKeyRingLocal KeyRingLocal) warning

[FSM:STAGE:FAILED|RETRY]: keyring
configuration on secondary(FSM-
STAGE:sam:dme:PkiEpUpdateEp:Set
F16579 fsmStFailPkiEpUpdateEp:SetKeyRingPeer KeyRingPeer) warning

[FSM:STAGE:FAILED|RETRY]: Update
endpoint on fabric interconnect
A(FSM-
fsmStFailStatsCollectionPolicyUpdateEp:Set STAGE:sam:dme:StatsCollectionPolic
F16579 EpA yUpdateEp:SetEpA) warning

[FSM:STAGE:FAILED|RETRY]: Update
endpoint on fabric interconnect
B(FSM-
fsmStFailStatsCollectionPolicyUpdateEp:Set STAGE:sam:dme:StatsCollectionPolic
F16579 EpB yUpdateEp:SetEpB) warning

[FSM:STAGE:FAILED|RETRY]: realm
configuration to primary(FSM-
fsmStFailAaaRealmUpdateRealm:SetRealmL STAGE:sam:dme:AaaRealmUpdateR
F16580 ocal ealm:SetRealmLocal) warning
[FSM:STAGE:FAILED|RETRY]: realm
configuration to secondary(FSM-
fsmStFailAaaRealmUpdateRealm:SetRealmP STAGE:sam:dme:AaaRealmUpdateR
F16580 eer ealm:SetRealmPeer) warning

[FSM:STAGE:FAILED|RETRY]: user
configuration to primary(FSM-
fsmStFailAaaUserEpUpdateUserEp:SetUserL STAGE:sam:dme:AaaUserEpUpdate
F16581 ocal UserEp:SetUserLocal) warning

[FSM:STAGE:FAILED|RETRY]: user
configuration to secondary(FSM-
fsmStFailAaaUserEpUpdateUserEp:SetUserP STAGE:sam:dme:AaaUserEpUpdate
F16581 eer UserEp:SetUserPeer) warning

[FSM:STAGE:FAILED|RETRY]:
[action] file [name](FSM-
STAGE:sam:dme:SysfileMutationSing
F16600 fsmStFailSysfileMutationSingle:Execute le:Execute) warning

[FSM:STAGE:FAILED|RETRY]:
remove files from local(FSM-
STAGE:sam:dme:SysfileMutationGlo
F16601 fsmStFailSysfileMutationGlobal:Local bal:Local) warning

[FSM:STAGE:FAILED|RETRY]:
remove files from peer(FSM-
STAGE:sam:dme:SysfileMutationGlo
F16601 fsmStFailSysfileMutationGlobal:Peer bal:Peer) warning

[FSM:STAGE:FAILED|RETRY]: export
core file [name] to [hostname](FSM-
fsmStFailSysdebugManualCoreFileExportTar STAGE:sam:dme:SysdebugManualCo
F16604 getExport:Execute reFileExportTargetExport:Execute) warning

[FSM:STAGE:FAILED|RETRY]: Apply
switch configuration(FSM-
STAGE:sam:dme:FabricEpMgrConfig
F16605 fsmStFailFabricEpMgrConfigure:ApplyConfig ure:ApplyConfig) warning
[FSM:STAGE:FAILED|RETRY]:
Applying physical
configuration(FSM-
fsmStFailFabricEpMgrConfigure:ApplyPhysic STAGE:sam:dme:FabricEpMgrConfig
F16605 al ure:ApplyPhysical) warning

[FSM:STAGE:FAILED|RETRY]:
Validating logical configuration(FSM-
fsmStFailFabricEpMgrConfigure:ValidateCon STAGE:sam:dme:FabricEpMgrConfig
F16605 figuration ure:ValidateConfiguration) warning
[FSM:STAGE:FAILED|RETRY]:
Waiting on physical change
application(FSM-
fsmStFailFabricEpMgrConfigure:WaitOnPhy STAGE:sam:dme:FabricEpMgrConfig
F16605 s ure:WaitOnPhys) warning
[FSM:STAGE:FAILED|RETRY]:
Analyzing changes impact(FSM-
STAGE:sam:dme:LsServerConfigure:
F16605 fsmStFailLsServerConfigure:AnalyzeImpact AnalyzeImpact) warning
[FSM:STAGE:FAILED|RETRY]:
Applying config to server [pnDn]
(FSM-
STAGE:sam:dme:LsServerConfigure:
F16605 fsmStFailLsServerConfigure:ApplyConfig ApplyConfig) warning
[FSM:STAGE:FAILED|RETRY]:
Resolving and applying
identifiers(FSM-
STAGE:sam:dme:LsServerConfigure:
F16605 fsmStFailLsServerConfigure:ApplyIdentifiers ApplyIdentifiers) warning

[FSM:STAGE:FAILED|RETRY]:
Resolving and applying policies(FSM-
STAGE:sam:dme:LsServerConfigure:
F16605 fsmStFailLsServerConfigure:ApplyPolicies ApplyPolicies) warning

[FSM:STAGE:FAILED|RETRY]:
Applying configuration template
[srcTemplName](FSM-
STAGE:sam:dme:LsServerConfigure:
F16605 fsmStFailLsServerConfigure:ApplyTemplate ApplyTemplate) warning
[FSM:STAGE:FAILED|RETRY]:
Evaluate association with server
[pnDn](FSM-
fsmStFailLsServerConfigure:EvaluateAssocia STAGE:sam:dme:LsServerConfigure:
F16605 tion EvaluateAssociation) warning

[FSM:STAGE:FAILED|RETRY]:
Computing binding changes(FSM-
fsmStFailLsServerConfigure:ResolveBootCon STAGE:sam:dme:LsServerConfigure:
F16605 fig ResolveBootConfig) warning
[FSM:STAGE:FAILED|RETRY]:
Waiting for ack or maint
window(FSM-
fsmStFailLsServerConfigure:WaitForMaintPe STAGE:sam:dme:LsServerConfigure:
F16605 rmission WaitForMaintPermission) warning
[FSM:STAGE:FAILED|RETRY]:
Waiting for maintenance
window(FSM-
fsmStFailLsServerConfigure:WaitForMaintW STAGE:sam:dme:LsServerConfigure:
F16605 indow WaitForMaintWindow) warning

[FSM:STAGE:FAILED|RETRY]:
configuring automatic core file
export service on local(FSM-
fsmStFailSysdebugAutoCoreFileExportTarge STAGE:sam:dme:SysdebugAutoCore
F16605 tConfigure:Local FileExportTargetConfigure:Local) warning
[FSM:STAGE:FAILED|RETRY]:
configuring automatic core file
export service on peer(FSM-
fsmStFailSysdebugAutoCoreFileExportTarge STAGE:sam:dme:SysdebugAutoCore
F16605 tConfigure:Peer FileExportTargetConfigure:Peer) warning

[FSM:STAGE:FAILED|RETRY]:
persisting LogControl on local(FSM-
fsmStFailSysdebugLogControlEpLogControlP STAGE:sam:dme:SysdebugLogContro
F16606 ersist:Local lEpLogControlPersist:Local) warning

[FSM:STAGE:FAILED|RETRY]:
persisting LogControl on peer(FSM-
fsmStFailSysdebugLogControlEpLogControlP STAGE:sam:dme:SysdebugLogContro
F16606 ersist:Peer lEpLogControlPersist:Peer) warning

[FSM:STAGE:FAILED|RETRY]: vnic
qos policy [name] configuration to
primary(FSM-
STAGE:sam:dme:EpqosDefinitionDe
F16634 fsmStFailEpqosDefinitionDeploy:Local ploy:Local) warning

[FSM:STAGE:FAILED|RETRY]: vnic
qos policy [name] configuration to
secondary(FSM-
STAGE:sam:dme:EpqosDefinitionDe
F16634 fsmStFailEpqosDefinitionDeploy:Peer ploy:Peer) warning

[FSM:STAGE:FAILED|RETRY]:
internal network configuration on
[switchId](FSM-
fsmStFailSwAccessDomainDeploy:UpdateCo STAGE:sam:dme:SwAccessDomainD
F16634 nnectivity eploy:UpdateConnectivity) warning

[FSM:STAGE:FAILED|RETRY]: Uplink
eth port configuration on [switchId]
(FSM-
fsmStFailSwEthLanBorderDeploy:UpdateCo STAGE:sam:dme:SwEthLanBorderDe
F16634 nnectivity ploy:UpdateConnectivity) warning

[FSM:STAGE:FAILED|RETRY]:
Ethernet traffic monitor
(SPAN) configuration on [switchId]
(FSM-
STAGE:sam:dme:SwEthMonDeploy:
F16634 fsmStFailSwEthMonDeploy:UpdateEthMon UpdateEthMon) warning

[FSM:STAGE:FAILED|RETRY]: FC
traffic monitor (SPAN) configuration
on [switchId](FSM-
STAGE:sam:dme:SwFcMonDeploy:U
F16634 fsmStFailSwFcMonDeploy:UpdateFcMon pdateFcMon) warning
[FSM:STAGE:FAILED|RETRY]: Uplink
fc port configuration on [switchId]
(FSM-
fsmStFailSwFcSanBorderDeploy:UpdateCon STAGE:sam:dme:SwFcSanBorderDep
F16634 nectivity loy:UpdateConnectivity) warning

[FSM:STAGE:FAILED|RETRY]: Utility
network configuration on [switchId]
(FSM-
fsmStFailSwUtilityDomainDeploy:UpdateCo STAGE:sam:dme:SwUtilityDomainDe
F16634 nnectivity ploy:UpdateConnectivity) warning

[FSM:STAGE:FAILED|RETRY]: VNIC
profile configuration on local
fabric(FSM-
STAGE:sam:dme:VnicProfileSetDeplo
F16634 fsmStFailVnicProfileSetDeploy:Local y:Local) warning

[FSM:STAGE:FAILED|RETRY]: VNIC
profile configuration on peer
fabric(FSM-
STAGE:sam:dme:VnicProfileSetDeplo
F16634 fsmStFailVnicProfileSetDeploy:Peer y:Peer) warning

[FSM:STAGE:FAILED|RETRY]: create
on primary(FSM-
STAGE:sam:dme:SyntheticFsObjCrea
F16641 fsmStFailSyntheticFsObjCreate:createLocal te:createLocal) warning

[FSM:STAGE:FAILED|RETRY]: create
on secondary(FSM-
fsmStFailSyntheticFsObjCreate:createRemot STAGE:sam:dme:SyntheticFsObjCrea
F16641 e te:createRemote) warning
[FSM:STAGE:FAILED|RETRY]:
deleting package [name] from
primary(FSM-
STAGE:sam:dme:FirmwareDistributa
F16651 fsmStFailFirmwareDistributableDelete:Local bleDelete:Local) warning
[FSM:STAGE:FAILED|RETRY]:
deleting package [name] from
secondary(FSM-
fsmStFailFirmwareDistributableDelete:Rem STAGE:sam:dme:FirmwareDistributa
F16651 ote bleDelete:Remote) warning

[FSM:STAGE:FAILED|RETRY]:
deleting image file [name] ([invTag])
from primary(FSM-
STAGE:sam:dme:FirmwareImageDel
F16651 fsmStFailFirmwareImageDelete:Local ete:Local) warning

[FSM:STAGE:FAILED|RETRY]:
deleting image file [name] ([invTag])
from secondary(FSM-
STAGE:sam:dme:FirmwareImageDel
F16651 fsmStFailFirmwareImageDelete:Remote ete:Remote) warning
[FSM:STAGE:FAILED|RETRY]:
rebooting local fabric
interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:rese STAGE:sam:dme:MgmtControllerUp
F16653 tLocal dateSwitch:resetLocal) warning

[FSM:STAGE:FAILED|RETRY]:
rebooting remote fabric
interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:rese STAGE:sam:dme:MgmtControllerUp
F16653 tRemote dateSwitch:resetRemote) warning
[FSM:STAGE:FAILED|RETRY]:
updating local fabric
interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:upd STAGE:sam:dme:MgmtControllerUp
F16653 ateLocal dateSwitch:updateLocal) warning
[FSM:STAGE:FAILED|RETRY]:
updating peer fabric
interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:upd STAGE:sam:dme:MgmtControllerUp
F16653 ateRemote dateSwitch:updateRemote) warning

[FSM:STAGE:FAILED|RETRY]:
verifying boot variables for local
fabric interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:veri STAGE:sam:dme:MgmtControllerUp
F16653 fyLocal dateSwitch:verifyLocal) warning

[FSM:STAGE:FAILED|RETRY]:
verifying boot variables for remote
fabric interconnect(FSM-
fsmStFailMgmtControllerUpdateSwitch:veri STAGE:sam:dme:MgmtControllerUp
F16653 fyRemote dateSwitch:verifyRemote) warning

[FSM:STAGE:FAILED|RETRY]: waiting
for IOM update(FSM-
fsmStFailMgmtControllerUpdateIOM:PollUp STAGE:sam:dme:MgmtControllerUp
F16654 dateStatus dateIOM:PollUpdateStatus) warning
[FSM:STAGE:FAILED|RETRY]:
sending update request to
IOM(FSM-
fsmStFailMgmtControllerUpdateIOM:Updat STAGE:sam:dme:MgmtControllerUp
F16654 eRequest dateIOM:UpdateRequest) warning
[FSM:STAGE:FAILED|RETRY]:
activating backup image of
IOM(FSM-
fsmStFailMgmtControllerActivateIOM:Activ STAGE:sam:dme:MgmtControllerAc
F16655 ate tivateIOM:Activate) warning

[FSM:STAGE:FAILED|RETRY]:
Resetting IOM to boot the activated
version(FSM-
STAGE:sam:dme:MgmtControllerAc
F16655 fsmStFailMgmtControllerActivateIOM:Reset tivateIOM:Reset) warning
[FSM:STAGE:FAILED|RETRY]: waiting
for update to complete(FSM-
fsmStFailMgmtControllerUpdateBMC:PollU STAGE:sam:dme:MgmtControllerUp
F16656 pdateStatus dateBMC:PollUpdateStatus) warning
[FSM:STAGE:FAILED|RETRY]:
sending update request to
CIMC(FSM-
fsmStFailMgmtControllerUpdateBMC:Updat STAGE:sam:dme:MgmtControllerUp
F16656 eRequest dateBMC:UpdateRequest) warning
[FSM:STAGE:FAILED|RETRY]:
activating backup image of
CIMC(FSM-
fsmStFailMgmtControllerActivateBMC:Activ STAGE:sam:dme:MgmtControllerAc
F16657 ate tivateBMC:Activate) warning

[FSM:STAGE:FAILED|RETRY]:
Resetting CIMC to boot the activated
version(FSM-
STAGE:sam:dme:MgmtControllerAc
F16657 fsmStFailMgmtControllerActivateBMC:Reset tivateBMC:Reset) warning
[FSM:STAGE:FAILED|RETRY]: call-
home configuration on
primary(FSM-
fsmStFailCallhomeEpConfigCallhome:SetLoc STAGE:sam:dme:CallhomeEpConfigC
F16670 al allhome:SetLocal) warning

[FSM:STAGE:FAILED|RETRY]: call-
home configuration on
secondary(FSM-
fsmStFailCallhomeEpConfigCallhome:SetPee STAGE:sam:dme:CallhomeEpConfigC
F16670 r allhome:SetPeer) warning

[FSM:STAGE:FAILED|RETRY]:
configuring the out-of-band
interface(FSM-
fsmStFailMgmtIfSwMgmtOobIfConfig:Switc STAGE:sam:dme:MgmtIfSwMgmtOo
F16673 h bIfConfig:Switch) warning
[FSM:STAGE:FAILED|RETRY]:
configuring the inband
interface(FSM-
fsmStFailMgmtIfSwMgmtInbandIfConfig:Swi STAGE:sam:dme:MgmtIfSwMgmtInb
F16674 tch andIfConfig:Switch) warning

[FSM:STAGE:FAILED|RETRY]:
Updating virtual interface on local
fabric interconnect(FSM-
STAGE:sam:dme:MgmtIfVirtualIfCon
F16679 fsmStFailMgmtIfVirtualIfConfig:Local fig:Local) warning

[FSM:STAGE:FAILED|RETRY]:
Updating virtual interface on peer
fabric interconnect(FSM-
STAGE:sam:dme:MgmtIfVirtualIfCon
F16679 fsmStFailMgmtIfVirtualIfConfig:Remote fig:Remote) warning
[FSM:STAGE:FAILED|RETRY]: Enable
virtual interface on local fabric
interconnect(FSM-
STAGE:sam:dme:MgmtIfEnableVip:L
F16680 fsmStFailMgmtIfEnableVip:Local ocal) warning

[FSM:STAGE:FAILED|RETRY]: Disable
virtual interface on peer fabric
interconnect(FSM-
STAGE:sam:dme:MgmtIfDisableVip:P
F16681 fsmStFailMgmtIfDisableVip:Peer eer) warning

[FSM:STAGE:FAILED|RETRY]:
Transition from single to cluster
mode(FSM-
STAGE:sam:dme:MgmtIfEnableHA:L
F16682 fsmStFailMgmtIfEnableHA:Local ocal) warning

[FSM:STAGE:FAILED|RETRY]:
internal database backup(FSM-
STAGE:sam:dme:MgmtBackupBacku
F16683 fsmStFailMgmtBackupBackup:backupLocal p:backupLocal) warning

[FSM:STAGE:FAILED|RETRY]:
internal system backup(FSM-
STAGE:sam:dme:MgmtBackupBacku
F16683 fsmStFailMgmtBackupBackup:upload p:upload) warning
[FSM:STAGE:FAILED|RETRY]:
importing the configuration
file(FSM-
STAGE:sam:dme:MgmtImporterImp
F16684 fsmStFailMgmtImporterImport:config ort:config) warning

[FSM:STAGE:FAILED|RETRY]:
downloading the configuration file
from the remote location(FSM-
fsmStFailMgmtImporterImport:downloadLo STAGE:sam:dme:MgmtImporterImp
F16684 cal ort:downloadLocal) warning

[FSM:STAGE:FAILED|RETRY]: Report
results of configuration
application(FSM-
fsmStFailMgmtImporterImport:reportResult STAGE:sam:dme:MgmtImporterImp
F16684 s ort:reportResults) warning

[FSM:STAGE:FAILED|RETRY]: QoS
Classification Definition [name]
configuration on primary(FSM-
fsmStFailQosclassDefinitionConfigGlobalQo STAGE:sam:dme:QosclassDefinitionC
F16745 S:SetLocal onfigGlobalQoS:SetLocal) warning
[FSM:STAGE:FAILED|RETRY]: QoS
Classification Definition [name]
configuration on secondary(FSM-
fsmStFailQosclassDefinitionConfigGlobalQo STAGE:sam:dme:QosclassDefinitionC
F16745 S:SetPeer onfigGlobalQoS:SetPeer) warning

[FSM:STAGE:FAILED|RETRY]: vnic
qos policy [name] removal from
primary(FSM-
fsmStFailEpqosDefinitionDelTaskRemove:Lo STAGE:sam:dme:EpqosDefinitionDel
F16750 cal TaskRemove:Local) warning

[FSM:STAGE:FAILED|RETRY]: vnic
qos policy [name] removal from
secondary(FSM-
fsmStFailEpqosDefinitionDelTaskRemove:Pe STAGE:sam:dme:EpqosDefinitionDel
F16750 er TaskRemove:Peer) warning

[FSM:STAGE:FAILED|RETRY]:
Resetting Chassis Management
Controller on IOM [chassisId]/[id]
(FSM-
fsmStFailEquipmentIOCardResetCmc:Execut STAGE:sam:dme:EquipmentIOCardR
F16803 e esetCmc:Execute) warning
[FSM:STAGE:FAILED|RETRY]:
Updating UCS Manager
firmware(FSM-
fsmStFailMgmtControllerUpdateUCSManag STAGE:sam:dme:MgmtControllerUp
F16815 er:execute dateUCSManager:execute) warning
[FSM:STAGE:FAILED|RETRY]:
Scheduling UCS Manager
update(FSM-
fsmStFailMgmtControllerUpdateUCSManag STAGE:sam:dme:MgmtControllerUp
F16815 er:start dateUCSManager:start) warning

[FSM:STAGE:FAILED|RETRY]: system
configuration on primary(FSM-
STAGE:sam:dme:MgmtControllerSys
F16823 fsmStFailMgmtControllerSysConfig:Primary Config:Primary) warning

[FSM:STAGE:FAILED|RETRY]: system
configuration on secondary(FSM-
fsmStFailMgmtControllerSysConfig:Seconda STAGE:sam:dme:MgmtControllerSys
F16823 ry Config:Secondary) warning

[FSM:STAGE:FAILED|RETRY]: Disable
path [side]([switchId]) on server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:AdaptorExtEthIfPat
F16852 fsmStFailAdaptorExtEthIfPathReset:Disable hReset:Disable) warning
[FSM:STAGE:FAILED|RETRY]: Enable
path [side]([switchId]) on server
[chassisId]/[slotId](FSM-
STAGE:sam:dme:AdaptorExtEthIfPat
F16852 fsmStFailAdaptorExtEthIfPathReset:Enable hReset:Enable) warning

[FSM:STAGE:FAILED|RETRY]: Disable
circuit A for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostEthIfCircuitReset:Disa STAGE:sam:dme:AdaptorHostEthIfCi
F16857 bleA rcuitReset:DisableA) warning

[FSM:STAGE:FAILED|RETRY]: Disable
circuit B for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostEthIfCircuitReset:Disa STAGE:sam:dme:AdaptorHostEthIfCi
F16857 bleB rcuitReset:DisableB) warning

[FSM:STAGE:FAILED|RETRY]: Enable
circuit A for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostEthIfCircuitReset:Enab STAGE:sam:dme:AdaptorHostEthIfCi
F16857 leA rcuitReset:EnableA) warning

[FSM:STAGE:FAILED|RETRY]: Enable
circuit B for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostEthIfCircuitReset:Enab STAGE:sam:dme:AdaptorHostEthIfCi
F16857 leB rcuitReset:EnableB) warning

[FSM:STAGE:FAILED|RETRY]: Disable
circuit A for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostFcIfCircuitReset:Disabl STAGE:sam:dme:AdaptorHostFcIfCir
F16857 eA cuitReset:DisableA) warning

[FSM:STAGE:FAILED|RETRY]: Disable
circuit B for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostFcIfCircuitReset:Disabl STAGE:sam:dme:AdaptorHostFcIfCir
F16857 eB cuitReset:DisableB) warning

[FSM:STAGE:FAILED|RETRY]: Enable
circuit A for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostFcIfCircuitReset:Enabl STAGE:sam:dme:AdaptorHostFcIfCir
F16857 eA cuitReset:EnableA) warning

[FSM:STAGE:FAILED|RETRY]: Enable
circuit B for host adaptor [id] on
server [chassisId]/[slotId](FSM-
fsmStFailAdaptorHostFcIfCircuitReset:Enabl STAGE:sam:dme:AdaptorHostFcIfCir
F16857 eB cuitReset:EnableB) warning
[FSM:STAGE:FAILED|RETRY]:
external VM manager extension-key
configuration on peer fabric(FSM-
fsmStFailExtvmmMasterExtKeyConfig:SetLo STAGE:sam:dme:ExtvmmMasterExtK
F16879 cal eyConfig:SetLocal) warning

[FSM:STAGE:FAILED|RETRY]:
external VM manager extension-key
configuration on local fabric(FSM-
fsmStFailExtvmmMasterExtKeyConfig:SetPe STAGE:sam:dme:ExtvmmMasterExtK
F16879 er eyConfig:SetPeer) warning
[FSM:STAGE:FAILED|RETRY]:
external VM manager version
fetch(FSM-
STAGE:sam:dme:ExtvmmProviderCo
F16879 fsmStFailExtvmmProviderConfig:GetVersion nfig:GetVersion) warning

[FSM:STAGE:FAILED|RETRY]:
external VM manager configuration
on local fabric(FSM-
STAGE:sam:dme:ExtvmmProviderCo
F16879 fsmStFailExtvmmProviderConfig:SetLocal nfig:SetLocal) warning

[FSM:STAGE:FAILED|RETRY]:
external VM manager configuration
on peer fabric(FSM-
STAGE:sam:dme:ExtvmmProviderCo
F16879 fsmStFailExtvmmProviderConfig:SetPeer nfig:SetPeer) warning

[FSM:STAGE:FAILED|RETRY]: set
Veth Auto-delete Retention Timer
on local fabric(FSM-
STAGE:sam:dme:VmLifeCyclePolicyC
F16879 fsmStFailVmLifeCyclePolicyConfig:Local onfig:Local) warning

[FSM:STAGE:FAILED|RETRY]: set
Veth Auto-delete Retention Timer
on peer fabric(FSM-
STAGE:sam:dme:VmLifeCyclePolicyC
F16879 fsmStFailVmLifeCyclePolicyConfig:Peer onfig:Peer) warning

[FSM:STAGE:FAILED|RETRY]:
external VM manager certificate
configuration on local fabric(FSM-
fsmStFailExtvmmKeyStoreCertInstall:SetLoc STAGE:sam:dme:ExtvmmKeyStoreCe
F16880 al rtInstall:SetLocal) warning

[FSM:STAGE:FAILED|RETRY]:
external VM manager certificate
configuration on peer fabric(FSM-
fsmStFailExtvmmKeyStoreCertInstall:SetPee STAGE:sam:dme:ExtvmmKeyStoreCe
F16880 r rtInstall:SetPeer) warning
[FSM:STAGE:FAILED|RETRY]:
external VM manager deletion from
local fabric(FSM-
fsmStFailExtvmmSwitchDelTaskRemoveProv STAGE:sam:dme:ExtvmmSwitchDelT
F16881 ider:RemoveLocal askRemoveProvider:RemoveLocal) warning

[FSM:STAGE:FAILED|RETRY]:
applying changes to catalog(FSM-
STAGE:sam:dme:CapabilityUpdaterU
F16904 fsmStFailCapabilityUpdaterUpdater:Apply pdater:Apply) warning

[FSM:STAGE:FAILED|RETRY]: syncing
catalog files to subordinate(FSM-
fsmStFailCapabilityUpdaterUpdater:CopyRe STAGE:sam:dme:CapabilityUpdaterU
F16904 mote pdater:CopyRemote) warning
[FSM:STAGE:FAILED|RETRY]:
deleting temp image [fileName] on
local(FSM-
fsmStFailCapabilityUpdaterUpdater:DeleteL STAGE:sam:dme:CapabilityUpdaterU
F16904 ocal pdater:DeleteLocal) warning

[FSM:STAGE:FAILED|RETRY]:
evaluating status of update(FSM-
fsmStFailCapabilityUpdaterUpdater:Evaluat STAGE:sam:dme:CapabilityUpdaterU
F16904 eStatus pdater:EvaluateStatus) warning

[FSM:STAGE:FAILED|RETRY]:
downloading catalog file [fileName]
from [server](FSM-
STAGE:sam:dme:CapabilityUpdaterU
F16904 fsmStFailCapabilityUpdaterUpdater:Local pdater:Local) warning

[FSM:STAGE:FAILED|RETRY]:
rescanning image files(FSM-
fsmStFailCapabilityUpdaterUpdater:RescanI STAGE:sam:dme:CapabilityUpdaterU
F16904 mages pdater:RescanImages) warning

[FSM:STAGE:FAILED|RETRY]:
unpacking catalog file [fileName] on
primary(FSM-
fsmStFailCapabilityUpdaterUpdater:Unpack STAGE:sam:dme:CapabilityUpdaterU
F16904 Local pdater:UnpackLocal) warning

[FSM:STAGE:FAILED|RETRY]: Power
off server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeUpdateBoardContro STAGE:sam:dme:ComputeBladeUpd
F16930 ller:BladePowerOff ateBoardController:BladePowerOff) warning

[FSM:STAGE:FAILED|RETRY]: Power
on server [chassisId]/[slotId](FSM-
fsmStFailComputeBladeUpdateBoardContro STAGE:sam:dme:ComputeBladeUpd
F16930 ller:BladePowerOn ateBoardController:BladePowerOn) warning
[FSM:STAGE:FAILED|RETRY]:
Waiting for Board Controller update
to complete(FSM-
STAGE:sam:dme:ComputeBladeUpd
fsmStFailComputeBladeUpdateBoardContro ateBoardController:PollUpdateStatu
F16930 ller:PollUpdateStatus s) warning
[FSM:STAGE:FAILED|RETRY]:
Prepare for BoardController
update(FSM-
STAGE:sam:dme:ComputeBladeUpd
fsmStFailComputeBladeUpdateBoardContro ateBoardController:PrepareForUpda
F16930 ller:PrepareForUpdate te) warning

[FSM:STAGE:FAILED|RETRY]:
Sending Board Controller update
request to CIMC(FSM-
fsmStFailComputeBladeUpdateBoardContro STAGE:sam:dme:ComputeBladeUpd
F16930 ller:UpdateRequest ateBoardController:UpdateRequest) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to local bladeAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncBladeAGLoca
F16931 ue:SyncBladeAGLocal l) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to remote bladeAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncBladeAGRem
F16931 ue:SyncBladeAGRemote ote) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to local hostagentAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncHostagentAG
F16931 ue:SyncHostagentAGLocal Local) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to remote
hostagentAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncHostagentAG
F16931 ue:SyncHostagentAGRemote Remote) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to local nicAG(FSM-
fsmStFailCapabilityCatalogueDeployCatalog STAGE:sam:dme:CapabilityCatalogu
F16931 ue:SyncNicAGLocal eDeployCatalogue:SyncNicAGLocal) warning
[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to remote nicAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncNicAGRemot
F16931 ue:SyncNicAGRemote e) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to local portAG(FSM-
fsmStFailCapabilityCatalogueDeployCatalog STAGE:sam:dme:CapabilityCatalogu
F16931 ue:SyncPortAGLocal eDeployCatalogue:SyncPortAGLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Sending capability catalogue
[version] to remote portAG(FSM-
STAGE:sam:dme:CapabilityCatalogu
fsmStFailCapabilityCatalogueDeployCatalog eDeployCatalogue:SyncPortAGRemo
F16931 ue:SyncPortAGRemote te) warning

[FSM:STAGE:FAILED|RETRY]:
Finalizing capability catalogue
[version] deployment(FSM-
fsmStFailCapabilityCatalogueDeployCatalog STAGE:sam:dme:CapabilityCatalogu
F16931 ue:finalize eDeployCatalogue:finalize) warning

[FSM:STAGE:FAILED|RETRY]:
cleaning host entries(FSM-
fsmStFailEquipmentFexRemoveFex:Cleanup STAGE:sam:dme:EquipmentFexRem
F16942 Entries oveFex:CleanupEntries) warning

[FSM:STAGE:FAILED|RETRY]: erasing
fex identity [id] from primary(FSM-
fsmStFailEquipmentFexRemoveFex:UnIden STAGE:sam:dme:EquipmentFexRem
F16942 tifyLocal oveFex:UnIdentifyLocal) warning

[FSM:STAGE:FAILED|RETRY]: waiting
for clean up of resources for chassis
[id] (approx. 2 min)(FSM-
STAGE:sam:dme:EquipmentFexRem
F16942 fsmStFailEquipmentFexRemoveFex:Wait oveFex:Wait) warning

[FSM:STAGE:FAILED|RETRY]:
decommissioning fex [id](FSM-
fsmStFailEquipmentFexRemoveFex:decomis STAGE:sam:dme:EquipmentFexRem
F16942 sion oveFex:decomission) warning

[FSM:STAGE:FAILED|RETRY]: setting
locator led to [adminState](FSM-
fsmStFailEquipmentLocatorLedSetFeLocator STAGE:sam:dme:EquipmentLocatorL
F16943 Led:Execute edSetFeLocatorLed:Execute) warning
[FSM:STAGE:FAILED|RETRY]:
Configuring power cap of server [dn]
(FSM-
STAGE:sam:dme:ComputePhysicalPo
F16944 fsmStFailComputePhysicalPowerCap:Config werCap:Config) warning

[FSM:STAGE:FAILED|RETRY]: (FSM-
fsmStFailEquipmentChassisPowerCap:Confi STAGE:sam:dme:EquipmentChassisP
F16944 g owerCap:Config) warning

[FSM:STAGE:FAILED|RETRY]:
cleaning host entries(FSM-
fsmStFailEquipmentIOCardMuxOffline:Clean STAGE:sam:dme:EquipmentIOCard
F16945 upEntries MuxOffline:CleanupEntries) warning
[FSM:STAGE:FAILED|RETRY]:
Activate BIOS image for server
[serverId](FSM-
fsmStFailComputePhysicalAssociate:Activat STAGE:sam:dme:ComputePhysicalAs
F16973 eBios sociate:ActivateBios) warning

[FSM:STAGE:FAILED|RETRY]: Update
blade BIOS image(FSM-
fsmStFailComputePhysicalAssociate:BiosImg STAGE:sam:dme:ComputePhysicalAs
F16973 Update sociate:BiosImgUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for BIOS POST completion
from CIMC on server [serverId](FSM-
fsmStFailComputePhysicalAssociate:BiosPos STAGE:sam:dme:ComputePhysicalAs
F16973 tCompletion sociate:BiosPostCompletion) warning

[FSM:STAGE:FAILED|RETRY]: Power
off server for configuration of
service profile [assignedToDn](FSM-
fsmStFailComputePhysicalAssociate:BladeP STAGE:sam:dme:ComputePhysicalAs
F16973 owerOff sociate:BladePowerOff) warning

[FSM:STAGE:FAILED|RETRY]:
provisioning a bootable device with
a bootable pre-boot image for
server(FSM-
fsmStFailComputePhysicalAssociate:BmcCo STAGE:sam:dme:ComputePhysicalAs
F16973 nfigPnuOS sociate:BmcConfigPnuOS) warning

[FSM:STAGE:FAILED|RETRY]:
prepare configuration for preboot
environment(FSM-
fsmStFailComputePhysicalAssociate:BmcPre STAGE:sam:dme:ComputePhysicalAs
F16973 configPnuOSLocal sociate:BmcPreconfigPnuOSLocal) warning
[FSM:STAGE:FAILED|RETRY]:
prepare configuration for preboot
environment(FSM-
fsmStFailComputePhysicalAssociate:BmcPre STAGE:sam:dme:ComputePhysicalAs
F16973 configPnuOSPeer sociate:BmcPreconfigPnuOSPeer) warning

[FSM:STAGE:FAILED|RETRY]:
unprovisioning the bootable device
for server(FSM-
fsmStFailComputePhysicalAssociate:BmcUn STAGE:sam:dme:ComputePhysicalAs
F16973 configPnuOS sociate:BmcUnconfigPnuOS) warning

[FSM:STAGE:FAILED|RETRY]: Boot
host OS for server (service profile:
[assignedToDn])(FSM-
fsmStFailComputePhysicalAssociate:BootHo STAGE:sam:dme:ComputePhysicalAs
F16973 st sociate:BootHost) warning

[FSM:STAGE:FAILED|RETRY]: Bring-
up pre-boot environment for
association with [assignedToDn]
(FSM-
fsmStFailComputePhysicalAssociate:BootPn STAGE:sam:dme:ComputePhysicalAs
F16973 uos sociate:BootPnuos) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for system reset(FSM-
fsmStFailComputePhysicalAssociate:BootW STAGE:sam:dme:ComputePhysicalAs
F16973 ait sociate:BootWait) warning
[FSM:STAGE:FAILED|RETRY]:
Clearing pending BIOS image
update(FSM-
fsmStFailComputePhysicalAssociate:ClearBi STAGE:sam:dme:ComputePhysicalAs
F16973 osUpdate sociate:ClearBiosUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Configuring SoL interface on
server(FSM-
fsmStFailComputePhysicalAssociate:ConfigS STAGE:sam:dme:ComputePhysicalAs
F16973 oL sociate:ConfigSoL) warning
[FSM:STAGE:FAILED|RETRY]:
Configuring external user
access(FSM-
fsmStFailComputePhysicalAssociate:ConfigU STAGE:sam:dme:ComputePhysicalAs
F16973 serAccess sociate:ConfigUserAccess) warning

[FSM:STAGE:FAILED|RETRY]:
Configure logical UUID for server
(service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:ConfigU STAGE:sam:dme:ComputePhysicalAs
F16973 uid sociate:ConfigUuid) warning
[FSM:STAGE:FAILED|RETRY]: Update
Host Bus Adapter image(FSM-
fsmStFailComputePhysicalAssociate:HbaImg STAGE:sam:dme:ComputePhysicalAs
F16973 Update sociate:HbaImgUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Configure host OS components on
server (service profile:
[assignedToDn])(FSM-
fsmStFailComputePhysicalAssociate:HostOS STAGE:sam:dme:ComputePhysicalAs
F16973 Config sociate:HostOSConfig) warning

[FSM:STAGE:FAILED|RETRY]:
Identify host agent on server
(service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:HostOS STAGE:sam:dme:ComputePhysicalAs
F16973 Ident sociate:HostOSIdent) warning

[FSM:STAGE:FAILED|RETRY]: Upload
host agent policy to host agent on
server (service profile:
[assignedToDn])(FSM-
fsmStFailComputePhysicalAssociate:HostOS STAGE:sam:dme:ComputePhysicalAs
F16973 Policy sociate:HostOSPolicy) warning

[FSM:STAGE:FAILED|RETRY]:
Validate host OS on server (service
profile: [assignedToDn])(FSM-
fsmStFailComputePhysicalAssociate:HostOS STAGE:sam:dme:ComputePhysicalAs
F16973 Validate sociate:HostOSValidate) warning

[FSM:STAGE:FAILED|RETRY]: Update
LocalDisk firmware image(FSM-
fsmStFailComputePhysicalAssociate:LocalDi STAGE:sam:dme:ComputePhysicalAs
F16973 skFwUpdate sociate:LocalDiskFwUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Configure adapter in server for host
OS (service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:NicCon STAGE:sam:dme:ComputePhysicalAs
F16973 figHostOSLocal sociate:NicConfigHostOSLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configure adapter in server for host
OS (service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:NicCon STAGE:sam:dme:ComputePhysicalAs
F16973 figHostOSPeer sociate:NicConfigHostOSPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Configure adapter for pre-boot
environment(FSM-
fsmStFailComputePhysicalAssociate:NicCon STAGE:sam:dme:ComputePhysicalAs
F16973 figPnuOSLocal sociate:NicConfigPnuOSLocal) warning
[FSM:STAGE:FAILED|RETRY]:
Configure adapter for pre-boot
environment(FSM-
fsmStFailComputePhysicalAssociate:NicCon STAGE:sam:dme:ComputePhysicalAs
F16973 figPnuOSPeer sociate:NicConfigPnuOSPeer) warning

[FSM:STAGE:FAILED|RETRY]: Update
adapter image(FSM-
fsmStFailComputePhysicalAssociate:NicImg STAGE:sam:dme:ComputePhysicalAs
F16973 Update sociate:NicImgUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure adapter of server pre-
boot environment(FSM-
fsmStFailComputePhysicalAssociate:NicUnc STAGE:sam:dme:ComputePhysicalAs
F16973 onfigPnuOSLocal sociate:NicUnconfigPnuOSLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Unconfigure adapter of server pre-
boot environment(FSM-
fsmStFailComputePhysicalAssociate:NicUnc STAGE:sam:dme:ComputePhysicalAs
F16973 onfigPnuOSPeer sociate:NicUnconfigPnuOSPeer) warning
[FSM:STAGE:FAILED|RETRY]:
Populate pre-boot catalog to
server(FSM-
fsmStFailComputePhysicalAssociate:PnuOSC STAGE:sam:dme:ComputePhysicalAs
F16973 atalog sociate:PnuOSCatalog) warning

[FSM:STAGE:FAILED|RETRY]:
Configure server with service profile
[assignedToDn] pre-boot
environment(FSM-
fsmStFailComputePhysicalAssociate:PnuOSC STAGE:sam:dme:ComputePhysicalAs
F16973 onfig sociate:PnuOSConfig) warning
[FSM:STAGE:FAILED|RETRY]:
Identify pre-boot environment
agent(FSM-
fsmStFailComputePhysicalAssociate:PnuOSI STAGE:sam:dme:ComputePhysicalAs
F16973 dent sociate:PnuOSIdent) warning

[FSM:STAGE:FAILED|RETRY]:
Perform inventory of server(FSM-
fsmStFailComputePhysicalAssociate:PnuOSI STAGE:sam:dme:ComputePhysicalAs
F16973 nventory sociate:PnuOSInventory) warning

[FSM:STAGE:FAILED|RETRY]:
Configure local disk on server with
service profile [assignedToDn] pre-
boot environment(FSM-
fsmStFailComputePhysicalAssociate:PnuOSL STAGE:sam:dme:ComputePhysicalAs
F16973 ocalDiskConfig sociate:PnuOSLocalDiskConfig) warning
[FSM:STAGE:FAILED|RETRY]:
Populate pre-boot environment
behavior policy(FSM-
fsmStFailComputePhysicalAssociate:PnuOSP STAGE:sam:dme:ComputePhysicalAs
F16973 olicy sociate:PnuOSPolicy) warning

[FSM:STAGE:FAILED|RETRY]: Trigger
self-test on pre-boot
environment(FSM-
fsmStFailComputePhysicalAssociate:PnuOSS STAGE:sam:dme:ComputePhysicalAs
F16973 elfTest sociate:PnuOSSelfTest) warning

[FSM:STAGE:FAILED|RETRY]: Unload
drivers on server with service profile
[assignedToDn](FSM-
fsmStFailComputePhysicalAssociate:PnuOS STAGE:sam:dme:ComputePhysicalAs
F16973 UnloadDrivers sociate:PnuOSUnloadDrivers) warning

[FSM:STAGE:FAILED|RETRY]: Pre-
boot environment validation for
association with [assignedToDn]
(FSM-
fsmStFailComputePhysicalAssociate:PnuOS STAGE:sam:dme:ComputePhysicalAs
F16973 Validate sociate:PnuOSValidate) warning

[FSM:STAGE:FAILED|RETRY]: waiting
for BIOS activate(FSM-
fsmStFailComputePhysicalAssociate:PollBios STAGE:sam:dme:ComputePhysicalAs
F16973 ActivateStatus sociate:PollBiosActivateStatus) warning
[FSM:STAGE:FAILED|RETRY]:
Waiting for BIOS update to
complete(FSM-
fsmStFailComputePhysicalAssociate:PollBios STAGE:sam:dme:ComputePhysicalAs
F16973 UpdateStatus sociate:PollBiosUpdateStatus) warning

[FSM:STAGE:FAILED|RETRY]:
Waiting for Board Controller update
to complete(FSM-
fsmStFailComputePhysicalAssociate:PollBoa STAGE:sam:dme:ComputePhysicalAs
F16973 rdCtrlUpdateStatus sociate:PollBoardCtrlUpdateStatus) warning

[FSM:STAGE:FAILED|RETRY]: waiting
for pending BIOS image update to
clear(FSM-
fsmStFailComputePhysicalAssociate:PollCle STAGE:sam:dme:ComputePhysicalAs
F16973 arBiosUpdateStatus sociate:PollClearBiosUpdateStatus) warning

[FSM:STAGE:FAILED|RETRY]: Power
on server for configuration of service
profile [assignedToDn](FSM-
fsmStFailComputePhysicalAssociate:Power STAGE:sam:dme:ComputePhysicalAs
F16973 On sociate:PowerOn) warning
[FSM:STAGE:FAILED|RETRY]:
Preparing to check hardware
configuration(FSM-
fsmStFailComputePhysicalAssociate:PreSani STAGE:sam:dme:ComputePhysicalAs
F16973 tize sociate:PreSanitize) warning
[FSM:STAGE:FAILED|RETRY]:
Prepare server for booting host
OS(FSM-
fsmStFailComputePhysicalAssociate:Prepare STAGE:sam:dme:ComputePhysicalAs
F16973 ForBoot sociate:PrepareForBoot) warning
[FSM:STAGE:FAILED|RETRY]:
Checking hardware
configuration(FSM-
STAGE:sam:dme:ComputePhysicalAs
F16973 fsmStFailComputePhysicalAssociate:Sanitize sociate:Sanitize) warning

[FSM:STAGE:FAILED|RETRY]: Disable
Sol Redirection on server
[assignedToDn](FSM-
fsmStFailComputePhysicalAssociate:SolRedi STAGE:sam:dme:ComputePhysicalAs
F16973 rectDisable sociate:SolRedirectDisable) warning

[FSM:STAGE:FAILED|RETRY]: set up
bios token for server [assignedToDn]
for Sol redirect(FSM-
fsmStFailComputePhysicalAssociate:SolRedi STAGE:sam:dme:ComputePhysicalAs
F16973 rectEnable sociate:SolRedirectEnable) warning

[FSM:STAGE:FAILED|RETRY]: Update
storage controller image(FSM-
fsmStFailComputePhysicalAssociate:Storage STAGE:sam:dme:ComputePhysicalAs
F16973 CtlrImgUpdate sociate:StorageCtlrImgUpdate) warning

[FSM:STAGE:FAILED|RETRY]:
Configure primary fabric
interconnect for server host OS
(service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:SwCon STAGE:sam:dme:ComputePhysicalAs
F16973 figHostOSLocal sociate:SwConfigHostOSLocal) warning

[FSM:STAGE:FAILED|RETRY]:
Configure secondary fabric
interconnect for server host OS
(service profile: [assignedToDn])
(FSM-
fsmStFailComputePhysicalAssociate:SwCon STAGE:sam:dme:ComputePhysicalAs
F16973 figHostOSPeer sociate:SwConfigHostOSPeer) warning

[FSM:STAGE:FAILED|RETRY]:
Configure primary fabric
interconnect for pre-boot
environment(FSM-
fsmStFailComputePhysicalAssociate:SwCon STAGE:sam:dme:ComputePhysicalAs
F16973 figPnuOSLocal sociate:SwConfigPnuOSLocal) warning
F16973 | fsmStFailComputePhysicalAssociate:SwConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Configure secondary fabric interconnect for pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPnuOSPeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:SwConfigPortNivLocal | [FSM:STAGE:FAILED|RETRY]: configuring primary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPortNivLocal) | warning
F16973 | fsmStFailComputePhysicalAssociate:SwConfigPortNivPeer | [FSM:STAGE:FAILED|RETRY]: configuring secondary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPortNivPeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:SwUnconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure primary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwUnconfigPnuOSLocal) | warning
F16973 | fsmStFailComputePhysicalAssociate:SwUnconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure secondary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwUnconfigPnuOSPeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:UpdateBiosRequest | [FSM:STAGE:FAILED|RETRY]: Sending update BIOS request to CIMC(FSM-STAGE:sam:dme:ComputePhysicalAssociate:UpdateBiosRequest) | warning
F16973 | fsmStFailComputePhysicalAssociate:UpdateBoardCtrlRequest | [FSM:STAGE:FAILED|RETRY]: Sending Board Controller update request to CIMC(FSM-STAGE:sam:dme:ComputePhysicalAssociate:UpdateBoardCtrlRequest) | warning
F16973 | fsmStFailComputePhysicalAssociate:activateAdaptorNwFwLocal | [FSM:STAGE:FAILED|RETRY]: Activate adapter network firmware on(FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateAdaptorNwFwLocal) | warning
F16973 | fsmStFailComputePhysicalAssociate:activateAdaptorNwFwPeer | [FSM:STAGE:FAILED|RETRY]: Activate adapter network firmware on(FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateAdaptorNwFwPeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:activateIBMCFw | [FSM:STAGE:FAILED|RETRY]: Activate CIMC firmware of server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateIBMCFw) | warning
F16973 | fsmStFailComputePhysicalAssociate:hagHostOSConnect | [FSM:STAGE:FAILED|RETRY]: Connect to host agent on server (service profile: [assignedToDn]) (FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagHostOSConnect) | warning
F16973 | fsmStFailComputePhysicalAssociate:hagPnuOSConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagPnuOSConnect) | warning
F16973 | fsmStFailComputePhysicalAssociate:hagPnuOSDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent(FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagPnuOSDisconnect) | warning
F16973 | fsmStFailComputePhysicalAssociate:resetIBMC | [FSM:STAGE:FAILED|RETRY]: Reset CIMC of server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:resetIBMC) | warning
F16973 | fsmStFailComputePhysicalAssociate:serialDebugPnuOSConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:serialDebugPnuOSConnect) | warning
F16973 | fsmStFailComputePhysicalAssociate:serialDebugPnuOSDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent(FSM-STAGE:sam:dme:ComputePhysicalAssociate:serialDebugPnuOSDisconnect) | warning
F16973 | fsmStFailComputePhysicalAssociate:updateAdaptorNwFwLocal | [FSM:STAGE:FAILED|RETRY]: Update adapter network firmware(FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateAdaptorNwFwLocal) | warning
F16973 | fsmStFailComputePhysicalAssociate:updateAdaptorNwFwPeer | [FSM:STAGE:FAILED|RETRY]: Update adapter network firmware(FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateAdaptorNwFwPeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:updateIBMCFw | [FSM:STAGE:FAILED|RETRY]: Update CIMC firmware of server [serverId] (FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateIBMCFw) | warning
F16973 | fsmStFailComputePhysicalAssociate:waitForAdaptorNwFwUpdateLocal | [FSM:STAGE:FAILED|RETRY]: Wait for adapter network firmware update completion(FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForAdaptorNwFwUpdateLocal) | warning
F16973 | fsmStFailComputePhysicalAssociate:waitForAdaptorNwFwUpdatePeer | [FSM:STAGE:FAILED|RETRY]: Wait for adapter network firmware update completion(FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForAdaptorNwFwUpdatePeer) | warning
F16973 | fsmStFailComputePhysicalAssociate:waitForIBMCFwUpdate | [FSM:STAGE:FAILED|RETRY]: Wait for CIMC firmware completion on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForIBMCFwUpdate) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BiosPostCompletion | [FSM:STAGE:FAILED|RETRY]: Waiting for BIOS POST completion from CIMC on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BiosPostCompletion) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BmcConfigPnuOS | [FSM:STAGE:FAILED|RETRY]: provisioning a bootable device with a bootable pre-boot image for server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BmcConfigPnuOS) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BmcPreconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BmcPreconfigPnuOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BmcPreconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BmcPreconfigPnuOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BmcUnconfigPnuOS | [FSM:STAGE:FAILED|RETRY]: unprovisioning the bootable device for server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BmcUnconfigPnuOS) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BootPnuos | [FSM:STAGE:FAILED|RETRY]: Bring-up pre-boot environment on server for disassociation with service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BootPnuos) | warning
F16974 | fsmStFailComputePhysicalDisassociate:BootWait | [FSM:STAGE:FAILED|RETRY]: Waiting for system reset on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:BootWait) | warning
F16974 | fsmStFailComputePhysicalDisassociate:ConfigBios | [FSM:STAGE:FAILED|RETRY]: Configuring BIOS Defaults on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalDisassociate:ConfigBios) | warning
F16974 | fsmStFailComputePhysicalDisassociate:ConfigUserAccess | [FSM:STAGE:FAILED|RETRY]: Configuring external user access(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:ConfigUserAccess) | warning
F16974 | fsmStFailComputePhysicalDisassociate:HandlePooling | [FSM:STAGE:FAILED|RETRY]: Apply post-disassociation policies to server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:HandlePooling) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicConfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Configure adapter for pre-boot environment on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicConfigPnuOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Configure adapter for pre-boot environment on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicConfigPnuOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicUnconfigHostOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure host OS connectivity from server adapter(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicUnconfigHostOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicUnconfigHostOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure host OS connectivity from server adapter(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicUnconfigHostOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicUnconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure adapter of server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicUnconfigPnuOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:NicUnconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure adapter of server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:NicUnconfigPnuOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSCatalog | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot catalog to server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSCatalog) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSIdent | [FSM:STAGE:FAILED|RETRY]: Identify pre-boot environment agent on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSIdent) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSPolicy | [FSM:STAGE:FAILED|RETRY]: Populate pre-boot environment behavior policy to server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSPolicy) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSScrub | [FSM:STAGE:FAILED|RETRY]: Scrub server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSScrub) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSSelfTest | [FSM:STAGE:FAILED|RETRY]: Trigger self-test of server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSSelfTest) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSUnconfig | [FSM:STAGE:FAILED|RETRY]: Unconfigure server from service profile [assignedToDn] pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSUnconfig) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PnuOSValidate | [FSM:STAGE:FAILED|RETRY]: Pre-boot environment validate server for disassociation with service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PnuOSValidate) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PowerOn | [FSM:STAGE:FAILED|RETRY]: Power on server for unconfiguration of service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PowerOn) | warning
F16974 | fsmStFailComputePhysicalDisassociate:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:PreSanitize) | warning
F16974 | fsmStFailComputePhysicalDisassociate:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:Sanitize) | warning
F16974 | fsmStFailComputePhysicalDisassociate:Shutdown | [FSM:STAGE:FAILED|RETRY]: Shutdown server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:Shutdown) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SolRedirectDisable | [FSM:STAGE:FAILED|RETRY]: Disable Sol redirection on server [serverId] (FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SolRedirectDisable) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SolRedirectEnable | [FSM:STAGE:FAILED|RETRY]: set up bios token for server [serverId] for Sol redirect(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SolRedirectEnable) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwConfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Configure primary fabric interconnect for pre-boot environment on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwConfigPnuOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwConfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Configure secondary fabric interconnect for pre-boot environment on server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwConfigPnuOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwConfigPortNivLocal | [FSM:STAGE:FAILED|RETRY]: configuring primary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwConfigPortNivLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwConfigPortNivPeer | [FSM:STAGE:FAILED|RETRY]: configuring secondary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwConfigPortNivPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwUnconfigHostOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure host OS connectivity from server to primary fabric interconnect(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwUnconfigHostOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwUnconfigHostOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure host OS connectivity from server to secondary fabric interconnect(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwUnconfigHostOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwUnconfigPnuOSLocal | [FSM:STAGE:FAILED|RETRY]: Unconfigure primary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwUnconfigPnuOSLocal) | warning
F16974 | fsmStFailComputePhysicalDisassociate:SwUnconfigPnuOSPeer | [FSM:STAGE:FAILED|RETRY]: Unconfigure secondary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:SwUnconfigPnuOSPeer) | warning
F16974 | fsmStFailComputePhysicalDisassociate:UnconfigBios | [FSM:STAGE:FAILED|RETRY]: Unconfiguring BIOS Settings and Boot Order of server [serverId] (service profile [assignedToDn]) (FSM-STAGE:sam:dme:ComputePhysicalDisassociate:UnconfigBios) | warning
F16974 | fsmStFailComputePhysicalDisassociate:UnconfigSoL | [FSM:STAGE:FAILED|RETRY]: Removing SoL configuration from server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:UnconfigSoL) | warning
F16974 | fsmStFailComputePhysicalDisassociate:UnconfigUuid | [FSM:STAGE:FAILED|RETRY]: Restore original UUID for server (service profile: [assignedToDn]) (FSM-STAGE:sam:dme:ComputePhysicalDisassociate:UnconfigUuid) | warning
F16974 | fsmStFailComputePhysicalDisassociate:hagPnuOSConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent on server for disassociation with service profile [assignedToDn] (FSM-STAGE:sam:dme:ComputePhysicalDisassociate:hagPnuOSConnect) | warning
F16974 | fsmStFailComputePhysicalDisassociate:hagPnuOSDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent for server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:hagPnuOSDisconnect) | warning
F16974 | fsmStFailComputePhysicalDisassociate:serialDebugPnuOSConnect | [FSM:STAGE:FAILED|RETRY]: Connect to pre-boot environment agent on server for disassociation with service profile [assignedToDn] (FSM-STAGE:sam:dme:ComputePhysicalDisassociate:serialDebugPnuOSConnect) | warning
F16974 | fsmStFailComputePhysicalDisassociate:serialDebugPnuOSDisconnect | [FSM:STAGE:FAILED|RETRY]: Disconnect pre-boot environment agent for server(FSM-STAGE:sam:dme:ComputePhysicalDisassociate:serialDebugPnuOSDisconnect) | warning
F16976 | fsmStFailComputePhysicalDecommission:CleanupCIMC | [FSM:STAGE:FAILED|RETRY]: Cleaning up CIMC configuration for server [dn](FSM-STAGE:sam:dme:ComputePhysicalDecommission:CleanupCIMC) | warning
F16976 | fsmStFailComputePhysicalDecommission:Execute | [FSM:STAGE:FAILED|RETRY]: Decommissioning server [dn](FSM-STAGE:sam:dme:ComputePhysicalDecommission:Execute) | warning
F16976 | fsmStFailComputePhysicalDecommission:StopVMediaLocal | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalDecommission:StopVMediaLocal) | warning
F16976 | fsmStFailComputePhysicalDecommission:StopVMediaPeer | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalDecommission:StopVMediaPeer) | warning
F16977 | fsmStFailComputePhysicalSoftShutdown:Execute | [FSM:STAGE:FAILED|RETRY]: Soft shutdown of server [dn](FSM-STAGE:sam:dme:ComputePhysicalSoftShutdown:Execute) | warning
F16978 | fsmStFailComputePhysicalHardShutdown:Execute | [FSM:STAGE:FAILED|RETRY]: Hard shutdown of server [dn](FSM-STAGE:sam:dme:ComputePhysicalHardShutdown:Execute) | warning
F16979 | fsmStFailComputePhysicalTurnup:Execute | [FSM:STAGE:FAILED|RETRY]: Power-on server [dn](FSM-STAGE:sam:dme:ComputePhysicalTurnup:Execute) | warning
F16980 | fsmStFailComputePhysicalPowercycle:Execute | [FSM:STAGE:FAILED|RETRY]: Power-cycle server [dn](FSM-STAGE:sam:dme:ComputePhysicalPowercycle:Execute) | warning
F16980 | fsmStFailComputePhysicalPowercycle:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalPowercycle:PreSanitize) | warning
F16980 | fsmStFailComputePhysicalPowercycle:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalPowercycle:Sanitize) | warning
F16981 | fsmStFailComputePhysicalHardreset:Execute | [FSM:STAGE:FAILED|RETRY]: Hard-reset server [dn](FSM-STAGE:sam:dme:ComputePhysicalHardreset:Execute) | warning
F16981 | fsmStFailComputePhysicalHardreset:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalHardreset:PreSanitize) | warning
F16981 | fsmStFailComputePhysicalHardreset:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalHardreset:Sanitize) | warning
F16982 | fsmStFailComputePhysicalSoftreset:Execute | [FSM:STAGE:FAILED|RETRY]: Soft-reset server [dn](FSM-STAGE:sam:dme:ComputePhysicalSoftreset:Execute) | warning
F16982 | fsmStFailComputePhysicalSoftreset:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalSoftreset:PreSanitize) | warning
F16982 | fsmStFailComputePhysicalSoftreset:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalSoftreset:Sanitize) | warning
F16983 | fsmStFailComputePhysicalSwConnUpd:A | [FSM:STAGE:FAILED|RETRY]: Updating fabric A for server [dn] (FSM-STAGE:sam:dme:ComputePhysicalSwConnUpd:A) | warning
F16983 | fsmStFailComputePhysicalSwConnUpd:B | [FSM:STAGE:FAILED|RETRY]: Updating fabric B for server [dn] (FSM-STAGE:sam:dme:ComputePhysicalSwConnUpd:B) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Cleanup | [FSM:STAGE:FAILED|RETRY]: Completing BIOS recovery mode for server [dn], and shutting it down(FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Cleanup) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:PreSanitize) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Reset | [FSM:STAGE:FAILED|RETRY]: Resetting server [dn] power state after BIOS recovery(FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Reset) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Sanitize) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:SetupVmediaLocal | [FSM:STAGE:FAILED|RETRY]: Provisioning a V-Media device with a bootable BIOS image for server [dn] (FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:SetupVmediaLocal) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:SetupVmediaPeer | [FSM:STAGE:FAILED|RETRY]: Provisioning a V-Media device with a bootable BIOS image for server [dn] (FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:SetupVmediaPeer) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Shutdown | [FSM:STAGE:FAILED|RETRY]: Shutting down server [dn] to prepare for BIOS recovery(FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Shutdown) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Start | [FSM:STAGE:FAILED|RETRY]: Running BIOS recovery on server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Start) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:StopVMediaLocal | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:StopVMediaLocal) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:StopVMediaPeer | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:StopVMediaPeer) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:TeardownVmediaLocal | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:TeardownVmediaLocal) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:TeardownVmediaPeer | [FSM:STAGE:FAILED|RETRY]: Unprovisioning the V-Media bootable device for server [dn](FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:TeardownVmediaPeer) | warning
F16984 | fsmStFailComputePhysicalBiosRecovery:Wait | [FSM:STAGE:FAILED|RETRY]: Waiting for completion of BIOS recovery for server [dn] (up to 15 min)(FSM-STAGE:sam:dme:ComputePhysicalBiosRecovery:Wait) | warning
F16986 | fsmStFailComputePhysicalCmosReset:BladePowerOn | [FSM:STAGE:FAILED|RETRY]: Power on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalCmosReset:BladePowerOn) | warning
F16986 | fsmStFailComputePhysicalCmosReset:Execute | [FSM:STAGE:FAILED|RETRY]: Resetting CMOS for server [serverId] (FSM-STAGE:sam:dme:ComputePhysicalCmosReset:Execute) | warning
F16986 | fsmStFailComputePhysicalCmosReset:PreSanitize | [FSM:STAGE:FAILED|RETRY]: Preparing to check hardware configuration server [serverId](FSM-STAGE:sam:dme:ComputePhysicalCmosReset:PreSanitize) | warning
F16986 | fsmStFailComputePhysicalCmosReset:ReconfigBios | [FSM:STAGE:FAILED|RETRY]: Reconfiguring BIOS Settings and Boot Order of server [serverId] for service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalCmosReset:ReconfigBios) | warning
F16986 | fsmStFailComputePhysicalCmosReset:ReconfigUuid | [FSM:STAGE:FAILED|RETRY]: Reconfiguring logical UUID of server [serverId] for service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalCmosReset:ReconfigUuid) | warning
F16986 | fsmStFailComputePhysicalCmosReset:Sanitize | [FSM:STAGE:FAILED|RETRY]: Checking hardware configuration server [serverId](FSM-STAGE:sam:dme:ComputePhysicalCmosReset:Sanitize) | warning
F16987 | fsmStFailComputePhysicalResetBmc:Execute | [FSM:STAGE:FAILED|RETRY]: Resetting Management Controller on server [dn](FSM-STAGE:sam:dme:ComputePhysicalResetBmc:Execute) | warning
F16988 | fsmStFailEquipmentIOCardResetIom:Execute | [FSM:STAGE:FAILED|RETRY]: Reset IOM [id] on Fex [chassisId](FSM-STAGE:sam:dme:EquipmentIOCardResetIom:Execute) | warning
F17008 | fsmStFailComputePhysicalUpdateExtUsers:Deploy | [FSM:STAGE:FAILED|RETRY]: external mgmt user deployment on server [dn] (profile [assignedToDn]) (FSM-STAGE:sam:dme:ComputePhysicalUpdateExtUsers:Deploy) | warning
F17012 | fsmStFailSysdebugTechSupportInitiate:Local | [FSM:STAGE:FAILED|RETRY]: create tech-support file from GUI on local(FSM-STAGE:sam:dme:SysdebugTechSupportInitiate:Local) | warning
F17013 | fsmStFailSysdebugTechSupportDeleteTechSupFile:Local | [FSM:STAGE:FAILED|RETRY]: delete tech-support file from GUI on local(FSM-STAGE:sam:dme:SysdebugTechSupportDeleteTechSupFile:Local) | warning
F17013 | fsmStFailSysdebugTechSupportDeleteTechSupFile:peer | [FSM:STAGE:FAILED|RETRY]: delete tech-support file from GUI on peer(FSM-STAGE:sam:dme:SysdebugTechSupportDeleteTechSupFile:peer) | warning
F17014 | fsmStFailFirmwareDownloaderDownload:CopyRemote | [FSM:STAGE:FAILED|RETRY]: sync images to subordinate(FSM-STAGE:sam:dme:FirmwareDownloaderDownload:CopyRemote) | warning
F17014 | fsmStFailFirmwareDownloaderDownload:DeleteLocal | [FSM:STAGE:FAILED|RETRY]: deleting downloadable [fileName] on local(FSM-STAGE:sam:dme:FirmwareDownloaderDownload:DeleteLocal) | warning
F17014 | fsmStFailFirmwareDownloaderDownload:Local | [FSM:STAGE:FAILED|RETRY]: downloading image [fileName] from [server](FSM-STAGE:sam:dme:FirmwareDownloaderDownload:Local) | warning
F17014 | fsmStFailFirmwareDownloaderDownload:UnpackLocal | [FSM:STAGE:FAILED|RETRY]: unpacking image [fileName] on primary(FSM-STAGE:sam:dme:FirmwareDownloaderDownload:UnpackLocal) | warning
F17014 | fsmStFailLicenseDownloaderDownload:CopyRemote | [FSM:STAGE:FAILED|RETRY]: Copy the license file to subordinate for inventory(FSM-STAGE:sam:dme:LicenseDownloaderDownload:CopyRemote) | warning
F17014 | fsmStFailLicenseDownloaderDownload:DeleteLocal | [FSM:STAGE:FAILED|RETRY]: deleting temporary files for [fileName] on local(FSM-STAGE:sam:dme:LicenseDownloaderDownload:DeleteLocal) | warning
F17014 | fsmStFailLicenseDownloaderDownload:DeleteRemote | [FSM:STAGE:FAILED|RETRY]: deleting temporary files for [fileName] on subordinate(FSM-STAGE:sam:dme:LicenseDownloaderDownload:DeleteRemote) | warning
F17014 | fsmStFailLicenseDownloaderDownload:Local | [FSM:STAGE:FAILED|RETRY]: downloading license file [fileName] from [server](FSM-STAGE:sam:dme:LicenseDownloaderDownload:Local) | warning
F17014 | fsmStFailLicenseDownloaderDownload:ValidateLocal | [FSM:STAGE:FAILED|RETRY]: validation for license file [fileName] on primary(FSM-STAGE:sam:dme:LicenseDownloaderDownload:ValidateLocal) | warning
F17014 | fsmStFailLicenseDownloaderDownload:ValidateRemote | [FSM:STAGE:FAILED|RETRY]: validation for license file [fileName] on subordinate(FSM-STAGE:sam:dme:LicenseDownloaderDownload:ValidateRemote) | warning
F17014 | fsmStFailSysdebugCoreDownload:CopyPrimary | [FSM:STAGE:FAILED|RETRY]: Copy the Core file to primary for download(FSM-STAGE:sam:dme:SysdebugCoreDownload:CopyPrimary) | warning
F17014 | fsmStFailSysdebugCoreDownload:CopySub | [FSM:STAGE:FAILED|RETRY]: copy Core file on subordinate switch to tmp directory(FSM-STAGE:sam:dme:SysdebugCoreDownload:CopySub) | warning
F17014 | fsmStFailSysdebugCoreDownload:DeletePrimary | [FSM:STAGE:FAILED|RETRY]: Delete the Core file from primary switch under tmp directory(FSM-STAGE:sam:dme:SysdebugCoreDownload:DeletePrimary) | warning
F17014 | fsmStFailSysdebugCoreDownload:DeleteSub | [FSM:STAGE:FAILED|RETRY]: Delete the Core file from subordinate under tmp directory(FSM-STAGE:sam:dme:SysdebugCoreDownload:DeleteSub) | warning
F17014 | fsmStFailSysdebugTechSupportDownload:CopyPrimary | [FSM:STAGE:FAILED|RETRY]: Copy the tech-support file to primary for download(FSM-STAGE:sam:dme:SysdebugTechSupportDownload:CopyPrimary) | warning
F17014 | fsmStFailSysdebugTechSupportDownload:CopySub | [FSM:STAGE:FAILED|RETRY]: copy tech-support file on subordinate switch to tmp directory(FSM-STAGE:sam:dme:SysdebugTechSupportDownload:CopySub) | warning
F17014 | fsmStFailSysdebugTechSupportDownload:DeletePrimary | [FSM:STAGE:FAILED|RETRY]: Delete the tech-support file from primary switch under tmp directory(FSM-STAGE:sam:dme:SysdebugTechSupportDownload:DeletePrimary) | warning
F17014 | fsmStFailSysdebugTechSupportDownload:DeleteSub | [FSM:STAGE:FAILED|RETRY]: Delete the tech-support file from subordinate under tmp directory(FSM-STAGE:sam:dme:SysdebugTechSupportDownload:DeleteSub) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:PollUpdateStatusLocal | [FSM:STAGE:FAILED|RETRY]: waiting for update to complete(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:PollUpdateStatusLocal) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:PollUpdateStatusPeer | [FSM:STAGE:FAILED|RETRY]: waiting for update to complete(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:PollUpdateStatusPeer) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:PowerOff | [FSM:STAGE:FAILED|RETRY]: Power off the server(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:PowerOff) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:PowerOn | [FSM:STAGE:FAILED|RETRY]: power on the blade(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:PowerOn) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:UpdateRequestLocal | [FSM:STAGE:FAILED|RETRY]: sending update request to Adaptor(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:UpdateRequestLocal) | warning
F17043 | fsmStFailComputePhysicalUpdateAdaptor:UpdateRequestPeer | [FSM:STAGE:FAILED|RETRY]: sending update request to Adaptor(FSM-STAGE:sam:dme:ComputePhysicalUpdateAdaptor:UpdateRequestPeer) | warning
F17044 | fsmStFailComputePhysicalActivateAdaptor:ActivateLocal | [FSM:STAGE:FAILED|RETRY]: activating backup image of Adaptor(FSM-STAGE:sam:dme:ComputePhysicalActivateAdaptor:ActivateLocal) | warning
F17044 | fsmStFailComputePhysicalActivateAdaptor:ActivatePeer | [FSM:STAGE:FAILED|RETRY]: activating backup image of Adaptor(FSM-STAGE:sam:dme:ComputePhysicalActivateAdaptor:ActivatePeer) | warning
F17044 | fsmStFailComputePhysicalActivateAdaptor:PowerOn | [FSM:STAGE:FAILED|RETRY]: power on the blade(FSM-STAGE:sam:dme:ComputePhysicalActivateAdaptor:PowerOn) | warning
F17044 | fsmStFailComputePhysicalActivateAdaptor:Reset | [FSM:STAGE:FAILED|RETRY]: reseting the blade(FSM-STAGE:sam:dme:ComputePhysicalActivateAdaptor:Reset) | warning
F17045 | fsmStFailCapabilityCatalogueActivateCatalog:ApplyCatalog | [FSM:STAGE:FAILED|RETRY]: applying changes to catalog(FSM-STAGE:sam:dme:CapabilityCatalogueActivateCatalog:ApplyCatalog) | warning
F17045 | fsmStFailCapabilityCatalogueActivateCatalog:CopyRemote | [FSM:STAGE:FAILED|RETRY]: syncing catalog changes to subordinate(FSM-STAGE:sam:dme:CapabilityCatalogueActivateCatalog:CopyRemote) | warning
F17045 | fsmStFailCapabilityCatalogueActivateCatalog:EvaluateStatus | [FSM:STAGE:FAILED|RETRY]: evaluating status of activation(FSM-STAGE:sam:dme:CapabilityCatalogueActivateCatalog:EvaluateStatus) | warning
F17045 | fsmStFailCapabilityCatalogueActivateCatalog:RescanImages | [FSM:STAGE:FAILED|RETRY]: rescanning image files(FSM-STAGE:sam:dme:CapabilityCatalogueActivateCatalog:RescanImages) | warning
F17045 | fsmStFailCapabilityCatalogueActivateCatalog:UnpackLocal | [FSM:STAGE:FAILED|RETRY]: activating catalog changes(FSM-STAGE:sam:dme:CapabilityCatalogueActivateCatalog:UnpackLocal) | warning
F17046 | fsmStFailCapabilityMgmtExtensionActivateMgmtExt:ApplyCatalog | [FSM:STAGE:FAILED|RETRY]: applying changes to catalog(FSM-STAGE:sam:dme:CapabilityMgmtExtensionActivateMgmtExt:ApplyCatalog) | warning
F17046 | fsmStFailCapabilityMgmtExtensionActivateMgmtExt:CopyRemote | [FSM:STAGE:FAILED|RETRY]: syncing management extension changes to subordinate(FSM-STAGE:sam:dme:CapabilityMgmtExtensionActivateMgmtExt:CopyRemote) | warning
F17046 | fsmStFailCapabilityMgmtExtensionActivateMgmtExt:EvaluateStatus | [FSM:STAGE:FAILED|RETRY]: evaluating status of activation(FSM-STAGE:sam:dme:CapabilityMgmtExtensionActivateMgmtExt:EvaluateStatus) | warning
F17046 | fsmStFailCapabilityMgmtExtensionActivateMgmtExt:RescanImages | [FSM:STAGE:FAILED|RETRY]: rescanning image files(FSM-STAGE:sam:dme:CapabilityMgmtExtensionActivateMgmtExt:RescanImages) | warning
F17046 | fsmStFailCapabilityMgmtExtensionActivateMgmtExt:UnpackLocal | [FSM:STAGE:FAILED|RETRY]: activating management extension changes(FSM-STAGE:sam:dme:CapabilityMgmtExtensionActivateMgmtExt:UnpackLocal) | warning
F17051 | fsmStFailLicenseFileInstall:Local | [FSM:STAGE:FAILED|RETRY]: Installing license on primary(FSM-STAGE:sam:dme:LicenseFileInstall:Local) | warning
F17051 | fsmStFailLicenseFileInstall:Remote | [FSM:STAGE:FAILED|RETRY]: Installing license on subordinate(FSM-STAGE:sam:dme:LicenseFileInstall:Remote) | warning
F17052 | fsmStFailLicenseFileClear:Local | [FSM:STAGE:FAILED|RETRY]: Clearing license on primary(FSM-STAGE:sam:dme:LicenseFileClear:Local) | warning
F17052 | fsmStFailLicenseFileClear:Remote | [FSM:STAGE:FAILED|RETRY]: Clearing license on subordinate(FSM-STAGE:sam:dme:LicenseFileClear:Remote) | warning
F17053 | fsmStFailLicenseInstanceUpdateFlexlm:Local | [FSM:STAGE:FAILED|RETRY]: Updating on primary(FSM-STAGE:sam:dme:LicenseInstanceUpdateFlexlm:Local) | warning
F17053 | fsmStFailLicenseInstanceUpdateFlexlm:Remote | [FSM:STAGE:FAILED|RETRY]: Updating on subordinate(FSM-STAGE:sam:dme:LicenseInstanceUpdateFlexlm:Remote) | warning
F17083 | fsmStFailComputePhysicalConfigSoL:Execute | [FSM:STAGE:FAILED|RETRY]: configuring SoL interface on server [dn](FSM-STAGE:sam:dme:ComputePhysicalConfigSoL:Execute) | warning
F17084 | fsmStFailComputePhysicalUnconfigSoL:Execute | [FSM:STAGE:FAILED|RETRY]: removing SoL interface configuration from server [dn](FSM-STAGE:sam:dme:ComputePhysicalUnconfigSoL:Execute) | warning
F17089 | fsmStFailPortPIoInCompatSfpPresence:Shutdown | [FSM:STAGE:FAILED|RETRY]: Shutting down port(FSM-STAGE:sam:dme:PortPIoInCompatSfpPresence:Shutdown) | warning
F17116 | fsmStFailComputePhysicalDiagnosticInterrupt:Execute | [FSM:STAGE:FAILED|RETRY]: Execute Diagnostic Interrupt(NMI) for server [dn](FSM-STAGE:sam:dme:ComputePhysicalDiagnosticInterrupt:Execute) | warning
F17134 | fsmStFailEquipmentChassisDynamicReallocation:Config | [FSM:STAGE:FAILED|RETRY]: (FSM-STAGE:sam:dme:EquipmentChassisDynamicReallocation:Config) | warning
F17163 | fsmStFailComputePhysicalResetKvm:Execute | [FSM:STAGE:FAILED|RETRY]: Execute KVM Reset for server [dn] (FSM-STAGE:sam:dme:ComputePhysicalResetKvm:Execute) | warning
F17169 | fsmStFailMgmtControllerOnline:BmcConfigureConnLocal | [FSM:STAGE:FAILED|RETRY]: Configuring connectivity on CIMC(FSM-STAGE:sam:dme:MgmtControllerOnline:BmcConfigureConnLocal) | warning
F17169 | fsmStFailMgmtControllerOnline:BmcConfigureConnPeer | [FSM:STAGE:FAILED|RETRY]: Configuring connectivity on CIMC(FSM-STAGE:sam:dme:MgmtControllerOnline:BmcConfigureConnPeer) | warning
F17169 | fsmStFailMgmtControllerOnline:SwConfigureConnLocal | [FSM:STAGE:FAILED|RETRY]: Configuring fabric-interconnect connectivity to CIMC(FSM-STAGE:sam:dme:MgmtControllerOnline:SwConfigureConnLocal) | warning
F17169 | fsmStFailMgmtControllerOnline:SwConfigureConnPeer | [FSM:STAGE:FAILED|RETRY]: Configuring fabric-interconnect connectivity to CIMC(FSM-STAGE:sam:dme:MgmtControllerOnline:SwConfigureConnPeer) | warning
F17170 | fsmStFailComputeRackUnitOffline:CleanupLocal | [FSM:STAGE:FAILED|RETRY]: cleaning host entries on local fabric-interconnect(FSM-STAGE:sam:dme:ComputeRackUnitOffline:CleanupLocal) | warning
F17170 | fsmStFailComputeRackUnitOffline:CleanupPeer | [FSM:STAGE:FAILED|RETRY]: cleaning host entries on peer fabric-interconnect(FSM-STAGE:sam:dme:ComputeRackUnitOffline:CleanupPeer) | warning
F17170 | fsmStFailComputeRackUnitOffline:SwUnconfigureLocal | [FSM:STAGE:FAILED|RETRY]: Unconfiguring fabric-interconnect connectivity to CIMC of server [id] (FSM-STAGE:sam:dme:ComputeRackUnitOffline:SwUnconfigureLocal) | warning
F17170 | fsmStFailComputeRackUnitOffline:SwUnconfigurePeer | [FSM:STAGE:FAILED|RETRY]: Unconfiguring fabric-interconnect connectivity to CIMC of server [id] (FSM-STAGE:sam:dme:ComputeRackUnitOffline:SwUnconfigurePeer) | warning
F17187 | fsmStFailEquipmentLocatorLedSetFiLocatorLed:Execute | [FSM:STAGE:FAILED|RETRY]: setting FI locator led to [adminState](FSM-STAGE:sam:dme:EquipmentLocatorLedSetFiLocatorLed:Execute) | warning
F17223 | fsmStFailVnicProfileSetDeployAlias:Local | [FSM:STAGE:FAILED|RETRY]: VNIC profile alias configuration on local fabric(FSM-STAGE:sam:dme:VnicProfileSetDeployAlias:Local) | warning
F17223 | fsmStFailVnicProfileSetDeployAlias:Peer | [FSM:STAGE:FAILED|RETRY]: VNIC profile alias configuration on peer fabric(FSM-STAGE:sam:dme:VnicProfileSetDeployAlias:Peer) | warning
F17239 | fsmStFailSwPhysConfPhysical:ConfigSwA | [FSM:STAGE:FAILED|RETRY]: Configure physical port types on fabric interconnect [id](FSM-STAGE:sam:dme:SwPhysConfPhysical:ConfigSwA) | warning
F17239 | fsmStFailSwPhysConfPhysical:ConfigSwB | [FSM:STAGE:FAILED|RETRY]: Configure physical port types on fabric interconnect [id](FSM-STAGE:sam:dme:SwPhysConfPhysical:ConfigSwB) | warning
F17239 | fsmStFailSwPhysConfPhysical:PortInventorySwA | [FSM:STAGE:FAILED|RETRY]: Performing local port inventory of switch [id](FSM-STAGE:sam:dme:SwPhysConfPhysical:PortInventorySwA) | warning
F17239 | fsmStFailSwPhysConfPhysical:PortInventorySwB | [FSM:STAGE:FAILED|RETRY]: Performing peer port inventory of switch [id](FSM-STAGE:sam:dme:SwPhysConfPhysical:PortInventorySwB) | warning
F17239 | fsmStFailSwPhysConfPhysical:VerifyPhysConfig | [FSM:STAGE:FAILED|RETRY]: Verifying physical transition on fabric interconnect [id](FSM-STAGE:sam:dme:SwPhysConfPhysical:VerifyPhysConfig) | warning
F17254 | fsmStFailExtvmmEpClusterRole:SetLocal | [FSM:STAGE:FAILED|RETRY]: external VM management cluster role configuration on local fabric(FSM-STAGE:sam:dme:ExtvmmEpClusterRole:SetLocal) | warning
F17254 | fsmStFailExtvmmEpClusterRole:SetPeer | [FSM:STAGE:FAILED|RETRY]: external VM management cluster role configuration on peer fabric(FSM-STAGE:sam:dme:ExtvmmEpClusterRole:SetPeer) | warning
F17262 | fsmStFailEquipmentBeaconLedIlluminate:ExecuteA | [FSM:STAGE:FAILED|RETRY]: Turn beacon lights on/off for [dn] on fabric interconnect [id](FSM-STAGE:sam:dme:EquipmentBeaconLedIlluminate:ExecuteA) | warning
F17262 | fsmStFailEquipmentBeaconLedIlluminate:ExecuteB | [FSM:STAGE:FAILED|RETRY]: Turn beacon lights on/off for [dn] on fabric interconnect [id](FSM-STAGE:sam:dme:EquipmentBeaconLedIlluminate:ExecuteB) | warning
F17271 | fsmStFailEtherServerIntFIoConfigSpeed:Configure | [FSM:STAGE:FAILED|RETRY]: Configure admin speed for [dn] (FSM-STAGE:sam:dme:EtherServerIntFIoConfigSpeed:Configure) | warning
F17281 | fsmStFailComputePhysicalUpdateBIOS:Clear | [FSM:STAGE:FAILED|RETRY]: clearing pending BIOS image update(FSM-STAGE:sam:dme:ComputePhysicalUpdateBIOS:Clear) | warning
F17281 | fsmStFailComputePhysicalUpdateBIOS:PollClearStatus | [FSM:STAGE:FAILED|RETRY]: waiting for pending BIOS image update to clear(FSM-STAGE:sam:dme:ComputePhysicalUpdateBIOS:PollClearStatus) | warning
F17281 | fsmStFailComputePhysicalUpdateBIOS:PollUpdateStatus | [FSM:STAGE:FAILED|RETRY]: waiting for BIOS update to complete(FSM-STAGE:sam:dme:ComputePhysicalUpdateBIOS:PollUpdateStatus) | warning
F17281 | fsmStFailComputePhysicalUpdateBIOS:UpdateRequest | [FSM:STAGE:FAILED|RETRY]: sending BIOS update request to CIMC(FSM-STAGE:sam:dme:ComputePhysicalUpdateBIOS:UpdateRequest) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:Activate | [FSM:STAGE:FAILED|RETRY]: activating BIOS image(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:Activate) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:Clear | [FSM:STAGE:FAILED|RETRY]: clearing pending BIOS image activate(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:Clear) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:PollActivateStatus | [FSM:STAGE:FAILED|RETRY]: waiting for BIOS activate(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:PollActivateStatus) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:PollClearStatus | [FSM:STAGE:FAILED|RETRY]: waiting for pending BIOS image activate to clear(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:PollClearStatus) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:PowerOff | [FSM:STAGE:FAILED|RETRY]: Power off the server(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:PowerOff) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:PowerOn | [FSM:STAGE:FAILED|RETRY]: power on the server(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:PowerOn) | warning
F17282 | fsmStFailComputePhysicalActivateBIOS:UpdateTokens | [FSM:STAGE:FAILED|RETRY]: updating BIOS tokens(FSM-STAGE:sam:dme:ComputePhysicalActivateBIOS:UpdateTokens) | warning
F77845 | fsmRmtErrEquipmentIOCardFePresence:CheckLicense | [FSM:STAGE:REMOTE-ERROR]: Checking license for chassis [chassisId] (iom [id])(FSM-STAGE:sam:dme:EquipmentIOCardFePresence:CheckLicense) | warning
F77845 | fsmRmtErrEquipmentIOCardFePresence:Identify | [FSM:STAGE:REMOTE-ERROR]: identifying IOM [chassisId]/[id](FSM-STAGE:sam:dme:EquipmentIOCardFePresence:Identify) | warning
F77846 | fsmRmtErrEquipmentIOCardFeConn:ConfigureEndPoint | [FSM:STAGE:REMOTE-ERROR]: configuring management identity to IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureEndPoint) | warning
F77846 | fsmRmtErrEquipmentIOCardFeConn:ConfigureSwMgmtEndPoint | [FSM:STAGE:REMOTE-ERROR]: configuring fabric interconnect [switchId] mgmt connectivity to IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureSwMgmtEndPoint) | warning
F77846 | fsmRmtErrEquipmentIOCardFeConn:ConfigureVifNs | [FSM:STAGE:REMOTE-ERROR]: configuring IOM [chassisId]/[id] ([side]) virtual name space(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:ConfigureVifNs) | warning
F77846 | fsmRmtErrEquipmentIOCardFeConn:DiscoverChassis | [FSM:STAGE:REMOTE-ERROR]: triggerring chassis discovery via IOM [chassisId]/[id]([side])(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:DiscoverChassis) | warning
F77846 | fsmRmtErrEquipmentIOCardFeConn:EnableChassis | [FSM:STAGE:REMOTE-ERROR]: enabling chassis [chassisId] on [side] side(FSM-STAGE:sam:dme:EquipmentIOCardFeConn:EnableChassis) | warning
F77847 | fsmRmtErrEquipmentChassisRemoveChassis:DisableEndPoint | [FSM:STAGE:REMOTE-ERROR]: unconfiguring access to chassis [id] (FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:DisableEndPoint) | warning
F77847 | fsmRmtErrEquipmentChassisRemoveChassis:UnIdentifyLocal | [FSM:STAGE:REMOTE-ERROR]: erasing chassis identity [id] from primary(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:UnIdentifyLocal) | warning
F77847 | fsmRmtErrEquipmentChassisRemoveChassis:UnIdentifyPeer | [FSM:STAGE:REMOTE-ERROR]: erasing chassis identity [id] from secondary(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:UnIdentifyPeer) | warning
F77847 | fsmRmtErrEquipmentChassisRemoveChassis:Wait | [FSM:STAGE:REMOTE-ERROR]: waiting for clean up of resources for chassis [id] (approx. 2 min)(FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:Wait) | warning
F77847 | fsmRmtErrEquipmentChassisRemoveChassis:decomission | [FSM:STAGE:REMOTE-ERROR]: decomissioning chassis [id](FSM-STAGE:sam:dme:EquipmentChassisRemoveChassis:decomission) | warning
F77848 | fsmRmtErrEquipmentLocatorLedSetLocatorLed:Execute | [FSM:STAGE:REMOTE-ERROR]: setting locator led to [adminState] (FSM-STAGE:sam:dme:EquipmentLocatorLedSetLocatorLed:Execute) | warning
F77958 | fsmRmtErrMgmtControllerExtMgmtIfConfig:Primary | [FSM:STAGE:REMOTE-ERROR]: external mgmt interface configuration on primary(FSM-STAGE:sam:dme:MgmtControllerExtMgmtIfConfig:Primary) | warning
F77958 | fsmRmtErrMgmtControllerExtMgmtIfConfig:Secondary | [FSM:STAGE:REMOTE-ERROR]: external mgmt interface configuration on secondary(FSM-STAGE:sam:dme:MgmtControllerExtMgmtIfConfig:Secondary) | warning
F77959 | fsmRmtErrFabricComputeSlotEpIdentify:ExecuteLocal | [FSM:STAGE:REMOTE-ERROR]: identifying a server in [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:FabricComputeSlotEpIdentify:ExecuteLocal) | warning
F77959 | fsmRmtErrFabricComputeSlotEpIdentify:ExecutePeer | [FSM:STAGE:REMOTE-ERROR]: identifying a server in [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:FabricComputeSlotEpIdentify:ExecutePeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BiosPostCompletion | [FSM:STAGE:REMOTE-ERROR]: Waiting for BIOS POST completion from CIMC on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BiosPostCompletion) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BladeBootPnuos | [FSM:STAGE:REMOTE-ERROR]: power server [chassisId]/[slotId] on with pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeBootPnuos) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BladeBootWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for system reset on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeBootWait) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BladePowerOn | [FSM:STAGE:REMOTE-ERROR]: power on server [chassisId]/[slotId] for discovery(FSM-STAGE:sam:dme:ComputeBladeDiscover:BladePowerOn) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BladeReadSmbios | [FSM:STAGE:REMOTE-ERROR]: Waiting for SMBIOS table from CIMC on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BladeReadSmbios) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcConfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: provisioning a bootable device with a bootable pre-boot image for server(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcConfigPnuOS) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcInventory | [FSM:STAGE:REMOTE-ERROR]: getting inventory of server [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcInventory) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcPreConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPreConfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcPreConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPreConfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcPresence | [FSM:STAGE:REMOTE-ERROR]: checking CIMC of server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcPresence) | warning
F77960 | fsmRmtErrComputeBladeDiscover:BmcShutdownDiscovered | [FSM:STAGE:REMOTE-ERROR]: Shutdown the server [chassisId]/[slotId]; deep discovery completed(FSM-STAGE:sam:dme:ComputeBladeDiscover:BmcShutdownDiscovered) | warning
F77960 | fsmRmtErrComputeBladeDiscover:ConfigFeLocal | [FSM:STAGE:REMOTE-ERROR]: configuring primary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigFeLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:ConfigFePeer | [FSM:STAGE:REMOTE-ERROR]: configuring secondary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigFePeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:ConfigUserAccess | [FSM:STAGE:REMOTE-ERROR]: configuring external user access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:ConfigUserAccess) | warning
F77960 | fsmRmtErrComputeBladeDiscover:HandlePooling | [FSM:STAGE:REMOTE-ERROR]: Invoke post-discovery policies on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:HandlePooling) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: configure primary adapter in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicConfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: configure secondary adapter in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicConfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicPresenceLocal | [FSM:STAGE:REMOTE-ERROR]: detect mezz cards in [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:NicPresenceLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicPresencePeer | [FSM:STAGE:REMOTE-ERROR]: detect mezz cards in [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:NicPresencePeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicUnconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicUnconfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:NicUnconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:NicUnconfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSCatalog | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot catalog to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSCatalog) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSIdent | [FSM:STAGE:REMOTE-ERROR]: Identify pre-boot environment agent on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSIdent) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSInventory | [FSM:STAGE:REMOTE-ERROR]: Perform inventory of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSInventory) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSPolicy | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot environment behavior policy to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSPolicy) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSScrub | [FSM:STAGE:REMOTE-ERROR]: Scrub server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSScrub) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PnuOSSelfTest | [FSM:STAGE:REMOTE-ERROR]: Trigger self-test of server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:PnuOSSelfTest) | warning
F77960 | fsmRmtErrComputeBladeDiscover:PreSanitize | [FSM:STAGE:REMOTE-ERROR]: Preparing to check hardware configuration server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:PreSanitize) | warning
F77960 | fsmRmtErrComputeBladeDiscover:Sanitize | [FSM:STAGE:REMOTE-ERROR]: Checking hardware configuration server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:Sanitize) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SetupVmediaLocal | [FSM:STAGE:REMOTE-ERROR]: provisioning a Virtual Media device with a bootable pre-boot image for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SetupVmediaLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SetupVmediaPeer | [FSM:STAGE:REMOTE-ERROR]: provisioning a Virtual Media device with a bootable pre-boot image for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SetupVmediaPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SolRedirectDisable | [FSM:STAGE:REMOTE-ERROR]: Disable Sol Redirection on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:SolRedirectDisable) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SolRedirectEnable | [FSM:STAGE:REMOTE-ERROR]: set up bios token on server [chassisId]/[slotId] for Sol redirect(FSM-STAGE:sam:dme:ComputeBladeDiscover:SolRedirectEnable) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SwConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: configure primary fabric interconnect in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwConfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SwConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: configure secondary fabric interconnect in [chassisId]/[slotId] for pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwConfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SwUnconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure primary fabric interconnect for server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwUnconfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:SwUnconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure secondary fabric interconnect for server [chassisId]/[slotId] pre-boot environment(FSM-STAGE:sam:dme:ComputeBladeDiscover:SwUnconfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:TeardownVmediaLocal | [FSM:STAGE:REMOTE-ERROR]: unprovisioning the Virtual Media bootable device for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:TeardownVmediaLocal) | warning
F77960 | fsmRmtErrComputeBladeDiscover:TeardownVmediaPeer | [FSM:STAGE:REMOTE-ERROR]: unprovisioning the Virtual media bootable device for blade [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiscover:TeardownVmediaPeer) | warning
F77960 | fsmRmtErrComputeBladeDiscover:hagConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent on server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:hagConnect) | warning
F77960 | fsmRmtErrComputeBladeDiscover:hagDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent for server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:hagDisconnect) | warning
F77960 | fsmRmtErrComputeBladeDiscover:serialDebugConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent on server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:serialDebugConnect) | warning
F77960 | fsmRmtErrComputeBladeDiscover:serialDebugDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent for server [chassisId]/[slotId] (FSM-STAGE:sam:dme:ComputeBladeDiscover:serialDebugDisconnect) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BiosPostCompletion | [FSM:STAGE:REMOTE-ERROR]: Waiting for BIOS POST completion from CIMC on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BiosPostCompletion) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcConfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: provisioning a bootable device with a bootable pre-boot image for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigPnuOS) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcConfigureConnLocal | [FSM:STAGE:REMOTE-ERROR]: Configuring connectivity on CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigureConnLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcConfigureConnPeer | [FSM:STAGE:REMOTE-ERROR]: Configuring connectivity on CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcConfigureConnPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcInventory | [FSM:STAGE:REMOTE-ERROR]: getting inventory of server [id] via CIMC(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcInventory) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcPreconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPreconfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcPreconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPreconfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcPresence | [FSM:STAGE:REMOTE-ERROR]: checking CIMC of server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcPresence) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcShutdownDiscovered | [FSM:STAGE:REMOTE-ERROR]: Shutdown the server [id]; deep discovery completed(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcShutdownDiscovered) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BmcUnconfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: unprovisioning the bootable device for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BmcUnconfigPnuOS) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BootPnuos | [FSM:STAGE:REMOTE-ERROR]: power server [id] on with pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BootPnuos) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:BootWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for system reset on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:BootWait) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:ConfigDiscoveryMode | [FSM:STAGE:REMOTE-ERROR]: setting adapter mode to discovery for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigDiscoveryMode) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:ConfigNivMode | [FSM:STAGE:REMOTE-ERROR]: setting adapter mode to NIV for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigNivMode) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:ConfigUserAccess | [FSM:STAGE:REMOTE-ERROR]: configuring external user access to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ConfigUserAccess) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:HandlePooling | [FSM:STAGE:REMOTE-ERROR]: Invoke post-discovery policies on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:HandlePooling) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:NicInventoryLocal | [FSM:STAGE:REMOTE-ERROR]: detect and get mezz cards information from [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:NicInventoryLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:NicInventoryPeer | [FSM:STAGE:REMOTE-ERROR]: detect and get mezz cards information from [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:NicInventoryPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSCatalog | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot catalog to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSCatalog) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSConnStatus | [FSM:STAGE:REMOTE-ERROR]: Explore connectivity of server [id] in pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSConnStatus) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSConnectivity | [FSM:STAGE:REMOTE-ERROR]: Explore connectivity of server [id] in pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSConnectivity) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSIdent | [FSM:STAGE:REMOTE-ERROR]: Identify pre-boot environment agent on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSIdent) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSInventory | [FSM:STAGE:REMOTE-ERROR]: Perform inventory of server [id] pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSInventory) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSPolicy | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot environment behavior policy to server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSPolicy) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSScrub | [FSM:STAGE:REMOTE-ERROR]: Scrub server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSScrub) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PnuOSSelfTest | [FSM:STAGE:REMOTE-ERROR]: Trigger self-test of server [id] pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PnuOSSelfTest) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:PreSanitize | [FSM:STAGE:REMOTE-ERROR]: Preparing to check hardware configuration server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:PreSanitize) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:ReadSmbios | [FSM:STAGE:REMOTE-ERROR]: Waiting for SMBIOS table from CIMC on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:ReadSmbios) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:Sanitize | [FSM:STAGE:REMOTE-ERROR]: Checking hardware configuration server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:Sanitize) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SolRedirectDisable | [FSM:STAGE:REMOTE-ERROR]: Disable Sol Redirection on server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SolRedirectDisable) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SolRedirectEnable | [FSM:STAGE:REMOTE-ERROR]: set up bios token on server [id] for Sol redirect(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SolRedirectEnable) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: configure primary fabric interconnect for pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigPnuOSLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: configure secondary fabric interconnect for pre-boot environment(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigPnuOSPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigPortNivLocal | [FSM:STAGE:REMOTE-ERROR]: configuring primary fabric interconnect access to server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigPortNivLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigPortNivPeer | [FSM:STAGE:REMOTE-ERROR]: configuring secondary fabric interconnect access to server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigPortNivPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigureConnLocal | [FSM:STAGE:REMOTE-ERROR]: Configuring fabric-interconnect connectivity to CIMC of server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigureConnLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwConfigureConnPeer | [FSM:STAGE:REMOTE-ERROR]: Configuring fabric-interconnect connectivity to CIMC of server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwConfigureConnPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwPnuOSConnectivityLocal | [FSM:STAGE:REMOTE-ERROR]: determine connectivity of server [id] to fabric(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwPnuOSConnectivityLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwPnuOSConnectivityPeer | [FSM:STAGE:REMOTE-ERROR]: determine connectivity of server [id] to fabric(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwPnuOSConnectivityPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwUnconfigPortNivLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfiguring primary fabric interconnect access to server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwUnconfigPortNivLocal) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:SwUnconfigPortNivPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfiguring secondary fabric interconnect access to server [id] (FSM-STAGE:sam:dme:ComputeRackUnitDiscover:SwUnconfigPortNivPeer) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:hagConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:hagConnect) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:hagDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:hagDisconnect) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:serialDebugConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent on server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:serialDebugConnect) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:serialDebugDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent for server [id](FSM-STAGE:sam:dme:ComputeRackUnitDiscover:serialDebugDisconnect) | warning
F77960 | fsmRmtErrComputeRackUnitDiscover:waitForConnReady | [FSM:STAGE:REMOTE-ERROR]: wait for connection to be established(FSM-STAGE:sam:dme:ComputeRackUnitDiscover:waitForConnReady) | warning
F77973 | fsmRmtErrEquipmentChassisPsuPolicyConfig:Execute | [FSM:STAGE:REMOTE-ERROR]: Deploying Power Management policy changes on chassis [id](FSM-STAGE:sam:dme:EquipmentChassisPsuPolicyConfig:Execute) | warning
F77974 | fsmRmtErrAdaptorHostFcIfResetFcPersBinding:ExecuteLocal | [FSM:STAGE:REMOTE-ERROR]: Resetting FC persistent bindings on host interface [dn](FSM-STAGE:sam:dme:AdaptorHostFcIfResetFcPersBinding:ExecuteLocal) | warning
F77974 | fsmRmtErrAdaptorHostFcIfResetFcPersBinding:ExecutePeer | [FSM:STAGE:REMOTE-ERROR]: Resetting FC persistent bindings on host interface [dn](FSM-STAGE:sam:dme:AdaptorHostFcIfResetFcPersBinding:ExecutePeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:BiosPostCompletion | [FSM:STAGE:REMOTE-ERROR]: Waiting for BIOS POST completion from CIMC on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:BiosPostCompletion) | warning
F77975 | fsmRmtErrComputeBladeDiag:BladeBoot | [FSM:STAGE:REMOTE-ERROR]: Power-on server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:BladeBoot) | warning
F77975 | fsmRmtErrComputeBladeDiag:BladeBootWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for system reset on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:BladeBootWait) | warning
F77975 | fsmRmtErrComputeBladeDiag:BladePowerOn | [FSM:STAGE:REMOTE-ERROR]: Power on server [chassisId]/[slotId] for diagnostics(FSM-STAGE:sam:dme:ComputeBladeDiag:BladePowerOn) | warning
F77975 | fsmRmtErrComputeBladeDiag:BladeReadSmbios | [FSM:STAGE:REMOTE-ERROR]: Read SMBIOS tables on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:BladeReadSmbios) | warning
F77975 | fsmRmtErrComputeBladeDiag:BmcConfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: provisioning a bootable device with a bootable pre-boot image for server(FSM-STAGE:sam:dme:ComputeBladeDiag:BmcConfigPnuOS) | warning
F77975 | fsmRmtErrComputeBladeDiag:BmcInventory | [FSM:STAGE:REMOTE-ERROR]: Getting inventory of server [chassisId]/[slotId] via CIMC(FSM-STAGE:sam:dme:ComputeBladeDiag:BmcInventory) | warning
F77975 | fsmRmtErrComputeBladeDiag:BmcPresence | [FSM:STAGE:REMOTE-ERROR]: Checking CIMC of server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:BmcPresence) | warning
F77975 | fsmRmtErrComputeBladeDiag:BmcShutdownDiagCompleted | [FSM:STAGE:REMOTE-ERROR]: Shutdown server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:BmcShutdownDiagCompleted) | warning
F77975 | fsmRmtErrComputeBladeDiag:CleanupServerConnSwA | [FSM:STAGE:REMOTE-ERROR]: Cleaning up server [chassisId]/[slotId] interface on fabric A(FSM-STAGE:sam:dme:ComputeBladeDiag:CleanupServerConnSwA) | warning
F77975 | fsmRmtErrComputeBladeDiag:CleanupServerConnSwB | [FSM:STAGE:REMOTE-ERROR]: Cleaning up server [chassisId]/[slotId] interface on fabric B(FSM-STAGE:sam:dme:ComputeBladeDiag:CleanupServerConnSwB) | warning
F77975 | fsmRmtErrComputeBladeDiag:ConfigFeLocal | [FSM:STAGE:REMOTE-ERROR]: Configuring primary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:ConfigFeLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:ConfigFePeer | [FSM:STAGE:REMOTE-ERROR]: Configuring secondary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:ConfigFePeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:ConfigUserAccess | [FSM:STAGE:REMOTE-ERROR]: Configuring external user access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:ConfigUserAccess) | warning
F77975 | fsmRmtErrComputeBladeDiag:DebugWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for debugging for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:DebugWait) | warning
F77975 | fsmRmtErrComputeBladeDiag:DeriveConfig | [FSM:STAGE:REMOTE-ERROR]: Derive diag config for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:DeriveConfig) | warning
F77975 | fsmRmtErrComputeBladeDiag:DisableServerConnSwA | [FSM:STAGE:REMOTE-ERROR]: Disable server [chassisId]/[slotId] interface on fabric A after completion of network traffic tests on fabric A(FSM-STAGE:sam:dme:ComputeBladeDiag:DisableServerConnSwA) | warning
F77975 | fsmRmtErrComputeBladeDiag:DisableServerConnSwB | [FSM:STAGE:REMOTE-ERROR]: Disable server [chassisId]/[slotId] connectivity on fabric B in preparation for network traffic tests on fabric A(FSM-STAGE:sam:dme:ComputeBladeDiag:DisableServerConnSwB) | warning
F77975 | fsmRmtErrComputeBladeDiag:EnableServerConnSwA | [FSM:STAGE:REMOTE-ERROR]: Enable server [chassisId]/[slotId] connectivity on fabric A in preparation for network traffic tests on fabric A(FSM-STAGE:sam:dme:ComputeBladeDiag:EnableServerConnSwA) | warning
F77975 | fsmRmtErrComputeBladeDiag:EnableServerConnSwB | [FSM:STAGE:REMOTE-ERROR]: Enable server [chassisId]/[slotId] connectivity on fabric B in preparation for network traffic tests on fabric B(FSM-STAGE:sam:dme:ComputeBladeDiag:EnableServerConnSwB) | warning
F77975 | fsmRmtErrComputeBladeDiag:EvaluateStatus | [FSM:STAGE:REMOTE-ERROR]: Evaluating status; diagnostics completed(FSM-STAGE:sam:dme:ComputeBladeDiag:EvaluateStatus) | warning
F77975 | fsmRmtErrComputeBladeDiag:FabricATrafficTestStatus | [FSM:STAGE:REMOTE-ERROR]: Gather status of network traffic tests on fabric A for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:FabricATrafficTestStatus) | warning
F77975 | fsmRmtErrComputeBladeDiag:FabricBTrafficTestStatus | [FSM:STAGE:REMOTE-ERROR]: Gather status of network tests on fabric B for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:FabricBTrafficTestStatus) | warning
F77975 | fsmRmtErrComputeBladeDiag:GenerateLogWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for collection of diagnostic logs from server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:GenerateLogWait) | warning
F77975 | fsmRmtErrComputeBladeDiag:GenerateReport | [FSM:STAGE:REMOTE-ERROR]: Generating report for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:GenerateReport) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostCatalog | [FSM:STAGE:REMOTE-ERROR]: Populate diagnostics catalog to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostCatalog) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to diagnostics environment agent on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostConnect) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect diagnostics environment agent for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostDisconnect) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostIdent | [FSM:STAGE:REMOTE-ERROR]: Identify diagnostics environment agent on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostIdent) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostInventory | [FSM:STAGE:REMOTE-ERROR]: Perform inventory of server [chassisId]/[slotId] in diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:HostInventory) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostPolicy | [FSM:STAGE:REMOTE-ERROR]: Populate diagnostics environment behavior policy to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostPolicy) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostServerDiag | [FSM:STAGE:REMOTE-ERROR]: Trigger diagnostics on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostServerDiag) | warning
F77975 | fsmRmtErrComputeBladeDiag:HostServerDiagStatus | [FSM:STAGE:REMOTE-ERROR]: Diagnostics status on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:HostServerDiagStatus) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicConfigLocal | [FSM:STAGE:REMOTE-ERROR]: Configure adapter in server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:NicConfigLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicConfigPeer | [FSM:STAGE:REMOTE-ERROR]: Configure adapter in server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:NicConfigPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicInventoryLocal | [FSM:STAGE:REMOTE-ERROR]: Retrieve adapter inventory in server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:NicInventoryLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicInventoryPeer | [FSM:STAGE:REMOTE-ERROR]: Retrieve adapter inventory in server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:NicInventoryPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicPresenceLocal | [FSM:STAGE:REMOTE-ERROR]: Detect adapter in server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:NicPresenceLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicPresencePeer | [FSM:STAGE:REMOTE-ERROR]: Detect adapter in server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:NicPresencePeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicUnconfigLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server [chassisId]/[slotId] diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:NicUnconfigLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:NicUnconfigPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server [chassisId]/[slotId] diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:NicUnconfigPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:RemoveConfig | [FSM:STAGE:REMOTE-ERROR]: Derive diag config for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:RemoveConfig) | warning
F77975 | fsmRmtErrComputeBladeDiag:RemoveVMediaLocal | [FSM:STAGE:REMOTE-ERROR]: Remove VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:RemoveVMediaLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:RemoveVMediaPeer | [FSM:STAGE:REMOTE-ERROR]: Remove VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:RemoveVMediaPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:RestoreConfigFeLocal | [FSM:STAGE:REMOTE-ERROR]: Reconfiguring primary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:RestoreConfigFeLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:RestoreConfigFePeer | [FSM:STAGE:REMOTE-ERROR]: Reconfiguring secondary fabric interconnect access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:RestoreConfigFePeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:SetDiagUser | [FSM:STAGE:REMOTE-ERROR]: Populate diagnostics environment with a user account to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:SetDiagUser) | warning
F77975 | fsmRmtErrComputeBladeDiag:SetupVMediaLocal | [FSM:STAGE:REMOTE-ERROR]: Setup VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SetupVMediaLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:SetupVMediaPeer | [FSM:STAGE:REMOTE-ERROR]: Setup VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SetupVMediaPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:SolRedirectDisable | [FSM:STAGE:REMOTE-ERROR]: Disable Sol Redirection on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:SolRedirectDisable) | warning
F77975 | fsmRmtErrComputeBladeDiag:SolRedirectEnable | [FSM:STAGE:REMOTE-ERROR]: set up bios token on server [chassisId]/[slotId] for Sol redirect(FSM-STAGE:sam:dme:ComputeBladeDiag:SolRedirectEnable) | warning
F77975 | fsmRmtErrComputeBladeDiag:StartFabricATrafficTest | [FSM:STAGE:REMOTE-ERROR]: Trigger network traffic tests on fabric A on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:StartFabricATrafficTest) | warning
F77975 | fsmRmtErrComputeBladeDiag:StartFabricBTrafficTest | [FSM:STAGE:REMOTE-ERROR]: Trigger network tests on fabric B for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:StartFabricBTrafficTest) | warning
F77975 | fsmRmtErrComputeBladeDiag:StopVMediaLocal | [FSM:STAGE:REMOTE-ERROR]: Stop VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:StopVMediaLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:StopVMediaPeer | [FSM:STAGE:REMOTE-ERROR]: Stop VMedia for server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:StopVMediaPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:SwConfigLocal | [FSM:STAGE:REMOTE-ERROR]: Configure primary fabric interconnect in server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SwConfigLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:SwConfigPeer | [FSM:STAGE:REMOTE-ERROR]: Configure secondary fabric interconnect in server [chassisId]/[slotId] for diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SwConfigPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:SwUnconfigLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure primary fabric interconnect for server [chassisId]/[slotId] in diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SwUnconfigLocal) | warning
F77975 | fsmRmtErrComputeBladeDiag:SwUnconfigPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure secondary fabric interconnect for server [chassisId]/[slotId] in diagnostics environment(FSM-STAGE:sam:dme:ComputeBladeDiag:SwUnconfigPeer) | warning
F77975 | fsmRmtErrComputeBladeDiag:UnconfigUserAccess | [FSM:STAGE:REMOTE-ERROR]: Unconfigure external user access to server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:UnconfigUserAccess) | warning
F77975 | fsmRmtErrComputeBladeDiag:serialDebugConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:serialDebugConnect) | warning
F77975 | fsmRmtErrComputeBladeDiag:serialDebugDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent for server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeDiag:serialDebugDisconnect) | warning
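Note: every fsmRmtErr... message above follows the same pattern: a readable description of the failed step followed by the failing stage in the form (FSM-STAGE:sam:dme:<FSM>:<stage>). As an illustration only (not part of the fault reference), the sketch below shows one way faults with the codes documented above could be pulled from UCS Manager for monitoring. It assumes the ucsmsdk Python SDK is installed; the hostname and credentials are placeholders, not values from this document.

    # Minimal sketch: list active faults whose codes appear in the
    # ComputeRackUnitDiscover / ComputeBladeDiag groups above.
    # "ucsm.example.com", "admin" and "password" are placeholder values.
    from ucsmsdk.ucshandle import UcsHandle

    CODES_OF_INTEREST = {"F77960", "F77975"}  # fault codes documented above

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()
    try:
        for fault in handle.query_classid("faultInst"):  # all fault instances
            if fault.code in CODES_OF_INTEREST:
                print(fault.code, fault.severity, fault.dn)
                print("    ", fault.descr)
    finally:
        handle.logout()

Filtering client-side on fault.code keeps the sketch independent of any server-side filter syntax.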
F77979 | fsmRmtErrFabricLanCloudSwitchMode:SwConfigLocal | [FSM:STAGE:REMOTE-ERROR]: (FSM-STAGE:sam:dme:FabricLanCloudSwitchMode:SwConfigLocal) | warning
F77979 | fsmRmtErrFabricLanCloudSwitchMode:SwConfigPeer | [FSM:STAGE:REMOTE-ERROR]: Fabric interconnect mode configuration to primary(FSM-STAGE:sam:dme:FabricLanCloudSwitchMode:SwConfigPeer) | warning
F77979 | fsmRmtErrFabricSanCloudSwitchMode:SwConfigLocal | [FSM:STAGE:REMOTE-ERROR]: (FSM-STAGE:sam:dme:FabricSanCloudSwitchMode:SwConfigLocal) | warning
F77979 | fsmRmtErrFabricSanCloudSwitchMode:SwConfigPeer | [FSM:STAGE:REMOTE-ERROR]: Fabric interconnect FC mode configuration to primary(FSM-STAGE:sam:dme:FabricSanCloudSwitchMode:SwConfigPeer) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpSettings | [FSM:STAGE:REMOTE-ERROR]: propogate updated settings (eg. timezone)(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpSettings) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsLocal | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to management controllers.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsLocal) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsPeer | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to management controllers.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsPeer) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToAdaptorsLocal | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to NICs.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToAdaptorsLocal) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToAdaptorsPeer | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to NICs.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToAdaptorsPeer) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToFexIomLocal | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to FEXs and IOMs.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToFexIomLocal) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToFexIomPeer | [FSM:STAGE:REMOTE-ERROR]: propogate updated timezone settings to FEXs and IOMs.(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:PropogateEpTimeZoneSettingsToFexIomPeer) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:SetEpLocal | [FSM:STAGE:REMOTE-ERROR]: communication service [name] configuration to primary(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:SetEpLocal) | warning
F78016 | fsmRmtErrCommSvcEpUpdateSvcEp:SetEpPeer | [FSM:STAGE:REMOTE-ERROR]: communication service [name] configuration to secondary(FSM-STAGE:sam:dme:CommSvcEpUpdateSvcEp:SetEpPeer) | warning
F78017 | fsmRmtErrCommSvcEpRestartWebSvc:local | [FSM:STAGE:REMOTE-ERROR]: restart web services in primary(FSM-STAGE:sam:dme:CommSvcEpRestartWebSvc:local) | warning
F78017 | fsmRmtErrCommSvcEpRestartWebSvc:peer | [FSM:STAGE:REMOTE-ERROR]: restart web services in secondary(FSM-STAGE:sam:dme:CommSvcEpRestartWebSvc:peer) | warning
F78019 | fsmRmtErrAaaEpUpdateEp:SetEpLocal | [FSM:STAGE:REMOTE-ERROR]: external aaa server configuration to primary(FSM-STAGE:sam:dme:AaaEpUpdateEp:SetEpLocal) | warning
F78019 | fsmRmtErrAaaEpUpdateEp:SetEpPeer | [FSM:STAGE:REMOTE-ERROR]: external aaa server configuration to secondary(FSM-STAGE:sam:dme:AaaEpUpdateEp:SetEpPeer) | warning
F78019 | fsmRmtErrPkiEpUpdateEp:SetKeyRingLocal | [FSM:STAGE:REMOTE-ERROR]: keyring configuration on primary(FSM-STAGE:sam:dme:PkiEpUpdateEp:SetKeyRingLocal) | warning
F78019 | fsmRmtErrPkiEpUpdateEp:SetKeyRingPeer | [FSM:STAGE:REMOTE-ERROR]: keyring configuration on secondary(FSM-STAGE:sam:dme:PkiEpUpdateEp:SetKeyRingPeer) | warning
F78019 | fsmRmtErrStatsCollectionPolicyUpdateEp:SetEpA | [FSM:STAGE:REMOTE-ERROR]: Update endpoint on fabric interconnect A(FSM-STAGE:sam:dme:StatsCollectionPolicyUpdateEp:SetEpA) | warning
F78019 | fsmRmtErrStatsCollectionPolicyUpdateEp:SetEpB | [FSM:STAGE:REMOTE-ERROR]: Update endpoint on fabric interconnect B(FSM-STAGE:sam:dme:StatsCollectionPolicyUpdateEp:SetEpB) | warning
F78020 | fsmRmtErrAaaRealmUpdateRealm:SetRealmLocal | [FSM:STAGE:REMOTE-ERROR]: realm configuration to primary(FSM-STAGE:sam:dme:AaaRealmUpdateRealm:SetRealmLocal) | warning
F78020 | fsmRmtErrAaaRealmUpdateRealm:SetRealmPeer | [FSM:STAGE:REMOTE-ERROR]: realm configuration to secondary(FSM-STAGE:sam:dme:AaaRealmUpdateRealm:SetRealmPeer) | warning
F78021 | fsmRmtErrAaaUserEpUpdateUserEp:SetUserLocal | [FSM:STAGE:REMOTE-ERROR]: user configuration to primary(FSM-STAGE:sam:dme:AaaUserEpUpdateUserEp:SetUserLocal) | warning
F78021 | fsmRmtErrAaaUserEpUpdateUserEp:SetUserPeer | [FSM:STAGE:REMOTE-ERROR]: user configuration to secondary(FSM-STAGE:sam:dme:AaaUserEpUpdateUserEp:SetUserPeer) | warning
F78040 | fsmRmtErrSysfileMutationSingle:Execute | [FSM:STAGE:REMOTE-ERROR]: [action] file [name](FSM-STAGE:sam:dme:SysfileMutationSingle:Execute) | warning
F78041 | fsmRmtErrSysfileMutationGlobal:Local | [FSM:STAGE:REMOTE-ERROR]: remove files from local(FSM-STAGE:sam:dme:SysfileMutationGlobal:Local) | warning
F78041 | fsmRmtErrSysfileMutationGlobal:Peer | [FSM:STAGE:REMOTE-ERROR]: remove files from peer(FSM-STAGE:sam:dme:SysfileMutationGlobal:Peer) | warning
F78044 | fsmRmtErrSysdebugManualCoreFileExportTargetExport:Execute | [FSM:STAGE:REMOTE-ERROR]: export core file [name] to [hostname](FSM-STAGE:sam:dme:SysdebugManualCoreFileExportTargetExport:Execute) | warning
F78045 | fsmRmtErrFabricEpMgrConfigure:ApplyConfig | [FSM:STAGE:REMOTE-ERROR]: Apply switch configuration(FSM-STAGE:sam:dme:FabricEpMgrConfigure:ApplyConfig) | warning
F78045 | fsmRmtErrFabricEpMgrConfigure:ApplyPhysical | [FSM:STAGE:REMOTE-ERROR]: Applying physical configuration(FSM-STAGE:sam:dme:FabricEpMgrConfigure:ApplyPhysical) | warning
F78045 | fsmRmtErrFabricEpMgrConfigure:ValidateConfiguration | [FSM:STAGE:REMOTE-ERROR]: Validating logical configuration(FSM-STAGE:sam:dme:FabricEpMgrConfigure:ValidateConfiguration) | warning
F78045 | fsmRmtErrFabricEpMgrConfigure:WaitOnPhys | [FSM:STAGE:REMOTE-ERROR]: Waiting on physical change application(FSM-STAGE:sam:dme:FabricEpMgrConfigure:WaitOnPhys) | warning
F78045 | fsmRmtErrLsServerConfigure:AnalyzeImpact | [FSM:STAGE:REMOTE-ERROR]: Analyzing changes impact(FSM-STAGE:sam:dme:LsServerConfigure:AnalyzeImpact) | warning
F78045 | fsmRmtErrLsServerConfigure:ApplyConfig | [FSM:STAGE:REMOTE-ERROR]: Applying config to server [pnDn](FSM-STAGE:sam:dme:LsServerConfigure:ApplyConfig) | warning
F78045 | fsmRmtErrLsServerConfigure:ApplyIdentifiers | [FSM:STAGE:REMOTE-ERROR]: Resolving and applying identifiers(FSM-STAGE:sam:dme:LsServerConfigure:ApplyIdentifiers) | warning
F78045 | fsmRmtErrLsServerConfigure:ApplyPolicies | [FSM:STAGE:REMOTE-ERROR]: Resolving and applying policies(FSM-STAGE:sam:dme:LsServerConfigure:ApplyPolicies) | warning
F78045 | fsmRmtErrLsServerConfigure:ApplyTemplate | [FSM:STAGE:REMOTE-ERROR]: Applying configuration template [srcTemplName](FSM-STAGE:sam:dme:LsServerConfigure:ApplyTemplate) | warning
F78045 | fsmRmtErrLsServerConfigure:EvaluateAssociation | [FSM:STAGE:REMOTE-ERROR]: Evaluate association with server [pnDn](FSM-STAGE:sam:dme:LsServerConfigure:EvaluateAssociation) | warning
F78045 | fsmRmtErrLsServerConfigure:ResolveBootConfig | [FSM:STAGE:REMOTE-ERROR]: Computing binding changes(FSM-STAGE:sam:dme:LsServerConfigure:ResolveBootConfig) | warning
F78045 | fsmRmtErrLsServerConfigure:WaitForMaintPermission | [FSM:STAGE:REMOTE-ERROR]: Waiting for ack or maint window(FSM-STAGE:sam:dme:LsServerConfigure:WaitForMaintPermission) | warning
F78045 | fsmRmtErrLsServerConfigure:WaitForMaintWindow | [FSM:STAGE:REMOTE-ERROR]: Waiting for maintenance window(FSM-STAGE:sam:dme:LsServerConfigure:WaitForMaintWindow) | warning
F78045 | fsmRmtErrSysdebugAutoCoreFileExportTargetConfigure:Local | [FSM:STAGE:REMOTE-ERROR]: configuring automatic core file export service on local(FSM-STAGE:sam:dme:SysdebugAutoCoreFileExportTargetConfigure:Local) | warning
F78045 | fsmRmtErrSysdebugAutoCoreFileExportTargetConfigure:Peer | [FSM:STAGE:REMOTE-ERROR]: configuring automatic core file export service on peer(FSM-STAGE:sam:dme:SysdebugAutoCoreFileExportTargetConfigure:Peer) | warning
F78046 | fsmRmtErrSysdebugLogControlEpLogControlPersist:Local | [FSM:STAGE:REMOTE-ERROR]: persisting LogControl on local(FSM-STAGE:sam:dme:SysdebugLogControlEpLogControlPersist:Local) | warning
F78046 | fsmRmtErrSysdebugLogControlEpLogControlPersist:Peer | [FSM:STAGE:REMOTE-ERROR]: persisting LogControl on peer(FSM-STAGE:sam:dme:SysdebugLogControlEpLogControlPersist:Peer) | warning
F78074 | fsmRmtErrEpqosDefinitionDeploy:Local | [FSM:STAGE:REMOTE-ERROR]: vnic qos policy [name] configuration to primary(FSM-STAGE:sam:dme:EpqosDefinitionDeploy:Local) | warning
F78074 | fsmRmtErrEpqosDefinitionDeploy:Peer | [FSM:STAGE:REMOTE-ERROR]: vnic qos policy [name] configuration to secondary(FSM-STAGE:sam:dme:EpqosDefinitionDeploy:Peer) | warning
F78074 | fsmRmtErrSwAccessDomainDeploy:UpdateConnectivity | [FSM:STAGE:REMOTE-ERROR]: internal network configuration on [switchId](FSM-STAGE:sam:dme:SwAccessDomainDeploy:UpdateConnectivity) | warning
F78074 | fsmRmtErrSwEthLanBorderDeploy:UpdateConnectivity | [FSM:STAGE:REMOTE-ERROR]: Uplink eth port configuration on [switchId](FSM-STAGE:sam:dme:SwEthLanBorderDeploy:UpdateConnectivity) | warning
F78074 | fsmRmtErrSwEthMonDeploy:UpdateEthMon | [FSM:STAGE:REMOTE-ERROR]: Ethernet traffic monitor (SPAN)configuration on [switchId](FSM-STAGE:sam:dme:SwEthMonDeploy:UpdateEthMon) | warning
F78074 | fsmRmtErrSwFcMonDeploy:UpdateFcMon | [FSM:STAGE:REMOTE-ERROR]: FC traffic monitor (SPAN)configuration on [switchId](FSM-STAGE:sam:dme:SwFcMonDeploy:UpdateFcMon) | warning
F78074 | fsmRmtErrSwFcSanBorderDeploy:UpdateConnectivity | [FSM:STAGE:REMOTE-ERROR]: Uplink fc port configuration on [switchId](FSM-STAGE:sam:dme:SwFcSanBorderDeploy:UpdateConnectivity) | warning
F78074 | fsmRmtErrSwUtilityDomainDeploy:UpdateConnectivity | [FSM:STAGE:REMOTE-ERROR]: Utility network configuration on [switchId](FSM-STAGE:sam:dme:SwUtilityDomainDeploy:UpdateConnectivity) | warning
F78074 | fsmRmtErrVnicProfileSetDeploy:Local | [FSM:STAGE:REMOTE-ERROR]: VNIC profile configuration on local fabric(FSM-STAGE:sam:dme:VnicProfileSetDeploy:Local) | warning
F78074 | fsmRmtErrVnicProfileSetDeploy:Peer | [FSM:STAGE:REMOTE-ERROR]: VNIC profile configuration on peer fabric(FSM-STAGE:sam:dme:VnicProfileSetDeploy:Peer) | warning
F78081 | fsmRmtErrSyntheticFsObjCreate:createLocal | [FSM:STAGE:REMOTE-ERROR]: create on primary(FSM-STAGE:sam:dme:SyntheticFsObjCreate:createLocal) | warning
F78081 | fsmRmtErrSyntheticFsObjCreate:createRemote | [FSM:STAGE:REMOTE-ERROR]: create on secondary(FSM-STAGE:sam:dme:SyntheticFsObjCreate:createRemote) | warning
F78091 | fsmRmtErrFirmwareDistributableDelete:Local | [FSM:STAGE:REMOTE-ERROR]: deleting package [name] from primary(FSM-STAGE:sam:dme:FirmwareDistributableDelete:Local) | warning
F78091 | fsmRmtErrFirmwareDistributableDelete:Remote | [FSM:STAGE:REMOTE-ERROR]: deleting package [name] from secondary(FSM-STAGE:sam:dme:FirmwareDistributableDelete:Remote) | warning
F78091 | fsmRmtErrFirmwareImageDelete:Local | [FSM:STAGE:REMOTE-ERROR]: deleting image file [name] ([invTag]) from primary(FSM-STAGE:sam:dme:FirmwareImageDelete:Local) | warning
F78091 | fsmRmtErrFirmwareImageDelete:Remote | [FSM:STAGE:REMOTE-ERROR]: deleting image file [name] ([invTag]) from secondary(FSM-STAGE:sam:dme:FirmwareImageDelete:Remote) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:resetLocal | [FSM:STAGE:REMOTE-ERROR]: rebooting local fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:resetLocal) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:resetRemote | [FSM:STAGE:REMOTE-ERROR]: rebooting remote fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:resetRemote) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:updateLocal | [FSM:STAGE:REMOTE-ERROR]: updating local fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:updateLocal) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:updateRemote | [FSM:STAGE:REMOTE-ERROR]: updating peer fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:updateRemote) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:verifyLocal | [FSM:STAGE:REMOTE-ERROR]: verifying boot variables for local fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:verifyLocal) | warning
F78093 | fsmRmtErrMgmtControllerUpdateSwitch:verifyRemote | [FSM:STAGE:REMOTE-ERROR]: verifying boot variables for remote fabric interconnect(FSM-STAGE:sam:dme:MgmtControllerUpdateSwitch:verifyRemote) | warning
F78094 | fsmRmtErrMgmtControllerUpdateIOM:PollUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: waiting for IOM update(FSM-STAGE:sam:dme:MgmtControllerUpdateIOM:PollUpdateStatus) | warning
F78094 | fsmRmtErrMgmtControllerUpdateIOM:UpdateRequest | [FSM:STAGE:REMOTE-ERROR]: sending update request to IOM(FSM-STAGE:sam:dme:MgmtControllerUpdateIOM:UpdateRequest) | warning
F78095 | fsmRmtErrMgmtControllerActivateIOM:Activate | [FSM:STAGE:REMOTE-ERROR]: activating backup image of IOM(FSM-STAGE:sam:dme:MgmtControllerActivateIOM:Activate) | warning
F78095 | fsmRmtErrMgmtControllerActivateIOM:Reset | [FSM:STAGE:REMOTE-ERROR]: Resetting IOM to boot the activated version(FSM-STAGE:sam:dme:MgmtControllerActivateIOM:Reset) | warning
F78096 | fsmRmtErrMgmtControllerUpdateBMC:PollUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: waiting for update to complete(FSM-STAGE:sam:dme:MgmtControllerUpdateBMC:PollUpdateStatus) | warning
F78096 | fsmRmtErrMgmtControllerUpdateBMC:UpdateRequest | [FSM:STAGE:REMOTE-ERROR]: sending update request to CIMC(FSM-STAGE:sam:dme:MgmtControllerUpdateBMC:UpdateRequest) | warning
F78097 | fsmRmtErrMgmtControllerActivateBMC:Activate | [FSM:STAGE:REMOTE-ERROR]: activating backup image of CIMC(FSM-STAGE:sam:dme:MgmtControllerActivateBMC:Activate) | warning
F78097 | fsmRmtErrMgmtControllerActivateBMC:Reset | [FSM:STAGE:REMOTE-ERROR]: Resetting CIMC to boot the activated version(FSM-STAGE:sam:dme:MgmtControllerActivateBMC:Reset) | warning
F78110 | fsmRmtErrCallhomeEpConfigCallhome:SetLocal | [FSM:STAGE:REMOTE-ERROR]: call-home configuration on primary(FSM-STAGE:sam:dme:CallhomeEpConfigCallhome:SetLocal) | warning
F78110 | fsmRmtErrCallhomeEpConfigCallhome:SetPeer | [FSM:STAGE:REMOTE-ERROR]: call-home configuration on secondary(FSM-STAGE:sam:dme:CallhomeEpConfigCallhome:SetPeer) | warning
F78113 | fsmRmtErrMgmtIfSwMgmtOobIfConfig:Switch | [FSM:STAGE:REMOTE-ERROR]: configuring the out-of-band interface(FSM-STAGE:sam:dme:MgmtIfSwMgmtOobIfConfig:Switch) | warning
F78114 | fsmRmtErrMgmtIfSwMgmtInbandIfConfig:Switch | [FSM:STAGE:REMOTE-ERROR]: configuring the inband interface(FSM-STAGE:sam:dme:MgmtIfSwMgmtInbandIfConfig:Switch) | warning
F78119 | fsmRmtErrMgmtIfVirtualIfConfig:Local | [FSM:STAGE:REMOTE-ERROR]: Updating virtual interface on local fabric interconnect(FSM-STAGE:sam:dme:MgmtIfVirtualIfConfig:Local) | warning
F78119 | fsmRmtErrMgmtIfVirtualIfConfig:Remote | [FSM:STAGE:REMOTE-ERROR]: Updating virtual interface on peer fabric interconnect(FSM-STAGE:sam:dme:MgmtIfVirtualIfConfig:Remote) | warning
F78120 | fsmRmtErrMgmtIfEnableVip:Local | [FSM:STAGE:REMOTE-ERROR]: Enable virtual interface on local fabric interconnect(FSM-STAGE:sam:dme:MgmtIfEnableVip:Local) | warning
F78121 | fsmRmtErrMgmtIfDisableVip:Peer | [FSM:STAGE:REMOTE-ERROR]: Disable virtual interface on peer fabric interconnect(FSM-STAGE:sam:dme:MgmtIfDisableVip:Peer) | warning
F78122 | fsmRmtErrMgmtIfEnableHA:Local | [FSM:STAGE:REMOTE-ERROR]: Transition from single to cluster mode(FSM-STAGE:sam:dme:MgmtIfEnableHA:Local) | warning
F78123 | fsmRmtErrMgmtBackupBackup:backupLocal | [FSM:STAGE:REMOTE-ERROR]: internal database backup(FSM-STAGE:sam:dme:MgmtBackupBackup:backupLocal) | warning
F78123 | fsmRmtErrMgmtBackupBackup:upload | [FSM:STAGE:REMOTE-ERROR]: internal system backup(FSM-STAGE:sam:dme:MgmtBackupBackup:upload) | warning
F78124 | fsmRmtErrMgmtImporterImport:config | [FSM:STAGE:REMOTE-ERROR]: importing the configuration file(FSM-STAGE:sam:dme:MgmtImporterImport:config) | warning
F78124 | fsmRmtErrMgmtImporterImport:downloadLocal | [FSM:STAGE:REMOTE-ERROR]: downloading the configuration file from the remote location(FSM-STAGE:sam:dme:MgmtImporterImport:downloadLocal) | warning
F78124 | fsmRmtErrMgmtImporterImport:reportResults | [FSM:STAGE:REMOTE-ERROR]: Report results of configuration application(FSM-STAGE:sam:dme:MgmtImporterImport:reportResults) | warning
F78185 | fsmRmtErrQosclassDefinitionConfigGlobalQoS:SetLocal | [FSM:STAGE:REMOTE-ERROR]: QoS Classification Definition [name] configuration on primary(FSM-STAGE:sam:dme:QosclassDefinitionConfigGlobalQoS:SetLocal) | warning
F78185 | fsmRmtErrQosclassDefinitionConfigGlobalQoS:SetPeer | [FSM:STAGE:REMOTE-ERROR]: QoS Classification Definition [name] configuration on secondary(FSM-STAGE:sam:dme:QosclassDefinitionConfigGlobalQoS:SetPeer) | warning
F78190 | fsmRmtErrEpqosDefinitionDelTaskRemove:Local | [FSM:STAGE:REMOTE-ERROR]: vnic qos policy [name] removal from primary(FSM-STAGE:sam:dme:EpqosDefinitionDelTaskRemove:Local) | warning
F78190 | fsmRmtErrEpqosDefinitionDelTaskRemove:Peer | [FSM:STAGE:REMOTE-ERROR]: vnic qos policy [name] removal from secondary(FSM-STAGE:sam:dme:EpqosDefinitionDelTaskRemove:Peer) | warning
F78243 | fsmRmtErrEquipmentIOCardResetCmc:Execute | [FSM:STAGE:REMOTE-ERROR]: Resetting Chassis Management Controller on IOM [chassisId]/[id](FSM-STAGE:sam:dme:EquipmentIOCardResetCmc:Execute) | warning
F78255 | fsmRmtErrMgmtControllerUpdateUCSManager:execute | [FSM:STAGE:REMOTE-ERROR]: Updating UCS Manager firmware(FSM-STAGE:sam:dme:MgmtControllerUpdateUCSManager:execute) | warning
F78255 | fsmRmtErrMgmtControllerUpdateUCSManager:start | [FSM:STAGE:REMOTE-ERROR]: Scheduling UCS manager update(FSM-STAGE:sam:dme:MgmtControllerUpdateUCSManager:start) | warning
F78263 | fsmRmtErrMgmtControllerSysConfig:Primary | [FSM:STAGE:REMOTE-ERROR]: system configuration on primary(FSM-STAGE:sam:dme:MgmtControllerSysConfig:Primary) | warning
F78263 | fsmRmtErrMgmtControllerSysConfig:Secondary | [FSM:STAGE:REMOTE-ERROR]: system configuration on secondary(FSM-STAGE:sam:dme:MgmtControllerSysConfig:Secondary) | warning
F78292 | fsmRmtErrAdaptorExtEthIfPathReset:Disable | [FSM:STAGE:REMOTE-ERROR]: Disable path [side]([switchId]) on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorExtEthIfPathReset:Disable) | warning
F78292 | fsmRmtErrAdaptorExtEthIfPathReset:Enable | [FSM:STAGE:REMOTE-ERROR]: Enabe path [side]([switchId]) on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorExtEthIfPathReset:Enable) | warning
F78297 | fsmRmtErrAdaptorHostEthIfCircuitReset:DisableA | [FSM:STAGE:REMOTE-ERROR]: Disable circuit A for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostEthIfCircuitReset:DisableA) | warning
F78297 | fsmRmtErrAdaptorHostEthIfCircuitReset:DisableB | [FSM:STAGE:REMOTE-ERROR]: Disable circuit B for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostEthIfCircuitReset:DisableB) | warning
F78297 | fsmRmtErrAdaptorHostEthIfCircuitReset:EnableA | [FSM:STAGE:REMOTE-ERROR]: Enable circuit A for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostEthIfCircuitReset:EnableA) | warning
F78297 | fsmRmtErrAdaptorHostEthIfCircuitReset:EnableB | [FSM:STAGE:REMOTE-ERROR]: Enable circuit B for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostEthIfCircuitReset:EnableB) | warning
F78297 | fsmRmtErrAdaptorHostFcIfCircuitReset:DisableA | [FSM:STAGE:REMOTE-ERROR]: Disable circuit A for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostFcIfCircuitReset:DisableA) | warning
F78297 | fsmRmtErrAdaptorHostFcIfCircuitReset:DisableB | [FSM:STAGE:REMOTE-ERROR]: Disable circuit B for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostFcIfCircuitReset:DisableB) | warning
F78297 | fsmRmtErrAdaptorHostFcIfCircuitReset:EnableA | [FSM:STAGE:REMOTE-ERROR]: Enable circuit A for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostFcIfCircuitReset:EnableA) | warning
F78297 | fsmRmtErrAdaptorHostFcIfCircuitReset:EnableB | [FSM:STAGE:REMOTE-ERROR]: Enable circuit B for host adaptor [id] on server [chassisId]/[slotId](FSM-STAGE:sam:dme:AdaptorHostFcIfCircuitReset:EnableB) | warning
F78319 | fsmRmtErrExtvmmMasterExtKeyConfig:SetLocal | [FSM:STAGE:REMOTE-ERROR]: external VM manager extension-key configuration on peer fabric(FSM-STAGE:sam:dme:ExtvmmMasterExtKeyConfig:SetLocal) | warning
F78319 | fsmRmtErrExtvmmMasterExtKeyConfig:SetPeer | [FSM:STAGE:REMOTE-ERROR]: external VM manager extension-key configuration on local fabric(FSM-STAGE:sam:dme:ExtvmmMasterExtKeyConfig:SetPeer) | warning
F78319 | fsmRmtErrExtvmmProviderConfig:GetVersion | [FSM:STAGE:REMOTE-ERROR]: external VM manager version fetch(FSM-STAGE:sam:dme:ExtvmmProviderConfig:GetVersion) | warning
F78319 | fsmRmtErrExtvmmProviderConfig:SetLocal | [FSM:STAGE:REMOTE-ERROR]: external VM manager configuration on local fabric(FSM-STAGE:sam:dme:ExtvmmProviderConfig:SetLocal) | warning
F78319 | fsmRmtErrExtvmmProviderConfig:SetPeer | [FSM:STAGE:REMOTE-ERROR]: external VM manager configuration on peer fabric(FSM-STAGE:sam:dme:ExtvmmProviderConfig:SetPeer) | warning
F78319 | fsmRmtErrVmLifeCyclePolicyConfig:Local | [FSM:STAGE:REMOTE-ERROR]: set Veth Auto-delete Retention Timer on local fabric(FSM-STAGE:sam:dme:VmLifeCyclePolicyConfig:Local) | warning
F78319 | fsmRmtErrVmLifeCyclePolicyConfig:Peer | [FSM:STAGE:REMOTE-ERROR]: set Veth Auto-delete Retention Timer on peer fabric(FSM-STAGE:sam:dme:VmLifeCyclePolicyConfig:Peer) | warning
F78320 | fsmRmtErrExtvmmKeyStoreCertInstall:SetLocal | [FSM:STAGE:REMOTE-ERROR]: external VM manager cetificate configuration on local fabric(FSM-STAGE:sam:dme:ExtvmmKeyStoreCertInstall:SetLocal) | warning
F78320 | fsmRmtErrExtvmmKeyStoreCertInstall:SetPeer | [FSM:STAGE:REMOTE-ERROR]: external VM manager certificate configuration on peer fabric(FSM-STAGE:sam:dme:ExtvmmKeyStoreCertInstall:SetPeer) | warning
F78321 | fsmRmtErrExtvmmSwitchDelTaskRemoveProvider:RemoveLocal | [FSM:STAGE:REMOTE-ERROR]: external VM manager deletion from local fabric(FSM-STAGE:sam:dme:ExtvmmSwitchDelTaskRemoveProvider:RemoveLocal) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:Apply | [FSM:STAGE:REMOTE-ERROR]: applying changes to catalog(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:Apply) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:CopyRemote | [FSM:STAGE:REMOTE-ERROR]: syncing catalog files to subordinate(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:CopyRemote) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:DeleteLocal | [FSM:STAGE:REMOTE-ERROR]: deleting temp image [fileName] on local(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:DeleteLocal) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:EvaluateStatus | [FSM:STAGE:REMOTE-ERROR]: evaluating status of update(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:EvaluateStatus) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:Local | [FSM:STAGE:REMOTE-ERROR]: downloading catalog file [fileName] from [server](FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:Local) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:RescanImages | [FSM:STAGE:REMOTE-ERROR]: rescanning image files(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:RescanImages) | warning
F78344 | fsmRmtErrCapabilityUpdaterUpdater:UnpackLocal | [FSM:STAGE:REMOTE-ERROR]: unpacking catalog file [fileName] on primary(FSM-STAGE:sam:dme:CapabilityUpdaterUpdater:UnpackLocal) | warning
F78370 | fsmRmtErrComputeBladeUpdateBoardController:BladePowerOff | [FSM:STAGE:REMOTE-ERROR]: Power off server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeUpdateBoardController:BladePowerOff) | warning
F78370 | fsmRmtErrComputeBladeUpdateBoardController:BladePowerOn | [FSM:STAGE:REMOTE-ERROR]: Power on server [chassisId]/[slotId](FSM-STAGE:sam:dme:ComputeBladeUpdateBoardController:BladePowerOn) | warning
F78370 | fsmRmtErrComputeBladeUpdateBoardController:PollUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: Waiting for Board Controller update to complete(FSM-STAGE:sam:dme:ComputeBladeUpdateBoardController:PollUpdateStatus) | warning
F78370 | fsmRmtErrComputeBladeUpdateBoardController:PrepareForUpdate | [FSM:STAGE:REMOTE-ERROR]: Prepare for BoardController update(FSM-STAGE:sam:dme:ComputeBladeUpdateBoardController:PrepareForUpdate) | warning
F78370 | fsmRmtErrComputeBladeUpdateBoardController:UpdateRequest | [FSM:STAGE:REMOTE-ERROR]: Sending Board Controller update request to CIMC(FSM-STAGE:sam:dme:ComputeBladeUpdateBoardController:UpdateRequest) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncBladeAGLocal | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to local bladeAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncBladeAGLocal) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncBladeAGRemote | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to remote bladeAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncBladeAGRemote) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncHostagentAGLocal | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to local hostagentAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncHostagentAGLocal) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncHostagentAGRemote | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to remote hostagentAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncHostagentAGRemote) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncNicAGLocal | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to local nicAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncNicAGLocal) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncNicAGRemote | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to remote nicAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncNicAGRemote) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncPortAGLocal | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to local portAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncPortAGLocal) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:SyncPortAGRemote | [FSM:STAGE:REMOTE-ERROR]: Sending capability catalogue [version] to remote portAG(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:SyncPortAGRemote) | warning
F78371 | fsmRmtErrCapabilityCatalogueDeployCatalogue:finalize | [FSM:STAGE:REMOTE-ERROR]: Finalizing capability catalogue [version] deployment(FSM-STAGE:sam:dme:CapabilityCatalogueDeployCatalogue:finalize) | warning
F78382 | fsmRmtErrEquipmentFexRemoveFex:CleanupEntries | [FSM:STAGE:REMOTE-ERROR]: cleaning host entries(FSM-STAGE:sam:dme:EquipmentFexRemoveFex:CleanupEntries) | warning
F78382 | fsmRmtErrEquipmentFexRemoveFex:UnIdentifyLocal | [FSM:STAGE:REMOTE-ERROR]: erasing fex identity [id] from primary(FSM-STAGE:sam:dme:EquipmentFexRemoveFex:UnIdentifyLocal) | warning
F78382 | fsmRmtErrEquipmentFexRemoveFex:Wait | [FSM:STAGE:REMOTE-ERROR]: waiting for clean up of resources for chassis [id] (approx. 2 min)(FSM-STAGE:sam:dme:EquipmentFexRemoveFex:Wait) | warning
F78382 | fsmRmtErrEquipmentFexRemoveFex:decomission | [FSM:STAGE:REMOTE-ERROR]: decomissioning fex [id](FSM-STAGE:sam:dme:EquipmentFexRemoveFex:decomission) | warning
F78383 | fsmRmtErrEquipmentLocatorLedSetFeLocatorLed:Execute | [FSM:STAGE:REMOTE-ERROR]: setting locator led to [adminState](FSM-STAGE:sam:dme:EquipmentLocatorLedSetFeLocatorLed:Execute) | warning
F78384 | fsmRmtErrComputePhysicalPowerCap:Config | [FSM:STAGE:REMOTE-ERROR]: Configuring power cap of server [dn](FSM-STAGE:sam:dme:ComputePhysicalPowerCap:Config) | warning
F78384 | fsmRmtErrEquipmentChassisPowerCap:Config | [FSM:STAGE:REMOTE-ERROR]: (FSM-STAGE:sam:dme:EquipmentChassisPowerCap:Config) | warning
F78385 | fsmRmtErrEquipmentIOCardMuxOffline:CleanupEntries | [FSM:STAGE:REMOTE-ERROR]: cleaning host entries(FSM-STAGE:sam:dme:EquipmentIOCardMuxOffline:CleanupEntries) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:ActivateBios | [FSM:STAGE:REMOTE-ERROR]: Activate BIOS image for server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:ActivateBios) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BiosImgUpdate | [FSM:STAGE:REMOTE-ERROR]: Update blade BIOS image(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BiosImgUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BiosPostCompletion | [FSM:STAGE:REMOTE-ERROR]: Waiting for BIOS POST completion from CIMC on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:BiosPostCompletion) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BladePowerOff | [FSM:STAGE:REMOTE-ERROR]: Power off server for configuration of service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:BladePowerOff) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BmcConfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: provisioning a bootable device with a bootable pre-boot image for server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BmcConfigPnuOS) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BmcPreconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BmcPreconfigPnuOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BmcPreconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: prepare configuration for preboot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BmcPreconfigPnuOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BmcUnconfigPnuOS | [FSM:STAGE:REMOTE-ERROR]: unprovisioning the bootable device for server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BmcUnconfigPnuOS) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BootHost | [FSM:STAGE:REMOTE-ERROR]: Boot host OS for server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BootHost) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BootPnuos | [FSM:STAGE:REMOTE-ERROR]: Bring-up pre-boot environment for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:BootPnuos) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:BootWait | [FSM:STAGE:REMOTE-ERROR]: Waiting for system reset(FSM-STAGE:sam:dme:ComputePhysicalAssociate:BootWait) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:ClearBiosUpdate | [FSM:STAGE:REMOTE-ERROR]: Clearing pending BIOS image update(FSM-STAGE:sam:dme:ComputePhysicalAssociate:ClearBiosUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:ConfigSoL | [FSM:STAGE:REMOTE-ERROR]: Configuring SoL interface on server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:ConfigSoL) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:ConfigUserAccess | [FSM:STAGE:REMOTE-ERROR]: Configuring external user access(FSM-STAGE:sam:dme:ComputePhysicalAssociate:ConfigUserAccess) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:ConfigUuid | [FSM:STAGE:REMOTE-ERROR]: Configure logical UUID for server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:ConfigUuid) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:HbaImgUpdate | [FSM:STAGE:REMOTE-ERROR]: Update Host Bus Adapter image(FSM-STAGE:sam:dme:ComputePhysicalAssociate:HbaImgUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:HostOSConfig | [FSM:STAGE:REMOTE-ERROR]: Configure host OS components on server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:HostOSConfig) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:HostOSIdent | [FSM:STAGE:REMOTE-ERROR]: Identify host agent on server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:HostOSIdent) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:HostOSPolicy | [FSM:STAGE:REMOTE-ERROR]: Upload host agent policy to host agent on server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:HostOSPolicy) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:HostOSValidate | [FSM:STAGE:REMOTE-ERROR]: Validate host OS on server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:HostOSValidate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:LocalDiskFwUpdate | [FSM:STAGE:REMOTE-ERROR]: Update LocalDisk firmware image(FSM-STAGE:sam:dme:ComputePhysicalAssociate:LocalDiskFwUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicConfigHostOSLocal | [FSM:STAGE:REMOTE-ERROR]: Configure adapter in server for host OS (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicConfigHostOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicConfigHostOSPeer | [FSM:STAGE:REMOTE-ERROR]: Configure adapter in server for host OS (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicConfigHostOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Configure adapter for pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicConfigPnuOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Configure adapter for pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicConfigPnuOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicImgUpdate | [FSM:STAGE:REMOTE-ERROR]: Update adapter image(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicImgUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicUnconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicUnconfigPnuOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:NicUnconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure adapter of server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:NicUnconfigPnuOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSCatalog | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot catalog to server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSCatalog) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSConfig | [FSM:STAGE:REMOTE-ERROR]: Configure server with service profile [assignedToDn] pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSConfig) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSIdent | [FSM:STAGE:REMOTE-ERROR]: Identify pre-boot environment agent(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSIdent) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSInventory | [FSM:STAGE:REMOTE-ERROR]: Perform inventory of server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSInventory) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSLocalDiskConfig | [FSM:STAGE:REMOTE-ERROR]: Configure local disk on server with service profile [assignedToDn] pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSLocalDiskConfig) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSPolicy | [FSM:STAGE:REMOTE-ERROR]: Populate pre-boot environment behavior policy(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSPolicy) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSSelfTest | [FSM:STAGE:REMOTE-ERROR]: Trigger self-test on pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSSelfTest) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSUnloadDrivers | [FSM:STAGE:REMOTE-ERROR]: Unload drivers on server with service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSUnloadDrivers) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PnuOSValidate | [FSM:STAGE:REMOTE-ERROR]: Pre-boot environment validation for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:PnuOSValidate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PollBiosActivateStatus | [FSM:STAGE:REMOTE-ERROR]: waiting for BIOS activate(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PollBiosActivateStatus) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PollBiosUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: Waiting for BIOS update to complete(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PollBiosUpdateStatus) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PollBoardCtrlUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: Waiting for Board Controller update to complete(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PollBoardCtrlUpdateStatus) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PollClearBiosUpdateStatus | [FSM:STAGE:REMOTE-ERROR]: waiting for pending BIOS image update to clear(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PollClearBiosUpdateStatus) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PowerOn | [FSM:STAGE:REMOTE-ERROR]: Power on server for configuration of service profile [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:PowerOn) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PreSanitize | [FSM:STAGE:REMOTE-ERROR]: Preparing to check hardware configuration(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PreSanitize) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:PrepareForBoot | [FSM:STAGE:REMOTE-ERROR]: Prepare server for booting host OS(FSM-STAGE:sam:dme:ComputePhysicalAssociate:PrepareForBoot) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:Sanitize | [FSM:STAGE:REMOTE-ERROR]: Checking hardware configuration(FSM-STAGE:sam:dme:ComputePhysicalAssociate:Sanitize) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SolRedirectDisable | [FSM:STAGE:REMOTE-ERROR]: Disable Sol Redirection on server [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:SolRedirectDisable) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SolRedirectEnable | [FSM:STAGE:REMOTE-ERROR]: set up bios token for server [assignedToDn] for Sol redirect(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SolRedirectEnable) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:StorageCtlrImgUpdate | [FSM:STAGE:REMOTE-ERROR]: Update storage controller image(FSM-STAGE:sam:dme:ComputePhysicalAssociate:StorageCtlrImgUpdate) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigHostOSLocal | [FSM:STAGE:REMOTE-ERROR]: Configure primary fabric interconnect for server host os (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigHostOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigHostOSPeer | [FSM:STAGE:REMOTE-ERROR]: Configure secondary fabric interconnect for server host OS (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigHostOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Configure primary fabric interconnect for pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPnuOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Configure secondary fabric interconnect for pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPnuOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigPortNivLocal | [FSM:STAGE:REMOTE-ERROR]: configuring primary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPortNivLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwConfigPortNivPeer | [FSM:STAGE:REMOTE-ERROR]: configuring secondary fabric interconnect access to server(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwConfigPortNivPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwUnconfigPnuOSLocal | [FSM:STAGE:REMOTE-ERROR]: Unconfigure primary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwUnconfigPnuOSLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:SwUnconfigPnuOSPeer | [FSM:STAGE:REMOTE-ERROR]: Unconfigure secondary fabric interconnect for server pre-boot environment(FSM-STAGE:sam:dme:ComputePhysicalAssociate:SwUnconfigPnuOSPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:UpdateBiosRequest | [FSM:STAGE:REMOTE-ERROR]: Sending update BIOS request to CIMC(FSM-STAGE:sam:dme:ComputePhysicalAssociate:UpdateBiosRequest) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:UpdateBoardCtrlRequest | [FSM:STAGE:REMOTE-ERROR]: Sending Board Controller update request to CIMC(FSM-STAGE:sam:dme:ComputePhysicalAssociate:UpdateBoardCtrlRequest) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:activateAdaptorNwFwLocal | [FSM:STAGE:REMOTE-ERROR]: Activate adapter network firmware on(FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateAdaptorNwFwLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:activateAdaptorNwFwPeer | [FSM:STAGE:REMOTE-ERROR]: Activate adapter network firmware on(FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateAdaptorNwFwPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:activateIBMCFw | [FSM:STAGE:REMOTE-ERROR]: Activate CIMC firmware of server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:activateIBMCFw) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:hagHostOSConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to host agent on server (service profile: [assignedToDn])(FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagHostOSConnect) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:hagPnuOSConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagPnuOSConnect) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:hagPnuOSDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent(FSM-STAGE:sam:dme:ComputePhysicalAssociate:hagPnuOSDisconnect) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:resetIBMC | [FSM:STAGE:REMOTE-ERROR]: Reset CIMC of server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:resetIBMC) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:serialDebugPnuOSConnect | [FSM:STAGE:REMOTE-ERROR]: Connect to pre-boot environment agent for association with [assignedToDn](FSM-STAGE:sam:dme:ComputePhysicalAssociate:serialDebugPnuOSConnect) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:serialDebugPnuOSDisconnect | [FSM:STAGE:REMOTE-ERROR]: Disconnect pre-boot environment agent(FSM-STAGE:sam:dme:ComputePhysicalAssociate:serialDebugPnuOSDisconnect) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:updateAdaptorNwFwLocal | [FSM:STAGE:REMOTE-ERROR]: Update adapter network firmware(FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateAdaptorNwFwLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:updateAdaptorNwFwPeer | [FSM:STAGE:REMOTE-ERROR]: Update adapter network firmware(FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateAdaptorNwFwPeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:updateIBMCFw | [FSM:STAGE:REMOTE-ERROR]: Update CIMC firmware of server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:updateIBMCFw) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:waitForAdaptorNwFwUpdateLocal | [FSM:STAGE:REMOTE-ERROR]: Wait for adapter network firmware update completion(FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForAdaptorNwFwUpdateLocal) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:waitForAdaptorNwFwUpdatePeer | [FSM:STAGE:REMOTE-ERROR]: Wait for adapter network firmware update completion(FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForAdaptorNwFwUpdatePeer) | warning
F78413 | fsmRmtErrComputePhysicalAssociate:waitForIBMCFwUpdate | [FSM:STAGE:REMOTE-ERROR]: Wait for CIMC firmware completion on server [serverId](FSM-STAGE:sam:dme:ComputePhysicalAssociate:waitForIBMCFwUpdate) | warning

[FSM:STAGE:REMOTE-ERROR]:
Waiting for BIOS POST completion
from CIMC on server [serverId](FSM-
fsmRmtErrComputePhysicalDisassociate:Bio STAGE:sam:dme:ComputePhysicalDi
F78414 sPostCompletion sassociate:BiosPostCompletion) warning

[FSM:STAGE:REMOTE-ERROR]:
provisioning a bootable device with
a bootable pre-boot image for
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Bm STAGE:sam:dme:ComputePhysicalDi
F78414 cConfigPnuOS sassociate:BmcConfigPnuOS) warning
[FSM:STAGE:REMOTE-ERROR]:
prepare configuration for preboot
environment(FSM-
STAGE:sam:dme:ComputePhysicalDi
fsmRmtErrComputePhysicalDisassociate:Bm sassociate:BmcPreconfigPnuOSLocal
F78414 cPreconfigPnuOSLocal ) warning

[FSM:STAGE:REMOTE-ERROR]:
prepare configuration for preboot
environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Bm STAGE:sam:dme:ComputePhysicalDi
F78414 cPreconfigPnuOSPeer sassociate:BmcPreconfigPnuOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
unprovisioning the bootable device
for server(FSM-
fsmRmtErrComputePhysicalDisassociate:Bm STAGE:sam:dme:ComputePhysicalDi
F78414 cUnconfigPnuOS sassociate:BmcUnconfigPnuOS) warning

[FSM:STAGE:REMOTE-ERROR]:
Bring-up pre-boot environment on
server for disassociation with service
profile [assignedToDn](FSM-
fsmRmtErrComputePhysicalDisassociate:Bo STAGE:sam:dme:ComputePhysicalDi
F78414 otPnuos sassociate:BootPnuos) warning
[FSM:STAGE:REMOTE-ERROR]:
Waiting for system reset on
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Bo STAGE:sam:dme:ComputePhysicalDi
F78414 otWait sassociate:BootWait) warning

[FSM:STAGE:REMOTE-ERROR]:
Configuring BIOS Defaults on server
[serverId](FSM-
fsmRmtErrComputePhysicalDisassociate:Co STAGE:sam:dme:ComputePhysicalDi
F78414 nfigBios sassociate:ConfigBios) warning
[FSM:STAGE:REMOTE-ERROR]:
Configuring external user
access(FSM-
fsmRmtErrComputePhysicalDisassociate:Co STAGE:sam:dme:ComputePhysicalDi
F78414 nfigUserAccess sassociate:ConfigUserAccess) warning

[FSM:STAGE:REMOTE-ERROR]: Apply
post-disassociation policies to
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Ha STAGE:sam:dme:ComputePhysicalDi
F78414 ndlePooling sassociate:HandlePooling) warning

[FSM:STAGE:REMOTE-ERROR]:
Configure adapter for pre-boot
environment on server(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPnuOSLocal sassociate:NicConfigPnuOSLocal) warning
[FSM:STAGE:REMOTE-ERROR]:
Configure adapter for pre-boot
environment on server(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPnuOSPeer sassociate:NicConfigPnuOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure host OS connectivity
from server adapter(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigHostOSLocal sassociate:NicUnconfigHostOSLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure host OS connectivity
from server adapter(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigHostOSPeer sassociate:NicUnconfigHostOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure adapter of server pre-
boot environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigPnuOSLocal sassociate:NicUnconfigPnuOSLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure adapter of server pre-
boot environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Nic STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigPnuOSPeer sassociate:NicUnconfigPnuOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Populate pre-boot catalog to
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSCatalog sassociate:PnuOSCatalog) warning

[FSM:STAGE:REMOTE-ERROR]:
Identify pre-boot environment agent
on server(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSIdent sassociate:PnuOSIdent) warning

[FSM:STAGE:REMOTE-ERROR]:
Populate pre-boot environment
behavior policy to server(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSPolicy sassociate:PnuOSPolicy) warning

[FSM:STAGE:REMOTE-ERROR]: Scrub
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSScrub sassociate:PnuOSScrub) warning
[FSM:STAGE:REMOTE-ERROR]:
Trigger self-test of server pre-boot
environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSSelfTest sassociate:PnuOSSelfTest) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure server from service
profile [assignedToDn] pre-boot
environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSUnconfig sassociate:PnuOSUnconfig) warning

[FSM:STAGE:REMOTE-ERROR]: Pre-
boot environment validate server
for disassociation with service
profile [assignedToDn](FSM-
fsmRmtErrComputePhysicalDisassociate:Pn STAGE:sam:dme:ComputePhysicalDi
F78414 uOSValidate sassociate:PnuOSValidate) warning

[FSM:STAGE:REMOTE-ERROR]:
Power on server for unconfiguration
of service profile [assignedToDn]
(FSM-
fsmRmtErrComputePhysicalDisassociate:Po STAGE:sam:dme:ComputePhysicalDi
F78414 werOn sassociate:PowerOn) warning

[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server(FSM-
fsmRmtErrComputePhysicalDisassociate:Pre STAGE:sam:dme:ComputePhysicalDi
F78414 Sanitize sassociate:PreSanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sa STAGE:sam:dme:ComputePhysicalDi
F78414 nitize sassociate:Sanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Shutdown server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sh STAGE:sam:dme:ComputePhysicalDi
F78414 utdown sassociate:Shutdown) warning

[FSM:STAGE:REMOTE-ERROR]:
Disable Sol redirection on server
[serverId](FSM-
fsmRmtErrComputePhysicalDisassociate:Sol STAGE:sam:dme:ComputePhysicalDi
F78414 RedirectDisable sassociate:SolRedirectDisable) warning
[FSM:STAGE:REMOTE-ERROR]: set
up bios token for server [serverId]
for Sol redirect(FSM-
fsmRmtErrComputePhysicalDisassociate:Sol STAGE:sam:dme:ComputePhysicalDi
F78414 RedirectEnable sassociate:SolRedirectEnable) warning

[FSM:STAGE:REMOTE-ERROR]:
Configure primary fabric
interconnect for pre-boot
environment on server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPnuOSLocal sassociate:SwConfigPnuOSLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Configure secondary fabric
interconnect for pre-boot
environment on server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPnuOSPeer sassociate:SwConfigPnuOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
configuring primary fabric
interconnect access to server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPortNivLocal sassociate:SwConfigPortNivLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
configuring secondary fabric
interconnect access to server(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 ConfigPortNivPeer sassociate:SwConfigPortNivPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure host OS connectivity
from server to primary fabric
interconnect(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigHostOSLocal sassociate:SwUnconfigHostOSLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure host OS connectivity
from server to secondary fabric
interconnect(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigHostOSPeer sassociate:SwUnconfigHostOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfigure primary fabric
interconnect for server pre-boot
environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigPnuOSLocal sassociate:SwUnconfigPnuOSLocal) warning
[FSM:STAGE:REMOTE-ERROR]:
Unconfigure secondary fabric
interconnect for server pre-boot
environment(FSM-
fsmRmtErrComputePhysicalDisassociate:Sw STAGE:sam:dme:ComputePhysicalDi
F78414 UnconfigPnuOSPeer sassociate:SwUnconfigPnuOSPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfiguring BIOS Settings and
Boot Order of server [serverId]
(service profile [assignedToDn])
(FSM-
fsmRmtErrComputePhysicalDisassociate:Un STAGE:sam:dme:ComputePhysicalDi
F78414 configBios sassociate:UnconfigBios) warning

[FSM:STAGE:REMOTE-ERROR]:
Removing SoL configuration from
server(FSM-
fsmRmtErrComputePhysicalDisassociate:Un STAGE:sam:dme:ComputePhysicalDi
F78414 configSoL sassociate:UnconfigSoL) warning

[FSM:STAGE:REMOTE-ERROR]:
Restore original UUID for server
(service profile: [assignedToDn])
(FSM-
fsmRmtErrComputePhysicalDisassociate:Un STAGE:sam:dme:ComputePhysicalDi
F78414 configUuid sassociate:UnconfigUuid) warning

[FSM:STAGE:REMOTE-ERROR]:
Connect to pre-boot environment
agent on server for disassociation
with service profile [assignedToDn]
(FSM-
fsmRmtErrComputePhysicalDisassociate:ha STAGE:sam:dme:ComputePhysicalDi
F78414 gPnuOSConnect sassociate:hagPnuOSConnect) warning

[FSM:STAGE:REMOTE-ERROR]:
Disconnect pre-boot environment
agent for server(FSM-
fsmRmtErrComputePhysicalDisassociate:ha STAGE:sam:dme:ComputePhysicalDi
F78414 gPnuOSDisconnect sassociate:hagPnuOSDisconnect) warning

[FSM:STAGE:REMOTE-ERROR]:
Connect to pre-boot environment
agent on server for disassociation
with service profile [assignedToDn]
(FSM-
STAGE:sam:dme:ComputePhysicalDi
fsmRmtErrComputePhysicalDisassociate:ser sassociate:serialDebugPnuOSConnec
F78414 ialDebugPnuOSConnect t) warning
[FSM:STAGE:REMOTE-ERROR]:
Disconnect pre-boot environment
agent for server(FSM-
STAGE:sam:dme:ComputePhysicalDi
fsmRmtErrComputePhysicalDisassociate:ser sassociate:serialDebugPnuOSDiscon
F78414 ialDebugPnuOSDisconnect nect) warning

[FSM:STAGE:REMOTE-ERROR]:
Cleaning up CIMC configuration for
server [dn](FSM-
fsmRmtErrComputePhysicalDecommission: STAGE:sam:dme:ComputePhysicalD
F78416 CleanupCIMC ecommission:CleanupCIMC) warning

[FSM:STAGE:REMOTE-ERROR]:
Decommissioning server [dn](FSM-
fsmRmtErrComputePhysicalDecommission: STAGE:sam:dme:ComputePhysicalD
F78416 Execute ecommission:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalDecommission:S STAGE:sam:dme:ComputePhysicalD
F78416 topVMediaLocal ecommission:StopVMediaLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalDecommission:S STAGE:sam:dme:ComputePhysicalD
F78416 topVMediaPeer ecommission:StopVMediaPeer) warning

[FSM:STAGE:REMOTE-ERROR]: Soft
shutdown of server [dn](FSM-
fsmRmtErrComputePhysicalSoftShutdown:E STAGE:sam:dme:ComputePhysicalSo
F78417 xecute ftShutdown:Execute) warning

[FSM:STAGE:REMOTE-ERROR]: Hard
shutdown of server [dn](FSM-
fsmRmtErrComputePhysicalHardShutdown: STAGE:sam:dme:ComputePhysicalH
F78418 Execute ardShutdown:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Power-on server [dn](FSM-
STAGE:sam:dme:ComputePhysicalTu
F78419 fsmRmtErrComputePhysicalTurnup:Execute rnup:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Power-cycle server [dn](FSM-
fsmRmtErrComputePhysicalPowercycle:Exe STAGE:sam:dme:ComputePhysicalPo
F78420 cute wercycle:Execute) warning
[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server [dn](FSM-
fsmRmtErrComputePhysicalPowercycle:Pre STAGE:sam:dme:ComputePhysicalPo
F78420 Sanitize wercycle:PreSanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server [dn](FSM-
fsmRmtErrComputePhysicalPowercycle:Sani STAGE:sam:dme:ComputePhysicalPo
F78420 tize wercycle:Sanitize) warning

[FSM:STAGE:REMOTE-ERROR]: Hard-
reset server [dn](FSM-
fsmRmtErrComputePhysicalHardreset:Execu STAGE:sam:dme:ComputePhysicalH
F78421 te ardreset:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server [dn](FSM-
fsmRmtErrComputePhysicalHardreset:PreSa STAGE:sam:dme:ComputePhysicalH
F78421 nitize ardreset:PreSanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server [dn](FSM-
fsmRmtErrComputePhysicalHardreset:Saniti STAGE:sam:dme:ComputePhysicalH
F78421 ze ardreset:Sanitize) warning

[FSM:STAGE:REMOTE-ERROR]: Soft-
reset server [dn](FSM-
fsmRmtErrComputePhysicalSoftreset:Execut STAGE:sam:dme:ComputePhysicalSo
F78422 e ftreset:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server [dn](FSM-
fsmRmtErrComputePhysicalSoftreset:PreSa STAGE:sam:dme:ComputePhysicalSo
F78422 nitize ftreset:PreSanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server [dn](FSM-
fsmRmtErrComputePhysicalSoftreset:Sanitiz STAGE:sam:dme:ComputePhysicalSo
F78422 e ftreset:Sanitize) warning
[FSM:STAGE:REMOTE-ERROR]:
Updating fabric A for server [dn]
(FSM-
STAGE:sam:dme:ComputePhysicalS
F78423 fsmRmtErrComputePhysicalSwConnUpd:A wConnUpd:A) warning
[FSM:STAGE:REMOTE-ERROR]:
Updating fabric B for server [dn]
(FSM-
STAGE:sam:dme:ComputePhysicalS
F78423 fsmRmtErrComputePhysicalSwConnUpd:B wConnUpd:B) warning

[FSM:STAGE:REMOTE-ERROR]:
Completing BIOS recovery mode for
server [dn], and shutting it
down(FSM-
fsmRmtErrComputePhysicalBiosRecovery:Cl STAGE:sam:dme:ComputePhysicalBi
F78424 eanup osRecovery:Cleanup) warning

[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:Pr STAGE:sam:dme:ComputePhysicalBi
F78424 eSanitize osRecovery:PreSanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Resetting server [dn] power state
after BIOS recovery(FSM-
fsmRmtErrComputePhysicalBiosRecovery:R STAGE:sam:dme:ComputePhysicalBi
F78424 eset osRecovery:Reset) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:Sa STAGE:sam:dme:ComputePhysicalBi
F78424 nitize osRecovery:Sanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Provisioning a V-Media device with a
bootable BIOS image for server [dn]
(FSM-
fsmRmtErrComputePhysicalBiosRecovery:Se STAGE:sam:dme:ComputePhysicalBi
F78424 tupVmediaLocal osRecovery:SetupVmediaLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Provisioning a V-Media device with a
bootable BIOS image for server [dn]
(FSM-
fsmRmtErrComputePhysicalBiosRecovery:Se STAGE:sam:dme:ComputePhysicalBi
F78424 tupVmediaPeer osRecovery:SetupVmediaPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Shutting down server [dn] to
prepare for BIOS recovery(FSM-
fsmRmtErrComputePhysicalBiosRecovery:Sh STAGE:sam:dme:ComputePhysicalBi
F78424 utdown osRecovery:Shutdown) warning
[FSM:STAGE:REMOTE-ERROR]:
Running BIOS recovery on server
[dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:St STAGE:sam:dme:ComputePhysicalBi
F78424 art osRecovery:Start) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:St STAGE:sam:dme:ComputePhysicalBi
F78424 opVMediaLocal osRecovery:StopVMediaLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:St STAGE:sam:dme:ComputePhysicalBi
F78424 opVMediaPeer osRecovery:StopVMediaPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:Te STAGE:sam:dme:ComputePhysicalBi
F78424 ardownVmediaLocal osRecovery:TeardownVmediaLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unprovisioning the V-Media
bootable device for server [dn](FSM-
fsmRmtErrComputePhysicalBiosRecovery:Te STAGE:sam:dme:ComputePhysicalBi
F78424 ardownVmediaPeer osRecovery:TeardownVmediaPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Waiting for completion of BIOS
recovery for server [dn] (up to 15
min)(FSM-
fsmRmtErrComputePhysicalBiosRecovery:W STAGE:sam:dme:ComputePhysicalBi
F78424 ait osRecovery:Wait) warning

[FSM:STAGE:REMOTE-ERROR]:
Power on server [serverId](FSM-
fsmRmtErrComputePhysicalCmosReset:Blad STAGE:sam:dme:ComputePhysicalC
F78426 ePowerOn mosReset:BladePowerOn) warning

[FSM:STAGE:REMOTE-ERROR]:
Resetting CMOS for server [serverId]
(FSM-
fsmRmtErrComputePhysicalCmosReset:Exec STAGE:sam:dme:ComputePhysicalC
F78426 ute mosReset:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Preparing to check hardware
configuration server [serverId](FSM-
fsmRmtErrComputePhysicalCmosReset:PreS STAGE:sam:dme:ComputePhysicalC
F78426 anitize mosReset:PreSanitize) warning
[FSM:STAGE:REMOTE-ERROR]:
Reconfiguring BIOS Settings and
Boot Order of server [serverId] for
service profile [assignedToDn](FSM-
fsmRmtErrComputePhysicalCmosReset:Rec STAGE:sam:dme:ComputePhysicalC
F78426 onfigBios mosReset:ReconfigBios) warning

[FSM:STAGE:REMOTE-ERROR]:
Reconfiguring logical UUID of server
[serverId] for service profile
[assignedToDn](FSM-
fsmRmtErrComputePhysicalCmosReset:Rec STAGE:sam:dme:ComputePhysicalC
F78426 onfigUuid mosReset:ReconfigUuid) warning

[FSM:STAGE:REMOTE-ERROR]:
Checking hardware configuration
server [serverId](FSM-
fsmRmtErrComputePhysicalCmosReset:Sani STAGE:sam:dme:ComputePhysicalC
F78426 tize mosReset:Sanitize) warning

[FSM:STAGE:REMOTE-ERROR]:
Resetting Management Controller
on server [dn](FSM-
fsmRmtErrComputePhysicalResetBmc:Execu STAGE:sam:dme:ComputePhysicalRe
F78427 te setBmc:Execute) warning

[FSM:STAGE:REMOTE-ERROR]: Reset
IOM [id] on Fex [chassisId](FSM-
fsmRmtErrEquipmentIOCardResetIom:Exec STAGE:sam:dme:EquipmentIOCardR
F78428 ute esetIom:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
external mgmt user deployment on
server [dn] (profile [assignedToDn])
(FSM-
fsmRmtErrComputePhysicalUpdateExtUsers STAGE:sam:dme:ComputePhysicalU
F78448 :Deploy pdateExtUsers:Deploy) warning

[FSM:STAGE:REMOTE-ERROR]:
create tech-support file from GUI on
local(FSM-
fsmRmtErrSysdebugTechSupportInitiate:Loc STAGE:sam:dme:SysdebugTechSupp
F78452 al ortInitiate:Local) warning

[FSM:STAGE:REMOTE-ERROR]:
delete tech-support file from GUI on
local(FSM-
fsmRmtErrSysdebugTechSupportDeleteTech STAGE:sam:dme:SysdebugTechSupp
F78453 SupFile:Local ortDeleteTechSupFile:Local) warning
[FSM:STAGE:REMOTE-ERROR]:
delete tech-support file from GUI on
peer(FSM-
fsmRmtErrSysdebugTechSupportDeleteTech STAGE:sam:dme:SysdebugTechSupp
F78453 SupFile:peer ortDeleteTechSupFile:peer) warning

[FSM:STAGE:REMOTE-ERROR]: sync
images to subordinate(FSM-
fsmRmtErrFirmwareDownloaderDownload: STAGE:sam:dme:FirmwareDownload
F78454 CopyRemote erDownload:CopyRemote) warning

[FSM:STAGE:REMOTE-ERROR]:
deleting downloadable [fileName]
on local(FSM-
fsmRmtErrFirmwareDownloaderDownload: STAGE:sam:dme:FirmwareDownload
F78454 DeleteLocal erDownload:DeleteLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
downloading image [fileName] from
[server](FSM-
fsmRmtErrFirmwareDownloaderDownload: STAGE:sam:dme:FirmwareDownload
F78454 Local erDownload:Local) warning

[FSM:STAGE:REMOTE-ERROR]:
unpacking image [fileName] on
primary(FSM-
fsmRmtErrFirmwareDownloaderDownload: STAGE:sam:dme:FirmwareDownload
F78454 UnpackLocal erDownload:UnpackLocal) warning

[FSM:STAGE:REMOTE-ERROR]: Copy
the license file to subordinate for
inventory(FSM-
fsmRmtErrLicenseDownloaderDownload:Co STAGE:sam:dme:LicenseDownloader
F78454 pyRemote Download:CopyRemote) warning

[FSM:STAGE:REMOTE-ERROR]:
deleting temporary files for
[fileName] on local(FSM-
fsmRmtErrLicenseDownloaderDownload:De STAGE:sam:dme:LicenseDownloader
F78454 leteLocal Download:DeleteLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
deleting temporary files for
[fileName] on subordinate(FSM-
fsmRmtErrLicenseDownloaderDownload:De STAGE:sam:dme:LicenseDownloader
F78454 leteRemote Download:DeleteRemote) warning

[FSM:STAGE:REMOTE-ERROR]:
downloading license file [fileName]
from [server](FSM-
fsmRmtErrLicenseDownloaderDownload:Lo STAGE:sam:dme:LicenseDownloader
F78454 cal Download:Local) warning
[FSM:STAGE:REMOTE-ERROR]:
validation for license file [fileName]
on primary(FSM-
fsmRmtErrLicenseDownloaderDownload:Val STAGE:sam:dme:LicenseDownloader
F78454 idateLocal Download:ValidateLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
validation for license file [fileName]
on subordinate(FSM-
fsmRmtErrLicenseDownloaderDownload:Val STAGE:sam:dme:LicenseDownloader
F78454 idateRemote Download:ValidateRemote) warning

[FSM:STAGE:REMOTE-ERROR]: Copy
the Core file to primary for
download(FSM-
fsmRmtErrSysdebugCoreDownload:CopyPri STAGE:sam:dme:SysdebugCoreDow
F78454 mary nload:CopyPrimary) warning

[FSM:STAGE:REMOTE-ERROR]: copy
Core file on subordinate switch to
tmp directory(FSM-
fsmRmtErrSysdebugCoreDownload:CopySu STAGE:sam:dme:SysdebugCoreDow
F78454 b nload:CopySub) warning

[FSM:STAGE:REMOTE-ERROR]:
Delete the Core file from primary
switch under tmp directory(FSM-
fsmRmtErrSysdebugCoreDownload:DeleteP STAGE:sam:dme:SysdebugCoreDow
F78454 rimary nload:DeletePrimary) warning

[FSM:STAGE:REMOTE-ERROR]:
Delete the Core file from
subordinate under tmp
directory(FSM-
fsmRmtErrSysdebugCoreDownload:DeleteS STAGE:sam:dme:SysdebugCoreDow
F78454 ub nload:DeleteSub) warning

[FSM:STAGE:REMOTE-ERROR]: Copy
the tech-support file to primary for
download(FSM-
fsmRmtErrSysdebugTechSupportDownload: STAGE:sam:dme:SysdebugTechSupp
F78454 CopyPrimary ortDownload:CopyPrimary) warning

[FSM:STAGE:REMOTE-ERROR]: copy
tech-support file on subordinate
switch to tmp directory(FSM-
fsmRmtErrSysdebugTechSupportDownload: STAGE:sam:dme:SysdebugTechSupp
F78454 CopySub ortDownload:CopySub) warning

[FSM:STAGE:REMOTE-ERROR]:
Delete the tech-support file from
primary switch under tmp
directory(FSM-
fsmRmtErrSysdebugTechSupportDownload: STAGE:sam:dme:SysdebugTechSupp
F78454 DeletePrimary ortDownload:DeletePrimary) warning
[FSM:STAGE:REMOTE-ERROR]:
Delete the tech-support file from
subordinate under tmp
directory(FSM-
fsmRmtErrSysdebugTechSupportDownload: STAGE:sam:dme:SysdebugTechSupp
F78454 DeleteSub ortDownload:DeleteSub) warning
[FSM:STAGE:REMOTE-ERROR]:
waiting for update to
complete(FSM-
STAGE:sam:dme:ComputePhysicalU
fsmRmtErrComputePhysicalUpdateAdaptor: pdateAdaptor:PollUpdateStatusLoca
F78483 PollUpdateStatusLocal l) warning
[FSM:STAGE:REMOTE-ERROR]:
waiting for update to
complete(FSM-
STAGE:sam:dme:ComputePhysicalU
fsmRmtErrComputePhysicalUpdateAdaptor: pdateAdaptor:PollUpdateStatusPeer
F78483 PollUpdateStatusPeer ) warning

[FSM:STAGE:REMOTE-ERROR]:
Power off the server(FSM-
fsmRmtErrComputePhysicalUpdateAdaptor: STAGE:sam:dme:ComputePhysicalU
F78483 PowerOff pdateAdaptor:PowerOff) warning

[FSM:STAGE:REMOTE-ERROR]:
power on the blade(FSM-
fsmRmtErrComputePhysicalUpdateAdaptor: STAGE:sam:dme:ComputePhysicalU
F78483 PowerOn pdateAdaptor:PowerOn) warning
[FSM:STAGE:REMOTE-ERROR]:
sending update request to
Adaptor(FSM-
fsmRmtErrComputePhysicalUpdateAdaptor: STAGE:sam:dme:ComputePhysicalU
F78483 UpdateRequestLocal pdateAdaptor:UpdateRequestLocal) warning
[FSM:STAGE:REMOTE-ERROR]:
sending update request to
Adaptor(FSM-
fsmRmtErrComputePhysicalUpdateAdaptor: STAGE:sam:dme:ComputePhysicalU
F78483 UpdateRequestPeer pdateAdaptor:UpdateRequestPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
activating backup image of
Adaptor(FSM-
fsmRmtErrComputePhysicalActivateAdaptor STAGE:sam:dme:ComputePhysicalAc
F78484 :ActivateLocal tivateAdaptor:ActivateLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
activating backup image of
Adaptor(FSM-
fsmRmtErrComputePhysicalActivateAdaptor STAGE:sam:dme:ComputePhysicalAc
F78484 :ActivatePeer tivateAdaptor:ActivatePeer) warning

[FSM:STAGE:REMOTE-ERROR]:
power on the blade(FSM-
fsmRmtErrComputePhysicalActivateAdaptor STAGE:sam:dme:ComputePhysicalAc
F78484 :PowerOn tivateAdaptor:PowerOn) warning
[FSM:STAGE:REMOTE-ERROR]:
reseting the blade(FSM-
fsmRmtErrComputePhysicalActivateAdaptor STAGE:sam:dme:ComputePhysicalAc
F78484 :Reset tivateAdaptor:Reset) warning

[FSM:STAGE:REMOTE-ERROR]:
applying changes to catalog(FSM-
fsmRmtErrCapabilityCatalogueActivateCatal STAGE:sam:dme:CapabilityCatalogu
F78485 og:ApplyCatalog eActivateCatalog:ApplyCatalog) warning
[FSM:STAGE:REMOTE-ERROR]:
syncing catalog changes to
subordinate(FSM-
fsmRmtErrCapabilityCatalogueActivateCatal STAGE:sam:dme:CapabilityCatalogu
F78485 og:CopyRemote eActivateCatalog:CopyRemote) warning

[FSM:STAGE:REMOTE-ERROR]:
evaluating status of activation(FSM-
fsmRmtErrCapabilityCatalogueActivateCatal STAGE:sam:dme:CapabilityCatalogu
F78485 og:EvaluateStatus eActivateCatalog:EvaluateStatus) warning

[FSM:STAGE:REMOTE-ERROR]:
rescanning image files(FSM-
fsmRmtErrCapabilityCatalogueActivateCatal STAGE:sam:dme:CapabilityCatalogu
F78485 og:RescanImages eActivateCatalog:RescanImages) warning

[FSM:STAGE:REMOTE-ERROR]:
activating catalog changes(FSM-
fsmRmtErrCapabilityCatalogueActivateCatal STAGE:sam:dme:CapabilityCatalogu
F78485 og:UnpackLocal eActivateCatalog:UnpackLocal) warning
[FSM:STAGE:REMOTE-ERROR]:
applying changes to catalog(FSM-
STAGE:sam:dme:CapabilityMgmtExt
fsmRmtErrCapabilityMgmtExtensionActivat ensionActivateMgmtExt:ApplyCatalo
F78486 eMgmtExt:ApplyCatalog g) warning

[FSM:STAGE:REMOTE-ERROR]:
syncing management extension
changes to subordinate(FSM-
STAGE:sam:dme:CapabilityMgmtExt
fsmRmtErrCapabilityMgmtExtensionActivat ensionActivateMgmtExt:CopyRemot
F78486 eMgmtExt:CopyRemote e) warning

[FSM:STAGE:REMOTE-ERROR]:
evaluating status of activation(FSM-
STAGE:sam:dme:CapabilityMgmtExt
fsmRmtErrCapabilityMgmtExtensionActivat ensionActivateMgmtExt:EvaluateSta
F78486 eMgmtExt:EvaluateStatus tus) warning
[FSM:STAGE:REMOTE-ERROR]:
rescanning image files(FSM-
STAGE:sam:dme:CapabilityMgmtExt
fsmRmtErrCapabilityMgmtExtensionActivat ensionActivateMgmtExt:RescanImag
F78486 eMgmtExt:RescanImages es) warning
[FSM:STAGE:REMOTE-ERROR]:
activating management extension
changes(FSM-
STAGE:sam:dme:CapabilityMgmtExt
fsmRmtErrCapabilityMgmtExtensionActivat ensionActivateMgmtExt:UnpackLoca
F78486 eMgmtExt:UnpackLocal l) warning

[FSM:STAGE:REMOTE-ERROR]:
Installing license on primary(FSM-
STAGE:sam:dme:LicenseFileInstall:Lo
F78491 fsmRmtErrLicenseFileInstall:Local cal) warning
[FSM:STAGE:REMOTE-ERROR]:
Installing license on
subordinate(FSM-
STAGE:sam:dme:LicenseFileInstall:R
F78491 fsmRmtErrLicenseFileInstall:Remote emote) warning

[FSM:STAGE:REMOTE-ERROR]:
Clearing license on primary(FSM-
STAGE:sam:dme:LicenseFileClear:Lo
F78492 fsmRmtErrLicenseFileClear:Local cal) warning
[FSM:STAGE:REMOTE-ERROR]:
Clearing license on
subordinate(FSM-
STAGE:sam:dme:LicenseFileClear:Re
F78492 fsmRmtErrLicenseFileClear:Remote mote) warning

[FSM:STAGE:REMOTE-ERROR]:
Updating on primary(FSM-
fsmRmtErrLicenseInstanceUpdateFlexlm:Lo STAGE:sam:dme:LicenseInstanceUp
F78493 cal dateFlexlm:Local) warning

[FSM:STAGE:REMOTE-ERROR]:
Updating on subordinate(FSM-
fsmRmtErrLicenseInstanceUpdateFlexlm:Re STAGE:sam:dme:LicenseInstanceUp
F78493 mote dateFlexlm:Remote) warning

[FSM:STAGE:REMOTE-ERROR]:
configuring SoL interface on server
[dn](FSM-
fsmRmtErrComputePhysicalConfigSoL:Execu STAGE:sam:dme:ComputePhysicalCo
F78523 te nfigSoL:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
removing SoL interface configuration
from server [dn](FSM-
fsmRmtErrComputePhysicalUnconfigSoL:Ex STAGE:sam:dme:ComputePhysicalU
F78524 ecute nconfigSoL:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Shutting down port(FSM-
fsmRmtErrPortPIoInCompatSfpPresence:Sh STAGE:sam:dme:PortPIoInCompatSf
F78529 utdown pPresence:Shutdown) warning
[FSM:STAGE:REMOTE-ERROR]:
Execute Diagnostic Interrupt(NMI)
for server [dn](FSM-
fsmRmtErrComputePhysicalDiagnosticInterr STAGE:sam:dme:ComputePhysicalDi
F78556 upt:Execute agnosticInterrupt:Execute) warning
[FSM:STAGE:REMOTE-ERROR]:
(FSM-
fsmRmtErrEquipmentChassisDynamicReallo STAGE:sam:dme:EquipmentChassisD
F78574 cation:Config ynamicReallocation:Config) warning
[FSM:STAGE:REMOTE-ERROR]:
Execute KVM Reset for server [dn]
(FSM-
fsmRmtErrComputePhysicalResetKvm:Execu STAGE:sam:dme:ComputePhysicalRe
F78603 te setKvm:Execute) warning

[FSM:STAGE:REMOTE-ERROR]:
Configuring connectivity on
CIMC(FSM-
fsmRmtErrMgmtControllerOnline:BmcConfi STAGE:sam:dme:MgmtControllerOnl
F78609 gureConnLocal ine:BmcConfigureConnLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Configuring connectivity on
CIMC(FSM-
fsmRmtErrMgmtControllerOnline:BmcConfi STAGE:sam:dme:MgmtControllerOnl
F78609 gureConnPeer ine:BmcConfigureConnPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
Configuring fabric-interconnect
connectivity to CIMC(FSM-
fsmRmtErrMgmtControllerOnline:SwConfig STAGE:sam:dme:MgmtControllerOnl
F78609 ureConnLocal ine:SwConfigureConnLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Configuring fabric-interconnect
connectivity to CIMC(FSM-
fsmRmtErrMgmtControllerOnline:SwConfig STAGE:sam:dme:MgmtControllerOnl
F78609 ureConnPeer ine:SwConfigureConnPeer) warning

[FSM:STAGE:REMOTE-ERROR]:
cleaning host entries on local fabric-
interconnect(FSM-
fsmRmtErrComputeRackUnitOffline:Cleanu STAGE:sam:dme:ComputeRackUnitO
F78610 pLocal ffline:CleanupLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
cleaning host entries on peer fabric-
interconnect(FSM-
fsmRmtErrComputeRackUnitOffline:Cleanu STAGE:sam:dme:ComputeRackUnitO
F78610 pPeer ffline:CleanupPeer) warning
[FSM:STAGE:REMOTE-ERROR]:
Unconfiguring fabric-interconnect
connectivity to CIMC of server [id]
(FSM-
fsmRmtErrComputeRackUnitOffline:SwUnco STAGE:sam:dme:ComputeRackUnitO
F78610 nfigureLocal ffline:SwUnconfigureLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
Unconfiguring fabric-interconnect
connectivity to CIMC of server [id]
(FSM-
fsmRmtErrComputeRackUnitOffline:SwUnco STAGE:sam:dme:ComputeRackUnitO
F78610 nfigurePeer ffline:SwUnconfigurePeer) warning
[FSM:STAGE:REMOTE-ERROR]:
setting FI locator led to [adminState]
(FSM-
fsmRmtErrEquipmentLocatorLedSetFiLocato STAGE:sam:dme:EquipmentLocatorL
F78627 rLed:Execute edSetFiLocatorLed:Execute) warning

[FSM:STAGE:REMOTE-ERROR]: VNIC
profile alias configuration on local
fabric(FSM-
STAGE:sam:dme:VnicProfileSetDeplo
F78663 fsmRmtErrVnicProfileSetDeployAlias:Local yAlias:Local) warning

[FSM:STAGE:REMOTE-ERROR]: VNIC
profile alias configuration on peer
fabric(FSM-
STAGE:sam:dme:VnicProfileSetDeplo
F78663 fsmRmtErrVnicProfileSetDeployAlias:Peer yAlias:Peer) warning

[FSM:STAGE:REMOTE-ERROR]:
Configure physical port types on
fabric interconnect [id](FSM-
STAGE:sam:dme:SwPhysConfPhysica
F78679 fsmRmtErrSwPhysConfPhysical:ConfigSwA l:ConfigSwA) warning

[FSM:STAGE:REMOTE-ERROR]:
Configure physical port types on
fabric interconnect [id](FSM-
STAGE:sam:dme:SwPhysConfPhysica
F78679 fsmRmtErrSwPhysConfPhysical:ConfigSwB l:ConfigSwB) warning

[FSM:STAGE:REMOTE-ERROR]:
Performing local port inventory of
switch [id](FSM-
fsmRmtErrSwPhysConfPhysical:PortInventor STAGE:sam:dme:SwPhysConfPhysica
F78679 ySwA l:PortInventorySwA) warning
[FSM:STAGE:REMOTE-ERROR]:
Performing peer port inventory of
switch [id](FSM-
fsmRmtErrSwPhysConfPhysical:PortInventor STAGE:sam:dme:SwPhysConfPhysica
F78679 ySwB l:PortInventorySwB) warning

[FSM:STAGE:REMOTE-ERROR]:
Verifying physical transition on
fabric interconnect [id](FSM-
fsmRmtErrSwPhysConfPhysical:VerifyPhysC STAGE:sam:dme:SwPhysConfPhysica
F78679 onfig l:VerifyPhysConfig) warning

[FSM:STAGE:REMOTE-ERROR]:
external VM management cluster
role configuration on local
fabric(FSM-
STAGE:sam:dme:ExtvmmEpClusterR
F78694 fsmRmtErrExtvmmEpClusterRole:SetLocal ole:SetLocal) warning

[FSM:STAGE:REMOTE-ERROR]:
external VM management cluster
role configuration on peer
fabric(FSM-
STAGE:sam:dme:ExtvmmEpClusterR
F78694 fsmRmtErrExtvmmEpClusterRole:SetPeer ole:SetPeer) warning

[FSM:STAGE:REMOTE-ERROR]: Turn
beacon lights on/off for [dn] on
fabric interconnect [id](FSM-
fsmRmtErrEquipmentBeaconLedIlluminate: STAGE:sam:dme:EquipmentBeaconL
F78702 ExecuteA edIlluminate:ExecuteA) warning

[FSM:STAGE:REMOTE-ERROR]: Turn
beacon lights on/off for [dn] on
fabric interconnect [id](FSM-
fsmRmtErrEquipmentBeaconLedIlluminate: STAGE:sam:dme:EquipmentBeaconL
F78702 ExecuteB edIlluminate:ExecuteB) warning
[FSM:STAGE:REMOTE-ERROR]:
Configure admin speed for [dn]
(FSM-
fsmRmtErrEtherServerIntFIoConfigSpeed:Co STAGE:sam:dme:EtherServerIntFIoC
F78711 nfigure onfigSpeed:Configure) warning
[FSM:STAGE:REMOTE-ERROR]:
clearing pending BIOS image
update(FSM-
fsmRmtErrComputePhysicalUpdateBIOS:Cle STAGE:sam:dme:ComputePhysicalU
F78721 ar pdateBIOS:Clear) warning

[FSM:STAGE:REMOTE-ERROR]:
waiting for pending BIOS image
update to clear(FSM-
fsmRmtErrComputePhysicalUpdateBIOS:Pol STAGE:sam:dme:ComputePhysicalU
F78721 lClearStatus pdateBIOS:PollClearStatus) warning
[FSM:STAGE:REMOTE-ERROR]:
waiting for BIOS update to
complete(FSM-
fsmRmtErrComputePhysicalUpdateBIOS:Pol STAGE:sam:dme:ComputePhysicalU
F78721 lUpdateStatus pdateBIOS:PollUpdateStatus) warning
[FSM:STAGE:REMOTE-ERROR]:
sending BIOS update request to
CIMC(FSM-
fsmRmtErrComputePhysicalUpdateBIOS:Up STAGE:sam:dme:ComputePhysicalU
F78721 dateRequest pdateBIOS:UpdateRequest) warning

[FSM:STAGE:REMOTE-ERROR]:
activating BIOS image(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Ac STAGE:sam:dme:ComputePhysicalAc
F78722 tivate tivateBIOS:Activate) warning
[FSM:STAGE:REMOTE-ERROR]:
clearing pending BIOS image
activate(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Cl STAGE:sam:dme:ComputePhysicalAc
F78722 ear tivateBIOS:Clear) warning

[FSM:STAGE:REMOTE-ERROR]:
waiting for BIOS activate(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Po STAGE:sam:dme:ComputePhysicalAc
F78722 llActivateStatus tivateBIOS:PollActivateStatus) warning

[FSM:STAGE:REMOTE-ERROR]:
waiting for pending BIOS image
activate to clear(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Po STAGE:sam:dme:ComputePhysicalAc
F78722 llClearStatus tivateBIOS:PollClearStatus) warning

[FSM:STAGE:REMOTE-ERROR]:
Power off the server(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Po STAGE:sam:dme:ComputePhysicalAc
F78722 werOff tivateBIOS:PowerOff) warning

[FSM:STAGE:REMOTE-ERROR]:
power on the server(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Po STAGE:sam:dme:ComputePhysicalAc
F78722 werOn tivateBIOS:PowerOn) warning

[FSM:STAGE:REMOTE-ERROR]:
updating BIOS tokens(FSM-
fsmRmtErrComputePhysicalActivateBIOS:Up STAGE:sam:dme:ComputePhysicalAc
F78722 dateTokens tivateBIOS:UpdateTokens) warning

[FSM:FAILED]:
sam:dme:EquipmentIOCardFePresen
F999445 fsmFailEquipmentIOCardFePresence ce critical
[FSM:FAILED]:
F999446 fsmFailEquipmentIOCardFeConn sam:dme:EquipmentIOCardFeConn critical

[FSM:FAILED]:
sam:dme:EquipmentChassisRemove
F999447 fsmFailEquipmentChassisRemoveChassis Chassis critical
[FSM:FAILED]:
sam:dme:EquipmentLocatorLedSetL
F999448 fsmFailEquipmentLocatorLedSetLocatorLed ocatorLed critical

[FSM:FAILED]:
sam:dme:MgmtControllerExtMgmtIf
F999558 fsmFailMgmtControllerExtMgmtIfConfig Config critical

[FSM:FAILED]:
sam:dme:FabricComputeSlotEpIden
F999559 fsmFailFabricComputeSlotEpIdentify tify critical
[FSM:FAILED]:
F999560 fsmFailComputeBladeDiscover sam:dme:ComputeBladeDiscover critical
[FSM:FAILED]:
F999560 fsmFailComputeRackUnitDiscover sam:dme:ComputeRackUnitDiscover critical

[FSM:FAILED]:
sam:dme:EquipmentChassisPsuPolic
F999573 fsmFailEquipmentChassisPsuPolicyConfig yConfig critical

[FSM:FAILED]:
sam:dme:AdaptorHostFcIfResetFcPe
F999574 fsmFailAdaptorHostFcIfResetFcPersBinding rsBinding critical
[FSM:FAILED]:
F999575 fsmFailComputeBladeDiag sam:dme:ComputeBladeDiag critical

[FSM:FAILED]:
sam:dme:FabricLanCloudSwitchMod
F999579 fsmFailFabricLanCloudSwitchMode e critical

[FSM:FAILED]:
sam:dme:FabricSanCloudSwitchMod
F999579 fsmFailFabricSanCloudSwitchMode e critical
[FSM:FAILED]:
F999616 fsmFailCommSvcEpUpdateSvcEp sam:dme:CommSvcEpUpdateSvcEp critical
[FSM:FAILED]:
sam:dme:CommSvcEpRestartWebSv
F999617 fsmFailCommSvcEpRestartWebSvc c critical
[FSM:FAILED]:
F999619 fsmFailAaaEpUpdateEp sam:dme:AaaEpUpdateEp critical
[FSM:FAILED]:
F999619 fsmFailPkiEpUpdateEp sam:dme:PkiEpUpdateEp critical

[FSM:FAILED]:
sam:dme:StatsCollectionPolicyUpdat
F999619 fsmFailStatsCollectionPolicyUpdateEp eEp critical
[FSM:FAILED]:
F999620 fsmFailAaaRealmUpdateRealm sam:dme:AaaRealmUpdateRealm critical
[FSM:FAILED]:
F999621 fsmFailAaaUserEpUpdateUserEp sam:dme:AaaUserEpUpdateUserEp critical
[FSM:FAILED]:
F999640 fsmFailSysfileMutationSingle sam:dme:SysfileMutationSingle critical
[FSM:FAILED]:
F999641 fsmFailSysfileMutationGlobal sam:dme:SysfileMutationGlobal critical

[FSM:FAILED]:
fsmFailSysdebugManualCoreFileExportTarg sam:dme:SysdebugManualCoreFileE
F999644 etExport xportTargetExport critical
[FSM:FAILED]:
F999645 fsmFailFabricEpMgrConfigure sam:dme:FabricEpMgrConfigure critical
[FSM:FAILED]:
F999645 fsmFailLsServerConfigure sam:dme:LsServerConfigure critical

[FSM:FAILED]:
fsmFailSysdebugAutoCoreFileExportTargetC sam:dme:SysdebugAutoCoreFileExp
F999645 onfigure ortTargetConfigure critical

[FSM:FAILED]:
fsmFailSysdebugLogControlEpLogControlPer sam:dme:SysdebugLogControlEpLog
F999646 sist ControlPersist critical
[FSM:FAILED]:
F999674 fsmFailEpqosDefinitionDeploy sam:dme:EpqosDefinitionDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwAccessDomainDeploy sam:dme:SwAccessDomainDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwEthLanBorderDeploy sam:dme:SwEthLanBorderDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwEthMonDeploy sam:dme:SwEthMonDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwFcMonDeploy sam:dme:SwFcMonDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwFcSanBorderDeploy sam:dme:SwFcSanBorderDeploy critical
[FSM:FAILED]:
F999674 fsmFailSwUtilityDomainDeploy sam:dme:SwUtilityDomainDeploy critical
[FSM:FAILED]:
F999674 fsmFailVnicProfileSetDeploy sam:dme:VnicProfileSetDeploy critical
[FSM:FAILED]:
F999681 fsmFailSyntheticFsObjCreate sam:dme:SyntheticFsObjCreate critical

[FSM:FAILED]:
sam:dme:FirmwareDistributableDele
F999691 fsmFailFirmwareDistributableDelete te critical
[FSM:FAILED]:
F999691 fsmFailFirmwareImageDelete sam:dme:FirmwareImageDelete critical

[FSM:FAILED]:
sam:dme:MgmtControllerUpdateSwi
F999693 fsmFailMgmtControllerUpdateSwitch tch critical
[FSM:FAILED]:
sam:dme:MgmtControllerUpdateIO
F999694 fsmFailMgmtControllerUpdateIOM M critical
[FSM:FAILED]:
sam:dme:MgmtControllerActivateIO
F999695 fsmFailMgmtControllerActivateIOM M critical

[FSM:FAILED]:
sam:dme:MgmtControllerUpdateBM
F999696 fsmFailMgmtControllerUpdateBMC C critical

[FSM:FAILED]:
sam:dme:MgmtControllerActivateB
F999697 fsmFailMgmtControllerActivateBMC MC critical
[FSM:FAILED]:
sam:dme:CallhomeEpConfigCallhom
F999710 fsmFailCallhomeEpConfigCallhome e critical
[FSM:FAILED]:
sam:dme:MgmtIfSwMgmtOobIfCon
F999713 fsmFailMgmtIfSwMgmtOobIfConfig fig critical

[FSM:FAILED]:
sam:dme:MgmtIfSwMgmtInbandIfC
F999714 fsmFailMgmtIfSwMgmtInbandIfConfig onfig critical
[FSM:FAILED]:
F999719 fsmFailMgmtIfVirtualIfConfig sam:dme:MgmtIfVirtualIfConfig critical
[FSM:FAILED]:
F999720 fsmFailMgmtIfEnableVip sam:dme:MgmtIfEnableVip critical
[FSM:FAILED]:
F999721 fsmFailMgmtIfDisableVip sam:dme:MgmtIfDisableVip critical
[FSM:FAILED]:
F999722 fsmFailMgmtIfEnableHA sam:dme:MgmtIfEnableHA critical
[FSM:FAILED]:
F999723 fsmFailMgmtBackupBackup sam:dme:MgmtBackupBackup critical
[FSM:FAILED]:
F999724 fsmFailMgmtImporterImport sam:dme:MgmtImporterImport critical

[FSM:FAILED]:
sam:dme:QosclassDefinitionConfigGl
F999785 fsmFailQosclassDefinitionConfigGlobalQoS obalQoS critical

[FSM:FAILED]:
sam:dme:EpqosDefinitionDelTaskRe
F999790 fsmFailEpqosDefinitionDelTaskRemove move critical
[FSM:FAILED]:
sam:dme:EquipmentIOCardResetCm
F999843 fsmFailEquipmentIOCardResetCmc c critical

[FSM:FAILED]:
sam:dme:MgmtControllerUpdateUC
F999855 fsmFailMgmtControllerUpdateUCSManager SManager critical
[FSM:FAILED]:
F999863 fsmFailMgmtControllerSysConfig sam:dme:MgmtControllerSysConfig critical
[FSM:FAILED]:
F999892 fsmFailAdaptorExtEthIfPathReset sam:dme:AdaptorExtEthIfPathReset critical
[FSM:FAILED]:
sam:dme:AdaptorHostEthIfCircuitRe
F999897 fsmFailAdaptorHostEthIfCircuitReset set critical

[FSM:FAILED]:
sam:dme:AdaptorHostFcIfCircuitRes
F999897 fsmFailAdaptorHostFcIfCircuitReset et critical
[FSM:FAILED]:
sam:dme:ExtvmmMasterExtKeyCon
F999919 fsmFailExtvmmMasterExtKeyConfig fig critical
[FSM:FAILED]:
F999919 fsmFailExtvmmProviderConfig sam:dme:ExtvmmProviderConfig critical
[FSM:FAILED]:
F999919 fsmFailVmLifeCyclePolicyConfig sam:dme:VmLifeCyclePolicyConfig critical
[FSM:FAILED]:
sam:dme:ExtvmmKeyStoreCertInstal
F999920 fsmFailExtvmmKeyStoreCertInstall l critical

[FSM:FAILED]:
fsmFailExtvmmSwitchDelTaskRemoveProvid sam:dme:ExtvmmSwitchDelTaskRe
F999921 er moveProvider critical
[FSM:FAILED]:
F999944 fsmFailCapabilityUpdaterUpdater sam:dme:CapabilityUpdaterUpdater critical

[FSM:FAILED]:
fsmFailComputeBladeUpdateBoardControll sam:dme:ComputeBladeUpdateBoar
F999970 er dController critical

[FSM:FAILED]:
sam:dme:CapabilityCatalogueDeploy
F999971 fsmFailCapabilityCatalogueDeployCatalogue Catalogue critical
[FSM:FAILED]:
F999982 fsmFailEquipmentFexRemoveFex sam:dme:EquipmentFexRemoveFex critical

[FSM:FAILED]:
fsmFailEquipmentLocatorLedSetFeLocatorL sam:dme:EquipmentLocatorLedSetF
F999983 ed eLocatorLed critical
[FSM:FAILED]:
sam:dme:ComputePhysicalPowerCa
F999984 fsmFailComputePhysicalPowerCap p critical

[FSM:FAILED]:
sam:dme:EquipmentChassisPowerC
F999984 fsmFailEquipmentChassisPowerCap ap critical

[FSM:FAILED]:
sam:dme:EquipmentIOCardMuxOffli
F999985 fsmFailEquipmentIOCardMuxOffline ne critical
[FSM:FAILED]:
F1000013 fsmFailComputePhysicalAssociate sam:dme:ComputePhysicalAssociate critical

[FSM:FAILED]:
sam:dme:ComputePhysicalDisassoci
F1000014 fsmFailComputePhysicalDisassociate ate critical
[FSM:FAILED]:
sam:dme:ComputePhysicalDecommi
F1000016 fsmFailComputePhysicalDecommission ssion critical

[FSM:FAILED]:
sam:dme:ComputePhysicalSoftShutd
F1000017 fsmFailComputePhysicalSoftShutdown own critical

[FSM:FAILED]:
sam:dme:ComputePhysicalHardShut
F1000018 fsmFailComputePhysicalHardShutdown down critical
[FSM:FAILED]:
F1000019 fsmFailComputePhysicalTurnup sam:dme:ComputePhysicalTurnup critical

[FSM:FAILED]:
sam:dme:ComputePhysicalPowercyc
F1000020 fsmFailComputePhysicalPowercycle le critical
[FSM:FAILED]:
sam:dme:ComputePhysicalHardrese
F1000021 fsmFailComputePhysicalHardreset t critical
[FSM:FAILED]:
F1000022 fsmFailComputePhysicalSoftreset sam:dme:ComputePhysicalSoftreset critical

[FSM:FAILED]:
sam:dme:ComputePhysicalSwConnU
F1000023 fsmFailComputePhysicalSwConnUpd pd critical

[FSM:FAILED]:
sam:dme:ComputePhysicalBiosReco
F1000024 fsmFailComputePhysicalBiosRecovery very critical

[FSM:FAILED]:
sam:dme:ComputePhysicalCmosRes
F1000026 fsmFailComputePhysicalCmosReset et critical
[FSM:FAILED]:
sam:dme:ComputePhysicalResetBm
F1000027 fsmFailComputePhysicalResetBmc c critical
[FSM:FAILED]:
sam:dme:EquipmentIOCardResetIo
F1000028 fsmFailEquipmentIOCardResetIom m critical

[FSM:FAILED]:
sam:dme:ComputePhysicalUpdateEx
F1000048 fsmFailComputePhysicalUpdateExtUsers tUsers critical

[FSM:FAILED]:
sam:dme:SysdebugTechSupportIniti
F1000052 fsmFailSysdebugTechSupportInitiate ate critical

[FSM:FAILED]:
fsmFailSysdebugTechSupportDeleteTechSu sam:dme:SysdebugTechSupportDele
F1000053 pFile teTechSupFile critical

[FSM:FAILED]:
sam:dme:FirmwareDownloaderDow
F1000054 fsmFailFirmwareDownloaderDownload nload critical
[FSM:FAILED]:
sam:dme:LicenseDownloaderDownl
F1000054 fsmFailLicenseDownloaderDownload oad critical
[FSM:FAILED]:
F1000054 fsmFailSysdebugCoreDownload sam:dme:SysdebugCoreDownload critical

[FSM:FAILED]:
sam:dme:SysdebugTechSupportDow
F1000054 fsmFailSysdebugTechSupportDownload nload critical

[FSM:FAILED]:
sam:dme:ComputePhysicalUpdateA
F1000083 fsmFailComputePhysicalUpdateAdaptor daptor critical

[FSM:FAILED]:
sam:dme:ComputePhysicalActivateA
F1000084 fsmFailComputePhysicalActivateAdaptor daptor critical

[FSM:FAILED]:
sam:dme:CapabilityCatalogueActivat
F1000085 fsmFailCapabilityCatalogueActivateCatalog eCatalog critical

[FSM:FAILED]:
fsmFailCapabilityMgmtExtensionActivateMg sam:dme:CapabilityMgmtExtension
F1000086 mtExt ActivateMgmtExt critical
[FSM:FAILED]:
F1000091 fsmFailLicenseFileInstall sam:dme:LicenseFileInstall critical
[FSM:FAILED]:
F1000092 fsmFailLicenseFileClear sam:dme:LicenseFileClear critical

[FSM:FAILED]:
sam:dme:LicenseInstanceUpdateFle
F1000093 fsmFailLicenseInstanceUpdateFlexlm xlm critical
[FSM:FAILED]:
F1000123 fsmFailComputePhysicalConfigSoL sam:dme:ComputePhysicalConfigSoL critical

[FSM:FAILED]:
sam:dme:ComputePhysicalUnconfig
F1000124 fsmFailComputePhysicalUnconfigSoL SoL critical

[FSM:FAILED]:
sam:dme:PortPIoInCompatSfpPrese
F1000129 fsmFailPortPIoInCompatSfpPresence nce critical

[FSM:FAILED]:
sam:dme:ComputePhysicalDiagnosti
F1000156 fsmFailComputePhysicalDiagnosticInterrupt cInterrupt critical

[FSM:FAILED]:
fsmFailEquipmentChassisDynamicReallocati sam:dme:EquipmentChassisDynamic
F1000174 on Reallocation critical
[FSM:FAILED]:
sam:dme:ComputePhysicalResetKv
F1000203 fsmFailComputePhysicalResetKvm m critical
[FSM:FAILED]:
F1000209 fsmFailMgmtControllerOnline sam:dme:MgmtControllerOnline critical
[FSM:FAILED]:
F1000210 fsmFailComputeRackUnitOffline sam:dme:ComputeRackUnitOffline critical

[FSM:FAILED]:
fsmFailEquipmentLocatorLedSetFiLocatorLe sam:dme:EquipmentLocatorLedSetFi
F1000227 d LocatorLed critical
[FSM:FAILED]:
F1000263 fsmFailVnicProfileSetDeployAlias sam:dme:VnicProfileSetDeployAlias critical
[FSM:FAILED]:
F1000279 fsmFailSwPhysConfPhysical sam:dme:SwPhysConfPhysical critical
[FSM:FAILED]:
F1000294 fsmFailExtvmmEpClusterRole sam:dme:ExtvmmEpClusterRole critical

[FSM:FAILED]:
sam:dme:EquipmentBeaconLedIllum
F1000302 fsmFailEquipmentBeaconLedIlluminate inate critical

[FSM:FAILED]:
sam:dme:EtherServerIntFIoConfigSp
F1000311 fsmFailEtherServerIntFIoConfigSpeed eed critical

[FSM:FAILED]:
sam:dme:ComputePhysicalUpdateBI
F1000321 fsmFailComputePhysicalUpdateBIOS OS critical

[FSM:FAILED]:
sam:dme:ComputePhysicalActivateB
F1000322 fsmFailComputePhysicalActivateBIOS IOS critical
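The fsmRmtErr and fsmFail codes above (and the cause / affectedMoDn patterns that follow) correspond to fault instances that UCS Manager raises at runtime as faultInst objects. As a minimal sketch only, assuming the standard UCS Manager XML API (aaaLogin and configResolveClass on the faultInst class) and using placeholder hostname and credentials, a script along these lines could list the active faults so their codes, causes, and affected DNs can be looked up in this reference:

    # Sketch: list active UCS Manager faults via the XML API (placeholder host/credentials).
    import xml.etree.ElementTree as ET
    import requests

    UCSM_URL = "https://ucsm.example.com/nuova"   # placeholder UCS Manager XML API endpoint
    USERNAME = "admin"                            # placeholder credentials
    PASSWORD = "password"

    def call(body):
        # The UCS Manager XML API is a single HTTP POST endpoint; verify=False is for lab use only.
        resp = requests.post(UCSM_URL, data=body, verify=False, timeout=30)
        resp.raise_for_status()
        return ET.fromstring(resp.text)

    # Log in and keep the session cookie.
    login = call('<aaaLogin inName="%s" inPassword="%s" />' % (USERNAME, PASSWORD))
    cookie = login.get("outCookie")

    try:
        # Resolve every active faultInst; code/severity/cause/dn map onto the columns of this table.
        reply = call('<configResolveClass cookie="%s" classId="faultInst" inHierarchical="false" />' % cookie)
        for fault in reply.iter("faultInst"):
            print(fault.get("code"), fault.get("severity"), fault.get("cause"), fault.get("dn"))
    finally:
        call('<aaaLogout inCookie="%s" />' % cookie)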
DP MR1-Severity cause affectedMoDn

server-moved fabric/server/chassis-[chassisId]/slot-[slotId]

server-identification-problem fabric/server/chassis-[chassisId]/slot-[slotId]

configuration-failed org-[name]/tier-[name]/ls-[name]/ether-[name]

configuration-failed org-[name]/tier-[name]/ls-[name]/fc-[name]

equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/cpu-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

thermal-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

voltage-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

voltage-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

voltage-problem sys/chassis-[id]/blade-[slotId]/board/cpu-[id]
equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/storage-[type]-[id]/disk-[id]

capacity-exceeded sys/switch-[id]/stor-part-[name]

capacity-exceeded sys/switch-[id]/stor-part-[name]

equipment-degraded sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
voltage-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]
voltage-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]
voltage-problem sys/chassis-[id]/blade-[slotId]/board/memarray-[id]

unidentifiable-fru sys/chassis-[id]/blade-[slotId]/adaptor-[id]

equipment-missing sys/chassis-[id]/blade-[slotId]/adaptor-[id]

connectivity-problem sys/chassis-[id]/blade-[slotId]/adaptor-[id]

F0207 link-down
F0209 link-down

link-down sys/chassis-[id]/slot-[id]/[type]/port-[portId]

port-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]

port-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]

port-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]

F0282 operational-state-down
link-down sys/chassis-[id]/blade-[slotId]/fabric-[switchId]/vc-[id]

equipment-inoperable sys/switch-[id]

link-down sys/mgmt-entity-[id]

link-down sys/mgmt-entity-[id]

insufficient-resources sys/chassis-[id]/blade-[slotId]/adaptor-[id]/dcxns-[switchId]
F0305 insufficiently-equipped
F0306 identity-unestablishable

power-problem sys/chassis-[id]/blade-[slotId]/board

F0311 power-problem
F0312 thermal-problem
F0313 equipment-inoperable
F0314 discovery-failed
F0315 association-failed
F0317 equipment-inoperable
F0318 equipment-missing
F0319 equipment-missing
F0320 identity-unestablishable
F0321 equipment-inaccessible
F0322 equipment-inaccessible

server-failed org-[name]/tier-[name]/ls-[name]

discovery-failed org-[name]/tier-[name]/ls-[name]
configuration-failure org-[name]/tier-[name]/ls-[name]

maintenance-failed org-[name]/tier-[name]/ls-[name]

equipment-removed org-[name]/tier-[name]/ls-[name]

server-inaccessible org-[name]/tier-[name]/ls-[name]
association-failed org-[name]/tier-[name]/ls-[name]

unassociated org-[name]/tier-[name]/ls-[name]

server-failed org-[name]/tier-[name]/ls-[name]

satellite-connection-absent sys/chassis-[id]/slot-[id]/[type]/port-[portId]

satellite-mis-connected sys/chassis-[id]/slot-[id]/[type]/port-[portId]
power-problem sys/rack-unit-[id]/psu-[id]

equipment-degraded sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]

equipment-inoperable sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]
equipment-inoperable sys/rack-unit-[id]/psu-[id]

equipment-removed sys/chassis-[id]/slot-[id]

equipment-missing sys/rack-unit-[id]/fan-module-[tray]-[id]

equipment-missing sys/rack-unit-[id]/psu-[id]
thermal-problem sys/chassis-[id]/slot-[id]
thermal-problem sys/rack-unit-[id]/fan-module-[tray]-[id]

thermal-problem sys/rack-unit-[id]/psu-[id]
thermal-problem sys/rack-unit-[id]/fan-module-[tray]-[id]

thermal-problem sys/rack-unit-[id]/psu-[id]
thermal-problem sys/rack-unit-[id]/fan-module-[tray]-[id]

thermal-problem sys/rack-unit-[id]/psu-[id]
voltage-problem sys/rack-unit-[id]/psu-[id]

voltage-problem sys/rack-unit-[id]/psu-[id]

voltage-problem sys/rack-unit-[id]/psu-[id]

performance-problem sys/rack-unit-[id]/psu-[id]

performance-problem sys/rack-unit-[id]/psu-[id]

performance-problem sys/rack-unit-[id]/psu-[id]

performance-problem sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]
performance-problem sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]

performance-problem sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]

firmware-upgrade-problem sys/chassis-[id]/slot-[id]

unsupported-connectivity-configuration sys/chassis-[id]
equipment-unacknowledged sys/chassis-[id]
unsupported-connectivity-configuration sys/chassis-[id]/slot-[id]
equipment-unacknowledged sys/chassis-[id]/slot-[id]

equipment-disconnected sys/chassis-[id]/slot-[id]

fru-problem sys/chassis-[id]

fru-problem sys/chassis-[id]/slot-[id]

fru-problem sys/rack-unit-[id]/fan-module-[tray]-[id]

fru-problem sys/rack-unit-[id]/psu-[id]
power-problem sys/chassis-[id]

thermal-problem sys/chassis-[id]
thermal-problem sys/chassis-[id]
thermal-problem sys/chassis-[id]

voltage-problem sys/chassis-[id]/blade-[slotId]/board

voltage-problem sys/chassis-[id]/blade-[slotId]/board
election-failure sys/mgmt-entity-[id]

ha-not-ready sys/mgmt-entity-[id]

version-incompatible sys/mgmt-entity-[id]

equipment-missing sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]
auto-firmware-upgrade sys/chassis-[id]/slot-[id]

image-deleted org-[name]/pack-image-[hwVendor]|[hwModel]|[type]

unexpected-number-of-links sys/chassis-[id]/slot-[id]/[type]/port-[portId]

management-services-failure sys/mgmt-entity-[id]

management-services-unresponsive sys/mgmt-entity-[id]
equipment-inoperable sys/chassis-[id]

interface-failed sys/chassis-[id]/blade-[slotId]/diag/port-[portId]

cmc-vif-down sys/chassis-[id]/blade-[slotId]/fabric-[switchId]/vc-[id]
log-capacity sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt/log-[type]-[id]
log-capacity sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt/log-[type]-[id]
log-capacity sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt/log-[type]-[id]

empty-pool org-[name]/compute-pool-[name]

empty-pool org-[name]/uuid-pool-[name]

empty-pool org-[name]/ip-pool-[name]

empty-pool org-[name]/mac-pool-[name]

image-unusable sys/chassis-[id]/blade-[slotId]/bios/fw-updatable
image-cannot-boot sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/fw-boot-def/bootunit-[type]

empty-pool org-[name]/wwn-pool-[name]

equipment-inaccessible sys/chassis-[id]/slot-[id]

vif-down sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]/vif-[id]
equipment-degraded sys/rack-unit-[id]/fan-module-[tray]-[id]

equipment-problem sys/chassis-[id]/slot-[id]

performance-problem sys/rack-unit-[id]/fan-module-[tray]-[id]/fan-[id]

identity-unestablishable sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]
F0517 equipment-problem

equipment-offline sys/rack-unit-[id]/psu-[id]

equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/storage-[type]-[id]/raid-battery
file-transfer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt/log-[type]-[id]

equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/rtc-battery
thermal-problem sys/chassis-[id]/blade-[slotId]/board/sensor-unit-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/sensor-unit-[id]
thermal-problem sys/chassis-[id]/blade-[slotId]/board/sensor-unit-[id]

thermal-problem sys/chassis-[id]/blade-[slotId]/board/iohub

thermal-problem sys/chassis-[id]/blade-[slotId]/board/iohub

thermal-problem sys/chassis-[id]/blade-[slotId]/board/iohub

identity-unestablishable sys/chassis-[id]
limit-reached sys/switch-[id]/vlan-port-ns

primary-vlan-missing-isolated fabric/eth-estc/[id]/net-[name]

empty-pin-group fabric/lan/lan-pin-group-[name]

empty-pin-group fabric/san/san-pin-group-[name]

link-misconnected sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]
link-misconnected sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
power-cap-fail sys/chassis-[id]/blade-[slotId]/budget

power-cap-fail sys/chassis-[id]/blade-[slotId]/budget

power-cap-fail sys/chassis-[id]/blade-[slotId]/budget

power-cap-fail sys/power-ep/group-[name]

power-cap-fail sys/power-ep/group-[name]
license-graceperiod-entered sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-10days sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-30days sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-60days sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-90days sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-119days sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
license-graceperiod-expired sys/license/feature-[name]-[vendor]-[version]/inst-[scope]

license-file-uninstallable sys/license/file-[scope]:[id]
license-file-not-deleted sys/license/file-[scope]:[id]

link-misconnected sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]

assignment-failed org-[name]/tier-[name]/ls-[name]/

equipment-problem sys/fex-[id]

fru-problem sys/fex-[id]

link-missing sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
unsupported-transceiver sys/chassis-[id]/slot-[id]/[type]/port-[portId]

link-missing sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
membership-down fabric/lan/[id]/pc-[portId]/ep-slot-[slotId]-port-[portId]
membership-down fabric/san/[id]/pc-[portId]/ep-slot-[slotId]-port-[portId]
thermal-problem sys/chassis-[id]/slot-[id]
thermal-problem sys/chassis-[id]/slot-[id]
thermal-problem sys/chassis-[id]/slot-[id]

equipment-inoperable sys/chassis-[id]

incompatible-speed fabric/san/[id]/pc-[portId]/ep-slot-[slotId]-port-[portId]

incompatible-speed fabric/san/[id]/pc-[portId]
mgmtif-down sys/switch-[id]/extmgmt-intf

group-cap-insufficient sys/power-ep/group-[name]/ch-member-[id]

old-chassis-component-firmware sys/power-ep/group-[name]/ch-member-[id]

psu-insufficient sys/power-ep/group-[name]/ch-member-[id]

psu-redundancy-fail sys/power-ep/group-[name]/ch-member-[id]

power-consumption-hit-limit sys/chassis-[id]/blade-[slotId]/budget

tftp-server-error sys/sysdebug/file-export

F0757 config-error
psu-failure sys/chassis-[id]/blade-[slotId]/budget

no-ack-from-bios sys/chassis-[id]/blade-[slotId]/budget

no-cap-fail org-[name]/power-policy-[name]

new-link sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
link-missing sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]

equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/disk-[id]

membership-down fabric/eth-estc/[id]/pc-[portId]/ep-slot-[slotId]-port-[portId]
identity-unestablishable sys/fex-[id]

equipment-inoperable sys/rack-unit-[id]/fan-module-[tray]-[id]

non-existent-scheduler org-[name]/maint-[name]

vsan-misconfigured fabric/fc-estc/[id]/net-[name]

vsan-misconfigured fabric/fc-estc/[id]/phys-fc-slot-[slotId]-port-[portId]/vsan-[id]

old-firmware sys/chassis-[id]/blade-[slotId]/budget
identity-unestablishable sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

empty-pool org-[name]/iqn-pool-[name]

membership-down fabric/server/sw-[id]/pc-[portId]/ep-slot-[slotId]-port-[portId]

config-error fabric/[id]
vlan-misconfigured fabric/eth-estc/[id]/net-[name]

834 interface-misconfigured

missing-primary-vlan fabric/lan/[id]/phys-slot-[slotId]-port-[portId]

missing-primary-vlan fabric/lan/[id]/pc-[portId]

pinning-mismatch org-[name]/tier-[name]/ls-[name]/ether-[name]
pinning-misconfig org-[name]/tier-[name]/ls-[name]/ether-[name]

equipment-disabled sys/chassis-[id]/blade-[slotId]/board/cpu-[id]

equipment-inoperable sys/chassis-[id]/blade-[slotId]/board/storage-[type]-[id]/lun-[id]

equipment-disabled sys/chassis-[id]/blade-[slotId]/board/memarray-[id]/mem-[id]

activation-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/fw-boot-def/bootunit-[type]

858 operational-state-down
device-shared-storage-error sys/mgmt-entity-[id]

device-shared-storage-error sys/mgmt-entity-[id]

device-shared-storage-error sys/mgmt-entity-[id]

ha-ssh-keys-mismatched sys/mgmt-entity-[id]

ucsm-process-failure sys/mgmt-entity-[id]/[name]

power-problem sys/chassis-[id]/blade-[slotId]/board

thermal-problem sys/chassis-[id]/blade-[slotId]/board
vif-down vmm/computeEp-[uuid]/nic-[name]/sw-[phSwitchId]vif-[vifId]

equipment-offline sys/rack-unit-[id]/psu-[id]

power-problem sys/rack-unit-[id]/psu-[id]

power-problem sys/rack-unit-[id]/psu-[id]

power-down sys/switch-[id]/slot-[id]

inventory-failed sys/switch-[id]
unidentifiable-fru sys/chassis-[id]/blade-[slotId]/adaptor-[id]/adaptor-extn-[id]
equipment-missing sys/chassis-[id]/blade-[slotId]/adaptor-[id]/adaptor-extn-[id]

fex-unsupported sys/fex-[id]

configuration-failed org-[name]/tier-[name]/ls-[name]/iscsi-[name]

check-license-failed sys/chassis-[id]/slot-[id]

identify-failed sys/chassis-[id]/slot-[id]
configure-end-point-failed sys/chassis-[id]/slot-[id]

configure-sw-mgmt-end-point-failed sys/chassis-[id]/slot-[id]

configure-vif-ns-failed sys/chassis-[id]/slot-[id]

discover-chassis-failed sys/chassis-[id]/slot-[id]

enable-chassis-failed sys/chassis-[id]/slot-[id]

disable-end-point-failed sys/chassis-[id]

un-identify-local-failed sys/chassis-[id]

un-identify-peer-failed sys/chassis-[id]

wait-failed sys/chassis-[id]
decomission-failed sys/chassis-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led

primary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

secondary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

execute-local-failed fabric/server/chassis-[chassisId]/slot-[slotId]

execute-peer-failed fabric/server/chassis-[chassisId]/slot-[slotId]

bios-post-completion-failed sys/chassis-[id]/blade-[slotId]

blade-boot-pnuos-failed sys/chassis-[id]/blade-[slotId]

blade-boot-wait-failed sys/chassis-[id]/blade-[slotId]
blade-power-on-failed sys/chassis-[id]/blade-[slotId]

blade-read-smbios-failed sys/chassis-[id]/blade-[slotId]

bmc-config-pnuos-failed sys/chassis-[id]/blade-[slotId]

bmc-inventory-failed sys/chassis-[id]/blade-[slotId]

bmc-pre-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]
bmc-pre-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

bmc-presence-failed sys/chassis-[id]/blade-[slotId]

bmc-shutdown-discovered-failed sys/chassis-[id]/blade-[slotId]
config-fe-local-failed sys/chassis-[id]/blade-[slotId]

config-fe-peer-failed sys/chassis-[id]/blade-[slotId]

config-user-access-failed sys/chassis-[id]/blade-[slotId]

handle-pooling-failed sys/chassis-[id]/blade-[slotId]

nic-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

nic-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

nic-presence-local-failed sys/chassis-[id]/blade-[slotId]

nic-presence-peer-failed sys/chassis-[id]/blade-[slotId]
nic-unconfig-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

pnuoscatalog-failed sys/chassis-[id]/blade-[slotId]

pnuosident-failed sys/chassis-[id]/blade-[slotId]

pnuosinventory-failed sys/chassis-[id]/blade-[slotId]

pnuospolicy-failed sys/chassis-[id]/blade-[slotId]

pnuosscrub-failed sys/chassis-[id]/blade-[slotId]

pnuosself-test-failed sys/chassis-[id]/blade-[slotId]
pre-sanitize-failed sys/chassis-[id]/blade-[slotId]

sanitize-failed sys/chassis-[id]/blade-[slotId]

setup-vmedia-local-failed sys/chassis-[id]/blade-[slotId]

setup-vmedia-peer-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-disable-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-enable-failed sys/chassis-[id]/blade-[slotId]

sw-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

sw-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]
sw-unconfig-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

teardown-vmedia-local-failed sys/chassis-[id]/blade-[slotId]

teardown-vmedia-peer-failed sys/chassis-[id]/blade-[slotId]

hag-connect-failed sys/chassis-[id]/blade-[slotId]

hag-disconnect-failed sys/chassis-[id]/blade-[slotId]

serial-debug-connect-failed sys/chassis-[id]/blade-[slotId]
serial-debug-disconnect-failed sys/chassis-[id]/blade-[slotId]

bios-post-completion-failed sys/rack-unit-[id]

bmc-config-pnuos-failed sys/rack-unit-[id]

bmc-configure-conn-local-failed sys/rack-unit-[id]
bmc-configure-conn-peer-failed sys/rack-unit-[id]

bmc-inventory-failed sys/rack-unit-[id]

bmc-preconfig-pnuoslocal-failed sys/rack-unit-[id]
bmc-preconfig-pnuospeer-failed sys/rack-unit-[id]

bmc-presence-failed sys/rack-unit-[id]
bmc-shutdown-discovered-failed sys/rack-unit-[id]

bmc-unconfig-pnuos-failed sys/rack-unit-[id]

boot-pnuos-failed sys/rack-unit-[id]

boot-wait-failed sys/rack-unit-[id]

config-discovery-mode-failed sys/rack-unit-[id]

config-niv-mode-failed sys/rack-unit-[id]

config-user-access-failed sys/rack-unit-[id]

handle-pooling-failed sys/rack-unit-[id]

nic-inventory-local-failed sys/rack-unit-[id]
nic-inventory-peer-failed sys/rack-unit-[id]

pnuoscatalog-failed sys/rack-unit-[id]

pnuosconn-status-failed sys/rack-unit-[id]

pnuosconnectivity-failed sys/rack-unit-[id]

pnuosident-failed sys/rack-unit-[id]

pnuosinventory-failed sys/rack-unit-[id]

pnuospolicy-failed sys/rack-unit-[id]

pnuosscrub-failed sys/rack-unit-[id]

pnuosself-test-failed sys/rack-unit-[id]
pre-sanitize-failed sys/rack-unit-[id]

read-smbios-failed sys/rack-unit-[id]

sanitize-failed sys/rack-unit-[id]

sol-redirect-disable-failed sys/rack-unit-[id]

sol-redirect-enable-failed sys/rack-unit-[id]

sw-config-pnuoslocal-failed sys/rack-unit-[id]

sw-config-pnuospeer-failed sys/rack-unit-[id]

sw-config-port-niv-local-failed sys/rack-unit-[id]

sw-config-port-niv-peer-failed sys/rack-unit-[id]
sw-configure-conn-local-failed sys/rack-unit-[id]

sw-configure-conn-peer-failed sys/rack-unit-[id]

sw-pnuosconnectivity-local-failed sys/rack-unit-[id]
sw-pnuosconnectivity-peer-failed sys/rack-unit-[id]
sw-unconfig-port-niv-local-failed sys/rack-unit-[id]
sw-unconfig-port-niv-peer-failed sys/rack-unit-[id]

hag-connect-failed sys/rack-unit-[id]

hag-disconnect-failed sys/rack-unit-[id]
serial-debug-connect-failed sys/rack-unit-[id]

serial-debug-disconnect-failed sys/rack-unit-[id]

wait-for-conn-ready-failed sys/rack-unit-[id]

execute-failed sys/chassis-[id]

execute-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
execute-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]

bios-post-completion-failed sys/chassis-[id]/blade-[slotId]

blade-boot-failed sys/chassis-[id]/blade-[slotId]

blade-boot-wait-failed sys/chassis-[id]/blade-[slotId]
blade-power-on-failed sys/chassis-[id]/blade-[slotId]

blade-read-smbios-failed sys/chassis-[id]/blade-[slotId]

bmc-config-pnuos-failed sys/chassis-[id]/blade-[slotId]

bmc-inventory-failed sys/chassis-[id]/blade-[slotId]

bmc-presence-failed sys/chassis-[id]/blade-[slotId]

bmc-shutdown-diag-completed-failed sys/chassis-[id]/blade-[slotId]
cleanup-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]
cleanup-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]
config-fe-local-failed sys/chassis-[id]/blade-[slotId]

config-fe-peer-failed sys/chassis-[id]/blade-[slotId]

config-user-access-failed sys/chassis-[id]/blade-[slotId]

debug-wait-failed sys/chassis-[id]/blade-[slotId]

derive-config-failed sys/chassis-[id]/blade-[slotId]

disable-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]
disable-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]

enable-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]
enable-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]

evaluate-status-failed sys/chassis-[id]/blade-[slotId]

fabricatraffic-test-status-failed sys/chassis-[id]/blade-[slotId]

fabricbtraffic-test-status-failed sys/chassis-[id]/blade-[slotId]

generate-log-wait-failed sys/chassis-[id]/blade-[slotId]

generate-report-failed sys/chassis-[id]/blade-[slotId]

host-catalog-failed sys/chassis-[id]/blade-[slotId]

host-connect-failed sys/chassis-[id]/blade-[slotId]
host-disconnect-failed sys/chassis-[id]/blade-[slotId]

host-ident-failed sys/chassis-[id]/blade-[slotId]

host-inventory-failed sys/chassis-[id]/blade-[slotId]

host-policy-failed sys/chassis-[id]/blade-[slotId]

host-server-diag-failed sys/chassis-[id]/blade-[slotId]

host-server-diag-status-failed sys/chassis-[id]/blade-[slotId]

nic-config-local-failed sys/chassis-[id]/blade-[slotId]

nic-config-peer-failed sys/chassis-[id]/blade-[slotId]
nic-inventory-local-failed sys/chassis-[id]/blade-[slotId]

nic-inventory-peer-failed sys/chassis-[id]/blade-[slotId]

nic-presence-local-failed sys/chassis-[id]/blade-[slotId]

nic-presence-peer-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-local-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-peer-failed sys/chassis-[id]/blade-[slotId]

remove-config-failed sys/chassis-[id]/blade-[slotId]

removevmedia-local-failed sys/chassis-[id]/blade-[slotId]
removevmedia-peer-failed sys/chassis-[id]/blade-[slotId]

restore-config-fe-local-failed sys/chassis-[id]/blade-[slotId]

restore-config-fe-peer-failed sys/chassis-[id]/blade-[slotId]

set-diag-user-failed sys/chassis-[id]/blade-[slotId]

setupvmedia-local-failed sys/chassis-[id]/blade-[slotId]

setupvmedia-peer-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-disable-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-enable-failed sys/chassis-[id]/blade-[slotId]
start-fabricatraffic-test-failed sys/chassis-[id]/blade-[slotId]

start-fabricbtraffic-test-failed sys/chassis-[id]/blade-[slotId]

stopvmedia-local-failed sys/chassis-[id]/blade-[slotId]

stopvmedia-peer-failed sys/chassis-[id]/blade-[slotId]

sw-config-local-failed sys/chassis-[id]/blade-[slotId]

sw-config-peer-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-local-failed sys/chassis-[id]/blade-[slotId]
sw-unconfig-peer-failed sys/chassis-[id]/blade-[slotId]

unconfig-user-access-failed sys/chassis-[id]/blade-[slotId]

serial-debug-connect-failed sys/chassis-[id]/blade-[slotId]

serial-debug-disconnect-failed sys/chassis-[id]/blade-[slotId]

sw-config-local-failed fabric/lan

sw-config-peer-failed fabric/lan

sw-config-local-failed fabric/san

sw-config-peer-failed fabric/san

propogate-ep-settings-failed sys/svc-ext
propogate-ep-time-zone-settings-local-failed sys/svc-ext
propogate-ep-time-zone-settings-peer-failed sys/svc-ext
propogate-ep-time-zone-settings-to-adaptors-local-failed sys/svc-ext
propogate-ep-time-zone-settings-to-adaptors-peer-failed sys/svc-ext
propogate-ep-time-zone-settings-to-fex-iom-local-failed sys/svc-ext
propogate-ep-time-zone-settings-to-fex-iom-peer-failed sys/svc-ext

set-ep-local-failed sys/svc-ext
set-ep-peer-failed sys/svc-ext

local-failed sys/svc-ext

peer-failed sys/svc-ext

set-ep-local-failed sys/

set-ep-peer-failed sys/

set-key-ring-local-failed sys/pki-ext

set-key-ring-peer-failed sys/pki-ext

set-ep-afailed stats/coll-policy-[name]

set-ep-bfailed stats/coll-policy-[name]

set-realm-local-failed sys/
set-realm-peer-failed sys/

set-user-local-failed sys/user-ext

set-user-peer-failed sys/user-ext

execute-failed sys/corefiles/file-[name]|[switchId]/mutation

local-failed sys/corefiles/file-[name]|[switchId]/mutation

peer-failed sys/corefiles/file-[name]|[switchId]/mutation

execute-failed sys/corefiles/file-[name]|[switchId]/export-to-[hostname]

apply-config-failed fabric/[id]

apply-physical-failed fabric/[id]

validate-configuration-failed fabric/[id]

wait-on-phys-failed fabric/[id]
analyze-impact-failed org-[name]/tier-[name]/ls-[name]

apply-config-failed org-[name]/tier-[name]/ls-[name]

apply-identifiers-failed org-[name]/tier-[name]/ls-[name]

apply-policies-failed org-[name]/tier-[name]/ls-[name]

apply-template-failed org-[name]/tier-[name]/ls-[name]

evaluate-association-failed org-[name]/tier-[name]/ls-[name]

resolve-boot-config-failed org-[name]/tier-[name]/ls-[name]

wait-for-maint-permission-failed org-[name]/tier-[name]/ls-[name]

wait-for-maint-window-failed org-[name]/tier-[name]/ls-[name]

local-failed sys/sysdebug/file-export
peer-failed sys/sysdebug/file-export

local-failed sys/sysdebug/logcontrol

peer-failed sys/sysdebug/logcontrol

local-failed org-[name]/ep-qos-[name]

peer-failed org-[name]/ep-qos-[name]

update-connectivity-failed sys/switch-[id]/access-eth

update-connectivity-failed sys/switch-[id]/border-eth

update-eth-mon-failed sys/switch-[id]/lanmon-eth/mon-[name]

update-fc-mon-failed sys/switch-[id]/sanmon-fc/mon-[name]
update-connectivity-failed sys/switch-[id]/border-fc

update-connectivity-failed sys/switch-[id]/utility-eth

local-failed fabric/lan/profiles

peer-failed fabric/lan/profiles

create-local-failed sys/file-[name]

create-remote-failed sys/file-[name]

local-failed sys/fw-catalogue/distrib-[name]

remote-failed sys/fw-catalogue/distrib-[name]

local-failed sys/fw-catalogue/image-[name]

remote-failed sys/fw-catalogue/image-[name]
reset-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

reset-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

verify-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

verify-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

poll-update-status-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-request-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

activate-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

reset-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
poll-update-status-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-request-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

activate-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

reset-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

set-local-failed call-home

set-peer-failed call-home

switch-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
switch-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]

backup-local-failed sys/backup-[hostname]

upload-failed sys/backup-[hostname]

config-failed sys/import-config-[hostname]

download-local-failed sys/import-config-[hostname]

report-results-failed sys/import-config-[hostname]

set-local-failed fabric/lan/classes
set-peer-failed fabric/lan/classes

local-failed org-[name]/ep-qos-deletion-[defIntId]

peer-failed org-[name]/ep-qos-deletion-[defIntId]

execute-failed sys/chassis-[id]/slot-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

start-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

primary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

secondary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

disable-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]
enable-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]
disable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
disable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
enable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
enable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
disable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
disable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
enable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
enable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
set-local-failed sys/extvm-mgmt/ext-key

set-peer-failed sys/extvm-mgmt/ext-key

get-version-failed sys/extvm-mgmt/vm-[name]

set-local-failed sys/extvm-mgmt/vm-[name]

set-peer-failed sys/extvm-mgmt/vm-[name]

local-failed org-[name]/vm-lc-policy

peer-failed org-[name]/vm-lc-policy

set-local-failed sys/extvm-mgmt/key-store

set-peer-failed sys/extvm-mgmt/key-store
remove-local-failed sys/extvm-mgmt/vsw-deltask-[swIntId]

apply-failed capabilities/ep/updater-[fileName]

copy-remote-failed capabilities/ep/updater-[fileName]

delete-local-failed capabilities/ep/updater-[fileName]

evaluate-status-failed capabilities/ep/updater-[fileName]

local-failed capabilities/ep/updater-[fileName]

rescan-images-failed capabilities/ep/updater-[fileName]

unpack-local-failed capabilities/ep/updater-[fileName]

blade-power-off-failed sys/chassis-[id]/blade-[slotId]

blade-power-on-failed sys/chassis-[id]/blade-[slotId]
poll-update-status-failed sys/chassis-[id]/blade-[slotId]

prepare-for-update-failed sys/chassis-[id]/blade-[slotId]

update-request-failed sys/chassis-[id]/blade-[slotId]

sync-bladeaglocal-failed capabilities

sync-bladeagremote-failed capabilities

sync-hostagentaglocal-failed capabilities

sync-hostagentagremote-failed capabilities

sync-nicaglocal-failed capabilities
sync-nicagremote-failed capabilities

sync-portaglocal-failed capabilities

sync-portagremote-failed capabilities

finalize-failed capabilities

cleanup-entries-failed sys/fex-[id]

un-identify-local-failed sys/fex-[id]

wait-failed sys/fex-[id]

decomission-failed sys/fex-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led
16944 config-failed

config-failed sys/chassis-[id]

cleanup-entries-failed sys/chassis-[id]/slot-[id]
16973 activate-bios-failed
16973 bios-img-update-failed
16973 bios-post-completion-failed
16973 blade-power-off-failed
16973 bmc-config-pnuos-failed
16973 bmc-preconfig-pnuoslocal-failed
16973 bmc-preconfig-pnuospeer-failed
16973 bmc-unconfig-pnuos-failed
16973 boot-host-failed
16973 boot-pnuos-failed
16973 boot-wait-failed
16973 clear-bios-update-failed
16973 config-so-lfailed
16973 config-user-access-failed
16973 config-uuid-failed
16973 hba-img-update-failed
16973 hostosconfig-failed
16973 hostosident-failed
16973 hostospolicy-failed
16973 hostosvalidate-failed
16973 local-disk-fw-update-failed
16973 nic-config-hostoslocal-failed
16973 nic-config-hostospeer-failed
16973 nic-config-pnuoslocal-failed
16973 nic-config-pnuospeer-failed
16973 nic-img-update-failed
16973 nic-unconfig-pnuoslocal-failed
16973 nic-unconfig-pnuospeer-failed
16973 pnuoscatalog-failed
16973 pnuosconfig-failed
16973 pnuosident-failed
16973 pnuosinventory-failed
16973 pnuoslocal-disk-config-failed
16973 pnuospolicy-failed
16973 pnuosself-test-failed
16973 pnuosunload-drivers-failed
16973 pnuosvalidate-failed
16973 poll-bios-activate-status-failed
16973 poll-bios-update-status-failed
16973 poll-board-ctrl-update-status-failed
16973 poll-clear-bios-update-status-failed
16973 power-on-failed
16973 pre-sanitize-failed
16973 prepare-for-boot-failed
16973 sanitize-failed
16973 sol-redirect-disable-failed
16973 sol-redirect-enable-failed
16973 storage-ctlr-img-update-failed
16973 sw-config-hostoslocal-failed
16973 sw-config-hostospeer-failed
16973 sw-config-pnuoslocal-failed
16973 sw-config-pnuospeer-failed
16973 sw-config-port-niv-local-failed
16973 sw-config-port-niv-peer-failed
16973 sw-unconfig-pnuoslocal-failed
16973 sw-unconfig-pnuospeer-failed
16973 update-bios-request-failed
16973 update-board-ctrl-request-failed
16973 activate-adaptor-nw-fw-local-failed
16973 activate-adaptor-nw-fw-peer-failed
16973 activateibmcfw-failed
16973 hag-hostosconnect-failed
16973 hag-pnuosconnect-failed
16973 hag-pnuosdisconnect-failed
16973 resetibmc-failed
16973 serial-debug-pnuosconnect-failed
16973 serial-debug-pnuosdisconnect-failed
16973 update-adaptor-nw-fw-local-failed
16973 update-adaptor-nw-fw-peer-failed
16973 updateibmcfw-failed
16973 wait-for-adaptor-nw-fw-update-local-failed
16973 wait-for-adaptor-nw-fw-update-peer-failed
16973 wait-foribmcfw-update-failed
16974 bios-post-completion-failed
16974 bmc-config-pnuos-failed
16974 bmc-preconfig-pnuoslocal-failed
16974 bmc-preconfig-pnuospeer-failed
16974 bmc-unconfig-pnuos-failed
16974 boot-pnuos-failed
16974 boot-wait-failed
16974 config-bios-failed
16974 config-user-access-failed
16974 handle-pooling-failed
16974 nic-config-pnuoslocal-failed
16974 nic-config-pnuospeer-failed
16974 nic-unconfig-hostoslocal-failed
16974 nic-unconfig-hostospeer-failed
16974 nic-unconfig-pnuoslocal-failed
16974 nic-unconfig-pnuospeer-failed
16974 pnuoscatalog-failed
16974 pnuosident-failed
16974 pnuospolicy-failed
16974 pnuosscrub-failed
16974 pnuosself-test-failed
16974 pnuosunconfig-failed
16974 pnuosvalidate-failed
16974 power-on-failed
16974 pre-sanitize-failed
16974 sanitize-failed
16974 shutdown-failed
16974 sol-redirect-disable-failed
16974 sol-redirect-enable-failed
16974 sw-config-pnuoslocal-failed
16974 sw-config-pnuospeer-failed
16974 sw-config-port-niv-local-failed
16974 sw-config-port-niv-peer-failed
16974 sw-unconfig-hostoslocal-failed
16974 sw-unconfig-hostospeer-failed
16974 sw-unconfig-pnuoslocal-failed
16974 sw-unconfig-pnuospeer-failed
16974 unconfig-bios-failed
16974 unconfig-so-lfailed
16974 unconfig-uuid-failed
16974 hag-pnuosconnect-failed
16974 hag-pnuosdisconnect-failed
16974 serial-debug-pnuosconnect-failed
16974 serial-debug-pnuosdisconnect-failed
16976 cleanupcimc-failed
16976 execute-failed
16976 stopvmedia-local-failed
16976 stopvmedia-peer-failed
16977 execute-failed
16978 execute-failed
16979 execute-failed
16980 execute-failed
16980 pre-sanitize-failed
16980 sanitize-failed
16981 execute-failed
16981 pre-sanitize-failed
16981 sanitize-failed
16982 execute-failed
16982 pre-sanitize-failed
16982 sanitize-failed
16983 a-failed
16983 b-failed
16984 cleanup-failed
16984 pre-sanitize-failed
16984 reset-failed
16984 sanitize-failed
16984 setup-vmedia-local-failed
16984 setup-vmedia-peer-failed
16984 shutdown-failed
16984 start-failed
16984 stopvmedia-local-failed
16984 stopvmedia-peer-failed
16984 teardown-vmedia-local-failed
16984 teardown-vmedia-peer-failed
16984 wait-failed
16986 blade-power-on-failed
16986 execute-failed
16986 pre-sanitize-failed
16986 reconfig-bios-failed
16986 reconfig-uuid-failed
16986 sanitize-failed
16987 execute-failed

execute-failed sys/chassis-[id]/slot-[id]
17008 deploy-failed

local-failed sys/tech-support-files/tech-support-[creationTS]

local-failed sys/tech-support-files/tech-support-[creationTS]

peer-failed sys/tech-support-files/tech-support-[creationTS]

copy-remote-failed sys/fw-catalogue/dnld-[fileName]

delete-local-failed sys/fw-catalogue/dnld-[fileName]

local-failed sys/fw-catalogue/dnld-[fileName]

unpack-local-failed sys/fw-catalogue/dnld-[fileName]

copy-remote-failed sys/license/dnld-[fileName]
delete-local-failed sys/license/dnld-[fileName]

delete-remote-failed sys/license/dnld-[fileName]

local-failed sys/license/dnld-[fileName]

validate-local-failed sys/license/dnld-[fileName]

validate-remote-failed sys/license/dnld-[fileName]

copy-primary-failed sys/corefiles/file-[name]|[switchId]

copy-sub-failed sys/corefiles/file-[name]|[switchId]

delete-primary-failed sys/corefiles/file-[name]|[switchId]

delete-sub-failed sys/corefiles/file-[name]|[switchId]
copy-primary-failed sys/tech-support-files/tech-support-[creationTS]

copy-sub-failed sys/tech-support-files/tech-support-[creationTS]

delete-primary-failed sys/tech-support-files/tech-support-[creationTS]

delete-sub-failed sys/tech-support-files/tech-support-[creationTS]
17043 poll-update-status-local-failed
17043 poll-update-status-peer-failed
17043 power-off-failed
17043 power-on-failed
17043 update-request-local-failed
17043 update-request-peer-failed
17044 activate-local-failed
17044 activate-peer-failed
17044 power-on-failed
17044 reset-failed

apply-catalog-failed capabilities

copy-remote-failed capabilities

evaluate-status-failed capabilities

rescan-images-failed capabilities

unpack-local-failed capabilities

apply-catalog-failed capabilities/ep/mgmt-ext
copy-remote-failed capabilities/ep/mgmt-ext

evaluate-status-failed capabilities/ep/mgmt-ext

rescan-images-failed capabilities/ep/mgmt-ext

unpack-local-failed capabilities/ep/mgmt-ext

local-failed sys/license/file-[scope]:[id]

remote-failed sys/license/file-[scope]:[id]

local-failed sys/license/file-[scope]:[id]

remote-failed sys/license/file-[scope]:[id]

local-failed sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
remote-failed sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
17083 execute-failed
17084 execute-failed

shutdown-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]

17116 execute-failed

config-failed sys/chassis-[id]
17163 execute-failed

bmc-configure-conn-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
bmc-configure-conn-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

sw-configure-conn-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
sw-configure-conn-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

cleanup-local-failed sys/rack-unit-[id]

cleanup-peer-failed sys/rack-unit-[id]

sw-unconfigure-local-failed sys/rack-unit-[id]

sw-unconfigure-peer-failed sys/rack-unit-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led

local-failed fabric/lan/profiles

peer-failed fabric/lan/profiles
config-sw-afailed sys/switch-[id]/phys

config-sw-bfailed sys/switch-[id]/phys

port-inventory-sw-afailed sys/switch-[id]/phys

port-inventory-sw-bfailed sys/switch-[id]/phys

verify-phys-config-failed sys/switch-[id]/phys

set-local-failed sys/extvm-mgmt

set-peer-failed sys/extvm-mgmt

execute-afailed sys/chassis-[id]/blade-[slotId]/beacon

execute-bfailed sys/chassis-[id]/blade-[slotId]/beacon
configure-failed sys/chassis-[id]/blade-[slotId]/diag/port-[portId]
17281 clear-failed
17281 poll-clear-status-failed
17281 poll-update-status-failed
17281 update-request-failed
17282 activate-failed
17282 clear-failed
17282 poll-activate-status-failed
17282 poll-clear-status-failed
17282 power-off-failed
17282 power-on-failed
17282 update-tokens-failed

check-license-failed sys/chassis-[id]/slot-[id]

identify-failed sys/chassis-[id]/slot-[id]

configure-end-point-failed sys/chassis-[id]/slot-[id]

configure-sw-mgmt-end-point-failed sys/chassis-[id]/slot-[id]

configure-vif-ns-failed sys/chassis-[id]/slot-[id]

discover-chassis-failed sys/chassis-[id]/slot-[id]

enable-chassis-failed sys/chassis-[id]/slot-[id]
disable-end-point-failed sys/chassis-[id]

un-identify-local-failed sys/chassis-[id]

un-identify-peer-failed sys/chassis-[id]

wait-failed sys/chassis-[id]

decomission-failed sys/chassis-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led

primary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

secondary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

execute-local-failed fabric/server/chassis-[chassisId]/slot-[slotId]
execute-peer-failed fabric/server/chassis-[chassisId]/slot-[slotId]

bios-post-completion-failed sys/chassis-[id]/blade-[slotId]

blade-boot-pnuos-failed sys/chassis-[id]/blade-[slotId]

blade-boot-wait-failed sys/chassis-[id]/blade-[slotId]

blade-power-on-failed sys/chassis-[id]/blade-[slotId]

blade-read-smbios-failed sys/chassis-[id]/blade-[slotId]

bmc-config-pnuos-failed sys/chassis-[id]/blade-[slotId]

bmc-inventory-failed sys/chassis-[id]/blade-[slotId]
bmc-pre-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]
bmc-pre-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

bmc-presence-failed sys/chassis-[id]/blade-[slotId]

bmc-shutdown-discovered-failed sys/chassis-[id]/blade-[slotId]

config-fe-local-failed sys/chassis-[id]/blade-[slotId]

config-fe-peer-failed sys/chassis-[id]/blade-[slotId]

config-user-access-failed sys/chassis-[id]/blade-[slotId]

handle-pooling-failed sys/chassis-[id]/blade-[slotId]
nic-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

nic-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

nic-presence-local-failed sys/chassis-[id]/blade-[slotId]

nic-presence-peer-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

pnuoscatalog-failed sys/chassis-[id]/blade-[slotId]

pnuosident-failed sys/chassis-[id]/blade-[slotId]
pnuosinventory-failed sys/chassis-[id]/blade-[slotId]

pnuospolicy-failed sys/chassis-[id]/blade-[slotId]

pnuosscrub-failed sys/chassis-[id]/blade-[slotId]

pnuosself-test-failed sys/chassis-[id]/blade-[slotId]

pre-sanitize-failed sys/chassis-[id]/blade-[slotId]

sanitize-failed sys/chassis-[id]/blade-[slotId]

setup-vmedia-local-failed sys/chassis-[id]/blade-[slotId]

setup-vmedia-peer-failed sys/chassis-[id]/blade-[slotId]
sol-redirect-disable-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-enable-failed sys/chassis-[id]/blade-[slotId]

sw-config-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

sw-config-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-pnuoslocal-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-pnuospeer-failed sys/chassis-[id]/blade-[slotId]

teardown-vmedia-local-failed sys/chassis-[id]/blade-[slotId]
teardown-vmedia-peer-failed sys/chassis-[id]/blade-[slotId]

hag-connect-failed sys/chassis-[id]/blade-[slotId]

hag-disconnect-failed sys/chassis-[id]/blade-[slotId]

serial-debug-connect-failed sys/chassis-[id]/blade-[slotId]

serial-debug-disconnect-failed sys/chassis-[id]/blade-[slotId]

bios-post-completion-failed sys/rack-unit-[id]

bmc-config-pnuos-failed sys/rack-unit-[id]
bmc-configure-conn-local-failed sys/rack-unit-[id]
bmc-configure-conn-peer-failed sys/rack-unit-[id]

bmc-inventory-failed sys/rack-unit-[id]

bmc-preconfig-pnuoslocal-failed sys/rack-unit-[id]
bmc-preconfig-pnuospeer-failed sys/rack-unit-[id]

bmc-presence-failed sys/rack-unit-[id]

bmc-shutdown-discovered-failed sys/rack-unit-[id]

bmc-unconfig-pnuos-failed sys/rack-unit-[id]

boot-pnuos-failed sys/rack-unit-[id]
boot-wait-failed sys/rack-unit-[id]

config-discovery-mode-failed sys/rack-unit-[id]

config-niv-mode-failed sys/rack-unit-[id]

config-user-access-failed sys/rack-unit-[id]

handle-pooling-failed sys/rack-unit-[id]

nic-inventory-local-failed sys/rack-unit-[id]

nic-inventory-peer-failed sys/rack-unit-[id]

pnuoscatalog-failed sys/rack-unit-[id]

pnuosconn-status-failed sys/rack-unit-[id]
pnuosconnectivity-failed sys/rack-unit-[id]

pnuosident-failed sys/rack-unit-[id]

pnuosinventory-failed sys/rack-unit-[id]

pnuospolicy-failed sys/rack-unit-[id]

pnuosscrub-failed sys/rack-unit-[id]

pnuosself-test-failed sys/rack-unit-[id]

pre-sanitize-failed sys/rack-unit-[id]

read-smbios-failed sys/rack-unit-[id]

sanitize-failed sys/rack-unit-[id]
sol-redirect-disable-failed sys/rack-unit-[id]

sol-redirect-enable-failed sys/rack-unit-[id]

sw-config-pnuoslocal-failed sys/rack-unit-[id]

sw-config-pnuospeer-failed sys/rack-unit-[id]

sw-config-port-niv-local-failed sys/rack-unit-[id]

sw-config-port-niv-peer-failed sys/rack-unit-[id]

sw-configure-conn-local-failed sys/rack-unit-[id]

sw-configure-conn-peer-failed sys/rack-unit-[id]
sw-pnuosconnectivity-local-failed sys/rack-unit-[id]
sw-pnuosconnectivity-peer-failed sys/rack-unit-[id]
sw-unconfig-port-niv-local-failed sys/rack-unit-[id]
sw-unconfig-port-niv-peer-failed sys/rack-unit-[id]

hag-connect-failed sys/rack-unit-[id]

hag-disconnect-failed sys/rack-unit-[id]

serial-debug-connect-failed sys/rack-unit-[id]

serial-debug-disconnect-failed sys/rack-unit-[id]

wait-for-conn-ready-failed sys/rack-unit-[id]
execute-failed sys/chassis-[id]

execute-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
execute-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]

bios-post-completion-failed sys/chassis-[id]/blade-[slotId]

blade-boot-failed sys/chassis-[id]/blade-[slotId]

blade-boot-wait-failed sys/chassis-[id]/blade-[slotId]

blade-power-on-failed sys/chassis-[id]/blade-[slotId]

blade-read-smbios-failed sys/chassis-[id]/blade-[slotId]
bmc-config-pnuos-failed sys/chassis-[id]/blade-[slotId]

bmc-inventory-failed sys/chassis-[id]/blade-[slotId]

bmc-presence-failed sys/chassis-[id]/blade-[slotId]

bmc-shutdown-diag-completed-failed sys/chassis-[id]/blade-[slotId]
cleanup-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]
cleanup-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]

config-fe-local-failed sys/chassis-[id]/blade-[slotId]

config-fe-peer-failed sys/chassis-[id]/blade-[slotId]
config-user-access-failed sys/chassis-[id]/blade-[slotId]

debug-wait-failed sys/chassis-[id]/blade-[slotId]

derive-config-failed sys/chassis-[id]/blade-[slotId]

disable-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]
disable-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]

enable-server-conn-sw-afailed sys/chassis-[id]/blade-[slotId]

enable-server-conn-sw-bfailed sys/chassis-[id]/blade-[slotId]

evaluate-status-failed sys/chassis-[id]/blade-[slotId]
fabricatraffic-test-status-failed sys/chassis-[id]/blade-[slotId]

fabricbtraffic-test-status-failed sys/chassis-[id]/blade-[slotId]

generate-log-wait-failed sys/chassis-[id]/blade-[slotId]

generate-report-failed sys/chassis-[id]/blade-[slotId]

host-catalog-failed sys/chassis-[id]/blade-[slotId]

host-connect-failed sys/chassis-[id]/blade-[slotId]

host-disconnect-failed sys/chassis-[id]/blade-[slotId]

host-ident-failed sys/chassis-[id]/blade-[slotId]
host-inventory-failed sys/chassis-[id]/blade-[slotId]

host-policy-failed sys/chassis-[id]/blade-[slotId]

host-server-diag-failed sys/chassis-[id]/blade-[slotId]

host-server-diag-status-failed sys/chassis-[id]/blade-[slotId]

nic-config-local-failed sys/chassis-[id]/blade-[slotId]

nic-config-peer-failed sys/chassis-[id]/blade-[slotId]

nic-inventory-local-failed sys/chassis-[id]/blade-[slotId]

nic-inventory-peer-failed sys/chassis-[id]/blade-[slotId]
nic-presence-local-failed sys/chassis-[id]/blade-[slotId]

nic-presence-peer-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-local-failed sys/chassis-[id]/blade-[slotId]

nic-unconfig-peer-failed sys/chassis-[id]/blade-[slotId]

remove-config-failed sys/chassis-[id]/blade-[slotId]

removevmedia-local-failed sys/chassis-[id]/blade-[slotId]

removevmedia-peer-failed sys/chassis-[id]/blade-[slotId]

restore-config-fe-local-failed sys/chassis-[id]/blade-[slotId]
restore-config-fe-peer-failed sys/chassis-[id]/blade-[slotId]

set-diag-user-failed sys/chassis-[id]/blade-[slotId]

setupvmedia-local-failed sys/chassis-[id]/blade-[slotId]

setupvmedia-peer-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-disable-failed sys/chassis-[id]/blade-[slotId]

sol-redirect-enable-failed sys/chassis-[id]/blade-[slotId]

start-fabricatraffic-test-failed sys/chassis-[id]/blade-[slotId]

start-fabricbtraffic-test-failed sys/chassis-[id]/blade-[slotId]
stopvmedia-local-failed sys/chassis-[id]/blade-[slotId]

stopvmedia-peer-failed sys/chassis-[id]/blade-[slotId]

sw-config-local-failed sys/chassis-[id]/blade-[slotId]

sw-config-peer-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-local-failed sys/chassis-[id]/blade-[slotId]

sw-unconfig-peer-failed sys/chassis-[id]/blade-[slotId]

unconfig-user-access-failed sys/chassis-[id]/blade-[slotId]
serial-debug-connect-failed sys/chassis-[id]/blade-[slotId]

serial-debug-disconnect-failed sys/chassis-[id]/blade-[slotId]

sw-config-local-failed fabric/lan

sw-config-peer-failed fabric/lan

sw-config-local-failed fabric/san

sw-config-peer-failed fabric/san

propogate-ep-settings-failed sys/svc-ext

propogate-ep-time-zone-settings-local-failed sys/svc-ext
propogate-ep-time-zone-settings-peer-failed sys/svc-ext
propogate-ep-time-zone-settings-to-adaptors-local-failed sys/svc-ext
propogate-ep-time-zone-settings-to-adaptors-peer-failed sys/svc-ext
propogate-ep-time-zone-settings-to-fex-iom-local-failed sys/svc-ext
propogate-ep-time-zone-settings-to-fex-iom-peer-failed sys/svc-ext

set-ep-local-failed sys/svc-ext

set-ep-peer-failed sys/svc-ext

local-failed sys/svc-ext

peer-failed sys/svc-ext
set-ep-local-failed sys/

set-ep-peer-failed sys/

set-key-ring-local-failed sys/pki-ext

set-key-ring-peer-failed sys/pki-ext

set-ep-afailed stats/coll-policy-[name]

set-ep-bfailed stats/coll-policy-[name]

set-realm-local-failed sys/

set-realm-peer-failed sys/

set-user-local-failed sys/user-ext

set-user-peer-failed sys/user-ext
execute-failed sys/corefiles/file-[name]|[switchId]/mutation

local-failed sys/corefiles/file-[name]|[switchId]/mutation

peer-failed sys/corefiles/file-[name]|[switchId]/mutation

execute-failed sys/corefiles/file-[name]|[switchId]/export-to-[hostname]

apply-config-failed fabric/[id]

apply-physical-failed fabric/[id]

validate-configuration-failed fabric/[id]

wait-on-phys-failed fabric/[id]

analyze-impact-failed org-[name]/tier-[name]/ls-[name]

apply-config-failed org-[name]/tier-[name]/ls-[name]

apply-identifiers-failed org-[name]/tier-[name]/ls-[name]
apply-policies-failed org-[name]/tier-[name]/ls-[name]

apply-template-failed org-[name]/tier-[name]/ls-[name]

evaluate-association-failed org-[name]/tier-[name]/ls-[name]

resolve-boot-config-failed org-[name]/tier-[name]/ls-[name]

wait-for-maint-permission-failed org-[name]/tier-[name]/ls-[name]

wait-for-maint-window-failed org-[name]/tier-[name]/ls-[name]

local-failed sys/sysdebug/file-export

peer-failed sys/sysdebug/file-export

local-failed sys/sysdebug/logcontrol

peer-failed sys/sysdebug/logcontrol
local-failed org-[name]/ep-qos-[name]

peer-failed org-[name]/ep-qos-[name]

update-connectivity-failed sys/switch-[id]/access-eth

update-connectivity-failed sys/switch-[id]/border-eth

update-eth-mon-failed sys/switch-[id]/lanmon-eth/mon-[name]

update-fc-mon-failed sys/switch-[id]/sanmon-fc/mon-[name]

update-connectivity-failed sys/switch-[id]/border-fc

update-connectivity-failed sys/switch-[id]/utility-eth
local-failed fabric/lan/profiles

peer-failed fabric/lan/profiles

create-local-failed sys/file-[name]

create-remote-failed sys/file-[name]

local-failed sys/fw-catalogue/distrib-[name]

remote-failed sys/fw-catalogue/distrib-[name]

local-failed sys/fw-catalogue/image-[name]

remote-failed sys/fw-catalogue/image-[name]

reset-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
reset-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

verify-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

verify-remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

poll-update-status-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-request-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

activate-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

reset-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
poll-update-status-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

update-request-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

activate-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

reset-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

set-local-failed call-home

set-peer-failed call-home

switch-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
switch-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
remote-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]

backup-local-failed sys/backup-[hostname]

upload-failed sys/backup-[hostname]

config-failed sys/import-config-[hostname]

download-local-failed sys/import-config-[hostname]

report-results-failed sys/import-config-[hostname]

set-local-failed fabric/lan/classes
set-peer-failed fabric/lan/classes

local-failed org-[name]/ep-qos-deletion-[defIntId]

peer-failed org-[name]/ep-qos-deletion-[defIntId]

execute-failed sys/chassis-[id]/slot-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

start-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

primary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

secondary-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

disable-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]
enable-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-eth-[id]
disable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
disable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
enable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
enable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]
disable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
disable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
enable-afailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
enable-bfailed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]
set-local-failed sys/extvm-mgmt/ext-key

set-peer-failed sys/extvm-mgmt/ext-key

get-version-failed sys/extvm-mgmt/vm-[name]

set-local-failed sys/extvm-mgmt/vm-[name]

set-peer-failed sys/extvm-mgmt/vm-[name]

local-failed org-[name]/vm-lc-policy

peer-failed org-[name]/vm-lc-policy

set-local-failed sys/extvm-mgmt/key-store

set-peer-failed sys/extvm-mgmt/key-store
remove-local-failed sys/extvm-mgmt/vsw-deltask-[swIntId]

apply-failed capabilities/ep/updater-[fileName]

copy-remote-failed capabilities/ep/updater-[fileName]

delete-local-failed capabilities/ep/updater-[fileName]

evaluate-status-failed capabilities/ep/updater-[fileName]

local-failed capabilities/ep/updater-[fileName]

rescan-images-failed capabilities/ep/updater-[fileName]

unpack-local-failed capabilities/ep/updater-[fileName]

blade-power-off-failed sys/chassis-[id]/blade-[slotId]

blade-power-on-failed sys/chassis-[id]/blade-[slotId]
poll-update-status-failed sys/chassis-[id]/blade-[slotId]

prepare-for-update-failed sys/chassis-[id]/blade-[slotId]

update-request-failed sys/chassis-[id]/blade-[slotId]

sync-bladeaglocal-failed capabilities

sync-bladeagremote-failed capabilities

sync-hostagentaglocal-failed capabilities

sync-hostagentagremote-failed capabilities

sync-nicaglocal-failed capabilities
sync-nicagremote-failed capabilities

sync-portaglocal-failed capabilities

sync-portagremote-failed capabilities

finalize-failed capabilities

cleanup-entries-failed sys/fex-[id]

un-identify-local-failed sys/fex-[id]

wait-failed sys/fex-[id]

decomission-failed sys/fex-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led
78384 config-failed

config-failed sys/chassis-[id]

cleanup-entries-failed sys/chassis-[id]/slot-[id]
78413 activate-bios-failed
78413 bios-img-update-failed
78413 bios-post-completion-failed
78413 blade-power-off-failed
78413 bmc-config-pnuos-failed
78413 bmc-preconfig-pnuoslocal-failed
78413 bmc-preconfig-pnuospeer-failed
78413 bmc-unconfig-pnuos-failed
78413 boot-host-failed
78413 boot-pnuos-failed
78413 boot-wait-failed
78413 clear-bios-update-failed
78413 config-so-lfailed
78413 config-user-access-failed
78413 config-uuid-failed
78413 hba-img-update-failed
78413 hostosconfig-failed
78413 hostosident-failed
78413 hostospolicy-failed
78413 hostosvalidate-failed
78413 local-disk-fw-update-failed
78413 nic-config-hostoslocal-failed
78413 nic-config-hostospeer-failed
78413 nic-config-pnuoslocal-failed
78413 nic-config-pnuospeer-failed
78413 nic-img-update-failed
78413 nic-unconfig-pnuoslocal-failed
78413 nic-unconfig-pnuospeer-failed
78413 pnuoscatalog-failed
78413 pnuosconfig-failed
78413 pnuosident-failed
78413 pnuosinventory-failed
78413 pnuoslocal-disk-config-failed
78413 pnuospolicy-failed
78413 pnuosself-test-failed
78413 pnuosunload-drivers-failed
78413 pnuosvalidate-failed
78413 poll-bios-activate-status-failed
78413 poll-bios-update-status-failed
78413 poll-board-ctrl-update-status-failed
78413 poll-clear-bios-update-status-failed
78413 power-on-failed
78413 pre-sanitize-failed
78413 prepare-for-boot-failed
78413 sanitize-failed
78413 sol-redirect-disable-failed
78413 sol-redirect-enable-failed
78413 storage-ctlr-img-update-failed
78413 sw-config-hostoslocal-failed
78413 sw-config-hostospeer-failed
78413 sw-config-pnuoslocal-failed
78413 sw-config-pnuospeer-failed
78413 sw-config-port-niv-local-failed
78413 sw-config-port-niv-peer-failed
78413 sw-unconfig-pnuoslocal-failed
78413 sw-unconfig-pnuospeer-failed
78413 update-bios-request-failed
78413 update-board-ctrl-request-failed
78413 activate-adaptor-nw-fw-local-failed
78413 activate-adaptor-nw-fw-peer-failed
78413 activateibmcfw-failed
78413 hag-hostosconnect-failed
78413 hag-pnuosconnect-failed
78413 hag-pnuosdisconnect-failed
78413 resetibmc-failed
78413 serial-debug-pnuosconnect-failed
78413 serial-debug-pnuosdisconnect-failed
78413 update-adaptor-nw-fw-local-failed
78413 update-adaptor-nw-fw-peer-failed
78413 updateibmcfw-failed
78413 wait-for-adaptor-nw-fw-update-local-failed
78413 wait-for-adaptor-nw-fw-update-peer-failed
78413 wait-foribmcfw-update-failed
78414 bios-post-completion-failed
78414 bmc-config-pnuos-failed
78414 bmc-preconfig-pnuoslocal-failed
78414 bmc-preconfig-pnuospeer-failed
78414 bmc-unconfig-pnuos-failed
78414 boot-pnuos-failed
78414 boot-wait-failed
78414 config-bios-failed
78414 config-user-access-failed
78414 handle-pooling-failed
78414 nic-config-pnuoslocal-failed
78414 nic-config-pnuospeer-failed
78414 nic-unconfig-hostoslocal-failed
78414 nic-unconfig-hostospeer-failed
78414 nic-unconfig-pnuoslocal-failed
78414 nic-unconfig-pnuospeer-failed
78414 pnuoscatalog-failed
78414 pnuosident-failed
78414 pnuospolicy-failed
78414 pnuosscrub-failed
78414 pnuosself-test-failed
78414 pnuosunconfig-failed
78414 pnuosvalidate-failed
78414 power-on-failed
78414 pre-sanitize-failed
78414 sanitize-failed
78414 shutdown-failed
78414 sol-redirect-disable-failed
78414 sol-redirect-enable-failed
78414 sw-config-pnuoslocal-failed
78414 sw-config-pnuospeer-failed
78414 sw-config-port-niv-local-failed
78414 sw-config-port-niv-peer-failed
78414 sw-unconfig-hostoslocal-failed
78414 sw-unconfig-hostospeer-failed
78414 sw-unconfig-pnuoslocal-failed
78414 sw-unconfig-pnuospeer-failed
78414 unconfig-bios-failed
78414 unconfig-so-lfailed
78414 unconfig-uuid-failed
78414 hag-pnuosconnect-failed
78414 hag-pnuosdisconnect-failed
78414 serial-debug-pnuosconnect-failed
78414 serial-debug-pnuosdisconnect-failed
78416 cleanupcimc-failed
78416 execute-failed
78416 stopvmedia-local-failed
78416 stopvmedia-peer-failed
78417 execute-failed
78418 execute-failed
78419 execute-failed
78420 execute-failed
78420 pre-sanitize-failed
78420 sanitize-failed
78421 execute-failed
78421 pre-sanitize-failed
78421 sanitize-failed
78422 execute-failed
78422 pre-sanitize-failed
78422 sanitize-failed
78423 a-failed
78423 b-failed
78424 cleanup-failed
78424 pre-sanitize-failed
78424 reset-failed
78424 sanitize-failed
78424 setup-vmedia-local-failed
78424 setup-vmedia-peer-failed
78424 shutdown-failed
78424 start-failed
78424 stopvmedia-local-failed
78424 stopvmedia-peer-failed
78424 teardown-vmedia-local-failed
78424 teardown-vmedia-peer-failed
78424 wait-failed
78426 blade-power-on-failed
78426 execute-failed
78426 pre-sanitize-failed
78426 reconfig-bios-failed
78426 reconfig-uuid-failed
78426 sanitize-failed
78427 execute-failed

execute-failed sys/chassis-[id]/slot-[id]

78448 deploy-failed

local-failed sys/tech-support-files/tech-support-[creationTS]

local-failed sys/tech-support-files/tech-support-[creationTS]
peer-failed sys/tech-support-files/tech-support-[creationTS]

copy-remote-failed sys/fw-catalogue/dnld-[fileName]

delete-local-failed sys/fw-catalogue/dnld-[fileName]

local-failed sys/fw-catalogue/dnld-[fileName]

unpack-local-failed sys/fw-catalogue/dnld-[fileName]

copy-remote-failed sys/license/dnld-[fileName]

delete-local-failed sys/license/dnld-[fileName]

delete-remote-failed sys/license/dnld-[fileName]

local-failed sys/license/dnld-[fileName]
validate-local-failed sys/license/dnld-[fileName]

validate-remote-failed sys/license/dnld-[fileName]

copy-primary-failed sys/corefiles/file-[name]|[switchId]

copy-sub-failed sys/corefiles/file-[name]|[switchId]

delete-primary-failed sys/corefiles/file-[name]|[switchId]

delete-sub-failed sys/corefiles/file-[name]|[switchId]

copy-primary-failed sys/tech-support-files/tech-support-[creationTS]

copy-sub-failed sys/tech-support-files/tech-support-[creationTS]

delete-primary-failed sys/tech-support-files/tech-support-[creationTS]
delete-sub-failed sys/tech-support-files/tech-support-[creationTS]
78483 poll-update-status-local-failed
78483 poll-update-status-peer-failed
78483 power-off-failed
78483 power-on-failed
78483 update-request-local-failed
78483 update-request-peer-failed
78484 activate-local-failed
78484 activate-peer-failed
78484 power-on-failed
78484 reset-failed

apply-catalog-failed capabilities

copy-remote-failed capabilities

evaluate-status-failed capabilities

rescan-images-failed capabilities

unpack-local-failed capabilities

apply-catalog-failed capabilities/ep/mgmt-ext

copy-remote-failed capabilities/ep/mgmt-ext

evaluate-status-failed capabilities/ep/mgmt-ext

rescan-images-failed capabilities/ep/mgmt-ext
unpack-local-failed capabilities/ep/mgmt-ext

local-failed sys/license/file-[scope]:[id]

remote-failed sys/license/file-[scope]:[id]

local-failed sys/license/file-[scope]:[id]

remote-failed sys/license/file-[scope]:[id]

local-failed sys/license/feature-[name]-[vendor]-[version]/inst-[scope]
remote-failed sys/license/feature-[name]-[vendor]-[version]/inst-[scope]

78523 execute-failed
78524 execute-failed

shutdown-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]
78556 execute-failed

config-failed sys/chassis-[id]
78603 execute-failed

bmc-configure-conn-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
bmc-configure-conn-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

sw-configure-conn-local-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

sw-configure-conn-peer-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

cleanup-local-failed sys/rack-unit-[id]

cleanup-peer-failed sys/rack-unit-[id]
sw-unconfigure-local-failed sys/rack-unit-[id]

sw-unconfigure-peer-failed sys/rack-unit-[id]

execute-failed sys/chassis-[id]/blade-[slotId]/locator-led

local-failed fabric/lan/profiles

peer-failed fabric/lan/profiles

config-sw-afailed sys/switch-[id]/phys

config-sw-bfailed sys/switch-[id]/phys

port-inventory-sw-afailed sys/switch-[id]/phys
port-inventory-sw-bfailed sys/switch-[id]/phys

verify-phys-config-failed sys/switch-[id]/phys

set-local-failed sys/extvm-mgmt

set-peer-failed sys/extvm-mgmt

execute-afailed sys/chassis-[id]/blade-[slotId]/beacon

execute-bfailed sys/chassis-[id]/blade-[slotId]/beacon

configure-failed sys/chassis-[id]/blade-[slotId]/diag/port-[portId]
78721 clear-failed
78721 poll-clear-status-failed
78721 poll-update-status-failed
78721 update-request-failed
78722 activate-failed
78722 clear-failed
78722 poll-activate-status-failed
78722 poll-clear-status-failed
78722 power-off-failed
78722 power-on-failed
78722 update-tokens-failed

fsm-failed sys/chassis-[id]/slot-[id]

fsm-failed sys/chassis-[id]/slot-[id]

fsm-failed sys/chassis-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/locator-led

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed fabric/server/chassis-[chassisId]/slot-[slotId]

fsm-failed sys/chassis-[id]/blade-[slotId]

fsm-failed sys/rack-unit-[id]

fsm-failed sys/chassis-[id]

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-fc-[id]

fsm-failed sys/chassis-[id]/blade-[slotId]

fsm-failed fabric/lan

fsm-failed fabric/san

fsm-failed sys/svc-ext

fsm-failed sys/svc-ext

fsm-failed sys/

fsm-failed sys/pki-ext

fsm-failed stats/coll-policy-[name]

fsm-failed sys/

fsm-failed sys/user-ext

fsm-failed sys/corefiles/file-[name]|[switchId]/mutation
fsm-failed sys/corefiles/file-[name]|[switchId]/mutation

fsm-failed sys/corefiles/file-[name]|[switchId]/export-to-[hostname]

fsm-failed fabric/[id]

fsm-failed org-[name]/tier-[name]/ls-[name]

fsm-failed sys/sysdebug/file-export

fsm-failed sys/sysdebug/logcontrol

fsm-failed org-[name]/ep-qos-[name]

fsm-failed sys/switch-[id]/access-eth

fsm-failed sys/switch-[id]/border-eth

fsm-failed sys/switch-[id]/lanmon-eth/mon-[name]

fsm-failed sys/switch-[id]/sanmon-fc/mon-[name]

fsm-failed sys/switch-[id]/border-fc

fsm-failed sys/switch-[id]/utility-eth

fsm-failed fabric/lan/profiles

fsm-failed sys/file-[name]

fsm-failed sys/fw-catalogue/distrib-[name]

fsm-failed sys/fw-catalogue/image-[name]

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed call-home
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]
fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-eth-[id]/if-[id]

fsm-failed sys/backup-[hostname]

fsm-failed sys/import-config-[hostname]

fsm-failed fabric/lan/classes

fsm-failed org-[name]/ep-qos-deletion-[defIntId]

fsm-failed sys/chassis-[id]/slot-[id]

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
sys/chassis-[id]/blade-[slotId]/adaptor-[id]/ext-
fsm-failed eth-[id]
sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-
fsm-failed eth-[id]

sys/chassis-[id]/blade-[slotId]/adaptor-[id]/host-
fsm-failed fc-[id]

fsm-failed sys/extvm-mgmt/ext-key

fsm-failed sys/extvm-mgmt/vm-[name]

fsm-failed org-[name]/vm-lc-policy

fsm-failed sys/extvm-mgmt/key-store

fsm-failed sys/extvm-mgmt/vsw-deltask-[swIntId]

fsm-failed capabilities/ep/updater-[fileName]

fsm-failed sys/chassis-[id]/blade-[slotId]

fsm-failed capabilities

fsm-failed sys/fex-[id]

fsm-failed sys/chassis-[id]/blade-[slotId]/locator-led

fsm-failed

fsm-failed sys/chassis-[id]

fsm-failed sys/chassis-[id]/slot-[id]

fsm-failed

fsm-failed
fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed

fsm-failed sys/chassis-[id]/slot-[id]

fsm-failed

fsm-failed sys/tech-support-files/tech-support-[creationTS]

fsm-failed sys/tech-support-files/tech-support-[creationTS]

fsm-failed sys/fw-catalogue/dnld-[fileName]
fsm-failed sys/license/dnld-[fileName]

fsm-failed sys/corefiles/file-[name]|[switchId]

fsm-failed sys/tech-support-files/tech-support-[creationTS]

fsm-failed

fsm-failed

fsm-failed capabilities

fsm-failed capabilities/ep/mgmt-ext

fsm-failed sys/license/file-[scope]:[id]

fsm-failed sys/license/file-[scope]:[id]

sys/license/feature-[name]-[vendor]-[version]/
fsm-failed inst-[scope]

fsm-failed

fsm-failed

fsm-failed sys/chassis-[id]/slot-[id]/[type]/port-[portId]

fsm-failed

fsm-failed sys/chassis-[id]

fsm-failed

fsm-failed sys/chassis-[id]/blade-[slotId]/adaptor-[id]/mgmt
fsm-failed sys/rack-unit-[id]

fsm-failed sys/chassis-[id]/blade-[slotId]/locator-led

fsm-failed fabric/lan/profiles

fsm-failed sys/switch-[id]/phys

fsm-failed sys/extvm-mgmt

fsm-failed sys/chassis-[id]/blade-[slotId]/beacon

fsm-failed sys/chassis-[id]/blade-[slotId]/diag/port-[portId]

fsm-failed

fsm-failed
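The fault names and affected DNs listed above (and the code, message, severity, and cause tables that follow) correspond to fault instances that can be read back from a live system. The lines below are a minimal, illustrative sketch only, assuming the open-source ucsmsdk Python package and a reachable UCS Manager; the hostname and credentials are placeholders and are not part of this reference.

from ucsmsdk.ucshandle import UcsHandle

# Placeholder endpoint and credentials (assumptions, not values from this document).
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()
try:
    # faultInst objects carry the fields tabulated in this reference:
    # code (e.g. F0883), severity, cause (e.g. power-problem), dn, and descr.
    for fault in handle.query_classid("faultInst"):
        print(fault.code, fault.severity, fault.cause, fault.dn, "-", fault.descr)
finally:
    handle.logout()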
Fault code Fault Name

F0376 fltEquipmentIOCardRemoved

F0460 fltSysdebugMEpLogMEpLogLow

F0461 fltSysdebugMEpLogMEpLogVeryLow

F0462 fltSysdebugMEpLogMEpLogFull
F0313 fltComputePhysicalBiosPostTimeout

fltComputeBoardMotherBoardVoltageUppe
F0920 rThresholdCritical

fltComputeBoardMotherBoardVoltageLowe
F0921 rThresholdCritical

fltComputeBoardMotherBoardVoltageThres
F0919 holdLowerNonRecoverable

fltComputeBoardMotherBoardVoltageThres
F0918 holdUpperNonRecoverable

F1040 fltComputeBoardPowerUsageProblem
F0869 fltComputeBoardThermalProblem

F0310 fltComputeBoardPowerError

F0868 fltComputeBoardPowerFail

fltComputeBoardCmosVoltageThresholdCri
F0424 tical

fltComputeBoardCmosVoltageThresholdNo
F0425 nRecoverable
F0538 fltComputeIOHubThermalNonCritical

F0539 fltComputeIOHubThermalThresholdCritical

fltComputeIOHubThermalThresholdNonRec
F0540 overable
F0175 fltProcessorUnitThermalNonCritical
F0176 fltProcessorUnitThermalThresholdCritical
fltProcessorUnitThermalThresholdNonReco
F0177 verable
fltEquipmentIOCardThermalThresholdNonC
F0729 ritical
fltEquipmentIOCardThermalThresholdCritic
F0730 al
fltEquipmentIOCardThermalThresholdNonR
F0731 ecoverable
F0379 fltEquipmentIOCardThermalProblem

F1004 fltStorageControllerInoperable
F0181 fltStorageLocalDiskInoperable

F1006 fltStorageLocalDiskCopybackFailed

F1007 fltStorageVirtualDriveInoperable
F1008 fltStorageVirtualDriveDegraded

F1009 fltStorageVirtualDriveReconstructionFailed

fltStorageVirtualDriveConsistencyCheckFaile
F1010 d

F0531 fltStorageRaidBatteryInoperable
F0997 fltStorageRaidBatteryDegraded

F0998 fltStorageRaidBatteryRelearnAborted

F0999 fltStorageRaidBatteryRelearnFailed
F0434 fltEquipmentFanMissing

F0395 fltEquipmentFanPerfThresholdNonCritical
F0396 fltEquipmentFanPerfThresholdCritical

fltEquipmentFanPerfThresholdNonRecovera
F0397 ble

F0502 fltMemoryUnitIdentityUnestablishable
F0184 fltMemoryUnitDegraded

F0185 fltMemoryUnitInoperable
F0186 fltMemoryUnitThermalThresholdNonCritical
F0187 fltMemoryUnitThermalThresholdCritical
fltMemoryUnitThermalThresholdNonRecov
F0188 erable

F0378 fltEquipmentPsuMissing
fltEquipmentPsuThermalThresholdNonCritic
F0381 al
F0383 fltEquipmentPsuThermalThresholdCritical
fltEquipmentPsuThermalThresholdNonReco
F0385 verable

F0392 fltEquipmentPsuPerfThresholdNonCritical

F0393 fltEquipmentPsuPerfThresholdCritical
fltEquipmentPsuPerfThresholdNonRecovera
F0394 ble

F0389 fltEquipmentPsuVoltageThresholdCritical

fltEquipmentPsuVoltageThresholdNonReco
F0391 verable

F0882 fltEquipmentPsuPowerThreshold

F0883 fltEquipmentPsuInputError
F0374 fltEquipmentPsuInoperable

F0407 fltEquipmentPsuIdentity

fltPowerChassisMemberChassisPsuRedunda
F0743 nceFailure
fltEquipmentChassisThermalThresholdNonC
F0410 ritical
fltEquipmentChassisThermalThresholdCritic
F0409 al
fltEquipmentChassisThermalThresholdNonR
F0411 ecoverable
F0174 fltProcessorUnitInoperable

F0842 fltProcessorUnitDisabled

fltProcessorUnitVoltageThresholdNonCritica
F0178 l
F0179 fltProcessorUnitVoltageThresholdCritical

fltProcessorUnitVoltageThresholdNonRecov
F0180 erable

F0517 fltComputePhysicalPostfailure

F0190 fltMemoryArrayVoltageThresholdCritical
fltMemoryArrayVoltageThresholdNonRecov
F0191 erable

fltEquipmentFanModuleThermalThresholdN
F0380 onCritical
fltEquipmentFanModuleThermalThresholdC
F0382 ritical
fltEquipmentFanModuleThermalThresholdN
F0384 onRecoverable

F0528 fltEquipmentPsuOffline
fltEquipmentPsuVoltageThresholdNonCritic
F0387 al

F0320 fltComputePhysicalUnidentified

F0203 fltAdaptorUnitMissing

F1005 fltStorageLocalDiskRebuildFailed
F0371 fltEquipmentFanDegraded

F0181 fltStorageFlexFlash HV Missing


F1003 fltStorageControllerPatrolReadFailed

F1008 Flex Flash Virtual Drive degraded

F1007 Flex Flash Virtual Drive Inoperable


Message Severity Cause

IOCard [location] on server [Id] is removed. critical equipmentMissing

Log capacity on Management Controller on server [id] is


[capacity] info log-capacity

Log capacity on Management Controller on server [id] is


[capacity] info log-capacity

Log capacity on Management Controller on server [id] is


[capacity] info log-capacity
Server [id] BIOS failed power-on self testServer [chassisId]
BIOS failed power-on self test. critical equipment-inoperable

Motherboard of server [id] voltage: [voltage] major voltage-problem

Motherboard of server [id] voltage: [voltage] major voltage-problem

Motherboard of server [id] voltage: [voltage] critical voltage-problem

Motherboard of server [id] voltage: [voltage] critical voltage-problem

Motherboard of server [id] power: [power] major|critical power-problem


Motherboard of server [id] : thermal: [thermal] critical thermal-problem

Motherboard of server [id] power: [operPower] critical voltageProblem

Motherboard of server [id] power: [power] critical power-problem

CMOS battery voltage on server [id] is [cmosVoltage] major voltage-problem

CMOS battery voltage on server [id] is [cmosVoltage] critical voltage-problem


IO Hub on server [Id] temperature: [thermal] minor thermal-problem

IO Hub on server [Id] temperature: [thermal] major thermal-problem

IO Hub on server [Id] temperature: [thermal] critical thermal-problem


Processor [id] on server [id] temperature:[thermal] minor thermal-problem
Processor [id] on server [id] temperature:[thermal] major thermal-problem
Processor [id] on server [id] temperature:[thermal] critical thermal-problem
IOCard [location] on server [id] temperature: [thermal] minor thermal-problem
IOCard [location] on server [id] temperature: [thermal] major thermal-problem
IOCard [location] on server [id] temperature: [thermal] critical thermal-problem
IOCard [location] on server [id] operState: [operState] major thermal-problem

Storage Controller [id] operability: [operability] critical equipment-inoperable


Local disk [id] on server [id] operability:
[operability] major equipment-inoperable

Local disk [id] on server [id] operability: [operability] major equipment-offline

Virtual drive [id] on server [id] operability: [operability] critical equipment-inoperable


Virtual drive [id] on server [id] operability: [operability] warning equipment-degraded

Virtual drive [id] on server [id] operability: [operability] major equipment-degraded

Virtual drive [id] on server [id] operability: [operability] major equipment-degraded

RAID Battery on server [id] operability:


[operability] major equipment-inoperable
Raid battery [id] on server [id] operability: [operability] major equipment-degraded

Raid battery [id] on server [id] operability: [operability] minor equipment-degraded

Raid battery [id] on server [id] operability: [operability] major equipment-degraded


Fan [id] in Fan Module [tray]-[id] under server [id] presence:
[presence] warning equipment-missing

Fan [id] in Fan Module [tray]-[id] under server [id] speed:


[perf] minor equipmentDegraded
Fan [id] in Fan Module [tray]-[id] under server [id] speed:
[perf] major equipmentDegraded

Fan [id] in Fan Module [tray]-[id] under server [id] speed:


[perf] critical equipmentInoperable

identity-
DIMM [location] on server [id] has an invalid FRU warning unestablishable
DIMM [location] on server [id]
operability: [operability] warning equipment-degraded

DIMM [location] on server [id]


operability: [operability] major equipment-inoperable
DIMM [location] on server [id]
temperature: [thermal] Info thermal-problem
DIMM [location] on server [id]
temperature: [thermal] major thermal-problem
DIMM [location] on server [id]
temperature: [thermal] critical thermal-problem

Power supply [id] in server [id] presence: [presence] warning equipment-missing


Power supply [id] in server [id] temperature: [thermal] minor thermal-problem
Power supply [id] in server [id] temperature: [thermal] warning thermal-problem
Power supply [id] in server [id] temperature: [thermal] major thermal-problem

Power supply [id] in server [id] output power: [perf] minor power-problem

Power supply [id] in server [id] output power: [perf] major power-problem
Power supply [id] in server [id] output power: [perf] critical power-problem

Power supply [id] in server [id] voltage: [voltage] major voltage-problem

Power supply [id] in server [id] voltage: [voltage] critical voltage-problem

Power supply [id] on server [id] has
exceeded its power threshold. minor | major | critical power-problem

Power supply [id] on server [id] has disconnected cable or bad
input voltage. critical power-problem
Power supply [id] in server [id] operability: [operability] major equipment-inoperable

Power supply [id] on server [id] has a malformed FRU critical fru-problem

Server [id] was configured for redundancy, but running in a


non-redundant configuration. major psu-redundancy-fail
Thermal condition on chassis [id] cause:
[thermalStateQualifier] minor thermal-problem
Thermal condition on chassis [id] cause:
[thermalStateQualifier] major thermal-problem
Thermal condition on chassis [id] cause:
[thermalStateQualifier] critical thermal-problem
Processor [id] on server [Id] operability: [operability] critical equipment-inoperable

Processor [id] on server [id] operState:


[operState] critical equipment-disabled

Processor [id] on server [Id] voltage: [voltage] minor voltage-problem


Processor [id] on server [Id] voltage: [voltage] major voltage-problem

Processor [id] on server [Id] voltage: [voltage] critical voltage-problem

Server [id] POST or diagnostic failure critical equipment-problem

Memory array [id] on server [id] voltage: [voltage] warning voltageProblem


Memory array [id] on server [id] voltage: [voltage] major voltageProblem

Fan module [tray]-[id] in server [id] temperature: [thermal] minor thermal-problem


Fan module [tray]-[id] in server [id] temperature: [thermal] major thermal-problem
Fan module [tray]-[id] in server [id] temperature: [thermal] critical thermal-problem

Power supply [id] in server [id] power: [power] warning equipment-offline


Power supply [id] in server [id] voltage: [voltage] minor voltage-problem

Server [id] Chassis open. warning equipmentProblem

Adapter [id] in server [id] presence: [presence] warning equipmentMissing

Local disk [id] on server [id] operability: [operability]. Reason:


[operQualifierReason] major equipmentOffline
Fan [id] in Fan Module [tray]-[id] under server [id] operability:
[operability] warning equipmentDegraded

Physical drive [id] on Storage Controller [id] operability:


[operability] major equipmentMissing
Local disk [id] on server [id] had a patrol read failure. Reason:
[operQualifierReason] warning equipment-inoperable

Virtual drive [id] on Storage Controller [id] operability:


[operability] warning equipment-degraded

Virtual drive [id] on Storage Controller [id] operability:


[operability] critical equipment-inoperable
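The Severity column above uses the levels info, warning, minor, major, and critical. When forwarding these faults to a monitoring system it is common to filter on a minimum level; the snippet below is a hypothetical, self-contained helper (not part of any Cisco SDK) that orders the levels as they escalate in this table.

# Hypothetical helper: rank the severity strings used in the table above.
SEVERITY_ORDER = ["info", "warning", "minor", "major", "critical"]

def at_least(severity, threshold):
    # True if `severity` is at or above `threshold` in the ordering above.
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(threshold)

# Example: keep only the fault codes that meet a "major" alerting threshold.
observed = [("F0387", "minor"), ("F0883", "critical"), ("F0374", "major")]
print([code for code, sev in observed if at_least(sev, "major")])  # ['F0883', 'F0374']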
DN Explanation

This fault typically occurs because


an I/O card is removed from the
chassis, or when the card or the
slot
sys/rack-unit-1/slot-[Id] is faulty.

This fault typically occurs because


Cisco Integrated Management
Controller (CIMC) has detected
that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is low. This
is an information-level fault and
can be ignored if you do not want
sys/rack-unit-1/mgmt/log-SEL-0 to clear the SEL at this time.

This fault typically occurs because


Cisco Integrated Management
Controller (CIMC) has detected
that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is very low.
This is an information-level fault
and can be ignored if you do not
sys/rack-unit-1/mgmt/log-SEL-0 want to clear the SEL at this time.

This fault typically occurs because


sys/rack-unit-1/mgmt/log-SEL-0 the CIMC SEL is full.
This fault typically occurs when
the server did not complete the
sys/rack-unit-1/board BIOS POST.

This fault typically occurs when


one or more motherboard input
voltages has exceeded upper
critical
sys/rack-unit-1/board thresholds.

This fault typically occurs when


one or more motherboard input
voltages has crossed lower critical
sys/rack-unit-1/board thresholds.

This fault typically occurs when


one or more motherboard input
voltages has dropped too low and
is
sys/rack-unit-1/board unlikely to recover.

This fault typically occurs when


one or more motherboard input
voltages has become too high and
sys/rack-unit-1/board is unlikely to recover.

This fault typically occurs when


the motherboard power
consumption exceeds certain
threshold limits.
When this happens, the power
usage sensors on a server detects
sys/rack-unit-1/board a problem
This fault typically occurs when
the motherboard thermal sensors
sys/rack-unit-1/board on a server detect a problem.

This fault typically occurs when


the server power sensors have
sys/rack-unit-1/board detected a problem.

This fault typically occurs when


the power sensors on a server
sys/rack-unit-1/board detect a problem.

This fault is raised when the


CMOS battery voltage has
dropped to lower than the normal
operating
range. This could impact the clock
sys/rack-unit-1/board/cpu-[Id] and other CMOS settings.

This fault is raised when the


CMOS battery voltage has
dropped quite low and is unlikely
to recover.
This impacts the clock and other
sys/rack-unit-1/board/cpu-[Id] CMOS settings.
This fault is raised when the I/O
controller temperature is outside
sys/rack-unit-1/board the upper or lower non-critical
threshold.

This fault is raised when the I/O


controller temperature is outside
the upper or lower critical
sys/rack-unit-1/board/iohub-[id] threshold.

This fault is raised when the I/O


controller temperature is outside
the recoverable range of
sys/rack-unit-1/board/iohub-[id] operation.
This fault occurs when the
processor temperature on a
server exceeds a non-critical
threshold value, but
is still below the critical threshold.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
179.6F (82C), the system will take
sys/rack-unit-1/board/cpu-[Id] that CPU offline.
This fault occurs when the
processor temperature on a rack
server exceeds a critical threshold
value. Be
aware of the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
sys/rack-unit-1/board/cpu-[Id] 179.6F (82C), the system will take
that CPU offline.
This fault occurs when the
processor temperature on a rack
server has been out of the
operating range,
and the issue is not recoverable.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
sys/rack-unit-1/board/cpu-[Id] 179.6F (82C), the system will take
that CPU offline.
This fault occurs when the
temperature of an I/O card has
exceeded a non-critical threshold
value, but is
still below the critical threshold.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
179.6F (82C), the system will take
sys/rack-unit-1/adaptor-[Id] that CPU offline
This fault occurs when the
temperature of an I/O card has
exceeded a critical threshold
value. Be aware
of the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
179.6F (82C), the system will take
sys/rack-unit-1/adaptor-[Id] that CPU offline
This fault occurs when the
temperature of an I/O card has
been out of the operating range,
and the issue
is not recoverable. Be aware of
the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
179.6F (82C), the system will take
sys/rack-unit-1/adaptor-[Id] that CPU offline.
This fault occurs when there is a
thermal problem on an I/O card.
Be aware of the following
possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/adaptor-[Id] than 95F (35C).

This fault indicates a non-


recoverable storage controller
sys/rack-unit-1/board/storage-SAS- failure. This happens when the
SLOT-[Id] storage system
This fault occurs when the local
disk has become inoperable or
has been removed while the
sys/rack-unit-1/board/storage-SAS- server was
SLOT-[Id]/pd-[Id] in use.

This fault indicates a physical disk


copyback failure. This fault could
indicate a physical drive problem
sys/rack-unit-1/board/storage-SAS- or an issue with the RAID
SLOT-[Id]/pd-[Id] configuration.

This fault indicates a non-


sys/rack-unit-1/board/storage-SAS- recoverable error with the virtual
SLOT-[Id]/vd-[Id] drive.
sys/rack-unit-1/board/storage-SAS- This fault indicates a recoverable
SLOT-[Id]/vd-[Id] error with the virtual drive.

This fault indicates a failure in the


sys/rack-unit-1/board/storage-SAS- reconstruction process of the
SLOT-[Id]/vd-[Id] virtual drive.

This fault indicates a consistency


sys/rack-unit-1/board/storage-SAS- check failure with the virtual
SLOT-[Id]/vd-[Id] drive.

This fault occurs when the RAID


sys/rack-unit-1/board/storage-SAS- battery voltage is below the
SLOT-ctrl-[Id]/raid-battery-[Id] normal operating range.
sys/rack-unit-1/board/storage-SAS- This fault indicates a controller
SLOT-ctrl-[Id]/raid-battery-[Id] battery backup unit failure.

This fault indicates that a


sys/rack-unit-1/board/storage-SAS- controller battery relearn process
SLOT-ctrl-[Id]/raid-battery-[Id] was aborted.

sys/rack-unit-1/board/storage-SAS- This fault indicates a controller


SLOT-ctrl-[Id]/raid-battery-[Id] battery relearn failure.
This fault occurs in the unlikely
sys/rack-unit-1/fan-module-1-[Id]/fan- event that a fan in a fan module
[Id] cannot be detected.

This fault occurs when the fan


speed reading from the fan
controller does not match the
desired fan speed
and is outside of the normal
operating range. This can indicate
a problem with a fan or with the
sys/rack-unit-1/fan-module-1-[Id]/fan- reading
[Id] from the fan controller.
This fault occurs when the fan
speed read from the fan
controller does not match the
desired fan speed
and has exceeded the critical
threshold and is in risk of failure.
This can indicate a problem with a
fan
sys/rack-unit-1/fan-module-1-[Id]/fan- or with the reading from the fan
[Id] controller.

This fault occurs when the fan


speed read from the fan
controller has far exceeded the
desired fan speed.
sys/rack-unit-1/fan-module-1-[Id]/fan- It usually indicates that the fan
[Id] has failed.

This fault typically occurs when a


sensor has detected an
unsupported DIMM in the server.
For example,
sys/rack-unit-1/board/memarray- the model, vendor, or revision is
[Id]/mem-[Id] not recognized.
This fault occurs when a DIMM is
in a degraded operability state.
This state typically occurs when
an
excessive number of correctable
sys/rack-unit-1/board/memarray- ECC errors are reported on the
[Id]/mem-[Id] DIMM by the server BIOS.

This fault typically occurs because


an above threshold number of
correctable or uncorrectable
errors has
sys/rack-unit-1/board/memarray- occurred on a DIMM. The DIMM
[Id]/mem-[Id] may be inoperable.
This fault occurs when the
temperature of a memory unit on
a server exceeds a non-critical
threshold
value, but is still below the critical
threshold. Be aware of the
following possible contributing
factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment.
In addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
sys/rack-unit-1/board/memarray- 179.6F (82C), the system will take
[Id]/mem-[Id] that CPU offline.
This fault occurs when the
temperature of a memory unit on
a server exceeds a critical
threshold value.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
sys/rack-unit-1/board/memarray- 179.6F (82C), the system will take
[Id]/mem-[Id] that CPU offline.
This fault occurs when the
temperature of a memory unit on
a server has been out of the
operating range,
and the issue is not recoverable.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
than 95F (35C).
• If sensors on a CPU reach
sys/rack-unit-1/board/memarray- 179.6F (82C), the system will take
[Id]/mem-[Id] that CPU offline.

This fault typically occurs when


the power supply module is either
missing or the input power to the
sys/rack-unit-1/psu-[Id] server is absent.
This fault occurs when the
temperature of a PSU module has
exceeded a non-critical threshold
value, but
is still below the critical threshold.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/psu-[Id] than 95F (35C).
This fault occurs when the
temperature of a PSU module has
exceeded a critical threshold
value. Be
aware of the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/psu-[Id] than 95F (35C).
This fault occurs when the
temperature of a PSU module has
been out of operating range, and
the issue
is not recoverable. Be aware of
the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/psu-[Id] than 95F (35C).

This fault is raised as a warning if


the current output of the PSU in a
rack server does not match the
sys/rack-unit-1/psu-[Id] desired output value.

This fault is raised as a warning if


the current output of the PSU in a
rack server does not match the
sys/rack-unit-1/psu-[Id] desired output value.
This fault is raised as a warning if
the current output of the PSU in a
rack server does not match the
sys/rack-unit-1/psu-[Id] desired output value.

This fault occurs when the PSU


voltage has exceeded the
sys/rack-unit-1/psu-[Id] specified hardware voltage rating.

This fault occurs when the PSU


voltage has exceeded the
specified hardware voltage rating
and PSU
hardware may have been
damaged as a result or may be at
sys/rack-unit-1/psu-[Id] risk of being damaged.

This fault occurs when a power


supply unit is drawing too much
sys/rack-unit-1/psu-[Id] current.

This fault occurs when a power


cable is disconnected or input
sys/rack-unit-1/psu-[Id] voltage is incorrect
This fault typically occurs when
the power supply unit is either
offline or the input/output
voltage is out
sys/rack-unit-1/psu-[Id] of range.

This fault typically occurs when


the FRU information for a power
supply unit is corrupted or
sys/rack-unit-1/psu-[Id] malformed.

This fault typically occurs when


chassis power redundancy has
sys/rack-unit-1/psu-[Id] failed.
This fault occurs under the
following condition:
• If a component within a chassis
sys/rack-unit-1 is operating outside the safe
sys/rack-unit-1/board thermal operating range.
This fault occurs under the
following condition:
• If a component within a chassis
sys/rack-unit-1 is operating outside the safe
sys/rack-unit-1/board thermal operating range.
This fault occurs under the
following condition:
• If a component within a chassis
sys/rack-unit-1 is operating outside the safe
sys/rack-unit-1/board thermal operating range.
This fault occurs in the event the
processor encounters a
catastrophic error or has
sys/rack-unit-1/board/cpu-[Id] exceeded the pre-set
sys/rack-unit-1/board thermal/power thresholds.

This fault occurs in the unlikely


sys/rack-unit-1/board/cpu-[Id] event that a processor is disabled.

This fault occurs when the


processor voltage is out of normal
operating range, but has not yet
reached a
critical stage. Normally the
processor recovers itself from this
sys/rack-unit-1/board/cpu-[Id] situation.
This fault occurs when the
processor voltage has exceeded
the specified hardware voltage
sys/rack-unit-1/board/cpu-[Id] rating.

This fault occurs when the


processor voltage has exceeded
the specified hardware voltage
rating and may
cause processor hardware
sys/rack-unit-1/board/cpu-[Id] damage.

This fault typically occurs when


the server has encountered a
diagnostic failure or an error
sys/rack-unit-1 during POST.

This fault occurs when the


memory array voltage exceeds
the specified hardware voltage
sys/rack-unit-1/board/memarray-[Id] rating.
This fault occurs when the
memory array voltage exceeded
the specified hardware voltage
rating and
potentially the memory hardware
sys/rack-unit-1/board/memarray-[Id] may be damaged.

This fault occurs when the


temperature of a fan module has
exceeded a non-critical threshold
value, but
is still below the critical threshold.
Be aware of the following
possible contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/fan-module-[Id]-[id] than 95F (35C).
This fault occurs when the
temperature of a fan module has
exceeded a critical threshold
value. Be aware
of the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/fan-module-[Id]-[id] than 95F (35C).
This fault occurs when the
temperature of a fan module has
exceeded a critical threshold
value. Be aware
of the following possible
contributing factors:
• Temperature extremes can
cause Cisco UCS equipment to
operate at reduced efficiency and
cause a
variety of problems, including
early degradation, failure of chips,
and failure of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to
become loose in their sockets.
• Cisco UCS equipment should
operate in an environment that
provides an inlet air temperature
not
colder than 50F (10C) nor hotter
sys/rack-unit-1/fan-module-[Id]-[id] than 95F (35C).

This fault typically occurs when


CIMC detects that a power supply
sys/rack-unit-1/psu-[Id] unit in a chassis is offline
This fault occurs when the PSU
voltage is out of normal operating
range, but has not reached to a
critical
stage yet. Normally the PSU will
sys/rack-unit-1/psu-[Id] recover itself from this situation.
This fault occurs when server
sys/rack-unit-1 chassis or cover has been opened.

The adapter is missing. CIMC


raises this fault when any of the
following scenarios occur:
• The endpoint reports there is no
adapter in the adaptor slot.
• The endpoint cannot detect or
communicate with the adapter in
sys/rack-unit-1/adaptor-[Id] the adaptor slot.

sys/rack-unit-1/board/storage-[Id]- This fault indicates a failure in the


SLOT-[Id]/pd-[Id] rebuild process of the Local disk.
This fault occurs when one or
more fans in a fan module are not
sys/rack-unit-1/fan-module-1-[Id]/fan- operational, but at least one fan is
[Id] operational.

This fault occurs when the Flex
sys/rack-unit-1/board/storage-FLASH- Flash drive was removed from the
ctlr-[Id]/pd-HV slot while the server was in use.
This fault indicates that the
review of the storage system for
sys/rack-unit-1/board/storage-SAS- potential physical disk errors has
SLOT-ctrl-[Id] failed.

This fault indicates a recoverable


sys/rack-unit-1/board/storage-FLASH- error with the Flex Flash virtual
ctlr-[Id]/vd-HV drive.

This fault indicates a non-


sys/rack-unit-1/board/storage-FLASH- recoverable error with the Flex
ctlr-[Id]/vd-HV Flash virtual drive.
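The DN column above is written as templates with bracketed placeholders such as [Id]. To match a concrete DN reported by a server (for example sys/rack-unit-1/board/cpu-2) back to its row in this reference, the placeholders can be treated as wildcards. The function below is a hypothetical helper sketched for that purpose using only the Python standard library; it is not part of any Cisco tooling.

import re

def dn_pattern(template):
    # Escape the literal parts of the DN template, then let each bracketed
    # placeholder (e.g. [Id]) match any single path segment.
    escaped = re.escape(template)
    return re.compile("^" + re.sub(r"\\\[[^]]*\\\]", "[^/]+", escaped) + "$")

pattern = dn_pattern("sys/rack-unit-1/board/cpu-[Id]")
print(bool(pattern.match("sys/rack-unit-1/board/cpu-2")))   # True
print(bool(pattern.match("sys/rack-unit-1/board/mem-2")))   # False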
Recommended Action Probable Cause AffectedDN

If you see this fault, take the


following actions:
Step 1 Re-seat/re-insert the I/O
card. Prior to re-inserting this server
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
Step 2 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 You may choose to clear the
SEL.

If you see this fault, take the


following action:
Step 1 You may choose to clear the
SEL.

If you see this fault, take the


following action:
Step 1 You may choose to clear the
SEL.
If you see this fault, take the
following actions:
Step 1 Connect to the CIMC WebUI
and launch the KVM console to
monitor the BIOS POST completion.
Step 2 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

This fault typically occurs when one


or more motherboard input
voltages has exceeded upper critical
thresholds.

If you see this fault, take the


following actions:
Step 1 Reseat or replace the power
supply. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
Step 2 If the issue persists, create a
tech-support file and contact TAC.

If you see this fault, take the


following action:
Step 1 Contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Verify that the server fans
are working properly.
Step 2 Wait for 24 hours to see if
the problem resolves itself.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Reseat/replace the power
supply. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
Step 2 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Create a tech-support file
and contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Replace the CMOS battery.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.

If you see this fault, take the


following action:
Step 1 Replace the CMOS battery.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
If you see this fault, take the
following actions:
Step 1 Monitor other
environmental events related to
this server and ensure the
temperature ranges are within
recommended ranges.
Step 2 If this action did not solve
the problem, contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Monitor other
environmental events related to the
server and ensure the temperature
ranges are within
recommended ranges.
Step 2 Consider turning off the
server for a while if possible.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Shut down the server
immediately.
Step 2 Create a tech-support file
and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
I/O card.
Step 2 Verify that the airflows on
the servers are not obstructed.
Step 3 Verify that the site cooling
system is operating properly.
Step 4 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 5 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
I/O card.
Step 2 Verify that the site cooling
system is operating properly.
Step 3 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
I/O card.
Step 2 Verify that the airflows on
the servers are not obstructed.
Step 3 Verify that the site cooling
system is operating properly.
Step 4 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 5 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following action:
Step 1 Review the product
specifications to determine the
temperature operating range of the
I/O card.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure that
the servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 Replace faulty I/O cards.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 7 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Reseat or replace the storage
controller. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
If you see this fault, take the
following actions:
Step 1 Insert the disk in a supported
slot.
Step 2 Remove and re-insert the
local disk.
Step 3 Replace the disk, if an
additional disk is available.
Note Prior to installing or replacing
this component, see the server-
specific Installation and Service
Guide for
prerequisites, safety
recommendations and warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Replace the physical drive
and check to see if the issue is
resolved after a rebuild. Prior to
replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
Step 2 Reseat or replace the storage
controller.
Step 3 Check configuration options
for the storage controller in the
MegaRAID ROM configuration page.

If you see this fault, take the


following actions:
Step 1 If the data on the drive is
accessible, back up and recreate the
virtual drive.
Step 2 Replace any faulty physical
drives. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
Step 3 Check for controller errors in
the MegaRAID ROM page logs.
If you see this fault, take the
following actions:
Step 1 Initiate a consistency check
on the virtual drive.
Step 2 Replace any faulty physical
drives. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.

If you see this fault, take the


following action:
Step 1 Restart the reconstruction
process.

If you see this fault, take the


following actions:
Step 1 Initiate a consistency check
on the virtual drive.
Step 2 Replace any faulty physical
drives. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.

If you see this fault, take the


following actions:
Step 1 Replace the RAID battery.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings
Step 2 If the above action did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following action:
Step 1 Reseat or replace the battery
backup unit on the storage
controller.
Prior to replacing this component,
see the server-specific Installation
and Service Guide for
prerequisites, safety
recommendations and warnings.

If you see this fault, take the


following actions:
Step 1 Restart the relearn process
for the battery backup unit.
Step 2 Reseat or replace the battery
backup unit. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
Step 3 Replace the battery backup
unit if it has exceeded 100 relearn
cycles.

If you see this fault, take the


following actions:
Step 1 Restart the relearn process
for the battery backup unit.
Step 2 Reseat or replace the battery
backup unit. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
Step 3 Replace the battery backup
unit if it has exceeded 100 relearn
cycles.
If you see this fault, take the
following actions:
Step 1 Insert/re-insert the fan
module in the slot that is reporting
the issue.
Step 2 Replace the fan module with
a different fan module, if available.
Note Prior to installing or replacing
this component, see the server-
specific Installation and Service
Guide for
prerequisites, safety
recommendations and warnings.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a
long period of time or if other fans
do not show the same problem,
reseat
the fan.
Step 3 Replace the fan module.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations, warnings
and procedures.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a
long period of time or if other fans
do not show the same problem,
reseat
the fan.
Step 3 Replace the fan module.
Prior to replacing this component,
see the server specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Replace the fan. Prior to
replacing this component, see the
server-specific Installation and
Service Guide
for prerequisites, safety
recommendations and warnings.
Step 2 If the above action did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following action:
Step 1 Verify if the DIMM is
supported on the server
configuration. If the DIMM is not
supported on the server
configuration, contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Monitor the DIMM for
further ECC errors. If the high
number of errors persists, there is a
high
possibility of the DIMM becoming
inoperable.
Step 2 If the DIMM becomes
inoperable, replace the DIMM. You
can use the CIMC WebUI to locate
the faulty
DIMM. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations, warnings and
procedures.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Review the SEL statistics on
the DIMM to determine which
threshold was crossed.
Step 2 If necessary, replace the
DIMM. You can use the CIMC
WebUI to locate the faulty DIMM.
Prior to
replacing this component, see the
server-specific Installation and
Service Guide for prerequisites,
safety
recommendations, warnings and
procedures.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
server.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
servers have adequate airflow,
including
front and back clearance.
Step 3 Verify that the airflows on
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Check to see if the power
supply is connected to a power
source.
Step 2 If the PSU is physically
present in the slot, remove and
then re-insert it.
Step 3 If the PSU is not physically
present in the slot, insert a new
PSU.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
PSU module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
PSU modules have adequate
airflow,
including front and back clearance.
Step 3 Verify that the airflows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 Replace faulty PSU modules.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 7 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
PSU module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
PSU modules have adequate
airflow,
including front and back clearance.
Step 3 Verify that the airflows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 Replace faulty PSU modules.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 7 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
PSU module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the
PSU modules have adequate
airflow,
including front and back clearance.
Step 3 Verify that the airflows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 Replace faulty PSU modules.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 7 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and
reseat the PSU.
Step 3 If the above action did not
resolve the issue, create a tech-
support file for the chassis, and
contact Cisco
TAC.

If you see this fault, take the


following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and
reseat the PSU.
Step 3 If the above action did not
resolve the issue, create a tech-
support file for the chassis, and
contact Cisco
TAC.
If you see this fault, take the
following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and
reseat the PSU.
Step 3 If the above action did not
resolve the issue, create a tech-
support file for the chassis, and
contact Cisco
TAC.

If you see this fault, take the


following actions:
Step 1 Remove and reseat the PSU.
Step 2 Replace the PSU. Prior to
replacing this component, see the
server-specific Installation and
Service
Guide for prerequisites, safety
recommendations and warnings.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Remove and reseat the PSU.
Step 2 If the above action did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, create a tech-


support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Check if the power cable is
disconnected.
Step 2 Check if the input voltage is
within the correct range mentioned in
the server-specific Installation and
Service Guide.
Step 3 Re-insert the PSU.
Step 4 If these actions did not solve
the problem, create a tech-support
file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Verify that the power cord is
properly connected to the PSU and
the power source.
Step 2 Verify that the power source
is 220/110 volts.
Step 3 Remove the PSU and re-
install it.
Step 4 Replace the PSU.
Note Prior to re-installing or
replacing this component, see the
server-specific Installation and
Service Guide
for prerequisites, safety
recommendations and warnings.
Step 5 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.

If you see this fault, take the


following actions:
Step 1 Check the server-specific
Installation and Service Guide for
the power supply vendor
specification.
Step 2 If the above action did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Consider adding more PSUs
to the chassis.
Step 2 Replace any non-functional
PSUs. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the Cisco UCS Site
Preparation Guide and ensure the
server has adequate airflow,
including front
and back clearance.
Step 2 Verify that the air flows of
the servers are not obstructed.
Step 3 Verify that the site cooling
system is operating properly.
Step 4 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a
"Thermal Sensor threshold crossing
in the front or back pane" error for
the servers,
check if thermal faults have been
raised. Those faults include details
of the thermal condition.
Step 7 If the fault reports a "Missing
or Faulty Fan" error, check on the
status of that fan. If it needs
replacement,
create a tech-support file for the
chassis and contact Cisco TAC.
Step 8 If the above actions did not
resolve the issue and the condition
persists, create a tech-support file
for
the chassis and contact Cisco TAC.
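For the temperature check in Step 5, the readings and their thresholds can be pulled from the management controller instead of read manually. A minimal sketch, assuming the standard Redfish Thermal resource; host, credentials, and chassis ID are placeholders.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials

thermal = requests.get(f"https://{BMC_HOST}/redfish/v1/Chassis/1/Thermal",
                       auth=AUTH, verify=False, timeout=15).json()
for sensor in thermal.get("Temperatures", []):
    reading = sensor.get("ReadingCelsius")
    warn = sensor.get("UpperThresholdNonCritical")
    crit = sensor.get("UpperThresholdCritical")
    if reading is None:
        continue
    if crit is not None and reading >= crit:
        level = "CRITICAL"
    elif warn is not None and reading >= warn:
        level = "warning"
    else:
        level = "ok"
    print(f"{sensor.get('Name')}: {reading} C ({level})")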
If you see this fault, take the
following actions:
Step 1 Review the Cisco UCS Site
Preparation Guide and ensure the
server has adequate airflow,
including front
and back clearance.
Step 2 Verify that the air flows of
the servers are not obstructed.
Step 3 Verify that the site cooling
system is operating properly.
Step 4 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a
"Thermal Sensor threshold crossing
in the front or back pane" error for
the servers,
check if thermal faults have been
raised. Those faults include details
of the thermal condition.
Step 7 If the fault reports a "Missing
or Faulty Fan" error, check on the
status of that fan. If it needs
replacement,
create a tech-support file for the
chassis and contact Cisco TAC.
Step 8 If the above actions did not
resolve the issue and the condition
persists, create a tech-support file
for
the chassis and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the Cisco UCS Site
Preparation Guide and ensure the
server has adequate airflow,
including front
and back clearance.
Step 2 Verify that the air flows of
the servers are not obstructed.
Step 3 Verify that the site cooling
system is operating properly.
Step 4 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a
"Thermal Sensor threshold crossing
in the front or back pane" error for
the servers,
check if thermal faults have been
raised. Those faults include details
of the thermal condition.
Step 7 If the fault reports a "Missing
or Faulty Fan" error, check on the
status of that fan. If it needs
replacement,
create a tech-support file for the
chassis and contact Cisco TAC.
Step 8 If the above actions did not
resolve the issue and the condition
persists, create a tech-support file
for
the chassis and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 If the indicated probable cause is a thermal problem, check that the airflow to the server is not obstructed and that the server is adequately ventilated. If possible, check that the heat sink is properly seated on the processor.
Step 2 If the indicated probable cause is inoperable equipment, contact Cisco TAC for further instructions.
Step 3 If the indicated probable cause is a power or voltage problem, check whether the issue is resolved with an alternate power supply. If this does not resolve the issue, contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Re-seat the processor.
Step 2 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take these actions:
Step 1 Monitor the processor for
further degradation.
Step 2 Review the SEL statistics on
the CPU to determine which
threshold was crossed.
Step 3 Replace the power supply.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations, and
warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
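For Step 2, the SEL can also be read remotely. A minimal sketch, assuming the standard Redfish log-service layout; the SEL service ID varies by BMC, so the collection is walked rather than hard-coding a path, and the host and credentials are placeholders.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials
BASE = f"https://{BMC_HOST}"

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False, timeout=15).json()

for svc in get("/redfish/v1/Systems/1/LogServices").get("Members", []):
    refs = get(svc["@odata.id"] + "/Entries").get("Members", [])
    for ref in refs[-50:]:  # most recent entries only; ordering is implementation-defined
        entry = ref if "Message" in ref else get(ref["@odata.id"])
        message = entry.get("Message", "")
        if "cpu" in message.lower() or "processor" in message.lower():
            print(entry.get("Created"), entry.get("Severity"), message)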
If you see this fault, take the
following actions:
Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on
the CPU to determine which
threshold was crossed.
Step 3 Replace the power supply.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations, and
warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Create a tech-support file
and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Check the POST result for the
server.
Step 2 Reboot the server.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
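The reboot in Step 2 can be issued remotely with the standard Redfish ComputerSystem.Reset action. A minimal sketch with placeholder BMC details; the system ID "1" varies by platform.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials

# Allowable ResetType values can be read from /redfish/v1/Systems/1 beforehand.
url = f"https://{BMC_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"
resp = requests.post(url, json={"ResetType": "GracefulRestart"},
                     auth=AUTH, verify=False, timeout=30)
print(resp.status_code, resp.text)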
If you see this fault, take the following actions:
Step 1 Review the SEL statistics on
the DIMM to determine which
threshold was crossed.
Step 2 Monitor the memory array
for further degradation.
Step 3 Replace the power supply.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations, and
warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the SEL statistics on
the DIMM to determine which
threshold was crossed.
Step 2 Monitor the memory array
for further degradation.
Step 3 Replace the power supply.
Prior to replacing this component,
see the server-specific Installation
and
Service Guide for prerequisites,
safety recommendations and
warnings.
Step 4 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
fan module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the fan
modules have adequate airflow,
including front and back clearance.
Step 3 Verify that the air flows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Power off unused rack
servers.
Step 6 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
fan module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the fan
modules have adequate airflow,
including front and back clearance.
Step 3 Verify that the air flows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Power off unused rack
servers.
Step 6 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
fan module.
Step 2 Review the Cisco UCS Site
Preparation Guide to ensure the fan
modules have adequate airflow,
including front and back clearance.
Step 3 Verify that the air flows are
not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Power off unused rack
servers.
Step 6 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
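Before replacing fan modules (Step 7), the fan readings and health can be confirmed remotely. A minimal sketch, assuming the standard Redfish Thermal resource; host, credentials, and chassis ID are placeholders.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials

thermal = requests.get(f"https://{BMC_HOST}/redfish/v1/Chassis/1/Thermal",
                       auth=AUTH, verify=False, timeout=15).json()
for fan in thermal.get("Fans", []):
    status = fan.get("Status", {})
    print(fan.get("Name") or fan.get("FanName"),
          fan.get("Reading"), fan.get("ReadingUnits"),
          "state:", status.get("State"), "health:", status.get("Health"))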
If you see this fault, take the following actions:
Step 1 Verify that the power cord is
properly connected to the PSU and
the power source.
Step 2 Verify that the power source
is 220 volts.
Step 3 Verify that the PSU is
properly installed.
Step 4 Remove the PSU and
reinstall it.
Step 5 Replace the PSU.
Step 6 If the above actions did not
resolve the issue, note down the
type of PSU, create a tech-support
file, and
contact Cisco Technical Support.
If you see this fault, take the following actions:
Step 1 Monitor the PSU for further
degradation.
Step 2 Remove and reseat the PSU.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
Make sure that the server
chassis/cover is in place.
If you see this fault, take the following actions:
Step 1 Make sure an adapter is
inserted in the adaptor slot in the
server.
Step 2 Check whether the adapter is
connected and configured properly
and is running the recommended
firmware version.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Restart the rebuild process.
If you see this fault, take the
following actions:
Step 1 Review the product
specifications to determine the
temperature operating range of the
fan module.
Step 2 Review the Cisco UCS Site
Preparation Guide and ensure the
fan module has adequate airflow,
including
front and back clearance.
Step 3 Verify that the air flows of
the servers are not obstructed.
Step 4 Verify that the site cooling
system is operating properly.
Step 5 Clean the installation site at
regular intervals to avoid buildup of
dust and debris, which can cause a
system to overheat.
Step 6 Replace the faulty fan
modules. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
Step 7 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported
slot.
Step 2 Replace the disk, if an
additional drive is available.
Note Prior to installing or replacing
this component, see the server-
specific Installation and Service
Guide for
prerequisites, safety
recommendations and warnings.
Step 3 If the above actions did not
resolve the issue, create a tech-
support file and contact Cisco TAC.
If you see this fault, take the
following actions:
Step 1 Initiate a consistency check
on the virtual drive.
Step 2 Replace any faulty physical
drives. Prior to replacing this
component, see the server-specific
Installation
and Service Guide for prerequisites,
safety recommendations and
warnings.
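The state of the physical and virtual drives referenced above can be checked remotely before parts are replaced. A minimal sketch, assuming the standard Redfish Storage model; host, credentials, and system ID are placeholders.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials
BASE = f"https://{BMC_HOST}"

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False, timeout=15).json()

for ref in get("/redfish/v1/Systems/1/Storage").get("Members", []):
    controller = get(ref["@odata.id"])
    for drive_ref in controller.get("Drives", []):
        drive = get(drive_ref["@odata.id"])
        print("drive:", drive.get("Name"), drive.get("Status", {}).get("Health"))
    volumes = controller.get("Volumes", {}).get("@odata.id")
    if volumes:
        for vol_ref in get(volumes).get("Members", []):
            vol = get(vol_ref["@odata.id"])
            print("virtual drive:", vol.get("Name"), vol.get("Status", {}).get("Health"))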
If you see this fault, take the following actions:
Step 1 Synchronize the virtual drive
using Cisco UCS SCU to make the VD
optimal.
Step 2 Replace any faulty Flex Flash
drives. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
If you see this fault, take the following actions:
Step 1 If the data on the drive is
accessible, back up and recreate the
virtual drive.
Step 2 Replace any faulty Flex Flash
drives. Prior to replacing this
component, see the server-specific
Installation and Service Guide for
prerequisites, safety
recommendations and warnings.
Step 3 Synchronize the virtual drive
using Cisco UCS-SCU to make the
VD optimal.
Description Sensors-SL1 Sensors-SL2 Sensors-AL1 Sensors-AL2 Sensors-Alpine Sensors-Amador Comment
Fault code Fault Name Message Severity
IOCard [location] on
F0376 fltEquipmentIOCardRemoved [serverId] is removed. critical
Log capacity on
Management Controller
F0460 fltSysdebugMEpLogMEpLogLog on [serverid] is [capacity] info
Log capacity on
Management Controller
F0461 fltSysdebugMEpLogMEpLogVeryLow on [serverid] is [capacity] info
Log capacity on
Management Controller
F0462 fltSysdebugMEpLogMEpLogFull on [serverid] is [capacity] info
[Serverid] BIOS failed
power-on self testServer
[chassisId] BIOS failed
F0313 fltComputePhysicalBiosPostTimeout power-on self test. critical
Motherboard of
fltComputeBoardMotherBoardVoltageUpperTh [serverid] voltage:
F0920 resholdCritical [voltage] major
Motherboard of
fltComputeBoardMotherBoardVoltageLowerTh [serverid] voltage:
F0921 resholdCritical [voltage] major
Motherboard of
fltComputeBoardMotherBoardVoltageThreshol [serverid] voltage:
F0919 dLowerNonRecoverable [voltage] critical
Motherboard of
fltComputeBoardMotherBoardVoltageThreshol [serverid] voltage:
F0918 dUpperNonRecoverable [voltage] critical
Motherboard of
F1040 fltComputeBoardPowerUsageProblem [serverid] power: [power] major|critical
Motherboard of minor |
[serverid] : thermal: major |
F0869 fltComputeBoardThermalProblem [thermal] critical
Motherboard of
[serverid] power:
F0310 fltComputeBoardPowerError [operPower] critical
Motherboard of
F0868 fltComputeBoardPowerFail [serverid] power: [power] critical
CMOS battery voltage on
[serverid] is
F0424 fltComputeBoardCmosVoltageThresholdCritical [cmosVoltage] major
CMOS battery voltage on
fltComputeBoardCmosVoltageThresholdNonRe [serverid] is
F0425 coverable [cmosVoltage] critical
IO Hub on [serverId]
F0538 fltComputeIOHubThermalNonCritical temperature: [thermal] minor
IO Hub on [serverId]
F0539 fltComputeIOHubThermalThresholdCritical temperature: [thermal] major
fltComputeIOHubThermalThresholdNonRecove IO Hub on [serverId]
F0540 rable temperature: [thermal] critical
Processor [id] on
[serverid] temperature:
F0175 fltProcessorUnitThermalNonCritical [thermal] minor
Processor [id] on
[serverid] temperature:
F0176 fltProcessorUnitThermalThresholdCritical [thermal] major
Processor [id] on
fltProcessorUnitThermalThresholdNonRecover [serverid] temperature:
F0177 able [thermal] critical
IOCard [location] on
fltEquipmentIOCardThermalThresholdNonCritic [serverid] temperature:
F0729 al [thermal] minor
IOCard [location] on
[serverid] temperature:
F0730 fltEquipmentIOCardThermalThresholdCritical [thermal] major
IOCard [location] on
fltEquipmentIOCardThermalThresholdNonReco [serverid] temperature:
F0731 verable [thermal] critical
IOCard [location] on
server [id] operState:
F0379 fltEquipmentIOCardThermalProblem [operState] major
Storage Controller [id]
F1004 fltStorageControllerInoperable operability: [operability] critical
Local disk [id] on
[serverid] operability: major|
F0181 fltStorageLocalDiskInoperable [operability] warning
Local disk [id] on
[serverid] operability:
F1006 fltStorageLocalDiskCopybackFailed [operability] major
Virtual drive [id] on
[serverid] operability:
F1007 fltStorageVirtualDriveInoperable [operability] critical
Virtual drive [id] on
[serverid] operability:
F1008 fltStorageVirtualDriveDegraded [operability] warning
Virtual drive [id] on
[serverid] operability:
F1009 fltStorageVirtualDriveReconstructionFailed [operability] major
Virtual drive [id] on
[serverid] operability:
F1010 fltStorageVirtualDriveConsistencyCheckFailed [operability] major
RAID Battery on
[serverid] operability:
F0531 fltStorageRaidBatteryInoperable [operability] major
Raid battery [id] on
[serverid] operability:
F0997 fltStorageRaidBatteryDegraded [operability] major
Raid battery [id] on
[serverid] operability:
F0998 fltStorageRaidBatteryRelearnAborted [operability] minor
Raid battery [id] on
[serverid] operability:
F0999 fltStorageRaidBatteryRelearnFailed [operability] major
Fan [id] in Fan Module
[tray]-[id] under
[serverid] presence:
F0434 fltEquipmentFanMissing [presence] warning
Fan [id] in Fan Module
[tray]-[id] under
F0395 fltEquipmentFanPerfThresholdNonCritical [serverid] speed: [perf] minor
Fan [id] in Fan Module
[tray]-[id] under
F0396 fltEquipmentFanPerfThresholdCritical [serverid] speed: [perf] major
Fan [id] in Fan Module
[tray]-[id] under
F0397 fltEquipmentFanPerfThresholdNonRecoverable [serverid] speed: [perf] critical
DIMM [location] on
[serverid] has an invalid
F0502 fltMemoryUnitIdentityUnestablishable FRU warning
DIMM [location] on
[serverid]
F0184 fltMemoryUnitDegraded operability: [operability] warning
DIMM [location] on
[serverid]
F0185 fltMemoryUnitInoperable operability: [operability] major
DIMM [location] on
[serverid]
F0186 fltMemoryUnitThermalThresholdNonCritical temperature: [thermal] info
DIMM [location] on
[serverid]
F0187 fltMemoryUnitThermalThresholdCritical temperature: [thermal] major
DIMM [location] on
fltMemoryUnitThermalThresholdNonRecovera [serverid]
F0188 ble temperature: [thermal] critical
Power supply [id] in
[serverid] presence:
F0378 fltEquipmentPsuMissing [presence] warning
Power supply [id] in
[serverid] temperature:
F0381 fltEquipmentPsuThermalThresholdNonCritical [thermal] minor
Power supply [id] in
[serverid] temperature:
F0383 fltEquipmentPsuThermalThresholdCritical [thermal] major
Power supply [id] in
fltEquipmentPsuThermalThresholdNonRecover [serverid] temperature:
F0385 able [thermal] critical
Power supply [id] in
[serverid] output power:
F0392 fltEquipmentPsuPerfThresholdNonCritical [perf] minor
Power supply [id] in
[serverid] output power:
F0393 fltEquipmentPsuPerfThresholdCritical [perf] major
Power supply [id] in
[serverid] output power:
F0394 fltEquipmentPsuPerfThresholdNonRecoverable [perf] critical
Power supply [id] in
[serverid] voltage:
F0389 fltEquipmentPsuVoltageThresholdCritical [voltage] major
Power supply [id] in
fltEquipmentPsuVoltageThresholdNonRecovera [serverid] voltage:
F0391 ble [voltage] critical
Power supply [id] on
[serverid] has minor |
exceeded its power major |
F0882 fltEquipmentPsuPowerThreshold threshold. critical
Power supply [id] on
[serverid] has
disconnected cable or
F0883 fltEquipmentPsuInputError bad input voltage. critical
Power supply [id] in
[serverid] operability:
F0374 fltEquipmentPsuInoperable [operability] major
Power supply [id] on
[serverid] has a
F0407 fltEquipmentPsuIdentity malformed FRU critical
[Serverid] was configured
for redundancy, but
fltPowerChassisMemberChassisPsuRedundance running in a non-
F0743 Failure redundant configuration. major
Thermal condition on
fltEquipmentChassisThermalThresholdNonCriti [serverid] cause:
F0410 cal [thermalStateQualifier] minor
Thermal condition on
[serverid] cause:
F0409 fltEquipmentChassisThermalThresholdCritical [thermalStateQualifier] major
Thermal condition on
fltEquipmentChassisThermalThresholdNonReco [serverid] cause:
F0411 verable [thermalStateQualifier] critical
Processor [id] on
[serverId] operability:
F0174 fltProcessorUnitInoperable [operability] critical|major
Processor [id] on
[serverid] operState:
F0842 fltProcessorUnitDisabled [operState] info
Processor [id] on
[serverId] voltage:
F0178 fltProcessorUnitVoltageThresholdNonCritical [voltage] minor
Processor [id] on
[serverId] voltage:
F0179 fltProcessorUnitVoltageThresholdCritical [voltage] major
Processor [id] on
fltProcessorUnitVoltageThresholdNonRecovera [serverId] voltage:
F0180 ble [voltage] critical
[Serverid] POST or
F0517 fltComputePhysicalPostfailure diagnostic failure critical
Memory array [id] on
[serverid] voltage:
F0190 fltMemoryArrayVoltageThresholdCritical [voltage] major
Memory array [id] on
fltMemoryArrayVoltageThresholdNonRecovera [serverid] voltage:
F0191 ble [voltage] critical
Fan module [tray]-[id] in
fltEquipmentFanModuleThermalThresholdNon [serverid] temperature:
F0380 Critical [thermal] minor
Fan module [tray]-[id] in
fltEquipmentFanModuleThermalThresholdCriti [serverid] temperature:
F0382 cal [thermal] major
Fan module [tray]-[id] in
fltEquipmentFanModuleThermalThresholdNon [serverid] temperature:
F0384 Recoverable [thermal] critical
Power supply [id] in
F0528 fltEquipmentPsuOffline [serverid] power: [power] warning
Power supply [id] in
[serverid] voltage:
F0387 fltEquipmentPsuVoltageThresholdNonCritical [voltage] minor
F0320 fltComputePhysicalUnidentified [Serverid] Chassis open. warning
Adapter [id] in [serverid]
F0203 fltAdaptorUnitMissing presence: [presence] warning
Local disk [id] on
[serverid] operability:
[operability]. Reason:
F1005 fltStorageLocalDiskRebuildFailed [operQualifierReason] major
Fan [id] in Fan Module
[tray]-[id] under server
[id] operability:
F0371 fltEquipmentFanDegraded [operability] warning
Local disk [id] on
FlexFlash Controller [id]
F0181 fltStorageFlexFlashLocalDiskMissing operability: [operability] info
Local disk [id] on server
[id] had a patrol read
failure. Reason:
F1003 fltStorageControllerPatrolReadFailed [operQualifierReason] warning
Virtual drive [id] on
FlexFlash Controller [id]
F1008 fltStorageFlexFlashVirtualDriveHVDegraded operability: [operability] warning
Virtual drive [id] on
Storage Controller [id]
F1007 fltStorageFlexFlashVirtualDriveHVInoperable operability: [operability] critical
Local disk [id] on
[serverid] operability:
F1256 fltStorageLocalDiskMissing [operability] info
Local disk [id] on
[serverid] operability:
F0996 fltStorageLocalDiskDegraded [operability] warning
Probable Cause DN Description
[sensor_name]: PCI Slot [id]
equipment- sys/rack-unit-1/equipped-slot- riser or card missing: reseat or
removed [Id] replace pci card [id]
sys/rack-unit-1/mgmt/log-SEL-
log-capacity 0 System Event log is going low
sys/rack-unit-1/mgmt/log-SEL- System Event log capacity is
log-capacity 0 very low
sys/rack-unit-1/mgmt/log-SEL- System Event log is Full: Clear
log-capacity 0 the log
equipment- BIOS POST Timeout occurred:
inoperable sys/rack-unit-1/board Contact Cisco TAC
Stand-by voltage (xV) to the
motherboard is upper critical:
Check the power supply
Auxiliary voltage (xV) to the
motherboard is upper critical:
Check the power supply
Motherboard voltage (xV) is
upper critical: Check the
voltage-problem sys/rack-unit-1/board power supply
"Stand-by voltage ([Val] V) to
the motherboard is lower
critical: Check the power
supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
critical: Check the power
supply
Motherboard voltage ([Val] V)
is lower critical: Check the
voltage-problem sys/rack-unit-1/board power supply"
"Stand-by voltage ([Val] V) to
the motherboard is lower
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is lower non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"
"Stand-by voltage ([Val] V) to
the motherboard is upper
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is upper
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is upper non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"
"Motherboard Power usage is
upper critical: Check hardware
Motherboard Power usage is
upper non-recoverable: Check
power-problem sys/rack-unit-1/board hardware"
Motherboard chipset
inoperable due to high
thermal-problem sys/rack-unit-1/board temperature
P[Id]V[Id]_AU[Id]_PWRGD:
Voltage rail Power Good
dropped due to PSU or HW
failure, please contact CISCO
power-problem sys/rack-unit-1/board TAC for assistance
The server failed to power
power-problem sys/rack-unit-1/board ON: Check Power Supply
Battery voltage level is upper
voltage-problem sys/rack-unit-1/board/cpu-[Id] critical: Replace battery
Battery voltage level is upper
non-recoverable: Replace
voltage-problem sys/rack-unit-1/board/cpu-[Id] battery
[sensor_name]: Motherboard
sys/rack-unit-1/board chipset temperature is upper
thermal-problem non-critical
[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board critical
[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board non-recoverable
Processor [Id] Thermal
threshold has crossed upper
non-critical threshold: Check
thermal-problem sys/rack-unit-1/board/cpu-[Id] cooling
Processor [Id] Thermal
threshold has crossed upper
sys/rack-unit-1/board/cpu-[Id] critical threshold: Check
thermal-problem cooling
Processor [Id] Thermal
sys/rack-unit-1/board/cpu-[Id] threshold has crossed a preset
thermal-problem threshold: Check cooling
Adaptor Unit [Id]
Temperature is non critical :
thermal-problem sys/rack-unit-1/adaptor-[Id] Check Cooling
Adaptor Unit [id]
Temperature is critical : Check
thermal-problem sys/rack-unit-1/adaptor-[Id] Cooling
Adaptor Unit [id]
Temperature is non
thermal-problem sys/rack-unit-1/adaptor-[Id] recoverable : Check Cooling
[sensor_name]: Adaptor Unit
[Id] is inoperable due to high
thermal-problem sys/rack-unit-1/adaptor-[Id] temperature : Check Cooling
Storage controller SLOT-[Id]
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
inoperable SAS-SLOT-[Id] the storage controller
equipment-
inoperable | Storage Local disk [Id] is
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
Storage Local disk [Id] is
sys/rack-unit-1/board/storage- inoperable: reseat or replace
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
Storage Virtual Drive [Id] is
inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
inoperable SAS-SLOT-[Id]/vd-[Id] storage drive
Storage Virtual Drive [Id] is
inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
degraded SAS-SLOT-[Id]/vd-[Id] storage drive
Storage Virtual Drive [Id]
reconstruction failed: Check
equipment- sys/rack-unit-1/board/storage- storage controller, or reseat
degraded SAS-SLOT-[Id]/vd-[Id] the storage drive
Storage Virtual Drive [Id]
Consistency Check Failed:
equipment- sys/rack-unit-1/board/storage- please check the controller, or
degraded SAS-SLOT-[Id]/vd-[Id] reseat the physical drives
Storage Raid battery [Id]
equipment- sys/rack-unit-1/board/storage- inoperable: check the raid
inoperable SAS-SLOT-[Id]/raid-battery battery
Storage Raid battery [Id]
equipment- sys/rack-unit-1/board/storage- Degraded: check the raid
degraded SAS-SLOT-[id]/raid-battery battery
Storage Raid battery [Id]
equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery
Storage Raid battery [id]
equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery
equipment- sys/rack-unit-1/fan-module-1- Fan [id] missing: reseat or
missing [Id]/fan-[Id] replace fan [Id]
Fan speed for fan-[Id] in Fan
Module [Id]-[Id] is lower non
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non critical : Check the air
problem [Id]/fan-[Id] intake to the server
Fan speed for fan-[Id] in Fan
Module [Id]-[Id] is lower
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- critical : Check the air intake
problem [Id]/fan-[Id] to the server
Fan speed for fan-[Id] in Fan
Module [Id]-[Id] is lower non
recoverable : Check the air
intake to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non recoverable : Check the
problem [Id]/fan-[Id] air intake to the server
[sensor_name]: Memory Riser
[Id] missing: reseat or replace
memory riser [Id]
[sensor_name]: Memory Unit
identity- sys/rack-unit-1/board/ [Id] missing: reseat or replace
unestablishable memarray-[Id]/mem-[Id] physical memory [Id]
equipment- sys/rack-unit-1/board/ DIMM [Id] is degraded : Check
degraded memarray-[Id]/mem-[Id] or replace DIMM
equipment- sys/rack-unit-1/board/ DIMM [Id] is inoperable :
inoperable memarray-[Id]/mem-[Id] Check or replace DIMM
Memory Unit [Id]
temperature is upper non
critical:Check Cooling
%s: Memory riser %d Thermal
sys/rack-unit-1/board/ Threshold at upper non
thermal-problem memarray-[Id]/mem-[Id] critical levels: Check Cooling
Memory Unit [Id]
temperature is upper
critical:Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper critical levels: Check
thermal-problem memarray-[Id]/mem-[Id] Cooling
Memory Unit [Id]
temperature is upper non
recoverable: Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper non recoverable levels:
thermal-problem memarray-[Id]/mem-[Id] Check Cooling
equipment- Power Supply [Id] missing:
missing sys/rack-unit-1/psu-[Id] reseat or replace PS [id]
Power Supply [Id]
temperature is upper non
thermal-problem sys/rack-unit-1/psu-[Id] critical : Check cooling
Power Supply [Id]
temperature is upper critical :
thermal-problem sys/rack-unit-1/psu-[Id] Check cooling
Power Supply [Id]
temperature is upper non
recoverable : Check Power
thermal-problem sys/rack-unit-1/psu-[Id] Supply Status
Power Supply [Id] output
power is upper non critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply
Power Supply [Id] output
power is upper critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply
Power Supply [Id] output
power is upper non
recoverable : Reseat or
power-problem sys/rack-unit-1/psu-[Id] replace Power Supply
Power Supply [Id] Voltage is
upper critical : Reseat or
voltage-problem sys/rack-unit-1/psu-[Id] replace Power Supply
Power Supply [Id] Voltage is
upper non Recoverable :
Reseat or replace Power
voltage-problem sys/rack-unit-1/psu-[Id] Supply
Power Supply [Id] current is
upper non critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper non recoverable :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply
Power supply [Id] is in a
degraded state, or has bad
power-problem sys/rack-unit-1/psu-[Id] input voltage
Power Supply [Id] has lost
equipment- input or input is out of range :
inoperable sys/rack-unit-1/psu-[Id] Check input to PS or replace
[sensor_name]: Power Supply
[Id] Vendor/Revision/Rating
mismatch, or PSU Processor
missing : Replace PS or Check
fru-problem sys/rack-unit-1/psu-[Id] Processor [Id]
Power Supply redundancy is
psu-redundancy- lost : Reseat or replace Power
fail sys/rack-unit-1/psu-[Id] Supply
Front Panel Thermal
Threshold at upper non
critical levels: Check Cooling
The Front Panel temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper non-critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper critical
levels: Check Cooling
The Front Panel temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper non
recoverable levels: Check
Cooling
The Front Panel temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] inlet temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] outlet temperature
has crossed upper non-
sys/rack-unit-1 recoverable threshold: Check
thermal-problem sys/rack-unit-1/board device cooling
Processor [Id] is inoperable
due to high temperature:
Check cooling
A catastrophic fault has
occurred on one of the
processors: Please check the
processors' status.
Processor [Id] is operating at a
high temperature: Check
cooling
PVCCD_P1_VRHOT: Processor
1 is operating at a high
temperature: Check cooling
P1_LVC3_PWRGD: Voltage rail
Power Good dropped due to
PSU or HW failure, please
contact CISCO TAC for
assistance
P1_MEM23_MEMHOT:
Temperature sensor
corresponding to Processor 1
Memory 2/3 has asserted a
equipment- sys/rack-unit-1/board/cpu-[Id] Thermal Problem: Check
inoperable sys/rack-unit-1/board server cooling
Processor [Id] missing: Please
equipment- reseat or replace Processor
disabled sys/rack-unit-1/board/cpu-[Id] [Id]
Memory channel ([Id]) voltage
is upper non-critical
Processor [Id] voltage is upper
non-critical
Processor [Id] Voltage
threshold has crossed upper
non-critical threshold: Replace
the Power Supply and verify if
the issue is resolved. If the
voltage-problem sys/rack-unit-1/board/cpu-[Id] issue persists, call Cisco TAC
Memory channel ([Id]) voltage
is upper critical
Processor [Id] voltage is upper
critical
Processor [Id] Voltage
threshold has crossed upper
critical threshold: Replace the
Power Supply and verify if the
issue is resolved. If the issue
voltage-problem sys/rack-unit-1/board/cpu-[Id] persists, call Cisco TAC
Memory channel ([Id]) voltage
is upper non-recoverable
Processor [Id] voltage is upper
non-recoverable
Processor [Id] Voltage
threshold has crossed upper
non-recoverable threshold:
Replace the Power Supply and
verify if the issue is resolved.
If the issue persists, call Cisco
voltage-problem sys/rack-unit-1/board/cpu-[Id] TAC
equipment- [sensor_name]: Bios Post
problem sys/rack-unit-1 Failed: Check hardware
[sensor_name]: Memory riser
[Id] Voltage Threshold at
upper critical levels: Check
Power Supply; reseat power
connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower critical levels: Check
Power Supply; reseat power
sys/rack-unit-1/board/ connectors on the
voltage-problem memarray-[Id] motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
upper non recoverable levels:
Check Power Supply; reseat
power connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower non recoverable levels:
Check Power Supply; reseat
sys/rack-unit-1/board/ power connectors on the
voltage-problem memarray-[Id] motherboard
sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
equipment-offline sys/rack-unit-1/psu-[Id]
voltage-problem sys/rack-unit-1/psu-[Id]
[sensor_name]: server [id]
Chassis Intrusion detected:
equipment- Please secure the server
problem sys/rack-unit-1 chassis
equipment- [sensor_name]:[id] missing:
missing sys/rack-unit-1/adaptor-[Id] reseat or replace [id]
Storage Local disk [Id] is
sys/rack-unit-1/board/storage- rebuild failed: please check
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
[sensor_name]: Fan [Id] has
asserted a predictive failure:
equipment- sys/rack-unit-1/fan-module-1- reseat or replace fan [Id]
degraded [Id]/fan-[Id]
equipment- sys/rack-unit-1/board/storage-
missing FLASH-[Id]/pd-HV
Storage controller [Id] patrol
equipment- sys/rack-unit-1/board/storage- read failed: patrol read cannot
inoperable SAS-SLOT-[Id] be started
Flex Flash Virtual Drive HV
equipment- sys/rack-unit-1/board/storage- Degraded: please check the
degraded FLASH-[Id]/vd-HV flash device or the controller
Flex Flash Virtual Drive HV
equipment- sys/rack-unit-1/board/storage- inoperable: please check the
inoperable FLASH-[Id]/vd-HV flash device or the controller
Storage Local disk [Id] is
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
Storage Local disk [Id] is
degraded: please check if
equipment- sys/rack-unit-1/board/storage- rebuild or copyback of drive is
degraded SAS-SLOT-[Id]/pd-[Id] required
Explanation Recommended Action
If you see this fault, take the following actions:
Step 1 Re-seat/re-insert the I/O card. Prior to re-
inserting this server component, see the server-
specific
This fault typically occurs because an Installation and Service Guide for prerequisites, safety
I/O card is removed from the chassis, or recommendations and warnings.
when the card or the slot Step 2 If the above actions did not resolve the issue,
is faulty. create a tech-support file and contact Cisco TAC.
This fault typically occurs because Cisco
Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is low. This
is an information-level fault and can be
ignored if you do not want to clear the If you see this fault, take the following action:
SEL at this time. Step 1 You may choose to clear the SEL.
This fault typically occurs because Cisco
Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is very low.
This is an information-level fault and
can be ignored if you do not want to If you see this fault, take the following action:
clear the SEL at this time. Step 1 You may choose to clear the SEL.
This fault typically occurs because the If you see this fault, take the following action:
CIMC SEL is full. Step 1 You may choose to clear the SEL.
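Where the recommended action is to clear the SEL, this can be done remotely with the standard Redfish LogService.ClearLog action. A minimal sketch with placeholder BMC details; only log services whose path mentions the SEL are cleared, and the system ID "1" varies by platform.
import requests

BMC_HOST = "10.0.0.10"        # hypothetical CIMC address
AUTH = ("admin", "password")  # hypothetical credentials
BASE = f"https://{BMC_HOST}"

services = requests.get(BASE + "/redfish/v1/Systems/1/LogServices",
                        auth=AUTH, verify=False, timeout=15).json()
for svc in services.get("Members", []):
    if "sel" not in svc["@odata.id"].lower():
        continue  # only clear the system event log
    target = BASE + svc["@odata.id"] + "/Actions/LogService.ClearLog"
    resp = requests.post(target, json={}, auth=AUTH, verify=False, timeout=15)
    print(svc["@odata.id"], resp.status_code)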
If you see this fault, take the following actions:
Step 1 Connect to the CIMC WebUI and launch the
KVM console to monitor the BIOS POST completion.
This fault typically occurs when the Step 2 If the above actions did not resolve the issue,
server did not complete the BIOS POST. create a tech-support file and contact Cisco TAC.
This fault typically occurs when one or more motherboard input voltages have exceeded upper critical thresholds.
If you see this fault, take the following actions:
Step 1 Reseat or replace the power supply. Prior to
replacing this component, see the server-specific
Installation
This fault typically occurs when one or and Service Guide for prerequisites, safety
more motherboard input voltages has recommendations and warnings.
crossed lower critical Step 2 If the issue persists, create a tech-support file
thresholds. and contact TAC.
This fault typically occurs when one or
more motherboard input voltages has
dropped too low and is If you see this fault, take the following action:
unlikely to recover. Step 1 Contact Cisco TAC.
This fault typically occurs when one or
more motherboard input voltages has
become too high and is unlikely to If you see this fault, take the following action:
recover. Step 1 Contact Cisco TAC.
This fault typically occurs when the
motherboard power consumption
exceeds certain threshold limits.
When this happens, the power usage If you see this fault, take the following action:
sensors on a server detects a problem Step 1 Contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Verify that the server fans are working
properly.
Step 2 Wait for 24 hours to see if the problem resolves
This fault typically occurs when the itself.
motherboard thermal sensors on a Step 3 If the above actions did not resolve the issue,
server detect a problem. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Reseat/replace the power supply. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
This fault typically occurs when the recommendations and warnings.
server power sensors have detected a Step 2 If the above actions did not resolve the issue,
problem. create a tech-support file and contact Cisco TAC.
This fault typically occurs when the If you see this fault, take the following action:
power sensors on a server detect a Step 1 Create a tech-support file and contact Cisco
problem. TAC.
If you see this fault, take the following action:
This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped to lower this component, see the server-specific Installation
than the normal operating and
range. This could impact the clock and Service Guide for prerequisites, safety
other CMOS settings. recommendations and warnings.
If you see this fault, take the following action:
This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped quite low this component, see the server-specific Installation
and is unlikely to recover. and
This impacts the clock and other CMOS Service Guide for prerequisites, safety
settings. recommendations and warnings.
If you see this fault, take the following actions:
Step 1 Monitor other environmental events related to
this server and ensure the temperature ranges are
This fault is raised when the I/O within
controller temperature is outside the recommended ranges.
upper or lower non-critical Step 2 If this action did not solve the problem, contact
threshold. Cisco TAC.
If you see this fault, take the following actions:
Step 1 Monitor other environmental events related to
the server and ensure the temperature ranges are
within
recommended ranges.
Step 2 Consider turning off the server for a while if
This fault is raised when the I/O possible.
controller temperature is outside the Step 3 If the above actions did not resolve the issue,
upper or lower critical threshold. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
This fault is raised when the I/O Step 1 Shut down the server immediately.
controller temperature is outside the Step 2 Create a tech-support file and contact Cisco
recoverable range of operation. TAC.
This fault occurs when the processor
temperature on a server exceeds a non-
critical threshold value, but
is still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the processor
temperature on a rack server exceeds a
critical threshold value. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the processor
temperature on a rack server has been
out of the operating range,
and the issue is not recoverable. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has exceeded a non-
critical threshold value, but is
still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has exceeded a critical
threshold value. Be aware
of the following possible contributing
factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to become If you see this fault, take the following actions:
loose in their sockets. Step 1 Review the product specifications to determine
• Cisco UCS equipment should operate the temperature operating range of the I/O card.
in an environment that provides an inlet Step 2 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 3 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 4 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has been out of the
operating range, and the issue
is not recoverable. Be aware of the
following possible contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Review the product specifications to determine
This fault occurs when there is a the temperature operating range of the I/O card.
thermal problem on an I/O card. Be Step 2 Review the Cisco UCS Site Preparation Guide to
aware of the following possible ensure that the servers have adequate airflow,
contributing factors: including
• Temperature extremes can cause front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows on the servers are not
reduced efficiency and cause a obstructed.
variety of problems, including early Step 4 Verify that the site cooling system is operating
degradation, failure of chips, and failure properly.
of equipment. In Step 5 Clean the installation site at regular intervals to
addition, extreme temperature avoid buildup of dust and debris, which can cause a
fluctuations can cause CPUs to become system to overheat.
loose in their sockets. Step 6 Replace faulty I/O cards. Prior to replacing this
• Cisco UCS equipment should operate component, see the server-specific Installation and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Reseat or replace the storage controller. Prior
This fault indicates a non-recoverable to replacing this component, see the server-specific
storage controller failure. This happens Installation and Service Guide for prerequisites, safety
when the storage system recommendations and warnings.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the local disk.
Step 3 Replace the disk, if an additional disk is
available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
This fault occurs when the local disk has for
become inoperable or has been prerequisites, safety recommendations and warnings.
removed while the server was Step 4 If the above actions did not resolve the issue,
in use. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Replace the physical drive and check to see if
the issue is resolved after a rebuild. Prior to replacing
this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault indicates a physical disk recommendations and warnings.
copyback failure. This fault could Step 2 Reseat or replace the storage controller.
indicate a physical drive problem Step 3 Check configuration options for the storage
or an issue with the RAID configuration. controller in the MegaRAID ROM configuration page.
If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up
and recreate the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a non-recoverable Step 3 Check for controller errors in the MegaRAID
error with the virtual drive. ROM page logs.
If you see this fault, take the following actions:
Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a recoverable error and Service Guide for prerequisites, safety
with the virtual drive. recommendations and warnings.
This fault indicates a failure in the
reconstruction process of the virtual If you see this fault, take the following action:
drive. Step 1 Restart the reconstruction process.
If you see this fault, take the following actions:
Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a consistency check and Service Guide for prerequisites, safety
failure with the virtual drive. recommendations and warnings.
If you see this fault, take the following actions:
Step 1 Replace the RAID battery. Prior to replacing this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault occurs when the RAID battery recommendations and warnings
voltage is below the normal operating Step 2 If the above action did not resolve the issue,
range. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Reseat or replace the battery backup unit on
the storage controller.
Prior to replacing this component, see the server-
This fault indicates a controller battery specific Installation and Service Guide for
backup unit failure. prerequisites, safety recommendations and warnings.
If you see this fault, take the following actions:
Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates that a controller Step 3 Replace the battery backup unit if it has
battery relearn process was aborted. exceeded 100 relearn cycles.
If you see this fault, take the following actions:
Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a controller battery Step 3 Replace the battery backup unit if it has
relearn failure. exceeded 100 relearn cycles.
If you see this fault, take the following actions:
Step 1 Insert/re-insert the fan module in the slot that
is reporting the issue.
Step 2 Replace the fan module with a different fan
module, if available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
This fault occurs in the unlikely event prerequisites, safety recommendations and warnings.
that a fan in a fan module cannot be Step 3 If the above actions did not resolve the issue,
detected. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time
or if other fans do not show the same problem, reseat
This fault occurs when the fan speed the fan.
reading from the fan controller does not Step 3 Replace the fan module. Prior to replacing this
match the desired fan speed component, see the server-specific Installation and
and is outside of the normal operating Service Guide for prerequisites, safety
range. This can indicate a problem with recommendations, warnings and procedures.
a fan or with the reading Step 4 If the above actions did not resolve the issue,
from the fan controller. create a tech-support file and contact Cisco TAC.

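For the fan-speed faults above, Step 1 ("Monitor the fan status") can be scripted instead of watched by hand. The following is a minimal sketch only, assuming IPMI-over-LAN is enabled on the CIMC and the ipmitool utility is installed on the monitoring host; the address and credentials are placeholders, and the exact sensor names and line layout vary by server model.

#!/usr/bin/env python3
"""Sketch: poll fan sensor readings from a CIMC via IPMI-over-LAN (ipmitool)."""
import subprocess
import time

CIMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "password"]  # placeholders

def read_fan_sensors() -> str:
    """Return the raw 'sdr type Fan' output, one line per fan sensor."""
    return subprocess.run(CIMC + ["sdr", "type", "Fan"],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Poll once a minute and flag any fan sensor whose status field is not 'ok'.
    while True:
        for line in read_fan_sensors().splitlines():
            fields = [f.strip() for f in line.split("|")]  # name | id | status | entity | reading
            if len(fields) >= 3 and fields[2].lower() != "ok":
                print("fan sensor needs attention:", line)
        time.sleep(60)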
Explanation: This fault occurs when the fan speed read from the fan controller does not match the desired fan speed, has exceeded the critical threshold, and is at risk of failure. This can indicate a problem with a fan or with the reading from the fan controller.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time or if other fans do not show the same problem, reseat the fan.
Step 3 Replace the fan module. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the fan speed read from the fan controller has far exceeded the desired fan speed. It usually indicates that the fan has failed.
Recommended Action: If you see this fault, take the following actions:
Step 1 Replace the fan. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault typically occurs when a sensor has detected an unsupported DIMM in the server. For example, the model, vendor, or revision is not recognized.
Recommended Action: If you see this fault, take the following action:
Step 1 Verify if the DIMM is supported on the server configuration. If the DIMM is not supported on the server configuration, contact Cisco TAC.

Explanation: This fault occurs when a DIMM is in a degraded operability state. This state typically occurs when an excessive number of correctable ECC errors are reported on the DIMM by the server BIOS.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the DIMM for further ECC errors. If the high number of errors persists, there is a high possibility of the DIMM becoming inoperable.
Step 2 If the DIMM becomes inoperable, replace the DIMM. You can use the CIMC WebUI to locate the faulty DIMM. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, warnings and procedures.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
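Monitoring the DIMM for further correctable ECC errors (Step 1 above) can also be done against the SEL from a script. This is a sketch only, assuming IPMI-over-LAN is enabled on the CIMC and ipmitool is installed; the host and credentials are placeholders, and the substring match on "ECC" is a rough filter rather than a formal event decode.

#!/usr/bin/env python3
"""Sketch: count ECC-related entries in the CIMC SEL via IPMI-over-LAN (ipmitool)."""
import subprocess

CIMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "password"]  # placeholders

def ecc_event_count() -> int:
    """Count SEL entries whose text mentions ECC (correctable memory errors)."""
    sel = subprocess.run(CIMC + ["sel", "list"],
                         capture_output=True, text=True, check=True).stdout
    return sum(1 for line in sel.splitlines() if "ECC" in line.upper())

if __name__ == "__main__":
    count = ecc_event_count()
    print(f"ECC-related SEL entries: {count}")
    if count:
        print("Review the SEL and keep monitoring the affected DIMM as described above.")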
Explanation: This fault typically occurs because an above-threshold number of correctable or uncorrectable errors has occurred on a DIMM. The DIMM may be inoperable.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 If necessary, replace the DIMM. You can use the CIMC WebUI to locate the faulty DIMM. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, warnings and procedures.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the temperature of a memory unit on a server exceeds a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
• If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows on the servers are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when the temperature of a memory unit on a server exceeds a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
• If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows on the servers are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the temperature of a memory unit on a server has been out of the operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
• If sensors on a CPU reach 179.6F (82C), the system will take that CPU offline.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the server.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the servers have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows on the servers are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault typically occurs when the power supply module is either missing or the input power to the server is absent.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check to see if the power supply is connected to a power source.
Step 2 If the PSU is physically present in the slot, remove and then re-insert it.
Step 3 If the PSU is not physically present in the slot, insert a new PSU.

Explanation: This fault occurs when the temperature of a PSU module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when the temperature of a PSU module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the temperature of a PSU module has been out of operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.

Explanation: This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.

Explanation: This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.
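For the three PSU output faults above, Step 1 ("Monitor the PSU status") can be automated in the same way. A minimal sketch, assuming IPMI-over-LAN is enabled on the CIMC and ipmitool is installed; the address and credentials are placeholders, and the sensor names in the output depend on the platform.

#!/usr/bin/env python3
"""Sketch: report power-supply sensor status from a CIMC via IPMI-over-LAN (ipmitool)."""
import subprocess

CIMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "password"]  # placeholders

def psu_sensor_lines() -> list:
    """Return the per-PSU sensor lines reported by the BMC."""
    out = subprocess.run(CIMC + ["sdr", "type", "Power Supply"],
                         capture_output=True, text=True, check=True).stdout
    return out.splitlines()

if __name__ == "__main__":
    for line in psu_sensor_lines():
        # A status field other than 'ok' / 'Presence detected' is worth a closer look.
        print(line)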
Explanation: This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating.
Recommended Action: If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 Replace the PSU. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating and PSU hardware may have been damaged as a result or may be at risk of being damaged.
Recommended Action: If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when a power supply unit is drawing too much current.
Recommended Action: If you see this fault, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when a power cable is disconnected or the input voltage is incorrect.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check if the power cable is disconnected.
Step 2 Check if the input voltage is within the correct range mentioned in the server-specific Installation and Service Guide.
Step 3 Re-insert the PSU.
Step 4 If these actions did not solve the problem, create a tech-support file and contact Cisco TAC.

Explanation: This fault typically occurs when the power supply unit is either offline or the input/output voltage is out of range.
Recommended Action: If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220/110 volts.
Step 3 Remove the PSU and re-install it.
Step 4 Replace the PSU.
Note: Prior to re-installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 5 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault typically occurs when the FRU information for a power supply unit is corrupted or malformed.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check the server-specific Installation and Service Guide for the power supply vendor specification.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault typically occurs when chassis power redundancy has failed.
Recommended Action: If you see this fault, take the following actions:
Step 1 Consider adding more PSUs to the chassis.
Step 2 Replace any non-functional PSUs. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.

Explanation: This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.

Explanation: This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.
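Step 5 of the thermal procedures above ("Check the temperature readings") can be scripted when the CIMC firmware exposes the standard DMTF Redfish Thermal resource. The sketch below is an assumption-laden example, not a supported procedure: the Redfish path, chassis id, and credentials are placeholders, the requests package must be installed, and older firmware may not provide Redfish at all.

#!/usr/bin/env python3
"""Sketch: read chassis temperature sensors over the standard Redfish Thermal resource."""
import requests

CIMC = "https://192.0.2.10"     # placeholder CIMC address
CHASSIS = "1"                   # placeholder chassis id; list /redfish/v1/Chassis to confirm
AUTH = ("admin", "password")    # placeholder credentials

def read_temperatures():
    """Yield (sensor name, reading in Celsius, health) for each temperature sensor."""
    url = f"{CIMC}/redfish/v1/Chassis/{CHASSIS}/Thermal"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=30)  # BMCs commonly use self-signed certs
    resp.raise_for_status()
    for sensor in resp.json().get("Temperatures", []):
        yield sensor.get("Name"), sensor.get("ReadingCelsius"), sensor.get("Status", {}).get("Health")

if __name__ == "__main__":
    for name, celsius, health in read_temperatures():
        print(f"{name}: {celsius} C ({health})")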
Explanation: This fault occurs in the event the processor encounters a catastrophic error or has exceeded the pre-set thermal/power thresholds.
Recommended Action: If you see this fault, take the following actions:
Step 1 In the event that the probable cause being indicated is a thermal problem, check that the airflow to the server is not obstructed and that the server is adequately ventilated. If possible, check if the heat sink is properly seated on the processor.
Step 2 In the event that the probable cause being indicated is equipment inoperable, please contact Cisco TAC for further instructions.
Step 3 In the event that the probable cause being indicated is a power or voltage problem, it is recommended to see if the issue is resolved with an alternate power supply. If this fails to resolve the issue, please contact Cisco TAC.

Explanation: This fault occurs in the unlikely event that a processor is disabled.
Recommended Action: If you see this fault, take the following actions:
Step 1 If this fault occurs, re-seat the processor.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when the processor voltage is out of normal operating range, but has not yet reached a critical stage. Normally the processor recovers itself from this situation.
Recommended Action: If you see this fault, take these actions:
Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on the CPU to determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the processor voltage has exceeded the specified hardware voltage rating.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on the CPU to determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the processor voltage has exceeded the specified hardware voltage rating and may cause processor hardware damage.
Recommended Action: If you see this fault, take the following action:
Step 1 Create a tech-support file and contact Cisco TAC.
Explanation: This fault typically occurs when the server has encountered a diagnostic failure or an error during POST.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check the POST result for the server.
Step 2 Reboot the server.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the memory array voltage exceeds the specified hardware voltage rating.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the memory array voltage exceeded the specified hardware voltage rating and potentially the memory hardware may be damaged.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when the temperature of a fan module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the temperature of a fan module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the temperature of a fan module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault typically occurs when CIMC detects that a power supply unit in a chassis is offline.
Recommended Action: If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed.
Step 4 Remove the PSU and reinstall it.
Step 5 Replace the PSU.
Step 6 If the above actions did not resolve the issue, note down the type of PSU, create a tech-support file, and contact Cisco Technical Support.

Explanation: This fault occurs when the PSU voltage is out of normal operating range, but has not yet reached a critical stage. Normally the PSU will recover itself from this situation.
Recommended Action: If you see this fault, take the following actions:
Step 1 Monitor the PSU for further degradation.
Step 2 Remove and reseat the PSU.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault occurs when the server chassis or cover has been opened.
Recommended Action: Make sure that the server chassis/cover is in place.
Explanation: The adapter is missing. CIMC raises this fault when any of the following scenarios occur:
• The endpoint reports there is no adapter in the adaptor slot.
• The endpoint cannot detect or communicate with the adapter in the adaptor slot.
Recommended Action: If you see this fault, take the following actions:
Step 1 Make sure an adapter is inserted in the adaptor slot in the server.
Step 2 Check whether the adapter is connected and configured properly and is running the recommended firmware version.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault indicates a failure in the rebuild process of the local disk.
Recommended Action: If you see this fault, take the following action:
Step 1 Restart the rebuild process.

Explanation: This fault occurs when one or more fans in a fan module are not operational, but at least one fan is operational.
Recommended Action: If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide and ensure the fan module has adequate airflow, including front and back clearance.
Step 3 Verify that the air flows of the servers are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace the faulty fan modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
Explanation: This fault occurs when the Flex Flash drive is removed from the slot while the server is in use.
Recommended Action: If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Replace the disk, if an additional drive is available.
Note: Prior to installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

Explanation: This fault indicates that the review of the storage system for potential physical disk errors has failed.
Recommended Action: If you see this fault, take the following actions:
Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Explanation: This fault indicates a recoverable error with the Flex Flash virtual drive.
Recommended Action: If you see this fault, take the following actions:
Step 1 Synchronize the virtual drive using Cisco UCS SCU to make the VD optimal.
Step 2 Replace any faulty Flex Flash drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.

Explanation: This fault indicates a non-recoverable error with the Flex Flash virtual drive.
Recommended Action: If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up and recreate the virtual drive.
Step 2 Replace any faulty Flex Flash drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 Synchronize the virtual drive using Cisco UCS-SCU to make the VD optimal.
AffectedDN Description Comment
Fault code  Fault Name  Message  Severity
F0376  fltEquipmentIOCardRemoved  IOCard [location] on [serverId] is removed.  critical
F0460  fltSysdebugMEpLogMEpLogLog  Log capacity on Management Controller on [serverid] is [capacity]  info
F0461  fltSysdebugMEpLogMEpLogVeryLow  Log capacity on Management Controller on [serverid] is [capacity]  info
F0462  fltSysdebugMEpLogMEpLogFull  Log capacity on Management Controller on [serverid] is [capacity]  info
F0313  fltComputePhysicalBiosPostTimeout  [Serverid] BIOS failed power-on self test. Server [chassisId] BIOS failed power-on self test.  critical
F0920  fltComputeBoardMotherBoardVoltageUpperThresholdCritical  Motherboard of [serverid] voltage: [voltage]  major
F0921  fltComputeBoardMotherBoardVoltageLowerThresholdCritical  Motherboard of [serverid] voltage: [voltage]  major
F0919  fltComputeBoardMotherBoardVoltageThresholdLowerNonRecoverable  Motherboard of [serverid] voltage: [voltage]  critical
F0918  fltComputeBoardMotherBoardVoltageThresholdUpperNonRecoverable  Motherboard of [serverid] voltage: [voltage]  critical
F1040  fltComputeBoardPowerUsageProblem  Motherboard of [serverid] power: [power]  major|critical
F0869  fltComputeBoardThermalProblem  Motherboard of [serverid] thermal: [thermal]  minor|major|critical
F0310  fltComputeBoardPowerError  Motherboard of [serverid] power: [operPower]  critical
F0868  fltComputeBoardPowerFail  Motherboard of [serverid] power: [power]  critical
F0424  fltComputeBoardCmosVoltageThresholdCritical  CMOS battery voltage on [serverid] is [cmosVoltage]  major
F0425  fltComputeBoardCmosVoltageThresholdNonRecoverable  CMOS battery voltage on [serverid] is [cmosVoltage]  critical
F0538  fltComputeIOHubThermalNonCritical  IO Hub on [serverId] temperature: [thermal]  minor
F0539  fltComputeIOHubThermalThresholdCritical  IO Hub on [serverId] temperature: [thermal]  major
F0540  fltComputeIOHubThermalThresholdNonRecoverable  IO Hub on [serverId] temperature: [thermal]  critical
F0175  fltProcessorUnitThermalNonCritical  Processor [id] on [serverid] temperature: [thermal]  minor
F0176  fltProcessorUnitThermalThresholdCritical  Processor [id] on [serverid] temperature: [thermal]  major
F0177  fltProcessorUnitThermalThresholdNonRecoverable  Processor [id] on [serverid] temperature: [thermal]  critical
F0729  fltEquipmentIOCardThermalThresholdNonCritical  IOCard [location] on [serverid] temperature: [thermal]  minor
F0730  fltEquipmentIOCardThermalThresholdCritical  IOCard [location] on [serverid] temperature: [thermal]  major
F0731  fltEquipmentIOCardThermalThresholdNonRecoverable  IOCard [location] on [serverid] temperature: [thermal]  critical
F0379  fltEquipmentIOCardThermalProblem  IOCard [location] on server [id] operState: [operState]  major
F1004  fltStorageControllerInoperable  Storage Controller [id] operability: [operability]  critical
F0181  fltStorageLocalDiskInoperable  Local disk [id] on [serverid] operability: [operability]  major|warning
F1006  fltStorageLocalDiskCopybackFailed  Local disk [id] on [serverid] operability: [operability]  major
F1007  fltStorageVirtualDriveInoperable  Virtual drive [id] on [serverid] operability: [operability]  critical
F1008  fltStorageVirtualDriveDegraded  Virtual drive [id] on [serverid] operability: [operability]  warning
F1009  fltStorageVirtualDriveReconstructionFailed  Virtual drive [id] on [serverid] operability: [operability]  major
F1010  fltStorageVirtualDriveConsistencyCheckFailed  Virtual drive [id] on [serverid] operability: [operability]  major
F0531  fltStorageRaidBatteryInoperable  RAID Battery on [serverid] operability: [operability]  major
F0997  fltStorageRaidBatteryDegraded  Raid battery [id] on [serverid] operability: [operability]  major
F0998  fltStorageRaidBatteryRelearnAborted  Raid battery [id] on [serverid] operability: [operability]  minor
F0999  fltStorageRaidBatteryRelearnFailed  Raid battery [id] on [serverid] operability: [operability]  major
F0434  fltEquipmentFanMissing  Fan [id] in Fan Module [tray]-[id] under [serverid] presence: [presence]  warning
F0395  fltEquipmentFanPerfThresholdNonCritical  Fan [id] in Fan Module [tray]-[id] under [serverid] speed: [perf]  minor
F0396  fltEquipmentFanPerfThresholdCritical  Fan [id] in Fan Module [tray]-[id] under [serverid] speed: [perf]  major
F0397  fltEquipmentFanPerfThresholdNonRecoverable  Fan [id] in Fan Module [tray]-[id] under [serverid] speed: [perf]  critical
F0502  fltMemoryUnitIdentityUnestablishable  DIMM [location] on [serverid] has an invalid FRU  warning
F0184  fltMemoryUnitDegraded  DIMM [location] on [serverid] operability: [operability]  warning
F0185  fltMemoryUnitInoperable  DIMM [location] on [serverid] operability: [operability]  major
F0186  fltMemoryUnitThermalThresholdNonCritical  DIMM [location] on [serverid] temperature: [thermal]  info
F0187  fltMemoryUnitThermalThresholdCritical  DIMM [location] on [serverid] temperature: [thermal]  major
F0188  fltMemoryUnitThermalThresholdNonRecoverable  DIMM [location] on [serverid] temperature: [thermal]  critical
F0378  fltEquipmentPsuMissing  Power supply [id] in [serverid] presence: [presence]  warning
F0381  fltEquipmentPsuThermalThresholdNonCritical  Power supply [id] in [serverid] temperature: [thermal]  minor
F0383  fltEquipmentPsuThermalThresholdCritical  Power supply [id] in [serverid] temperature: [thermal]  major
F0385  fltEquipmentPsuThermalThresholdNonRecoverable  Power supply [id] in [serverid] temperature: [thermal]  critical
F0392  fltEquipmentPsuPerfThresholdNonCritical  Power supply [id] in [serverid] output power: [perf]  minor
F0393  fltEquipmentPsuPerfThresholdCritical  Power supply [id] in [serverid] output power: [perf]  major
F0394  fltEquipmentPsuPerfThresholdNonRecoverable  Power supply [id] in [serverid] output power: [perf]  critical
F0389  fltEquipmentPsuVoltageThresholdCritical  Power supply [id] in [serverid] voltage: [voltage]  major
F0391  fltEquipmentPsuVoltageThresholdNonRecoverable  Power supply [id] in [serverid] voltage: [voltage]  critical
F0882  fltEquipmentPsuPowerThreshold  Power supply [id] on [serverid] has exceeded its power threshold.  minor|major|critical
F0883  fltEquipmentPsuInputError  Power supply [id] on [serverid] has disconnected cable or bad input voltage.  critical
F0374  fltEquipmentPsuInoperable  Power supply [id] in [serverid] operability: [operability]  major
F0407  fltEquipmentPsuIdentity  Power supply [id] on [serverid] has a malformed FRU  critical
F0743  fltPowerChassisMemberChassisPsuRedundanceFailure  [Serverid] was configured for redundancy, but running in a non-redundant configuration.  major
F0410  fltEquipmentChassisThermalThresholdNonCritical  Thermal condition on [serverid] cause: [thermalStateQualifier]  minor
F0409  fltEquipmentChassisThermalThresholdCritical  Thermal condition on [serverid] cause: [thermalStateQualifier]  major
F0411  fltEquipmentChassisThermalThresholdNonRecoverable  Thermal condition on [serverid] cause: [thermalStateQualifier]  critical
F0174  fltProcessorUnitInoperable  Processor [id] on [serverId] operability: [operability]  critical|major
F0842  fltProcessorUnitDisabled  Processor [id] on [serverid] operState: [operState]  info
F0178  fltProcessorUnitVoltageThresholdNonCritical  Processor [id] on [serverId] voltage: [voltage]  minor
F0179  fltProcessorUnitVoltageThresholdCritical  Processor [id] on [serverId] voltage: [voltage]  major
F0180  fltProcessorUnitVoltageThresholdNonRecoverable  Processor [id] on [serverId] voltage: [voltage]  critical
F0517  fltComputePhysicalPostfailure  [Serverid] POST or diagnostic failure  critical
F0190  fltMemoryArrayVoltageThresholdCritical  Memory array [id] on [serverid] voltage: [voltage]  major
F0191  fltMemoryArrayVoltageThresholdNonRecoverable  Memory array [id] on [serverid] voltage: [voltage]  critical
F0380  fltEquipmentFanModuleThermalThresholdNonCritical  Fan module [tray]-[id] in [serverid] temperature: [thermal]  minor
F0382  fltEquipmentFanModuleThermalThresholdCritical  Fan module [tray]-[id] in [serverid] temperature: [thermal]  major
F0384  fltEquipmentFanModuleThermalThresholdNonRecoverable  Fan module [tray]-[id] in [serverid] temperature: [thermal]  critical
F0528  fltEquipmentPsuOffline  Power supply [id] in [serverid] power: [power]  warning
F0387  fltEquipmentPsuVoltageThresholdNonCritical  Power supply [id] in [serverid] voltage: [voltage]  minor
F0320  fltComputePhysicalUnidentified  [Serverid] Chassis open.  warning
F0203  fltAdaptorUnitMissing  Adapter [id] in [serverid] presence: [presence]  warning
F1005  fltStorageLocalDiskRebuildFailed  Local disk [id] on [serverid] operability: [operability]. Reason: [operQualifierReason]  major
F0371  fltEquipmentFanDegraded  Fan [id] in Fan Module [tray]-[id] under server [id] operability: [operability]  warning
F0181  fltStorageFlexFlashLocalDiskMissing  Local disk [id] on FlexFlash Controller [id] operability: [operability]  info
F1003  fltStorageControllerPatrolReadFailed  Local disk [id] on server [id] had a patrol read failure. Reason: [operQualifierReason]  warning
F1008  fltStorageFlexFlashVirtualDriveHVDegraded  Virtual drive [id] on FlexFlash Controller [id] operability: [operability]  warning
F1007  fltStorageFlexFlashVirtualDriveHVInoperable  Virtual drive [id] on Storage Controller [id] operability: [operability]  critical
F1256  fltStorageLocalDiskMissing  Local disk [id] on [serverid] operability: [operability]  info
F0996  fltStorageLocalDiskDegraded  Local disk [id] on [serverid] operability: [operability]  warning
F0637  fltPowerBudgetPowerBudgetBmcProblem    Major
F0635  fltPowerBudgetPowerBudgetCmcProblem    Major
F1257  fltStorageFlexFlashControllerInoperable  Flex flash Controller [id] operability: [operability]  Major
F1262  fltStorageFlexFlashControllerUnhealthy    Warning
F1258  fltStorageFlexFlashCardInoperable  Flex flash card [id] on [serverid] operability: [operability]  Info
F1259  fltStorageFlexFlashCardMissing  Local disk [id] on FlexFlash Controller [id] operability: [operability]  Info
F1260  fltStorageFlexFlashVirtualDriveDegraded  Virtual drive [id] on FlexFlash Controller [id] operability: [operability]  Warning
F1261  fltStorageFlexFlashVirtualDriveInoperable  Virtual drive [id] on Storage Controller [id] operability: [operability]  Critical
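The faults listed above can also be collected programmatically rather than read from the WebUI. The sketch below assumes the Cisco IMC XML API endpoint (/nuova) and the faultInst class are available on your firmware release; verify both against the Cisco IMC XML API reference for your version before relying on it. The address and credentials are placeholders.

#!/usr/bin/env python3
"""Sketch: list active faults via the Cisco IMC XML API (assumed /nuova endpoint)."""
import xml.etree.ElementTree as ET
import requests

ENDPOINT = "https://192.0.2.10/nuova"   # placeholder CIMC XML API endpoint
USER, PASSWORD = "admin", "password"    # placeholder credentials

def post(body: str) -> ET.Element:
    """POST an XML method body and return the parsed response element."""
    resp = requests.post(ENDPOINT, data=body, verify=False, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

if __name__ == "__main__":
    login = post(f'<aaaLogin inName="{USER}" inPassword="{PASSWORD}"/>')
    cookie = login.get("outCookie")
    try:
        faults = post(f'<configResolveClass cookie="{cookie}" inHierarchical="false" classId="faultInst"/>')
        for flt in faults.iter("faultInst"):
            print(flt.get("code"), flt.get("severity"), flt.get("descr"))
    finally:
        post(f'<aaaLogout inCookie="{cookie}"/>')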
Probable Cause DN Description

[sensor_name]: PCI Slot [id]


equipment- sys/rack-unit-1/equipped-slot- riser or card missing: reseat or
removed [Id] replace pci card [id]

sys/rack-unit-1/mgmt/log-SEL-
log-capacity 0 System Event log is going low

sys/rack-unit-1/mgmt/log-SEL- System Event log capacity is


log-capacity 0 very low

sys/rack-unit-1/mgmt/log-SEL- System Event log is Full: Clear


log-capacity 0 the log

equipment- BIOS POST Timeout occurred:


inoperable sys/rack-unit-1/board Contact Cisco TAC
Stand-by voltage (xV) to the
motherboard is upper critical:
Check the power supply
Auxiliary voltage (xV) to the
motherboard is upper critical:
Check the power supply
Motherboard voltage (xV) is
upper critical: Check the
voltage-problem sys/rack-unit-1/board power supply

"Stand-by voltage ([Val] V) to


the motherboard is lower
critical: Check the power
supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
critical: Check the power
supply
Motherboard voltage ([Val] V)
is lower critical: Check the
voltage-problem sys/rack-unit-1/board power supply"

"Stand-by voltage ([Val] V) to


the motherboard is lower
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is lower non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"

"Stand-by voltage ([Val] V) to


the motherboard is upper
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is upper
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is upper non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"
"Motherboard Power usage is
upper critical: Check hardware
Motherboard Power usage is
upper non-recoverable: Check
power-problem sys/rack-unit-1/board hardware"

Motherboard chipset
inoperable due to high
thermal-problem sys/rack-unit-1/board temperature

P[Id]V[Id]_AU[Id]_PWRGD:
Voltage rail Power Good
dropped due to PSU or HW
failure, please contact CISCO
power-problem sys/rack-unit-1/board TAC for assistance

The server failed to power


power-problem sys/rack-unit-1/board ON: Check Power Supply

Battery voltage level is upper


voltage-problem sys/rack-unit-1/board/cpu-[Id] critical: Replace battery

Battery voltage level is upper


non-recoverable: Replace
voltage-problem sys/rack-unit-1/board/cpu-[Id] battery

[sensor_name]: Motherboard
sys/rack-unit-1/board chipset temperature is upper
thermal-problem non-critical
[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board critical

[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board non-recoverable

Processor [Id] Thermal


threshold has crossed upper
non-critical threshold: Check
thermal-problem sys/rack-unit-1/board/cpu-[Id] cooling
Processor [Id] Thermal
threshold has crossed upper
sys/rack-unit-1/board/cpu-[Id] critical threshold: Check
thermal-problem cooling

Processor [Id] Thermal


sys/rack-unit-1/board/cpu-[Id] threshold has crossed a preset
thermal-problem threshold: Check cooling
Adaptor Unit [Id]
Temperature is non critical :
thermal-problem sys/rack-unit-1/adaptor-[Id] Check Cooling

Adaptor Unit [id]


Temperature is critical : Check
thermal-problem sys/rack-unit-1/adaptor-[Id] Cooling
Adaptor Unit [id]
Temperature is non
thermal-problem sys/rack-unit-1/adaptor-[Id] recoverable : Check Cooling

[sensor_name]: Adaptor Unit


[Id] is inoperable due to high
thermal-problem sys/rack-unit-1/adaptor-[Id] temperature : Check Cooling
Storage controller SLOT-[Id]
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
inoperable SAS-SLOT-[Id] the storage controller

equipment-
inoperable | Storage Local disk [Id] is
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Local disk [Id] is


sys/rack-unit-1/board/storage- inoperable: reseat or replace
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Virtual Drive [Id] is


inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
inoperable SAS-SLOT-[Id]/vd-[Id] storage drive

Storage Virtual Drive [Id] is


inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
degraded SAS-SLOT-[Id]/vd-[Id] storage drive
Storage Virtual Drive [Id]
reconstruction failed: Check
equipment- sys/rack-unit-1/board/storage- storage controller, or reseat
degraded SAS-SLOT-[Id]/vd-[Id] the storage drive

Storage Virtual Drive [Id]


Consistency Check Failed:
equipment- sys/rack-unit-1/board/storage- please check the controller, or
degraded SAS-SLOT-[Id]/vd-[Id] reseat the physical drives

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- inoperable: check the raid
inoperable SAS-SLOT-[Id]/raid-battery battery

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- Degraded: check the raid
degraded SAS-SLOT-[id]/raid-battery battery

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery

Storage Raid battery [id]


equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery
equipment- sys/rack-unit-1/fan-module-1- Fan [id] missing: reseat or
missing [Id]/fan-[Id] replace fan [Id]

Fan speed for fan-[Id] in Fan


Module [Id]-[Id] is lower non
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non critical : Check the air
problem [Id]/fan-[Id] intake to the server

Fan speed for fan-[Id] in Fan


Module [Id]-[Id] is lower
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- critical : Check the air intake
problem [Id]/fan-[Id] to the server

Fan speed for fan-[Id] in Fan


Module [Id]-[Id] is lower non
recoverable : Check the air
intake to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non recoverable : Check the
problem [Id]/fan-[Id] air intake to the server
[sensor_name]: Memory Riser
[Id] missing: reseat or replace
memory riser [Id]
[sensor_name]: Memory Unit
identity- sys/rack-unit-1/board/ [Id] missing: reseat or replace
unestablishable memarray-[Id]/mem-[Id] physical memory [Id]

equipment- sys/rack-unit-1/board/ DIMM [Id] is degraded : Check


degraded memarray-[Id]/mem-[Id] or replace DIMM

equipment- sys/rack-unit-1/board/ DIMM [Id] is inoperable :


inoperable memarray-[Id]/mem-[Id] Check or replace DIMM
Memory Unit [Id]
temperature is upper non
critical:Check Cooling
%s: Memory riser %d Thermal
sys/rack-unit-1/board/ Threshold at upper non
thermal-problem memarray-[Id]/mem-[Id] critical levels: Check Cooling

Memory Unit [Id]


temperature is upper
critical:Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper critical levels: Check
thermal-problem memarray-[Id]/mem-[Id] Cooling
Memory Unit [Id]
temperature is upper non
recoverable: Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper non recoverable levels:
thermal-problem memarray-[Id]/mem-[Id] Check Cooling

equipment- Power Supply [Id] missing:


missing sys/rack-unit-1/psu-[Id] reseat or replace PS [id]
Power Supply [Id]
temperature is upper non
thermal-problem sys/rack-unit-1/psu-[Id] critical : Check cooling

Power Supply [Id]


temperature is upper critical :
thermal-problem sys/rack-unit-1/psu-[Id] Check cooling
Power Supply [Id]
temperature is upper non
recoverable : Check Power
thermal-problem sys/rack-unit-1/psu-[Id] Supply Status

Power Supply [Id] output


power is upper non critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] output


power is upper critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] output


power is upper non
recoverable : Reseat or
power-problem sys/rack-unit-1/psu-[Id] replace Power Supply
Power Supply [Id] Voltage is
upper critical : Reseat or
voltage-problem sys/rack-unit-1/psu-[Id] replace Power Supply

Power Supply [Id] Voltage is


upper non Recoverable :
Reseat or replace Power
voltage-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] current is


upper non critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper non recoverable :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power supply [Id] is in a


degraded state, or has bad
power-problem sys/rack-unit-1/psu-[Id] input voltage

Power Supply [Id] has lost


equipment- input or input is out of range :
inoperable sys/rack-unit-1/psu-[Id] Check input to PS or replace
[sensor_name]: Power Supply
[Id] Vendor/Revision/Rating
mismatch, or PSU Processor
missing : Replace PS or Check
fru-problem sys/rack-unit-1/psu-[Id] Processor [Id]

Power Supply redundancy is


psu-redundancy- lost : Reseat or replace Power
fail sys/rack-unit-1/psu-[Id] Supply

Front Panel Thermal


Threshold at upper non
critical levels: Check Cooling
The Front Panel temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper non-critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper critical
levels: Check Cooling
The Front Panel temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper non
recoverable levels: Check
Cooling
The Front Panel temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] inlet temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] outlet temperature
has crossed upper non-
sys/rack-unit-1 recoverable threshold: Check
thermal-problem sys/rack-unit-1/board device cooling
Processor [Id] is inoperable
due to high temperature:
Check cooling
A catastrophic fault has
occurred on one of the
processors: Please check the
processors' status.
Processor [Id] is operating at a
high temperature: Check
cooling
PVCCD_P1_VRHOT: Processor
1 is operating at a high
temperature: Check cooling
P1_LVC3_PWRGD: Voltage rail
Power Good dropped due to
PSU or HW failure, please
contact CISCO TAC for
assistance
P1_MEM23_MEMHOT:
Temperature sensor
corresponding to Processor 1
Memory 2/3 has asserted a
equipment- sys/rack-unit-1/board/cpu-[Id] Thermal Problem: Check
inoperable sys/rack-unit-1/board server cooling

Processor [Id] missing: Please


equipment- reseat or replace Processor
disabled sys/rack-unit-1/board/cpu-[Id] [Id]

Memory channel ([Id]) voltage


is upper non-critical
Processor [Id] voltage is upper
non-critical
Processor [Id] Voltage
threshold has crossed upper
non-critical threshold: Replace
the Power Supply and verify if
the issue is resolved. If the
voltage-problem sys/rack-unit-1/board/cpu-[Id] issue persists, call Cisco TAC
Memory channel ([Id]) voltage
is upper critical
Processor [Id] voltage is upper
critical
Processor [Id] Voltage
threshold has crossed upper
critical threshold: Replace the
Power Supply and verify if the
issue is resolved. If the issue
voltage-problem sys/rack-unit-1/board/cpu-[Id] persists, call Cisco TAC

Memory channel ([Id]) voltage


is upper non-recoverable
Processor [Id] voltage is upper
non-recoverable
Processor [Id] Voltage
threshold has crossed upper
non-recoverable threshold:
Replace the Power Supply and
verify if the issue is resolved.
If the issue persists, call Cisco
voltage-problem sys/rack-unit-1/board/cpu-[Id] TAC

equipment- [sensor_name]: Bios Post


problem sys/rack-unit-1 Failed: Check hardware

[sensor_name]: Memory riser


[Id] Voltage Threshold at
upper critical levels: Check
Power Supply; reseat power
connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower critical levels: Check
Power Supply; reseat power
sys/rack-unit-1/board/ connectors on the
voltage-problem memarray-[Id] motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
upper non recoverable levels:
Check Power Supply; reseat
power connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower non recoverable levels:
Check Power Supply; reseat
sys/rack-unit-1/board/ power connectors on the
voltage-problem memarray-[Id] motherboard

sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]

sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
equipment-offline sys/rack-unit-1/psu-[Id]

voltage-problem sys/rack-unit-1/psu-[Id]
[sensor_name]: server [id]
Chassis Intrusion detected:
equipment- Please secure the server
problem sys/rack-unit-1 chassis

equipment- [sensor_name]:[id] missing:


missing sys/rack-unit-1/adaptor-[Id] reseat or replace [id]

Storage Local disk [Id] is


sys/rack-unit-1/board/storage- rebuild failed: please check
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
[sensor_name]: Fan [Id] has
asserted a predictive failure:
equipment- sys/rack-unit-1/fan-module-1- reseat or replace fan [Id]
degraded [Id]/fan-[Id]

equipment- sys/rack-unit-1/board/storage-
missing FLASH-[Id]/pd-HV

Storage controller[Id] patrol


equipment- sys/rack-unit-1/board/storage- read failed: patrol read cannot
inoperable SAS-SLOT-[Id] be started

Flex Flash Virtual Drive HV


equipment- sys/rack-unit-1/board/storage- Degraded: please check the
degraded FLASH-[Id]/vd-HV flash device or the controller
Flex Flash Virtual Drive HV
equipment- sys/rack-unit-1/board/storage- inoperable: please check the
inoperable FLASH-[Id]/vd-HV flash device or the controller

Storage Local disk [Id] is


equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Local disk [Id] is


degraded: please check if
equipment- sys/rack-unit-1/board/storage- rebuild or copyback of drive is
degraded SAS-SLOT-[Id]/pd-[Id] required

Power capping failed:


System shutdown is
power-cap-fail sys/rack-unit-1/budget initiated by Node Manager

Power capping correction


time exceeded: Please set
power-cap-fail sys/rack-unit-1/budget an appropriate power limit

Flex Flash controller


FlexFlash-0 inoperable:
equipment- sys/rack-unit-1/board/ reseat or replace the flex
inoperable storage-flexflash-FlexFlash-0 controller
Flex Flash controller
FlexFlash-0 configuration
equipment- sys/rack-unit-1/board/ error: configure the flex
unhealthy storage-flexflash-FlexFlash-0 controller correctly

sys/rack-unit-1/board/ Flex Flash Local disk 2 is


equipment- storage-flexflash-FlexFlash- inoperable: reseat or
inoperable 0/card-2 replace the local disk 2

sys/rack-unit-1/board/ Flex Flash Local disk 2


equipment- storage-flexflash-FlexFlash- missing: reseat or replace
missing 0/card-2 Flex Flash Local disk

Flex Flash Virtual Drive 1


sys/rack-unit-1/board/ (testuser) Degraded: please
equipment- storage-flexflash-FlexFlash- check the flash device or
degraded 0/vd-1 the controller
Flex Flash Virtual Drive 5
(Hypervisor) is Inoperable:
sys/rack-unit-1/board/ Check flex controller
equipment- storage-flexflash-FlexFlash- properties or Flex Flash
inoperable 0/vd-5 disks
Explanation Recommended Action

If you see this fault, take the following actions:


Step 1 Re-seat/re-insert the I/O card. Prior to re-
inserting this server component, see the server-
specific
This fault typically occurs because an Installation and Service Guide for prerequisites, safety
I/O card is removed from the chassis, or recommendations and warnings.
when the card or the slot Step 2 If the above actions did not resolve the issue,
is faulty. create a tech-support file and contact Cisco TAC.

This fault typically occurs because Cisco


Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is low. This
is an information-level fault and can be
ignored if you do not want to clear the If you see this fault, take the following action:
SEL at this time. Step 1 You may choose to clear the SEL.

This fault typically occurs because Cisco


Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is very low.
This is an information-level fault and
can be ignored if you do not want to If you see this fault, take the following action:
clear the SEL at this time. Step 1 You may choose to clear the SEL.

This fault typically occurs because the CIMC SEL is full.
If you see this fault, take the following action:
Step 1 You may choose to clear the SEL.
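
The three SEL faults above share one remediation: review the log and, if acceptable, clear it. As an illustration only (this reference does not prescribe a tool), the following Python sketch uses the standard ipmitool CLI over IPMI-over-LAN to check how full the SEL is and to back it up before clearing; the host address and credentials are placeholders, and the exact `sel info` output format can differ between controllers.

```python
# Hypothetical helper: check SEL fill level over IPMI and optionally clear it.
# Assumes ipmitool is installed and IPMI-over-LAN is enabled on the management
# controller; host/credential values below are placeholders.
import subprocess

def ipmitool(host: str, user: str, password: str, *args: str) -> str:
    """Run an ipmitool command against the management controller."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def sel_percent_used(host: str, user: str, password: str) -> float:
    """Parse 'Percent Used' from `ipmitool sel info` (wording may vary by BMC)."""
    for line in ipmitool(host, user, password, "sel", "info").splitlines():
        if "Percent Used" in line:
            return float(line.split(":")[1].strip().rstrip("%"))
    raise RuntimeError("Percent Used not reported by this BMC")

if __name__ == "__main__":
    HOST, USER, PASSWORD = "198.51.100.10", "admin", "password"  # placeholders
    used = sel_percent_used(HOST, USER, PASSWORD)
    print(f"SEL is {used:.0f}% full")
    if used >= 90:  # roughly the 'capacity is very low' condition described above
        # Back up the log first, then clear it (Step 1 above).
        with open("sel-backup.txt", "w") as backup:
            backup.write(ipmitool(HOST, USER, PASSWORD, "sel", "list"))
        ipmitool(HOST, USER, PASSWORD, "sel", "clear")
```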

If you see this fault, take the following actions:


Step 1 Connect to the CIMC WebUI and launch the
KVM console to monitor the BIOS POST completion.
This fault typically occurs when the Step 2 If the above actions did not resolve the issue,
server did not complete the BIOS POST. create a tech-support file and contact Cisco TAC.
This fault typically occurs when one or more motherboard input voltages have exceeded upper critical thresholds.

This fault typically occurs when one or more motherboard input voltages have crossed lower critical thresholds.
If you see this fault, take the following actions:
Step 1 Reseat or replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 2 If the issue persists, create a tech-support file and contact TAC.

This fault typically occurs when one or more motherboard input voltages have dropped too low and are unlikely to recover.
If you see this fault, take the following action:
Step 1 Contact Cisco TAC.

This fault typically occurs when one or more motherboard input voltages have become too high and are unlikely to recover.
If you see this fault, take the following action:
Step 1 Contact Cisco TAC.
This fault typically occurs when the motherboard power consumption exceeds certain threshold limits. When this happens, the power usage sensors on the server detect a problem.
If you see this fault, take the following action:
Step 1 Contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Verify that the server fans are working
properly.
Step 2 Wait for 24 hours to see if the problem resolves
This fault typically occurs when the itself.
motherboard thermal sensors on a Step 3 If the above actions did not resolve the issue,
server detect a problem. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Reseat/replace the power supply. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
This fault typically occurs when the recommendations and warnings.
server power sensors have detected a Step 2 If the above actions did not resolve the issue,
problem. create a tech-support file and contact Cisco TAC.

This fault typically occurs when the If you see this fault, take the following action:
power sensors on a server detect a Step 1 Create a tech-support file and contact Cisco
problem. TAC.

If you see this fault, take the following action:


This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped to lower this component, see the server-specific Installation
than the normal operating and
range. This could impact the clock and Service Guide for prerequisites, safety
other CMOS settings. recommendations and warnings.

If you see this fault, take the following action:


This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped quite low this component, see the server-specific Installation
and is unlikely to recover. and
This impacts the clock and other CMOS Service Guide for prerequisites, safety
settings. recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Monitor other environmental events related to
this server and ensure the temperature ranges are
This fault is raised when the I/O within
controller temperature is outside the recommended ranges.
upper or lower non-critical Step 2 If this action did not solve the problem, contact
threshold. Cisco TAC.
If you see this fault, take the following actions:
Step 1 Monitor other environmental events related to
the server and ensure the temperature ranges are
within
recommended ranges.
Step 2 Consider turning off the server for a while if
This fault is raised when the I/O possible.
controller temperature is outside the Step 3 If the above actions did not resolve the issue,
upper or lower critical threshold. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
This fault is raised when the I/O Step 1 Shut down the server immediately.
controller temperature is outside the Step 2 Create a tech-support file and contact Cisco
recoverable range of operation. TAC.

This fault occurs when the processor


temperature on a server exceeds a non-
critical threshold value, but
is still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
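
The inlet-air and CPU figures quoted in these thermal faults (50F/10C to 95F/35C inlet, CPU taken offline at 179.6F/82C) can be sanity-checked with a few lines of arithmetic. The sketch below is purely illustrative; the classification labels are descriptive and not product terminology.

```python
# Illustrative only: convert the Fahrenheit thresholds quoted above and classify
# a temperature reading against the recommended inlet range.

def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

INLET_MIN_C, INLET_MAX_C = f_to_c(50.0), f_to_c(95.0)   # 10.0 C and 35.0 C
CPU_OFFLINE_C = f_to_c(179.6)                           # 82.0 C

def classify_inlet(temp_c: float) -> str:
    if temp_c < INLET_MIN_C:
        return "below recommended inlet range"
    if temp_c > INLET_MAX_C:
        return "above recommended inlet range"
    return "within recommended inlet range"

if __name__ == "__main__":
    print(round(INLET_MIN_C, 1), round(INLET_MAX_C, 1), round(CPU_OFFLINE_C, 1))
    print(classify_inlet(38.0))  # example reading in Celsius: above the range
```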
This fault occurs when the processor
temperature on a rack server exceeds a
critical threshold value. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

This fault occurs when the processor


temperature on a rack server has been
out of the operating range,
and the issue is not recoverable. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has exceeded a non-
critical threshold value, but is
still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.

This fault occurs when the temperature


of an I/O card has exceeded a critical
threshold value. Be aware
of the following possible contributing
factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to become If you see this fault, take the following actions:
loose in their sockets. Step 1 Review the product specifications to determine
• Cisco UCS equipment should operate the temperature operating range of the I/O card.
in an environment that provides an inlet Step 2 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 3 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 4 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has been out of the
operating range, and the issue
is not recoverable. Be aware of the
following possible contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following action:


Step 1 Review the product specifications to determine
This fault occurs when there is a the temperature operating range of the I/O card.
thermal problem on an I/O card. Be Step 2 Review the Cisco UCS Site Preparation Guide to
aware of the following possible ensure that the servers have adequate airflow,
contributing factors: including
• Temperature extremes can cause front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows on the servers are not
reduced efficiency and cause a obstructed.
variety of problems, including early Step 4 Verify that the site cooling system is operating
degradation, failure of chips, and failure properly.
of equipment. In Step 5 Clean the installation site at regular intervals to
addition, extreme temperature avoid buildup of dust and debris, which can cause a
fluctuations can cause CPUs to become system to overheat.
loose in their sockets. Step 6 Replace faulty I/O cards. Prior to replacing this
• Cisco UCS equipment should operate component, see the server-specific Installation and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Reseat or replace the storage controller. Prior
This fault indicates a non-recoverable to replacing this component, see the server-specific
storage controller failure. This happens Installation and Service Guide for prerequisites, safety
when the storage system recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the local disk.
Step 3 Replace the disk, if an additional disk is
available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
This fault occurs when the local disk has for
become inoperable or has been prerequisites, safety recommendations and warnings.
removed while the server was Step 4 If the above actions did not resolve the issue,
in use. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Replace the physical drive and check to see if
the issue is resolved after a rebuild. Prior to replacing
this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault indicates a physical disk recommendations and warnings.
copyback failure. This fault could Step 2 Reseat or replace the storage controller.
indicate a physical drive problem Step 3 Check configuration options for the storage
or an issue with the RAID configuration. controller in the MegaRAID ROM configuration page.

If you see this fault, take the following actions:


Step 1 If the data on the drive is accessible, back up
and recreate the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a non-recoverable Step 3 Check for controller errors in the MegaRAID
error with the virtual drive. ROM page logs.

If you see this fault, take the following actions:


Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a recoverable error and Service Guide for prerequisites, safety
with the virtual drive. recommendations and warnings.
This fault indicates a failure in the
reconstruction process of the virtual If you see this fault, take the following action:
drive. Step 1 Restart the reconstruction process.

If you see this fault, take the following actions:


Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a consistency check and Service Guide for prerequisites, safety
failure with the virtual drive. recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Replace the RAID battery. Prior to replacing this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault occurs when the RAID battery recommendations and warnings
voltage is below the normal operating Step 2 If the above action did not resolve the issue,
range. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following action:


Step 1 Reseat or replace the battery backup unit on
the storage controller.
Prior to replacing this component, see the server-
This fault indicates a controller battery specific Installation and Service Guide for
backup unit failure. prerequisites, safety recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates that a controller Step 3 Replace the battery backup unit if it has
battery relearn process was aborted. exceeded 100 relearn cycles.

If you see this fault, take the following actions:


Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a controller battery Step 3 Replace the battery backup unit if it has
relearn failure. exceeded 100 relearn cycles.
If you see this fault, take the following actions:
Step 1 Insert/re-insert the fan module in the slot that
is reporting the issue.
Step 2 Replace the fan module with a different fan
module, if available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
This fault occurs in the unlikely event prerequisites, safety recommendations and warnings.
that a fan in a fan module cannot be Step 3 If the above actions did not resolve the issue,
detected. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time
or if other fans do not show the same problem, reseat
This fault occurs when the fan speed the fan.
reading from the fan controller does not Step 3 Replace the fan module. Prior to replacing this
match the desired fan speed component, see the server-specific Installation and
and is outside of the normal operating Service Guide for prerequisites, safety
range. This can indicate a problem with recommendations, warnings and procedures.
a fan or with the reading Step 4 If the above actions did not resolve the issue,
from the fan controller. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time
This fault occurs when the fan speed or if other fans do not show the same problem, reseat
read from the fan controller does not the fan.
match the desired fan speed Step 3 Replace the fan module. Prior to replacing this
and has exceeded the critical threshold component, see the server specific Installation and
and is in risk of failure. This can indicate Service Guide for prerequisites, safety
a problem with a fan recommendations and warnings.
or with the reading from the fan Step 4 If the above actions did not resolve the issue,
controller. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Replace the fan. Prior to replacing this
component, see the server-specific Installation and
This fault occurs when the fan speed Service Guide
read from the fan controller has far for prerequisites, safety recommendations and
exceeded the desired fan speed. warnings.
It usually indicates that the fan has Step 2 If the above action did not resolve the issue,
failed. create a tech-support file and contact Cisco TAC.
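
All three fan-speed faults above describe the same underlying comparison: the speed reported by the fan controller versus the desired speed. As a hypothetical monitoring aid (not part of this reference), the sketch below reads fan RPM values with ipmitool and flags readings that fall outside an assumed tolerance band; the expected speed and tolerance are placeholders, since the real thresholds live in the fan controller.

```python
# Illustrative monitoring sketch: read fan RPM values with `ipmitool sdr type Fan`
# and flag readings that deviate from an assumed expected speed by more than a
# chosen tolerance. Expected speed and tolerance are assumptions, not product data.
import subprocess

def fan_readings() -> dict[str, float]:
    """Return {sensor_name: rpm} from the local BMC; lines without an RPM reading are skipped."""
    out = subprocess.run(["ipmitool", "sdr", "type", "Fan"],
                         check=True, capture_output=True, text=True).stdout
    rpm = {}
    for line in out.splitlines():
        fields = [field.strip() for field in line.split("|")]
        if len(fields) >= 5 and "RPM" in fields[4]:
            rpm[fields[0]] = float(fields[4].split()[0])
    return rpm

def flag_outliers(readings: dict[str, float], expected_rpm: float, tolerance: float = 0.25):
    """Yield (sensor, rpm) pairs more than `tolerance` (as a fraction) away from expected_rpm."""
    for name, value in readings.items():
        if abs(value - expected_rpm) > expected_rpm * tolerance:
            yield name, value

if __name__ == "__main__":
    for sensor, rpm in flag_outliers(fan_readings(), expected_rpm=9000.0):
        print(f"{sensor}: {rpm:.0f} RPM outside expected band; see the fan fault actions above")
```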
This fault typically occurs when a sensor If you see this fault, take the following action:
has detected an unsupported DIMM in Step 1 Verify if the DIMM is supported on the server
the server. For example, configuration. If the DIMM is not supported on the
the model, vendor, or revision is not server
recognized. configuration, contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the DIMM for further ECC errors. If the
high number of errors persists, there is a high
possibility of the DIMM becoming inoperable.
Step 2 If the DIMM becomes inoperable, replace the
DIMM. You can use the CIMC WebUI to locate the
faulty
This fault occurs when a DIMM is in a DIMM. Prior to replacing this component, see the
degraded operability state. This state server-specific Installation and Service Guide for
typically occurs when an prerequisites, safety recommendations, warnings and
excessive number of correctable ECC procedures.
errors are reported on the DIMM by the Step 3 If the above actions did not resolve the issue,
server BIOS. create a tech-support file and contact Cisco TAC.
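
Step 1 above amounts to watching the rate of correctable ECC events per DIMM. The following sketch is a hypothetical way to do that from the SEL with ipmitool; the substrings it matches on ("Correctable ECC", "DIMM") and the threshold of 10 events are assumptions that would need adjusting to the actual SEL wording on a given system.

```python
# Illustrative only: tally correctable-ECC events per DIMM from the SEL so a
# module accumulating errors stands out. Matching strings and the threshold are
# assumptions, not guaranteed SEL wording.
import re
import subprocess
from collections import Counter

def correctable_ecc_counts() -> Counter:
    out = subprocess.run(["ipmitool", "sel", "list"],
                         check=True, capture_output=True, text=True).stdout
    counts = Counter()
    for line in out.splitlines():
        if "Correctable ECC" in line:
            match = re.search(r"DIMM[_ ]?\S+", line)
            counts[match.group(0) if match else "unknown DIMM"] += 1
    return counts

if __name__ == "__main__":
    for dimm, errors in correctable_ecc_counts().items():
        if errors > 10:  # arbitrary illustration threshold
            print(f"{dimm}: {errors} correctable ECC events; monitor per Step 1 above")
```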

If you see this fault, take the following actions:


Step 1 Review the SEL statistics on the DIMM to
determine which threshold was crossed.
Step 2 If necessary, replace the DIMM. You can use
the CIMC WebUI to locate the faulty DIMM. Prior to
This fault typically occurs because an replacing this component, see the server-specific
above threshold number of correctable Installation and Service Guide for prerequisites, safety
or uncorrectable errors has recommendations, warnings and procedures.
occurred on a DIMM. The DIMM may be Step 3 If the above actions did not resolve the issue,
inoperable. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of a memory unit on a server exceeds a
non-critical threshold
value, but is still below the critical
threshold. Be aware of the following
possible contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at If you see this fault, take the following actions:
reduced efficiency and cause a Step 1 Review the product specifications to determine
variety of problems, including early the temperature operating range of the server.
degradation, failure of chips, and failure Step 2 Review the Cisco UCS Site Preparation Guide to
of equipment. ensure the servers have adequate airflow, including
In addition, extreme temperature Step 3 Verify that the airflows on the servers are not
fluctuations can cause CPUs to become Step 3 Verify that the airflows on the servers are not
loose in their sockets. obstructed.
• Cisco UCS equipment should operate Step 4 Verify that the site cooling system is operating
in an environment that provides an inlet properly.
air temperature not Step 5 Clean the installation site at regular intervals to
colder than 50F (10C) nor hotter than avoid buildup of dust and debris, which can cause a
95F (35C). system to overheat.
• If sensors on a CPU reach 179.6F Step 6 If the above actions did not resolve the issue,
(82C), the system will take that CPU create a tech-support file and contact Cisco TAC.
offline.

This fault occurs when the temperature


of a memory unit on a server exceeds a
critical threshold value.
Be aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of a memory unit on a server has been
out of the operating range,
and the issue is not recoverable. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Check to see if the power supply is connected
to a power source.
This fault typically occurs when the Step 2 If the PSU is physically present in the slot,
power supply module is either missing remove and then re-insert it.
or the input power to the Step 3 If the PSU is not physically present in the slot,
server is absent. insert a new PSU.
This fault occurs when the temperature
of a PSU module has exceeded a non- If you see this fault, take the following actions:
critical threshold value, but Step 1 Review the product specifications to determine
is still below the critical threshold. Be the temperature operating range of the PSU module.
aware of the following possible Step 2 Review the Cisco UCS Site Preparation Guide to
contributing factors: ensure the PSU modules have adequate airflow,
• Temperature extremes can cause including front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows are not obstructed.
reduced efficiency and cause a Step 4 Verify that the site cooling system is operating
variety of problems, including early properly.
degradation, failure of chips, and failure Step 5 Clean the installation site at regular intervals to
of equipment. In avoid buildup of dust and debris, which can cause a
addition, extreme temperature system to overheat.
fluctuations can cause CPUs to become Step 6 Replace faulty PSU modules. Prior to replacing
loose in their sockets. this component, see the server-specific Installation
• Cisco UCS equipment should operate and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.

This fault occurs when the temperature If you see this fault, take the following actions:
of a PSU module has exceeded a critical Step 1 Review the product specifications to determine
threshold value. Be the temperature operating range of the PSU module.
aware of the following possible Step 2 Review the Cisco UCS Site Preparation Guide to
contributing factors: ensure the PSU modules have adequate airflow,
• Temperature extremes can cause including front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows are not obstructed.
reduced efficiency and cause a Step 4 Verify that the site cooling system is operating
variety of problems, including early properly.
degradation, failure of chips, and failure Step 5 Clean the installation site at regular intervals to
of equipment. In avoid buildup of dust and debris, which can cause a
addition, extreme temperature system to overheat.
fluctuations can cause CPUs to become Step 6 Replace faulty PSU modules. Prior to replacing
loose in their sockets. this component, see the server-specific Installation
• Cisco UCS equipment should operate and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature If you see this fault, take the following actions:
of a PSU module has been out of Step 1 Review the product specifications to determine
operating range, and the issue the temperature operating range of the PSU module.
is not recoverable. Be aware of the Step 2 Review the Cisco UCS Site Preparation Guide to
following possible contributing factors: ensure the PSU modules have adequate airflow,
• Temperature extremes can cause including front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows are not obstructed.
reduced efficiency and cause a Step 4 Verify that the site cooling system is operating
variety of problems, including early properly.
degradation, failure of chips, and failure Step 5 Clean the installation site at regular intervals to
of equipment. In avoid buildup of dust and debris, which can cause a
addition, extreme temperature system to overheat.
fluctuations can cause CPUs to become Step 6 Replace faulty PSU modules. Prior to replacing
loose in their sockets. this component, see the server-specific Installation
• Cisco UCS equipment should operate and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
This fault is raised as a warning if the Step 3 If the above action did not resolve the issue,
current output of the PSU in a rack create a tech-support file for the chassis, and contact
server does not match the Cisco
desired output value. TAC.

If you see this fault, take the following actions:


Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
This fault is raised as a warning if the Step 3 If the above action did not resolve the issue,
current output of the PSU in a rack create a tech-support file for the chassis, and contact
server does not match the Cisco
desired output value. TAC.

If you see this fault, take the following actions:


Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
This fault is raised as a warning if the Step 3 If the above action did not resolve the issue,
current output of the PSU in a rack create a tech-support file for the chassis, and contact
server does not match the Cisco
desired output value. TAC.
If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 Replace the PSU. Prior to replacing this
component, see the server-specific Installation and
Service
Guide for prerequisites, safety recommendations and
This fault occurs when the PSU voltage warnings.
has exceeded the specified hardware Step 3 If the above actions did not resolve the issue,
voltage rating. create a tech-support file and contact Cisco TAC.

This fault occurs when the PSU voltage


has exceeded the specified hardware
voltage rating and PSU If you see this fault, take the following actions:
hardware may have been damaged as a Step 1 Remove and reseat the PSU.
result or may be at risk of being Step 2 If the above action did not resolve the issue,
damaged. create a tech-support file and contact Cisco TAC.

This fault occurs when a power supply If you see this fault, create a tech-support file and
unit is drawing too much current. contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Check if the power cable is disconnected.
Step 2 Check if the input voltage is within the correct
range mentioned in the server-specific Installation and
Service Guide.
This fault occurs when a power cable is Step 3 Re-insert the PSU.
disconnected or input voltage is Step 4 If these actions did not solve the problem,
incorrect. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Verify that the power cord is properly
connected to the PSU and the power source.
Step 2 Verify that the power source is 220/110 volts.
Step 3 Remove the PSU and re-install it.
Step 4 Replace the PSU.
Note Prior to re-installing or replacing this component,
see the server-specific Installation and Service Guide
This fault typically occurs when the for prerequisites, safety recommendations and
power supply unit is either offline or the warnings.
input/output voltage is out Step 5 If the above actions did not resolve the issue,
of range. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Check the server-specific Installation and
Service Guide for the power supply vendor
This fault typically occurs when the FRU specification.
information for a power supply unit is Step 2 If the above action did not resolve the issue,
corrupted or malformed. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Consider adding more PSUs to the chassis.
Step 2 Replace any non-functional PSUs. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
recommendations and warnings.
This fault typically occurs when chassis Step 3 If the above actions did not resolve the issue,
power redundancy has failed. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Review the Cisco UCS Site Preparation Guide
and ensure the server has adequate airflow, including
front
and back clearance.
Step 2 Verify that the air flows of the servers are not
obstructed.
Step 3 Verify that the site cooling system is operating
properly.
Step 4 Clean the installation site at regular intervals to
avoid buildup of dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure it
is within the recommended thermal safe operating
range.
Step 6 If the fault reports a "Thermal Sensor threshold
crossing in the front or back pane" error for the
servers,
check if thermal faults have been raised. Those faults
include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan"
error, check on the status of that fan. If it needs
replacement,
create a tech-support file for the chassis and contact
This fault occurs under the following Cisco TAC.
condition: Step 8 If the above actions did not resolve the issue
• If a component within a chassis is and the condition persists, create a tech-support file
operating outside the safe thermal for
operating range. the chassis and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide
and ensure the server has adequate airflow, including
front
and back clearance.
Step 2 Verify that the air flows of the servers are not
obstructed.
Step 3 Verify that the site cooling system is operating
properly.
Step 4 Clean the installation site at regular intervals to
avoid buildup of dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure it
is within the recommended thermal safe operating
range.
Step 6 If the fault reports a "Thermal Sensor threshold
crossing in the front or back pane" error for the
servers,
check if thermal faults have been raised. Those faults
include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan"
error, check on the status of that fan. If it needs
replacement,
create a tech-support file for the chassis and contact
This fault occurs under the following Cisco TAC.
condition: Step 8 If the above actions did not resolve the issue
• If a component within a chassis is and the condition persists, create a tech-support file
operating outside the safe thermal for
operating range. the chassis and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide
and ensure the server has adequate airflow, including
front
and back clearance.
Step 2 Verify that the air flows of the servers are not
obstructed.
Step 3 Verify that the site cooling system is operating
properly.
Step 4 Clean the installation site at regular intervals to
avoid buildup of dust and debris, which can cause a
system to overheat.
Step 5 Check the temperature readings and ensure it
is within the recommended thermal safe operating
range.
Step 6 If the fault reports a "Thermal Sensor threshold
crossing in the front or back pane" error for the
servers,
check if thermal faults have been raised. Those faults
include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan"
error, check on the status of that fan. If it needs
replacement,
create a tech-support file for the chassis and contact
This fault occurs under the following Cisco TAC.
condition: Step 8 If the above actions did not resolve the issue
• If a component within a chassis is and the condition persists, create a tech-support file
operating outside the safe thermal for
operating range. the chassis and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 In the event that the probable cause being
indicated is a thermal problem, check to see if the
airflow to
the server is not obstructed, and it is adequately
ventilated. If possible, check if the heat sink is properly
seated on the processor.
Step 2 In the event that the probable cause being
indicated is equipment inoperable, please contact
Cisco TAC
for further instructions.
Step 3 In the event that the probable cause being
indicated is a power or voltage problem, it is
This fault occurs in the event the recommended to
processor encounters a catastrophic see if the issue is resolved with an alternate power
error or has exceeded the pre-set supply. If this fails to resolve the issue, please contact
thermal/power thresholds. Cisco TAC.

If you see this fault, take the following actions:


Step 1 If this fault occurs, re-seat the processor.
This fault occurs in the unlikely event Step 2 If the above actions did not resolve the issue,
that a processor is disabled. create a tech-support file and contact Cisco TAC.

If you see this fault, take these actions:


Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on the CPU to
determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing
this component, see the server-specific Installation
This fault occurs when the processor and
voltage is out of normal operating Service Guide for prerequisites, safety
range, but has not yet reached a recommendations, and warnings.
critical stage. Normally the processor Step 4 If the above actions did not resolve the issue,
recovers itself from this situation. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Monitor the processor for further degradation
Step 2 Review the SEL statistics on the CPU to
determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing
this component, see the server-specific Installation
and
Service Guide for prerequisites, safety
This fault occurs when the processor recommendations, and warnings.
voltage has exceeded the specified Step 4 If the above actions did not resolve the issue,
hardware voltage rating. create a tech-support file and contact Cisco TAC.

This fault occurs when the processor


voltage has exceeded the specified If you see this fault, take the following action:
hardware voltage rating and may Step 1 Create a tech-support file and contact Cisco
cause processor hardware damage. TAC.

If you see this fault, take the following actions:


Step 1 Check the POST result for the server.
This fault typically occurs when the Step 2 Reboot the server.
server has encountered a diagnostic Step 3 If the above actions did not resolve the issue,
failure or an error during POST. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Review the SEL statistics on the DIMM to
determine which threshold was crossed.
Step 2 Monitor the memory array for further
degradation.
Step 3 Replace the power supply. Prior to replacing
this component, see the server-specific Installation
and
Service Guide for prerequisites, safety
This fault occurs when the memory recommendations, and warnings.
array voltage exceeds the specified Step 4 If the above actions did not resolve the issue,
hardware voltage rating. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to
determine which threshold was crossed.
Step 2 Monitor the memory array for further
degradation.
Step 3 Replace the power supply. Prior to replacing
this component, see the server-specific Installation
This fault occurs when the memory and
array voltage exceeded the specified Service Guide for prerequisites, safety
hardware voltage rating and recommendations and warnings.
potentially the memory hardware may Step 4 If the above actions did not resolve the issue,
be damaged. create a tech-support file and contact Cisco TAC.

This fault occurs when the temperature


of a fan module has exceeded a non-
critical threshold value, but
is still below the critical threshold. Be
aware of the following possible If you see this fault, take the following actions:
contributing factors: Step 1 Review the product specifications to determine
• Temperature extremes can cause the temperature operating range of the fan module.
Cisco UCS equipment to operate at Step 2 Review the Cisco UCS Site Preparation Guide to
reduced efficiency and cause a ensure the fan modules have adequate airflow,
variety of problems, including early including front and back clearance.
degradation, failure of chips, and failure Step 3 Verify that the air flows are not obstructed.
of equipment. In Step 4 Verify that the site cooling system is operating
addition, extreme temperature properly.
fluctuations can cause CPUs to become Step 5 Power off unused rack servers.
loose in their sockets. Step 6 Clean the installation site at regular intervals to
• Cisco UCS equipment should operate avoid buildup of dust and debris, which can cause a
in an environment that provides an inlet system to overheat.
air temperature not Step 7 Replace faulty fan modules.
colder than 50F (10C) nor hotter than Step 8 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of a fan module has exceeded a critical
threshold value. Be aware
of the following possible contributing If you see this fault, take the following actions:
factors: Step 1 Review the product specifications to determine
• Temperature extremes can cause the temperature operating range of the fan module.
Cisco UCS equipment to operate at Step 2 Review the Cisco UCS Site Preparation Guide to
reduced efficiency and cause a ensure the fan modules have adequate airflow,
variety of problems, including early including front and back clearance.
degradation, failure of chips, and failure Step 3 Verify that the air flows are not obstructed.
of equipment. In Step 4 Verify that the site cooling system is operating
addition, extreme temperature properly.
fluctuations can cause CPUs to become Step 5 Power off unused rack servers.
loose in their sockets. Step 6 Clean the installation site at regular intervals to
Cisco UCS equipment should operate in avoid buildup of dust and debris, which can cause a
an environment that provides an inlet system to overheat.
air temperature not Step 7 Replace faulty fan modules.
colder than 50F (10C) nor hotter than Step 8 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.

This fault occurs when the temperature


of a fan module has exceeded a critical
threshold value. Be aware Recommended Action:
of the following possible contributing If you see this fault, take the following actions:
factors: Step 1 Review the product specifications to determine
• Temperature extremes can cause the temperature operating range of the fan module.
Cisco UCS equipment to operate at Step 2 Review the Cisco UCS Site Preparation Guide to
reduced efficiency and cause a ensure the fan modules have adequate airflow,
variety of problems, including early including front and back clearance.
degradation, failure of chips, and failure Step 3 Verify that the air flows are not obstructed.
of equipment. In Step 4 Verify that the site cooling system is operating
addition, extreme temperature properly.
fluctuations can cause CPUs to become Step 5 Power off unused rack servers.
loose in their sockets. Step 6 Clean the installation site at regular intervals to
• Cisco UCS equipment should operate avoid buildup of dust and debris, which can cause a
in an environment that provides an inlet system to overheat.
air temperature not Step 7 Replace faulty fan modules.
colder than 50F (10C) nor hotter than Step 8 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly
connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed.
Step 4 Remove the PSU and reinstall it.
Step 5 Replace the PSU.
Step 6 If the above actions did not resolve the issue,
This fault typically occurs when CIMC note down the type of PSU, create a tech-support file,
detects that a power supply unit in a and
chassis is offline contact Cisco Technical Support.

This fault occurs when the PSU voltage If you see this fault, take the following action:
is out of normal operating range, but Step 1 Monitor the PSU for further degradation.
has not reached a critical Step 2 Remove and reseat the PSU.
stage yet. Normally the PSU will recover Step 3 If the above actions did not resolve the issue,
itself from this situation. create a tech-support file and contact Cisco TAC.

This fault occurs when the server chassis or


cover has been opened. Make sure that the server chassis/cover is in place.

The adapter is missing. CIMC raises this If you see this fault, take the following actions:
fault when any of the following Step 1 Make sure an adapter is inserted in the adaptor
scenarios occur: slot in the server.
• The endpoint reports there is no Step 2 Check whether the adapter is connected and
adapter in the adaptor slot. configured properly and is running the recommended
• The endpoint cannot detect or firmware version.
communicate with the adapter in the Step 3 If the above actions did not resolve the issue,
adaptor slot. create a tech-support file and contact Cisco TAC.

This fault indicates a failure in the If you see this fault, take the following action:
rebuild process of the Local disk. Step 1 Restart the rebuild process.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine
the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide
and ensure the fan module has adequate airflow,
including
front and back clearance.
Step 3 Verify that the air flows of the servers are not
obstructed.
Step 4 Verify that the site cooling system is operating
properly.
Step 5 Clean the installation site at regular intervals to
avoid buildup of dust and debris, which can cause a
system to overheat.
Step 6 Replace the faulty fan modules. Prior to
replacing this component, see the server-specific
Installation
This fault occurs when one or more fans and Service Guide for prerequisites, safety
in a fan module are not operational, but recommendations and warnings.
at least one fan is Step 7 If the above actions did not resolve the issue,
operational. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Insert the disk in a supported slot.
Step 2 Replace the disk, if an additional drive is
available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
This fault occurs when the Flex Flash prerequisites, safety recommendations and warnings.
drive is removed from the slot while the server Step 3 If the above actions did not resolve the issue,
was in use. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
This fault indicates that the review of Installation
the storage system for potential and Service Guide for prerequisites, safety
physical disk errors has failed. recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Synchronize the virtual drive using Cisco UCS
SCU to make the VD optimal.
Step 2 Replace any faulty Flex Flash drives. Prior to
replacing this component, see the server-specific
This fault indicates a recoverable error Installation and Service Guide for prerequisites, safety
with the Flex Flash virtual drive. recommendations and warnings.
If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up
and recreate the virtual drive.
Step 2 Replace any faulty Flex Flash drives. Prior to
replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a non-recoverable Step 3 Synchronize the virtual drive using Cisco UCS-
error with the Flex Flash virtual drive. SCU to make the VD optimal.

This fault indicates the assigned power-cap value is not maintained and host shutdown is initiated as a power-cap fail exception action if the set exception action is shutdown.
If you see this fault, take the following actions:
Step 1 First disable the corresponding power profile in the Power Cap Configuration page and power on the host.
Step 2 Try increasing the power-cap value in the power-cap-profile page for which the shutdown action is configured.
Step 3 Also try reducing the load on the host if the assigned power-cap value needs to be maintained irrespective of host performance impact.

This fault indicates the assigned power-cap value could not be attained within the set correction time.
If you see this fault, take the following actions:
Step 1 Try increasing the configured power-cap value in the power profile that corresponds to this fault.
Step 2 Also try increasing the power limiting correction time in the corresponding power-profile settings.
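
Both power-cap faults above hinge on two configured values: the cap itself and the correction time allowed to bring consumption back under the cap. The sketch below illustrates that relationship with made-up readings; it is not CIMC functionality, just a way to reason about when this kind of fault would be raised.

```python
# Illustrative only: given a power cap (watts), a correction time (seconds), and a
# series of timestamped power readings, report whether the draw stayed above the
# cap for longer than the correction window. All values below are placeholders.

def cap_violation_exceeded(readings, cap_w, correction_s):
    """readings: list of (seconds, watts). Return True if power stayed above the
    cap continuously for at least correction_s seconds."""
    over_since = None
    for t, watts in readings:
        if watts > cap_w:
            over_since = t if over_since is None else over_since
            if t - over_since >= correction_s:
                return True
        else:
            over_since = None
    return False

if __name__ == "__main__":
    samples = [(0, 360), (5, 365), (10, 362), (15, 358), (20, 330)]  # made-up data
    print(cap_violation_exceeded(samples, cap_w=350, correction_s=10))  # True
```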

This fault indicates a non-recoverable flex flash controller failure. This happens when the CIMC is not able to manage/communicate with the flex flash controller.
If you see this fault, take the following actions:
Step 1 Try to reset the flex flash controller. Prior to resetting the flex flash controller, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs if there is a mode mismatch or a card size mismatch.
If you see this fault, take the following actions:
Step 1 Check the controller status and make sure the firmware mode matches the SD card mode and the VDs are in a healthy state.
Step 2 Check the size of the SD cards and make sure both cards match in size.

If you see this fault, take the following actions:


Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the card.
Step 3 Replace the card, if an additional card is
available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
prerequisites, safety recommendations and warnings.
This fault occurs when the flex flash Step 4 If the above actions did not resolve the issue,
card has become inoperable create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the card, else insert a
new card if an additional drive is available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
This fault occurs when the Flex Flash prerequisites, safety recommendations and warnings.
drive removed from slot while server Step 3 If the above actions did not resolve the issue,
was in use. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Synchronize the virtual drive maually using
CIMC WebUI to make the VD optimal.
Step 2 If it did not solve the issue, then virtual drives
might need to be reconfigured. While reconfiguring
virtual drives, there is an option of auto-sync, which
can be enabled. This option will automate the virtual
drives and sync the data.
This fault indicates a recoverable error Review Installation and Service Guide for
with the Flex Flash virtual drive. prerequisites, safety recommendations and warnings.
If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up
and recreate the virtual drive.
Step 2 Replace any faulty Flex Flash drives. Prior to
replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
Step 3 Synchronize the virtual drive either by manually
doing the sync using CIMC WebUI, or by selecting
This fault indicates a non-recoverable auto-sync option while creating the virtual drives, to
error with the Flex Flash virtual drive. make the VD optimal.
AffectedDN Description Comment
Vikram to provide the
details

Vikram to provide the


details
Take Details from F1007 UCSM
Fault code Fault Name Message Severity

IOCard [location] on
F0376 fltEquipmentIOCardRemoved [serverId] is removed. critical

Log capacity on
Management Controller
F0460 fltSysdebugMEpLogMEpLogLog on [serverid] is [capacity] info

Log capacity on
Management Controller
F0461 fltSysdebugMEpLogMEpLogVeryLow on [serverid] is [capacity] info

Log capacity on
Management Controller
F0462 fltSysdebugMEpLogMEpLogFull on [serverid] is [capacity] info

[Serverid] BIOS failed


power-on self testServer
[chassisId] BIOS failed
F0313 fltComputePhysicalBiosPostTimeout power-on self test. critical
Motherboard of
fltComputeBoardMotherBoardVoltageUpperThresh [serverid] voltage:
F0920 oldCritical [voltage] major

Motherboard of
fltComputeBoardMotherBoardVoltageLowerThresh [serverid] voltage:
F0921 oldCritical [voltage] major

Motherboard of
fltComputeBoardMotherBoardVoltageThresholdLow [serverid] voltage:
F0919 erNonRecoverable [voltage] critical

Motherboard of
fltComputeBoardMotherBoardVoltageThresholdUpp [serverid] voltage:
F0918 erNonRecoverable [voltage] critical
Motherboard of
F1040 fltComputeBoardPowerUsageProblem [serverid] power: [power] major|critical

Motherboard of minor |
[serverid] : thermal: major |
F0869 fltComputeBoardThermalProblem [thermal] critical

Motherboard of
[serverid] power:
F0310 fltComputeBoardPowerError [operPower] critical

Motherboard of
F0868 fltComputeBoardPowerFail [serverid] power: [power] critical

CMOS battery voltage on


[serverid] is
F0424 fltComputeBoardCmosVoltageThresholdCritical [cmosVoltage] major

CMOS battery voltage on


fltComputeBoardCmosVoltageThresholdNonRecove [serverid] is
F0425 rable [cmosVoltage] critical

IO Hub on [serverId]
F0538 fltComputeIOHubThermalNonCritical temperature: [thermal] minor
IO Hub on [serverId]
F0539 fltComputeIOHubThermalThresholdCritical temperature: [thermal] major

IO Hub on [serverId]
F0540 fltComputeIOHubThermalThresholdNonRecoverable temperature: [thermal] critical

Processor [id] on
[serverid] temperature:
F0175 fltProcessorUnitThermalNonCritical [thermal] minor
Processor [id] on
[serverid] temperature:
F0176 fltProcessorUnitThermalThresholdCritical [thermal] major

Processor [id] on
[serverid] temperature:
F0177 fltProcessorUnitThermalThresholdNonRecoverable [thermal] critical
IOCard [location] on
[serverid] temperature:
F0729 fltEquipmentIOCardThermalThresholdNonCritical [thermal] minor

IOCard [location] on
[serverid] temperature:
F0730 fltEquipmentIOCardThermalThresholdCritical [thermal] major
IOCard [location] on
fltEquipmentIOCardThermalThresholdNonRecovera [serverid] temperature:
F0731 ble [thermal] critical

IOCard [location] on
server [id] operState:
F0379 fltEquipmentIOCardThermalProblem [operState] major
Storage Controller [id]
F1004 fltStorageControllerInoperable operability: [operability] critical

Local disk [id] on


[serverid] operability: major|
F0181 fltStorageLocalDiskInoperable [operability] warning

Local disk [id] on


[serverid] operability:
F1006 fltStorageLocalDiskCopybackFailed [operability] major

Virtual drive [id] on


[serverid] operability:
F1007 fltStorageVirtualDriveInoperable [operability] critical

Virtual drive [id] on


[serverid] operability:
F1008 fltStorageVirtualDriveDegraded [operability] warning
Virtual drive [id] on
[serverid] operability:
F1009 fltStorageVirtualDriveReconstructionFailed [operability] major

Virtual drive [id] on


[serverid] operability:
F1010 fltStorageVirtualDriveConsistencyCheckFailed [operability] major

RAID Battery on
[serverid] operability:
F0531 fltStorageRaidBatteryInoperable [operability] major

Raid battery [id] on


[serverid] operability:
F0997 fltStorageRaidBatteryDegraded [operability] minor

Raid battery [id] on


[serverid] operability:
F0998 fltStorageRaidBatteryRelearnAborted [operability] minor

Raid battery [id] on


[serverid] operability:
F0999 fltStorageRaidBatteryRelearnFailed [operability] major
Fan [id] in Fan Module
[tray]-[id] under
[serverid] presence:
F0434 fltEquipmentFanMissing [presence] warning

Fan [id] in Fan Module


[tray]-[id] under
F0395 fltEquipmentFanPerfThresholdNonCritical [serverid] speed: [perf] minor

Fan [id] in Fan Module


[tray]-[id] under
F0396 fltEquipmentFanPerfThresholdCritical [serverid] speed: [perf] major

Fan [id] in Fan Module


[tray]-[id] under
F0397 fltEquipmentFanPerfThresholdNonRecoverable s[erverid] speed: [perf] critical
DIMM [location] on
[serverid] has an invalid
F0502 fltMemoryUnitIdentityUnestablishable FRU warning

DIMM [location] on
[serverid]
F0184 fltMemoryUnitDegraded operability: [operability] warning

DIMM [location] on
[serverid]
F0185 fltMemoryUnitInoperable operability: [operability] major
DIMM [location] on
[serverid]
F0186 fltMemoryUnitThermalThresholdNonCritical temperature: [thermal] Info

DIMM [location] on
[serverid]
F0187 fltMemoryUnitThermalThresholdCritical temperature: [thermal] major
DIMM [location] on
[serverid]
F0188 fltMemoryUnitThermalThresholdNonRecoverable temperature: [thermal] critical

Power supply [id] in


[serverid] presence:
F0378 fltEquipmentPsuMissing [presence] warning
Power supply [id] in
[serverid] temperature:
F0381 fltEquipmentPsuThermalThresholdNonCritical [thermal] minor

Power supply [id] in


[serverid] temperature:
F0383 fltEquipmentPsuThermalThresholdCritical [thermal] major
Power supply [id] in
[serverid] temperature:
F0385 fltEquipmentPsuThermalThresholdNonRecoverable [thermal] critical

Power supply [id] in


[serverid] output power:
F0392 fltEquipmentPsuPerfThresholdNonCritical [perf] minor

Power supply [id] in


[serverid] output power:
F0393 fltEquipmentPsuPerfThresholdCritical [perf] major

Power supply [id] in


[serverid] output power:
F0394 fltEquipmentPsuPerfThresholdNonRecoverable [perf] critical
Power supply [id] in
[serverid] voltage:
F0389 fltEquipmentPsuVoltageThresholdCritical [voltage] major

Power supply [id] in


[serverid] voltage:
F0391 fltEquipmentPsuVoltageThresholdNonRecoverable [voltage] critical

Power supply [id] on


[serverid] has minor |
exceeded its power major |
F0882 fltEquipmentPsuPowerThreshold threshold. critical

Power supply [id] on


[serverid] has
disconnected cable or
F0883 fltEquipmentPsuInputError bad input voltage. critical

Power supply [id] in


[serverid] operability:
F0374 fltEquipmentPsuInoperable [operability] major
Power supply [id] on
[serverid] has a
F0407 fltEquipmentPsuIdentity malformed FRU critical

[Serverid] was configured


for redundancy, but
fltPowerChassisMemberChassisPsuRedundanceFailu running in a non-
F0743 re redundant configuration. major

Thermal condition on
[serverid] cause:
F0410 fltEquipmentChassisThermalThresholdNonCritical [thermalStateQualifier] minor
Thermal condition on
[serverid] cause:
F0409 fltEquipmentChassisThermalThresholdCritical [thermalStateQualifier] major
Thermal condition on
fltEquipmentChassisThermalThresholdNonRecovera [serverid] cause:
F0411 ble [thermalStateQualifier] critical
Processor [id] on
[serverId] operability:
F0174 fltProcessorUnitInoperable [operability] critical|major

Processor [id] on
[serverid] operState:
F0842 fltProcessorUnitDisabled [operState] info

Processor [id] on
[serverId] voltage:
F0178 fltProcessorUnitVoltageThresholdNonCritical [voltage] minor
Processor [id] on
[serverId] voltage:
F0179 fltProcessorUnitVoltageThresholdCritical [voltage] major

Processor [id] on
[serverId] voltage:
F0180 fltProcessorUnitVoltageThresholdNonRecoverable [voltage] critical

[Serverid] POST or
F0517 fltComputePhysicalPostfailure diagnostic failure critical

Memory array [id] on


[serverid] voltage:
F0190 fltMemoryArrayVoltageThresholdCritical [voltage] major
Memory array [id] on
[serverid] voltage:
F0191 fltMemoryArrayVoltageThresholdNonRecoverable [voltage] critical

Fan module [tray]-[id] in


fltEquipmentFanModuleThermalThresholdNonCritic [serverid] temperature:
F0380 al [thermal] minor
Fan module [tray]-[id] in
[serverid] temperature:
F0382 fltEquipmentFanModuleThermalThresholdCritical [thermal] major

Fan module [tray]-[id] in


fltEquipmentFanModuleThermalThresholdNonReco [serverid] temperature:
F0384 verable [thermal] critical
Power supply [id] in
F0528 fltEquipmentPsuOffline [serverid] power: [power] warning

Power supply [id] in


[serverid] voltage:
F0387 fltEquipmentPsuVoltageThresholdNonCritical [voltage] minor

F0320 fltComputePhysicalUnidentified [Serverid] Chassis open. warning

Adapter [id] in [serverid]


F0203 fltAdaptorUnitMissing presence: [presence] warning

Local disk [id] on


[serverid] operability:
[operability]. Reason:
F1005 fltStorageLocalDiskRebuildFailed [operQualifierReason] major
Fan [id] in Fan Module
[tray]-[id] under server
[id] operability:
F0371 fltEquipmentFanDegraded [operability] warning

Local disk [id] on


FlexFlash Controller [id]
F0181 fltStorageFlexFlashLocalDiskMissing operability: [operability] info

Local disk [id] on server


[id] had a patrol read
failure. Reason:
F1003 fltStorageControllerPatrolReadFailed [operQualifierReason] warning

Virtual drive [id] on


FlexFlash Controller [id]
F1008 fltStorageFlexFlashVirtualDriveHVDegraded operability: [operability] warning
Virtual drive [id] on
Storage Controller [id]
F1007 fltStorageFlexFlashVirtualDriveHVInoperable operability: [operability] critical

Local disk [id] on


[serverid] operability:
F1256 fltStorageLocalDiskMissing [operability] info

Local disk [id] on


[serverid] operability:
F0996 fltStorageLocalDiskDegraded [operability] warning

F0637 fltPowerBudgetPowerBudgetBmcProblem Major

F0635 fltPowerBudgetPowerBudgetCmcProblem Major

Flex flash Controller [id]


F1257 fltStorageFlexFlashControllerInoperable operability: [operability] Major
F1262 fltStorageFlexFlashControllerUnhealthy Warning

Flex flash card [id] on


[serverid] operability:
F1258 fltStorageFlexFlashCardInoperable [operability] Info

Local disk [id] on


FlexFlash Controller [id]
F1259 fltStorageFlexFlashCardMissing operability: [operability] Info

Virtual drive [id] on


FlexFlash Controller [id]
F1260 fltStorageFlexFlashVirtualDriveDegraded operability: [operability] Warning
Virtual drive [id] on
Storage Controller [id]
F1261 fltStorageFlexFlashVirtualDriveInoperable operability: [operability] Critical

DIMM [location] on
server [chassisId]/[slotId]
operState:
[operState]DIMM
[location] on server [id]
F844 fltMemoryUnitDisabled operaState: [operState] Critical
Probable Cause DN Description

[sensor_name]: PCI Slot [id]


equipment- sys/rack-unit-1/equipped-slot- riser or card missing: reseat or
removed [Id] replace pci card [id]

sys/rack-unit-1/mgmt/log-SEL-
log-capacity 0 System Event log is going low

sys/rack-unit-1/mgmt/log-SEL- System Event log capacity is


log-capacity 0 very low

sys/rack-unit-1/mgmt/log-SEL- System Event log is Full: Clear


log-capacity 0 the log

equipment- BIOS POST Timeout occurred:


inoperable sys/rack-unit-1/board Contact Cisco TAC
Stand-by voltage (xV) to the
motherboard is upper critical:
Check the power supply
Auxiliary voltage (xV) to the
motherboard is upper critical:
Check the power supply
Motherboard voltage (xV) is
upper critical: Check the
voltage-problem sys/rack-unit-1/board power supply

"Stand-by voltage ([Val] V) to


the motherboard is lower
critical: Check the power
supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
critical: Check the power
supply
Motherboard voltage ([Val] V)
is lower critical: Check the
voltage-problem sys/rack-unit-1/board power supply"

"Stand-by voltage ([Val] V) to


the motherboard is lower
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is lower
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is lower non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"

"Stand-by voltage ([Val] V) to


the motherboard is upper
non-recoverable: Check the
power supply
Auxiliary voltage ([Val] V) to
the motherboard is upper
non-recoverable: Check the
power supply
Motherboard voltage ([Val] V)
is upper non-recoverable:
voltage-problem sys/rack-unit-1/board Check the power supply"
"Motherboard Power usage is
upper critical: Check hardware
Motherboard Power usage is
upper non-recoverable: Check
power-problem sys/rack-unit-1/board hardware"

Motherboard chipset
inoperable due to high
thermal-problem sys/rack-unit-1/board temperature

P[Id]V[Id]_AU[Id]_PWRGD:
Voltage rail Power Good
dropped due to PSU or HW
failure, please contact CISCO
power-problem sys/rack-unit-1/board TAC for assistance

The server failed to power


power-problem sys/rack-unit-1/board ON: Check Power Supply

Battery voltage level is upper


voltage-problem sys/rack-unit-1/board/cpu-[Id] critical: Replace battery

Battery voltage level is upper


non-recoverable: Replace
voltage-problem sys/rack-unit-1/board/cpu-[Id] battery

[sensor_name]: Motherboard
sys/rack-unit-1/board chipset temperature is upper
thermal-problem non-critical
[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board critical

[sensor_name]: Motherboard
chipset temperature is upper
thermal-problem sys/rack-unit-1/board non-recoverable

Processor [Id] Thermal


threshold has crossed upper
non-critical threshold: Check
thermal-problem sys/rack-unit-1/board/cpu-[Id] cooling
Processor [Id] Thermal
threshold has crossed upper
sys/rack-unit-1/board/cpu-[Id] critical threshold: Check
thermal-problem cooling

Processor [Id] Thermal


sys/rack-unit-1/board/cpu-[Id] threshold has crossed a preset
thermal-problem threshold: Check cooling
Adaptor Unit [Id]
Temperature is non critical :
thermal-problem sys/rack-unit-1/adaptor-[Id] Check Cooling

Adaptor Unit [id]


Temperature is critical : Check
thermal-problem sys/rack-unit-1/adaptor-[Id] Cooling
Adaptor Unit [id]
Temperature is non
thermal-problem sys/rack-unit-1/adaptor-[Id] recoverable : Check Cooling

[sensor_name]: Adaptor Unit


[Id] is inoperable due to high
thermal-problem sys/rack-unit-1/adaptor-[Id] temperature : Check Cooling
Storage controller SLOT-[Id]
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
inoperable SAS-SLOT-[Id] the storage controller

equipment-
inoperable | Storage Local disk [Id] is
equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Local disk [Id] is


sys/rack-unit-1/board/storage- inoperable: reseat or replace
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Virtual Drive [Id] is


inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
inoperable SAS-SLOT-[Id]/vd-[Id] storage drive

Storage Virtual Drive [Id] is


inoperable: Check storage
equipment- sys/rack-unit-1/board/storage- controller, or reseat the
degraded SAS-SLOT-[Id]/vd-[Id] storage drive
Storage Virtual Drive [Id]
reconstruction failed: Check
equipment- sys/rack-unit-1/board/storage- storage controller, or reseat
degraded SAS-SLOT-[Id]/vd-[Id] the storage drive

Storage Virtual Drive [Id]


Consistency Check Failed:
equipment- sys/rack-unit-1/board/storage- please check the controller, or
degraded SAS-SLOT-[Id]/vd-[Id] reseat the physical drives

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- inoperable: check the raid
inoperable SAS-SLOT-[Id]/raid-battery battery

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- Degraded: check the raid
degraded SAS-SLOT-[id]/raid-battery battery

Storage Raid battery [Id]


equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery

Storage Raid battery [id]


equipment- sys/rack-unit-1/board/storage- relearn aborted : check the
degraded SAS-SLOT-[Id]/raid-battery raid battery
equipment- sys/rack-unit-1/fan-module-1- Fan [id] missing: reseat or
missing [Id]/fan-[Id] replace fan [Id]

Fan speed for fan-[Id] in Fan


Module [Id]-[Id] is lower non
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non critical : Check the air
problem [Id]/fan-[Id] intake to the server

Fan speed for fan-{Id] in Fan


Module [Id]-[Id] is lower
critical : Check the air intake
to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- critical : Check the air intake
problem [Id]/fan-[Id] to the server

Fan speed for fan-[Id] in Fan


Module {Id]-[Id] is lower non
recoverable : Check the air
intake to the server
Fan speed for fan-[Id] is lower
performance- sys/rack-unit-1/fan-module-1- non recoverable : Check the
problem [Id]/fan-[Id] air intake to the server
[sensor_name]: Memory Riser
[Id] missing: reseat or replace
memory riser [Id]
[sensor_name]: Memory Unit
identity- sys/rack-unit-1/board/ [Id] missing: reseat or replace
unestablishable memarray-[Id]/mem-[Id] physical memory [Id]

equipment- sys/rack-unit-1/board/ DIMM [Id] is degraded : Check


degraded memarray-[Id]/mem-[Id] or replace DIMM

equipment- sys/rack-unit-1/board/ DIMM [Id] is inoperable :


inoperable memarray-[Id]/mem-[Id] Check or replace DIMM
Memory Unit [Id]
temperature is upper non
critical:Check Cooling
%s: Memory riser %d Thermal
sys/rack-unit-1/board/ Threshold at upper non
thermal-problem memarray-[Id]/mem-[Id] critical levels: Check Cooling

Memory Unit [Id]


temperature is upper
critical:Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper critical levels: Check
thermal-problem memarray-[Id]/mem-[Id] Cooling
Memory Unit [Id]
temperature is upper non
recoverable: Check Cooling
[sensor_name]: Memory riser
[Id] Thermal Threshold at
sys/rack-unit-1/board/ upper non recoverable levels:
thermal-problem memarray-[Id]/mem-[Id] Check Cooling

equipment- Power Supply [Id] missing:


missing sys/rack-unit-1/psu-[Id] reseat or replace PS [id]
Power Supply [Id]
temperature is upper non
thermal-problem sys/rack-unit-1/psu-[Id] critical : Check cooling

Power Supply [Id]


temperature is upper critical :
thermal-problem sys/rack-unit-1/psu-[Id] Check cooling
Power Supply [Id]
temperature is upper non
recoverable : Check Power
thermal-problem sys/rack-unit-1/psu-[Id] Supply Status

Power Supply [Id] output


power is upper non critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] output


power is upper critical :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] output


power is upper non
recoverable : Reseat or
power-problem sys/rack-unit-1/psu-[Id] replace Power Supply
Power Supply [Id] Voltage is
upper critical : Reseat or
voltage-problem sys/rack-unit-1/psu-[Id] replace Power Supply

Power Supply [Id] Voltage is


upper non Recoverable :
Reseat or replace Power
voltage-problem sys/rack-unit-1/psu-[Id] Supply

Power Supply [Id] current is


upper non critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper critical : Reseat or
replace Power Supply
Power Supply [Id] Current is
upper non recoverable :
Reseat or replace Power
power-problem sys/rack-unit-1/psu-[Id] Supply

Power supply [Id] is in a


degraded state, or has bad
power-problem sys/rack-unit-1/psu-[Id] input voltage

Power Supply [Id] has lost


equipment- input or input is out of range :
inoperable sys/rack-unit-1/psu-[Id] Check input to PS or replace
[sensor_name]: Power Supply
[Id] Vendor/Revision/Rating
mismatch, or PSU Processor
missing : Repace PS or Check
fru-problem sys/rack-unit-1/psu-[Id] Processor [Id]

Power Supply redundancy is


psu-redundancy- lost : Reseat or replace Power
fail sys/rack-unit-1/psu-[Id] Supply

Front Panel Thermal


Threshold at upper non
critical levels: Check Cooling
The Front Panel temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper non-critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper non-critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper critical
levels: Check Cooling
The Front Panel temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] inlet temperature
has crossed upper critical
threshold: Check device
cooling
Riser [Id] outlet temperature
has crossed upper critical
sys/rack-unit-1 threshold: Check device
thermal-problem sys/rack-unit-1/board cooling
Front Panel Thermal
Threshold at upper non
recoverable levels: Check
Cooling
The Front Panel temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] inlet temperature
has crossed upper non-
recoverable threshold: Check
device cooling
Riser [Id] outlet temperature
has crossed upper non-
sys/rack-unit-1 recoverable threshold: Check
thermal-problem sys/rack-unit-1/board device cooling
Processor [Id] is inoperable
due to high temperature:
Check cooling
A catastrophic fault has
occurred on one of the
processors: Please check the
processors' status.
Processor [Id] is operating at a
high temperature: Check
cooling
PVCCD_P1_VRHOT: Processor
1 is operating at a high
temperature: Check cooling
P1_LVC3_PWRGD: Voltage rail
Power Good dropped due to
PSU or HW failure, please
contact CISCO TAC for
assistance
P1_MEM23_MEMHOT:
Temperature sensor
corresponding to Processor 1
Memory 2/3 has asserted a
equipment- sys/rack-unit-1/board/cpu-[Id] Thermal Problem: Check
inoperable sys/rack-unit-1/board server cooling

Processor [Id] missing: Please


equipment- reseat or replace Processor
disabled sys/rack-unit-1/board/cpu-[Id] [Id]

Memory channel ([Id]) voltage


is upper non-critical
Processor [Id] voltage is upper
non-critical
Processor [Id] Voltage
threshold has crossed upper
non-critical threshold: Replace
the Power Supply and verify if
the issue is resolved. If the
voltage-problem sys/rack-unit-1/board/cpu-[Id] issue persists, call Cisco TAC
Memory channel ([Id]) voltage
is upper critical
Processor [Id] voltage is upper
critical
Processor [Id] Voltage
threshold has crossed upper
critical threshold: Replace the
Power Supply and verify if the
issue is resolved. If the issue
voltage-problem sys/rack-unit-1/board/cpu-[Id] persists, call Cisco TAC

Memory channel ([Id]) voltage


is upper non-recoverable
Processor [Id] voltage is upper
non-recoverable
Processor [Id] Voltage
threshold has crossed upper
non-recoverable threshold:
Replace the Power Supply and
verify if the issue is resolved.
If the issue persists, call Cisco
voltage-problem sys/rack-unit-1/board/cpu-[Id] TAC

equipment- [sensor_name]: Bios Post


problem sys/rack-unit-1 Failed: Check hardware

[sensor_name]: Memory riser


[Id] Voltage Threshold at
upper critical levels: Check
Power Supply; reseat power
connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower critical levels: Check
Power Supply; reseat power
sys/rack-unit-1/board/ connectors on the
voltage-problem memarray-[Id] motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
upper non recoverable levels:
Check Power Supply; reseat
power connectors on the
motherboard
[sensor_name]: Memory riser
[Id] Voltage Threshold at
lower non recoverable levels:
Check Power Supply; reseat
sys/rack-unit-1/board/ power connectors on the
voltage-problem memarray-[Id] motherboard

sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]

sys/rack-unit-1/fan-module-
thermal-problem [Id]-[id]
equipment-offline sys/rack-unit-1/psu-[Id]

voltage-problem sys/rack-unit-1/psu-[Id]
[sensor_name]: server {id]
Chassis Intrusion detected:
equipment- Please secure the server
problem sys/rack-unit-1 chassis

equipment- [sensor_name]:[id] missing:


missing sys/rack-unit-1/adaptor-[Id] reseat or replace [id]

Storage Local disk [Id] is


sys/rack-unit-1/board/storage- rebuild failed: please check
equipment-offline SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]
[sensor_name]: Fan [Id] has
asserted a predictive failure:
equipment- sys/rack-unit-1/fan-module-1- reseat or replace fan [Id]
degraded [Id]/fan-[Id]

equipment- sys/rack-unit-1/board/storage-
missing FLASH-[Id]/pd-HV

Storage controller[Id] patrol


equipment- sys/rack-unit-1/board/storage- read failed: patrol read cant
inoperable SAS-SLOT-[Id] be started

Flex Flash Virtual Drive HV


equipment- sys/rack-unit-1/board/storage- Degraded: please check the
degraded FLASH-[Id]/vd-HV flash device or the controller
Flex Flash Virtual Drive HV
equipment- sys/rack-unit-1/board/storage- inoperable: please check the
inoperable FLASH-[Id]/vd-HV flash device or the controller

Storage Local disk [Id] is


equipment- sys/rack-unit-1/board/storage- inoperable: reseat or replace
missing SAS-SLOT-[Id]/pd-[Id] the storage drive [Id]

Storage Local disk [Id] is


degraded: please check if
equipment- sys/rack-unit-1/board/storage- rebuild or copyback of drive is
degraded SAS-SLOT-[Id]/pd-[Id] required

Power capping failed:


System shutdown is
power-cap-fail sys/rack-unit-1/budget initiated by Node Manager

Power capping correction


time exceeded: Please set
power-cap-fail sys/rack-unit-1/budget an appropriate power limit

Flex Flash controller


FlexFlash-0 inoperable:
equipment- sys/rack-unit-1/board/ reseat or replace the flex
inoperable storage-flexflash-FlexFlash-0 controller
Flex Flash controller
FlexFlash-0 configuration
equipment- sys/rack-unit-1/board/ error: configure the flex
unhealthy storage-flexflash-FlexFlash-0 controller correctly

sys/rack-unit-1/board/ Flex Flash Local disk 2 is


equipment- storage-flexflash-FlexFlash- inoperable: reseat or
inoperable 0/card-2 replace the local disk 2

sys/rack-unit-1/board/ Flex Flash Local disk 2


equipment- storage-flexflash-FlexFlash- missing: reseat or replace
missing 0/card-2 Flex Flash Local disk

Flex Flash Virtual Drive 1


sys/rack-unit-1/board/ (testuser) Degraded: please
equipment- storage-flexflash-FlexFlash- check the flash device or
degraded 0/vd-1 the controller
Flex Flash Virtual Drive 5
(Hypervisor) is Inoperable:
sys/rack-unit-1/board/ Check flex controller
equipment- storage-flexflash-FlexFlash- properties or Flex Flash
inoperable 0/vd-5 disks

MEM_RSR3_STATUS:
Memory riser 3 has been
disabled due to a mixed or
invalid memory riser
configuration: Remove the
riser and make sure the
host CPU type supports the
equipment- sys/rack-unit-1/board/ Memory Riser DDR type
disabled memarray-1/mem-3 that is installed.
Explanation Recommended Action

If you see this fault, take the following actions:


Step 1 Re-seat/re-insert the I/O card. Prior to re-
inserting this server component, see the server-
specific
This fault typically occurs because an Installation and Service Guide for prerequisites, safety
I/O card is removed from the chassis, or recommendations and warnings.
when the card or the slot Step 2 If the above actions did not resolve the issue,
is faulty. create a tech-support file and contact Cisco TAC.

This fault typically occurs because Cisco


Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is low. This
is an information-level fault and can be
ignored if you do not want to clear the If you see this fault, take the following action:
SEL at this time. Step 1 You may choose to clear the SEL.

This fault typically occurs because Cisco


Integrated Management Controller
(CIMC) has detected that
the System Event Log (SEL) on the
server is almost full. The available
capacity in the log is very low.
This is an information-level fault and
can be ignored if you do not want to If you see this fault, take the following action:
clear the SEL at this time. Step 1 You may choose to clear the SEL.

This fault typically occurs because the If you see this fault, take the following action:
CIMC SEL is full. Step 1 You may choose to clear the SEL.

If you see this fault, take the following actions:


Step 1 Connect to the CIMC WebUI and launch the
KVM console to monitor the BIOS POST completion.
This fault typically occurs when the Step 2 If the above actions did not resolve the issue,
server did not complete the BIOS POST. create a tech-support file and contact Cisco TAC.
This fault typically occurs when one or This fault typically occurs when one or more
more motherboard input voltages has motherboard input voltages has exceeded upper
exceeded upper critical critical
thresholds. thresholds.

If you see this fault, take the following actions:


Step 1 Reseat or replace the power supply. Prior to
replacing this component, see the server-specific
Installation
This fault typically occurs when one or and Service Guide for prerequisites, safety
more motherboard input voltages has recommendations and warnings.
crossed lower critical Step 2 If the issue persists, create a tech-support file
thresholds. and contact TAC.

This fault typically occurs when one or


more motherboard input voltages has
dropped too low and is If you see this fault, take the following action:
unlikely to recover. Step 1 Contact Cisco TAC.

This fault typically occurs when one or


more motherboard input voltages has
become too high and is unlikely to If you see this fault, take the following action:
recover. Step 1 Contact Cisco TAC.
This fault typically occurs when the
motherboard power consumption
exceeds certain threshold limits.
When this happens, the power usage If you see this fault, take the following action:
sensors on a server detects a problem Step 1 Contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Verify that the server fans are working
properly.
Step 2 Wait for 24 hours to see if the problem resolves
This fault typically occurs when the itself.
motherboard thermal sensors on a Step 3 If the above actions did not resolve the issue,
server detect a problem. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Reseat/replace the power supply. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
This fault typically occurs when the recommendations and warnings.
server power sensors have detected a Step 2 If the above actions did not resolve the issue,
problem. create a tech-support file and contact Cisco TAC.

This fault typically occurs when the If you see this fault, take the following action:
power sensors on a server detect a Step 1 Create a tech-support file and contact Cisco
problem. TAC.

If you see this fault, take the following action:


This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped to lower this component, see the server-specific Installation
than the normal operating and
range. This could impact the clock and Service Guide for prerequisites, safety
other CMOS settings. recommendations and warnings.

If you see this fault, take the following action:


This fault is raised when the CMOS Step 1 Replace the CMOS battery. Prior to replacing
battery voltage has dropped quite low this component, see the server-specific Installation
and is unlikely to recover. and
This impacts the clock and other CMOS Service Guide for prerequisites, safety
settings. recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Monitor other environmental events related to
this server and ensure the temperature ranges are
This fault is raised when the I/O within
controller temperature is outside the recommended ranges.
upper or lower non-critical Step 2 If this action did not solve the problem, contact
threshold. Cisco TAC.
If you see this fault, take the following actions:
Step 1 Monitor other environmental events related to
the server and ensure the temperature ranges are
within
recommended ranges.
Step 2 Consider turning off the server for a while if
This fault is raised when the I/O possible.
controller temperature is outside the Step 3 If the above actions did not resolve the issue,
upper or lower critical threshold. create a tech-support file and contact Cisco TAC.
If you see this fault, take the following actions:
This fault is raised when the I/O Step 1 Shut down the server immediately.
controller temperature is outside the Step 2 Create a tech-support file and contact Cisco
recoverable range of operation. TAC.

This fault occurs when the processor


temperature on a server exceeds a non-
critical threshold value, but
is still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the processor
temperature on a rack server exceeds a
critical threshold value. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

This fault occurs when the processor


temperature on a rack server has been
out of the operating range,
and the issue is not recoverable. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has exceeded a non-
critical threshold value, but is
still below the critical threshold. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.

This fault occurs when the temperature


of an I/O card has exceeded a critical
threshold value. Be aware
of the following possible contributing
factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In
addition, extreme temperature
fluctuations can cause CPUs to become If you see this fault, take the following actions:
loose in their sockets. Step 1 Review the product specifications to determine
• Cisco UCS equipment should operate the temperature operating range of the I/O card.
in an environment that provides an inlet Step 2 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 3 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 4 If the above actions did not resolve the issue,
offline create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of an I/O card has been out of the
operating range, and the issue
is not recoverable. Be aware of the
following possible contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a
variety of problems, including early
degradation, failure of chips, and failure
of equipment. In If you see this fault, take the following actions:
addition, extreme temperature Step 1 Review the product specifications to determine
fluctuations can cause CPUs to become the temperature operating range of the I/O card.
loose in their sockets. Step 2 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 3 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 4 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 5 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following action:


Step 1 Review the product specifications to determine
This fault occurs when there is a the temperature operating range of the I/O card.
thermal problem on an I/O card. Be Step 2 Review the Cisco UCS Site Preparation Guide to
aware of the following possible ensure that the servers have adequate airflow,
contributing factors: including
• Temperature extremes can cause front and back clearance.
Cisco UCS equipment to operate at Step 3 Verify that the airflows on the servers are not
reduced efficiency and cause a obstructed.
variety of problems, including early Step 4 Verify that the site cooling system is operating
degradation, failure of chips, and failure properly.
of equipment. In Step 5 Clean the installation site at regular intervals to
addition, extreme temperature avoid buildup of dust and debris, which can cause a
fluctuations can cause CPUs to become system to overheat.
loose in their sockets. Step 6 Replace faulty I/O cards. Prior to replacing this
• Cisco UCS equipment should operate component, see the server-specific Installation and
in an environment that provides an inlet Service Guide for prerequisites, safety
air temperature not recommendations and warnings.
colder than 50F (10C) nor hotter than Step 7 If the above actions did not resolve the issue,
95F (35C). create a tech-support file and contact Cisco TAC.
If you see this fault, take the following action:
Step 1 Reseat or replace the storage controller. Prior
This fault indicates a non-recoverable to replacing this component, see the server-specific
storage controller failure. This happens Installation and Service Guide for prerequisites, safety
when the storage system recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the local disk.
Step 3 Replace the disk, if an additional disk is
available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
This fault occurs when the local disk has for
become inoperable or has been prerequisites, safety recommendations and warnings.
removed while the server was Step 4 If the above actions did not resolve the issue,
in use. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Replace the physical drive and check to see if
the issue is resolved after a rebuild. Prior to replacing
this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault indicates a physical disk recommendations and warnings.
copyback failure. This fault could Step 2 Reseat or replace the storage controller.
indicate a physical drive problem Step 3 Check configuration options for the storage
or an issue with the RAID configuration. controller in the MegaRAID ROM configuration page.

If you see this fault, take the following actions:


Step 1 If the data on the drive is accessible, back up
and recreate the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a non-recoverable Step 3 Check for controller errors in the MegaRAID
error with the virtual drive. ROM page logs.

If you see this fault, take the following actions:


Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a recoverable error and Service Guide for prerequisites, safety
with the virtual drive. recommendations and warnings.
This fault indicates a failure in the
reconstruction process of the virtual If you see this fault, take the following action:
drive. Step 1 Restart the reconstruction process.

If you see this fault, take the following actions:


Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to
replacing this component, see the server-specific
Installation
This fault indicates a consistency check and Service Guide for prerequisites, safety
failure with the virtual drive. recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Replace the RAID battery. Prior to replacing this
component, see the server-specific Installation and
Service Guide for prerequisites, safety
This fault occurs when the RAID battery recommendations and warnings
voltage is below the normal operating Step 2 If the above action did not resolve the issue,
range. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following action:


Step 1 Reseat or replace the battery backup unit on
the storage controller.
Prior to replacing this component, see the server-
This fault indicates a controller battery specific Installation and Service Guide for
backup unit failure. prerequisites, safety recommendations and warnings.

If you see this fault, take the following actions:


Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates that a controller Step 3 Replace the battery backup unit if it has
battery relearn process was aborted. exceeded 100 relearn cycles.

If you see this fault, take the following actions:


Step 1 Restart the relearn process for the battery
backup unit.
Step 2 Reseat or replace the battery backup unit. Prior
to replacing this component, see the server-specific
Installation and Service Guide for prerequisites, safety
recommendations and warnings.
This fault indicates a controller battery Step 3 Replace the battery backup unit if it has
relearn failure. exceeded 100 relearn cycles.
If you see this fault, take the following actions:
Step 1 Insert/re-insert the fan module in the slot that
is reporting the issue.
Step 2 Replace the fan module with a different fan
module, if available.
Note Prior to installing or replacing this component,
see the server-specific Installation and Service Guide
for
This fault occurs in the unlikely event prerequisites, safety recommendations and warnings.
that a fan in a fan module cannot be Step 3 If the above actions did not resolve the issue,
detected. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time
or if other fans do not show the same problem, reseat
This fault occurs when the fan speed the fan.
reading from the fan controller does not Step 3 Replace the fan module. Prior to replacing this
match the desired fan speed component, see the server-specific Installation and
and is outside of the normal operating Service Guide for prerequisites, safety
range. This can indicate a problem with recommendations, warnings and procedures.
a fan or with the reading Step 4 If the above actions did not resolve the issue,
from the fan controller. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the fan status.
Step 2 If the problem persists for a long period of time
This fault occurs when the fan speed or if other fans do not show the same problem, reseat
read from the fan controller does not the fan.
match the desired fan speed Step 3 Replace the fan module. Prior to replacing this
and has exceeded the critical threshold component, see the server specific Installation and
and is in risk of failure. This can indicate Service Guide for prerequisites, safety
a problem with a fan recommendations and warnings.
or with the reading from the fan Step 4 If the above actions did not resolve the issue,
controller. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Replace the fan. Prior to replacing this
component, see the server-specific Installation and
This fault occurs when the fan speed Service Guide
read from the fan controller has far for prerequisites, safety recommendations and
exceeded the desired fan speed. warnings.
It usually indicates that the fan has Step 2 If the above action did not resolve the issue,
failed. create a tech-support file and contact Cisco TAC.
This fault typically occurs when a sensor If you see this fault, take the following action:
has detected an unsupported DIMM in Step 1 Verify if the DIMM is supported on the server
the server. For example, configuration. If the DIMM is not supported on the
the model, vendor, or revision is not server
recognized. configuration, contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Monitor the DIMM for further ECC errors. If the
high number of errors persists, there is a high
possibility of the DIMM becoming inoperable.
Step 2 If the DIMM becomes inoperable, replace the
DIMM. You can use the CIMC WebUI to locate the
faulty
This fault occurs when a DIMM is in a DIMM. Prior to replacing this component, see the
degraded operability state. This state server-specific Installation and Service Guide for
typically occurs when an prerequisites, safety recommendations, warnings and
excessive number of correctable ECC procedures.
errors are reported on the DIMM by the Step 3 If the above actions did not resolve the issue,
server BIOS. create a tech-support file and contact Cisco TAC.

If you see this fault, take the following actions:


Step 1 Review the SEL statistics on the DIMM to
determine which threshold was crossed.
Step 2 If necessary, replace the DIMM. You can use
the CIMC WebUI to locate the faulty DIMM. Prior to
This fault typically occurs because an replacing this component, see the server-specific
above threshold number of correctable Installation and Service Guide for prerequisites, safety
or uncorrectable errors has recommendations, warnings and procedures.
occurred on a DIMM. The DIMM may be Step 3 If the above actions did not resolve the issue,
inoperable. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of a memory unit on a server exceeds a
non-critical threshold
value, but is still below the critical
threshold. Be aware of the following
possible contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at If you see this fault, take the following actions:
reduced efficiency and cause a Step 1 Review the product specifications to determine
variety of problems, including early the temperature operating range of the server.
degradation, failure of chips, and failure Step 2 Review the Cisco UCS Site Preparation Guide to
of equipment. ensure the servers have adequate airflow, including
Inaddition, extreme temperature front and back clearance.
fluctuations can cause CPUs to become Step 3 Verify that the airflows on the servers are not
loose in their sockets. obstructed.
• Cisco UCS equipment should operate Step 4 Verify that the site cooling system is operating
in an environment that provides an inlet properly.
air temperature not Step 5 Clean the installation site at regular intervals to
colder than 50F (10C) nor hotter than avoid buildup of dust and debris, which can cause a
95F (35C). system to overheat.
• If sensors on a CPU reach 179.6F Step 6 If the above actions did not resolve the issue,
(82C), the system will take that CPU create a tech-support file and contact Cisco TAC.
offline.

This fault occurs when the temperature


of a memory unit on a server exceeds a
critical threshold value.
Be aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature
of a memory unit on a server has been
out of the operating range,
and the issue is not recoverable. Be
aware of the following possible
contributing factors:
• Temperature extremes can cause
Cisco UCS equipment to operate at
reduced efficiency and cause a If you see this fault, take the following actions:
variety of problems, including early Step 1 Review the product specifications to determine
degradation, failure of chips, and failure the temperature operating range of the server.
of equipment. In Step 2 Review the Cisco UCS Site Preparation Guide to
addition, extreme temperature ensure the servers have adequate airflow, including
fluctuations can cause CPUs to become front and back clearance.
loose in their sockets. Step 3 Verify that the airflows on the servers are not
• Cisco UCS equipment should operate obstructed.
in an environment that provides an inlet Step 4 Verify that the site cooling system is operating
air temperature not properly.
colder than 50F (10C) nor hotter than Step 5 Clean the installation site at regular intervals to
95F (35C). avoid buildup of dust and debris, which can cause a
• If sensors on a CPU reach 179.6F system to overheat.
(82C), the system will take that CPU Step 6 If the above actions did not resolve the issue,
offline. create a tech-support file and contact Cisco TAC.

This fault typically occurs when the power supply module is either missing or the input power to the server is absent.
If you see this fault, take the following actions:
Step 1 Check to see if the power supply is connected to a power source.
Step 2 If the PSU is physically present in the slot, remove and then re-insert it.
Step 3 If the PSU is not physically present in the slot, insert a new PSU.
This fault occurs when the temperature of a PSU module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature of a PSU module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature of a PSU module has been out of operating range, and the issue is not recoverable. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the PSU module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the PSU modules have adequate airflow, including front and back clearance.
Step 3 Verify that the airflows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace faulty PSU modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.

This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.

This fault is raised as a warning if the current output of the PSU in a rack server does not match the desired output value.
If you see this fault, take the following actions:
Step 1 Monitor the PSU status.
Step 2 If possible, remove and reseat the PSU.
Step 3 If the above action did not resolve the issue, create a tech-support file for the chassis, and contact Cisco TAC.
This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 Replace the PSU. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

This fault occurs when the PSU voltage has exceeded the specified hardware voltage rating and PSU hardware may have been damaged as a result or may be at risk of being damaged.
If you see this fault, take the following actions:
Step 1 Remove and reseat the PSU.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when a power supply unit is drawing too much current.
If you see this fault, create a tech-support file and contact Cisco TAC.
This fault occurs when a power cable is disconnected or the input voltage is incorrect.
If you see this fault, take the following actions:
Step 1 Check if the power cable is disconnected.
Step 2 Check if the input voltage is within the correct range mentioned in the server-specific Installation and Service Guide.
Step 3 Re-insert the PSU.
Step 4 If these actions did not solve the problem, create a tech-support file and contact Cisco TAC.
This fault typically occurs when the power supply unit is either offline or the input/output voltage is out of range.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220/110 volts.
Step 3 Remove the PSU and re-install it.
Step 4 Replace the PSU.
Note Prior to re-installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 5 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
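The PSU faults above ask you to confirm that the power supply is present, that input power is connected, and that the input voltage is in range. Much of that can be checked remotely before touching hardware. The following is a minimal sketch, assuming the server exposes the standard DMTF Redfish Power resource at /redfish/v1/Chassis/1/Power (available on recent CIMC firmware); the management IP, credentials, chassis ID, and the presence of optional properties such as LineInputVoltage are assumptions and vary by platform and release.

# Hedged sketch: read PSU state, health, and input voltage from the standard
# Redfish Power resource. Paths, IDs, and credentials are placeholders.
import requests

CIMC_HOST = "192.0.2.10"          # hypothetical management IP
AUTH = ("admin", "password")      # hypothetical credentials

url = f"https://{CIMC_HOST}/redfish/v1/Chassis/1/Power"
# verify=False only because CIMC typically ships a self-signed certificate.
resp = requests.get(url, auth=AUTH, verify=False, timeout=30)
resp.raise_for_status()

for psu in resp.json().get("PowerSupplies", []):
    status = psu.get("Status", {})
    print(psu.get("Name", "PSU"),
          "state:", status.get("State"),      # e.g. Enabled / Absent
          "health:", status.get("Health"),
          "input voltage:", psu.get("LineInputVoltage"))

A PSU reported as Absent, or one with no input voltage reading, is a reasonable hint to re-check cabling and seating before replacing the unit.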
This fault typically occurs when the FRU information for a power supply unit is corrupted or malformed.
If you see this fault, take the following actions:
Step 1 Check the server-specific Installation and Service Guide for the power supply vendor specification.
Step 2 If the above action did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault typically occurs when chassis power redundancy has failed.
If you see this fault, take the following actions:
Step 1 Consider adding more PSUs to the chassis.
Step 2 Replace any non-functional PSUs. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.

This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.

This fault occurs under the following condition:
• If a component within a chassis is operating outside the safe thermal operating range.
If you see this fault, take the following actions:
Step 1 Review the Cisco UCS Site Preparation Guide and ensure the server has adequate airflow, including front and back clearance.
Step 2 Verify that the air flows of the servers are not obstructed.
Step 3 Verify that the site cooling system is operating properly.
Step 4 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 5 Check the temperature readings and ensure they are within the recommended thermal safe operating range.
Step 6 If the fault reports a "Thermal Sensor threshold crossing in the front or back pane" error for the servers, check if thermal faults have been raised. Those faults include details of the thermal condition.
Step 7 If the fault reports a "Missing or Faulty Fan" error, check on the status of that fan. If it needs replacement, create a tech-support file for the chassis and contact Cisco TAC.
Step 8 If the above actions did not resolve the issue and the condition persists, create a tech-support file for the chassis and contact Cisco TAC.
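Step 5 of the actions above asks you to check the temperature readings against the safe operating range. A minimal sketch for doing that remotely is shown here, assuming the standard Redfish Thermal resource at /redfish/v1/Chassis/1/Thermal is available on your CIMC firmware; the chassis ID, sensor names, and credentials are assumptions.

# Hedged sketch: list temperature sensors with their critical thresholds via
# the standard Redfish Thermal resource. Adjust paths and credentials.
import requests

CIMC_HOST = "192.0.2.10"          # hypothetical management IP
AUTH = ("admin", "password")      # hypothetical credentials

url = f"https://{CIMC_HOST}/redfish/v1/Chassis/1/Thermal"
resp = requests.get(url, auth=AUTH, verify=False, timeout=30)  # self-signed cert
resp.raise_for_status()

for sensor in resp.json().get("Temperatures", []):
    reading = sensor.get("ReadingCelsius")
    critical = sensor.get("UpperThresholdCritical")
    flag = ""
    if reading is not None and critical is not None and reading >= critical:
        flag = "  <-- above critical threshold"
    print(f"{sensor.get('Name')}: {reading} C (critical: {critical} C){flag}")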
This fault occurs in the event the processor encounters a catastrophic error or has exceeded the pre-set thermal/power thresholds.
If you see this fault, take the following actions:
Step 1 If the probable cause indicated is a thermal problem, check that the airflow to the server is not obstructed and that the server is adequately ventilated. If possible, check whether the heat sink is properly seated on the processor.
Step 2 If the probable cause indicated is equipment inoperable, contact Cisco TAC for further instructions.
Step 3 If the probable cause indicated is a power or voltage problem, see whether the issue is resolved with an alternate power supply. If this fails to resolve the issue, contact Cisco TAC.
This fault occurs in the unlikely event that a processor is disabled.
If you see this fault, take the following actions:
Step 1 If this fault occurs, re-seat the processor.
Step 2 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.

If you see this fault, take these actions:


Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on the CPU to
determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing
this component, see the server-specific Installation
This fault occurs when the processor and
voltage is out of normal operating Service Guide for prerequisites, safety
range, but has not yet reached a recommendations, and warnings.
critical stage. Normally the processor Step 4 If the above actions did not resolve the issue,
recovers itself from this situation. create a tech-support file and contact Cisco TAC.
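Step 2 above refers to the System Event Log (SEL). One way to pull it without logging in to the WebUI is sketched below, shelling out to the standard ipmitool utility over IPMI-over-LAN; the host and credentials are placeholders, and the sketch assumes IPMI over LAN is enabled on the management controller.

# Hedged sketch: fetch the SEL with ipmitool and keep entries that look
# voltage-, temperature-, or processor-related. Host and credentials are fake.
import subprocess

CIMC_HOST = "192.0.2.10"
USER, PASSWORD = "admin", "password"

cmd = ["ipmitool", "-I", "lanplus", "-H", CIMC_HOST,
       "-U", USER, "-P", PASSWORD, "sel", "elist"]
output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    if any(keyword in line for keyword in ("Voltage", "Temperature", "Processor")):
        print(line)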
This fault occurs when the processor voltage has exceeded the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 Monitor the processor for further degradation.
Step 2 Review the SEL statistics on the CPU to determine which threshold was crossed.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the processor voltage has exceeded the specified hardware voltage rating and may cause processor hardware damage.
If you see this fault, take the following action:
Step 1 Create a tech-support file and contact Cisco TAC.
This fault typically occurs when the server has encountered a diagnostic failure or an error during POST.
If you see this fault, take the following actions:
Step 1 Check the POST result for the server.
Step 2 Reboot the server.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the memory array voltage exceeds the specified hardware voltage rating.
If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations, and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the memory array voltage exceeded the specified hardware voltage rating and potentially the memory hardware may be damaged.
If you see this fault, take the following actions:
Step 1 Review the SEL statistics on the DIMM to determine which threshold was crossed.
Step 2 Monitor the memory array for further degradation.
Step 3 Replace the power supply. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature of a fan module has exceeded a non-critical threshold value, but is still below the critical threshold. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature of a fan module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the temperature of a fan module has exceeded a critical threshold value. Be aware of the following possible contributing factors:
• Temperature extremes can cause Cisco UCS equipment to operate at reduced efficiency and cause a variety of problems, including early degradation, failure of chips, and failure of equipment. In addition, extreme temperature fluctuations can cause CPUs to become loose in their sockets.
• Cisco UCS equipment should operate in an environment that provides an inlet air temperature not colder than 50F (10C) nor hotter than 95F (35C).
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide to ensure the fan modules have adequate airflow, including front and back clearance.
Step 3 Verify that the air flows are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Power off unused rack servers.
Step 6 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 7 Replace faulty fan modules.
Step 8 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault typically occurs when CIMC detects that a power supply unit in a chassis is offline.
If you see this fault, take the following actions:
Step 1 Verify that the power cord is properly connected to the PSU and the power source.
Step 2 Verify that the power source is 220 volts.
Step 3 Verify that the PSU is properly installed.
Step 4 Remove the PSU and reinstall it.
Step 5 Replace the PSU.
Step 6 If the above actions did not resolve the issue, note down the type of PSU, create a tech-support file, and contact Cisco Technical Support.
This fault occurs when the PSU voltage is out of normal operating range, but has not yet reached a critical stage. Normally the PSU will recover itself from this situation.
If you see this fault, take the following actions:
Step 1 Monitor the PSU for further degradation.
Step 2 Remove and reseat the PSU.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the server chassis or cover has been opened.
Make sure that the server chassis/cover is in place.
The adapter is missing. CIMC raises this fault when any of the following scenarios occur:
• The endpoint reports there is no adapter in the adaptor slot.
• The endpoint cannot detect or communicate with the adapter in the adaptor slot.
If you see this fault, take the following actions:
Step 1 Make sure an adapter is inserted in the adaptor slot in the server.
Step 2 Check whether the adapter is connected and configured properly and is running the recommended firmware version.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault indicates a failure in the rebuild process of the local disk.
If you see this fault, take the following action:
Step 1 Restart the rebuild process.
This fault occurs when one or more fans in a fan module are not operational, but at least one fan is operational.
If you see this fault, take the following actions:
Step 1 Review the product specifications to determine the temperature operating range of the fan module.
Step 2 Review the Cisco UCS Site Preparation Guide and ensure the fan module has adequate airflow, including front and back clearance.
Step 3 Verify that the air flows of the servers are not obstructed.
Step 4 Verify that the site cooling system is operating properly.
Step 5 Clean the installation site at regular intervals to avoid buildup of dust and debris, which can cause a system to overheat.
Step 6 Replace the faulty fan modules. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 7 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the Flex Flash drive is removed from the slot while the server is in use.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Replace the disk, if an additional drive is available.
Note Prior to installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault indicates that the review of the storage system for potential physical disk errors has failed.
If you see this fault, take the following actions:
Step 1 Initiate a consistency check on the virtual drive.
Step 2 Replace any faulty physical drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
This fault indicates a recoverable error with the Flex Flash virtual drive.
If you see this fault, take the following actions:
Step 1 Synchronize the virtual drive using Cisco UCS SCU to make the VD optimal.
Step 2 Replace any faulty Flex Flash drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
This fault indicates a non-recoverable error with the Flex Flash virtual drive.
If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up and recreate the virtual drive.
Step 2 Replace any faulty Flex Flash drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 Synchronize the virtual drive using Cisco UCS-SCU to make the VD optimal.
This fault indicates that the assigned power-cap value could not be maintained and a host shutdown was initiated as the power-cap fail exception action, because the configured exception action is shutdown.
If you see this fault, take the following actions:
Step 1 First disable the corresponding power profile in the Power Cap Configuration page and power on the host.
Step 2 Try increasing the power-cap value in the power-cap profile page for which the shutdown action is configured.
Step 3 Also try reducing the load on the host if the assigned power-cap value needs to be maintained irrespective of the host performance impact.
This fault indicates that the assigned power-cap value could not be attained within the set correction time.
If you see this fault, take the following actions:
Step 1 Try increasing the configured power-cap value in the power profile that corresponds to this fault.
Step 2 Also try increasing the power limiting correction time in the corresponding power-profile settings.
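Before changing a power profile for the two power-cap faults above, it can help to compare the configured limit with the measured consumption. The sketch below is a minimal example, assuming the standard Redfish Power resource at /redfish/v1/Chassis/1/Power and its optional PowerControl and PowerLimit properties are available on your CIMC firmware; the host, credentials, and chassis ID are assumptions.

# Hedged sketch: compare the configured power limit with measured consumption
# via the standard Redfish Power resource. Property availability varies by model.
import requests

CIMC_HOST = "192.0.2.10"          # hypothetical management IP
AUTH = ("admin", "password")      # hypothetical credentials

url = f"https://{CIMC_HOST}/redfish/v1/Chassis/1/Power"
resp = requests.get(url, auth=AUTH, verify=False, timeout=30)  # self-signed cert
resp.raise_for_status()

for ctrl in resp.json().get("PowerControl", []):
    consumed = ctrl.get("PowerConsumedWatts")
    limit = (ctrl.get("PowerLimit") or {}).get("LimitInWatts")
    print(f"consumed: {consumed} W, configured limit: {limit} W")
    if consumed is not None and limit is not None and consumed > limit:
        print("Consumption exceeds the cap; raise the power-cap value or "
              "reduce host load, as described above.")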
This fault indicates a non-recoverable Flex Flash controller failure. This happens when the CIMC is not able to manage or communicate with the Flex Flash controller.
If you see this fault, take the following actions:
Step 1 Try to reset the Flex Flash controller. Prior to resetting the Flex Flash controller, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 2 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs if there is a mode mismatch or a card size mismatch.
If you see this fault, take the following actions:
Step 1 Check the controller status and make sure the firmware mode matches the SD cards' mode and the VDs are in a healthy state.
Step 2 Check the size of the SD cards and make sure both cards match in size.
This fault occurs when the Flex Flash card has become inoperable.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the card.
Step 3 Replace the card, if an additional card is available.
Note Prior to installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 4 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault occurs when the Flex Flash drive is removed from the slot while the server is in use.
If you see this fault, take the following actions:
Step 1 Insert the disk in a supported slot.
Step 2 Remove and re-insert the card, or insert a new card if an additional drive is available.
Note Prior to installing or replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 If the above actions did not resolve the issue, create a tech-support file and contact Cisco TAC.
This fault indicates a recoverable error with the Flex Flash virtual drive.
If you see this fault, take the following actions:
Step 1 Synchronize the virtual drive manually using the CIMC WebUI to make the VD optimal.
Step 2 If that did not solve the issue, the virtual drives might need to be reconfigured. While reconfiguring virtual drives, there is an auto-sync option that can be enabled; this option automates the virtual drives and syncs the data.
Review the Installation and Service Guide for prerequisites, safety recommendations and warnings.
This fault indicates a non-recoverable error with the Flex Flash virtual drive.
If you see this fault, take the following actions:
Step 1 If the data on the drive is accessible, back up and recreate the virtual drive.
Step 2 Replace any faulty Flex Flash drives. Prior to replacing this component, see the server-specific Installation and Service Guide for prerequisites, safety recommendations and warnings.
Step 3 Synchronize the virtual drive, either manually using the CIMC WebUI or by selecting the auto-sync option while creating the virtual drives, to make the VD optimal.
This fault indicates that the corresponding memory riser has been disabled.
If you see this fault, take the following actions:
Step 1 Remove the riser.
Step 2 Make sure the host CPU type supports the memory riser DDR type that is installed.
Fault code: F0533
Fault Name: fltComputeRtcBatteryInoperable
Message: SIOC[id]_RTC_BAT: Real Time Clock battery Voltage level of CMC-[id] is upper non-recoverable
Severity: major, critical
Probable Cause: voltage-problem
DN: sys/chassis-1/slot-[id]
Description: SIOC[id]_RTC_BAT: Real Time Clock battery Voltage level of CMC-[id] is upper non-recoverable / upper critical / lower non-recoverable / lower critical
Explanation: This fault indicates that the real time clock battery voltage has crossed the threshold. The severity of the fault depends on the type of threshold crossed. This impacts the system clock.
Recommended Action: If you see this fault, take the following actions:
Step 1 Replace the CMOS battery.
Step 2 If the problem still persists, contact Cisco TAC.
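Fault records like the one above (code, severity, probable cause, DN, and description) can also be collected programmatically rather than read from the WebUI. The following is a hedged sketch using the open-source Cisco IMC Python SDK (imcsdk, installable with pip) and its faultInst managed object; the host and credentials are placeholders, and the exact attribute names may differ between SDK and firmware releases.

# Hedged sketch: list active faults from a standalone C-Series CIMC with the
# imcsdk module. Host and credentials are placeholders; attribute names on
# faultInst are assumed and may vary by release.
from imcsdk.imchandle import ImcHandle

handle = ImcHandle("192.0.2.10", "admin", "password")  # hypothetical CIMC
handle.login()
try:
    for fault in handle.query_classid("faultInst"):
        # Print the fields this reference documents for each fault:
        # fault code, severity, probable cause, affected DN, and description.
        print(fault.code, fault.severity, fault.cause, fault.dn, "-", fault.descr)
finally:
    handle.logout()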
Fault code: F1686
Fault Name: fltStorageSasExpanderAccessibility
Severity: major
Probable Cause: equipment-inoperable
DN: sys/chassis-1/sas-expander-[id]
Description: SAS Expander controller [id] is unreachable: SAS expander controller [id] might be rebooting. If this fault persists for more than 15 minutes, please contact Cisco TAC.
Explanation: CMC communicates with the SAS expander over Ethernet (a TCP connection). If CMC is not able to establish a connection with the expander, this fault is generated. The reasons could be a defective expander, dead firmware in the expander, or a defective chassis.
Recommended Action: If you see this fault, replace the chassis or contact Cisco TAC.

Fault code: F1687
Fault Name: fltStorageSasExpanderDegraded
Severity: major
Probable Cause: connectivity-problem
DN: sys/chassis-1/expander-[id]
Description: SAS Expander controller [id] link speed changed with LSI RAID Controller of server board 2: reseat or replace the RAID controller of server board 2. If the issue still persists, please contact Cisco TAC.
Explanation: The LSI controller on the server board is connected to the SAS expander over multiple SAS links (6G or 12G). If any of these links are down, this fault is raised. The cause could be a flaky SAS link between the LSI controller and the SAS expander (for example, a defective LSI controller or a defective chassis).
Recommended Action: If you see this fault, replace the corresponding server board or contact Cisco TAC.

Fault code: F1744
Fault Name: fltEquipmentSystemIOControllerRemoved
Severity: warning
Probable Cause: equipment-missing
DN: sys/chassis-1/slot-1
Description: SIOC1_PRES: IO Module 1 missing: Please reseat or replace IO Module 1.
Explanation: This fault indicates that one of the IO modules is missing.
Recommended Action: Reseat or replace the IO module.

Fault code: F1688
Fault Name: fltStorageLocalDiskLinkDegraded
Severity: minor
Probable Cause: connectivity-problem
DN: sys/rack-unit-1/storage/diskslot-10
Description: Storage Local disk 10 drive link status/speed changed with SAS expander 1: reseat or replace the storage drive 10.
Explanation: There are two SAS links per drive, and each link is connected to one of the SAS expanders. If one or both of these links are flaky or down, this fault is raised. The cause could be a defective drive or a defective SAS expander/chassis.
Recommended Action: If you see this fault, replace the corresponding drive or contact Cisco TAC.
Fault code: F1783
Fault Name: fltEquipmentTpmTpmMismatch
Message: Installed TPM is currently inoperable for server configuration
Severity: warning
Probable Cause: equipment-inoperable
DN: sys/rack-unit-1/adaptor-10
Description: PM_FAULT_STATUS: Check TPM, either wrong TPM revision installed for CPU type or previously installed TPM has been removed.
Explanation: This fault indicates that a wrong TPM has been installed or a previously installed TPM has been removed.
Recommended Action: If you see this fault, take the following actions:
Step 1 Remove the TPM if an incorrect revision is installed.
Step 2 And/or install the removed TPM back into the server.
Fault code: F0717
Fault Name: fltMgmtIfMissing
Severity: Notice
Probable Cause: link-missing
DN: sys/rack-unit-1/mgmt/if-1
Description: Link Down : <Interface> Check the network cable connection, where <Interface> can be DEDICATED_MODE_<port>, LOM_ACTIVE_STANDBY_<port>, LOM_ACTIVE_ACTIVE_<port>, CISCO_CARD_ACTIVE_STANDBY_<port>, CISCO_CARD_ACTIVE_ACTIVE_<port>, LOM10G_ACTIVE_STANDBY_<port>, LOM10G_ACTIVE_ACTIVE_<port>, or LOM_EXT_MODE_<port>
Explanation: This fault indicates that the corresponding interface cable is not connected.
Recommended Action: If you see this fault, take the following action:
Step 1 Check that the cable is connected properly.
AffectedDN: None
Fault code: F1896
Fault Name: fltIncompatibleHardwareDetected
Message: 1_SRVR_DUAL_VIC: Server board is hot inserted: Correct Single Server Dual VIC configuration.
Severity: critical
Probable Cause: equipment-problem
DN: sys/chassis-1
Description: 1_SRVR_DUAL_VIC: Server board is hot inserted: Correct Single Server Dual VIC configuration.
Explanation: This fault indicates that a server board has been hot inserted in a single server dual VIC configuration.
Recommended Action: If you see this fault, take the following actions:
Step 1 Correct the Single Server Dual VIC configuration.
Step 2 If the problem still persists, contact Cisco TAC.
Fault code: F0776
Fault Name: fltStorageLocalDiskSlotEpUnusable
Message: HDD[id]_STATUS: An invalid drive population detected on slot [id]: Please replace the drive or check system configuration.
Severity: warning
Probable Cause: equipment-problem
DN: sys/rack-unit-1/board
Description: HDD[id]_STATUS: An invalid drive population detected on slot [id]: Please replace the drive or check system configuration.
Explanation: This fault indicates that an HDD of the expected SAS/SATA drive type is either not present in the expected slot or is unusable.
Recommended Action: If you see this fault, take the following actions:
Step 1 Place the proper SAS/SATA HDD drive in the expected slot.
Step 2 If the problem still persists, contact Cisco TAC.
Fault code: F1932
Fault Name: fltChassisOpenServer
Message: Chassis Intrusion detected: Please secure the server chassis
Severity: critical
Probable Cause: chassis-open
DN: sys/rack-unit-1
Description: Chassis Intrusion detected: Please secure the server chassis.
Explanation: This fault indicates that the server chassis cover is open.
Recommended Action: If you see this fault, take the following action:
Step 1 Check whether the chassis cover is open. If it is open, close it and the fault will be cleared.

Fault code: F1934
Fault Name: fltComputeBoardFailedSecureFuseValidation
Message: Intel PCH Secure Fuse Verification Failed. Please contact Cisco TAC
Severity: critical
Probable Cause: equipment-unhealthy
DN: sys/rack-unit-1/board/Intel-PCH
Description: Intel PCH Secure Fuse Verification Failed. Please contact Cisco TAC.
Explanation: This fault indicates that the Intel PCH secure fuse verification has failed.
Recommended Action: If you see this fault, take the following action:
Step 1 Contact Cisco TAC.

Fault code: F1935
Fault Name: fltBiosUnitFD0FailedSecurityVerification
Message: BIOS FD0 Verification Failed. Please update and reactivate BIOS.
Severity: critical
Probable Cause: BIOS-Image-Corrupted
DN: sys/rack-unit-1/board
Description: BIOS FD0 Verification Failed. Please update and reactivate BIOS.
Explanation: This fault indicates that the BIOS failed secure boot.
Recommended Action: If you see this fault, take the following action:
Step 1 Activate the backup BIOS image or reflash the BIOS. If the problem still persists, contact Cisco TAC.

Fault code: F0369
Fault Name: fltEquipmentPsuPowerSupplyProblem
Message: FOC_ALERT: Over current condition has been detected on the system. Check power supplies and its inputs.
Severity: critical
Probable Cause: power-problem
DN: sys/rack-unit-1/board
Description: FOC_ALERT: Over current condition has been detected on the system. Check power supplies and its inputs.
Explanation: This fault indicates that an over-current condition has been detected on the system.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check that the power supply input is proper.
Step 2 If the problem still persists, contact Cisco TAC.

Fault code: F0369
Fault Name: fltEquipmentPsuPowerSupplyProblem
Message: PMBUS_ALERT: Power supply internal failure detected: Check power supplies and its inputs
Severity: critical
Probable Cause: power-problem
DN: sys/rack-unit-1/board
Description: PMBUS_ALERT: Power supply internal failure detected: Check power supplies and its inputs.
Explanation: This fault indicates that a power supply internal failure has been detected on the system.
Recommended Action: If you see this fault, take the following actions:
Step 1 Check that the power supply input is proper.
Step 2 If the problem still persists, contact Cisco TAC.
Fault code: F1707
Fault Name: fltMgmtHealthStatusHealthCriticalIssue
Message: Intel ME Node Manager is unhealthy or unresponsive.
Severity: critical
Probable Cause: equipment-unhealthy
DN: sys/rack-unit-1/board
Description: Intel ME Node Manager is unhealthy or unresponsive.
Explanation: This fault indicates that the Intel Node Manager is not healthy.
Recommended Action: If you see this fault, take the following actions:
Step 1 If the fault does not auto-clear in 5 to 10 minutes, activate the backup BIOS image or re-flash the BIOS.
Step 2 If the problem still persists, contact Cisco TAC.
Fault code: F1936
Fault Name: fltIntersightGenericCode
Message: This device supports connectivity to Cisco Intersight, but has not been claimed. To take advantage of the features of Cisco Intersight, please claim the device to your Intersight account. For help visit intersight.com/help
Severity: informational
Probable Cause: intersight-not-claimed
DN: sys/cloud-mgmt/device-connector
Description: This device supports connectivity to Cisco Intersight, but has not been claimed. To take advantage of the features of Cisco Intersight, please claim the device to your Intersight account. For help visit intersight.com/help
Explanation: This fault indicates that the server is not claimed in an Intersight account.
Recommended Action: If you see this fault, take the following actions:
Step 1 Claim the server from your Intersight account. Visit intersight.com/help for more details.
Step 2 If the problem still persists, contact Cisco TAC.

Fault code: F1974
Fault Name: fltPLXHealthCritical
Message: PLX SWITCH [id] is inoperable: Please check the PLX SWITCH status
Severity: critical
Probable Cause: equipment-degraded
DN: sys/rack-unit-1/board/PCI-Switch-1
Description: PLX SWITCH [id] is inoperable: Please check the PLX SWITCH status.
Explanation: This fault indicates that PLX switch [id] is not healthy.
Recommended Action: If you see this fault, take the following actions:
Step 1 Reboot the server.
Step 2 If the problem still persists, contact Cisco TAC.
Fault code: F1704
Fault Name: fltMgmtHealthStatusHealthWarningIssue
Severity: warning
Messages:
1. Mixed RDIMMs sizes detected in the system, check CPU:X configuration
2. Not enough DDR4 DIMMS (found only n, check CPU:X configuration), where n = number of DDR4 DIMMs and X = CPU number
3. The number of DCPMMs per CPU in the system do not match. Check the number of DCPMMs per CPU.
4. DCPMM not found in correct slot location. Check CPU:X Bus:0x03 <Bus_id> Dimm:0xC2 <DIMM_id> configuration.
5. Mixed DCPMMs sizes detected in the system. Check the system configuration.
6. Too many DCPMM (n number of DCPMM), check CPU:X configuration
7. Total Memory (xxxx) greater than CPU4 memory tier (yyyy), where xxxx = total sum of memory (DDR4 DIMMs + DCPMM) populated per CPU and yyyy = total CPU memory tier
8. DIMM_id: DCPMM package sparing no longer available.
9. DIMM_id: DCPMM health status is fatal.
10. DIMM_id: DCPMM health status is critical.
11. DIMM_id: DCPMM health status is non-critical.
12. DIMM_id: DCPMM life remaining is 0%
13. DIMM_id: DCPMM life remaining is 1%
14. DIMM_id: DCPMM life remaining is below 50%
15. DIMM_id: Host cannot manage DCPMM
16. DIMM_id: DCPMM mismatched firmware revision
17. DIMM_id: DCPMM package sparing no longer available
18. NamespaceID n: Health state is UnManageable
19. RegionID n: Health state is FatalFailure
20. RegionID n: Health state is CriticalFailure
21. RegionID n: Health state is Unmanageable
22. RegionID n: Health state is NonCriticalFailure
23. DDR4_Px_y_ECC: DIMM n is inoperable: Check or replace DIMM, where x = Processor_id, y = DIMM name, and n = DIMM_id
Probable Cause: configuration-warning (equipment-inoperable for message 23, the DDR4_Px_y_ECC DIMM-inoperable case)
DN: sys/rack-unit-1/board/memarray-1
Description: For each row, the description is the same as the corresponding message listed above.
Explanations (numbered to match the messages above; each applies to both 2-socket and 4-socket configurations, and the fault is generated for the respective CPU/DIMM):
1. DIMMs are populated per a valid Cisco POR, but DRAM DIMM sizes are mixed.
2. One unsupported Cisco POR 220 across both CPUs: Intel Optane Persistent Memory is populated per an unsupported Cisco POR with few DRAMs.
3. Asynchronous population between the CPUs, although within each CPU the population complies with a valid Cisco POR (for example, the first CPU is populated per one supported Cisco POR and the second CPU per another valid Cisco POR).
4. Intel Optane Persistent Memory and DRAM DIMMs are swapped between slot 1 and slot 2 (populated in swapped positions).
5. Intel Optane Persistent Memory DIMM sizes are mixed within a valid Cisco POR (populated per a valid Cisco POR, but with a mix of Intel Optane Persistent Memory capacities).
6. Intel Optane Persistent Memory DIMMs are populated on both slots of the same channel.
7. Mismatch between the CPU memory tier and the total memory installed: Intel Optane Persistent Memory is populated per a valid Cisco POR, but the total memory (Intel Optane Persistent Memory plus DRAM) populated per CPU is greater than the CPU memory tier.
8. DCPMM package sparing is no longer available.
9. DCPMM health status is fatal.
10. DCPMM health status is critical.
11. DCPMM health status is non-critical.
12. DCPMM life remaining is 0%.
13. DCPMM life remaining is 1%.
14. DCPMM life remaining is below 50%.
15. The host cannot manage the DCPMM.
16. DCPMM firmware revisions are mismatched.
17. DCPMM package sparing is no longer available.
18. The namespace health state is unmanageable (generated for the namespace under the respective CPU/DIMM).
19. The region health state is FatalFailure (generated for the region under the respective CPU/DIMM).
20. The region health state is CriticalFailure (generated for the region under the respective CPU/DIMM).
21. The region health state is Unmanageable (generated for the region under the respective CPU/DIMM).
22. The region health state is NonCriticalFailure (generated for the region under the respective CPU/DIMM).
23. The DIMM is inoperable.
Recommended Actions (numbered to match the messages above; in every case, if the problem still persists after the listed step, contact Cisco TAC):
1. It is not recommended to mix RDIMMs when DCPMMs are present. Remove or install RDIMMs of the same size in the system.
2. All CPUs should have the same symmetric memory configuration. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
3. All CPUs should have the same symmetric memory configuration. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
4. DCPMMs can only be installed in specific slots in the system. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
5. DCPMMs installed in the system must be of the same size. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
6. DCPMMs can only be installed in specific slots in the system. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
7. The installed CPUs have a maximum memory tier. If memory is installed beyond the CPU's maximum memory tier, this message is displayed. Remove or install RDIMMs and DCPMMs (that is, reduce the size of the total memory installed in the system) until both CPU configurations are symmetric and follow the Cisco POR.
8. DCPMMs can only be installed in specific slots in the system. Remove or install RDIMMs and DCPMMs until both CPU configurations are symmetric and follow the Cisco POR.
9. The particular DCPMM has a fatal health status and might need to be replaced.
10. The particular DCPMM has a critical health status and might need to be replaced.
11. The particular DCPMM has a non-critical health status and might need to be replaced.
12. The particular DCPMM has 0% (storage) life and might need to be replaced.
13. The particular DCPMM has 1% (storage) life and might need to be replaced.
14. The particular DCPMM has 50% (storage) life and should be monitored.
15. The particular DCPMM cannot be managed by the host and likely cannot change between Memory Mode and App Direct and vice versa.
16. It is highly recommended that all DCPMMs installed in the system have the same firmware version. Install the same firmware version on all the DCPMMs in the system.
17. The particular DCPMM has used its backup sparing device and likely needs to be replaced.
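Several of the DCPMM health and life-remaining conditions listed above can be cross-checked from the host operating system. The sketch below shells out to Intel's ipmctl utility, the host-side management tool for Intel Optane Persistent Memory (the modules this table calls DCPMMs); the exact output format and sensor names vary by ipmctl version, so the sketch only prints the raw reports rather than parsing them.

# Hedged sketch: dump DCPMM (Intel Optane PMem) inventory and health sensors
# with ipmctl. Requires root on the host; output format is version-dependent.
import subprocess

def run(cmd):
    """Run an ipmctl subcommand and return its stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Module inventory: DIMM IDs, capacity, firmware version, and health state.
print(run(["ipmctl", "show", "-dimm"]))

# Per-module sensors, including the percentage-remaining value that relates
# to the "life remaining" messages in this table.
print(run(["ipmctl", "show", "-sensor"]))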
Fault code: F1706
Fault Name: fltMgmtHealthStatusHealthMajorIssue
Message: ADDDC Rank/Bank-level adaptive virtual lockstep is activated on DIMM [id] (DDR4_P2_G1_ECC). This DIMM is at an increased risk of experiencing an Uncorrectable Error. Please schedule maintenance and replace the DIMM.
Severity: major
Probable Cause: equipment-unhealthy
DN: sys/rack-unit-1/board/memarray-1/mem-[id]
Description: If Post Package Repair is enabled, the fault description reads: ADDDC Rank/Bank-level adaptive virtual lockstep is activated on DIMM [id] (DDR4_P2_G1_ECC). This DIMM is at an increased risk of experiencing an Uncorrectable Error. Post Package Repair will occur on the next system reboot.
Explanation: This fault, raised with DN sys/rack-unit-1/board/memarray-1/mem-[id], indicates that ADDDC rank- or bank-level ECC has occurred on this DIMM.
Recommended Action: If you see this fault, take the following actions:
Step 1 For the ADDDC fault, enable Post Package Repair or schedule maintenance of the DIMM.
Step 2 If the problem still persists, contact Cisco TAC.