Version 4.0.2
Product description
These release notes support EMC® CentraStar® version 4.0.2 and
supplement the EMC Centera® documentation. Read the entire
document for additional information about CentraStar
version 4.0.2. It may describe potential problems or irregularities in
the software and contains late changes to the product documentation.
New migration role for the emcservice profile
Effective in CentraStar and CV/CLI versions 4.0.2 or higher, a new
management profile role was added to allow management profiles to
run a migration. While the new migration role is not automatically
assigned to new or existing management profiles, it is automatically
built into the emcservice profile. The migration role enables the
emcservice profile to issue all migration CLI commands associated
with running intracluster migration.
Migration reports
The intracluster migration process produces the following reports:
◆ Centera Migration Precheck Report: This report includes the
results of a series of cluster prechecks to see if the cluster is ready
for intracluster migration.
Example 1 Sample output for Centera Migration Precheck Report
Scheduled
Ended at: 11/04/2008 14:59
Migrating content
Ended at: 11/05/2008 07:12
Migration executing:
---------------------------------------------------
Waiting for user input
Migration is ready for completion.
Migration alerts
The following migration alerts were added to the EMC Centera
system:
Note: If multiple warning, error, or veto messages exist, CentraStar may issue
additional informational (INFO) messages.
Logging
Migration logging is added to the FilePool logs. The entries contain
the string "MigrationManager". Most migration commands are also
audit-logged.
Health Report
The following changes were made to the Health Report, which was
updated to the 1.6.1.dtd format:
A new migration job section and corresponding fields were added to
the HTML report following the Garbage Collection section:
◆ job id
◆ status
◆ progress
◆ current speed
◆ time remaining
◆ source nodes
Example 3 Sample output for migration job section of the Health Report
Fixed problems
This release includes fixes to the following problems:
Server
Host OS Any OS
Problem Human errors during a service procedure may cause unprotected data
Symptom Duplicate disk IDs can be introduced in a cluster when a service procedure is not accurately
followed. This may cause regeneration to fail although it appears to have completed successfully.
This may lead to unprotected data.
Host OS Any OS
Symptom Some forms of corruption (for example, filename corruption) can cause lockups when
encountered during range init. This results in an ever-bouncing node and a range init that never
finishes.
Fix Summary Remove the offending fragment file to allow range init to continue.
Self Healing
Host OS Any OS
Symptom Adding new capacity to a cluster (more than 8 nodes) while a regeneration is ongoing may
cause nodes to bounce due to out-of-memory conditions and lock-ups when a second
regeneration is initiated.
Host OS Any OS
Symptom After upgrading to CentraStar 4.0, regeneration may skip a rare occurrence of a partially
reclaimed Blob caused by the old Garbage Collection (Pre-4.0 CentraStar).
Host OS Any OS
Problem Hyper regeneration retries lead to very slow progress on a mixed cluster
Symptom During hyper regeneration, validation of missing blob fragments was failing and causing
excessive retries that slowed progress on a mixed cluster.
Fix Summary The validation no longer fails on missing blob fragments, and the number of retries has been
reduced.
Host OS Any OS
Symptom Restart of the full iteration engine (FIE) is logged in the corruption log as "started" when it has
actually "resumed".
Fix Summary The log message was adjusted to show the correct operation of the full iteration engine.
Host OS Any OS
Symptom When hyper regeneration finds many copies of the same object, it significantly slows down and
after 48 hours, may end up stuck.
Fix Summary HyperRegen now gives up after trying a certain number of combinations and will fail (and submit
for review).
Host OS Any OS
Problem Adding capacity to a cluster while a regeneration is ongoing may cause bouncing nodes, OOMs
on the principal, and nodes failing to attach
Symptom When nodes are added to a cluster, they get a DB init on startup, which creates an empty
BlobIndex. However, if at that time a regeneration is already running in the cluster, these DB inits
will unnecessarily pause and be replaced with hyper regenerations. This can be invasive and
cause OOMs on the principal, bouncing nodes, and nodes failing to attach to the cluster.
Upgrades
Host OS Any OS
Problem Regenerations may not start after a disruptive platform-controlled upgrade from CentraStar
pre-4.0 to 4.0.1
Symptom After a disruptive platform-controlled upgrade from CentraStar pre-4.0 to 4.0.1, it is possible that
Centralized Regeneration Manager can no longer start. This prevents regenerations from
occurring when needed, and integrity is always assumed to be complete.
Host OS Any OS
Problem Pausing the primary fragments migration task can lead to future regenerations entering a stuck
state
Symptom When a primary fragment migration task is paused, the fragment being migrated may be left
half migrated, leading to inconsistency in the BlobIndex. This inconsistency caused future
regeneration tasks to become stuck.
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2
Host OS Any OS
Problem No Health Reports being sent after an upgrade to CentraStar version 3.1
Symptom When upgrading from CentraStar version 3.0 or lower to CentraStar version 3.1, the From
address for ConnectEMC may not be set. As a result, no Health Reports will be sent if the
notification settings are updated after the upgrade without setting the From address. Make sure
that you set the From address after the upgrade using the CLI command set notification.
Fix Summary Set the From address manually after the upgrade using the CLI command set notification
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1,
4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2
Protection
Host OS Any OS
Problem Regeneration becomes stuck when the protection information of a fragment in the blobIndex
database is inconsistent
Symptom Regeneration ends in a stuck state when a fragment entry on the remote protection index was
found, but not on the main index. The BlobIndex entry for that fragment is inconsistent.
Fix Summary Force the regeneration to success or trigger a DB init to rebuild the blobIndex.
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2
Centera CLI
Host OS Any OS
Problem Standalone CLI 'quit' functionality was broken, causing the quit to take a very long time (until
timeout)
Symptom When starting the standalone 4.0.1 CLI with a script, the 'quit' command was unresponsive. The
program terminated only after a very long timeout.
Fix Summary The 'quit' command was fixed to stop and end the CLI program instantly.
Host OS Any OS
Problem The CLI command "show config version" does not always show correct information during
CentraStar upgrade
Symptom When EMC Customer support is upgrading CentraStar, the CLI command "show config version"
does not always show the correct upgrade information. The CLI command "show upgrade
status" must be used instead.
Fix Summary To monitor an ongoing Filepool-controlled upgrade, a message now displays to redirect the user
to the "show upgrade status" CLI command.
CenteraViewer
Host OS Any OS
Problem Allocated memory for Centera Viewer installed via the Global Services installer was sometimes
insufficient during heavy use
Symptom During use of the Regenerations dashboard, the Centera Viewer installed via the Global
Services installer can go out of memory, causing the application to stall.
Fix Summary The allocated heap memory in the Centera Viewer start script was increased to 256 MB.
Other
Host OS Any OS
Problem When an intracluster migration is attempted while an upgrade is in progress, the error message
may contain errant characters
Symptom When an intracluster migration is attempted while an upgrade completion has yet to be finished,
the error message may contain errant characters like ".
Fix Summary
Host OS Any OS
Problem A dead disk can cause unnecessary failure statements in the platform log file
Symptom When CentraStar declares a disk dead, the platform log file may be flooded with unnecessary
transient failure statements.
Host OS Any OS
Problem No clear message when Filepool cannot start because of NTP problems
Symptom There is not enough logging to indicate that Filepool may have failed to start because of an NTP
problem. The best indicator for this problem is to look in the /var/log/fp-status file and check whether
there is a message indicating that Filepool has started (such as "Restarted filepool agent") after
NTP sync logs (such as "Node is not time synced yet. Waiting").
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
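The manual log inspection described above can be scripted. The following is a minimal sketch; the two message strings are quoted from the symptom description, while the exact log-line format and helper name are assumptions for illustration:

```python
# Sketch: decide whether /var/log/fp-status shows that Filepool started
# after the last NTP sync wait. Message strings come from the release
# note above; the surrounding log format is an assumption.

NTP_WAIT_MSG = "Node is not time synced yet. Waiting"
STARTED_MSG = "Restarted filepool agent"

def filepool_started_after_ntp_wait(log_lines):
    """Return True if a Filepool start message appears after the last
    NTP wait message, i.e. Filepool recovered once time was synced."""
    last_wait = -1
    last_start = -1
    for i, line in enumerate(log_lines):
        if NTP_WAIT_MSG in line:
            last_wait = i
        if STARTED_MSG in line:
            last_start = i
    return last_start > last_wait

# Hypothetical log content for illustration:
log = [
    "Node is not time synced yet. Waiting",
    "Node is not time synced yet. Waiting",
    "Restarted filepool agent",
]
print(filepool_started_after_ntp_wait(log))  # True
```

In practice the lines would be read from /var/log/fp-status on the affected node.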
Host OS Any OS
Problem In a small CPP cluster (4 to 8 nodes), large file read performance may degrade under highly
threaded workloads
Symptom In a small CPP cluster (4 to 8 nodes), large file read performance may degrade under highly
threaded workloads.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Fix Summary
Found in Version 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0,
3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem In certain cases a node will fail to come up completely and instead hang in NetDetect
Symptom In certain cases a node will fail to come up completely and instead hang in NetDetect.
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem The reverse lookup portion of EDM repairs on system partitions can cause random characters to
be written to the console.
Symptom The reverse lookup portion of EDM repairs on system partitions can cause random characters to
be written to the console.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom If Filepool encounters a bad sector on a disk it will retry reading the disk. This may take some
time before Filepool gives up. This delay may cause lockups and/or timeouts, which in turn can
result in Filepool restarts or other undesired behavior.
Fix Summary
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1,
4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Write performance degradation may occur for small files under very high loads
Symptom Write performance of small files (<200 KB) at extremely high thread counts (50 threads per
Access Node) can be up to 3% lower than with CentraStar v3.1.0 and 10% lower than with
CentraStar v3.1.1.
Fix Summary
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1,
4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem It is possible, due to capacity constraints, that the BLC overflows on a node.
Symptom It is possible, due to capacity constraints, that the BLC overflows on a node. This may result in
the node being unable to restart FilePool. To resolve the problem, capacity needs to be made
available. Please escalate to L3 for proper intervention.
Fix Summary
Found in Version 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Blob deletion can be delayed and performance of Incremental Garbage Collection might be
reduced by up to a factor of 5 on systems with a very low object count (fewer than 5000 objects
per node).
Symptom Blob deletion can be delayed and performance of Incremental Garbage Collection will be
reduced on systems with a very low object count (fewer than 5000 objects per node). This is due
to the interaction of Garbage Collection with OrganicOn.
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Imbalanced capacity load across mirror groups may lead to unused free space
Symptom In multi-rack environments, it may happen that the used capacity on both mirror groups is
substantially different. If one of the mirror groups gets full, the free capacity left on the other
mirror group cannot be used anymore for writing new C-Clips/blobs. This problem is greater on
CPM clusters. It is caused by nodes failing on a full cluster of the multi-rack and being
regenerated on the second cluster of the rack (also known as Cross-HRG regeneration). If this is
a potential problem for your customer, add more nodes or call L3.
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem When powering down a Gen3 node, the command may display the
SMBUS_INVALID_RESPONSE error
Symptom When powering down a Gen3 node, the command may display the
SMBUS_INVALID_RESPONSE error. The node does in fact power down to standby mode
despite the error message.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Before adding any nodes to a cluster that has been downgraded, ensure that all nodes have in
fact been downgraded.
Symptom Before adding any nodes to a cluster that has been downgraded, ensure that all nodes have in
fact been downgraded.
Fix Summary
Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1,
4.0.1p2, 4.0.2
Host OS Any OS
Problem When a node fails during a sensor definition update, the sensor manager is inconsistent in its
reporting.
Symptom When a node fails during a sensor definition update, the sensor manager is inconsistent in its
reporting. It reports that the update failed, even when the update was successful. If you get a
message that the sensor definition update failed, run the list sensors command to verify whether
the update was accepted. Retry if the update was not accepted. Also check that there are no
failed nodes on the cluster.
Fix Summary
Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Monitoring
Host OS Any OS
Problem When a database goes corrupt while a local db init is already running, the statistics for the
remote db init may be incorrect
Symptom When a database goes corrupt while a local db init is already running, the statistics for the
remote db init may show a high erroneous value for total blobs processed.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Redundant node failure alerts when both cube switches are down
Symptom When both cube switches are down, redundant alerts for node failures will be triggered, in
addition to the alerts for the cube switches (with symptom code 3.1.4.1.01.02). The node failure
alerts (with symptom code 3.1.6.1.01.01) should be ignored in this case.
Fix Summary
Host OS Any OS
Symptom The current Health Report incorrectly displays the value of the storage strategy as the naming
scheme.
Fix Summary
Host OS Any OS
Symptom Nodes going on- and offline may fire duplicate alerts with symptom code 4.1.1.1.02.01. These
are all instances of the same problem, which EMC Service has to follow up on.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Even after the startlog file has been removed from the /var partition, it still consumes 730 MB of
space. As a workaround, restart Filepool to complete the deletion of the file and reclaim its
space.
Fix Summary
Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom In some cases all sensor definitions can be lost on the cluster. This means no alerts will be sent.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem The audit log may have entries not related to an event
Symptom The audit log may contain entries such as 'Command {COMMAND} was executed ({result})'.
These messages do not relate to an actual event and can be ignored.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Centera domain names are case sensitive. Management and presentation of domain names
may cause confusion since CV, CLI, and Console are not consistently case sensitive.
Fix Summary When managing domains, always enter domain names in the same case
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom If more than one recipient is specified for Health Reports and Alerts with spaces between the
email addresses, none of the recipients may receive Health Reports or Alerts.
Fix Summary Remove any spaces in the list of recipients and use only commas to separate recipients
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
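The recipient-list fix above amounts to removing whitespace and joining addresses with commas only. A minimal sketch (the function name and sample addresses are hypothetical):

```python
import re

def normalize_recipients(raw):
    """Split a recipient string on commas and/or whitespace and rejoin
    with commas only, since spaces between addresses prevent delivery."""
    addresses = [a for a in re.split(r"[,\s]+", raw.strip()) if a]
    return ",".join(addresses)

# Hypothetical addresses for illustration:
print(normalize_recipients("admin@example.com, ops@example.com backup@example.com"))
# admin@example.com,ops@example.com,backup@example.com
```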
Host OS Any OS
Symptom Although a "reply to" address for ConnectEMC can be configured by EMC Service in CentraStar
2.4 and 3.0, this address is currently not set in the email message header. Since CentraStar 3.1
it is possible to change the "from" address which will be used as the reply address for emails
sent by ConnectEMC.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Nodes in Maintenance Mode will activate the node fault light
Symptom Nodes in Maintenance Mode will activate the node fault light. However, there is no fault and no
action is necessary.
Fix Summary Check whether the node is in Maintenance Mode by using the nodelist in CenteraViewer.
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem When an alert is sent about the CPU temperature, the principal access node responsible for
sending the alert identifies itself as the node with the problem.
Symptom When an alert is sent about the CPU temperature, the principal access node responsible for
sending the alert identifies itself as the node with the problem.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem When the eth2 fails on the pool service principal, no alert is sent.
Symptom When the eth2 fails on the pool service principal, no alert is sent.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem When the principal role of a node with the access role is taken over by another node, SNMP
might miss initial events
Symptom When the principal role of a node with the access role is taken over by another node, SNMP
might miss initial events. This means that certain alert traps might be missed and the health trap
severity level is possibly incorrect.
Fix Summary
Found in Version 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1,
2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0,
3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom All cluster nodes keep a copy of the current sensor definitions. The cluster principal is
responsible for distributing any changes to a definition across the cluster. If the principal is
unavailable when a sensor's definition is updated, distribution of the update may fail. The failure
may not always be correctly detected, resulting in an OK response to the update command
when it should not be OK. Only update the sensor definitions on a stable cluster.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Fix Summary The CLI command show network detail shows Uplink info and the Health report: <port>
<portIdentification portNumber="49" type="uplink" speedSetting="" status="Up"/>
Found in Version 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1,
2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0,
3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
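The Health Report fragment quoted in the fix summary above is plain XML and can be inspected programmatically. A sketch using only the attributes shown (the surrounding report structure is not assumed):

```python
import xml.etree.ElementTree as ET

# portIdentification element copied verbatim from the fix summary above.
snippet = ('<portIdentification portNumber="49" type="uplink" '
           'speedSetting="" status="Up"/>')

elem = ET.fromstring(snippet)
# Check whether this element describes an uplink port that is up.
is_uplink_up = elem.get("type") == "uplink" and elem.get("status") == "Up"
print(elem.get("portNumber"), is_uplink_up)  # 49 True
```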
Server
Host OS Any OS
Problem A temporary network interface failure on an access node can result in small but permanent
read/write performance degradation
Symptom A temporary network interface failure on an access node may result in a small but permanent
read/write performance degradation. Ongoing SDK threads may hang for a while on the access
nodes and block the reuse of the connection the thread is using. The number of connections
available to threads will be reduced. A restart of CentraStar is required to reset unavailable
connections.
Fix Summary
Host OS Any OS
Problem The replication target cluster may get out of sync with replication retry
Symptom The replication target cluster may get out of sync in the rare case where replication of a blob is
retried while cleanup of the previously failed write transaction has yet to complete.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Read-only tags for obsolete database entries are never cleaned up
Symptom After a (blobindex or BLC) database has been moved, as a result of a disk failure or manual
move, DatabaseManager.cml still contains a tag for the database indicating that it is in read-only
mode. Those obsolete database tags will give false positives when EMC Service tries to detect
read-only databases.
Fix Summary
Found in Version 1.0.0, 1.1.0, 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3,
2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom When a manually started GC run is aborted due to a non-uniform cluster version and the
auto-scheduling mode is enabled, the reason for the aborted run will report the auto-scheduling
mode instead of the manual mode.
Fix Summary
Host OS Any OS
Symptom Disabling NTP on a node to perform a service procedure may cause time stamp inconsistencies
between the C-Clip and the metadata. When a service procedure requires you to disable NTP,
make sure that Filepool is no longer running. When restarting Filepool after the service
procedure, make sure that NTP is also started.
Fix Summary
Host OS Any OS
Symptom Nodes may go on- and offline because of database corruption. To investigate this, please refer to
Primus case emc165864.
Fix Summary
Found in Version 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2,
4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem A delete of a C-Clip fails if its mirror copy is located on an offline node
Symptom An SDK delete fails when the mirror copy of a C-Clip resides on an offline node. The client will
receive error code -10156 (FP_TRANSACTION_FAILED_ERR) in this case.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Problems with the blob partitions can cause Filepool to hang at startup
Symptom In exceptional cases it can happen that during startup Filepool cannot open the blob index
databases and platform cannot restart Filepool. This results in a node on which Filepool does
not run.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Garbage Collection (GC) aborts when it encounters a fragment that has an incorrectly formatted
name. As long as the fragment remains on the system GC cannot run. As a consequence the
number of aborted runs will increase and no extra space will be reclaimed. EMC Service has to
investigate the failed runs and fix the problem. To view the GC status, use Centera Viewer
> Garbage Collection.
Fix Summary
Host OS Any OS
Problem A node may continuously reboot due to a bad disk which is not healed by EDM
Symptom A node may continuously reboot due to a bad disk which is not healed by EDM.
Fix Summary Primus procedure: either force EDM to run and/or bring the bad disk down for regeneration
Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2,
4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem During Centera self-healing actions, suboptimal load balancing can cause degraded SDK
performance
Symptom In rare circumstances, Centera's load balancing mechanisms fail to spread data traffic evenly
and can cause temporary performance degradations. This may happen during periods of heavy
I/O load to Centera while data self-healing and internal database repair operations are taking
place.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Possible false error message when removing access role and immediately adding it again
Symptom When removing the access role from a node and then immediately adding the access role back to
it, in very rare cases an error message may be displayed even though the role change was
performed successfully.
Fix Summary Verify the access role to be sure the change was performed. Normally the error message can be
ignored.
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom In very exceptional cases a corrupted database may not be noticed and could cause range tasks
such as init, copy, or cleanup to loop without making any progress.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Incorrect error code when C-Clip is unavailable due to corrupted CPP blob
Symptom The SDK may return an incorrect error code (-10036, FP_BLOBIDMISMATCH_ERR) when a
CPP blob is unavailable due to a corrupted fragment and a disk with another fragment that is
offline. The correct error code is -10014, FP_FILE_NOT_STORED_ERR.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
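The SDK error codes cited in this and nearby entries can be collected into a small lookup table for log triage. The codes and symbolic names below are taken from the symptom descriptions in these release notes; the table and helper are illustrative, not part of the Centera SDK API:

```python
# SDK error codes quoted in these release notes, mapped to their names.
SDK_ERRORS = {
    -10014: "FP_FILE_NOT_STORED_ERR",
    -10036: "FP_BLOBIDMISMATCH_ERR",
    -10156: "FP_TRANSACTION_FAILED_ERR",
}

def describe_error(code):
    """Return the symbolic name for a known SDK error code."""
    return SDK_ERRORS.get(code, "unknown error code %d" % code)

print(describe_error(-10036))  # FP_BLOBIDMISMATCH_ERR
```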
Pools
Host OS Any OS
Problem Virtual pools do not enable on new nodes after capacity addition
Symptom When adding capacity, virtual pools may not be automatically enabled on newly added nodes
when the cluster already has virtual pools enabled. The CLI command "pool migration" indicates
it has not started yet. When this situation occurs, read and query performance may also be
degraded.
Fix Summary
Host OS Any OS
Problem Query performance issues and inability to create new pools after adding new nodes
Symptom Adding new Gen4 nodes to a cluster that has pools enabled may cause the pool migration
status to no longer show finished, may cause queries to run slower, and may prevent the user
from creating pools.
Found in Version 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2,
4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Not all C-Clips are mapped to the pool specified after pool migration
Symptom Pool migration does not take into account regeneration self-healing activity. In limited cases it
may happen that a C-Clip is not mapped to the appropriate pool when a regeneration
self-healing task runs during the pool migration. The C-Clip then remains in the default pool.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Replication
Host OS Any OS
Problem Mutable metadata updates are not replicated for C-Clips that had been written before replication
was enabled
Symptom If event-based retention or litigation hold updates occur after replication was enabled on C-Clips
that were written to the cluster before replication was enabled, those updates will not be
replicated. This is true even if the C-Clips already exist on the target cluster.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem C-Clip with metadata added by the application containing invalid characters cannot replicate
Symptom A C-Clip with metadata added by the application containing invalid characters cannot replicate
and may even cause replication to get stuck and reject attempts to update event-based retention
entries. Valid characters are #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] |
[#x10000-#x10FFFF].
Fix Summary
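The valid character ranges listed above can be checked on the client side before metadata is written. A minimal sketch, using only the ranges quoted in this entry; the helper names are illustrative:

```python
def is_valid_metadata_char(cp):
    """Return True if the code point is in the valid set listed above:
    #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]."""
    return (cp in (0x9, 0xA, 0xD)
            or 0x20 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)

def find_invalid_chars(metadata):
    """Return positions of characters that would block replication."""
    return [i for i, ch in enumerate(metadata)
            if not is_valid_metadata_char(ord(ch))]
```

An application could run `find_invalid_chars` over metadata values before writing a C-Clip and reject or sanitize any value that returns a non-empty list.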
Host OS Any OS
Problem Replication cannot be disabled if the replication roles on source or target are removed
Symptom To disable replication completely, you must first disable replication with the CLI command set
cluster replication before you remove the replication roles from the source and target clusters. If
replication cannot be disabled because all replication roles have already been removed, first add
the replication role to two nodes on the source and target clusters and then disable
replication.
Fix Summary
Host OS Any OS
Symptom CentraStar may not replicate mutable metadata for an XAM XSet when the user has deleted this
XSet on the source cluster and the Global Delete feature is disabled.
Fix Summary
Host OS Any OS
Problem If a C-Clip is re-written without being changed and the C-Clip has triggered an EBR event or has
a litigation hold set, the C-Clip is replicated again
Symptom If a C-Clip is re-written to the cluster without being changed (no blobs added, no metadata
added or changed) and the C-Clip has triggered an EBR event or has a litigation hold, the C-Clip
is replicated again, although this is not necessary. Besides the extra replication traffic, there is no
impact.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem The reported number of C-Clips to be replicated may show a higher number than what actually is
still due to be replicated
Symptom In certain situations the reported number of C-Clips to be replicated may show a higher number
than what actually is still due to be replicated. This is caused by organic self-healing cleaning up
redundant C-Clip fragments before replication has processed them. Organic self-healing does
not update the number of C-Clips to be replicated when it is cleaning up.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem No apparent replication progress with CLI command show replication detail
Symptom When replication of deletes is enabled and many (100,000s) deletes are issued in a short time
period it appears as if replication is not progressing when monitored with the CLI show
replication detail command. Replication is, in fact, processing the deletes.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom When the Global Delete feature is enabled and C-Clips are deleted soon after they have been
written, the Replication Lag value may increase.
Fix Summary Workaround: disable Global Delete or increase the time span between creating and deleting
the C-Clip.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Replication with failed authentication gives back wrong error message in some cases
Symptom When replication is started with a disabled anonymous profile, the SDK returns the error code
FP_OPERATION_NOT_ALLOWED (-10204) to the application and replication pauses with
paused_no_capability. When replication is started with a disabled user profile, the SDK returns
the error code FP_AUTHENTICATION_FAILED_ERR (-10153) and replication pauses with
paused_authentication_failed. This does not affect the operation of the application.
Fix Summary Consider both error messages as valid for this use case
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
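Since both outcomes are valid for this use case, application-side error handling can accept either code. A small sketch; the codes and pause states are quoted from this entry, while the helper itself is illustrative:

```python
# Error codes and pause states quoted in this entry.
FP_OPERATION_NOT_ALLOWED = -10204       # disabled anonymous profile
FP_AUTHENTICATION_FAILED_ERR = -10153   # disabled user profile

EXPECTED_PAUSE_STATE = {
    FP_OPERATION_NOT_ALLOWED: "paused_no_capability",
    FP_AUTHENTICATION_FAILED_ERR: "paused_authentication_failed",
}

def is_expected_auth_failure(error_code):
    """Both codes are valid outcomes of starting replication against a
    disabled profile; neither should be treated as fatal by the application."""
    return error_code in EXPECTED_PAUSE_STATE
```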
Host OS Any OS
Problem Replication does not pause when global delete is issued and target cluster does not have delete
capabilities granted
Symptom When the replication profile has no delete capability granted and a global delete is issued, the
deleted C-Clips go to the parking lot. Replication does not get paused.
Fix Summary An alert will be sent when the parking lot is almost full
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Self Healing
Host OS Any OS
Symptom An unstable cluster can result in a node failing to receive updates to parameters. This failure
prevents progress in other system processes.
Fix Summary
Host OS Any OS
Problem EDM can get stuck indefinitely if a volume cannot be unmounted when a repair tries to kick in
Symptom When an EDM repair tries to kick in and is unable to unmount all volumes, EDM may get stuck
indefinitely and stop activity on the affected node. EDM may not be able to unmount if a service
user is logged on to the node and the current directory is in the volume that needs to be
unmounted.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom After upgrading from CentraStar 3.0 to 3.1 or higher, the MD5 scrubber task is displayed twice in
the Task List and the checkpoint can be wrong or missing.
Fix Summary
Found in Version 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem No-write functionality may not always succeed in protecting the node from repetitive reboots
Symptom In rare cases the disk no-write functionality may fail to protect the node from a repetitive DBinit
and reboot cycle. Examples of such rare cases are: an upgrade from a version without the
no-write functionality (versions before 2.4.3, 3.0.3, and 3.1.2), and service actions that
accidentally place large files in the blob partitions, suddenly causing a very low space
situation.
Fix Summary
Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2,
4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Primary Fragment Migration statistic shows RUNNING while the task is paused
Symptom When the FeedOrganicSingleStep task (Primary Fragment Migration) is paused from the Task
List in Centera Viewer, the statistic OrganicManager.FeedOrganicSingleStep.running_status
shows RUNNING instead of PAUSED.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem EDM fails to skip a disk that dies while the system is running and will loop forever
Symptom EDM will fail to skip a disk that dies while the system is running and it is not detected or marked
dead. This will cause EDM to loop indefinitely.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem EDM may not recognize a failed disk if smartctl information cannot be read
Symptom If a disk failure occurs and the smartctl data cannot be read, EDM will not realize that the disk is
bad and will not attempt a repair. This could lead to underprotected data and potential data
unavailability. There is a very small chance that this will happen.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem DB init and regenerations may get stuck and log files are filled with retry logging
Symptom CentraStar may switch the hard disk I/O mode from DMA to PIO when the disk experiences
transient or persistent errors. This results in a decreased hard disk performance and can impact
the overall performance of a busy cluster as much as 50%.
Fix Summary
Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Organic Regeneration and BFR may not always detect a stealth corruption of CPP blobs in
certain situations
Symptom Organic Regeneration and Blobs For Review (BFR) may not always detect a stealth corruption of
CPP blobs in certain situations. Other self-healing functions will ultimately deal with them.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Due to subsequent EDM requests, a node may become inaccessible, preventing Garbage
Collection from progressing. This can be an issue for clusters that are nearly full and on which the
application is consistently issuing writes and deletes.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem A regeneration task on a node that restarts will wait until the regeneration timeout expires
Symptom If a node regeneration task is running on a node that is restarted, the task will wait until the
regeneration timeout expires before it restarts. This means that the node will wait longer to
regenerate than necessary.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom The regeneration buffer is by default set to 2 disks per cube or 1 disk per mirrorgroup per cube. If
the cluster is heavily loaded, this could cause the cluster to run out of space when a node goes
down. As a workaround, set the regeneration buffer to 2 disks per mirrorgroup per cube. Use the
CLI command set capacity regenerationbuffer to set the limit to 2 disks.
Fix Summary Set the regeneration buffer to 2 disks per mirrorgroup per cube.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Centera SDK
Host OS Any OS
Problem The network interfaces on Gen4LP might auto negotiate to a speed lower than available (1,000
Mbps)
Symptom During a service intervention the network interfaces on Gen4LP nodes may have been brought
down temporarily. After the intervention the network interfaces might auto negotiate to a link
speed lower than available (1,000 Mbps). As a result, the performance will be lower.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Upgrades
Host OS Any OS
Problem Nodes in Console domain show old port number after upgrade from CentraStar lower than 3.1
Symptom When upgrading a cluster from a CentraStar version lower than 3.1 to a higher version, the CLI
command show domain list will show both the old and new management port number (3218 and
3682) of each node in the Console domain. After an upgrade from CentraStar lower than 3.1 to a
higher version the service engineer should check the Console domain configuration on the
cluster that Console connects to and update it if necessary.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Deployed images of higher versions will be removed after an automated upgrade
Symptom After an automated upgrade to CentraStar 4.0 or higher, obsolete images are removed to save
space. Images of higher versions are also regarded as obsolete and are therefore deleted. When
performing a multi-step upgrade, deploy the next image only after each upgrade step has
completed.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Nodes taking longer than 30 minutes to upgrade to CentraStar 4.0 or higher might shut down
Symptom When a node upgrade to CentraStar 4.0 or higher takes more than 30 minutes, it will be retried.
The upgrade may actually have succeeded and the node may have come back online, but the
retry shuts down CentraStar. Restarting CentraStar on the affected node solves the issue.
Fix Summary
Host OS Any OS
Problem The MOTD does not reset after an FPreinit is issued
Symptom The MOTD message does not reset after executing an FPreinit.
Fix Summary
Host OS Any OS
Problem Upgrade can be stuck in paused state when nodes go offline unexpectedly
Symptom An upgrade may remain paused when a storage node goes down after all access nodes have
failed to upgrade. This occurs on clusters where the first upgraded node was a spare node. To
resolve the issue, bring the storage node back online.
Fix Summary
Host OS Any OS
Problem Upgrade completion does not work correctly when a node is removed while downgrading
Symptom The upgrade completion does not work when a node is removed while an automatic node
upgrade is downgrading from a 4.0 version to a lower 4.0 version. The advanced ranges stay
enabled. As a workaround you should restart the principal node.
Fix Summary
Host OS Any OS
Problem The 'install' and 'show config version' command can fail sporadically when the cluster is under
heavy load
Symptom When the cluster is under heavy load, the CLI commands 'install' and 'show config version' may
sporadically fail due to time outs while processing installation images.
Fix Summary
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1,
4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom When the FPupgrade_v2 tool is used to activate an image, it can fail reporting that it "cannot find
version number". This is caused by the invalid assumption that all nodes being upgraded have
the images in the same location. This can be worked around by manually ensuring that the
upgrade images are in the same location for all nodes that need to be upgraded.
Fix Summary
Host OS Any OS
Symptom Upgrade from 3.1.0 to 3.1.2 aborts because of a failed FPgrub-install in the upgrade log.
Fix Summary Re-image the boot partitions with the old boot image and re-start the upgrade.
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem When adding multiple nodes, some nodes may not be automatically upgraded
Symptom When adding multiple nodes, a node may sometimes not be upgraded automatically. As a result
the node can come online with its original software version.
Fix Summary Restarting CentraStar on the node will upgrade the non-upgraded node when it tries to come
online again
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1,
4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom During upgrades, there can be a small window during a reboot where a node may not complete
the reboot. In most cases the screen will then display GRUB.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom When a node with the access role goes down or becomes unavailable, it may happen in some
circumstances that the cluster becomes unreachable. This can for example happen during an
upgrade.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Upgrading may cause read errors at the moment that one of the nodes with the access role is
upgraded. The read errors will disappear after the upgrade.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom Upgrading your cluster to version 2.4 SP1 may cause client errors on the application that runs
against the cluster. The following circumstances increase this risk: 1) When the cluster is heavily
loaded. 2) When the application is deleting or purging C-Clips or blobs. 3) When the application
runs on a version 1.2 SDK, especially when the number of retries
(FP_OPTION_RETRYCOUNT) or the time between retries (FP_OPTION_RETRYSLEEP) is set to
the default values or lower. If you have a retry count of 3, set the retry sleep to at
least 20 seconds.
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
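The retry guidance in the entry above (a retry count of 3 calls for a retry sleep of at least 20 seconds) can be expressed as a small client-side check before the options are applied to the SDK. A sketch under that assumption; the helper and the generalization to counts above 3 are illustrative, not documented behavior:

```python
# Per the guidance above: with a retry count of 3, the retry sleep
# should be at least 20 seconds (expressed here in milliseconds).
MIN_RETRYSLEEP_MS = 20_000

def recommended_retrysleep_ms(retrycount, current_sleep_ms):
    """Return a retry sleep that satisfies the guidance: for a retry
    count of 3 or more, never sleep less than 20 seconds between retries."""
    if retrycount >= 3:
        return max(current_sleep_ms, MIN_RETRYSLEEP_MS)
    return current_sleep_ms
```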
Hardware
Host OS Any OS
Problem Node may perform more reboots than necessary when a cube switch is no longer responding
Symptom When a cube switch is no longer responding, unnecessary reboots of a node may occur until the
switch gets rebooted or replaced.
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Disk failures as shown through CLI or Centera Viewer might not be shown on the front panel.
Symptom Disk failures as shown through CLI or Centera Viewer might not be shown on the front panel.
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Tools
Host OS Any OS
Symptom Integrity Checker 4.10 does not read or report XAM-related objects or C-Clips. If there is data
written by XAM on the cluster, it will not show up in the IC 4.10 reports.
Fix Summary
Host OS Any OS
Problem Execution of service scripts that need to download and run code on the CentraStar server is
refused
Symptom From CentraStar version 4.0 onwards, service scripts are signed. When these scripts download
and try to run code from the CentraStar server, the digital signature of the script is checked. Part
of the validation is a check whether the certificates used are still within their validity period. If the
time on the cluster is set incorrectly, it may fall outside the validity period of the
certificate, and the script cannot be executed.
Fix Summary 1. Change the time on the cluster if possible (Primus solution emc103181). 2. If the time cannot
be changed for some reason, issue a service certificate that is valid according to the time on the
cluster.
Host OS Linux
Symptom Two nodes might become unavailable at the same time when the DBMigration tool runs
simultaneously with a cluster upgrade. This may cause temporary data unavailability.
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Disk-to-Disk Tool does not handle bad write errors on the target node
Symptom In the unlikely event that a write error occurs on a new target hard disk, the bad sector will not be
overwritten nor fixed. Even if the corresponding sector on the source disk is good, the data on
the target disk will remain corrupt.
Fix Summary
Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2,
4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Documentation
Host OS Any OS
Problem Incorrect path to EMC Centera software on Powerlink in EMC Centera Quick Start Guide
Symptom The EMC Centera 4.0 Quick Start Guide, P/N 300-002-546, Rev A03, gives a wrong path to the
EMC Centera software on Powerlink (Support > Software Downloads and Licensing >
Downloads A-C > Centera Enterprise Software). The correct path is: Home > Support >
Software Downloads and Licensing > Downloads C > Centera Enterprise Software.
Fix Summary
Host OS Any OS
Problem Access nodes are rebooting when establishing many connections at the same time
Symptom The rate at which new SDK clients connect to a cluster is limited to 5 per minute. Care should be
taken when multiple clients boot up and connect simultaneously. If an excessive number of
connections are established at the same time, the node with the access role may reboot.
Fix Summary Change your client start-up procedure to avoid establishing too many connections at the same
time
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
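One way to stagger client start-up under the 5-connections-per-minute limit described above is to assign each client a fixed start offset. A minimal sketch; the pacing scheme is illustrative, not a documented EMC procedure:

```python
def staggered_start_offsets(num_clients, max_per_minute=5):
    """Compute start offsets (in seconds) so that no more than
    max_per_minute clients connect within any 60-second window."""
    interval = 60.0 / max_per_minute  # 12 s between connections at 5/min
    return [i * interval for i in range(num_clients)]
```

Each client would then sleep for its assigned offset before opening its first connection, keeping the connection rate within the documented limit.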
Data Integrity
Host OS Any OS
Problem EDM can fail with segmentation fault due to invalid volume information in configuration files
Symptom The EDM process can fail with a Segmentation Fault due to missing volume information in the
nodesetup and edm.conf file. The missing volume information in these files is considered invalid.
Follow the service procedures for correcting the node setup file.
Fix Summary
Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem FilePool may continuously attempt to start on a node with a bad disk
Symptom If a disk is corrupt and the Linux OS flags it as read-only, FilePool may continually attempt to start
on the node. As a resolution, replace the disk and ensure that after the procedure the disk is
mounted read/write.
Fix Summary
Host OS Any OS
Problem It is unclear when it is safe to reset cluster integrity, and doing so may have serious consequences
Symptom It is not clear when it is safe to reset cluster integrity. Resetting the cluster integrity may be
required when for example a cluster has two disk failures, A and B, with stuck regenerations (due
to circular dependency between fragments on the two disks). However, resetting cluster integrity
in un-safe situations may have serious consequences. Resetting cluster integrity must be done
with great care.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem On nearly full clusters, it is not possible to delete a C-Clip because there is no room to write the
reflection.
Symptom When many embedded blobs are written to the same C-Clip, CentraStar may have an issue
parsing the C-Clip which in an extreme case could cause the node to reboot.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Configuration
Host OS Any OS
Symptom Importing a pool definition to a cluster in Basic mode that was exported from a cluster in GE or
CE+ mode will fail. Both clusters must run the same configuration to import a pool definition.
Furthermore, importing a pool definition to a cluster without the Advanced Retention
Management (ARM) feature that was exported from a cluster with the ARM feature will fail. Both
clusters must have the ARM feature.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Support
Host OS Any OS
Symptom Occasionally, a FRU replacement disk cannot be added to the node because the disk has not
been formatted automatically. To resolve this issue, manually format the disk and re-insert it into
the node following the procedures.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Symptom When the /var partition is full the fpshell command will fail with multiple 666 [NO RES] errors
because it uses this partition for temporary storage. Investigate the /var partition usage and
clean out unnecessary files.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Host OS Any OS
Problem Prior to replacing a disk in a node, the status of regenerations needs to be verified
Symptom Prior to replacing a disk in a node, the following needs to be verified: 1) Are regenerations
running for that disk? 2) Does the node on which a disk needs to be replaced have failed
regenerations? 3) Do other nodes have failed regenerations for the node on which a disk needs
to be replaced?
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Security
Host OS Any OS
Problem When upgrading from CentraStar 3.1 to CentraStar 3.1.2 or higher, the anonymous profile may
be enabled again
Symptom When upgrading from a newly installed cluster running CentraStar 3.1 with anonymous disabled
to CentraStar 3.1.2 or higher, the anonymous profile may have been enabled during the
upgrade. This happens if the profile was never updated.
Fix Summary
Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1,
4.0.1p2, 4.0.2
Host OS Any OS
Symptom With only the hold capability enabled, setting or unsetting a litigation hold on a C-Clip fails with
an insufficient capabilities error. Add the write capability to work around this issue.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1,
4.0.1p1, 4.0.1p2, 4.0.2
Compatibility
Host OS Any OS
Symptom Profile C-Clips cannot be written to CentraStar version 3.1 or higher with an SDK version older
than 3.1 if the maximum retention period for the cluster is set to anything other than 'infinite'.
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3,
3.1.3p4, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Query
Host OS Any OS
Symptom The delete of a C-Clip will result in the deletion of both copies of the CDF, and the creation of a
pair of reflections. After that, Incremental GC will delete the underlying blobs. If one or more
copies of the C-Clip's CDF are located on offline nodes at the time of the delete, Full GC will
remove these copies when the nodes come back online. During the Full GC process, read and
query of the deleted C-Clip may temporarily succeed. During the Incremental GC process, read
of the underlying blobs may also succeed.
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 3.1.3p3, 3.1.3p4, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1, 4.0.1p1, 4.0.1p2, 4.0.2
Centera CLI
Host OS Any OS
Problem User profiles with only the 'monitor' role assigned are unable to use the 'show config version' CLI
command
Symptom If you use a profile that has only the 'monitor' role associated with it, the 'show config version'
command does not appear on the list of available commands. Issuing this command will result in
a failure.
Fix Summary
Host OS Any OS
Symptom The CLI command show node mode maintenance all gives Unknown as the reason for
maintenance while it should be DBMove.
Fix Summary
Host OS Any OS
Problem The CLI command show location returns message on internal error in remote manager
Symptom The CLI command show location returns the error "An internal error occurred in the remote
manager" for C-Clips that are written multiple times to a cluster. This can happen for example
when a backed-up C-Clip is restored to the cluster with CASScript, while it already exists on the
cluster. This particular situation will automatically be resolved with cluster self-healing
operations.
Fix Summary
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom The 'emcsecurity' account is able to execute the CLI command 'show summary'. This should
not be allowed.
Fix Summary
Host OS Any OS
Problem DefaultCluster listed in show domain list does not get updated when node roles or IP settings
change
Symptom The DefaultCluster in the domain list (CLI command 'show domain list') does not get updated
when removing the management role from nodes and/or updating the IP addresses.
Fix Summary
Host OS Any OS
Problem Show config warnings gives invalid warning message about replication roles
Symptom The CLI command show config warnings may show: "Warning: replication role: has no
redundant network connection." even though the replication role has not been assigned to any
node.
Fix Summary
Host OS Any OS
Symptom The admin user can disable their own profile or revoke roles from their own profile. This
disables the admin CV login or renders the admin account useless.
Fix Summary
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Some CLI commands may truncate the last column of a table
Symptom The CLI commands show profile list, show constraint list, show pool capacity, show pool
capacitytasks, show pool detail, show pool list, show pool mapping, and show pool migration
display truncated output even when the number of columns to use for output is set higher than
80 with the set cli command. This happens when the table format is set to fixed.
Fix Summary A workaround is to set the table format to expand or wrap with the set cli command.
Found in Version 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1,
3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1,
3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem CLI command show config version does not show patch number
Symptom The CLI command show config version does not show the patch number in the field Version
being distributed.
Fix Summary
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom When you run Centera CLI on Solaris the arrow, backspace, and delete keys do not work.
Fix Summary
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom When setting the From address for ConnectEMC using the CLI command set cluster notification,
the data entered may not be saved the first time.
Fix Summary Issue the set cluster notification command again to set the From address.
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Some CLI commands may not be executable on certain CentraStar versions during upgrade.
Symptom Some CLI commands may not be executable on certain CentraStar versions. The CLI uses the
active CentraStar version on the cluster to determine which commands to use. During an
upgrade this active version may not be accurately determined. If certain CLI commands do not
work properly during an upgrade, please reconnect.
Fix Summary If certain CLI commands do not work properly during an upgrade, please reconnect.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
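For scripts that drive the CLI during an upgrade window, the "reconnect" workaround above can be automated with a generic retry wrapper. This is a sketch only; `run_command` and `reconnect` are hypothetical callables standing in for whatever session mechanism the script uses, not part of the Centera CLI:

```python
def run_with_reconnect(run_command, reconnect, command, retries=2):
    """Run a CLI command; on failure, reconnect and retry.

    Generic automation pattern for the workaround above. run_command and
    reconnect are hypothetical callables supplied by the caller.
    """
    for attempt in range(retries + 1):
        try:
            return run_command(command)
        except RuntimeError:
            if attempt == retries:
                raise
            reconnect()  # a new session re-reads the active CentraStar version
```

The key point, per the symptom, is that a fresh connection re-determines the active CentraStar version, so the retried command is dispatched against the correct command set.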
Host OS Any OS
Problem IP address or linkspeed of a node with access role cannot be set manually
Symptom In order to manually set the IP address of a node with the access role or to change the
linkspeed, the access role has to be disabled first. After the settings have been changed, you
have to enable the access role again.
Fix Summary If you cannot manually update the IP settings, contact Global Services for assistance.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Using an invalid BlobID or ClipID with the CLI command 'show location' gives an error code.
Symptom When using an invalid BlobID or ClipID with the CLI command 'show location', the following
error appears: Command failed: An internal error occurred in the remote manager.
Fix Summary
Found in Version 1.0, 1.1, 1.2, 1.2 SP1, 1.2.2, 2.0, 2.0 SP1, 2.0 SP2, 2.1, 2.2, 2.2 SP1, 2.2 SP2, 2.3, 2.3 SP2, 2.3
SP3, 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1
Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3,
3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom The CLI command set cluster notification sometimes fails. If this occurs, retry the command.
Found in Version 2.3, 2.3 SP2, 2.3 SP3, 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0
SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Centera Viewer
Host OS Any OS
Symptom The Device Name 2 column in the Integrity tab of the Regenerations dashboard (Commands >
Regenerations > Integrity) does not show the device name that corresponds to Regeneration ID
2 but instead shows the same device name as for Device Name 1.
Fix Summary
Host OS Any OS
Problem On Gen2/Gen3 clusters the power module in CV will not work if the ATS cable is not connected
Symptom For clusters with Gen2 and/or Gen3 nodes, the Power module (Commands > Power) will not
work if the ATS cable is disconnected.
Fix Summary
Host OS Any OS
Problem The Retry button in the replication parking viewer does not work for parked entries of type
metadata update
Symptom The Retry button in the replication parking viewer (Tools > Parking > Replication) does not work
for parked entries of metadata update type. As a workaround retry the entire parking including
metadata updates by using the Retry All button.
Fix Summary Retry the entire parking including metadata updates by using the Retry All button
Host OS Any OS
Problem After upgrade to CentraStar 4.0 or higher CV/CLI may not show all roles immediately
Symptom During or after an upgrade to CentraStar 4.0 or higher, the Nodelist in Centera Viewer and the
CLI command show node status may not immediately show the management and replication
roles. Restart Centera Viewer and connect to a node that already completed the upgrade to view
all correct roles immediately.
Fix Summary Restart Centera Viewer and connect to a node that already completed the upgrade to view all
correct roles immediately.
Host OS Any OS
Problem The Remove all button in the restore parking viewer does not work properly
Symptom The Remove All button in the restore parking viewer (Tools > Parking > Restore) does not work
correctly. The parking entries seem to be removed while in fact they are not. As a workaround
select the individual parking entries and use the Remove Selected Clips button instead.
Fix Summary Select the individual parking entries and use the Remove Selected Clips button
Host OS Any OS
Problem Centera Viewer can respond slowly or not at all when it is running out of memory
Symptom Centera Viewer can respond slowly or not at all when it is running out of memory. This can occur
with windows that retrieve a lot of information from the server, for example the statistics or
ranges windows. It will occur more frequently when automatic refresh is turned on and the debug
window is opened. The workaround is to restart Centera Viewer.
Fix Summary
Found in Version 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1,
3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1,
3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom Specifying a custom path for extension.jar files in the Centera Viewer Preferences window does
not work. Centera Viewer will only look for extension.jar files in the Centera Viewer installation
directory.
Fix Summary
Host OS Any OS
Problem An error suggests that the CLI command set security lock has failed
Symptom When executing the set security lock command, other running CV/CLI operations (such as
auto-refresh in CV) may display an error 'Not authorized'. This may falsely suggest that the set
security lock command has failed, while it was successfully executed.
Fix Summary
Host OS Any OS
Problem Loading data in CV results in the error "One or more statistics not found"
Symptom Loading data in CV by File > Load Data, Commands > Record Data and Commands > Real-time
Data results in the error "One or more statistics not found".
Fix Summary
Host OS Any OS
Problem Centera Viewer may run out of memory when it tries to download log files
Symptom Centera Viewer may run out of memory when it tries to download log files from a cluster with
many log files.
Fix Summary
Host OS Any OS
Problem Centera Viewer can respond slowly or not at all when running out of memory
Symptom Centera Viewer can respond slowly or not at all when it is running out of memory.
Found in Version 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1,
3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1,
3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Service script fails with error "cannot establish a development connection"
Symptom When running Tools > Service > Upgrade > Upgrade scripts in CV, the service script fails and
returns the error "cannot establish a development connection". Executing a service script
requires a new connection to the server. If all 10 connections are already in use, the script will
fail with the given error.
Fix Summary
Found in Version 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem All nodes are displayed as spare in CV for users with monitor role only
Symptom If you connect to a cluster with CV as a user with only the monitor role, all nodes are displayed
as spare irrespective of their actual node role.
Fix Summary
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Specific log entries may not be displayed properly in Centera Viewer's logging window,
especially when connecting to a cluster running CentraStar earlier than 3.1.
Symptom Specific log entries may not be displayed properly in Centera Viewer's logging window,
especially when connecting to a cluster running CentraStar earlier than 3.1. The complete log
files can be retrieved via ssh.
Fix Summary The complete log files can be retrieved via ssh.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem Adding the storage role to a spare node fails when using Centera Viewer 3.0 or 3.1 on a
cluster running CentraStar version 2.3 and below
Symptom Do not use Centera Viewer 3.0 or 3.1 to add the storage role to a spare node on a cluster
running CentraStar version 2.3 and below. Instead, use the CLI command set node role, or use
Centera Viewer v2.4 or below.
Fix Summary Instead, use the CLI command set node role, or use Centera Viewer v2.4 or below.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom In the Device list (Centera Viewer > Nodelist > Devices) an optical link (nic) may show up as eth9
instead of eth3.
Fix Summary
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Symptom After starting a pool migration, the CLI reports the ETA of the migration. The unit of this
ETA is reported as years when it should be hours.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem No emails are sent if invalid email addresses are entered in the recipient list for notification.
Symptom If the email recipient list (configurable with the CLI command 'set notification') contains email
addresses that are in a domain that cannot be reached, all addresses in the list will be ignored
and no messages will be sent.
Fix Summary Make sure there are no undeliverable email addresses in the recipient list.
Found in Version 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch
1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
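A local pre-check of the recipient list can catch obviously malformed addresses before one bad entry silences the whole list. This is a generic sketch, not part of the Centera tooling; it checks only basic address shape and does not verify that a domain is actually reachable:

```python
import re

# Minimal shape check: something@domain.tld (not a full RFC 5321 validator).
_ADDRESS = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def split_recipients(recipients):
    """Partition a recipient list into plausible and malformed addresses."""
    ok, bad = [], []
    for addr in recipients:
        (ok if _ADDRESS.match(addr) else bad).append(addr)
    return ok, bad
```

Running the malformed bucket past the administrator before issuing set notification reduces the chance that the entire notification list is silently ignored.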
Host OS Any OS
Symptom Consolidated Logging does not display any mail logging. In Centera Viewer, open Field >
Logging > Mail Logging > Display Logs to obtain the mail logs.
Fix Summary In Centera Viewer, open Field > Logging > Mail Logging > Display Logs to obtain the mail logs.
Found in Version 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1,
3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1,
3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem If you switch to another application while the Browse Dialog Box is open in Centera Viewer, the
dialog box may remain hidden when returning to Centera Viewer.
Symptom If you switch to another application while the Browse Dialog Box is open in Centera Viewer, the
dialog box may remain hidden when returning to Centera Viewer. Use your keyboard shortcut
keys (for example ALT+TAB on MS Windows) to cycle to the dialog box.
Fix Summary Use your keyboard shortcut keys (for example ALT+TAB on MS Windows) to cycle to the dialog
box.
Found in Version 2.3, 2.3 SP2, 2.3 SP3, 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0
SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Host OS Any OS
Problem The DiskStorageDistribution report in Centera Viewer (Field version) may contain incorrect
values for blobs stored, bytes stored, and % used of particular disks
Symptom In the Centera Viewer Field Version, the DiskStorageDistribution report (Tools -> Field -> Cluster
Usage -> DiskStorageDistribution) shows three values for each disk: blobs stored, bytes stored,
and % used. Sometimes a disk may have 0 blobs stored and 0 bytes stored, but a % used of
~50% or higher. This normally indicates that the filesystem on the disk is not mounted and that
the disk is not available to store data.
Fix Summary
Found in Version 2.0, 2.0 SP1, 2.0 SP2, 2.1, 2.2, 2.2 SP1, 2.2 SP2, 2.3, 2.3 SP2, 2.3 SP3, 2.4, 2.4 Patch 1, 2.4
SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2,
3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1,
4.0.2
Monitoring
Host OS Any OS
Problem CLI command set notification needed when setting the cluster domain
Symptom A cluster domain entered with the CLI command set cluster notification on CentraStar version
3.0.2 and below is not saved when using Centera Viewer 3.1 or higher. Use the CLI command
set notification instead when setting the cluster domain.
Fix Summary
Found in Version 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
Limitations
Host OS Any OS
Problem EmailHome: sequence number of health report and 'Last report number' are off by 1
Symptom The Last report number shown by the CLI command show config notification will always be 1
higher than the sequence number listed in the last sent health report.
Fix Summary
Found in Version 2.3, 2.3 SP2, 2.3 SP3, 2.4, 2.4 Patch 1, 2.4 SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0
SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1, 4.0.2
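When cross-checking the two counters in a monitoring script, the relationship can be encoded as a trivial helper (assuming the off-by-one behavior described in the limitation above):

```python
def expected_last_report_number(health_report_sequence: int) -> int:
    """The 'Last report number' shown by show config notification is always
    one higher than the sequence number in the last sent health report."""
    return health_report_sequence + 1
```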
Host OS Any OS
Problem Eth2 NIC cards on storage or spare nodes are displayed as failing by CLI commands that use
the scope failure
Symptom Eth2 NIC cards are by default not connected on nodes with the storage role or on spare nodes.
They are, however, displayed as a failure by the CLI commands that are used with the scope
failure (for instance show security failures n).
Found in Version 2.0, 2.0 SP1, 2.0 SP2, 2.1, 2.2, 2.2 SP1, 2.2 SP2, 2.3, 2.3 SP2, 2.3 SP3, 2.4, 2.4 Patch 1, 2.4
SP 1 Patch 2, 2.4 SP1, 2.4.2, 2.4.2p1, 2.4.3, 3.0, 3.0 SP1, 3.0 SP1 Patch 1, 3.0.0p1, 3.0.2,
3.0.2p1, 3.0.3, 3.1, 3.1 Patch 1, 3.1.1, 3.1.1 Patch 1, 3.1.2, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0, 4.0.1,
4.0.2
Technical notes
The following table contains details of the currently shipping EMC
Centera Gen4 and Gen4LP hardware. Although other hardware
generations are supported, they are no longer shipped and so are not
presented here. For a list of all compatible EMC Centera hardware for
this release, go to E-Lab Navigator™ on the EMC Powerlink®
website.
Documentation
Documentation for the EMC Centera, which can be downloaded from
the GS Web site, includes the following:
◆ Technical Manuals (EMC Centera Hardware, PointSystem Media
Converter, Utilities, EMC Centera API)
◆ Software Release Notices (CentraStar, Linux, IBM/zOS, Solaris,
Windows, HP-UX, AIX, IRIX)
◆ Customer Service Procedures
Installation
For instructions on installing and setting up Centera tools, refer to the
Procedure Generator following the path: CS Procedures > Centera >
Information Sheet > Management and Control.
Note: You need administrator rights to install EMC Centera software on your
machine.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN
THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date regulatory document for your product line, go to the Technical Documentation and
Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.