
Core File

When a controller encounters a critical problem, it may halt or reboot itself to avoid data
corruption; this process is called a panic. In this situation we may retrieve a memory dump
from the system, called a core file or core dump.
If there is a panic message other than a core, we need to run the Panic Message Analyzer
(PMA) before requesting a core analysis.
A core file is generated when there is a panic in the system; it can also be created manually.
The core file contains the contents of the write cache and memory at the time the core file
was created. It is used to analyze and find the reason for the panic or hang of the filer.
When created, the core file is written across reserved areas on the disks belonging to the
appliance that went down.
If the core file is not generated automatically, it can be generated manually.
The savecore command should always be in the /etc/rc file so that the core file is always
moved to a location where it can be accessed. After the reboot, the filer boots normally and
the rc configuration file is run by the boot process. This file contains the savecore command,
which writes the core to the /etc/crash directory so the user can retrieve it.
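For reference, the relevant /etc/rc entry is a single line. A minimal excerpt might look like the sketch below (the surrounding contents of /etc/rc vary per system and are omitted here):

```
# /etc/rc (excerpt)
savecore        # at boot, saves any unsaved core to /etc/crash
```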
In an HA (High Availability) takeover situation before Data ONTAP 7.2, you could not get the
core file until a giveback was performed. From Data ONTAP 7.2 onward, users can access the
core file while in takeover mode.
How to obtain a core file:
1. Normal Core File Processing
2. Sync Core Processing

1. Normal Core File Processing

How to upload a core file to NetApp for analysis?
This can be achieved by the following methods.
One of the easiest methods is to upload the data files directly to NetApp over HTTP/HTTPS.
It is not necessary to have an HTTP license to do this.
To achieve this you need to turn on the auto-indexing option, which is set to off by default.
>options httpd.admin.enable on
>options httpd.autoindex.enable on
Using the link below you can view all the cores in the /etc/crash directory. You can then
go to the core directory and use the link to download it.
Navigate to NetApp website
You need Java installed in your browser in order to upload the file.
Here you need to select HTTP or HTTPS as the protocol.
Enter the valid NetApp case number.
Browse and select your core file.
Enter the remote name of the file to be uploaded; the case number will be added to the
remote filename (
.nz - NanoZip

A separate window will pop up and show the progress of the file transfer. Note the
remote file name (

Check the core file; if the core file does not start with the case ID, recommend that the
customer add the case ID at the beginning of the filename.
To rename the core file:
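A minimal sketch of the rename, assuming the core file has been copied to a workstation first; the file name and case number below are hypothetical examples, not real values:

```shell
# Hypothetical names: prefix the core file with the NetApp case id
case_id=2004567890
core=core.101.2014-05-01.nz
touch "$core"                    # stand-in for the downloaded core file
mv "$core" "${case_id}_${core}"  # filename now starts with the case id
ls "${case_id}_${core}"
```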

Now the steps to upload the file using FTP:

Core_files>ftp to
User Name: anonymous
Password: your email address

There are several directories here; choose to-ntap, which is the core file upload path.
ftp> cd to-ntap
cwd here means "change working directory": /incoming/cs/ontap is the new directory
where the core file is going to be uploaded.
We use the bin command in order to change into binary mode.
The hash command is not mandatory; we use it so that we can see the progress of the file
transfer, instead of the screen being blank.
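Putting these steps together, a complete session might look like the following sketch (the server name, server responses, and file name are illustrative placeholders, not the actual upload server):

```
Core_files> ftp <upload-server>
User Name: anonymous
Password: user@example.com
ftp> cd to-ntap
250 CWD command successful. "/incoming/cs/ontap" is new cwd.
ftp> bin
200 Type set to I.
ftp> hash
Hash mark printing on (1024 bytes/hash mark).
ftp> put 2004567890_core.101.nz
ftp> bye
```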

How to upload a clustered Data ONTAP 8.x core file for analysis

There are three methods to upload a core file:
a) Upload using AutoSupport
b) Manual upload
c) coredump upload command

a) Upload using AutoSupport
Starting from clustered Data ONTAP 8.3 it is possible to upload core files using AutoSupport.
This is the recommended method for uploading a core file to NetApp.
To check whether the core files are saved, run the commands below:
>core show
>coredump status

To verify HTTPS is being used, run the command below:

> autosupport show -fields transport
To start the upload of the core file:
> autosupport invoke-core-upload -core-filename <file name> -case-number <case number>

To view the progress of the core upload you can use the following command:
>autosupport history show-upload-details -node <node name>
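For example, with a hypothetical core file name, case number, and node name:

```
> autosupport invoke-core-upload -core-filename core.151708240.2015-01-14.0_25_18.nz -case-number 2004567890
> autosupport history show-upload-details -node node1
```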
b) Manual upload
Enable remote read-only HTTPS access to the root volume of each node. Refer to the KB on
how to enable remote access to a node's root volume in a cluster. With clustered Data
ONTAP 8.2.1 and newer versions, remote access is already enabled by default.
Copy the file from the root volume to a local workstation. The file will be located in the
coredumps folder for the node specified above.
The file stored on the local workstation can then be uploaded as described above:
Navigate to NetApp website
c) Coredump upload
The core file can be uploaded to NetApp from the filer, provided it has access to the internet
and FTP is not blocked.
Run the coredump upload command as below.
> coredump upload -node <node_name> -corename <file name> -location <location> -type kernel -casenum <case_number>
> coredump upload -node node0 -corename -location -type kernel -casenum
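A filled-in sketch with hypothetical values (the -location value is site-specific and is shown only as a placeholder):

```
> coredump upload -node node0 -corename core.151708240.2015-01-14.0_25_18.nz -location <upload-location> -type kernel -casenum 2004567890
```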

Upload a core file from a down node in Clustered Data ONTAP

Go to the node shell of the up node:

>node run -node up_node_name
Go to the partner context (the prompt shows down-node/up-node).
To check if the core file is saved:
down-node/up-node> savecore -l
To save the file if available:
down-node/up-node> savecore
Use SPI (Service Processor Infrastructure) to download the core file.
Upload the core file to NetApp.
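The whole sequence from the surviving node might look like this sketch (node names and the cluster prompt are placeholders):

```
cluster1::> node run -node up_node
up_node> partner
down_node/up_node> savecore -l     # list any unsaved core from the down node
down_node/up_node> savecore        # save it to /etc/crash
```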

How to obtain sync core file from a hanging or unresponsive storage system
A sync core file is simply a user-created or user-generated core file. A user-generated core
is triggered by sending an NMI (Non-Maskable Interrupt) broadcast to all CPUs on the filer.
The filer will immediately panic and will be taken over by the partner if
cf.takeover.on_panic is set to on. If this option is off, takeover will be deferred/postponed
until the core file is written to the disk.

This sync core process should only be initiated at the direction of TSEs; users are not
supposed to do this without proper instructions.
NOTE: If there is a panic message other than a core, we need to run the Panic Message
Analyzer (PMA) before requesting a core analysis. A sync core should usually be the last
troubleshooting step. In cases where service needs to be restored immediately and a reboot
seems to be the workaround, it might be useful to take a sync core. However, finding the
root cause with this limited information is not guaranteed.



A serial console must be attached and working; Telnet will not work. If the system is
equipped with an RLM/SP (Remote LAN Module / Service Processor), you may be able to
connect to the RLM/SP through SSH, as the RLM/SP is available even while the filer is down.
Only an outgoing connection to NetApp on port 443 is allowed. Data collection is only
from the /root/etc/crash and /root/etc/log directories, and their subdirectories.
If running an HA system with at least one spare disk available, turn on options
cf.takeover.on_panic. This is called a spare core: the core dump will
be written to a spare disk in compressed format during the dump. Takeover
will start immediately after the core is initiated, which causes minimal disruption.
If options cf.takeover.on_panic is off, the core dump will always be sprayed across
the disks and takeover will be deferred/postponed by the time it takes to write the core file.
To generate an NMI (Non-Maskable Interrupt), if the filer is equipped with a Service
Processor (SP) or RLM, just enter system core at the SP/RLM shell.
For F800, FAS900 and NearStore series filers, press the NMI (reset) button. This button is
usually located on the front panel under the bezel.

Note: The FAS8000 series (FAS8020, FAS8040, FAS8060 & FAS8080 EX) storage systems
do not contain a physical NMI button. Use the SP to generate a core file/NMI reset by running
the system core command.
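From the SP/RLM, the sequence is short; the login name, SP address, and prompt below are illustrative placeholders:

```
$ ssh <sp-user>@<sp-ip-address>
SP node1> system core
```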


Saving the sync core: when the filer boots, the core is created and copied to the
/etc/crash directory of the root volume. If the node was taken over during the core dump,
the partner saves the core file to /etc/crash of the taken-over filer/node.
Upload the core file to NetApp.