User Guide. Nexio LLM. December
Nexio® LLM
December 2014
175-100403-00
Imagine Communications considers this document and its contents to be proprietary and confidential.
Except for making a reasonable number of copies for your own internal use, you may not reproduce this
publication, or any part thereof, in any form, by any method, for any purpose, or in any language other
than English without the written consent of Imagine Communications. All other uses are illegal.
This publication is designed to assist in the use of the product as it exists on the date of publication of
this manual, and may not reflect the product at the current time or an unknown time in the future. This
publication does not in any way warrant description accuracy or guarantee the use for the product to
which it refers. Imagine Communications reserves the right, without notice, to make such changes in
equipment, design, specifications, components, or documentation as progress may warrant to improve
the performance of the product.
Trademarks
Product names and other appropriate trademarks, e.g. D-Series™, Invenio®, PowerSmart®, Versio™ are
trademarks or trade names of Imagine Communications or its subsidiaries.
Microsoft® and Windows® are registered trademarks of Microsoft Corporation. All other trademarks and
trade names are the property of their respective companies.
Contact Information
Imagine Communications has office locations around the world. For domestic and international location
and contact information, visit our Contact page
(http://www.imaginecommunications.com/company/contact-us.aspx).
Contents
Overview
    Nexio Server System Architectures
        HDI Internal Storage
        Direct Connect Architecture
        Switched Fibre Channel Model
        Media Host Architecture
        Intrinsic Mirror System
Troubleshooting
    Observing Error Conditions
        Reading Sense Codes
    Locating a Problem Drive
    Locating Physical Drives using LLM
        Locating a Disk ID
        Locating Enclosure Slot Number
    Managing Hot Spares
        Configuring a Hot Spare
        Finding a Hot Spare
    Fixing Problem Drives
        Reseating a Drive
Overview
The main software engine inside all Nexio™ server systems is the Low Level Module (LLM) application.
Although its user interface is spare, the functions provided by the LLM are far reaching. It handles all of
the low level communications between various servers on a Nexio server network system, it directs the
storage and retrieval of media and metadata to the storage array, it manages the real-time compression
and decompression of the media, and it provides the main interface to other software applications that
control the inputs and the outputs from the server.
As an application, the LLM must remain running on each Nexio server that directly or indirectly accesses
media from the storage system. It is possible to minimize the application so that it runs in the
background in the Windows system tray. The main thing to understand is that you must not close the
LLM application if you are recording or playing media from the Nexio system.
This manual provides a reference to the main functions of the LLM user interface. It does not include all
of the possible functions and features of every Nexio platform and storage system. Please refer to the
individual manuals associated with each platform and system for more information on those specifics.
Examples: NXAMP3801HDX, NXVOLT1401HDX, NXS3100 Storage Series, NXS2200 and NXS2300 Farad
Series
For an Intrinsic Mirror system, the LLM ensures that all data written to one storage system is written
simultaneously to the second, providing mirrored copies of the same media storage. Nexio servers have
access to both storage systems. This provides more storage, more data bandwidth, and more attached
server platforms all in a truly redundant storage system.
Nexio Config
The Nexio Config software application provides an interface for setting configurations on your Nexio
devices. When Nexio Config opens, it displays network and system information that it reads from your
system on startup. Nexio Config reads your system and sets up the LLM so that it displays the correct
configuration options. Once the Nexio Config Initial Setup routine makes the settings, the LLM can be
started to initialize the storage.
Note. You must use Nexio Config to set up your server system before you can initialize using LLM.
User Interface
The LLM must be running in order for the Nexio server system to operate. Although you must not
uninstall the LLM, you may need to exit and restart the LLM so new settings can take effect.
The LLM can take anywhere from 20 seconds to more than a minute to fully launch. No other
applications should be launched while it starts up and connects to the Nexio network and storage
system.
Nodes Pane
The LLM UI Nodes pane displays an icon for each Nexio device connected to the network. Labels identify
each node by number and tell whether the node is local or remote. Node icons are also color-coded for
easy identification. Each node must have a unique number. This number is set when the system is
commissioned. When the LLM begins to launch, it checks all node numbers on the network. If the LLM
discovers that its assigned number is already in use by another node, the LLM will not launch.
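The launch-time uniqueness check described above can be sketched as follows. This is an illustration only; the function name and node numbers are hypothetical and not part of the LLM software.

```python
def can_launch(local_node: int, network_nodes: set[int]) -> bool:
    """Return False if the local node number is already claimed by
    another node on the network; the LLM refuses to launch in that case."""
    return local_node not in network_nodes

# A node commissioned as number 3 cannot launch if node 3 is already present.
print(can_launch(3, {1, 2, 3}))  # False
print(can_launch(4, {1, 2, 3}))  # True
```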
Cyan monitor with red display: indicates a node attached to the local device.
Gray monitor with blue display: indicates a remote node.
The Nodes pane reacts a little differently in the Media Host architecture. On client Nexio platforms, you
will only see the local node number in the node pane. On the Media Host platforms, you will see the
local node, the other Media Host's node, and the client nodes connecting through the local Media Host.
Note: To view parameters for a physical disk, right-click on the disk icon and use the context menu.
The Logical Disks pane displays one icon for each logical disk set.
Note: To view details about a logical disk, right-click on the icon and select Properties.
A ghost-disk icon (disk outline) displays when RAID-60 is set up properly on a Farad Gen-2.
Mirrored drive sets must be numbered as shown here. The drive letters must be the same, and one of
the disk labels must show an asterisk (*). If the asterisk is missing or if the icons show two different
letters (such as D and E) the system is running with two separate volumes each containing unique
material. In this case, one of the data sets will not be protected from a major system fault.
Note that the top part of the context menu is grayed out. These items are mostly used for administrative
functions. To enable these menu items you must log in to the LLM. See Logging In.
Logging In
You must log into the LLM any time you need to use the administration functions listed on the Logical
Disks Context Menu. In general, these functions allow you to manage your RAID sets.
To log onto LLM, right-click on a disk icon in the Logical Disks pane to display the Properties window.
Click on the Security tab, enter your LLM password and click LogOn. The default password is LEITCH.
To log off the LLM UI, right-click on a disk icon to display the Properties window. Click on the Security
tab and click LogOff.
TCP IP Address – The primary network address that carries the SAN coordination messaging. Each device
should have a unique network address on your system. Nexio systems typically ship with network
addresses in the 192.168.90.xxx subnet, as this is a private (non-routable) address range. In the case of an HDI Internal Storage
server or a network client platform in a Media Host architecture, the network address may display
000.000.000.000. This is normal and expected of these special architectures. (There is a failover address
set up by NXconfig that does not display here.)
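These two address conventions can be checked programmatically. The sketch below uses Python's standard ipaddress module; the helper names are illustrative, not part of any Nexio tool.

```python
import ipaddress

def is_default_nexio_subnet(addr: str) -> bool:
    """Check whether an address falls in the 192.168.90.0/24 range
    that Nexio systems typically ship with."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("192.168.90.0/24")

def is_unconfigured(addr: str) -> bool:
    """The all-zeros address displayed by HDI Internal Storage servers and
    Media Host network clients (normal and expected for those architectures)."""
    return ipaddress.ip_address(addr) == ipaddress.ip_address("0.0.0.0")

print(is_default_nexio_subnet("192.168.90.12"))  # True
print(is_unconfigured("0.0.0.0"))                # True
```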
Link Inits – The number of times the LLM has established a new connection to other LLM nodes. Each
time an LLM on a node is stopped and restarted, the Link Inits value is reset to 1, and the Link Init counts
on the other nodes will increase by 1. If the Link Init value spuriously increases at other times, it may
indicate a network adapter problem or an issue with the network infrastructure.
Best Practice. No Ethernet network is perfect. Therefore, it is normal and expected that retries will
occasionally increment as the LLM responds to short term network disruptions. You should benchmark
your system when it is new and record the number of retries that occur in a 24 hour period. If the
number of retries increases over time, the network should be analyzed for bad connections.
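The benchmarking best practice above amounts to a simple comparison against the baseline you recorded when the system was new. The sketch below is illustrative; the tolerance factor is an assumption, not a documented threshold.

```python
def retries_regressed(baseline_24h: int, current_24h: int,
                      tolerance: float = 2.0) -> bool:
    """Flag the network for analysis when the daily retry count grows
    well beyond the benchmark recorded when the system was new.
    The 2x tolerance is a hypothetical working value, not a Nexio spec."""
    return current_24h > baseline_24h * tolerance

print(retries_regressed(baseline_24h=40, current_24h=50))   # False: normal drift
print(retries_regressed(baseline_24h=40, current_24h=200))  # True: check connections
```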
The General tab displays information about the logical disk including how much of the storage is in use
according to the estimated time of media stored on the RAID set and how much is still available. This
estimate is based on the current configuration of the Nexio platform’s first channel or, in the case of a
platform that is not a video server, the configuration of Codec 0 in the LLM’s registry. The information
regarding Used Segments and Free Segments is a similar report of available storage. In this case,
segments refer to the number of discrete files stored on the RAID set.
To view physical disk parameters for a RAID set, right-click on a Logical Disk icon and navigate to
Parameters > Information. The Information tab displays icons for the physical drives in the RAID set.
On the Information tab, drives with cyan blue icons are the primary data drives. Drives with yellow icons
are the parity drives which provide the RAID protection. If a drive has failed or is in some other way
inaccessible to the LLM, it will appear with a red slashed circle over the icon.
Right-click on a physical disk icon and select Identification to display information about the selected
physical disk.
The numbers along the bottom of the graph (X-axis) represent drive numbers. The numbers along the
left side of the graph (Y-axis) represent the number of errors. Read errors are represented by a cyan
blue bar and Write errors are represented by a magenta pink bar.
Note. The top number on the Y-axis represents the highest number of errors that has occurred on any
drive. Notice that the top number starts at 1, meaning that just one error will display a full size bar. You
should always view the actual number displayed at the top of the Y-axis to determine the number of
errors shown on the chart.
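The auto-scaling behavior of the Y-axis can be illustrated with a short sketch. The function name is hypothetical; it only models how a bar's height relates to the axis top.

```python
def bar_height_fraction(errors: int, axis_top: int) -> float:
    """Bar height as a fraction of full scale. The Y-axis top is the highest
    error count on any drive (minimum 1), so a single error can fill the chart."""
    axis_top = max(axis_top, 1)
    return errors / axis_top

# With only one error in the whole system, that drive shows a full-size bar.
print(bar_height_fraction(1, axis_top=1))   # 1.0
# The same single error becomes a small bar once another drive has 50 errors.
print(bar_height_fraction(1, axis_top=50))  # 0.02
```

This is why the note recommends always reading the actual number at the top of the Y-axis rather than judging severity from bar size alone.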
Disk numbers on the Error Chart are not always numbered sequentially. When systems are newly
commissioned, the disk numbers are sequential. However, once a system has a few disk replacements,
the numbers on the Error Chart display out of order.
To display error history for a specific drive, navigate to the Properties > Information tab, right-click on
the drive icon and select Errors. A pop-up displays Read/Write errors for the selected disk.
The Codec Control window displays video server status and diagnostic information. To open Codec
Control window, double-click the Local device icon on the Nodes pane.
Buffer Usage
Buffer Usage shows the system pre-fetch and record cache buffer usage. This is a peak hold indicator.
Use the Clear button to view the current reading. The numbers are displayed as X/Y.
X is the sum of the number of buffers used to hold pre-fetched read data prior to decompression
plus the write data after compression but prior to storage commit.
Y is the total number of buffers available. The quantity of available buffers (the Y value) depends on
machine settings and the quantity of installed memory.
Tip. If you find that the X value exceeds Y, the common cause is a settings error or data delivery issue. To
troubleshoot this condition you should first run the Nexio Config software to verify that the settings are
as expected then apply these settings. If the condition persists, the data delivery subsystem is at fault.
The condition can be caused by factors external to the node, such as a failing Fibre Channel
infrastructure component or a slow network.
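The X/Y interpretation in the tip above can be sketched as follows. The function and messages are illustrative only; the LLM does not expose such an API.

```python
def buffer_status(x_used: int, y_total: int) -> str:
    """Interpret the peak-hold X/Y reading from the Buffer Usage display.
    X is pre-fetched read buffers plus compressed write buffers awaiting
    storage commit; Y is the total buffers available on this machine."""
    if x_used > y_total:
        return "check settings with Nexio Config, then suspect data delivery"
    return "normal"

print(buffer_status(12, 64))  # normal
print(buffer_status(70, 64))  # check settings with Nexio Config, then suspect data delivery
```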
Special
The Special area of the Codec Controls window allows you to write the Rate Log and save the video
content displayed on the lowest codec channel as a special file for review by the factory.
Drive Activity
The drive activity section of the Codec Controls window shows event messages for the RAID set.
Control              Definition
Read Activity        Shows data transfer activity for read operations.
Write Activity       Shows data transfer activity for write operations.
Xor Read Activity    Shows the speed of the parity calculation process.
Xor Write Activity   Shows the speed of the parity calculation process.
Errors               Shows error messages returned by the LLM since the last restart of the LLM.
Use the Next button to cycle through the errors. The message buffer keeps track of the first few
reported messages. For a complete log of all error messages, review the contents of the error log file
located on your Nexio server at C:\VR\Logs\LLM\errlog.txt.
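Since the on-screen buffer keeps only the first few messages, reviewing the log file is often necessary. A minimal sketch for pulling recent lines from it is shown below; the log is assumed to be plain text with one message per line, which is an assumption about the format, not documented behavior.

```python
from pathlib import Path

# The error log location given in the manual.
LOG_PATH = Path(r"C:\VR\Logs\LLM\errlog.txt")

def tail_errors(path: Path, count: int = 10) -> list[str]:
    """Return the most recent error lines from the LLM error log.
    Assumes a plain-text log, one message per line."""
    if not path.exists():
        return []
    lines = path.read_text(errors="replace").splitlines()
    return lines[-count:]
```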
Connections
The Connections area of the Codec Controls window shows the number of active control connections.
Control    Definition
Serial     Number of RS422 ports that are receiving control streams from an external device.
Timeline   The number of LLM connections to a timeline playback service on TCP port 559. Normally 1 per playback channel when the service is available.
Tcp        The number of controllers that have opened up a connection to the LLM hosted TCP port 557.
Udp        The number of controllers that have sent messages to LLM hosted UDP port 331 since the LLM was started.
Vmgr       The number of management processes (internal or external) attached to LLM hosted TCP port 555.
Software RAID
HDI, Direct Connect, and Switched Fibre Channel devices employ “software RAID.” This means that the
distribution of data across multiple drives is managed by system software. For these Nexio devices, the
LLM handles the RAID technology including drive configuration, rebuilds, parity and reading and writing.
If you have one of these devices, you must set your drive configurations using the LLM.
Hardware RAID
Farad devices employ “hardware RAID.” This means that the distribution of data across multiple drives is
managed by the Farad controller. In this case, the LLM is aware of the Farad RAID drives but it does not
control them. You won’t set drive configurations using the LLM for a Farad device because the Farad
system handles it automatically.
Examples
Creating RAID Sets – HDI
This example shows how to create a RAID-6 RAID set on a Nexio 3801HDI device.
The configuration chart shows that the HDI device can be set up with 6 data drives and 2 parity drives.
Select all of the physical disks and click on Initialize to display the Initialize New Disk window.
Icons for all 8 drives display in the Selected Physical Drives pane.
In most cases, the correct initialization settings display. If your system requires changes, make them now
and then click OK.
After the software initializes, the LLM UI shows an icon in the Logical Disks pane and no icons in the
Physical Disks pane. This indicates a single RAID set (logical disk). To verify your RAID set, right-click the
icon and select Properties > Information. The Information tab shows six data drives and two parity
drives.
3. Check the X-Allocation box, indicate the Max Disks value and click OK.
When the LLM comes back up after re-launch, it displays a Logical Disk icon representing the RAID set
you just created. It also displays the outline of a Logical Disk icon. This “ghost” disk indicates that your
Farad RAID-60 configuration is okay, and that you can add more drives.
Registry Settings
You must set the Mindisks and MaxParity values in the registry of your local node Nexio device in order
to initialize your RAID sets.
Use RegEdit to set Mindisks and MaxParity values at the following locations:
HKCU\Software\ASC AUDIO Video\LLM\Parameters\Mindisks
HKCU\Software\ASC AUDIO Video\LLM\Parameters\MaxParity
Mindisks is the total number of drives configured into your logical disk RAID set. To come up with this
value, add the number of data disks to the number of parity disks configured in the logical disk
RAID set. Do not count disks that are designated as hot spares.
MaxParity is the number of parity drives configured into your logical disk RAID set.
Note. These values must match the drive numbers configured into your LLM logical disk array.
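The arithmetic behind the two registry values can be sketched as follows, using the RAID-6 HDI example from earlier in this manual (6 data drives plus 2 parity drives). The function name is illustrative.

```python
def registry_values(data_disks: int, parity_disks: int) -> dict:
    """Mindisks counts data plus parity drives (hot spares excluded);
    MaxParity is the parity drive count alone."""
    return {"Mindisks": data_disks + parity_disks, "MaxParity": parity_disks}

# The RAID-6 HDI example: 6 data drives + 2 parity drives.
print(registry_values(6, 2))  # {'Mindisks': 8, 'MaxParity': 2}
```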
This example shows how to add two RAID sets (023 and 024) to logical disk D0 and another two RAID
sets (031 and 032) to logical disk D1.
1. Right-click on logical disk icon D0 and select Expand.
2. The Expand Disk window displays showing a list of disks in the Selected disks to add pane. This list
shows the disks that are available for expanding the selected logical drive.
3. Select the disks you want to add to logical disk D0 and click the top <= arrow button next to the D0
display. The selected disks will move to the list under D0.
4. Select the disks you want to add to logical disk D1 and click the top <= arrow button next to the D1
display. The selected disks will move to the list under D1.
5. Click OK.
The LLM automatically expands the new disks into the D0 and D1 logical disks. You can monitor the
expansion process by observing the progress bar on the LLM UI.
This update process takes 15 minutes or less. Do not interrupt the process once it begins.
Be sure the drive really needs to be replaced (see error messages), and make sure you have correctly
identified the drive that needs to be replaced.
1. Right-click on the RAID set (logical disk) icon that contains the problem drive and select Properties >
Information.
2. Right-click on the icon for the problem drive and select Prepare Drive for removal.
When the drive is ready for removal, the LLM UI displays the Logical Disk icon showing an error status.
3. Remove the failed drive from the array. The icon representing the failed drive should disappear from
the Physical Disks pane.
4. Insert a new drive into the drive slot of the array and allow it to spin up.
The LLM UI displays the Logical Disk icon still showing an error status. Even though the disk has been
replaced, you must restore the missing data before the Logical Disk icon indicates a normal state. To
restore the new drive, you must use the Rebuild command. See Rebuilding a RAID Set.
Note. Before you can start the rebuild process, you must log into the LLM so you can access the Logical
Disks Context Menu. Also, you cannot perform the rebuild process from a client device.
Examples
Rebuilding a RAID Set – HDI
To start a rebuild, right-click on the Logical Disk icon showing the error status and select Start Rebuild.
You can monitor the rebuild process by watching the progress bar at the bottom of the LLM UI.
When the Rebuild process is complete, the Logical Disk icon shows a normal state.
The rebuild process does not apply to Farad units for physical drive replacements.
You can restore an entire RAID set to an earlier point in time. This process requires you to do the
following:
Save a copy of the LLM’s logical disk sets Frame Allocation Table (FAT) to the local system drive in a
snapshot file.
Apply the snapshot file to the device by executing the Restore command from the LLM.
4. Enter savedisk=3600, 10 into the Value data field and click OK.
This specifies that 10 snapshot files will be saved every 3600 seconds (every hour). These files will be
written into the C:\VR folder on the Farad device and will have a .dat file extension.
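The savedisk value and its retention behavior can be modeled with a short sketch. The parsing and rotation logic below is an illustration of the "3600, 10" semantics described above, not the LLM's actual implementation; the sortable-filename assumption is mine.

```python
def snapshot_schedule(savedisk_value: str) -> tuple[int, int]:
    """Parse a 'savedisk' value such as '3600, 10': write a snapshot every
    3600 seconds and keep the 10 most recent .dat files."""
    interval, keep = (int(part.strip()) for part in savedisk_value.split(","))
    return interval, keep

def files_to_delete(snapshots: list[str], keep: int) -> list[str]:
    """Oldest snapshots beyond the retention count, assuming names sort
    chronologically (a hypothetical naming convention)."""
    return sorted(snapshots)[:-keep] if len(snapshots) > keep else []

print(snapshot_schedule("3600, 10"))  # (3600, 10)
print(files_to_delete(["s1.dat", "s2.dat", "s3.dat"], keep=2))  # ['s1.dat']
```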
1. Right-click on the logical disk icon and select Restore. A pop-up displays asking if you want to restore
the logical disk.
2. Click OK to continue. The Snapshot file list window displays showing a list of snapshot files.
3. Double-click on the snapshot file that you want to restore. A Restore Completed pop-up will display
when the snapshot file has been restored.
4. Click OK to complete the process.
Troubleshooting
Observing Error Conditions
Logical disk icons indicate logical disk array problems under certain conditions.
The yellow disk icon indicates that the logical disk array has problems, however media stored on the
array may still be available. The yellow disk icon displays in these situations:
When you start the LLM with a missing drive.
When you use the LLM Prepare drive for removal function.
The red disk icon indicates that the logical disk is offline and the media stored on the array is not
available. The red disk icon appears when you start the LLM.
The yellow disk icon appears in the system tray automatically when the number of drive errors is too
high over a period of time.
The total number of Read/Write errors that have occurred for the selected drive appear in parentheses.
These errors relate to the local server operation from the time the LLM was started. The last error
experienced by the disk displays as a Sense code in the following format.
x [yy] [zz]
x    Sense key
yy   Additional sense code (ASC)
zz   Additional sense code qualifier (ASCQ)
If the Sense Code is 0 [00] [00], the drive is experiencing a communication problem.
If the Sense code shows any value other than 0 [00] [00], the drive is experiencing a physical
problem.
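The communication-versus-physical distinction can be sketched as a small parser. This is an illustrative helper, not part of the LLM; the hexadecimal interpretation of the fields follows SCSI sense-code convention and is an assumption here.

```python
import re

def classify_sense(code: str) -> str:
    """Classify an LLM sense code of the form 'x [yy] [zz]'.
    An all-zero code indicates a communication problem; any other
    value indicates a physical problem with the drive."""
    m = re.fullmatch(r"(\w+) \[(\w+)\] \[(\w+)\]", code.strip())
    if not m:
        raise ValueError(f"unrecognized sense code: {code!r}")
    key, asc, ascq = m.groups()
    if int(key, 16) == 0 and int(asc, 16) == 0 and int(ascq, 16) == 0:
        return "communication problem"
    return "physical problem"

print(classify_sense("0 [00] [00]"))  # communication problem
print(classify_sense("3 [11] [00]"))  # physical problem
```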
Drives can show the circle-slash sign for any of these reasons.
The drive has a mechanical failure.
The drive has been removed from the RAID set using the Prepare drive for removal command.
The drive was missing when the LLM was launched.
Use the process of elimination. If you can determine the ID for all other drives in the RAID set plus the
hot spare, the remaining ID will be the circle-slash drive.
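The process of elimination is a simple set difference. The sketch below uses made-up disk IDs for illustration.

```python
def find_circle_slash_disk(all_ids: set, identified_ids: set):
    """If every other drive ID in the RAID set (plus the hot spare) can be
    identified, the single remaining ID belongs to the circle-slash drive."""
    remaining = all_ids - identified_ids
    return remaining.pop() if len(remaining) == 1 else None

# Hypothetical IDs: three drives respond to Identification, one does not.
ids_in_set = {"D1001", "D1002", "D1003", "D1004"}
identified = {"D1001", "D1002", "D1004"}
print(find_circle_slash_disk(ids_in_set, identified))  # D1003
```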
You may need to identify a physical disk drive that corresponds to a logical disk as it appears in the LLM.
Locating a Disk ID
To locate a disk drive’s ID, right-click on the Logical Disk icon in the LLM UI and select Properties.
On the Information tab, the Physical Drive List pane shows icons for the physical drives within the
logical disk. The Disk ID displays under each icon.
Right-click on a physical disk icon and select Identification from the context menu.
The Physical Disk Identification pop-up displays. At the same time, the LED for the selected drive lights
up on the server array.
Frame and Slot indicates the enclosure and the slot number.
Serial Number matches the serial number on the physical disk drive. (Only the last 5 digits display on
the physical drive.)
Each RAID set can contain only one hot spare. So the maximum number of hot spare drives is equal to
the number of RAID sets in the system.
This example shows a 16 disk device. If you select and configure 15 drives into a RAID set, the remaining
disk becomes a hot spare.
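The spare count follows directly from the rules above: unassigned drives become hot spares, and each RAID set can use at most one. A sketch (function name illustrative):

```python
def hot_spare_count(total_disks: int, raid_set_sizes: list[int]) -> int:
    """Drives not configured into any RAID set become hot spares;
    at most one spare serves each RAID set."""
    unassigned = total_disks - sum(raid_set_sizes)
    return min(unassigned, len(raid_set_sizes))

# The 16-disk example: 15 drives in one RAID set leaves one hot spare.
print(hot_spare_count(16, [15]))  # 1
# Two RAID sets of 8 on the same 16 disks leave no spares.
print(hot_spare_count(16, [8, 8]))  # 0
```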
Reseating a Drive
Follow these steps to reseat a drive.
1. Display the Physical Disk pop-up for the disk you want to reseat.
Right-click on the Logical Disk icon and select Properties > Information.
Right-click on the icon for the drive you are reseating and select Identification.
The LED indicator for the problem drive should be on the entire time the Physical Disk pop-up is open.
2. Pull the problem drive out from the chassis about 2 inches.
3. Push the drive firmly back into the slot until it seats.
Confirm that the drive has resumed normal operations by watching the front of the array. The LED of
the drive should flash for 10-15 seconds and then become solid green. If any operations are in progress
while the drive is being reseated, the reseated drive’s activity light will flash in tandem with the
other drives.