PowerEdge M1000e – Administration and Configuration


A blade server is a server chassis housing multiple thin, modular electronic circuit boards known as server blades. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA), and other input/output (I/O) ports. Each blade is a server in its own right, often dedicated to a single application. Blade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption. Like most clustering applications, blade servers can also be managed to include load balancing and failover capabilities.

Blade servers are implemented in:
• Virtualization environments
• SAN applications (Exchange, database)
• High performance cluster and grid environments
• Front-end applications (web apps/Citrix/terminal services)
• File sharing access
• Web page serving and caching
• SSL encryption of Web communication
• Audio and video streaming

The PowerEdge M1000e solution is designed for the following markets:
• Corporate
• Public
• Small/Medium Business (SMB)

These are customers that require high performance, high availability, and manageability in the most rack-dense form factor.

The PowerEdge M1000e Modular Server Enclosure solution is a fully modular blade enclosure optimized for use with all Dell M-series blades. The PowerEdge M1000e supports server modules; network, storage, and cluster interconnect modules (switches and pass-through modules); a high performance and highly available passive midplane that connects server modules to the infrastructure components; power supplies; fans; integrated KVM (iKVM); and Chassis Management Controllers (CMC). The PowerEdge M1000e uses redundant and hot-pluggable components throughout to provide maximum uptime. These technologies are packed into a highly available, rack-dense package that integrates into standard Dell or 3rd-party 1000mm depth racks.

Dell provides complete, scale-on-demand switch designs. With additional I/O slots and switch options, you have the flexibility you need to meet increasing demands for I/O consumption. Plus, Dell's FlexIO modular switch technology lets you easily scale to provide additional uplink and stacking functionality, with no need to waste your current investment on a "rip and replace" upgrade.

The M1000e offers flexibility and scalability to maximize TCO: ultra-dense servers, easy to deploy and manage, while minimizing energy and cooling consumption. It is best for environments needing to consolidate computing resources to maximize efficiency.

Note: The PowerEdge M1000e chassis was created as a replacement for the 1855/1955 chassis unit. The existing 8/9G blades will not fit or run in the 10G chassis, nor will the 10G blades fit or run in the 8/9G chassis.

IT IS ALL ABOUT EFFICIENCY. The M1000e is designed to support future generations of blade technologies regardless of processor/chipset architecture. Built from the ground up on Dell's Energy Smart technology, the M1000e can help customers increase capacity, lower operating costs, and deliver better performance per watt, making it one of the most energy efficient, flexible, and manageable blade server products on the market. The key areas of interest are power delivery and power management.

The PowerEdge M1000e introduces an advanced power budgeting feature, controlled by the CMC and negotiated in conjunction with the Integrated Dell Remote Access Controller (iDRAC) on every server module. Prior to any server module powering up, the server module iDRAC performs a power budget inventory for the server module, based upon its configuration of CPUs, memory, I/O, and local storage. Once this number is generated, the iDRAC communicates the power budget to the CMC, which confirms the availability of power from the system level, based upon a total chassis power inventory. Since the CMC controls when every modular system element powers on, it can now set power policies on a system level, and there is no danger of exceeding power capacity availability.

In coordination with the CMC, iDRAC hardware constantly monitors actual power consumption at each server module. This power measurement is used locally by the server module to ensure that its instantaneous power consumption never exceeds the budgeted amount. No longer is the system "flying blind" in regards to power consumption. While the system administrator may never notice these features in action, what they enable is a more aggressive utilization of the shared system power resources, which could result in a spontaneous activation of power supply overcurrent protection without these features. Shared power takes advantage of the large number of resources in the modular server, distributing power across the system without the excess margin required in dedicated rack-mount servers and switches.

The cooling strategy for the PowerEdge M1000e supports a low-impedance, high-efficiency design. Lower airflow impedance allows the system to draw air through the system at a lower operating pressure than competitive systems, reducing the required airflow consumption of each module. The lower backpressure reduces the system fan power consumed to meet the airflow requirements of the system, which correlates directly into power and cost savings. This high-efficiency design philosophy also extends into the layout of the subsystems within the PowerEdge M1000e. The server modules, I/O modules, iKVM, CMC, fans, and power supplies are incorporated into the system with independent airflow paths, isolating these components from pre-heated air. This hardware design is coupled with a thermal cooling algorithm that incorporates the following:
• Server module thermal monitoring by the iDRAC
• I/O module thermal health monitors
• CMC monitoring and fan control (throttling)
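The power-budget handshake described above can be sketched roughly as follows. This is a simplified illustration only, not actual iDRAC/CMC firmware; all class names, method names, and wattage figures are hypothetical.

```python
# Simplified sketch of the M1000e power-budget handshake: each blade's iDRAC
# inventories its own configuration, then asks the CMC for that budget before
# powering on. Illustration only; names and numbers are made up.

class CMC:
    """Chassis Management Controller: owns the total chassis power inventory."""
    def __init__(self, chassis_capacity_watts):
        self.capacity = chassis_capacity_watts
        self.allocated = 0

    def request_budget(self, watts):
        # Grant power only if the total chassis inventory will not be exceeded.
        if self.allocated + watts <= self.capacity:
            self.allocated += watts
            return True
        return False

class ServerModuleIDRAC:
    """Per-blade iDRAC: inventories its configuration before power-up."""
    def __init__(self, cpu_w, mem_w, io_w, storage_w):
        self.budget = cpu_w + mem_w + io_w + storage_w

    def power_on(self, cmc):
        # The blade powers on only after the CMC confirms power availability.
        return cmc.request_budget(self.budget)

cmc = CMC(chassis_capacity_watts=2700)
blade = ServerModuleIDRAC(cpu_w=190, mem_w=60, io_w=25, storage_w=20)  # 295 W
print(blade.power_on(cmc))   # True: 295 W fits within the 2700 W inventory
print(cmc.allocated)         # 295
```

The key point the sketch captures is that the CMC, not the individual blade, is the arbiter: a blade never powers on without a system-level confirmation, so the chassis can never overcommit its power supplies.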

Dell's PowerEdge M1000e modular server enclosure delivers major enhancements in management features, minimizing the impact on existing management tools and processes and providing future growth opportunities through standards-based management. The M1000e helps reduce the cost and complexity of managing computing resources so you can focus on growing your business or managing your organization, with features such as:
• Centralized CMC modules for redundant, secure access paths for IT administrators to manage multiple enclosures and blades from a single interface.
• Dynamic and granular power management, so you have the capability to set power thresholds to help ensure your blades operate within your specific power envelope, as well as the ability to prioritize blade slots for power, providing you with optimal control over power resources.
• Real-time reporting for enclosure and blade power consumption.
• One of the only blade solutions with an integrated KVM switch, enabling easy setup and deployment, and seamless integration into an existing KVM infrastructure.

And power is no longer just about power delivery; it is also about power management. The PowerEdge M1000e system adds a number of advanced power management features that operate transparently to the user, while others require only a one-time selection of desired operating modes. Each subsystem has been reviewed and adjusted to optimize efficiency. Dynamic power management provides the capability to set high/low power thresholds to ensure blades operate within your power envelope. The system administrator sets priorities for each server module. The priority works in conjunction with CMC power budgeting and iDRAC power monitoring to ensure that the lowest-priority blades are the first to enter any power optimization mode, should conditions warrant the activation of this feature.
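The priority behavior described above (lowest-priority blades enter power optimization first) can be sketched as a simple selection rule. This is an illustration only, not the CMC's actual algorithm; the blade names, priority scale, and wattages are hypothetical.

```python
# Sketch of priority-driven power optimization: when the chassis must shed
# load, blades with the lowest priority (here, the highest priority number)
# are throttled first. Illustration only; real CMC behavior is firmware-defined.

def blades_to_throttle(blades, watts_to_shed):
    """blades: list of (name, priority, sheddable_watts).
    Lower priority numbers mean higher priority, so the highest numbers
    are selected first until enough power has been shed."""
    shed, chosen = 0, []
    for name, _prio, sheddable in sorted(blades, key=lambda b: b[1], reverse=True):
        if shed >= watts_to_shed:
            break
        chosen.append(name)
        shed += sheddable
    return chosen

blades = [("blade-1", 1, 40), ("blade-2", 5, 40), ("blade-3", 9, 40)]
print(blades_to_throttle(blades, watts_to_shed=60))  # ['blade-3', 'blade-2']
```

Note how blade-1, the highest-priority module, is untouched: throttling stops as soon as the shortfall is covered by lower-priority blades.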

Server blade modules are accessible from the front of the PowerEdge M1000e enclosure. Up to sixteen half-height or eight full-height server modules (or a mixture of the two blade types) are supported. It is also important to note that any empty blade server slots should have filler modules installed to maintain proper airflow through the enclosure.

At the bottom of the enclosure is a flip-out, multiple-angle LCD panel for local systems management configuration, system information, and status. The front of the enclosure also contains two USB connections for a USB keyboard and mouse (only; no USB flash or hard disk drives can be connected), a video connection, and the system power button. The front control panel's USB and video ports work only when the iKVM module is installed, as the iKVM provides the capability to switch the KVM between the blades.

System Control Panel features:
• System Control Panel with LCD panel and two USB keyboard/mouse and one video "crash cart" connections.
• The system power button turns the system on and off. Press to turn on the system; press and hold 10 seconds to turn off the system.

Caution: The system power button controls power to all of the blades and I/O modules in the enclosure.

Not visibly obvious, but important nonetheless, are fresh air plenums at both the top and bottom of the chassis. The top fresh air plenum provides non-preheated air to the CMC, iKVM, and I/O modules. The bottom fresh air plenum provides non-preheated air to the power supplies.

The high-speed midplane is completely passive, with no hidden stacking midplanes or interposers with active components. The midplane provides connectivity for I/O fabric networking and storage, plus broad management capabilities including private Ethernet, serial, USB, interprocess communications, and low-level management connectivity between the CMC, iKVM switch, I/O modules, and server modules. Finally, the midplane encompasses a unique design in that it uses female connectors instead of male connectors: in case of bent pins, only the related module needs to be replaced; the midplane does not.

The back panel of the M1000e enclosure supports:
• Up to six I/O modules for three redundant fabrics. Available switches include Dell and Cisco® 1Gb/10Gb Ethernet with modular bays, Dell 10Gb Ethernet with modular bays, Dell Ethernet pass-through, Mellanox® DDR & QDR InfiniBand, Brocade® 4Gb Fibre Channel, Brocade® 8Gb Fibre Channel, and Fibre Channel pass-through.
• One or two (redundant) CMC modules, which include high-performance, Ethernet-based management connectivity via the CMC and thorough power management capabilities, including delivering shared power to ensure the full capacity of the power supplies is available to all modules.
• An optional iKVM module.
• Choice of three or six hot-pluggable power supplies.
• Nine fan modules with N+1 redundancy, all standard.
• All back-panel modules are hot-pluggable.

The CMC provides multiple systems management functions for your modular server, including the M1000e enclosure's network and security settings, I/O module and iDRAC network settings, and power redundancy and power ceiling settings.

It should be noted that chassis management and monitoring on previous blade systems (1855/1955) was done using a DRAC installed directly into the chassis; the DRAC would then offer connectivity to the blades one by one. The PowerEdge M1000e blade server solution instead uses a CMC to manage and monitor the chassis, and each server module has its own onboard iDRAC chip. The iDRAC offers features in line with the DRAC 5 and allows remote control and virtual media use.

The optional Avocent iKVM analog switch module provides connections for a keyboard, video (monitor), and mouse; this was provided by the KVM on previous models of the blade systems. The iKVM is now offered only as an option, as many customers do not access the blade servers locally. The iKVM can also be accessed from the front of the enclosure, providing front or rear panel KVM functionality, but not at the same time. You can use the iKVM to access the CMC. For enhanced security, front panel access can be disabled using the CMC's interface.

The M1000e server solution offers a holistic management solution designed to fit into any customer data center. It features:
• Dual redundant Chassis Management Controllers (CMC)
  o Powerful management for the entire enclosure
  o Includes real-time power management and monitoring, flexible security, and status/inventory/alerting for blades, I/O, and chassis
• iDRAC
  o One per blade, with full DRAC functionality like other Dell servers, including vMedia/KVM
  o Integrates into the CMC or can be used separately
• iKVM
  o Embedded in the chassis for easy KVM infrastructure incorporation, allowing one admin per blade
  o Control Panel on the front of the M1000e for crash cart access
• Front LCD
  o Designed for deployment and local status reporting

Management connections transfer health and control traffic throughout the chassis. The system management fabric is architected for 100BaseT Ethernet over differential pairs routed to each module, and all system management Ethernet is routed for 100 Mbps signaling. Every module has a management network link to each CMC, with redundancy provided at the module level. There are two 100BaseT interfaces between CMCs, one switched and one unswitched. Failure of any individual link will cause failover to the redundant CMC.
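The redundant management-link behavior described above, where every module keeps a link to each CMC and link failure triggers failover, can be sketched as a tiny selection rule. This is an illustration only; the actual CMC election logic is firmware-defined, and the naming here is hypothetical.

```python
# Sketch of redundant CMC selection: a module prefers its link to the active
# CMC and fails over to the standby when that link is lost. Illustration only.

def active_cmc(link_to_cmc1_up, link_to_cmc2_up):
    # Prefer CMC1; fail over to CMC2 when the CMC1 link is down.
    if link_to_cmc1_up:
        return "CMC1"
    if link_to_cmc2_up:
        return "CMC2"
    return None  # no management path at all

print(active_cmc(True, True))    # CMC1
print(active_cmc(False, True))   # CMC2: failover to the redundant CMC
```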

The FlexAddress™ feature is an optional upgrade introduced in CMC 1.1 that allows server modules to replace the factory-assigned World Wide Name and Media Access Control (WWN/MAC) network IDs with WWN/MAC IDs provided by the chassis. Every server module is assigned unique WWN and MAC IDs as part of the manufacturing process. Before the FlexAddress feature was introduced, if you had to replace one server module with another, the WWN/MAC IDs would change, and Ethernet network management tools and SAN resources would need to be reconfigured to be aware of the new server module. FlexAddress allows the CMC to assign WWN/MAC IDs to a particular slot and override the factory IDs. If the server module is replaced, the slot-based WWN/MAC ID remains the same. This feature eliminates the need to reconfigure Ethernet network management tools and SAN resources for a new server module. Additionally, the override action only occurs when a server module is inserted in a FlexAddress-enabled chassis; no permanent changes are made to the server module. If a server module is moved to a chassis that does not support FlexAddress, the factory-assigned WWN/MAC IDs are used.
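The selection rule described above, slot-based IDs in a FlexAddress-enabled chassis and factory IDs everywhere else, can be sketched in a few lines. This is an illustration only; the MAC addresses below are made up, and the real override happens in chassis firmware, not software like this.

```python
# Sketch of the FlexAddress selection rule: a blade presents the
# chassis-assigned (slot-based) MAC/WWN only when inserted in a
# FlexAddress-enabled chassis; otherwise it falls back to its factory IDs.
# The override is non-permanent: nothing on the blade itself changes.

def effective_mac(factory_mac, slot_mac, chassis_flexaddress_enabled):
    return slot_mac if chassis_flexaddress_enabled else factory_mac

factory = "00:11:22:AA:BB:01"   # burned in during manufacturing
slot3 = "00:11:22:FF:00:03"     # owned by chassis slot 3

print(effective_mac(factory, slot3, chassis_flexaddress_enabled=True))   # slot MAC
print(effective_mac(factory, slot3, chassis_flexaddress_enabled=False))  # factory MAC
```

Because the slot, not the blade, owns the address in a FlexAddress chassis, swapping a blade leaves every Ethernet and SAN mapping intact.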

Features:
• Locks the World Wide Name (WWN) of the Fibre Channel controller and the Media Access Control (MAC) of the Ethernet and iSCSI controllers to a blade slot, instead of to the blade's hardware.
• Service or replace a blade or I/O mezzanine card and maintain all address mapping to Ethernet and storage fabrics.
• Easy and highly reliable booting from Ethernet or Fibre Channel based Storage Area Networks (SANs).
• All MAC/WWN/iSCSI IDs in the chassis never change.
• The FlexAddress SD card comes with a unique pool of MAC/WWNs and can be enabled on a single enclosure at a given time, until disabled.
• Works with all I/O modules, including Cisco, Brocade, and Dell PowerConnect switches, as well as pass-through modules.

Benefits:
• Easily replace blades without network management effort.
• Ease of management: an almost no-touch blade replacement.
• Fewer future address-name headaches.
• No need to learn a new management tool.
• Low cost versus switch-based solutions.
• Fast and efficient integration into existing network infrastructure; FlexAddress is simple and easy to implement.
• Simple and quick to deploy; no need for the user to configure.
• No risk of duplicates on your network or SAN.
• Choice is independent of switch or pass-through module.

The M1000e enclosure uses three layers of I/O fabric to connect the server module with the I/O module via the midplane. Each M-series server module connects to traditional network topologies; currently, these network topologies include Ethernet, Fibre Channel, and InfiniBand. Up to six hot-swappable I/O modules can be installed within the enclosure. The I/O modules include Fibre Channel switch and pass-through modules, InfiniBand switches, and 1 GbE and 10 GbE Ethernet switch and pass-through modules.

The six I/O slots are classified as Fabrics A, B, or C. Each fabric contains two slots, numbered 1 and 2, resulting in A1 and A2, B1 and B2, and finally C1 and C2. Each "1" and "2" relates to the ports found on the server-side I/O cards (LOM or mezzanine cards). Fabric A connects to the hardwired LAN-on-Motherboard (LOM) interface; only Ethernet pass-through or switch modules may be installed in Fabric A. Fabrics B and C are 1 to 10 Gb/sec dual-port, quad-lane redundant fabrics which allow higher-bandwidth I/O technologies and can support Ethernet, Fibre Channel, and InfiniBand. To communicate with an I/O module in the Fabric B or C slots, a blade must have a matching mezzanine card installed in the Fabric B or C mezzanine card location; an optional mezzanine card can be installed in one of the available Fabric B or C mezzanine slots located on the motherboard.

Fabrics B and C are similar in design and can be used independently of each other: for example, either 1 GbE, 10 GbE, Fibre Channel, or InfiniBand can be installed in Fabric B, and any one of the other types in Fabric C. There is no interdependency among the three fabrics; the choice for one fabric does not restrict, limit, or depend on the choice of any other fabric. Also, GbE I/O modules that would be used in Fabric A may also be installed in the Fabric B or C slots, provided a matching GbE mezzanine card is installed in that same fabric. In summary, the only mandate is that Fabric A is always a GbE LOM.
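The fabric-matching rules above reduce to two checks: Fabric A must be Ethernet, and an I/O module in Fabric B or C is usable only when the blade carries a mezzanine card of the same fabric type. The sketch below illustrates this rule; it is not an actual Dell validation tool, and the type strings are hypothetical.

```python
# Sketch of the M1000e fabric-matching rule: Fabric A is hardwired to the
# Ethernet LOM, while Fabrics B and C require a mezzanine card matching the
# installed I/O module's fabric type. Illustration only.

def fabric_ok(fabric, io_module_type, mezzanine_type=None):
    if fabric == "A":
        # Fabric A is always the Gigabit Ethernet LOM.
        return io_module_type == "ethernet"
    if fabric in ("B", "C"):
        # The blade's mezzanine card must match the I/O module's fabric type.
        return mezzanine_type == io_module_type
    return False

print(fabric_ok("A", "ethernet"))                        # True
print(fabric_ok("A", "fibre-channel"))                   # False: FC not allowed in A
print(fabric_ok("B", "fibre-channel", "fibre-channel"))  # True: matched pair
print(fabric_ok("C", "infiniband", "ethernet"))          # False: mezzanine mismatch
```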

To understand the PowerEdge M1000e architecture, it is necessary to first define four key terms: fabric, lane, link, and port.

A fabric is defined as a method of encoding, transporting, and synchronizing data between devices. Examples of fabrics are Gigabit Ethernet (GE), PCIe, Fibre Channel (FC), and InfiniBand (IB). Fabrics are carried inside the PowerEdge M1000e system, between server modules and I/O modules through the midplane. They are also carried to the outside world through the physical copper or optical interfaces on the I/O modules.

A lane is defined as a single fabric data transport path between I/O end devices. In modern high-speed serial interfaces, each lane comprises one transmit and one receive differential pair. In reality, a single lane is four wires in a cable or traces of copper on a printed circuit board: a transmit positive signal, a transmit negative signal, a receive positive signal, and a receive negative signal. Differential pair signaling provides improved noise immunity for these high-speed lanes. Various terminology is used by fabric standards when referring to lanes: PCIe calls this a lane, InfiniBand calls it a physical lane, and Fibre Channel and Ethernet call it a link.

A link is defined here as a collection of multiple fabric lanes used to form a single communication transport path between I/O end devices. Examples are two-, four-, and eight-lane PCIe, or four-lane 10GBASE-KX4. A link as defined here provides synchronization across the multiple lanes, so they effectively act together as a single transport. The differentiation has been made here between lane and link to prevent confusion over Ethernet's use of the term link for both single- and multiple-lane fabric transports. Some fabrics, such as Fibre Channel, do not define links, as they simply run multiple lanes as individual transports for increased bandwidth.

A port is defined as the physical I/O end interface of a device to a link. A port can have single or multiple lanes of fabric I/O connected to it.

Since mezzanine-to-I/O-module connectivity is hardwired yet fully flexible, a user could inadvertently hot-plug a server module with the wrong mezzanine into the system. For instance, if Fibre Channel I/O modules are located in the Fabric C I/O slots, then all server modules must have either no mezzanine in Fabric C or only Fibre Channel cards in Fabric C. If a GE mezzanine card is in a Mezzanine C slot, the system automatically detects this misconfiguration and alerts the user of the error. The PowerEdge M1000e system management hardware and software includes Fabric Consistency Checking, preventing the accidental activation of any misconfigured fabric device on a server module. No damage occurs to the system, and the user has the ability to reconfigure the faulted module.

The iDRAC on each server module calculates the amount of airflow required on an individual server-module level and sends a request to the CMC. This request is based on temperature conditions on the server module, as well as passive requirements due to hardware configuration. Concurrently, each IOM can send a request to the CMC to increase or decrease cooling to the I/O subsystem. The CMC interprets these requests and can control the fans as required to maintain server and I/O module airflow at optimal levels.

• The CMC will automatically raise and lower the fan speed to a setting that is appropriate to keep all modules cool.
• If a single fan is removed while the enclosure is powered on, the remaining fan speeds will increase and create more noise. Re-installation of the fan will cause the rest of the fans to settle back to a quieter state.
• All fans will be set to 50% speed if the enclosure is in Standby mode; in Standby, removal of a single fan is treated like a failure (nothing happens).
• Whenever communication to the CMC or iDRAC is lost, such as during a firmware update, the fans run at full speed. It is rare that fans need to run at full speed, and fans are loud when running at full speed, so please ensure that components are operating properly if fans remain at full speed.
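The fan arbitration described above can be sketched as follows: the CMC collects cooling requests from every blade iDRAC and IOM and drives the fans to satisfy the most demanding one, with the Standby and lost-communication cases as overrides. This is an illustration only; the percentage values and function names are hypothetical, not the CMC's actual control algorithm.

```python
# Sketch of CMC fan-control arbitration. Illustration only: the CMC cools to
# the highest requested level, runs at 50% in Standby, and falls back to full
# speed when communication with a requester is lost (e.g. firmware update).

STANDBY_SPEED = 50   # all fans at 50% when the enclosure is in Standby mode
FULL_SPEED = 100     # fallback when CMC/iDRAC communication is lost

def fan_speed(requests, enclosure_on, all_links_healthy):
    """requests: airflow requests (percent) from blade iDRACs and IOMs."""
    if not enclosure_on:
        return STANDBY_SPEED
    if not all_links_healthy:
        # Lost communication: run flat out rather than risk overheating.
        return FULL_SPEED
    # Cool to the most demanding module's request.
    return max(requests)

print(fan_speed([35, 60, 40], enclosure_on=True, all_links_healthy=True))   # 60
print(fan_speed([35, 60, 40], enclosure_on=True, all_links_healthy=False))  # 100
print(fan_speed([35, 60, 40], enclosure_on=False, all_links_healthy=True))  # 50
```

Taking the maximum of all requests is what lets a single hot blade or IOM raise chassis cooling without the administrator intervening.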

Flexible and scalable, the PowerEdge M1000e is designed to support future generations of blade technologies regardless of processor or chipset architecture. The M1000e has these advantages:
• The M1000e blade enclosure helps reduce the cost and complexity of managing computing resources with some of the most innovative, most effective, easiest-to-use management features in the industry.
• Redundant Chassis Management Controllers (CMCs) provide a powerful systems management tool, giving comprehensive access to component status, inventory, alerts, and management.
• The M1000e's passive midplane design keeps critical active components on individual blades or as hot-swappable shared components within the chassis, improving reliability and serviceability.
• The M1000e is a leader in power efficiency, built on innovative Dell Energy Smart technology.
• Our FlexAddress technology ties Media Access Control (MAC) and World Wide Name (WWN) addresses to blade slots, not to servers or switch ports, so reconfiguring your setup is as simple as sliding a blade out of a slot and replacing it with another.
• Dell FlexIO modular switch technology lets you easily scale to provide additional uplink and stacking functionality, giving you the flexibility and scalability for today's rapidly evolving networking landscape without replacing your current environment.
• The M1000e is the only solution that supports mixing full- and half-height blades in adjacent slots within the same chassis without limitations or caveats.
• High density: in 40U of rack space, customers can install 64 blades (4 enclosures by 16 slots) into four M1000e enclosures versus 40 1U servers.
• Speed and ease of deployment: each 1U server takes on average approximately 15 minutes to rack, not including cabling. The M1000e enclosure can be racked in approximately the same amount of time; then each blade takes seconds to physically install, and cable management is reduced. When you need to add additional blade servers, they slide right in.

