EMC Celerra NS600 NAS device

With its Celerra NS600 network-attached storage system, EMC has pulled some of its high-end NAS technology down into a mid-range device aimed at enterprise users who want to consolidate 10 to 20 departmental and branch-office storage servers onto a single system.

Based on EMC’s proprietary Unix-based Data Access in Real time (DART) operating system, the NS600 supports a number of features typically reserved for high-end NAS devices, such as extensive hardware redundancy and high-availability measures.

With its $US162,000 price tag for 1 terabyte of capacity, the NS600 offers an alternative to competing products that can cost more than $US250,000.

The NS600 is a rack-mounted system with multiple components. The front end of the system is housed in the Data Mover Enclosure (DME), which has two Data Movers, each supporting six auto-negotiating 10/100/1000Mbit/sec interfaces.

The Storage Processor Enclosure (SPE) supports 2Gbytes of storage RAM and dual-active storage processors (2GHz Intel Xeon "Prestonia" CPUs). The SPE manages the NS600's RAID 5 arrays, which reside in a separate enclosure. Keeping the Data Movers and SPE in separate enclosures means that if a disk fails, the SPE can supply the data needed to rebuild it without drawing processing power away from the Data Movers. In our tests, this worked as advertised.

We tested a system with 30 disks, but the NS600 can support up to 120. In addition to the RAID 5 disks, the NS600 comes with a hot spare that can replace any disk that fails within the cabinet.

N+1 backup power is housed in its own component (the DME and SPE each have their own dual power supplies as well). A separate hot standby power supply for the storage processor allows data in cache to be written to a special area called the vault, so it is not lost during a system failure.

Connections between the major front-end and back-end components are via 2-gigabit Fibre Channel links.

Fail Safe Networks (FSN) are a key availability feature on the Data Movers. They allow 10/100/1000Mbit/sec Data Mover ports to be configured in redundant mode so that traffic fails over to a secondary connection if the primary connection fails. FSNs can be configured in a variety of ways: in sets of two to eight ports, as Ethernet channels or as link aggregations. All connections in an FSN share a single media access control (MAC) address and IP address.
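
To picture how this works, the sketch below is a minimal, purely illustrative Python model of an FSN pair; the port names and addresses are hypothetical, and this is not EMC configuration syntax. The shared MAC and IP address stay constant while traffic shifts between physical ports.

    # Conceptual sketch only, not EMC code: an FSN shares one MAC/IP across its
    # member ports and moves traffic to a standby port when the active link drops.
    class FailSafeNetwork:
        def __init__(self, ports, mac, ip):
            self.ports = list(ports)        # e.g. ["port0", "port1"], primary first
            self.mac, self.ip = mac, ip     # shared by every port in the FSN
            self.active = self.ports[0]

        def link_down(self, port):
            if port != self.active:
                return                      # a standby port failing changes nothing
            standbys = [p for p in self.ports if p != port]
            if standbys:
                self.active = standbys[0]   # traffic shifts; MAC/IP stay the same

    fsn = FailSafeNetwork(["port0", "port1"], "00:60:16:00:00:01", "10.0.0.5")
    fsn.link_down("port0")
    print(fsn.active)                       # -> port1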

To test FSN, we configured Data Mover ports 0 and 1 as the primary and secondary network interfaces through a pulldown menu and then pulled the cable on port 0. The failover to the secondary port 1 was instantaneous. However, when port 1 failed back to the primary port 0, we observed a 49-second delay, which is high. Data written to the device during this delay would be lost. In our experience, instantaneous failover is optimal; more than 20 seconds is noticeable; and 49 seconds is about the point at which we would consider calling tech support.
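
The failback gap itself is straightforward to time. The following sketch (Python, with a hypothetical management address; port 445 is the Windows file-sharing port the NS600 serves) polls the device once a second and reports how long it stays unreachable after a cable is pulled.

    # Minimal sketch: measure how long the NAS stays unreachable during failover.
    # 192.168.1.10 is a hypothetical address; adjust to the device under test.
    import socket
    import time

    HOST, PORT = "192.168.1.10", 445

    def reachable(timeout=1.0):
        try:
            with socket.create_connection((HOST, PORT), timeout=timeout):
                return True
        except OSError:
            return False

    outage_started = None
    while True:
        up = reachable()
        if not up and outage_started is None:
            outage_started = time.time()    # connectivity just dropped
        elif up and outage_started is not None:
            print(f"outage lasted {time.time() - outage_started:.1f} seconds")
            outage_started = None
        time.sleep(1)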

This delay can be avoided by configuring the ports in standby mode. In that mode, once traffic fails over to the secondary port, it does not fall back to the primary port unless the secondary port (now acting as the primary) itself fails.

The NS600 Data Movers can be configured in two modes: primary/primary and primary/standby. In primary/primary mode, both Data Movers are operational, allowing users to optimise performance by spreading the load across the two systems. But neither Data Mover then has a standby backup: in the event of a major failure, file systems would have to be mounted manually on the surviving Data Mover.

In primary/standby mode, one Data Mover acts as the primary system; a second Data Mover would take over if the primary unit fails.

In our tests, failover from the primary to the secondary Data Mover took 90 seconds, which is high for a network device. No sessions were dropped, but there was a delay in writing to files.

When we pulled a fan on the storage processor, there was a 19-second delay until operation resumed. Removing a fan on the Data Mover and pulling a disk from its enclosure produced no delays in operation in either component.

During our failover tests of the storage processor fans, we did not replace the primary fan after failing over to a secondary fan. After two minutes the storage processor shut down entirely. This struck us as odd, because the three fans are designed to be redundant. EMC says the shutdown was caused by a safety mechanism built into the system to protect against overheating, but with two fans still active, we're not convinced this safety feature should have kicked in.

Performance

Because of the limitations of the IOMeter performance-measurement tool we used and the number of client machines we had available for the test, we couldn't tax the NS600 anywhere near its maximum capacity (60,000 TCP connections, according to EMC), but we did kick its tyres to determine whether it performed as expected.

We ran two tests — one using an I/O block size of 8Kbytes and another using 16Kbytes — to determine how processing larger block sizes affected the NS600. (Our workload consisted of emulated file-server traffic, 20 per cent writes and 80 per cent reads. Eight HP ProLiant machines supported two clients each.)

Results of the 8Kbyte test were 10msec latency and 10,224 I/Os per second, which equates to a throughput of 79.87Mbyte/sec. (To put these results in perspective, in recent tests we conducted of lower-end, Windows-based systems, results were 6.26 I/Os per second and 7.67Mbyte/sec.)

Performance results with 16Kbyte block sizes came out as we expected. Latency was slightly higher (14msec) and the I/O-per-second rate slightly lower (9017) because the system was processing the larger blocks. The throughput rate was 140Mbyte/sec, an increase we would expect with the larger block size.
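
The throughput figures follow directly from the I/O rate and block size; the short Python snippet below simply reproduces that arithmetic as a sanity check (illustrative only).

    # Sanity check: throughput (Mbyte/sec) = I/Os per second x block size (Kbytes) / 1024
    def throughput_mbyte_per_sec(iops, block_kbytes):
        return iops * block_kbytes / 1024.0

    print(throughput_mbyte_per_sec(10224, 8))    # 79.875      -> the ~79.87Mbyte/sec 8Kbyte result
    print(throughput_mbyte_per_sec(9017, 16))    # 140.890625  -> the ~140Mbyte/sec 16Kbyte result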

Because all network equipment is subject to security threats, we ran a series of attacks against the product. When we threw an Internet Control Message Protocol (ICMP) flood denial-of-service attack at the Data Movers, the I/O-per-second rate dropped 3 per cent; a Jolt2 attack against the NS600 caused a 4 per cent drop in I/Os per second. Both reductions are negligible, and in both cases the system continued operating. However, a SYN-flood attack on port 445 (the Microsoft file-sharing port and, therefore, the most likely target) created a 42 per cent reduction in I/Os per second, although, again, the system continued to operate.

The NS600 came pre-installed; EMC sent it to us racked, cabled and ready for operation. The only thing we had to do to get the system up and running was establish a static IP address on the storage controller using an installation wizard.

The wizard then sets up access through a browser-based interface, through which we set the username, password and other parameters, such as static IP addresses for the Data Movers' interfaces.

Betsy Yocom is managing editor and Diane Poletti-Metzel is test lab manager at Miercom, a network test lab and consultancy
