
We put Western Digital’s dreaded SMR Red drive to the test



Western Digital's SMR disks won't work for ZFS, but they're okay for most NASes.

Western Digital's EFAX Red—an SMR disk—squares off against a Seagate Ironwolf in today's testing.
Jim Salter

Western Digital has been receiving a storm of bad press—and even lawsuits—concerning its attempt to sneak SMR disk technology into its "Red" line of NAS disks. To get a better handle on the situation, Ars purchased a Western Digital 4TB Red EFAX model SMR drive and put it to the test ourselves.

Although Western Digital's 4TB SMR disk performed adequately in Servethehome's light duty tests, it performed miserably when they used it to replace a disk in a degraded four-disk RAIDz1 vdev.

Recently, the well-known tech enthusiast site Servethehome tested one of the SMR-based 4TB Red disks with ZFS and found it sorely lacking. The disk performed adequately—if underwhelmingly—in generic performance tests. But when Servethehome used it to replace a disk in a degraded RAIDz1 vdev, it required more than nine days to complete the operation—when all competing NAS drives performed the same task in around sixteen hours.

 

This has rightfully raised questions as to what Western Digital was thinking when it tried to use SMR technology in NAS drives at all, let alone trying to sneak it into the market. Had Western Digital even tested the disks at all? But as valuable as Servethehome's ZFS tests were, they ignored the most common use case of this class of drive—consumer and small business NAS devices, such as Synology's DS1819+ or Netgear's ReadyNAS RN628X00. Those all use Linux kernel RAID (mdraid) to manage their arrays.

Rebuilding a 75% full, eight-disk RAID6 array

After purchasing a WD 4TB Red EFAX drive like the one that Servethehome tested, we used the eight Seagate Ironwolf drives in our existing test rig, the Ars Storage Hot Rod, to create a RAID6 array. Our eight Ironwolf disks are 12TB apiece, so we partitioned them down to 3500GiB apiece—this made the array small enough that our new WD Red disk could "fit" as a replacement when we failed an Ironwolf out.

 

When we created the RAID6 array, we passed mdadm the argument -b none, which disables the write-intent bitmap that would otherwise let the array fast-resync a disk that had previously been a member. And we formatted it with the ext4 filesystem, using the arguments -E lazy_itable_init=0,lazy_journal_init=0 so that deferred initialization of the inode tables and journal wouldn't contaminate our tests with background drive activity that normal users wouldn't usually contend with.
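For anyone who wants to reproduce the setup, the commands below sketch that process. The -b none and -E lazy_itable_init=0,lazy_journal_init=0 arguments come straight from the description above; the device names, partition layout, and md device number are placeholders we've assumed, so adjust them for your own hardware before running anything, since these commands destroy data.

    # Partition each 12TB Ironwolf down to a single 3500GiB partition
    # (device names here are assumed placeholders)
    for disk in /dev/sd[e-l]; do
        parted -s "$disk" mklabel gpt mkpart primary 1MiB 3500GiB
    done

    # Create the eight-disk RAID6 array with no write-intent bitmap
    mdadm --create /dev/md0 --level=6 --raid-devices=8 -b none /dev/sd[e-l]1

    # Format ext4 with lazy initialization disabled, so nothing keeps
    # initializing in the background during the tests
    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0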

 

After formatting the new eight-disk, 19TiB array, we dumped 14TiB of data onto it in fourteen subdirectories, each containing 1,024 1GiB files filled with pseudorandom data. This brought the array to a little more than 75 percent used. At this point, we failed one Ironwolf disk out of the array, ran wipefs -a /dev/sdl1 on it to remove the existing RAID headers, then added it back into the now-degraded array. This was our baseline.
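A sketch of that fill-and-degrade step follows. The mount point, directory names, and the choice of dd reading /dev/urandom as the data generator are illustrative assumptions rather than the exact procedure used; /dev/sdl1 is the Ironwolf partition named above.

    # Mount the array (mount point assumed) and fill it with 14TiB of
    # pseudorandom data: 14 directories of 1,024 one-GiB files each
    mount /dev/md0 /mnt/raid6
    for d in $(seq 1 14); do
        mkdir -p /mnt/raid6/dir$d
        for f in $(seq 1 1024); do
            dd if=/dev/urandom of=/mnt/raid6/dir$d/file$f bs=1M count=1024 status=none
        done
    done

    # Baseline run: fail one Ironwolf partition out, wipe its RAID
    # superblock, then re-add it to the now-degraded array
    mdadm /dev/md0 --fail /dev/sdl1 --remove /dev/sdl1
    wipefs -a /dev/sdl1
    mdadm /dev/md0 --add /dev/sdl1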

 

Once the Ironwolf had successfully rebuilt into the array, we failed it out again—and this time, we removed it from the system entirely and replaced it with our 4TB Red SMR guinea pig. First, we fed the entire 4TB Red to the degraded array as a replacement for the missing, partitioned Ironwolf. Then, once it had finished rebuilding, we failed it out again, wipefs -a'd the RAID header from it, and added it back in to rebuild a second time.
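The replacement step looks much the same; /dev/sdm below is just a stand-in for whatever name the whole, unpartitioned 4TB Red gets on a given system, and watching /proc/mdstat is one convenient way to follow the rebuild and its estimated time to completion.

    # Hand the whole 4TB Red to the degraded array as the new member
    mdadm /dev/md0 --add /dev/sdm

    # Follow the rebuild's progress and time estimate
    watch -n 10 cat /proc/mdstat
    mdadm --detail /dev/md0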

 

This gave us our two test cases—a factory-new Red SMR disk being rebuilt into an array, and a used Red SMR disk with a lot of data on it already being rebuilt into an array. We felt it was important to test both ways, since each case is a common use of NAS disks in the real world. It also seemed likely that an SMR disk full of data might perform worse than a brand-new one, which wouldn't need to read-modify-write as it dealt with already-used zones.

The SMR EFAX rebuilt into our conventional RAID6 array just fine—even when 75% of its capacity was already filled.
Jim Salter

We weren't surprised that the SMR disk performed adequately in the first test—consumer ire aside, it seemed unlikely that Western Digital had sent these disks out the door with no testing whatsoever. We were more surprised that it performed just as well used as it had new: the drive's firmware shuffled data around well enough that the rebuild from a "used" condition didn't take a single minute longer than the rebuild from new.

Simple 1MiB random write test

Clearly, the WD Red's firmware was up to the challenge of handling a conventional RAID rebuild, which amounts to one enormous sequential write performed in very large blocks. The next thing to check was whether the EFAX would also handle a heavy version of a consumer NAS's typical day-to-day workload: storing large files.
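The exact fio job isn't spelled out above, but an invocation along these lines exercises the same 1MiB random-write pattern; the target device, test size, and runtime below are assumptions, and pointing fio at a raw disk destroys whatever is on it.

    # 1MiB random writes directly against the drive under test
    # (/dev/sdX, size, and runtime are assumed values)
    fio --name=1m-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=1M --iodepth=1 --size=64G --runtime=600 --time_based \
        --group_reporting

fio's summary reports average and maximum completion latency alongside throughput, which is where latency figures like the ones discussed below come from.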

 

Once again, at first glance, the WD Red passes muster. In terms of throughput, the Red is only 16.7 percent slower than its non-SMR Ironwolf competition. Even retesting it a second time, when the firmware has a harder job dealing with already-full zones, doesn't change the picture significantly.

 

When we drill down a little further and look at fio's latency numbers, things look noticeably worse. The EFAX Red is 68.8 percent slower on average to return from an operation than the Ironwolf—but again, this is "not winning the race" territory, not "you're going to get sued for fraud" territory. It's only when we look at peak latency from the 1MiB random write test that we begin to see how bad things can get when you push the Red in unplanned-for directions. Its worst-case return is a whopping 1.3 seconds, well over ten times worse than the Ironwolf's slowest return of 108 milliseconds.

 

We can extrapolate from this peak latency result that when the Red's firmware is floundering badly, its throughput may fall below 1MiB/sec for a little while—and that squares with the wildly fluctuating numbers we saw as we watched the throughput tests run. It also tells us that a desktop user, someone who wants things to happen when they click buttons and drag things around, can occasionally get a truly frustrating experience from the Red during what should be a very easy workload, even for a conventional drive.

Replicating WD Red's ZFS failures artificially

When we hit the SMR Red with 32KiB sequential writes—approximately the workload an 8-disk RAIDz2 vdev would see—the Red falls on its face.
Jim Salter

Although our latency tests exposed the problems the WD Red EFAX's firmware could have, the bad failures we saw were relatively fleeting. For the most part, the Red's firmware coped well with all of the workloads we'd expect of a consumer drive. Still, we wanted to see why it failed so miserably when asked to handle similar tasks under ZFS.

 

To understand why, you need to know how ZFS—and in particular ZFS RAIDz, which is what Servethehome tested—actually writes data. When you ask an mdraid RAID6 array to store a 1MiB block of data, the data winds up as easy-to-manage 512KiB chunks on two data drives, plus the corresponding parity updates. As we saw in each of our tests here, the Red can (for the most part, at least) handle that workload just fine.
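That 512KiB figure is mdadm's default chunk size; if you want to confirm what a given array is actually using, mdadm reports it directly (array name assumed):

    mdadm --detail /dev/md0 | grep 'Chunk Size'
    # typical output:   Chunk Size : 512K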

 

But when you save the same data to a ZFS RAIDz vdev, the per-disk workload looks considerably different. The default ZFS recordsize is 128KiB—and each full-size record is split evenly across the vdev's data disks, that is, across n-P of them, where n is the total number of disks and P is the number of parity disks per stripe. For ServeTheHome's four-disk RAIDz1 vdev, that works out to roughly 43KiB per disk (128KiB split three ways); for our own eight-disk RAIDz2 vdev, it's roughly 21KiB per disk (128KiB split six ways).
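As a quick sanity check on that arithmetic, using the disk counts described above:

    # A 128KiB record splits across the vdev's data disks (n minus P)
    awk 'BEGIN { printf "RAIDz1 (4 disks): %.1f KiB/disk\n", 128/(4-1) }'
    awk 'BEGIN { printf "RAIDz2 (8 disks): %.1f KiB/disk\n", 128/(8-2) }'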

 

With that understood, we can model an idealized ZFS resilvering workload as one massive sequential write issued in 32KiB blocks (if anything a generous model, since 32KiB is larger than the real per-disk chunk size)—and we did exactly that, using incompressible pseudorandom data, on both one of our Ironwolf 12TB "baseline" disks and our WD Red 4TB EFAX test disk. With this test workload, we achieved a throughput of 209.3MiB/sec on the Ironwolf, but only 13.2MiB/sec on the Red—a 15.9:1 slowdown, which corresponds very well indeed with the 15.7:1 slowdown Servethehome observed in its ZFS resilver testing.
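Hedging again on the exact job parameters, a sequential-write variant of the same kind of fio invocation models that workload; --refill_buffers keeps the written data incompressible, and the target device and sizes are assumptions.

    # 32KiB sequential writes of incompressible data, approximating an
    # idealized RAIDz2 resilver workload (/dev/sdX, size, and runtime assumed)
    fio --name=32k-seqwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=write --bs=32k --iodepth=1 --size=64G --runtime=600 --time_based \
        --refill_buffers --group_reporting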

Conclusions

We want to be very clear: we agree with Seagate's Greg Belloni, who stated on the company's behalf that they "do not recommend SMR for NAS applications." At absolute best, SMR disks underperform significantly in comparison to CMR disks; at their worst, they can fall flat on their face so badly that they may be mistakenly detected as failed hardware.

 

With that said, we can see why Western Digital believed, after what we assume was a considerable amount of laboratory testing, that their disks would be "OK" for typical NAS usage. Although obviously slower than their Ironwolf competitors, they performed adequately both for conventional RAID rebuilds and for typical day-to-day NAS file-sharing workloads.

 

We were genuinely impressed with how well the firmware adapted itself to most workloads—this is a clear example of RFC 1925 2.(3) ("with sufficient thrust, pigs fly just fine") in action, but the thrust does appear sufficient to the purpose. Unfortunately, it would appear that Western Digital did not test ZFS, which a substantial minority of its customer base depends upon.

 

These tests may not be great news for either the American or Canadian class-action lawsuits currently underway against Western Digital, but they aren't the end of the line for those lawsuits, either. Even in the best case, the SMR models of WD Red underperform their earlier, non-SMR counterparts substantially—and consumers were not given clear notice of the downgrade.

 

If the same firmware were being used to make substantially larger drives available to consumers than would otherwise be possible, and if the limitations of those drives were adequately explained, we would probably be gushing over its utility and function. Unfortunately, Western Digital has so far chosen to use it only to cut manufacturing costs on small disks, without even passing the savings along to the consumer.

 

 
