
Superfast PCI-E SSDs are coming, but will they be used the right way?



Intel, Dell, IBM, EMC, and a host of other component makers and OEMs have announced a partnership aimed at developing a standard interface for PCIe-hosted solid-state disks. If the group has its way, we'll eventually be ditching the SATA bus entirely for SSDs that are hosted directly on PCIe. Why does this matter? Because PCIe-based SSDs are brain-meltingly fast. And right now, they're also wallet-bustingly expensive, even for enterprise customers.

While the new standards won't immediately bring down prices, they'll pave the way for more widespread adoption that lets economies of scale kick in. The group aims to create a new, standardized drive connector for PCIe SSDs; it will also specify features such as hot-plug capability and a 2.5" form factor.

FusionIO has probably the best known and most widely deployed PCIe SSD solution. The lowest-end FusionIO card in the current lineup offers 770MB/s read bandwidth and 750MB/s write bandwidth, vaulting it ahead of even the fastest SATA drives. It'll also set you back over $8,000 for a 160GB model. If you want to go slower, smaller, and cheaper, you can pick up a consumer-grade, 80GB OCZ PCIe SSD card from Amazon for $300 and change, but you'll get only 540MB/s read and 450MB/s write.

PCIe-based SSDs are so much faster than SATA drives in part because flash storage and PCIe are a natural fit. SATA is a faster, serialized version of the old ATA drive protocol, which is itself a descendant of IDE. The protocol works by issuing commands to the drives, and the command set was originally designed with a stack of rotating magnetic disks in mind. Ideally, the market would migrate to a kind of SSD-specific answer to SATA, like ONFI, but moving to PCIe instead is still a step in the right direction. PCIe has higher bandwidth, and it just makes more sense as a bus protocol for hosting an array of solid-state memory chips.

The obvious next step is for some enterprising OS maker, ideally one that also controls the underlying hardware, to stop treating a PCIe SSD like a magnetic backing store and instead give it a special place in the memory hierarchy.

Apple's new iMacs are a tiny half-step in this direction (more like a furtive glance, actually), in that they put the OS partition and swapfile on a fast SSD and use a hard disk for mass storage. The company could easily take this to the next level by upgrading to a PCIe SSD, and then creating a new "fast file access flag" abstraction in the OS that fast-access-aware media production programs could use to store a working set of large files. If a Final Cut Pro user, for instance, has marked a set of clips for inclusion in the flash cache by designating them as fast-access files, those clips would still "live" in the same directory on her magnetic disk, but FCP would work from a copy in the flash cache that invisibly syncs back to disk.
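To make that idea concrete, here is a minimal sketch, in Python, of how such a "fast file access flag" helper might behave. Everything in it is a hypothetical illustration of the concept described above, not an actual OS X API: the class name, the cache directory, and the mark/sync/unmark methods are all assumptions. Flagged files keep living on the magnetic disk; the application simply works from a copy on the flash cache and changes are synced back.

    import os
    import shutil

    class FastAccessCache:
        """Hypothetical 'fast file access flag' helper: flagged files stay on the
        magnetic disk, but applications work from a copy on the PCIe SSD."""

        def __init__(self, cache_dir="/Volumes/FlashCache"):  # assumed SSD mount point
            self.cache_dir = cache_dir
            self.flagged = {}  # original path -> working copy on the flash cache
            os.makedirs(cache_dir, exist_ok=True)

        def mark(self, original_path):
            # Flag a file as fast-access: copy it onto the flash cache and hand
            # the application the cached path to read and write instead.
            cached = os.path.join(self.cache_dir, os.path.basename(original_path))
            shutil.copy2(original_path, cached)
            self.flagged[original_path] = cached
            return cached

        def sync_back(self, original_path):
            # Invisibly push edits made to the cached copy back to the magnetic disk.
            shutil.copy2(self.flagged[original_path], original_path)

        def unmark(self, original_path):
            # Drop the flag once the file is no longer accessed regularly.
            self.sync_back(original_path)
            os.remove(self.flagged.pop(original_path))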

In other words, when PCIe SSD comes to consumer desktops, it will be a profound shame if it does so as yet another volume (albeit a fast one) that users must manage. It's better if users don't see the new drive at all—at least, not as a place to "put" files. Rather, a PCIe SSD should operate like a bag of magic pixie dust that you sprinkle on certain files to make them load and save super quickly. Users and programs could mark certain files to indicate that those files need to be really fast, and then unmark them once they're no longer accessed regularly.
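In that model, sprinkling the pixie dust on a clip might look no more complicated than this, again using the hypothetical helper sketched above and a made-up file path:

    cache = FastAccessCache()
    fast_clip = cache.mark("/Users/editor/Project/clip_047.mov")  # hypothetical path
    # ... the application edits fast_clip at PCIe SSD speeds ...
    cache.unmark("/Users/editor/Project/clip_047.mov")  # syncs back, frees the flash cache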

View: Original Article
