Once upon a time, storage capacity was a big deal. We solved that problem, and then started running into performance bottlenecks. But alas, these days people are mostly over IOPS: vendors are walking around slinging numbers like “1 million IOPS” like it’s no big deal. So then the question became, “Yeah, your IOPS are huge, but how’s the latency?” With hybrid arrays and AFAs, sub-millisecond latency is pretty attainable, and again we’re getting comfortable. The next wave of hype around storage, I believe, is not going to be ‘how does it perform’ or ‘how much can I store,’ but ‘how much intelligence can I collect from it?’ I recently wrote about the launch of a product called DataGravity that aims to (among other things) provide you an unparalleled level of
insight into the data you’re storing. I think this is a really exciting development, BUT… since all technology analogies must be tied back to a car, consider this: yeah, your Tesla is really neat;
it’s efficient and fast and green. But sometimes, a man (or a woman) just wants to haul. ass. in a Bugatti Veyron Super Sport. Sometimes, despite all the practicality and rationality that must go into choosing your storage, there’s a need for incredible speed. Today, I’d like to look at something that is about sheer performance: massive IOPS at minuscule latency. I introduce to you…
Memory Channel Storage
The pinnacle of general storage performance for the last few years has looked an awful lot like a traditional hard drive. We made significant improvements in the performance of our hardware, but we did not re-evaluate our paradigm to be sure we were still solving the problem in the right place. We continued accessing storage over the same SATA/SAS bus. Eventually, we were able to place flash storage on the PCIe bus, and thus closer to the CPU, and that made a substantial difference. But the storage bottleneck still existed. How much closer to the CPU can we get? What’s the closest thing to the CPU we have access to? The memory channel. Memory Channel Storage is NAND flash storage (non-volatile) presented via the memory subsystem. I have previously written about products like Atlantis ILIO/USX that use DRAM as a sort of storage tier, and they are also incredibly fast. The difficulty with a solution like that is that DRAM is expensive from a capacity perspective. It is also volatile. MCS solves both the volatility and the capacity problems.
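To put the access-path argument in perspective, here’s a rough sketch comparing the buses discussed above. These latency figures are ballpark, order-of-magnitude assumptions for illustration (only the sub-5-microsecond MCS number comes from a vendor spec), not benchmarks:

```python
# Rough, order-of-magnitude latency comparison for different storage
# access paths. Figures are illustrative assumptions, not benchmarks.
LATENCY_US = {
    "SATA/SAS SSD":        100.0,  # flash still behind the disk bus
    "PCIe flash":           50.0,  # closer to the CPU, less protocol overhead
    "Memory Channel (MCS)":  5.0,  # NAND presented via the memory subsystem
    "DRAM tier":             0.1,  # volatile, but the speed ceiling
}

baseline = LATENCY_US["SATA/SAS SSD"]
for path, us in LATENCY_US.items():
    speedup = baseline / us
    print(f"{path:22s} ~{us:>6.1f} us  ({speedup:,.0f}x vs SATA/SAS)")
```

The point of the exercise is simply that each hop closer to the CPU cuts another order of magnitude off the round trip, which is why the memory channel is the logical endpoint.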
I Want It!
The guts of MCS were developed by Diablo Technologies, which has been working at this for quite some time. But over the last year I’ve heard about some major vendors adopting this technology and starting to make it widely available. The one I’m going to highlight is SanDisk’s ULLtraDIMM SSD. This module uses the standard DDR3 interface and can achieve a latency of less than FIVE MICROSECONDS. Wow. It’s rated for 150K random read / 65K random write IOPS. You can buy this guy in either 200 or 400 GB modules, and you can install as many of them as you have DIMM slots, minus one. You still need some DRAM, folks! They tout an MTBF of 2.5M hours, which is largely due to SanDisk’s proprietary FlashGuard Technology. All this sounds pretty good to me. So who should actually buy some of this?
- Big Data/Analytics operations
- High-performance database shops
- Financial/trading institutions
- Any latency-sensitive, business-critical application owner
Check out more about MCS and ULLtraDIMM at the following links: