This post is live-blogged from the Dell presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat. Dell had two time slots worth of content, so I hope you have a comfy chair for this one.
We’re at Dell’s Round Rock office, which is a really neat campus! We heard right off the bat about Dell’s privatization, and it sounded like they’re very excited about the agility that comes with not being beholden to shareholders. We also talked about the flip side of going private: it’s harder for people outside the organization to see where Dell is committed, and what’s going well and what isn’t, since quarterly financials are no longer available. We have a lot to cover today, so we jumped right to the first presentation.
Foglight for Virtualization
Foglight (which was part of the Quest acquisition) is a multi-hypervisor solution for monitoring the OS, vSwitches, storage, AD/Exchange/SQL, and View/XenApp. It also does capacity planning, chargeback, and change management. One slide claimed they’re the “only one in the industry that offers remediation/automation.” Maybe it’s just a poorly constructed slide, but that’s hogwash. A SaaS edition is on the roadmap for Q4 of 2015, which is consistent with what we’re seeing a lot of other monitoring/optimization tools do. The whole roadmap for 2015 looks cool, so if all of this stuff makes it in, I’ll be pleased. During this slot, Dell won the “Best Slide of #VFD4” award by picking up a meme of Emad Younis that Christopher Kusek tweeted last night.
The Foglight Resource Utilization dashboard has spinny wheels that change colors and spin at different speeds. It would look beautiful up on a big screen in your NOC! Unfortunately, I didn’t see anything feature-wise that really stood out compared to competitors. That doesn’t mean there aren’t business reasons to make the Foglight choice, but I was looking for something that – like we talked about earlier – no one else is doing, and I didn’t catch it if it was there. We also briefly touched on price, because traditionally Foglight has been seen in the market as an expensive product. It sounds like Dell doesn’t think so, and it’s been quite some time since the price has changed. Perhaps it’s the perception that needs to be addressed.
Dell Storage and Fluid Cache
Dell is unifying their different storage product lines, which will be known simply as Dell Storage moving forward. We finally got to some meat when we got to talk about NVMe and Fluid Cache. It took me about five slides to get it, but Fluid Cache is simply a server-side caching solution using PCIe SSDs. The first implementation of Fluid Cache uses RDMA over 10 Gb Ethernet to pool the flash. In the future, perhaps RDMA over InfiniBand would be possible, or a number of other methods of managing this pool. A minimum of three nodes is required, for the usual parity/quorum reasons. The software side of this solution is deployed via virtual appliance. I’m not sure if companies are waiting for VAIO or what the vision is, but I don’t understand why more solutions don’t use a VIB and perform this function right at the hypervisor. Maybe they can’t? I’ll say that I really like the one(s) that can. If there’s anything that stands out about this, it’s the way the pooling is done via RDMA (I don’t think many others are doing this).
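To make the three-node minimum concrete, here’s a toy sketch (my illustration, not Dell’s actual implementation): if each cached block is written to a primary node and mirrored to one peer, any single node can fail without losing dirty cache data, and three or more nodes also leaves a majority of survivors to agree on cluster state. The `CachePool` class and its placement scheme are entirely hypothetical.

```python
class CachePool:
    """Toy pooled cache: each block lives on a primary node plus one mirror."""

    def __init__(self, nodes):
        self.nodes = {n: {} for n in nodes}  # node name -> {block: data}

    def write(self, block, data):
        # Deterministic placement: a primary plus a mirror on the next node,
        # so a dirty (write-back) block survives any single node failure.
        names = sorted(self.nodes)
        i = sum(block.encode()) % len(names)
        self.nodes[names[i]][block] = data                     # primary copy
        self.nodes[names[(i + 1) % len(names)]][block] = data  # mirror copy

    def fail(self, node):
        self.nodes.pop(node)  # simulate losing that node's flash

    def read(self, block):
        # Serve the block from whichever surviving node still holds a copy.
        for store in self.nodes.values():
            if block in store:
                return store[block]
        return None  # both copies lost

pool = CachePool(["node1", "node2", "node3"])
pool.write("blk0", b"hot data")
pool.fail("node1")                        # any one node can drop out...
assert pool.read("blk0") == b"hot data"   # ...and the block is still served
```

With only two nodes, the same mirroring would work, but a failure leaves a single survivor with no majority to decide cluster membership – hence the three-node floor.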
Intel and Rack-Scale Architecture
Intel was also a part of this conversation, and most of it was spent re-stating the problem…again. Fortunately, we got 5 minutes at the end to talk about rack-scale architecture. This topic has already come up in conversation with delegates this week, and it’s really cool. If you don’t know, this is essentially the concept of pooled, disaggregated compute, network, and storage resources connected by a technology called “silicon photonics.” Maybe I’ll write a full post on this soon, but the concept is really neat: imagine that rather than having racks full of servers, you could have racks full of CPUs, racks full of memory, and so on.
PowerEdge FX
Lastly, we got a look (literally! They brought some in) at the PowerEdge FX series servers. FX is the converged-style system. It’s a 2U x 4-node design, as seems to be the standard right now. You can add compute nodes or storage nodes, and it uses something called an IO Aggregator in the back of the chassis for networking. I guess I don’t know it for a fact, but I’m assuming this system is what Dell’s EVO:RAIL offering will run on moving forward. It came up, but there wasn’t a firm answer. The density potential here is pretty impressive: if you crammed the compute nodes full of memory, you could potentially run about 12 TB of memory in a measly 2U of rack space. Obviously the DIMMs to get to that density will be stupid expensive; I’m just saying it’s possible 🙂 This FX session was one of the best of #VFD4. The presenter did a fantastic job and REALLY knows her stuff. Delegates were really excited about the FX stuff in general, and this session will probably generate some great blog posts soon!
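Here’s a back-of-the-envelope check of that ~12 TB figure. The slot count and DIMM size are my assumptions, not numbers from the session – one plausible combination is four compute nodes per chassis, 24 DIMM slots each, filled with 128 GB DIMMs:

```python
# Assumptions (mine, not Dell's spec sheet): 4 compute nodes per 2U FX2
# chassis, 24 DIMM slots per node, top-end 128 GB DIMMs in every slot.
nodes_per_chassis = 4
dimm_slots_per_node = 24
dimm_size_gb = 128

total_gb = nodes_per_chassis * dimm_slots_per_node * dimm_size_gb
total_tb = total_gb / 1024
print(total_tb)  # 12.0 TB of memory in 2U of rack space
```

Smaller (and far cheaper) 64 GB DIMMs in the same slots would land at a still-respectable 6 TB per chassis.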
That’s it for the Dell session! We’ll be back in about an hour for the final session of #VFD4 with Scale Computing!