
jdgreen.io

The personal blog of James Green


#VFD4

#VFD4 Live – Scale Computing

James Green · Jan 16, 2015 ·

This post is live-blogged from the Scale Computing presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat. This is the last session of #VFD4 and we’re all exhausted, but still sad to be done 🙁

Overview

Scale was initially founded as a scale-out SAN solution, and then slowly grew into the hyper-converged offering that they're known for today. They target SMB and mid-market companies exclusively, and they aim to simply deliver virtual infrastructure to these smaller shops. I mentioned this earlier in the week, and I'll say it again: their focus is admirable, and I wish more companies had it. Lots SAY they do, but they don't. I believe Scale really does.

Deep-dive/Demo

We took a stroll through the interface and saw features that compare to vMotion, HA, Admission Control, VM-based site-to-site replication, and more. It absolutely lacks some features that are “enterprise” in nature, but this thing does more than enough for customers in their target market. I’m actually having a hard time writing about this, because I’m so captivated by this solution. During the presentation, I said:

 

Being such an invested @vmware proponent, I feel a bit guilty liking @ScaleComputing so much 🙂 #VFD4

— James Green (@jdgreen) January 16, 2015

 

The pricing model is straightforward and simple, and entry is right around $25k for everything. Node types (HC1000, 2000, 4000) can be mixed and matched, which allows you to start small and grow without pain. Clusters can be up to 8 nodes. An 8-node cluster with 256 GB RAM in each node is a sizable cluster; I don’t view this as a limitation, because if you scale larger than this, you’re outside Scale’s target market anyhow. Storage data is striped across all the nodes using their proprietary storage architecture called SCRIBE, which we dove into a bit. They did a very deep dive on SCRIBE at Storage Field Day 5, and you can view the recording of that HERE.
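
To make the “start small and grow” point concrete, here is a rough capacity sketch in Python. The per-node RAM and storage figures are placeholders I made up for illustration, not actual HC1000/2000/4000 specs; only the 8-node ceiling comes from the session.

```python
# Rough sketch of aggregate capacity when mixing Scale node types in a cluster.
# Per-node figures below are placeholders, NOT actual HC-series specs.
NODE_SPECS = {
    "HC1000": {"ram_gb": 64,  "storage_tb": 4},
    "HC2000": {"ram_gb": 128, "storage_tb": 8},
    "HC4000": {"ram_gb": 256, "storage_tb": 16},
}

MAX_NODES = 8  # cluster limit mentioned in the session

def cluster_capacity(nodes):
    """Sum RAM and raw storage across a mixed-node cluster."""
    if len(nodes) > MAX_NODES:
        raise ValueError(f"clusters top out at {MAX_NODES} nodes")
    ram = sum(NODE_SPECS[n]["ram_gb"] for n in nodes)
    storage = sum(NODE_SPECS[n]["storage_tb"] for n in nodes)
    return ram, storage

# Start small, then grow: three entry nodes today, add bigger nodes later.
print(cluster_capacity(["HC1000"] * 3))                    # (192, 12)
print(cluster_capacity(["HC1000"] * 3 + ["HC4000"] * 5))   # (1472, 92)
```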

Before we wrapped up, we got to do the fun stuff! We started pulling cables and watching what happened, and how the system handled things like recovery. Yes, of course, an HA failover is nothing new, but seeing how we would utilize the Availability features of the product was nice.

And that puts a bow on Virtualization Field Day 4! Everyone has been awesome, I am exhausted, and all of our brains are mush by now. Thank you to Stephen, Tom, Claire, and all the sponsors and delegates for making this event happen! It’s always awesome!

#VFD4 Live – Dell

James Green · Jan 16, 2015 ·

This post is live-blogged from the Dell presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat. Dell had two time slots worth of content, so I hope you have a comfy chair for this one.

Overview

We’re at Dell’s Round Rock office, which is a really neat campus! We heard right off the bat about Dell’s privatization, and it sounded like they’re very excited about the agility that comes with not being beholden to shareholders. We also talked about the flip side of going private: it’s now harder for people outside the organization to see where Dell is committed, what’s going well, and what isn’t (all because quarterly financials are no longer available). We have a lot to cover today, so we jumped right to the first presentation.

Foglight for Virtualization

Foglight (which was a part of the Quest acquisition) is a multi-hypervisor solution for monitoring the OS, vSwitch, storage, AD/Exchange/SQL, and View/XenApp. It also does capacity planning, chargeback, and change management. A slide during this presentation said they’re the “only one in the industry that offers remediation/automation.” Maybe it’s just a poorly constructed slide, but that’s hogwash. A SaaS edition is on the roadmap for Q4 of 2015, which is consistent with what we’re seeing a lot of other monitoring/optimization tools do. The whole roadmap for 2015 looks cool, so if all of this stuff makes it in, I’ll be pleased. During this slot, Dell won the “Best Slide of #VFD4” award by picking up a meme of Emad Younis that Christopher Kusek tweeted last night.

The Foglight Resource Utilization dashboard has spinny wheels that change colors and spin at different speeds. It would look beautiful up on a big screen in your NOC! Unfortunately, I didn’t see anything feature-wise that really stood out compared to competitors. That doesn’t mean there aren’t business reasons to make the Foglight choice, but I was looking for something that, like we talked about earlier, no one else is doing, and I didn’t catch it if it was there. We also briefly touched on price, because traditionally Foglight has been seen in the market as an expensive product. It sounds like they don’t think so, and it’s been quite some time since the price has changed. Perhaps the perception just needs to be addressed.

Dell Storage

Dell is unifying their different storage product lines, which will simply be known as Dell Storage moving forward. We finally got to some meat when we got to talk about NVMe and Fluid Cache. It took me about five slides to get it, but Fluid Cache is simply a server-side caching solution using PCIe SSD. The first implementation of Fluid Cache uses RDMA over 10g Ethernet to pool the flash. In the future, RDMA over InfiniBand might be possible, along with a number of other methods of managing this pool. A minimum of 3 nodes is required, which is for typical parity purposes. The software side of this solution is deployed via virtual appliance. I’m not sure if companies are waiting for VAIO or what the vision is, but I don’t understand why more solutions don’t use a VIB and perform this function right at the hypervisor. Maybe they can’t? I’ll say that I really like the one(s) that can. I think if there’s anything that stands out about this, it’s the way the pooling is done via RDMA (I don’t think a lot of others are doing this?).
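
Dell didn’t show internals, so this is only a toy sketch of the general server-side caching idea: reads are served from a local flash tier when possible and fall back to the slower backing store on a miss. The class and method names are mine, and the real Fluid Cache additionally pools flash across at least three nodes over RDMA.

```python
# Toy sketch of server-side read caching, NOT Dell's implementation.
# Fluid Cache also pools PCIe flash across >= 3 nodes over RDMA; this only
# shows the cache-hit / cache-miss flow a single host would see.
class SlowArray:
    """Stands in for the back-end SAN volume."""
    def read(self, block_id):
        return f"data-for-block-{block_id}"

class BlockCache:
    def __init__(self, backing, capacity_blocks=1024):
        self.backing = backing            # the slower array behind the cache
        self.capacity = capacity_blocks
        self.cache = {}                   # block_id -> data; stands in for the PCIe SSD tier

    def read(self, block_id):
        if block_id in self.cache:        # hit: served from local flash
            return self.cache[block_id]
        data = self.backing.read(block_id)     # miss: go all the way to the array
        if len(self.cache) >= self.capacity:   # naive eviction; real caches use LRU/ARC/etc.
            self.cache.pop(next(iter(self.cache)))
        self.cache[block_id] = data
        return data

cache = BlockCache(SlowArray())
print(cache.read(42))   # miss, fetched from the array
print(cache.read(42))   # hit, served from the cache
```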

Intel

Intel was also a part of this conversation, and most of it was spent re-stating the problem…again. But fortunately we got 5 minutes at the end to talk about rack-scale architecture. This topic has come up in conversation with delegates already this week, and it’s really cool. If you don’t know, this is essentially the concept of having pooled, disaggregated compute, network, and storage resources connected with a technology called “silicon photonics.” Maybe I’ll write a full post on this soon, but the concept is really neat. Imagine that rather than having racks full of servers, you could have racks full of CPU, racks full of memory, and so on.

PowerEdge FX

Lastly, we got a look (literally! They brought some in.) at the PowerEdge FX series servers. FX is the converged-style system. It’s 2U x 4-node, as seems to be the standard right now. You can add compute nodes or storage nodes, and it uses something called an IO Aggregator in the back of the chassis for networking. I guess I don’t know it for a fact, but I’m assuming this system will be what Dell’s EVO:RAIL system runs on moving forward. It came up, but there wasn’t a firm answer. The density potential here is pretty impressive. If you crammed the compute nodes full of memory, you could potentially run about 12 TB of memory in a measly 2U of rack space. Obviously the DIMMs to get to that density will be stupid expensive; I’m just saying it’s possible 🙂 This FX session was one of the best of #VFD4. The presenter did a fantastic job and REALLY knows her stuff. Delegates were really excited about the FX stuff in general, and this session will probably get some great blog posts soon!
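
For the density claim, the back-of-napkin math only works out if you assume something like 24 DIMM slots per compute node populated with 128 GB DIMMs; those two numbers are my assumptions, not quoted FX specs.

```python
# Back-of-napkin math for "about 12 TB in 2U". Slot count and DIMM size are
# my assumptions, not quoted PowerEdge FX specs.
nodes_per_chassis = 4       # 2U x 4-node chassis
dimm_slots_per_node = 24    # assumed
dimm_size_gb = 128          # assumed (the "stupid expensive" part)

total_gb = nodes_per_chassis * dimm_slots_per_node * dimm_size_gb
print(f"{total_gb} GB is about {total_gb / 1024:.0f} TB of RAM in 2U")  # 12288 GB, ~12 TB
```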

That’s it for the Dell session! We’ll be back in about an hour for the final session of #VFD4 with Scale Computing!

#VFD4 Live – CommVault

James Green · Jan 15, 2015 ·

This post is live-blogged from the CommVault presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat.

Overview

This is not CommVault’s first time at Virtualization Field Day – I also saw them at #VFD3. It’s good to have them back! They specifically called out that they had made some major changes since the last time they were here. CommVault has historically been known for data protection and recovery. At this stage in the game, there’s more to it than that, though! The major value in CommVault over another similar product is not just their ability to back up or recover your data, but the ability to index and catalog that data, and make it available to end users in a “versioning” sort of way. The backup target possibilities are pretty diverse – tape, disk, public cloud storage, private cloud storage, and so on. Another big offering is the ability to convert and move workloads from VMware to AWS, or from VMware to Azure, or various other configurations of the same type of migration. At this time, the migration is one-way, but the goal for the future is to be able to move workloads OUT of the public cloud and into your data center with ease as well.

Deep-dive/Demo

I have often been known to say that if you as an IT organization don’t provide a service that users want, they will go to an external public provider to get it. Right off the bat, we saw CommVault offering a Dropbox-for-enterprise sort of solution. That already exists in products like OneDrive, but the difference here is that you can utilize the other features of CommVault with it, like content indexing or backup/versioning. I do see this as something that would satisfy both end users’ needs and IT’s needs from a manageability, compliance, and security standpoint.

We talked at length about the steps CommVault takes to make application-consistent backups efficient and fast. Then we got into the “searchability” of the content store, as well as doing immediate file-level restores from VMs. It sounds like the way it works is similar to the way vPower NFS works with Veeam: it essentially stages the VM to be restored in a temporary location, mounts a temporary NFS volume, and powers on the VM. We can then reach in and pull out the file we’re looking for. There are other ways of getting the data out, like mounting an entire VMDK back to the VM in question. This may be a better method for large file recovery, like a database, for instance.
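
The restore flow, as I understood it, looks roughly like the sketch below. Every object and method name here is a hypothetical placeholder of mine, not CommVault’s (or Veeam’s) actual API; it just captures the stage-on-NFS, power-on, pull-the-file sequence.

```python
# Conceptual flow of an instant file-level restore, as I understood the demo.
# Every class/method below is a hypothetical stand-in, not CommVault's API.
def restore_single_file(backup, esxi_host, file_path):
    staging_ds = backup.expose_as_nfs_datastore()  # stage the backup on a temporary NFS export
    esxi_host.mount(staging_ds)                    # host sees it like any other datastore
    temp_vm = esxi_host.register_vm(staging_ds)    # register the staged VM
    temp_vm.power_on()                             # VM runs directly off the backup copy
    try:
        return temp_vm.read_guest_file(file_path)  # reach in and grab just the file we need
    finally:
        temp_vm.power_off()                        # tear the staging environment back down
        esxi_host.unmount(staging_ds)
```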

Lastly, they showed us a “cloud provisioning portal,” and I can’t quite figure out why I’d want to use that as opposed to something mature and well-known like OpenStack or vCAC. Maybe it’s easier? Maybe it’s because I’m already a CommVault shop and it’s an addition to something I already own? I wish we had more time to talk about this, because I’d like to be clearer on the direction here. Overall, great job by CommVault and, as suspected, it was great to have them back!

That’s it for today! Back tomorrow for the last sessions of VFD4, with Dell and Scale Computing! We’ll start at 8 AM Central. Here’s the full schedule.

#VFD4 Live – Solarwinds

James Green · Jan 15, 2015 ·

This post is live-blogged from the Solarwinds presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat.

Overview

The Solarwinds presentation focused on AppStack, which intends to provide insight all the way through the stack from the application to the storage. The Solarwinds software portfolio in general is massive, and they do an exceptional job of exposing data from all pieces to all the other pieces. Orion is the foundation, and all the other products basically extend Orion. Products can operate without each other, but they work best in tandem.

Deep-Dive/Demo

So AppStack is a common interface that ties together the different extensions like NPM, SAM, etc. AppStack is built into Orion, which means it already has access to all the good stuff. Because AppStack has access to all the data, we can easily identify issues down the stack that may be impacting upstream components. They were very careful about the phrasing surrounding correlation/root cause analysis. Obviously, accurate root cause analysis is very hard, even for people, much less in software. The good news is the massive amount of data provided: even though automated root cause analysis isn’t possible, the tool can absolutely highlight some places to start looking. Of course, another part of this picture is identifying anomalies, which this tool is able to do via baselines (both automatic and manual, as I understand it).
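
Solarwinds didn’t detail the math, but baseline-driven anomaly flagging generally looks something like this sketch: build a rolling baseline and flag samples that drift too far from it. The window size and the three-sigma threshold are arbitrary choices of mine, not anything AppStack exposes.

```python
# Generic baseline-based anomaly flagging, NOT Solarwinds' actual algorithm.
from statistics import mean, stdev

def anomalies(samples, window=20, sigmas=3.0):
    """Flag points that fall outside the rolling mean +/- N standard deviations."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(samples[i] - mu) > sigmas * sd:
            flagged.append((i, samples[i]))
    return flagged

# e.g. latency samples in ms; the spike at the end gets flagged against the baseline
print(anomalies([10, 11, 9, 10, 12, 11, 10, 9, 11, 10,
                 10, 11, 9, 12, 10, 11, 10, 9, 11, 10, 45]))
```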

Honestly, the demo moved quickly and we were very engaged, so I didn’t have time to capture much of it here. Oops 🙂 Kudos to Solarwinds for being the first presenter at this TFD to do a whole session with no slides. We used only a whiteboard and a live UI to get through this two hour session, and although that’s not as easy, it’s what we want as delegates. Well done!!

Last session of the day coming up in a bit with CommVault! Looking forward to this one!

#VFD4 Live – StorMagic

James Green · Jan 15, 2015 ·

This post is live-blogged from the StorMagic presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat.

Overview

The founders of StorMagic have been developing storage solutions for literally decades, and this isn’t their first rodeo. The StorMagic story is simple storage targeted at ROBO applications (which doesn’t mean small, just remote). Central management is critical, but they see many challenges with centralized infrastructure. Their goal is to enable SAN at ROBO sites while retaining the ease of management associated with centralized infrastructure. Traditional shared storage requires far too much CAPEX and OPEX for the remote office, which is the real problem the StorMagic VSA aims to solve: reducing both CAPEX and OPEX for storage in ROBO situations. For me, one of the greatest things about StorMagic is their level of focus. From an organizational perspective, they have really zeroed in on their target market and aren’t trying to be all things to all people. I really appreciate this.

SvSAN Deep-dive/demo

The VSA consumes any sort of storage – internal, DAS, etc. – and is capable of tiering and SSD caching. We had a (too long) discussion about the way you can deploy with as little as 2 nodes, using a third remote or local node as the witness. Delegates were really unsettled by the idea of the witness being “optional” and/or remote, but eventually we had to move on. They do this using what they call a “neutral storage host,” and it can tolerate up to 3000ms of latency, up to 20% packet loss, and only needs 9 kbps of bandwidth per mirror! Locally, 10g links are not required, although they would obviously be nice. It should work fine on 1g. The product is available currently for VMware and Hyper-V. KVM and Xen run in the lab, but they aren’t implementing them for customers yet.
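
The witness discussion really boils down to quorum. Here’s a toy tie-breaker sketch that illustrates why the “optional witness” made delegates nervous; it’s my own simplification, not StorMagic’s actual protocol.

```python
# Toy quorum logic for a 2-node mirror with an optional remote witness.
# My illustration of the split-brain concern, NOT StorMagic's algorithm.
def can_serve_io(node_a_up, node_b_up, witness_up, witness_deployed=True):
    """A surviving side may only keep serving I/O if it can still form a strict majority."""
    votes_up = sum([node_a_up, node_b_up, witness_deployed and witness_up])
    total_votes = 3 if witness_deployed else 2
    return votes_up > total_votes / 2   # strict majority avoids split-brain

# Both nodes up, witness unreachable over the WAN: still a 2-of-3 majority.
print(can_serve_io(True, True, False))                            # True
# Node B down, witness reachable: surviving node + witness form a majority.
print(can_serve_io(True, False, True))                            # True
# Inter-node link cut and no witness deployed: neither side can claim a majority.
print(can_serve_io(True, False, False, witness_deployed=False))   # False
```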

We walked through deploying a new VSA, which was straightforward and simple. Creating a new datastore was equally simple. All the configuration on the back end as far as adding storage to ESXi hosts, etc, is done automatically by the wizard. The only prerequisite is to create a software iSCSI initiator ahead of time on the hosts. Volume snapshots are not an integrated feature at this point, although they do have the technology built and are working on clarifying use case and how to lay out the management of this. It is possible to create a stretched cluster, but you’re then bound by stricter latency and bandwidth requirements.

Great work from StorMagic during this presentation – I think we’re all happy with their focus on their target market, and the way they drive down the expense to put storage in ROBO sites. Join us again shortly for Solarwinds!


