
jdgreen.io

The personal blog of James Green


Tech Field Day

Programmable ASICs Will Change Infrastructure Investments

James Green · Feb 17, 2016 ·

With any technology, whether it’s consumer technology or enterprise IT, it’s only a matter of time after purchase before the technology is outdated. By now, most IT departments are extremely good at understanding the refresh cycle and how hardware depreciates over time. I learned of some technology at Cisco Live Europe this week that is going to revolutionize the way the buying cycle works (at least with regard to switches). To understand why this is so important, we first need to understand the silicon that is used in a piece of IT hardware.

Processing Power

There are a few distinct ways that physical processing power can be developed and placed inside an infrastructure component like a switch.

  • General-purpose CPU (x86)
  • Field-Programmable Gate Array (FPGA)
  • Application-Specific Integrated Circuit (ASIC)

The design decision regarding which of these types of chips to use comes down to balancing tradeoffs. As you can see in Figure 1, which is from Dave’s slide deck, the tradeoffs are as follows:

  • CPU: Sacrifice performance for application flexibility.
  • FPGA: Sacrifice cost for performance and application flexibility.
  • ASIC: Sacrifice flexibility for raw performance.
Figure 1 – The tradeoffs between x86 CPUs, FPGAs, and ASICs

There are unique use cases for choosing one over the others, but consider the two ends of the spectrum. A desktop computer uses an x86 CPU that can run the operating system and all sorts of applications. An average Cisco switch, on the other hand, is built with ASICs designed specifically to switch frames or route packets (and other networky stuff). The switch’s ASICs perform a limited, specific set of functions extremely well, but they don’t do anything else. The CPU in the desktop computer can do just about anything, but that flexibility comes at the cost of performance.

The Depreciation Challenge

In the case of enterprise switching, the design decision is usually to use ASICs for the performance boost compared to the other two options. The biggest problem with this is that if a new ASIC-level feature is finished 3 months after the switch was purchased, it won’t be available to customers on the 3-month-old switch. A new batch of ASICs will need to be produced with the feature integrated, and only newly manufactured switches will include it.

Over the course of the IT hardware lifecycle, decisions to replace gear are based on things like support contracts, but they’re also based on how available features stack up against business requirements. The challenge with traditional ASICs is that a business requirement might call for a feature that isn’t available on switches that are only 6 months old. Unfortunately, this can leave IT organizations no choice but to stomach the cost of a new hardware purchase.
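
To put rough numbers on the problem (the figures below are my own hypothetical example, not anything from Cisco’s presentation), here is what an early, feature-driven replacement does to the book value of a switch under simple straight-line depreciation:

```python
def remaining_book_value(purchase_price: float, lifespan_months: int, age_months: int) -> float:
    """Straight-line depreciation: book value left on the asset at a given age."""
    monthly_depreciation = purchase_price / lifespan_months
    return max(purchase_price - monthly_depreciation * age_months, 0.0)

# A hypothetical $30,000 switch on a 60-month refresh cycle, replaced at 6 months
# because a required feature only exists in newer silicon:
stranded = remaining_book_value(30_000, 60, 6)
print(f"Book value written off by the early replacement: ${stranded:,.0f}")  # $27,000
```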

This is not brand-new technology (it’s been in the Catalyst 3850 for a while now, I think), but Cisco has developed something I just heard about for the first time that will make a big difference in addressing business challenges like this.

Unified Access Data Plane ASIC

To make a long story short, the UADP chip (the name has to do with unifying wired and wireless access, which is a separate topic) shown in the header picture (autographed by Peter, one of the folks heavily involved in the project) is a programmable ASIC. This means that it comes with the application-specific performance that switch manufacturers have always chosen ASICs for, but it can be upgraded with new hardware-level code after it’s already in the field.

This means that if Cisco develops a new feature that solves the business challenge an organization is facing, and the 3850’s hardware is capable of it, a microcode update to the ASIC can enable that feature in a situation that would otherwise require the purchase of a new switch under a standard ASIC architecture. This is huge: businesses will be able to delay the purchase of new equipment until it becomes necessary for some other reason. There will, of course, still be reasons that switches need to be replaced, and other components will still become outdated. But the ability to introduce new hardware-level features on a unit that is already deployed in the field is super exciting to me.
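
One way to think about the buying-cycle difference (this is a toy model of my own, not Cisco software or the actual UADP microcode update mechanism, and the feature names are made up) is that everything hinges on whether a unit already in the field can accept new hardware-level features at all:

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    """Toy model of the buying-cycle difference -- not Cisco's software."""
    programmable: bool                          # UADP-style vs. fixed-function ASIC
    features: set = field(default_factory=set)

    def apply_microcode_update(self, new_features: set) -> None:
        # A programmable ASIC can pick up new hardware-level features in the
        # field; a fixed-function ASIC only ever has what it shipped with.
        if not self.programmable:
            raise RuntimeError("feature requires new silicon (i.e. a new switch)")
        self.features |= new_features

deployed = Switch(programmable=True, features={"ipv4_routing", "qos"})
deployed.apply_microcode_update({"new_encap_feature"})  # hypothetical feature name
print(sorted(deployed.features))
```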

The UADP chip has many other sweet features as well, so there’s more to be excited about than just the programmability. But the chip’s microcode update ability, which enables the balance of tradeoffs shown in Figure 2, is the most exciting feature to me.

Figure 2 – Balanced tradeoffs with the UADP chip

To learn more about the big picture of the UADP technology, be sure to watch the Tech Field Day Extra video – the entire thing is riveting, and I don’t say that lightly. Thanks to Cisco Live EMEA and Tech Field Day for having us out for this presentation!

#VFD5 in Boston – PernixData

James Green · Jun 24, 2015 ·

Ah, the belle of Tech Field Day… That may or may not be an exaggeration, but either way the Tech Field Day community LOVES PernixData. They show up and talk about the innovative and awesome technology they’re building. Also, Satyam is a hoot! At previous Tech Field Day events, PernixData revealed FVP for the first time and then announced FVP 2.0 at a later TFD. They’re not letting off the gas, because today at VFD5, Satyam announced PernixData Architect! This tool will analyze and understand an environment to provide insight and advice at a ‘meta’ level.

PernixData Architect

An example that Satyam gave during the presentation could be paraphrased like this: “When you ask Siri whether you need a raincoat, Siri associates the raincoat with rain and shows you the weather. In the infrastructure, traditional monitoring/planning/design tools will show you ‘If you pull a network cable, the link will be down,’ when what would be helpful is ‘If you pull a network cable, the link to the SAN will be down, causing datastores A, B, and C to be unavailable.’” The ability of the tool to understand the impact of events and/or design decisions on the environment is monumental.

Satyam shared with the audience three design principles that guided the development of PernixData Architect. The product that came to be is a result of these three considerations:

  • Get a lot of data (your own data). The data that someone else collects may not be useful to you, and it’s possible that no one else is collecting the type of data you need.
  • Control the user experience. Ensure that the UI is streamlined, and the dashboard shows extremely distilled, helpful information.
  • Make recommendations based on the application. A VM performing poorly doesn’t necessarily mean there’s an infrastructure performance problem. Perhaps the application isn’t doing what it should be doing, or what it was spec’ed to do.

The tool will show a characteristic IO “fingerprint” of a VM, including precise information about the size and frequency of IOs, as opposed to most tools, which would show an average of IO operations. For planning purposes, these two numbers are wildly different, and the former is obviously much more helpful. It also shows the “working set” for each VM in the data center, meaning the “hot” data for that VM is highlighted, which can help calculate the need for cache, higher tiers of storage, etc. At this point the tool focuses on storage only. That’s not an issue, as the product has to ship at some point. Network, compute, etc. will come later.
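
To make the contrast concrete, here is a minimal sketch (with a made-up IO trace; this is not PernixData’s code) of why a single average hides exactly the information the fingerprint and working set expose:

```python
from collections import Counter

# Made-up trace of (block_address, io_size_bytes) tuples for one VM.
trace = [(1001, 4096), (1001, 4096), (1002, 65536), (1003, 4096),
         (1001, 8192), (1004, 65536), (1002, 4096), (1001, 4096)]

sizes = [size for _, size in trace]
average_io = sum(sizes) / len(sizes)                  # what a typical tool reports
fingerprint = Counter(sizes)                          # the actual IO size distribution
working_set = {block for block, _ in trace}           # distinct blocks touched ("hot" data)

print(f"Average IO size: {average_io:.0f} bytes")     # hides the 4K/64K split entirely
print(f"IO fingerprint:  {dict(fingerprint)}")        # e.g. mostly 4K with some 64K
print(f"Working set:     {len(working_set)} blocks")  # input to cache/tier sizing
```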

As exciting as all of this is, that was only one of the announcements!


#VFD5 in Boston – OneCloud Software

James Green · Jun 24, 2015 ·

OneCloud is a new company, founded in 2013 to solve the problem of backup to the public cloud being expensive and complicated. It was founded on three major premises (< lol, punny), which are the following:

  • DR needs to be simple
  • DR needs to be cost-optimized
  • A DR solution needs to be complete

These certainly are true premises, and if OneCloud really addresses them well, this product could be major! I’m excited to know more, and I’m especially curious about all the inner workings detailed in the embargoed session! The sad part of not being able to attend in person is having to miss the great relationship building, and the private sessions.

The OneCloud DR platform seems to work about like you would think. On-premises, the OCVA plugs into vCenter and takes application-consistent snapshots, which are then stored on lower-tier storage, compressed, and shipped off to the AWS VPC that OneCloud has created with your AWS credentials. VMs are protected via protection groups with different RPO settings, etc.

Something neat that Eric Shanks caught is that the tool has the ability to resize VMs during failover depending on the type of failover being performed. For example, the VMs can be made half the size of production when initiating a test failover. Alastair discovered that when a production failover is initiated, the tool does NOT initiate shutdown of the production workloads as part of the “run book.” I totally agree with him that this would be a much-desired feature. In a moment of panic, I don’t want to be worrying about whether I’m going to end up with a conflict when a certain workload ends up running on both sides. And lastly, Marco had a brilliant idea – providing some WorkSpaces cloud desktops for administrators to use during the outage. If local site connectivity is an issue, this might be the ticket to administrators being able to easily continue managing the environment.
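
To make the protection-group idea concrete, here is a rough sketch of how I picture those pieces fitting together – the class and field names below are entirely hypothetical and are not OneCloud’s actual API or object model:

```python
from dataclasses import dataclass

@dataclass
class ProtectionGroup:
    """Hypothetical sketch -- the field names are mine, not OneCloud's."""
    name: str
    vms: list
    rpo_minutes: int                  # how often snapshots must ship to the AWS VPC
    test_failover_scale: float = 0.5  # e.g. bring up test copies at half production size

    def failover_size(self, prod_vcpus: int, prod_ram_gb: int, test: bool) -> tuple:
        # Production failovers keep full sizing; test failovers are scaled down.
        scale = self.test_failover_scale if test else 1.0
        return max(1, int(prod_vcpus * scale)), max(1, int(prod_ram_gb * scale))

tier1 = ProtectionGroup("tier-1-apps", ["sql01", "app01"], rpo_minutes=15)
print(tier1.failover_size(prod_vcpus=8, prod_ram_gb=64, test=True))   # (4, 32)
print(tier1.failover_size(prod_vcpus=8, prod_ram_gb=64, test=False))  # (8, 64)
```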

I really enjoyed what I was able to see of OneCloud, and look forward to hearing more as it becomes public info. I’m certainly looking forward to a GA release.

#VFD5 in Boston – VMTurbo

James Green · Jun 24, 2015 ·

Today, Virtualization Field Day 5 kicked off in Boston. I have a few thoughts on each of the presenters today that I would like to share. All of the delegates and presenters were awesome. Great lineup from a delegate’s perspective, by the way! Stephen really outdid himself this time!

The Green Circle folks were back for the third VFD in a row. Eric Wright kicked off the presentation with an overview of the product and why it’s needed, followed by Charles Crouchman (CTO) with some more details, and finally Endre Sara with some great demo material and deep discussion on the inner workings of the tool.

I’ve seen VMTurbo present at all three of their VFD appearances, and for some reason I didn’t grasp the true power of the tool until this time. I think part of that had to do with a skewed first impression. I was first introduced to VMTurbo as a monitoring tool of sorts. If you look at VMTurbo through the lens of a monitoring tool, and only a monitoring tool, you’re going to get the wrong idea. Like not even close. The power of VMTurbo comes from its ability to take action based on calculated metrics where many parts of the infrastructure/application stack are considered. “Automation” is much more descriptive of VMTurbo’s product than “Monitoring.”

I still find the market economy analogy really intriguing. It makes a lot of sense to me, but I’ve wondered – as have others (like Justin Warren, if I’m not mistaken) – whether the analogy actually maps properly to infrastructure components. It’s not that I think the mechanics and algorithms aren’t doing helpful things. It’s that I wonder if market terms that would mean one thing in the world of finance mislead me about what the technology is actually doing or calculating in the world of IT infrastructure. I’m a believer in what the product can do; I just wonder if I – the administrator – am confused about the technology because of the complex analogy. Either way, I love that EVERYTHING they do can be viewed with this paradigm, it would seem. The continuity is impressive.
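
For what it’s worth, here is how I picture the analogy at its simplest. This is a toy sketch of my own, not VMTurbo’s actual algorithm or pricing function: resources “sell” capacity, workloads “buy” it, the price climbs steeply as utilization approaches saturation, and buyers migrate toward the cheapest seller:

```python
def price(utilization: float) -> float:
    """Toy price curve: a resource gets sharply more 'expensive' near saturation."""
    utilization = min(utilization, 0.999)   # avoid dividing by zero at 100%
    return 1.0 / (1.0 - utilization) ** 2

# CPU utilization per host; a VM "shopping" for capacity buys from the cheapest seller.
hosts = {"host-a": 0.35, "host-b": 0.80, "host-c": 0.95}
for host, u in hosts.items():
    print(f"{host}: utilization {u:.0%}, price {price(u):.1f}")
print("cheapest seller:", min(hosts, key=lambda h: price(hosts[h])))
```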

I’m really looking forward to hitting up Eric for the NFR he promised me 😉 and spending some time with the tool in the lab. I’ll definitely put some thoughts up once I’ve had more experience with it.

#VFD4 Live – Scale Computing

James Green · Jan 16, 2015 ·

This post is live-blogged from the Scale Computing presentation at #VFD4. The structure and organization will undoubtedly be poor, since this is all captured on the fly from our chat. This is the last session of #VFD4 and we’re all exhausted, but still sad to be done 🙁

Overview

Scale was initially founded as a scale-out SAN solution, and then slowly grew into the hyper-converged offering that they’re known for today. They target SMB and mid-market companies exclusively, and they aim to simply deliver virtual infrastructure to these smaller shops. I mentioned this earlier in the week, and I’ll say it again: their focus is admirable, and I wish more companies had it. Lots SAY they do, but they don’t. I believe Scale really does.

Deep-dive/Demo

We took a stroll through the interface and saw features comparable to vMotion, HA, Admission Control, VM-based site-to-site replication, and more. It absolutely lacks some features that are “enterprise” in nature, but this thing does more than enough for customers in their target market. I’m actually having a hard time writing about this because I’m so captivated by the solution. During the presentation, I said:

Being such an invested @vmware proponent, I feel a bit guilty liking @ScaleComputing so much 🙂 #VFD4

— James Green (@jdgreen) January 16, 2015

The pricing model is straightforward and simple, and entry is right around $25k for everything. Node types (HC1000, 2000, 4000) can be mixed and matched, which allows you to start small and grow without pain. Clusters can be up to 8 nodes. An 8-node cluster with 256 GB RAM in each node is a sizable cluster; I don’t view this as a limitation because if you scale larger than this, you’re outside Scale’s target market anyhow. Storage data is striped across all the nodes using their proprietary storage architecture called SCRIBE, which we dove into a bit. They did a very deep dive on SCRIBE at Storage Field Day 5, and you can view the recording of that HERE.

Before we wrapped up, we got to do the fun stuff! We started pulling cables and watching what happened, and how the system handled things like recovery. Yes, of course, an HA failover is nothing new, but seeing how we would utilize the Availability features of the product was nice.

And that puts a bow on Virtualization Field Day 4! Everyone has been awesome, I am exhausted, and all of our brains are mush by now. Thank you to Stephen, Tom, Claire, and all the sponsors and delegates for making this event happen! It’s always awesome!
