
jdgreen.io

The personal blog of James Green


Announcements

NEW CloudPhysics Offering – Storage Analytics

James Green · Mar 26, 2014

Today, CloudPhysics is entering your storage environment to give you the same level of awesome insight they’ve given you into your compute cluster! The storage analytics aim to provide data about two main struggles that every data center has: storage capacity and storage performance.

Storage Capacity

Isn’t it sickening how quickly you can chew through terabytes of storage by not keeping a close eye on things like VM sprawl? Here’s an excerpt from the official blog post:

There are many fast-and-easy paths to storage waste in a virtualized datacenter. But the path to storage reclamation is typically slow and complicated. Take VM sprawl for example: it takes just seconds to spin up a new VM, but figuring out if and when it needs to be deleted takes hours, if not weeks. You can more easily reclaim CPU and memory resources by powering off the VMs, but powered off VMs still take up disk space. Over time, you may forget about them. That’s just one example of space waste – there are many more.

CloudPhysics is specifically addressing storage-induced capacity problems by providing unique, powerful insights into where and how your storage space is being consumed, along with specific recommendations on how to reclaim the space. We’ve been working hard to develop the algorithms that solve this problem, while making all that complexity transparent to users. The screenshot below, which shows how we address the problem of unused VMs, is just a sample of what you’ll find in our storage analytics.

With the new storage analytics, it’s now stupid simple to find wasted space and take it back. Here’s a screen capture of one of the space reclamation dashboards:

[Screenshot: Unused VMs reclamation dashboard]
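
CloudPhysics packages this analysis up for you, but to make the problem concrete, here’s a rough pyVmomi sketch of the manual version – my own illustration, not CloudPhysics code, and the vCenter address and credentials are placeholders – that lists powered-off VMs and the disk space they still consume:

    # Hypothetical sketch: list powered-off VMs and the space they still consume.
    # Host, user, and password are placeholders - point these at your own vCenter.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
            gb = vm.summary.storage.committed / (1024 ** 3)
            print(f"{vm.name}: {gb:.1f} GB of datastore space, powered off")

    view.Destroy()
    Disconnect(si)

Multiply a handful of forgotten VMs by a few hundred gigabytes each and the excerpt’s point about sprawl makes itself.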

Storage Performance

Another frustration: how much money do you waste buying spindles or upgrading flash cache to meet your performance needs, when what you should be doing is tuning your workload to perform better? You could potentially save boatloads of money if you could fix the problem instead of just throwing spindles at it. The new Storage Analytics can help with that too!

There’s a strong relationship between storage waste and storage performance. Why? Many virtualization users simply overprovision the number of spindles, read/write cache, flash storage, etc. to avoid the pain of troubleshooting storage performance issues. After all, who wants to spend hours and hours combing through performance charts to understand correlations and do root cause analysis? Nobody does – but we have figured out how to leverage big data analytics to do it for you. For example, our fantastic new datastore contention analytics (shown below) tells you when and where you are experiencing contention, and automatically identifies which VMs in your datastore are performance culprits (and which are victims). You can now solve performance issues in literally a few clicks – which is a lot quicker and more efficient than overprovisioning your storage.

You can see from the screenshot below how quickly you can identify a problem and remediate it, instantly saving you pain. It won’t hurt your bottom line either!

[Screenshot: Datastore contention dashboard]
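
To make the culprit/victim idea concrete, here’s a toy Python sketch – emphatically not CloudPhysics’ actual algorithm, and the numbers are invented – that correlates each VM’s IOPS with datastore latency over the same sampling window. VMs whose load tracks the latency spikes are likely culprits; the rest are victims or bystanders:

    # Toy illustration with made-up numbers (not CloudPhysics' algorithm):
    # flag VMs whose I/O load moves in step with datastore latency spikes.
    from statistics import correlation, mean  # correlation() needs Python 3.10+

    datastore_latency_ms = [2, 3, 2, 25, 30, 28, 3, 2, 26, 2]  # 10 samples
    vm_iops = {
        "sql01":   [200, 210, 205, 900, 950, 940, 215, 200, 920, 210],
        "web01":   [50, 55, 52, 48, 50, 53, 51, 49, 50, 52],
        "backup1": [10, 12, 11, 300, 320, 310, 12, 10, 305, 11],
    }

    for name, iops in vm_iops.items():
        r = correlation(datastore_latency_ms, iops)
        verdict = "culprit" if r > 0.8 and mean(iops) > 100 else "victim/bystander"
        print(f"{name}: r = {r:+.2f}, avg IOPS = {mean(iops):.0f} -> {verdict}")

The real product obviously works across thousands of metrics and time windows; the point is just that correlation plus magnitude is enough to separate the noisy neighbor from the VM suffering because of it.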

I know I’d love to see this technology in my data center making my life easier. If you’d like to see it in yours too, you’re in luck! All it takes is signing up for a free 30-day trial! Here’s the link to the sign-up page. If you give it a shot, let me know how it goes in the comments! I’d love to hear what you do with it, and let CloudPhysics know what else you’d like to see.

VCP Recertification – Don’t Be a Baby

James Green · Mar 18, 2014

Earlier this month, VMware announced changes to their certification program that included – among other things – an expiration period for the VCP cert. Many people in the community have more or less freaked out and threatened not to bother certifying in the first place. I have mostly bitten my tongue, but I decided I really do want to weigh in on this topic. Here are just a few of my thoughts:

  • Certification is for you. You’re not punishing anyone but yourself by not certifying. You can refuse to update your VCP because you’re “taking a stand,” but VMware really won’t care. They’ll probably never notice.
  • To be clear, the VCP exam does require some hard work (depending on your expertise). BUT, as I’ve said on Twitter, if you certify and then two years down the road you’re unable to certify again, that’s an unfortunate lack of progress on your part.

. @h0bbel It’s like I said before – if you can’t pass another #VCP 2 years from now, you’re going the wrong direction…

— James Green (@jdgreen) March 18, 2014

  • I would prefer that the VCP brand isn’t watered down. By making folks update their certification, we ensure that only people who are qualified at this point in time are certified. Let me give an example: if you’re coming in to work on my team and do vSphere 5.1/5.5 implementations, I really do not care if you were certified on 3.5. Yes, some low-level components stay the same, but what do you know about SSO? SSO in 5.1 is going to make you look like a fool at the client’s site, and I’m going to lose business.
  • Chris has a replacement award for you if you are still worked up about the changes. Click here to claim it.

If you want to read more about the policy, here is the official page. Please have a look!

In conclusion, I am very much in support of the recertification policy. I am, however, open to hearing more if you are someone who believes it is a bad move. Shout out to me on Twitter or leave a comment!

SDDC Symposium 2014

James Green · Mar 18, 2014

Tech Field Day is planning a Software-Defined Datacenter Symposium! This will feature many key figures from the industry and end-user community, and will include discussions of OpenFlow, software-defined networking (SDN), software-defined storage, converged infrastructure, and the greater software-defined future!

SDDC Symposium 2014 highlights:

  • Architecture in the SDDC Panel – A discussion about the convergence of compute, storage, and networking and what this means for the future of data center architecture.
  • The CapEx / OpEx Battleground Panel – Budgets in the SDDC are tight. Should you budget for traditional big expenditures? Or will monthly fees lure you to the operational model? Panelists discuss the pros and cons of each approach.
  • SDDC in the Sunlight: Real World Use Cases Panel – Technology discussions and protocol arguments solve little. The real world is where SDDC technologies will be proven. The panelists will discuss real world applications of SDDC technology from a business case perspective as well as interesting deployments seen in actual networks.

Details

  • When: Tuesday, April 22, 2014, from 8 AM to 5 PM
  • Where: Santa Clara Convention Center, Santa Clara, CA
  • Who: End-users interested in SDN and SDDC

The good news for you, dear reader, is that I’ve got a promo code! Register for the event using this link to receive $25 off your ticket! You can also use the code ‘JDGREEN’ to register on the Eventbrite website yourself.

Check out the full details of the SDDC Symposium 2014 event at the Tech Field Day website here!

Atlantis ILIO – 100,000 Times Faster!!

James Green · Mar 12, 2014

Alright, so maybe that’s a bit of an exaggeration. But 100k is a real, valuable number. RAM access latencies are measured in nanoseconds, while spinning disk access times are measured in milliseconds (think ~100 ns versus ~10 ms), making RAM roughly 100,000 times faster than spinning disk. Wouldn’t it follow, then, that for ultra-high storage performance we should be making use of RAM? This is exactly what Atlantis has been doing with ILIO.

A few different ILIO solutions have existed for VDI for a while, and now they’ve finally made the leap to include server workloads with ILIO USX. Atlantis ILIO USX In-Memory storage technology enables organizations to abstract any SAN, NAS, DAS, flash, or server RAM to create capacity and performance resource pools, which are then used to dynamically create the optimal type of storage characteristics for each application. RAM can be used either as a storage tier itself, or as an acceleration mechanism for other storage tiers.

How Does Atlantis ILIO Work?

Simply put, the solution uses virtual appliances to consume either memory or network storage. The virtual appliance caches, deduplicates, accelerates – all sorts of other happy words. The appliance then turns around and presents an NFS datastore back to the hosts, and VMs are provisioned on (or svMotion’ed to) this new datastore.
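
To picture that workflow, here’s a generic pyVmomi sketch (assumed names throughout – this is my illustration of the vSphere side, not Atlantis tooling) of mounting an appliance’s NFS export as a datastore and svMotioning a VM onto it:

    # Generic vSphere-side sketch (assumed names, not Atlantis code):
    # mount an appliance's NFS export, then Storage vMotion a VM onto it.
    from pyVmomi import vim

    def mount_and_migrate(host, vm, appliance_ip, export_path):
        # Mount the NFS export the appliance presents as a new datastore.
        spec = vim.host.NasVolume.Specification(
            remoteHost=appliance_ip,   # the ILIO-style virtual appliance
            remotePath=export_path,    # e.g. "/exports/ilio-ds" (hypothetical)
            localPath="ilio-datastore",
            accessMode="readWrite")
        ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)

        # Relocate the VM's disks to the new datastore - this is the
        # non-disruptive svMotion step.
        relocate = vim.vm.RelocateSpec(datastore=ds)
        return vm.RelocateVM_Task(spec=relocate)

In practice the product automates all of this; the point is that from vSphere’s perspective the appliance is just another NFS datastore.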

There are some distinct advantages to the way this product works, and I’d like to list them here.

  • Because this is an all-software solution, there is nothing to ship, nothing to rack in the data center, and you don’t need to procure any additional cooling or power resources.
  • Because of the design, the solution can be completely implemented in hours – if not minutes – depending on the size of your implementation.
  • The implementation is usually completely non-disruptive. (I say “usually” only to account for one-off setups.) Because the ILIO appliance presents an NFS datastore back to the hosts, all one must do to begin reaping the benefits is svMotion (a non-disruptive operation) the VMs over to the new datastore.

The Pinnacle

If you’re looking for some SERIOUS performance, ILIO USX offers the ability to use RAM as a storage tier itself rather than just an acceleration mechanism. This means a few things: your IO is extremely fast, your IO never leaves the host, and your IO is as close as possible to the CPU. The problem with any caching solution is that as soon as you have a cache miss, you’re going down to the same old spinning disk that’s been too slow for years. Not to discount the great benefits of compression, deduplication, write coalescing, and caching, but at the end of the day, going across the network and/or down to spinning disk is suboptimal. ILIO USX In-Memory storage solves this problem entirely and is a great solution for your Tier 1, business-critical applications.
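
Some back-of-the-envelope math (my numbers: roughly 100 ns for RAM, 10 ms for a disk access) shows why even a very high cache hit rate doesn’t save you:

    # Back-of-the-envelope: average latency is dominated by cache misses.
    # Assumed figures: ~100 ns RAM access, ~10 ms spinning-disk access.
    RAM_NS, DISK_NS = 100, 10_000_000

    for hit_rate in (0.90, 0.99, 0.999):
        avg_ns = hit_rate * RAM_NS + (1 - hit_rate) * DISK_NS
        print(f"hit rate {hit_rate:.1%}: average ~{avg_ns / 1000:,.0f} µs")

Even at 99.9% hits, the average access is about 100 times slower than pure RAM, which is exactly the case for keeping the whole tier in memory.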

Paradigm Shift

Something we have to start thinking about, which was discussed during the #VFD3 presentation at Atlantis headquarters, is this: no matter what we do to make storage more efficient, we still haven’t exactly changed the way that storage works. We can add high-performance flash, we can do in-line dedup, but the overall architecture is still the same. If we want to really change the way we do storage, I think we need to consider two things:

  • Using RAM as a storage tier à la USX or ULLtraDIMM
  • Reaping the locality benefits of DAS as well as the control and scale of NAS/SAN with hyper-convergence

If you’d like to learn more about Atlantis ILIO solutions, here are some links to whitepapers and datasheets as well as all of the Tech Field Day presentations at Atlantis that I participated in at #VFD3.

Product page | http://www.atlantiscomputing.com/products/usx
FAQ | http://www.atlantiscomputing.com/downloads/Atlantis_ILIO_USX-Technical_FAQ.pdf
#VFD3 | http://techfieldday.com/appearance/atlantis-computing-presents-at-virtualization-field-day-3/

Other VFD3 posts:

Paul Meehan | http://paulpmeehan.com/2014/03/06/atlantis-computing-different-approach-vfd3/
Eric Wright | http://www.discoposse.com/index.php/2014/03/05/tech-field-day-vfd3-atlantis-computing-ilio-usx-bring-the-usxy-back/
Marco Broeken | http://www.vclouds.nl/atlantis-ilio-usx-in-memory-performance-for-servers/
Andrea Mauro | http://vinfrastructure.it/2014/02/atlantis-ilio-usx-unified-software-defined-storage/

DISCLOSURE: Travel and expenses for Tech Field Day – Virtualization Field Day 3 were provided by the Tech Field Day organization. No compensation was received for attending the event. All content provided in my posts is of my own opinion based on independent research and information gathered during the sessions.

Storage Has a Problem; We Have Coho Data

James Green · Mar 11, 2014

Storage has come a long way in the last decade, and we shouldn’t take those massive advances for granted. But with the way data centers work today, the conventional array model has limitations on both the business side and the technical side. Conventional SAN models don’t scale well, and in a time when “web-scale” is a buzzword, this is a problem. What if, rather than trying to predict your storage needs over the next 5 years when making a purchase, you bought only what you need right now? This is the essence of web-scale: buy only what you need, and add more as soon as you need it, without a hassle.

Before moving on, let’s just list a few problems currently facing a conventional SAN offering:

  • Difficult to scale out
  • Need to buy spindles for IOPS
  • Expensive, proprietary hardware
  • High cost per IO with spinning disk

There are even more, but since we’ve established that something needs to change, what can we do? Enter Coho Data (@cohodata).

Scale-out Storage from Coho Data

Enabled by software-defined networking (OpenFlow), Coho Data has managed to offer a linear scale-out storage solution with virtually no upper limit on IOPS and capacity. The DataStream 1000 is a combination of a few things: an SDN switch (an Arista 7050T) and a pile of commodity hardware, including 3 TB spinning disks for capacity and Intel MLC PCIe flash for performance. This. baby. screams. But fast isn’t everything. Andy Warfield (CTO at Coho) said, “If you buy storage now for the next 5 years, you’re going to be very sad in about two years.” Statistics show that the average business’s need for storage capacity nearly doubles every two years. At that rate, it’s important to be able to grow quickly. With the DataStream 1000, you just add another 2U box (containing two MicroArrays) and plug it in; the rest is taken care of in software.

The Team

As a CIO, one is concerned not only with the technical features of a product but also with the company that stands behind it. Rest assured: the Coho team is BRILLIANT. While at #VFD3, we spoke with Andy Warfield (@andywarfield) and Forbes Guthrie (@forbesguthrie). If you’re interested in the leadership of the company, do a bit of digging on the backgrounds of these two and I think you’ll be satisfied.

Get After It!

I believe that the DataStream 1000 is a fantastic solution for any business that isn’t sold on hyper-convergence but needs a very agile, high-performance storage array. This is one of the most exciting products I’ve seen recently. Coho regularly puts on webinars and events that you can attend for free to get more information for yourself. Below are some links to more information about the company, as well as to the videos of the Tech Field Day interviews where I got to hear from Andy and Forbes. I hope you’ll do yourself a favor and take a look!

  • Tech Field Day talks | http://techfieldday.com/appearance/coho-data-presents-at-virtualization-field-day-3/
  • Live Demo Friday | http://www.cohodata.com/live-demo-friday
  • Eric Shanks | http://theithollow.com/2014/03/initial-musing-coho-data-scale-networking/
  • David Davis | http://www.virtualizationsoftware.com/designing-next-generation-vsphere-storage-systems/
  • Scott D Lowe | http://www.enterprisestorageguide.com/dont-let-raw-storage-metrics-solely-drive-buying-decision
  • Jeff Wilson | http://agnosticcomputing.com/2014/03/06/vfd3-day-2-kicking-it-with-coho/
  • Eric Wright | http://www.discoposse.com/index.php/2014/03/06/tech-field-day-vfd3-coho-data-coho-ly-moly-this-is-a-cool-product/

DISCLOSURE: Travel and expenses for Tech Field Day – Virtualization Field Day 3 were provided by the Tech Field Day organization. No compensation was received for attending the event. All content provided in my posts is of my own opinion based on independent research and information gathered during the sessions.

