
jdgreen.io

The personal blog of James Green


Technology

Horizon Workspace and SAML 2.0 with External Applications

James Green · Jun 29, 2015 ·

With Horizon Suite 6.0/6.1, it’s awesome to be able to use Horizon Workspace as a ‘dashboard’ of sorts that gives access to published applications (via RDSH), full desktops, and user files. Since the latest version of the Horizon Client supports both full desktops and published applications, the use case for Workspace is in accessing other resources. One of the most impressive things that can be done with Workspace is to provide single sign-on services for external applications a business uses.

Any external application can be integrated with Workspace if you have the development muscle to make it happen. The integration is made possible by leveraging Security Assertion Markup Language (SAML), pronounced “sam-el”, to pass authentication information to the external service. VMware also maintains a list of applications (called the ‘cloud application catalog’) that are already prepared and tested for integration with Workspace. I’m going to show how one of these applications can easily be configured to allow users already authenticated to Workspace to connect without the need to log in again. In this tutorial, I’m going to use Salesforce as the external application, because it’s a widely used platform and hopefully relevant to many organizations.
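To make the flow a bit more concrete, here is a minimal sketch of the kind of SAML 2.0 assertion an identity provider like Workspace passes to a service provider such as Salesforce. The XML is hand-written and heavily simplified (no signature, timestamps, or attribute statements), and the issuer URL, email address, and audience are made-up example values, not captured from a real exchange.

```python
# Parse the fields a service provider cares about out of a (simplified) SAML 2.0
# assertion. Element names follow the SAML 2.0 assertion namespace; all values
# below are illustrative placeholders.
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

sample_assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://workspace.example.com/SAAS/metadata/idp.xml</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      jgreen@example.com
    </saml:NameID>
  </saml:Subject>
  <saml:Conditions>
    <saml:AudienceRestriction>
      <saml:Audience>https://saml.salesforce.example</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>
"""

root = ET.fromstring(sample_assertion)
issuer = root.find("saml:Issuer", NS).text.strip()
name_id = root.find(".//saml:NameID", NS).text.strip()
audience = root.find(".//saml:Audience", NS).text.strip()

# The service provider verifies the signature (omitted here), checks that the
# issuer matches the configured IdP and that the audience matches its own
# entity ID, and only then trusts the NameID as the logged-in user.
print(f"Issuer:   {issuer}")
print(f"NameID:   {name_id}")
print(f"Audience: {audience}")
```

In the catalog-based setup, much of this is already templated for you, which is why the integration can go so quickly.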

Preparation

This process can seem fairly intimidating if you don’t have a development background. Rest assured that although the process is somewhat complex behind the scenes, most of the legwork has already been done by VMware and the external service. In the easier cases, an integration with an application from the cloud application catalog can be done in 15 minutes or less. There are only a few requirements to get this working:

  • Administrative access to the organization’s Salesforce portal
  • Administrative access to Workspace

Depending on the platform, this process may or may not be entirely self-service. Some platforms like ADP do work, but require intervention from folks at the other end. In the case of Salesforce, however, it can all be done by internal IT staff.


#VFD5 in Boston – PernixData

James Green · Jun 24, 2015 ·

Ah, the belle of Tech Field Day… That may or may not be an exaggeration, but either way the Tech Field Day community LOVES PernixData. They show up and talk about the innovative and awesome technology they’re building. Also, Satyam is a hoot! At previous Tech Field Day events, PernixData revealed FVP for the first time, and then announced FVP 2.0 at a later TFD. They’re not taking their foot off the gas, because today at VFD5, Satyam announced PernixData Architect! This tool will analyze and understand an environment to provide insight and advice at a ‘meta’ level.

PernixData Architect

An example that Satyam gave during the presentation could be paraphrased like this: “When you ask Siri whether you need a raincoat, Siri associates the raincoat with rain and shows you the weather. In the infrastructure, traditional monitoring/planning/design tools will show you ‘If you pull a network cable, the link will be down,’ when what would be helpful is ‘If you pull a network cable, the link to the SAN will be down, causing datastores A, B, and C to be unavailable.’” The ability of the tool to understand the impact of events and/or design decisions on the environment is monumental.
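To illustrate the idea (this is my own toy example, not PernixData’s implementation), the kind of impact analysis Satyam described can be thought of as walking a dependency graph, so that “pull this network cable” resolves to the datastores and VMs that would actually go dark:

```python
# Toy impact analysis: given a hypothetical topology of who depends on what,
# walk everything downstream of a failed component. Component names are made up.
from collections import deque

depends_on_me = {
    "nic-uplink-2": ["san-link-1"],
    "san-link-1": ["datastore-A", "datastore-B", "datastore-C"],
    "datastore-A": ["vm-sql01"],
    "datastore-B": ["vm-web01", "vm-web02"],
    "datastore-C": [],
}

def impact_of(failure: str) -> list[str]:
    """Breadth-first walk of every component downstream of the failure."""
    affected, queue = [], deque([failure])
    while queue:
        component = queue.popleft()
        for dependent in depends_on_me.get(component, []):
            if dependent not in affected:
                affected.append(dependent)
                queue.append(dependent)
    return affected

print(impact_of("nic-uplink-2"))
# ['san-link-1', 'datastore-A', 'datastore-B', 'datastore-C',
#  'vm-sql01', 'vm-web01', 'vm-web02']
```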

Satyam shared with the audience three design principles that guided the development of PernixData Architect. The product that came to be is a result of these three considerations:

  • Get a lot of data (your own data). The data that someone else collects may not be useful to you, and it’s possible that no one else is collecting the type of data you need.
  • Control the user experience. Ensure that the UI is streamlined, and the dashboard shows extremely distilled, helpful information.
  • Make recommendations based on the application. A VM performing poorly doesn’t necessarily mean there’s an infrastructure performance problem. Perhaps the application isn’t doing what it should be, or was spec’ed to do?

The tool will show a characteristic IO “fingerprint” of a VM, including precise information about the size and frequency of IOs, whereas most tools would show only an average of IO operations. For planning purposes, those two numbers are wildly different, and the former is obviously much more helpful. It also shows the “working set” for each VM in the data center, meaning the “hot” data for that VM is highlighted, which can help calculate the need for cache, higher tiers of storage, etc. At this point the tool focuses on storage only. That’s not an issue, as the product has to ship at some point; network, compute, etc. will come later.
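To show why the distinction matters, here is a rough sketch with made-up numbers (my own illustration, not Architect’s actual math): the same IO trace yields one misleading average, a much more descriptive size distribution, and a working-set estimate based on the distinct blocks touched.

```python
# Compare a single average IO size against an IO-size "fingerprint" (a histogram),
# and estimate the working set from the distinct blocks touched. The trace is a
# hypothetical list of (block_address, io_size_in_KB) pairs.
from collections import Counter

trace = [(10, 4), (11, 4), (10, 4), (12, 4), (500, 256), (501, 256), (10, 4), (11, 4)]

sizes = [size for _, size in trace]
average_io = sum(sizes) / len(sizes)      # 67 KB -- a size almost no real IO has
fingerprint = Counter(sizes)              # {4: 6, 256: 2} -- the actual mix
working_set_kb = sum(size for _, size in {(blk, sz) for blk, sz in trace})

print(f"Average IO size: {average_io:.0f} KB")
print(f"IO size distribution: {dict(fingerprint)}")
print(f"Working set (distinct blocks touched): {working_set_kb} KB")
```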

As exciting as all of this is, that was only one of the announcements!


#VFD5 in Boston – OneCloud Software

James Green · Jun 24, 2015 ·

OneCloud is a new company (founded in 2013) that set out to solve the problem of backup to the public cloud being expensive and complicated. It was founded on three major premises (< lol, punny), which are the following:

  • DR needs to be simple
  • DR needs to be cost-optimized
  • A DR solution needs to be complete

These certainly are true premises, and if OneCloud really addresses them well, this product could be major! I’m excited to know more, and I’m especially curious about all the inner workings detailed in the embargoed session! The sad part of not being able to attend in person is having to miss the great relationship building, and the private sessions.

The OneCloud DR platform seems to work about like you would think. On-premises, the OCVA plugs into vCenter and takes application-consistent snapshots, which are then stored on lower-tier storage, compressed, and shipped off to the AWS VPC that OneCloud has created with your AWS credentials. VMs are protected via protection groups with different RPO settings, etc. Something neat that Eric Shanks caught is that the tool has the ability to resize VMs during failover depending on the type of failover being performed. For example, the VMs can be made half the size of production when initiating a test failover. Alastair discovered that when the production failover is initiated, the tool does NOT initiate shutdown of the production workloads as a part of the “run book.” I totally agree with him that this would be a much-desired feature. In a moment of panic, I don’t want to be worrying about whether I’m going to end up with a conflict when a certain workload ends up running on both sides. And lastly, Marco had a brilliant idea: providing some WorkSpaces cloud desktops for administrators to use during the outage. If local site connectivity is an issue, this might be the ticket to the administrators being able to easily continue to manage the environment.
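For a sense of what a protection group with an RPO and a test-failover resize rule might look like, here is a hypothetical sketch. The class, field names, and values are mine for illustration only; they are not OneCloud’s actual API or defaults.

```python
# Model a DR protection group with an RPO and a resize factor applied only to
# test failovers. Everything here is a placeholder, not OneCloud's data model.
from dataclasses import dataclass

@dataclass
class ProtectionGroup:
    name: str
    vms: list[str]
    rpo_minutes: int            # how much data loss is acceptable
    test_resize_factor: float   # e.g. 0.5 = run test failovers at half size

    def failover_vcpus(self, production_vcpus: int, test: bool) -> int:
        """Shrink the recovered VM when this is only a test failover."""
        if test:
            return max(1, int(production_vcpus * self.test_resize_factor))
        return production_vcpus

tier1 = ProtectionGroup("tier1-apps", ["sql01", "web01"],
                        rpo_minutes=15, test_resize_factor=0.5)
print(tier1.failover_vcpus(production_vcpus=8, test=True))   # 4
print(tier1.failover_vcpus(production_vcpus=8, test=False))  # 8
```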

I really enjoyed what I was able to see of OneCloud, and look forward to hearing more as it becomes public info. I’m certainly looking forward to a GA release.

#VFD5 in Boston – VMTurbo

James Green · Jun 24, 2015 ·

Today, Virtualization Field Day 5 kicked off in Boston. I have a few thoughts on each of the presenters today that I would like to share. All of the delegates and presenters were awesome. Great lineup from a delegate’s perspective, by the way! Stephen really outdid himself this time!

The Green Circle folks were back for the third VFD in a row. Eric Wright kicked off the presentation with an overview of the product and why it’s needed, followed by Charles Crouchman (CTO) with some more details, and finally Endre Sara with some great demo material and deep discussion of the inner workings of the tool.

I’ve seen VMTurbo present at all three of their VFD appearances, and for some reason I didn’t grasp the true power of the tool until this time. I think part of that had to do with a skewed first impression. I was first introduced to VMTurbo as a monitoring tool of sorts. If you look at VMTurbo through the lens of a monitoring tool, and only a monitoring tool, you’re going to get the wrong idea. Like not even close. The power of VMTurbo comes from its ability to take action based on calculated metrics where many parts of the infrastructure/application stack are considered. “Automation” is much more descriptive of VMTurbo’s product than “Monitoring.”

I still find the market economy analogy really intriguing. It makes a lot of sense to me, but I’ve wondered – as have others (like Justin Warren, if I’m not mistaken) – whether the analogy actually maps properly to infrastructure components. It’s not that I think the mechanics and algorithms aren’t doing helpful things. It’s that I wonder if market terms that would mean one thing in the world of finance mislead me about what the technology is actually doing or calculating in the world of IT infrastructure. I’m a believer in what the product can do; I just wonder if I – the administrator – am confused about the technology because of the complex analogy. Either way, I love that seemingly EVERYTHING they do can be viewed through this paradigm. The continuity is impressive.
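For readers new to the analogy, here is a toy version of how I understand it (my own framing, not VMTurbo’s engine): each host “sells” capacity at a price that rises with utilization, and a workload “buys” from the cheapest seller, which is essentially placement and load balancing expressed in market terms.

```python
# Toy "supply and demand" placement: price capacity by utilization and let a VM
# buy from the cheapest host. Host names and utilization figures are made up.
def price(utilization: float) -> float:
    """Price rises sharply as a resource approaches saturation."""
    return 1.0 / max(1e-6, 1.0 - utilization)

hosts = {"esx01": 0.85, "esx02": 0.40, "esx03": 0.60}   # hypothetical CPU utilization

quotes = {host: price(util) for host, util in hosts.items()}
best_host = min(quotes, key=quotes.get)

for host, quote in sorted(quotes.items(), key=lambda kv: kv[1]):
    print(f"{host}: price {quote:.2f}")
print(f"The VM 'buys' capacity from {best_host}")
```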

I’m really looking forward to hitting up Eric for the NFR he promised me 😉 and spending some time with the tool in the lab. I’ll definitely put some thoughts up once I’ve had more experience with it.

AutoLab 2.6 on Ravello – Build the DC (Part 2)

James Green · May 7, 2015 ·

Well, the break between Part 1 and Part 2 was intended to be overnight. It turns out that it’s been 6 days as I battled through the struggle of trying to get my ISOs uploaded to my Ravello portal. To make a long story short, the Ravello VM Import Tool is immature and not well suited for use on any network other than a pristine, enterprise, business-grade network. Repeated attempts from multiple locations to upload my ISO images using the tool failed miserably. I opened a support case, but they weren’t able to do anything about it. I was able to upload a file via FTP in about an hour, while the same file’s transfer with the VM Import Tool was aborted after 26 hours.

All of that being said, I did get to chat with some internal resources about this issue, and they’re aware of the need for improvement in this area. They were apologetic and helpful in getting me a workaround. I would assume this issue will be resolved in short order. Also, I’m using Ravello at no charge on an extended trial, so I really can’t be too demanding 🙂

In the meantime, I created an Xubuntu VM from within Ravello, downloaded the files there, and then uploaded them using the CLI version of the import tool. This whole process took roughly 2-3 hours, which is perfectly acceptable.

Uploading the ISO images with the CLI version of the VM Import Tool

Now that the images are finally available, it’s time to get back to building the AutoLab!

Build the DC

The first thing we need to do is connect the ISOs to the VMs. Following the deployment guide, I connected all the ISOs to the appropriate VMs on the canvas. One interesting thing to note in comparison to the traditional AutoLab deployment is that we won’t be accessing the build share on the NAS over the network. We’re mounting the ISOs directly to the NAS, and if I’m not mistaken, it has some additional automation built in to copy those files off the mounted ISOs. I also noted that the NAS platform has changed from FreeNAS to Ubuntu!

Mounting the ISO images to the VMs on the Canvas

Once the VMs are configured properly, we’re ready to publish. This will actually deploy the application to either Google Compute Engine or Amazon Web Services. In my case, I chose to go with the same configuration as the deployment guide – Amazon cloud, optimized for performance. I set the Auto-Stop time to 6 hours because I’m not sure how long the build will take to complete. On my MacBook Pro, it usually takes about 3 hours, but just in case, I’m giving it some slack. One additional note – be sure that ‘Start all VMs automatically’ under the Advanced heading is unchecked. Just like the traditional AutoLab build, we need to start each VM in a particular order to help the build process complete. My first attempt to publish the application was unsuccessful, because on my eval account, I can’t run more than 6 VMs at once. Since my Xubuntu desktop was still powered on from circumventing the Import Tool’s issues, I had to power it down before I could publish the AutoLab application.
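Summarizing the settings above as data makes the two gotchas easy to double-check before publishing. The field names here are my own placeholders, not Ravello’s API, and the VM counts are just illustrative.

```python
# Sanity-check the publish settings described above: auto-start must stay off so
# the AutoLab build order can be controlled manually, and the trial account caps
# concurrent VMs at 6. All names and numbers are placeholders.
publish_settings = {
    "cloud": "Amazon",
    "optimize_for": "performance",
    "auto_stop_hours": 6,                    # the build normally takes ~3 hours
    "start_all_vms_automatically": False,    # AutoLab needs an ordered, manual power-on
}

TRIAL_VM_LIMIT = 6
lab_vm_count = 6          # placeholder: VMs in the AutoLab application
helper_vm_running = True  # the Xubuntu desktop used to work around the Import Tool

assert publish_settings["start_all_vms_automatically"] is False
if lab_vm_count + int(helper_vm_running) > TRIAL_VM_LIMIT:
    print("Over the trial's concurrent-VM limit; power off the helper desktop first.")
```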

Publishing the application to AWS

Once the application has been published, it’s time to power on the NAS and the DC and begin the first phase of the build. As in the traditional AutoLab deployment, the DC build takes about an hour. I’m knee-deep in a Horizon 6.1 Suite deployment at work, so I took this opportunity to read the App Volumes Deployment Guide while I monitor the build process on the DC 🙂

The validate script confirms the build is good

Once the DC build process has completed, it needs to be signed off on. We’ll run the Validate script on the desktop to be sure it’s happy and that everything is squared away for moving on to the next step – vCenter.

