Installing F5 BIG-IP on SimpliVity using VMware ESXi

I’ve had the opportunity over the last few months to play with some pretty incredible hardware. I sometimes feel like I’ve been given the opportunity to test-drive the computing industry’s analog to a high-performance automobile. I’ve had a few moments where I’ve had to do a double take, and ask myself: “Did that task really complete in 12 seconds? Wow, that’s fast.” OK, so I didn’t say “wow,” but this is a public blog so let’s all just pretend I did.

I’m excited about the possibilities that the strategic partnership between F5 and SimpliVity will offer to you, so today I want to discuss how the BIG-IP Virtual Editions (VE) can be installed on the SimpliVity hyper-converged platform.

I put this post together with Jon Mark Sano, my solutions architect counterpart from SimpliVity. I also want to thank Kent Munson, principal solution engineer here at F5, who physically installed these three OmniCube CN-3400s in our lab, and Chris Rose from SimpliVity who provided a handy-dandy pre-deployment template that made planning for the provisioning of these OmniCubes a breeze.

Now, let’s get down to business.

I began the process with a relatively blank VMware vCenter install. I always like to use Distributed Virtual Switches (dVS) to eliminate the repetitive work of creating the same port group on each host. Since SimpliVity supports a dVS for its controller, storage, and federation networking requirements, I used one, making sure to raise the MTU to 9,000 on the dVS to meet the storage networking requirements.

I also created three other port groups: an external/internet-facing network, an internal/infrastructure network, and a trunked (VLAN-tagged) network to handle the rest.

Since this is an article about deploying F5 Virtual Editions on SimpliVity, I want to point out that the BIG-IP VE can very easily participate in what VMware refers to as Virtual Guest Tagging (VGT). This example is based on SimpliVity’s OmniCube product line; the same process applies identically to SimpliVity’s OmniStack integrated solutions.

The OmniCube’s physical 10GbE interfaces are tagged with multiple VLANs. This allows the dVS to have port groups that are either tagged with a single VLAN (like the Ext and INF port groups in my environment, VLANs 128 and 115 respectively) or tagged with multiple VLANs (like the TrunkAll port group in my environment, VLANs 1-4094). Setting it up this way enables the guest VM (in this case, a BIG-IP VE) to tag traffic with a VLAN before sending it to the dVS.
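The tagging behavior described above can be sketched in a few lines of Python. This is a simplified, hypothetical model of my own, not anything generated by vSphere; dvPG-INF-115 and dvPG-TrunkAll are the real port-group names in my environment, while dvPG-Ext-128 is a placeholder name for the external port group:

```python
# Simplified model of which guest VLAN tags each dVS port group will carry.
# dvPG-INF-115 and dvPG-TrunkAll match this environment; dvPG-Ext-128 is a
# hypothetical name for the external port group.
PORT_GROUPS = {
    "dvPG-Ext-128": {128},                 # single tagged VLAN (external)
    "dvPG-INF-115": {115},                 # single tagged VLAN (infrastructure)
    "dvPG-TrunkAll": set(range(1, 4095)),  # VGT trunk, VLANs 1-4094
}

def passes_vlan(port_group: str, vlan_id: int) -> bool:
    """Return True if traffic tagged with vlan_id is carried by port_group."""
    return vlan_id in PORT_GROUPS.get(port_group, set())
```

With this model, a BIG-IP VE attached to dvPG-TrunkAll can tag traffic with any VLAN from 1 to 4094 and have the dVS carry it, while the single-VLAN port groups only carry their one VLAN.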

Before installing the BIG-IPs, I made sure I had the proper .OVA file downloaded and accessible so I could import it into vSphere. For this demonstration, I downloaded the 11.6HotFix6ESXi.OVA file from the F5 Downloads page.

After logging in and finding the download link, I chose the 11.6 virtual edition. I accepted the EULA and then chose the proper “hypervisor flavor,” which for us is the image file set for VMware ESXi server 5.0 through 5.5. I made sure to download that file to a file share accessible by this vCenter.

You may have noticed that I used the vSphere C# client. I did that because SimpliVity has a plugin that is used to manage and report on the status of the OmniCubes. We’ll see that plugin in action later in this post.

OK, let’s start the actual deployment of the F5 BIG-IP VEs. Open the vSphere client and go to the File menu and choose Deploy OVF Template.

This opens a dialog box that asks you to specify the location of the .OVA file you just downloaded. Click Next.

When you get to this dialog screen, click Next again.

This is the second time you’ll see F5’s EULA; just go ahead and click Accept and Next.

Here, I supplied the name BIG-IP11.6.6-01 as the VM name for my first deployed BIG-IP. Earlier, I created a folder in my VMs and Templates view called BIG-IPs. I made sure to highlight this folder so that the newly created BIG-IP would automatically be placed there. Click Next.

There are 1, 2, 4, and 8 CPU configuration options available, each with a corresponding 2 GB RAM per CPU. This allows you to size the VE for the modules you will provision on it. You can learn more about these options by checking out our Recommended Practices Guide for Deploying F5 BIG-IP Virtual Editions in a Hyper-Converged Infrastructure. In this case, I chose the 2 CPU/4 GB RAM option because I’ll be using only two modules. After selecting your configuration option, click Next.
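As a quick sanity check, the sizing scheme just described (2 GB of RAM per CPU, in 1/2/4/8-CPU configurations) can be expressed as a tiny helper. This function is my own illustration, not an F5 tool:

```python
# Supported BIG-IP VE vCPU configurations, per the deployment options above.
SUPPORTED_VCPUS = (1, 2, 4, 8)

def ve_ram_gb(vcpus: int) -> int:
    """RAM (in GB) paired with a supported BIG-IP VE vCPU count (2 GB per vCPU)."""
    if vcpus not in SUPPORTED_VCPUS:
        raise ValueError(f"unsupported vCPU count: {vcpus}")
    return vcpus * 2

# The 2 CPU / 4 GB option chosen in this walkthrough:
print(ve_ram_gb(2))  # 4
```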

On this screen, I chose the cluster in which I will install this first BIG-IP. Click Next.

I’m installing it on the node with the hostname simp2.bd.f5.com, which is important because I’ll deploy the second BIG-IP on the simp3.bd.f5.com host. Click Next.

I selected the shared storage volume created by the SimpliVity plugin to deploy these virtual editions instead of the local storage associated with this specific hypervisor host or the other network attached storage volumes I have in my environment. Click Next.

I chose to thin provision the virtual edition. Because SimpliVity manages this storage, the VE would be thin provisioned even if I had chosen Thick Provision Lazy Zeroed, which is the default for the BIG-IP VE. Click Next.

Here is where the networks I created earlier come into play. I associated the management interface with my infrastructure network: dvPG-INF-115. In order to showcase at a later date the Virtual Guest Tagging functionality of the BIG-IP, I associated the other three interfaces with the TrunkAll port group: dvPG-TrunkAll. Click Next.

At this point, review your selections. If you’re happy with them, just click Finish.

Now, VMware is deploying the BIG-IP version 11.6 HF6 virtual edition. This process took about a minute on the SimpliVity OmniCube system that we have in our lab.

This is a perfect example of why I’m so excited to have such fast toys to play with in our lab. This SimpliVity OmniCube is fast. And while not part of this specific how-to post, I had a Maverick to Ice Man moment when Jon Mark suggested I try using the SimpliVity – Clone Virtual Machine context menu on one of the BIG-IPs. The OmniCube cloned the 3.24 GB BIG-IP in under 12 seconds. All I could say was “wow” (or something like that).

Once that completed, I ran through the process again, deploying a second BIG-IP VE (on a second OmniCube, as mentioned earlier) to be part of a high-availability active/standby pair.

Here’s where I chose the other SimpliVity OmniCube for the second BIG-IP VE. Except for changing the VM name to BIG-IP11.6.6-02 and placing the VE on this OmniCube, the other deployment choices remained the same as for the first BIG-IP VE.

One of the great features of the SimpliVity vSphere Client plugin is this screen. It would be much more impressive if I had more than my 2 BIG-IPs installed, but you can see how SimpliVity is managing the storage backend for my VEs.

This screen is accessed by highlighting the sjc-bd datacenter object in the Hosts and Clusters inventory view and then selecting the SimpliVity tab. Notice that even with just these two VEs, I’m getting a 3.8:1 efficiency score, which means the storage used is only about a quarter (roughly 26%) of what it would be without SimpliVity. That’s kinda cool. Oh, and if I were doing this in production rather than simply for your amusement, I’d also be able to set up local and remote backup jobs to save me from disaster, or even from myself…
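For the curious, the arithmetic behind that efficiency score is simple. Here is a one-liner of my own to illustrate it (not a SimpliVity API):

```python
def physical_fraction(efficiency_ratio: float) -> float:
    """Fraction of storage actually consumed, given an X:1 efficiency score."""
    return 1.0 / efficiency_ratio

# A 3.8:1 score means roughly 26% of the raw footprint is consumed.
print(f"{physical_fraction(3.8):.1%}")  # 26.3%
```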

Now that we have two BIG-IPs installed on different hosts within the Simplivity-01 cluster, we need to perform a few more configuration tasks to make sure that if we’re using the Distributed Resource Scheduler (DRS), it doesn’t inadvertently vMotion these BIG-IP VEs onto the same host.

To accomplish this, just highlight the Simplivity-01 cluster object in the left-hand navigation pane, select the DRS tab, and click Edit.

This dialog box will open. Highlight Rules and click Add…

Here’s where you provide a name for your new rule. From the Type dropdown menu, select Separate Virtual Machines and click Add…

From the Virtual machine list, place checkmarks next to the two BIG-IP devices you just created, and click OK.

This rule will prevent the two BIG-IP devices from running on the same SimpliVity OmniCube.
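Conceptually, a Separate Virtual Machines rule simply guarantees that no two of its member VMs share a host. A minimal sketch of that check (my own model, not the vSphere API; the VM and host names are the ones used in this walkthrough):

```python
def separation_satisfied(rule_members, placement):
    """True if no two VMs listed in rule_members run on the same host.

    placement maps VM name -> host name.
    """
    hosts = [placement[vm] for vm in rule_members if vm in placement]
    return len(hosts) == len(set(hosts))

# The two BIG-IP VEs deployed above, each on its own OmniCube host:
placement = {
    "BIG-IP11.6.6-01": "simp2.bd.f5.com",
    "BIG-IP11.6.6-02": "simp3.bd.f5.com",
}
print(separation_satisfied(["BIG-IP11.6.6-01", "BIG-IP11.6.6-02"], placement))  # True
```

If DRS were ever asked to place both VEs on the same host, this condition would fail, which is exactly what the anti-affinity rule prevents.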

Next, highlight Virtual Machine Options in the left-hand pane. For each of the BIG-IPs you just deployed, change the automation level to Disabled, and click OK.

This will prevent these two BIG-IP devices from being vMotioned by DRS, which is important because F5 does not support vMotioning a BIG-IP device while it is in the active state. Since BIG-IPs are real-time networking devices, F5 best practice is to never disrupt the active BIG-IP in the HA Active/Standby pair.

During a vMotion, the hypervisor first performs a Stun During Page Send (SDPS) operation, which slows the processing speed of the VM so the memory copy can catch up and the quiesce time is kept as short as possible. The VM is then quiesced (all processing and disk activity ceases for hundreds of milliseconds, up to 4 seconds) as it is actually flipped from one host to the other.

As you can imagine, slowing the processing speed of a real-time operating system network device and actually ceasing all CPU processing of the BIG-IP can have disastrous consequences on BIG-IP’s ability to proxy traffic. That is why F5 does not support vMotioning an active BIG-IP VE.

Keep in mind that this does not prevent you from manually vMotioning the standby BIG-IP, which may be necessary to perform maintenance operations.

And that’s it. You’ve got the facts on deploying BIG-IP VEs on the SimpliVity hyper-converged platform.

If you want to read more about the business benefits of doing just that, head over to the SimpliVity blog where my F5 colleague Frank Strobel has a guest post that details all the ways the F5 and SimpliVity partnership is redefining business mobility. Also, you can sign up for our joint March 2 webinar about deploying F5 and SimpliVity solutions in your enterprise.

Published Feb 04, 2016
Version 1.0
