SuperSizing the Data Center: Big Data and Jumbo Frames
Data center transformation discussions too often overlook the impact on the network – and its necessary transformation. One of the reasons IPv6 migration is moving slower than perhaps it should, given the urgent need for more IP addresses (to support all those cows connecting to the Internet), is the sheer magnitude of such an effort. Without the ability for IPv6-only nodes to talk to IPv4-only nodes, a lot of careful planning has to happen around the globe to ensure success and continued communication between the two incompatible protocols.

In many ways, Jumbo Frames – despite their performance advantages – suffer from the same technological incompatibility. Remember that Jumbo Frames (9000 bytes) are incompatible with regular-sized Ethernet frames (1500 bytes), and for much the same reason: you simply can’t stuff 9000 bytes into a frame designed to hold 1500. And one of the basic rules of Ethernet is that the smallest MTU (maximum transmission unit) used by any component in a network path determines the maximum MTU for all traffic that flows along that path.

And yet the benefits of Jumbo Frames have been demonstrated many times. They reduce fragmentation overhead (the process of splitting data into chunks small enough to fit into a 1500-byte frame), which translates into lower CPU overhead on hosts. They also allow for more aggressive TCP dynamics, which results in greater throughput and better responses to some kinds of loss. But even though Jumbo Frames can deliver an increase in throughput along with a simultaneous decrease in CPU utilization, they are rarely, if ever, used in a data center network. That, however, is changing.
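For a sense of scale, here is a minimal back-of-the-envelope sketch (in Python, with illustrative numbers – not a benchmark) of the two points above: fewer frames means fewer per-packet headers and interrupts for hosts to process, and the effective path MTU is simply the minimum MTU of the hops along the way.

```python
# Rough illustration: how many packets it takes to move 1 GB of payload
# at the standard 1500-byte MTU versus a 9000-byte jumbo MTU. Header
# sizes assume plain IPv4 (20 B) + TCP (20 B), no options.

IP_TCP_HEADERS = 40           # bytes of IPv4 + TCP header per packet
PAYLOAD = 1_000_000_000       # 1 GB of application data

def frames_needed(mtu: int, payload: int = PAYLOAD) -> int:
    """Packets required when each carries (mtu - headers) bytes."""
    per_frame = mtu - IP_TCP_HEADERS
    return -(-payload // per_frame)   # ceiling division

std = frames_needed(1500)     # 684,932 packets
jumbo = frames_needed(9000)   # 111,608 packets
print(f"standard: {std:,} frames, jumbo: {jumbo:,} frames "
      f"({std / jumbo:.1f}x fewer packets to process)")

# The basic Ethernet rule from the text: the smallest MTU on any hop
# caps the MTU for the entire path.
path_hops_mtu = [9000, 9000, 1500, 9000]   # one legacy switch in the path
print("effective path MTU:", min(path_hops_mtu))
```

Roughly a 6x reduction in packets processed per gigabyte – which is where the CPU savings come from – but note the second print: one 1500-byte hop drags the whole path back down.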
You might recall some predictions with respect to 10 Gigabit Ethernet adoption in the data center:

"We expect the Ethernet Switch market to experience two significant years of market growth in 2013 and 2014 from the migration of servers towards 10 Gigabit Ethernet," said Alan Weckel, Senior Director of Dell'Oro Group. "We believe that in 2013, most large enterprises will upgrade to 10 Gigabit Ethernet for server access through a mix of connectivity options ranging from blade servers, SFP+ direct attach and 10G Base-T."

-- Data Center to Drive Ethernet Switch Revenue Growth through 2016, According to Dell'Oro Group Forecast

Historically in the switching market, the deployment of 10G in core networks and the use of Jumbo Frames went pretty much hand-in-hand. Until recently, however, 10G just wasn’t making its way into the data center (costs were too high), and the only place Jumbo Frames were really seen was within storage networks, particularly in conjunction with FCIP implementations. For the most part, a lack of support within the data center infrastructure and no real urgency for the efficiency gains that come from Jumbo Frames (and the fact that the Internet does not use Jumbo Frames end-to-end, which pretty much kills the value proposition) meant enterprise organizations looked at Jumbo Frames with a “someday, but not right now” attitude. But with the increasing adoption of virtualization and the movement of 10G networks into data centers (in part driven by virtualization), Jumbo Frames are becoming more of a reality for a larger population of organizations.

Consider the following support and recommendations for Jumbo Frames within VMware’s documentation:

TCP Segmentation Offload and Jumbo Frames: Jumbo frames must be enabled at the host level using the command-line interface to configure the MTU size for each vSwitch. TCP Segmentation Offload (TSO) is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level.
-- ESX 4.0 Config Guide, page 57

Optimizing vMotion Performance: Use of Jumbo Frames is recommended for best vMotion performance. -- vSphere 4.0 System Admin Guide, page 188

vSphere 4 Performance: Jumbo Frames is one of the suggested means of improving CPU performance with respect to vSphere. -- CPU Performance Enhancement Advice (Table 22-6, page 278)

Add in cloud computing and a desire to more quickly move big data (virtual machines) over the WAN to cloud providers for a variety of business initiatives – a process in which the number of frames sent and low latency are key to success – and Jumbo Frames suddenly start looking a lot more like a requirement than a “Yeah, yeah, we’ll get to that eventually. Maybe.”

Virtualization and cloud computing are transformative technologies. As some have often – and loudly – reminded us, the network is part of the data center, and indeed an integral part of it. While we tend to focus on the management, provisioning, and automation of the data center and its cultural impact, we should not overlook the impact that these technologies and the changes they bring are having – and will have – on the network. If cloud and virtualization and consumerization and emerging technologies like HTML5 are going to transform the data center, that transformation will necessarily include the network. Ultimately, support for Jumbo Frames will be a requirement – a checkbox item – for every component in the data center.

Sometimes It Is About the Hardware
Live Migration versus Pre-Positioning in the Cloud
F5 and VMware – One Step Closer to the Cloud as a Seamless Data Center Extension
Cloud is an Exercise in Infrastructure Integration
Performance in the Cloud: Business Jitter is Bad
Like Cars on a Highway

F5 Long Distance VMotion Solution Demo
Watch how F5's WAN Optimization enables long distance VMotion migration between data centers over the WAN. This solution can be automated and orchestrated and preserves user sessions/active user connections allowing seamless migration. Erick Hammersmark, Product Management Engineer, hosts this cool demonstration. ps232Views0likes0CommentsCloudFucius Dials Up the Cloud
According to IDC, the worldwide mobile worker population is set to increase from 919.4 million in 2008 (29% of the worldwide workforce) to 1.19 billion in 2013 (34.9% of the workforce). The United States has the highest percentage of mobile workers in its workforce, with 72.2% of the workforce mobile in 2008, growing to 75.5% – 119.7 million mobile workers – by the end of the forecast period. The U.S. will remain the most highly concentrated market for mobile workers, with three-quarters of the workforce being mobile by 2013, while Asia/Pacific (excluding Japan) represents the largest total number of mobile workers throughout the forecast, with 546.4 million mobile workers in 2008 and 734.5 million in 2013.

This means more workers will be using mobile devices, not tied to an office cube, and will need access back to the corporate network or applications hosted in the Cloud. Enterprises and management are faced with a potentially contradictory business situation: the level of employee collaboration is on the rise, yet at the same time, work locations and work hours are changing and growing. Additionally, companies understand the importance of providing access to their critical systems, even during a disaster – and that doesn’t necessarily mean a major tornado, flood, hurricane, earthquake or other natural phenomenon. What does an enterprise do when it’s so cold and snowy that employees can’t get to the office? Declare a “snow day” and close its doors? Certainly not. What does an employee do when they are sick, injured, or their child is home from school? Depending on the severity, they might be able to work from home. As for the users, it’s not just a bunch of office employees and road warriors accessing shared files; it’s also consultants, contractors, telecommuters, partners and customers using home computers and mobile devices to get the job done.
Squeezed in the middle are the IT guys, facing the demands of both management and users along with ever expanding and evolving security requirements. SSL VPN has become the mainstream technology of choice for remote access, and Infonetics reports that worldwide SSL VPN gateway revenue increased 13.9% to $116.8M in 4Q09 and will grow 19% to $138.7M by 4Q10. Traditionally, corporate VPN controllers have been deployed in-house or in the corporate data center, since the needed resources were also located there. Management and control over that VPN has been critical, since it’s the gateway to the corporate network along with much of the sensitive info that resides ‘on-the-inside.’ Plus, *most* VPN controllers are full appliances – dedicated/branded hardware with the vendor’s code baked in. Finally, the advancement of cloud computing has become an enticing choice for IT departments looking to deploy corporate systems and sensitive resources for user and customer access.

Enter FirePass SSL VPN Virtual Edition. A couple of weeks ago F5 released FirePass v7, improving SSL VPN functionality, scalability, and third-party integration, and offering new flexible deployment options, including a virtual appliance. Virtualization, as a technology, has reached a point of widespread adoption, and many customers have requested the option of running FirePass as a virtual appliance. Providing a virtual edition of FirePass allows customers to potentially save money by adding SSL VPN functionality to their existing virtual infrastructure. With FirePass VE, you get better scalability and flexibility from being able to spin up and spin down virtual FirePass instances across the globe, in much the same way we talk about BIG-IP appliances managing virtualized environments around the world. FirePass Virtual Edition is the full-fledged, full-featured FirePass code and currently runs on VMware ESX* and ESXi 4.0*.
It’s vMotion enabled, you can cluster for config-sync and load balance VMs, and service providers can run multiple VMs on one system for a hosted VPN service. FirePass VE provides flexibility, scalability, context, and control, particularly for small and medium enterprises whose budgets might still be tight but who need a remote access solution. It’s also a perfect solution for enterprises that need a remote access business continuity solution.

*Asterisk alert: If you are like me and see a little * after something, you immediately drop to the bottom fine print to find the catch. FirePass VE is sold and supported just like FirePass hardware and is fully supported on the VMware products listed above. VMware also has a link on their website about FirePass VE/VMware interoperability. As with any piece of software, there are minimum hardware and configuration requirements along with recommended VM provisioning, but actual performance may vary depending on the target system. The FirePass v7 VE release notes (logon may be required) provide the VMware system minimum requirements. Just want to properly set expectations, especially with that pesky asterisk. :-)

And one from Confucius: A man who has committed a mistake and doesn't correct it is committing another mistake. ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11

F5 Friday: Seconds Too Late and More Than a Few Bytes Short
Correcting some misperceptions regarding ADCs, virtualization, and the use of Cisco as the definitive yardstick for measuring the ADC market.

A recent article penned by analyst Jim Metzler asks “Can application delivery controllers support virtualization?” A fair question, especially when one digs into the eventual migration and portability of virtual machines across disparate cloud computing deployments based on just such support. But the conclusion reached is misleading and does a disservice to the entire load balancing/application delivery controller industry.

Caveat: Having been under fire from vendors and readers alike in the past due to editorial discretion regarding content changes before publishing, I must note that the misleading subhead “where application delivery controllers for virtualization fall short” was not Jim’s creation. I am, however, taking him to task for using Cisco as the yardstick for application delivery controller capabilities, in part because it provided the rationale from which the misleading subhead was derived. The overall suggestion of evaluating application delivery controllers with an eye toward integration with an organization’s virtualized provisioning system of choice is a good one, but one section of the aforementioned article leaps out as inaccurate and misleading:

Where application delivery controllers for virtualization fall short [registration required]: Moving a VM between servers in disparate data centers is very challenging, and in some cases there is nothing that an ADC can do to respond to these challenges. Cisco and VMware, for example, have stated that when moving VMs between servers in disparate data centers, the maximum roundtrip latency between the source and destination VMware ESX servers cannot exceed 5 milliseconds. The speed of light in a combination of copper and fiber is roughly 120,000 miles per second. In 5 ms, light can travel about 600 miles.
Since the 5 ms is roundtrip delay, the data centers can be at most 300 miles apart. That 300-mile figure assumes that the WAN link is a perfectly straight line between the source and destination ESX servers and that the data being transmitted does not spend any time at all in a queue in a router or other device. Neither assumption is likely to hold.

Using Cisco as your yardstick for application delivery controller capabilities leads to inaccurate analysis of the market and is patently unfair. When writing an article on the state of networking, would you use Juniper or Extreme as your measuring stick for switches instead of Cisco? Unlikely. If you are evaluating routers, switches, telepresence, or a variety of other network-focused solutions, then you use Cisco as your yardstick: it is the leading provider in most things layer 2-3, and it certainly defines that industry and sets the bar for the state of the market and its capabilities. But when you start looking at layer 4-7 (application delivery), it is Cisco’s yardstick, and not application delivery controllers in general, that falls short. Most of the industry has long since moved past the capabilities of Cisco’s application delivery solutions, and Cisco is not the leader in that market. Using its technology as a yardstick for the rest of the players, and then generalizing its inability to perform in this given scenario across the entire market, is therefore misleading. It results in an inaccurate depiction of what the market is capable of doing.

IT’S THE INTEGRATION – INSIDE and OUT

F5 handles this scenario just fine, thank you very much. While it is true that a simple load balancer would likely find this scenario difficult to support, an advanced application delivery controller that incorporates WAN optimization integrated with the provisioning systems has no problem whatsoever.
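The article’s speed-of-light arithmetic, by the way, is easy to verify. A small sketch using the same rough 120,000 miles-per-second figure quoted above (real links add queuing and serialization delay, so treat the result as an upper bound):

```python
# Given a round-trip latency budget and a propagation speed of roughly
# 120,000 miles/s through mixed copper/fiber, how far apart can the
# source and destination ESX hosts be? Upper bound only: queuing,
# serialization, and non-straight-line routing all eat into the budget.

SPEED_MILES_PER_SEC = 120_000

def max_separation_miles(rtt_budget_ms: float) -> float:
    one_way_sec = (rtt_budget_ms / 1000) / 2   # half the budget each way
    return SPEED_MILES_PER_SEC * one_way_sec

# VMware's stated 5 ms RTT ceiling -> about 300 miles, as in the article.
print(f"{max_separation_miles(5):.0f} miles apart, at most")
```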
I submit a variety of documentation and demonstrations of such functionality for your perusal:

Long Distance vMotion Demo
F5’s vSphere Solution
vMotion Deployment Guide
vMotion Deployment Guide v10

The reason an F5 BIG-IP has no problem with this scenario is its integration – both with the provisioning systems and internal to its core platform. F5 doesn’t just have a WAN optimization solution; it has a fully integrated WAN optimization feature set that is part of a larger unified application delivery network. It is contextually aware of where data is coming from and going to, and it can apply a broad set of policies to accelerate and optimize data traversing disparate networks. More important, perhaps, is the external integration of BIG-IP with orchestration/provisioning systems. Between the two, an F5-VMware solution has no problem traversing WAN links, because F5 not only manages the migration from one data center to another (be it cloud or traditional) but simultaneously manages application connectivity and ensures continuity of use before, during, and after such a migration. This becomes important when addressing this next point from the article:

To support VMs moving between servers in disparate data centers, one can extend the VLAN between the VM and the ADC in the originating data center to the ADC in the receiving data center and then proceed as if the VM were being moved between servers in the same data center. How this is done tends to be specific to each ADC vendor.

There is no arguing the point: migrating a virtual machine between two disparate locations is complex. There’s an entire industry of solutions cropping up to deal with managing the complex networking necessary to essentially bridge the local data center with remote cloud computing environments (CohesiveFT, for example, and I’m certain Cisco, with its layer 2-3 expertise, has this well in hand).
But this statement focuses altogether too tightly on the network layer and fails to consider the application layer and the complexity associated with that migratory process. Most ADCs cannot simultaneously move a virtual machine successfully across a less-than-optimal WAN link and maintain application availability. Live migration. F5 can, and does, using global application delivery and application delivery controllers. And if you don’t think that’s useful, then consider what it will take to implement dynamic cloud-bursting capability, i.e. on-demand, cross-cloud elastic scalability with live migration and no downtime. Exactly. You have to manage the WAN link and ensure the VM can move from one physical location to another successfully while maintaining application availability until the move is complete, then distribute requests across the two live instances until such time as the secondary instance is no longer necessary based on real-time demand, and then return to a single instance. All without disrupting service. Jim is right in that there is no solution today that can do that – but that’s today, not tomorrow or next week or next month. And if you’re watching the number two in the market, you’ll probably miss when number one announces it can.

F5 clobbers Cisco in application delivery controller market share
F5 Networks: Charging Ahead
GARTNER REPORT: F5 Commands Worldwide Market Share Lead for Application Delivery Controllers
Learning from F5’s Success Against Cisco

It doesn’t make sense to watch number two in the market to see where it’s going, because it’s always going to be seconds too late and more than a few bytes short in delivering the solutions to the day’s most challenging problems. Most of the application delivery controllers on the market can’t support the scenario described in the article. True. But that’s just most, not all, and certainly not F5. F5 isn’t leading the application delivery market without good reason.
It’s exactly our understanding of the complex relationship between applications and networking, and of the value of integration with the broader data center ecosystem, that gives us the edge to develop the solutions necessary not just for traditional application availability and performance challenges, but for the challenges emerging from the rapid adoption of virtualization and cloud computing.

Related Posts:
All F5 Friday Entries on DevCentral
Google claims analyst research firm site is an attack site, serving up malware
Firefox Reaches 20% Market Share for First Time Ever - ReadWriteWeb
Hindsight is Always Twenty-Twenty
Is Vendor Lock-In Really a Bad Thing?
The API Is the New CLI
Pursuit of Intercloud is Practical not Premature
Location, Location, Location
Cloud Balancing, Cloud Bursting, and Intercloud
Getting Around That Pesky Speed of Light Limitation
Intercloud: The Evolution of Global Application Delivery
The day of the virtual desktop has come...and gone
Return of the Web Application Platform Wars

F5 Friday: Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache
BIG-IP WOM and NetApp Flexcache speed movement of your VMs across the WAN.

One of the major obstacles to the concept of cloud computing and “on-demand” is implementing the “on-demand” piece of the equation. Virtualization in theory allows organizations to shuffle virtual machine images of applications to and fro without the Big Hairy Mess that’s generally involved in physically migrating an application from one location to another. Just the differences in hardware – and thus potential conflicts between hardware drivers, and the inevitable “lack of support” for some piece of critical hardware – can doom an application migration. Virtualization, of course, removes these concerns and moves the interoperability issues up to the hypervisor layer. That makes migration a much simpler process and, assuming all is well at that layer, mitigates many of the issues that had been present in the past with moving an application – such as ensuring all the right files and adapters and connections move with it. It’s an excellent packaging scheme that enables migration as well as it does rapid provisioning.

The problem, of course, has been the network. Virtual images aren’t small by any stretch of the imagination, while Internet connectivity has always been more constrained. Organizations did not run out and increase the amount of bandwidth they had available upon embarking on their virtualization journey, and even if they did, they still have little to no control over the quality of that connection. So while it was possible in theory to move these packages of applications around to and fro, it wasn’t always feasible. Thus solutions are appearing that address these problems and make it not only possible but feasible to migrate virtual images on-demand. NetApp Flexcache is just one such solution. Flexcache leverages data reduction and caching to ease the burden that transferring such “big data” places on the network.
Alone it is a powerful addition to vMotion, but it’s focused on storage – on the image, on data. It doesn’t necessarily address many of the core network issues that can cause a storage vMotion to fail. That’s where we come in, because F5 BIG-IP WOM (WAN Optimization Module) does address those core network issues and makes it possible to successfully complete a storage vMotion across the WAN. Application migration, on-demand.

Today’s F5 Friday is a guest post by Don MacVittie who, as you may know, keeps a close eye on storage and WAN optimization concerns and technologies in his blog. So without further ado, I’ll let Don explain more about the combined F5 BIG-IP WOM and NetApp Flexcache solution for long distance transfer of virtual machines.

VMware vMotion allows you to transfer VMs from one server to another, or even from one datacenter to another, provided the latency between the datacenters is small. It does this in a two-step process that first moves the image, and then moves the running “dynamic” portions of the VM. Moving the image is much more intensive than moving the dynamic bits: the image is everything you have on disk for the VM, while the dynamic part is just the current state of the machine. Moving the image is referred to as “storage vMotion” in VMware lingo. NetApp Flexcache enhances the experience by handling the transfer of the image for you, making it possible to utilize Flexcache’s data reduction and cache refresh mechanisms to transfer the image for the vMotion system. While Flexcache alone is a powerful addition to vMotion, it does not address latency issues, and if the network is lossy it will suffer performance degradation, as any application will. F5 BIG-IP WAN Optimization Module (WOM) boosts the performance and reliability of your WAN connections, be they down the street or on a different continent.
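To see why data reduction and a healthy WAN link matter so much here, consider a hypothetical sizing sketch. The image size, link speed, and 4:1 reduction ratio below are illustrative assumptions, not measured results:

```python
# Hypothetical sizing sketch: time to push a VM image across a WAN link,
# with and without the kind of data reduction a WAN optimizer or cache
# layer can provide. All numbers are illustrative.

def transfer_seconds(image_gb: float, link_mbps: float,
                     reduction_ratio: float = 1.0) -> float:
    """reduction_ratio = 4.0 means only 1/4 of the bytes cross the wire."""
    bits = image_gb * 8 * 1000**3          # decimal GB -> bits
    return bits / (link_mbps * 1000**2) / reduction_ratio

raw = transfer_seconds(40, 100)            # 40 GB image, 100 Mbps link
optimized = transfer_seconds(40, 100, 4.0) # assumed 4:1 data reduction
print(f"raw: {raw/60:.0f} min, optimized: {optimized/60:.0f} min")
# raw: 53 min, optimized: 13 min
```

Nearly an hour on the wire, unoptimized, for a single modest image – and that is before loss and latency throttle TCP below the link’s nominal rate, which is exactly the gap the WOM side of the solution targets.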
In the datacenter-to-datacenter scenario, utilizing iSessions, two BIG-IP WOM devices can drastically improve the performance of your WAN link. Adding F5 BIG-IP WOM to the VMware/Flexcache architecture provides you with latency mitigation techniques, loss prevention mechanisms, and more data reduction capability. As shown in this Solution Profile, a VMware/Flexcache/WOM solution greatly increases the mobility of your VMs between datacenters. It also allows you to optimize all traffic flowing between the source and destination datacenters, not just the vMotion and Flexcache traffic. While the solution involving Flexcache (diagrammed in the above-mentioned Solution Profile) is more complex, a generic depiction of F5 BIG-IP WOM’s ability to speed, secure, and stabilize data transfers looks like this: So whether you are merging datacenters, shifting load, or opening a new datacenter, VMware vMotion + NetApp Flexcache + F5 BIG-IP WOM is your path to quick and painless VM transfers across the WAN.

Related blogs & articles:
F5 Friday: Rackspace CloudConnect - Hybrid Architecture in Action
F5 Friday: The 2048-bit Keys to the Kingdom
All F5 Friday Posts on DevCentral
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
F5 Friday: It is now safe to enable File Upload
F5 Friday: Application Access Control - Code, Agent, or Proxy?
Oracle RMAN Replication with F5’s BIG-IP WOM
Don MacVittie - WOM
Nojan Moshiri - BIGIP-WOM
How May I Speed and Secure Replication? Let Me Count the Ways.
WOM nom nom nom nom – DevCentral
WOM and iRules - DevCentral

Improving WAN VM transfer Speed with NetApp Flexcache and F5 BIG-IP WOM
That’s a mouthful, but this is just a quick blog to point you at the actual blog I guest wrote for our F5 Fridays series. In short, we’ve been toying with F5 BIG-IP WOM in the labs as a performance and distance enhancement tool for VMware vMotion moves over the WAN when NetApp Flexcache is deployed. Pretty cool stuff, and while I wasn’t involved in all of the testing that went on, as the Technical Marketing Manager for WOM I did get to see the results as they rolled out of the lab. Take a read if you’re doing long distance VMware transfers with vMotion; it’s well worth five minutes of your life – Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache. And next week we’ll return to my regularly scheduled meandering about IT management, storage, and WAN Optimization…

CloudFucius Goes Off…to VMworld
The collective we (F5 Networks) will be at Moscone Center in San Francisco next week for VMworld 2010. If you are in town and attending, visit F5’s booth #1131. We’ll have giveaways, demos, exciting announcements, video interviews and other fun & informative activities all week, in addition to participating in several breakout sessions during the show. I’ll be handling a lot of our social media activities during the show, specifically tweeting (or follow F5) and conducting some video interviews of F5 partners and customers. I’ll also probably grab an F5er or two to talk about our VMware partnership, along with some technical folks, to capture some interesting whiteboard discussions. You can catch all the video posts here in my blog, or you can subscribe to F5’s YouTube Channel to be alerted when new content gets posted. We’ve grown our video content tremendously over the last year, with more than 100 videos covering a range of topics: Tech Demos, In 5 Minutes or Less, Interviews, Case Studies, White Board Discussions, Cool Solutions, Product Info, Partner Spotlights, Life at F5 and, of course, some fun ones with my own brand of humor.

There is no Breakout Session pre-registration this year, which will allow you to choose the sessions that are interesting at the moment rather than sticking to a plan you built weeks ago. Breakout Sessions that might be of interest include (links take you to the main VMworld site):

SP9723 Designing Optimal Networks for Virtual Apps – Why It Makes All The Difference – Tue 8/31, 11:00 AM
PC6940 Networking Best Practices for vCloud – Wed 9/1, 12:00 PM & Thu 9/2, 1:30 PM
PC7754 Leveraging an Enterprise-Ready vCloud Service to Regain IT Control – Tue 8/31, 3:30 PM & Thu 9/2, 12:00 PM

Hope to see you there, here or somewhere in the clouds. And one from Confucius: Ability will never catch up with the demand for it. ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19