HTML pages not elevated to RAM cache from WAM
I have a vip configured with caching (RAM & WAM). The WAM is not elevating HTML pages to the RAM cache. It does for all the static content like images, CSS & JS. The response code for clients is always S10101, even after 10-20 requests for the same HTML page. Is there anything in particular to check? LTM version 10.2.4 HF11.

RAM cache statistics show more HTTP/1.0 responses
Hi All, I have a vip which has a RAM cache & WAM (HTTP class) associated with it. There are plenty of other similar vips in my LTM, but only this one particular vip has more HTTP/1.0 responses. I have checked it, and the server itself is responding with HTTP/1.1 responses. When checked with a websniffer, even for HTTP/1.1 requests the site is responding with HTTP/1.0 responses. Not sure if there is anything I need to check specifically to modify this behaviour. Does anybody have any idea about this? Current LTM version 10.2.4 HF11 (I know it's very old).

Statistics of RAM profile (problem):
| requests (total, max in conn, GET, POST) = (1.440G, 54393, 1.434G, 1.507M)
| requests (v0.9, v1.0, v1.1) = (1.739M, 4.595M, 1.433G)
| responses (v0.9, v1.0, v1.1) = (1.765M, 30.74M, 24.34M)

Other RAM profiles (good):
| requests (total, max in conn, GET, POST) = (853.6M, 149935, 853.6M, 35649)
| requests (v0.9, v1.0, v1.1) = (586, 1.996M, 851.6M)
| responses (v0.9, v1.0, v1.1) = (0, 2.047M, 122.9M)

WAM must be provisioned when a Virtual Server is using a Web Acceleration profile
Hello guys, I'd really appreciate it if you could help me with this matter. I just grabbed a UCS file from a BIG-IP 4200v box which runs LTM and ASM in version 12.1.2, and I tried to install it in a VE running the same TMOS version and modules as the physical one. After sorting out some issues, I reached a point where I cannot figure out what is happening. I cannot load the config because of the following issue: 01070668:3: WAM must be provisioned when a Virtual Server is using a Web Acceleration profile (/Common/MainPortal.app/MainPortal_optimized-acceleration) with attached applications. Unexpected Error: Loading configuration process failed. The application in question was deployed using iApps. The source BIG-IP 4200v only has LTM and ASM provisioned, so why do I need to provision WAM in the VE one? Just as a test, I provisioned AM in the VE, but the issue remains. Maybe I cannot load this specific .ucs in a VE since WAM is related to hardware. By the way, how can I provision WAM? Any advice is welcome. Thanks, Jorge

Does anyone know where I can find a SharePoint 2010 Web Acceleration policy template for 10.2.4?
I have SharePoint 2013 up and running on BIG-IP LTM 10.2.4. I would like to start using/testing Web Acceleration in my 2013 environment, but the only "Pre-Defined Acceleration Policy" available in 10.2.4 is for SharePoint 2007. I have found a link in the SharePoint 2010 F5 deployment documentation, but it only points to a wiki, not a specific download. Standard SharePoint 2010 policy: http://devcentral.f5.com/wiki/default.aspx/WebAccelerator/Sharepoint2010WebAcceleratorPolicy.html I was wondering if there is a SharePoint 2013 (or at least 2010) "Pre-Defined" policy available for use in 10.2.4? Thanks in advance for any assistance.

WAM cache invalidation trigger - how to invalidate Path from this request
I want to be able to trigger a cache invalidation based on the Host/Path of the incoming request (with extra headers to trigger the invalidation). However, when I select "Cached Content to Invalidate" >> Add parameter "Path", there is no "Path from Request" option, only "Path Segment from Request". Given that my incoming path could have an unknown number of path segments, how do I achieve "Path from Request"-like behaviour? I've done this before using an incoming query parameter to designate the entire path to invalidate, as a workaround to not being able to use "Path from Request", but now I'm looking for a real answer!

Mobile Apps. New Game, New (and Old) Rules
For my regular readers: Sorry about the long break; I thought I'd start back with a hard look at a seemingly minor infrastructure element, and at the history of repeating history in IT. In the history of all things, technological and methodological improvements seem to dramatically change the rules, only in the fullness of time to fall back into the old set of rules with some adjustment for the new aspects. Military history has more of this type of “accommodation” than it has “revolutionary” changes. While many people see nuclear weapons as revolutionary, many of the world's largest cities were devastated by aerial bombardment in the years immediately preceding the drop of the first nuclear weapon: Hamburg, Tokyo, Berlin, Osaka; the list goes on and on. Nuclear weapons were not required for the level of devastation that strategic planners felt necessary. This does not change the hazards of the atomic bomb itself, and I am not making light of those hazards, but from a strategic, war-winning viewpoint, it was not a revolutionary weapon. Though scientifically and societally the atomic bomb certainly had a major impact across the globe and across time, from a warfare viewpoint, strategic bombing was already destroying military production capability by destroying cities; the atomic bomb was just more efficient. The same is true of the invention of rifled cannons. With the increased range and accuracy of rifled guns, it was believed that the warship had met its match, and while protection of ships went through fundamental changes, in the end rifled cannons increased the range of engagement but did not significantly tip the balance of power. Though in the in-between time, from when rifled cannons became commonplace to when armor plating became strong enough, there was a protection problem for ships and crews. And the most obvious example, the tank, forced military planners and strategists to rethink everything.
But in the end, World War II as a whole was decided in the same manner as other continental or globe-spanning conflicts throughout history – with hordes of soldiers fighting over possession of land and destruction of the enemy. Tanks were a tool that often led to stunning victories, but in the cases of North Africa and Russia, it can be seen that many of those victories were illusory at best. Soldiers, well supplied and with sufficient morale, had to hold those gains, just like in any other war, or the gains were as vapor. Technology – High Tech as we like to call it – is the other area with stunning numbers of “This changes everything” comparisons that just don’t pan out the way the soothsayers claim, largely because the changes are not so much revolutionary from a technology perspective as evolutionary. The personal computer may have revolutionized a lot of things in the world – I did just hop out to Google, search for wartime pictures of Osaka, find one on Wikipedia, and insert it into my blog in less time than it would have taken me to write the National Archives requesting such a picture, after all – but since the revolution of the Internet we’ve had a string of “this changes everything” predictions that haven’t been true. I’ve mentioned some of them (like XML eliminating programmers) before; here I’ll stick to ones that I haven’t mentioned, by way of example. SaaS is perhaps the best example that I haven’t touched on in my blog (to my memory at least). When SaaS came along, there would be no need for an IT department. None. They would be going away, because everything would be SaaS driven. Or at least made tiny. If there was an IT version of Mythbusters, they would have fun with that one, because now we have a (sometimes separate) staff responsible for maintaining the integration of SaaS offerings into our still-growing datacenters.

[Image: Osaka bomb damage – source: Wikipedia]

The newest version of the “everything is different!
Look how it’s changed!” mantra is cell network access to applications. People talk about how the old systems are not good enough and we must do things differently, etc. And as always, in some areas they are absolutely right. If you’ve ever hit a website that was designed without thought for a phone-sized screen, you know that applications need to take target screen size into account, something we haven’t had to worry about since shortly after the browser came along. But in terms of performance of applications on cellular clients, there is a lot we’ve done in the past that is relevant today. Originally, a lot of technology on networks focused on improving performance. The thing is that the performance of a PC over a wired (or wireless) network has been up and down over the years as technology has shifted the sands under app developers’ feet. Network performance becomes the bottleneck and a lot of cool new stuff is created to get around that, only to find that now the CPU, or memory, or disk is the bottleneck, and resources are thrown that way to resolve problems. I would be the last to claim that cellular networks are the same as Ethernet or wireless Ethernet networks (I worked at the packet layer on CDMA smartphones long ago), but at a 50,000-foot view, they are “on the network” and they’re accessing applications served the same way as any other client. While some of the performance issues with these devices are being addressed by new cellular standards, some of them are the same issues we’ve had with other clients in the past. Too many round trips, too much data for the connection available, repeated downloads of the same data… All of these things are relative. Of course they’re not the only problems, but they’re the ones we already have fixes for.
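Of the problems listed above, “repeated downloads of the same data” is the easiest to picture. Here is a minimal, vendor-neutral sketch of the idea (the class and URL names are invented for illustration; a real cache on an ADC or in a browser is far more involved):

```python
import time

class SimpleObjectCache:
    """Toy client-side cache: serve repeat requests for the same URL
    from memory until the entry's max-age expires, instead of making
    another round trip over a slow cellular link."""

    def __init__(self):
        self._store = {}   # url -> (body, expires_at)
        self.fetches = 0   # counts actual trips to the origin server

    def _fetch_from_origin(self, url):
        self.fetches += 1
        return f"body-of-{url}"   # stand-in for a real HTTP GET

    def get(self, url, max_age=60):
        now = time.monotonic()
        hit = self._store.get(url)
        if hit and hit[1] > now:
            return hit[0]          # cache hit: no network round trip
        body = self._fetch_from_origin(url)
        self._store[url] = (body, now + max_age)
        return body

cache = SimpleObjectCache()
cache.get("/logo.png")
cache.get("/logo.png")
cache.get("/logo.png")
# three requests, but only one trip to the origin
```

Three requests, one fetch: that difference is exactly the round-trip savings the old fixes delivered, and it matters just as much on a cell network.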
Take NTLM authentication, for example: back when wireless networks were slow, companies like F5 came up with tools to either proxy for, or reduce the number of round trips required for, authentication to multiple servers or applications. Those tools are still around, and are even turned on in many devices being used today. Want to improve performance for an employee who works on remote devices? Check your installed products and with your vendor to find out if this type of functionality can be turned on. How about image caching on the client? While less useful in the age of “Bring Your Own Device”, BYOD is not yet, and may never be, the standard. Setting image (or object) caching rules that make sense for the client on devices that IT controls can help a lot. Every time a user hits a webpage with the corporate logo on it, the image really doesn’t need to be downloaded again if it has been once. Lots of web app developers take care of this within the HTML of their pages, but some don’t, so again, see if you can manage this on the network somewhere. For F5 Application Acceleration products you can; I cannot speak for other vendors. The list goes on and on. Anyone with five or ten years in the industry knows what hoops were jumped through the last time we went around this merry-go-round; use that knowledge while assessing other, newer technologies that will also help. The wheel doesn’t need to be reinvented, just reinforced – an evolutionary change from a wooden-spoked wheel to a steel rim, maybe with chrome. While everyone is holding out for broad 4G deployments to ease the cellular device performance issue, specialists in the field are already saying that the rate of adoption of new cellular devices indicates that 4G will be overburdened relatively quickly, so this problem isn’t going anywhere; time to look at solutions both old and new to make your applications perform on employee and customer cellular devices. F5 participates in the Application Acceleration market.
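The object-caching rule described above boils down to simple freshness arithmetic on the HTTP Cache-Control header, whether it is enforced in the browser, on a managed device, or on the network. A simplified sketch (it deliberately ignores no-store, no-cache, and the other directives a real client must honor):

```python
def is_fresh(cache_control, age_seconds):
    """Return True if a cached copy is still usable under the
    response's Cache-Control max-age directive (simplified: only
    max-age is considered)."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            try:
                return age_seconds < int(directive.split("=", 1)[1])
            except ValueError:
                return False
    return False  # no max-age: go back and revalidate

# A corporate logo served with a day-long lifetime need not be
# re-downloaded an hour later:
print(is_fresh("public, max-age=86400", age_seconds=3600))   # True
print(is_fresh("public, max-age=86400", age_seconds=90000))  # False
```

If a page's markup doesn't set sensible lifetimes, this is the knob an acceleration device on the network can turn for you.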
I do try to write my blogs such that it’s clear there are other options, but of course I think ours are the best. And there are a LOT more ways to accelerate applications than can fit into one blog, I assure you. A simple laundry list of tools, configuration options, and features available on F5 products alone is the topic for a tome, not a blog. Now for the subliminal messaging: Buy our stuff, you’ll be happier. How was that? Italics and all. If you can flick a configuration switch on gear you’ve already paid for, do a little testing, and help employees and/or customers who are having performance problems quickly while other options are explored, then it is worth taking a few minutes to check into, right?

Related Articles and Blogs:
The Encrypted Elephant in the Cloud Room
Stripping EXIF From Images as a Security Measure
F5 Friday: Workload Optimization with F5 and IBM PureSystems
Secure VDI Sign On: From a Factor of Four, to One
The Four V’s of Big Data

Advanced Load Balancing For Developers. The Network Dev Tool
It has been a while since I wrote an installment of Load Balancing for Developers – too long, I think – but never fear, this is the grand-daddy of Load Balancing for Developers blogs, covering a useful bit of information about Application Delivery Controllers that you might want to take advantage of. For those who have joined us since my last installment, feel free to check out the entire list of blog entries (along with related blog entries) here, though I assure you that this installment, like most of the others, does not require you to have read those that went before. ZapNGo! is still a growing enterprise, now with several dozen complex applications and a high-availability architecture that spans datacenters and the cloud. While the organization relies upon its web properties to generate revenue, those properties have been going along fine with your Application Delivery Controller (ADC) architecture. Now, though, you’re seeing a need to centralize administration of a whole lot of functions. What worked fine separately for one or two applications is no longer working so well now that you have several development teams and several dozen applications, and you need to find a way to bring the growing inter-relationships under control before maintenance and hidden dependencies swamp you in a cascading mess of disruption. With maintenance taking a growing portion of your application development man-hours, and a reasonably well-positioned test environment configured with a virtual ADC to mimic your production environment, all you need now is a way to cut those maintenance man-hours and reduce the amount of repetitive work required to create or update an application. Particularly update an application, because that is a constant problem, where creating is less frequent. With many of the threats that would have made your ZapNGo application known as ZapNGone eliminated, now it is efficiencies you are after. And believe it or not, these too are available in an ADC.
Not all ADCs are created equal, but this discussion will stay on topics that most ADCs can handle, and I’ll mention it when I stray from generic into specific – which I will do in one case, because only one vendor supports one of the tools you can use. All of the others should be supported by whatever ADC vendor you have, though as always, check with your vendor directly first, since I’m not an expert in the inner workings of every one. There is a lot that many organizations do for themselves, and the array of possibilities is long – from implementing load balancing in source code to security checks in the application, the boundaries of what is expected of developers are shaped by an organization, its history, and its chosen future direction. At ZapNGo, the team has implemented a virtual test environment that mirrors production as closely as possible, so that code can be implemented and tested in the way it will be used. They use an ADC for load balancing, so that they don’t have to rewrite the same code over and over, and they have a policy of utilizing a familiar subset of ADC functionality on all applications that face the public. The company is successful and growing, but as always happens to companies in that situation, the pressures upon them are changing just by virtue of their growth. There are more new people who don’t yet have intimate knowledge of the code base, network topology, security policies, or whatever their area of expertise is. There are more lines of code to maintain, while new projects are being brought up at a more rapid pace and with higher priorities (I’ve twice lived through the “Everything is high priority? Well this is highest priority!” syndrome while working in IT. Thankfully, most companies grow out of that fast when it’s pointed out that if everything is priority #1, nothing is).
Timelines to complete projects – be they new development, bug fixes, or enhancements – are stretching longer and longer as the percentage of gurus in the company goes down and the complexity of the code, and of the architecture it runs on, goes up. So what is a development manager to do to increase productivity? Teaming newer developers with people who’ve been around since the beginning helps, but those seasoned developers are a smaller and smaller percentage of the workforce, while the volume of work has slowly removed them from some of the many products now under management. Adopting coding standards and standardized libraries helps increase experience portability between projects, but doesn’t do enough. Enter offloading to the ADC. Some things just don’t have to be done in code, and if they don’t have to be, at this stage in the company’s growth, IT management at ZapNGo (that’s you!) decides they won’t be. There just isn’t time for non-essential development anymore. Utilizing a policy management tool and/or an application firewall on the ADC can improve security without increasing the code base, for example. And that shaves hours off of maintenance projects, while standardizing on one or a few implementations that are simply selected on the ADC. Implementing Web Application Acceleration protocols on the ADC means that less in-code optimization has to occur. Performance is no longer purely the role of developers (though of course it is still a concern – no Web Application Acceleration tool can make a loop that runs for five minutes run faster); they can let the Web Application Acceleration tool shrink the amount of data being sent to the user’s browser. Utilizing a WAN Optimization ADC tool to improve the performance of bulk copies or backups to a remote datacenter or cloud storage… The list goes on and on.
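“Shrink the amount of data being sent to the browser” is less abstract than it sounds: one of the standard techniques is simply compressing the response on the way out, so the developer never has to write it. A minimal sketch of the idea, using stdlib gzip rather than any vendor's implementation:

```python
import gzip

def compress_response(body: bytes) -> bytes:
    """What an acceleration tier can do on the way out: compress the
    payload so fewer bytes cross the wire (the browser inflates it,
    negotiated via the Accept-Encoding / Content-Encoding headers)."""
    return gzip.compress(body)

# Markup is highly repetitive, so it compresses very well:
html = b"<html>" + b"<p>repetitive markup</p>" * 500 + b"</html>"
wire = compress_response(html)
assert gzip.decompress(wire) == html   # lossless round trip
print(len(html), "->", len(wire), "bytes on the wire")
```

Doing this on the ADC instead of in every application is precisely the offloading argument: one configuration switch versus dozens of code changes.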
The key is that the ADC enables a lot of opportunities for App Dev to be more responsive to the needs of the organization by moving repetitive tasks to the ADC and standardizing them. And a heaping bonus is that it also does that for Operations with a different subset of functionality, meaning one toolset gives both App Dev and Operations a bit more time out of their day for servicing important organizational needs. Some would say this is all part of DevOps, some would say it is not. I leave those discussions to others; all I care about is that it can make your apps more secure, fast, and available, while cutting down on workload. And if your ADC supports an SSL VPN, your developers can work from home when necessary. Or more likely, if your code is your IP, a subset of your developers can. Making ZapNGo more responsive, easier to maintain, and more adaptable to the changes coming next week/month/year. That’s what ADCs do. And they’re pretty darned good at it. That brings us to the one bit that I have to caveat with “F5 only”, and that is iApps. An iApp is a constructed configuration tool that asks a few questions and then deploys all the bits necessary to set up an ADC for a particular application. Why do I mention it here? Well, if you have dozens of applications with similar characteristics, you can create an iApp template and use it to rapidly bring new applications or new instances of applications online. And since it is abstracted, these iApp templates can be designed such that App Dev, or even the business owner, is able to operate them – meaning less time worrying about what network resources will be available, how they’re configured, and waiting for Operations to have time to implement them (in an advanced ADC that is being utilized to its maximum in a complex application environment, this can be hundreds of networking objects to configure – all encapsulated into a form). Less time on the project timeline, more time for the next project.
Or for the post-deployment party. One of the two. That’s it for the F5-only bit. And knowing that all of these items are standardized means fewer things to get misconfigured, and more surety that it will all work right the first time. As with all of these articles, that offers you the most important benefit… A good night’s sleep.

From Point A to Point B.
The complexities of life often escape a young child. The Little Man asked me the other day why I had to go to work, which was both a compliment about wanting to spend time with me and an unintended backhanded slap at Lori, who was going to hang out with him while I took care of business. The answer was the usual stuff: that working paid the bills, and work has its own rewards… It did not include “and I like my job”, though I do, simply because I didn’t want to imply “more than hanging out with you” to a three-year-old. But children boil everything down to simplicity. The picture over there is said son, wearing a Pickelhaube with a Transformers shirt and (yes, really) proclaiming he was an Autobot because of the helmet. We adults, on the other hand, tend to layer complexity upon complexity until we’re not certain we’re getting value anymore, but we’re proud of whatever it is we have done/built/know. IT is like that sometimes. What is “the network” – in tweet length, for example? Not only is the answer tough to cram into tweet length, it is tougher to cram into tweet length and make useful. It is even more difficult to cram it into tweet length and include all the various constituencies of IT in the answer. But it can be done, because a “network” is a simple concept. You’re moving information from point A to point B. That’s it. Everything else is layers we’ve added over it to make some aspect of that movement better, or to facilitate the movement of data. But in the end, it is just sending bytes over the wire. If, for example, a business person with no IT background asks why a whole section of the corporate network is down, they don’t care about routing tables or even DNS; they care about “The network broke, and those clients can’t get to the datacenter.
The network is complex, but we’ll have it up soon.” If you’re moving data over the WAN, it gets another layer of complexity – because you want to move data over the WAN at a decent speed, but most applications aren’t designed for network communication optimization. Instead they’re designed to be very good at moving data, and expect the network to worry about performance issues. But business users don’t want to hear about compression, dedupe, SSL offload, or any of that when things go wrong; they want to hear “The copy of our data at the remote site is a little out of sync right now, but we’re on it, and it will all be fine soon.” Want to secure the network? BAM! Another complex layer is tossed on top of that – the point being that you don’t just want to move data from point A to point B anymore, you want to move data from point A to point B securely. Again, if your ADS or LDAP system goes on the fritz, you’re going to want to be able to tell people “Users can’t log in right now, the servers that know who can do what are offline, but we’ll have them back up soon,” because users care that data isn’t moving from point A to point B, not about whatever bugbear has cropped up with authentication or the network. Want to give web users – internal and external – an enhanced experience while reducing the load on your servers? Another layer of complexity piles on as you use Web Application Optimization techniques. They work great – at least those by F5 do, since I’ve been a user of them – but they add a whole new layer of oddities. “No, the new logo isn’t showing reliably, but the team is flushing the cache and/or changing the expiration date to get that fixed, and it’ll be right soon” is what business users want to hear. Load balancing to increase reliability and performance adds yet another layer of complexity to the overall system, a layer that has all of its own terminology.
But when load balancing goes wrong, “We misconfigured the Virtual IP, and the pool it points to does not serve your app” isn’t what the business person wants to hear. “Yes, we had an error, but your application should be back online soon” is the right answer. Server virtualization doesn’t directly add complexity to the network, but server sprawl certainly does, because now there are a lot more clients out there. One of the early problems with server sprawl that seems to be largely defeated was “where is that non-responding virtual running again?” But still, if the hardware that a user’s VM is running on goes down, they want to hear “Yes, we had a hardware failure, but your application should be back up on another server soon, and we’ll get the problem fixed, then move your application back.” Desktop virtualization adds both complexity and traffic to your network, but simplifies a whole array of things from desktop management to licensing. Still, when it is performing poorly, a business leader does not want to hear about oversubscription, congestion, the number of VMs per server, or anything else technical; they want to hear “Yes, we see that performance is down for those users. We’ve got a plan to fix it, and all should be back to normal soon.” The thing is, F5 sells tools to help with all of these issues. In fact, F5 sells a platform that you can customize to help with all of these issues… But notice that all the answers to the business are simple, and end with some variant of “back up soon”? We can supply you with tools to manage the “back up soon”, or even make you able to say “there was a problem, but you shouldn’t have noticed”; we cannot provide you with a tool to make everything simple. The business sometimes needs educating, but most of the time they just need less detail and more information. We’ve got a ton of cool stuff going on in IT these days, but sometimes the complexity masks the simplicity. Boil it down to the basics, and tackle real problems.
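For readers who haven't lived the Virtual IP and pool terminology mentioned above, the underlying idea is small: one public address, a list of back-end members, and a picker that skips members a health monitor has marked down. A toy round-robin sketch (addresses and class names invented for illustration; real ADCs add persistence, weighting, and much more):

```python
import itertools

class Pool:
    """Toy round-robin pool behind a virtual IP: requests arriving at
    one public address are spread across whichever members are up."""

    def __init__(self, members):
        self.members = list(members)
        self.down = set()   # members failing their health monitor
        self._rr = itertools.cycle(range(len(self.members)))

    def pick(self):
        # Try each slot at most once per pick, skipping down members.
        for _ in range(len(self.members)):
            m = self.members[next(self._rr)]
            if m not in self.down:
                return m
        raise RuntimeError("no healthy pool members")

pool = Pool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
pool.down.add("10.0.0.2")             # failed its health monitor
picks = [pool.pick() for _ in range(4)]
print(picks)                          # 10.0.0.2 is never chosen
```

This is also why the business-friendly answer can honestly be “back online soon”: the failed member is simply skipped while it is repaired.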
And enjoy talking simplicity for a change… because the next round of Buzzword Bingo is on its way in 5, 4, 3, 2, 1…

Even the best written code has a weakness.
Developers are a great lot of folks: people who spend their days trying to do the impossible with bits, for a customer base that is, by and large, impossible to satisfy. When the bits all line up correctly, the last line of code has been checked in, and the nightly compile has been accepted for deployment, then they get to sit back, relax for five minutes, and start over again. If this makes you think it’s not a great life, then you should live it. Developing gives instant feedback. No matter how unhappy users can be, fixing that nagging bug you’ve been chasing for hours is a rush, and starting with a blank source code file is like looking across a wide-open plain. You can see what might be, and you get to go figure out how to do it. But yeah, it’s high-stress. Deadlines are constant, and it’s not like writing, where you only have to get your content finished; once the code is done, ten million people want to have input into what you should have done. Various techniques have been developed to mitigate the depressing fact that people tell you what they want after they see what you’ve built, but the fact is, most ordinary users, be they business users or end users, don’t know what they want until they see something working on their monitor and can play with it. Because they need a point of comparison. Some few can tell you sight-unseen, early in the process, what they’d like, but most will have increasing demands as the application’s capabilities increase. And these days, there’s one more major gotcha: you have to care about the network. I’ve been saying that for years, but we’ve passed the point where you could ignore me. Some will say “cloud changes all that!” but the truth is, cloud changes the problem domain, not the fact that you have to care. Let’s say you have a web application (as there are precious few other types being developed these days), and you have tweaked it to uber-performance so that it is scalable.
You’ve put it behind a load balancer or Application Delivery Controller so that even if your tweaks aren’t enough, you can share the load amongst several copies. You’ve done it all right. And then your primary Internet connection goes down. So your network staff switches to the backup connection – which is invariably smaller than the primary. The problem in this scenario is that your application can be load balanced and highly optimized, but now it is fighting for bandwidth on a reduced connection. This is hardly the only scenario in which your application can suffer from outside interference. Ever been on the receiving end of a router configuration error? Your application appears down to everyone in the multi-verse, but in reality it is responding just peachy – the network is routing your users to Timbuktu. I could tell you about all the great solutions that F5 offers for this problem or that problem (there are many of them, and they’re pretty darned good), but from your perspective, the issue is (or should be) much bigger than that. You need to be able to understand when the problem at hand is a network problem, and you need to be able to diagnose that fact quickly, so the right people are on the job. And that means you need to know networking. Just as importantly, you need to at least viscerally understand your specific network environment. They’re all a bit different, and the likely pain points are different, even though some problems are universal. A DDoS attack, for example, is aimed at clogging your Internet connection, no matter your architecture… But some networking gear reduces the ability of a DDoS to actually take the site down, so your network might only see degraded performance. So ask the network team to teach you. Ask them what devices are between your applications and your customers. Ask them how these devices (or their malfunction) impact your applications.
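That first question – “is this a network problem or an application problem?” – can even be roughed out in code. The sketch below is an invented triage helper, not a real diagnostic tool: it only classifies by exception type, which is merely where real diagnosis starts:

```python
import socket

def classify_failure(exc):
    """Rough first-pass triage of a failed connection attempt: route
    the first phone call to DNS, network, or application people.
    (Real diagnosis needs far more than the exception type.)"""
    if isinstance(exc, socket.gaierror):
        return "DNS"                       # the name didn't resolve
    if isinstance(exc, (ConnectionRefusedError, socket.timeout)):
        return "network/server unreachable"
    if isinstance(exc, ConnectionResetError):
        return "connection dropped mid-flight"
    return "application"

print(classify_failure(socket.gaierror()))           # DNS
print(classify_failure(ConnectionRefusedError()))    # network/server unreachable
```

Knowing which bucket you are in before you call anyone is most of what the paragraph above is asking developers to learn.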
Know the environment you’re in, because for most applications today, a problem on the network makes for a poorly performing application. And that is indeed your responsibility. In the cloud you can’t know all of these things for real, but you can understand the concepts. Is there a virtual ADC? What is being used for firewall services? What perf tools are available to determine the bottlenecks of applications deployed in the cloud? All things you’ll want to know, so you can know how best to start troubleshooting when the inevitable problems occur. Learning things like this after your application is the source of user pain seems to still be the norm, but it’s certainly not the best solution. Either it increases the amount of time your application is getting bad PR, or you are fixing things hastily, and haste does indeed make waste in most critical application situations. This knowledge will also give you a new set of tools to solve problems with. If you know that a Web Application Acceleration tool like F5’s WebAccelerator is in place between your application and the user, then you might be able to say “rather than rewrite this chunk of code, let’s tweak the Web Application Acceleration engine to handle it” and save both time and potential coding-defect issues. It’s still a great time to be a developer; the fun is still all there, it’s just a more complex world. Master your network architecture, and be a better developer for it.

Related Articles and Blogs:
Why Flash can't win the Web application war
Virtualization Changes Application Deployment But Not Development
Amazon Makes the Cloud Sticky
Return of the Web Application Platform Wars
Wanted: Application Delivery Network Experts
The Stealthy Ascendancy of JSON
Now it's Time for Efficiency Gains in the Network.
"Application Delivery" Role Missing "Delivery" Focus
Finding Your Balance

You Say Tomato, I Say Network Service Bus
It’s interesting to watch the evolution of IT over time. I have repeatedly been told, “you people, we were doing that with X back before you had a name for it!” And likely the speaker is telling the truth, as far as it goes. Seriously, while the mechanisms may be different, putting a ton of commodity servers behind a load balancer and tweaking for performance looks an awful lot like having LPARs that can shrink and grow. Put “dynamic cloud” into the conversation and the similarities become even more pronounced. The biggest difference is how much you’re paying for hardware and licensing.

Back in the day, Enterprise Service Buses (ESBs) were all the rage, able to handle communications between a variety of application sources and route things to the correct destination in the correct format, even providing guaranteed delivery if you needed it for transactional services. I trained in several of these tools, most notably IBM MQSeries (now called IBM WebSphere MQ, surprised?) and MSMQ, and was briefed on a ton more during my time at Network Computing. In the end, they’re simply message delivery and routing mechanisms that can translate along the way. Sure, with MQSeries Integrator you could include all sorts of other things like security callouts, but core functionality was restricted to message flow and delivery. While ESBs are still used today in highly mixed environments or highly complex application infrastructures, they’re not deployed broadly in IT, largely because XML significantly reduced the need for translation, which was a primary use of them in the enterprise.

Today, technology is leading us to a parallel development that will likely turn out to be much more generically useful than ESBs. Others have referred to it by several names, but Network Service Bus is the closest I’ve seen in terms of accuracy, so I’ll run with that term. This is routing, translation, and delivery across the network, from consumer to the correct service.
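To make the routing-plus-translation idea concrete, here is a minimal, purely illustrative sketch in Python. The MessageBus class, its format names, and the “billing” destination are all invented for this example; they bear no relation to the API of WebSphere MQ, MSMQ, or any other product, which add persistence, transactions, and security on top of this core idea.

```python
import json

class MessageBus:
    """Toy ESB: routes messages to named destinations, translating formats."""

    def __init__(self):
        self.routes = {}       # destination name -> (handler, accepted format)
        self.translators = {}  # (source format, dest format) -> function

    def register(self, destination, handler, accepts="json"):
        self.routes[destination] = (handler, accepts)

    def add_translator(self, source_fmt, dest_fmt, fn):
        self.translators[(source_fmt, dest_fmt)] = fn

    def send(self, destination, message, fmt="json"):
        handler, accepts = self.routes[destination]
        if fmt != accepts:
            # Translate along the way, as the ESB did.
            message = self.translators[(fmt, accepts)](message)
        return handler(message)

# Example: a CSV-speaking producer talks to a JSON-speaking consumer.
bus = MessageBus()
bus.register("billing", lambda m: json.loads(m)["amount"], accepts="json")
bus.add_translator("csv", "json",
                   lambda m: json.dumps(dict(zip(("id", "amount"),
                                                 m.split(",")))))
result = bus.send("billing", "42,19.99", fmt="csv")  # routed and translated
```

The point is only that routing and translation are one mechanism; everything else an ESB offered was layered around this loop.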
The service is running on a server somewhere, but that’s increasingly less relevant to the consumer application; that the request gets serviced is sufficient, and serviced in a timely and efficient manner is big too. Translation while servicing is seeing a temporary (though not short, in my estimation) bump while IPv4 is slowly supplanted by IPv6, but it has other uses as well, like encrypted to unencrypted.

The network of the future will use a few key strategic points of control, like the one between consumers and web servers, to handle routing to a service that is (a) active, (b) responsive, and (c) appropriate to the request. In the interim, while passing the request along, the strategic point of control will translate the incoming request into a format that the service expects, and if necessary will validate the user in the context of the service being requested and the username/platform/location the request is coming from.

This offloads a lot from your apps and your servers. Encryption can be offloaded to the strategic point of control, freeing up a lot of CPU time and letting traffic run unencrypted within your LAN while remaining encrypted on the public Internet. IPv6 packets can be translated to IPv4 on the way in and back to IPv6 on the way out, so you don’t have to switch everything in your datacenter over to IPv6 at once. Security checks can occur before the connection is allowed inside your LAN, and scalability gets a major upgrade because you now have a device in place that routes traffic according to the current back-end configuration. Adding and removing servers and upgrading apps all benefit from a strategic point of control that lets you maintain a given public IP while changing the machines that service requests as needed.

And then we factor in cloud computing. If all of this functionality, or at least a significant chunk of it, were available in the cloud regardless of cloud vendor, then you could ship overflow traffic to the cloud.
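The routing decision described above, picking a service that is active, responsive, and appropriate to the request, can be sketched in a few lines. The backend records, field names, and latency threshold below are hypothetical, invented for illustration; they are not any vendor’s configuration schema.

```python
def pick_backend(backends, request_path, max_latency_ms=200):
    """Return the name of the best backend for a request, or None.

    Filters to backends that are (a) active, (b) responsive (recent
    latency under the threshold), and (c) appropriate (path prefix
    matches), then prefers the fastest of the survivors.
    """
    candidates = [
        b for b in backends
        if b["active"]                            # (a) active
        and b["latency_ms"] <= max_latency_ms     # (b) responsive
        and request_path.startswith(b["prefix"])  # (c) appropriate
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b["latency_ms"])["name"]

backends = [
    {"name": "app1", "prefix": "/app", "active": True,  "latency_ms": 40},
    {"name": "app2", "prefix": "/app", "active": True,  "latency_ms": 15},
    {"name": "img1", "prefix": "/img", "active": False, "latency_ms": 10},
]
choice = pick_backend(backends, "/app/login")  # app2: active, fast, matching
```

In a real strategic point of control the health and latency data come from active monitors rather than static records, but the selection logic is the same shape: filter, then prefer.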
There are a lot of issues to deal with, like security, but they’re manageable if you can handle all of the other service requests as if the cloud servers were part of your everyday infrastructure. That’s a datacenter of the future. Let’s call it a tomato.

In the end it makes your infrastructure more adaptable while giving you a point of control that you can harness to implement whatever monitoring or functionality you need. And if you have several of those points of control (one to globally load balance, one for storage, one in front of servers), then you are offering services that are highly adaptable to fluctuations in usage. Like having a tomato, right in the palm of your hands.

Completely irrelevant observation: the US Bureau of Labor Statistics (BLS) mentioned today that IT unemployment is at 3.3%. Now you have a bright spot in our economic doldrums.