Why you should not use clustering to scale an application
It is often the case that application server clustering and load balancing are mistakenly believed to be the same thing. They are not. While server clustering does provide rudimentary load balancing functionality, it does a better job of providing basic fail-over and availability assurance than it does load balancing. In fact, load balancing has effectively been overtaken by application delivery, which builds on load balancing but is much, much more than that today.

Clustering essentially turns one instance of an application server into a controlling node, a proxy of sorts, through which requests are funneled and then distributed amongst several instances of application servers. That sounds like load balancing on the surface, but digging deeper reveals many reasons why application server clustering will not support long-term scalability and efficiency. Aside from the obvious hardware-accelerated functions provided by an application delivery controller (a.k.a. a modern load balancer), there are a number of other reasons to look to options other than application server clustering when you are trying to build out a scalable, efficient application architecture. Here are the top three reasons you should reconsider (or not consider in the first place) a scalability solution centered around application server clustering technology.

JUST LOAD BALANCING ISN'T EFFICIENT

Simple load balancing is not efficient. It uses industry-standard algorithms, ultimately derived from network load balancing, to distribute requests across a pool (or farm) of servers. Those algorithms don't take into consideration the wide variety of factors that affect not only the capacity of an application but also its performance. There is no intelligence, no real awareness of the application, in an application server clustering architecture, and thus the solution does not utilize resources in a way that squeezes as much capacity and performance as possible out of applications.

Application server clustering also lacks many of the features available in today's application delivery controllers that enhance the efficiency of servers and supporting infrastructure. Optimization of core protocols and reuse of connections can dramatically increase the efficiency and performance of applications, and neither option is available in application server clustering solutions. That's because the application server clustering solution relies on the same core protocol stack (TCP/IP) as the application server and operating system, and neither is optimized for scalability.
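To make that concrete, here's a minimal sketch (Python, purely illustrative - not any particular product's algorithm) of the two most common industry-standard algorithms. Notice how little context either one consults: not the type of request, not the state of the application, not how close a server actually is to its real capacity.

```python
import itertools

class RoundRobin:
    """Distributes requests in a fixed rotation -- no awareness of server state."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self, request=None):
        # The request itself is ignored entirely; every server is
        # assumed to be equally capable at all times.
        return next(self._cycle)

class LeastConnections:
    """Picks the server with the fewest open connections -- a network-level
    metric that still says nothing about application load or response time."""
    def __init__(self, servers):
        self.connections = {server: 0 for server in servers}

    def pick(self, request=None):
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def release(self, server):
        self.connections[server] -= 1

# Neither algorithm knows that /report may cost 100x the resources of
# /healthcheck, or that one server is already CPU-bound.
lb = LeastConnections(["app1", "app2", "app3"])
print(lb.pick("/report"), lb.pick("/healthcheck"))
```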
LACK OF SUPPORT FOR CLOUD COMPUTING and VIRTUALIZED ENVIRONMENTS

Dynamism is the ability of your application and network infrastructure to handle the expansion and contraction of applications in an on-demand environment. If you're considering building your own private cloud computing environment and taking advantage of the latest style of computing, you'll want to consider options other than application server clustering to serve as your "control node". Aside from failing to exhibit the four core properties necessary in a cloud computing infrastructure (transparency, scalability, security, and application intelligence), application server clustering itself is not designed to handle a fluid application infrastructure. Like early load balancers, it expects to manage a fixed set of servers in a farm and assumes that their number (and location) will remain the same. Its configuration is static, not dynamic, and it is not well-suited to automatically adjusting to changing infrastructure conditions in the data center. Virtualization initiatives put similar demands on controlling solutions like application delivery controllers and application server cluster controllers - demands that cannot be met by application server cluster controllers because of their static configuration.
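For illustration only, here's a hypothetical sketch (Python, with invented names) of the bare minimum a "dynamic" pool has to support: membership that expands and contracts at runtime in response to provisioning events, rather than a server list fixed at configuration time - which is effectively all a static cluster controller offers.

```python
class ServerPool:
    """A pool whose membership can change at runtime.
    A cluster controller with a static configuration effectively
    supports only __init__; members are fixed until a restart."""
    def __init__(self, members):
        self.members = set(members)

    # A dynamic (cloud/virtualized) environment needs these two
    # operations, driven by provisioning and health events:
    def join(self, server):
        self.members.add(server)      # instance spun up on demand

    def leave(self, server):
        self.members.discard(server)  # instance terminated or migrated

pool = ServerPool(["10.0.0.10", "10.0.0.11"])
pool.join("10.0.0.12")   # auto-scaling event adds capacity
pool.leave("10.0.0.10")  # VM decommissioned or moved
print(pool.members)
```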
IT ISN'T SCALABLE

When it comes down to it, there is only one reason you really need to stay away from application server clustering as a mechanism for scaling your applications: application server clustering doesn't scale well. Think about it this way: you are trying to scale out an application by taking an instance of the application server (the one you need to scale, by the way) and turning it into a controlling node. While the application server clustering functionality is likely capable of supporting twice the number of concurrent connections as a single instance running the application, it isn't likely to be able to handle three or four times that number. You are still limited by the software, by the operating system, and by the hardware capabilities of the server on which the clustering solution is deployed.

The number of web sites that are static and do not involve dynamic components served from application servers of some kind is dwindling. Most sites recognize the impact of Web 2.0 on their customer base and necessarily include dynamic content as the primary source of web site content. That means they're trying to serve a high number of concurrent customers on traditional application server technology. Scaling those applications is an important part of deploying a site today, both to ensure availability and to meet increasingly demanding performance requirements. Application server clustering technology wasn't designed for this kind of scalability, and there's a reason that folks like Microsoft, Oracle/BEA, and IBM partner with hardware application delivery solution providers: they know that in order to truly scale an application, you're going to need a hardware-based solution. Application server vendors build application servers that are focused on building, deploying, and serving up rich, robust applications. And every one of them has said in the past, "Use a hardware load balancer to scale." If the recommendation of your application server vendor isn't enough to convince you that application server clustering isn't the right choice for scaling web applications, I don't know what is.

Red Herring: Hardware versus Services

In a service-focused, platform-based infrastructure offering, the form factor is irrelevant.

One of the most difficult aspects of cloud, virtualization, and the rise of platform-oriented data centers is the separation of services from their implementation. This is SOA applied to infrastructure, and it is for some reason a foreign concept to most operational IT folks - with the occasional exception of developers. But sometimes even developers are challenged by the notion, especially when it begins to include network hardware.

ARE YOU SERIOUS?

The headline read: WAN Optimization Hardware versus WAN Optimization Services. I read no further, because I was struck by the wrongness of the declaration in the first place. I'm certain that if I had read the entire piece I would have found it focused on the operational and financial benefits of leveraging WAN optimization as a service as opposed to deploying hardware (or software, a la virtual network appliances) in multiple locations. And while I've got a few things to say about that, too, today is not the day for that debate. Today is for focusing on the core premise of the headline: that hardware and services are somehow at odds. Today is for exposing the fallacy of a premise that is part of the larger transformational challenge IT organizations face as they journey toward IT as a Service and a dynamic data center.

This transformational challenge, often referenced by cloud and virtualization experts, requires a change in thinking as well as culture. It requires a shift from thinking of solutions as boxes with plugs and ports to viewing them as services with interfaces and APIs. It does not matter one whit whether those services are implemented using hardware or software (or perhaps even a combination of the two, a la a hybrid infrastructure model). What does matter is the interface, the API, the accessibility, as Google's Steve Yegge emphatically put it in his recent from-the-gut-not-meant-to-be-public rant. What matters is that a product is also a platform, because as Yegge so insightfully noted:

"A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product."

A platform is accessible; it has APIs and interfaces via which developers (consumer, partner, customer) can access the functions and features of the product (services) to integrate, instruct, and automate in a more agile, dynamic architecture. Which brings us back to the red herring known generally as "hardware versus services."

HARDWARE is FORM-FACTOR. SERVICE is INTERFACE.

This misstatement implies that hardware is incapable of delivering services. That is simply not true - no more true than a claim that software is inherently capable of delivering services, because intrinsically nothing is actually a service unless it is enabled to be one. Unless it is, as today's vernacular is wont to say, a platform. Delivering X as a service can be achieved via hardware as well as software. One need only look at the varied load balancing service offerings of cloud providers to understand that both hardware and software can be service-enabled with equal alacrity, albeit with unequal results in features and functionality. As long as the underlying platform provides the means by which services and their requisite interfaces can be created, the distinction between hardware and "services" is non-existent.
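That distinction translates directly into code. Below is a minimal, hypothetical sketch (Python; every name here is invented for illustration) of a load balancing "service": the consumer programs against the interface, and whether the implementation behind it is purpose-built hardware or a virtual appliance is invisible - and irrelevant - to them.

```python
from abc import ABC, abstractmethod

class LoadBalancingService(ABC):
    """The 'service': a stable interface consumers integrate against."""
    @abstractmethod
    def create_virtual_server(self, address: str, pool: list[str]) -> str: ...

class HardwareADC(LoadBalancingService):
    """Implementation backed by purpose-built hardware."""
    def create_virtual_server(self, address, pool):
        # ...would drive the appliance's management API here...
        return f"vs-hw-{address}"

class VirtualADC(LoadBalancingService):
    """Implementation backed by a virtual network appliance."""
    def create_virtual_server(self, address, pool):
        # ...would drive the VM image's management API here...
        return f"vs-virt-{address}"

def provision_app(lb: LoadBalancingService):
    # The consumer neither knows nor cares about the form factor;
    # only the interface matters.
    return lb.create_virtual_server("192.0.2.10", ["10.0.0.1", "10.0.0.2"])

print(provision_app(HardwareADC()))
print(provision_app(VirtualADC()))
```

Swap the implementation and the consumer's code does not change - which is exactly the insulation from upgrade and lock-in risk described below.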
The definition of "service" neither includes nor precludes the use of hardware as the underlying implementation. Indeed, the value of a "service" is that it provides a consistent interface that abstracts (and therefore insulates) the service consumer from the underlying implementation. A true "service" ensures minimal disruption as well as continued compatibility in the face of upgrade and enhancement cycles. It provides flexibility and decreases the risk of lock-in to any given solution, because the implementation can be completely changed without requiring significant changes to the interface.

This is the transformational challenge IT faces: to stop thinking of solutions in terms of deployment form factors and instead start looking at them with an eye toward the services they provide. Because ultimately IT needs to offer them "as a service" (which is a delivery and deployment model, not a form factor) to achieve the push-button IT envisioned by the term "IT as a Service."

The Context-Aware Cloud
Christofer Hoff, better known as @Beaker to the Twitterverse, put on his devil's advocacy hat (yes, it really is a good color for him) yesterday and questioned whether there was a need for hardware application delivery solutions in the cloud. He postulated via Twitter that application delivery functions would become part of the cloud fabric, and thus whether they were implemented in hardware or software was largely irrelevant. Generally speaking, we're in agreement on that one. But then he really used that devil's advocacy hat and suggested that the application delivery control layer might be virtualized and software-based as well. Of course he also said "it's just a switch". [strangled sounds - that hat works really well] I'm going to ignore that for now because it's just not relevant to this conversation, but rest assured I'll come back and visit that one eventually.

It isn't so much that application delivery needs to be a hardware solution as it is that the solution needs to have certain capabilities that are typically found in hardware solutions and not always found in software solutions. Spreading application delivery functions like load balancing, protocol optimization, and security around the data center, essentially embedding them in the cloud fabric, destroys the ability of those solutions to perform their tasks. Most application delivery functions require that the solution mediate; that it sit between the client and the servers it is managing. It's difficult for a device to act as a virtual access point for a web site if the device is itself inside the data center. And given the dwindling supply of public IPv4 addresses and the slow adoption of IPv6, virtual servers a la load balancing technology are an absolute requirement for cloud computing environments. They can't be hanging around inside the environment, because they necessarily need to be at the edge, mediating between the public (client) and private (server) networks.

Then there's the real meat: contextual networking. That's the ability of a solution to take context into consideration when applying policies, rules, and functions to traffic and data. Understanding the context of a request and response - location of the client, type of client, type of response data, network over which a client is connecting, etc. - makes it possible to apply application delivery functions like optimization, acceleration, and security more efficiently. In order to understand the client, you've absolutely got to have visibility into the client side of the equation as well as the server side. If you're nothing more than a service in the fabric, you aren't going to have that visibility - some other device or solution will. Without that visibility you can't easily obtain the context, and thus you aren't capable of adapting to what's going on right now - on demand.

Let's say your cloud computing provider is able to provide compression (a common acceleration function) on a per-MB-compressed basis. In other words, you are paying additional fees based on the amount of data that's compressed. This might save you money in bandwidth fees because you're using less bandwidth, but those savings can easily be eaten up by inefficient solutions that compress everything regardless of context. If the provider has an intelligent solution that can take into consideration all the parameters involved in a request and reply, it can determine whether applying compression will (a) make a difference and (b) impede or improve response times.
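A minimal sketch of that decision logic might look something like the following (Python; the thresholds and parameters are illustrative assumptions, not any provider's actual policy): compress only when the context - content type, payload size, client network - says it will help.

```python
# Content that is already compressed gains nothing from another pass.
ALREADY_COMPRESSED = {"image/jpeg", "image/png", "image/gif", "video/mp4",
                      "application/zip", "application/gzip"}
MTU = 1500  # typical Ethernet MTU; smaller responses fit in one packet

def should_compress(content_type: str, body_size: int,
                    client_bandwidth_kbps: int) -> bool:
    if content_type in ALREADY_COMPRESSED:
        return False   # (a) won't make a difference
    if body_size <= MTU:
        return False   # (b) compressing would take longer than sending raw
    if client_bandwidth_kbps > 100_000:
        return False   # on a fast link, CPU time outweighs bandwidth savings
    return True        # large, compressible, bandwidth-limited: compress

# A big HTML page to a slow mobile client: compress.
print(should_compress("text/html", 48_000, 2_000))         # True
# A tiny JSON reply on a fast LAN: leave it alone.
print(should_compress("application/json", 700, 100_000))   # False
```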
We all know some data - particularly images - doesn't benefit from compression. We also know that applying compression to responses that are small (typically under the network MTU and maximum Ethernet packet size) can actually impede performance, because it takes longer to compress the response than it would to deliver it raw. An application delivery solution, usually hardware but not always, can intelligently take into consideration the context of the request and response and make those decisions, making more efficient use of compression to do what it was intended to do: improve performance and the end-user experience and, if applicable, reduce the costs associated with the service. Moving compression functions off servers and onto an application delivery solution further improves the capacity and utilization of the servers, which can reduce the amount of compute processing required to run your application in the cloud, further decreasing costs. It's more efficient from a processing-power standpoint, and it's financially more efficient. And you can't really do it if your application delivery solution is distributed across the fabric inside the data center as virtual images of software solutions. Could you deploy virtual images of software solutions at the edge, as a sort of edge cloud fabric? Yes. Can you deploy them willy-nilly throughout the internal cloud fabric? No.

There are plenty of other reasons why hardware solutions will likely remain the best option for providing application delivery services in the cloud fabric: failover, load balancing, throughput, capacity, hardware-accelerated functionality, performance, reliability, etc. But the fact is that in a virtualized environment there is going to be some intermediary necessarily providing core application delivery functions like load balancing, failover, and availability assurance, and it just makes sense to leverage that same device by employing as many application delivery functions as possible at that point, where context can be taken into consideration and the application delivered as efficiently as possible.

Data Center Feng Shui: SSL
Like most architectural decisions, the choice between hardware and virtual servers is not mutually exclusive.

The argument goes a little something like this: the increases in raw compute power available in general-purpose hardware eliminate the need for purpose-built hardware. After all, if general-purpose hardware can sustain the same performance for SSL as purpose-built (specialized) hardware, why pay for the purpose-built hardware? Therefore, ergo, and thusly it doesn't make sense to purchase a hardware solution when all you really need is the software, so you should just acquire and deploy a virtual network appliance.

The argument, which at first appears to be a sound one, completely ignores the fact that the same increases in raw compute power for general-purpose hardware are also applicable to purpose-built hardware and the specialized hardware cards that accelerate specific functions like compression and RSA operations (SSL). But for the purposes of this argument we'll assume that performance, in terms of RSA operations per second, is about equal between the two options. That still leaves two very good situations in which a virtualized solution is not a good choice.

1 COMPLIANCE with FIPS 140

For many industries - federal government, banking, and financial services among the most common - SSL is a requirement, even internal to the organization. These industries also tend to fall under the requirement that the solution providing SSL be FIPS 140-2 or higher compliant. If you aren't familiar with FIPS or the different "levels" of security it specifies, then let me sum up: FIPS 140 Level 2 (FIPS 140-2) requires a level of physical security that is not a part of Level 1, beyond the requirement that hardware components be "production grade", which we assume covers the general-purpose hardware deployed by cloud providers.

"Security Level 2 improves upon the physical security mechanisms of a Security Level 1 cryptographic module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access." -- FIPS 140-2, Wikipedia

FIPS 140-2 requires specific physical security mechanisms to ensure the security of the cryptographic keys used in all SSL (RSA) operations. The private and public keys used in SSL, and their related certificates, are essentially the "keys to the kingdom". The loss of such keys is considered quite the disaster, because they can be used to (a) decrypt sensitive conversations and transactions in flight and (b) masquerade as the provider by using the keys and certificates to build more authentic phishing sites. More recently, keys and certificates - PKI (Public Key Infrastructure) - have been an integral component of providing DNSSEC (DNS Security) as a means to prevent DNS cache poisoning and hijacking, which has bitten several well-known organizations in the past two years. Obviously you have no way of ensuring, or even knowing, whether the general-purpose compute upon which you are deploying a virtual network appliance has the physical security mechanisms necessary to meet FIPS 140-2 compliance.
Therefore, ergo, and thusly, if FIPS Level 2 or higher compliance is a requirement for your organization or application, then you really don't have the option to "go virtual", because such solutions cannot meet the necessary physical requirements.

2 RESOURCE UTILIZATION

A second consideration, assuming performance and sustainable SSL (RSA) operations are equivalent, is the resource utilization required to sustain that level of performance. One of the advantages of purpose-built hardware that incorporates cryptographic acceleration cards is that it's like being able to dedicate CPU and memory resources just for cryptographic functions. You're essentially getting an extra CPU; it's just that the extra CPU is automatically dedicated to and used for cryptographic functions. That means the general-purpose compute available for TCP connection management and the application of other security- and performance-related policies is not consumed by cryptographic functions. The utilization of general-purpose CPU and memory necessary to sustain a given rate of encryption and decryption will be lower on purpose-built hardware than on its virtualized counterpart. So while a virtual network appliance can certainly sustain the same number of cryptographic transactions, it may not (and likely won't) be able to do much of anything else at the same time. And the higher the utilization, the bigger the impact on performance in terms of latency introduced into the overall response time of the application.

You can generally think of cryptographic acceleration as "dedicated compute resources for cryptography." That's oversimplifying a bit, but when you distill the internal architecture and how tasks are actually assigned at the operating-system level, it's an accurate if abstracted description. Because the virtual network appliance must leverage general-purpose compute for what are computationally expensive and intense operations, there will be less general-purpose compute for other tasks, thereby lowering the overall capacity of the virtualized solution. That means in the end the costs to deploy and run the application will be higher in OPEX than CAPEX, while the purpose-built solution will be higher in CAPEX than in OPEX - assuming equivalent general-purpose compute between the virtual network appliance and the purpose-built hardware.
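To get a feel for just how expensive those operations are, here's a rough, illustrative measurement using Python's cryptography package: it times the RSA private-key operation, which is (at minimum) what an SSL terminator must perform once per full handshake. Absolute numbers will vary with hardware; the point is that every operation burns a measurable slice of a general-purpose core.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"x" * 190  # stand-in payload, roughly handshake-sized

N = 500
start = time.perf_counter()
for _ in range(N):
    # The RSA private-key operation: the computationally intense part
    # of SSL termination for each new session.
    key.sign(message, padding.PKCS1v15(), hashes.SHA256())
elapsed = time.perf_counter() - start

print(f"{N / elapsed:.0f} RSA-2048 private-key ops/sec on this core")
# Every one of those ops comes out of the same CPU budget a virtual
# appliance also needs for TCP management and policy enforcement; a
# crypto acceleration card takes them off that budget entirely.
```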
IS THERE EVER A GOOD TIME to GO VIRTUAL WHEN SSL is INVOLVED?

Can you achieve the same performance gains by running a virtual network appliance on general-purpose compute hardware augmented by a cryptographic acceleration module? Probably, but that assumes the cryptographic module is one the virtual network appliance is familiar with and can support via hardware drivers - and part of the "fun" of cloud computing and leased compute resources is that the underlying hardware isn't supposed to be a factor and can vary from cloud to cloud, and even from machine to machine within a cloud environment. So while you could achieve many of the same performance gains if the cryptographic module were installed on the general-purpose hardware (in fact, that's how we used to do it, back in the day), it would complicate the provisioning and management of the cloud computing environment, which would likely raise the costs per transaction, defeating one of the purposes of moving to cloud in the first place.

If you don't need FIPS 140-2 or higher compliance, if performance and capacity (and therefore costs) are not a factor, or if you're simply using the virtualized network appliance as part of test or QA efforts or a proof of concept, then certainly - go virtual. If you're building a hybrid cloud implementation, you're likely to need a hybrid application delivery architecture. The hardware components provide the fault tolerance and reliability required, while virtual components offer the means by which corporate policies and application-specific behavior can be deployed to external cloud environments with relative ease. Cookie security and concerns over privacy may require encrypted connections regardless of application location, and where physical deployment is not possible or financially feasible, a virtual equivalent to provide that encryption layer is certainly a good option.

It's important to remember that this is not necessarily a mutually exclusive choice. A well-rounded cloud computing strategy will likely include both hardware and virtual network appliances in a hybrid architecture. IT needs to formulate a strong strategy and guidance regarding what applications can and cannot be deployed in a public cloud computing environment, and certainly the performance, capacity, and compliance requirements of a given application, in the context of its complete architecture - network, application delivery network, and security - should be part of that decision-making process. The question of whether to go virtual or physical is not binary. The operator is, after all, OR and not XOR. The key is choosing the right form factor for the right environment based on both business and operational needs.

What Developers Should Or Should Not Do.
Recently I was in a conversation where someone seriously suggested that web application acceleration and WAN optimization should be the job of developers, since they are in the code and creating the network traffic. At first I was taken aback by this suggestion. I was the manager of a small team of developers and admins when Web Application Firewalls first started to be bandied about (though I don't think they had the fancy name then), and went through this entire discussion then. Never in my wildest dreams did I think we'd revisit it on the much grander scale mentioned. But that does bring up the question: what is best left in the hands of app developers, and what is not?

Not so long ago, a friend of mine who repaired complex systems for a retail chain was laid off and his job eliminated. Even though he could prove that he saved the company a lot of money, it was no longer seen as cost-effective to maintain a test bench and the tools necessary to fix complex computer systems. It is just too easy to buy extended warranty plans, or to replace gear before it is worn out, to warrant paying someone to do that job anymore. That may change again in the future, but I honestly don't know many enterprises that keep breadboard-level repair staff around these days. Why? Because the specialty of making and repairing breadboards is centralized in places where that is all they do, making it much more cost-effective than every enterprise keeping someone on staff for the eventuality of a breakdown. You will have unexpected failures either way, whether you have someone handy to repair them or have to call in a service to do it.

There's a similarity here. The things that a developer can do well are vast, because we don't really have vertical-market developers. Oh, there are a few, and some places want experience in their vertical, but there's no schooling to be a utility-company developer or a financial-services developer; there's schooling to write software, and the problem domain you're taught to write it for is "everything". You differentiate based upon languages or operating systems in college, but not on vertical market. And that is both a plus and a negative.

Developers are not security experts; they're software development experts. They're not web application acceleration experts, nor are they WAN optimization experts. They're development experts - very good at turning ideas into applications. Some are specialized closer to the metal, others are specialized in more business-oriented development. Some, like myself, have done a bit of it all. But only those working for companies that make WAN optimization, web application acceleration, or security solutions are specialists in their respective areas. There are a few web application acceleration developers in the wild, and a few security developers in the wild, but most have gone to the place where they can utilize their specialty full-time: the shops that make these products. And that is reason number one why it is not something developers should be doing. At a minimal level - not making fifteen trips across the network when two will do, or checking for buffer overflows and SQL injection attacks before deploying code? Certainly. But overarching security or web application optimization? No. They won't be as good at it as a dedicated staff, and they won't update it as often as a dedicated staff.
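That minimal level is worth showing, because it genuinely is the developer's job. The classic example (sketched here with Python's built-in sqlite3 module; any database driver works the same way) is using parameterized queries instead of string concatenation, which is what opens the door to SQL injection in the first place.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# DON'T: string concatenation lets the input rewrite the query.
#   f"SELECT role FROM users WHERE name = '{user_input}'"
#   ...would match every row in the table.

# DO: a parameterized query treats the input strictly as data.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None -- the malicious string matched nothing
```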
The second reason is just as straightforward: you don't own the source to a whole bunch of your applications, so your developers can't do these things. Of course you could insist that your vendor build in-depth, catch-all security or web application acceleration into their product, or you could let them develop the features your business needs to get the job done with their software. The latter is the better choice, if you have a choice. Chances are you will fall somewhere on the spectrum closer to "You are not our only customer… No" than to "Oh yes, we have a whole team with nothing better to do, we'll start rearchitecting right away." Again, it is reasonable to expect a certain level of proficiency to be built into your purchased apps, but not complete solutions for all of these issues.

The third reason is a bit more esoteric: this is not what you hire developers for. It just isn't. And it's not what your vendors are hiring developers for. You're hiring them to make apps the business can use. Is it a wise use of someone who is extremely proficient in the tools you use, and who has developed for your vertical, to write non-business code? Not really.

And fourth? Fourth is a question of possibilities. In WAN optimization, some of the solutions work across applications - or, more to the point, across streams. Putting that functionality somewhere that sees more than a single application's streams is necessary to get the benefits. The same is true in different ways for security (SIEM, for example) and for web application acceleration (you don't optimize streaming or logo downloads per-app unless there is a specific reason to). So developers really cannot effectively write this stuff into an application - at least not and get the benefits offered by tools readily available today. In all of these cases the tools are also dealing with data on the wire, so unless your staff writes network drivers, there are some optimizations and solutions that just cannot be achieved from within the application.

Fifth is reuse, or the lack thereof. Some code that would suit these needs would be highly reusable, but much of it would have to be rewritten with each product, platform, or OS, simply because applications aren't sitting on the wire detecting things; they're speaking a development language, not network protocols.

And finally, a point hinted at above: what happens when a better way to do something in one of these specialized areas comes along? Do the developers trained in these things drop whatever they're doing to respond? In the case of security I would say "yes"; for the other two, as long as your apps are meeting SLAs or business expectations, probably not - even though the new way of doing things might bring a lot of benefit to the organization.

So don't push things onto developers that they are not in a position to deliver. Get them training in developing secure software - while you install a WAF and other security tools. Get them training in network communications protocols - while installing a WAN optimization solution. And get them training in optimizing web development projects - while installing a web application acceleration product. Keep them primarily focused on building solutions that make your business responsive to the market and your customers. Don't force them to reinvent the wheel, and don't ask them to be a specialist for a short amount of time on a highly complex topic - they get enough of that already.

Don't Throw the Baby out with the Bath Water
Or, in modern technical terms, don't throw the software out with the hardware.

Geva Perry recently questioned one of Gartner's core predictions for 2010, namely that "By 2012, 20 percent of businesses will own no IT assets." Geva asks a few (very pertinent) questions regarding this prediction that got me re-reading it. Let's all look at it one more time, shall we?

"By 2012, 20 percent of businesses will own no IT assets. Several interrelated trends are driving the movement toward decreased IT hardware assets, such as virtualization, cloud-enabled services, and employees running personal desktops and notebook systems on corporate networks. The need for computing hardware, either in a data center or on an employee's desk, will not go away. However, if the ownership of hardware shifts to third parties, then there will be major shifts throughout every facet of the IT hardware industry. For example, enterprise IT budgets will either be shrunk or reallocated to more-strategic projects; enterprise IT staff will either be reduced or reskilled to meet new requirements, and/or hardware distribution will have to change radically to meet the requirements of the new IT hardware buying points." [emphasis added]

Geva asks: "'IT assets' - They probably mean IT assets in the data center because aren't personal desktops and notebooks also IT assets?" That would have been my answer at first as well, but the explanation clearly states that "the need for computing hardware either in a data center or on an employee's desk will not go away." Is Gartner saying, then, that "computing hardware" is not an IT asset? If the need for it - in the data center and on the employee's desk - will not go away, as Gartner asserts, then how can this prediction be accurate?

Even if every commoditized business function is enabled via SaaS, and any custom solutions are deployed and managed via IaaS or PaaS, employees still need a way to access them. And certainly they've got telecommunications equipment of some kind - Blackberries and iPhones abound - and those, if distributed by the organization, are considered IT assets and must be managed accordingly. As Geva subtly points out, even if an organization moves to a BYOH (bring your own hardware) approach to the problem of client access to remotely (cloud) hosted applications, that hardware still must be - or should be - managed. Without proper management of network access, the risk of spreading viruses and other malware to every other employee is much higher. Without a proper understanding of what organizational data is being accessed, how, and where it is being stored, the business is at risk. Without proper controls on employees' "personal" hardware, it is difficult to audit the security policies that govern who can and cannot access a given laptop, and subsequently who might have access to credentials that would allow access to sensitive corporate information.

What Gartner seems to be saying with this prediction is not that hardware will go away, but that ownership of the hardware will. Organizations will still need the hardware - both on the desktop and in the data center - but they will not need to "own" those IT hardware assets. Notice that the prediction says businesses "will own no IT assets", not that they will "need no IT assets." That said, the prediction is still not accurate, as it completely ignores the fact that there are more "IT assets" than just hardware.