Can SSL VPN client handle multiple simultaneous sessions?
From a single Windows machine, we need the F5 SSL VPN client to connect to multiple external organizations at once, and also to open multiple tunnels, with separate credentials, to a single organization. If there's a way to do either of these, it's not obvious to us. It seems that only one SSL VPN client instance can run per machine, and that instance can handle only a single tunnel, with a single set of credentials, to a single remote location. It's a testament to F5's market penetration that we find ourselves needing to do more than that. Is there a way? Thanks, Whit
Problems load balancing printing

I followed this guide to configure load balancing of MS printing with npath routing: http://blog.loadbalancer.org/load-balancing-microsoft-print-server/

The problem is that when I try to connect to the printer with the FQDN of the virtual server (e.g. \\virtualserver.mydomain.com), I get the error "Operation could not be completed (error 0x00000709). Double check the printer name and make sure that the printer is connected to the network." If I connect to the VIP (e.g. \\192.168.0.10), it works fine. If I connect to the host directly (by hostname or IP), it works fine. Any ideas?
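Not an authoritative answer, but one common cause worth checking: error 0x00000709 often appears when a Windows print server is reached through a DNS name that is not its own hostname, because the server rejects SMB requests addressed to an alias by default. A hedged sketch of the usual server-side workaround follows; the registry value is a standard Windows mechanism, but whether it applies to this particular npath setup is an assumption.

:: Run on each print server behind the VIP, then restart the Server service.
:: Lets the server answer SMB requests addressed to the shared FQDN
:: (virtualserver.mydomain.com, from the question) instead of only its own hostname.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f

:: If Kerberos authentication is in play, the shared name may also need an SPN;
:: "printserver01" is a hypothetical server account name. Note that the same SPN
:: cannot safely be registered on multiple accounts, so with several real servers
:: behind the VIP, NTLM fallback may be the practical path.
setspn -A HOST/virtualserver.mydomain.com printserver01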
Outlook Client Prompting for Password

A few months ago we implemented Exchange 2010 with the help of our LTMs. However, it has come to light that people have been complaining about sometimes being prompted to log in after they've been logged in all day. What they don't understand is that when they switch between networks (wired to wireless, or vice versa), their IP address changes, so the CAS server they land on is likely different, prompting them to re-authenticate. I don't suppose there is an F5 solution to stop these password prompts? The best solution I came up with was to run Outlook Anywhere and do the persistence based on cookies. Are there any other ideas out there?
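If you do go the Outlook Anywhere route, the cookie persistence idea can be sketched as an iRule on the HTTPS virtual server. This is a minimal sketch rather than a tested Exchange configuration: it assumes the virtual server has an HTTP profile (and client SSL, if TLS terminates on the LTM) so that HTTP_REQUEST fires, and the cookie name and timeout are illustrative.

when HTTP_REQUEST {
    # Insert an LTM persistence cookie so the client returns to the same
    # CAS member even when its source IP changes (wired to wireless).
    # "OA_PERSIST" is an arbitrary cookie name; 7200 is a 2-hour timeout.
    persist cookie insert "OA_PERSIST" 7200
}

A cookie persistence profile applied to the virtual server accomplishes the same thing without an iRule; the iRule form is shown only because it makes the mechanism explicit.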
5 Years Later: OpenAJAX Who?

Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of “cloud” today. You couldn’t turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard. -- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub: The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata: OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification) OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise to ascendancy of JSON as developers' preferred format for APIs. Granted, when the alliance was formed XML was all the rage, and it was believed it would remain the dominant format for quite some time given the popularity of similar technological models such as SOA. But still – the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
Ignoring the XML factor (which undoubtedly is a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates between various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it’s page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry-standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces, with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration, or by failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by the producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but OAM does not appear to be gaining widespread support or mindshare despite IBM’s efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and a desire for simplicity tend to drive developers to a single library when possible (which is most of the time).
It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and of developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX Alliance several infrastructure vendors: folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application-layer switching based on data values or the submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.

By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
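As an illustration of the parsing burden described above, here is a minimal iRule sketch (not from the original article) that buffers jQuery-style application/x-www-form-urlencoded POST data and switches pools on a field value. The field name "account_type" and pool name "premium_pool" are hypothetical, and the sketch ignores URL decoding and chunked bodies for brevity.

when HTTP_REQUEST {
    # Buffer only form-encoded POST bodies, the default format produced
    # by jQuery.ajax() when the data option is a map.
    if { [HTTP::method] eq "POST" &&
         [HTTP::header "Content-Type"] starts_with "application/x-www-form-urlencoded" &&
         [HTTP::header exists "Content-Length"] } {
        HTTP::collect [HTTP::header "Content-Length"]
    }
}
when HTTP_REQUEST_DATA {
    # Walk the key1=value1&key2=value2 pairs and route on a field value.
    foreach pair [split [HTTP::payload] "&"] {
        set key   [lindex [split $pair "="] 0]
        set value [lindex [split $pair "="] 1]
        if { $key eq "account_type" && $value eq "premium" } {
            pool premium_pool
        }
    }
    HTTP::release
}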
Related:
API Jabberwocky: You Say Tomay-to and I Say Potah-to
OpenAjax Metadata 1.0 and the Adobe Dreamweaver Widget Browser
OpenAjax Alliance
AJAX-based Dojo Toolkit
The Stealthy Ascendancy of JSON
JSON Continues its Winning Streak Over XML
JSON versus XML: Your Choice Matters More Than You Think
I am in your HTTP headers, attacking your application
The Web 2.0 API: From collaborating to compromised
IT as a Service: A Stateless Infrastructure Architecture Model
JSON Activity Streams and the Other Consumerization of IT

SSL Connection Configuration between Apache Web server and Weblogic server

I'm currently using the Apache web server as a front-end server for WebLogic Server 8.1, and now I'm facing a configuration problem setting up the SSL connection between these two servers. When I open my web application page, it shows:

Failure of Server Apache bridge: No backend server available for connection: timed out after 10 seconds or idempotent set to OFF.

and my proxy.log shows:

Thu Nov 03 09:36:41 2011 <182413202842013> INFO: SSL is configured
Thu Nov 03 09:36:41 2011 <182413202842013> INFO: SSL configured successfully
Thu Nov 03 09:36:41 2011 <182413202842013> Using Uri /favicon.ico
Thu Nov 03 09:36:41 2011 <182413202842013> After trimming path: '/favicon.ico'
Thu Nov 03 09:36:41 2011 <182413202842013> The final request string is '/favicon.ico'
Thu Nov 03 09:36:41 2011 <182413202842013> SEARCHING id=[ebwdsk298.ebworx.com:7002] from current ID=[ebwdsk298.ebworx.com:7002]
Thu Nov 03 09:36:41 2011 <182413202842013> The two ids matched
Thu Nov 03 09:36:41 2011 <182413202842013> @@@FOUND...id=[ebwdsk298.ebworx.com:7002], server_name=[10.122.50.218], server_port=[80]
Thu Nov 03 09:36:41 2011 <182413202842013> attempt 0 out of a max of 5
Thu Nov 03 09:36:41 2011 <182413202842013> general list: trying connect to '10.122.50.48'/7002/7002 at line 2696 for '/favicon.ico'
Thu Nov 03 09:36:41 2011 <182413202842013> New SSL URL: match = 0 oid = 22
Thu Nov 03 09:36:41 2011 <182413202842013> Connect returns -1, and error no set to 10035, msg 'Unknown error'
Thu Nov 03 09:36:41 2011 <182413202842013> EINPROGRESS in connect() - selecting
Thu Nov 03 09:36:41 2011 <182413202842013> Setting peerID for new SSL connection
Thu Nov 03 09:36:41 2011 <182413202842013> 0a7a 3230 5a1b 0000 .z20Z...
Thu Nov 03 09:36:41 2011 <182413202842013> Local Port of the socket is 2121
Thu Nov 03 09:36:41 2011 <182413202842013> Remote Host 10.122.50.48 Remote Port 7002
Thu Nov 03 09:36:41 2011 <182413202842013> general list: created a new connection to '10.122.50.48'/7002 for '/favicon.ico', Local port:2121
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Host]=[10.122.50.218]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Connection]=[keep-alive]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Accept]=[*/*]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[User-Agent]=[Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Accept-Encoding]=[gzip,deflate,sdch]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Accept-Language]=[en-US,en;q=0.8]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs from clnt:[Accept-Charset]=[ISO-8859-1,utf-8;q=0.7,*;q=0.3]
Thu Nov 03 09:36:41 2011 <182413202842013> URL::sendHeaders(): meth='GET' file='/favicon.ico' protocol='HTTP/1.1'
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Host]=[10.122.50.218]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Accept]=[*/*]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[User-Agent]=[Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Accept-Encoding]=[gzip,deflate,sdch]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Accept-Language]=[en-US,en;q=0.8]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Accept-Charset]=[ISO-8859-1,utf-8;q=0.7,*;q=0.3]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Connection]=[Keep-Alive]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[WL-Proxy-SSL]=[false]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[WL-Proxy-Client-IP]=[10.122.50.48]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[Proxy-Client-IP]=[10.122.50.48]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[X-Forwarded-For]=[10.122.50.48]
Thu Nov 03 09:36:41 2011 <182413202842013> Hdrs to WLS:[X-WebLogic-Force-JVMID]=[unset]
Thu Nov 03 09:36:41 2011 <182413202841921> INFO: No session match found
Thu Nov 03 09:36:41 2011 <182413202842013> INFO: No CA was trusted, validation failed
Thu Nov 03 09:36:41 2011 <182413202841921> INFO: DeleteSessionCallback
Thu Nov 03 09:36:41 2011 <182413202842013> ERROR: SSLWrite failed
Thu Nov 03 09:36:41 2011 <182413202842013> SEND failed (ret=-1) at 789 of file ../nsapi/URL.cpp
Thu Nov 03 09:36:41 2011 <182413202842013> *******Exception type [WRITE_ERROR_TO_SERVER] raised at line 790 of ../nsapi/URL.cpp
Thu Nov 03 09:36:41 2011 <182413202842013> Marking 10.122.50.48:7002 as bad
Thu Nov 03 09:36:41 2011 <182413202842013> got exception in sendRequest phase: WRITE_ERROR_TO_SERVER [os error=0, line 790 of ../nsapi/URL.cpp]: at line 3078
Thu Nov 03 09:36:41 2011 <182413202842013> INFO: Closing SSL context
Thu Nov 03 09:36:41 2011 <182413202842013> INFO: Error after SSLClose, socket may already have been closed by peer
Thu Nov 03 09:36:41 2011 <182413202842013> Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()

Here are my steps to set up the SSL connection:

1. Create a keystore (SSLkey.jks) for WebLogic use.
2. Create a certificate signing request (certreq.pem) and send it to the trusted certificate authority.
3. Download the root CA (rootca.cer) and the signed certificate (supportcert.pem) from the certificate authority.
4. Import rootca.cer into a custom trust keystore (supporttrust.jks).
5. Configure the WebLogic console -> Keystores and SSL -> Custom identity and custom trust.
6. Use SSLkey.jks as the custom identity keystore and supporttrust.jks as the custom trust keystore.
7. Extract the trusted CA file from supporttrust.jks to trustedcafile.der.
8. Convert trustedcafile.der into trustedcafile.pem.
9. Copy trustedcafile.pem into
10. Configure httpd.conf in Apache:

LoadModule weblogic_module modules/mod_wl_20.so
<IfModule mod_weblogic.c>
   WebLogicHost abc
   WebLogicPort 7002
   SecureProxy ON
   TrustedCAFile conf/ssl/trustedcafile.pem
   RequireSSLHostMatch false
   Debug ALL
   WLLogFile logs/proxy.log
</IfModule>
<Location /secureWebAuth>
   SetHandler weblogic-handler
</Location>

Can anyone tell me what I should do to correct this error? Your help is kindly appreciated! Please~
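For reference, steps 1 through 8 map onto standard keytool and openssl invocations. This is a hedged sketch, not the poster's actual commands: aliases and filenames follow the post, keystore passwords and distinguished names are omitted, and keytool flag names vary slightly between JDK versions (older JDKs use -genkey/-import/-export).

# Steps 1-2: identity keystore and certificate signing request
keytool -genkeypair -alias server -keyalg RSA -keystore SSLkey.jks
keytool -certreq -alias server -file certreq.pem -keystore SSLkey.jks

# Step 3: once the CA returns supportcert.pem, import the chain into the identity keystore
keytool -importcert -trustcacerts -alias rootca -file rootca.cer -keystore SSLkey.jks
keytool -importcert -alias server -file supportcert.pem -keystore SSLkey.jks

# Step 4: custom trust keystore
keytool -importcert -trustcacerts -alias rootca -file rootca.cer -keystore supporttrust.jks

# Steps 7-8: export the trusted CA and convert DER to PEM for Apache's TrustedCAFile
keytool -exportcert -alias rootca -file trustedcafile.der -keystore supporttrust.jks
openssl x509 -inform der -in trustedcafile.der -out trustedcafile.pem

Separately, the "No CA was trusted, validation failed" line in proxy.log suggests the first thing to verify is that trustedcafile.pem really contains the CA that signed the certificate WebLogic presents on port 7002.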
Optimizing application delivery with F5 Secure ICA proxy

F5's Secure ICA proxy solution on APM/Edge Gateway is over a year old now and has been successfully deployed by many of our customers. Beyond the simplicity and ease of administration it provides, F5 customers are looking for more value and want to make sure that the solution they implement delivers Citrix XenApp and XenDesktop to remote users as fast as possible. In one scenario, we've found that leveraging the following TCP profile on the APM ICA proxy virtual can drastically improve the performance of applications where large data transfers happen between the client and the XenApp/XenDesktop farm. This profile was tested in a typical WAN scenario, with the client connecting over a T1 on a 200 ms link with 0.5-1% packet loss. In this scenario, the F5 ICA proxy was able to maintain almost full bandwidth throughput (close to 1.5 Mbits/sec on the ICA connection), which was more than a 2x improvement over throughput with the regular TCP stack.

This is the snippet of the TCP profile configuration from bigip.conf:

profile tcp optimized_xenapp_wan {
   defaults from tcp-lan-optimized
   reset on timeout enable
   time wait recycle enable
   delayed acks disable
   proxy mss disable
   proxy options disable
   deferred accept disable
   selective acks disable
   dsack disable
   ecn disable
   limited transmit disable
   rfc1323 disable
   slow start disable
   bandwidth delay disable
   nagle disable
   abc enable
   ack on push enable
   verified accept disable
   pkt loss ignore rate 0
   pkt loss ignore burst 0
   md5 sign disable
   cmetrics cache enable
   md5 sign passphrase none
   proxy buffer low 98304
   proxy buffer high 131072
   idle timeout 300
   time wait 2000
   fin wait 5
   close wait 5
   send buffer 65535
   recv window 65535
   keep alive interval 1800
   max retrans syn 4
   max retrans 8
   ip tos 0
   link qos 0
   congestion control scalable
   zero window timeout 20000
}

If you are running or deploying the F5 Secure ICA proxy solution, we encourage you to try this TCP profile and see if it improves ICA performance in your environment as well. Any and all feedback will be greatly appreciated.
Devops Proverb: Process Practice Makes Perfect

#devops Tools for automating – and optimizing – processes are a must-have for enabling continuous delivery of application deployments.

Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, “practice makes perfect”, can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as “familiarity through doing creates high proficiency”, i.e. practice makes perfect.

This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and discover which activities (steps) within that process may need tweaking, changes or improvements. Ergo, optimization. This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process – develop, test, deploy – with a high level of frequency. Eventually (one hopes) we achieve process perfection – or at least what we might call process perfection: repeatable, consistent deployment success.

It is implied that in order to achieve this, many processes will be automated, once we have discovered and defined them in such a way as to enable them to be automated. But how does one automate a process such as an application release cycle? Business process management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web server, database or load balancer adapters for even the most widely adopted BPM systems. One such solution can be found in Electric Cloud with its recently announced ElectricDeploy.

Process Automation for Operations

ElectricDeploy is built upon a more well-known product from Electric Cloud (well, more well-known in developer circles, at least) called ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure. The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production and production.

What’s hawesome about the tool is the ability to control the process – to roll back, to restore, and even to debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, etc., to ensure the process is executing properly. While it’s not able to perform “step into” debugging (stepping into the configuration of the load balancer, for example, and manually executing line-by-line changes), it can perform what developers know as “step over” debugging, which means you can step through a process at the highest layer and pause at breakpoints, but you can’t yet dive into the actual task.
Still, the ability to pause an executing process and examine output, as well as roll back or restore specific process versions (yes, it versions the processes as well, just as you’d expect), would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can aid them in improving the time and consistency of deployments.

The tool also enables operations to determine what constitutes failure during a deployment. For example, you may want to stop and roll back the deployment when a server fails to launch if your deployment comprises only 2 or 3 servers, but when it comprises 1000s it may be acceptable that a few fail to launch. Success and failure of individual tasks, as well as of the overall process, are defined by the organization, which allows for flexibility. This is more than just automation, it’s managed automation; it’s agile in action; it’s focusing on the processes, not the plumbing.

MANUAL still RULES

Electric Cloud recently (June 2012) conducted a survey on the “state of application deployments today” and found some not unexpected but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. Automation may not be the goal of devops, but it is a tool enabling operations to achieve its goals, and thus it should be more broadly considered standard operating procedure to automate as much of the deployment process as possible. This is particularly true when operations fully adopts not only the premise of devops but also the conclusion resulting from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to accomplish this manually may set operations up for failure, and it leaves operations focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes they are executing.

Electric Cloud’s ElectricDeploy is one of the ways in which process optimization can be achieved, and it justifies its purchase by operations by promising to enable better control over application deployment processes across development and infrastructure.

Related:
Devops is a Verb
1024 Words: The Devops Butterfly Effect
Devops is Not All About Automation
Application Security is a Stack
Capacity in the Cloud: Concurrency versus Connections
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk
Agile PLM - Weblog - Java Client

My company is upgrading from Agile PLM 9.2.1.4 to 9.3. At the same time we are migrating from Oracle Application Server (OAS) to WebLogic (WLS). In front of the servers is a BIG-IP LTM 1600, v9.4.6 Build 401.0 Final. The WLS environment consists of one node manager server, two managed servers and one file manager server.

There are two clients for the user to log in with: web and Java. The web client works fine. The Java client, when going through the LTM, does not. It does work when connecting directly to either of the managed servers. This gives us some options for our small-in-number but noisy Java client users, but we lose the LTM functions of availability and traffic management - we'd like to have those.

This is what happens: The user connects to the page with the link to launch the .jnlp file (http://agl.plexus.com:7001/JavaClient/start.html) and clicks 'launch'. WLS sends the .jnlp file to the client, which opens Java Web Start. A login widget displays. Enter valid credentials. After 90 seconds this error displays: "Server is not valid or is unavailable."

I've got a ticket (C688392) open with support; they've got snoop-captured packets from the server's point of view of a session that works connecting directly to a managed server, and tcpdump captures of the malfunctioning session. Anyone have experience with Agile PLM, WebLogic and the Java client?

This is the .jnlp file:

Agile 9.3.0.1
Oracle Corporation
Agile 9.3.0.1
Agile 9.3.0.1 Product Lifecycle Management (PLM)
serverURL=t3://agl.plexus.com:7001
jvuecodebase=http://://jVue
jvueserver=http://agl.plexus.com/Agile/VueServlet
installationinfo=/opt/agl/agile93/agsetup.enc
serverType=wls
tunneling.shortcut=true
webserverName=agl.plexus.com
appserverVersion=10.3
UpdateVersions=9.3.0.1
useSessionGenerator=true
Inside Look - PCoIP Proxy for VMware Horizon View

I sit down with F5 Solution Architect Paul Pindell to get an inside look at BIG-IP's native support for VMware's PCoIP protocol. He reviews the architecture and business value, and gives a great demo on how to configure BIG-IP. BIG-IP APM offers full proxy support for PC-over-IP (PCoIP), a leading virtual desktop infrastructure (VDI) protocol. F5 is the first to provide this functionality, which allows organizations to simplify their VMware Horizon View architectures. Combining the PCoIP proxy with the power of the BIG-IP platform delivers hardened security and increased scalability for end-user computing. In addition to PCoIP, F5 supports a number of other VDI solutions, giving customers flexibility in designing and deploying their network infrastructure.

ps

Related:
F5 Friday: Simple, Scalable and Secure PCoIP for VMware Horizon View
Solutions for VMware applications
F5's YouTube Channel
In 5 Minutes or Less Series (24 videos – over 2 hours of In 5 Fun)
Inside Look Series
Life@F5 Series
TCP Payload String Swap for Oracle HA

Hi, I would like to use an iRule on a VIP that fronts 4 Oracle DB RAC servers. Each server helps serve a single DB on SAN-attached storage. However, Oracle requires that each RAC host have a unique SID:

host db01 uses SID "acmesid1"
host db02 uses SID "acmesid2"
host db03 uses SID "acmesid3"
host db04 uses SID "acmesid4"

The application servers which use the database hosted by these 4 RAC servers can only have a single SID configured. I would like the application server to be configured with "acmesid_vip", and when the application server hits the BIG-IP on port 1521 with this SID in tow, the BIG-IP will open the TCP payload and swap the incoming SID "acmesid_vip" for "acmesid1", "acmesid2", "acmesid3" or "acmesid4", according to which DB server the BIG-IP is about to forward the request to.

So, in short: is there a way to run a regular expression over the TCP payload of all incoming packets, s/acmesid_vip/acmesid1/ in the case of traffic going to the first DB server? I have seen the TCP::payload function, and this is what I have so far, cleaned up from my original pseudocode (the member addresses are placeholders for the four RAC hosts), though I am no iRules guru. What do you think?

when CLIENT_ACCEPTED {
    # Hold the client's first payload until we know which member was picked.
    TCP::collect
}
when LB_SELECTED {
    # Map the chosen pool member to its SID.
    switch [LB::server addr] {
        10.0.0.1 { set end_sid "acmesid1" }
        10.0.0.2 { set end_sid "acmesid2" }
        10.0.0.3 { set end_sid "acmesid3" }
        10.0.0.4 { set end_sid "acmesid4" }
    }
}
when CLIENT_DATA {
    # Swap every occurrence of the shared SID for the member-specific one,
    # write the result back into the buffer, then let the data flow on.
    set payload [TCP::payload]
    regsub -all {acmesid_vip} $payload $end_sid payload
    TCP::payload replace 0 [TCP::payload length] $payload
    TCP::release
}
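For what it's worth, the more common BIG-IP idiom for this kind of payload find-and-replace is a stream profile driven by an iRule, since the stream filter handles strings that span packet boundaries. This is a hedged sketch, not a tested configuration: it assumes a blank stream profile is assigned to the virtual server, the member addresses are placeholders, and the choice of SERVER_CONNECTED as the event (so the load balancing decision is known before the first payload flows) should be verified in your environment.

when SERVER_CONNECTED {
    # Rewrite the shared SID to the SID of whichever RAC member was chosen.
    # STREAM::expression takes @find@replace@ pairs.
    switch [LB::server addr] {
        10.0.0.1 { STREAM::expression {@acmesid_vip@acmesid1@} }
        10.0.0.2 { STREAM::expression {@acmesid_vip@acmesid2@} }
        10.0.0.3 { STREAM::expression {@acmesid_vip@acmesid3@} }
        10.0.0.4 { STREAM::expression {@acmesid_vip@acmesid4@} }
    }
    STREAM::enable
}

If the servers echo their member-specific SID back to the client, a second pair per expression (for example @acmesid1@acmesid_vip@) can rewrite the responses as well. Also note that Oracle TNS connect packets embed length fields, so swapping in a SID of a different length than acmesid_vip may corrupt the packet; choosing SIDs of equal length sidesteps that concern.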