Persisting Across Virtual Servers

We've seen a lot of interest lately in persisting connections from the same client to the same backend server when they arrive on different virtual servers. In many cases an iRule is really not necessary, so I thought I'd point out some of the other ways you can bend LTM persistence to your will.

Many applications require multiple services to be processed on the same server for each client, but the most common scenario seems to be HTTP and HTTPS running on a set of webservers, where clients must stick to the same server for both services to preserve session state when transitioning between them. I'll base my examples on HTTP and HTTPS for simplicity's sake. If you'd like some assistance adapting them to your situation, please post a question in the Advanced Design & Config General Discussion forum and we'll see if we can help you out.

Sharing source address persistence without backend re-encryption

The simplest scenario is for sharing persistence between HTTP & HTTPS virtual servers with no backend re-encryption.

If the virtual servers are listening on the standard ports (80 & 443), and the real servers are listening on the standard port for HTTP (80), you can simply configure both virtual servers to use the same pool and a persistence profile of type "source_addr" with the "persist across services" option enabled. (If the virtual server address is different for the 2 virtual servers, enable "persist across virtual servers" instead.) Port translation (enabled by default) must remain enabled on at least the HTTPS virtual.

Here's what the supporting configuration would contain:

pool SERVERS_80
 member 1.2.3.4:80
 member 1.2.3.5:80

virtual HTTP
 dest x.x.x.x:80
 pool SERVERS_80
 persist source_addr any virtual

virtual HTTPS
 dest x.x.x.x:443
 pool SERVERS_80
 port translation enabled
 clientssl profile
 persist source_addr any virtual


When the client connects to either virtual server, the same persistence record will be used. The persistence record will reference a pool member on port 80. With port translation enabled on at least the HTTPS virtual, when a request is forwarded to the pool member, the destination port will always be translated to 80.
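
For reference, the persistence profile itself is just a custom profile based on the default source_addr profile with the "Match Across Services" or "Match Across Virtual Servers" option checked. On versions that support tmsh, a rough sketch of such a profile might look like this (the profile name source_addr_across is just an example):

ltm persistence source-addr source_addr_across {
    defaults-from source_addr
    match-across-services enabled
    match-across-virtuals enabled
}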

Sharing source address persistence with backend re-encryption

To support backend HTTPS re-encryption, only a slight change to the above configuration is required.

With virtual servers listening on the standard ports (80 & 443), and the real servers also listening on the same ports, you'd define the pool members with wildcard port (0) instead of port 80, and port translation settings don't matter. The rest is the same: Configure both virtual servers to use the same pool and a persistence profile of type "source_addr" with the "persist across services" option enabled.

Here's what the supporting configuration would contain:

pool SERVERS_0
 member 1.2.3.4:0
 member 1.2.3.5:0

virtual HTTP
 dest x.x.x.x:80
 pool SERVERS_0
 persist source_addr any virtual

virtual HTTPS
 dest x.x.x.x:443
 pool SERVERS_0
 clientssl profile
 serverssl profile
 persist source_addr any virtual


When the client connects to either virtual server, the same persistence record will be used. The persistence record will reference a wildcard (port 0) pool member. When a request is forwarded to a wildcard pool member, port translation is disabled automatically, and the same destination port requested by the client is used when connecting with the server.

This solution will also support pass-through HTTPS load balancing (HTTPS load balancing with no decryption by LTM): Just don't apply the clientssl and serverssl profiles to the virtual server.

Sharing cookie persistence

Cookie persistence can be shared using the same "one pool / one persistence profile / port translation" idea. However, cookie persistence doesn't make an entry in the session table, so the "persist across virtual servers" setting we used for source address persistence doesn't apply.

Instead, we'll take advantage of the fact that a cookie insert persistence profile by default sets a cookie with a name derived from the name of the selected pool. When the same persistence profile is applied on multiple virtual servers sending traffic to the same pool, the same cookie will be set and read regardless of the virtual server to which the connection is associated.
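
In other words, both virtual servers will set and honor a cookie named BIGipServer followed by the pool name. As a rough illustration (the exact encoded value depends on the selected member's address and port), a client persisted to member 1.2.3.4:80 in the SERVERS_80 pool would receive a cookie something like this:

Set-Cookie: BIGipServerSERVERS_80=67305985.20480.0000; path=/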

If you don't need to re-encrypt traffic to the backend, here's the configuration to share cookie persistence:

pool SERVERS_80
 member 1.2.3.4:80
 member 1.2.3.5:80

virtual HTTP
 dest x.x.x.x:80
 pool SERVERS_80
 persist cookie insert

virtual HTTPS
 dest x.x.x.x:443
 pool SERVERS_80
 port translation enabled
 clientssl profile
 persist cookie insert


And here's how you can configure shared cookie persistence to support re-encrypted traffic to the backend:

pool SERVERS_0
 member 1.2.3.4:0
 member 1.2.3.5:0

virtual HTTP
 dest x.x.x.x:80
 pool SERVERS_0
 persist cookie insert

virtual HTTPS
 dest x.x.x.x:443
 pool SERVERS_0
 persist cookie insert
 clientssl profile
 serverssl profile


Pass-through HTTPS load balancing cannot share cookie persistence with HTTP traffic: for cookie persistence to work with HTTPS, you will need to decrypt the HTTPS traffic at LTM so it can see the persistence cookie.

There's always room for iRules...

Even though I didn't intend for this to be an iRules article when I started out, I just fielded a post in the iRules 9.x forum about persisting across virtual servers with port translation for multiple services -- a requirement that can only be addressed with an iRule -- so I thought I'd add that here.

In this case, a virtual server listening on port 80 must forward requests to the servers on port 7700, and a virtual server listening on port 443 must forward requests to the servers on port 7600. Requests from the same client to both virtual servers must follow the same persistence record.

For that we used the session command with the "any virtual" option, and some logic to handle the port translation appropriately:

when CLIENT_ACCEPTED {
  # set up default pool and fail counter
  set def_pool [LB::server pool]
  set lb_fails 0
  # set LB port based on requested port
  switch [TCP::local_port] {
    443 {set port 7600}
    80 {set port 7700}
    default {set port ""}
  }
  # check for existing persistence record 
  # if it exists, directly select node by address:port
  set server [session lookup uie [list [IP::client_addr] any virtual]]
  if {($server ne "") && ($port ne "")} {
    log local0. "persisting [IP::client_addr]:[TCP::client_port] to $server:$port"
    node $server $port
  }
}
when LB_SELECTED {
  # add/refresh session table entry (5 min timeout)
  session add uie [list [IP::client_addr] any virtual] [LB::server addr] 300
  log local0. "adding persistence record: [IP::client_addr] to [LB::server addr]"
}
when LB_FAILED {
  # if connection fails, log & reselect a new server in same pool, up to the number of available servers
  if { $lb_fails < [active_members $def_pool] } {
     incr lb_fails
     persist none
     LB::mode rr
     LB::reselect
     log local0. "Selected server [LB::server addr]:$port did not respond. Re-selecting node"
  }
}


Configure either a pool containing all the servers with a wildcard port, or a pool for each service containing members with the appropriate port. Configure a virtual server for each service and apply the appropriate default pool and the iRule above to each virtual server. Port translation settings on the virtual server are irrelevant, as the iRule explicitly selects the node by IP and port. No persistence profile is required, as the session table entries are all managed by the iRule.
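
For example, with a single wildcard pool, the supporting configuration might look something like the sketch below (the rule name persist_across_vs is just a placeholder for the iRule above; add clientssl/serverssl profiles to the HTTPS virtual as your application requires):

pool SERVERS_0
 member 1.2.3.4:0
 member 1.2.3.5:0

virtual HTTP
 dest x.x.x.x:80
 pool SERVERS_0
 rule persist_across_vs

virtual HTTPS
 dest x.x.x.x:443
 pool SERVERS_0
 rule persist_across_vs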

Traffic for ports not defined in the switch command will fall through to the default pool using the port and address translation settings as configured on the virtual server.

Got more?

These ideas can be extended and adjusted to meet a much wider variety of persistence requirements than those demonstrated here. Field-validated persistence examples for specific applications make great codeshare / wiki contributions, and you'll be saving your fellow community members from duplicating your effort. If you'd like an assist formatting or finding the right place to post your example, drop me a line (deb-at-f5-dot-com). I'm happy to help.

Persistently yours,
/deb

Published Sep 26, 2007
Version 1.0

3 Comments

  • uni:
    In your example for sharing cookie persistence between virtual servers, you use wildcard services. However, your pool has no service monitor. Having a pool with no service monitor is next to useless. How do you get around that?
  • What about the monitors? I cannot add a standard monitor because the pool is configured with service port 0.

    Thanks and Best regards