Forum Discussion

Aaron1121_669
Nimbostratus
Feb 05, 2009

Implementation Advice

I'm new to the forums here, and I wanted to see if I could get some advice on an implementation. I've worked a lot with Cisco LDs, CSMs, and ASAs, as well as Radware Application Directors, but my F5 experience is somewhat limited.

I've been put on a project to consolidate some old load balancing equipment onto two new redundant F5 devices.

Here is the logical layout of what I've got:

Internet
    |
CheckPoint Cluster
    |
    +------------------+------------------+------------------+
    |                  |                  |                  |
  DMZ1               DMZ2               DMZ3               Internal
  Old F5 520 (1)     Old F5 520 (2)     Radware AppDir     Network
  One IP/Int Config  One IP/Int Config  L2 Mode

I have set up L2 trunking so that everything is accessible from the new F5s, and I've almost got the failover set up.

From what I've seen, we really have three options for deployment. Can you guys let me know your thoughts on the options, and maybe any of the pros and cons that I missed? One of the things I am worried about is that we are trying to use smaller F5 boxes, so we are looking at options that reduce traffic through them.

We currently use the firewall as the default gateway on all the hosts. Our backups take place across the network, and I'm a little concerned about running all that traffic through the F5s.

Option 1-

Trunk out all three DMZ VLANs to the F5 cluster. Set up each DMZ in a one-IP config. This would be similar to the way it is set up now. It also keeps the default gateway on the servers pointed at the firewall, so most of the traffic does not have to traverse the F5.

Any major drawbacks? It may also require an iRule to deal with some of the routing implications.
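For what it's worth, the kind of iRule this might involve could be sketched roughly like this (a hypothetical sketch, not tested against any particular version; using SNAT Automap and an X-Forwarded-For insert is my assumption about how the routing would be handled):

```
# Hypothetical sketch only: SNAT the connection so return traffic
# comes back through the BIG-IP instead of going straight to the
# firewall, and preserve the real client IP in an HTTP header so
# the servers can still log it.
when CLIENT_ACCEPTED {
    snat automap
}
when HTTP_REQUEST {
    HTTP::header insert X-Forwarded-For [IP::client_addr]
}
```

The same effect could likely be had without an iRule at all, by enabling SNAT Automap on each virtual server and inserting the header via an HTTP profile.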


Option 2-

Trunk out all three DMZ VLANs to the F5 cluster. Use nPath routing for each DMZ. Return traffic would bypass the F5 and use the firewall, same as now.
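One thing the nPath approach implies: each real server has to own the virtual address on a loopback and stay silent for ARP on it. On a Linux server that might look roughly like this (the VIP 192.0.2.10 is a made-up example address, and the exact sysctls can vary by distro):

```shell
# Hypothetical nPath/DSR server-side setup on Linux; the VIP below
# is illustrative. Put the virtual server address on the loopback so
# the server accepts traffic destined to the VIP that the F5 forwards
# without destination address translation:
ip addr add 192.0.2.10/32 dev lo

# Keep the server from answering ARP for the VIP, so only the F5 does:
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

This per-server touch is exactly the scaling and troubleshooting pain people usually cite against nPath.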


Option 3-

Trunk out all three DMZ VLANs to the F5 cluster. Insert a new network in each of the DMZs. Set up the F5 cluster with logical internal and external network connections for each DMZ, and set the hosts' default gateway to the F5.

This is the textbook way to do it, but I'm a little concerned about the overall throughput and the amount of architecture changes.

I really appreciate your input and your thoughts. I know that each of these methods would probably work, but each has its own implications. I'd rather know some of them up front, before I get halfway through the project and find a "gotcha".

Thanks in advance for your comments!


3 Replies

  • Hi,

    On Option 1, you would need to SNAT to deal with the routing (whether iRule-based, SNATs configured on each virtual, or just a global SNAT). Whether this is a drawback depends on how critical it is that your servers "see" the original client source IP instead of the SNAT address. If it's all HTTP traffic, it's pretty easy to insert an X-Forwarded-For header and configure the servers to log that. For other traffic it's more difficult to get the original client IP into server logs. Again, you may or may not consider this an issue.

    Option 2 is less than ideal due to the need to configure loopback interfaces on every application server, so it can become quite difficult to scale with large numbers of servers. It can also make it very difficult to troubleshoot issues, since you can only "see" the inbound traffic via tcpdump on the F5.

    Option 3 is, as you say, the "classic" way to do it, but you are correct that having backup traffic, management traffic, etc. (non-LB stuff) traverse the F5 tends to add load with not much benefit.

    I would probably recommend option 1 unless you have real heartburn about losing visibility to the client IP in server logs.

    Denny
  • Two other (random) notes here... you mention 520s, which are older appliances, but you also mention implementing new systems... could you clarify what version we're dealing with?

    Also (on the "gotchas" front), you mention that there's a CheckPoint cluster involved. If you're running them active/active and using multicast MACs, you'll probably need to turn auto-lasthop off, as the BIG-IP will track the last *physical* MAC that the traffic sourced from, as opposed to the floating MAC that the CheckPoints will try to use if you're running active/active.

    -Matt
  • Thanks for the advice! I really appreciate it.

    I was leaning toward option 1 also. Most of the traffic is HTTP, so the X-Forwarded-For header should work fine.

    The 520s that we have are really old; they are running 4.2 software. As soon as we get the new boxes set up, they will be retired.

    Also, thanks for letting me know about turning auto-lasthop off. It's nice to know what problems other people have run into prior to implementation. It saves a lot of time.

    Thanks Again!
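As a closing note, the "configure the servers to log that" step for X-Forwarded-For could look roughly like this on an Apache httpd server (a hedged sketch; the format name `xff_common` and the log path are illustrative, and the exact format string should match whatever you log today):

```
# Hypothetical Apache httpd snippet: with SNAT in place, %h shows the
# F5's SNAT address, so log the X-Forwarded-For header instead.
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff_common
CustomLog "logs/access_log" xff_common
```

Other web servers have equivalent knobs; the point is just that the client IP survives in the logs even though the TCP source address is the SNAT.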