Forum Discussion

pete_71470
Cirrostratus
Feb 18, 2013

mirrored forwarding virtuals: really a performance problem?

 

Salutations,

I am beginning to work on a plan to have our single-bladed Viprion 2400 pair (11.2.1HF3) act as the default gateway for new servers, but have some concerns about maintenance failovers interfering with traffic.

To perform maintenance, to upgrade software, or to periodically prove empirically that the Standby will work properly in an Active role, a force-standby is done on the Active unit. With vMAC in place for the traffic group, and with connection mirroring for all non-HTTP(S) virtuals, the failover goes unnoticed.
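
For the maintenance step itself we plan nothing fancier than a force-to-standby from the shell; roughly along these lines (a sketch, with the default traffic-group name assumed):

    # check which unit is Active before and after the change
    tmsh show sys failover

    # force the Active unit to Standby (all traffic groups)
    tmsh run sys failover standby

    # or move just one traffic group, e.g. the default one
    tmsh run sys failover standby traffic-group traffic-group-1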

 

 

How would we use the F5 as default gateway for nodes and have routine failovers go unnoticed? Our HA network carries only HA traffic and is local to the pair, connected via a short 10G fiber host-to-host run. Would mirrored forwarding virtual servers really cause a performance problem for 2400's?
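
For reference, the sort of per-VLAN wildcard forwarder I have in mind would look roughly like this; the virtual server name, VLAN name, and vMAC value below are only placeholders:

    # wildcard IP-forwarding virtual for one server VLAN, with connection
    # mirroring enabled (names are placeholders)
    tmsh create ltm virtual vs_fwd_vlan_servers \
        destination 0.0.0.0:0 mask any ip-forward \
        profiles add { fastL4 } \
        vlans-enabled vlans add { vlan_servers } \
        mirror enabled

    # vMAC (MAC masquerade) on the traffic group that owns the floating self IPs
    # (example MAC address only)
    tmsh modify cm traffic-group traffic-group-1 mac 02:01:23:45:67:89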

 

 

Thanks in advance!

 

9 Replies

  • The answer of course is 'it depends'. Do you have an idea of how many connections would be mirrored?
  • There will be about 40,000 additional concurrent connections and initially under 2 gigabits per second sustained traffic -- the traffic from a strained Firewall-1.
  • I don't think that's a particularly high load or that it will be a problem. However, I would certainly recommend you think carefully about the protocol profiles you use with your wildcard Virtual Servers, and about settings like Idle Timeouts, to ensure memory and CPU resources are not used unnecessarily. I'd also think carefully about using any iRules or Packet Filters.

     

     

    Ideally you would make this change in a staged fashion rather than all at once.
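
    As a rough illustration of what I mean on the profile side, something along
    these lines -- the profile name, timeout value, and virtual server name are
    only placeholders, so tune them to your traffic:

        # custom fastL4 profile with a deliberate idle timeout for the forwarders
        tmsh create ltm profile fastl4 fastL4_fwd \
            defaults-from fastL4 \
            idle-timeout 300 \
            loose-initialization enabled \
            loose-close enabled

        # use it on the wildcard forwarding virtual in place of the default fastL4
        tmsh modify ltm virtual vs_fwd_vlan_servers \
            profiles replace-all-with { fastL4_fwd }
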
  • Thank you, Steve. The plan is to have VLANs migrated one at a time after initial testing, giving us the opportunity to gauge impact incrementally.
  • Steve - I understand packet filtering is applied very early in the life of traffic passing through the F5, and that excessive use, especially if 'filter established connections' is selected, can cause performance woes. After we upgrade to 11.3.1, we'd like to use bandwidth controllers to prevent runaway nodes in our 'default gateway to F5' networks from causing a disturbance in the force. Do these controllers affect only the virtual servers to which they are applied, or is there a lower-level component to consider (a la packet filters) where we might be shooting ourselves in the foot simply by enabling them?
  • Bandwidth Controllers are a pretty new feature and unfortunately I've no experience of them. That being said, the very fact that they are so new would put me off using them for a good while; personally, I'd take a cautious approach and use Rate Shaping until then. I'm not aware of any potential performance or stability issues with RS, but perhaps others who've used it more than I have might have a different opinion. Anyone?
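
    As a rough sketch of the Rate Shaping alternative -- the class name, rates,
    and virtual server name are purely illustrative, so size them to whatever a
    'runaway' node should be capped at:

        # guard-rail class: a base rate plus a ceiling for bursts
        tmsh create net rate-shaping class rs_vlan_servers \
            rate 200mbps ceiling 500mbps

        # apply it to the wildcard forwarder for that VLAN
        tmsh modify ltm virtual vs_fwd_vlan_servers \
            rate-class rs_vlan_servers
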
  • Thank you for the advice concerning bandwidth controllers. We'll take the cautious approach and stick with rate shaping as needed.
  • You're welcome. I should add, for the sake of balance, that F5 code and new features are generally far more reliable than those from other vendors I could think of, particularly one based in San Fran!