Forum Discussion

sunnyman67_1367
Dec 16, 2013

Trunk Configuration on BIG-IP LTMs

Hi guys, I've run into a simple problem with the trunk configuration between our two LTMs and our Cisco Nexus switches. I've configured a trunk interface on both LTMs, put the two ports 1.1 and 1.2 into this trunk, and enabled LACP in active mode on both of them. On the Nexus switches I've also configured the corresponding port-channel and trunk settings. Between our two Nexus switches I've configured vPC, and everything there is OK. But now only port 1.1 of each LTM is working fine; port 1.2 is not working, and on the Nexus switches the related ports are in the suspended state! I changed the status of the related port-channel and its members to "shut" and back to "no shut", but that didn't solve the problem... Now, what should I do in your opinion? Thanks for your help...
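
Roughly, what I have configured looks like this (the trunk name nexus_trunk, the channel-group number 17, and the interface numbers are placeholders for illustration, not my exact values):

On each LTM (tmsh):

    create net trunk nexus_trunk interfaces add { 1.1 1.2 } lacp enabled lacp-mode active

On each Nexus (NX-OS):

    interface Ethernet1/17
      switchport mode trunk
      channel-group 17 mode active

    interface port-channel17
      switchport mode trunk
      vpc 17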

 

11 Replies

  • Hi,

    Did you try to put your ports on the LTM in passive mode?

    Sometimes Cisco gear doesn't negotiate well when the other members (from other vendors) are in active mode.
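
    If you want to try it, something like this on the LTM should flip it (nexus_trunk is a placeholder for your actual trunk name):

        modify net trunk nexus_trunk lacp-mode passive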

     

  • Hi Thomas, no, I didn't try it, but port 1.1 of each LTM, which is connected to ports 17 and 18 of Nexus N5K-1, is working fine. Our port plan is as follows:

    LTM-1 port 1.1 --> N5K-1 port 17 , LTM-1 port 1.2 --> N5K-2 port 17

    LTM-2 port 1.1 --> N5K-1 port 18 , LTM-2 port 1.2 --> N5K-2 port 18

    Now, our trunk is up, but only port 1.1 of each member is up and working fine; with the same configuration, port 1.2 of each of them doesn't work!!! So, in this scenario, is there any need to test passive mode on the LTMs? Because I think testing passive mode only makes sense when the trunk can't come up at all! Now, do you have any other suggestions, Thomas?

     

  • It was not with a BIG-IP, but I had the same issue with one port up and the other in an error state.

    Just to continue investigating together, I'd like to be sure that switching LACP to passive mode doesn't change anything.

    Can you test it, please?

     

  • OK Thomas, I'll try this solution during our first downtime, but are there any other solutions for redundancy, guys???

     

  • Dear all, reading the F5 best-practice guidance, I noticed that it's better to use passive LACP mode on the LTMs and active LACP mode on the switch side. We should also use the "Short" LACP timeout (1 sec), not the "Long" LACP timeout (30 sec), because "Long" is the default mode on all F5 LTMs. So, I'm going to test this during our first downtime; a sketch of the planned changes is below. After that, I'll tell you the result...
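
    On the LTM side, the change would look roughly like this (nexus_trunk is a placeholder for our trunk name):

        modify net trunk nexus_trunk lacp-mode passive lacp-timeout short

    while leaving the Nexus channel-group members in active mode (interface and channel-group numbers are placeholders):

        interface Ethernet1/17
          channel-group 17 mode active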

     

  • Dear Techgeeeg, unfortunately I haven't tested it yet; I'll probably test it this week and tell you all the result...

     

  • Dear all, as I promised before, I've tested the trunk configuration between the Nexus switches and the F5 LTMs. The new thing I realized is that the F5 LTM keeps only one link of each port-channel active and puts the other ports in suspended mode! Also, it's recommended to set the LACP negotiation mode to passive on the F5 LTM side and to active on the switch side. I checked the HA behavior between them as well: when I unplugged the cable connected to the active port, the other port became active. So everything is OK with the F5, no issue. The important thing here is that when you use a port-channel between an F5 LTM and a switch, the F5 doesn't increase the bandwidth of the port-channel; the total bandwidth is the same as before. But that doesn't matter, because in the end throughput is what's important. You can check the member states with the commands sketched below. I hope these new tips help you all...
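
    To see which members are actually active, on the LTM (trunk name is a placeholder):

        show net trunk nexus_trunk

    and on the Nexus side, where suspended members show up flagged:

        show port-channel summary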

     

  • Sunny, when you aggregate the 2 links between an F5 and an end device participating in LACP, for example links that are each 1Gb, the TOTAL B/W is 2Gb.

    "Network throughput is the rate of successful message delivery over a communication channel", per Wikipedia.

     

  • Hi champs, I have the same setup, but I'm unable to ping the VLAN IPs on both ends. Please suggest.

     

    • mnabors_63128
      If you put the "peer-gateway" command in the vPC domain configuration of your Nexus gear, this will fix your problem. The issue comes in as part of the vPC loop-avoidance mechanism.
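
      A minimal sketch of where that command goes, on both Nexus peers (the vPC domain number 1 is a placeholder):

          vpc domain 1
            peer-gateway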