Forum Discussion

Jason_40733
Nov 20, 2013

GTM 10 to 11 upgrade experiences

Anyone have any experiences they can share on GTM 10 to GTM 11 upgrades?


Specifically, has it been a smooth upgrade or did you have to rebuild topology rules, wideip entries, pools, etc.?


We are in the planning stages of upgrading our single-module BIG-IP GTM from 10.2.1 to 11.4.0 with the latest hotfix.


Google searches turn up very little information, and what I did find was split between the process being a very simple code install plus cpcfg, and a code install followed by a complete rebuild of all configs.
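For reference, cpcfg copies the configuration from one boot location to another after the new image is installed. A minimal sketch of my understanding of the invocation (the --source flag and the slot names here are assumptions from memory; verify against SOL14096 before relying on it):

    # assumption: copy the config from the old 10.2.1 boot location to the
    # newly installed 11.x one; check your actual slot names with 'switchboot -l'
    cpcfg --source=HD1.1 HD1.2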


The F5 documentation is fairly slim on this topic.


FEATURE OR FUNCTIONALITY: Assigning a BIG-IP system to probe a server to gather health and performance data
DESCRIPTION: Assigning a single BIG-IP system to probe a server to gather health and performance data, in version 10.x, is replaced by a Prober pool in version 11.x.

Any information is greatly appreciated.


Thanks,


Jason


18 Replies

  • Actually yes. I sent F5 some qkviews, both before and after the upgrade. They were able to determine that the issue was caused by a single HTTP monitor that had been created in a user-created partition. For some reason, this triggered a bug. The fix was as follows...


    After upgrading, I had to move this particular monitor from the user-created partition to the Common partition. This was accomplished by editing wideip.conf with vi. I simply opened the file in vi, scrolled down to the monitor in question, and changed its partition name.


    I then ran this command: gtmparse -l -k


    At that point, the wideip.conf file loaded up with the modification I had just made in vi, and then all other objects appeared like magic.
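    For anyone hitting the same thing, a rough sketch of the kind of edit involved, assuming v10-style wideip.conf syntax in which the monitor stanza carries a partition attribute (the monitor and partition names here are hypothetical; yours will differ):

        monitor "my_http_monitor" {
           partition "my_partition"   # change this to "Common"
           defaults from "http"
        }

        # then reload the GTM config from wideip.conf
        gtmparse -l -k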


    The F5 engineer I worked with stated this is a known bug where occasionally certain objects created outside of the Common partition can cause a problem during upgrade.


    Your best bet is probably to open a TAC case with F5 and send them qkviews from before and after the failed upgrade, same as I did, as you may not be able to determine which particular object or objects are causing the hangup without first having F5 review your qkview.


  • We have separate LTMs and GTMs. When we upgraded the GTMs, it went smoothly. When we upgraded the LTMs, the GTMs' pools lost half their data! The V11 conversion changed LTM Virtual Server names by adding "/Common/" to the front of the names. There were no longer any LTM Virtual Servers on the GTM with the old names, so the old names were removed from all GTM WideIP Pools.


    Fortunately, we had kept copies of all the old configuration files, so we could reconstitute the state of the pools (and VS-dependent settings, such as dependencies) using tmsh statements. But it was annoying. And F5 tells me there is no workaround.
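    A sketch of the kind of tmsh statement involved, since GTM pool members are referenced as server:virtual-server, each member has to be re-added under its new /Common/-prefixed virtual server name (the pool, server, and VS names here are hypothetical):

        # re-add a member that disappeared after the LTM upgrade renamed its virtual server
        tmsh modify gtm pool my_wideip_pool members add { ltm_pair_1:/Common/my_app_vs }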


    It did occur to me later that I could have set "Virtual Server Discovery" to "Enable (No Delete)" before upgrading the LTMs, and then moved the changes over. But that probably would not have worked, as F5 says there is no workaround.


  • I have to perform a few GTM upgrades to v11, and the GTMs are all in one sync group. I wanted to check the correct procedure for removing a GTM from the sync group, as I came across SOL14044 (https://support.f5.com/kb/en-us/solutions/public/14000/000/sol14044.html). According to this, "If you attempt to remove a member from the BIG-IP GTM synchronization group by changing the name of the BIG-IP GTM synchronization group for that member, the new name will be synchronized to the remaining members instead." Thoughts, please?
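    For what it's worth, my reading of SOL14044 is that you disable synchronization on the unit being removed rather than renaming its group, so nothing propagates to the remaining members. A sketch of the tmsh side (verify against the article before using it):

        # on the GTM being pulled out of the sync group
        tmsh modify gtm global-settings general synchronization no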


  • I would like to request your assistance in migrating all environments from F5 10.2.4 to Viprion 11.5.1. The Viprion is installed (TMSH-VERSION: 11.5.1), and one site has already been migrated. The source LTM is Product: BIG-IP, Version: 10.2.4, with multiple partitions for development, QA, and production. As a pre-cutover activity, I want to create the 200 virtual servers, pools, and profiles on the Viprion and leave them disabled until cutover. Could you please advise on migrating 200 VIPs that share the same VLAN? Should I create them via the Viprion console, or is there a better way to handle the VIP, pool, and profile creation? Is there a command to create virtual servers, profiles, and pools via the CLI? Assuming the existing VIP and pool on the F5 are as follows, let me know the tmsh commands to create the VIPs, pools, and profiles on the Viprion (see the sketch after the pool definition below).


    VIP (HTTP & HTTPS)


    virtual delltest01.dell.com-VS-HTTP {
       snat automap
       pool pool_delltest01.dell.com_http
       destination 10.21.123.80:http
       ip protocol tcp
       rules {
          irule_pool_redirect
          irule_https
       }
       profiles {
          profile_delltest01.dell.com_http {}
          tcp {}
       }
    }
    virtual delltest01.dell.com-VS-HTTPS {
       snat automap
       pool pool_delltest01.dell.com_http
       destination 10.21.123.80:https
       ip protocol tcp
       rules {
          irule_pool_redirect
          irule_https
       }
       profiles {
          dell-COM-Wildcard {
             clientside
          }
          profile_dell_dev_https {}
          tcp {}
       }
    }


    Profile desc


    profile http profile_dell_dev_https {
       defaults from http
       header insert "Original-Protocol:HTTPS"
       redirect rewrite all
       insert xforwarded for enable
    }


    Pool desc


    pool pool_delltest01.dell.com_http {
       monitor all tcp
       members 10.21.123.28:29086 {}
    }
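    Not an authoritative answer, but as a starting point, the v10 config above maps to roughly the following tmsh commands on 11.5.1. This is a sketch from memory, so validate it on a test box first; it assumes the iRules and the dell-COM-Wildcard client-ssl profile already exist on the Viprion, and the 'disabled' keyword keeps the virtuals down until cutover:

        # pool
        tmsh create ltm pool pool_delltest01.dell.com_http members add { 10.21.123.28:29086 } monitor tcp

        # HTTP profile
        tmsh create ltm profile http profile_dell_dev_https defaults-from http header-insert "Original-Protocol:HTTPS" redirect-rewrite all insert-xforwarded-for enabled

        # HTTP virtual server, created disabled for the pre-cutover build
        tmsh create ltm virtual delltest01.dell.com-VS-HTTP destination 10.21.123.80:80 ip-protocol tcp source-address-translation { type automap } pool pool_delltest01.dell.com_http profiles add { tcp { } profile_delltest01.dell.com_http { } } rules { irule_pool_redirect irule_https } disabled

        # HTTPS virtual server, with the client-ssl profile on the client side
        tmsh create ltm virtual delltest01.dell.com-VS-HTTPS destination 10.21.123.80:443 ip-protocol tcp source-address-translation { type automap } pool pool_delltest01.dell.com_http profiles add { tcp { } profile_dell_dev_https { } dell-COM-Wildcard { context clientside } } rules { irule_pool_redirect irule_https } disabled

    For 200 VIPs, generating these commands with a script and feeding them to tmsh, or building a merge file and loading it with 'tmsh load sys config merge file <file>', is likely a better way than clicking through the GUI.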