Forum Discussion

JBengtsson_1773
Jan 18, 2016

APM global subtable not working

Hi,

I am adding a value to an APM subtable in an APM event, like this:

table set -subtable [IP::client_addr] publikt_pbr 1 72000 72000

Directly after the line above, I do a lookup to verify that the value was set properly.

log local0. "LOOKUP: publikt_pbr( [table lookup -notouch -subtable [IP::client_addr] publikt_pbr] )"

Output: info tmm[18386]: Rule /Common/apm_event : APM - LOOKUP: publikt_pbr( 1 )

This works fine!

I then need to verify this value in a separate iRule using this line of code:

if { [table lookup -notouch -subtable [IP::client_addr] publikt_pbr] eq 1 } {....

This does not work: the value from the exact same subtable is not available to me. Looking up the value from the separate iRule shows this:

log local0. "LOOKUP: publikt_pbr( [table lookup -notouch -subtable [IP::client_addr] publikt_pbr] )"

Output: info tmm[18386]: Rule /Common/other_irule : LOOKUP: publikt_pbr( )

The same code, on the same software version and hotfix, works fine in the lab. When I move it to production (a BIG-IP 4000), it no longer works.

Does anyone know why this might happen? It makes no sense to me. I thought APM subtables were supposed to be global, but in this case it seems I cannot read the value from other iRules.

Thanks in advance!

3 Replies

  • Hi JBengtsson,

    I'm aware of two circumstances that may break writing/reading shared [table] data.

    1.) [table] data isn't shared across different traffic-groups. So if you write [table] data on virtual server A in traffic-group A, it can't be accessed on virtual server B in traffic-group B. For further information see...

    https://devcentral.f5.com/articles/big-ip-114-behavior-change-global-data-now-partitioned-by-traffic-group

    2.) The [IP::client_addr] command may resolve differently across virtual servers. Using different route domains may cause your code to store data for TABLE_KEY=IP%1 (virtual server A in route domain %1) but query data for TABLE_KEY=IP%2 (virtual server B in route domain %2). This is easily resolved by stripping the route domain suffix before passing the address to the [table] command (e.g. using [getfield [IP::client_addr] "%" 1]), as in the sketch below.
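
    A minimal sketch of that normalization, reusing the subtable and key names from your post (the HTTP_REQUEST event here is only an example; use whichever event your rules already run in):

    when HTTP_REQUEST {
        # Strip an optional route domain suffix, e.g. "10.0.0.1%2" -> "10.0.0.1",
        # so every virtual server derives the same subtable key.
        set client_key [getfield [IP::client_addr] "%" 1]

        # Write and read using the normalized key.
        table set -subtable $client_key publikt_pbr 1 72000 72000
        log local0. "LOOKUP: publikt_pbr( [table lookup -notouch -subtable $client_key publikt_pbr] )"
    }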

    Cheers, Kai

  • So I reviewed the configuration, and we do indeed use route domains. However, both VSs read [IP::client_addr] with the route domain suffix %2. I tried stripping the route domain suffix as in your example, just to rule it out, but it made no difference.

    The VSs do not explicitly reside in different traffic groups, although one of them is a bit different from the other in that it resides in traffic-group-2.

    The first VS has a destination of 0.0.0.0%2, together with an iRule that checks for this table value. If the value is set, the user can access the Internet; if not, the user is redirected to the APM portal. The APM portal is where I set the table value. The gate looks roughly like the sketch below.
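
    Something like this (simplified sketch; it assumes the checking VS has an HTTP profile so HTTP::redirect is available, and the portal hostname is just a placeholder):

    when HTTP_REQUEST {
        # If the APM portal rule has set the flag for this client IP,
        # let the traffic pass; otherwise send the user to the portal first.
        if { [table lookup -notouch -subtable [IP::client_addr] publikt_pbr] eq 1 } {
            return
        }
        HTTP::redirect "https://apm-portal.example.com/"
    }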

    I'm not sure whether the VS ("Performance (Layer 4)") that is doing the IP forwarding is in traffic-group-2 or not. It does not use a specific destination IP. Maybe it defaults to the local traffic group?

    Might that be the problem?

  • Hi JBengtsson,

    The link provided above mentions the "tmm.sessiondb.match_ha_unit" db variable. You may try the following command to change the behavior:

    tmsh modify sys db tmm.sessiondb.match_ha_unit value false

    If this solves your problem, then you know the issue is caused by the different traffic-groups.
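
    To check the current value before and after the change, you can list the variable:

    tmsh list sys db tmm.sessiondb.match_ha_unit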

    Note: I can't judge the overall impact of setting "tmm.sessiondb.match_ha_unit" to false. But I assume it could cause trouble if the [table] data requires read/write access on different device-group members: that would call for some sort of multi-master replication, which would open the door to ugly two-way edit problems due to replication latency. So you may want to contact F5 Support to find out exactly where the limitations of this workaround lie...

    Cheers, Kai