Forum Discussion

Angel_Lopez_116
Mar 18, 2015

iRule development: subtable spreading among TMMs

Hi,

 

I'm trying to understand the best way to design an iRule that needs to handle a lot of table entries, and to do it fast, since we're talking about rate limiting client connections.

 

I've found a very useful example of an iRule that creates several subtables so the entries are spread among the TMMs, but I don't fully understand how it works.

 

In the documentation about the "table" command I read "All of the entries in a given subtable are on the same processor. So if you put all of your entries (or the vast majority of them) into the same subtable, then one CPU will take a disproportionate amount of memory and load."

 

So if I understand it right, each subtable will be pinned to a processor, so by creating several subtables I'd be able to spread the entries among the processors, and the iRule would handle the data in the subtables more efficiently, right?
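To make the idea concrete, here's a minimal sketch of that pattern: pick a subtable deterministically by hashing the client IP, so entries end up spread across several subtables (and therefore across TMMs). The subtable prefix `ratelimit_`, the limit, the window, and the subtable count are made-up values for illustration, not from the original example:

```tcl
when RULE_INIT {
    # Assumption: split rate-limit state across several subtables so no
    # single TMM owns all of the entries.
    set static::num_subtables 8     ;# hypothetical: one per "processing unit"
    set static::max_conns     100   ;# hypothetical connection limit
    set static::window        10    ;# hypothetical window in seconds
}

when CLIENT_ACCEPTED {
    # Hash the client IP so the same client always lands in the same
    # subtable, while different clients scatter across all of them.
    set subtable "ratelimit_[expr {[crc32 [IP::client_addr]] % $static::num_subtables}]"

    # Count this connection and keep the entry alive only for the window.
    set count [table incr -subtable $subtable [IP::client_addr]]
    table timeout -subtable $subtable [IP::client_addr] $static::window

    if { $count > $static::max_conns } {
        reject
    }
}
```

The key point is that the hash spreads the keys over many subtables, and since each subtable lives on one TMM, the memory and CPU load spread with them.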

 

I'm working on a Viprion with 2 B2150 blades. Each blade has an Intel quad-core processor, which gives me a single tmm process that creates 4 threads, one per core. Hyperthreading gives me 8 logical cores, but from the point of view of the TMM the system has 4 cores per blade, right?

 

In summary, the TMM::cmp_count command gives me a value of 8. I guess those 8 are the 8 physical cores I get with those 2 quad-core processors, right?

 

I think I'd have to create at least 8 subtables to take advantage of spreading the data among cores, right? (1 subtable per core) So... what's the real meaning of that ~3+ factor? Does it mean I'm creating 3 subtables per core? Why a value of 3 and not, for example, 2 or 4?

 

I guess that maybe that factor of 3 depends on how much data you have to handle. Maybe "1 * TMM::cmp_count" is enough if my subtables don't grow too much, and if I want smaller subtables I'd use "2 * TMM::cmp_count", "3 * TMM::cmp_count" or even "4 * TMM::cmp_count", right?
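If the factor is meant to scale with the number of TMMs, the subtable count can be derived at load time instead of being hard-coded. A sketch, where the factor N is a tuning knob (the value 3 comes from the example iRule being discussed, not from any documented constant):

```tcl
when RULE_INIT {
    # N is a tunable assumption: more subtables per TMM means smaller
    # individual subtables, at the cost of more subtable names to manage.
    set static::n_factor 3
    set static::num_subtables [expr {$static::n_factor * [TMM::cmp_count]}]
}
```

With more subtables than TMMs, an uneven hash is less likely to leave any single TMM carrying a disproportionately large subtable.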

 

Can someone explain the meaning of that factor of 3? Thanks! :)

 

4 Replies

  • Hi Frank, as I understand it, when the platform supports HT the system splits data plane tasks and control plane tasks among the hyperthreads inside each core, so if your platform has a quad core like mine, you'll have 4 hyperthreads running data plane tasks and 4 hyperthreads running control plane tasks. If the TMM process reaches a utilization threshold of 80%, the control plane tasks are constrained to a maximum utilization of 20%. So, in summary, the hyperthreads for data plane tasks have higher priority and, under high load, are guaranteed 80% of the processing resources of the core.

     

  • With regard to the CPU allocation (one core to the data plane and one to the control plane): didn't I read that when the data plane core gets over 80% utilisation, the other core is also going to process data plane traffic? Or did I misunderstand sol15003?

     

  • Hi nitass. My Viprion system has two 2150 blades installed, so I have 2 CPUs with 4 physical cores each, 8 physical cores in total. That value of 8 is what I'm getting from the TMM::cmp_count command, so I guess those 8 would be my available "computing units".

     

    As the CPUs in the B2150 are HT+, I'd get 2 threads per core, but as you said, from 11.5.0 one thread is dedicated to data plane tasks and the other to control plane tasks. I guess iRule execution is a data plane task, so I'd stick with that value of 8 for my "processing units".

     

    If I had to design my iRule from scratch, having read that each subtable is handled by a single core, I'd choose to use maybe TMM::cmp_count subtables, but it seems the recommendation is N * TMM::cmp_count with N=3 in the example iRule... I don't get why 3 and not some other value. I guess it's just to get several smaller subtables per core, but... is 3 a magic number? Any documentation about it?

     

    Thanks!

     

  • i understand a subtable is pinned to one tmm. for a 2150 blade, there are 8 logical cores, but starting from 11.5.0, tmm data plane and non-tmm control plane tasks are split. so, i understand it is going to be 4.

     

    sol14358: Overview of Clustered Multiprocessing (11.3.0 and later)

     

    https://support.f5.com/kb/en-us/solutions/public/14000/300/sol14358.html

     

    sol15003: TMM data plane tasks and non-TMM control plane tasks use separate logical cores on systems with HT Technology CPUs

     

    https://support.f5.com/kb/en-us/solutions/public/15000/000/sol15003.html

     

    about ~3+ x tmm count, i do not know where Aaron got the number from (probably he did some test). anyway, i think having multiple subtables is better because there is more chance the subtables get distributed across all tmms.

     

    Split records across many subtables for better distribution across TMMs by Aaron

     

    https://devcentral.f5.com/wiki/iRules.Split-records-across-many-subtables-for-better-distribution-across-TMMs.ashx