LTM Object Limitations (VIPs, self IPs, etc.)
What we have discovered is that when using individual partitions, TMM starts to panic and have issues once we have deployed ~1,200 pods. In addition, syncing the configs becomes problematic and results in one or both nodes going inoperative. (We suspect that when the configs are written, a file handle is opened for every single file, the data is written, and then the handle is closed. This gives some atomic structure to the config save or config sync, but it leads to significant I/O pressure that pauses traffic and often causes a complete TMM restart at high partition counts.)
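To make our suspicion concrete, here is a minimal sketch of the write pattern we think is happening. This is purely illustrative and is NOT F5's actual implementation; the function name and dummy partition contents are made up. The point is that one open/write/fsync/close cycle per partition file means one forced disk flush per partition, so save time grows with partition count even when each file is tiny:

```python
# Illustration of the suspected per-file write pattern (NOT F5's code):
# each partition's config gets its own open/write/fsync/close cycle, so
# every fsync() forces a synchronous flush and wall-clock time scales
# with the number of partitions.
import os
import tempfile
import time

def save_config_per_file(partitions):
    """Write each partition's config to its own file, fsync'ing each one."""
    tmpdir = tempfile.mkdtemp()
    start = time.monotonic()
    for name, body in partitions.items():
        path = os.path.join(tmpdir, f"{name}.conf")
        with open(path, "w") as fh:
            fh.write(body)
            fh.flush()
            os.fsync(fh.fileno())  # the per-file flush we suspect hurts at scale
    return time.monotonic() - start

# Dummy data standing in for per-partition config bodies.
partitions = {f"partition_{i}": "ltm virtual ..." for i in range(200)}
elapsed = save_config_per_file(partitions)
print(f"200 partition files saved in {elapsed:.3f}s")
```

If this model is right, a single batched write with one fsync at the end would scale far better, which may be part of why a single big partition gets further before breaking.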
When using a single partition, our scalability starts breaking at ~1,400 pods. Keep in mind that both of these break points occur with zero traffic.
I've googled, searched DevCentral (anyone else getting zero results from the search function every time?), opened a case with F5, and done my best to pillage all available resources. I'm hoping someone knows of these limitations, or can point me to documentation that lays out these limits and best practices for large-scale implementations (object-wise, not traffic-wise).
We've heard that vCMP will allocate more memory for holding objects, so our VIPRIONs could scale higher based on the number of guests. Can anyone confirm whether this scales linearly, or are there diminishing returns as we add vCMP guests?
Thank you everyone for any assistance,
Jason