Please refer to K7751.
From the document:
Load balancing behavior on CMP-enabled virtual servers
Connections to a CMP-enabled virtual server are distributed among the available TMM processes. The load balancing algorithm, specified in the pool associated with the CMP-enabled virtual server, is applied independently in each TMM. Because each TMM handles load balancing independently of the other TMMs, the distribution across the pool members may appear incorrect when compared with a non-CMP-enabled virtual server using the same load balancing algorithm.
Consider the following example configuration:
Virtual Server: 172.16.10.10:80
Pool with 4 members:
10.0.0.1:80
10.0.0.2:80
10.0.0.3:80
10.0.0.4:80
Pool Load Balancing Method: Round Robin
Scenario 1: Virtual Server without CMP enabled
Four connections are made to the virtual server. The BIG-IP system load balances the four individual connections to the four pool members based on the Round Robin load balancing algorithm:
--Connection 1--> | | --Connection 1--> 10.0.0.1:80
--Connection 2--> |-> BIG-IP Virtual Server ->| --Connection 2--> 10.0.0.2:80
--Connection 3--> | | --Connection 3--> 10.0.0.3:80
--Connection 4--> | | --Connection 4--> 10.0.0.4:80
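The non-CMP behavior above can be sketched as a single shared round-robin selector, which every connection passes through in turn. This is an illustrative sketch, not BIG-IP code; the pool member addresses are taken from the example configuration.

```python
from itertools import cycle

# Pool members from the example configuration.
POOL = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80", "10.0.0.4:80"]

# Without CMP, one round-robin picker serves all connections,
# so four connections reach four different pool members.
picker = cycle(POOL)

connections = [next(picker) for _ in range(4)]
print(connections)
```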
Scenario 2: Virtual Server with CMP enabled on a BIG-IP 8800
Four connections are made to the virtual server. Unlike the first scenario, where CMP was disabled, the BIG-IP system distributes the connections across multiple TMM processes. The BIG-IP 8800 with CMP enabled can use four TMM processes. Because each TMM handles load balancing independently of the other TMM processes, it is possible for all four connections to be directed to the same pool member.
--Connection 1--> | | --Connection 1--> TMM0 --> 10.0.0.1:80
--Connection 2--> |-> BIG-IP Virtual Server ->| --Connection 2--> TMM1 --> 10.0.0.1:80
--Connection 3--> | | --Connection 3--> TMM2 --> 10.0.0.1:80
--Connection 4--> | | --Connection 4--> TMM3 --> 10.0.0.1:80
This behavior is expected: CMP is designed to speed up connection handling by distributing connections across multiple TMM processes. Although this behavior may initially appear to favor one or several servers, over time the load is distributed evenly across all pool members.
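The CMP scenario can be modeled by giving each TMM its own independent round-robin state. This is a simplified sketch under an assumption the article does not make: that connections alternate evenly across TMMs (the real BIG-IP disaggregator is hash-based). It shows both the short-term skew and the long-term evening out.

```python
from collections import Counter
from itertools import cycle

# Pool members from the example configuration.
POOL = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80", "10.0.0.4:80"]
NUM_TMMS = 4  # a BIG-IP 8800 with CMP enabled can use four TMMs

# One independent round-robin picker per TMM process.
tmm_pickers = [cycle(POOL) for _ in range(NUM_TMMS)]

def handle(conn_id):
    # Assumption for illustration: connections are spread evenly
    # across TMMs; real disaggregation uses a hash, not round robin.
    return next(tmm_pickers[conn_id % NUM_TMMS])

# The first four connections land on one TMM each. Every picker
# starts at the same position, so all four hit the same member.
first_four = [handle(i) for i in range(4)]
print(first_four)

# Over many connections the load evens out: 4000 connections
# yield exactly 1000 per pool member in this model.
counts = Counter(first_four + [handle(i) for i in range(4, 4000)])
print(counts)
```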