Forum Discussion
Hi Piotr,
when you need to count the entries of the "master_subtable" to get the maximum number of concurrent "session id"s on each (or even most) of the requests, then it may be better not to split the table at all.
It is a trade-off: splitting the table may improve RAM distribution, but on the other hand it requires additional CPU cycles to count every subtable independently and finally sum up the total result.
A good compromise between CPU cycles and RAM distribution could be achieved if your application logic can accept a certain tolerance for the maximum session counter. In that case you could implement an additional cache so that the current total is calculated only once in a while...
if { [set master_table_current_size [table lookup -notouch "master_table_current_size"]] eq "" } then {
    # Cache miss or expired: count every subtable and store the total for 5 seconds
    set master_table_current_size 0
    for {set table_id 0} {$table_id < $NUMBER_OF_TABLES} {incr table_id} {
        incr master_table_current_size [table keys -subtable "master_table$table_id" -count]
    }
    table set "master_table_current_size" $master_table_current_size indef 5
} else {
    # Cache hit: use the retrieved $master_table_current_size value
}
The outlined example would -count the individual master_tables only once every 5 seconds; requests in between would use the previously cached total fetched from a regular table entry... 😉
Cheers, Kai