[OLSR-users] CPU load

Thomas Lopatic (spam-protected)
Thu Jan 20 13:03:17 CET 2005


Hi there,

some quick thoughts on the CPU load problem.

* What happens if debug output is switched off, i.e. the debug level is 
set to 0? On my old Pentium 233 olsrd was not usable due to CPU load 
even in a pretty small network with the debug level set to anything 
beyond 1.

* It would be interesting to see which parts of olsrd cause the high 
load. If increasing the poll interval makes much of a difference, then 
it's the periodically executed tasks, like expelling timed-out nodes 
from the internal data structures. If increasing the HELLO intervals of 
all surrounding nodes makes much of a difference, then it's the 
processing of incoming packets.

* I've thought through the linear list thing again. True, there are more 
efficient data structures, and I have a gut feeling that we could save a 
few lookups by caching the results of previous lookups. But then again, 
the performance problems already start at fifty nodes, which shouldn't 
be a problem for the existing data structures. (For potentially larger 
sets we use a hash function to distribute the set elements over a number 
of linear lists; see the sketch below.)
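
To make that last point concrete, here is a minimal sketch of what such 
hash-distributed linear lists look like. All identifiers (HASH_SIZE, 
node_entry, hash_addr, ...) are made up for illustration; they are not 
the names olsrd actually uses, and the sketch only assumes IPv4 addresses.

/* Distribute entries over HASH_SIZE short linear lists, so a lookup
   only has to walk one bucket instead of one long list. */

#include <stdlib.h>
#include <arpa/inet.h>

#define HASH_SIZE 32

struct node_entry {
  struct in_addr addr;        /* main address of the node */
  struct node_entry *next;    /* next entry in the same bucket */
};

static struct node_entry *node_table[HASH_SIZE];

/* Map an IPv4 address to one of the HASH_SIZE buckets. */
static unsigned int hash_addr(const struct in_addr *addr)
{
  return ntohl(addr->s_addr) % HASH_SIZE;
}

/* Look up a node; only the (hopefully short) bucket list is walked. */
static struct node_entry *lookup_node(const struct in_addr *addr)
{
  struct node_entry *e;

  for (e = node_table[hash_addr(addr)]; e != NULL; e = e->next)
    if (e->addr.s_addr == addr->s_addr)
      return e;

  return NULL;
}

/* Insert a new node at the head of its bucket. */
static struct node_entry *insert_node(const struct in_addr *addr)
{
  unsigned int h = hash_addr(addr);
  struct node_entry *e = malloc(sizeof(*e));

  if (e != NULL) {
    e->addr = *addr;
    e->next = node_table[h];
    node_table[h] = e;
  }

  return e;
}

With fifty nodes spread over a few dozen buckets the individual lists 
stay very short, which is why I wouldn't expect the linear lists 
themselves to be the bottleneck.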

To sum up, I don't have any idea why we have such a high CPU load. :-) 
So, let's find out. I guess that the easiest change would be to lower 
the debug level on a single node and see what this does to the CPU load 
on that node. The next change could then be to increase the poll 
interval on that node and see whether this makes any difference.
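
In olsrd.conf terms I'd expect the experiments to look roughly like 
this. The option names are from memory, so please double-check them 
against the default configuration file that ships with your version:

# Experiment 1: silence the debug output on one node.
DebugLevel       0

# Experiment 2: run the scheduler / periodic tasks less often (seconds).
Pollrate         0.2

# Experiment 3: on the surrounding nodes, send HELLOs less often.
Interface "eth0"
{
    HelloInterval        4.0
    HelloValidityTime    20.0
}

Change one knob at a time, of course, or we won't be able to tell which 
of the two effects we're actually seeing.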

If this still doesn't make any difference, we should come up with a 
profiling version of olsrd. Has anyone ever used the profiling features 
of GCC? Are they supported by the cross-compiler? I guess that this 
would give us a good idea of which functions olsrd spends most of its 
CPU cycles in.
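
For what it's worth, the stock GCC way of doing this is to compile and 
link with -pg and then run gprof on the gmon.out file that the binary 
writes on a clean exit; whether the cross-compiler and the target libc 
support this is exactly what we'd have to find out. Assuming the 
Makefile picks up CFLAGS/LDFLAGS overrides from the command line, 
something like:

make clean
make CFLAGS="-g -pg" LDFLAGS="-pg"

./olsrd -f /etc/olsrd.conf -d 0   # let it run for a few minutes,
                                  # then shut it down cleanly

gprof ./olsrd gmon.out > profile.txt

That should give us a flat profile and a call graph per function.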

-Thomas




