[Olsr-users] reduce OLSR bandwidth by tuning emission intervals
Tue Feb 26 12:57:31 CET 2013
As you might know, at ninux.org we are running some community networks in
Italy. Some of these are interconnected through DSLs, using a tinc VPN
over which OLSRd is running.
However, many people complain about the bandwidth used by OLSR traffic
over the DSLs.
My suggestion is to increase the emission intervals and validity times
(e.g. multiplying them by 15) on the interfaces that talk over the VPN.
This is explicitly allowed by the RFC [*] and should get rid of at least
some of the OLSR traffic.
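Concretely, the per-interface tuning could look like the sketch below.
The parameter names are olsrd's per-interface options; the base values
are the RFC 3626 defaults multiplied by 15, and the interface name
"tnc0" is just an example — treat all the numbers as assumptions to be
adapted to the deployment:

```
# Hypothetical per-interface tuning for the VPN interface.
# Base values are RFC 3626 defaults (section 18.2), multiplied by 15.
Interface "tnc0"
{
    HelloInterval       30.0    # 2.0 s  x 15
    HelloValidityTime   90.0    # 6.0 s  x 15
    TcInterval          75.0    # 5.0 s  x 15
    TcValidityTime      225.0   # 15.0 s x 15
    MidInterval         75.0
    MidValidityTime     225.0
    HnaInterval         75.0
    HnaValidityTime     225.0
}
```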
From a small emulation that I did, it looks like it works well: On the
"slowed down" link I see OLSR packets coming out at more or less the
smallest emission interval (i.e. the x15 HelloInterval), but with a
bunch of delayed TC and HNA messages inside (pcap here [^]). I think
that things work as long as the smallest validity time on the network is
larger than the biggest emission interval.
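To double-check that constraint, here is a small sketch (not from the
original post) that scales the RFC 3626 default timers by 15 and verifies
that every message type is still re-emitted before its advertised holding
time expires; the default values are taken from section 18.2 of the RFC:

```python
# RFC 3626 section 18.2 defaults, in seconds: (emission interval, holding time).
DEFAULTS = {
    "HELLO": (2.0, 6.0),
    "TC":    (5.0, 15.0),
    "MID":   (5.0, 15.0),
    "HNA":   (5.0, 15.0),
}

SCALE = 15  # the proposed multiplier for the VPN interfaces


def scaled(defaults, scale):
    """Multiply both emission intervals and validity times by `scale`."""
    return {m: (i * scale, v * scale) for m, (i, v) in defaults.items()}


def valid(timers):
    """True if every message is re-emitted before its holding time expires."""
    return all(interval < validity for interval, validity in timers.values())


slow = scaled(DEFAULTS, SCALE)
for msg, (interval, validity) in slow.items():
    print(f"{msg}: emit every {interval}s, hold for {validity}s")

# Scaling both numbers by the same factor preserves their ratio,
# so the interval < validity constraint still holds.
print("constraint holds:", valid(slow))
```

Since the slowed-down node advertises the scaled validity times inside its
own messages, nodes running default timers will simply hold its entries
for longer, which is why mixing the two interval sets should be safe.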
Do you think there might be drawbacks to such a setup in the real world on
a ~150-node network? For example RAM usage (we are mostly using
embedded devices)? Or perhaps, given the amount of TCs and HNAs,
there will be no significant bandwidth decrease?
cheers and thanks,
[*] RFC 3626 paragraph 18.1:
- the emission intervals (section 18.2), along with the
advertised holding times (subject to the above constraints)
MAY be selected on a per node basis.