[Olsr-dev] Thoughts about an ETX-FF-Metric plugin
Thu Jun 19 14:18:44 CEST 2008
On 6/19/08, Henning Rogge <(spam-protected)> wrote:
> On Thursday, 19 June 2008 11:30:20, Markus Kittenberger wrote:
> > Doing so would prefer stable links over unstable links (1 minute good,
> > then bad, and so on); I prefer stable/reliable connections,
> > while still reacting quite fast when a usually stable link becomes very
> > unstable (in 8 seconds from LQ 1.0 to 0.5).
> > And I think the results will not "hard" jump (up) too much, because a
> > group of bad values will never move out of the history quickly; instead
> > they gradually lose their weight as they move into longer
> > time slots. This approach is also immune to changing amounts of OLSR
> > traffic.
> I'm not sure if I like an LQ measurement that drops the LQ fast but
> increases it slowly ...
Luckily my proposed algorithm handles this better than you seem to think:
it can properly differentiate short loss periods from longer or multiple ones.
> We will just get a lot of bad LQ values in the network.
Partly yes, but from my point of view they should get bad values, because
these links perform badly. (#)
IMO it's a bad idea to start routing over a link again 15 seconds after it
was disastrous, even when the link is disastrous every 30 seconds, just
because we have already forgotten this.
If a link is lossy only once for a few seconds, my approach would let it
recover quite fast to nearly the LQ it had before.
Following is an example which shows that 24 seconds after the end of a
12-second high-loss period the LQ is back at 90% of its "original" value,
and at 95% after 48 seconds.
Shorter loss periods will recover a bit faster,
but longer periods will result in lower LQ values and more slowly
recovering LQ values.
Multiple loss periods within the total history duration (in this example
about 5 minutes) will result in low but quite stable LQ values (which is my
intention).
The scenario: LQ 1.0 all the time,
followed by 12 seconds of 75% loss starting at time 0;
afterwards the link runs lossless again.
The resulting LQ would be (with the same parameters for the algorithm I used
in my earlier mails):
time : LQ   (hints)
   0 : 1.00 (all time slots have an LQ of 1)
   3 : 0.90 (1 time slot has LQ 0.25)
   6 : 0.81 (2 time slots ...)
   9 : 0.62 (3 time slots ...)
  12 : 0.53 (4 time slots ...)
  15 : 0.53 (4 time slots have LQ 0.25: three 3-second slots and one 6-second slot)
  18 : 0.62 (3 time slots have LQ 0.25: two 3-second slots, one 6-second slot)
  21 : 0.62 (3 time slots ...: one 3-second slot, two 6-second slots)
  24 : 0.81 (2 time slots ...: two 6-second slots)
  30 : 0.81 (one 6-second slot and one 12-second slot)
  36 : 0.90 (1 12-second slot has LQ 0.25)
 190 : 0.97 (1 48-second slot has LQ 0.75)
 330 : 1.00 (everything is forgotten)
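To make the slot bookkeeping concrete, here is a small Python sketch of the
exponentially doubling history and the LQ calculation. This is my own
illustration, not olsrd code; the merge threshold, the number of worst slots,
and wweight are assumed values, not necessarily the exact parameters used for
the table above.

```python
# Illustrative sketch only; WORST, WWEIGHT and keep=4 are assumptions.
WORST = 4        # number of worst slots that form wLQ
WWEIGHT = 0.5    # worst-slot weight (reasonable range 0.4 .. 0.7)

def merge_oldest(slots, duration, keep=4):
    """Once more than `keep` slots of `duration` seconds exist, average
    the two oldest into one slot of twice the duration. Slots are stored
    oldest-first as (duration_seconds, lq) pairs."""
    while True:
        same = [i for i, (d, _) in enumerate(slots) if d == duration]
        if len(same) <= keep:
            return slots
        i, j = same[0], same[1]                       # the two oldest
        slots[i] = (duration * 2, (slots[i][1] + slots[j][1]) / 2.0)
        del slots[j]

def link_quality(slots):
    """LQ = wweight * wLQ + (1 - wweight) * oLQ, where wLQ averages the
    WORST worst slots and oLQ averages the remaining slots."""
    values = sorted(lq for _, lq in slots)
    worst = values[:WORST]
    rest = values[WORST:] or worst    # degenerate case: very short history
    wlq = sum(worst) / len(worst)
    olq = sum(rest) / len(rest)
    return WWEIGHT * wlq + (1 - WWEIGHT) * olq
```

With seven perfect 3-second slots and one at LQ 0.25 (roughly the situation
at time 3 in the table) this sketch yields an LQ of about 0.91, in the same
ballpark as the 0.90 shown above.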
hmm... maybe we could use the "hello-timeout" value here...
> The algorithm should work for small networks too... and in small networks we
> have only a few packets per second.
It will work in small networks if the base interval for calculating the LQ
values which go into the exponential window is >= the hello timeout,
and if the algorithm for the first averaging (on interval doubling) checks
how much data it has before deciding which fields it averages.
The event of a restarting neighbouring olsrd, detectable through
(unexpectedly) new/low sequence numbers, could be handled by the link
sensing by deleting the history (at least the parts containing 100% loss due
to the reboot of the device).
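A hedged sketch of that restart handling; the function name, the history
representation, and the sequence-gap heuristic are my assumptions, not
existing olsrd code:

```python
RESTART_GAP = 256  # assumed: a backwards jump larger than this means a reboot

def handle_hello(history, last_seq, seq):
    """Purge the 100%-loss slots (lq == 0.0) from the history when a
    neighbour's sequence number jumps to an unexpectedly low value,
    which indicates the neighbour restarted."""
    if last_seq is not None and last_seq - seq > RESTART_GAP:
        history = [(d, lq) for d, lq in history if lq > 0.0]
    return history
```

A real implementation would also have to distinguish this from normal
sequence-number wraparound.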
How deep the LQ sinks, and partly how fast it recovers, depends on the
parameters of the function which calculates the LQ from the x (in my example
4) worst time slots (wLQ)
and the other time slots (oLQ).
It could be parametrized as
LQ = (wweight * wLQ) + ((1 - wweight) * oLQ)
where wweight stands for the worst-slot weight; reasonable values would be
between 0.4 and 0.7, I guess.
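For illustration, here is how the choice of wweight changes the result for a
hypothetical link whose worst slots average wLQ = 0.25 while the remaining
slots are perfect (oLQ = 1.0); the numbers are made up for the example:

```python
def lq(wlq, olq, wweight):
    # LQ = (wweight * wLQ) + ((1 - wweight) * oLQ)
    return wweight * wlq + (1 - wweight) * olq

for w in (0.4, 0.5, 0.7):
    print(w, lq(0.25, 1.0, w))
# wweight 0.4 gives LQ 0.70, 0.5 gives 0.625, 0.7 gives 0.475:
# the larger the worst-slot weight, the deeper the LQ sinks
# during (and after) a loss period.
```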