[Olsr-dev] Thoughts about an ETX-FF-Metric plugin
Thu Jun 19 17:37:12 CEST 2008
On Thu, Jun 19, 2008 at 17:09, Markus Kittenberger wrote:
>> Do you think we could use these two arguments to get reasonable values for
>> the model?
> these two parameters are very important for the size/layout of the first
> linear window, and for the first averaging function.
> one question: how do you know the hello timeout of your neighbours? do they
> put it into their hello messages?
You don't know, but the timeout of your neighbor is not important. The
hello rate is transmitted by your neighbor, and it works together with
the LQ calculation: the neighbor is running the same (or a similar)
algorithm to calculate the LQ, and the result of the calculation is
transmitted with every hello message.
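The data structure this discussion revolves around might look roughly like the following sketch. The names, the slot count, and the per-slot counters are my own assumptions for illustration, not taken from the olsrd source: a fixed array of time slots, each counting hellos actually received versus hellos expected from the neighbor's announced hello rate.

```c
#include <stdint.h>

#define SLOT_COUNT 16 /* hypothetical number of history slots */

/* One time slot of link-quality history: hellos we actually received
 * vs. hellos the neighbor's announced hello rate says we should have
 * received during that slot. */
struct lq_slot {
  uint16_t received;
  uint16_t expected;
};

struct lq_history {
  struct lq_slot slots[SLOT_COUNT];
  int head; /* index of the newest slot */
};

/* LQ over the whole window: total received / total expected. */
static double lq_from_history(const struct lq_history *h) {
  unsigned recv = 0, expect = 0;
  for (int i = 0; i < SLOT_COUNT; i++) {
    recv += h->slots[i].received;
    expect += h->slots[i].expected;
  }
  return expect ? (double)recv / (double)expect : 0.0;
}
```

A weighted scheme (as discussed below, where old slots are summarized and down-weighted) would replace the flat sums with per-slot weights.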
> the exponential part of the window has its own tuning parameters; once we
> have found good values, they should be quite OK for most networks.
> maybe one total history length/duration parameter makes sense (in seconds).
> (it may be useful, for example, if you know you've got some (too long) shots
> of 5 GHz bridges which tend to repeatedly and completely fall out in bad
> weather. the old window size could even be used as input for this.)
I would like an algorithm which automatically chooses its parameters
by using the hello timeout.
(the hello rate of the neighbor might become variable, so we cannot
assume it's a constant)
> i can deliver a pseudocode sample or a working simulation in Java(Script) or
> so, but as I code in C only about 1% of my coding time, I think it's better
> if you implement it in C... (-;
Java would be great (I have done more coding in Java than in C ;) )
> i prefer to delete only the affected parts of the history (time slots newer
> than or equal to the last received packet, which is quite equivalent to
> deleting every time slot newer than or equal to the youngest time slot
> having LQ > 0, when a large jump is detected)
We have to test what's easier/better...
> deleting the front part of the history also has the side effect (depending
> on the implementation) of (indirectly) reducing the weight of the remaining
> parts, as they get re-rated/moved from a 48-second time slot (which will stay
> the same for 48 seconds) maybe even down to 12-second time slots, which will
> be summarized soon.
> (I'm not sure if this is a good side effect or not, or negligible)
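The truncation described above could be sketched like this (the slot layout is an assumption; slot 0 is the newest here): on a large sequence-number jump, zero every slot from the newest down to the youngest slot that actually saw a packet, keeping the older history intact.

```c
#include <string.h>

#define SLOT_COUNT 16 /* hypothetical window size */

struct lq_slot {
  unsigned received;
  unsigned expected;
};

/* Called when a large sequence-number jump suggests a neighbor reboot.
 * Slot 0 is the newest slot. Clears every slot from the newest one down
 * to (and including) the youngest slot that actually saw a packet; the
 * older history stays intact. */
static void clear_recent_history(struct lq_slot *slots, int n) {
  int youngest_with_rx = -1;
  for (int i = 0; i < n; i++) {
    if (slots[i].received > 0) {
      youngest_with_rx = i;
      break;
    }
  }
  if (youngest_with_rx < 0)
    return; /* no received packets anywhere: nothing to clear */
  memset(slots, 0, (size_t)(youngest_with_rx + 1) * sizeof(*slots));
}
```

This also produces the side effect discussed above: the surviving old slots get re-slotted toward the front of the window, which indirectly changes their weight.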
> also a question is how to handle large jumps caused by anything other than
> a reboot (changing the antenna, someone standing in front of the antenna,.. (-;)
> whether this difference (from a reboot) is detectable depends on whether
> olsrd starts with sequence number 0 or with a random number.
olsrd starts with a random number I think.
> the problem is that when the algorithm finds out it was a reboot, it has
> already filled parts of the history with the low LQ values it generated
> with every LQ timeout
Yes, if the reboot happens fast enough this might be a problem.
> of course it may make sense to keep the history longer than the link timeout,
> depending on how short the link timeout is
Not good... when the link times out, the data structure we are using to
store our information will be freed.
> a link timeout (at which definitely no route will use this neighbour) makes
> much sense with this history algorithm.
> for example, while after 12 seconds the LQ has dropped to 50% of its
> "original" value,
> it still has an LQ of about 37.5% after 30 seconds,
> 25% after 66 seconds,
> and after 138 seconds it is still 12.5%.
We already have a timeout algorithm, so that should be no problem ;)
> in this case ETX values drop linearly and not quadratically as usual, as on
> a completely dead link the NLQ is frozen, while in reality (if it's not a
> unidirectional link) the LQ at the neighbour drops too.
Unidirectional links will not really matter because a link is only
announced if both the neighbor and the node itself generate a "good"
LQ value (exchanged by Hellos).
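For reference, the relation behind the "linear vs. quadratic" point above is the standard ETX formula used by olsrd's link-quality extension, ETX = 1 / (LQ × NLQ): if the NLQ is frozen while the LQ drops, ETX grows with only one factor instead of two. A minimal sketch:

```c
#include <math.h>

/* ETX = 1 / (LQ * NLQ): the expected number of transmissions needed
 * for one successful send-and-acknowledge on this link. If either
 * direction is dead, the link is unusable. */
static double etx(double lq, double nlq) {
  if (lq <= 0.0 || nlq <= 0.0)
    return INFINITY;
  return 1.0 / (lq * nlq);
}
```

This also reflects the point above about unidirectional links: both directions must be good for the ETX to be usable at all.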
> despite the fact that this history function remembers/weights bad times
> better than good times, when starting with a history full of "good times"
> it clearly takes the full 280 seconds to reach LQ 0,
> and cannot compete against a shorter history in this special case.
> a method to improve this algorithm by lowering the LQ further in disastrous
> situations (this includes but is not limited to dead links) would be to use
> the very old part (the last quarter (= last 3-4 time slots)) of the history
> only when its values are lower than the newer parts:
> that means the LQ would be split into an old and a new part, comparing them
> before deciding whether to use the old part.
We will have to be careful with this... if most nodes of the network
just pump out bad ETX values, it will not really matter for routing
(only the relative differences will matter).
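The proposed old/new split could be sketched like this (the quarter split follows the examples in this thread; the flat per-slot weights are my simplifying assumption): compute the LQ of the newer slots and of the oldest quarter separately, and let the old part count only when it is worse than the new part.

```c
/* Pessimistic combination of the history: the oldest quarter of the
 * window only counts when it is *worse* than the newer part, so good
 * old times are forgotten in bad times, but long bad periods keep
 * pulling the LQ down. slot_lq[0] is the newest slot. */
static double pessimistic_lq(const double *slot_lq, int n) {
  int old_start = n - n / 4; /* index where the oldest quarter begins */
  double new_sum = 0.0, old_sum = 0.0;
  for (int i = 0; i < old_start; i++)
    new_sum += slot_lq[i];
  for (int i = old_start; i < n; i++)
    old_sum += slot_lq[i];
  double new_avg = new_sum / old_start;
  double old_avg = old_sum / (n - old_start);
  if (old_avg < new_avg)
    return (new_sum + old_sum) / n; /* bad old times still count */
  return new_avg; /* ignore the good old times */
}
```

On a 16-slot layout this ignores the last 4 slots exactly as described above, so a freshly dead link reaches LQ 0 without waiting for the good old slots to age out.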
> this would reach LQ 0 in 96 seconds (ignoring the last 3 of 13 slots),
> respectively 114 seconds (ignoring the last 4 of 16).
> after only 48 seconds the LQ would consist of 1/6 of the old "good" value
> and 5/6 of the new "disastrous" value; respectively, after 66 seconds the
> LQ would be 12.5% on the 16-slot layout.
> so this function gives bad time slots a higher weight and forgets the
> good old times when operating in bad times (-;, but remembers multiple/long
> bad times for quite a long time.
> the conclusion is a completely pessimistic link sensing, which will only
> generate comparable (to <= 0.5.5) LQ results on perfect and/or stable links.
> but this behaviour is the target, and if it's too pessimistic, there are
> some parameters which (when misused) can even make it optimistic.
Difficult to use together with old routers.
> but remember the old ETX code would perform much, much worse on dead links:
> 98% after 30 seconds, 95% after 75 seconds.
> i would call that blindly optimistic.
> but i think i've defined so many parameters so far that it's time for a
> simulation (-;
"Wo kämen wir hin, wenn alle sagten, wo kämem wir hin, und niemand
ginge, um einmal zu schauen, wohin man käme, wenn man ginge." (Kurt