On 6/19/08, Henning Rogge <rogge@fgan.de> wrote:
> On Thursday, 19 June 2008, 14:18:44, Markus Kittenberger wrote:
>
> > imo it's a bad idea to start routing over a link again 15 seconds after it
> > was disastrous, even when the link is disastrous every 30 seconds, just
> > because we have already forgotten this...
>
> That's right... especially for networks with mobile nodes.
>
> > if a link is lossy only once, for some seconds, my approach would let it
> > recover to nearly the LQ it had before quite fast.
>
> > ...
>
> The example looks fine. :)
>
> > > hmm... maybe we could use the "hello-timeout" value here...
> > ack
> >
> > > The algorithm should work for small networks too... and in small networks
> > > you have only a few packets per second.
> >
> > it will work in small networks, if the base for calculating the LQ values
> > which go into the exponential window is >= the hello timeout,
> > and if the algorithm for the first averaging (on interval doubling) checks
> > how much data it has before deciding which fields it averages...
>
> Let's see... we have two tuning parameters for links: Hello-Interval (which
> should be related to the mobility of the node and its neighbors) and
> Hello-Timeout (something like the "maximum memory" of a link).
>
> Do you think we could use these two parameters to get reasonable values for your
> model?

these two parameters are very important for the size/layout of the first linear window, and for the first averaging function.

one question: how do you know the hello timeout of your neighbours? do they put it into their hello messages?

the exponential part of the window has its own tuning parameters; once we have found good values for those, they should be quite ok for most networks.
maybe one total history length/duration parameter (in seconds) makes sense (useful, for example, if you know you've got some (too long) 5 GHz bridge links which tend to repeatedly and completely fall out in bad weather)
(the old window size could even be used as input for this: total_history_duration = hello_interval * window_size)
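to illustrate, a rough sketch (in C, untested; the names and the base slot of 4 * hello_interval are just assumptions of mine, not olsrd code) of deriving a slot layout from these two values:

#include <stdio.h>

#define MAX_SLOTS 16

/* Sketch: first (linear) level gets 4 slots of 4*hello_interval each,
 * later levels get 2 slots each with doubled duration, until the
 * requested total history duration is covered. Returns slot count. */
static int build_history_layout(double hello_interval, double total_duration,
                                double slots[MAX_SLOTS])
{
  double slot = 4.0 * hello_interval; /* base slot, e.g. 3s at 0.75s hellos */
  double covered = 0.0;
  int n = 0, in_level = 0, level_len = 4;

  while (n < MAX_SLOTS && covered < total_duration) {
    slots[n++] = slot;
    covered += slot;
    if (++in_level == level_len) {
      slot *= 2.0;     /* interval doubling */
      in_level = 0;
      level_len = 2;   /* later levels have two slots each */
    }
  }
  return n;
}

int main(void)
{
  double slots[MAX_SLOTS];
  /* 0.75s hellos and 288s total reproduce the 3,3,3,3,6,6,12,12,24,24,48,48,96
   * layout i use in the examples further down */
  int i, n = build_history_layout(0.75, 288.0, slots);

  for (i = 0; i < n; i++)
    printf("%g ", slots[i]);
  printf("\n");
  return 0;
}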
> I will see if I can implement this algorithm during the weekend or maybe next
> week.

i can deliver a pseudocode sample, or a working simulation in Java(Script) or so, but as only about 1% of my coding time is in C, i think it's better if you implement it in C... (-;
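as a first pseudocode sample (written as C; untested, the struct layout and all names are mine, and the real averaging step on interval doubling may well differ), the per-link bookkeeping could look roughly like this:

#define MAX_SLOTS 16

struct lq_slot {
  double duration;  /* seconds this slot covers (3, 3, ..., 48) */
  double filled;    /* seconds accumulated in this slot so far */
  double received;  /* hellos received while filling it */
  double expected;  /* hellos expected while filling it */
};

struct lq_history {
  struct lq_slot slot[MAX_SLOTS]; /* slot[0] = newest */
  int count;
};

/* When a slot has accumulated its full duration, flush its contents
 * into the next (longer) slot; the oldest slot simply forgets.
 * This cascade is just one possible stand-in for the "averaging on
 * interval doubling" step, not necessarily the exact one meant above. */
static void cascade(struct lq_history *h)
{
  int i;
  for (i = 0; i < h->count; i++) {
    if (h->slot[i].filled < h->slot[i].duration)
      break;
    if (i + 1 < h->count) {
      h->slot[i + 1].filled   += h->slot[i].filled;
      h->slot[i + 1].received += h->slot[i].received;
      h->slot[i + 1].expected += h->slot[i].expected;
    }
    h->slot[i].filled = h->slot[i].received = h->slot[i].expected = 0.0;
  }
}

/* Called once per expected hello; a lost hello increments expected
 * without a matching received. */
static void lq_history_tick(struct lq_history *h, int got_hello,
                            double hello_interval)
{
  h->slot[0].expected += 1.0;
  if (got_hello)
    h->slot[0].received += 1.0;
  h->slot[0].filled += hello_interval;
  cascade(h);
}

on a completely dead link, lq_history_tick(h, 0, interval) would then be driven by the LQ timeout instead of by received packets.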
> > Markus
> >
> > #
> > for the event of a restarting neighbouring olsrd, detectable through
> > (unexpectedly) new/low sequence numbers, the link sensing could handle
> > this by deleting the history (at least the parts containing 100% loss
i prefer to delete only the affected parts of the history (the timeslots younger than or equal to the last received packet, which, when a large jump is detected, is pretty much equivalent to deleting every timeslot younger than or equal to the youngest timeslot with LQ > 0)

deleting the front part of the history also has the side effect (depending on the implementation) of (indirectly) reducing the weight of the remaining parts, as they get re-rated/moved from a 48-second time slot (which would otherwise stay the same for 48 seconds) maybe even down to 12-second time slots, which will be summarized soon...
(i'm not sure whether this is a good side effect, a bad one, or negligible)

another question is how to handle large jumps caused by anything other than a reboot (changing the antenna, someone standing in front of the antenna... (-;)

whether this is distinguishable from a reboot depends on whether olsrd starts with sequence number 0 or with a random number...
> > due to the reboot of the device)
>
> If it was just a reboot we might just "jump" to the new sequence number, no
> need to kill the history.

the problem is that by the time the algorithm finds out it was a reboot, it has already filled parts of the history with the low LQ values it generated on every LQ timeout
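in code, that trimming could look roughly like this (again just a sketch on top of the struct from further up; names are mine):

/* Sketch: after a large, unexpected sequence-number jump (likely a
 * neighbour reboot), clear only the newest slots, which by then hold
 * the bogus 100%-loss entries the LQ timeouts generated since the
 * last really received packet. */
static void lq_history_clear_front(struct lq_history *h,
                                   double secs_since_last_rx)
{
  double t = 0.0;
  int i;
  for (i = 0; i < h->count && t < secs_since_last_rx; i++) {
    t += h->slot[i].duration;
    h->slot[i].filled = h->slot[i].received = h->slot[i].expected = 0.0;
  }
}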
> If it was a long downtime, the link will have timed out and we will start with
> a fresh history.

ack, otherwise it would be gone anyway

of course it may make sense to keep the history for longer than the link timeout, depending on how short the link timeout is
a link timeout (at which definitely no route will use this neighbour any more) makes a lot of sense with this history algorithm: for example, while after 12 seconds the LQ has dropped to 50% of its "original" value,
it still has an LQ of about 37.5% after 30 seconds,
25% after 66 seconds,
and still 12.5% after 138 seconds.

in this case the ETX values drop linearly and not quadratically as usual, since on a completely dead link the NLQ is frozen... while in reality (if it's not a unidirectional link) the LQ at the neighbour drops too...

using a steeper history with only 13 timeslots (3,3,3,3,6,6,12,12,24,24,48,48,96)
(the history duration of this layout is 288 seconds, instead of 282 seconds with 16 slots (3,3,3,3,6,6,6,12,12,12,24,24,24,48,48,48))

improves this only negligibly, to these values:
after 30 seconds: 36.1%
after 66 seconds: 23.6%
after 138 seconds: 11.5%

so this helps almost nothing while having side effects on the whole history... in fact, imo, dead links are better handled with a simple link timeout... (below is an adjustment of the algorithm to handle dead and nearly dead links (e.g. 90% loss) better)
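(the slot sums above are trivial to double-check, e.g. with this little C snippet:)

#include <stdio.h>

int main(void)
{
  const int a[] = {3,3,3,3,6,6,12,12,24,24,48,48,96};          /* 13 slots */
  const int b[] = {3,3,3,3,6,6,6,12,12,12,24,24,24,48,48,48};  /* 16 slots */
  int i, sa = 0, sb = 0;

  for (i = 0; i < 13; i++) sa += a[i];
  for (i = 0; i < 16; i++) sb += b[i];
  printf("13 slots: %ds, 16 slots: %ds\n", sa, sb); /* 288s and 282s */
  return 0;
}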
despite the fact that this history function remembers/weights bad times more strongly than good times, when starting with a history full of "good times" it clearly takes the full ~280 seconds to reach LQ 0,
and cannot compete against a shorter history in this special case...

a method to improve this algorithm, so that it lowers the LQ further in disastrous situations (this includes, but is not limited to, dead links), would be to use the very old part of the history (the last quarter = the last 3-4 time slots) only when its values are lower than those of the newer parts:

that means the oLQ would be split into an old and a new part, comparing them before deciding whether to use the old part...

this would reach LQ 0 in 96 seconds (ignoring the last 3 of 13 slots), respectively 114 seconds (ignoring the last 4 of 16)

after only 48 seconds the LQ would consist of 1/6 of the old "good" value and 5/6 of the new "disastrous" value; respectively, after 66 seconds the LQ would be 12.5% on the 16-slot layout
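roughly, in code (again on top of the history struct from further up; the split point and the weights w[] are open tuning parameters, and the names are mine):

/* Weighted LQ over slots [from, to); w[] are the per-slot weights of
 * the exponential part. */
static double weighted_lq(const struct lq_history *h, const double *w,
                          int from, int to)
{
  double num = 0.0, den = 0.0;
  int i;
  for (i = from; i < to; i++) {
    if (h->slot[i].expected <= 0.0)
      continue;
    num += w[i] * h->slot[i].received / h->slot[i].expected;
    den += w[i];
  }
  return den > 0.0 ? num / den : 0.0;
}

/* Pessimistic variant: the old part (last quarter of the slots) only
 * counts when it is worse than the newer part, so good old times are
 * forgotten in bad times, but old bad times are remembered. */
static double lq_history_value(const struct lq_history *h, const double *w)
{
  int split = h->count - h->count / 4; /* 12 of 16 slots, 10 of 13 */
  double lq_new = weighted_lq(h, w, 0, split);
  double lq_old = weighted_lq(h, w, split, h->count);

  if (lq_old < lq_new)
    return weighted_lq(h, w, 0, h->count); /* keep the bad old times */
  return lq_new;                           /* ignore the good old times */
}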
so this function gives bad time slots a higher weight and forgets the good old times when operating in bad times (-;, but remembers multiple/long bad times for quite a long time.
the conclusion is a completely pessimistic link sensing, which will only generate LQ results comparable to those of <= 0.5.5 on perfect and/or stable links.

but this behaviour is the target, and if it's too pessimistic, there are some parameters which (when misused) can even make it optimistic...

but remember, the old ETX code would perform much, much worse on dead links: still 98% after 30 seconds, 95% after 75 seconds...
i would call that blindly optimistic...

but i think i've defined so many parameters so far that it's time for a simulation (-;
> I'm very interested in this idea because it's independent of the link-quality
> metric we use. Even with better metrics we could still use this algorithm
> to "smooth" the incoming data.
i know (-;

and it does more than just smoothing (-;
> Henning