<span class="gmail_quote">On 6/19/08, <b class="gmail_sendername">Henning Rogge</b> <<a href="mailto:rogge@fgan.de">rogge@fgan.de</a>> wrote:</span><blockquote class="gmail_quote" style="margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; margin-left: 0.80ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid; padding-left: 1ex">
On Thursday, 19 June 2008 11:30:20, Markus Kittenberger wrote:<br><br>> doing so would prefer stable links over unstable links (1 minute perfect,<br> > then bad, and so on) (I prefer stable/reliable connections)<br>
> while reacting quite fast when a usually stable link becomes very<br> > unstable... (in 8 seconds from LQ 1.0 to 0.5)<br> > and I think the results will not jump up too hard... (because a<br>
> group of bad values will never move out of the history quickly; instead<br> > they gradually lose their weight as they move into longer<br> > time slots) this approach is also immune to changing amounts of OLSR<br>
> traffic...<br><br>I'm not sure if I like an LQ measurement that drops the LQ fast but increases it slowly ...</blockquote><div><br>Luckily my proposed algorithm handles this better than you seem to think...<br> it can properly differentiate short loss periods from longer/multiple ones.</div>
<br><blockquote class="gmail_quote" style="margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; margin-left: 0.80ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid; padding-left: 1ex">
we will just get a lot of bad LQ values in the network.</blockquote><div><br>Partly yes, but from my point of view they should get bad values, because these links perform badly... (#)<br><br>IMO it's a bad idea to start routing over a link again 15 seconds after it was disastrous, even when the link is disastrous every 30 seconds, just because we have already forgotten about it...<br>
<br>If a link is lossy for some seconds only once, my approach would let it recover quite fast to nearly the LQ it had before.<br><br>The following example shows that 24 seconds after the end of a 12-second high-loss period the LQ is back at 90% of its "original" value, and at 95% after 48 seconds. <br>
Shorter loss periods will recover a bit faster...<br>but longer periods will result in lower and more slowly recovering LQ values.<br>Multiple loss periods within the total history duration (in this example about 5 minutes) will result in low but quite stable LQ values (which is my intention).<br>
##<br><br>Scenario:<br>LQ 1.0 all the time,<br>followed by 12 seconds of 75% loss starting at time 0;<br>afterwards the link runs lossless again.<br><br>The resulting LQ would be (with the same parameters for the algorithm as I used in my mails before):<br>
time : LQ (hints)<br>0 : 1.00 (all time slots have an LQ of 1)<br>3 : 0.90 (1 time slot has LQ 0.25) <br>6 : 0.81 (2 time slots) <br>9 : 0.62 (3 time slots ...) <br>12: 0.53 (4 time slots ...)<br>15: 0.53 (4 time slots have LQ 0.25: three 3-second slots and one 6-second slot)<br>
18: 0.62 (3 time slots have LQ 0.25: two 3-second slots, one 6-second slot)<br>21: 0.62 (3 time slots ...: one 3-second slot, two 6-second slots)<br>24: 0.81 (2 time slots ...: two 6-second slots)<br>30: 0.81 (one 6-second slot and one 12-second slot)<br>
36: 0.90 (one 12-second slot has LQ 0.25)<br>60: 0.95 (one 24-second slot has LQ 0.5)<br>190: 0.97 (one 48-second slot has LQ 0.75)<br>330: 1.00 (everything is forgotten)<br><br></div><br><blockquote class="gmail_quote" style="margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; margin-left: 0.80ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid; padding-left: 1ex">
hmm... maybe we could use the "hello-timeout" value here...</blockquote><div><br>ack</div><br><blockquote class="gmail_quote" style="margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; margin-left: 0.80ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid; padding-left: 1ex">
The algorithm should work for small networks too... and in small networks you<br> have only a few packets per second.</blockquote><div><br>It will work in small networks if the base interval for calculating the LQ values that go into the exponential window is >= the hello timeout, <br>
and if the algorithm for the first averaging (on interval doubling) checks how much data it has before deciding which fields it averages.<br></div><br>Markus<br><br>#<br>For the event of a restarting neighbouring olsrd, detectable through (unexpectedly) new/low sequence numbers, link sensing could handle this by deleting the history (at least the parts containing 100% loss due to the reboot of the device).<br>
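<br>For illustration, the interval-doubling slot history described above could be sketched roughly like this. All names, sizes and parameters here are my own invention for the example (3-second base slots, at most 4 slots per duration class), not olsrd code; expiry of the oldest slots after the total history duration is omitted:<br><br>

```c
/* Sketch of a time-slot history with interval doubling: each new
 * 3-second sample is pushed in front (newest first); whenever more
 * than MAX_PER_LEN slots share a duration, the two oldest of that
 * class are averaged into one slot of double length, so old loss
 * loses weight gradually instead of dropping out abruptly. */
#define MAX_SLOTS   32
#define MAX_PER_LEN 4              /* illustrative choice */

struct slot { double lq; int len; };   /* len in seconds */

struct lq_history {
    struct slot s[MAX_SLOTS];          /* ordered newest to oldest */
    int n;
};

/* add one 3-second measurement, then merge overflowing classes */
static void push_sample(struct lq_history *h, double lq)
{
    int i = 0;

    for (int k = h->n; k > 0; k--)     /* shift history back */
        h->s[k] = h->s[k - 1];
    h->s[0].lq = lq;
    h->s[0].len = 3;
    h->n++;

    while (i < h->n) {                 /* walk the duration classes */
        int len = h->s[i].len, j = i;
        while (j < h->n && h->s[j].len == len)
            j++;
        if (j - i > MAX_PER_LEN) {
            /* merge the two oldest slots of this class into one slot
             * of double duration (plain average: equal lengths) */
            h->s[j - 2].lq  = (h->s[j - 2].lq + h->s[j - 1].lq) / 2.0;
            h->s[j - 2].len = len * 2;
            for (int k = j - 1; k < h->n - 1; k++)
                h->s[k] = h->s[k + 1];
            h->n--;
            i = j - 2;                 /* recheck the doubled class */
        } else {
            i = j;
        }
    }
}
```

<br>With these parameters, 16 samples settle into four 3-second slots, four 6-second slots and one 12-second slot, matching the slot mixes listed in the example table.<br>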
<br>##<br>How deep the LQ sinks, and partly how fast it recovers, depends on the parameters of the function that calculates the LQ from the x (in my example 4) worst time slots (wLQ)<br>and the other time slots (oLQ):<br> LQ = (wLQ + oLQ) / 2<br>
<br>This could be parametrized to<br>LQ = (wweight * wLQ) + ((1 - wweight) * oLQ)<br><br>where wweight ("wsw", worst slot weight) is the weight of the worst slots; reasonable values would be between 0.4 and 0.7, I guess.<br>
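<br>A sketch of this weighting, under my own assumptions (the function name, the handling of short histories, and the parameter values are illustrative, not olsrd code):<br><br>

```c
#include <stdlib.h>   /* qsort */

/* ascending comparison of doubles for qsort */
static int cmp_double(const void *pa, const void *pb)
{
    double a = *(const double *)pa, b = *(const double *)pb;
    return (a > b) - (a < b);
}

/* Compute LQ from per-slot link qualities lq[0..n-1]:
 * the x worst slots are averaged into wLQ, the rest into oLQ,
 * and combined as wweight * wLQ + (1 - wweight) * oLQ.
 * With too little history, fewer worst slots are used (one way
 * to "check how much data it has" for small networks). */
static double slot_lq(double *lq, int n, int x, double wweight)
{
    double wlq = 0.0, olq = 0.0;

    if (n <= x)
        x = n / 2;                 /* sparse history: shrink x */
    qsort(lq, n, sizeof lq[0], cmp_double);
    for (int i = 0; i < x; i++)
        wlq += lq[i];
    for (int i = x; i < n; i++)
        olq += lq[i];
    if (x == 0)
        return olq / n;            /* plain mean */
    return wweight * (wlq / x) + (1.0 - wweight) * (olq / (n - x));
}
```

<br>For example, with eight slots of which one has LQ 0.25 and the rest 1.0, x = 4 and wweight = 0.5 give wLQ = 0.8125, oLQ = 1.0 and thus LQ ≈ 0.91, in line with the "3 : 0.90" row of the table above; raising wweight towards 0.7 punishes the bad slot harder.<br>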