[Olsr-dev] Thoughts about an ETX-FF-Metric plugin
Markus Kittenberger
(spam-protected)
Thu Jun 19 11:30:20 CEST 2008
On 6/18/08, Henning Rogge <(spam-protected)> wrote:
> I just did another test implementation of the new algorithm, this one with
> FPM and with a sliding window (length fixed to 16 seconds) for calculating
> the average (instead of the exponential moving average).
> Test results or more ideas are welcome.
Another approach would be an "exponential" window consisting of values
representing time slots of increasing duration (a rough C sketch follows
the list):
position : summary of x seconds
01 : 3
02 : 3
03 : 3
04 : 3
05 : 6  (updated every 6 seconds with the average of fields 3 and 4, so the
        field contains a summary of results that were roughly 6..12 seconds
        old at the time of the update)
06 : 6
07 : 6
08 : 12 (updated every 12 seconds with the average of fields 6 and 7)
09 : 12
10 : 12
11 : 24 (updated every 24 seconds with the average of fields 9 and 10)
12 : 24
13 : 24
14 : 48 (updated every 48 seconds with the average of fields 12 and 13)
15 : 48
16 : 48
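To make the bookkeeping concrete, here is a minimal C sketch of such a
window, assuming the fixed 4 + 4*3 = 16 slot layout from above. All names
(exp_window, exp_window_tick) are made up, and a real olsrd plugin would use
FPM instead of float and would also track which slots hold valid data yet:

#include <string.h>

#define INITIAL_SLOTS 4                 /* 3 s slots at the front        */
#define EXP_SLOTS     3                 /* slots per band (see # below)  */
#define BANDS         4                 /* 6 s, 12 s, 24 s, 48 s bands   */
#define TOTAL_SLOTS   (INITIAL_SLOTS + BANDS * EXP_SLOTS)   /* = 16 */

struct exp_window {
  float slot[TOTAL_SLOTS];              /* slot[0] is the newest 3 s summary */
  unsigned ticks;                       /* 3 s periods seen so far           */
};

/* Called once per initial slot length (3 s) with the LQ measured in
 * that period; cascades averages into the longer slots. */
static void exp_window_tick(struct exp_window *w, float newest)
{
  int band;

  w->ticks++;

  /* update the oldest band first, so each band captures the younger
   * band's slots before those are shifted */
  for (band = BANDS - 1; band >= 0; band--) {
    unsigned first = INITIAL_SLOTS + band * EXP_SLOTS;
    unsigned period = 2u << band;       /* 2, 4, 8, 16 ticks = 6..48 s */

    if (w->ticks % period == 0) {
      memmove(&w->slot[first + 1], &w->slot[first],
              (EXP_SLOTS - 1) * sizeof(float));
      /* e.g. field 5 becomes the average of fields 3 and 4 */
      w->slot[first] = (w->slot[first - 2] + w->slot[first - 1]) / 2.0f;
    }
  }

  /* shift the initial slots and store the newest measurement */
  memmove(&w->slot[1], &w->slot[0], (INITIAL_SLOTS - 1) * sizeof(float));
  w->slot[0] = newest;
}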
From these 16 values, for example, the 4 worst values are averaged to wLQ,
all other values are averaged to oLQ,
and then LQ = (wLQ + oLQ) / 2.
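A minimal sketch of that averaging step (hypothetical names, and again
float instead of FPM):

#include <stdlib.h>
#include <string.h>

#define WINDOW_SLOTS 16
#define WORST_SLOTS   4

static int cmp_float(const void *a, const void *b)
{
  const float fa = *(const float *)a, fb = *(const float *)b;
  return (fa > fb) - (fa < fb);
}

/* Combine the slot values into one link quality:
 * average of (average of the 4 worst) and (average of the rest). */
static float window_lq(const float slot[WINDOW_SLOTS])
{
  float sorted[WINDOW_SLOTS];
  float wlq = 0.0f, olq = 0.0f;
  int i;

  memcpy(sorted, slot, sizeof(sorted));
  qsort(sorted, WINDOW_SLOTS, sizeof(float), cmp_float); /* worst first */

  for (i = 0; i < WORST_SLOTS; i++)
    wlq += sorted[i];
  for (i = WORST_SLOTS; i < WINDOW_SLOTS; i++)
    olq += sorted[i];

  wlq /= (float)WORST_SLOTS;
  olq /= (float)(WINDOW_SLOTS - WORST_SLOTS);

  return (wlq + olq) / 2.0f;
}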
Doing so would prefer stable links over unstable links (one minute perfect,
then bad, and so on); I prefer stable/reliable connections.
At the same time it would react quite fast when a usually stable link becomes
very unstable (in about 8 seconds from LQ 1.0 to 0.5).
And I think the results will not "hard"-jump (up) too much, because a group
of bad values never leaves the history abruptly; instead it gradually loses
its weight as it moves into the longer time slots.
This approach is also immune to changing amounts of OLSR traffic, since the
slots are defined by time rather than by packet counts.
Maybe an even larger or steeper history would be a good idea (the above
sample covers only 4*3 + 3*(6+12+24+48) = 282 seconds, i.e. not even 5
minutes).
The parameters of this window would be: total length (16 was used above),
initial slot length (3 seconds), initial slot count (4), and exponential
slot count (3).
initial_slot_length * initial_slot_count should be well above the hello
interval (I think 2x the hello interval (of the neighbour) is quite fine),
and the exponential slot count must be at least 2. (#)
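As a sketch, that parameter set and its sanity checks could look like this
(all names invented; the 2x factor is just the suggestion from above):

struct exp_window_params {
  int total_slots;            /* 16 above                          */
  float initial_slot_len;     /* 3 seconds above                   */
  int initial_slot_count;     /* 4 above                           */
  int exp_slot_count;         /* 3 above (values per band, see #)  */
};

/* Returns 1 if the parameter set makes sense, 0 otherwise. */
static int exp_window_params_valid(const struct exp_window_params *p,
                                   float neighbour_hello_interval)
{
  /* the initial part must clearly outlast the neighbour's hello
   * interval; 2x the hello interval is the suggestion from above */
  if (p->initial_slot_len * (float)p->initial_slot_count
      < 2.0f * neighbour_hello_interval)
    return 0;

  /* each band's first slot is averaged from the previous band's
   * last two slots, so a band needs at least 2 slots */
  if (p->exp_slot_count < 2)
    return 0;

  /* the exponential part must consist of whole bands */
  if ((p->total_slots - p->initial_slot_count) % p->exp_slot_count != 0)
    return 0;

  return 1;
}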
The initial slot length is a bit problematic: making the intervals too small
risks creating bad values (when there is no packet within a time slot),
while making them too large results in a slower reaction to suddenly dead or
extremely bad links.
Maybe the first summary function (for field 5 above) should be a bit smarter
than just blindly averaging. ##
Markus
#
This represents the number of values that share the same time slot duration
(before it doubles again). With it you can tune the steepness of the
history; you can also interpret it as the average duration increase between
the Xth and the (X+1)th field in the history, i.e. a factor of
2^(1/exponential_slot_count):
2 means a factor of 1,41 = sqrt(2) = 2^(1/2)
3 means a factor of 1,26 = 2^(1/3)
4 means a factor of 1,19 = 2^(1/4)
##
It could analyze the 2nd or even the 1st field in case there is not enough
data in fields 3 and 4.
And the "initial" slots could use two values (lost, received), like in
Henning's latest approach, to enable analysis of the data "quality"
(0:0 would surely mean not enough data to get reasonable results).
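A sketch of such a smarter summary function, assuming each initial slot
keeps (lost, received) counters; the function name, the slot layout and the
MIN_PACKETS threshold are all made up here:

struct count_slot {
  unsigned lost;
  unsigned received;
};

#define MIN_PACKETS 4  /* below this, widen the lookback (assumed value) */

/* Summarize the oldest initial slots into one LQ value for field 5.
 * slot[0] is the newest initial slot (field 1), slot[3] the oldest
 * (field 4). */
static float summarize_initial(const struct count_slot slot[4])
{
  unsigned lost = 0, received = 0;
  int i;

  /* start with fields 3 and 4 and widen towards the newer fields
   * only while there is not enough data */
  for (i = 3; i >= 0; i--) {
    lost += slot[i].lost;
    received += slot[i].received;
    if (i <= 2 && lost + received >= MIN_PACKETS)
      break;
  }

  if (lost + received == 0)
    return 0.0f;  /* "0:0" - no data at all; a real implementation
                   * might rather skip this update entirely */

  return (float)received / (float)(lost + received);
}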