[Olsr-dev] ETX mambo jambo
Mon May 10 23:08:06 CEST 2010
some very brief remarks:
- olsr is/will become modular with metric plug-ins (see list archive).
- etx works by addition because it tries to reflect the expected
value of the total number of transmissions of a packet, under the
assumption that only local 802.11 retransmissions occur, no
server/client TCP/IP retransmissions.
- your full/half-duplex differentiation does make sense as a category.
however, i don't see an easy way to auto-detect which link falls into
which category. if you have to decide it manually, there is no
difference from using an ethernet/backbone flag.
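To make the addition argument concrete: if each hop independently retransmits until success, the expected number of transmissions on a link with forward delivery ratio df and reverse (ACK) ratio dr is 1/(df*dr), and expectations of independent per-hop counts add up over the path. A minimal sketch (the numbers are made up):

```python
# ETX of a single link: expected number of 802.11 (re)transmissions
# until a data frame and its ACK both get through.
#   df = forward delivery ratio (data frame), dr = reverse ratio (ACK)
def link_etx(df: float, dr: float) -> float:
    return 1.0 / (df * dr)

# Path ETX: expected TOTAL transmissions over all hops. Expectations of
# independent per-hop transmission counts add, which is why ETX is summed.
def path_etx(links: list[tuple[float, float]]) -> float:
    return sum(link_etx(df, dr) for df, dr in links)

# Example: two hops, each with 90% delivery in both directions.
path = [(0.9, 0.9), (0.9, 0.9)]
print(path_etx(path))  # roughly 2.47 expected transmissions
```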
> Probably a lot has already been written on this topic, but ETX is
> badly conceptualized and does not properly influence routing
> decisions.
> For example, we should first distinguish two kinds of links:
> full-duplex (all nodes in the neighborhood can talk at the same time
> at very low cost) and half-duplex (all nodes in the neighborhood have
> to be quiet while one speaks). Then add packet loss for those links,
> and routing costs should take this into account. A full-duplex link
> without packet loss (like an ethernet cable) should have no cost
> associated with it. That would be the reason why ethernet links are
> less costly than one hop over a half-duplex link, rather than a
> "special" flag for ethernet links.
> And if OLSR supported asymmetric links, it could take this
> half-duplex information and route even better.
> Also, extra hops do not really add much on full-duplex links (except
> latency, which is not the main performance factor compared to others
> on wireless links), in contrast to half-duplex links.
> So a crude way would be to multiply ETX costs on full-duplex links
> and add them on half-duplex links (instead of just adding everywhere).
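One possible reading of this crude rule, purely as an illustration (nothing olsrd implements): accumulate half-duplex hops additively and full-duplex hops multiplicatively, so a perfect full-duplex link (ETX 1) is free, since multiplying by 1 changes nothing:

```python
# Hypothetical hybrid path cost, one reading of the proposal above:
# half-duplex hops add their ETX, full-duplex hops scale the total.
# A lossless full-duplex link (etx == 1.0) then costs nothing extra.
def hybrid_cost(hops):
    """hops: list of (etx, is_full_duplex) tuples."""
    additive = 0.0
    factor = 1.0
    for etx, full_duplex in hops:
        if full_duplex:
            factor *= etx
        else:
            additive += etx
    return additive * factor

# Two wireless hops plus a clean ethernet hop: the ethernet hop is free.
print(hybrid_cost([(1.2, False), (1.5, False), (1.0, True)]))  # 2.7
```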
> I really do not understand why ETX costs are summed in the first
> place. If we measure packet loss, that is the probability of a packet
> getting through over that particular hop, and the probability of a
> packet getting through over the whole path is the product of these
> per-hop probabilities.
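The two aggregation rules really can disagree. With no retransmissions, end-to-end delivery probability is the product of per-hop probabilities; ETX instead sums expected transmission counts. A made-up example where the two models rank two paths differently:

```python
def etx_sum(probs):
    # ETX-style cost: sum of expected transmissions per hop
    # (assuming a perfect reverse direction for simplicity).
    return sum(1.0 / p for p in probs)

def delivery_prob(probs):
    # End-to-end success probability with NO retransmissions:
    # every hop must succeed, so probabilities multiply.
    prod = 1.0
    for p in probs:
        prod *= p
    return prod

path_a = [0.5]                # one lossy hop
path_b = [0.95, 0.95, 0.95]   # three good hops

print(etx_sum(path_a), etx_sum(path_b))              # 2.0 vs ~3.16
print(delivery_prob(path_a), delivery_prob(path_b))  # 0.5 vs ~0.857
# Summed ETX prefers path A; the product model prefers path B.
```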
> The only reason summing is reasonable is that there is a hidden cost
> of transferring packets over half-duplex (wireless) links: other
> links in the vicinity have to be quiet at that time. But all this is
> a bad probability model for what is happening, and this is why
> discrepancies between the model and the real world occur.
> Because now, as we use a lot of ethernet links and VPN links, routing
> decisions are sometimes very bad.
> (And all this is without taking bandwidth limits into account.)
> The more I think about it, the more I see it is the wrong approach.
> What are routing protocols? They are prediction tools which evaluate
> past behavior to predict the behavior and performance of one route
> over another. As such, there should first be a good model of the
> networks we are talking about and of what we would like to predict:
> performance (bandwidth, latency...), what influences performance
> (packet loss, selected wifi link speed, limits on a VPN tunnel), and
> how to get the measurements used to fill this model with data for
> prediction (with, of course, the side effects of active measurements
> on performance).
> Where current routing protocols differ is in how they measure data
> and how they compute shortest paths. I do not understand why exactly
> we need a new routing protocol for every such combination. Why can we
> not make a modular routing protocol where you can define how links
> are measured (which messages you send and so on), then a prediction
> tool, which you can vary, to evaluate all this data (of which a good
> approach to its computation is just one part), and then what you do
> once you have evaluated/predicted path performance: do you use it for
> L2 or L3 links and routes? If all this were modular, we could
> interchange things. If somebody writes a good L3 module for
> adding/removing IPv4 and IPv6 routes once, everybody could use it.
> Then we could play with different data collections and feed the data
> into different predictive models. And whether we collect the data
> with OLSR or B.A.T.M.A.N. messages would not really matter.
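The separation being proposed can be sketched as three interchangeable interfaces. None of these names exist in olsrd or any real project; this is only an illustration of the modular "routing stack" idea:

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces for the modular "routing stack" sketched
# above. Purely illustrative: the names are invented for this example.

class LinkMeasurement(ABC):
    """How links are probed (e.g. OLSR HELLOs or B.A.T.M.A.N. OGMs)."""
    @abstractmethod
    def measure(self) -> dict[tuple[str, str], float]:
        """Return observed per-link data, e.g. delivery ratios."""

class PredictionModel(ABC):
    """Turns raw measurements into comparable path costs."""
    @abstractmethod
    def path_cost(self, path: list[tuple[str, str]],
                  data: dict[tuple[str, str], float]) -> float: ...

class RouteInstaller(ABC):
    """Pushes the chosen routes into the network stack (L2 or L3)."""
    @abstractmethod
    def install(self, routes: dict[str, list[str]]) -> None: ...

# A trivial ETX-style model plugged into the PredictionModel slot:
class EtxModel(PredictionModel):
    def path_cost(self, path, data):
        return sum(1.0 / data[link] for link in path)

model = EtxModel()
cost = model.path_cost([("a", "b"), ("b", "c")],
                       {("a", "b"): 0.9, ("b", "c"): 0.5})
print(cost)  # ~3.11
```

With interfaces like these, a researcher could swap in a new PredictionModel while reusing the same measurement and route-installation modules, which is the interchangeability the poster is asking for.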
> Currently everybody is trying to build everything from scratch: their
> own way of measuring data, evaluating it, and then pushing changes to
> the network stack. This is crazy. Probably it is because most work
> originates from academia, where the author is concerned with
> testing/evaluating his/her own combination of the things mentioned
> above. But if we had such a general "routing stack" it would be
> useful for everybody: for networks, because we could all work on the
> same codebase (and have interchangeable modules), and for academia,
> because they would only need to write, for example, a new prediction
> model and try it.