[Olsr-dev] Meshing over VPNs

Markus Kittenberger (spam-protected)
Mon Oct 10 11:20:17 CEST 2011

On Tue, Oct 4, 2011 at 3:22 PM, Benny Baumann <(spam-protected)> wrote:

> Hi,
> >
> > use flat! *G
> Have to look into this.
> >
> >     destination, but one route per distance and host. So sometimes we
> >     get up
> >     to 2 or 3 routes per host.
> >
> > (which is (or at least sounds like) a bug...)
> I can provide a routing table as created from OLSRd alongside the
> graph/topology of the network. Basically the routes are distinct like
> 1. hostN via host1 metric 2
> 2. hostN via host2 metric 3
> So the routing table is correct,

no, it's not (-;
olsrd should only create one route per target,..
the others only exist because olsrd failed to delete the outdated routes,..

and if the metric of an updated route gets higher, the old (now wrong) route
will still be used instead of the new (correct) one

(in fact i know this (or a very similar) bug, but believed it had already been fixed)

> except that the second route /could/ be
> dropped.
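
To make the point concrete, here is a tiny sketch (plain Python, illustrative names only, not olsrd code) of why the leftover route is harmful: the kernel picks the route with the lowest metric, so a stale route that olsrd failed to delete keeps winning even after the topology has changed.

```python
# Hypothetical model of kernel route selection: lowest metric wins.
# Route entries here are illustrative, not real olsrd state.

def best_route(routes):
    """Kernel-style selection: the route with the lowest metric is used."""
    return min(routes, key=lambda r: r["metric"])

# olsrd originally installed hostN via host1 with metric 2.
table = [{"dst": "hostN", "via": "host1", "metric": 2}]

# Topology changes: the correct route is now via host2, metric 3.
# olsrd adds the new route but fails to delete the outdated one.
table.append({"dst": "hostN", "via": "host2", "metric": 3})

print(best_route(table))  # still the stale route via host1
```

So the duplicate is not merely redundant: as soon as the surviving low-metric entry points at a broken path, traffic keeps following it.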
> >
> >     > olsrd would anyways just create as many routes as there are targets
> >     > (hosts)
> >     > (regardless of the topology)
> >     Yes, but using the plugin we can somehow channel these routes
> somewhat
> >     and make some of them appear as "INFINITE" so OLSR knows the host is
> >     there, but doesn't try to route directly to it. And that's what we
> >     wanted to get.
> >
> > knowing a host/link is there, also needs memory (just less than having
> > it in topology and creating a route for it)
> If we drop this completely we got effects of unreachable hosts.
keeping infinite links instead of dropping them can't make anything
reachable (as they are never used for routing)
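
A toy shortest-path sketch (plain Python, not olsrd's actual SPF implementation) of why an infinite-cost link can never make a host reachable: the route calculation simply never relaxes such links.

```python
import heapq

INFINITE = float("inf")

def dijkstra(graph, src):
    """Shortest-path costs from src; INFINITE links are known but skipped,
    so a node reachable only via such links stays unreachable."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, INFINITE):
            continue
        for v, cost in graph.get(u, []):
            if cost == INFINITE:  # link is known, but never used for routing
                continue
            if d + cost < dist.get(v, INFINITE):
                dist[v] = d + cost
                heapq.heappush(heap, (d + cost, v))
    return dist

# B is only reachable via an INFINITE link: routing never finds it.
graph = {"A": [("B", INFINITE), ("C", 1)], "C": [("D", 2)]}
print(dijkstra(graph, "A"))  # no entry for 'B'
```

Keeping the infinite link only tells olsrd the neighbor exists; it spends memory without contributing any route.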

> I know. But for slow links (as we sometimes have in our network), the
> background traffic is more of a concern. Dividing the Tinc broadcast
> domain if necessary shouldn't be too hard.
yes, that's an (additional) solution, but imho the plugin/setup should scale
without having to divide into groups of 20 nodes

> > (but how does tinc broadcast the messages, sending duplicated packets
> > to all other nodes in the tinc cloud?)
> Tinc internally builds a spanning tree and broadcasts using this
> spanning tree. This tree holds information about all the connections
> every one of the peers can establish (e.g. two internet servers that are
> not configured to establish a connection automatically, but will do so to
> proxy traffic).
> M> would this create 450KB/sec for a 100-node tinc cloud?
> Would have to calc, but you'd get 100*99 messages per TC from 100 nodes.
> So I'd guess without the plugin you'd be more like 4.5MB/s. With this
> plugin and quite sparse connectivity (10 connections per node), 50KB/s.
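
A back-of-the-envelope check of the message count quoted above (the absolute bandwidth figures additionally depend on message size and TC interval, which are assumptions here, not measured tinc/olsrd values):

```python
# If every one of n nodes floods its TC to every other node, the number
# of TC transmissions per interval grows as n*(n-1).
n = 100
tc_transmissions = n * (n - 1)
print(tc_transmissions)  # 9900 transmissions per TC interval

# A sparser advertised topology shrinks each TC message as well:
links_full_mesh = n - 1   # links each node advertises in a full mesh
links_sparse = 10         # with ~10 connections per node
print(links_full_mesh / links_sparse)  # roughly 10x fewer advertised links
```

Both factors (fewer transmissions relayed and smaller messages) compound, which is why the full-mesh and sparse estimates end up nearly two orders of magnitude apart.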

hmm, imho this sounds like a waste of bandwidth,..

why do you actually want n to n links?

avoiding the central server (as a single point of failure),
or avoiding the traffic (and the doubling of it) through a single server,

or both?

as having just one (or a few) central tunnel server(s) would solve all
olsrd protocol and olsrd bandwidth problems,.. (i think that's what Manuel
Munz's tinc options do as well)

for an olsrd plugin (and protocol adaptations) i would try to achieve both
(direct n-to-n routing connectivity, but no direct n-to-n olsrd links)

i guess i would configure most olsrds to use unicast replies instead of
broadcasts on their vpn interfaces (and only some olsrd nodes do real
link sensing with broadcasts)
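
A rough packet-count model of that idea (my own simplification, not an existing olsrd feature or measured behavior): compare everyone broadcasting hellos against only k designated link-sensing nodes broadcasting while the rest reply via unicast.

```python
# Hypothetical model: packets received per hello interval on a
# broadcast-capable VPN with n peers. Parameters are illustrative.

def all_broadcast(n):
    # every node broadcasts a hello received by all n-1 other peers
    return n * (n - 1)

def unicast_replies(n, k):
    # k sensing nodes broadcast (received by n-1 peers each);
    # the remaining n-k nodes send one unicast reply per sensing node
    return k * (n - 1) + (n - k) * k

n, k = 1000, 5
print(all_broadcast(n))       # 999000 packets per interval
print(unicast_replies(n, k))  # 9970 packets per interval
```

The quadratic term collapses to roughly linear in n, which is what makes the scheme interesting at thousands of peers.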

(but i aim for scalability to thousands of vpn peers)
for smaller setups your current approach might be better/easier, but honestly
i still do not understand how you even run into problems with only 10-15 vpn
peers

(maybe it's really just due to not using the flat FIBMetric,..)

> >
> >     But as far as I understand it OLSR doesn't include INFINITE/non-SYMM
> >     links in the HELLO-message.
> >
> > this is true for Tcs but not for Hellos.
> k, so the VPN BCD would have to be split when nearing this limit.
or the olsrd protocol needs to be changed,..


p.s. maybe we can chat about this
(henning is also interested)
