[Olsr-dev] Meshing over VPNs
Benny Baumann
(spam-protected)
Tue Oct 4 15:22:25 CEST 2011
Hi,
On 04.10.2011 14:14, Markus Kittenberger wrote:
>
> On Tue, Oct 4, 2011 at 1:46 PM, Benny Baumann <(spam-protected)> wrote:
>
> Hi Markus,
>
> On 04.10.2011 11:23, Markus Kittenberger wrote:
> > On Tue, Oct 4, 2011 at 2:00 AM, Benny Baumann <(spam-protected)> wrote:
> >
> > Hi guys,
> > (well, several of our routers died regularly from the routing
> > tables being too large).
> >
> > hmm why are the routing tables getting too large?
> When doing FIBMetric "correct", the RT holds not only one route per
>
> use flat! *G*
I'll have to look into this.
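(For reference, that's a one-line switch in olsrd.conf; as far as I
know olsrd accepts "flat", "correct" and "approx" here:)

  # give every OLSR route the same fixed kernel metric, so the FIB
  # keeps one route per destination instead of one per distance
  FIBMetric "flat"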
>
> destination, but one route per distance and host. So sometimes we
> get up to 2 or 3 routes per host.
>
> (which is (or at least sounds like) a bug,..)
I can provide a routing table as created by OLSRd alongside the
graph/topology of the network. Basically the routes are distinct, like:
1. hostN via host1 metric 2
2. hostN via host2 metric 3
So the routing table is correct, except that the second route /could/ be
dropped.
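(For illustration, with made-up addresses, `ip route show` on such a
node then lists something like:)

  # same target, two gateways, differing only in metric
  10.11.0.23 via 10.11.0.1 dev tnc0 metric 2
  10.11.0.23 via 10.11.0.2 dev tnc0 metric 3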
>
> > olsrd would anyway just create as many routes as there are targets
> > (hosts)
> > (regardless of the topology)
> Yes, but using the plugin we can channel these routes somewhat and
> make some of them appear as "INFINITE", so OLSR knows the host is
> there but doesn't try to route directly to it. And that's what we
> wanted to achieve.
>
> knowing a host/link is there also needs memory (just less than
> having it in the topology and creating a route for it)
If we drop this completely, we end up with unreachable hosts.
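(For reference, such a plugin would be hooked in via the usual
LoadPlugin block in olsrd.conf; the plugin name and parameter below are
made up, just to show the shape:)

  LoadPlugin "olsrd_tincpeers.so.0.1"
  {
      # hypothetical switch: advertise indirect Tinc links as INFINITE
      PlParam "NonDirectCost" "INFINITE"
  }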
>
> > but with such a huge layer-2 broadcast domain, olsrd's network
> > topology gets bigger (and therefore the olsrd memory consumption);
> > furthermore, the number of olsrd messages might get very unfunny
> Problems arose for us at only about 10-15 nodes in the Tinc network.
> A live graph can be seen at http://graph.chemnitz.freifunk.net/ffc.svg.
> The plugin doesn't run on all the (pink) nodes yet, but on those which
> have it you'll see no gray lines (which are the virtual connections
> Tinc introduces). The pink lines are direct Tinc connections. BTW:
> this drastically reduces the size of TC messages, which are much more
> common
>
> sure, the tc messages are a bigger traffic/bandwidth problem than the
> hellos (but they can be fragmented into multiple messages)
I know. But for slow links (as we sometimes have in our network), the
background traffic is more of a concern. Dividing the Tinc broadcast
domain, if necessary, shouldn't be too hard.
> so as hellos cannot be fragmented, they limit your maximum number of
> neighbours,..
> but for ipv4 it's more than 100 neighbours that fit into one packet,..
Thanks for this info. But well, we're still well below this. And as far
as I can see, the interesting point for HELLOs is the size of the
associated broadcast domain (BCD) per node.
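(As a rough sanity check of that >100 figure, assuming a 1500-byte MTU
and olsrd's LQ HELLO layout of 4 address bytes + 4 LQ bytes per IPv4
neighbour; header sizes are approximate:)

  1500 - 28 (IPv4+UDP) - ~20 (OLSR packet/message headers) = ~1452 bytes
  1452 / 8 bytes per neighbour entry = ~180 neighbours per packet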
>
> than the HELLO messages (I sent the stats for our network previously
> on the list). For all the Tinc nodes we use HELLO intervals of
> sometimes up to 5 minutes, thus there's not much concern regarding
> HELLO messages;
>
> ok, fine. but as said above it's not only about bandwidth,..
Yep.
>
> more regarding TC. Currently we have 4.5 KB/s TC down and 0.5 KB/s
> TC up, which makes using the network on EDGE links feasible (not
> quite nice, but feasible).
>
> this sounds like a very small network,.. (but I do not know your TC
> timings on all the nodes)
Usually 5-15 seconds, sometimes 30, with validity times of up to 60-180
seconds.
> (but how does tinc broadcast the messages, sending duplicated packets
> to all other nodes in the tinc cloud?)
Tinc internally builds a spanning tree and broadcasts using this
spanning tree. This tree holds information about all the connections
every one of the peers can establish (e.g. two internet servers that
are not configured to establish a connection automatically, but will do
so to proxy traffic).
> would this create 450 KB/sec for a 100-node tinc cloud??
I'd have to calculate it exactly, but you'd get 100*99 message copies
per TC round from 100 nodes. So I'd guess without the plugin you'd see
more like 4.5 MB/s. With this plugin and quite sparse connectivity (10
connections per node) more like 50 KB/s.
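(Back-of-the-envelope, under the naive assumption that every node's TC
reaches every other node:)

  100 nodes * 99 recipients = 9,900 message copies per TC round

On top of that the plugin shrinks each individual TC, since only real
Tinc connections get advertised instead of the full clique; the two
effects compound, which is where the rough 4.5 MB/s vs. 50 KB/s
estimate comes from.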
>
> But as far as I understand it, OLSR doesn't include INFINITE/non-SYMM
> links in the HELLO message.
>
> this is true for TCs, but not for HELLOs.
OK, so the VPN BCD would have to be split when nearing this limit.
>
> And as we are dropping the default quality down to INFINITE, those
> get removed.
>
>
> Markus
BenBE.