<div class="gmail_quote">On Tue, Oct 4, 2011 at 3:22 PM, Benny Baumann <span dir="ltr"><<a href="mailto:BenBE1987@gmx.net">BenBE1987@gmx.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi,<br>><div class="im">
> use flat! *G<br>
</div>Have to look into this.<br>
<div class="im">><br>
> destination, but one route per distance and host. So sometimes we<br>
> get up<br>
> to 2 or 3 routes per host.<br>
><br>
> (which is (or at least sounds like) a bug,..)<br>
</div>I can provide a routing table as created by OLSRd alongside the<br>
graph/topology of the network. Basically the routes are distinct like<br>
1. hostN via host1 metric 2<br>
2. hostN via host2 metric 3<br>
So the routing table is correct, </blockquote><div>no its not (-;</div><div>olsrd shall only create one route to a target,..</div><div>the others just exists as olsrd failed to delete the outdated routes,..</div><div><br>
</div><div>and if the metric of an updated route gets higher, the old (now wrong) route will still be used instead of the new (correct) route</div><div><br></div><div>(in fact I know this (or a very similar) bug, but believed it had already been resolved.)</div>
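The intended behaviour described here (a single route per target, replaced on every update even when the new metric is worse) can be illustrated with a minimal sketch. This is not olsrd's actual code; the function name and the dict-based table are hypothetical, purely to show the "unconditional replace" rule that avoids stale routes:

```python
# Hypothetical sketch: keep exactly one route per destination and
# always overwrite on update, even when the new metric is worse.
# (The buggy behaviour would leave the old metric-2 route in place.)
def update_route(table, dest, via, metric):
    """table maps dest -> (via, metric); overwrite unconditionally."""
    table[dest] = (via, metric)  # unconditional replace avoids stale routes

table = {}
update_route(table, "hostN", "host1", 2)
update_route(table, "hostN", "host2", 3)  # metric got worse; still replaces
print(table)  # {'hostN': ('host2', 3)} - one route per destination
```

With this rule the routing table from the example above never holds two routes to hostN at once, which is the behaviour Markus says olsrd should have.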
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">except that the second route /could/ be<br>
dropped.<br>
<div class="im">><br>
> > olsrd would anyways just create as many routes as there are targets<br>
> > (hosts)<br>
> > (regardless of the topology)<br>
> Yes, but using the plugin we can channel these routes somewhat<br>
> and make some of them appear as "INFINITE" so OLSR knows the host is<br>
> there, but doesn't try to route directly to it. And that's what we<br>
> wanted to get.<br>
><br>
> knowing a host/link is there, also needs memory (just less than having<br>
> it in topology and creating a route for it)<br>
</div>If we drop this completely, we get unreachable hosts.<br></blockquote><div>keeping infinite links instead of dropping them can't make anything reachable (as they are never used for routing)</div><div><br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">>I know. But for slow links (as we sometimes have in our network, the</div>
background traffic is more of a concern. Dividing the Tinc broadcast<br>
domain if necessary shouldn't be too hard.<br></blockquote><div>yes, that's an (additional) solution, but imho the plugin/setup should scale without having to divide into groups of 20 nodes</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">> (but how does tinc broadcast the messages, sending duplicated packets<br>
> to all other nodes in the tinc cloud?)<br>
</div>Tinc internally builds a spanning tree and broadcasts using this<br>
spanning tree. This tree holds information about all the connections<br>
every one of the peers can establish (e.g. two internet servers that are<br>
not configured to establish a connection automatically, but will do so to<br>
proxy traffic).<br>
<div class="im">M> would this create 450KB/sec for a 100 nodes tinc cloud??<br>
</div>Would have to calc, but you'd get 100*99 messages per TC from 100 nodes.<br>
So I'd guess without the plugin you'd see more like 4.5MB/s. With this<br>
plugin and quite sparse connectivity (10 connections per node), 50KB/s.<br></blockquote><div><br></div><div>hmm, imho this sounds like a waste of bandwidth...</div><div><br></div><div>btw:</div><div>why do you actually want n-to-n links?</div>
<div><br></div><div>avoiding the central server (as a single point of failure)</div><div>or avoiding the traffic (and the doubling of it) through a single server</div><div><br></div><div>or both?</div><div><br></div><div>as having just one (or a few) central tunnel server(s) would solve all olsrd-protocol and olsrd-bandwidth problems... (I think that's what Manuel Munz's tinc options do as well)</div>
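A quick sanity check of the 100*99-messages-per-TC figure quoted earlier in the thread. This is only a back-of-envelope sketch; the function name and the flooding model (each originator's TC crossing every link of every node in a full mesh, versus only the configured links in a sparse topology) are my assumptions, not olsrd's or tinc's actual implementation:

```python
# Back-of-envelope estimate of TC flooding overhead, full mesh vs.
# sparse connectivity, using the node counts quoted in the thread.
def tc_messages_per_interval(nodes, links_per_node=None):
    """Rough count of TC messages flooded per TC interval.

    Full mesh: each of the n nodes' TCs crosses the n-1 links of
    every node, i.e. n * (n - 1) messages.
    Sparse: each node only floods over its own links_per_node links.
    """
    if links_per_node is None:  # full mesh: every node links to all others
        links_per_node = nodes - 1
    return nodes * links_per_node

print(tc_messages_per_interval(100))      # full mesh: 9900 messages
print(tc_messages_per_interval(100, 10))  # 10 links/node: 1000 messages
```

Multiplying such per-interval message counts by the TC size and emission rate is what yields bandwidth estimates of the kind quoted above.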
<div><br></div><div>for an olsrd plugin (and protocol adaptations) I would try to achieve both things.</div><div>(direct n-to-n routing connectivity, but no direct n-to-n olsrd link sensing)</div><div><br></div><div>I guess I would configure most olsrd instances to use unicast replies instead of broadcasts on their VPN interfaces (and only some olsrd nodes would do real link sensing with broadcasts)</div>
<div><br></div><div>(but I aim for scalability to thousands of VPN peers)</div><div>for smaller setups your current approach might be better/easier, but honestly I still do not understand how you even run into problems with only 10-15 VPN neighbours.</div>
<div><br></div><div>(maybe it's really just due to not using flat FIBMetric...)</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">><br>
> But as far as I understand it OLSR doesn't include INFINITE/non-SYMM<br>
> links in the HELLO-message.<br>
><br>
> this is true for TCs but not for Hellos.<br>
</div>k, so the VPN BCD would have to be split when nearing this limit.<br></blockquote><div>yes,</div><div>or the olsrd protocol needs to be changed...</div>
<div class="im">></div></blockquote><div> Markus</div><div><br></div><div>p.s. maybe we can chat about this</div><div>(henning is also interested)</div></div>