[Olsr-dev] mDns plugin improvement

Teco Boot
Thu May 31 07:30:56 CEST 2012


On 31 May 2012, at 00:47, ZioPRoTo (Saverio Proto) wrote:

>> Let's try to comply with standards. We should not break nodes that
>> discard mDNS packets not having TTL=255. And we should support
>> protocols that have TTL=1.
> 
> I still do not see this problem.
> I agree the application should generate packets with TTL=255. But
> shouldn't routers alter the TTL value of the IP packets as usual?
mDNS is designed for link-local use. One approach is to bridge the mDNS
packets: bridges do not change the TTL, so it is fully transparent.

I use this for Groove peer discovery with p2pd, which sends broadcasts
with TTL=1. So your approach may work with many mDNS implementations,
but not with others. Also think of future mDNS implementations that use
TTL=1.
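
For reference, the TTL an application puts on its discovery packets is
just a socket option it chooses itself, which is why a forwarder cannot
assume every protocol uses 255. A minimal sketch of the two cases above
(plain Linux sockets, nothing mDNS-specific):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Set the outgoing multicast TTL on an existing UDP socket.
     * mDNS senders are expected to use 255; a protocol that wants to
     * stay strictly link-local uses 1 (for broadcast sockets the
     * equivalent knob is IP_TTL). */
    static int set_discovery_ttl(int sock, int link_local_only)
    {
        unsigned char ttl = link_local_only ? 1 : 255;
        return setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL,
                          &ttl, sizeof(ttl));
    }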

> If the packet did not originate on the local link then TTL != 255, and
> this is okay with the standard.
There are some risks. Nodes could discard packets with TTL != 255.
With avahi, are you sure 100% of the nodes have check-response-ttl=no?
And RFC 4903:
      o  Older clients of Apple's Bonjour [MDNS] use messages with TTL
         255 checked on receipt, and only respond to queries from
         addresses in the same subnet.  (Note that multi-link subnets do
         not necessarily break this, as this behavior is to constrain
         communication to within a subnet, where a subnet is only a
         subset of a link.  However, it will not work across a multi-
         link subnet.)
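
For what it's worth, the receiver-side check that makes routed packets
disappear is easy to reproduce. A minimal Linux sketch (not avahi's
actual code) that reads the received TTL via IP_RECVTTL:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Return the IP TTL of a received datagram, or -1 if it was not
     * delivered.  The socket needs IP_RECVTTL enabled beforehand:
     *     int on = 1;
     *     setsockopt(sock, IPPROTO_IP, IP_RECVTTL, &on, sizeof(on));
     * A strict responder drops the packet when the value is not 255,
     * which is exactly what happens once a router has decremented it. */
    static int received_ttl(struct msghdr *msg)
    {
        struct cmsghdr *cmsg;

        for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL;
             cmsg = CMSG_NXTHDR(msg, cmsg)) {
            if (cmsg->cmsg_level == IPPROTO_IP &&
                cmsg->cmsg_type == IP_TTL) {
                int ttl;
                memcpy(&ttl, CMSG_DATA(cmsg), sizeof(ttl));
                return ttl;
            }
        }
        return -1;
    }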
 
>> The packet rate of mDNS would be low, so hash calculation looks
>> acceptable to me. This is the way SMF works:
>> http://tools.ietf.org/html/rfc6621#section-6.2.2
>> I prefer having the standards-based method implemented. In the
>> IETF we had a long discussion, with this outcome.
> 
> The rate of mDNS packets is not low if you have a big network.
> We have hundreds of end devices connected that generate a huge amount
> of mDNS traffic!
> 
> Look at the size of our network:
> http://tuscolomesh.ninux.org/images/topology.png
> and the IPv6 one:
> http://cleopatra.ninux.org/topology.png
> 
> We run both networks on devices that have small CPU and memory
> resources, mostly Ubiquiti NanoStation M5. We already have issues with
> CPU spikes to 100%. We have two olsrd instances running, one for IPv4
> and one for IPv6.
You might need better flooding for such networks.

At any instant a CPU is either busy or idle; the 100% figure is an
average over some interval. Is your CPU actually overloaded?

I was in favor of the ID-based dup check; it is less CPU demanding.
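
To make the trade-off concrete, here is a rough sketch of the two
duplicate checks being discussed (heavily simplified; neither is the
exact RFC 6621 algorithm nor the plugin's current code):

    #include <stddef.h>
    #include <stdint.h>

    #define DUP_WINDOW 64

    /* Hash-based check, in the spirit of RFC 6621 section 6.2.2:
     * digest the (immutable part of the) packet and remember recent
     * digests.  Costs a hash over the whole payload for every forwarded
     * packet, and a real implementation must also handle collisions. */
    static uint32_t seen_hashes[DUP_WINDOW];
    static unsigned seen_next;

    static uint32_t fnv1a(const uint8_t *buf, size_t len)
    {
        uint32_t h = 2166136261u;
        while (len--)
            h = (h ^ *buf++) * 16777619u;
        return h;
    }

    static int is_duplicate_hash(const uint8_t *pkt, size_t len)
    {
        uint32_t h = fnv1a(pkt, len);
        for (unsigned i = 0; i < DUP_WINDOW; i++)
            if (seen_hashes[i] == h)
                return 1;
        seen_hashes[seen_next++ % DUP_WINDOW] = h;
        return 0;
    }

    /* ID-based check: compare only an (originator, sequence number)
     * pair stamped on the packet by an encapsulating header.  Much
     * cheaper per packet, but it needs that extra header; a real
     * implementation keeps one entry per originator. */
    struct dup_entry {
        int      seen_any;   /* set once the first packet arrives */
        uint16_t last_seq;   /* highest sequence number seen so far */
    };

    static int is_duplicate_id(struct dup_entry *e, uint16_t seqno)
    {
        if (e->seen_any && (int16_t)(seqno - e->last_seq) <= 0)
            return 1;        /* already seen (wrap-around safe) */
        e->seen_any = 1;
        e->last_seq = seqno;
        return 0;
    }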


Anyway, how about making it a config option and running experiments with it?
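
Something like this in olsrd.conf, for instance (the "DupDetection"
parameter is purely hypothetical, just to illustrate; the plugin
version and existing parameter names are from memory):

    LoadPlugin "olsrd_mdns.so.1.0.1"
    {
        # LAN-side interface the plugin captures mDNS traffic on
        PlParam "NonOlsrIf"    "eth0"
        # hypothetical switch between the two duplicate checks
        PlParam "DupDetection" "hash"
    }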


Teco


