[Olsr-dev] [RFC] problems with multiple interfaces on the same medium + proposed solution

Ferry Huberts
Fri Jan 6 11:23:01 CET 2017


Below we describe a problem we encountered with olsrd, and we explicitly
ask any interested party to read this and provide feedback, especially
since we also propose a solution that changes the way HELLO messages are
generated and sent.

Ok, now into it...

A while ago we noticed that neighbours of nodes with multiple interfaces
on the same medium report infinite costs on their links to those nodes.

Below is the setup in which we noticed the problem; the problem is 100%
reproducible.

        wlan0                                                 wlan0
        172.31.175.97/16                           172.31.175.61/16
     (((*))) ------------------------------------------------- (((*)))
        |                                                         |
        |                                                         |
        |                                                         |
    ____|___   172.29.175.97/15   ________  172.29.175.61/15  ____|____
   |         |-eth1.2580---------|        |--------eth1.2580-|         |
   | Node 97 |                   | Switch |                  | Node 61 |
   |_________|-eth2.2580---------|________|                  |_________|
               172.28.175.97/15


In this setup node 97 will report normal link costs for its (wired) links
to node 61 (see the first table below), while node 61 will report infinite
link costs for both its (wired) links to node 97 (see the second table
below).

   Table: Links (node 97)
   Local IP       Remote IP       Hyst.   LQ      NLQ     Cost
   172.29.175.97  172.29.175.61   0.000   1.000   1.000   0.100
   172.28.175.97  172.29.175.61   0.000   1.000   1.000   0.100
   172.31.175.97  172.31.175.61   0.000   1.000   1.000   1.000

   Table: Links (node 61)
   Local IP       Remote IP       Hyst.   LQ      NLQ     Cost
   172.29.175.61  172.29.175.97   0.000   1.000   0.000   INFINITE
   172.29.175.61  172.28.175.97   0.000   1.000   0.000   INFINITE
   172.31.175.61  172.31.175.97   0.000   1.000   1.000   1.000
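
A note on reading the tables: with olsrd's ETX-style link-quality metrics
the cost is essentially 1 / (LQ x NLQ), scaled per link type by the LQ
plugin in use (presumably why the healthy wired links show 0.100 instead
of 1.000), so an NLQ of 0.000 comes out as an INFINITE cost. A minimal
sketch of that relation, ignoring the per-link-type scaling:

   #include <math.h>
   #include <stdio.h>

   /* ETX-style link cost, simplified: 1 / (LQ * NLQ), and infinite as
    * soon as either direction reports a quality of 0. */
   static double link_cost(double lq, double nlq)
   {
     return (lq > 0.0 && nlq > 0.0) ? 1.0 / (lq * nlq) : INFINITY;
   }

   int main(void)
   {
     printf("cost(1.000, 1.000) = %.3f\n", link_cost(1.0, 1.0));
     printf("cost(1.000, 0.000) = %s\n",
            isinf(link_cost(1.0, 0.0)) ? "INFINITE" : "finite");
     return 0;
   }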


Checking the HELLO messages received on node 61 from node 97, we see the
following:

   [node 61] # tcpdump -vni eth1.2580 udp port 698
   tcpdump: listening on eth1.2580, link-type EN10MB (Ethernet), capture size 262144 bytes
   06:21:23.528204 IP (tos 0xc0, ttl 1, id 42455, offset 0, flags [DF], proto UDP (17), length 80)
       172.28.175.97.698 > 255.255.255.255.698: OLSRv4, seq 0xf7c0, length 52
      Hello-LQ Message (0xc9), originator 172.31.175.97, ttl 1, hop 0
        vtime 3.000s, msg-seq 0x533d, length 48
        hello-time 1.000s, MPR willingness 3
          link-type Symmetric, neighbor-type Symmetric, len 12
            neighbor 172.29.175.61, link-quality 0.00%, neighbor-link-quality 0.00%
          link-type Unspecified, neighbor-type Symmetric, len 20
            neighbor 172.31.175.61, link-quality 0.00%, neighbor-link-quality 0.00%
            neighbor 172.29.175.61, link-quality 0.00%, neighbor-link-quality 0.00%


Node 61 receives HELLO messages from node 97 that report (among other things):
1- a  SYMMETRIC   link-type to node 61 (172.29.175.61)
2- an UNSPECIFIED link-type to node 61 (172.29.175.61)

Clearly, this is 'confusing', and it is the root cause of the infinite
costs that node 61 reports for the links, as shown above.
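
For reference, RFC 3626 (section 6.1.1) packs both values into a single
link-code octet: the link type in the low two bits and the neighbour type
in the next two. The tiny program below is only our illustration of that
encoding (the constants follow the RFC, the macro name is ours); it makes
explicit that node 97 advertises two conflicting link codes for the very
same neighbour address:

   #include <stdio.h>

   /* Link and neighbour types from RFC 3626, section 6.1.1. */
   #define UNSPEC_LINK 0  /* no specific information about the link */
   #define ASYM_LINK   1
   #define SYM_LINK    2
   #define LOST_LINK   3
   #define SYM_NEIGH   1

   /* Neighbour type goes in bits 2-3, link type in bits 0-1. */
   #define CREATE_LINK_CODE(neigh, link) (((neigh) << 2) | (link))

   int main(void)
   {
     /* The two entries for 172.29.175.61 in the capture above: */
     printf("symmetric entry:   link code 0x%02x\n",
            CREATE_LINK_CODE(SYM_NEIGH, SYM_LINK));    /* 0x06 */
     printf("unspecified entry: link code 0x%02x\n",
            CREATE_LINK_CODE(SYM_NEIGH, UNSPEC_LINK)); /* 0x04 */
     return 0;
   }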

We posit that in a HELLO message the same neighbour should NEVER be
reported with conflicting information.


Proposed solution:

1- NEVER report a neighbour more than once in a HELLO message
2- Use the 'best' values for a neighbour that is reported in a HELLO
    message


1- Requires that we de-duplicate neighbours when constructing a HELLO
    message.

2- Requires that - when a neighbour is already present in the HELLO
    message under construction - we determine the 'best' values and
    overwrite the values of the already-present neighbour if they are
    worse (see the sketch below).
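
To make this concrete, below is a minimal, self-contained sketch in C of
what the two steps could look like while a HELLO message is under
construction. The struct, the helper names (hello_neighbor,
link_type_rank, hello_add_or_merge) and the 'best' ordering (symmetric
over asymmetric over unspecified over lost, higher qualities win) are our
own illustration, not olsrd's internal data structures; the actual patch
hooks into olsrd's HELLO generation instead:

   #include <stdio.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>

   /* Link types from RFC 3626, section 6.1.1. */
   #define UNSPEC_LINK 0
   #define ASYM_LINK   1
   #define SYM_LINK    2
   #define LOST_LINK   3

   struct hello_neighbor {
     struct in_addr addr;        /* neighbour (interface) address */
     int link_type;              /* UNSPEC_LINK .. LOST_LINK */
     float link_quality;         /* LQ as advertised */
     float neigh_link_quality;   /* NLQ as advertised */
   };

   /* One possible 'best' ordering: a symmetric link beats an
    * asymmetric one, which beats 'unspecified'; a lost link is worst. */
   static int link_type_rank(int link_type)
   {
     switch (link_type) {
     case SYM_LINK:    return 3;
     case ASYM_LINK:   return 2;
     case UNSPEC_LINK: return 1;
     default:          return 0; /* LOST_LINK */
     }
   }

   /* Add a neighbour to the HELLO under construction, or, when it is
    * already present, merge in the better values instead (steps 1+2). */
   static void hello_add_or_merge(struct hello_neighbor *neighbors,
                                  size_t *count,
                                  const struct hello_neighbor *n)
   {
     for (size_t i = 0; i < *count; i++) {
       if (neighbors[i].addr.s_addr != n->addr.s_addr)
         continue;

       /* Duplicate: keep one entry, upgrade it where 'n' is better. */
       if (link_type_rank(n->link_type) > link_type_rank(neighbors[i].link_type))
         neighbors[i].link_type = n->link_type;
       if (n->link_quality > neighbors[i].link_quality)
         neighbors[i].link_quality = n->link_quality;
       if (n->neigh_link_quality > neighbors[i].neigh_link_quality)
         neighbors[i].neigh_link_quality = n->neigh_link_quality;
       return;
     }

     /* First occurrence of this neighbour: append it. */
     neighbors[(*count)++] = *n;
   }

   int main(void)
   {
     struct hello_neighbor hello[8];
     size_t count = 0;

     /* The situation from the capture: the same neighbour once as
      * symmetric and once as unspecified. */
     struct hello_neighbor sym = {
       .link_type = SYM_LINK, .link_quality = 1.0f, .neigh_link_quality = 1.0f
     };
     inet_aton("172.29.175.61", &sym.addr);

     struct hello_neighbor unspec = sym;
     unspec.link_type = UNSPEC_LINK;
     unspec.link_quality = 0.0f;
     unspec.neigh_link_quality = 0.0f;

     hello_add_or_merge(hello, &count, &sym);
     hello_add_or_merge(hello, &count, &unspec); /* merged, not duplicated */

     printf("%zu entry for %s, link-type %d (symmetric)\n",
            count, inet_ntoa(hello[0].addr), hello[0].link_type);
     return 0;
   }

With this in place, the HELLO messages in the capture above would carry a
single, symmetric entry for 172.29.175.61.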


We have a quick-and-dirty patch implementing this proposed solution and it
works well. We are currently cleaning up that patch.


We're explicitly soliciting feedback on this approach.



Greetings,

Iwan, Teco and Ferry


