[Olsr-users] How to prioritize OLSR UDP packets.

(spam-protected) (spam-protected)
Tue Jan 12 17:26:55 CET 2010


On 01/12/2010 04:02 PM, (spam-protected) wrote:
> On 01/12/2010 03:10 PM, Michael Rack wrote:
>>
>>> ... to specify the available bandwidth. Otherwise (i.e. with
>>> "infinite" bandwidth available) the tc queues will always have zero
>>> packets and prioritization will not work.
>>
>> Markus Kittenberger has posted a short snippet of the QoS setup.
>> Message-ID: (spam-protected)
>>
>> This setup handles three SFQs (Stochastic Fairness Queueing). The
>> filters have priority settings 1, 2, 3, 4 and 5. With this setup you
>> don't have to know the uplink speed. Matching packets that are queued
>> will be reordered as defined; the rest is done by your OS network stack.
> 
> 
> The man page of SFQ [http://linux.die.net/man/8/tc-sfq] says:
> 
> """
> Please note that SFQ, like all non-shaping (work-conserving) qdiscs, is
> only useful if it owns the queue. This is the case when the link speed
> equals the actually available bandwidth. This holds for regular phone
> modems, ISDN connections and direct non-switched ethernet links.
> 
> Most often, cable modems and DSL devices do not fall into this category.
> The same holds for when connected to a switch and trying to send data to
> a congested segment also connected to the switch.
> 
> In this case, the effective queue does not reside within Linux and is
> therefore not available for scheduling.
> 
> Embed SFQ in a classful qdisc to make sure it owns the queue.
> """
> 
> I think that:
> 1) Wireless links do not fall into the category of "link speed being
> equal to the available bandwidth".
> 
> 2) If you have three SFQs attached to a PRIO, neither the SFQs nor the
> PRIO will own the queue. This is because the PRIO is work-conserving and
> also because there are three unlimited SFQs attached to the same qdisc.
> 
> However, I am about to run a test. I will post my results...


The two tests I made involved three (real) PCs. PC1 was connected to
PC2 through an Ethernet cable, while PC2 was connected to PC3 through
an 802.11b 11Mbps link.

+-----+        +-----+        +-----+
| PC1 |--lan1--| PC2 |--lan2--| PC3 |
+-----+        +-----+        +-----+

Addresses:
lan1: 10.0.0.0/24
lan2: 172.16.250.0/24

PC1: 10.0.0.2
PC2: 172.16.250.1 on ath0 (wireless) and 10.0.0.1 on eth0
PC3: 172.16.250.101

I ran the following script on PC2:

--- begin script ---
#!/bin/sh
DEV="ath0"
tc qdisc del dev $DEV root
#setup 3 sfq queues
tc qdisc add dev $DEV root handle 1: prio
tc qdisc add dev $DEV parent 1:1 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:2 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:3 handle 30: sfq perturb 10

tc filter add dev $DEV protocol ip parent 1: prio 1 \
u32 match ip tos 0x10 0xff flowid 1:1
tc filter add dev $DEV protocol ip parent 1: prio 2 \
u32 match ip tos 0x20 0xff flowid 1:2
tc filter add dev $DEV protocol ip parent 1: prio 3 \
u32 match ip tos 0x30 0xff flowid 1:3
--- end script ---
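(To see which filter each packet hits, the installed filters and their
hit counters can be listed as well; this assumes the same device name
as in the script above.)

```shell
# List the u32 filters attached to the root prio qdisc, with
# per-filter statistics: the "success" counters show how many
# packets each ToS match has classified.
tc -s filter show dev ath0 parent 1:
```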

Then, with iperf (version 2.0.2), I ran two tests.

=== Test 1 - UDP: ===
I started two UDP servers on PC3, with the following commands:
iperf -s -u -i 3 -p 4455
iperf -s -u -i 3 -p 4456

Then on PC1, I issued the first stream:
sudo iperf -u -c 172.16.250.101 -b 10M -p 4456 -t 600 -S 0x30
getting a throughput of about 7.8Mbps, and then the second stream:
sudo iperf -u -c 172.16.250.101 -b 10M -p 4455 -t 600 -S 0x10

I got a throughput of about 7.2Mbps on the first stream and of about
750kbps on the second stream.

I expected the first stream (which, with ToS 0x30, goes to the
lowest-priority band 1:3) to starve.

The ToS is correctly set. On PC2:
$ sudo tcpdump -v -i eth0 "ip dst 172.16.250.101" -c 8
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96
bytes
17:07:20.648915 IP (tos 0x30, ttl 64, id 30755, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.43344 > 172.16.250.101.4456: UDP,
length 1470
17:07:30.659988 IP (tos 0x10, ttl 64, id 27232, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.60415 > 172.16.250.101.4455: UDP,
length 1470
17:07:30.660043 IP (tos 0x30, ttl 64, id 30756, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.43344 > 172.16.250.101.4456: UDP,
length 1470
17:07:30.660079 IP (tos 0x10, ttl 64, id 27233, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.60415 > 172.16.250.101.4455: UDP,
length 1470
17:07:30.660110 IP (tos 0x30, ttl 64, id 30757, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.43344 > 172.16.250.101.4456: UDP,
length 1470
17:07:30.660141 IP (tos 0x10, ttl 64, id 27234, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.60415 > 172.16.250.101.4455: UDP,
length 1470
17:07:30.660172 IP (tos 0x30, ttl 64, id 30758, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.43344 > 172.16.250.101.4456: UDP,
length 1470
17:07:30.660203 IP (tos 0x10, ttl 64, id 27235, offset 0, flags [DF],
proto UDP (17), length 1498) 10.0.0.2.60415 > 172.16.250.101.4455: UDP,
length 1470
8 packets captured
17039 packets received by filter
16969 packets dropped by kernel


And the packets are correctly classified by the filter:
$ tc -s qdisc sh dev ath0
qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 137106316 bytes 90680 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:1 limit 127p quantum 1588b perturb 10sec
 Sent 52103520 bytes 34460 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 20: parent 1:2 limit 127p quantum 1588b perturb 10sec
 Sent 1180 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 30: parent 1:3 limit 127p quantum 1588b perturb 10sec
 Sent 85001616 bytes 56218 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

So in the UDP case, the scheduler seems *not* to work.


=== Test 2 - TCP: ===
Same setup and same scheduling script as above.

I started two TCP servers on PC3, with the following commands:
iperf -s -i 3 -p 4455
iperf -s -i 3 -p 4456

Then on PC1, I issued the first stream:
sudo iperf -c 172.16.250.101 -p 4456 -t 600 -S 0x30
getting a throughput of about 6.25Mbps, and then the second stream:
sudo iperf -c 172.16.250.101 -p 4455 -t 600 -S 0x10


After about 5 seconds, during which the first stream slowly lowers its
bandwidth, the two streams reach a point where the first stream has
slightly more bandwidth than the second (3.2Mbps vs. 3Mbps).

Here too, I expected the first stream to starve.

The ToS is correctly set:
$ sudo tcpdump -v -i eth0 "ip dst 172.16.250.101" -c 8
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96
bytes
17:22:43.400067 IP (tos 0x30, ttl 64, id 31263, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52355 > 172.16.250.101.4456: .
1499508491:1499509951(1460) ack 3583315616 win 92
17:22:53.407906 IP (tos 0x30, ttl 64, id 31264, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52355 > 172.16.250.101.4456: .
1460:2920(1460) ack 1 win 92
17:22:53.407955 IP (tos 0x30, ttl 64, id 31265, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52355 > 172.16.250.101.4456: .
2920:4380(1460) ack 1 win 92
17:22:53.407990 IP (tos 0x30, ttl 64, id 31266, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52355 > 172.16.250.101.4456: .
4380:5840(1460) ack 1 win 92
17:22:43.405241 IP (tos 0x30, ttl 64, id 31267, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52355 > 172.16.250.101.4456: .
5840:7300(1460) ack 1 win 92
17:22:43.411194 IP (tos 0x10, ttl 64, id 63343, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52526 > 172.16.250.101.4455: .
1559365993:1559367453(1460) ack 3658762903 win 92
17:22:43.411312 IP (tos 0x10, ttl 64, id 63344, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52526 > 172.16.250.101.4455: .
1460:2920(1460) ack 1 win 92
17:22:43.411435 IP (tos 0x10, ttl 64, id 63345, offset 0, flags [DF],
proto TCP (6), length 1500) 10.0.0.2.52526 > 172.16.250.101.4455: .
2920:4380(1460) ack 1 win 92
8 packets captured
5297 packets received by filter
5236 packets dropped by kernel

And the tc filter classifies correctly:
$ tc -s qdisc sh dev ath0
qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 1664096994 bytes 1100261 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:1 limit 127p quantum 1588b perturb 10sec
 Sent 743395632 bytes 491497 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 20: parent 1:2 limit 127p quantum 1588b perturb 10sec
 Sent 14820 bytes 124 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 30: parent 1:3 limit 127p quantum 1588b perturb 10sec
 Sent 920686542 bytes 608640 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

So also in the TCP case, the scheduler seems *not* to work.

Perhaps I did something wrong?
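Otherwise, my next test will probably embed the prio/SFQ tree under a
shaping qdisc (HTB), as the man page suggests, so that the queue builds
up inside Linux instead of in the wireless driver. A rough sketch; the
7mbit rate is only a guess (slightly below the ~7.8Mbps the link
sustained above), not a measured value:

```shell
#!/bin/sh
# Sketch: shape to just below the available wireless throughput so the
# backlog stays inside Linux, then attach the prio/SFQ tree under the
# HTB class. The 7mbit rate is an assumption, not a measurement.
DEV="ath0"
tc qdisc del dev $DEV root 2>/dev/null

# HTB root with a single class limited below the link rate,
# so HTB (not the driver) owns the queue; unclassified traffic
# falls into class 1:10 via "default 10".
tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1: classid 1:10 htb rate 7mbit ceil 7mbit

# Same prio + SFQ setup as before, now below the shaping class.
tc qdisc add dev $DEV parent 1:10 handle 2: prio
tc qdisc add dev $DEV parent 2:1 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 2:2 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 2:3 handle 30: sfq perturb 10

# ToS-based classification into the three prio bands.
tc filter add dev $DEV protocol ip parent 2: prio 1 \
u32 match ip tos 0x10 0xff flowid 2:1
tc filter add dev $DEV protocol ip parent 2: prio 2 \
u32 match ip tos 0x20 0xff flowid 2:2
tc filter add dev $DEV protocol ip parent 2: prio 3 \
u32 match ip tos 0x30 0xff flowid 2:3
```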

Bye,
Clauz

