Firewall Settings > QoS Mapping
Queue processing utilizes a time division scheme of approximately 1/256th of a second per
time-slice. Within a time-slice, evaluation begins with the priority 0 queues, and transmission
eligibility is determined on a packet-by-packet basis by measuring the packet’s length against the
queue credit pool. If sufficient credit is available, the packet is transmitted and the queue and
link credit pools are decremented accordingly. As long as packets remain in the queue, and as
long as Guaranteed link and queue credits are available, packets from that queue continue
to be processed. When the Guaranteed queue credits are depleted, the next queue at that priority
level is processed. The same process is repeated for the remaining priority queues, and after
priority 7 is completed, processing begins again with priority 0.
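As a rough illustration, the Guaranteed pass within a single time-slice could be sketched as follows. This is not SonicOS source code; the class names, fields, and credit accounting below are hypothetical and simplified.

from dataclasses import dataclass, field
from typing import List

TIME_SLICE_SEC = 1.0 / 256          # approximately 1/256th of a second per slice

@dataclass
class Packet:
    length: int                      # packet length in bytes

@dataclass
class Queue:
    guaranteed_bps: float            # configured Guaranteed rate for this queue
    credit: float = 0.0              # per-slice credit pool (bytes)
    packets: List[Packet] = field(default_factory=list)

@dataclass
class Link:
    guaranteed_bps: float            # Guaranteed rate for the link
    credit: float = 0.0              # per-slice link credit pool (bytes)

def refill_credits(link: Link, queues: List[Queue]) -> None:
    """Refill the Guaranteed credit pools at the start of each time-slice."""
    link.credit = link.guaranteed_bps * TIME_SLICE_SEC / 8
    for q in queues:
        q.credit = q.guaranteed_bps * TIME_SLICE_SEC / 8

def guaranteed_pass(link: Link, queues_by_priority: List[List[Queue]]) -> None:
    """Walk priorities 0 through 7, spending Guaranteed credits packet by packet."""
    for priority in range(8):                       # priority 0 is evaluated first
        for q in queues_by_priority[priority]:
            while q.packets and link.credit > 0:
                pkt = q.packets[0]
                if pkt.length > q.credit or pkt.length > link.credit:
                    break                           # insufficient credit for this packet
                q.packets.pop(0)
                q.credit -= pkt.length              # decrement the queue credit pool
                link.credit -= pkt.length           # decrement the link credit pool
                # transmission of pkt would happen here
            # Guaranteed credits depleted (or queue empty): move to the next
            # queue at this priority level, then on to the next priority.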
The scheduling of excess bandwidth is strict priority, with per-packet round-robin within each
priority. In other words, if there is excess bandwidth available in a given time-slice, all the
queues within a priority take turns sending packets until the excess is depleted, and then
processing moves to the next priority.
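Continuing the same hypothetical sketch (reusing the Packet and Queue definitions above), the excess-bandwidth phase can be pictured as strict priority across levels with per-packet round-robin among the queues inside each level:

def excess_pass(link_excess_credit: float,
                queues_by_priority: List[List[Queue]]) -> None:
    """Spend leftover (excess) bandwidth for this time-slice: strict priority
    across levels, per-packet round-robin within one priority level."""
    remaining = link_excess_credit
    for priority in range(8):
        active = [q for q in queues_by_priority[priority] if q.packets]
        while active and remaining > 0:
            for q in list(active):                  # round-robin: one packet per turn
                if not q.packets:
                    active.remove(q)                # this queue is drained
                    continue
                pkt = q.packets[0]
                if pkt.length > remaining:
                    return                          # excess for this slice is depleted
                q.packets.pop(0)
                remaining -= pkt.length
                # transmission of pkt would happen here
        # this priority level is drained; move on to the next (lower) priority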
This credit-based method obviates the need for CBQ’s concept of overlimit, and addresses
one of the largest problems of traditional CBQ, namely, bursty behavior (which can easily flood
downstream devices and links). This more prudent approach spares SonicOS the wasted CPU
cycles that retransmissions caused by saturated downstream devices would otherwise incur,
and avoids other congestive and degrading behaviors such as TCP slow-start (see Sally Floyd’s
Limited Slow-Start for TCP with Large Congestion Windows) and Global Synchronization (as
described in RFC 2884):
Queue management algorithms traditionally manage the length of packet queues in the router
by dropping packets only when the buffer overflows. A maximum length is configured for each
queue. The router accepts packets until this maximum size is exceeded, at which point it drops
incoming packets; new packets are accepted again when buffer space allows. This technique is
known as Tail Drop. This method has served the Internet well for years, but it has several
drawbacks. Since all arriving packets (from all flows) are dropped when the buffer overflows,
it interacts badly with the congestion control mechanism of TCP. A cycle is formed in which a
burst of drops occurs after the maximum queue size is exceeded, followed by a period of
underutilization at the router as end systems back off. End systems then increase their windows
simultaneously up to a point where a burst of drops happens again. This phenomenon is called
Global Synchronization. It leads to poor link utilization and lower overall throughput. Another
problem with Tail Drop is that, in some circumstances, a single connection or a few flows can
monopolize the queue space. This results in a lock-out phenomenon leading to
synchronization or other timing effects. Lastly, one of the major drawbacks of Tail Drop is that
queues remain full for long periods of time; one of the major goals of queue management is to
reduce the steady-state queue size.
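For clarity, the Tail Drop behavior described above amounts to nothing more than the following illustrative sketch. It is not code from RFC 2884 or SonicOS, and the maximum queue length chosen here is arbitrary.

from collections import deque

MAX_QUEUE_PACKETS = 128              # arbitrary configured maximum queue length

class TailDropQueue:
    """Accept packets until the buffer is full, then drop new arrivals."""
    def __init__(self, max_len: int = MAX_QUEUE_PACKETS):
        self.buffer = deque()
        self.max_len = max_len

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.max_len:
            return False             # buffer overflow: tail-drop the new packet
        self.buffer.append(packet)
        return True                  # accepted; buffer space was available

    def dequeue(self):
        """Draining the queue frees space, so new packets are accepted again."""
        return self.buffer.popleft() if self.buffer else None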
Algorithm for Outbound Bandwidth Management
Each packet passing through the SonicWALL is initially classified as either a Real Time packet or
a Firewall packet. Firewall packets are user-generated packets that always pass through the BWM
module. Real Time packets are usually firewall-generated packets that are not processed by the
BWM module, and they are implicitly given the highest priority. Real Time (firewall-generated)
packets include the following (a simplified classification sketch follows the list):
• WAN Load Balancing Probe
• ISAKMP
• Web CFS
• PPTP and L2TP control packets
• DHCP
• ARP packets
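A simplified view of that split might look like the sketch below. The function and the string labels are purely illustrative assumptions; the firewall performs this classification internally rather than through any such API.

# Hypothetical illustration of the Real Time vs. Firewall classification.
REAL_TIME_TYPES = {
    "wan_lb_probe",       # WAN Load Balancing probe
    "isakmp",             # ISAKMP (VPN key negotiation)
    "web_cfs",            # Web Content Filtering Service
    "pptp_ctrl",          # PPTP control packets
    "l2tp_ctrl",          # L2TP control packets
    "dhcp",
    "arp",
}

def classify(packet_type: str) -> str:
    """Real Time (firewall-generated) traffic bypasses the BWM module and is
    implicitly given the highest priority; everything else is a Firewall
    (user-generated) packet that passes through BWM."""
    return "real_time" if packet_type in REAL_TIME_TYPES else "firewall"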