Notes/UNB/Year 4/Semester 2/CS3873/2024-01-22.md
2024-01-22 12:38:16 -04:00


Lecture Topic: Packet Switching Performance
# Packet Switching
## Congestion
A relevant example is airplane ticket overbooking. If an airplane has a capacity of 100 seats and the probability of a passenger showing up for their flight is 80%, then the airline can sell more than 100 tickets because some passengers will probably not show up (a quick check of these figures is sketched after this list):
- If 110 tickets are sold, the probability of more than 100 passengers is about 0.058%
- If 115 tickets are sold, the probability goes up to 1.94%
- If 120 tickets are sold, the probability is 15.17%
- If 130 tickets are sold, the probability is 78.12%
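A quick check of these figures, assuming each passenger independently shows up with probability 0.8 (a binomial model; the model is my assumption, not stated in the lecture):
```python
from math import comb

def prob_overbooked(tickets_sold: int, seats: int = 100, p_show: float = 0.8) -> float:
    """P(more than `seats` passengers show up) when each ticket holder
    independently shows up with probability `p_show` (binomial model)."""
    return sum(
        comb(tickets_sold, k) * p_show**k * (1 - p_show) ** (tickets_sold - k)
        for k in range(seats + 1, tickets_sold + 1)
    )

for sold in (110, 115, 120, 130):
    print(f"{sold} tickets sold -> P(>100 show up) = {prob_overbooked(sold):.4%}")
```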
## Performance
Throughput: Rate (bits/time) at which bits are transferred between sender/receiver
- Instantaneous: Receiving rate at any instant of time
- Average: Receiving rate over a longer period of time
How fast a node (host or router) is transmitting depends on
1. How fast the sender is sending
2. How fast the link is transmitting
End-to-end throughput is constrained by the rate of the bottleneck link (the link with the minimum rate on the end-to-end path). The weakest link in the chain determines the throughput of the entire path, as illustrated below.
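A minimal sketch of that idea; the three link rates below are made-up values, not from the lecture:
```python
def end_to_end_throughput(link_rates_bps: list[float]) -> float:
    """End-to-end throughput is limited by the slowest (bottleneck) link on the path."""
    return min(link_rates_bps)

# Hypothetical path: 10 Mb/s server access link, 100 Mb/s core link, 5 Mb/s client access link
path = [10e6, 100e6, 5e6]
print(end_to_end_throughput(path) / 1e6, "Mb/s")  # 5.0 Mb/s -> the client access link is the bottleneck
```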
## Delay and Loss
Packets queue in a router buffer (Store and Forward)
- They are delayed while waiting in the buffer for their turn
- Slowed down while the queue keeps growing (congestion)
- Dropped (lost) if no free space in a full buffer
There are four sources of nodal delay:
1. Node processing: Decoding the incoming signal (accounting for distortion, e.g. wireless signal distortion), verifying the correctness of the packet, and determining the output link. Usually very small ($10^{-6}$ secs)
2. Queuing: Time waiting at the output link for transmission. Amount depends on the congestion of the network.
3. Transmission: $L/R$, L = Packet length, R = Link bandwidth
4. Propagation: $m/s$, where m = Physical length of the link (e.g. a 100 m wire) and s = Propagation speed in the link (e.g. the speed of an electrical signal in the medium)
The total nodal delay is the sum of these four components: $d_{nodal} = d_{proc} + d_{queue} + d_{trans} + d_{prop}$ (see the sketch below).
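A small sketch that adds the four components together; the function name and the numeric values are mine, chosen only for illustration:
```python
def nodal_delay(d_proc: float, d_queue: float, L: float, R: float, m: float, s: float) -> float:
    """Total nodal delay = processing + queuing + transmission (L/R) + propagation (m/s).
    Times in seconds; L in bits, R in bits/s, m in metres, s in metres/s."""
    d_trans = L / R  # time to push all L bits onto the link
    d_prop = m / s   # time for one bit to travel the physical length of the link
    return d_proc + d_queue + d_trans + d_prop

# Illustrative values: 2 us processing, 1 ms queuing, a 1500-byte packet
# on a 10 Mb/s link that is 100 km long.
print(nodal_delay(d_proc=2e-6, d_queue=1e-3, L=1500 * 8, R=10e6, m=100e3, s=2e8))  # ~0.0027 s
```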
### Measuring queuing delay
Traffic intensity is a measure of congestion.
$$ \frac{L \times a}{R} $$
a: Average packet arrival rate (packets/s)
L: Packet length/size (bits/packet)
R: Link bandwidth/rate (bps)
If this figure is close to 0, the average queuing delay is small
If this figure approaches 1, the average delay becomes large
If this figure is > 1, more work is arriving than can be serviced and the queue (and delay) grows without bound (severe congestion)
Note: There is a field called traffic engineering, and an important rule for this field is to not let the traffic intensity exceed 1.
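For example, evaluating the formula for a hypothetical link (the packet size, arrival rate, and bandwidth below are made-up numbers):
```python
def traffic_intensity(L_bits: float, a_pkts_per_s: float, R_bps: float) -> float:
    """Traffic intensity L*a/R: ~0 means little queuing, near 1 means large delays,
    and > 1 means work arrives faster than it can be serviced."""
    return (L_bits * a_pkts_per_s) / R_bps

# 1500-byte packets arriving at 600 packets/s on a 10 Mb/s link
print(traffic_intensity(1500 * 8, 600, 10e6))  # 0.72 -> heavily but stably loaded
```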
## Example: Delay
Consider only transmission delay and propagation delay. S sends 1 packet of length L to D over a single link of rate R and distance m, where s is the propagation speed of the link.
L = 1 kb
R = 100 kb/s
m = 100 km
s = $2\times10^8$ m/s
$d_{prop} = m/s = 10^5/(2\times 10^8) = 5 \times 10^{-4}$ s (0.5 ms)
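Completing the arithmetic with the formulas above (assuming 1 kb = $10^3$ bits):
$$d_{trans} = L/R = 10^3/10^5 = 10^{-2} \text{ s}$$
$$d_{total} = d_{trans} + d_{prop} = 10^{-2} + 5\times10^{-4} = 1.05\times10^{-2} \text{ s} = 10.5 \text{ ms}$$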