04 - Transport Layer

Problem 1: Short Answer Questions

Problem 2: UDP Service Model

What’s the service model exported by UDP to the application programs? When would you use this service model?

Solution

UDP provides a connectionless, unreliable, unordered datagram service with no flow or congestion control. Use it for latency-sensitive or application-tolerant workloads (e.g., streaming, VoIP, DNS) where the app can handle loss/reordering or add its own lightweight reliability.

Elaboration: UDP’s simplicity makes it ideal for applications where speed matters more than perfect delivery. Each datagram is independent and self-contained, so the sender needn’t establish a connection or maintain per-flow state at the receiver. The trade-off is that applications must implement their own reliability mechanisms if needed. Streaming applications often accept occasional packet loss (resulting in audio/video glitches) because the alternative—TCP’s retransmission delays—would cause unacceptable buffering and latency. DNS queries use UDP because most names resolve on the first try; if a response is lost, the client simply retransmits the query.
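To make the service model concrete, here is a minimal sketch using Python's standard socket API (the loopback address and port 5000 are arbitrary choices for illustration):

```python
import socket

# Receiver: bind to a port; no connection is established.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 5000))

# Sender: no handshake; each datagram is addressed individually.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", 5000))

# recvfrom returns the sender's (IP, port), which is how a UDP server
# distinguishes clients; delivery and ordering are not guaranteed in general.
data, addr = recv.recvfrom(2048)
print(data, addr)
```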

Problem 3: TCP Service Model

What’s the service model exported by TCP to the application programs? When would you use this service model?

Solution

TCP provides a reliable, ordered, byte-stream, connection-oriented service with flow and congestion control. Use it when correctness and in-order delivery matter (e.g., HTTP, file transfer, email, DB protocols).

Elaboration: TCP’s comprehensive reliability guarantees come at the cost of complexity and latency. The 3-way handshake establishes a connection before data transfer; retransmission and flow control add overhead. However, for applications where data integrity is critical—file downloads, database queries, email—these costs are acceptable. The byte-stream abstraction hides packet boundaries from the application, allowing the sender to transmit any amount of data and the receiver to reassemble it in order transparently. Congestion control ensures that TCP flows share network resources fairly, preventing any single connection from monopolizing bandwidth.
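For contrast with the UDP sketch above, a minimal TCP exchange in Python (again, the address and port are arbitrary illustration choices):

```python
import socket

# Server: bind, listen, and accept a dedicated per-connection socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))
srv.listen()

# Client: connect() performs the 3-way handshake before any data moves.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 6000))
conn, addr = srv.accept()

cli.sendall(b"hello ")
cli.sendall(b"world")
# Byte-stream abstraction: the two sends may arrive as one read, since
# TCP preserves byte order but not message boundaries.
print(conn.recv(2048))
```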

Problem 4: Video Streaming Protocol Selection

Consider an application where a camera at a highway is capturing video of the passing cars at 30 frames/second and sending the video stream to a remote video viewing station over the Internet. You are hired to design an application-layer protocol to solve this problem. Which transport-layer protocol, UDP or TCP, would you use for this application and why? Justify your answer.

Solution

UDP. Real-time video tolerates some loss and out-of-order frames; avoiding retransmissions and head-of-line blocking reduces latency. The application can add minimal FEC or selective repair as needed.

Elaboration: Live video streaming presents a unique trade-off: losing a frame or two is preferable to displaying outdated video due to retransmission delays. TCP’s reliability guarantees introduce two problems: (1) timeouts and retransmissions cause playback buffering and latency spikes, and (2) if a frame arrives after the viewer has moved past that frame’s timestamp, displaying it causes visually jarring “out-of-order” frames. UDP avoids these by discarding lost frames, keeping the stream live. Modern streaming protocols (like those in WebRTC) run over UDP and employ lightweight redundancy (e.g., forward error correction) to repair occasional losses without the overhead of full TCP-style reliability. Some video systems (e.g., RTMP over TCP) do use TCP, but typically for non-interactive or pre-recorded content where latency is less critical.

Problem 5: UDP Socket Address and Port Handling

Suppose a UDP server creates a socket and starts listening for UDP packets at UDP port 5000 on a host interface with IP address $IP_S$.

Comment on UDP's connectionless demultiplexing model

This problem illustrates UDP’s connectionless demultiplexing model. Unlike TCP, which identifies a connection by the 4-tuple (source IP, source port, destination IP, destination port) and maintains per-connection state, UDP simply listens on a specific port and accepts datagrams from any source. The server can distinguish clients by examining the source address and port of each incoming datagram, but UDP itself does not enforce connection semantics—it’s the application’s responsibility to parse packet contents and handle any client-specific logic.

Problem 6: TCP Port Numbers Bidirectional

Consider a TCP connection between host A and host B.

Suppose that the TCP segments travelling from host A to host B have source port number x and destination port number y. What are the source and destination port numbers for the segments travelling from host B to host A?

Solution

SrcPort=y, DestPort=x (the ports swap direction across the connection).

Elaboration: This symmetry reflects TCP’s point-to-point connection model. Each connection is identified by an unordered pair of endpoints: (A:x, B:y). From A’s perspective, it sends to B:y using its local port x. From B’s perspective, it sends back to A:x using its local port y. This symmetry is essential for TCP to demultiplex return traffic correctly at each end. Unlike UDP, where the server typically uses a well-known port and clients use ephemeral ports, TCP requires port symmetry to maintain bidirectional communication on a single connection.

Problem 7: Telnet Multiple Clients Port Assignment

Suppose client A initiates a Telnet session with server S. At about the same time, client B initiates a Telnet session with server S. (Recall that a Telnet session runs over TCP and a Telnet server waits for client connections at TCP port 23.) Provide possible source and destination port numbers for:

Comment on TCP's connection-oriented demultiplexing model

Elaboration: This problem shows how a single server can support multiple concurrent client connections by leveraging TCP’s 4-tuple demultiplexing. The server binds to a well-known port (23 for Telnet), but each client connection uses a unique 4-tuple. When client A connects, its ephemeral port is different from B’s, creating separate connection identifiers. This allows the server’s OS to dispatch incoming segments to the correct application instance (or the application to identify which client sent each segment). Without this demultiplexing, a single well-known port could support only one client at a time.

Problem 8: TCP Server Socket Binding and Connection Handling

Suppose that a TCP server creates a TCP socket and binds it to TCP port 5000 at its host’s IP address $IP_S$. Suppose that a client A at a host with IP address $IP_A$ and TCP port 3000 sends a TCP connection request to this server to establish a TCP connection.

Comment on TCP's connection-oriented demultiplexing model

Elaboration: This problem illustrates the connection setup and demultiplexing in TCP. A server socket binds to a port but listens for connections from any client; the OS uses the source IP and port to distinguish different clients. When a malicious packet arrives with a mismatched source IP (as in part d), TCP’s strict 4-tuple matching rejects it. This tight coupling between addresses and connections provides security: an attacker cannot easily inject packets into an established connection without knowing the exact sequence numbers and 4-tuple involved.

Problem 9: Go-Back-N Packet Loss and Timeouts

Assume that a sender wants to send 10 packets labeled 1 to 10 with Go-Back-N protocol with a sender window size of 3. Assume that the RTT between the sender and the receiver is 20ms and the timeout is always set to 50ms. Suppose packets 5 and 8 get lost the first time they are sent.

Problem 10: Sender Window Sequence Numbers Analysis

Consider the Go-Back-N protocol with a sender window size of 3. Suppose that at time t, the next in-order packet that the receiver is expecting has a sequence number of k. Assume that the medium does not reorder messages. Answer the following questions:

Problem 11: Go-Back-N Packet Loss Transmission Count

Host A needs to send a message consisting of 9 packets to host B using a sliding window protocol with a window size of 3. Assume that Go-Back-N is used for reliable packet transfer. All packets are ready and immediately available for transmission. If every 5th packet that A transmits gets lost but no ACKs from B ever get lost, what is the number of packets that A will transmit to send the message to B? Show your work. (Answer: 16)

Solution

(Timing diagram omitted; the simulation sketch below reproduces the count of 16.)
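Since only a placeholder for the timing diagram survives here, the count can be cross-checked with a small simulation. This is a sketch under simplifying assumptions (ACKs arrive instantly, and a timeout fires as soon as the window stalls); with those assumptions it reproduces the answers for Problems 11 through 14:

```python
def transmissions(n, window, period, protocol):
    """Count transmissions when every `period`-th send is lost (ACKs never lost)."""
    sends = 0
    base = next_seq = expected = 1          # expected: GBN receiver state
    acked = set()                           # SR: individually ACKed packets

    def transmit(seq):
        nonlocal sends, expected, base
        sends += 1
        if sends % period == 0:             # every period-th transmission is lost
            return
        if protocol == "GBN":               # cumulative ACK; out-of-order discarded
            if seq == expected:
                expected += 1
            base = expected
        else:                               # SR: individual ACKs, slide past holes
            acked.add(seq)
            while base in acked:
                base += 1

    while base <= n:
        while next_seq <= n and next_seq - base < window:   # fill the window
            transmit(next_seq)
            next_seq += 1
        if base <= n:                       # timeout on the oldest unACKed packet
            for seq in range(base, next_seq):
                if protocol == "GBN" or seq not in acked:
                    transmit(seq)           # GBN resends all; SR only the missing
    return sends

print(transmissions(9, 3, 5, "GBN"))    # 16  (Problem 11)
print(transmissions(9, 3, 5, "SR"))     # 11  (Problem 12)
print(transmissions(10, 4, 6, "GBN"))   # 17  (Problem 13)
print(transmissions(10, 4, 6, "SR"))    # 11  (Problem 14)
```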

Problem 12: Selective Repeat Packet Loss Transmission Count

Host A needs to send a message consisting of 9 packets to host B using a sliding window protocol with a window size of 3. Assume that Selective Repeat is used for reliable packet transfer. All packets are ready and immediately available for transmission. If every 5th packet that A transmits gets lost but no ACKs from B ever get lost, what is the number of packets that A will transmit to send the message to B? Show your work. (Answer: 11)

Solution

(Timing diagram omitted; the simulation sketch under Problem 11 reproduces the count of 11.)

Problem 13: Go-Back-N Larger Window Packet Loss

Host A needs to send a message consisting of 10 packets to host B using a sliding window protocol with a window size of 4. Assume that Go-Back-N is used for reliable packet transfer. All packets are ready and immediately available for transmission. If every 6th packet that A transmits gets lost but no ACKs from B ever get lost, what is the number of packets that A will transmit to send the message to B? Show your work. (Answer: 17)

Problem 14: Selective Repeat Larger Window Packet Loss

Host A needs to send a message consisting of 10 packets to host B using a sliding window protocol with a window size of 4. Assume that Selective Repeat is used for reliable packet transfer. All packets are ready and immediately available for transmission. If every 6th packet that A transmits gets lost but no ACKs from B ever get lost, what is the number of packets that A will transmit to send the message to B? Show your work. (Answer: 11)

Problem 15: Sliding Window with TCP Fast Retransmit

Consider a sender and receiver employing a sliding window protocol with a window size of 4. Assume that the receiver sends cumulative acknowledgements, and the sender employs TCP’s fast retransmit algorithm to recover lost packets. You want to send 8 packets labeled 1 to 8 to the receiver. Assume a 10 ms one-way delay between the sender and the receiver and a 50 ms timeout period.

Elaboration: Fast retransmit is a key optimization in TCP. Instead of waiting for a timeout when a packet is lost, the sender retransmits upon receiving three duplicate ACKs (i.e., four total ACKs with the same sequence number). This allows recovery in roughly one RTT rather than waiting for the timeout. In the timing diagram for this problem, packet 3 is lost, the receiver responds with duplicate ACKs for packet 2, and the sender triggers fast retransmit before the timeout fires.

Solution

(Timing diagram omitted.)
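A minimal sketch of the duplicate-ACK trigger described above (the `retransmit` stub and the state dictionary are illustrative, not a real TCP implementation):

```python
def retransmit(seq):
    print(f"fast retransmit: packet {seq}")

def on_ack(ack, state):
    """Track the last cumulative ACK seen and count its duplicates."""
    if ack == state["last_ack"]:
        state["dup"] += 1
        if state["dup"] == 3:              # third duplicate = fourth identical ACK
            retransmit(ack + 1)            # resend first unACKed packet, before timeout
    else:
        state["last_ack"], state["dup"] = ack, 0   # new cumulative ACK resets the count

state = {"last_ack": 2, "dup": 0}
for ack in [2, 2, 2]:                      # receiver keeps ACKing 2 while 3 is missing
    on_ack(ack, state)                     # third call triggers fast retransmit of 3
```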

Problem 16: Reliable Broadcast Channel Protocol

Consider a scenario in which host A wants to simultaneously send messages to hosts B and C. A is connected to B and C via a broadcast channel, i.e., a packet sent by A is carried by the channel to both B and C. Suppose that the broadcast channel connecting A, B and C can independently lose or corrupt packets (and so for example, a packet sent from A might be correctly received by B, but not by C). Design a stop-and-wait-like protocol for reliably transferring a packet from A to B and C, such that A will not send the next packet until it knows that both B and C have correctly received the current packet.

Solution

Answer: A broadcasts each packet and waits until it receives ACKs from both B and C (retransmitting as needed); it advances to the next sequence number only after both ACKs have been received.

Elaboration:

The Core Idea

Since the channel is broadcast, A sends a single packet that “should” go to both B and C. However, because losses are independent, A cannot assume that if B got it, C got it too.

The Rule

Host A acts as if it has two logical “Stop-and-Wait” threads running in parallel but synchronized to the same transmission. It cannot proceed to Packet $N+1$ until both logical threads have confirmed Packet $N$ (i.e., both B and C have ACKed).


1. Sender (Host A) Protocol

Variables:

  • SeqNum: the current 1-bit sequence number (0 or 1).
  • Received_ACK_B, Received_ACK_C: boolean flags, initially False.
  • Timer: a retransmission timer.

Behavior:

  1. Send State:

    • A creates a packet with sequence number SeqNum.
    • A broadcasts the packet to the channel.
    • A starts a Timer.
  2. Wait State (Listening for ACKs):

    • Event: Received ACK from B
      • Mark Received_ACK_B = True.
    • Event: Received ACK from C
      • Mark Received_ACK_C = True.
    • Event: Timer Expires
      • If Received_ACK_B and Received_ACK_C are NOT BOTH True:
        • Retransmit the packet.
        • Restart the Timer.
        • (Note: Even if B already ACKed, A must re-broadcast to reach C. B will just handle the duplicate.)
  3. Transition State:

    • If (Received_ACK_B == True) AND (Received_ACK_C == True):
      • Stop Timer.
      • Toggle SeqNum (0 $\leftrightarrow$ 1).
      • Reset flags (Received_ACK_B = False, Received_ACK_C = False).
      • Fetch next data and go to Send State.

2. Receiver (Hosts B and C) Protocol

The receivers behave exactly like a standard Stop-and-Wait receiver. They don’t need to know about each other.

Behavior for Host B (and similarly for C):

  1. Wait for Packet:

    • Wait for packet from A.
  2. Packet Arrives:

    • Check for corruption (checksum).
    • If Corrupt: Discard. Do nothing.
    • If Correct:
      • Check Sequence Number.
      • If SeqNum == Expected SeqNum:
        • Pass data to application.
        • Send ACK(SeqNum) to A.
        • Toggle Expected SeqNum (matching the sender’s 1-bit sequence number).
      • If SeqNum == Old/Duplicate SeqNum:
        • (This happens if A retransmitted because C missed the packet.)
        • Discard data (don’t pass to app).
        • Send ACK(SeqNum) to A. (Crucial: A needs to hear this again to know B is still happy.)
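The sender logic above can be condensed into a short simulation. This is a sketch in which the lossy broadcast channel is simulated with independent random losses per receiver (ACK loss is folded into the per-receiver delivery probability for brevity):

```python
import random

def broadcast_stop_and_wait(packets, p_loss=0.3):
    """Sender A: re-broadcast each packet until both B and C have ACKed it."""
    seq, sends = 0, 0                           # seq would ride in the packet header
    for _ in packets:
        acked = {"B": False, "C": False}
        while not all(acked.values()):
            sends += 1                          # (re)broadcast the current packet
            for host in ("B", "C"):
                if random.random() > p_loss:    # losses are independent per receiver
                    acked[host] = True          # receivers re-ACK duplicates too
            # timer expiry is implicit: loop again while either ACK is missing
        seq ^= 1                                # toggle the 1-bit SeqNum
    return sends

random.seed(1)
print(broadcast_stop_and_wait(["p1", "p2", "p3"]))   # total transmissions by A
```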

Problem 17: Stop-and-Wait vs Sliding Window Protocols

What’s the problem with the Stop-and-Wait protocol? Briefly describe how sliding window protocols solve the problem.

Solution

Stop‑and‑Wait allows only one outstanding packet, wasting bandwidth on high BDP paths. Sliding windows permit multiple outstanding packets, improving utilization and throughput.

Elaboration: Stop-and-Wait’s fundamental limitation is that the sender must wait for an ACK before sending the next packet. On a high-latency or high-bandwidth link, the sender sits idle for an entire RTT while waiting for the ACK to return. For example, a 1 Gbps link with 100 ms RTT can transmit 100 Mb of data during one RTT, but Stop-and-Wait sends only one packet per RTT, severely underutilizing the link. Sliding windows decouple the RTT from throughput by allowing the sender to transmit multiple packets before waiting for ACKs. The optimal window size equals the bandwidth-delay product: the number of packets that fit in flight during one RTT. This allows full utilization of the link capacity.
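Plugging the example's numbers into the standard utilization formula, and assuming 1500-byte (12,000-bit) packets:

$$U_{\text{stop-and-wait}} = \frac{L/R}{RTT + L/R} = \frac{12{,}000/10^{9}}{0.1 + 12{,}000/10^{9}} \approx 0.012\%$$

A sliding window of roughly $\frac{R \cdot RTT}{L} = \frac{10^{8}}{12{,}000} \approx 8{,}300$ packets would be needed to fill this pipe.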

Problem 18: TCP Sequence Number Exhaustion

Consider transferring an enormous file of L bytes from host A to host B. Assume an MSS of 1460 bytes.

Problem 19: Go-Back-N vs Selective Repeat Comparison

Go-Back-N and Selective-Repeat are retransmission strategies with sliding window protocols. Describe and discuss the advantages of each. How does using one or the other change the requirements for buffers in the sending and receiving windows?

Solution

Elaboration: GBN’s simplicity comes from requiring only cumulative ACKs and a single sender-side timer. The receiver can discard out-of-order packets (saving buffer space) and simply duplicate-ACK the highest in-order sequence. However, on lossy links, a single loss triggers retransmission of the entire window, wasting bandwidth. SR avoids this by allowing selective retransmission: the receiver ACKs individual packets and buffers out-of-order arrivals, and the sender retransmits only the missing packet. This efficiency comes at a cost: the receiver needs a buffer of size ≈ window for out-of-order storage, each packet needs its own timer at the sender, and implementation is more complex. Modern protocols like TCP (which uses a form of SR with SACK) and QUIC prefer SR for high-speed, lossy links.

Problem 20: Optimal Sliding Window Size Calculation

Host A uses 32 byte packets to transmit a message to host B using the sliding window protocol. The RTT between A and B is 80 milliseconds and the bottleneck bandwidth on the path between A and B is 128Kbps. What is the optimal window size that A should use? Show your work. (Answer: 40).

Solution

Optimal window ≈ bandwidth–delay product in packets:

$W = \dfrac{128{,}000 \text{ bits/s} \times 0.080 \text{ s}}{32 \times 8 \text{ bits/packet}} = \dfrac{10{,}240 \text{ bits}}{256 \text{ bits/packet}} = 40 \text{ packets}$

Elaboration: The bandwidth-delay product (BDP) represents the total number of bits that can be “in flight” across the network at any given time. With a 128 Kbps link and 80 ms RTT, the network can hold 10,240 bits. The sender should always have this amount of data in flight to keep the pipe full and achieve maximum throughput. A window smaller than BDP/packet-size will leave the link idle (underutilization); a window larger than BDP won’t improve throughput (and will unnecessarily buffer packets in the network). This calculation is critical for link capacity planning and helps explain why high-speed, long-distance networks (e.g., intercontinental transfers) require large TCP windows—the BDP is enormous.
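A small helper makes the calculation reusable for this and the next problem (the function name and unit conventions are illustrative):

```python
def optimal_window(bandwidth_bps, rtt_s, packet_bytes):
    """Window (in packets) that keeps the pipe full: BDP / packet size."""
    bdp_bits = bandwidth_bps * rtt_s
    return bdp_bits / (packet_bytes * 8)

print(optimal_window(128_000, 0.080, 32))   # 40.0 packets (Problem 20)
```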

Problem 21: Optimal Window Size Formula

The distance between two hosts A and B is L kilometers. All packets are K bits long. The propagation time per kilometer is t seconds. Let R bits/sec be the channel capacity. Assume that A and B use the sliding window protocol for reliable packet delivery. Assuming that the processing and queuing delay is negligible, what’s the optimal window size that A should use? Show your work. (Answer: (2LtR)/K)

Solution

Optimal window equals the bandwidth–delay product in packets. The round-trip time is $2Lt$ seconds, so the channel holds $R \times 2Lt$ bits in flight; dividing by the packet size $K$ gives $W = \dfrac{2LtR}{K}$ packets.

Problem 22: Flow Control via Delayed ACKs

Consider combining TCP’s reliable packet delivery, i.e., its sliding window protocol, with its flow control mechanism as follows: the receiver delays ACKs, that is, it does not send an ACK until there is free buffer space to hold the next frame. In doing so, each ACK would simultaneously acknowledge the receipt of the last frame and tell the source there is now a free buffer space available to hold the next frame. Explain why implementing flow control in this way is NOT a good idea.

Solution

Delaying ACKs hides successful delivery from the sender. Without timely ACKs, the sender’s timeout fires and triggers needless retransmissions, wasting bandwidth and increasing latency. Flow control should be signaled via advertised window, not by withholding ACKs.

Elaboration: This approach confuses TCP’s reliability and flow control mechanisms. ACKs serve dual purposes: confirming packet delivery and advertising buffer space. Delaying ACKs until buffer space opens conflates these functions, causing the sender to interpret delayed ACKs as packet loss rather than flow control. The result is spurious retransmissions and reduced performance. TCP’s proper approach separates concerns: ACKs confirm delivery promptly, while the advertised window field explicitly signals buffer availability. This allows the receiver to throttle the sender (window = 0) while still acknowledging received data.

Problem 23: TCP Zero Window Probing

A sender on a TCP connection that receives a 0 advertised window periodically probes the receiver to discover when the window becomes nonzero. Why would the receiver need an extra timer if it were responsible for reporting that its advertised window had become nonzero (i.e., if the sender did not probe)?

Solution

To avoid deadlock. If the sender isn’t probing and an ACK indicating a nonzero window is lost, the receiver must use a timer to retransmit the window‑update notification; otherwise both sides could wait indefinitely.

Elaboration: This deadlock scenario occurs when the receiver’s window opens (buffer space becomes available) and it sends a window update, but that update is lost in transit. Without sender-driven probing, the sender waits forever for a window update that will never be retransmitted, while the receiver waits forever for data that will never be sent. The receiver needs a timer to periodically resend window updates until acknowledged, or the sender needs to probe with zero-window probes. TCP chooses sender-driven probing because it’s simpler: only one entity (sender) needs the timer, and the probe frequency can adapt to network conditions.

Problem 24: TCP Memory Allocation Effects

Assume that an operating system can give more memory to, or request memory back from, a TCP connection, based on memory availability and the connection’s requirements. (a) What would be the effects of increasing the memory available to a connection? (b) What about reducing the memory for the connection?

Solution

(a) More memory enlarges send/receive buffers, allowing a larger advertised window and higher throughput on high‑BDP paths. (b) Less memory shrinks buffers, reducing the advertised window and throughput; the sender must pace to avoid overrunning the smaller window.

Elaboration: Reduced memory forces smaller buffers, which constrains the advertised window and limits throughput. On high-BDP paths, small windows severely underutilize link capacity. The sender’s transmission rate becomes limited by the receiver’s buffer size rather than network capacity. This creates a feedback loop: smaller windows lead to lower throughput, which may actually be appropriate during memory pressure to avoid overwhelming the receiver’s processing capabilities. Modern operating systems dynamically adjust TCP buffer sizes based on available memory and connection requirements.

Problem 25: Sliding Window Packet Arrival Sequence

Consider a sliding-window reliable delivery protocol that uses timeouts and cumulative ACKs as in TCP. Assume that the sender’s and receiver’s window size is 4, i.e., at most 4 packets can be buffered by the receiver and at most 4 unACKed packets can be sent by the sender. Further assume that the receiver has received all packets up to packet number 7 and is expecting packet number 8, i.e., Next Packet Expected (NPE) = 8 and the Last Packet Acceptable (LPA) is 11. Suppose the receiver receives the packets 8, 11, 9, 10, 12, 13 in this order. Show the values of Next Packet Expected (NPE) and Last Packet Acceptable (LPA) after each packet arrival. Also describe whether an ACK is sent to the sender after each packet arrival and, if so, write down the sequence number of the packet that is ACKed.

Solution

| Event | Action | NPE | LPA |
| --- | --- | --- | --- |
| Initial condition | | 8 | 11 |
| Packet 8 arrives | Send ACK for 8 | 9 | 12 |
| Packet 11 arrives | Buffer 11, send ACK for 8 | 9 | 12 |
| Packet 9 arrives | Send ACK for 9 | 10 | 13 |
| Packet 10 arrives | Send ACK for 11 | 12 | 15 |
| Packet 12 arrives | Send ACK for 12 | 13 | 16 |
| Packet 13 arrives | Send ACK for 13 | 14 | 17 |

The table shows cumulative ACK behavior with receiver window size 4: out‑of‑order arrivals are buffered; ACKs reflect the highest contiguous sequence received (8→11), then jump when missing packets arrive (ACK 11 after 10), advancing NPE and LPA accordingly.

Elaboration: This problem demonstrates TCP’s sliding window operation with buffering and cumulative acknowledgments. The receiver maintains a window [NPE, LPA] and can buffer out-of-order packets within this range. When packet 8 arrives, it’s the expected sequence, so NPE advances to 9 and an ACK is sent. Packet 11 arrives out of order and is buffered, but NPE remains 9 (still waiting for 9 and 10). When packets 9 and 10 finally arrive, they enable delivery of the buffered sequence 8-11, causing NPE to jump to 12 and LPA to advance accordingly. This buffering improves performance by avoiding unnecessary retransmissions of correctly received but out-of-order packets.
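The table can be reproduced with a short sketch of the receiver logic (names are illustrative):

```python
def receiver_trace(arrivals, npe=8, window=4):
    """Cumulative-ACK receiver that buffers out-of-order packets in [NPE, LPA]."""
    buffered = set()
    for seq in arrivals:
        if npe <= seq <= npe + window - 1:    # within the window: accept
            buffered.add(seq)
            while npe in buffered:            # deliver any contiguous run
                buffered.remove(npe)
                npe += 1
        print(f"pkt {seq}: ACK {npe - 1}, NPE={npe}, LPA={npe + window - 1}")

receiver_trace([8, 11, 9, 10, 12, 13])        # matches the table above
```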

Problem 26: TCP Zero Window Probing Timer

A sender on a TCP connection that receives a 0 advertised window periodically probes the receiver to discover when the window becomes nonzero. Why would the receiver need an extra timer if it were responsible for reporting that its advertised window has become nonzero (i.e., if the sender did not probe)?

Solution

Same rationale as Problem 23: the receiver needs a timer to retransmit its window‑opening notification if the prior update was lost, ensuring progress without sender probes.

Elaboration: This problem highlights a critical design choice in TCP’s zero-window handling. If the receiver were responsible for advertising window openings (rather than the sender probing), the receiver would need to implement a retry mechanism for window update messages. Without this, a lost window-opening ACK creates deadlock: the receiver thinks it has notified the sender that space is available, but the sender never received this notification. Implementing receiver-driven window updates would require additional timer management, state tracking, and retransmission logic at the receiver—adding complexity compared to TCP’s simpler sender-driven probe approach.

Problem 27: TCP Sequence Number Wrap Around

The TCP sequence number field in the TCP header is 32 bits long, which is big enough to cover 4 billion bytes of data. Even if this many bytes were never transferred over a single connection, why might the sequence number still wrap around from $2^{32} - 1$ to 0?

Solution

Sequence numbers are modulo $2^{32}$. On long‑lived connections or high rates, the sender can cycle through the 32‑bit space even if the application’s total data is below $2^{32}$, leading to natural wrap‑around.

Elaboration: While $2^{32}$ bytes (4 GB) seems enormous, modern high-speed links can exhaust this space within seconds. For example, a 10 Gbps link sending maximum-sized segments will wrap around in about 3.4 seconds. TCP handles wrap-around through sequence number arithmetic (modulo $2^{32}$), which works correctly as long as sequence numbers don’t wrap around within the maximum segment lifetime (MSL). The MSL (typically 2 minutes) bounds how old segments can be; TCP’s timestamp option (RFC 1323) further protects against wrap-around issues and is essential for modern high-speed links.
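As a worked check of the 10 Gbps figure above:

$$t_{\text{wrap}} = \frac{2^{32} \text{ bytes} \times 8 \text{ bits/byte}}{10 \times 10^{9} \text{ bits/s}} \approx 3.4 \text{ s}$$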

Problem 28: Sliding Window Protocol Field Sizing

You are hired to design a reliable byte-stream protocol that uses a sliding window (like TCP). This protocol will run over a 100-Mbps network. The RTT of the network is 100 ms, and the maximum segment lifetime is 60 seconds. How many bits would you include in the AdvertisedWindow and SequenceNum fields of your protocol?

Solution

AdvertisedWindow must cover the delay × bandwidth product: $100 \text{ Mbps} \times 100 \text{ ms} = 10^{7} \text{ bits} = 1.25 \times 10^{6}$ bytes, so at least 21 bits are needed ($2^{21} = 2{,}097{,}152 > 1.25 \times 10^{6}$). SequenceNum must not wrap around within one MSL: $100 \text{ Mbps} \times 60 \text{ s} = 7.5 \times 10^{8}$ bytes, so at least 30 bits are needed ($2^{30} \approx 1.07 \times 10^{9} > 7.5 \times 10^{8}$).

Elaboration: The AdvertisedWindow field must cover the BDP—the maximum bytes in flight. The SequenceNum field must avoid wrap-around within the MSL: if a late-arriving segment from a previous connection instance carries a recycled sequence number, the receiver must not confuse it with a current segment. Standard TCP uses a 32-bit SequenceNum but only a 16-bit window field, which is why high-bandwidth or high-latency links (intercontinental, satellite) need extensions like window scaling and the RFC 1323 timestamp option to safely handle sequence number space and large windows.

Problem 29: TCP Congestion Window with Fast Retransmit

Sketch the TCP congestion window size as a function of time (measured in RTTs) if a single loss occurs on the 12th packet. Assume that the system uses fast retransmission.

Solution

(Congestion-window plot omitted.)

Elaboration: The graph shows the characteristic TCP Reno sawtooth pattern: slow-start (exponential growth) until loss, then fast retransmit halves the window and enters congestion avoidance (linear growth), avoiding the drastic reset-to-1 that a timeout would cause. Fast retransmit relies on receiving 3 duplicate ACKs, which signals isolated loss rather than network collapse; this allows faster recovery and better throughput than timeout-based recovery on lossy links.
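The window dynamics described above can be summarized in a simplified per-RTT update rule. This is a sketch of Reno's behavior, not a full implementation (fast recovery's temporary window inflation is omitted):

```python
def reno_step(cwnd, ssthresh, event):
    """One simplified TCP Reno window update, at per-RTT granularity."""
    if event == "ack":                   # whole window ACKed this RTT
        if cwnd < ssthresh:
            cwnd *= 2                    # slow start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: linear growth
    elif event == "3dupack":             # fast retransmit
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh                  # halve, instead of resetting to 1
    elif event == "timeout":
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1                         # severe congestion: restart slow start
    return cwnd, ssthresh

cwnd, ssthresh = 1, 64
for ev in ["ack", "ack", "ack", "3dupack", "ack", "ack"]:
    cwnd, ssthresh = reno_step(cwnd, ssthresh, ev)
    print(ev, cwnd, ssthresh)            # shows the sawtooth around the loss
```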

Problem 30: Flow Control and Congestion Control Timing

Assume that you want to send 14600 bytes of data to a TCP receiver. Further assume that during connection establishment, the TCP receiver exports a receive window of size 5840 bytes. Also assume that the MTU is 1500 bytes, the IP header is 20 bytes, and the TCP header is 20 bytes. Thus you can put 1460 bytes of application data into each TCP packet.

Solution

(Timing diagrams omitted.)

Problem 31: Congestion Control Threshold Update

Consider a TCP connection with a current congestion window size of 10.

Problem 32: Extended TCP Window Scaling Performance

Recall the proposed TCP extension that allows flow control window sizes much larger than 64KB. Suppose that you are using this extended TCP over a 1-Gbps link with a latency of 100ms to transfer a 10-MB file, and the TCP receive window size is 1MB. If TCP sends 1-KB packets, assuming no congestion and no lost packets.

Problem 33: UDP vs TCP Single Byte Transfer Latency

Consider two hosts A and B separated by a one-way delay of 10ms. Assume that B runs a server and A runs a client. Assume that the client application running at host A wants to send 1 byte to the server.

Problem 34: TCP Reno Congestion Window Analysis

Consider the following plot of TCP window size as a function of time:

Assuming TCP Reno is the protocol experiencing the behavior shown in the graph, answer the following questions. In all cases a short discussion justifying the answer is provided.

Problem 35: RTT Variance Impact on Congestion Control

In TCP (Jacobson) congestion control, the variance in round-trip times for packets implicitly influences the congestion window. Explain how a high variation in the round-trip time affects the congestion window. What is the impact of this high variation on the throughput for a single connection?

Solution

Higher RTT variation increases the deviation term DevRTT, which raises the retransmission timeout TimeoutInterval = EstimatedRTT + 4·DevRTT. More conservative timeouts reduce spurious retransmissions but slow recovery. Jitter can also cause spurious timeouts (if the estimate lags), triggering multiplicative decreases of cwnd. Net effect: slower cwnd growth, more conservative behavior, and lower throughput for a given path.

Elaboration: TCP’s timeout calculation is deliberately conservative: TimeoutInterval = EstimatedRTT + 4*DevRTT adds four standard deviations of margin. High jitter (variance) inflates DevRTT, which raises the timeout threshold, meaning TCP waits longer before declaring loss. This conservatism prevents spurious retransmissions from good packets that are merely delayed. However, on highly variable paths (e.g., cellular, wifi), packets legitimately delayed 2-3x their average RTT may never trigger timeout, paradoxically causing TCP to underestimate loss and grow cwnd recklessly—then sudden loss triggers recovery. Modern algorithms (like CUBIC and BBR) use packet loss as the primary congestion signal rather than timeout, making them less sensitive to RTT jitter.
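A sketch of the estimator with the conventional weights ($\alpha = 1/8$, $\beta = 1/4$); the alternating samples mimic a high-jitter path and show DevRTT inflating the timeout:

```python
def update_rtt(est, dev, sample, alpha=0.125, beta=0.25):
    """Jacobson-style smoothed RTT, deviation, and retransmission timeout."""
    dev = (1 - beta) * dev + beta * abs(sample - est)   # smoothed deviation
    est = (1 - alpha) * est + alpha * sample            # smoothed RTT (EWMA)
    return est, dev, est + 4 * dev                      # TimeoutInterval

est, dev = 0.100, 0.010
for sample in [0.100, 0.300, 0.100, 0.300]:             # seconds; high jitter
    est, dev, rto = update_rtt(est, dev, sample)
    print(f"sample={sample:.3f}  est={est:.3f}  dev={dev:.3f}  rto={rto:.3f}")
```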


Problem 36: Flow Control vs Congestion Control Bottleneck

Host A is sending an enormous file to host B over a TCP connection. Over this connection, there is never any packet loss, and the timers never expire. Denote the transmission rate of the link connecting host A to the Internet by $R$ bps. Suppose that the process in host A is capable of sending data into its TCP socket at a rate of $S$ bps, where $S = 10R$. Further suppose that the TCP receive buffer is large enough to hold the entire file, and the send buffer can hold only one percent of the file. What would prevent the process in host A from continuously passing data to its TCP socket at $S$ bps? TCP flow control? TCP congestion control? Or something else?

Solution

The bottleneck is the path capacity $R$ (and the socket/send buffer pacing). Flow control isn’t limiting (receiver buffer is large), and congestion control won’t reduce rate without loss; the NIC/link rate caps throughput near $R$, so the application cannot sustain $S=10R$.

Elaboration: This problem highlights an often-overlooked constraint: the physical link capacity. Flow control (receiver advertised window) is irrelevant because the receiver buffer is huge. Congestion control is irrelevant because there’s no loss, so cwnd grows unchecked to a very large value. What actually limits the sender is the network interface card (NIC) and the physical link: they cannot transmit faster than $R$ bps, period. The send buffer (holding 1% of the file) fills up quickly because the application produces data at $10R$ but the NIC drains it at only $R$. The OS blocks the application’s send calls when the send buffer fills, naturally throttling the application to the link rate. This is an important distinction: TCP’s mechanisms (flow control, congestion control) cooperate with the physical layer’s hard limit, not replace it.


Problem 37: TCP RTT Estimation Exponential Moving Average

Consider the TCP procedure for estimating RTT. Suppose that $\alpha = 0.1$. Let $SampleRTT_1$ be the most recent sample RTT, let $SampleRTT_2$ be the next most recent sample RTT and so on.

Solution
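The sub-questions are not reproduced here, but the key expansion follows directly from the update rule $EstimatedRTT \leftarrow (1-\alpha)\,EstimatedRTT + \alpha\,SampleRTT$. Unrolling it in terms of the samples (most recent first):

$$EstimatedRTT = \alpha\,SampleRTT_1 + \alpha(1-\alpha)\,SampleRTT_2 + \alpha(1-\alpha)^2\,SampleRTT_3 + \cdots$$

With $\alpha = 0.1$, the weight on $SampleRTT_j$ is $0.1 \times 0.9^{\,j-1}$, decaying geometrically with the sample's age.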

Elaboration: The choice of $\alpha = 0.1$ means recent samples get 10% weight and historical estimates get 90% weight. This is relatively conservative, favoring stability over fast reaction. Higher $\alpha$ (e.g., 0.5) reacts faster to RTT spikes but risks instability; lower $\alpha$ (e.g., 0.05) smooths out jitter but lags in truly changing RTT. The exponential form is elegant: old samples automatically decay without explicit pruning, and the formula remains O(1) in memory and computation. TCP implementations use the same principle for DevRTT (variance), tuning both to balance responsiveness and robustness.


Problem 38: TCP File Transfer with Flow and Congestion Control

Consider a TCP client that wants to establish a connection to a TCP server and send a file of size 8192 bytes. Assume that the RTT between the client and the server is 20 ms, the MSS is 1024 bytes, and the server’s receive buffer size is 3072 bytes. Further assume that the initial sequence number selected by the client during connection establishment is 4099, and that no packets are lost during transmission.


Problem 39: TCP Segment Loss and Acknowledgement Numbers

Suppose host A sends four TCP segments back-to-back to host B over a TCP connection. The first segment has sequence number 90; the second has sequence number 110, the third has sequence number 170 and the fourth has sequence number 250.