06 - Socket Programming
Problem 1: Short Answer Questions
-
(a) Describe the service model exported by UDP. Write down the name of 2 application layer protocols that make use of UDP.
In short: UDP exports a connectionless, unreliable datagram service with no ordering guarantees or flow control. Each message is independent and may be lost, duplicated, or arrive out of order. DNS and TFTP are two protocols that use UDP.
Elaboration:
UDP Service Model:
Characteristics:
- Connectionless: no setup/teardown; send immediately
- Unreliable: packets may be lost
- Unordered: messages may arrive out of order
- No flow control: sender is not told if the receiver is overwhelmed
- No congestion control: sender ignores network conditions
- Low overhead: minimal 8-byte header
- Low latency: no waiting for connection establishment
- Datagram-oriented: each send() = one complete message

Message Delivery Guarantees:
Application sends 3 datagrams:
    Send: Message 1
    Send: Message 2
    Send: Message 3

Possible outcomes:
1. Receive: 1, 2, 3 (all arrive in order)
2. Receive: 1, 3 (message 2 lost)
3. Receive: 2, 1, 3 (out of order)
4. Receive: 1, 1, 3 (message 1 duplicated)
5. Receive: nothing (all lost)
6. Receive: 3 (only the last one)

All are valid UDP behaviors! The application must handle every case.

Two UDP Protocols:
1. DNS (Domain Name System)
Why UDP?
- Simple query/response protocol: a single request expects a single response
- If no response: time out and retry, possibly with a different server
- Speed matters: DNS queries are frequent
- Low bandwidth: queries and responses are small (~200 bytes typically)
- TCP's 3-packet handshake would add a full RTT of overhead per query

DNS query flow:
1. Client sends DNS query (UDP, port 53)
2. Server responds (UDP)
3. Done

No connection setup: fast. Loss tolerance: retry mechanism in the client.

2. TFTP (Trivial File Transfer Protocol)

Why UDP?
- Simple file transfer protocol
- Designed for embedded systems with limited resources
- Minimal overhead
- Retransmission on timeout
- Each block (~512 bytes) is independent

TFTP flow:
1. Client sends READ/WRITE request (UDP, port 69)
2. Server responds from a random port
3. Exchange data blocks with ACKs
4. If a block is lost: the sender retransmits

Loss is handled at the application layer, so TFTP works over very simple networks.

Other Common UDP Protocols:
- NTP: Network Time Protocol (clock synchronization)
- SNMP: Simple Network Management Protocol (monitoring)
- DHCP: Dynamic Host Configuration Protocol (IP assignment)
- VoIP: Skype, Zoom (real-time audio/video)
- Online Games: Quick, low-latency communication
- Streaming: Video/audio streaming (loss tolerance)
Conclusion:
UDP provides a connectionless, unreliable datagram service with minimal overhead. Applications using UDP must handle packet loss, duplication, and reordering. DNS and TFTP are classic examples that benefit from UDP’s low latency and simple operation, accepting unreliability as a trade-off for speed.
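To make the datagram model concrete, here is a minimal Python sketch (the peer address is a hypothetical placeholder): each sendto() puts exactly one datagram on the wire, and any subset of them may simply never come back.

import socket

# Hypothetical peer; address and port are placeholders for illustration
PEER = ("127.0.0.1", 9999)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)   # don't block forever: datagrams may simply be lost

for i in range(3):
    msg = f"Message {i + 1}".encode()
    sock.sendto(msg, PEER)   # one sendto() = one complete datagram

try:
    while True:
        data, addr = sock.recvfrom(2048)   # returns one whole datagram
        print(f"got {data!r} from {addr}")
except OSError:
    # Timeout (or ICMP "port unreachable" if nothing is listening):
    # a perfectly legal UDP outcome -- some or all replies never arrive
    print("no (more) replies; the rest were lost or never sent back")
finally:
    sock.close()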
-
(b) Write down the service model exported by TCP. Write down the name of 2 application layer protocols that make use of TCP and why they use TCP instead of UDP.
In short: TCP exports a connection-oriented, reliable, ordered byte-stream service with flow control and congestion control. HTTP and SMTP are two protocols requiring TCP’s reliability guarantees because they handle important data (web pages, emails) that must arrive completely and in order without loss.
Elaboration:
TCP Service Model:
Characteristics:
- Connection-oriented: 3-way handshake setup, graceful close
- Reliable: all data delivered exactly once (no loss, no duplication)
- Ordered: bytes arrive in the same order they were sent
- Flow control: receiver tells the sender how much buffer space it has
- Congestion control: sender adapts to network conditions
- Byte-stream oriented: application writes bytes, receives bytes
- Higher overhead: 20-byte header, plus a 3-segment handshake (~1 RTT) before data flows
- Error detection: checksums verify data integrity

Guaranteed Properties:

Application sends: "Hello World"

TCP guarantees:
1. ALL bytes arrive (no loss)
2. No duplicates (each byte delivered once)
3. In order (H-e-l-l-o-space-W-o-r-l-d)
4. The application never sees partial data or corruption

If TCP detects:
- Loss: retransmit
- Out of order: buffer and reorder
- Corruption: drop; retransmission recovers it

The application is unaware of these issues.

Two TCP Protocols:
1. HTTP (HyperText Transfer Protocol)
Why TCP instead of UDP?

Reason 1: Data integrity is critical
- Downloading a web page with images: loss of even one byte corrupts the data
- An image file missing bytes is unrenderable; a partial webpage is useless
- HTTP/1.0 used one request per connection; individual images can't be selectively retried

Reason 2: Large, variable-sized messages
- A webpage is several KB to MB
- UDP has a ~64 KB limit per datagram
- Would need to fragment at the application layer
- TCP handles segmentation transparently

Reason 3: User expectation
- "Click link → page loads completely"
- No data loss is tolerated, so HTTP relies on TCP's reliability

Example: a client downloads a 500 KB webpage.
UDP: would need hundreds of separate datagrams; losing even one forces a retry. Inefficient.
TCP: a single stream; all 500 KB arrives reliably, with lost packets retransmitted invisibly.

2. SMTP (Simple Mail Transfer Protocol)

Why TCP instead of UDP?

Reason 1: Message integrity is essential
- Email loss is unacceptable; "Sent" means reliable delivery
- Recipients expect complete messages, including attachments

Reason 2: Guaranteed delivery semantics
- SMTP tracks delivery status: "250 OK" = message accepted; the server stores it for retry
- With UDP, no such guarantee is possible; the sender can't tell if the email reached the server

Reason 3: Error detection and recovery
- TCP: if a packet is lost, automatic retransmit
- SMTP: the application layer can detect failures and retry with a different server if needed

Example: a client sends a 5 MB email with attachments.
UDP: would fragment into many datagrams; losing one means retrying everything. Unacceptable for email.
TCP: transparent, reliable transmission. The application gets "250 OK" = safe to delete the local copy.

Comparison:
Aspect             | UDP            | TCP
-------------------|----------------|--------------------
Connection         | Connectionless | Connection-oriented
Reliability        | Unreliable     | Reliable
Order              | Unordered      | Ordered
Flow control       | None           | Yes (window)
Congestion control | None           | Yes (AIMD)
Error handling     | App layer      | TCP layer
Handshake          | None           | 3-way
Data size          | Per datagram   | Byte stream

More TCP Protocols:
- FTP: File transfer (reliability critical)
- Telnet: Remote login (ordered interaction)
- SSH: Secure shell (data integrity required)
- POP3/IMAP: Email retrieval (data loss intolerable)
Conclusion:
TCP provides connection-oriented, reliable, ordered byte-stream delivery with flow and congestion control. HTTP uses TCP because web page integrity is critical—pages can be large, multi-part, and losing even one byte breaks the page. SMTP uses TCP because email delivery must be guaranteed and errors must be detectable. Both protocols require reliability that UDP cannot provide.
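Because TCP delivers a byte stream rather than messages, a reader must loop until it has all the bytes it expects; recv() may legally return fewer. A minimal Python sketch of the read_exact helper that the pseudocode in Problems 4 and 8 assumes:

import socket

def read_exact(sock, n):
    """Read exactly n bytes from a TCP socket, looping over partial recv()s."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)   # may return fewer bytes than requested
        if not chunk:                  # peer closed the connection
            raise ConnectionError("connection closed before all bytes arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)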
-
(c) What’s the maximum size of user data that can be sent over TCP with a single send() operation? Justify your answer.
In short: There is no fixed maximum enforced by TCP for a single send() call. send() can be handed megabytes of data, and TCP will segment it into appropriately sized pieces. The actual limit depends on available memory and system buffer sizes, not on the TCP protocol.
Elaboration:
No Protocol-Level Limit:
TCP allows send() with an arbitrary amount of data:

    send(socket, buffer, 1000000);    // 1 MB
    send(socket, buffer, 100000000);  // 100 MB

Both are valid and will work; TCP doesn't reject a send() based on size.

How TCP Handles Large Sends:

Application calls: send(sock, large_buffer, 1000000)

TCP does:
1. Accepts the 1 MB request
2. Segments it into MSS-sized chunks
   - MSS (Maximum Segment Size): typically 1460 bytes
   - 1,000,000 / 1460 ≈ 685 segments
3. Sends segments as the network allows:
   - Respects the congestion window
   - Respects the receiver's advertised window
   - Paces packets according to congestion control
4. Returns to the application once the data is queued in the TCP send buffer

The application doesn't wait for all 685 segments, just for buffer space.

Practical Limits:
Limit 1: TCP send buffer size
- Default: 64 KB to 2 MB (OS dependent); can be increased with setsockopt()
- send() buffers data in the kernel, so one call can't queue more than the buffer holds

Limit 2: Available memory
- The system has finite RAM and can't allocate unbounded buffers
- A very large send() may fail (ENOMEM)

Limit 3: Receiver's advertised window
- The receiver tells the sender: "I have X bytes of buffer"
- The sender won't transmit more than this, but send() doesn't fail; it just
  waits for the receiver to read data

No limit comes from the TCP protocol itself.

Why No Protocol Limit?

The per-packet limit comes from the IP header's 16-bit Total Length field:

    Total Length: 16 bits → max 65,535 bytes per IP packet

But the TCP payload in each IP packet is limited by the MTU:

    Ethernet MTU: 1500 bytes
    IP header:      20 bytes
    TCP header:     20 bytes
    TCP payload: 1500 - 20 - 20 = 1460 bytes (MSS)

So each packet carries ~1460 bytes, but a send() call isn't limited to one
packet: TCP segments the data across as many packets as needed.

Therefore: no limit on the send() call size, only on the individual segment size (MSS).

Example:
// Send 10 MB of data
char buffer[10 * 1024 * 1024];
// ... fill buffer ...
int bytes_sent = send(sock, buffer, 10*1024*1024, 0);

What happens:
1. TCP accepts the request
2. Buffers as much as fits in the send buffer
3. Returns the number of bytes buffered
4. The application can call send() again for the remaining data
5. TCP segments the stream into ~6850 segments (1460 bytes each)
6. Transmits segments, respecting flow/congestion control

Total time depends on the network, not on the send() call.

send() Return Value:

send() returns the number of bytes BUFFERED (not yet transmitted).

Example (a non-blocking socket, or a blocking send() interrupted partway):

    send(sock, 1MB_buffer, 1000000, 0);
    Returns: 65536 (the free space in the send buffer)

This means 65,536 bytes were queued in TCP; the remaining 934,464 bytes were
not. The application must call send() again, or wait for buffer space (a
blocking socket would instead sleep until space opens up). So send() may not
accept all the data requested, and the application must loop:

    int total_sent = 0;
    while (total_sent < data_size) {
        int n = send(sock, ptr + total_sent, data_size - total_sent, 0);
        if (n < 0) error();
        total_sent += n;
    }

Conclusion:
TCP has no protocol-level limit on send() data size. The practical limits are the TCP send buffer (typically 64 KB-2 MB) and available system memory. TCP internally fragments large sends into segments of MSS size (~1460 bytes) and transmits them according to congestion control. Applications may need to call send() multiple times for very large data, as send() returns only the amount buffered, not the amount requested.
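The same partial-send concern exists in Python: socket.send() may queue fewer bytes than given, while socket.sendall() performs the retry loop internally. A sketch of what sendall() effectively does:

import socket

def send_all_manually(sock, data):
    """What sock.sendall(data) does internally: loop until all bytes are queued."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])   # may queue only part of the buffer
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent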
-
(d) Assume that you create a UDP socket. Can you send DNS queries to two different DNS servers using this socket? Justify your answer. Assume now you create a TCP socket. Can you send queries to two different DNS servers using this socket? Justify your answer.
In short: YES for UDP—a single UDP socket can send datagrams to multiple different servers by calling sendto() with different destination addresses. NO for TCP—a TCP socket connects to exactly one server, and must be closed and recreated to connect to a different server.
Elaboration:
UDP Socket with Multiple Servers:
UDP is connectionless: each sendto() specifies the destination.

Code:

    socket_fd = socket(AF_INET, SOCK_DGRAM, 0);

    // Query Server 1
    struct sockaddr_in server1;
    server1.sin_family = AF_INET;
    server1.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server1.sin_addr);
    sendto(socket_fd, query, query_len, 0,
           (struct sockaddr*)&server1, sizeof(server1));

    // Query Server 2
    struct sockaddr_in server2;
    server2.sin_family = AF_INET;
    server2.sin_port = htons(53);
    inet_pton(AF_INET, "1.1.1.1", &server2.sin_addr);
    sendto(socket_fd, query, query_len, 0,
           (struct sockaddr*)&server2, sizeof(server2));

    // Both work! Same socket, different addresses

Why UDP Allows This:

UDP characteristics:
- No connection state; each datagram is independent
- The destination is specified per send
- The socket is just an endpoint

The socket can:
1. Send to any address
2. Receive from any address
3. Change addresses per packet
4. Skip setup/teardown entirely

Example flow (all on the same socket):
    Client → Server1: Query A
    Client ← Server1: Response A
    Client → Server2: Query B
    Client ← Server2: Response B

DNS Example with UDP:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

dns_servers = [
    ("8.8.8.8", 53),        # Google DNS
    ("1.1.1.1", 53),        # Cloudflare DNS
    ("208.67.222.123", 53)  # OpenDNS
]

query = build_dns_query("example.com")

for server, port in dns_servers:
    # Send to a different server, same socket
    sock.sendto(query, (server, port))

# Receive responses
for _ in dns_servers:
    data, addr = sock.recvfrom(512)
    print(f"Response from {addr[0]}")

sock.close()

TCP Socket with Multiple Servers:
TCP is connection-oriented: a socket connects to exactly ONE server.

Code:

    socket_fd = socket(AF_INET, SOCK_STREAM, 0);

    // Connect to Server 1
    struct sockaddr_in server1;
    server1.sin_family = AF_INET;
    server1.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server1.sin_addr);
    connect(socket_fd, (struct sockaddr*)&server1, sizeof(server1));
    // Now connected to 8.8.8.8

    // Send query to Server 1
    send(socket_fd, query, query_len, 0);
    recv(socket_fd, response, response_len, 0);

    // To talk to Server 2: must close first!
    close(socket_fd);

    // Create a NEW socket
    socket_fd = socket(AF_INET, SOCK_STREAM, 0);

    // Connect to Server 2
    struct sockaddr_in server2;
    server2.sin_family = AF_INET;
    server2.sin_port = htons(53);
    inet_pton(AF_INET, "1.1.1.1", &server2.sin_addr);
    connect(socket_fd, (struct sockaddr*)&server2, sizeof(server2));
    // Now connected to 1.1.1.1

    send(socket_fd, query, query_len, 0);
    recv(socket_fd, response, response_len, 0);
    close(socket_fd);

Why TCP Can't:

A TCP connection is identified by the 4-tuple
(Source IP, Source Port, Dest IP, Dest Port).

Once connected:
- The destination is fixed and can't change mid-connection
- Data goes through send(), not sendto()

To reach a different server:
- Close the connection to the first server
- Create a new socket
- Establish a new connection (3-way handshake)

Multiple servers therefore require multiple sockets.

TCP Example (Multiple Sockets):

import socket

dns_servers = [
    ("8.8.8.8", 53),
    ("1.1.1.1", 53)
]

for server, port in dns_servers:
    # Create a NEW socket for each server
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Connect to this server
    sock.connect((server, port))

    # Send query
    query = build_dns_query("example.com")
    sock.send(query)

    # Receive response
    response = sock.recv(512)
    print(f"Response from {server}")

    # Close this connection
    sock.close()
    # Can't reuse the socket for a different server!

Why DNS Uses UDP:

DNS is a simple request/response protocol.

With UDP:
- One socket serves multiple servers
- No per-server setup overhead
- Stateless

With TCP (if DNS always used it):
- Multiple sockets would be needed
- Each requires a 3-way handshake
- Setup overhead is too high for a simple query/response

This is why DNS prefers UDP despite its unreliability: loss is handled with retries.

Conclusion:
UDP socket CAN send queries to multiple DNS servers using a single socket. The sendto() call specifies the destination each time, and the socket remains connectionless. TCP socket CANNOT send to multiple servers with a single socket because TCP is connection-oriented—one socket = one connection = one server. To query multiple servers with TCP requires creating new sockets and establishing new connections for each server, which is inefficient and impractical for DNS.
-
(e) What’s the maximum size of user data that can be sent over UDP? Justify your answer.
In short: UDP has a practical maximum of approximately 65,507 bytes per datagram. This limit comes from the 16-bit length field in the IP header (65,535 bytes total) minus the IP header (20 bytes) and UDP header (8 bytes). Larger messages must be fragmented at the application layer.
Elaboration:
UDP Datagram Size Limit:
UDP is datagram-oriented: each send() = one complete datagram.

Maximum datagram payload:
    = max IP packet size - IP header - UDP header
    = 65,535 - 20 - 8
    = 65,507 bytes of user data

Why This Limit?

IP header structure:

    [Version: 4 bits][Header Len: 4 bits][Type of Service: 8 bits]
    [Total Length: 16 bits]  ← THIS FIELD
    [Identification: 16 bits][Flags: 3 bits][Fragment Offset: 13 bits]
    [TTL: 8 bits][Protocol: 8 bits][Checksum: 16 bits]
    [Source IP: 32 bits]
    [Destination IP: 32 bits]

The Total Length field:
- 16 bits → max value 2^16 - 1 = 65,535
- Covers the entire IP packet (header + data)

Calculation:
    IP Total Length = 65,535 bytes
    IP header       =     20 bytes (minimum)
    IP payload      = 65,535 - 20 = 65,515 bytes
    UDP header      =      8 bytes
    User data       = 65,515 - 8 = 65,507 bytes

Practical Limit: Path MTU
Theoretical maximum: 65,507 bytes. But the network has an MTU (Maximum Transmission Unit).

Typical MTU values:
- Ethernet: 1,500 bytes (most common)
- WiFi: 1,500 bytes
- PPP: 576 bytes
- Loopback: 65,535 bytes

If a UDP datagram exceeds the MTU:
- IP fragmentation occurs: one datagram → multiple IP fragments
- Each fragment travels separately; the receiver reassembles them

The problem with fragmentation:
- If one fragment is lost, the entire datagram is lost
- There is no per-fragment retransmission
- The receiver's reassembly timer eventually discards the partial datagram
- Inefficient

Example with Fragmentation:

Send a 3000-byte UDP datagram over Ethernet (MTU 1500):

    UDP datagram: 3000 bytes

    IP fragments:
    Fragment 1: [IP header: 20][Data: 1480 bytes][Frag offset: 0]
    Fragment 2: [IP header: 20][Data: 1480 bytes][Frag offset: 1480]
    Fragment 3: [IP header: 20][Data: 40 bytes]  [Frag offset: 2960]

    Network:
    Fragment 1 → arrives
    Fragment 2 → lost
    Fragment 3 → arrives

    Receiver:
    Has 1480 + 40 bytes, missing the middle fragment
    Reassembly timer expires; fragments discarded
    Datagram lost! The application receives nothing, and UDP reports no error.

Why Not Just Fragment at the Transport Layer?
UDP doesn't provide fragmentation: "send what you give me, or fail."

If data > 65,507 bytes: sendto() returns an error (EMSGSIZE on most systems;
exact behavior is OS-dependent).

The application must instead:
1. Split the data into smaller messages
2. Add sequence numbers
3. Handle reassembly
4. Detect loss

This is why TCP is preferred for large data: TCP handles segmentation transparently.

Best Practices:

Safe UDP datagram sizes:
- With IPv4: up to 65,507 bytes in theory
- Over typical networks (MTU 1500): limit to 1,472 bytes to avoid fragmentation
  (1500 - 20 IP - 8 UDP = 1472)
- Conservative choice: 512 bytes (very safe)
- Better still: use path MTU discovery to let the network determine the optimal size

Example, DNS:
- Queries: ~50-200 bytes (always safe)
- Responses: ~200-512 bytes (usually safe)
- TCP fallback if > 512 bytes

Checking Datagram Size:
// Try to send a large datagram
char data[70000];
int n = sendto(sock, data, 70000, 0, &addr, addr_len);

Possible outcomes:
1. Fails: returns -1, errno = EMSGSIZE (message too large for the transport)
2. A datagram that still fits under 65,507 bytes succeeds, but IP will fragment
   it over a smaller MTU: risky, since any lost fragment = lost datagram

Better: use path MTU discovery (e.g., the IP_MTU_DISCOVER socket option on
Linux) to size datagrams to the path.

Conclusion:
UDP maximum datagram size is 65,507 bytes of user data, determined by the 16-bit length field in the IP header (65,535 bytes total minus 20-byte IP header and 8-byte UDP header). However, practical limits are much smaller due to network MTU (typically 1,500 bytes). Datagrams larger than MTU are fragmented by IP, and loss of any fragment causes loss of the entire datagram. Applications should limit UDP messages to avoid fragmentation, typically to 512 bytes or the estimated path MTU minus headers.
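A quick way to observe the limit in Python (the address is a placeholder; the exact errno and behavior are OS-dependent):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # 70,000 bytes exceeds the 65,507-byte UDP payload maximum
    sock.sendto(b"x" * 70000, ("127.0.0.1", 9999))
except OSError as e:
    # Typically errno EMSGSIZE ("Message too long")
    print(f"send rejected: {e}")
finally:
    sock.close()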
-
(f) Consider two hosts A and B attached to the same link (with no intervening router). An application needs to send 1000 messages from A to B over UDP. A programmer implements this operation as follows: S/he writes a for loop, and simply dumps 1000 packets to the network as fast as possible. The receiver application reports that at least 20% of the messages have not arrived at it. Describe what might be happening here?
In short: The sender is flooding the network faster than the receiver can process packets, causing the receiver’s buffer to overflow and packets to be dropped. Additionally, the sender’s buffer may overflow, the network interface may have limits, and the receiver’s kernel may not keep up with processing the incoming packet stream.
Elaboration:
Root Cause: Receiver Buffer Overflow:
Sender behavior:

    for (int i = 0; i < 1000; i++) {
        sendto(sock, data, len, 0, &addr, addr_len);
    }

This sends 1000 packets IMMEDIATELY:
- No delay between packets
- Sends as fast as the socket allows
- Doesn't wait for any receiver response

Receiver side:
- Packets arrive at the receiver's NIC (network interface card)
- They are buffered in the kernel receive buffer
- The application reads at its own pace

Problem: if packets arrive faster than the application reads them:
- The kernel buffer fills up
- Newly arriving packets are dropped
- The application is unaware (UDP gives no error report!)

Detailed Packet Flow:

Time T0: the sender transmits packets 1-1000 back to back (~microseconds
apart). Over a fast link (e.g., 1 Gbps), all 1000 packets arrive at the
receiver within ~10 milliseconds.

Receiver kernel:
- Buffer size: typically 128-256 KB
- Each UDP packet: ~100-1000 bytes
- Can buffer: roughly 128-256 packets

What happens:
- Packets 1-128 arrive → buffered in the kernel
- Packets 129-1000 arrive → buffer full → DROPPED

Receiver application:
- Reads far more slowly than the packets arrive
- By the time it drains the buffer, the burst is long over
- It receives only the ~128 packets that fit; the rest were dropped

Result: ~128 of 1000 packets received. Success rate: ~13% (87% dropped!)

Issue 1: Kernel Receive Buffer Size
Linux socket buffer sizes:
- Default: ~128 KB (can vary); maximum settable is much larger
- A 128 KB buffer holds only ~128 datagrams before overflow

If 1000 packets are sent in 10 ms (100 datagrams/ms) but the kernel drains the
buffer only as fast as the application reads, everything beyond the first ~128
packets is lost.

Issue 2: Receiver Application Processing Delay

Application loop:

    while (1) {
        recvfrom(sock, buf, size, 0, &addr, &addr_len);
        // Process message
        // ... maybe print, database write, etc ...
        // Takes 1-10 ms per packet
    }

If processing takes 1 ms per packet, the receiver handles 1,000 packets/second.
The sender transmits 1000 packets in 10 ms: a rate of 100,000 packets/second.
The receiver cannot keep up, and even with zero processing time the kernel
buffer limit alone drops most of the burst.

Issue 3: NIC Hardware Buffering

Before reaching the kernel, packets sit in the NIC's own small buffer. If the
NIC can't hand packets off to the kernel fast enough, it drops them before the
kernel ever sees them: another layer of dropping on top of the kernel buffer.

Issue 4: Interrupt Handling

Each arriving packet triggers an interrupt ("packet arrived"); the kernel
handles the interrupt, copies the packet into the buffer, and possibly wakes
the application. If packets arrive too fast, interrupt coalescing groups
interrupts, the kernel may fall behind, and packets arriving during interrupt
processing may be dropped by the NIC.

Issue 5: Lack of Flow Control

UDP is "fire and forget":

    while (packets_to_send--) {
        sendto(...);
    }

There is no feedback to the sender: it doesn't know the receiver is
struggling, doesn't slow down, and has neither congestion control nor flow
control. TCP, by contrast, would wait for ACKs, track the receiver's window,
automatically slow down, and retransmit, so all packets would arrive.

Demonstration with Numbers:
Scenario:
- 1000 messages of 100 bytes each
- Sender loop with no delays
- Receiver application processes 10 packets/second

Timeline:

T = 0 ms:    Sender starts; all 1000 datagrams are pushed through the UDP
             socket and begin arriving at the receiver.
T = 10 ms:   All 1000 packets have arrived at the receiver's NIC.
             The kernel has buffered ~128 packets at most; ~872 packets DROPPED.
T > 10 ms:   The receiver application reads at 10 packets/second and
             eventually drains the ~128 buffered packets.

Success rate: 128/1000 = 12.8%. This matches the problem statement ("at least
20% didn't arrive") - in fact it is worse than stated.

Solutions:
1. Increase the receive buffer size
       setsockopt(sock, SOL_SOCKET, SO_RCVBUF, ...)
   Helps, but is not a complete solution.

2. Add delays in the sender
       for (int i = 0; i < 1000; i++) {
           sendto(...);
           usleep(100);  // 100 microsecond delay
       }
   The receiver can now keep up.

3. Rate-limit the sender to match the receiver's processing speed.

4. Use TCP instead: automatic flow control, guaranteed delivery, no drops.

5. Implement application-level ACKs: the receiver ACKs each packet and the
   sender waits for the ACK before sending the next. (But now you're
   reimplementing TCP!)

Why This Matters:

UDP is "best effort" delivery:
- No guarantee that all packets arrive
- No feedback to the sender
- If the network or receiver is overwhelmed, packets are lost

Developers must understand:
- You can't just "dump" data and expect it to work
- The sender's rate must match the receiver's rate
- Use larger buffers or flow control, or switch to TCP

Conclusion:
The 20% packet loss is likely caused by receiver buffer overflow. When the sender floods 1000 packets in milliseconds and the receiver application processes them slowly (or with delays), the kernel’s receive buffer (typically 128 KB ≈ 128 packets) fills up and subsequent packets are silently dropped by the kernel or NIC. UDP provides no flow control or feedback, so the sender is unaware and continues sending. Solutions include increasing buffer size, adding delays in the sender, or using TCP for reliable delivery with automatic flow control.
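A hedged Python sketch of solutions 1 and 2 above: the receiver enlarges its kernel buffer and the sender paces itself. The 1 ms interval and 4 MB buffer are illustrative values, not tuned recommendations, and the two fragments would normally live in separate programs.

import socket
import time

ADDR = ("10.10.100.180", 30000)   # addresses from the problem statement

# Receiver side (fix 1): ask the kernel for a larger receive buffer
# before the burst arrives (the kernel may cap or adjust the value)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
rx.bind(ADDR)

# Sender side (fix 2): pace transmissions instead of dumping them
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 100
for _ in range(1000):
    tx.sendto(payload, ADDR)
    time.sleep(0.001)   # ~1 ms between datagrams so the receiver keeps up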
-
(g) Is it possible to implement a multicasting application over TCP? Why or why not?
In short: NO. TCP cannot be used for multicasting because TCP is a point-to-point, connection-oriented protocol that establishes connections between exactly one sender and one receiver. Multicasting requires one sender to reach multiple receivers simultaneously, which TCP’s architecture fundamentally does not support.
Elaboration:
TCP’s Point-to-Point Nature:
A TCP connection: one sender ←→ one receiver.

A connection is identified by the tuple
(Source IP, Source Port, Dest IP, Dest Port, Protocol), for example:

    (192.168.1.1, 5000, 10.0.0.2, 80, TCP)

This connects to ONE specific destination (10.0.0.2); it cannot connect to
multiple destinations.

What Multicasting Is:

Multicast model: one sender → many receivers.

Example: a video stream from a server to 1000 clients. The server sends once
and all 1000 clients receive the same stream.

Network efficiency:
- The server sends one packet
- The network duplicates it as needed
- Every subscribed client receives a copy

Like a TV broadcast versus a phone call.

Why TCP Can't Do This:
Reason 1: Connection Semantics
TCP requires:
1. A three-way handshake (SYN, SYN-ACK, ACK)
2. Establishing a connection with ONE peer
3. Sending data through that connection
4. Receiving ACKs from that ONE peer

Multicast "via TCP" would mean:
    Server connects to Receiver1
    Server connects to Receiver2
    ...
    Server connects to Receiver1000

Result: 1000 separate TCP connections
- 1000 handshakes = massive overhead
- 1000 windows to manage = complex
- 1000 retransmission streams = inefficient

This is not multicasting anymore; it's unicasting 1000 times, which defeats
the purpose.

Reason 2: Unicast vs Multicast Addresses

Unicast addresses (TCP):
- 192.168.1.1 (a specific host)
- 10.0.0.2 (a specific host)
- Each address uniquely identifies one device

Multicast addresses (UDP):
- 224.0.0.0 to 239.255.255.255 (Class D)
- 224.0.0.1 (all hosts on the subnet)
- 239.255.255.255 (site-local scope)
- One address represents multiple hosts

TCP requires unicast addressing; it cannot work with multicast groups.

Reason 3: ACK and Ordering Requirements

TCP guarantees in-order, reliable delivery (every byte ACKed).

Multicast scenario: the server sends to 1000 clients. Clients 1-999 ACK
immediately; client 1000 lost packets due to network congestion. What should
happen?
- Retransmit for client 1000? Clients 1-999 already have the data.
- TCP can't resend to just one member of a group; it would have to resend to all 1000.

Inefficient, and it violates multicast semantics.

Attempted Workarounds (All Bad):
Workaround 1: Multiple TCP Connections
The server opens a TCP connection to each multicast recipient.
Problems:
- 1000 recipients = 1000 connections
- 3000 handshake packets minimum
- Per-connection overhead exhausts server resources
- Not true multicast (unicast 1000 times)

Workaround 2: Central Relay
The server sends to a central relay via TCP; the relay broadcasts to all via UDP.
Problems:
- The relay becomes a bottleneck and adds latency
- Defeats the purpose: UDP still does the actual broadcast

Workaround 3: Application-Layer Multicast
The application implements multicast in software.
Problems:
- Reinvents TCP semantics for multicast; inefficient and complex
- The network doesn't help (no multicast support); packet duplication happens
  at the application layer

Why UDP Is Used for Multicast:

UDP characteristics enable multicast:
- No connection state; stateless
- No ACKs to coordinate, no per-destination flow control
- Send one packet and let the network duplicate it

Multicast addresses:
- The kernel joins a multicast group and receives all packets sent to that group
- The sender sends once; the network replicates as needed

Example:
    sendto(sock, data, len, 0, &multicast_addr, ...)
    Multicast address: 239.255.255.1
    The network sees one packet addressed to the multicast group and replicates
    it to all subscribers: one transmission, many receivers.

Multicast Example (UDP):
import socket
import struct

# Multicast group address
MCAST_GRP = '239.255.255.1'
MCAST_PORT = 5007

# Sender
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(b'Hello Multicast', (MCAST_GRP, MCAST_PORT))

# Receivers (any number of them)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', MCAST_PORT))
group_bin = socket.inet_aton(MCAST_GRP)
mreq = struct.pack('4sL', group_bin, socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, addr = sock.recvfrom(1024)
# All receivers get the same packet

When You Might Wrongly Think TCP:

Scenario: you need to send data to multiple clients.

Common (wrong) solution: open a TCP connection to each client and send the
data to all of them. This isn't multicast:
- 1000 clients = 1000 connections
- The server sends 1000 times and the network carries 1000 copies
- Nothing is network-assisted

Correct multicast approach: use a UDP multicast address; clients join the
group; the server sends once and the network duplicates. Much more efficient.

Conclusion:
NO. TCP cannot implement multicasting because it is strictly point-to-point—a TCP connection exists between exactly one sender and exactly one receiver. Multicast requires one sender to reach many receivers, which would require either multiple TCP connections (defeating multicast efficiency) or reinventing the protocol. UDP, being connectionless and stateless, is the correct choice for multicast applications, allowing the network to efficiently replicate packets to all group members.
-
(h) In your DNS client project you used blocking UDP sockets and assumed that a reply from the server always comes back. We know that UDP is not reliable, and packets sent over UDP might get lost. Describe how you would have changed your code to implement the following: Your client sends the request and waits for a reply from the server for 5 seconds. If a reply arrives within 5 seconds, you print it on the screen. If no reply arrives for 5 seconds, your client wakes up and prints an error message.
In short: Set a socket timeout using setsockopt() with SO_RCVTIMEO (receive timeout), or use select()/poll() to monitor the socket with a timeout. When recvfrom() returns EAGAIN/EWOULDBLOCK (timeout), print an error. Alternatively, use non-blocking sockets with select() to monitor multiple sockets.
Elaboration:
Approach 1: Socket Timeout (Simplest)
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main() {
    int sock;
    struct sockaddr_in server_addr, client_addr;
    struct timeval timeout;
    char buffer[512];
    int n;

    // Create UDP socket
    sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Set 5-second receive timeout
    timeout.tv_sec = 5;   // 5 seconds
    timeout.tv_usec = 0;  // 0 microseconds
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout, sizeof(timeout));

    // Send DNS query (query/query_len prepared elsewhere)
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server_addr.sin_addr);
    sendto(sock, query, query_len, 0,
           (struct sockaddr*)&server_addr, sizeof(server_addr));

    // Try to receive with timeout
    socklen_t addr_len = sizeof(client_addr);
    n = recvfrom(sock, buffer, sizeof(buffer), 0,
                 (struct sockaddr*)&client_addr, &addr_len);

    if (n < 0) {
        // Check error type
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            printf("Error: No reply from server after 5 seconds\n");
        } else {
            perror("recvfrom");
        }
    } else {
        printf("Received response: %s\n", buffer);
    }

    close(sock);
    return 0;
}

How Socket Timeout Works:
Without a timeout:
    recvfrom() call
    ↓
    Blocks indefinitely waiting for a packet
    ↓
    Packet arrives ... or never does

With SO_RCVTIMEO:
    recvfrom() call
    ↓
    Waits up to 5 seconds
    ↓
    Packet arrives: returns the data (before the timeout)
    OR 5 seconds elapse: returns -1, errno = EAGAIN

Approach 2: Using select() (More Control)
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int sock;
    struct sockaddr_in server_addr;
    struct timeval timeout;
    fd_set readfds;
    char buffer[512];
    int n;

    // Create socket
    sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Send query (query/query_len prepared elsewhere)
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server_addr.sin_addr);
    sendto(sock, query, query_len, 0,
           (struct sockaddr*)&server_addr, sizeof(server_addr));

    // Set up select() timeout
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    // Monitor the socket for readability
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    // Wait for the socket to become readable, or time out
    int activity = select(sock + 1, &readfds, NULL, NULL, &timeout);

    if (activity < 0) {
        perror("select");
    } else if (activity == 0) {
        // Timeout occurred
        printf("Error: No reply from server after 5 seconds\n");
    } else if (FD_ISSET(sock, &readfds)) {
        // Socket is readable
        struct sockaddr_in client_addr;
        socklen_t addr_len = sizeof(client_addr);
        n = recvfrom(sock, buffer, sizeof(buffer), 0,
                     (struct sockaddr*)&client_addr, &addr_len);
        printf("Received response: %s\n", buffer);
    }

    close(sock);
    return 0;
}

How select() Works:
select(nfds, readfds, writefds, exceptfds, timeout)

Returns:
- Positive: number of file descriptors ready
- 0: timeout occurred, no descriptors ready
- -1: error

Usage:
    FD_ZERO(&readfds);        // Clear the set
    FD_SET(sock, &readfds);   // Add the socket to the set
    timeout.tv_sec = 5;       // 5 seconds
    select(sock + 1, &readfds, NULL, NULL, &timeout);
    // returns 0 if the timeout elapsed, 1 if data is ready

Approach 3: poll() (Modern Alternative)
#include <poll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int sock;
    struct pollfd fds[1];
    int poll_timeout = 5000;  // milliseconds
    int poll_result;

    sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Send query
    // ...

    // Set up poll
    fds[0].fd = sock;
    fds[0].events = POLLIN;  // Interested in readable events

    // Wait for 5 seconds (5000 milliseconds)
    poll_result = poll(fds, 1, poll_timeout);

    if (poll_result < 0) {
        perror("poll");
    } else if (poll_result == 0) {
        // Timeout
        printf("Error: No reply from server after 5 seconds\n");
    } else if (fds[0].revents & POLLIN) {
        // Socket readable
        char buffer[512];
        struct sockaddr_in client_addr;
        socklen_t addr_len = sizeof(client_addr);
        int n = recvfrom(sock, buffer, sizeof(buffer), 0,
                         (struct sockaddr*)&client_addr, &addr_len);
        printf("Received response: %s\n", buffer);
    }

    close(sock);
    return 0;
}

Comparison of Approaches:
Approach    | Simplicity  | Control | Use Case
------------|-------------|---------|--------------------------------
SO_RCVTIMEO | Very simple | Limited | Single socket, simple timeout
select()    | Medium      | Good    | Multiple sockets, complex logic
poll()      | Medium      | Good    | Modern preference (more portable)

With Retry Logic:
// Try up to 3 times with a 5-second timeout each
int max_retries = 3;
int retry_count = 0;

while (retry_count < max_retries) {
    // Set timeout
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout, sizeof(timeout));

    // Send query
    sendto(sock, query, query_len, 0,
           (struct sockaddr*)&server_addr, sizeof(server_addr));

    // Try to receive
    struct sockaddr_in client_addr;
    socklen_t addr_len = sizeof(client_addr);
    int n = recvfrom(sock, buffer, sizeof(buffer), 0,
                     (struct sockaddr*)&client_addr, &addr_len);

    if (n > 0) {
        // Success
        printf("Received response: %s\n", buffer);
        break;
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        // Timeout
        retry_count++;
        if (retry_count < max_retries) {
            printf("Timeout, retrying (%d/%d)...\n", retry_count, max_retries);
        }
    } else {
        perror("recvfrom");
        break;
    }
}

if (retry_count >= max_retries) {
    printf("Error: No reply from server after %d retries\n", max_retries);
}

Python Implementation:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Set timeout to 5 seconds
sock.settimeout(5.0)

server_addr = ("8.8.8.8", 53)

try:
    # Send DNS query
    sock.sendto(query, server_addr)

    # Receive with timeout
    data, addr = sock.recvfrom(512)
    print(f"Received response: {data}")
except socket.timeout:
    print("Error: No reply from server after 5 seconds")
except Exception as e:
    print(f"Error: {e}")
finally:
    sock.close()

What Changes from the Original Code:
Original (blocking, no timeout):

    recvfrom(sock, buffer, sizeof(buffer), 0, ...);
    // Blocks forever waiting for a packet

Modified (with 5-second timeout):

    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
    int n = recvfrom(sock, buffer, sizeof(buffer), 0, ...);
    if (n < 0 && errno == EAGAIN) {
        // 5 seconds passed, no response
    }

Key Points:

1. SO_RCVTIMEO sets a receive timeout on the socket
2. select()/poll() are more flexible and can monitor multiple sockets
3. The call returns immediately if data arrives before the timeout
4. It reports a timeout if 5 seconds pass without data
5. The application detects the timeout and handles it
6. Typical pattern: retry, possibly with a different server, on timeout

Conclusion:
To implement a 5-second timeout for DNS queries, use setsockopt() with SO_RCVTIMEO to set the receive timeout, then check for EAGAIN/EWOULDBLOCK errors when recvfrom() returns. Alternatively, use select() or poll() for more control. Python’s socket.settimeout() provides similar functionality. This allows detecting when the server doesn’t respond and either retrying or printing an error message, making the client robust to packet loss inherent in UDP.
-
(i) One way to make a Web server unavailable is to send it a lot of TCP SYN packets with an invalid source IP address, called SYN flooding. Describe why this crashes the Web server?
In short: SYN flooding exploits TCP’s 3-way handshake. The attacker sends many SYN packets with spoofed source IPs. The server responds with SYN-ACK but the attacker never completes the handshake. The server keeps half-open connections in memory, exhausting the connection queue and preventing legitimate users from connecting.
Elaboration:
Normal TCP Connection (3-Way Handshake):
Client (legitimate)              Server
      |                             |
      | --------- SYN ----------→   |   seq=x
      |                             |
      | ←------ SYN-ACK ---------   |   seq=y, ack=x+1
      |                             |
      | --------- ACK ----------→   |   seq=x+1, ack=y+1
      |                             |
      Connection established

Time: ~1 RTT
Server state: ESTABLISHED; the connection is added to the accept queue and the
application can read from it.

SYN Flooding Attack:

The attacker sends SYNs with spoofed source IPs:

Attacker                         Server
      |                             |
      | --------- SYN ----------→   |   (src: fake)
      |                             |
      |    ←----- SYN-ACK -------   |   (goes to the fake IP, never delivered)
      |                             |
      The attacker never sends the ACK!

Server state: SYN received, waiting for ACK.
The connection sits in the HALF-OPEN state: memory is allocated for it, and
the server waits for the ACK or a timeout.

What Happens on the Server:
The server's TCP/IP stack, on receiving a SYN from client IP X:
1. Creates connection state (TCB - Transmission Control Block)
2. Allocates memory for the connection info
3. Sends SYN-ACK back to IP X
4. Moves to the SYN-RECEIVED state
5. Waits for the ACK to complete the handshake

No ACK arrives (the source IP is fake):
- The connection sits in the half-open state
- It waits for a timeout (typically 30-60 seconds)
- Memory stays allocated; the queue slot stays occupied

Resource Exhaustion:

The attacker sends 1000 SYN packets/second, each with a different spoofed
source IP.

The server's half-open connection queue has a typical limit of 128-256 entries.

Timeline:
T = 0 ms:    queue has 0 half-open connections
T = 100 ms:  queue has ~100 half-open connections
T = 150 ms:  queue FULL (the 128/256 limit is reached)
T = 200 ms:  a new legitimate client sends a SYN
             The server receives it, but the queue is FULL
             The server drops the SYN and never sends a SYN-ACK
             The legitimate client's handshake fails and it gives up
             The web server appears unavailable

Why Server Resources Exhaust:
Each half-open connection requires:
1. A TCB (Transmission Control Block): state variables, sequence numbers,
   window information - roughly 200-500 bytes per connection
2. Space in the half-open (SYN) queue
3. Kernel data structures

With a 1000 SYN/second attack:
    Arrival rate: 1000/sec
    Decay rate: ~30/sec (timeouts)
    Net queue growth: ~970/sec
    Queue limit: 128-256

The queue fills within a fraction of a second; new connections are rejected
and legitimate users cannot connect.

Timeline of Attack:

T = 0 s:      attacker starts flooding SYN packets; server state is normal
T = 0-1 s:    thousands of SYNs arrive; the half-open queue fills rapidly
T = 1-2 s:    queue full; new SYN packets are dropped
T = 2-10 s:   legitimate users try to connect; the server drops their SYNs,
              no SYN-ACKs come back, and the users see "connection refused"
              or a timeout after ~20-60 seconds
T = 30-60 s:  the first spoofed entries time out, freeing queue space, but
              the ongoing attack refills the queue immediately

Duration: as long as the attack continues, the server remains unavailable.

Why It's Effective:

Asymmetry of work.

The attacker sends:
- Simple SYN packets (very cheap to generate)
- With spoofed IPs (no return traffic to handle)
- Using only a few kbps of bandwidth

The server must:
- Receive each SYN and store state
- Send a SYN-ACK (wasted bandwidth)
- Allocate memory per connection
- Hold the state for a ~30-second timeout

Attacker cost: minimal. Server cost: maximal. A single ~40-byte SYN forces the
server to hold hundreds of bytes of state for tens of seconds, an enormous
amplification of effort.

Modern Defenses:
1. SYN Cookies
   Don't allocate a full TCB until the ACK is received; the server stores no
   state for half-open connections. The TCB is created only when the ACK
   completes the handshake.

2. Per-IP SYN Limits
   Limit half-open connections per source IP. Stops a single attacker, but a
   distributed attack (botnet) still works.

3. Rate Limiting
   Drop excessive SYN packets from the same source.

4. SYN Proxy / Firewall Filtering
   An upstream firewall validates handshakes or drops spoofed packets before
   they reach the server.

5. Larger Queues
   Allow more half-open connections. Uses more memory and is eventually
   exhausted anyway.

6. Shorter Timeouts
   Close half-open connections faster, at the cost of legitimate slow clients.

SYN Cookies Explanation:

Traditional approach:
    SYN received → allocate a TCB immediately
    Problem: many allocations → memory exhaustion

SYN cookies:
    SYN received → do NOT allocate a TCB
    Send a SYN-ACK whose sequence number is a "cookie" encoding:
    - Server port
    - Client port
    - Client IP
    - A timestamp

    When the client's ACK arrives:
    - The server decodes the cookie
    - Verifies it came from a legitimate client (correct cookie)
    - Only then allocates the TCB

Result: the server can absorb huge SYN rates, allocates resources only for
completed handshakes, and spoofed requests die naturally.

Example Attack & Defense:

Attack: 10,000 SYN/second with spoofed IPs.

Without SYN cookies:
    Half-open limit: 128 → the queue is FULL in ~0.01 seconds
    Legitimate users: BLOCKED

With SYN cookies:
    The server receives 10,000 SYN/second but allocates no memory,
    responding with stateless SYN-ACKs.
    Spoofed clients never send the ACK; legitimate clients do.
    The server allocates state only for completed handshakes and keeps serving.

Conclusion:
SYN flooding crashes the web server by exploiting the 3-way handshake. The attacker sends many SYN packets with spoofed source IPs. The server responds with SYN-ACK but the attacker (or spoofed client) never completes the handshake by sending ACK. The server keeps half-open connections in memory waiting for the ACK, eventually exhausting the connection queue. Legitimate users’ connection attempts are then dropped or delayed. Modern defenses like SYN cookies prevent allocation of resources until the handshake is complete, making the server resistant to SYN flooding attacks.
-
(j) Consider an idle TCP connection, i.e., a connection where no data is flowing at the time. If one end of the connection crashes without issuing a close call, is it possible for the other end of the connection to be aware of this? Why or why not?
In short: NO, not automatically. If the crashed end doesn’t send a FIN or RST packet, the other end cannot detect the crash unless it tries to send data (which generates an RST on timeout) or uses TCP keep-alive packets to detect the broken connection. An idle connection has no mechanism to detect the remote crash.
Elaboration:
Why Idle Connections Can’t Detect Crashes:
TCP connection states:

Normal close:
    Host A  ←——→  Host B
    A sends FIN, B receives FIN (and vice versa)
    Result: both sides agree the connection is closed.

Crash (no close):
    Host A (CRASHED)  ←——→  Host B (IDLE)
    A never sends a FIN or RST; its connection state simply disappears.
    Host B has no way to know that A crashed.

Why TCP Can't Detect Idle Crashes:

Once a connection is established, TCP generates no traffic on its own: there
is no heartbeat or keep-alive by default. The connection state is remembered,
but never verified.

An idle connection:
- Last data sent: 10 minutes ago
- Last data received: 10 minutes ago
- No activity since then
- No way to know whether the other end is alive or has crashed

Scenario: One Side Crashes

T = 0:    connection established; both sides in ESTABLISHED state
T = 100:  Host A and Host B exchange data; the connection works
T = 200:  both sides go idle; no data sent or received
T = 300:  HOST A CRASHES (power failure, network cable pulled)
          Host A sends no FIN/RST; its connection state just disappears
          Host B's TCP still thinks the connection is open
T = 400:  Host B is still idle and still unaware of the crash;
          without activity it would wait forever

Why No Automatic Detection:

TCP operates on demand:
1. The application sends data
2. TCP sends a segment
3. An ACK comes back
4. The connection is known to be good

If the application sends nothing:
- No segments are generated, no ACKs are checked, no probes are sent
- There is nothing with which to verify the connection

An idle connection is ASSUMED to be alive; there is no built-in mechanism to
verify it.

Detection Options:
Option 1: Send Data (Application Layer)
Host B sends a heartbeat:

    send(sock, "ping", 4, 0);

What happens:
- If Host A is alive: it receives the data and ACKs/echoes it
- If Host A crashed and rebooted: it answers with RST (no such connection)
- If Host A is still down: retransmissions eventually time out
  (possibly with an ICMP unreachable along the way)

Either way, Host B learns the connection is dead. But this requires the
application to know when to send the heartbeat, choose a timeout interval, and
handle the responses.

Option 2: TCP Keep-Alive

Use the SO_KEEPALIVE socket option:

    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &opt, sizeof(opt));

Behavior:
- After 2 hours of idle time (default), TCP sends a keep-alive probe
- If there is no response after several probes, the connection is considered
  dead; TCP closes it and notifies the application

Linux tuning:
    tcp_keepalive_time   = 7200  (seconds = 2 hours)
    tcp_keepalive_intvl  = 75    (seconds between probes)
    tcp_keepalive_probes = 9     (number of probes)

Total time to detect: 2 hours + (75 × 9) seconds. A long time!

Option 3: Application Protocol Timeouts

The application implements its own keep-alive.

Example (HTTP/1.1 persistent connections):
    Connection: keep-alive
    Keep-Alive: timeout=5, max=100
    The server closes the connection if no request arrives for 5 seconds;
    the client must send a request or the connection closes.

Example (chat application):
    The app sends "typing..." messages; when idle, it sends a heartbeat every
    30 seconds. If the heartbeat times out, the connection is dead.

Option 4: Application-Layer Heartbeat

// Pseudo-code
while (1) {
    timeout = select(sock, ..., 30_seconds);
    if (timeout) {
        // 30 seconds with no activity: send a heartbeat
        send(sock, "HEARTBEAT", 9, 0);
        // Wait briefly for the response
        timeout = select(sock, ..., 5_seconds);
        if (timeout) {
            // No response to the heartbeat
            printf("Connection lost, other end crashed\n");
            close(sock);
            break;
        }
    }
    // Activity detected (data or heartbeat response): continue
}

Timeline Examples:
Scenario 1: No Keep-Alive, Idle Connection
T = 0:     connect, exchange data
T = 100:   both sides idle
T = 100:   Host A crashes
T = 200:   Host B still idle, unaware
T = 500:   Host B still idle, unaware
T = 1000:  Host B still idle, unaware
Host B never finds out that A crashed.

Scenario 2: Keep-Alive Enabled

T = 0:         connect
T = 100:       both sides idle; Host A crashes
T = 7200:      (2 hours later) TCP sends a keep-alive probe to A
T = 7200+75:   no response; second probe
T = 7200+675:  no response to 9 probes
               TCP declares the connection dead and notifies the application;
               the socket becomes unusable
Total detection time: ~2.2 hours.

Scenario 3: Application Heartbeat (30-second timeout)

T = 0:    connect
T = 100:  both sides idle; Host A crashes
T = 130:  30 seconds with no activity; Host B sends a heartbeat
T = 135:  no response within the 5-second window = connection dead
          Host B detects the crash
Total detection time: ~35 seconds.

Why This Matters:

A common problem: zombie connections.

Host A crashes while idle; Host B doesn't know and keeps the socket open.
Resources stay tied up:
- A TCP connection slot
- A file descriptor
- Memory
- Possibly application state

With many clients, a server accumulates zombie connections, eventually runs
out of file descriptors, cannot accept new connections, and appears to hang.

Real-World Example: Web Server

Client makes an HTTP request:
    GET /index.html HTTP/1.1
    Connection: keep-alive

Server responds:
    HTTP/1.1 200 OK
    Connection: keep-alive

The connection remains open for the next request, and then the client crashes
without closing.

Without an application timeout: the server waits forever; the connection never
closes and the slot stays occupied.

With an application timeout: the server closes the connection after ~5-30
seconds, freeing the slot for new clients.

Conclusion:
NO. TCP cannot automatically detect if an idle connection’s remote end has crashed. TCP is a demand-driven protocol—it only verifies connections when data is transmitted. If one end crashes without sending FIN or RST, the other end has no way to know unless it tries to send data (which causes timeout) or uses TCP keep-alive (2-hour default, too slow) or implements application-level heartbeats (best for detecting quick crashes). Most applications implement their own keep-alive/heartbeat mechanism to detect broken connections quickly rather than relying on TCP’s passive approach.
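A hedged Python sketch of option 2 with shortened timers. TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific per-socket options (other platforms expose different knobs), and the values are illustrative:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific per-socket tuning (values are illustrative):
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before giving up

# With these settings a crashed peer is detected after roughly
# 60 + 3 * 10 = 90 seconds instead of TCP's ~2-hour default.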
Problem 2: Dynamic Host Configuration Protocol
We discussed in class that a host’s IP address can either be configured manually, or by Dynamic Host Configuration Protocol (DHCP).
-
(a) Describe the advantages and disadvantages of each approach.
Manual (Static) Configuration
- Advantages:
- Stable, predictable IP addresses.
- Suitable for servers and infrastructure devices.
- No dependency on DHCP availability.
- Disadvantages:
- Error-prone and time-consuming.
- Poor scalability.
- Risk of IP address conflicts.
DHCP (Dynamic Configuration)
- Advantages:
- Plug-and-play for clients.
- Centralized network configuration.
- Avoids most IP conflicts.
- Disadvantages:
- Depends on DHCP server availability.
- IP addresses may change.
- Slight delay during address acquisition.
-
(b) Describe how a host gets an IP address using DHCP.
DHCP Process (DORA)
- DHCPDISCOVER – Client broadcasts request.
- DHCPOFFER – Server offers IP configuration.
- DHCPREQUEST – Client requests chosen offer.
- DHCPACK – Server confirms lease.
(Then later: lease renew with REQUEST/ACK before it expires; if server refuses, DHCPNAK)
DHCPNAK stands for DHCP Negative Acknowledgment. It is a message sent by a DHCP server to a client to tell the client that its requested IP configuration is invalid and cannot be used.
Problem 3: UDP Remote Calculator Server
You are asked to design a UDP server that would run at 10.10.100.180, port 30000, and would be used as a remote calculator to perform addition, subtraction and multiplication on two 4 byte integers sent by clients. Your server needs to run in a loop, accept the next client request, perform the operation and send the result back to the client. Your client needs to run in a loop, ask the user for the type of operation and two numbers, put them into a message and send them to the server. When the client receives the reply, it prints the result on the screen. You are asked to design an application layer protocol and implement the client/server code. Take into consideration that the client and the server may have different endian representation of integers, i.e., the client may be little-endian while the server is big-endian and vice versa.
Application-Layer Protocol
- All integers use network byte order (big-endian).
Request (9 bytes):
- 1 byte: operation ('+', '-', '*')
- 4 bytes: integer A
- 4 bytes: integer B
Reply (4 bytes):
- 4 bytes: result
-
(a) Show the pseudocode for your UDP client.
client():
    sock = udp_socket()
    server = (10.10.100.180, 30000)
    loop:
        op = input_operation()
        a = input_int()
        b = input_int()
        msg = op + htonl(a) + htonl(b)
        sendto(sock, msg, server)
        reply = recvfrom(sock, 4)
        print(ntohl(reply))
-
(b) Show the pseudocode for your UDP server.
server():
    sock = udp_socket()
    bind(sock, (10.10.100.180, 30000))
    loop:
        msg, addr = recvfrom(sock, 9)
        op = msg[0]
        a = ntohl(msg[1:5])
        b = ntohl(msg[5:9])
        if op == '+': r = a + b
        if op == '-': r = a - b
        if op == '*': r = a * b
        sendto(sock, htonl(r), addr)
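For concreteness, a Python sketch of both ends of this protocol. struct's '!' prefix selects network byte order (big-endian), which resolves the endianness concern regardless of each host's native representation; the 9-byte request and 4-byte reply follow the protocol above.

import socket
import struct

ADDR = ("10.10.100.180", 30000)

def serve():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    ops = {b"+": lambda a, b: a + b,
           b"-": lambda a, b: a - b,
           b"*": lambda a, b: a * b}
    while True:
        msg, client = sock.recvfrom(9)
        # '!' = network byte order, per the protocol spec
        op, a, b = struct.unpack("!cii", msg)
        result = ops[op](a, b)
        # Result is assumed to fit in a signed 32-bit int, per the protocol
        sock.sendto(struct.pack("!i", result), client)

def query(op, a, b):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(struct.pack("!cii", op, a, b), ADDR)
    reply, _ = sock.recvfrom(4)
    sock.close()
    return struct.unpack("!i", reply)[0]

# Example: query(b"*", 6, 7) -> 42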
Problem 4: Multi-Threaded TCP Remote Calculator
You are asked to design a multi-threaded TCP server that would run at 10.10.100.180, port 30000, and would be used as a remote calculator to perform addition, subtraction and multiplication on two 4 byte integers sent by clients. Your server needs to run in a loop, accept the next client connection and create a new thread that would interact with the client. The service thread runs in a loop, receives the next request from the client, performs the requested operation and sends the result back to the client until the client closes the connection. Your client needs to run in a loop, ask the user for the type of operation and two numbers, put them into a message and send them to the server. When the client receives the reply, it prints the result on the screen. You are asked to design an application layer protocol and implement the client/server code. Take into consideration that the client and the server may have different endian representation of integers, i.e., the client may be little-endian while the server is big-endian and vice versa.
-
(a) Show the pseudocode for your TCP client.
client():
    sock = tcp_socket()
    connect(sock, (10.10.100.180, 30000))
    loop:
        op, a, b = user_input()
        write_all(sock, op + htonl(a) + htonl(b))
        reply = read_exact(sock, 4)
        print(ntohl(reply))
-
(b) Show the pseudocode for your TCP server.
server():
    listen_sock = tcp_socket()
    bind(listen_sock, (10.10.100.180, 30000))
    listen(listen_sock)
    loop:
        conn, addr = accept(listen_sock)
        create_thread(service_client, conn)

service_client(conn):
    loop:
        req = read_exact_or_eof(conn, 9)
        if EOF: break
        op, a, b = parse(req)
        compute result
        write_all(conn, htonl(result))
    close(conn)
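A Python sketch of the threaded server under the same 9-byte protocol as Problem 3 (struct's '!' gives network byte order; the read loop copes with TCP's byte-stream nature):

import socket
import struct
import threading

ADDR = ("10.10.100.180", 30000)
OPS = {b"+": lambda a, b: a + b,
       b"-": lambda a, b: a - b,
       b"*": lambda a, b: a * b}

def read_exact_or_eof(conn, n):
    """Return exactly n bytes, or None if the client closed the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:          # EOF: client closed
            return None
        buf += chunk
    return buf

def service_client(conn):
    with conn:
        while True:
            req = read_exact_or_eof(conn, 9)
            if req is None:
                break
            op, a, b = struct.unpack("!cii", req)   # network byte order
            conn.sendall(struct.pack("!i", OPS[op](a, b)))

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(ADDR)
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=service_client, args=(conn,), daemon=True).start()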
Problem 5: Multi-Socket UDP Server with select()
Assume you have a UDP server that will be listening to requests from 2 sockets: One listening to port 20000, one listening to port 30000. Assume both sockets are blocking sockets. Show the pseudocode for a generic single-threaded UDP server that would receive data from any of these sockets. Make sure that your server is not blocked waiting for a message on one socket, while there are messages ready for reading on the other. In other words, as soon as a message is ready on one of the sockets, your server must be able to read from it.
server():
sock1 = udp_socket(port=20000)
sock2 = udp_socket(port=30000)
loop:
ready = select({sock1, sock2})
if sock1 ready:
recvfrom(sock1)
if sock2 ready:
recvfrom(sock2)
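A Python version of this pattern using the standard select module; the ports follow the problem statement:

import select
import socket

sock1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock1.bind(("", 20000))
sock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock2.bind(("", 30000))

while True:
    # Block until at least one socket has a datagram ready, so the server is
    # never stuck on one socket while the other has data waiting.
    readable, _, _ = select.select([sock1, sock2], [], [])
    for s in readable:
        msg, addr = s.recvfrom(65507)
        print(f"{len(msg)} bytes on port {s.getsockname()[1]} from {addr}")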
Problem 6: UDP Echo Server
Assume you would be designing a UDP Echo Server that would run at 10.10.100.180, port 30000. Your server would get a message from the UDP socket and simply echo (send) it back to the sender (client). Your echo client would run in a loop: it asks the user to enter the size of the message, sends it to the server, gets the reply back and prints the message size of the reply on the screen. Assume that a UDP client can potentially send a max. sized UDP packet.
-
(a) Show the pseudocode for a generic UDP Echo client.
client():
    sock = udp_socket()
    loop:
        n = input_size()
        msg = make_bytes(n)
        sendto(sock, msg, server)
        reply = recvfrom(sock, 65507)   # buffer large enough for a max-sized datagram
        print(len(reply))
-
(b) Show the pseudocode for a generic UDP Echo server.
server():
    sock = udp_socket()
    bind(sock, (10.10.100.180, 30000))
    loop:
        msg, addr = recvfrom(sock, 65507)   # must accommodate a max-sized datagram
        sendto(sock, msg, addr)
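A Python sketch of the server; the key detail is passing a buffer size large enough for a maximum-sized datagram to recvfrom():

import socket

MAX_UDP = 65507  # largest possible UDP payload (see Problem 1(e))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("10.10.100.180", 30000))

while True:
    # Buffer must cover a max-sized datagram or the excess is truncated
    msg, addr = sock.recvfrom(MAX_UDP)
    sock.sendto(msg, addr)   # echo straight back to the sender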
Problem 7: Multi-Port UDP Echo Server with Threads
Assume you would be designing a server that would run at 10.10.100.180 and listen to UDP ports 20000 and 30000 for client requests. Upon reception of a message from any of these ports, the server simply echoes the message back to the client.
-
(a) Show the pseudocode for this UDP server if you must implement a single-threaded server.
View Answer
server():
    s1 = udp_socket(20000)
    s2 = udp_socket(30000)
    loop:
        ready = select({s1, s2})
        for s in ready:
            msg, addr = recvfrom(s)
            sendto(s, msg, addr)
-
(b) Show the pseudocode for this UDP server if you are asked to use 2 separate threads, one serving client requests at port 20000 and the other at port 30000.
View Answer
server():
    create_thread(echo_loop, socket_20000)
    create_thread(echo_loop, socket_30000)

echo_loop(sock):
    loop:
        msg, addr = recvfrom(sock)
        sendto(sock, msg, addr)
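A Python sketch of the two-thread variant (the daemon flag and the final wait are incidental choices to keep the sketch self-contained):

import socket
import threading

def echo_loop(port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("10.10.100.180", port))
    while True:
        msg, addr = sock.recvfrom(65507)
        sock.sendto(msg, addr)

# One thread per port; each blocks independently on its own socket
for port in (20000, 30000):
    threading.Thread(target=echo_loop, args=(port,), daemon=True).start()
threading.Event().wait()   # keep the main thread alive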
Problem 8: TCP Server with Initial Message Exchange
Assume you would be designing a TCP client and a single-threaded TCP Server. Your server would run at 10.10.100.180, port 30000. Once a connection is established, your server will first send a 100 byte message. Your client must read this 100 byte message, and send it back to the server. The server must then read the message back, close the connection and go back to accept a new connection.
-
(a) Show the pseudocode for this TCP client.
client():
    sock = tcp_socket()
    connect(sock, (10.10.100.180, 30000))
    msg = read_exact(sock, 100)
    write_all(sock, msg)
    close(sock)
-
(b) Show the pseudocode for this TCP server.
server():
    listen_sock = tcp_socket()
    bind(listen_sock, (10.10.100.180, 30000))
    listen(listen_sock)
    loop:
        conn, addr = accept(listen_sock)
        write_all(conn, make_100_byte_msg())
        echoed = read_exact(conn, 100)
        close(conn)
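A Python sketch of the server side; read_exact is the same kind of helper sketched in Problem 1(b) to cope with partial recv() returns:

import socket

def read_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before sending all bytes")
        buf += chunk
    return buf

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("10.10.100.180", 30000))
listener.listen()

while True:
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"A" * 100)          # server speaks first: 100 bytes
        echoed = read_exact(conn, 100)    # then reads the same 100 back
    # 'with' closes the connection; loop back to accept the next client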