Design Philosophy
==================
* Written in 1988, around 15 years after the ideas underlying the Internet were first developed
* Mostly a retrospective, but a very influential paper
* Written to teach newcomers to the IETF/IRTF why the Internet was the way it was
* Been revisited many times since.

Key ideas in DP paper
==================
* The main idea: spelling out the design goals of the Internet and how they influenced the design of its protocols
* What were these goals?
  --> "Effective technique for multiplexed utilization of existing interconnected networks" (originally, the ARPANET and the ARPA packet radio network; foresight: "there would be other networks").
* Why packet switching? Well understood from the days of the ARPANET
* Other goals, in priority order:
  --> Resilience to networks or gateways being lost (military origins)
  --> Generality to support multiple types of communication (the designers did not know what would run on the Internet)
  --> Variety of networks (wanted to get everyone on the Internet)
  --> Distributed management (no single controlling entity)
  --> Cost effectiveness
  --> Host attachment with low effort
  --> Resource accounting
* Importantly, commercial utility was not a concern back then (actually, not until the mid 1990s). Interesting difference from the telcos: extremely well-engineered, single-entity control, performance guarantees.
* Question: What did they miss? Performance guarantees, security, mobility, hosts may not be computers anymore.
* How did they do (on whatever they cared about)?
  --> Surprisingly well!
  --> It's quite hard to build something when you don't know what it will be used for (generality), yet ...
  --> the Internet is good enough ("only just works") for almost everything.
  --> Unfortunately, it's optimal for almost nothing.
  --> This is the principle of best effort: think of it as postal service without insurance.
  --> Important to keep this in mind before critiquing the design. It was good for what they cared about.
  --> Doesn't mean we shouldn't think about new ways of doing things, which may be better for what _we_ now care about.
  --> E.g., performance, security, mobility, etc.

The goal of resilience
===================
* Only a network partition should prevent communication. Communication should continue under all other circumstances.
* Need to protect state associated with a conversation.
* What if we used hop-by-hop reliable delivery? What if some hop dies and takes the state to its grave?
* The development of the "fate sharing" principle: OK to lose state if the entity associated with that state dies. Very democratic in some sense. Minimizes collateral damage from routers dying.
* Resilience is still second to interconnection, so no failure-reporting mechanisms are assumed of the end hosts.

The goal of generality
===================
* Need to support multiple different services
* Examples from 1988: file transfer, remote login, the cross-network debugger (unreliable), and audio
* Early observation: reliability hurts latency, because a reliable, in-order service results in head-of-line (HoL) blocking.
* Resulted in the use of datagrams/packets as the underlying building block
* Use raw datagrams for good delay characteristics
* Use TCP layered on datagrams for reliability (see the sketch after this section)
* Btw, not all networks are built this way. E.g., InfiniBand is a network technology that guarantees reliability, even if the application does not want it.
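Aside (not from the 1988 paper): the split between raw datagrams and TCP layered on top of them is still visible in the ordinary socket API. Below is a minimal Python sketch over loopback, with made-up payloads: the same kind of data goes out once as a best-effort datagram (UDP: no handshake, no retransmission, good delay) and once over a reliable, in-order byte stream (TCP), the kind of service that can suffer HoL blocking when a segment is lost.

    import socket

    # --- Datagram service: each send is an independent, best-effort packet. ---
    udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_rx.bind(("127.0.0.1", 0))                 # OS picks a free port
    udp_addr = udp_rx.getsockname()

    udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_tx.sendto(b"telemetry sample", udp_addr)  # no handshake, no retransmission
    print("UDP got:", udp_rx.recvfrom(2048)[0])   # on a real network, this may never arrive

    # --- Reliable byte stream: TCP adds ordering/retransmission on top of datagrams. ---
    tcp_ls = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp_ls.bind(("127.0.0.1", 0))
    tcp_ls.listen(1)
    tcp_addr = tcp_ls.getsockname()

    tcp_tx = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp_tx.connect(tcp_addr)                      # handshake before any data
    conn, _ = tcp_ls.accept()
    tcp_tx.sendall(b"file contents")              # delivered reliably and in order
    print("TCP got:", conn.recv(2048))            # a lost segment would stall this (HoL blocking)

    for s in (udp_tx, udp_rx, tcp_tx, conn, tcp_ls):
        s.close()

On loopback both messages arrive; on a real network the UDP datagram may simply be dropped, and the application has to tolerate that in exchange for the better delay characteristics.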
The goal of network variety
===================
* The Internet had to easily incorporate existing local networks.
* Helped expedite the growth of the Internet.
* Examples: satellite, radio, LAN, serial links
* How did they achieve this? Minimum assumptions on what the network provides, so that everyone could interoperate.
* End-to-end argument: place functionality within the switches/routers only if it's necessary and sufficient there. Otherwise, leave it at the end hosts. (A tiny sketch at the end of these notes makes this concrete.)
* What was the network not expected to do? Reliable delivery, prioritization, broadcast, multicast.

Other goals
=================
* Distributed management led to two-tier routing
* Cost was sidelined: retransmissions have to go through the whole network.
* Accounting: again sidelined.

Summary: Goals and their consequences
=================
* Multiplexed utilization of existing networks => packet-switched networks connected by packet-forwarding gateways
* Resilience => stateless switches, fate sharing
* Generality => datagrams as a building block
* Network variety => minimum assumptions of the network; move everything into hosts; end-to-end principle
* Distributed management => two-tier routing
* Cost/accounting => relatively sidelined (retransmissions have to go through the whole network)

Aside:
=================
David Clark has much more to say on this topic.
Highly recommend this book: https://mitpress.mit.edu/books/designing-internet
and this video: https://www.youtube.com/watch?v=qX-ojw1gLmE
For a history of how the ARPANET---and then the Internet---came to be: https://www.amazon.com/Where-Wizards-Stay-Up-Late/dp/0684832674
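The sketch promised above, to make the end-to-end argument concrete: this is my own illustration, not from the paper, and the names and drop rate are made up. The "network" is just a function that may lose packets, and reliable delivery is built entirely at the end hosts with an end-to-end checksum and retransmission, which is the division of labor a minimum-assumptions network forces.

    import hashlib
    import random

    def unreliable_deliver(packet, loss_rate=0.3):
        """Best-effort 'network': returns the packet, or None if it was dropped."""
        return None if random.random() < loss_rate else packet

    def send_reliably(data: bytes, max_tries=20):
        """End host adds an end-to-end checksum and retransmits until an
        intact copy gets through."""
        checksum = hashlib.sha256(data).hexdigest()
        for attempt in range(1, max_tries + 1):
            delivered = unreliable_deliver((data, checksum))
            if delivered is None:
                continue                              # lost in the network; sender retries
            payload, received_sum = delivered
            if hashlib.sha256(payload).hexdigest() == received_sum:
                return attempt                        # receiver's end-to-end check passed
        raise RuntimeError("destination unreachable (network partition?)")

    print("delivered after", send_reliably(b"design philosophy notes"), "attempt(s)")

If every retransmission fails, the sender eventually concludes the destination is unreachable, mirroring the earlier point that only a network partition should prevent communication.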