Internet Protocol Suite
The Internet Protocol Suite (IPS) is a loose description of the set of protocols, and the architectural principles supporting them, that are defined by the Internet Engineering Task Force (IETF) in Request for Comments (RFC) specifications. The IETF does not use rigid layering or formal specification in the manner of the Open Systems Interconnection Reference Model (OSIRM); that lack of unnecessary formalism is one reason IPS protocols are dominant in the world's networks while OSI protocols are largely a historical curiosity.
The core IPS specifications are designated "standards track", as distinct from "informational", "experimental", or "historic". Informational RFCs sometimes describe the technical details of a widely used vendor-specific protocol, and it is not at all unprecedented for a consensus to develop that an informational RFC should be moved onto the standards track. In like manner, experimental protocols, if the experiments are successful, may enter the standards track; see Internet Engineering Task Force (IETF) for a discussion of its standards process. Some informational RFCs are remarkably insightful, such as "The Twelve Networking Truths".[1] The mechanisms in Historic specifications are precisely that: obsolete, but useful for understanding the background of a current standard.

Architectural principles

The Internet is relatively informal about architecture, but some general principles apply, with real-world exceptions discussed in RFC 3439 and in the specifications that break the principles. One general principle, which can be broken in a controlled way, is called the end-to-end assumption: intelligence should be concentrated at the edge of the network, and the internals of the network should be as simple and efficient as possible, but no simpler. A consequence of this principle is that addresses, within a routing domain, need to be unique, again with some special cases. Another general principle is a guideline on protocol processing; its best-known statement is the robustness principle of RFC 1122: be conservative in what you send, and liberal in what you accept.
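A minimal sketch of the "liberal in what you accept" side of such a processing guideline, assuming a hypothetical line-based protocol whose reader tolerates both CRLF and bare LF terminators while its writer always emits the canonical CRLF:

```python
# Hypothetical line-based protocol, illustrating the robustness
# principle: the reader is liberal (accepts CRLF or bare LF), the
# writer is conservative (always emits CRLF).
def read_line(data: bytes) -> str:
    """Accept a line ending in CRLF, LF, or even a stray CR."""
    return data.rstrip(b"\r\n").decode("ascii")

def write_line(text: str) -> bytes:
    """Always emit the strict, canonical CRLF terminator."""
    return text.encode("ascii") + b"\r\n"
```

A sender following `write_line` interoperates with any reader; a reader following `read_line` interoperates with sloppy senders as well as strict ones.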
Layering

In the context of the IPS, it seems appropriate, before discussing the relevance of layering, to warn that very intense and graphic emotions may be invoked, and only mature readers should continue in this section. It may also be appropriate to peruse the Signed Articles section, to consider what many regard as exotic relationships among layers and theological concepts, or how insistence on rigid layering may create a model that describes a real network about as well as epicycles described the solar system. There is a continuing and frustrating tendency, in academic, some vendor, and random tutorials on network architecture, to treat the Open Systems Interconnection Reference Model as if it were still used other than as a teaching aid, and to try to "coerce" (using the lovely word choice of Priscilla Oppenheimer) Internet Protocol Suite protocols into OSI layers. Layering, as an abstraction, is useful up to a point; it can be overused. An updated IETF architectural document, RFC 3439,[2] even contains a section entitled "Layering Considered Harmful".
The IPS was not intended to match OSI; it was developed before OSI; the full set of OSI specifications (i.e., not just document ISO 7498) subdivides the layers so that there are no longer seven; and OSI has, in the real world, been relegated to a teaching tool. The Internet Protocol Suite has four layers, defined in RFC 1122,[3] and no IETF document, as opposed to some nonauthoritative textbooks, says it has five. No IETF standards-track document has accepted a five-layer model, and IETF documents indeed deprecate strict layering of all sorts. Given the lack of acceptance of the five-layer model by the body with technical responsibility for the protocol suite, it is not unreasonable to regard five-layer presentations as teaching aids, possibly intended to make the IP suite architecture more familiar to students first exposed to layering through the Open Systems Interconnection Reference Model. Comparisons between the IP and OSI suites can give some insight into the abstraction of layering, but trying to coerce Internet protocols, which were not designed with OSI in mind, into OSI layers can only lead to confusion. Again, RFC 1122 defines four layers; if anyone can find another IETF document stating that the Open Systems Interconnection Reference Model is followed, please cite it. Further, RFC 1122 was published in 1989, while the OSI Reference Model, ISO 7498, was published in 1984. If the RFC 1122 authors had wanted to be OSI compliant, they had the OSI definitions available to them. They did not use them. Does that suggest they were not concerned with OSI compliance? For Internet Protocol Suite architecture, textbooks are not authoritative; the IETF's work, particularly the Standards Track, is definitive.

OSI's own cleanup

Further emphasizing that even OSI does not rigorously comply with seven layers, the International Organization for Standardization produced several documents that supplement the basic Reference Model in ISO 7498.
To some extent, these documents are the technical equivalent of a political spokesman cleaning up after his principal, with such comments as "what the President, Prime Minister, Senator, etc., meant to say, before his unfortunate misstatement, was...". ISO clarified its intention, or its intention after bitter experience taught what that intention should have been, with documents refining:
A revised version of the basic OSIRM clarified that protocols could indeed be connectionless rather than connection-oriented. Since the organizations that developed the original OSIRM came from the telephone industry, they assumed all protocols must work like a telephone call: call setup, commitment of resources for the duration of use, and disconnection. It soon became obvious that the postal system worked quite nicely by sending self-contained units of information inside envelopes marked with a sender and recipient address.

Network Management

Annex 4 to ISO 7498 gives the OSI Management Framework,[4] with both system management and layer management components. System management lives at the application layer, typically has a user interface or automated user, and must be invoked by the system administrator. System management can access, perhaps through a proxy, abstractions of management information at every layer. Layer management, in contrast, needs no routine administration: individual protocols operating at a specific layer, in the process of initializing, bring up appropriate mechanisms for periodic status checking, error handling, and the like.

Routing

Another ISO document, the "OSI Routeing [sic] Framework",[5] makes it clear that routing protocols, no matter what protocol carries their payloads, are layer management protocols for the network layer. No matter what "delivery protocol" carries routing information, the payload is network layer management information.
Network layer organization

Unfortunately not available free online, an ISO document, "Internal Organization of the Network Layer",[6] splits the network layer nicely into three levels: a logical (lower-layer agnostic) level, a subnetwork-specific (i.e., link technology) level, and a mapping sublayer between them. ARP, with which many people struggle, drops perfectly into that mapping (technically, subnetwork dependence convergence) sublayer.

Functional Profiles and Tunneling

It is said that an elephant is a mouse designed by a committee. While it is true that the IETF designs things in groups that are technically committees, we call them Working Groups, perhaps because their products must be demonstrated to work. Without getting into the mechanics of IETF standardization, suffice it to say that the more advanced a standard's status, the more independent implementations will have been demonstrated to work with one another. ISO, however, developed its standards in classic committees, and an unfortunate number of protocol features were put into the specifications to satisfy some constituency, rather than being the minimum necessary to provide the needed functionality. The resulting layer-by-layer protocols became elephantine, with so many options that, without guidance, two different computers would likely pick incompatible sets of options and thus be unable to talk to each other. To try to make the elephants dance together, ISO introduced the idea of Functional Profiles, particular sets of options at each layer. The documents that specified functional profiles were not "standards", but merely "technical reports".
Nevertheless, having the functional profiles for guidance was the key to getting various OSI implementations to work, until IPS implementations left them at the side of the information superhighway.[7] The functional profiles dealt with interconnecting incompatible addressing systems, protocols at different layers, non-OSI protocols, and so forth, through the idea of a gateway.[8] In the ISO context, a gateway could take one protocol, the payload protocol, wrap it in a delivery protocol usually belonging to the transport layer, and pass it to an egress gateway, which would strip off the delivery protocol and deliver the payload wherever it needed to go. OSI purists considered this very sinful, especially when someone realized that protocol data units of delivery protocol 2, containing payload protocol 1, could in turn become a payload for delivery protocol 3, and so forth.
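The nesting that scandalized the purists can be sketched with a toy encapsulation scheme; the `name|payload` framing and the protocol names here are this sketch's own invention, not any ISO format:

```python
# Toy gateway encapsulation: wrap a payload protocol inside a delivery
# protocol; the result can itself become the payload of yet another
# delivery protocol, and so on ad infinitum.
def encapsulate(delivery: str, payload: bytes) -> bytes:
    """Ingress gateway: prepend a delivery-protocol header."""
    return delivery.encode("ascii") + b"|" + payload

def decapsulate(packet: bytes) -> tuple[str, bytes]:
    """Egress gateway: strip the outermost header, return the payload."""
    name, _, payload = packet.partition(b"|")
    return name.decode("ascii"), payload

# Delivery protocol 2 carries payload protocol 1, and is itself
# carried by delivery protocol 3.
inner = encapsulate("proto2", b"payload-protocol-1 data")
outer = encapsulate("proto3", inner)
```

Each `decapsulate` call peels exactly one delivery protocol, which is all an egress gateway ever needs to see.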
The IETF and fleas

Big fleas have little fleas
On their backs to bite them
Little fleas have littler fleas
And so on ad infinitum

Remember that the IETF is concerned with running code rather than votes on architectural purity. If it was necessary to get information between two computers, it was entirely possible to have an application protocol, such as the File Transfer Protocol, which violated layering by embedding network addresses in application layer information, run over whatever nested stack of delivery protocols happened to be at hand.
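The classic instance of FTP's layering violation is its PORT command (RFC 959), in which an IPv4 address and a TCP port travel as decimal text inside application-layer data. A sketch of the encoding, where the argument is six decimal byte values, four for the address and two for the port:

```python
# FTP PORT command (RFC 959): a network-layer address embedded in
# application-layer data. The port is split as p1*256 + p2.
def make_port_command(ip: str, port: int) -> str:
    """Encode an IPv4 address and TCP port as an FTP PORT command."""
    h1, h2, h3, h4 = ip.split(".")
    p1, p2 = divmod(port, 256)
    return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}"

def parse_port_command(cmd: str) -> tuple[str, int]:
    """Recover the address and port from a PORT command."""
    fields = cmd.removeprefix("PORT ").split(",")
    return ".".join(fields[:4]), int(fields[4]) * 256 + int(fields[5])
```

This embedding is exactly why FTP gives network address translators so much grief: the NAT must rewrite application-layer payloads, not just IP headers.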
While the OSI purist would be having hysterics, the IETF and IEEE people would merely ask, "Does it work? Good! Let's go have a beer."

The IPS and the Lowest Layers

One of the essential concepts in the original internetworking layer protocol that became IP is that it is "agnostic" to the medium-sharing, physical-medium attachment, and physical-medium internal management protocols below it. As long as there was a way to pass IP over whatever dragons lay beneath, the organization of the dragons was the province of people more interested in hardware than software, such as the Institute of Electrical and Electronics Engineers (IEEE). IEEE's Project 802, cleverly given that number because it was created in February 1980, takes care of wired and wireless local area networks, which happily carry IP packets. A few other physically oriented standards groups deal with matters such as cellular telephony and optical networking, but, again, have a clear boundary of responsibilities with the IETF. When the IETF was dealing with Multi-Protocol Label Switching (MPLS) and some other things that "don't quite fit", some people insisted on calling it "layer 2.5"; in reality, the IETF set up a "Sub-IP Area" and did the original work there. MPLS is now back under the Routing Area. There was also a Performance Implications of Link Characteristics (PILC) working group, now concluded, that likewise dealt with sub-IP matters (archives at http://www.isi.edu/pilc/). In the process of developing MPLS, the IETF realized that transmission systems could deal with things other than packets, such as time slots, optical wavelengths, and physical positions in a connection panel. Working cooperatively with standards bodies interested in non-packet things, Generalized MPLS (GMPLS) emerged to give the best of both worlds.
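For the curious, the MPLS shim header that earned the "layer 2.5" nickname is, per RFC 3032, a 32-bit label stack entry: a 20-bit label, 3 experimental/traffic-class bits, a bottom-of-stack flag, and an 8-bit TTL. A sketch of the packing (field widths from the RFC; the values used below are illustrative):

```python
import struct

# MPLS label stack entry (RFC 3032): Label(20) | Exp/TC(3) | S(1) | TTL(8)
def encode_label(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Pack one label stack entry into its 32-bit network-order form."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def decode_label(entry: bytes) -> tuple[int, int, bool, int]:
    """Unpack a 32-bit label stack entry into its four fields."""
    (word,) = struct.unpack("!I", entry)
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 1), word & 0xFF
```

Because the bottom-of-stack bit is explicit, entries can be pushed on top of one another, echoing the fleas-upon-fleas nesting described earlier.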
By adding certain optical-networking-specific attributes to an IP routing protocol, for example, it is quite possible to use GMPLS to set up a path, of a given wavelength, through an all-optical network.

References