Streamlining the Internet-Fiber Connection
There's more than one way to carry Internet protocol traffic on a wavelength-division multiplexing infrastructure

The rapidly growing popularity of the Internet is the key driver in communications today. Even conservative observers estimate that Internet traffic will double every six months for the next few years. That kind of buildup means that, before the end of next year, the United States will need about 35 terabits per second of communications capacity for the Internet alone--an order of magnitude more than all the world's voice traffic requires today.
To provide such capacity, how should the present networking infrastructure be developed? In answering that question, some matters of fact should be taken into account. Since every study of future communications patterns predicts that data traffic will continue to grow much more rapidly than voice, any new transport infrastructure should clearly be optimized for data. More specifically, since the Internet protocol (IP) is at the heart of so much revenue-generating activity today and seems certain to become even more entrenched in the future, it seems reasonable to suggest that the network be optimized to carry IP traffic.
At the same time that data traffic was outpacing voice, optical fibers were proving their value as what can fairly be described as a nearly ideal future-proof transmission medium. Unlike other media, optical fiber has enormous unused capacity, which can be exploited by deploying wavelength-division multiplexing (WDM).
In WDM, the optical transmission spectrum is divided into a number of non-overlapping wavelength bands, with each wavelength supporting a single communication channel operating at the highest speed permitted by the technology of the day. Today, systems capable of operating at 10 Gb/s are being installed for commercial use; at the same time, systems rated at 40 Gb/s are undergoing field testing. With WDM, many of these channels can be transported through a single fiber. Better yet, each channel can run at a different rate and, indeed, in a different format. So WDM provides not only high capacity, but a great deal of flexibility too.
Most WDM systems in commercial operation today multiplex 40 or fewer channels on a single fiber. But several manufacturers have announced systems that can multiplex up to 160 channels. What's more, 320-channel systems are almost ready to emerge from the laboratory, and these numbers will surely increase in time. It is anticipated that the next-generation Internet will increasingly use WDM-based optical transport not only on long backbone runs, but in metropolitan-area networks as well. There is even talk of exploiting WDM in passive optical networks running to the very edge of a network, to subscribers' homes.
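To put those channel counts in perspective, the aggregate capacity of a fiber is simply the number of wavelengths times the per-channel rate. The short sketch below works through the arithmetic using the channel counts and line rates cited above; it is purely illustrative and does not describe any particular product.

    # Rough aggregate-capacity arithmetic for one WDM fiber, using the channel
    # counts and per-channel rates mentioned in the text (illustrative only).
    def fiber_capacity_gbps(channels: int, rate_gbps: float) -> float:
        """Aggregate capacity: number of wavelengths times per-channel rate."""
        return channels * rate_gbps

    for channels, rate in [(40, 10), (160, 10), (160, 40), (320, 40)]:
        total_tbps = fiber_capacity_gbps(channels, rate) / 1000
        print(f"{channels:3d} channels x {rate:2d} Gb/s = {total_tbps:5.2f} Tb/s per fiber")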
So, given the inevitability of IP and the tremendous capacity of WDM, it would appear that the worldwide communications system clearly ought to be evolving into an IP network running on a WDM physical infrastructure. Right?
Probably. But not quite yet, and not without a great deal of very careful planning. The problem, in a nutshell, is that IP and WDM lack several critical capabilities essential to the operation of a successful communications system. For example, most WDM systems today involve point-to-point links--manually provisioned fixed pipes capable of carrying tremendous amounts of data between two fixed points, but not capable of lightpath switching--redirecting traffic flows from one pipe to another--in real time.
They cannot, therefore, automatically route traffic around points of failure in the network. Neither can they "groom" traffic in real time--that is, consolidate or segregate different kinds of traffic (voice, data, or video, for example) to make more efficient use of the facilities. To get these and other essential capabilities, communications carriers rely on synchronous optical network (Sonet) technology.
While IP is well suited to data communication, it is less than ideal for voice and video because it has no way to ensure that packets will be forwarded quickly enough to support such real-time applications. To make certain that different kinds of traffic are delivered with the quality and reliability they require, carriers usually turn to the asynchronous transfer mode (ATM) protocol, with its multiplicity of quality of service (QoS) levels.
Today, therefore, although much IP traffic is indeed carried over fibers with WDM, several intermediate technologies are required to make everything work. What needs to be determined is whether the services provided by those technologies can be supplied by a network architecture in which IP rides directly over WDM, and, if so, what that network might look like.
A vision of future networks
The multilayer communications network now in use was designed primarily for circuit switching, when voice was dominant, bandwidth was expensive, and capacity was portioned out in 64 kb/s pieces (the size of an uncompressed voice channel). Although this multilayer architecture works well for voice, it is less than ideal for data, especially with respect to efficiency and cost effectiveness.
For one thing, the layers are largely unaware of each other, which increases overhead and may lead to duplication of their services. Error correction may be performed twice, for example, which wastes time and adds no value. Network management and control algorithms at different layers can even work against each other, creating conflicts and oscillations. For instance, end-to-end flow control implemented at the transmission control protocol (TCP) layer can interfere with explicit flow control defined for the ATM layer. (Flow control starts and stops messages to keep the receiver's buffer from overflowing.)
Interestingly, now that IP routers capable of operating at a wire speed of 10 Gb/s are available, the Sonet grooming function is becoming unnecessary in IP networks. Introducing the measurement and control functions needed for traffic engineering can also make networks more scalable.
When all is said and done, the success of IP over WDM will depend on two main issues: whether the IP layer can support the necessary QoS mechanisms, and whether the WDM layer can provide survivability as robust as that provided by Sonet and its European equivalent, the synchronous digital hierarchy (SDH).
Developing QoS at the IP layer has proven to be a difficult task, and it has led to a variety of proprietary solutions that could not interoperate efficiently because they relied on different technologies. In the end, building on the knowledge gained from those proprietary multilayer-switching solutions, the Internet Engineering Task Force (IETF), the group that sets standards for the Internet, came up with a standards-based approach called multiprotocol label switching (MPLS), which introduced the notion of connection-oriented forwarding to IP networks.
IP networks, it must be emphasized, are connectionless, like the postal system. Packets are inserted into such networks without first setting up an explicit connection to the intended recipient. This is in contrast to a connection-oriented telephone system, where a connection must be established before a conversation can begin.
With MPLS, no physical connections are nailed up, but a path is defined (by the sequence of the addresses of the nodes it comprises) and all the packets in a particular session are sent along that path by giving them all the same label when they enter the MPLS network. At each node, each packet is routed according to its label value and is provided with a new label for use at the next node; hence the term label switching. The great achievement of MPLS is that it took the most valuable attribute of circuit-switched networks--the establishment of a defined path for each session--and added it to IP.
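As a concrete, deliberately simplified picture of that mechanism, the sketch below shows what one hop of label swapping might look like. The table entries and label values are invented for illustration and are not taken from any standard.

    # Sketch of MPLS-style label swapping at a single node. The forwarding
    # table maps an incoming label to an outgoing interface and a new label;
    # all values here are invented for illustration.
    FORWARDING_TABLE = {
        17: ("if-2", 42),   # packets arriving with label 17 leave on if-2, relabeled 42
        23: ("if-0", 8),
    }

    def forward(packet: dict) -> dict:
        """Swap the label and choose the outgoing interface for one hop."""
        out_if, new_label = FORWARDING_TABLE[packet["label"]]
        return {"label": new_label, "out_if": out_if, "payload": packet["payload"]}

    print(forward({"label": 17, "payload": "IP datagram"}))
    # {'label': 42, 'out_if': 'if-2', 'payload': 'IP datagram'}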
At the WDM layer, multiprotocol lambda (or wavelength) switching (MPlS) extends the label-switching concept to include wavelength-routed and -switched lightpaths. As its name implies, in MPlS the wavelength of the light serves as its own label. The technology can be considered a simplified version of MPLS without label stacking or per-packet forwarding. (In label stacking, labeled packets from separate paths are concatenated and labeled with a fresh MPLS label for transport over a path of much greater capacity. The process can be repeated recursively to create even bigger pipes [aggregated paths] containing lower-granularity MPLS connections, leaving fewer connections to manage and so improving scalability and easing traffic engineering.) MPlS looks promising as an enabler for future IP-over-WDM systems, although it will necessitate extensions to the IETF control and signaling protocols used for provisioning, maintaining, and restoring switched lightpaths.
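The label-stacking idea can be pictured as pushing a common outer label onto each packet's label stack at the entrance to the larger aggregated path and popping it at the exit. The sketch below illustrates that push/pop behavior; the helper names and label values are hypothetical.

    # Sketch of label stacking: many labeled flows share one coarser path by
    # pushing a common outer label, which is popped again at the far end.
    # (Function names and label values are invented for illustration.)
    def push_label(packet: dict, outer_label: int) -> dict:
        packet["labels"].append(outer_label)   # outer label now sits on top of the stack
        return packet

    def pop_label(packet: dict) -> dict:
        packet["labels"].pop()                 # expose the inner, finer-grained label again
        return packet

    flow = {"labels": [17], "payload": "IP datagram"}   # label assigned on the access path
    flow = push_label(flow, 1001)   # aggregate onto a high-capacity trunk path
    # ... carried across the trunk along with many other flows ...
    flow = pop_label(flow)          # back to the original label at the trunk's egress
    print(flow["labels"])           # [17]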
Similarities and differences
For closer integration with higher-layer networking protocols, most notably IP, the network control signals must be handled separately from the actual data traffic--or in industry parlance, distinct control and data planes must be established. With the recent advances in MPLS, possibilities for establishing the needed IP-based optical control plane architecture have opened up. Such a uniform control plane--specifically, MPLS and its MPlS extension in the optical domain--can provide the powerful traffic engineering features needed by multiservice networks while also fitting nicely with the WDM optical network.
To adapt MPLS to control optical crossconnects (OXCs), four main issues need to be tackled: addressing, signaling, routing, and survivability.
Establishing WDM lightpaths between optical crossconnects at the network edges requires an addressing scheme at the optical layer. Current optical networks have fairly few nodes, so they can get away with simple flat (that is, non-hierarchical) addressing schemes. Such schemes, however, do not scale well.
[Illustration: three models for tying the IP and WDM layers together. In the client-server model, the WDM network is treated as a separate intelligent network layer that provides circuit-switched services to multiple higher-layer protocols, such as IP and ATM. No routing information is exchanged between the WDM layer and the higher layers; instead, clients (ATM switches and IP routers, for example) submit traffic parameters to an optical connection controller (OCC), which maintains all the topology and resource-availability information about the optical transport network and sets up optical circuits to satisfy the requests. The client packets are carried over those circuits with Sonet or digital-wrapper framing. In the peer-to-peer model, border routers and optical crossconnects are treated as equals, and a unified control plane supports complete information exchange; this model should encourage competition by opening up equipment markets and enabling carriers to use each other's infrastructure. The augmented model falls between the client-server and peer-to-peer models.]
For the IP layer to communicate across a WDM domain, the ingress and the egress optical crossconnect addresses must be known, and they must be capable of resolving higher layer address endpoints. Although there are potentially several ways to approach this task, it seems prudent to use an addressing scheme that already exists and is known to work well.
For example, the OXCs work as IP-addressable devices, and since each can have many thousands of ports, an obvious way to proceed is to assign one IP address per crossconnect and to identify the ports of each in a hierarchical manner, first by IP address and then by port number. If such a scheme is adopted, the other client devices in the WDM network, such as ATM switches and Sonet devices, will also need IP-aware addresses.
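One way to picture the hierarchical scheme just described is as an (IP address, port number) pair for every crossconnect port. The sketch below is only an illustration of that idea; the address format shown is not a standard.

    # Sketch of hierarchical addressing for optical crossconnect (OXC) ports:
    # one IP address per crossconnect, then a port number within it.
    # (The format shown is illustrative, not a standardized addressing scheme.)
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OxcPortAddress:
        oxc_ip: str     # the crossconnect's single IP address
        port: int       # port number within that crossconnect

        def __str__(self) -> str:
            return f"{self.oxc_ip}:{self.port}"

    ingress = OxcPortAddress("192.0.2.10", 1537)   # addresses from the documentation range
    egress = OxcPortAddress("192.0.2.77", 12)
    print(f"lightpath endpoints: {ingress} -> {egress}")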
Signaling is another critical element in the control plane. It is responsible for resource discovery; exchanging information among IP and WDM network devices; and creating, deleting, and modifying routes. Although the signaling function is similar at both the IP and WDM layers, important distinctions exist between them. In IP networks, there is no physical separation of data and signaling paths (the control traffic and data traffic are carried over the same channels). But optical networks need to keep their data and signaling topologies (logically) separate. Whereas the data channels, or lightpaths, can run from end to end, with no termination points in the intermediate nodes, the signaling topology carries the information needed for managing the data flows (lightpaths). The control flows must therefore be terminated at each OXC, so that the necessary control information can be extracted or added as required.
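To make that hop-by-hop behavior concrete, the sketch below shows a lightpath setup message being examined and updated at every OXC along a route, while the data channel it establishes would pass through the intermediate nodes transparently. The message fields and node names are invented for illustration.

    # Sketch of hop-by-hop handling of a lightpath setup message: the control
    # message is terminated at every OXC along the route so each node can
    # record state, while the data channel it sets up will pass through the
    # intermediate nodes transparently. (Fields and node names are invented.)
    def process_setup(message: dict, route: list) -> dict:
        for node in route:
            # Each OXC extracts what it needs and adds its own information.
            message["visited"].append(node)
            message.setdefault("reserved_ports", {})[node] = f"port-{len(message['visited'])}"
        return message

    setup = {"src": "OXC-A", "dst": "OXC-D", "bandwidth_gbps": 10, "visited": []}
    print(process_setup(setup, ["OXC-A", "OXC-B", "OXC-C", "OXC-D"]))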
Based on different signaling mechanisms, four network models can be defined: client-server, big fat router, augmented, and peer-to-peer.
Client-server model
Much of the current work being done with the client-server model has essentially focused on defining a protocol interface between the lower WDM optical layer and the higher layers to provide "circuit-switched" services to multiple higher-layer protocols like IP. In this "overlay" model, the WDM network is treated as a separate intelligent network layer. The control planes in the optical (server) and client domains operate independently of each other, with no exchange of routing information. To the higher-layer services, the optical network is a black box with interfaces through which higher-layer "client" protocols request circuit-switched lightpath channels.
The client-server setup requires that the user-to-network interface (UNI) and the network-to-network interface (NNI) be separate. Typical lightpath channels have coarse granularity, offering capacity in increments of 2.5 Gb/s or more. This approach is well suited for running IP directly over WDM. Its main advantage is that it can be easily deployed commercially while enabling the carriers to use proprietary UNIs and NNIs as competitive tools.
Currently, depending on how the network interface is implemented, there are two main variants of the client-server model: optical internetworking and optical data networking. Optical internetworking represents a data-optimized network infrastructure in which switches and routers have been integrated into one unit, and optical interfaces are directly connected by WDM network elements. In this case, IP-over-WDM means that IP packets are mapped into Sonet/SDH frames and transported over Sonet/SDH-based point-to-point dense WDM systems.
In optical data networking, IP-over-WDM means that the packets are directly mapped into the optical channel by simply enclosing them within a "digital wrapper," consisting of a header and error-correcting bytes.
The advantage here is that multiple protocols can be transparently supported. As more services migrate toward IP transport, however, such protocol "transparency" may become less of a concern. Additionally, much as with IP-over-ATM, the client-server model will inevitably introduce extra integration complexity, increase operations costs, and raise interoperability concerns.
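To give a rough sense of the digital-wrapper framing described above, the sketch below encloses a packet in a small header plus some error-checking bytes. The header layout and the CRC used here are stand-ins chosen for brevity; the actual digital wrapper defines its own standardized overhead and forward-error-correction code.

    # Very rough sketch of "digital wrapper" framing: an IP packet is enclosed
    # in a header plus error-checking bytes before being mapped onto an
    # optical channel. (Header layout and CRC are stand-ins, not the real
    # digital wrapper overhead or FEC code.)
    import struct
    import zlib

    def wrap(ip_packet: bytes, channel_id: int) -> bytes:
        header = struct.pack("!HI", channel_id, len(ip_packet))   # channel id + length
        check = struct.pack("!I", zlib.crc32(ip_packet))          # stand-in for FEC bytes
        return header + ip_packet + check

    def unwrap(frame: bytes) -> bytes:
        channel_id, length = struct.unpack("!HI", frame[:6])
        payload, check = frame[6:6 + length], frame[6 + length:]
        assert struct.unpack("!I", check)[0] == zlib.crc32(payload), "corrupted frame"
        return payload

    frame = wrap(b"IP datagram bytes", channel_id=7)
    print(unwrap(frame))   # b'IP datagram bytes'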
Big fat router model
To avoid the problems of the client-server model, such as the functionality overlap between the intelligent optical layer and the IP layer, the big fat router architecture puts the IP layer in charge of everything. In this model, there is no intelligent optical connectivity management. WDM just provides a number of parallel optical channels between directly connected routers. This results in a simple optical layer, responsible only for providing transmission services.
The drawback of this approach is that it couples the capacity of the electronics to that of the optics and, because it provides no end-to-end lightpaths, increases the hop count. That may limit its usefulness, because the capacity of optical technology is growing at a much faster pace than that of electronics.
Peer-to-peer model
The peer-to-peer model is an IP-centric WDM network in which optical crossconnects and label-switching routers freely exchange all information and run the same routing and signaling protocols. To that end, the OXCs in this approach are programmable and may support wavelength conversion. The peer-to-peer network architecture is divided into a data plane and a single unified control plane.
Since the label-switching routers and OXCs exchange complete information, the routers know the optical network topology and can compute paths over it. There is no separate user-network interface or network-to-network interface.
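Because the routers see the full optical topology in this model, path computation can be done with an ordinary shortest-path algorithm over a graph that mixes routers and OXCs. The toy topology and link costs below are invented purely to illustrate the idea.

    # Sketch of path computation in the peer-to-peer model: with a unified
    # control plane, a router can run shortest-path routing over a graph that
    # includes both routers (R*) and optical crossconnects (OXC*).
    # (Topology and link costs are invented for illustration.)
    import heapq

    TOPOLOGY = {   # node -> {neighbor: cost}
        "R1":   {"OXC1": 1},
        "OXC1": {"R1": 1, "OXC2": 1, "OXC3": 2},
        "OXC2": {"OXC1": 1, "R2": 1},
        "OXC3": {"OXC1": 2, "R2": 4},
        "R2":   {"OXC2": 1, "OXC3": 4},
    }

    def shortest_path(src: str, dst: str) -> list:
        """Plain Dijkstra over the unified router/OXC graph."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nbr, w in TOPOLOGY[node].items():
                if nbr not in seen:
                    heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
        return []

    print(shortest_path("R1", "R2"))   # ['R1', 'OXC1', 'OXC2', 'R2']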
The peer-to-peer model promises to encourage competition by opening up equipment markets and enabling carriers to use each other's infrastructure. By deploying a uniform control plane, this approach will eliminate the administrative complexity of managing hybrid optical internetworking systems with separate, dissimilar control and operational semantics. It will also give network operators the option of providing a service-oriented network model, and offering a scalable and survivable network architecture. Its biggest drawbacks are the difficulty it has supporting legacy non-IP services, such as leased lines and telephony, and the need for a longer standardization period than the preceding two network models.
Augmented model
In terms of routing integration between the optical and IP domains, the augmented model falls between the independence of the client-server model and the commonality of the peer-to-peer approach. It maintains separate control planes for the optical (WDM) and client (IP) domains. However, it does allow for a limited exchange of routing information between the layers. For example, the border OXCs exchange summarized routing information with border clients.
Comparing these four models and considering advances in optical technologies, it becomes clear that the big fat router model, which uses the optical layer only for simple transmission services, has become outdated. Only the other three models, therefore, need be seriously considered in thinking about IP-over-WDM networks. The authors expect the client-server model to be the first step toward IP-over-WDM integration. It will be followed, they believe, by the augmented model, and eventually networks will migrate to the peer-to-peer model.
The job of routing
Another of the functions to be tackled in adapting MPLS to control OXCs is routing. Optical networking contains the intelligence for such jobs as routing and wavelength assignment (RWA), fault detection and protection switching, and real-time provisioning of lightpaths in automatically switched optical networks. Routing in the optical domain is based on MPLS constraint-based routing, which computes paths that satisfy specified requirements subject to given constraints. It is implemented by running a path-selection, or RWA, algorithm on a traffic-engineering database that contains information on network resource availability and topology, as well as path performance constraints.
Researchers have proposed a variety of approaches for doing RWA, including static and dynamic lightpath establishment. Static lightpath establishment is an off-line approach that employs optimization schemes to formulate the routing problem, with the objective of minimizing various cost functions. Lightpath computation in dynamic lightpath establishment uses graph-theoretic approaches, and can be used on-line, although it is generally not as good as the static approach--that is, there is a higher probability of blocking over the paths it computes.
It appears that the best way to address the RWA problem is with a hybrid approach: regularly optimize the entire network with the static approach and continually fine-tune it in real time with the dynamic approach. How often the fine-tuning is done will depend on the policies of the network operator; all that can be said is that it will be much more frequent than the static network optimization.
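A minimal picture of the dynamic (on-line) side of RWA is first-fit wavelength assignment along an already chosen route: take the lowest-numbered wavelength that is free on every link of the path, and block the request if none exists. The sketch below assumes no wavelength conversion; the link names and wavelength counts are invented for illustration.

    # Sketch of dynamic routing and wavelength assignment (RWA): given a route,
    # assign the first wavelength that is free on every hop, or block the
    # request. (A first-fit heuristic with invented links and counts.)
    NUM_WAVELENGTHS = 8
    free = {  # link -> set of free wavelength indices
        ("OXC-A", "OXC-B"): set(range(NUM_WAVELENGTHS)) - {0, 1},
        ("OXC-B", "OXC-C"): set(range(NUM_WAVELENGTHS)) - {1, 3},
    }

    def assign_wavelength(route):
        """Return the lowest wavelength free on every hop, or None (blocked)."""
        links = list(zip(route, route[1:]))
        candidates = set.intersection(*(free[l] for l in links))
        if not candidates:
            return None                   # request is blocked
        chosen = min(candidates)
        for l in links:                   # reserve the wavelength on every hop
            free[l].discard(chosen)
        return chosen

    print(assign_wavelength(["OXC-A", "OXC-B", "OXC-C"]))   # 2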
Routing can be centralized or distributed. The centralized method requires that routes be computed and implemented in one place. It simplifies control and allows for optimal utilization of network resources, but does not scale well.
In the distributed version, routing and control are performed in each switch or router. This approach is scalable, but requires that large amounts of information be exchanged among network nodes. With the distributed method, the operation of one node may depend on the status of others.
In today's networks, lightpath routing is done much less frequently than label-switched path routing at the packet level in the traditional IP network. Scalability is therefore not very critical in the near term, and the centralized approach is thus still a good choice for optical routing.
Survivability: protect or restore
The fourth key factor in adapting MPLS to control optical crossconnects, survivability, is a crucial concern in any commercial communications network. It can be approached in two main ways: protection and restoration. With the former, failure recovery is pre-provisioned; the latter involves more dynamic signal recovery.
Protection schemes have two forms: in one, traffic is transmitted over both the primary path and a (pre-provisioned) secondary path, and the end point chooses between them on the basis of signal quality.
In the other approach, a secondary path is predetermined and can carry low-priority traffic until it is needed to protect the primary path. Protection can also be provisioned at different levels of granularity: the fiber level, the wavelength level, and the protected-path-group level.
Unlike protection approaches, restoration schemes use re-routing. They calculate the secondary path and set it up only after failure has occurred. Restoration is therefore slower than protection, for which Sonet has established a benchmark time of 50 ms. In both approaches, obviously, prompt detection and notification of failures are critical for fast failure recovery.
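As a rough illustration of the first protection form, the sketch below shows the receiving end choosing between two live copies of the traffic on the basis of signal quality. The quality metric and threshold are invented; real equipment keys the decision off defect indications and bit-error-rate monitors, and aims to complete the switch within the 50-ms benchmark mentioned above.

    # Sketch of the first protection form: traffic is sent over both the
    # primary path and a pre-provisioned secondary path, and the end point
    # selects whichever copy has acceptable signal quality. (The quality
    # metric and threshold are invented for illustration.)
    def select_copy(primary_quality: float, secondary_quality: float,
                    min_quality: float = 0.5) -> str:
        """Favor the primary path while it is healthy; otherwise switch."""
        if primary_quality >= min_quality:
            return "primary"
        if secondary_quality >= min_quality:
            return "secondary"      # the protection switch to the standby copy
        return "fail"               # both copies degraded; restoration would be needed

    print(select_copy(0.95, 0.97))  # 'primary'
    print(select_copy(0.10, 0.92))  # 'secondary' after a failure on the primary path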
Focusing on the hurdles
It seems safe to say that IP-over-WDM is a winning combination. It combines IP--with its flexibility, compatibility with existing solutions, and huge base of existing systems--with WDM, which offers huge capacities and low per-bit cost. Still, some major hurdles must be overcome before carriers can deploy these solutions.
The most serious is the lack of standards for signaling in optical networks. Nevertheless, several existing proprietary solutions do provide IP-over-WDM integration and are cost-effective for delivering multiple services. Unfortunately, they don't work when equipment from different vendors is mixed.
Currently, three organizations are working on IP-over-WDM specifications. One crucial topic of discussion within and among the groups is what needs to be standardized. A group within the IETF is seeking to define signaling standards that would enable user equipment to set up and tear down connections across optical backbones automatically, using the peer-to-peer model.
The Optical Internetworking Forum (OIF) is proposing schemes based on both the client-server and peer-to-peer models under a single umbrella. This approach would use MPLS so that the standards-based equipment can be used in both environments.
A third industry group, Optical Domain Service Interconnect (ODSI), has proposed a UNI that differs from the OIF's, mainly in its signaling protocol. The ODSI solution shortens development time by reusing the existing TCP/IP software stack, but it does not support traffic engineering.
Also lacking is a standard for wavelength spacing and guard bands among vendors. The only way to deal with this issue at present is to buy a complete package of switching and transmission equipment from one vendor.
What may be overlooked when considering the foregoing details is that ultra-high speed data-centric networks will be needed sooner rather than later. The IP layer of those networks must be able to support QoS mechanisms so that, in the long term, IP traffic can be sent directly over the optical layer. Some carriers are already dealing with the problem in a brute-force manner by installing huge amounts of fiber. With the recent advances in MPLS and its adaptation to MPlS, IP-over-WDM is being made possible in a way that promises to provide a fast, low-cost, end-to-end solution.
Michael J. Riezenman, Editor
About the Authors
SUDHIR S. DIXIT (SM) and YINGHUA YE are involved in research on broadband networks at Nokia Research Center, Burlington, Mass. Dixit (sudhir.dixit@nokia.com) currently heads the center's broadband networks group and Ye (Yinghua.Ye@nokia.com) is a senior research engineer.