Simulating Networks

For quick and accurate results, hybrid simulation is becoming the technique of choice


This is part of IEEE Spectrum's special report: Always On: Living in a Networked World.

[Illustration: a network model]

Simulation tools, and the relatively recent class of hybrid simulation tools in particular, are just the ticket for anyone faced with designing a network from scratch or keeping one operating in the face of ever-increasing traffic. Simulation is the most cost-effective way to predict how the myriad PCs, servers, printers, routers, hubs, and switches, as well as all the applications, protocols, and communication link technologies that make up a network, will cope with the ever-growing flood of data packets they must handle.

Fortunately, lots of tools are available to do such straightforward tasks as specifying the topology of a local-area or a frame relay network, and finding points of bottleneck or delay. There are library models for the simulators containing the attributes of devices such as routers, switches, and workstations, and for communication links such as T1 lines, 1-Gb/s Ethernet, and wireless local-area networks (LANs). With these libraries, entire networks of anywhere from several nodes to tens of thousands of nodes can be modeled, depending on the level of detail required. "Some of our customers run detailed simulations of one node that last for hours, or days. Others run 10 000 nodes [with less detail] in a few minutes," noted Todd Kaloudis, vice president of marketing at Opnet Technologies Inc., Washington, D.C.

To check out different operating scenarios, network designers can rerun the simulators using different device throughputs specified in bits or packets per second, together with transaction rates, routing protocols, and applications such as Web browsing, videoconferencing, and so on. The payoff is in how well the designers can balance user needs with network resources and cost, noted Arnold W. Bragg, principal member of the technical staff with Fujitsu Network Communications, Raleigh, N.C.
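The arithmetic behind such what-if reruns can be illustrated with the classic M/M/1 queueing result, mean delay = 1/(mu - lambda), where mu is the service rate and lambda the arrival rate. Commercial tools use far richer models; the helper and the numbers below are a purely illustrative sketch, not any vendor's method.

```python
def mm1_delay(arrival_pps, service_pps):
    """Mean packet delay in seconds for an M/M/1 queue: 1 / (mu - lambda).
    A toy stand-in for the analytical models network tools employ."""
    if arrival_pps >= service_pps:
        return float("inf")   # queue is unstable: arrivals outpace service
    return 1.0 / (service_pps - arrival_pps)

# Sweep candidate device throughputs against a fixed 900-packet/s load,
# as a designer rerunning scenarios might: faster devices cost more but
# cut delay sharply.
delays_ms = {s: mm1_delay(900, s) * 1000 for s in (1000, 2000, 5000)}
```

Doubling the device's throughput from 1000 to 2000 packets/s cuts the mean delay from 10 ms to under 1 ms here, which is exactly the kind of cost-versus-performance trade-off the reruns are meant to expose.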

The Modeler network simulator from Opnet Technologies is the leading hybrid simulation tool. At US $29 000 per license, it combines two types of simulations. Analytical simulation relies on mathematical models of the network using such data as the average packet rate at a node. The other, a discrete-event simulator, relies on analyzing the movement of packets through the network. The hybrid approach is helpful because it can speed up performance considerably while still providing the required information [see "Hybrid Simulation's Ingredients"].

For example, to study an application's performance, a designer may simulate that application precisely, that is, simulate every packet--the creation of the packet, its IP address and size, segmentation and reassembly at each layer in the protocol stack, plus any overhead, routing, and so on, Kaloudis explained. The other traffic on the network is modeled mathematically as background traffic, even if its rate is as high as gigabits per second. The math model includes basic information such as the source and destination of the traffic, and its basic properties.
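Opnet does not publish its internals, but the division of labor Kaloudis describes can be sketched in a few lines. In this toy model, foreground packets are walked through one by one (discrete-event style), while the background traffic is folded in analytically as a fixed utilization that reduces the link capacity left over for them. The function, packet format, and rates are all illustrative assumptions, not Opnet's API.

```python
def hybrid_sim(fg_packets, bg_rate_bps, link_bps):
    """Toy hybrid simulation of one link: foreground packets are handled
    individually (discrete-event), while background traffic is modeled
    analytically as extra utilization that slows the link for them."""
    bg_util = bg_rate_bps / link_bps          # analytical background load
    assert bg_util < 1.0, "link overloaded by background traffic alone"
    free_at = 0.0                             # time the link next frees up
    delays = []
    for arrival, size_bits in fg_packets:     # (arrival time in s, size in bits)
        # Background traffic leaves only (1 - bg_util) of the capacity.
        service = size_bits / (link_bps * (1.0 - bg_util))
        start = max(arrival, free_at)         # wait if the link is busy
        free_at = start + service
        delays.append(free_at - arrival)      # per-packet end-to-end delay
    return delays

# Ten 12-kbit foreground packets, one every 10 ms, on a 10-Mb/s link that
# also carries 6 Mb/s of analytically modeled background traffic.
fg = [(i * 0.010, 12_000) for i in range(10)]
delays = hybrid_sim(fg, bg_rate_bps=6e6, link_bps=10e6)
```

The speedup of the hybrid approach comes from exactly this asymmetry: only the packets under study generate simulation events, no matter how heavy the background load is.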


Increasingly, modeling packages are put on-line to acquire real traffic data. That's what Opnet's application characterization environment, or ACE, does. Announced last May, the software identifies and visually depicts packets of traffic of networked applications, as picked up by probes on the network. It can then derive the rate of data flow (in bytes per second), as well as other information about the application.
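ACE's actual interfaces are not described here, but the core calculation, deriving a data rate from timestamped packet captures, is simple to sketch. Everything in this snippet (the `flow_rate` helper and the record format) is an illustrative assumption, not Opnet's software.

```python
def flow_rate(records):
    """Average data rate in bytes per second over a capture of
    (timestamp_s, byte_count) records, as a network probe might report it."""
    if len(records) < 2:
        return 0.0                       # need a time span to divide by
    times = [t for t, _ in records]
    total_bytes = sum(n for _, n in records)
    span = max(times) - min(times)
    return total_bytes / span if span > 0 else 0.0

# Four 1500-byte packets captured over 3 seconds: 6000 B / 3 s = 2000 B/s.
capture = [(0.0, 1500), (1.0, 1500), (2.0, 1500), (3.0, 1500)]
rate = flow_rate(capture)
```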

The ACE data is subsequently fed into Opnet's Modeler or DecisionGuru software, which simulates the captured application traffic under "what-if" scenarios and predicts the outcome.

The blessing of modules

Users of simulation software often turn to software modules to address specialized needs, rather than having to add code to general network-simulation packages to meet those needs. Some companies--Opnet Technologies and Netcracker, Waltham, Mass.--are already offering such modules, typically in the form of libraries that are included with the main software or available as an option at an added cost. "I would like [vendors] to build more of these," Bharat Doshi, senior director of performance analysis at Lucent Technologies Inc., Murray Hill, N.J., told IEEE Spectrum. He noted that his staff writes a lot of simulation modules for wireless networks because "new problems [show up] all the time." He remarked that today's simulation tools do not have modules for universal mobile telecommunications systems (UMTS), the next generation of wireless systems.

Doshi also noted that the commercially available graphic modeling environment is inadequate for simulating such complex problems as the restoration of an optical network following a disruption caused by, say, a cable break or power interruption. Very sophisticated mathematical algorithms are needed to solve a problem like who gets how much capacity, and when, during the restoration. "This is better done in C, or C++," he added, meaning that Lucent engineers must write their own code to solve these problems.

Ease of use is important

Ease of use is a blessing to those wrestling with such tough problems, and it is the chief claim of NetRule from Analytical Engines Inc., McLean, Va., a software package that relies primarily on analytical techniques to predict network performance. Since Analytical Engines began marketing NetRule in late 1998, the company has emphasized the tool's simple data model and its intuitive graphical depiction of networks [see illustration].

Although NetRule scales well and can model networks with over 100 000 nodes, it is inexpensive enough ($7500) to be used to explore performance issues on a network of any size. NetRule 3.0, expected to become available at press time, models such quality-of-service (QoS) techniques as class-based weighted fair queuing, which ensures that short messages get fair access to the network compared with longer messages such as graphics files. Like Opnet's Modeler, NetRule 3.0 also predicts all of the standard QoS measures, including system availability (uptime), packet drop rate (the percentage of packets that fail to reach their destination), and packet latency (end-to-end packet delay).
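The QoS measures listed are straightforward to compute from a simulation's packet log. The sketch below is generic; the `qos_measures` helper and its dictionary inputs are invented for this example and are not NetRule's or Modeler's API.

```python
def qos_measures(sent, received):
    """Compute two of the standard QoS measures from a packet log.
    sent: {packet_id: send_time_s}; received: {packet_id: receive_time_s}.
    Returns (drop_rate_percent, mean_latency_s)."""
    dropped = [p for p in sent if p not in received]
    drop_rate = 100.0 * len(dropped) / len(sent)
    latencies = [received[p] - sent[p] for p in sent if p in received]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return drop_rate, mean_latency

sent = {1: 0.00, 2: 0.01, 3: 0.02, 4: 0.03}
received = {1: 0.05, 2: 0.06, 4: 0.08}     # packet 3 never arrived
drop, lat = qos_measures(sent, received)   # 25.0 % dropped, 0.05 s mean latency
```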

Research focal points

While vendors update and improve commercial simulation tools, researchers are tackling longer-range goals. John Heidemann, researcher with the Information Sciences Institute at the University of Southern California in Marina del Rey, noted the increasingly pressing need to simulate very large networks, or perhaps even the entire Internet, as one of the main research focal points today. Of course, this latter goal is easier said than done [see "Simulation is Crucial"].

Heidemann added that researchers in network simulation are also preoccupied with how to study network behavior at different time scales, say, at 1-, 10-, and 100-second intervals, and with how to validate simulation results. Such studies are needed because "there is increasing evidence that different protocols behave very differently at different time scales," Heidemann noted. For example, Web traffic is bursty across all time scales, which may not be the case with audio traffic, he explained.
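One common way to quantify burstiness at a given time scale is the coefficient of variation (standard deviation over mean) of per-interval traffic volumes: smooth traffic scores near zero, bursty traffic much higher. The helper below is a generic sketch of that idea, not code from any of the tools mentioned.

```python
from collections import Counter

def burstiness(arrivals, bin_s):
    """Coefficient of variation of per-bin byte counts for a trace of
    (timestamp_s, byte_count) arrivals, binned at width bin_s seconds.
    Higher values mean burstier traffic at that time scale."""
    bins = Counter()
    for t, nbytes in arrivals:
        bins[int(t // bin_s)] += nbytes
    # Include empty bins across the whole observation window.
    last = int(max(t for t, _ in arrivals) // bin_s)
    counts = [bins.get(i, 0) for i in range(last + 1)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return (var ** 0.5) / mean if mean else 0.0

# Synthetic trace: all traffic arrives in the first second of every
# 10-second period, so it looks bursty at 1 s but smooth at 10 s.
trace = [(10 * k + 0.1 * i, 1000) for k in range(5) for i in range(10)]
cv_1s = burstiness(trace, 1.0)    # high: traffic is bursty at this scale
cv_10s = burstiness(trace, 10.0)  # zero: every 10-s bin carries the same load
```

The same trace scoring so differently at the two bin widths is precisely the scale-dependence Heidemann describes.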

As for validation, the networking community is developing better techniques for demonstrating that simulations actually match real-world networks, Heidemann noted.

To Probe Further

An insight into the use of network design and simulation tools is provided in "Which Network Design Tool is Right for You?," by Arnold W. Bragg, writing in IT Professional, a new IEEE magazine (September/October 2000, pp. 23-31).

