# Case Study: Buffering Packets for a Link

## Problem Description

When data is transmitted on a computer network, it must leave a component (e.g., a server, switch or PC) via a connection (e.g., a network interface card (NIC), port or host-bus adapter (HBA)) and travel across a link to another component. The output connection must buffer the data (i.e., store it in some memory) if the link is already being used by another piece of data. This is known as output buffering.

One common question asked by network engineers is: "How big does my output buffer need to be so that I don't lose data very often?"

In this case study we consider data being sent from a server, via a NIC and Ethernet link, to some data storage. The data is split into packets and transmitted across the link. The size of the data determines the number of packets and, thus, the time the link needs to send the data.

The data being generated on the server is video being compressed before being sent. The time between transmissions is exponentially distributed with mean 1 minute. The size of the data follows a triangular distribution with minimum 3.75 GB, mode 7.5 GB and maximum 11.25 GB. Since the Ethernet link has 1 Gbps of bandwidth and there are 8 bits in a byte, the transmission times also follow a triangular distribution with minimum 0.5 min, mode 1 min and maximum 1.5 min (e.g., 7.5 GB × 8 = 60 Gb, which takes 60 s at 1 Gbps).
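The size-to-time conversion above can be checked directly. A minimal sketch (the function name and constants are ours, chosen to match the figures in the text):

```python
# Convert a data size in GB to a transmission time in minutes,
# assuming a 1 Gbps link and 8 bits per byte (values from the text).
BANDWIDTH_GBPS = 1.0

def transmission_time_mins(size_gb):
    """Time in minutes to send `size_gb` gigabytes over the link."""
    gigabits = size_gb * 8.0                 # 8 bits per byte
    seconds = gigabits / BANDWIDTH_GBPS      # seconds at 1 Gbps
    return seconds / 60.0

print(transmission_time_mins(3.75))   # minimum size -> 0.5 min
print(transmission_time_mins(7.5))    # mode         -> 1.0 min
print(transmission_time_mins(11.25))  # maximum size -> 1.5 min
```

This confirms that the triangular size distribution (3.75, 7.5, 11.25) GB maps onto the triangular transmission-time distribution (0.5, 1, 1.5) minutes.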

The goal of this simulation study is to find the average buffer size required in bytes.

## Problem Formulation

This problem may be modelled as a single-server queue. The data forms the customers waiting for service and once data is being transmitted, it is being "served" by the link.

The interarrival and processing times are given by exponential and triangular distributions respectively. Although the interarrivals are Markovian, the service times are not, so the system is an M/G/1 queue rather than M/M/1. Thus we cannot use the M/M/1 queueing model's analytical solution and must instead solve this problem numerically via simulation.
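To make the single-server queue concrete, here is a minimal simulation sketch of the same system (our own illustration, not the Arena model): exponential interarrivals with mean 1 minute and triangular service times (0.5, 1, 1.5 minutes), using the Lindley recursion for successive waiting times.

```python
import random

random.seed(42)

def simulate_waits(n_customers):
    """Lindley recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    wait = 0.0
    waits = []
    for _ in range(n_customers):
        waits.append(wait)
        service = random.triangular(0.5, 1.5, 1.0)  # low, high, mode (mins)
        interarrival = random.expovariate(1.0)      # mean 1 minute
        wait = max(0.0, wait + service - interarrival)
    return waits

waits = simulate_waits(100_000)
print(sum(waits) / len(waits))  # average time in queue (minutes)
```

Arena builds the same logic graphically with its flowchart modules; this script only shows the underlying queueing mechanics.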

## Computational Model

This model requires one instance of each of three flowchart modules: Create, Process and Dispose. Data is created at the Create module, moves to the Process module to be transmitted across the link and then leaves via the Dispose module. See the Arena Guide for a demonstration of adding modules.

## Results

Once your Arena model is complete, you can run it for multiple replications and obtain estimates for various output quantities, such as the average number of data items waiting in the queue.

## Conclusions

We need to report the required buffer size in GB. To estimate it, take the average number of data items in the queue (Arena's number-in-queue statistic) and multiply by the average size of a data item. The mean of a triangular distribution is (minimum + mode + maximum) / 3, so the average data size is (3.75 + 7.5 + 11.25) / 3 = 7.5 GB.
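This conversion from a queue-length statistic to a buffer size can be sketched as follows; the queue-length value below is a placeholder, not a simulation result:

```python
# Triangular distribution parameters for the data size (GB, from the text).
a, m, b = 3.75, 7.5, 11.25

# Mean of a triangular distribution is (min + mode + max) / 3.
mean_size_gb = (a + m + b) / 3.0
print(mean_size_gb)  # -> 7.5

# Placeholder: replace with Arena's average number-in-queue statistic.
avg_number_in_queue = 2.0

avg_buffer_gb = avg_number_in_queue * mean_size_gb
print(avg_buffer_gb)  # -> 15.0 for the placeholder value above
```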

## Extra for Experts

Rather than run trial replications in order to estimate how many replications are necessary to ensure the accuracy of a particular output, we can use dynamic simulation. Dynamic simulation measures the accuracy, i.e., half-width, of an output at the end of each replication and stops replicating once the desired accuracy has been achieved.
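The stopping rule behind dynamic simulation can be sketched in a few lines (our own illustration of the idea, not Arena's internals): after each replication, compute the confidence-interval half-width of the output and stop once it reaches a target. The `replicate` function below is a placeholder standing in for one simulation run.

```python
import math
import random

random.seed(1)

def replicate():
    """Placeholder for one replication's output statistic."""
    return random.gauss(10.0, 2.0)

def run_until_precise(target_half_width, min_reps=10, max_reps=10_000):
    """Replicate until the 95% CI half-width is small enough."""
    outputs = []
    while len(outputs) < max_reps:
        outputs.append(replicate())
        n = len(outputs)
        if n >= min_reps:
            mean = sum(outputs) / n
            var = sum((x - mean) ** 2 for x in outputs) / (n - 1)
            half_width = 1.96 * math.sqrt(var / n)  # normal approximation
            if half_width <= target_half_width:
                break
    return outputs

reps = run_until_precise(0.5)
print(len(reps))  # replications needed for a half-width <= 0.5
```

Because the check happens after each replication completes, the run that satisfies the criterion has already been performed, which matches the note above that one extra replication is always carried out.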

In Arena, this is implemented by setting the maximum number of replications "on the fly". However, this means that one extra replication is always performed. See the attached Flash movie for a tutorial on implementing dynamic simulation.

## Topic attachments

| Attachment | Size | Date | Comment |
| --- | --- | --- | --- |
| dynamic-single-server.swf | 620.3 K | 2009-07-26 | Dynamic Simulation with the Single Server Model |
| run-single-server.swf | 1042.0 K | 2009-07-26 | Running the Single Server Model |
| single-server.swf | 330.1 K | 2009-07-26 | Single Server Arena Tutorial |

Topic revision: r4 - 2009-08-18 - MichaelOSullivan
