Case Study: Buffering Packets for a Link
Submitted: 26 Jul 2009
Application Areas: Telecommunications Networks
Problem Description
When data is transmitted on a computer network, it must leave a component (e.g., a server, switch, or PC) via a connection (e.g., a network interface card (NIC), port, or host-bus adapter (HBA)) and travel across a link to another component. The output connection must buffer data (i.e., store the data in some memory) if the link is already being used by another piece of data. This is known as output buffering.
One common question asked by network engineers is: "How big does my output buffer need to be so I don't lose data very often?"
In this case study we are going to consider data being sent from a server, via a NIC and ethernet link, to some data storage. The data gets split into packets and transmitted across the link. The size of the data determines the number of packets and, thus, the time the link needs to send the data.
The data being generated on the server is video being compressed before being sent. The time between transmissions is exponentially distributed with mean 1 minute. The size of the data follows a triangular distribution with minimum 3.75GB, mode 7.5GB and maximum 11.25GB. Given that the ethernet link has 1 Gbps of bandwidth and there are 8 bits in a byte, the transmission times also follow a triangular distribution with minimum 0.5 mins, mode 1 min and maximum 1.5 mins.
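The bandwidth arithmetic above can be checked with a short sketch (plain Python, not part of the Arena model; the function names are illustrative, and the distribution parameters are taken directly from the text):

```python
import random

MEAN_INTERARRIVAL_MIN = 1.0   # exponential interarrival mean, minutes
LINK_BANDWIDTH_GBPS = 1.0     # link speed, gigabits per second

def data_size_gb():
    """Sample a data size in GB: Triangular(min 3.75, mode 7.5, max 11.25).
    Note random.triangular takes (low, high, mode)."""
    return random.triangular(3.75, 11.25, 7.5)

def transmission_time_min(size_gb):
    """Time to push size_gb across the link, in minutes."""
    gigabits = size_gb * 8                     # 8 bits per byte: GB -> Gb
    seconds = gigabits / LINK_BANDWIDTH_GBPS   # 1 Gb takes 1 s at 1 Gbps
    return seconds / 60.0

# The minimum-size transfer: 3.75 GB -> 30 Gb -> 30 s -> 0.5 min
print(transmission_time_min(3.75))   # 0.5
print(transmission_time_min(11.25))  # 1.5
```

This reproduces the triangular transmission-time parameters quoted in the text (0.5, 1 and 1.5 minutes).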
The goal of this simulation study is to find the average buffer size required in gigabytes (GB).
Problem Formulation
This problem may be modelled as a single-server queue. The data forms the customers waiting for service and once data is being transmitted, it is being "served" by the link.
The interarrival and processing times follow exponential and triangular distributions respectively. Although the interarrivals are Markovian, the service times are not, so the analytical solution of the M/M/1 queueing model does not apply and the problem must be solved numerically via simulation.
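As a rough cross-check of the Arena results, the single-server queue can be sketched with the Lindley recurrence in plain Python (a simplified illustration, not the Arena implementation; `simulate_queue` and its parameters are assumptions):

```python
import random

def simulate_queue(n_transfers=10_000, seed=42):
    """Estimate the mean wait of a transfer in the output buffer via
    the Lindley recurrence W[n+1] = max(0, W[n] + S[n] - A[n+1]),
    where S is a transmission (service) time and A an interarrival gap.
    Returns the mean wait in minutes."""
    random.seed(seed)
    wait = 0.0
    total_wait = 0.0
    for _ in range(n_transfers):
        service = random.triangular(0.5, 1.5, 1.0)  # minutes (low, high, mode)
        gap = random.expovariate(1.0)               # mean 1 minute
        wait = max(0.0, wait + service - gap)
        total_wait += wait
    return total_wait / n_transfers

mean_wait_min = simulate_queue()
# By Little's law, avg transfers waiting = arrival rate * mean wait;
# the arrival rate here is 1 per minute.
print(mean_wait_min * 1.0)
```

Multiplying the average number waiting by the mean data size then gives an estimate of the average buffer occupancy.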
Computational Model
This model requires one instance of each of three flowchart modules: Create, Process and Dispose. Data is created at the Create module, moves to the Process module to be transmitted across the link, and then leaves via the Dispose module. See the Arena Guide for a demonstration of adding modules.
Results
Once your Arena model is complete, you can run it for multiple replications and obtain estimates for various quantities:
Running the Output Buffering Model in Arena
Conclusions
To answer the original question we need the required buffer size in GB. Arena reports the average number of data transfers waiting in the queue; multiplying this by the mean data size gives the average buffer occupancy. The mean of a triangular distribution is the average of its minimum, mode and maximum, here (3.75 + 7.5 + 11.25)/3 = 7.5 GB.
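The arithmetic can be sketched as follows (the `avg_in_queue` value is a hypothetical stand-in for the queue statistic Arena reports):

```python
# Mean of a Triangular(a, m, b) distribution is (a + m + b) / 3.
a, m, b = 3.75, 7.5, 11.25        # GB: minimum, mode, maximum
mean_size_gb = (a + m + b) / 3
print(mean_size_gb)               # 7.5

avg_in_queue = 2.0                # hypothetical Arena output: avg number waiting
print(avg_in_queue * mean_size_gb)  # estimated average buffer size in GB
```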
Extra for Experts
Rather than run trial replications in order to estimate how many replications are necessary to ensure the accuracy of a particular output, we can use dynamic simulation. Dynamic simulation measures the accuracy, i.e., the half-width of a confidence interval, of an output at the end of each replication and stops replicating once the desired accuracy has been achieved.
In Arena, this is implemented by setting the maximum number of replications "on the fly". However, this means that one extra replication is always performed. See the attached flash movie for a tutorial on implementing dynamic simulation:
Dynamic Simulation for the Output Buffering Model in Arena
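The stopping rule can also be sketched outside Arena (a simplified illustration; `run_replication`, `target_hw` and the normal critical value 1.96 are assumptions, and Arena itself uses a t critical value and its own replication settings):

```python
import math
import random

def half_width(samples, z=1.96):
    """Approximate 95% confidence-interval half-width for the mean,
    using a normal critical value as a stand-in for Arena's t value."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return z * math.sqrt(var / n)

def run_until_precise(run_replication, target_hw, min_reps=10, max_reps=1000):
    """Replicate until the half-width of the output's confidence
    interval drops below target_hw, checking after each replication."""
    samples = []
    for _ in range(max_reps):
        samples.append(run_replication())
        if len(samples) >= min_reps and half_width(samples) <= target_hw:
            break
    return samples

# Hypothetical replication output: a noisy estimate around 5.0
reps = run_until_precise(lambda: random.gauss(5.0, 1.0), target_hw=0.2)
print(len(reps), sum(reps) / len(reps))
```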