Designing a Storage Area Network for the Department of Engineering Science


The objective of this project was to use mixed-integer programming (MIP) models to automatically design a minimum-cost storage area network (SAN). We designed this SAN as a possible alternative to the storage system that was used (up until 2007) by the Department of Engineering Science (DES) at The University of Auckland.

Problem Description

When we started this project in 2006 the DES storage system consisted of:

  • 3 servers: two used as domain controllers (Server 1 for staff and Server 2 for students) and one (Server 3) used purely for data storage;
  • Server 1 and Server 2 each contain two "plug-and-play" hard drives. The drives have 300 GB of storage and their disks run at 10000 rpm;
  • Server 1 and Server 2 both mirror their data, i.e., each hard drive in a server contains a copy of the data accessed by that server;
  • A new storage device (with 15 TB of storage) was to be added to the storage system.

While the mirroring of the disks gives this system some reliability, if either Server 1 or Server 2 fails or requires maintenance, then the data on the disks within that server is unavailable. However, having directly attached storage means that access times for the data are fast.

In this project we wanted to design an alternative storage system that was cost-effective and reliable.

Storage Devices

To replicate the functionality of the current DES storage system we need to have two storage devices and one backup storage device.

The two storage devices must model the behaviour of the current embedded storage devices, so they will be disk arrays with two 300 GB, 10000 rpm disks.

The backup storage device will be identical to the storage device being added to the current storage system.


The new storage system will be connected to the original servers (Server 1, Server 2, and Server 3) and also to two new servers (New Server 1 and New Server 2). The new servers will initially be used for running the iSCSI software, iSCSI Target, and for testing the SAN before it is connected to the DES servers. Once the decision to use the SAN has been made, the new servers will be used predominantly for running iSCSI Target.


Each server and storage device must have two connections to the SAN fabric to allow for two (or more) disjoint paths between any server/storage device pair.

Switches, Hubs and Links

Initially we used a fully-connected single-edge Core-Edge topology for our SAN design:

  • Fully-Connected means that all the server ports and all the storage device ports are connected to the SAN;
  • Single-Edge means that both server ports and storage device ports connect to a single layer of edge switches;
  • Core-Edge is a topology that uses edge switches to connect to hosts (e.g., servers, client machines, etc.) and devices (e.g., storage devices such as disks, disk arrays, etc.), and core switches to connect the edge switches to each other. All links run either between the hosts/devices and the edge switches or between the edge switches and the core switches. To preserve network symmetry, all edge switches are the same switch type and all core switches are the same switch type (although the core switch type may differ from the edge switch type).

There are no hubs in a Core-Edge topology. For the DES storage network, we sourced appropriate Ethernet switches and links via the internet and came up with the following list of possible switches:

| Switch Type | Cost ($) | Ports |
| Cameo | 73.83 | 5 |
| DLink | 89.00 | 8 |
| CNet | 275.00 | 16 |
| Linksys | 366.35 | 24 |

All switches come with the full complement of ports pre-configured (so there is no cost per port) and run at 1 Gbps. We decided to use Cat6e (1 Gbps) cable for the links.
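As a rough sanity check on switch costs, the cheapest way to supply a given number of ports from a single switch type can be brute-forced directly from the table above. This is a minimal sketch, not part of our actual model: it ignores ports consumed by inter-switch links, which the full MIP accounts for.

```python
# Switch catalogue from the table above: (name, cost in $, ports).
SWITCHES = [
    ("Cameo", 73.83, 5),
    ("DLink", 89.00, 8),
    ("CNet", 275.00, 16),
    ("Linksys", 366.35, 24),
]


def cheapest_single_type(ports_needed, max_count=8):
    """Cheapest way to supply `ports_needed` ports using one switch type
    (matching the core-edge symmetry rule that all switches in a layer
    are identical). Ignores inter-switch link ports (a simplification)."""
    best = None
    for name, cost, ports in SWITCHES:
        for count in range(1, max_count + 1):
            if count * ports >= ports_needed:
                total = count * cost
                if best is None or total < best[0]:
                    best = (total, name, count)
                break  # more switches of this type only cost more
    return best


# 5 servers + 3 storage devices, 2 ports each = 16 host-facing ports.
print(cheapest_single_type(16))  # (178.0, 'DLink', 2)
```

For the 16 host-facing ports needed here, this picks two DLink switches at $178 — but the real design must also reserve switch ports for the edge-core links, so the MIP answer need not match this lower bound.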


We want to design a fully-connected, single-edge, Core-Edge SAN from Ethernet switches to connect 5 servers to 3 storage devices. We will use Cat6e Ethernet cable to link the servers, storage devices, and Ethernet switches, and select our switches from the list of possible switch types above.

We developed a MIP model for core-edge SAN design in AMPL and solved it using the CPLEX solver. For a full discussion of our models and results, see our publications on SAN design.
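The full AMPL model is described in our publications, but the flavour of the optimisation can be illustrated with a small brute-force search. This is a hedged sketch, not our actual model: it assumes every edge switch links once to every core switch, spreads host links evenly over the edge switches, requires at least two switches in each layer for redundancy, and ignores link costs and traffic.

```python
import math
from itertools import product

# Switch catalogue from the table above: (name, cost in $, ports).
SWITCHES = [
    ("Cameo", 73.83, 5),
    ("DLink", 89.00, 8),
    ("CNet", 275.00, 16),
    ("Linksys", 366.35, 24),
]

HOST_LINKS = (5 + 3) * 2  # 5 servers + 3 storage devices, 2 ports each


def enumerate_core_edge(max_switches=6):
    """Brute-force the cheapest (edge type, edge count, core type, core count)
    under simplified port constraints. At least two switches per layer are
    required so no single switch is a single point of failure."""
    best = None
    for (en, ec, ep), (cn, cc, cp) in product(SWITCHES, SWITCHES):
        for n_edge in range(2, max_switches + 1):
            for n_core in range(2, max_switches + 1):
                hosts_per_edge = math.ceil(HOST_LINKS / n_edge)
                if hosts_per_edge + n_core > ep:  # edge switch port capacity
                    continue
                if n_edge > cp:                   # core switch port capacity
                    continue
                cost = n_edge * ec + n_core * cc
                if best is None or cost < best[0]:
                    best = (cost, en, n_edge, cn, n_core)
    return best


print(enumerate_core_edge())
```

Under these toy assumptions the search picks three DLink edge switches and two Cameo core switches (about $414.66); the real MIP also optimises link placement and flow routing, so its answer can differ.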


We solved this problem using mixed-integer programming. The resulting configuration of the Core-Edge SAN is shown in the attached image san_configuration.jpg (a full-size image, approximately A3).

After finding this network, we wanted to see how much traffic it could handle reliably. Using another MIP, we found that the network could handle 200 MB/s of data traffic between all server-storage device pairs with diverse protection reliability (i.e., two node-disjoint paths supporting each flow). The data flow diagram is shown in the attached image san_flows.jpg (a full-size image, approximately A3).
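Independently of the MIP, the "two node-disjoint paths" property can be sanity-checked on any candidate fabric with a unit-capacity max-flow (Menger's theorem), splitting each intermediate node to enforce node-disjointness. A minimal sketch, using a hypothetical toy topology rather than our actual design:

```python
from collections import defaultdict, deque


def node_disjoint_paths(edges, s, t):
    """Count node-disjoint s-t paths via unit-capacity max-flow: each
    intermediate node v is split into (v,'in') -> (v,'out') with capacity 1,
    so at most one path may pass through it (Menger's theorem)."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # make the residual arc reachable in BFS

    nodes = {s, t} | {u for e in edges for u in e}
    for v in nodes:
        if v not in (s, t):
            add((v, "in"), (v, "out"), 1)

    def out(v): return v if v in (s, t) else (v, "out")
    def inn(v): return v if v in (s, t) else (v, "in")

    for u, v in edges:  # links are bidirectional
        add(out(u), inn(v), 1)
        add(out(v), inn(u), 1)

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph (Edmonds-Karp)
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:  # push one unit along the path found
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1


# Toy fabric: server S and storage D each dual-homed to edge switches E1, E2.
fabric = [("S", "E1"), ("S", "E2"), ("D", "E1"), ("D", "E2")]
print(node_disjoint_paths(fabric, "S", "D"))  # 2 disjoint paths
```

Because every host and storage device is dual-homed to different edge switches, a check like this should report at least 2 for every server-storage pair in the final design.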

In 2007 NDSG received a grant from The University of Auckland's Faculty of Engineering CAPEX fund to implement the core-edge SAN we designed in this project. The details of the implementation work can be found in the Implementing a Storage Area Network Prototype project.


-- MichaelOSullivan - 14 Dec 2010

Topic attachments
| Attachment | Size | Date | Who |
| san_configuration.jpg | 955.3 K | 2010-12-15 | MichaelOSullivan |
| san_flows.jpg | 1351.1 K | 2010-12-15 | MichaelOSullivan |
Topic revision: r5 - 2011-09-08 - TWikiAdminUser