The RFC 2544 standard, established by the Internet Engineering Task Force (IETF), is the de facto methodology that outlines the tests required to measure and prove performance criteria for carrier Ethernet networks. The standard provides an out-of-service benchmarking methodology to evaluate the performance of network devices using throughput, back-to-back, frame loss and latency tests, with each test validating a specific part of an SLA. The methodology defines the frame sizes, test durations and number of test iterations to use. Once completed, these tests provide the performance metrics of the Ethernet network under test.
The throughput test determines the maximum number of frames per second that can be transmitted without any error. This test is done to measure the rate-limiting capability of an Ethernet switch as found in carrier Ethernet services. The methodology involves starting at a maximum frame rate and comparing the number of transmitted and received frames. Should frame loss occur, the transmission rate is divided by two and the test is restarted. If there is no frame loss during a trial, the transmission rate is increased by half of the difference from the previous trial. This trial-and-error methodology, known as the halving/doubling (binary search) method, is repeated until the highest rate at which there is no frame loss is found. The throughput test must be performed for each frame size. Although the test time during which frames are transmitted can be short, it must be at least 60 seconds for the final validation. Each throughput test result must then be recorded in a report, using frames per second (f/s or fps) or bits per second (bit/s or bps) as the measurement unit.
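The halving/doubling search above can be sketched as a simple binary search. This is only an illustration: `trial` is a hypothetical stand-in for a real hardware trial that reports whether a given frame rate was loss-free, and `resolution` is an assumed stopping granularity, not something the standard mandates.

```python
def rfc2544_throughput(max_rate, trial, resolution=1.0):
    """Binary-search for the highest loss-free frame rate (sketch).

    `trial(rate)` is a hypothetical stand-in for one hardware trial:
    it returns True when every frame transmitted at `rate` fps was
    received (no loss).
    """
    lo, hi = 0.0, max_rate       # lo = highest known loss-free rate
    rate = max_rate              # start at the maximum frame rate
    while hi - lo > resolution:  # stop once the interval is small enough
        if trial(rate):
            lo = rate            # no loss: remember it and move up
        else:
            hi = rate            # loss: this rate is too high
        rate = (lo + hi) / 2.0   # halve the remaining search interval
    return lo
```

In practice this search runs once per frame size, and the rate it converges on is then re-validated with a trial of at least 60 seconds before being recorded.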
The latency test measures the time required for a frame to travel from the originating device through the network to the destination device (also known as end-to-end testing). This test can also be configured to measure the round-trip time; i.e., the time required for a frame to travel from the originating device to the destination device and then back to the originating device. When latency varies from frame to frame, it causes issues with real-time services. For example, latency variation in VoIP applications would degrade the voice quality and create pops or clicks on the line. Long latency can also degrade Ethernet service quality. In client-server applications, the server might time out or poor application performance can occur. For VoIP, this would translate into long delays in the conversation, producing a “satellite call feeling”. The test procedure begins by measuring and benchmarking the throughput for each frame size to ensure the frames are transmitted without being discarded (i.e., the throughput test). This fills all device buffers, therefore measuring latency in the worst conditions. The second step is for the test instrument to send traffic for 120 seconds. At the mid-point of the transmission, a frame must be tagged with a timestamp, and when it is received back at the test instrument, the latency is measured. The transmission should continue for the rest of the time period. This measurement must be taken 20 times for each frame size, and the results should be reported as an average.
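The 20-trial averaging step can be sketched as follows. This is a minimal illustration, not test-instrument code: `run_trial` is a hypothetical stand-in for one full 120-second trial that returns the transmit and receive timestamps recorded for the frame tagged at the stream's mid-point.

```python
def average_latency(run_trial, trials=20):
    """Average latency over repeated tagged-frame trials (sketch).

    `run_trial()` is a hypothetical stand-in for one 120 s trial at the
    benchmarked throughput rate: it returns the (tx_time, rx_time) pair
    for the frame tagged at the mid-point of the transmission.
    """
    samples = []
    for _ in range(trials):          # 20 trials per frame size
        tx_time, rx_time = run_trial()
        samples.append(rx_time - tx_time)  # one latency sample per trial
    return sum(samples) / len(samples)     # report the average
```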
The frame loss test measures the network’s response in overload conditions—a critical indicator of the network’s ability to support real-time applications in which a large amount of frame loss will rapidly degrade service quality. As there is no retransmission in real-time applications, these services might rapidly become unusable if frame loss is not controlled. The test instrument sends traffic at maximum line rate and then measures whether the network dropped any frames. If so, the values are recorded, and the test restarts at a slower rate (the rate steps can be as coarse as 10%, although a finer percentage is recommended). This test is repeated until there is no frame loss for three consecutive iterations, at which time a results graph is created for reporting. The results are presented as a percentage of frames that were dropped; i.e., the percentage indicates the difference between the offered load (transmitted frames) and the actual load (received frames). Again, this test must be performed for all frame sizes.
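The stepped sweep above can be sketched like this. It is an illustration only: `run_trial` is a hypothetical stand-in for one trial that reports the frame counts, and the 10% step and three-clean-iteration stop condition follow the description in the text.

```python
def frame_loss_sweep(line_rate, run_trial, step_pct=10, stop_after=3):
    """Sweep offered load down from line rate, recording loss (sketch).

    `run_trial(rate)` is a hypothetical stand-in for one trial at the
    given offered load: it returns (sent, received) frame counts.
    """
    results = []                     # (offered rate, loss %) per step
    clean = 0                        # consecutive loss-free iterations
    rate = line_rate                 # start at maximum line rate
    while rate > 0 and clean < stop_after:
        sent, received = run_trial(rate)
        loss_pct = 100.0 * (sent - received) / sent
        results.append((rate, loss_pct))
        clean = clean + 1 if loss_pct == 0 else 0
        rate -= line_rate * step_pct / 100.0   # step down (10% here)
    return results                   # data points for the results graph
```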
Back-to-Back:
The back-to-back test (also known as burstability or burst test) assesses the buffering capability of a switch. It measures the maximum number of frames received at full line rate before a frame is lost. In carrier Ethernet networks, this measurement is quite useful as it validates the excess information rate (EIR) as defined in many SLAs.
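Finding the longest loss-free burst can be sketched as a binary search over burst length. This is an assumption-laden illustration: `send_burst` is a hypothetical stand-in for one trial that sends a burst of frames at full line rate and reports whether all of them were received.

```python
def back_to_back_burst(max_burst, send_burst):
    """Find the longest line-rate burst received without loss (sketch).

    `send_burst(n)` is a hypothetical stand-in for one trial: it sends
    `n` frames back-to-back at full line rate and returns True when all
    of them were received (the buffers absorbed the burst).
    """
    lo, hi = 0, max_burst            # lo = longest known loss-free burst
    while lo < hi:
        mid = (lo + hi + 1) // 2     # round up so the search terminates
        if send_burst(mid):
            lo = mid                 # burst fit in the buffers: go longer
        else:
            hi = mid - 1             # loss: burst is too long
    return lo
```

The burst length this converges on reflects the switch's buffering capacity, which is why the result maps naturally onto the EIR commitment in an SLA.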