Benchmarks for LAN Performance Evaluation

Communications of the Association for Computing Machinery, August 1988.



Author:

     Larry Press
     Professor, Computer Information Systems
     California State University, Dominguez Hills
     10726 Esther Avenue
     Los Angeles, CA 90064
     (213) 475-6515
     (213) 516-3579
     ARPA: LPRESS@VENERA.ISI.EDU

Abstract:

     After a brief discussion of analytic models and simulation,
     this paper describes the benchmarks developed for a
     comparative study of eighteen local area network (LAN)
     hardware/software configurations.  Benchmark tasks were run
     on a foreground workstation while varying numbers of
     background workstations simulated contending activity.  Two
     background-activity programs, appropriate for different
     environments, are presented along with an example of their
     application.  The programs are available upon request.

Keywords:

     Local area network, performance evaluation, benchmarking

Acknowledgment:

     This work was done for Apple Computer in conjunction with
     the Seybold Group.

-----

     This paper is confined to LAN performance evaluation.
Performance is just one of many factors that should be considered
in LAN evaluation, including hardware and software compatibility,
ease of installation and maintenance, file and record locking,
security, reliability, bridge and gateway availability, and user
interface quality.  Discussion and checklists of such general
considerations may be found in books such as [1].

     After a brief discussion of analytic modeling and
simulation, our benchmarking approach and activity simulation
programs are described, along with an example of their
application.

I. Evaluation Alternatives

     Analytic modeling, simulation and benchmarking are three
performance evaluation alternatives.

     Analytic models have been developed for the low-level
physical and data-link layers of LANs.  They predict the relative
efficiency of different network topologies (bus, star, and ring),
transmission media (twisted pair, broadband and baseband coaxial
cable, and optical fiber), and access-control protocols (CSMA and
token passing).  Alternatives and analytic model results at this
level are reviewed in [2].  Analytic models of complete network
systems have not been developed because of the level of
complexity and the variety of configurations; such models would
have to include the characteristics of the actual hardware and
software that implement the low-level protocols, file and other
services, and applications.

     Simulation might be valuable for modeling a network file
system and its file-server hardware and software in addition to
low-level protocols.  Simulation models could go beyond analytic
studies, taking into account factors like disk directory
structures, seek-sequencing algorithms, server and workstation
buffer capacities and management algorithms, packet assembly
time, and error check time as they interact with the access
protocol and media.  Simulation would require a detailed
knowledge of the hardware and software being modeled, and would
probably be of most value to a LAN manufacturer.

     Benchmarking is a third general alternative, which can
account for system-level complexity with relatively little
effort.  The remainder of this paper discusses the benchmarking
approach taken in a comparison of eighteen commercial LAN
configurations.

II. Benchmarks

     In our tests we used eleven IBM PC-XT personal computers as
workstations, which could be connected using various media and
network interface cards.  We also varied the file-server hardware
and the network control program.

     For each configuration tested, one workstation was used as
the benchmark station, and the others were used to simulate
background activity.  A suite of foreground tasks was run and
timed on the benchmark station while the number of background
workstations was varied from zero to as many as ten.

     Programs were written to simulate both constant and
intermittent background activity (Figures 1 and 2).

     The main loop of the constant-activity program (lines 60-200)
writes and then reads a fixed-length record and displays the
cumulative mean and variance of the times between transactions.
Writing and then reading a fixed-length record is a simple example
of network activity, and you could easily modify the program to
generate different activity, for instance to open and close a file
(as in Figure 2) or to force a seek to a randomly located record;
a sketch of such a modification follows this paragraph.
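
     As a hypothetical illustration (not one of the programs used
in our tests), the following sketch shows one way to generate
random-seek activity.  It follows the scratch-file conventions of
Figure 1 and adds an assumed record count, NREC, so that each
transaction writes and then reads a record chosen at random from
a larger scratch file.

10 REM RANDSEEK.BAS - hypothetical variant of CONST.BAS in which each
20 REM transaction writes and then reads a randomly located record,
30 REM forcing seeks on the server.  Stop with Ctrl-Break.
40 RANDOMIZE TIMER
50 INPUT "enter record length: ", N
60 INPUT "enter number of records in scratch file: ", NREC
70 INPUT "enter drive identifier, without the colon: ", D$
80 OPEN "r", 1, D$ + ":junk", N
90 FIELD #1, N AS Q$
100 LSET Q$ = STRING$(N, "q")
110 FOR I = 1 TO NREC: PUT #1, I: NEXT I'   pre-extend the scratch file
120 REM main loop
130 R = INT(RND * NREC) + 1'                pick a record from 1 to NREC
140 PUT #1, R'                              write the randomly located record
150 GET #1, R'                              read it back
160 GOTO 130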

     In addition to generating background activity for the
benchmark tests, the constant-activity program computes the mean
and variance of the time between transactions.  The standard
deviation is interesting as a measure of the likely variability of
response times when the network becomes heavily loaded.  We were
surprised to observe that the means were not always the same from
one background station to the next.  In some configurations
certain workstations got more attention than others, presumably
because of variation in the characteristics of the network
interface cards.
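
     For reference, line 160 of Figure 1 computes the variance in
the usual one-pass form: with TN transactions, SX the sum of the
times between transactions, and SSX the sum of their squares,

     VAR = (TN*SSX - SX^2) / (TN*(TN-1)),

which is algebraically equivalent to the familiar sample variance
SUM((x - mean)^2)/(TN - 1); the standard deviation is its square
root.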

     The constant-activity program simulates performance when one
or more users access the network at the same time as the
foreground station.  Such contention might be heavy at certain
times during the day (for example, in the morning, when people are
reading mail and starting applications) or during software
development, when several programmers are running compilers.  In
interpreting the importance of degradation due to such contention,
you would have to consider the likely distribution of simultaneous
access requests in your environment.

     The intermittent-activity benchmark is meant to simulate a
transaction-processing environment.  The time between network
transactions is normally distributed, with the mean and standard
deviation specified by the user.  The mean and standard deviation
would be chosen to represent your environment, and again the
transaction-activity subroutine (lines 190-260) could be tailored
to reflect the nature of your transactions.  For instance, a
typical transaction might require two seeks, each with a write and
a read of a record of specified length; a sketch of such a
subroutine follows.
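
     As a hypothetical sketch, the subroutine below could replace
lines 190-260 of Figure 2 to produce such a transaction: two seeks
to randomly chosen records, each written and then read back.  It
assumes a record count, NREC, is added to the initialization
dialog of Figure 2; everything else follows the conventions of
that program.

190 REM Hypothetical transaction: two seeks, each with a write and a read
200 OPEN "r", 1, FLNM$, N: FIELD #1, N AS Q$: LSET Q$ = QQ$
210 R = INT(RND * NREC) + 1'     first randomly located record
220 PUT #1, R: GET #1, R
230 R = INT(RND * NREC) + 1'     second randomly located record
240 PUT #1, R: GET #1, R
250 CLOSE #1
260 RETURN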

III.  Sample Run

     Figure 3 shows the results of running word processing and
file management foreground tasks while the constant-activity
program was executing on background workstations.  The files used
in the benchmark tasks were a 50 KB word processing document and
a 500-record dBase file.  Both files were artificial, generated
for these tests.

     The results are not in elapsed time, but are normalized in
terms of the time to run the same task on a stand-alone PC-XT.
For example, loading the word processing program when no
background workstations were running took the same amount of time
as loading it from the hard disk of a stand-alone PC-XT, while
loading it with one background workstation running the
constant-activity program took 1.1 times as long as on the
stand-alone XT.  Note that some tasks, for example loading or
saving the word processing document, are faster on the network
with limited background activity than on a stand-alone XT.
That is due to the superior speed of the file-server disk drive
and the high network data rate.

IV. Conclusion

     This paper presents a straightforward approach to
benchmarking LAN performance, along with two background-activity
programs.  The background-activity programs simulate constant
contention, which would be encountered at load times in an
office-automation or software-development environment, and
intermittent contention, which would be found in a
transaction-processing environment.  The tests are simple to set
up, quick to run, and repeatable.  If you would like a copy of
the programs (source and object code), send me a formatted MS-DOS
disk and a self-addressed, stamped mailer.

------

References:


1. Archer, Rowland, The Practical Guide to Local Area Networks,
   Osborne McGraw-Hill, Berkeley, CA, 1986.

2. Stallings, William, Local Networks, ACM Computing Surveys,
   March 1984.

3. Latour, Alain, Polar Normal Distribution, Byte, August 1986.

-----

10 REM CONST.BAS generates constant LAN activity, reporting
20 REM the variance of the time between transactions.
30 REM  -------------------------------------------------------------
40 GOSUB 220'  initialization subroutine
50 REM  -------------------------------------------------------------
60 REM Main loop
70 REM    TN       transaction number
80 REM    DELT     time since previous transaction
90 REM    SX       sum of the times between transactions
100 REM   SSX      sum of the squares of the times between transactions
110 REM   VAR      variance of the time between transactions
120 REM -------------------------------------------------------------
130 TN=TN+1
140 DELT=TIMER-PREV: PREV=TIMER
150 SX=DELT+SX: SSX=SSX+DELT^2
160 IF TN > 1 THEN VAR = (TN*SSX-SX^2)/(TN*(TN-1))'     compute variance; undefined for TN=1
170 PRINT "transaction:"; TN; " mean:"; SX/TN; " variance"; VAR
180 PUT #1, 1
190 GET #1, 1
200 GOTO 130
210 REM  -------------------------------------------------------------
220 REM Initialization of:
230 REM    N        length of the noise record
240 REM    D$       server "drive" for scratch file
250 REM    FLNM$    name of scratch file on server
260 REM    PREV     time of previous transaction
270 REM    QQ$      pad string to be written to disk
280 REM  -------------------------------------------------------------
290 KEY (1) ON
300 ON KEY (1) GOSUB 420'     stop execution
310 REM initialize file to be read
320 INPUT "enter record length: ", N
330 INPUT "enter drive identifier, without the colon: ", D$
340 PRINT "hit F1 to stop"
350 FLNM$ = D$ + ":junk"
360 OPEN "r", 1, FLNM$, N
370 FIELD #1, N AS Q$
380 QQ$=STRING$ (N, "q")
390 LSET Q$ = QQ$
400 PREV=TIMER'               initialize for statistics
410 RETURN
420 END'                      operator hits F1 to stop execution

Figure 1.  A program for generating constant background activity
during LAN benchmarking.
-----

10 REM NORM.BAS generates normally distributed LAN noise.
20 REM  ---------------------------------------------------------------
30 GOSUB 500'                 initialization subroutine
40 REM  ---------------------------------------------------------------
50 REM main program
60 REM    DEL        time until next net event
70 REM    NXT        time of next event
80 REM    TN         transaction number
90 REM  ---------------------------------------------------------------
100 GOSUB 310'                 get random variable in Z
110 DEL = M + SD*Z: IF DEL < 0 THEN DEL = 0'   treat a negative interval as zero
120 NXT = TIMER+DEL
130 TN = TN+1
140 PRINT "Transaction"; TN; "in"; DEL; " seconds."
150 IF TIMER < NXT THEN 150
160 GOSUB 200'                 fire off some network activity
170 GOTO 100
180 REM  -------------------------------------------------------------
190 REM Subroutine with side effect of network activity.
200 OPEN "r", 1, FLNM$, N
210 FIELD #1, N AS Q$
220 LSET Q$ = QQ$
230 PUT #1, 1
240 GET #1, 1
250 CLOSE #1
260 RETURN
270 REM  -------------------------------------------------------------
280 REM This subroutine returns a standard normal random variable
290 REM in Z, but it has lots of side effects (TOGGLE, Z2, S, R1, R2).
300 REM See Byte Magazine, 8/86, for a discussion of the algorithm.
310 IF TOGGLE = 1 THEN Z=Z2: TOGGLE=0 : RETURN
320 TOGGLE=1: S=1
330 WHILE (S>=1)
340   R1=2*RND-1
350   R2=2*RND-1
360   S=R1^2 + R2^2
370 WEND
380 S=SQR(-2*LOG(S)/S)
390 Z=S*R1: Z2=S*R2
400 RETURN
410 REM  -------------------------------------------------------------
420 REM Initialization of:
430 REM    M        mean time between network events
440 REM    SD       standard deviation of time between events
450 REM    N        length of pad record
460 REM    D$       server "drive" for scratch file
470 REM    FLNM$    name of scratch file on the server
480 REM    QQ$      pad string to be written over the network
490 REM  -------------------------------------------------------------
500 KEY (1) ON
510 ON KEY (1) GOSUB 630'   stop execution
520 REM initialize the distribution
530 RANDOMIZE TIMER
540 INPUT "enter mean time between network activity: ", M
550 INPUT "enter the standard deviation: ",SD
560 REM initialize the file to be read
570 INPUT "enter record length: ", N
580 INPUT "enter drive identifier, without the colon: ", D$
590 FLNM$ = D$ + ":junk"
600 QQ$=STRING$ (N, "q")
610 PRINT: PRINT "Hit F1 to stop."
620 RETURN
630 END'                     operator hits F1 to stop execution

Figure 2.  A program for generating normally distributed
background activity during LAN benchmarking [3].  This would be
appropriate for simulating transaction-processing environments.

-----
Server: 3Com 3Server
Network interface card: 3Com short Ethernet board
Cable: thin Ethernet
Network control program: 3Com Ether Series

                                   NUMBER OF BACKGROUND WORKSTATIONS

 TASK                                 0      1      3      5     10

    Word Processing (50 KB)
       load program                  1.0    1.1    1.5    1.8    6.1
       load document                 0.7    0.7    0.9    1.3    2.3
       save document                 0.5    0.6    0.7    0.9    2.8

    File Management (500 record)
       load program                  1.1    1.3    1.9    2.3    9.6
       serial search                 1.0    1.0    1.1    1.4    2.7
       sort                          1.0    1.0    1.2    1.4    3.7
       index                         0.9    1.0    1.0    1.3    3.3


Figure 3.  A typical test run.  The results are normalized to the
times of a stand-alone IBM PC-XT executing the tasks from its
local hard disk.

