next up previous contents
Next: 6. Security Requirements Up: Status Server Requirements Previous: 4. Interface Requirements   Contents


5. Performance Requirements

There are a number of internal and external factors that will affect the performance of the Status Server. The hardware platform, operating system, network bandwidth, packet size, CPU load, and resident memory will all play a part in the throughput, performance, and latency of the Status Server. The target platform and implementation should be designed to maximize performance and throughput and to limit latency.

Assuming a well planned design and implementation, it is likely that the overall performance of the Status Server will depend largely on the load placed upon it by each connected client. As a result, it is important to characterize the type of data to be stored in the Status Server and the update frequency of this data.

5.1 Individual objects within the Status Server should be updated at a maximum frequency of one hertz.

Each time an object in the Status Server is updated, there is a chance that the object is being monitored by multiple clients. As a result, a single update can trigger a number of additional actions by the Status Server. To maintain sufficient overall performance and throughput, clients must limit the rate of updates to the Status Server and specify an "age" and "deadband" range wherever possible. If a client requires a specific piece of information at a more frequent interval, a direct API between the subsystems should be considered instead of using the Status Server.
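The rate cap and deadband described above can be enforced on the client side before an update is ever sent. The following sketch illustrates one way to do this; the class and parameter names are hypothetical and are not part of the Status Server interface.

```python
import time

class UpdateThrottle:
    """Client-side filter that suppresses updates which arrive faster
    than a maximum frequency (the 1 Hz cap) or whose value has not
    changed by more than a deadband since the last published update.
    Illustrative only; names are assumptions, not a defined API."""

    def __init__(self, min_interval=1.0, deadband=0.0):
        self.min_interval = min_interval  # seconds between updates (1.0 = 1 Hz cap)
        self.deadband = deadband          # minimum change worth publishing
        self._last_time = None
        self._last_value = None

    def should_publish(self, value, now=None):
        now = time.monotonic() if now is None else now
        if self._last_time is not None:
            if now - self._last_time < self.min_interval:
                return False              # too soon: would exceed 1 Hz
            if abs(value - self._last_value) <= self.deadband:
                return False              # within deadband: no meaningful change
        self._last_time = now
        self._last_value = value
        return True
```

A client would call `should_publish()` on each new sample and only forward those for which it returns true, so a noisy sensor sampled at high frequency generates at most one Status Server update per second.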


5.2 The maximum storage size associated with the value of an object within the Status Server should be fixed.


The Status Server should be designed to hold a series of small objects. By restricting the maximum size of each element, it is possible to prevent memory and performance bottlenecks as well as help characterize the type of information the Status Server should be designed to store.

A file system should be considered as an alternative for larger pieces of information which must be stored and shared.
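A fixed per-object size cap is straightforward to enforce at the point of storage. The sketch below shows one possible approach; the 256-byte limit and all names are assumptions for illustration, not values from this specification.

```python
MAX_VALUE_BYTES = 256  # assumed cap; the real limit would be set by the design

class BoundedStore:
    """Minimal sketch of an object store that rejects oversized values,
    keeping per-object memory usage predictable. Illustrative only."""

    def __init__(self, max_bytes=MAX_VALUE_BYTES):
        self.max_bytes = max_bytes
        self._objects = {}

    def put(self, name, value):
        # Normalize to bytes so the size check is unambiguous.
        data = value.encode("utf-8") if isinstance(value, str) else bytes(value)
        if len(data) > self.max_bytes:
            raise ValueError(
                f"value for {name!r} is {len(data)} bytes; limit is "
                f"{self.max_bytes} (store large data in the file system instead)")
        self._objects[name] = data

    def get(self, name):
        return self._objects[name]
```

Rejecting oversized values outright, rather than truncating them, forces clients holding large data toward the file-system alternative mentioned above.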

5.3 As a goal, the typical transaction latency should be less than 10 milliseconds.

As mentioned earlier, the latency will depend on a number of factors. However, an initial goal should be a latency of less than 10 milliseconds for a round-trip transaction (request-response) over a 100 Mbps LAN within the same subnet. Accurate benchmark figures should be available once the Status Server has been implemented.

The 10 millisecond target is based on some benchmarking performed using the single-threaded non-blocking socket server which the QSO Tools use to send commands to director. Typical round-trip latency on an unloaded 500 MHz Pentium III server via a 100 Mbps LAN is roughly 3 milliseconds. This includes the time to send a command over the network, parse the command, fork a child process, send the command to director via a cli_cmd API call, receive a PASSFAIL response from the child process, and forward the response over the network to the client. In this case, the command used for testing purposes was a say command with an 80 character message.

While the processing performed by the Command Server used by the QSO Tools is quite different from that of the Status Server, 10 milliseconds should be a reasonable first estimate.
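Once the Status Server exists, a round-trip latency figure like the one above can be measured with a simple timed request-response loop. The sketch below uses a local echo server as a stand-in for the real server (which would parse the request and return a response); everything here is illustrative, not part of the Status Server design.

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one connection and echo each message back: a stand-in for
    a server that parses a request and returns a PASS/FAIL response."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def measure_latency(n=100):
    """Return the mean round-trip time, in seconds, over n transactions."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))       # loopback; port chosen by the OS
    listener.listen(1)
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

    message = b"say " + b"x" * 80         # mirrors the 80 character test message
    samples = []
    with socket.create_connection(listener.getsockname()) as client:
        for _ in range(n):
            start = time.perf_counter()
            client.sendall(message)
            remaining = len(message)
            while remaining:              # read until the full echo is back
                remaining -= len(client.recv(remaining))
            samples.append(time.perf_counter() - start)
    listener.close()
    return sum(samples) / len(samples)
```

Over the loopback interface this measures only the software path; running the client on a separate host on the same subnet would add the network component described in the benchmark above.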

Tom Vermeulen