There are two areas where our pursuit of extensive software generality, as encouraged by object-oriented design and modularity, might come into conflict with efficiency; neither appears to be a major problem. One is the distribution of GBT processes across many workstations and single-board computers. The truly time-critical functions are isolated in hardware and in closely connected processors. The whole system is tied together by the scan coordinator, which communicates with the other device managers over a dedicated Ethernet. Because the Ethernet protocol does not guarantee a maximum transmission time, we must be conservative in estimating the latency that the start-of-scan coordination process can tolerate.
The other area where efficiency and generality are at odds is the monitoring of a large number of hardware diagnostic points. For each hardware test point, the sampler software nearest the hardware transfers data continuously into a ring buffer at the highest rate a user of the monitor system is expected to request. This gives the user considerable flexibility to connect to and disconnect from any test point without affecting the hardware or low-level software configuration. It does mean that, at any given moment, most data are either ignored or heavily decimated in time before being examined. Some tuning of the system will be required to keep the processor load at the hardware connections from becoming significant.
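The ring-buffer arrangement described above can be sketched as follows. This is a minimal illustration, not the actual sampler code: the class name, capacity, and decimation interface are assumptions.

```python
from collections import deque

class TestPointBuffer:
    """Illustrative ring buffer for one hardware test point.

    The sampler writes continuously at the highest rate a monitor
    client might request; readers attach and detach freely and may
    decimate in time, without affecting the sampler side.
    """

    def __init__(self, capacity: int):
        # deque with maxlen discards the oldest samples automatically,
        # giving ring-buffer behavior with no explicit index arithmetic
        self.samples = deque(maxlen=capacity)

    def write(self, value: float) -> None:
        """Sampler side: runs continuously regardless of readers."""
        self.samples.append(value)

    def read_decimated(self, stride: int) -> list:
        """Reader side: keep every stride-th sample, ignoring the rest."""
        return list(self.samples)[::stride]
```

Because writing proceeds unconditionally, most samples are overwritten unread when no client is attached, which is exactly the flexibility-for-efficiency trade the text describes.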