There are three main locations where computing devices are installed. All are interconnected by Ethernet at various speeds (mostly Gigabit, with a few hosts and switch uplinks at 10 Gb) as of 2016.
For this discussion, anything installed on the telescope itself, whether at Prime, Cassegrain, or even Coude, is considered ``Telescope.'' [Though this might be a good place for some discussion of differing connection facilities at each?] Only hardware which is required to be near the instrument is installed here. This includes hardware to control power supplies and monitor auxiliary electronics, as well as the detector controller for the camera itself.
Locations on the telescope, such as the prime focus, are wired with Ethernet. It is possible to connect a portable computer to have limited access to the instrument. Small ``network appliances'' (I-Openers) are also currently installed at locations on the telescope to transmit images from sky-monitoring cameras.
Ethernet is also used to connect remote diagnostic units like the BayTech ``MDAC'' or ``RPC'' units, which provide a route to serial connections for other hardware and control of power sources. Units such as this can also be used to acquire basic data (voltages, temperatures, etc.) at points near the instrument. RS232 and other serial connections are commonly used by devices on the telescope, but they can be tunneled through the Ethernet, or through fiber in the case of MegaCam's controllers. This simplifies error correction and virtually eliminates distance limitations.
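As a sketch of what talking to such a tunneled serial device looks like from a Unix host (the host name, port number, and command framing below are placeholder assumptions for illustration, not actual BayTech or CFHT values):

```python
import socket

# Hypothetical serial-to-Ethernet bridge; the address and port are
# placeholders, and the CR+LF line ending is a common but assumed
# convention for RS232 command framing.
BRIDGE_HOST = "bridge.example"  # assumed bridge address
BRIDGE_PORT = 4001              # assumed raw-TCP serial port

def frame_command(command: str) -> bytes:
    """Encode one serial command line, terminated with CR+LF."""
    return command.encode("ascii") + b"\r\n"

def query_device(command: str, timeout: float = 5.0) -> bytes:
    """Send a command over the tunneled serial link and read the reply."""
    with socket.create_connection((BRIDGE_HOST, BRIDGE_PORT),
                                  timeout=timeout) as s:
        s.sendall(frame_command(command))
        return s.recv(1024)
```

Because the serial stream is carried over TCP, the usual RS232 cable-length limits disappear and retransmission is handled by the network stack rather than the device protocol.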
MegaCam uses a detector controller designed at the C.E.A. in France by Jean de Kat, which is based on Analog Devices' ``SHARC'' DSP. This controller handles the high readout speeds required by MegaCam. The guide CCDs for MegaCam use the San Diego State University (``ARC'' generation II) controller.
Currently, all our other controllers are the SDSU generation II or III, designed by Bob Leach. These consist of a set of boards providing the analog and digital functions, and are built around the Motorola 56000 Digital Signal Processor (DSP).
All controllers currently use fiber optic links to send pixel data down to the 4th floor computer room.
Below the telescope, on the 4th floor of the building, is a climate-controlled computer room. It houses two important computing hosts for data acquisition: the Detector Host and the Session Host. (A third host, the Display Host, is located outside the computer room, close to the screens which it drives, for logistical reasons.)
The fiber carrying the pixel data from the instrument connects (via a nearby patch-panel) to an interface on a Detector Host's PCI bus. (This is true for both SDSU and MegaCam controllers.) The main function of this host is to reliably read out the detector and provide the data in FITS format to the Session Host. Redundant hosts exist mainly for backup, though it is conceivable to use multiple detector hosts at once.
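A FITS file of the kind the Detector Host provides consists of 80-character header cards packed into 2880-byte blocks, followed by pixel data padded to the same block size. As an illustration only (this is not CFHT's actual writer, and the minimal header set and function names are ours), a single-HDU file can be assembled in pure Python:

```python
import struct

def fits_card(keyword: str, value) -> bytes:
    """Format one 80-character FITS header card (fixed format)."""
    text = "T" if value is True else str(value)
    return f"{keyword:<8}= {text:>20}".ljust(80).encode("ascii")

def minimal_fits(pixels, width, height) -> bytes:
    """Pack signed 16-bit pixels into a minimal single-HDU FITS stream."""
    header = b"".join([
        fits_card("SIMPLE", True),
        fits_card("BITPIX", 16),
        fits_card("NAXIS", 2),
        fits_card("NAXIS1", width),
        fits_card("NAXIS2", height),
        b"END".ljust(80),
    ])
    header += b" " * (-len(header) % 2880)           # pad header block
    data = struct.pack(f">{len(pixels)}h", *pixels)  # big-endian int16
    data += b"\0" * (-len(data) % 2880)              # pad data block
    return header + data
```

Real acquisition code adds many more header cards (exposure metadata, WCS, detector identification), but the block structure is the same.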
For infrared acquisition, the Detector Host may perform more complex tasks, such as co-adding frames and performing calculations for on-the-ramp integration. (We prefer to do these tasks on a Unix host rather than on a DSP.) Each host will be configured with sufficient memory and local disk space for these tasks. If possible, the local disk space should be sufficient to also take over the functions of the Session Host (see below) for a full night in an emergency. Thus, even if everything but the detector controller and the Detector Host were to fail, it would still be possible to do a lot. In a lab setting, this is often the normal mode of operation. It also means our engineering interfaces are not radically different from our observing interfaces. [NOTE: This mode is not practical for the Queue Scheduled observing mode we use with MegaCam. Instead, we make sure that the hardware downstream of the Detector Hosts is also redundant.]
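As a toy sketch of the two tasks named above, assuming equally spaced, equally sized frames (this is illustrative only, not the actual acquisition code):

```python
def coadd(frames):
    """Sum a list of equally sized frames, pixel by pixel."""
    return [sum(px) for px in zip(*frames)]

def ramp_slopes(frames, dt=1.0):
    """Least-squares signal rate (counts per dt) of each pixel up the ramp.

    `frames` is a list of non-destructive reads taken dt apart; the
    fitted slope of counts versus time estimates the flux on each pixel.
    """
    n = len(frames)
    times = [i * dt for i in range(n)]
    tbar = sum(times) / n
    denom = sum((t - tbar) ** 2 for t in times)
    slopes = []
    for px in zip(*frames):              # iterate over pixels
        ybar = sum(px) / n
        num = sum((t - tbar) * (y - ybar) for t, y in zip(times, px))
        slopes.append(num / denom)
    return slopes
```

Production code would use vectorized arrays and also reject cosmic-ray hits between reads, but the per-pixel slope fit is the core of on-the-ramp processing.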
The Session Host runs all other acquisition software and user interface components. Images are transferred to it across a 1 Gbit switch. Since the roll-out of FreeNAS Network Attached Storage devices at CFHT, the Session Host no longer deals with storing the bulk image data; it is written to the NAS RAID volume directly from the Detector Host, over NFS (the Network File System protocol).
Instrument control and high level sequencing are handled by the Session Host. Any quick data evaluation required by observation and needing fast access to the data may run here as well, but for MegaCam, separate real-time Elixir systems exist for this purpose in Waimea and at the Summit. The Session Host is the computer with which the user has the most interaction.
The Display Host for the observer runs a multi-headed X server. No applications are run locally on the Display Host except a Web browser, should the operator choose to launch one; everything else runs on the acquisition computers. Hard drives may or may not be present in this machine; if they are, they are used only for temporary files or local swap space. The network speed requirement for this host will never exceed the needs of X11 traffic to update the screen.
For most of the first decade of the 2000s, the Display Hosts drove 3 separate 1600x1024 screens configured as a single logical screen (SLS, or Xinerama) totalling 4800x1024 pixels. Through 2015, we used Display Hosts with 3 or 4 monitors; multiple monitors still exist on the TCS console, but we are working to eliminate them. Today, single high resolution screens are available, such as the ``4K'' 32 inch monitors which have been deployed for ike and maka, with a resolution of 3840x2160.
A high speed network connects the Summit and Waimea computing facilities.
In Waimea, an identical display host is located in the Remote Observing Room. Once again, all processes displayed here run far away, on the Session Host at the summit. A method to duplicate the contents of the Summit Display Host's screen in Waimea (Virtual Network Computing) has been explored, but we favor making each control interface capable of running multiple instances instead. Such a solution is currently used for both graphical (X11 or Web based) and command line interfaces. Like the Summit Display Host, the network speed and latency needs of this Remote Display Host are limited to screen updates, and are already met by our DS3 network.