While NEO could have an
impact on each of the "big picture" items listed above, the project
is limited to the data acquisition software (I've marked these items
with a '*') and the interfaces that this software provides to
external components (marked with a '+').
NEO should be seen as an evolution of CFHT's Pegasus system.
Supporting wide field imaging on CFH12K and Megaprime will be the focus,
as this is where the most features are lacking in the current system.
Wherever possible, design choices will be made such that we do not exclude
other types of observing, such as infrared or spectroscopy. But these
instruments will only be converted to use the NEO system as time permits.
Our primary goal, and the one we think we can realistically meet with
the given manpower over the next 2+ years, will be to support CFH12K and
Megaprime. (Something without which we'd be in big trouble!) More
details on the potential benefits of NEO to each existing and future
CFHT instrument can be found in the Purpose section below.
NEO will encompass the following:
- NEO includes any software dealing with the data acquisition phase
(critical to observing at night).
- NEO does not plan to use a commercial database (such as
Sybase) as an interface to external projects like the Queue Scheduler,
even if the need arises for NEO to have its own internal database.
- TCS and most hardware control systems are not included, but each
one may require a special "agent" that will interface the system
to NEO. Development of these agents, or specification of what
information a system may need to make public through the
NEO status server, will be part of the project.
- Specifications for other interfaces in use during the data
acquisition phase are included. Each non-NEO
component in the list above will probably need to talk to
NEO at some time. This boils down to the following:
- The command protocol and command set used by the Queue
Scheduler or Queue Operator Person to trigger acquisition
actions is included, but the Queue Scheduler itself is not.
- The triggers and hooks for the data archive and distribution
systems are included, but the archiver is not.
- The triggers and hooks for data pre-processing pipelines
are included, but the pipeline itself is not.
- The FITS header specification is included but extracting
this information and feeding it back into the Queue database
is not.
Why does NEO set interfaces that may impose some restrictions
on the designs of other systems? NEO handles a time-critical
phase of observation. Our philosophy will be to create a
simple (and sometimes boring) system where data flows
smoothly and is easy to trace, even by those with only
modest knowledge of the system.
Why are we choosing/developing new tools instead of using what exists
in the Pegasus system? After a decade of evolution of our instruments
and detector systems, some aspects of the Pegasus system have reached
their limit. NEO will in fact be a further evolution of Pegasus,
but without as much emphasis on backward compatibility as would have
been the case if we continued to try to make Pegasus, as it stands,
support our new and upcoming needs.
In some cases, modules of the NEO system, or things like improvements
to the general observing session (window manager, etc.), may benefit
selected Pegasus sessions as well.
Here are the major goals of the NEO project, each followed by the
approximate [date] by which we need to meet it. This is
not a time estimate! Just a reflection of what we are being
asked to do. Short, illustrative code sketches for several of these
goals follow the list.
- New data format. [1999 for CFH12K]
Fully and efficiently support
the large data sets produced by our CCD mosaics.
Upgrade image display, conversion, and data analysis tools to
provide, at a minimum, the functionality we currently have with our
single-chip detectors. An image display, conversion tools, and
support for the Multi-Extension FITS (MEF) format are needed today.
- Framework for Megaprime Data Acquisition. [1999-10 for Megaprime Design]
A remote shell daemon running under VxWorks is required to allow
the "Megacam Agent" (co-written with CEA) to be integrated into
the observing session. Other utilities and libraries may need to
be ported to the real-time environment as well.
- Interface with Queue and Automation support. [1999-10 for Queue Design]
Having a unified point for controlling the system with a set of
commands allows easier control from a variety of scripting languages,
and provides a well-defined point of communication with the Queue
Scheduler. Queue may have other special needs from the NEO system,
but the command set is the most important interface. Anything that a
regular observer might also need, like access to weather information
and scripts that facilitate common observing scenarios, will
be part of the NEO project. See the task list for some specific examples.
Queue will play the role of a regular observer and use the same
interfaces whenever possible.
- New Hooks for External systems. [1999-10 for Queue Design]
Archiving, data pre-processing, and data distribution systems
should register interest in new data with NEO.
Whatever the mechanism, a problem with a non-critical system must
not halt observations. This is a feature already present in the
connection between Pegasus and the archive system, for example.
- Remote access to system during observations. [2000-4 for Queue or earlier for Remote Obs Room]
Provide simultaneous views and real-time remote control of
graphical user interfaces and text feedback windows. The effects of
any remote access layer are expected to be: (1) unnoticeable
on the local machine, (2) insignificant when operated from the
remote observing room in Waimea, and (3) at least usable in an
emergency when operated from home, or even abroad. There is
a possibility that the solutions for this could be
implemented with Pegasus sessions.
- Integration with TCS. [2000-4 for Queue Testing]
A single interface will be created as part of NEO to import status
from, and pass requests to, the Telescope Control System. Whether
these requests will actually be capable of moving the telescope
directly, or whether they result in a pop-up on the observing
assistant's console, will be up to the receiving end. At a minimum,
this may just consist of a re-packaging of Pegasus' TCS handler and
telescope offset utility, plus some extra capabilities needed for
Queue. Basic instrument status will be available to TCS for
informational purposes.
- Failsafe operation modes. [2000-4 for Megaprime Testing]
Pegasus provided a mechanism to
"fake" various control systems. This is most often needed during
engineering (when all systems may not yet be functional)
or during an emergency when a problem cannot be solved but
the observer wishes to continue observations, accepting some possible
bad side-effects on their data (bogus or missing FITS keywords, for example).
When possible, a "simulation" mode might be provided in addition to
"fake". The difference would be that viable, simulated results would
be provided instead of bogus/missing results.
- Parallelized command sequencer. [2000-7 deliver to CEA]
Allow pre- and post-
exposure tasks to run in parallel. Also add a class of
"during-exposure" tasks, which the pegasus command sequencer
(called "mama") currently does not have. (It supports
only before and after tasks, and runs them serially.)
DetI has demonstrated the benefit (and indeed, necessity
with infrared observing) of these features.
- Status Server. [Interface def. needed now, server itself by 2000-7]
Improve overall system efficiency and reliability
by replacing passive text file databases ("par" files) with an active
status server. Provide new functionality,
such as data lifetime, deadband,
consumer callbacks, and other event-driven modes. There is a
possibility here to update some of the Pegasus handlers to use the
new status server as well.
- FITS Data Capture Agent. [2000-7 deliver to CEA]
One reason Pegasus cannot parallelize
its "handlers" is that each one must open the FITS file
to directly deposit its own headers. A more dynamic mechanism
for this, and one that doesn't require templates, will not
only be faster, but should be simpler to maintain, especially
when applied to the MEF format.
- Graphical User Interfaces. [2001-5 for Megaprime users]
Web based or remotely accessible graphical user interfaces will
become a larger part of NEO in the later phases. Since they must
communicate with NEO using the same command protocol that users
can use for text-mode operation, the choice of user interface
will not drive the project. We (CEA and CFHT) should count on
operating Megaprime mostly from a command prompt, but a set of
GUIs must eventually be developed for the end users. Currently, this
item involves only a small amount of maintenance on the existing
CFH12K and gecko user interfaces (which are not Pegasus-style).
- Scripting Language. [2001-5 for Megaprime users]
Just as with the GUIs, we (CEA and CFHT) are free to implement
testing and integration scripts using our favorite languages,
but NEO should provide some clean examples in various "recommended"
languages for the end-users. Our own test scripts are not always
ideally suited for this purpose.
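To make some of these interfaces concrete, here are a few rough,
illustrative sketches. None of them is a design; names, ports, file
names, and commands in them are assumptions made for the examples.
First, the MEF requirement in goal 1: a minimal walk through a
Multi-Extension FITS mosaic file, one extension per CCD, using the
astropy.io.fits library purely for illustration (any FITS library
with MEF support would serve).

    # Minimal sketch: walk a Multi-Extension FITS (MEF) mosaic file.
    # "example_mef.fits" is a placeholder name, not a real CFHT product.
    from astropy.io import fits

    with fits.open("example_mef.fits") as hdul:
        # The primary HDU carries mosaic-wide keywords; each extension is one CCD.
        print("primary header cards:", len(hdul[0].header))
        for hdu in hdul[1:]:
            name = hdu.header.get("EXTNAME", "unnamed")
            if hdu.data is not None:
                print(name, hdu.data.shape, hdu.data.dtype)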
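For goal 3, the unified command interface might look something like
the sketch below: a client (the Queue Scheduler, an observer's
script, or a GUI) opens a socket, sends one plain-text command, and
checks a one-line reply. The host, port, command verbs, and reply
format are all assumptions, not a defined NEO protocol.

    # Hypothetical sketch of a plain-text command interface to NEO.
    import socket

    def send_command(command, host="localhost", port=5555, timeout=30.0):
        """Send one command line and return the single-line reply."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall((command + "\n").encode())
            reply = sock.makefile().readline().strip()
        if not reply.startswith("OK"):
            raise RuntimeError("command failed: " + reply)
        return reply

    # Example: the Queue Scheduler (or a person) requests a 300 s exposure.
    # send_command("filter r")
    # send_command("expose 300 object M31-field-3")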
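The "register interest" idea in goal 4 could work roughly as sketched
below: external systems register a command to be run whenever a new
exposure lands on disk. The subscriber names and tools are invented
for the example; the point being illustrated is that a failing or
wedged subscriber is logged and skipped, never allowed to halt
observations.

    # Sketch: notify registered external systems about a new exposure.
    import subprocess

    # Hypothetical subscribers: name -> command to run on each new file.
    subscribers = {
        "archiver": ["archive_new_exposure"],
        "pipeline": ["elixir_ingest", "--quick"],
    }

    def announce_new_exposure(fits_path):
        for name, cmd in subscribers.items():
            try:
                # A short timeout keeps a wedged subscriber from stalling
                # the acquisition loop; failures are logged, not fatal.
                subprocess.run(cmd + [fits_path], timeout=5, check=True)
            except Exception as exc:
                print("warning: notification to", name, "failed:", exc)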
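The parallel sequencing described in goal 8 might be sketched as
follows: pre-exposure tasks run concurrently, "during-exposure" tasks
overlap the integration itself, and post-exposure tasks run after
readout. The task bodies are placeholders; a real sequencer would
also need abort handling and per-task error reporting.

    # Sketch of pre-, during-, and post-exposure task scheduling.
    from concurrent.futures import ThreadPoolExecutor, wait
    import time

    def set_filter():     time.sleep(1)   # placeholder pre-exposure task
    def update_tcs():     time.sleep(1)   # placeholder pre-exposure task
    def guide_monitor():  time.sleep(2)   # placeholder during-exposure task
    def write_headers():  time.sleep(1)   # placeholder post-exposure task

    def run_exposure(seconds):
        with ThreadPoolExecutor() as pool:
            # 1. pre-exposure tasks in parallel (Pegasus runs these serially)
            wait([pool.submit(set_filter), pool.submit(update_tcs)])
            # 2. during-exposure tasks overlap the integration itself
            during = [pool.submit(guide_monitor)]
            time.sleep(seconds)             # stand-in for the actual exposure
            wait(during)
            # 3. post-exposure tasks once readout is done
            wait([pool.submit(write_headers)])

    run_exposure(3)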
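The status-server semantics named in goal 9 (data lifetime, deadband,
consumer callbacks) are illustrated by the small in-process stand-in
below. The real thing would be a network service replacing the
passive "par" files; the class and method names are invented for the
example.

    # In-process stand-in for the status-server behaviour (not a real server).
    import time

    class StatusServer:
        def __init__(self):
            self._values = {}     # name -> (value, expiry time or None)
            self._watchers = {}   # name -> list of (callback, deadband)

        def publish(self, name, value, lifetime=None):
            expires = time.time() + lifetime if lifetime else None
            old = self._values.get(name, (None, None))[0]
            self._values[name] = (value, expires)
            for callback, deadband in self._watchers.get(name, []):
                # Deadband: only call consumers when the change is significant.
                significant = (old is None
                               or not isinstance(value, (int, float))
                               or abs(value - old) >= deadband)
                if significant:
                    callback(name, value)

        def get(self, name):
            value, expires = self._values.get(name, (None, None))
            if expires is not None and time.time() > expires:
                return None       # expired data is treated as missing, not reused
            return value

        def subscribe(self, name, callback, deadband=0.0):
            self._watchers.setdefault(name, []).append((callback, deadband))

    # A consumer asks to hear about dome temperature changes of 0.5 C or more.
    status = StatusServer()
    status.subscribe("dome.temperature", lambda n, v: print(n, "=", v), deadband=0.5)
    status.publish("dome.temperature", 2.1, lifetime=60)   # fires the callback
    status.publish("dome.temperature", 2.2)                # within deadband: silent
    status.publish("dome.temperature", 3.0)                # fires again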
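Goal 10's capture agent could work roughly as sketched below:
handlers never open the FITS file themselves; they hand
keyword/value/comment triples to a single agent, which writes the
header in one pass once readout completes. astropy is used only for
illustration, and the keywords shown are examples.

    # Sketch: handlers deposit header cards with one agent instead of each
    # opening the FITS file (the bottleneck that keeps Pegasus serial).
    from astropy.io import fits
    import numpy as np

    class CaptureAgent:
        def __init__(self):
            self.cards = []          # accumulated (keyword, value, comment)

        def deposit(self, keyword, value, comment=""):
            self.cards.append((keyword, value, comment))

        def write(self, filename, data):
            header = fits.Header()
            for key, value, comment in self.cards:
                header[key] = (value, comment)
            fits.PrimaryHDU(data=data, header=header).writeto(filename,
                                                              overwrite=True)

    # Handlers (filter wheel, TCS agent, detector controller, ...) deposit
    # keywords while the exposure runs; nobody touches the file until the end.
    agent = CaptureAgent()
    agent.deposit("FILTER", "r", "filter in the beam")
    agent.deposit("EXPTIME", 300.0, "integration time (s)")
    agent.write("example.fits", np.zeros((16, 16), dtype=np.int16))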
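Finally, goal 12's end-user examples could be as simple as the script
below, which reuses the hypothetical send_command() helper from the
command-protocol sketch above. The field names, offsets, and command
verbs are invented for the example.

    # A hypothetical end-user observing script: a small dither pattern on a
    # list of fields.  Assumes the send_command() sketch above is available.
    fields = ["M31-field-1", "M31-field-2", "M31-field-3"]

    for field in fields:
        send_command("slew " + field)                   # via the TCS interface
        for step in range(5):                           # 5-point dither
            send_command("offset 0 " + str(10 * step))  # illustrative arcseconds
            send_command("expose 300 object " + field)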
The following table needs a lot of cleaning up, and also some
discussion. Although the detector software is not part of
NEO, remember that the interface that the detector software
uses is. The same is true for instrument control interfaces.
Converting both of these to NEO may in many cases not be
feasible or worth the effort.
[1] - DetI compliant (uses compatible command interface).
Session                 | Instrument | Detector                 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
------------------------+------------+--------------------------+------------------------------------------------
megacam(NEO)            | Megaprime  | Megacam(ma[1])           |
cfh12k(NEO)             |            | CFH12K(12kcom[1])        |
aobir(Pegasus)          | AOB        | KIR(DetI[1])             |
focam??(Pegasus)        | AOB        | ccd??/focam(GIIIV3)      |
isis(Pegasus)           |            | CFHTIR(DetI/12kcom[1])   |
wircam(???)             |            | WIRCAM(???)              |
espadons(Pegasus->mos)  |            | espadons(???)            |
gecko(Pegasus)          | gecko      | ccd(GIIIV3)              |
geckoeev(Pegasus)       | gecko      | EEV(DetI[1])             |
fts(Pegasus)            | FTS        | InSb/InGas(???)          |
bear(Pegasus)           | FTS        | Redeye(GIIIV3)           |
redeye(Pegasus)         | ???        | GIIIV3                   |
mos(Pegasus)            | MOS        | GIIIV3                   |
mosfp(Pegasus)          | MOS        | GIIIV3                   |
mos??(Pegasus)          | MOS-ARGUS  | GIIIV3                   |
osis(Pegasus)           | OSIS       | Gen III                  |
osisr(Pegasus)          | OSIS ?     | GIIIV3                   |
???(Pegasus)            | OSIS ?     | CFHTIR(DetI/12kcom[1])   |
sis(Pegasus)            | SIS        | GIIIV3                   |
(Columns 1-12 have not yet been filled in.)
As of October 1999, we have the following resources
to design and build the NEO software.
We expect software and hardware purchase costs to be minimal
for this project. In many cases we can use the existing tools
purchased on the regular software budget. Some licensed software
may still need to be purchased specially for the project. I
will need help estimating how much this could be. We would also
request that the remote observing console be officially designated
as the NEO development machine when it is not in use by observers
(which, initially, will mostly be the case). For one thing, this
will be essential in meeting goal #5. Once it becomes
critical to operations, we will need to re-think this.
People
Jean-Charles Cuillandre, the project scientist, collects scientific
requirements for the data acquisition system and linked components.
He lays out the requirements for the interfaces that the
"big picture" items need. NEO will define and provide these interfaces to
the other software systems involved in the overall observing process.
Jean-Charles will be implementing some of those external components
(like Elixir) himself. So this is good insurance that everything will
fit together well in the end, and will be based on a consistent philosophy.
He has a strong background in both astronomy and instrumentation,
which makes him an ideal point of contact between the
programmers (Rosemary and myself) and the community that will rely
on our product.
Sidik Isani, the project manager and engineer, has at least
80% of his time to devote to the project (with the other 20%
going to CFH12K and other operational support duties).
Rosemary Alles will be in a similar situation, spending 80%
of her time designing and building the software (the other 20%
is used when her expertise is required for the TCS IV project).
This leaves a total commitment of "1.6 person-months per month."
At this rate, the duration of the project would be approximately
2.5 years (assuming we accomplish the bare minimum listed in the
12 goals above, and the scope of the project does not expand
beyond the clear boundaries outlined above). Meeting the requirements
for supporting Megaprime will likely become quite rushed without
some extra help next year. (The timeline is still under construction,
but should show this soon.) Our goal right now should be to
agree on the scope and the 12 major goals of the project.
Then we'll figure out how we're going to do it.