Remaining Issues

1) There are no mechanical prototypes yet. This primarily affects the optical design, for which we assume a spacing of 500 um between detector edges; for the planned 2.8 mm detector size, that corresponds to a 3.3 mm spacing between detector centers. Dominic is confident that detectors with this pitch can be constructed. With the existing optics design, we then get a spacing of 4.2 arcsec between adjacent beams.

At 100 GHz (3 mm) the GBT, illuminated uniformly out to a 90 m diameter, samples spatial frequencies up to 1/(6.88 arcsec), requiring a sample every 3.44 arcsec. By scanning at 26.6 degrees to the array edge, a single-pass sampling of 0.671 * 4.2 arcsec = 2.8 arcsec is achieved, leaving a comfortable margin. If the spacing turns out larger than expected, multiple passes can be used to obtain full sampling. In almost all applications cross-linked scans will be used anyway, and that will also help. Also, the effective Nyquist requirement depends on the source spectrum and will always be less stringent than 3.44 arcsec, since it is the band center and not the top edge which is relevant. Therefore we have this base well covered. [note: see Krauss section 6-9 for a derivation of the spatial frequency, D/lambda bit]

2) Bandpass/Detector Parameters: The optics plan calls for an IR blocking filter at the optics box entrance; a low-pass filter at the Lyot stop; and a bandpass filter over the array. The band passes are < 350 GHz, < 150 GHz, and X GHz respectively. Using Simon's spreadsheet I have varied X and calculated the loading at tau=0.15, ZA=65 degrees:

    X (GHz)    total load
    -------    ----------
    81-99       8.08 pW
    83-97       6.28 pW
    84-96       5.38 pW

I think a 12 GHz bandpass (the "middle" filter in the NSF proposal), together with a > 8 pW saturation power-- the number we agreed on in December-- is the way to go. This will also allow for a 14 GHz filter, and an 18 GHz filter down to ZA=45 degrees at tau=0.15, with a 30% margin.
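The sampling arithmetic above can be checked in a few lines of Python. This is just a sketch of the numbers as quoted: the 0.671 projection factor for the 26.6 degree scan is taken from the text as given, not rederived from the scan geometry.

```python
import math

# Finest angular scale passed by a 90 m aperture, uniformly
# illuminated, at 100 GHz (lambda = 3 mm): one cycle of the
# highest spatial frequency spans (lambda/D) radians.
D = 90.0              # aperture diameter, m
lam = 3.0e-3          # wavelength, m
ARCSEC_PER_RAD = 206265.0

cycle = ARCSEC_PER_RAD * lam / D    # arcsec per cycle, ~6.88
nyquist = cycle / 2.0               # required sample spacing, ~3.44 arcsec

print(f"cycle   = {cycle:.2f} arcsec")
print(f"nyquist = {nyquist:.2f} arcsec")

# Single-pass sampling when scanning at 26.6 degrees to the array
# edge, using the 0.671 projection factor quoted in the text.
beam_spacing = 4.2                   # arcsec between adjacent beams
single_pass = 0.671 * beam_spacing   # ~2.82 arcsec
print(f"single-pass sampling = {single_pass:.2f} arcsec")
assert single_pass < nyquist         # comfortable margin
```

Running this reproduces the 6.88 / 3.44 / 2.8 arcsec figures in the text.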
18 vs 12 GHz is only a 22% penalty in sensitivity, but gives nearly a factor of two larger headroom on the loading, and a significantly larger fraction of time over which the instrument will be operable (which will to some extent compensate for the reduced bandwidth). The lower nominal loading will also enable us to operate at a higher bias, which increases the linearity and stability and decreases the effective time constant.

It is actually not possible to do this optimization properly without more knowledge of how they tune detector parameters; i.e., increasing the saturation-power spec probably increases the phonon noise in the detectors, and making the detectors faster does the same. I think the above is a safe route which gives good performance and control of systematics, and the increase in operational efficiency is likely to get science of any type out substantially more quickly (commissioning, not integration time, is the bottleneck here). This is an important consideration given that there is much competition from all sides (APEX for SZ; LMT for high-z galaxies; etc.).

In December we "decided" on a 5 msec time-constant spec to allow faster scanning (for wide-area surveys, and to improve atmosphere suppression when imaging extended sources). Previously we had been working to 20 msec. As a compromise between these, and to yield somewhat better noise performance, I suggest a 10 msec time-constant target.

Final "target" detector parameters: 10 msec time constant; >= 8 pW saturation power; detector noise < 1.2e-5 pW rtsec.

Caveat: I calculated assuming the "bandpass" configuration. Is it the case that if the filter wheel is funded, it is a trivial reconfiguration to put the 2nd IR blocker over the array, and the bandpass filter(s) on the wheel at the Lyot stop? I think it is, but we should note this explicitly. If it is not, I'll go back and redo the numbers for the "lowpass" case.
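The 22% figure follows from sensitivity scaling roughly as 1/sqrt(bandwidth) in the background-limited case; the headroom comparison just sets the table loads against the 8 pW saturation power. A quick check (the scaling law and the saturation-power comparison are my reading of the argument, not a detector-model calculation):

```python
import math

# Narrowing the band from 18 GHz to 12 GHz costs in sensitivity:
penalty = math.sqrt(18.0 / 12.0) - 1.0   # ~0.22, i.e. 22%
print(f"sensitivity penalty: {penalty:.0%}")

# Loading headroom against the 8 pW saturation power, using the
# tau=0.15, ZA=65 deg loads from the table above.
p_sat = 8.0                               # pW, December spec
loads = {18: 8.08, 14: 6.28, 12: 5.38}    # bandwidth (GHz) -> load (pW)
for bw, load in sorted(loads.items()):
    print(f"{bw:2d} GHz band: load {load:.2f} pW, "
          f"headroom {p_sat - load:+.2f} pW")
```

Note the 18 GHz band already exceeds the 8 pW saturation power at this zenith angle, which is the point of the headroom argument.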