17nov03 bsm attendance:
NRAO- Brian Mason, Bill Cotton (Thurs), Don Wells (Thurs)
Penn- Simon Dicker, Michelle Caler, Mark Supanich (Fri)
GSFC- Dominic Benford, Harvey Moseley, Rick Arendt, Dale Fixsen, Rick Shafer, Joshua Forgione, Troy Ames

Notes are in my black hardcover GBT-Dev II notebook (pp 108-119). High points below.

Thursday- Focus on Data Analysis
================================

Dale gave a quick summary of the Fixsen et al procedure & formalism. In subsequent discussion Harvey indicated we should expect about half the noise to come from the detectors and half from the sky (i.e., the latter term is photon fluctuations in the signal of interest). The detector contribution is approximately white.

Rick Arendt gave an overview of imaging software packages/implementations. We have:
* L'ESCARGOT: Arendt/Fixsen/Moseley-- details below (IDL)
* SHARC-SOLVE: Fixsen-ish implementation by Darren Dowell (Java? C?)
* CRUSH: Attila Kovacs' empirical/iterative Java package
* (): Rick Shafer has an iterative fit that operates in the Fourier domain
* (): Bill Cotton's iterative C program

L'Escargot comes in several flavors: IRAC (SIRTF), HST/NICMOS, SHARC. These differ in what they fit for, as well as in the frontends used to ingest the data. SHARC currently fits for a pixel-based gain term that is constant for each pixel over one scan, atmosphere (an offset that is multiplied by the gain), and pixel-based offsets. The atmosphere is currently only a piston. Rick Shafer noted that he sees fluctuations in the term identified with atmosphere all the way out to *14 Hz*, and that its power spectrum is 1/f^2 in variance, i.e., in power. This is puzzling. Ionizing cosmic rays (>5 keV) are expected at about 1/cm^2 per 10-60 sec.

L'Escargot is currently run on one SHARC-II scan at a time (~10 minutes of data: 384 pixels @ 20-30 Hz); this takes ~1 hr on an 800 MHz Linux PC. The RAM requirement is 10-15x the size of the raw data. Most of the time is spent in the Z_(n+1) = Z_0 + T (T^T Z_n) loop, which determines the instrument parameters-- the sky follows in one step from this-- though you can wrap the whole thing into one big loop. This is the bit coded in C (eqns 15 & 16); it runs 10x faster in C than in IDL. Currently none of the covariances are written out, although you can select an individual pixel and ask for the N_pixels - 1 entries of the covariance matrix associated with that pixel.

Rick Shafer gave an overview of his Fourier fitting methods, and Bill Cotton similarly summarized his package.

Most of the SHARC-II maps are currently more or less uncalibrated, with brightness temperatures determined from a hot/cold-load based V/K gain.

We discussed the fact that the noise level due to sky loading only increases as the square root of the loading (as opposed to proportionally to the loading for coherent systems); thus we can expect the Penn Array on the GBT to be usable under a fairly wide range of conditions compared to coherent radio systems.

Friday- DAQ/Electronics focus
=======================================================

MUX Demo: muxclock.py needs the line period set to Max=32 in order to set up the clocks. This value is a divider used to construct the LSYNC clock from the master clock (aka CLK), i.e., LSYNC occurs at 50 MHz / this number-- it is the rate at which rows are read out in a given column (I think). The FRAME pulse marks a batch of row readings (and this is what the FRAME bit in the PCI data stream indicates); "batch" is defined by the address card lookup table settings.
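To fix ideas, a minimal sketch of the clock arithmetic as I understand it. The 50 MHz master clock and the Max=32 divider come from the demo above; ROWS_PER_FRAME is an assumed placeholder (in reality it is whatever the address card lookup table defines).

    # Toy sketch of the MUX clock relationships described above (illustrative only).
    # LSYNC is derived from the 50 MHz master clock (CLK) by the line-period divider;
    # one FRAME pulse is issued per complete pass through the address-card row table.

    MASTER_CLK_HZ  = 50e6   # master clock, aka CLK
    LINE_PERIOD    = 32     # divider set in muxclock.py ("Max=32")
    ROWS_PER_FRAME = 32     # assumed placeholder; really set by the address card lookup table

    lsync_hz = MASTER_CLK_HZ / LINE_PERIOD    # row readout rate within a column
    frame_hz = lsync_hz / ROWS_PER_FRAME      # FRAME (complete row-batch) rate

    print("LSYNC = %.1f kHz, FRAME = %.2f kHz" % (lsync_hz / 1e3, frame_hz / 1e3))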
The blinking yellow "frame run" light is meant to indicate whether this clock is running.

Then set up muxcontrol.py-- one instance of the program for each DFB (column). The card addresses correspond to DIP switch settings on the (DFB) card. We use this to set up the triangle wave generation.

Then set up pcimux control (.py?) to configure dumping/readout of the data. Each line dumps 4 x 32-bit words; the top 14 bits are the value sent to the DAC (I think this is the triangle wave). I don't remember how we set this up, or why there were *four* 32-bit words per line. (See the toy unpacking sketch below.)

As I understand it: one DFB controls one group ("column", i.e., group) of detectors. There is a table (constructed by configuring the address card) which determines the order in which the rows are read out-- this is the same for every DFB card (since the address lines are common). At every row change there is an LSYNC pulse. Once this table has been completely executed, a new FRAME pulse is generated. We don't know the order of the different channels (columns, and the interface card) in the PCI data stream-- at least, this is what I think we don't know.

Joshua Forgione at GSFC is working on a full overhaul of the firmware; this will be v3.0 instead of the v2.1 we've been working with, i.e., Josh has been specifying v2.1 in answer to questions so far. Do not confuse this with Mark II vs Mark III *hardware* (MUXing electronics); Mark III.5 refers to Mark III hardware with v3.0 firmware. The v3 firmware will also have:
* proper timestamping (each frame will be stamped with the master clock count since the last reset), so you can better determine if frames were dropped and which they were-- of course this isn't an absolute timestamp;
* further levels of coaddition in the firmware, so the PCI bus data rate isn't quite so high;
* a diagnostic "full raw data" dump mode;
* a more flexible test signal generator (currently just a triangle wave);
* a better, more user-controllable feedback algorithm (rather than the current proportional-integral algorithm).

We discussed briefly what you do with the data. For each frame, for each row & column, you get *two* data: the DAC value (14 bits) and the ADC value (16 bits). If, as we hope, the instrument (photon + detector) noise is significantly greater than the electronics noise, you can just use the DAC value; if not, you must take the error into account. If slew rates have been pushed and you are beyond the linear part of the SQUID responsivity, corrections will be necessary; these should probably be done before further coaddition. We need a spec for the maximum slew rate on the Penn Array.

To build v3 forward-compatibility into our current DAQ electronics we need to add some RAM to the DFB cards (I think there is already space for this)-- we should do this.

We noted that provided the PCI data can be related to the computer clock to within 0.1 sec or so, the GB 1pps signal could be embedded in the timestream via the interface card (+/-3 V analog inputs) to determine the absolute time to better than a millisecond. The CAL on/off signal could be similarly embedded.

As noted below, we may still want realtime Linux to reliably read out the PCI card, depending on how big the PCI buffers are and thus how often an interrupt must be serviced.

IRC
---
IRC cannot currently read out Mark III (v2.1 firmware) MUX data, the stumbling block being the PCI interface. Since the command interface is serial, IRC can however configure and command Mark III v2.1. With an understood PCI interface, Troy estimates MUX readout is between 1 and 5 days' work.
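Backing up to the pcimux data format for a moment, a toy sketch of the kind of unpacking we would do. The only thing taken from the demo is that each line dumps four 32-bit words with the 14-bit DAC value in the top bits; the exact field positions (and where the FRAME bit lives) are assumptions to be checked against Josh's PCI interface doc.

    # Toy sketch of unpacking one pcimux line of four 32-bit words (illustrative only).
    # Assumes the 14-bit DAC value occupies the top 14 bits (bits 31..18) of each word;
    # which word carries what, and where the FRAME bit sits, still needs to be confirmed.

    def unpack_dac(word):
        """Return the 14-bit DAC value assumed to sit in bits 31..18 of a 32-bit word."""
        return (word >> 18) & 0x3FFF

    def unpack_line(words):
        """words: the four 32-bit integers dumped per line; returns the four DAC fields."""
        assert len(words) == 4
        return [unpack_dac(w) for w in words]

    # made-up example words, just to exercise the bit arithmetic
    print(unpack_line([0xFFFC0000, 0x00040000, 0x12345678, 0x00000000]))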
IRC needs a DMA port driver-wrapper (in addition to your device-level Debian or Red Hat driver), and this doesn't exist, so that's most of what needs to be written. IRC has worked under every standard flavor of Linux that has been tried (including embedded versions), but Debian hasn't yet been specifically tried.

Frame-dropping will happen if you can't read out the PCI card quickly enough, and *this* is where you potentially want RT Linux. With v3 firmware we will know which frame has been dropped; with v2.1 we would only know by counting that a frame had been dropped, not which one. The Java VM has read out and coadded 4 columns at 100 kHz each with no problem.

Drivers for IRC development are HAWC (SOFIA/pop-ups); SAFIRE (SOFIA/MUXed TESs); the Earth science flotilla project; next-generation SOFIA instruments; and GBT commissioning.

Antenna control: for SHARC-2, IRC is capable of commanding the antenna by writing commands to the antenna control server; it is also capable of ignoring the antenna, which the user then controls via a UIP client that connects to the antenna control server. Similar approaches could be adopted for the GBT: either the user controls the antenna via GO and the system doesn't know the receiver is there (or there is a simple manager wrapped around the receiver/IRC which does little if anything), or the antenna is operated directly from IRC through an interface GB specifies. No specific conclusions on this front were reached.

Operational procedures: the experts are Johannes & Dominic, as well as Troy's group (who wrote the lock-up scripts for the Herschel-SPIRE prototype).

The IRC group tentatively plans to get IRC working with the MkIII (v3) MUX. The next version of IRC will be approximately open-source. IRC is quite scriptable; its scripting is done in Python via the Bean Scripting Framework.

There was some discussion of how we do housekeeping tasks during early commissioning. IRC doesn't currently have a GPIB driver, but Troy thinks this would be easy; given that, UPenn could translate all its housekeeping procedures (e.g. fridge cycling) into Python. In summary there are 3 IRC interfaces: housekeeping, DAQ, and antenna control.

To Do
=====
- Get GSFC IDL software
- Share our sims / get them piped into L'Escargot & Rick Shafer's stuff
- Get Rick Shafer's Fourier fitting writeup
- Summarize instrument card interface questions for NIST
- Josh: update PCI card interface doc --> CHECK
- Brian: formulate/follow up on data format question (I think it is: what order are the columns in?) --> CHECK

Penn/HW
- Add CAL signal to interface card input
- Pipe GB 1pps to interface card input
- Solder extra RAM into DFB cards for v3 firmware forward-compatibility

To Do- later/low priority
=====
- Work out interpretation/linearization of DAC versus ADC data (see the toy sketch after this list).
- Dale's 1/f / scan pattern bit
- Estimate maximum slew rate, which sets a lower limit on the SQUID feedback loop bandwidth.
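Toy sketch for the DAC-versus-ADC item above, just to fix ideas. The ADC-to-DAC conversion factor is a made-up placeholder; determining it (and any linearization of the SQUID responsivity) is exactly the open work.

    # Toy sketch for the "interpretation/linearization of DAC versus ADC data" item (illustrative only).
    # Per frame, per row & column, we get a 14-bit DAC (feedback) value and a 16-bit ADC (error) value.
    # If the instrument (photon + detector) noise dominates the electronics noise, the DAC value alone
    # is a fine signal estimate; otherwise the residual error must be folded back in. ADC_TO_DAC is a
    # made-up placeholder for the conversion factor we still need to work out.

    ADC_TO_DAC = 0.01   # placeholder: DAC counts per ADC error count (to be measured/derived)

    def estimate_signal(dac_value, adc_error, use_error=True):
        """Estimate the detector signal (in DAC counts) from one (DAC, ADC) sample pair."""
        if not use_error:
            return float(dac_value)                  # instrument-noise-dominated case
        return dac_value + ADC_TO_DAC * adc_error    # fold the loop error back in

    print(estimate_signal(8192, 120), estimate_signal(8192, 120, use_error=False))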