GBT Commissioning and Operations Meeting
Friday, 12 September 2003

AGENDA
1. Az Track Status -- Bob A.
2. PTCS results -- Richard
3. Observing news -- Ron
4. Spectrometer status -- Rich
5. Spectral Baseline, Front-end, and IF work -- Roger
6. Software status -- Nicole
7. Schedule -- Carl
8. Project scheduling -- John
9. Any other business

PRELIMINARY REPORTS

1. Az Track Status

Azimuth track:
* Splices 25 and 2 had deteriorated significantly in the past month. These were among the splices shimmed with zinc, and the zinc had been working its way out of the joints. They were reshimmed with teflon/bronze material this week.
* Splice 45 continues to perform steadily - no deterioration.
* Dennis continues with material research.
* Simpson, Gumpertz, and Heger are working on Phase 2 of the FEA.

Structural Repairs:
* We have obtained the necessary materials and equipment for the repairs. Equipment is being staged and checked out.
* We have developed a plan for repairing the access platform and will begin that work as soon as practical.
* We have received the report from the ultrasonic inspection and are awaiting an assessment from a structural fatigue engineer. We hope to have that this week.
-- RA

2. PTCS results

A "North Celestial Pole Astronomy/Trilateration" experiment was successfully completed on Friday 5th/Saturday 6th September. 11 of 12 ground rangefinders were available and trilaterated on various targets on the structure while repeat pointing/focus astronomical measurements were made and temperature sensor data were logged. The astronomical and temperature data are excellent and show strong correlations. The rangefinder results are still being processed. More details of the temperature results will be presented at the meeting. The improved weather and flexible scheduling in place through the end of August/start of September have drastically improved the productivity of our commissioning runs.
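The reported correlation between the logged temperature-sensor data and the repeat pointing measurements can be quantified with a simple correlation coefficient. The sketch below is purely illustrative: the sensor values, offsets, and function name are invented here, no GBT software interfaces are assumed, and only numpy is used.

```python
import numpy as np

def pointing_temperature_correlation(temps, offsets):
    """Pearson correlation between a structure temperature series and
    the measured pointing offsets, sampled at the same scan times."""
    temps = np.asarray(temps, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    return np.corrcoef(temps, offsets)[0, 1]

# Illustrative data only: a linear thermal pointing drift plus noise.
rng = np.random.default_rng(0)
t = np.linspace(10.0, 25.0, 50)                    # structure temps [C]
az_offset = 0.8 * (t - t.mean()) + rng.normal(0.0, 0.5, t.size)  # [arcsec]
r = pointing_temperature_correlation(t, az_offset)
print(f"correlation: {r:.2f}")
```

A coefficient near 1 (or -1) like the one printed here is the kind of "strong correlation" the report refers to; against real data the same calculation would be run per temperature sensor.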
As well as the temperature results, we now have good measurements of the residual elevation-dependent pointing and focus corrections. It will take some time to complete the analysis of these and implement the corrections in the antenna manager. The core python pointing/focus scripts are now working well, although a variety of items need to be tidied up. These should be ready for release at the end of this cycle.
-- RMP

3. Observing news

Over the last week we serviced five projects and had two nights with shaky starts. One occurred after the power outage on Wednesday; Toney produced a summary of what happened and some suggestions for the future. According to Toney and the operators' logs, most of the system was up an hour after power was restored. A disk-mounting/network problem added two hours to the downtime. Another two hours were lost when the active surface required a manual restart by Jason and J.D. (Unfortunately, Tim Robishaw opted out of a fine dinner at Snowshoe to witness these delays.)

Monday night's observing was frustrated by the observers trying a frequency well outside the nominal band of the 8-10 GHz receiver, plus a few software issues that Toney summarized in his report. These include insufficient functionality in IARDS for the astronomer's type of observing, and some issues with the recent patch for checking the location of FITS files when using the Active Surface. Toney also reports that a bank of the Spectral Processor has been dead for some time but had not been fully reported before. Even though the Spectral Processor was in moderately heavy use, I counted only four times when it had to be restarted. There was one instance when the observers had problems with the antenna Az/El velocities. Once the operators had difficulty with SafeHold errors when control was being passed to the M&C system.
-- RJM

4. Spectrometer status

- The first draft of the pulsar programming requirements was reviewed with M&C. As a result of the meeting, a few additions and modifications are required. M&C plans to review the document more carefully and provide further feedback.
- Attempts to test the oversampling spigot mode, which will give 400 MHz bandwidth instead of 800 MHz, were foiled by software and network problems during the one day available for this testing this week.
- New cables for the Spectral Processor were received. The good news is that all handshaking lines and 15 out of 16 data lines are working - that is also the bad news! We will attempt to repair the cables on Monday, when technician time is available.
- Work continued on fixing the two LTA boards that had not passed self-test after being updated to 32 MHz. The boards now pass self-test on the bench, but still produce DMA errors in self-test in the system.
- Work also continued on preparing for the LTA redesign: studying and analyzing the old design in depth, writing down what needs to be changed and improved in the new design, and searching for new parts to replace the old ones.

Next week:
1. continue working on fixing the two LTA boards,
2. continue preparing for the LTA redesign, and take one day in CV to discuss the LTA redesign issues with spectrometer designer Ray,
3. update the pulsar programming requirements per feedback from M&C and O'Neil,
4. set up to test the high-speed sampler modifications,
5. repair and test the new Spectral Processor cables,
6. test the spigot modes.
--Rich and Holly

5. Spectral Baseline, Front-end, and IF work

Five of the ripple-compensated fiber modulators are now in service. The remaining units will be sent to the fiber splicer next week.

The K-band receiver upgrade proceeds. The beam 3 vacuum window rework was successful, eliminating frequency regions of high loss. The receiver has just gotten cold again so that testing can resume.
Work on the new Ka-band 1cm receiver, preparation of the Q-band receiver for this winter's installation, and reconstruction of the feed defrost system continue. For the 1cm receiver, encouraging results are being seen on the frequency multiplexer for continuum detection and on the band-select filters for the associated frequency converter.
--RDN 9/12/03

6. Software status

Single Dish Development IPT #50 - Friday, September 12, 2003

This week ends Week 4 of the 6-week development cycle, which is the 7th cycle of 2003. The Plan of Record for the current development cycle is available from the Project Office web site at http://tryllium.gb.nrao.edu/docs/POR/POR_Sept03.pdf.

This was a very quiet week; although continuing progress was made on this cycle's deliverables, few items were completed. Several adjustments were made this week regarding deliverables required for short-term PTCS goals. For the remainder of this cycle, EMS work will focus on resolving memory-leak issues in preparation for the PTCS experiments on 9/19, after which upgrades for the quadrant detector will be scoped out.

Configuration work continues on schedule; Frank and Melinda have conducted many tests with support from Ray and Paul, which will continue next week. Keep in mind this is only the first of three development cycles dedicated to the configuration task; during the next cycle our focus will shift to actively iterating on this functionality as driven by observer support.

The first draft of an interactive memo detailing a prototype for a unified data representation for continuum and spectral-line data was circulated to a limited audience this week and met with a wide spectrum of comments, both good and bad, from astronomers. Over the remaining time in the cycle we will examine the comments in light of our goals to determine the best approach for sharing these developments with the rest of the community upon completion of this cycle.
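The memo itself is not reproduced here, but one common way to unify continuum and spectral-line data is to treat a continuum sample as the one-channel degenerate case of a spectrum, so that both share a single container and reduction path. The sketch below is purely illustrative: every class and field name is invented for this example and is not taken from the memo.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Scan:
    """One scan of backend data. A continuum scan is simply the
    degenerate case of a spectral-line scan with a single channel."""
    source: str
    mjd: float  # time stamp of the scan start
    # data shape: (n_integrations, n_channels); n_channels == 1 => continuum
    data: np.ndarray = field(default_factory=lambda: np.zeros((1, 1)))

    @property
    def is_continuum(self) -> bool:
        return self.data.shape[1] == 1

    def total_power(self) -> np.ndarray:
        """Channel-averaged power per integration; the same code path
        serves continuum and spectral-line scans."""
        return self.data.mean(axis=1)

# A 3-integration continuum scan and a 3x1024-channel line scan use the
# same representation and the same reduction code.
cont = Scan("3C286", 52894.5, np.ones((3, 1)))
line = Scan("3C286", 52894.6, np.ones((3, 1024)))
print(cont.is_continuum, line.is_continuum)  # True False
```

The appeal of this kind of design is that calibration and averaging routines need no continuum/line branching; whether it matches the circulated prototype would have to be checked against the memo.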
A meeting was held on Thursday to discuss a requirements document written by Rich Lacasse regarding software needs for the pulsar spigot card and spectrometer pulsar modes. The information is sufficient for software personnel to begin work on these items once Mark and Amy have identified a level of effort for each of the two independent tasks (spigot card and new modes), and as the project planning schedule permits.

Operational support this week included participation in restoring the systems after the power shutdown on Wednesday. A patch was also added so that newly launched message windows reflect messages which have not yet been cleared; up until 9/9, new message windows were blank upon startup.
-NMR

7. Schedule

Last Week
========
Observations for: GBT02C-056, GBT02A-031, BP107, GBT01A-007, GBT02C-046, GBT03A-014, GBT02A-002
Completed: BP107, GBT02C-056

September
=========
Scheduled hours [backup]
Astronomy ~ 285 [69]
Maintenance ~ 174 [24]
Tests & Comm ~ 262 [29]

October
=======
Scheduled hours [backup]
Astronomy ~ 225 [148]
Maintenance ~ 124
Tests & Comm ~ 395 [29]
-- RCB

8. Project scheduling

September 8th Planning Meeting Minutes, 1:30 P.M.

0) Observer comments
None available on the Web.
Langston's email comments:

Vicki Kaspi and Scott Ransom's target of opportunity AGBT02C_056
Summary: Observed strong pulses from 1713+0747, J1818-1422, J1803-2137, J1801-2304. Startup was troubled.

Date: 4:30pm Sept 2 - 07:00am Sept 3
Project: 2C10
Observer: Robishaw
Support: K. O'Neil
Problems Summary:
(1) Switching Signal Selector Failure - 40 minutes lost
(2) Intermittent CLEO/M&C failure (see below) - 50 minutes lost
(3) Spectral Processor Failure - 13 minutes lost
(4) GO 'freezing' after SP failure - 10 minutes lost
(5) Sadira failure - 2 1/2 hours lost
See details at the end.
1) This week's schedule
Power outage Wednesday.

2) Next week's schedule
Nothing of note.

3) September Observing Schedule discussions
Tentatively plan overnight shutdowns for elevation axle repairs the week of Sept 22.

4) October Observing Schedule discussions
None.

5) GBT development planning
No activity.

6) AOB
None.

Observing Summaries:

GBT Observing Summary: Vicki Kaspi and Scott Ransom's target of opportunity AGBT02C_056
Observed strong pulses from 1713+0747, J1818-1422, J1803-2137, J1801-2304. Startup was troubled.

Details
=======
Used the standard setup following the web writeup. However, the antenna would not move, managers crashed, and GO would not start. Stopped and started GO and CLEO several times. Eventually found that the Antenna had been left in simulate, so that we could not get control. After switching to the real antenna, we still could not get observing going. Shut down CLEO and restarted it. Found that the IF-RACK configuration had changed from S-band to Noise Source (similar to Karen's previous observing problem). Some managers appeared to be stopping and starting all by themselves. Not sure what was going on, but eventually all cleared. Got help from Paul and Melinda on using glish -l fixGoHang.g, and then GO would start. Got on source after an hour of clicking. This type of software manager fumbling is confusing to the observers. Startup manager reliability is better than in the past, but could still be improved.

PSRGUI
======
Frank got the cabling changed to match the PSRGUI, so that after all managers were running, the PSRGUI correctly set the frequencies without requiring expert knowledge. (Updated the web page http://wwwlocal.gb.nrao.edu/GBT/setups/bcpm_observe.html to document the LO2 frequency requirements for the BCPM.)

BCPM
====
Don Backer indicated he has an idea how to fix the 1.4 MHz channel mode of the BCPM. He may do this after the Jansky Lecture. The BCPM stopped twice last night during the observations. Greg Monk called S. Ransom, who just requested restarting the scans on SGR1806. (No BCPM manager reset was required.)

Date: 4:30pm Sept 2 - 07:00am Sept 3
Project: 2C10
Observer: Robishaw
Support: K. O'Neil
Problems Summary:
(1) Switching Signal Selector Failure - 40 minutes lost
(2) Intermittent CLEO/M&C failure (see below) - 50 minutes lost
(3) Spectral Processor Failure - 13 minutes lost
(4) GO 'freezing' after SP failure - 10 minutes lost
(5) Sadira failure - 2 1/2 hours lost

Observing Summary:
Last night's start-up was hampered by a series of failures, resulting in data not being taken consistently until 3 1/2 hours after the project set-up began. Unfortunately, halfway through the night sadira failed, resulting in another 2 1/2 hours of observing lost. A description of the failures is below; anyone interested is encouraged to also take a look at the operator's logs.

Problem Descriptions:

(1) Switching Signal Selector Failure - 40 minutes lost
The M&C system lost its ability to communicate with the switching signal selector.

(2) Intermittent CLEO/M&C failure - 50 minutes lost
Once the switching signal selector was back up and running, I began setting the system up again for Robishaw's observations. At some random point during my set-up, a number of parameters would 'reset themselves'. This included the L-band receiver being taken out of the scan coordinator and LO1 and replaced by the Noise Source, and the balance-enabled and laser-power-on buttons in the IF rack (leading to the optical receivers) switching from the even-numbered optical drivers (where we were setting them) to the odd-numbered drivers. This changeover repeated itself six times, until about 6pm. Joe Brandt was called at that point, but since the switchover did not occur again he could not track it.
As only trobisha and gbtops were in the gateway, Joe's only suggestion as to how this could have happened is that someone was running a glish script to change the system (possibly thinking no observing was happening until after 6pm?). Since glish scripts can bypass the gateway, this could have caused our problem. Regardless of whether a glish script did cause the problem, the fact that the gateway can be so easily bypassed shows a serious hole in observing security.

(3) Spectral Processor Failure - 13 minutes lost
The spectral processor had its 'usual' failure only once during the entire night.

(4) GO 'freezing' after SP failure - 10 minutes lost
After the spectral processor failure, GO froze in the middle of the next operation. It was killed and restarted and had no problems after that.

(5) Sadira failure - 2 1/2 hours lost
Sadira failed at around 00:30am. Chris Clark and Wolfgang were called in to fix the problem. I was not present, so the operators log, Chris, or Wolfgang should be consulted for more details.
-- JF

9. Any other business