A nice, sunny day.
PRS:
- Reconnected prst to the network. I think what did it was unplugging it to test the modem in my laptop and then plugging it back in, so waiting for the modem reboot script to time out would probably also have worked. At some point ran into issues with hostapd, but on further investigation I think those are just because the hostapd config file needs to be updated. Also at some point did a "ddn" without a "dup"; did the "dup" later in the afternoon.
- Attached TC logger to prsr port 5 (not port 6 as in dcsr, since it wasn't available). Changed config on git and pulled to the dsm.
- Took soil sample.
SH:
- Installed the CSAT3A (old style) sent by Chris yesterday (thanks, Chris!). Modified the prototype "gold" cassette to accommodate the different head style with some cool machining and yet another trip to Ace Hardware, but still needed to shim with washers. This was the last ISFS sensor to be installed! Sebastian brought another sonic of the same type (thanks, Sebastian!) that we'll hold on to as a spare until Campbell repairs ours.
- NEED TO REPLACE CSAT boom mounting screw with one a bit shorter (0.5" 1/4-20). Actually, the long bolt as well (2.5" 3/8).
- Took soil sample.
Other:
- Took delivery of 2 more HRXLs that UU just purchased. These have integral, potted, cables attached. We have one prepared set of cabling. Figured out how to connect things, so will install this at DCS tomorrow.
- Sometime midday the 5G modem in the window of the base trailer got too hot in the sun and shut off, so temporarily moved it down to the bench, out of the sun, but that means the internet is worse. We should think about a better solution to this.
Started chilly, then nice, then windy and cold, then a nice evening.
Dan has started PTO – Thanks for everything!!
PRS:
- lowered TT
- reseated connector to Rlw.xx.7m and got it going
- dressed cables up TT and closed off spiral cable guides
- raised TT (finally!), up about 10AM
- removed Sebastian's darkhorse
CC:
- unplugged and plugged in USB serial dongle, then restarted NIDAS. Got HRXL going
- did a crude compass boom angle shoot of 237deg magnetic (248 true) looking into the boom.
- However, something that we did stopped cc from being on the net. Probably have to reboot it. Might wait until we go out to install a thermocouple.
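The magnetic-to-true conversion used in the boom shoot above can be scripted; a minimal sketch, assuming the roughly 11 deg east declination implied by the 237/248 pair (inferred from this log, not independently verified):

```shell
# Magnetic -> true bearing: add the local east declination and wrap to
# [0, 360). The default 11 deg is inferred from the log's 237 -> 248.
magnetic_to_true() {
    awk -v m="$1" -v d="${2:-11}" \
        'BEGIN { printf "%d\n", ((m + d) % 360 + 360) % 360 }'
}
```

e.g. `magnetic_to_true 237` prints 248.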
DCS:
- replaced TRH housings at 1m, 3m (they didn't have fans!). Now both are working
- reconfigured mote IDs on Rsw.xx.2m and now see data
- lowered TT
- replaced TRH housing at 17m that was working (so data would have been fine; it had just been reading Rfan too high). The replacement reads a normal Rfan.
- installed thermocouples at all 3 levels
- closed off spiral cable guides
- raised TT
- connected TC_LOGGER to dcsr, port 6 and changed config (but didn't get data until I also changed the baudrate to 115200 – different than the RAD_LOGGER!)
- installed thermocouple connectors at 1, 2, 3m
- installed an actual thermocouple at 3m
- discovered that AC power was off (when Utah started plugging lots of things in). It had been down since about noon 2 days ago. Turned out to be a loose plug at the power panel. Reconnected it, but don't get a solid "click" when turning it to lock, so we'll have to keep an eye on this.
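For reference, the baud rate that had to change to 115200 for the TC_LOGGER lives on the sensor entry in the NIDAS XML config. A hypothetical fragment (the ID, device name, and other attributes here are illustrative, not copied from the project's actual file):

```xml
<!-- Hypothetical NIDAS entry for the thermocouple logger; only the
     baud attribute reflects the actual fix described above. -->
<serialSensor ID="TC_LOGGER" devicename="/dev/ttyS6"
              baud="115200" parity="none" databits="8" stopbits="1">
</serialSensor>
```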
SH:
- asked Chris to FedEx an early CSAT3A to us to replace the one that has been removed. It will need a special mount.
UP:
- this snow depth sensor appears to have fixed itself, with no action by us.
TODOs:
- replace a sonic at SH (tomorrow AM)
- get cc back on the net
- connect TC logger at prs
- sprinkle thermocouples on satellite sites
A bit of fog in the morning, along with some drizzle that cleared later in the day. I saw a rainbow.
The push today was to get the supersites finished, since today was supposed to be the first day of ops.
DCS:
- wired fan power for NR01 (had been missed before)
- secured thermocouple connectors
- (radiometers were cleaned by Sebastian yesterday)
- raised tower (but without thermocouples themselves)
PRS:
- leveled 0.5m radiometers
- installed the rest of the 2m radiometers and dressed cables
- installed 7m and 32m radiometers
- Sebastian cleaned all radiometers
- Sebastian created a platform for his lidar
- replaced prototype cassette mount with final design on one sonic, to better fit the thermocouple mount
- found that the 7m sonic had been mounted upside down on its cassette, and fixed it
- added telescoping tower thermocouple cabling (and thermocouples themselves), but logger is still not hooked up
- started to raise tower, but found that the cables tangled. Pulled the tower back down and reworked some things, but this took enough time that we were not able to fully erect it. (We didn't want to raise it when it was too dark to see what was happening). It is about 15m high.
In the meantime, Dan corrected a power issue for Gannet's group and drove to Logan to return bad sonics to Campbell.
So...another productive day and pretty much the end of set-up, but didn't actually become operational.
Also, a server issue in Boulder is preventing us from getting a complete picture of instrument outages, of which I'm sure there are still several.
Cold rain/mist all day that got a lot of stuff (including the field crew!) soaked.
Dan did a bunch of things (he can edit):
- further work to diagnose faulty sonics
- rented a skidsteer and dug out the road to the DCS gate
- parked the rental trailer at the sounding site for ISS
- Removed faulty sonic from sh
- Installed snow depth at mh
In the meantime, Isabel and I had a supersite day:
- PRS
- set the jumper on port 2 to bypass the part of the serial port board that was broken. 17m ec100 now online without having to replace the dsm.
- stripped everything from Sebastian's darkhorse and rerouted cables
- Installed 0.5m Rlw pair. This required moving the ventilator wires from being bundled with 7m and 32m for the intercomparison, to the 2m Rsw and Rlw ventilation system.
- Installed uplooking 2m Rlw.in, but removed 2m Rsw.out, since the mounting scheme for Rsw.out.2m had to change
- That was all we could do, since we didn't have the 7m and 32m booms, or plates to finish the Rxx.out.2m mounting (we later recovered plates from dcs, that weren't used)
- DCS
- swapped in the spare configured modem (M73), since the old one wasn't connecting and both data and status lights were blinking alternating red and green. Took restarting NIDAS before the Pi recognized it. dcst was on the net after ddn/dup when I checked at the site, but now seems to be down again.
- exchanged the 35m serial cable to rad_logger with a 5m one, since now much closer
- reinstalled 0.5m Rlw pair to allow more cable length
- dressed cables to darkhorse. In the process, found that the NR01 fan hadn't been connected. At this point, it would be easiest to connect it to the aux power on dcsg.
- lowered and tilted down the tower, to allow thermocouple cable installation.
- ran cables to 7m, 17m, and 32m, but didn't have the screws to mount the connectors to the sonics.
- tilted up the tower, but didn't raise.
Still needed:
- PRS
- 1/4-20 bolts and plates (now have) to implement Dan's big plate mounting of Rxx.out.2m, then dress darkhorse cables
- rad booms (need to retrieve from storage). I think I have bolts to mount paddles to these
- split link for 0.5m guy (have turnbuckle)
- all thermocouple stuff
- may want an extra 5m bulgin in case we run out of slack for the 32m radiometer
- raise tower
- may need amp power plug for NR01 fan
- DCS
- thermocouple screws
- thermocouple logger and cabling for 1,2,3m (UU will do), along with all thermocouples
- amp power plug for NR01 fan
- raise tower
Found that the real-time data stream hasn't been alive for the last 24 hours: no raw_data/isfs_ data files since 2022 01 05 04 00, and data_stats failed. This also caused webplots to fail. dsm_server was running, so check_cfact_procs didn't do anything. I just killed dsm_server and reran check_cfact_procs, which seems to have brought everything back to life.
As user isfs on barolo:
pgrep dsm_server (returns the process id number of the dsm_server process)
kill -9 <pid>
check_cfact_procs
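The recovery steps above, collected into one function (a sketch using the same process and command names from this log; run as user isfs on barolo):

```shell
# Kill a wedged dsm_server so check_cfact_procs will restart the chain.
restart_stream() {
    if pgrep -x dsm_server > /dev/null; then
        pkill -9 -x dsm_server   # dsm_server was up but wedged; force-kill
        check_cfact_procs        # now sees it gone and restarts everything
    else
        echo "dsm_server not running"
        return 1
    fi
}
```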
Dan and I spent the morning trying to fix as many csat/ec150/ec100 issues as we can with the spares we have.
MW: ec100 was reporting yesterday, but winds and gas were both all nans. By the time we got there this morning, the ec100 was not reporting at all, with red status lights for both gas and sonic. After replacing the ec100 we did receive data, but the winds were still all nans, so we replaced the sonic head as well. Now the whole system is working.
PRS:
- 1m sonic: ec100 not reporting anything, and looking like it never had reported anything since it got set up. In ec100 box gas light was red, power and sonic lights were green. After replacing ec100 the entire system is working.
- 7m sonic: was reporting data but all bad winds, diag value of 16 ("re-acquiring signal"). After tilting the tower down it appears that the heating wire on the sonic shorted:
- We replaced the sonic head and now the whole system is working.
- 17m sonic: ec100 had been unresponsive, was reporting when we arrived at the site, then became completely unresponsive again before we got around to replacing it. data_stats output from the current nidas file showed a start time of 01 03 22:22:42.056 (which I think must be wrong, since this file's begin time was 20220104_120000 and the previous file didn't show any data from port 2), and an end time of 01 04 18:23:29.970, shortly after we checked it on site. Anyhow, in the ec100 box the power light was blinking red and the sonic + gas lights were blinking yellow. When we plugged in a spare ec100 it had the same light situation, so we eventually tracked the problem down to port 2 on the prst dsm, which is not properly powering the ec100. There are no spare ports on the dsm, so we will need to swap the dsm itself to get eight working ports. Tentatively planning to do this tomorrow, since we also need to move radiometers at prs tomorrow. When we plugged the 17m ec100 into a different port it looks like it is reporting data: some nans for winds, but some good wind data.
- 32m sonic: was reporting data but all bad winds, diag value of 16ish. Replaced the sonic head, after that entire system is working.
Dan and I have been wondering about the effect of the deicing setup and whether that's causing our problems. Before we left this morning we saw that the ldiag value at SH has been consistently 1 since 12/21/21 12:07 MST (19:07 UTC), and from the csat_heat log the first time deicing ran was 2021-12-21 19:00:05,078, which could mean that running the heating system killed the sonic. We have yet to visit SH, so don't know yet whether it shorted like the 7m prst sonic did. This afternoon I looked into other sonics to see if I found this pattern again and mostly didn't. Ldiag was consistently 1 at prst 7m after 13:02 MST 12/21, which is around the time I was deploying the deicing script, but I can't check the heater log right now because prst is down.
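The "consistently 1 since T" check above can be done mechanically on dumped ldiag samples; a sketch assuming whitespace-separated "timestamp ldiag" lines on stdin (the input format here is hypothetical):

```shell
# Print the first timestamp of the trailing run where ldiag == 1, i.e.
# the moment the sonic got stuck; prints nothing if the last sample is
# not 1. Input: "timestamp ldiag" lines on stdin.
ldiag_stuck_since() {
    awk '$2 == 1 { if (start == "") start = $1 }
         $2 != 1 { start = "" }
         END     { if (start != "") print start }'
}
```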
Since we were worried about deicing, this morning I logged in and commented out the crontab entry for the csat_heat script in all dsms except dcst (which was and still is offline: originally serial card issues, now modem issues according to Steve). So at least the sonics that we replaced should now stay operational, if it's the heating that's interfering with that.
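The crontab edit above can be done non-interactively on each dsm; a sketch (csat_heat is the script name from this log; the crontab line in the example is made up):

```shell
# Comment out any active crontab line that mentions csat_heat.
# Use as a filter on each dsm:  crontab -l | disable_csat_heat | crontab -
disable_csat_heat() {
    sed 's/^\([^#].*csat_heat.*\)$/#\1/'
}
```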
After our rounds SH still has a sonic that's giving bad winds, but we don't have any more known good spares. Dan has been experimenting with the ec100 Windows utility and the bad ec100s and csats, and did manage to get a working csat/ec100 combo using the prst 7m sonic and the prst 1m ec100, but since that's the csat that got shorted we don't feel great about re-deploying it as a spare yet. Dan has also called Campbell and gotten some troubleshooting advice and assistance.
While Dan and Isabel were playing with sonics, Sebastian and I worked at dcs to reconfigure from the radiation intercomparison configuration to the radiative flux divergence configuration.
- Replaced dcst DSM with spare3, since the dcst serial board wasn't connecting to the Pi. Everything came up except the modem, which is blinking red/green on the LEDs.
- Tilted the TT down.
- Moved rad logger to the bed of the TT
- Moved the CVF4s with the Rlw.in/out.2m on Sebastian's "swartz" to our darkhorse. Had to shift our swartzs to accommodate this. More cable dressing is needed.
- Installed the 0.5m radiometers on an improvised stand, but didn't allow enough cable length (the cables now must be managed when tilting the tower). I need to redo this mount.
- Installed 7m and 32m radiometers on their respective booms
- Removed Sebastian's darkhorse
- Removed all cabling going to his darkhorse. This was a bit of a mess, since intercomparison cabling was dressed together with permanent cabling, and all of it was under the snow. Fortunately, most of the snow was not hard packed, so the cables could be pulled up.
- Sebastian reconnected the redone cabling to his webcam.
- Tilted the TT up, and half raised, mostly to check on cable lengths/cable feeds up the tower. No significant problems encountered.
Didn't have time to install thermocouple cabling (which is now in the job box).
I may have left the key in the generator.
Steve and I paid several visits to prs today.
- replaced 3m trh housing with one that actually has a fan in it!
- fixed pressure port mount on 2m sonic boom (was missing a piece of 1/2" PVC pipe, that was cracked during set-up)
- debugged long wave radiation mote. Mote wasn’t reporting data, replaced mote, then found incoming radiometer reporting bad data. Back at base Steve disassembled it and after that it worked again so we put it back at the site. Now the entire prs radiometer intercomparison is running.
- debugged (some of the) sonic problems. The ec100 at 2m was completely unresponsive, though it showed a green light for the sonic and a red light for the ec150. Tried a bunch of things but ended up replacing the ec100, after which sonic and gas were both ok. Also ended up replacing the 3m csat head, which was giving nans for winds and a diagnostic code of 16 ("re-acquiring signal"). At prs the 1m and 17m ec100s are still unresponsive, and the 7m and 32m sonics are reporting nans.
We visited dcs at the end of the day to see why dcst was down, suspecting a power or network problem. The problem actually seems to be that the serial board is not showing up as a USB device, which kills networking since the modem was plugged into it. No serial ports working and no port status lights on the serial card, but the gps ops light was still blinking. Output from lsusb doesn't show the serial board or anything plugged into it. Will try replacing the dsm tomorrow.
Meanwhile Dan fixed power problems at several sites (dcs, pc, and lc) and worked on getting someone to plow site accesses. We learned that the guy who watches over the Water District building plowed access to the base.
Dan also cycled power at sh to get it back on the network.
Plan for tomorrow is for Steve to work on mounting radiometers and thermocouples in their proper places at dcs with Sebastian, while Dan and I see about replacing more sonics and ec100s. We haven't run out of spares yet, but getting very close.
Feel free to add details, this is short because I’m typing it on my phone…
I have just arrived back, and Dan and Isabel (with Bill) are imminent. I did 2 stops and a drive-by along the way:
UP: power panel is there, but no hint of a transformer
PC: I <think> the GFI was tripped at the power drop; once reset there was power at the panel but not in the job box. Seems like the power cable is cut/disconnected somewhere. This is unfortunate, since there is a foot of snow, packed down much of the distance by snowmachine tracks.
MW: Found that, indeed, the Binder connector to the Tsoil probe was loose. Reseating seems to have fixed Tsoil – one item now off the TODO list...