Dan and I spent the morning trying to fix as many csat/ec150/ec100 issues as we could with the spares we have.
MW: ec100 was reporting yesterday, but winds and gas were both all nans. By the time we got there this morning, the ec100 was not reporting at all, with red status lights for both gas and sonic. After replacing the ec100 we did receive data, but the winds were still all nans, so we replaced the sonic head as well. Now the whole system is working.
PRS:
- 1m sonic: ec100 not reporting anything, and it looks like it never had reported anything since it got set up. In the ec100 box the gas light was red; power and sonic lights were green. After replacing the ec100 the entire system is working.
- 7m sonic: was reporting data but all bad winds, diag value of 16 ("re-acquiring signal"). After tilting the tower down it appears that the heating wire on the sonic shorted. We replaced the sonic head and now the whole system is working.
- 17m sonic: the ec100 had been unresponsive, was reporting when we arrived at the site, but then became completely unresponsive again before we got around to replacing it. data_stats output from the current nidas file showed a start time of 01 03 22:22:42.056 (which I think must be wrong, since this file's begin time was 20220104_120000 and the previous file didn't show any data from port 2) and an end time of 01 04 18:23:29.970, shortly after we checked it on site. Anyhow, in the ec100 box the power light was blinking red and the sonic and gas lights were blinking yellow. When we plugged in a spare ec100 it showed the same lights, so we eventually tracked the problem down to port 2 on the prst dsm, which is not properly powering the ec100. There are no spare ports on the dsm, so we will need to swap the dsm itself to get eight working ports. Tentatively planning to do this tomorrow, since we also need to move radiometers at prs then. When we plugged the 17m ec100 into a different port it started reporting data: some nans for winds, but also some good wind data.
- 32m sonic: was reporting data but all bad winds, diag value of 16ish. Replaced the sonic head; after that the entire system is working.
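For quick triage of these diag values, a tiny decoder helps when scanning rserial/data_stats output. This is just a sketch: the only mapping I'm confident of is 16 = "re-acquiring signal", which we keep seeing in our own logs; any other bits should be checked against the Campbell manual before trusting a label.

```python
# Minimal CSAT diag-word triage.  Only the value 16 ("re-acquiring
# signal") is taken from our own logs; any other set bits are reported
# as unknown rather than guessed at -- check the Campbell manual.
DIAG_BITS = {
    16: "re-acquiring signal",  # seen on the prs 7m and 32m sonics
}

def decode_diag(diag):
    """Return flag names set in an integer diag word (empty list if ok)."""
    if diag == 0:
        return []
    flags = [name for bit, name in DIAG_BITS.items() if diag & bit]
    return flags or ["unknown flags: 0x%x" % diag]

print(decode_diag(16))  # -> ['re-acquiring signal']
print(decode_diag(0))   # -> []
```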
Dan and I have been wondering about the effect of the deicing setup and whether it's causing our problems. Before we left this morning we saw that the ldiag value at SH has been consistently 1 since 12/21/21 12:07 MST / 19:07 UTC, and from the csat_heat log the first time deicing ran was 2021-12-21 19:00:05,078, which could mean that running the heating system killed the sonic. We have yet to visit SH, so we don't know whether it shorted like the 7m prst sonic did. This afternoon I looked at other sonics to see if I could find this pattern again and mostly didn't. Ldiag was consistently 1 at prst 7m after 13:02 MST 12/21, which is around the time I was deploying the deicing script, but I can't check the heater log right now because prst is down.
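The SH timeline check above can be written down so we can repeat it for other sites once prst is back up. The timestamps are the actual SH values from the logs; the 30-minute window is an arbitrary threshold I picked, not anything principled.

```python
from datetime import datetime, timedelta

# Sketch of the check we did by hand: did ldiag go bad shortly after
# the first deicing run?  Times are the real SH values (UTC); the
# 30-minute window is an arbitrary choice.
first_heat = datetime(2021, 12, 21, 19, 0, 5)    # from csat_heat log
first_bad_ldiag = datetime(2021, 12, 21, 19, 7)  # ldiag stuck at 1

def heating_suspect(heat_time, bad_time, window_min=30):
    """True if ldiag went bad within window_min minutes after heating started."""
    delta = bad_time - heat_time
    return timedelta(0) <= delta <= timedelta(minutes=window_min)

print(heating_suspect(first_heat, first_bad_ldiag))  # -> True
```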
Since we were worried about deicing, this morning I logged in and commented out the crontab entry for the csat_heat script on all dsms except dcst (which was and is offline; originally serial card issues, now, according to Steve, modem issues). So at least the sonics that we replaced should now stay operational, if it's the heating that's interfering with that.
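For the record, the crontab change amounts to commenting out the csat_heat line on each dsm. A sketch of the transformation (the script path shown here is made up for illustration, done by hand with crontab -e in the field):

```python
# Comment out any active crontab line that runs csat_heat.
# The /home/daq path is a hypothetical example, not the real entry.
def disable_csat_heat(crontab_text):
    """Prefix '#' to every non-comment crontab line mentioning csat_heat."""
    out = []
    for line in crontab_text.splitlines():
        if "csat_heat" in line and not line.lstrip().startswith("#"):
            line = "#" + line
        out.append(line)
    return "\n".join(out)

example = "*/20 * * * * /home/daq/csat_heat.py\n0 * * * * /usr/bin/uptime"
print(disable_csat_heat(example))
```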
After our rounds SH still has a sonic that's giving bad winds, but we don't have any more known-good spares. Dan has been experimenting with the EC100 Windows utility and the bad ec100s and csats, and did manage to get a working csat/ec100 combo with the prst 7m sonic and the prst 1m ec100; but since that's the csat that got shorted, we don't feel great about re-deploying it as a spare yet. Dan has also called Campbell and gotten some troubleshooting advice and assistance.
While Dan and Isabel were playing with sonics, Sebastian and I worked at dcs to reconfigure from the radiation intercomparison configuration to the radiative flux divergence configuration.
- Replaced dcst DSM with spare3, since the dcst serial board wasn't connecting to the Pi. Everything came up except the modem, which is blinking red/green on the LEDs.
- Tilted the TT down.
- Moved rad logger to the bed of the TT
- Moved the CVF4s with the Rlw.in/out.2m on Sebastian's "swartz" to our darkhorse. Had to shift our swartzs to accommodate this. More cable dressing is needed.
- Installed the 0.5m radiometers on an improvised stand, but didn't allow enough cable length (the cables now must be managed when tilting the tower). I need to redo this mount.
- Installed 7m and 32m radiometers on their respective booms
- Removed Sebastian's darkhorse
- Removed all cabling going to his darkhorse. This was a bit of a mess, since intercomparison cabling was dressed together with permanent cabling, and all of it was under the snow. Fortunately, most of the snow was not hard packed, so the cables could be pulled up.
- Sebastian reconnected the redone cabling to his webcam.
- Tilted the TT up, and half raised, mostly to check on cable lengths/cable feeds up the tower. No significant problems encountered.
Didn't have time to install thermocouple cabling (which is now in the job box).
I may have left the key in the generator.
Steve and I paid several visits to prs today.
- replaced 3m trh housing with one that actually has a fan in it!
- fixed pressure port mount on 2m sonic boom (was missing a piece of 1/2" PVC pipe that cracked during set-up)
- debugged long wave radiation mote. Mote wasn’t reporting data, replaced mote, then found incoming radiometer reporting bad data. Back at base Steve disassembled it and after that it worked again so we put it back at the site. Now the entire prs radiometer intercomparison is running.
- debugged (some of) the sonic problems. The ec100 at 2m was completely unresponsive, though it showed a green light for the sonic and a red light for the ec150. Tried a bunch of things but ended up replacing the ec100, after which sonic and gas were both ok. Also ended up replacing the 3m csat head, which was giving nans for winds and a diagnostic code of 16 ("reacquiring signal"). At prs the 1m and 17m ec100s are still unresponsive and the 7m and 32m sonics are reporting NAs.
We visited dcs at the end of the day to see why dcst was down, suspecting a power or network problem. The problem actually seems to be that the serial board is not showing up as a USB device, which kills networking since the modem was plugged into it. No serial ports working and no port status lights on the serial card, but the gps ops light was still blinking. lsusb output doesn't show the serial board or anything plugged into it. Will try replacing the dsm tomorrow.
Meanwhile Dan fixed power problems at several sites (dcs, pc, and lc) and worked on getting someone to plow site accesses. We learned that the guy who watches over the Water District building plowed access to the base.
Dan also cycled power at sh to get it back on the network.
Plan for tomorrow is for Steve to work on mounting radiometers and thermocouples in their proper places at dcs with Sebastian, while Dan and I see about replacing more sonics and ec100s. We haven't run out of spares yet, but getting very close.
Feel free to add details, this is short because I’m typing it on my phone…
I have just arrived back, and Dan and Isabel (with Bill) are imminent. I did 2 stops and a drive-by along the way:
UP: power panel is there, but no hint of a transformer
PC: I <think> the GFI was tripped at the power drop, but once reset there was power at the panel but not in the job box. Seems like the power cable is cut/disconnected somewhere. This is unfortunate, since there is a foot of snow, packed down much of the distance by snowmachine tracks.
MW: Found that, indeed, the Binder connector to the Tsoil probe was loose. Reseating seems to have fixed Tsoil – one item now off the TODO list...
qctables has shown that only the dc CS125 is reporting. This is because the message format was never changed for the sensors that UU provided. Have just changed all formats to "11", except for dcs, which is currently unreachable.
For the record, the format was set to "5". Also, the procedure was:
```
rserial /dev/ttyDSM0   # (or whatever port the CS125 is on)
open 0                 # enter user menus
1                      # select message output
1                      # select message format
11                     # set format to 11
0                      # up one menu
9                      # save and exit
```
While in this menu, recorded the serial numbers for most sensors.
Just to start a list of what to do when/if(!) we get back next week (a fairly long list for 3 days!):
Site issues:
- dcs is totally down. It appears to have lost power on 21 Dec soon after being powered up and subsequently died on 26 Dec. Could the Victron have V/I limit settings set incorrectly? Update: Sebastian says that his power (connecting to our AC outlet) is fine. This further suggests that this is a Victron issue, rather than AC power. DONE, Victron was bad. Replaced.
- pc is totally down. Similarly, it lost power on 15 Dec soon after being powered up and died 23 Dec. AC not getting from power panel to job box – cable issue? DONE, extension cords became unplugged under the snow.
- lc power cord was cut by plow. Might stay up until we get there and replace the cord.
- up may or may not have a power drop, so might need to create another solar farm. (Do we have cables?)
Existing sensor issues:
- Most sonics are reporting NA, though they did work initially. Assuming this is ice related (try more heating?), but might check with Campbell to see if a configuration change would help. See table below
- Most EC150s also show missing data. However, in at least one case, I've found the EC150 data to be okay – just missing because statsproc processes it in a group with the sonic data that ARE bad. Hopefully, fixing the sonics will bring back the EC150 values.
- A thermocouple appears to have been plugged in at sh on 19 Dec, but seems to have died on 21 Dec – just 2 days later. While it was plugged in, the data look good, though not exactly like tc (maybe a good thing). I do see noise above 3Hz during the day and 8Hz at night.
- Rfan.3m.prs is bad (T/RH ok). Housing should be replaced? DONE, old housing had no fan!
- Rlw.in.2m.prs not reporting (in the CVF4 housing) – another Binder issue? DONE, mote wasn't sampling, then Rlw.in wanted to be opened up all the way and reassembled.
- Tsoil.mw is all -273. Hopefully, just a loose Binder connector. DONE, was loose Binder.
- P.2m.prs was missing horizontal pipe mount. FIXED
- HRXL.cc stopped reporting 24 Dec.
Sensor changes:
- reconfigure radiometers into operational mode at dcs and prs. Have new mounting for 0.5m radiometer "paddles".
- move CVF4 Rlws to darkhorse
- move other paddles to 0.5, 7, and 32m (booms for 7 and 32m are already at each site) – lots of cables to redo. Sebastian will help.
- attach thermocouples and loggers to towers at dcs and prs. Eric said he'd do most of this work.
- raise both TTs
Standard operations:
- finish recording serial numbers (a table has been started in this wiki)
- take site photos (maybe Dan has already done?)
- shoot boom angles (compass, then Leica)
- take the first set of gravimetric measurements (though freezing soil probably will have moisture that the EC-5 doesn't see).
Software:
- test heating scripts
- it appears that I'm close to parsing the RAD_LOGGER message correctly, but we should check with Sebastian. I suspect that the variable I've named "Rpile" really is an uncalibrated thermopile voltage.
- why are there still "Early sample...prior to sorting window" errors using prep? I thought our new timing fixed all this!
- why are there frequent "EMERGENCY|SampleOutput: inet:128.117.43.122:30010: IOException: inet:128.117.43.122:30010: send: Message too long, disconnecting" error messages on the DSM consoles? Could these be from the OTT, that reports something like a 32k byte sample?
- create irga_co2/h2o.dat QC files by height for supersites
- update all csat.dat QC files once boom angles are known.
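On the RAD_LOGGER parsing item above, a sketch of what the parser might look like. The comma-separated layout and every field name other than "Rpile" are guesses for illustration only; the real format is whatever sensor_catalog.xml on dcst says, and still needs checking with Sebastian.

```python
# Hypothetical rad_logger line parser.  The field order and the names
# "Tcase"/"Tdome" are illustrative guesses; only "Rpile" comes from the
# current sensor_catalog.xml work, and even that needs confirmation.
def parse_rad_logger(line, names=("Rpile", "Tcase", "Tdome")):
    """Split a comma-separated logger record into named float fields."""
    fields = line.strip().split(",")
    if len(fields) != len(names):
        raise ValueError("unexpected field count: %r" % line)
    return dict(zip(names, (float(f) for f in fields)))

rec = parse_rad_logger("12.5,271.3,270.9")
print(rec)
```

If Rpile really is an uncalibrated thermopile voltage, a per-sensor calibration factor would still have to be applied downstream to get W/m^2.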
Sonic status:
site | ec100 | ec150 | csat |
---|---|---|---|
up | ok | ok | ok |
lc | ok (fuse replaced) | ok | ok |
cc | ok | ok | ok |
dc | ok | ok | ok |
sp | ok | ok | ok |
sh | ok | ok | x |
mh | ok | ok | ok |
pc | ok | ok | ok |
mw | ok (replaced) | x | ok (replaced) |
prs.1m | ok (replaced) | ok | ok |
2m | ok (replaced) | ok | ok |
3m | ok | ok | ok (replaced) |
7m | ok | ok | ok (replaced) |
17m | bad port (will need to replace dsm) | ok? | ok? |
32m | ok | ok | ok (replaced) |
dcs.1m | ok | ok | ok |
2m | ok | x | ok |
3m | ok | x | ok |
7m | ok | ok | ok |
17m | ok | ok | ok |
32m | ok | ok | ok |
Weather:
Rain throughout the day, followed by wintry mix in the evening.
Tasking:
- Radiometers at DCS and PRS are plumb as of 1:30pm and 3pm MST today, respectively
- Punchlist items at both supersites
- Connected Lake Creek Power!
- Added two solar panels to Upper Provo and swapped batteries; unfortunately they will not see sun for the next week
- Base trailer deep clean
I will be leaving tomorrow, signing off for this trip. Great job everyone, see you in the new year!
I've just made a flurry of config changes on the dsms to accommodate how things are in the field:
- Finally got parsing of Sebastian's rad_logger message working – still not sure about the variables, but it is a start. Changes made on dcst's sensor_catalog.xml
- Changed all v2.7 motes at PRS to use id numbers. Set id=90 for the 2 Rsw's, id=91 for the 0.5m Rlw's, id=92 for 2.0m Rlw's, id=93 for 7m Rlw's and id=94 for the 32m Rlw's. Appropriate changes made to cfact.xml on prsg and prst. I think this is the way we will want to operate throughout this project.
It would be great if someone would push/pull these changes back to github.
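For reference, the PRS mote id scheme as a lookup table, which is roughly what any downstream parsing will need. The ids are the ones set in cfact.xml; the label strings are just my shorthand, not official variable names.

```python
# PRS v2.7 mote ids as configured in cfact.xml on prsg/prst.
# The label strings are informal shorthand, not official variable names.
MOTE_IDS = {
    90: "Rsw (both)",
    91: "Rlw 0.5m",
    92: "Rlw 2.0m",
    93: "Rlw 7m",
    94: "Rlw 32m",
}

def mote_label(mote_id):
    """Map a mote id to its radiometer group, or flag it as unknown."""
    return MOTE_IDS.get(mote_id, "unknown mote id")

print(mote_label(92))  # -> 'Rlw 2.0m'
```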
Lest I forget over the holidays!
- Green and White trucks!
- At least one tub of ISS stuff (to be left in Steve's office Dec 29 by Lou and Bill)
- Two socket wrenches that Steve stole from the base toolchest
- The 2nd soil kit? (It has the portable scale)
- Two base-plates with poles for the 0.5m radiometers at each supersite
Weather:
Clear throughout the morning, followed by scattered mixed precip in the afternoon.
Tasking:
- Loaded a GSA truck with unused cable, which Tony drove back to Boulder and left in the cage at Foothills
- Chris and myself went to the Center Creek satellite site and checked off a variety of punchlist items
- Chris departed for CO around noon
- Continued work at PRS, all ventilators powered and other items completed
The radiometers at both supersites are still a bit out of plumb, that will be the number one priority for tomorrow. I will make a blog post noting the time that each site's radiometers are plumb. Huge thanks to Chris and Tony for all of their hard work!
Weather:
Clear, calm, cold. Briefly got above freezing midday.
Tasking:
Last full day of this set up, great job everyone!
- Finished DCS, including power. Extended tower to tension guy wires, then retracted but left tilted up for the holidays. Everything was reporting (when we left)!
- Almost finished PRS. Retracted tower that we left extended for the holidays, also left tilted up for the holidays. Will visit tomorrow morning to finish some punchlist items
All DSMs are on the net except for UP and LC, which are still waiting on utility power.
Upon arrival this morning, the 17m EC100 status lights were blinking and output was nonexistent. Once the box warmed up (with sensors disconnected), the error resolved itself.
Output has resumed but H2O still seems abnormally high.
Before the holidays, just taking a peek at the available data on NCharts:
Variable | Issue |
---|---|
Rpile.in.cc | Flatlined |
Tsoil at mw | Bad values |
h2o.17m.prs | Values are too high |
co2.sp / h2o.sp | Too high / negative values |
up site | Stopped plotting |
I think a test for whether the upper radiometers are obscured is Rsw.out > Rsw.in. At low sun angles (sunrise, sunset – not a Fiddler on the Roof song), this could be real, but I'm pretty sure I've seen it associated with snow on the shields.
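The obscured-shield test could be automated roughly like this. The sun-elevation guard is my own addition, there to avoid flagging the real low-sun cases mentioned above; the 10-degree threshold is a placeholder, not a tuned value.

```python
# Sketch of the obscured-radiometer test: flag samples where outgoing
# shortwave exceeds incoming.  The sun-elevation guard and its 10-degree
# default are assumptions to avoid false alarms near sunrise/sunset.
def rsw_obscured(rsw_in, rsw_out, sun_elev_deg, min_elev=10.0):
    """True if Rsw.out > Rsw.in while the sun is reasonably high."""
    return sun_elev_deg > min_elev and rsw_out > rsw_in

print(rsw_obscured(rsw_in=50.0, rsw_out=180.0, sun_elev_deg=25.0))  # -> True (likely snow on shield)
print(rsw_obscured(rsw_in=50.0, rsw_out=60.0, sun_elev_deg=3.0))    # -> False (low sun, could be real)
```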
Weather:
Clear, cold, and calm. Apparent inversion with some smog throughout the day.
Tasking:
- PRS for the day. Tower is up and extended for the night! Most wiring is completed, save for the final termination of the ventilator power
- DCS power drop is now live
Tomorrow is our last full day, we will be tilting and extending DCS along with a variety of other tasks