So far, it appears that we have been handling the ARL wind directions incorrectly. Specifically, qc_geo_tiltcor now shows tnw13 directions 180 degrees off from the other 4 ARL towers, and none of them agree with tnw11 for the one case I am investigating. Ed never replied to my email of 7 Nov 2017 with an answer, so I'm trying to work things out.
- The sonics are Gill WindMasters and, judging from my tnw12 photo, appear to use the "pipe mount"
- Guessing from Gill literature that the "north spar" would be opposite the electronics box for the pipe mount
- From my tnw12 photo, it appears that this places the north spar pointing perpendicular to the boom. In tnw12's case, this is counter-clockwise. From Robert's multistation scan, tnw12's boom was pointing from 89–93 degrees (east), thus the north spar would have pointed approximately north!
- From Gordon's wind direction quick reference, a WindMaster Vaz should be 90 degrees counterclockwise from the boom direction, or about 270 degrees.
- From Robert's scan, and looking in detail at the tnw12 photo, tnw13 had booms on the opposite side of the tower from the other 4 ARL towers.
- However, from plotting wind direction in noqc_instrument, it appears that the wind directions are (approximately) the same from all 5 ARL towers.
- Thus, I assume that the sonics still had the north spar on the north side of the boom for tnw13 and thus that we should use the reciprocal of the boom direction to determine the true tnw13 Vaz.
- In other words, Vaz from all ARL sonics should be approximately 270 degrees. This is not how we have been processing them to date.
So...I'm going to try out this theory...
P.S. Trying this manually on one case looked excellent. I've just updated all the cal files.
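The check for a case like this is just whether the tower-to-tower direction differences collapse once the corrected Vaz is applied, rather than sitting near 180 degrees. A minimal sketch of that comparison follows; the direction values and tower names are placeholders, not actual output from our netcdf files.

```python
import numpy as np

def circular_mean_diff(dir1_deg, dir2_deg):
    """Mean angular difference (dir1 - dir2) in degrees, handling the 0/360 wrap."""
    d = np.deg2rad(np.asarray(dir1_deg, dtype=float) - np.asarray(dir2_deg, dtype=float))
    return np.degrees(np.arctan2(np.nanmean(np.sin(d)), np.nanmean(np.cos(d))))

# Placeholder 5-minute wind directions for the case being checked:
dir_tnw13 = [262.0, 268.0, 271.0, 266.0]
dir_tnw11 = [258.0, 265.0, 274.0, 263.0]

# With the corrected Vaz this should be near 0 degrees, not near 180.
print("tnw13 - tnw11 mean direction difference: %.1f deg"
      % circular_mean_diff(dir_tnw13, dir_tnw11))
```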
As far as I can see, the Metek at 30.tse10 never reported good w variances. spd, dir, w, u'u', and v'v' all look okay, but w'w' is always bad. It appears that, fairly regularly, the reported (parsed?) w from this sonic has a bad sample with a large positive or negative value. This should be fixable using min/max ranges in the .xml. We might want to look at the raw messages, though, to see if this could possibly be a parsing error, although that is unlikely since all the other Meteks were just fine.
u, v, and tc do not seem to have this issue, which makes me think that it is not an actual malfunction of the acoustic part of this instrument. It is non-orthogonal, so any path problem would show up in a horizontal velocity in addition to the vertical. We should call this to the attention of whoever owns it.
A data example, unparsed from the sensor:
2017 05 19 00:00:43.6530 0.04999 44 M:x = 392 y = 65 z = 87 t = 1154\r\n
2017 05 19 00:00:43.7030 0.05 44 M:x = 364 y = 57 z = 94 t = 1176\r\n
2017 05 19 00:00:43.7531 0.05008 44 M:x = 339 y = -33 z = 13593 t = 1169\r\n
2017 05 19 00:00:43.8030 0.04991 44 M:x = 359 y = 55 z = 32 t = 1161\r\n
2017 05 19 00:00:43.8525 0.04949 44 M:x = 387 y = 68 z = 26 t = 1151\r\n
Clearly, the bad value of 13593 came straight from the sonic.
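A minimal sketch of the kind of range check the .xml limits would apply, run here directly against the raw messages. The regex, the cm/s and 0.01 degC scaling, and the ±30 m/s limit are my assumptions for illustration, not the actual configuration:

```python
import re

# Matches the Metek payload, e.g. "M:x = 392 y = 65 z = 87 t = 1154"
MSG = re.compile(r"M:x\s*=\s*(-?\d+)\s+y\s*=\s*(-?\d+)\s+z\s*=\s*(-?\d+)\s+t\s*=\s*(-?\d+)")

def parse_and_screen(line, scale=0.01, w_limit=30.0):
    """Return (u, v, w, tc), assuming counts are cm/s and 0.01 degC,
    or None if the sample fails the |w| range check."""
    m = MSG.search(line)
    if not m:
        return None
    x, y, z, t = (int(g) for g in m.groups())
    u, v, w, tc = x * scale, y * scale, z * scale, t * scale
    return None if abs(w) > w_limit else (u, v, w, tc)

# The z = 13593 sample above parses to w = 135.93 m/s and would be rejected.
```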
I just used the opportunity of staging for ARTSE to evaluate the effect of not cleaning our radiometers during Perdigao. All of the Perdigao radiometers were packed without being cleaned, to enable this test to be done. However, it is possible that the stretchy film that is part of the NR01 packing may have wiped some stuff off of the domes. Two of these NR01s were brought back (by me) by hand – the rest are still in the seatainer.
Round 1 (27 Jul 2017)
Today (27 July), which is partly cloudy with boundary-layer Cu, we installed NR01 #7 at approximately 13:00 (local). At approximately 14:43, I cleaned it during a period when the sun disk was clear of clouds. The order was: wetting Rsw.in, Rlw.in, Rlw.out, Rsw.out, then wiping in the same order. At the end of this cleaning, I added water to the wetness sensor, left the water on for about 10s, and wiped it clean, just as an indicator flag. I ran rserial during this cleaning on a laptop.
From the rserial output, I see Rsw.in change from about 926 W/m2 before cleaning to 912 W/m2 after. Thus, the effect of the dirt/pollen/smoke/oil/etc. was an enhancement of incoming solar radiation by about 1.5%. Obviously, the primary effect of the dirt was to enlarge the image of the solar disk. This effect will be difficult to model, and thus to correct in the data. It would have been a good idea to measure the direct and diffuse radiation separately. When I measure the other NR01, I'll also take data using the shadowing paddle.
Round 2 (1 Aug 2017)
Continuing the ARTSE piggy-back, also tested NR01 #12 at about 11:20 on 1 Aug 2017 with nearly clear skies. This time the procedure was:
- make sure data were being recorded on USB stick
- nevertheless, also logged data through minicom, with rserial running
- ran in clear skies for a while
- shaded Rsw.in for 20s with a paddle (forgot this time that I should crouch down to prevent my head from being visible to the radiometers)
- shaded Rlw.in for 20s with paddle
- added water to wetness sensor to indicate cleaning
- wetted and wiped Rsw.in
- same to Rlw.in (was visibly dirty)
- same to Rlw.out
- same to Rsw.out
- dry-wiped Rsw.in
- same to Rlw.in
- same to Rlw.out
- same to Rsw.out
- wiped the wetness sensor dry (shows the total cleaning time was 175s)
- shaded Rsw.in for 20s with paddle
- shaded Rlw.in for 20s with paddle
All of this is to try to separate the effect of cleaning on direct and diffuse radiation (though we don't expect much change in Rlw), since previous data showed that a correction model might need to treat direct and diffuse separately.
In this case, the results were:
- Rsw.in total before/after: 887/887 W/m2 (no change)
- Rsw.in diffuse before/after: 101.5/102.5 W/m2
- Rlw.in total before/after: 371.8/370.4 W/m2
- Rlw.in diffuse before/after: 374/372 W/m2
All of these changes are quite small, <1%. In the case of Rlw.in, my head in the field of view may have affected the results. Thus, this radiometer did not require this full procedure and we will not have to correct its data. Nevertheless, we should repeat this procedure on all the other radiometers when they return.
Also notable is that the diffuse effect on Rlw.in was small – under 2 W/m2 for a change in Rsw of 785 W/m2 (the direct component removed by shading: 887 - 101.5), or a "swcor" value of < 0.25%. The Eppley PIRs we tested in 2003 had values from 0.2–1.9%. Thus, the NR01 Rlw is pretty good.
Round 3 (18 Sep 2017)
Approximately the same procedure as Round 2, now with the remaining NR01s from the seatainer. All but 2 Rsw.in values were lower after cleaning; however, most data were taken during the afternoon when light was decreasing. Time series were logged through minicom capture. I used yet another new paddle, this one with aluminum tape (shiny) towards the sun and electrician's tape (black) towards the sensor.
The final table with the analysis so far:
Sensor ID | Rsw.in before (W/m2) | Rsw.in after (W/m2) | Extrapolated change (W/m2) | (before-change)/after (%) | Rsw diff before (W/m2) | Rsw diff after (W/m2) | Rlw before (W/m2) | Rlw after (W/m2) | before/after (%) | Rlw diff before (W/m2) | Rlw diff after (W/m2) | diff/normal before (%) | diff/normal after (%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 837.6 | 834.7 | -2.0 | 0.6 | 69.5 | 65.3 | 365.3 | 360.9 | 1.2 | 362.5 | 359.0 | -0.8 | -0.5 |
2 | 750.0 | 736.5 | 25.0 | -1.6 | 68.9 | 57.3 | 334.7 | 337.9 | -0.9 | 336.8 | 347.3 | 0.6 | 2.8 |
3 | 861.2 | 858.5 | -0.8 | 0.4 | 78.6 | 71.7 | 334.1 | 336.5 | -0.7 | 340.0 | 340.4 | 1.8 | 1.2 |
4 | 688.5 | 669.2 | 14.9 | 0.7 | 68.2 | 58.4 | 355.4 | 352.5 | 0.8 | 353.6 | 349.7 | -0.5 | -0.8 |
5 | 834.4 | 835.0 | -11.0 | 1.2 | 72.5 | 67.4 | 334.1 | 335.4 | -0.4 | 336.0 | 339.7 | 0.6 | 1.3 |
6 | 850.1 | 844.0 | -6.8 | 1.5 | 72.1 | 64.6 | 345.1 | 343.3 | 0.5 | 346.1 | 345.9 | 0.3 | 0.8 |
7 | 926 | 912 | - | 1.5 | - | - | - | - | - | - | - | - | - |
8 | 638.5 | 633.3 | 8.1 | -0.5 | 62.6 | 54.3 | 347.8 | 347.5 | 0.1 | 349.7 | 348.8 | 0.5 | 0.4 |
9 | 721.0 | 713.9 | 11.7 | -0.6 | 67.3 | 64.1 | 335.8 | 336.8 | -0.3 | 341.8 | 343.2 | 1.8 | 1.9 |
10 | 728.5 | 715.4 | 11.4 | 0.2 | 65.4 | 58.3 | 346.4 | 347.5 | -0.3 | 349.6 | 351.9 | 0.9 | 1.3 |
11 | 623.0 | 606.0 | 1.0 | 2.6 | 61.6 | 53.7 | 340.3 | 341.7 | -0.4 | 339.8 | 338.6 | -0.1 | -0.9 |
12 | 887 | 887 | - | 0.0 | 101.5 | 102.5 | 371.8 | 370.4 | 0.4 | 374 | 372 | 0.6 | 0.4 |
13 | 681.0 | 662.2 | 13.7 | 0.8 | 68.4 | 63.0 | 342.6 | 343.7 | -0.3 | 344.9 | 346.0 | 0.7 | 0.7 |
14 | 854.6 | 858.3 | -20.7 | 2.0 | 74.0 | 69.5 | 338.2 | 339.1 | -0.3 | 336.9 | 339.9 | -0.4 | 0.2 |
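For reference, the "(before-change)/after" column is the before value with the extrapolated time-trend change removed, compared to the after value, so that sky brightening or dimming between the two readings is not counted as a cleaning effect. A minimal sketch of that arithmetic (my reading of the table, not the analysis script itself):

```python
def cleaning_effect_pct(before, after, trend_change):
    """Percent change in Rsw.in attributable to cleaning, with the expected
    time-trend change removed (all radiation values in W/m2)."""
    return ((before - trend_change) / after - 1.0) * 100.0

# Example, sensor 1 from the table: 837.6, 834.7, -2.0  ->  about 0.6%
print(round(cleaning_effect_pct(837.6, 834.7, -2.0), 1))
```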
These data support the following conclusions:
- The maximum change in Rsw.in due to cleaning was 2.6%.
- The sense of all but 3 of the Rsw.in errors was less radiation after cleaning. Thus, the "sparkle" effect seems to have dominated over shading.
- The maximum change in Rlw.in due to cleaning was 1.2%. Most changes were quite small and were both positive and negative, i.e. not systematic.
- The maximum change of Rlw.in due to direct shading (the "f" correction) was 2.8%, but all but 2 sensors showed an increase of Rlw with decreasing Rsw. This suggests that the difference was due to the measurement method (paddle being too close?). I don't understand this result.
- Cleaning did not make a <major> change in the f-correction values.
Overall, not terrible results, though radiation to 2% certainly isn't up to our usual standards. My time trend analysis could be improved, but probably won't change the general trend here.
Next step – repeat these tests on the 4-component sensors!
Jakob had pointed out that most of the network was down. (I've been on vacation and hadn't been monitoring.) Apparently stations went down in the last lightning storm (again!), either Fri or Sat. I just:
- found a dead power supply in rne01 and replaced it with the one from rne02. Of course, rne02 is a <repeater> so much of the network is still down.
- reset differential protection on tse13
- reset differential protection on tse11
- reset differential protection on tse09
Of course, this is now <tear down>, so we shouldn't be working to <maintain> the network. Nevertheless, several groups are still running, using this period for post-cals, etc. Eventually, DTU will rework the network for operation after we are gone.
While showing my family around yesterday, I stopped by tse04 and noticed that the TRH.2m fan wasn't working. Cycling power (unplug/plug at the DSM) brought it back. This was done at about 12:30 (17 Jun). Not that we need the data now, but hopefully this will be a check on whether Ifan monitoring was working.
No changes in instruments, everything seems to have been working as usual for the last few days.
I had to cycle the power on rsw02 twice in the last 24 hours, after it got hung up and stopped responding to pings, so we lost a few hours of data for P.2m.rsw02.
The substitute Ubiquiti radio at tnw05 eventually stopped working for some reason. When I visited the site on Sunday to fix it, it came back after cycling the power. Then it went out again by the next day. So I visited the site again on Monday and swapped the original radio back into operation. Since then that radio has not had any problems. Go figure.
I leave today, Kurt and Dan are on their way here with another truck.
Tcase.in from the KZRAD has been missing since 2017-06-02. It started showing problems on 2017-05-26. So this afternoon I finally installed the last spare NR01 as a backup, from about 13:45 to 14:45 WEST on 2017-06-10.
It is only leveled by eye, and the ground was disturbed somewhat in order to wash the radiometers. (I only washed the NR01; I don't know why I didn't think to wash the KZ also...)
I assumed it was better to mount it further from the dark horse beam, but now I see that the legs are probably in view of the NR01 downlooking radiometers. Let me know if I need to change it.
Sonics and gas analyzers
I think all the sonics are working.
v01 10m IRGA
The v01 10m irgadiag has regular stretches of non-zero values; maybe 40% of samples are good. I don't know if that's worth trying to fix, given the chance that a new head could make it worse. Maybe it's just a spider web, so if we get a chance to climb we should take a look.
v07 20m LiCOR
The LiCOR at v07 20m (port 3 on v07t) was not reporting; fixed by cycling power.
Other issues
tnw05u
I replaced the Ubiquiti radio at tnw05. It was able to connect to its AP without our having to climb the tower to mount it. Since the original had already been upgraded to firmware 8.1.4, the replacement was also upgraded to 8.1.4, and then I just saved the config from the original and restored it to the replacement. Traffic to and from tnw05b and tnw05t has been normal, including rsyncs; however, for some reason tnw05u now does not respond to ping, ssh, or https access on the WLAN interface.
Sonics and gas analyzers
All sonics appear to be working. The new METEKs needed cal files, so the netcdf data prior to the cal file additions are probably all missing values. I think we also fixed an inconsistency in the cal file paths; before that fix, sites with more than one DSM may not have been applying the most accurate site-specific boom bearings.
TRH sensors
No change.
Other issues
tnw05u
The radio continued to hang up rsync connections today, even after a few configuration changes to match it exactly with a working radio, rsw06. The latest attempt was to upgrade to the latest firmware, 8.1.4, but even that did not work. We may try adding another radio in its place. This is being tracked in ISFS-152.
Generating stats_5min.xml
Isabel has written a python script to generate the stats xml file from the sensor list, to reduce the chance of human error in generating it manually. She found several missing TRH sensors and one sonic at the wrong height. Once we've compared the netcdf output using the generated xml file, the manual file will be replaced.
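The idea is roughly as sketched below (this is not Isabel's script; the sensor-list columns and the XML element/attribute names are placeholders, not the real stats_5min.xml schema):

```python
import csv
import xml.etree.ElementTree as ET

def build_stats_xml(sensor_csv, out_xml):
    """Emit one XML entry per row of the sensor list so the stats file
    cannot drift out of sync with the sensor table."""
    root = ET.Element("statistics")
    with open(sensor_csv, newline="") as f:
        # assumed columns: site, height, variables (space-separated)
        for row in csv.DictReader(f):
            grp = ET.SubElement(root, "group", site=row["site"], height=row["height"])
            for var in row["variables"].split():
                ET.SubElement(grp, "variable",
                              name="%s.%s.%s" % (var, row["height"], row["site"]))
    ET.ElementTree(root).write(out_xml, xml_declaration=True, encoding="utf-8")

# e.g. build_stats_xml("sensor_list.csv", "stats_5min_generated.xml")
```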
Sonics and gas analyzers
No known problems at this time. Unlike the RMYoungs they replaced, the METEKs at tnw07b 4m and v07 8m do not report ldiag, so those are all missing in the QC tables, but I don't know at the moment how to fix that.
TRH Sensors
The only sensor known to be down at this point is the 60m on tse11.
Other issues
v05
We visited v05 to investigate why it was offline, even though the Ubiquiti was still reachable. Eventually we rebooted the Ubiquiti and the connection came back. The DSM had been up the whole time so no data were lost.
tnw05 rsync
The tnw05u radio continues to be a problem, hanging up rsync connections. No explanation or fix yet.
v04
Tcase.in is still down and nothing has been done about it. We still need to work on mounting an NR01 on the dark horse with the KZ.
Sonics and gas analyzers
v07 8m
Replaced RMYoung with a METEK, using the same port since all ports are filled. The arrow points towards the tower, in the direction of the boom. We discovered that the fuse to the serial interface boards would eventually blow once the METEK was running, so we replaced the 1A fuse with a 3A. The configurations have been updated and all appears to be working now. Port 1 is jumpered for RS485.
tnw07b 4m
Replaced RMYoung with a METEK, using port 5, now jumpered for RS485.
TRH sensors
No change.
Other issues
rsync
We have added rsync monitoring to nagios and discovered that a few sites are days behind on rsync. It turned out the problem was the Ubiquiti not allowing certain traffic, the same symptoms as documented in ISFS-152. Two of the four systems were tnw05t and tnw05b, the same ones reported in the original issue. The problem keeps recurring within several hours of rebooting the Ubiquiti, so there must be something particularly wrong with the radio at that site.
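The check itself is conceptually simple: find the newest file rsync has delivered for a site and complain when it is too old. A minimal sketch under assumed names, not the plugin we actually added; the data directory and thresholds are placeholders:

```python
import os
import sys
import time

def check_rsync_age(site_dir, warn_hours=6.0, crit_hours=24.0):
    """Nagios-style check: exit code 0/1/2 based on the age of the newest
    file under site_dir (directory and thresholds are placeholders)."""
    newest = 0.0
    for dirpath, _, files in os.walk(site_dir):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
    if newest == 0.0:
        print("CRITICAL: no files found under %s" % site_dir)
        return 2
    age_hr = (time.time() - newest) / 3600.0
    if age_hr > crit_hours:
        print("CRITICAL: newest file is %.1f h old" % age_hr)
        return 2
    if age_hr > warn_hours:
        print("WARNING: newest file is %.1f h old" % age_hr)
        return 1
    print("OK: newest file is %.1f h old" % age_hr)
    return 0

if __name__ == "__main__":
    sys.exit(check_rsync_age(sys.argv[1]))
```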
Sonics and gas analyzers
tnw09 10m
Replaced CSAT3A sonic head at 10m on tnw09. At first it reported all bad samples. So we plugged in the original again, thinking maybe some spider webbing had been interfering, but it still reported bad samples also. So we left the replacement installed, and lo and behold, a few hours later the diagnostic bits turned to zero and remained zero the next day. Go figure.
v07 8m
Had a METEK tested and ready to go, did not have time to install it.
tnw07b 4m
Had a METEK tested and ready to go, did not have time to install it.
Other issues
ARTSE
I have one full EC150 tested and set aside.
Sonics and gas analyzers
v07 8m
RMYoung on port 1 is still bad, but now we know that it is actually at 8m.
tnw09 10m
CSAT3 IRGA winds on port 2 still about 50% flagged.
tnw07b 4m
RMYoung on port 1 keeps going in and out. It looks like the bad samples peak in the afternoons, similar to v07 8m.
tse01 10m
No sign of problems since going out for several hours on 2017-06-02.
TRH
tse11 60m
No change.
Other issues
There seems to be a problem with the winds in the high-rate dataset not being oriented correctly to geographic coordinates, so I need to investigate that.
tse05 finally stayed up overnight.
v04 Tcase.in is still out. I will figure out how to mount our last NR01 on the dark horse and just record it as an additional radiation sensor.
Around 2017-06-03,14:00 UTC, Isabel and I visited v07 to attempt to replace the flaky RMYoung in port 1 with a less flaky RMYoung. We were able to use a ladder to reach the 4m boom and swap in the new sonic, but that did not improve anything on port 1. We discovered a fuse was not quite plugged in the whole way on port 3, so plugged it in. And at some point we "lost" all the serial ports. Probing the bulgin pins only showed 5V on some pins and no pins with 12V. So just to be sure, we powered down the DSM and replaced all the fuses leading up to port 1, including the 7.5A and 3A fuses on the power panel. After that all the ports resumed working, but I don't know if that means we really did have a partially failing fuse somewhere (seems doubtful), or something else got into a funky state.
We still saw almost all flagged samples from the RMYoung on port 1. So then we discovered that the configuration was incorrect: port 1 is at 8m and not 4m, so we had replaced the wrong sonic. We unclipped all the cable loops hung on the tower and mapped them to their sensors; here's what we found:
Port | Height | Sensor |
---|---|---|
0 | 2m | CSAT |
1 | 8m | RMYoung |
2 | 6m | RMYoung |
3 | 4m | RMYoung |
6 | 2m | PTB |
7 | 0m | Soil mote |
So ports 1 and 3 were swapped, and I've fixed the configuration to match.
This means the sonic that really has been failing during the day is at 8m, so we cannot replace it without climbing. During our visit, I thought we had determined that the replacement did not work any better, but now I'm not so sure we were looking at the right sonic, so maybe we still have an RMYoung with only 2% flagged samples that could replace the 8m. Otherwise we have a METEK we could install, whenever we're able to climb.
Note that we disturbed the flow for the 2m CSAT sonic during our visit because we were working right next to it.
Sonics and gas analyzers
v07 4m
The plan is to replace it with the last RMYoung. 2% bad samples all the time would be better than 2% good samples during the day.
tse01 10m
CSAT3A on port 2 is reporting all bad samples as of 2017-06-02,08 UTC. We only have to replace whichever of the head or the box is bad, or both, and we have spares. Cycling power did not help.
tnw09 10m
Lots of flagged samples still. This is a CSAT3 IRGA. Up to 30% of the winds are flagged in the qctables; need to check on the gas diagnostic.
tnw07b 4m
Diagnostic still mostly bad.
TRH Sensors
tse11 60m
Still needs to be replaced.
Other issues
tse05 power
Isabel and I installed a second solar panel at tse05 around 16z. The panel is more horizontal and is aimed more to the west, since the first panel is aimed more to the east. It looks like the voltage jumped up a tenth, but maybe there will not be enough charging left today to get it through tonight. Maybe tomorrow night.