Real-World Test of the BloomSky STORM
Sure, it was tested in the lab – but how does the new BloomSky STORM fare in a real-world wind test against another high-grade weather sensor?
Summer is the most consistently windy time of year in the San Francisco Bay Area. It’s this predictable onshore wind that pulls in the iconic fog for which San Francisco is so well known. The windy Bay Area summer made for an ideal time and place for a real-world test of the BloomSky STORM.
How We Did It
For this real-world wind test we used three STORMs and a Davis Vantage Pro2 in an effort to compare the observations from the BloomSky station to another “prosumer” model. A secondary, but equally important, goal was to monitor the variation in wind speed observations among identical model stations within the same testing environment.
For the first phase of the test, two STORMs (designated ‘STORM A’ and ‘J’) were positioned on a tripod at a level height approximately four feet off the rooftop; the Davis and the third STORM (designated ‘B’) were positioned on a mast approximately seven feet above the surface (pictured below).
The second phase of our test placed all stations at the seven-foot height. This segment would show whether stations A and J would register higher wind speeds in the higher mounting position.
The San Francisco location offered a high, unobstructed vantage point, with nothing within 100 feet in any direction to block the wind. Wind speeds throughout the testing process ranged from a light breeze to gusts over 30 mph.
What We Found Out
These preliminary results of our real-world test of the BloomSky STORM suggest that mounting height has a dramatic impact on a station’s readings.
During the first phase of the test, STORM B and the Davis station observed consistently higher wind speeds at the time-aligned points than STORMs A and J. Additionally, the taller stations recorded noticeably higher top-end wind speeds overall: they observed speeds over 30 mph, while the shorter setup never exceeded 27 mph.
The second phase, with all stations at the seven-foot height, was meant to validate whether stations A and J would read higher when placed in the higher mounting position. The results showed the expected increase in observed wind speed from both stations.
The charts below show the results from phases one and two. These graphs plot the observed wind speeds from each BloomSky STORM against a “fixed speed” Davis observation. Put simply, these charts illustrate how much higher or lower each STORM read than the Davis station.
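The “difference from Davis” plotted in these charts amounts to a simple subtraction at each time-aligned point. A minimal sketch of that calculation (the function name is ours, not from the test scripts):

```python
def wind_delta(storm_speeds, davis_speeds):
    """Difference in wind speed (STORM minus Davis) at each time-aligned point.

    Positive values mean the STORM read higher than the Davis baseline;
    negative values mean it read lower.
    """
    return [storm - davis for storm, davis in zip(storm_speeds, davis_speeds)]


# Example: a STORM reading 12 and 8 mph against Davis readings of 10 and 10 mph
print(wind_delta([12, 8], [10, 10]))  # → [2, -2]
```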
Phase 1 showed that STORMs A and J, mounted at the four-foot height, registered consistently lower wind speeds than the Davis station. STORM B, at the same height as the Davis, observed nearly identical speeds in winds below ~10 mph and higher speeds above that mark.
During Phase 2, with all stations at a level height of approximately seven feet, STORMs B and J were within 1 mph of the Davis station in winds under 10–12 mph and recorded higher speeds once winds exceeded 12 mph. STORM A still read below the Davis in winds under 12 mph, but its readings were closer to the Davis benchmark than in Phase 1, and it surpassed the Davis observation in winds above 12 mph.
Understanding the Wind, and the Test
Wind is a difficult thing to record accurately due to its very nature: it’s gusty and erratic, and it swirls and shifts, producing turbulence that further affects the measured speed.
The BloomSky STORM captures wind measurements by converting the number of wind-cup rotations in a ten-second window to a speed; the STORM does this three times over a 30-second window. It then averages the three 10-second measurements to derive and transmit this “reported” wind speed. These 30-second interval speeds can be all over the chart – instantly jumping from zero to ten miles per hour – making any “consistent” speed somewhat dependent on the method and timing of reporting. To level off these gusty and erratic speeds and give a more consistent wind report, the BloomSky cloud produces a “sustained” wind speed by averaging these 30-second interval reports into a rolling two-minute window. This is the number reported in the mobile app and other BloomSky end-points.
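The two-stage averaging described above can be sketched in a few lines of Python. This is an illustration of the scheme as described, not BloomSky’s actual firmware; the calibration factor and names are placeholders:

```python
from collections import deque

# Hypothetical calibration factor (rotations-per-10-seconds → mph); the real
# value depends on the cup anemometer's geometry and is not published here.
ROTATIONS_TO_MPH = 1.0


def reported_speed(rotation_counts):
    """Average three 10-second rotation counts into one 30-second 'reported' speed."""
    assert len(rotation_counts) == 3
    ten_second_speeds = [count * ROTATIONS_TO_MPH for count in rotation_counts]
    return sum(ten_second_speeds) / 3


class SustainedWind:
    """Rolling two-minute average of the 30-second 'reported' speeds.

    Four 30-second reports fill a two-minute window, so a deque with
    maxlen=4 discards the oldest report as each new one arrives.
    """

    def __init__(self):
        self.window = deque(maxlen=4)

    def add_report(self, speed):
        self.window.append(speed)
        return sum(self.window) / len(self.window)


# Example: three 10-second counts averaged, then smoothed into the window
speed = reported_speed([10, 12, 14])   # → 12.0 with the placeholder factor
smoother = SustainedWind()
print(smoother.add_report(speed))
```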
The Davis station captures and reports an “instantaneous” speed every seven to ten seconds but only archives data every 60 seconds, which made for a smaller data set to work with.
The challenge in testing wind speeds across multiple stations is aligning the data points by time – no data point lines up conveniently with another. As a result, the readings we compared could have been captured as much as 15 seconds apart. That is a long window for wind: anyone who has sat on a beach or in a field has felt it change from moment to moment.
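One common way to do this alignment is to pair each reading from one station with the nearest-in-time reading from the other, discarding pairs that are too far apart. A sketch of that approach (our own illustration, not the scripts used in this test):

```python
def align_by_time(series_a, series_b, max_gap_s=15):
    """Pair each (timestamp, speed) in series_a with the nearest reading in
    series_b, keeping only pairs no more than max_gap_s seconds apart.

    Both series are assumed sorted by timestamp (in seconds).
    Returns a list of (timestamp_a, speed_a, speed_b) tuples.
    """
    pairs = []
    j = 0
    for t_a, v_a in series_a:
        # Advance j while the next reading in series_b is at least as close.
        while (j + 1 < len(series_b)
               and abs(series_b[j + 1][0] - t_a) <= abs(series_b[j][0] - t_a)):
            j += 1
        t_b, v_b = series_b[j]
        if abs(t_b - t_a) <= max_gap_s:
            pairs.append((t_a, v_a, v_b))
    return pairs


# Example: 30-second STORM reports vs 60-second Davis archive points
storm = [(0, 5), (30, 7), (60, 9)]
davis = [(10, 6), (55, 8)]
print(align_by_time(storm, davis))  # → [(0, 5, 6), (60, 9, 8)]
```

The reading at t=30 is dropped because its nearest Davis neighbor is 20 seconds away, outside the 15-second tolerance.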
Additionally, the nature of wind is such that it causes its own turbulence at speeds above approximately 15 mph. This means that swiftly moving columns of air can affect the speed and direction of adjacent columns, causing variability in speed over a very small distance. This will often impact the observations from stations in a testing environment, regardless of their proximity. Despite these challenges, the results were consistent throughout the real-world test of the BloomSky STORM.
If you have questions about our testing setup, our process, the results, or how you can perform a similar test, leave us a note in the comments below.