We’ve been in the Chesapeake Bay for 36 hours now, and sampling for about 30 of those hours, so it’s probably time to check in. So far we’ve completed a Scanfish survey, 6 CTD casts, and a few net tows. The weather yesterday was spectacular – flat calm and sunny, with fish jumping all around the boat during the transit from the Bay Bridge to Rappahannock Shoals. Right around the time the night watch came on, the wind picked up a bit and reached 25-knot gusts. Not unworkable conditions, but not great, and we had to shut down some of our Z-traps this morning because the instruments don’t work well in high wind and current. Not to fear, we’ve been keeping busy with CTD casts. In many parts of the ocean there are too few zooplankton to catch them in the bottles on the CTD (called Niskin bottles), but here in the Chesapeake there are so many animals in the water that we can capture them quantitatively using the CTD. It’s not ideal, since the CTD samples a smaller volume of water than the nets, but it is also less prone to weather-related problems than the Z-traps.
But I digress. Here I am talking about the weather problems when I have data to show, and a comparison to discuss.
- Raw Scanfish data from 21 September 2010.
- Processed Scanfish data from 21 September 2010.
The two plots above show the Scanfish data, processed and averaged on the left and raw on the right. I put these up to show how our processing routines can affect how the data are visualized. It’s important to note here that we’re not changing the data – it’s all there, but when we process it we reduce the total number of data points to make the sections easier to understand and simpler for the programs to plot. The raw data plots use 57,874 individual data points for each variable we plot. The processed ones compress that to 6,357 data points – about 11% of the original, which makes it much easier to deal with. In the grand scheme of things we don’t take nearly as much data as some disciplines, but it is still easier for the computer to handle fewer data points when every point carries several values: latitude, longitude, depth, and the measured variable (dissolved oxygen, fluorescence, temperature, salinity, etc.). The other thing to note is that the overall patterns show up in both plots: hypoxic bottom water at the northern end of the transect, high salinity at the southern end, and temperature inversions (warmer water at depth) on all transects.
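For anyone curious what that kind of reduction looks like in practice, here is a minimal sketch of bin-averaging a raw tow into along-track and depth bins. This is an illustration only, not our actual processing routine; the file name, column names, and bin sizes are all hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical raw Scanfish file: one row per observation with
# lat, lon, depth, temperature, salinity, oxygen, fluorescence columns.
raw = pd.read_csv("scanfish_raw_20100921.csv")

# Rough along-track distance (km) from the start of the tow.
km_per_deg = 111.0  # approximate km per degree of latitude
raw["dist_km"] = km_per_deg * np.hypot(
    raw["lat"] - raw["lat"].iloc[0],
    (raw["lon"] - raw["lon"].iloc[0]) * np.cos(np.radians(raw["lat"].mean())),
)

# Assign each observation to a 1 km along-track bin and a 1 m depth bin.
raw["dist_bin"] = (raw["dist_km"] // 1).astype(int)
raw["depth_bin"] = (raw["depth"] // 1).astype(int)

# Average every variable within each bin; tens of thousands of raw points
# collapse to a few thousand gridded points without losing the big patterns.
processed = (
    raw.groupby(["dist_bin", "depth_bin"])[
        ["lat", "lon", "depth", "temperature", "salinity", "oxygen", "fluorescence"]
    ]
    .mean()
    .reset_index()
)

print(len(raw), "raw points ->", len(processed), "binned points")
```

The same idea applies whatever the bin sizes are: the averaged product is lighter to plot and easier to read, while the raw file keeps the full-resolution record.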
Now, about the title of this post: retreat. If you compare this survey to the previous transects we did in August and May, you will see it is more similar to May, with oxic water at the southern end of the transect all the way to the bottom. This is what we might expect as the bay starts to mix with the onset of autumn storms. I’m not entirely sure how the timing of this mixing event compares to other years, but it’s worth looking at. I’ll have more to say about the data later, but now I’m due on deck to try some Z-traps. The wind is lying down and I think we can get some in before lunch (which is teriyaki/orange chicken salad).
Speaking of lunch, I’m trying to convince the cook to contribute a guest blog post. Help me out by commenting that you think that’s a good idea so I can print out the comments and post them in the galley. It could be interesting to get the perspective of the ship’s cook, and I can vouch for him being a good guy with fun stories and a dry wit. You don’t want to hear only from the planktoneer for the next few days, do you?
Update: I uploaded new graphics. They show the same data, but I cleaned them up slightly and the transects are now in the same orientation. Other than that, they are exactly the same, so it should be easier to compare them.
Guest post, please! More information from people working on a research vessel would be way more interesting than data flogging (just kidding about the data flogging – we love that part, too).