Science Stories: Adventures in Bay-Delta Data

Check Your Work – How the Interagency Ecological Program Conducts Internal Reviews
  • August 13, 2021

When you are running a long-term monitoring program, it’s easy to keep plugging away doing the same old thing over and over again. That’s what “long-term monitoring” is all about, right? But is the survey we designed 40 years ago still giving us useful data? With new sampling gear, new statistics, and new mandates, can we improve our monitoring to better meet our needs? These questions have been on the minds of Interagency Ecological Program (IEP) researchers, so an elite team spent over a year on a rigorous evaluation of three of IEP’s fisheries surveys to figure out how we can improve our monitoring program. The team was assembled with representatives from multiple agencies who each brought something to the table: guidance and facilitation, experience using the data, regulatory background, quantitative skills, and outside statistical expertise. This wasn’t the first time IEP had reviewed itself, but it was the first time the program took a truly quantitative look. The team focused on assessing how well the datasets could answer management questions organized by theme, so multiple surveys were reviewed together.

The Team

  • Dr. Steve Culberson, IEP Lead Scientist – Guidance and Facilitation
  • Stephanie Fong, IEP Program Manager – Guidance and Facilitation
  • Dr. Jereme Gaeta, CDFW Senior Environmental Scientist – Quantitative Ecologist
  • Dr. Brock Huntsman, USGS Fish Biologist – Quantitative Ecologist
  • Dr. Sam Bashevkin, DSP Senior Environmental Scientist – Quantitative Ecologist
  • Brian Mahardja, USBR Biologist – Quantitative Ecologist and Data User
  • Dr. Mike Beakes, USBR Biologist – Quantitative Ecologist and Data User
  • Dr. Barry Noon, Colorado State University – Independent statistical consultant
  • Fred Feyrer, USGS Fish Biologist – Data User
  • Stephen Louie, State Water Board Senior Environmental Scientist – Regulator
  • Steven Slater, CDFW Senior Environmental Scientist – Principal Investigator – FMWT
  • Kathy Hieb, CDFW Senior Environmental Scientist – Principal Investigator – Bay Study
  • Dr. John Durand, UC Davis – Principal Investigator – Suisun Marsh Survey

The Surveys

  • Fall Midwater Trawl (FMWT) – One of the cornerstones of IEP since 1967, this California Department of Fish and Wildlife (CDFW) survey runs from September-December every year and was originally designed to monitor the impact of the State Water Project and Central Valley Project on yearling striped bass.
  • San Francisco Bay Study – On the water since 1980, Bay Study is also run by the CDFW and samples year-round from the South Bay to the Confluence. It also monitors the effects of the Projects on fish communities.
  • Suisun Marsh Survey – Starting in 1979, the Suisun Marsh Survey is conducted by UC Davis with funding from the Department of Water Resources. This survey describes the impact of the Projects and the Suisun Marsh Salinity Control Gates on fish in the Marsh.

The Gear

  • The otter trawl – A big net towed along the bottom behind a boat, this type of net targets fish that hang out on the bottom (“demersal fishes”). This net only samples the bottom in deep water, but will sample most of the water column in shallower channels (less than 3 meters deep).
  • The midwater trawl – Another big net, but this one starts at the bottom and is pulled in toward the boat while trawling, gradually reducing the depth of the net so all depths are sampled equally. This net targets fish that like living in open water (“pelagic fishes”).

The question: Can we make it better?

A group of fish get together and look at a diagram that says: Surveys produce Data that inform decisions that inform mandates.
Figure 1. The team assembled to see how surveys could generate the best data to inform decisions and fulfill their regulatory mandates.

The question seemed simple – but the answer was unexpectedly complex. While the surveys all targeted similar fish, used similar gear, and went to similar places, they all had enough differences in their survey design, mandates, and institutional history that looking at them together wasn’t easy.

The first step in the review process was, perhaps, the most difficult. The team had to get buy-in from all the leaders of the surveys under review, all the regulators mandating that the surveys take place, all the people critical of the surveys as they currently stand, and the supervisors of the team members who were going to devote a large percentage of their time to the effort. Earning the trust of multiple interest groups was challenging, but it was also one of the most rewarding and exciting parts of the process. Stephanie reflected: “We plan to incorporate more of their recommendations in upcoming reviews and increase our collaboration with them… it also would have been helpful if we could have spent more time up front with getting buy-in from those being reviewed and those critical of the surveys.”

Once everyone was on board, the team took a deep dive into the background behind each survey. Why was it established? How have the data been used in the past? Has it changed over time? How are the data currently shared and used? Putting together this information gave them a great appreciation for the broad range of experience within IEP. In particular, the team needed to pay attention to the regulatory mandates that first called for the surveys (such as Endangered Species Act Biological Opinions and Water Rights Decisions), to make sure the surveys were still meeting their needs.

The next step was putting the data together, and here’s where it got hard. The team had to find all the data, interpret the metadata, and convert it into standard formats that were comparable between surveys. Even basic things like the names of fish were different. In the FMWT data, a striped bass was “1”, in the Suisun Marsh data a striped bass was “SB”, and in the Bay Study data a striped bass was “STRBAS”. The team quickly identified a few easy steps that could improve the programs without changing a single survey protocol!

  1. Make all data publicly available on the web in the form of non-proprietary flat files (such as text or .csv files); a sketch of what this kind of standardization might look like follows this list.
  2. Create detailed metadata documents describing all the information needed to understand the survey (assume the person reading it is a total stranger who knows nothing about your program!)
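
For the code-inclined, here is a minimal sketch of what that kind of standardization might look like, written in Python with pandas. The file names, column names, and the tiny species-code lookup table are all hypothetical stand-ins; the real surveys track hundreds of species codes and far more fields per tow.

```python
import pandas as pd

# Hypothetical lookup table mapping each survey's species code to a common name.
# Only striped bass is shown; a real table would cover every species code.
CODE_MAP = {
    "FMWT": {"1": "striped bass"},
    "Suisun Marsh": {"SB": "striped bass"},
    "Bay Study": {"STRBAS": "striped bass"},
}

def standardize(catch_file: str, survey: str) -> pd.DataFrame:
    """Read one survey's catch table and translate its species codes
    into the shared common names, tagging each row with its source survey."""
    df = pd.read_csv(catch_file, dtype={"species_code": str})
    df["species"] = df["species_code"].map(CODE_MAP[survey])
    df["survey"] = survey
    return df

# Combine all three surveys into one long-format, non-proprietary flat file.
combined = pd.concat(
    [
        standardize("fmwt_catch.csv", "FMWT"),
        standardize("suisun_catch.csv", "Suisun Marsh"),
        standardize("baystudy_catch.csv", "Bay Study"),
    ],
    ignore_index=True,
)
combined.to_csv("combined_catch.csv", index=False)
```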

Figuring out better methods of storing and sharing data is relatively easy, but how do we decide whether we should change when, where, and how the surveys actually catch fish? The surveys were all intended to track changes in the fish community, but community-level changes are complex, with over 100 fish species in the estuary. The team decided to divide the task into three parts:

  1. Figure out how to quantify bias between the surveys for individual fish (seeing if some surveys are more likely to catch certain species than the other surveys, Figure 3).
  2. Create a better definition of the “fish community” by identifying which groups of species are caught together more often (Figure 2).
  3. See what happens when we change how often we sample or how many stations we sample. Do we lose any information if we do less work?

Image of a classification tree with four groups of fish labeled: Brackish, Fresh, Marine, and Grunion.
Figure 2. The quantitative ecologists used a form of hierarchical clustering to figure out which groups of fish are most frequently caught together, and which species is most indicative of each group. The indicator species are the ones with the gold stars. Figure adapted from IEP Survey Review Team (2021).
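
For readers curious about the mechanics, here is a minimal sketch of the general idea behind that clustering step (not the team’s actual analysis). It assumes a hypothetical long-format table like the combined_catch.csv above, with station, species, and count columns, and groups species by how similarly they occur across stations.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical long-format catch table: one row per tow per species.
catch = pd.read_csv("combined_catch.csv")

# Build a station-by-species table of total catch, then reduce to presence/absence.
matrix = (
    catch.pivot_table(index="station", columns="species",
                      values="count", aggfunc="sum", fill_value=0) > 0
)

# Cluster species (columns) by how similarly they are distributed across stations.
# Jaccard distance on presence/absence is one common choice for community data.
dist = pdist(matrix.T.values, metric="jaccard")
tree = linkage(dist, method="average")

# Cut the tree into four groups, loosely analogous to the brackish, fresh,
# marine, and grunion groups shown in Figure 2.
groups = fcluster(tree, t=4, criterion="maxclust")
for species, group in zip(matrix.columns, groups):
    print(group, species)
```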

Going through this process involved pulling out all the fancy math and computer programming. Sam, Brian, Jereme, Mike, and Brock explored the world of generalized additive models, principal tensor analysis, Bayesian generalized linear mixed models, hierarchical cluster analysis, and things involving overdispersion in negative binomial distributions. If there were a way to Math their way to the answer, they were going to find it!
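
To give a flavor of just one tool in that box, the sketch below fits a negative binomial regression, a standard choice for overdispersed catch counts, to ask whether one species is caught at different rates by the different surveys after accounting for month. It is only an illustration with hypothetical file and column names, not a reproduction of the team’s models.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-tow data for a single species: catch count, survey, and month.
tows = pd.read_csv("striped_bass_tows.csv")

# Negative binomial GLM: catch counts are usually overdispersed relative to a
# Poisson model, and the extra dispersion parameter keeps standard errors honest.
model = smf.glm(
    "catch ~ C(survey) + C(month)",
    data=tows,
    family=sm.families.NegativeBinomial(alpha=1.0),  # alpha could also be estimated
).fit()

# Survey coefficients far from zero would suggest gear- or design-related
# differences in catch rates, i.e., relative bias between the surveys.
print(model.summary())
```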

Image of a midwater trawl with two fish talking about whether there are any biases in their fishing. They agree that the boat probably catches fewer fish than we think it does.
Figure 3. The team also evaluated biases in sampling gear. Sampling bias occurs when the gear doesn't sample all fish consistently. Sometimes they miss fish of certain sizes, fish that live in certain habitats, or fish that can evade the nets.

For better or worse, Math and a 1-year pilot effort will only get you so far. The team could develop some recommendations, scenarios, and new methods, but it will be up to management to decide how to continue the review effort and then implement change. Their results highlighted a few key points that will be useful in reviewing the rest of IEP’s surveys and making decisions about changes:

  1. Involving stakeholders early in the review process will increase transparency, facilitate sharing of ideas, and promote community understanding.
  2. We need to characterize the biases of our sampling gear in order to make stronger conclusions about fish populations.
  3. Identifying distinct communities of fish helps us track changes over time and space.
  4. We can use Bayesian simulation methods to test the impacts of altered sampling designs on our understanding of estuarine ecology (a simplified sketch follows this list).
  5. These sorts of reviews take time and effort from a highly skilled set of scientists, so IEP will need to dedicate a lot of staff to a full review of all its surveys.
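
As a loose illustration of point 4, here is a toy Monte Carlo sketch, far simpler than the team’s Bayesian approach and with entirely made-up numbers, showing how sampling fewer stations widens the spread (and so lowers the precision) of an abundance index.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "truth": mean catch per tow of one species at 40 stations.
station_means = rng.gamma(shape=2.0, scale=5.0, size=40)

def simulated_index(n_stations: int, n_sims: int = 1000) -> np.ndarray:
    """Simulate an annual abundance index built from a random subset of stations,
    drawing negative-binomial catches (mean = station mean) at each one."""
    indices = np.empty(n_sims)
    for i in range(n_sims):
        chosen = rng.choice(station_means, size=n_stations, replace=False)
        catches = rng.negative_binomial(n=2, p=2 / (2 + chosen))
        indices[i] = catches.mean()
    return indices

# Compare how variable the index is with the full design versus thinned designs.
for n in (40, 20, 10):
    idx = simulated_index(n)
    print(f"{n:2d} stations: index mean {idx.mean():5.2f}, sd {idx.std():4.2f}")
```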


