Jun 02, 2025

USDA LTAR Common Experiment measurement: Surface fluxes of carbon dioxide, water vapor, and energy with the Eddy Covariance method

  • USDA Agricultural Research Service, Hydrology and Remote Sensing Laboratory, Beltsville, MD
Protocol Citation: Joseph G. Alfieri 2025. USDA LTAR Common Experiment measurement: Surface fluxes of carbon dioxide, water vapor, and energy with the Eddy Covariance method. protocols.io https://dx.doi.org/10.17504/protocols.io.5jyl8211dl2w/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: Working
We use this protocol and it's working
Created: June 19, 2024
Last Modified: June 02, 2025
Protocol Integer ID: 103376
Keywords: Long-Term Agroecosystem Research, LTAR, USDA LTAR, Common Experiment, surface flux, eddy covariance method, carbon dioxide, water vapor, energy
Funders Acknowledgements:
United States Department of Agriculture
Grant ID: -
Abstract
This protocol describes use of the eddy covariance method for quantifying surface fluxes of carbon dioxide, water vapor, and energy. It provides a brief overview of eddy covariance theory, minimum requirements for instrumentation, guidance for sensor installation and maintenance, and procedures for data processing and quality control.
Guidelines
A very brief overview of the underlying theory

As a result of convection arising from thermal stratification (buoyancy effects), wind shear caused by vertical differences in wind speed, and mechanical mixing due to interactions with surface elements, the atmosphere is turbulent. Because of its turbulent nature, the air flow near the surface varies stochastically in both space and time. Moreover, it is both three-dimensional and rotational. In other words, atmospheric turbulence can be envisioned as a mixture of innumerable rotating eddies that vary continuously in size from a few centimeters to several kilometers.

These turbulent eddies can carry heat, water vapor, carbon dioxide, and other trace gases with them. When eddies interact with the surface, it results in the exchange of energy, mass, and momentum between the land surface and the atmosphere. The resulting fluxes are a critical linkage between numerous biogeophysical and biogeochemical processes; thus, surface flux data is important to a broad range of scientific and practical applications.

While there are multiple techniques that can be used to measure surface fluxes, perhaps the most prevalent method is eddy covariance. This is because eddy covariance is the most reliable and physically defensible means of collecting accurate direct measurements of turbulent exchange over both natural and man-made surfaces across the continuum of atmospheric conditions. Eddy covariance determines the flux in terms of the covariance between the vertical wind velocity and the scalar quantity, e.g. heat or water vapor, of interest. For example, the sensible and latent heat fluxes can be expressed as:


H = ρ cp (w′Θv′)‾  and  λE = ρ λv (w′q′)‾  (1)

where H is the sensible heat flux, ρ is the density of dry air, cp is the specific heat of air, w is the vertical wind velocity, Θv is the virtual potential temperature, λE is the latent heat flux, λv is the latent heat of vaporization, and q is the mixing ratio for water vapor. The primes (′) indicate an instantaneous deviation from the mean and the overbars ( ‾ ) indicate the temporal mean.
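As a sketch of these covariance calculations, the fluxes can be computed directly from the high-frequency time series. The constant values below (air density, specific heat, latent heat of vaporization) are typical near-surface values used for illustration, not site-specific measurements:

```python
import numpy as np

# Illustrative near-surface constants; in practice these vary with
# temperature, pressure, and humidity and should be computed per period.
RHO_AIR = 1.20   # density of dry air [kg m-3]
CP = 1005.0      # specific heat of air [J kg-1 K-1]
LV = 2.45e6      # latent heat of vaporization [J kg-1]

def eddy_fluxes(w, theta_v, q):
    """Sensible (H) and latent (LE) heat fluxes from high-frequency series.

    w       : vertical wind velocity [m s-1]
    theta_v : virtual potential temperature [K]
    q       : water vapor mixing ratio [kg kg-1]
    """
    w, theta_v, q = (np.asarray(a, dtype=float) for a in (w, theta_v, q))
    w_p = w - w.mean()  # instantaneous deviations (primes)
    H = RHO_AIR * CP * np.mean(w_p * (theta_v - theta_v.mean()))
    LE = RHO_AIR * LV * np.mean(w_p * (q - q.mean()))
    return H, LE
```

For a 30-minute period sampled at 10 Hz, each input series would contain 18,000 points; note that this sketch omits the coordinate rotation and other corrections described later.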

This approach is built on a number of critical assumptions. The first of these is the assumption that vertical movement, thus the transport of scalar quantities and their associated fluxes, is due only to turbulent motion. This ensures the vertical flux is fully described by the covariance between vertical wind velocity and the scalar of interest. The application of eddy covariance also assumes homogeneity, i.e. the underlying terrain is both level and uniform. This assumption is necessary so that the effects of horizontal transport can be neglected. It also implies there is no convergence/divergence of wind flow and the mean vertical transport of dry air is nil. Next, it assumes stationarity, i.e. the bulk environmental conditions (temperature, pressure, etc.) do not change over the measurement period. This assumption is required so that the temporal mean of measurements taken at a point can be taken as equivalent to the spatial mean. Finally, the eddy covariance method assumes that the measurements are collected in the constant-flux layer, the portion of the surface boundary layer where the turbulent fluxes are invariant with height. This assumption is necessary so that the fluxes measured at some height above the surface can be taken as equivalent to the flux at the surface.

It is important to recognize that these assumptions are never fully satisfied. Moreover, violations of the underlying assumptions are an important source of error and/or uncertainty that must be accounted for when post-processing and interpreting flux data. In some cases, these errors, along with those from other sources – for example, electronic noise (data spikes), sensor separation, and density effects – can be minimized by applying a suite of corrections during post-processing. However, this is not always the case. For example, while periods of insufficient turbulence can often be identified by evaluating the magnitude of the friction velocity, the resulting error cannot be readily corrected.

Like all measurement techniques, eddy covariance has its own unique set of strengths and limitations. These are the result of the theoretical and practical underpinnings of the measurement technique. The advantage of eddy covariance is that it can provide direct, independent, continuous, and non-destructive measurements of both energy and trace gas fluxes; additionally, measurements from the eddy covariance method are representative because they are integrated over a large source area (footprint). On the other hand, eddy covariance systems are costly and require both extensive post-processing to calculate the fluxes, and a strong understanding of both turbulent theory and biophysics to interpret the results.
Data collection
Instrumentation

In addition to the sonic anemometer and hygrometer needed to measure the turbulent fluxes of heat and moisture, an eddy covariance system requires a number of ancillary sensors to collect the data needed to describe the complete surface energy budget. The components of the eddy covariance system are listed in Table 1. Other sensors may be added to the system to address site-specific needs.



The configuration of the sensors on a tower (Figure 1) should ensure that potential sources of error are minimized. For example, the measurement height is selected to ensure the measurements are collected above the roughness sub-layer while remaining in the surface boundary layer. Similarly, the sensors are oriented to minimize the potential for flow distortion or other interference, e.g. shadowing, that could adversely impact the measurements.

Figure 1. The configuration of a typical micrometeorological tower. Note that there are additional sensors beyond the core components of an eddy covariance system.

Sonic anemometer

  • The sonic anemometer collects simultaneous high-frequency measurements of the orthogonal wind velocity components, which can also be used to determine a sonic temperature via the temperature dependence of the speed of sound.
  • The sensor is mounted facing into the direction of the prevailing wind in order to minimize potential errors due to flow distortions.
  • Similarly, the measurement height should be at least 1.5 times the vegetation height to ensure the measurements are collected above the roughness sub-layer.
  • As a result, measurement heights of 3 to 5 m are commonly used over agricultural surfaces. Finally, the sensor is typically operated at a sampling rate of 10 Hz or 20 Hz.
  • As an alternative to using the sonic temperature to determine the sensible heat flux, temperature measurements using a fine-wire thermocouple can also be used.
  • However, this is not recommended for long-term observations because the thermocouple, which is finer than human hair, can easily be broken by either birds or insects and during inclement weather. This not only increases the number of gaps in the data, it also increases the frequency and cost of maintenance.
Infrared gas analyzer

  • The infrared gas analyzer (IRGA) collects simultaneous high-frequency measurements of the water vapor and carbon dioxide densities.
  • These instruments also provide measurements of atmospheric pressure via an integrated barometer.
  • The measurements are synchronous in time with those made by the sonic anemometer.
  • The sensor is co-located with the sonic anemometer with a separation distance between 10 cm and 15 cm. The separation distance is a compromise to ensure that the IRGA and sonic anemometer are sampling the same turbulent eddies while minimizing the potential for flow distortion.
  • It is useful to note that several manufacturers have developed integrated sensors – for example, Campbell Scientific now produces the IRGASON system – that combine both the sonic anemometer and IRGA.
  • In addition to the IRGAs used to measure water vapor and carbon dioxide, an open-path sensor is available to measure methane.
  • For other trace gases such as nitrous oxide and ammonia, portable analyzers with sufficient dynamic response for eddy covariance are available. Because these are closed-path, benchtop instruments, air must be drawn to them from a sampling point near the sonic anemometer via tubing and pumps, at a flow rate high enough to maintain fully turbulent flow so that the concentration time series can be aligned with the sonic anemometer measurements.
  • Finally, other instrument types were historically used to measure the water vapor concentration with older eddy systems. Examples of these include the krypton and Lyman-alpha hygrometers.
Net radiometer

  • A four-component net radiometer is used both to measure the components of the radiation budget and to determine the net radiation:

  1. incident solar (shortwave) radiation,
  2. reflected solar radiation,
  3. incident long-wave radiation, and
  4. terrestrial long-wave radiation.

  • The net radiation is calculated from the individual components according to:

Rn = K↓ − K↑ + L↓ − L↑  (2)

where Rn is the net radiation, K↓ is the incident solar (shortwave) radiation, K↑ is the reflected solar (shortwave) radiation, L↓ is the incident long-wave radiation, and L↑ is the terrestrial long-wave radiation.

  • The sensor is typically mounted facing due south above the height of the sonic anemometer and IRGA. However, it can also be mounted on a separate frame located adjacent to the tower.
  • It is important to recognize that these sensors typically require a correction to the long-wave radiation in order to account for the effects of changes in the body temperature of the sensor.
  • Alternately, there are single-component net radiometers, e.g. NR-Lite [Kipp & Zonen], that measure only the difference between incoming and outgoing radiation and report only net radiation.
  • While these sensors are typically less costly, have lower power requirements, and use fewer inputs on the datalogger, they also tend to be more susceptible to error.
  • Similarly, individual pyranometers and pyrgeometers can be used to measure the individual components, but this tends to be both more costly and result in greater errors due to leveling and calibration compared to four-component net radiometers.
  • Finally, while silicon pyranometers and PAR sensors are available for specialized applications, these sensors are not appropriate for routine measurements of the components of the radiation budget.
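The net radiation calculation in eq. 2 is a simple signed sum of the four measured components; a minimal sketch:

```python
def net_radiation(k_down, k_up, l_down, l_up):
    """Net radiation (eq. 2): Rn = K(down) - K(up) + L(down) - L(up).

    All inputs and the result are in W m-2.
    """
    return k_down - k_up + l_down - l_up
```

For example, with K↓ = 800, K↑ = 160, L↓ = 350, and L↑ = 420 W m-2 (hypothetical midday values), the net radiation is 570 W m-2.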

Heat flux plate

  • Soil heat flux is a difficult measurement to make well.
  • Sensors must be small and buried at some depth below the surface to minimize their impact on both water flow and the heat flow they are trying to measure.
  • Since the true variable of interest is the heat flux at the surface, this means that the measured flux at the depth of installation must be calorimetrically corrected to account for flux divergence, described below (eq. 3).
  • Soil heat flow can often be quite variable spatially due to differences in exposure to solar radiation and differences in soil thermal properties.
  • Consequently, a minimum of three heat flux plates – and more if possible – should be used to collect spatially distributed measurements that can be averaged to determine the representative flux.
  • The plates are typically positioned adjacent to the tower and arranged in either fixed transects or positioned so as to sample the dominant surface conditions at the site.
  • The plates are commonly buried at a depth of 5 cm to 10 cm below the surface; for example, the plates are buried at a depth of 8 cm for the Lower Chesapeake Bay LTAR sites (Fig. 2).
  • Depths of less than 5 cm are not sufficient to ensure that the plates remain undisturbed while depths in excess of approximately 15 cm may hamper the calorimetric correction of the heat flux. (This correction is needed in order to account for the heat energy stored in the overlying soil layer.)
  • In addition, it is recommended that the wires for the heat flux plate, as well as the other subsurface sensors, be enclosed in conduit to prevent damage from mice or other animals.

Figure 2. Example of the placement of soil sensors. The configuration shown is used at the Lower Chesapeake Bay LTAR sites.

  • The correction to the measured soil heat flux uses near-surface soil moisture and temperature measurements to determine the amount of heat stored in the soil above the heat flux plates. The corrected soil heat flux is calculated as:
G0 = Gm + (cb ΔT Δz) / Δt  (3)

where G0 is the corrected flux, Gm is the measured flux, cb is the heat capacity of the soil, ΔT is the change in the mean soil temperature of the overlying layer, Δz is the measurement depth, and Δt is the length of the averaging period.

  • The heat capacity varies in time with changing moisture conditions but may be estimated using measurements of the soil moisture content. By neglecting the contribution of air, the heat capacity can be estimated as:
cb = ρb cm + χo ρo co + θ ρw cw

where ρb is the bulk density of the soil, cm is the heat capacity of soil minerals (8.40 × 10² J kg-1 K-1), χo is the volume fraction of organic matter, ρo is the density of organic matter (1.30 × 10³ kg m-3), co is the heat capacity of organic matter (1.92 × 10³ J kg-1 K-1), θ is the soil moisture content, ρw is the density of water (1.00 × 10³ kg m-3), and cw is the heat capacity of water (4.20 × 10³ J kg-1 K-1).

  • The bulk density of the soil has to be determined experimentally for each field site. While more rigorous estimates of the heat capacity of the soil are possible, they require additional information and, therefore, measurements of the thermal properties of the soil.
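The storage correction (eq. 3) and the heat capacity estimate described above can be sketched as follows. The constants are the values given in the text; the example inputs in the usage note are hypothetical:

```python
def corrected_soil_heat_flux(g_m, c_b, d_temp, d_z, d_t):
    """Eq. 3: add the heat stored in the soil layer above the plate.

    g_m    : measured flux at plate depth [W m-2]
    c_b    : volumetric heat capacity of the soil [J m-3 K-1]
    d_temp : change in mean temperature of the overlying layer [K]
    d_z    : plate depth [m]
    d_t    : length of the averaging period [s]
    """
    storage = c_b * d_temp * d_z / d_t  # heat stored above the plate [W m-2]
    return g_m + storage

def soil_heat_capacity(rho_b, chi_o, theta):
    """Volumetric soil heat capacity, neglecting the contribution of air.

    rho_b : soil bulk density [kg m-3]
    chi_o : volume fraction of organic matter [-]
    theta : volumetric soil moisture content [m3 m-3]
    """
    C_MINERAL = 8.40e2    # heat capacity of soil minerals [J kg-1 K-1]
    RHO_ORGANIC = 1.30e3  # density of organic matter [kg m-3]
    C_ORGANIC = 1.92e3    # heat capacity of organic matter [J kg-1 K-1]
    RHO_WATER = 1.00e3    # density of water [kg m-3]
    C_WATER = 4.20e3      # heat capacity of water [J kg-1 K-1]
    return (rho_b * C_MINERAL
            + chi_o * RHO_ORGANIC * C_ORGANIC
            + theta * RHO_WATER * C_WATER)
```

For instance, a plate at 8 cm depth with a 0.5 K warming of the overlying layer over a 30-minute period and cb ≈ 2.5 × 10⁶ J m-3 K-1 would add roughly 56 W m-2 of storage to the measured flux.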
Soil moisture probes

  • The most commonly used soil moisture probes employ time domain reflectometry (TDR) to determine the near-surface soil moisture.
  • The sensors are co-located with the heat flux plates but buried at an intermediate depth above the plates.
  • For example, in the case where the plates are at a depth of 8 cm, the soil moisture probes would be positioned at a depth of approximately 5 cm.

Note
Again, as with the heat flux plates, the minimum depth for the soil moisture probes is 5 cm.

  • These data are used to calculate the heat capacity of the soil, which is needed for the heat storage correction to the soil heat flux.
Soil temperature probe

  • Like soil moisture, the soil temperature measurements, which are co-located with the heat flux plates, are used primarily for calculating the heat storage correction for the soil heat flux.
  • These measurements can be collected using either individual thermocouples that measure the temperature at specific depths or paired thermocouples that provide an average temperature at multiple depths.
  • Regardless of the approach taken, one thermocouple is positioned near the surface and one is placed just above the heat flux plate.
  • There are also commercially available sensors that simultaneously measure both water content and temperature.
Temperature/Humidity

  • This sensor for measuring air temperature and humidity is mounted at a height similar to that of the fast-response sensors.
  • However, discretion regarding the orientation and measurement height is needed to ensure that the instrument does not block or otherwise interfere with the air flow passing the sonic anemometer and IRGA.
  • Also, this sensor must be mounted inside a radiation shield that shades the sensor from direct sunlight. Generally, the radiation shield is passively cooled by ambient winds, but aspirated radiation shields may also be used.
  • There is evidence that fan aspiration results in greater accuracy, particularly overnight and at sites where calm conditions persist; however, it also increases power consumption.
Infrared thermometer

  • The infrared thermometer is used to measure the surface temperature at the site.
  • Typically, it is mounted facing downward, either perpendicular to the surface or at a 45-degree angle, at approximately the same height as the net radiometer.
  • The measurement height and angle are at the discretion of the researcher and are selected to ensure that the sensor’s field of view encompasses a representative sample of the surface.
Rain gauge

  • The rain gauge is used to measure the amount and intensity of precipitation events.
  • The rain gauge can be mounted either on the tower – in this case its position must be selected to ensure that gauge neither interferes with the high-frequency measurements nor is interfered with by the tower, mounting booms, or other instruments – or positioned adjacent to the tower.
  • In the latter case, the placement of the rain gauge is selected so that collection is not hampered by vegetation or other sensors.
  • While they are commonly used for micrometeorological research, tipping bucket rain gauges may not be appropriate for all sites; they are not suitable for locations where snow is a significant form of precipitation.
  • At sites where snow represents a significant portion of the total precipitation, a snow gauge would be appropriate.
Site selection and deployment

Not surprisingly, site selection almost invariably requires some compromise between the scientific ideal and reality. There are a number of questions that need to be considered when determining where to deploy an eddy covariance system.
The first of these is whether the system can be deployed and maintained safely. Similarly, will the system remain stable and secure over time with changing environmental conditions? For example, if the eddy covariance system is being deployed in a riparian area, one important consideration would be where to place batteries, dataloggers, and the like to minimize the risk of flood damage.
Since the eddy covariance measurements are area integrated, it is also important to ensure there is sufficient fetch so that the measurements will be representative, i.e. the source area lies within the area of interest. This can be somewhat problematic because detailed information regarding the aerodynamic properties of the surface and the stability of the atmosphere is needed to calculate the flux footprint (see below).
However, a conservative estimate that is often used as a rule of thumb is that the upwind fetch should be 100 times the measurement height. An important related consideration is that the region immediately around the tower must be free from vegetation or other structures that can cause flow distortions or otherwise interfere with the measurements.
It is also important that the measurements are collected above the roughness sublayer within the constant flux layer.
The roughness sublayer is the region extending from the surface to approximately twice the height of the roughness elements – in this case, individual plants – where individual roughness elements strongly influence local turbulent flow.
Again, the depth of the roughness sublayer varies with changing environmental conditions and requires detailed information to estimate. However, as a commonly-used rule of thumb, the measurement height should be at least 2.5 times the canopy height. For example, the instruments should be deployed at a measurement height of 5 m to collect flux data over a field of corn that will grow to 2 m tall.
Maintenance

While the user’s manual should be consulted for the appropriate maintenance schedule of the differing sensors, some general guidelines are provided here.
There are a number of basic maintenance activities that should be carried out each time the site is visited.

  • First, current measurements should be reviewed on the datalogger to ensure that they fall within the expected range for the site. (To this end, it may be helpful to create a table of “normal values” for each measurement and store it in the logger box.)
  • Next, wires and cables should be inspected for damage; the connections to the datalogger should also be confirmed.
  • The battery voltage should also be tested.
  • The sonic anemometer and radiometers should be checked to ensure that they are level.
  • Also, when necessary, the transducers should be carefully cleaned and any obstructions (vegetation, spider webs, etc.) to air flow through the sensor removed.
  • At the same time, the optical window of both the gas analyzer and infrared thermometers should be cleaned along with the domes of the radiation sensors.
  • The temperature and humidity sensor, specifically the “humidity chip,” should also be examined for contaminants or damage.
  • Similarly, the radiation shield and rain gauge should be checked for foreign matter (dust, insects, etc.) and cleaned accordingly.
  • Finally, the solar panels should be cleaned.
In addition to site maintenance, there are several maintenance steps that should be completed prior to deployment and periodically thereafter.

Note
Although there is no field calibration for the sonic anemometers, they should be returned to the factory periodically (typically two to three years) for recalibration.

Also, at a minimum, the internal desiccant in the sensor should be checked and replaced on an annual basis. The gas analyzer will require calibration, including both zero and span, after replacing the desiccant. This recalibration can be conducted in the field or on a bench.

Note
Similarly, many radiometers use an internal desiccant that should be replaced annually. These sensors should also be recalibrated periodically; most manufacturers recommend recalibration every other year with continuous use.

Although the offset and gain of temperature and humidity sensors cannot be adjusted, they should be evaluated yearly using a dew point generator in order to quantify any measurement bias. (If multiple sensors are deployed at the same site, it is also useful to conduct an inter-comparison to identify any difference in the response of the sensors.)

Note
Finally, if a tipping bucket rain gauge is used, a field calibration should be conducted at least once a year.

Data processing and quality control
Calculating the surface fluxes is computationally intensive. In addition to calculating the covariances between the vertical wind velocity and heat, moisture, and carbon dioxide, it requires a number of ancillary quantities to be calculated and a suite of corrections to be applied. Similarly, the quality control steps require analyses of both the raw data and the calculated fluxes. As a result, post-processing and quality control are often the most time consuming and labor intensive steps.
Processing software

  • Although there are a number of commercial and community software packages available, e.g. EdiRE and EddyPro, the data can also be processed in Matlab, R, or other computing environments using “in-house” programs.
  • Both approaches have advantages and disadvantages. In the case of the former, the software packages tend to be relatively straightforward to use, have already been vetted to ensure their accuracy, and provide support in identifying and correcting issues.
  • Often, commercial software packages also facilitate ancillary analyses, such as footprint analysis, that are useful for quality control and other purposes.
  • On the other hand, user-generated code allows for greater control of the post-processing and quality control of the data.
Standard corrections

  • A suite of standard quality control and corrections should be applied when calculating the fluxes (Table 2).
  • These corrections account for well-known sources of error when measuring turbulent fluxes using the eddy covariance approach.
  • It is also important to recognize that there are corrections that have been proposed in the literature that were subsequently shown to be inappropriate.
  • For example, an additional coordinate rotation about the axis of the prevailing wind can be found in the literature, but it should not be applied.



Despiking

Although random noise (Fig. 3) has been an important source of error in high-frequency data historically, improvements in instruments have greatly reduced the occurrence of data spikes. Removing the data spikes is the first post-processing step.

Figure 3. An example of high-frequency data with the data spikes highlighted.

  • Numerous despiking methods are available including statistical filtering, phase space filtering, and wavelet-based approaches; however, recent studies suggest that statistical approaches tend to be the most efficient.
  • The statistical approach uses a moving window to identify data points that fall outside some threshold, which is typically on the order of 3 to 5 standard deviations outside the local mean; the algorithms also consider other factors such as the proximity of flagged data.
  • The number of data points flagged as spikes depends on several factors including

  1. the size of the moving window,
  2. the threshold used, and
  3. the assumed distribution of the raw data.

Note
These factors must be determined and reviewed periodically to ensure that too much data is not removed.

  • The appropriate settings depend largely on the application and on the number of data points identified as spikes.
  • For some applications, such as spectral or wavelet analysis, a continuous data set is needed.
  • But, when calculating the turbulent fluxes, the removal of a small number – less than 1% of data points – is unlikely to adversely impact the calculation.
  • If a large fraction of the data points are missing or identified as spikes, it suggests the instrument is malfunctioning or there is another issue. In this case, the data should not be used.
  • Furthermore, the underlying cause of the data spikes must be investigated further.
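The statistical despiking approach described above can be sketched as a simple windowed filter. Here non-overlapping windows stand in for a true moving window, and the window length and threshold are illustrative choices within the ranges discussed, not prescribed values:

```python
import numpy as np

def despike(x, window=300, n_sd=4.5):
    """Flag points more than n_sd local standard deviations from the
    local mean. Returns a boolean mask (True = spike) so flagged points
    can be reviewed rather than silently discarded.
    """
    x = np.asarray(x, dtype=float)
    spikes = np.zeros(x.size, dtype=bool)
    for start in range(0, x.size, window):
        seg = x[start:start + window]
        mu, sd = seg.mean(), seg.std()
        if sd > 0:  # skip constant segments
            spikes[start:start + window] = np.abs(seg - mu) > n_sd * sd
    return spikes
```

A production implementation would also consider the proximity of flagged points and iterate until no new spikes are found, as noted above.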
Sonic temperature

  • Sonic anemometers determine the wind velocity and sonic temperature, which approximates the virtual potential temperature, as a function of the speed of sound.
  • As a result, fluctuations in both the humidity and crosswind velocity that cause variations in the air density, thus the speed of sound, along the measurement path of the sensor can introduce errors in the sonic temperature measurement. This correction is needed to account for those errors.
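One common form of the humidity portion of this correction (after Schotanus et al., 1983) adjusts the sonic-temperature covariance using the water vapor covariance. This is a sketch only; the exact form, including any crosswind term, depends on the sensor geometry and the processing software:

```python
def sonic_temp_correction(w_ts_cov, w_q_cov, t_mean):
    """Humidity correction to the sonic-temperature covariance:

        w'T' = w'Ts' - 0.51 * T * w'q'

    w_ts_cov : covariance of w and sonic temperature [K m s-1]
    w_q_cov  : covariance of w and water vapor mixing ratio [m s-1]
    t_mean   : mean air temperature [K]
    """
    return w_ts_cov - 0.51 * t_mean * w_q_cov
```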
Coordinate rotation

  • The coordinate rotation adjusts the measurement coordinate system to align with the direction of mean wind flow.
  • In addition to adjusting for changing wind direction, the coordinate rotation also accounts for terrain slope and errors when leveling the sonic anemometer.
  • It both ensures that there is no cross-contamination among the wind velocity components and forces the mean longitudinal and vertical wind velocity to zero.
  • Currently there are two methods in common usage: 2-D Coordinate Rotation and Planar Fit Coordinate Rotation.
  • While both methods yield nearly identical results for the flat terrain typical of agricultural systems, planar fit tends to perform better over complex terrain, such as hill slopes or mountainous environments.
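The 2-D (double) rotation can be sketched as two successive rotations: the first yaws the coordinate system into the mean horizontal wind, and the second pitches it so the mean vertical velocity is zero:

```python
import numpy as np

def double_rotation(u, v, w):
    """2-D coordinate rotation: align x with the mean wind and force the
    mean lateral and vertical velocities to zero."""
    u, v, w = (np.asarray(a, dtype=float) for a in (u, v, w))
    # First rotation: yaw into the mean horizontal wind (mean v -> 0)
    alpha = np.arctan2(v.mean(), u.mean())
    u1 = u * np.cos(alpha) + v * np.sin(alpha)
    v1 = -u * np.sin(alpha) + v * np.cos(alpha)
    # Second rotation: pitch so the mean vertical velocity is zero
    beta = np.arctan2(w.mean(), u1.mean())
    u2 = u1 * np.cos(beta) + w * np.sin(beta)
    w2 = -u1 * np.sin(beta) + w * np.cos(beta)
    return u2, v1, w2
```

The planar fit method instead fits a mean streamline plane to many averaging periods; it is not shown here.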
Time delay

  • It is not uncommon for the measurements from different sensors, for example the sonic anemometer and IRGA, to be asynchronous.
  • For open-path systems, this offset is usually small, typically one to five measurements, which corresponds to a delay of up to 0.25 s.
  • For closed-path systems, the time delay can be much larger due to dwell time within the gas analyzer and associated tubing.
  • To correct for the time delay, the offset, if any, is determined via the empirical cross-correlation function; the maximum correlation will occur when the measurement time series are synchronized in time. The data is then shifted to account for the offset prior to further calculations.
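A minimal sketch of the cross-correlation approach: the lag that maximizes the correlation between the two series is taken as the offset. The search range is an illustrative choice appropriate for an open-path system:

```python
import numpy as np

def find_lag(w, c, max_lag=10):
    """Sample offset between vertical wind (w) and a scalar series (c),
    found as the lag of maximum absolute cross-correlation."""
    w = np.asarray(w, dtype=float) - np.mean(w)
    c = np.asarray(c, dtype=float) - np.mean(c)

    def corr(k):
        # Correlation of the overlapping portions at lag k
        if k >= 0:
            a, b = w[:len(w) - k], c[k:]
        else:
            a, b = w[-k:], c[:len(c) + k]
        return np.corrcoef(a, b)[0, 1]

    return max(range(-max_lag, max_lag + 1), key=lambda k: abs(corr(k)))
```

For a closed-path system the search range would need to be widened to cover the dwell time in the tubing and analyzer.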
Frequency response

  • The frequency response correction accounts for differences in the ability of sensors to respond to eddies across the continuum of frequencies.
  • The correction adjusts for a family of related factors ranging from slow sensor response that is insufficient to capture the effects of very small high-frequency eddies to the effects of sensor separation and path averaging.
Webb-Pearman-Leuning

  • The Webb-Pearman-Leuning (WPL) correction is applied to the water vapor, carbon dioxide, and other trace gas fluxes; it compensates for density effects caused by fluctuations in temperature and water vapor.
  • For example, consider the case where cool dry air lies over a relatively warm moist surface. In this situation the updrafts are slightly warmer and moister than the downdrafts. This results in instantaneous changes in air density that depend on the direction of the vertical motion. The change in density creates the appearance of a flux even in the absence of turbulent transport.
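The standard WPL expression for an open-path CO2 flux can be sketched as follows; the inputs are the measured covariances and mean densities, and the example values in the test of sign behavior are hypothetical:

```python
def wpl_co2_flux(w_rc_cov, w_rv_cov, w_t_cov, rho_c, rho_v, rho_d, t_mean):
    """Standard WPL density correction for an open-path CO2 flux:

        Fc = w'rc' + mu*(rc/rd)*w'rv' + (1 + mu*sigma)*(rc/T)*w'T'

    where mu = Md/Mv (molar mass ratio of dry air to water vapor) and
    sigma is the ratio of mean vapor density to mean dry-air density.

    w_rc_cov : covariance of w and CO2 density [kg m-2 s-1]
    w_rv_cov : covariance of w and water vapor density [kg m-2 s-1]
    w_t_cov  : covariance of w and temperature [K m s-1]
    rho_c, rho_v, rho_d : mean CO2, vapor, and dry-air densities [kg m-3]
    t_mean   : mean air temperature [K]
    """
    MU = 28.97 / 18.02  # ~1.61
    sigma = rho_v / rho_d
    return (w_rc_cov
            + MU * (rho_c / rho_d) * w_rv_cov
            + (1.0 + MU * sigma) * (rho_c / t_mean) * w_t_cov)
```

During daytime uptake, the temperature and vapor terms are typically positive, so the corrected (downward, negative) CO2 flux is smaller in magnitude than the raw covariance.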
Quality control

Visual Inspection

  • The first quality control step focuses on identifying periods when environmental conditions hamper data collection, instrument malfunctions adversely impact data quality, or mathematical artifacts yield non-physical results during post-processing.
  • Since the presence of dew, fog, or rainfall can adversely impact both wind speed and trace gas measurements, measurements made under these conditions should be omitted.
  • Similarly, flow distortion can occur when the wind comes from behind the tower – a common rule of thumb is to flag measurements when the wind passes through a 45° arc centered behind the high-frequency instruments (Fig. 4) – and such measurements should be used only with caution.
  • Finally, low wind speeds (less than 1 m s-1) can be indicative of insufficient turbulence. This can be particularly problematic during the overnight period.
  • Typically, periods of insufficient turbulent mixing are identified by the friction velocity; periods with a friction velocity below a threshold value – most commonly between 0.075 m s-1 and 0.20 m s-1 – are flagged.
Figure 4. The range of wind directions behind the sensor system that can result in flow distortion.
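The screening rules above (wind-direction sector, low wind speed, friction-velocity threshold) can be combined in a simple flagging routine. The default thresholds below are illustrative values drawn from the ranges given in the text, and the tower direction is a hypothetical site-specific input:

```python
import numpy as np

def qc_flags(wind_dir, wind_speed, ustar, tower_dir=180.0,
             sector=45.0, min_speed=1.0, ustar_min=0.1):
    """Return True for periods to flag: wind through the distortion
    sector behind the sensors, or insufficient turbulent mixing.

    wind_dir   : mean wind direction [deg]
    wind_speed : mean wind speed [m s-1]
    ustar      : friction velocity [m s-1]
    tower_dir  : direction of the tower relative to the sensors [deg]
    """
    wind_dir = np.asarray(wind_dir, dtype=float)
    # Smallest angular distance to the direction behind the sensors
    delta = np.abs((wind_dir - tower_dir + 180.0) % 360.0 - 180.0)
    distorted = delta <= sector / 2.0
    calm = (np.asarray(wind_speed, dtype=float) < min_speed) | \
           (np.asarray(ustar, dtype=float) < ustar_min)
    return distorted | calm
```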

Averaging period

  • The measurement averaging period is standardized at 30 minutes.
  • This averaging period is broadly used because in most cases it is sufficiently long to capture all of the frequencies that contribute to turbulent transport while minimizing the potential of violating the assumption of stationarity.
  • Although a 30-minute averaging period is appropriate in most cases, it is important to confirm that the averaging length is sufficient to capture the low-frequency contribution to the turbulent flux. This is accomplished via Ogive analysis.
  • The Ogive analysis is conducted by numerically integrating the co-spectrum of vertical wind velocity and some scalar quantity, e.g. temperature or water vapor, from the frequency of interest to the maximum frequency.
  • The resulting Ogive function has a sigmoidal form (Fig. 5).
  • The inverse of the frequency that demarcates the beginning of the plateau is the minimum averaging period that captures all frequencies contributing to the flux.
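Assuming access to the raw high-frequency series, the Ogive analysis above can be sketched as follows. The function (illustrative, not taken from a specific package) estimates the one-sided w-scalar cospectrum with an FFT and integrates it from each frequency up to the Nyquist frequency; the value at the lowest frequency recovers the total covariance.

```python
import numpy as np

def ogive(w, s, fs=10.0):
    """Ogive of the w-s cospectrum: the cumulative integral of the
    cospectrum from each frequency up to the Nyquist frequency.

    w, s : vertical wind and scalar series sampled at fs (Hz)
    Returns (frequencies, ogive values); ogive[0] equals cov(w, s).
    """
    w = np.asarray(w, float) - np.mean(w)
    s = np.asarray(s, float) - np.mean(s)
    n = len(w)

    # One-sided cospectral density from the real part of the cross-spectrum.
    W = np.fft.rfft(w)
    S = np.fft.rfft(s)
    co = np.real(W * np.conj(S)) / (n * fs)
    co[1:-1] *= 2.0  # fold negative frequencies into the one-sided estimate

    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    df = freqs[1] - freqs[0]

    # Integrate from the highest frequency down to each frequency f0.
    og = np.cumsum((co * df)[::-1])[::-1]
    return freqs, og
```

Plotting `og` against `freqs` on a logarithmic axis yields the sigmoidal curve of Fig. 5; the frequency where the curve plateaus sets the minimum averaging period.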

Figure 5. An idealized Ogive function showing a plateau beginning at a frequency of 0.002 Hz. This indicates a minimum averaging length of 8.33 minutes.

Stationarity

  • This test confirms that the assumption of stationarity is appropriate.
  • Stationarity is tested by subdividing the total measurement period into approximately 5 to 10 sub-periods and then comparing the flux over the whole period with the mean flux of the sub-periods.
  • If the stationarity assumption is valid, i.e. the mean environmental conditions are constant, both flux estimates should be identical.
  • In practice, if the values agree to within 25% to 30%, the assumption of stationarity is considered valid.
  • Like the Ogive test, stationarity testing is typically conducted periodically on a random sampling of the flux data. It should also be run if there is reason to believe that the stationarity assumption is false.
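The comparison above can be sketched in a few lines in the style of Foken and Wichura (1996). The sub-period count and 30% tolerance below are the example values from the text, and the function names are illustrative.

```python
import numpy as np

def stationarity_test(w, s, n_sub=6, tol=0.30):
    """Compare the covariance over the full averaging period with the
    mean of the sub-period covariances. Returns (relative difference,
    True if the stationarity assumption is considered valid)."""
    w = np.asarray(w, float)
    s = np.asarray(s, float)

    # Covariance over the whole period.
    cov_full = np.mean((w - w.mean()) * (s - s.mean()))

    # Mean covariance of the sub-periods.
    subs_w = np.array_split(w, n_sub)
    subs_s = np.array_split(s, n_sub)
    cov_subs = [np.mean((a - a.mean()) * (b - b.mean()))
                for a, b in zip(subs_w, subs_s)]
    cov_mean = np.mean(cov_subs)

    rel_diff = abs((cov_mean - cov_full) / cov_full)
    return rel_diff, rel_diff <= tol
```

For a 30-minute record at 10 Hz, six sub-periods correspond to 5-minute segments, within the 5-to-10 sub-period range suggested above.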
Footprint

  • The flux footprint describes the “field of view” or source area of the flux measurement.
  • More specifically, it describes the relative contribution of upwind locations according to a probability density function.
  • There are numerous approaches of differing complexity for determining the footprint.
  • The underlying modeling framework and assumptions are unique for each approach.
  • For example, the largest family of models, analytical models, determines the footprint by integrating the advection-diffusion equation, while other models use large eddy simulations to describe the atmospheric turbulence and trace the path of measured scalar quantities.
  • Because the orientation and extent of the footprint varies with changing environmental conditions, it is useful to periodically evaluate the footprint to ensure that the measurements are capturing the flux from the area of interest.
  • Since the source area of the measured flux is upwind of the tower, the orientation of the footprint varies with wind direction.
  • The extent of the footprint increases with increasing wind speed and atmospheric stability, and with decreasing surface roughness.
  • As a result, the footprint during the growing season can differ significantly from that of the winter months; similarly flux footprints can vary dramatically diurnally.
Closure

  • The closure of the surface energy budget is commonly used as a quality control measure.
  • Closure is determined based on the well-known simplified energy balance relationship according to:

C = (H + λE) / (Rn − G)

where C is closure, H is the sensible heat flux, λE is the latent heat flux, Rn is the net radiation, and G is the soil heat flux. There are, however, a number of important caveats.
  • Typically, the closure ranges between 75% and 85%. This is the well-known “closure problem”.
  • While its cause is not fully understood, there are likely a number of contributing factors.
  • For example, this relationship neglects a number of minor terms such as the energy consumed by photosynthesis and heat storage within the biomass.
  • There is also a spatial mismatch in the size and location of the source area contributing to the various measurements.
  • Finally, recent studies have shown that the wind direction or “angle of attack” may play a role when using certain types of sonic anemometers.
  • As a result, interpreting closure over short periods, such as a single hour or day, is not recommended.
  • Over longer periods, such as a growing season, a closure of less than 60% to 70% suggests there may be an issue that requires further investigation, but it is not a definitive test of data quality.
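Following the recommendation above to evaluate closure over long periods, one simple approach is to ratio the summed turbulent fluxes against the summed available energy across all valid half-hour records, skipping records flagged with the -9999 missing-value code used in the data files. The function below is an illustrative sketch under those assumptions.

```python
import numpy as np

def energy_balance_closure(H, LE, Rn, G):
    """Closure C = sum(H + LE) / sum(Rn - G) over all records with
    complete data; -9999 marks missing values per the file format."""
    H, LE, Rn, G = (np.asarray(x, float) for x in (H, LE, Rn, G))

    # Keep only records where all four terms are present.
    valid = np.all(np.stack([H, LE, Rn, G]) != -9999.0, axis=0)

    turbulent = np.sum(H[valid] + LE[valid])   # sensible + latent heat
    available = np.sum(Rn[valid] - G[valid])   # net radiation - soil heat
    return turbulent / available
```

Summing before dividing weights each record by its available energy, which is less sensitive to noisy nighttime records than averaging half-hourly closure ratios.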
Data file formats and metadata
Quantities

  • Because they are widely recognized and understood throughout the scientific community, the nomenclature, formatting, and metadata used here largely mirror those used by the AmeriFlux network.
  • For consistency and to facilitate multi-site research, there are a number of core measurements that should be collected at all LTAR sites.
  • Minimally, the measured and derived quantities that should be included in the data files are summarized below (Table 3).




  • If there are additional site-specific measurements or derived quantities, including gap filled data, these can be included as well.
  • They should be appended to the data following the core measurements and identified using the same nomenclature employed by the AmeriFlux network.
  • A complete description of the data should also be given in the metadata file.
File formatting

The data should be submitted as a comma-separated variable (.CSV) file. The file header should contain the following:

Row 1: Location Name and Year
Row 2: Name and Contact Information of responsible scientist
Row 3: Filename and Creation Date
Row 4: Field Names (See Table 3)
Row 5: Units (See Table 3)

Data should begin with Row 6. Data points that are missing or were removed during the quality control process should be indicated by -9999. The data should not be gap filled. However, as noted above, gap filled data can be included at the discretion of the responsible scientists.
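The five header rows described above can be produced with a short helper such as the sketch below. The field names and units shown in the usage example are illustrative AmeriFlux-style entries, not the complete list from Table 3, and the function name is an assumption.

```python
import csv
import io

def flux_csv_text(site_year, contact, file_info, fields, units, rows):
    """Render a flux data file with the five header rows described
    above followed by the data. Missing values should already be
    encoded as -9999 in `rows`."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    # Rows 1-3 are single-cell header rows; rows 4-5 are per-column.
    for header in ([site_year], [contact], [file_info], fields, units):
        writer.writerow(header)
    writer.writerows(rows)  # data begins with row 6
    return buf.getvalue()
```

A minimal usage example, with hypothetical site and contact details:

```python
text = flux_csv_text(
    "Beltsville, MD -- 2024",
    "Jane Doe, jane.doe@example.gov",
    "OPE3_2024.csv, created 2025-01-15",
    ["TIMESTAMP_START", "FC", "LE", "H"],
    ["YYYYMMDDHHMM", "umolCO2 m-2 s-1", "W m-2", "W m-2"],
    [["202401010000", -9999, 25.1, 10.3]],
)
```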
Minimum metadata requirements

An external metadata file associated with the measurement site should include the following:

• The name and location of the site. The location should be provided both as a physical location (e.g., Water Conservation Rd, Beltsville, MD) and as latitude, longitude, and elevation.
• The contact name and information of the responsible scientist(s).
• The type and model of the sensors used at the site. This should include a list indicating the date when sensors were replaced or repaired.
• An overview of the post-processing and quality control steps.
• A listing of any known data issues.
High-frequency data

  • The raw high-frequency data should be retained and archived so that it is available on request for those interested in conducting analyses of turbulent transport and exchange.
  • This also facilitates reprocessing of the data should it become necessary.
  • The data should be stored as comma-delimited (.csv) files and should be accompanied by a metadata file containing the same information as described above.
Protocol references
Arya SP. 2001: Introduction to Micrometeorology. Academic Press, London.

Aubinet M. 2012: Eddy Covariance: A Practical Guide to Measurement and Data Analysis. Springer-Verlag, Berlin.

Brutsaert W. 1982: Evaporation into the Atmosphere: Theory, History and Applications. Springer Science, Dordrecht.

Burba G, Anderson D. 2010: A Brief Practical Guide to Eddy Covariance Flux Measurements. Li-Cor Biosciences, Lincoln, Nebraska.

Falge E., et al. 2001: Gap filling strategies for defensible annual sums of net ecosystem exchange. Agric. Forest Meteorol. 107, 43–69.

Falge E., et al. 2001: Gap filling strategies for long term energy flux data sets. Agric. Forest Meteorol. 107, 71-77.

Finnigan JJ. 2000: Turbulence in plant canopies. Ann. Rev. Fluid Mech., 32(1), 519‐571.

Finnigan JJ, et al. 2003: A re-evaluation of long-term flux measurement techniques Part I: Averaging and coordinate rotation, Bound.-Layer Meteorol., 107, 1-48.

Foken T, Wichura B. 1996: Tools for quality assessment of surface-based flux measurements. Agric. Forest Meteorol., 78, 83-105.

Foken T. 2008: The energy balance closure problem. Ecol. Appl. 18, 1351-1367.

Foken T. 2008: Micrometeorology. Springer-Verlag, Berlin.

Hillel D. 1998: Environmental Soil Physics. Harcourt, Brace, and Company, San Diego.

Horst TW, Weil JC. 1992: Footprint estimation for scalar flux measurements in the atmospheric surface-layer. Bound.-Layer Meteorol. 59, 279–296.

Horst TW, Weil JC. 1994: How far is far enough? The fetch requirement for micrometeorological measurement of surface fluxes, J. Atmos. Oceanic Tech.11, 1018-1025.

Hsieh CL, et al. 2000: An approximate analytical model for footprint estimation of scalar fluxes in thermally stratified atmospheric flows. Adv. Water Resour. 23, 765–772.

Kaimal JC, Finnigan JJ. 1994: Atmospheric Boundary Layer Flows: Their Structure and Measurement. Oxford University Press, Oxford.

Leclerc M, Foken T. 2014: Footprints in Micrometeorology and Ecology. Springer-Verlag, Berlin.

Lee X, Massman W, Law B. 2004: Handbook of Micrometeorology: A Guide for Surface Flux Measurement and Analysis. Kluwer Academic Press, Dordrecht.

Leuning R, et al. 2012: Reflections on the surface energy imbalance problem, Agric. Forest Meteorol., 156, 65-74.

Liu HP, et al. 2001: New equations for sonic temperature variance and buoyancy heat flux with an omnidirectional sonic anemometer, Bound.-Layer Meteorol. 100, 459–468.

Massman W. 2000: A simple method for estimating frequency response corrections for eddy covariance systems, Agric. Forest Meteorol., 104, 185-198.

Massman W, Lee X. 2002: Eddy covariance flux corrections and uncertainties in long-term studies of carbon and energy exchanges, Agric. Forest Meteorol., 113, 121-144.

Moore CJ. 1986: Frequency response corrections for eddy correlation systems, Bound.-Layer Meteorol., 37, 17-35.

Panofsky H, Dutton JA. 1984: Atmospheric Turbulence: Models and Methods for Engineering Applications, Wiley InterScience, New York.

Raupach MR, Finnigan JJ. 1997: The influence of topography on meteorological variables and surface-atmosphere interactions, J. Hydrol., 190, 182-213.

Schmid HP. 2002: Footprint modelling for vegetation atmosphere exchange studies: a review and perspective. Agric. Forest Meteorol., 113, 159–183.

Schotanus P, et al. 1983: Temperature measurements with a sonic anemometer and its application to heat and moisture fluxes. Bound.-Layer Meteorol., 26, 81-93.

Starkenburg D, et al. 2016: Assessment of despiking methods for turbulence data in micrometeorology. J. Atmos. Oceanic Tech., 33, 2001-2013.

Stull RB. 1988: An Introduction to Boundary-Layer Meteorology, Kluwer Academic Press, Dordrecht.

Stull RB. 1999: Meteorology for Scientists and Engineers. Brooks Cole Publishing, Pacific Grove, California.

Twine TE, et al. 2000: Correcting eddy-covariance flux underestimates over a grassland, Agric. Forest Meteorol., 103, 279-300.

Webb EK, Pearman GI, Leuning R. 1980: Correction of flux measurements for density effects due to heat and water-vapor transfer, Q. J. R. Meteorol. Soc., 106, 85-100.
Acknowledgements
The author thanks Claire Phillips, Heping Liu, and Rosvel Bracho for review of this protocol.

This research is a contribution from the Long-Term Agroecosystem Research (LTAR) network. LTAR is supported by the United States Department of Agriculture. The use of trade, firm, or corporation names in this publication is for the information and convenience of the reader. Such use does not constitute an official endorsement or approval by the United States Department of Agriculture or the Agricultural Research Service of any product or service to the exclusion of others that may be suitable. USDA is an equal opportunity provider and employer.