News and tutorials from the National Center for Atmospheric Research's Research Data Archive

## 11 March 2016

Since July 8, 2015, the RDA has been archiving 0.25 degree GDAS/FNL analysis and forecast (hours 3, 6, and 9) global gridded data in GRIB2 format.

We call it ds083.3: NCEP GDAS/FNL 0.25 Degree Global Tropospheric Analyses and Forecast Grids.

I hope you call it useful. ;-)

The analysis files are named gdas1.fnl0p25.YYYYMMDDHH.f00.grib2.

You may be able to find and download FNL at this resolution from NCEI beginning in mid-January 2015, but the RDA archive begins in July 2015 with gdas1.fnl0p25.2015070800.f00.grib2.

Forecast grids for 3, 6, and 9 hours after the analysis time are named gdas1.fnl0p25.YYYYMMDDHH.fFH.grib2, where the forecast hour FH is 03, 06, or 09.

The wrfhelp@ucar.edu team expects most WRF users to see forecast improvement from 0.25 degree input instead of 1.0 degree. You can use a series of f00 analysis files to provide initial conditions and lateral boundary conditions for your WRF run, and you can append forecast files to extend your run past the time of the latest available FNL analysis file.
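As a sketch of how the naming convention and the 6-hourly analysis cadence compose, here is a small, hypothetical Python helper (not an RDA tool) that lists the ds083.3 files needed to drive a WRF run: f00 analyses every 6 hours through the last available analysis, then f03/f06/f09 forecast files from the final cycle to extend beyond it. The filename pattern is the one described above; the dates are just an example.

```python
from datetime import datetime, timedelta

def fnl_filenames(start, end, extend_hours=0):
    """List gdas1.fnl0p25 GRIB2 filenames covering start..end (UTC).

    f00 analyses are produced every 6 hours (00, 06, 12, 18Z); forecast
    files f03/f06/f09 from the final cycle can extend the run beyond it.
    """
    names = []
    t = start
    while t <= end:
        names.append(t.strftime("gdas1.fnl0p25.%Y%m%d%H.f00.grib2"))
        t += timedelta(hours=6)
    last_cycle = t - timedelta(hours=6)  # the final analysis actually appended
    for fh in (3, 6, 9):
        if fh <= extend_hours:
            names.append(last_cycle.strftime(f"gdas1.fnl0p25.%Y%m%d%H.f{fh:02d}.grib2"))
    return names

# Example: a one-day run starting at the first archived analysis,
# extended 6 hours past the last analysis.
files = fnl_filenames(datetime(2015, 7, 8, 0), datetime(2015, 7, 9, 0), extend_hours=6)
print(files[0])   # gdas1.fnl0p25.2015070800.f00.grib2
print(files[-1])  # gdas1.fnl0p25.2015070900.f06.grib2
```

Feed the resulting list to your download script or to WPS/ungrib in time order.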

Each 0.25 degree GDAS/FNL analysis file is ~180 MB, while the corresponding 1.0 degree file is ~17 MB.

Also note that the 0.25 degree forecast grids are ~205 MB because they include water (APCP, ACPCP, PRATE, ...) and cloud variables that are not in the analysis grids. This is why some people call the analysis grids "dry".

Please order subsets of the global grids to make efficient use of shared bandwidth.

Update:

We will not be back-filling the 0.25 degree GDAS/FNL to Jan 2015. If you need FNL files we don't offer, order them directly from NOAA.

## 07 March 2016

### What a difference 6 hours makes

I illustrated NCEP Model Performance with verification statistics for the 18Z GFS forecast cycle because 18Z corresponds to noon CST, near the peak daily temperatures over the continental United States (CONUS). However, that choice may have been misleading, or at least only told part of the story.

Let's look at the statistics for 18Z again. Note the data scales.

18Z analysis cycle GFS temperature bias for forecast hours 0-168, compared to conventional upper air soundings. Operational GFS on the left and experimental GFS on the right. Scale [-1.0, 1.0] C.

Now let's look at the same visualizations for the 12Z analysis/forecast cycle.

12Z analysis cycle GFS temperature bias for forecast hours 0-168, compared to conventional upper air soundings. Operational GFS on the left and experimental GFS on the right. Scale [-0.5, 0.5] C.

Notice that the temperature bias scale is reduced from [-1.0, 1.0] C to [-0.5, 0.5] C? The bias reduction is real, though less dramatic than a factor of two.

In the next graph, the scale doesn't change, but the peak of the bias at the tropopause is reduced from ~4.0 C to ~2.5 C.

Bias between upper air stations and the 48-hour GFS forecast for the 18Z cycle. Bias statistics computed over ~2,750 observations.

Bias between upper air stations and the 48-hour GFS forecast for the 12Z cycle. Bias statistics computed over ~135,000 observations.

What causes that difference? The graph on the right gives a clue. The horizontal scale gives the total number of observations used to compute the statistics. At 18Z there were ~2,750; at 12Z there were ~135,000. More radiosondes are released near 12Z than at any other time of day, with 0Z a close second.
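A rough way to see why the 12Z statistics are steadier: the sampling noise in a mean bias shrinks with roughly the square root of the number of observations. This back-of-the-envelope Python sketch uses the observation counts from the text; the 1.5 C per-observation spread is an invented illustration, not a real sounding statistic.

```python
import math

sigma = 1.5  # hypothetical per-observation spread of (forecast - obs), in C

for cycle, n_obs in (("18Z", 2_750), ("12Z", 135_000)):
    stderr = sigma / math.sqrt(n_obs)  # standard error of the mean bias
    print(f"{cycle}: {n_obs:>7} obs -> mean-bias sampling noise ~ {stderr:.3f} C")
```

With ~50 times as many observations, the 12Z mean bias is about 7 times less noisy, so its smaller plotted biases are also more trustworthy.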

### Links:

- NCEP EMC Mesoscale Verification Statistics (includes GFS)
- NCEP EMC GFS Verification Statistics (select among 00, 06, 12, 18Z forecast cycles)
- What's the difference between GFS and FNL?
- Analysis, forecast, reanalysis--what's the difference?
- NCEP Model Performance

Labels:
Analysis,
Data Science,
Forecast,
Reanalysis

## 04 March 2016

### NCEP Model Performance

Users of gridded analysis or forecast data sets may wonder how well the analyses reflect the measurements that were assimilated into them, and how the forecasts compare to reality.

Welcome to the world of verification statistics.

If you took all the radiosondes that were ingested into an analysis like GDAS/FNL, you could compute the mean difference (bias) and the RMSE between the measurements and the analysis for the exact same time. The overall goal is to minimize the biases globally (while allowing small biases at individual stations).

This NCEP EMC site lets you view useful statistics for each analysis cycle (00Z, 06Z, 12Z, 18Z). For instance, if you compare the analyses and forecasts from the 18Z analysis/forecast cycle against all upper air measurements, you see a slight warm bias in the troposphere and a slight cool bias in the stratosphere at forecast hour 0 (analysis time).

18Z analysis cycle GFS temperature bias for forecast hours 0-168, compared to conventional upper air soundings. Operational GFS on the left and experimental GFS on the right.

Notice that the fit is not perfect. The operational GFS model is shown on the left; an experimental version (GFSX) is shown on the right. GFSX appears to be a slight improvement.

Let's look at the Root Mean Squared Error (RMSE). Are you amazed that we can forecast the global temperature to within 2.5 degrees 5 days ahead? Or are you young enough to take that for granted?

18Z analysis cycle GFS temperature RMSE for forecast hours 0-168, compared to conventional upper air soundings. The RMSE of GFSX (right) is smaller than that of GFS.

Again, GFSX appears to be an improvement over the current operational GFS model. After monitoring both, NCEP EMC scientists may decide to implement GFSX as the new operational model, GFS.

Tweaks like this are common, as I explained in Analysis, forecast, reanalysis--what's the difference? If consistent processing is important for your work, always use a reanalysis.

Here's a vertical cross-section of the same verification data at forecast hour 48. The web site does not offer a 0 hour graph, but the first plot shows that the 48-hour forecast errors are only slightly larger than the analysis errors.

Bias between upper air stations and the 48-hour GFS forecast.

The NCEP EMC Mesoscale Verification site offers further insight into GFS vs GDAS/FNL. If you read What's the difference between GFS and FNL?, you may recall that the GDAS/FNL analysis takes place several hours later than GFS so that it can incorporate more observations. By the time GDAS/FNL is ready, the 12-hour GFS forecast valid at the same time should also be ready.

The 500 mb height, aka the half-height of the atmosphere, gives you an indication of temperatures and of major atmospheric features such as highs and lows. GDAS/FNL shows slightly sharper features than GFS, but notice how well they agree with each other overall.

I hope, in studying these statistics, you agree with me that NWP is a major triumph of human ingenuity and cooperation.
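The bias and RMSE described above are straightforward to compute once you have matched observation/analysis pairs. A minimal Python sketch, using invented temperature pairs rather than real sounding data:

```python
import math

# Hypothetical matched pairs: (observed, analyzed) temperature in C
pairs = [(15.2, 15.0), (8.7, 9.1), (-3.4, -3.1), (21.0, 20.6)]

# Sign convention: analysis minus observation, so positive = warm bias
diffs = [analysis - obs for obs, analysis in pairs]
bias = sum(diffs) / len(diffs)                            # mean difference
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # root mean squared error

print(f"bias = {bias:+.3f} C, RMSE = {rmse:.3f} C")
```

Note that a near-zero bias can hide large scatter; that is why verification sites report both statistics.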

### Links:

- NCEP EMC Mesoscale Verification Statistics (includes GFS)
- NCEP EMC GFS Verification Statistics (select among 00, 06, 12, 18Z forecast cycles)
- What's the difference between GFS and FNL?
- Analysis, forecast, reanalysis--what's the difference?
- What a difference 6 hours makes

Labels:
Analysis,
Data Science,
Forecast,
Reanalysis
