One of the things I enjoy doing is comparing dynamic information equilibrium model (DIEM) forecasts to forecasts from other people or institutions. In part that's because they're often funny, but more generally, head-to-head model competition is one of the better ways to get a handle on model usefulness (or to flat out reject models). On Twitter, I came across (via here) this December 2014 white paper out of the Fed [pdf]. In it, they had a fascinating graph of the unemployment rate showing the "trend" along with a forecast of the data from 2014 out to the 2020s:
I downloaded the relevant archival unemployment rate data from ALFRED (the October 2014 vintage, since the last point was indicated as September 2014) and ran the DIEM. What's interesting (to me, at least) is that this point in the time series was in the middle of the 2014 (positive) shock to employment. The shock was detectable (i.e. multiple data points had fallen outside the confidence limits of the counterfactual no-shock model), but the parameters hadn't fully converged, resulting in some additional uncertainty (similar to this discussion here). Comparing the DIEM forecast to the Fed model, it's pretty obvious the latter was off the mark:
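As a rough illustration of what "parameters hadn't fully converged" means in practice, here's a minimal sketch of a DIEM-style fit. This assumes the usual DIEM functional form for the unemployment rate (log u(t) following a constant dynamic-equilibrium slope plus a logistic shock); the synthetic data, parameter values, and fitting approach are all my own illustrative choices, not the actual ALFRED vintage or the code behind the figures.

```python
# Hypothetical sketch of a DIEM-style fit: log u(t) = dynamic equilibrium
# slope plus a single logistic shock. Not the author's actual code.
import numpy as np
from scipy.optimize import curve_fit

def diem_log_u(t, alpha, c, w, t0, s):
    """Log unemployment rate: linear dynamic equilibrium + one logistic shock."""
    return alpha * t + c + w / (1.0 + np.exp(-(t - t0) / s))

# Synthetic monthly data standing in for an archival vintage series.
rng = np.random.default_rng(0)
t = np.arange(120) / 12.0  # ten years of monthly points, in years
true = dict(alpha=-0.09, c=np.log(8.0), w=0.3, t0=5.0, s=0.4)
log_u = diem_log_u(t, **true) + rng.normal(0.0, 0.005, t.size)

# Fit the model; pcov gives the parameter covariance. Mid-shock (when the
# data only covers part of the logistic transition), the shock parameters
# (w, t0, s) carry much larger uncertainty than the equilibrium slope alpha.
popt, pcov = curve_fit(diem_log_u, t, log_u, p0=[-0.05, 2.0, 0.2, 4.0, 0.5])
perr = np.sqrt(np.diag(pcov))
```

Truncating the data partway through the shock and re-fitting shows the widening confidence limits directly, which is the situation the September 2014 vintage was in.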
While I was at it, I also compared the DIEM forecast to a couple of additional Federal Reserve forecasts available at the time that I have been tracking for the past couple of years: the FOMC September meeting projections [pdf] and the October FRBSF Fed Views forecast. These forecasts weren't quite as bad, but then they only looked at a shorter time scale:
In the paragraph below the figure, it explicitly says this is a forecast of the data, not a forecast of the "trend":
The trend we used for the factor model (the solid red line) is computed from a hybrid series that includes the additional 8 years of history plus the 8 years of simulated future data.