Forecasting with Many Models: Model Confidence Sets and Forecast Combination
A longstanding finding in the forecasting literature is that averaging forecasts from different models often improves upon forecasts based on a single model, with equal-weight averaging working particularly well. This paper analyzes the effects of trimming the set of models prior to averaging. We compare different trimming schemes and propose a new one based on Model Confidence Sets, which take into account the statistical significance of historical out-of-sample forecasting performance. In an empirical application to forecasting U.S. macroeconomic indicators, we find significant gains in out-of-sample forecast accuracy from our proposed trimming method.
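To make the idea concrete, the sketch below is a simplified stand-in for the approach described above, not the paper's actual procedure: it trims a set of models according to the statistical significance of their historical out-of-sample losses and then combines the survivors with equal weights. The function name `trim_then_average`, the squared-error loss, and the simple t-statistic elimination rule are illustrative assumptions; the Model Confidence Set of Hansen, Lunde, and Nason relies on bootstrap-based equivalence tests rather than the Gaussian critical value used here.

```python
import numpy as np

def trim_then_average(forecasts, actuals, crit=1.96):
    """Illustrative sketch: trim models by the significance of their
    historical losses, then average the surviving forecasts equally.

    forecasts : (T, M) array of out-of-sample forecasts from M models
    actuals   : (T,) array of realized values
    crit      : critical value for the simplified elimination test
                (a stand-in for the bootstrap critical values of the MCS)
    """
    # squared-error losses of each model at each date
    losses = (forecasts - actuals[:, None]) ** 2
    surviving = list(range(forecasts.shape[1]))

    while len(surviving) > 1:
        sub = losses[:, surviving]                      # losses of current set
        avg = sub.mean(axis=1, keepdims=True)           # cross-model mean loss
        d = sub - avg                                   # loss relative to the set
        # t-statistic of each model's excess loss (ignores autocorrelation,
        # which a proper MCS implementation would handle via HAC/bootstrap)
        t_stats = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(d.shape[0]))
        worst = int(np.argmax(t_stats))                 # worst-performing model
        if t_stats[worst] > crit:
            surviving.pop(worst)                        # eliminate and retest
        else:
            break                                       # remaining models are "equivalent"

    # equal-weight combination of the surviving models' forecasts
    return forecasts[:, surviving].mean(axis=1), surviving
```

Called with a (T, M) array of held-out forecasts and the realized series, the sketch returns the combined forecast and the indices of the surviving models; in the paper's setting the elimination step would instead use the bootstrap-based Model Confidence Set test.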