This tutorial describes how to perform Holt-Winters smoothing and forecasting in R.


### 1 Basics of Holt-Winters method

#### 1.1 Additive model

\[\text{Level: }a_t=\alpha(x_t-s_{t-p})+(1-\alpha)(a_{t-1}+b_{t-1})\]

\[\text{Trend (or slope): }b_t=\beta(a_t-a_{t-1})+(1-\beta)b_{t-1}\]

\[\text{Seasonal effect: }s_t=\gamma(x_t-a_t)+(1-\gamma)s_{t-p}\]

where \(a_t\), \(b_t\), and \(s_t\,\) are the estimated level, slope, and seasonal effect at time t, and \(\alpha\), \(\beta\), and \(\gamma\,\) are the smoothing parameters.

Notes:

- The level updating equation takes a weighted average of our latest observation with our existing estimate of the appropriate seasonal effect subtracted, and our forecast of the level made one time step ago. Here the one-step-ahead forecast of the level is the sum of the estimates of the level and slope at the time of forecast. Note that the mean of the process is the sum of the level and the appropriate seasonal effect.
- The trend (or slope) updating equation takes a weighted average of the latest estimate of the slope, i.e., the difference between the estimated level at time *t* and the estimated level at time *t-1*, and our previous estimate of the slope. Note that R’s convention refers to the slope as the trend.
- The seasonal effect updating equation takes a weighted average of the difference between the current observation and the current estimate of the level, and the last estimate of the seasonal effect for this season, which was made at time \(t-p\).
- The trend and seasonal effect updating equations can only be used after the level updating equation has been applied to get \(a_t\).
- Typical choices of smoothing weights are \(\alpha=0.2,\,\beta=0.2,\,\text{and}\,\gamma=0.2\).
- The updating equations can be started with \(a_1=x_1\), and the initial slope \(b_1\,\) and seasonal effects \(s_1,\cdots,s_p\) reckoned from experience, estimated from the data in some way, or set to zero. The default in R is to use values obtained from the *decompose* procedure.

Given observations up to time *n*, \(x_1,\cdots,x_n\), the forecasting equation for \(x_{n+k}\,\) is

\[\hat{x}_{n+k|n}=a_n+k\cdot b_n+s_{n+k-p}\qquad\qquad k\le p\]

where \(a_n\,\) is the estimated level and \(b_n\,\) is the estimated slope, so \(a_n+kb_n\,\) is the expected level at time *n+k*, and \(s_{n+k-p}\,\) is the exponentially weighted estimate of the seasonal effect made at time \(n+k-p\).

For example, for monthly data (p=12), if time *n+1* occurs in January, then \(s_{n+1-12}\,\) is the exponentially weighted estimate of the seasonal effect for January made in the previous year. The forecasting equation can be used for lead times between *(m-1)p+1* and *mp*, but then the most recent exponentially weighted estimate of the seasonal effect available will be \(s_{n+k-mp}\).
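The updating and forecasting equations of the additive model can be sketched directly in R. The following is an illustrative hand-rolled implementation, not the internals of R's *HoltWinters()* function; the start-up values (first-cycle mean for the level, zero slope, first-cycle deviations for the seasonal effects) are one simple choice among many:

```r
# Illustrative additive Holt-Winters updates (not R's HoltWinters() internals).
# x: numeric series, p: seasonal period, alpha/beta/gamma: fixed smoothing weights.
hw_additive <- function(x, p, alpha = 0.2, beta = 0.2, gamma = 0.2) {
  n <- length(x)
  a <- b <- s <- numeric(n)
  # crude start-up: level = first-cycle mean, slope = 0,
  # first cycle of seasonal effects = deviations from that mean
  a[1:p] <- mean(x[1:p])
  b[1:p] <- 0
  s[1:p] <- x[1:p] - mean(x[1:p])
  for (t in (p + 1):n) {
    a[t] <- alpha * (x[t] - s[t - p]) + (1 - alpha) * (a[t - 1] + b[t - 1])
    b[t] <- beta  * (a[t] - a[t - 1]) + (1 - beta)  * b[t - 1]
    s[t] <- gamma * (x[t] - a[t])     + (1 - gamma) * s[t - p]
  }
  # k-step-ahead forecasts for k = 1..p, using the last p seasonal estimates
  k <- 1:p
  list(level = a[n], slope = b[n],
       forecast = a[n] + k * b[n] + s[n + k - p])
}
```

With monthly data, p = 12 and the forecast vector covers the next year.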

#### 1.2 Multiplicative model

\[\text{Level:}\quad a_t=\alpha\left(\dfrac{x_t}{s_{t-p}}\right)+(1-\alpha)(a_{t-1}+b_{t-1})\]

\[\text{Trend (or slope):}\quad b_t=\beta(a_t-a_{t-1})+(1-\beta)b_{t-1}\]

\[\text{Seasonal effect:}\quad s_t=\gamma\left(\dfrac{x_t}{a_t}\right)+(1-\gamma)s_{t-p}\]

Given observations up to time *n*, \(x_1,\cdots,x_n\), the forecasting equation for \(x_{n+k}\,\) is

\[\hat{x}_{n+k|n}=(a_n+k\cdot b_n)\cdot s_{n+k-p}\qquad\qquad k\le p\]

In R, the function *HoltWinters()* can be used to estimate smoothing parameters for the Holt-Winters model by minimizing the sum of squared one-step-ahead prediction errors (SS1PE).

### 2 Simple Exponential Smoothing

If you have a time series that can be described using an additive model with a constant level and no seasonality, you can use simple exponential smoothing to make short-term forecasts.

Smoothing is controlled by the parameter \(\alpha\in[0,1]\), where a value close to zero means that little weight is placed on the most recent observations.
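The name comes from the fact that the smoothed value is an exponentially weighted average of past observations: expanding the updating equation shows that observation \(x_{t-j}\) receives weight \(\alpha(1-\alpha)^j\). A quick illustrative sketch (the function name is ours, not part of R):

```r
# Weights simple exponential smoothing places on past observations:
# x_{t-j} receives weight alpha * (1 - alpha)^j.
ses_weights <- function(alpha, j_max = 5) alpha * (1 - alpha)^(0:j_max)

round(ses_weights(0.8), 4)  # large alpha: weight concentrated on recent values
round(ses_weights(0.1), 4)  # small alpha: weight spread over the distant past
```

The weights sum to one in the limit, so the smoothed value is a proper weighted average.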

```r
rain <- scan("http://library.quantlego.com/Howto/R/precip1.dat", skip=1)
rainseries <- ts(rain, start=c(1813))
plot(rainseries)
```

As can be seen, the (mean) level is roughly constant at about 25 inches of rainfall, and the random fluctuations seem to be roughly constant in size over time. Therefore, it is probably appropriate to describe the data using an additive model, and we can make forecasts using simple exponential smoothing.

To use R’s *HoltWinters()* function for simple exponential smoothing, we need to set the parameters **beta=FALSE** and **gamma=FALSE**.

```r
> rainseriesforecasts <- HoltWinters(rainseries, beta=FALSE, gamma=FALSE)
> rainseriesforecasts
Holt-Winters exponential smoothing without trend and without seasonal component.

Call:
HoltWinters(x = rainseries, beta = FALSE, gamma = FALSE)

Smoothing parameters:
 alpha: 0.02412151
 beta : FALSE
 gamma: FALSE

Coefficients:
      [,1]
a 24.67819
```

By default, *HoltWinters()* just makes forecasts for the time period covered by the original time series.

```r
> rainseriesforecasts$fitted
Time Series:
Start = 1814
End = 1912
Frequency = 1
         xhat    level
1814 23.56000 23.56000
1815 23.62054 23.62054
1816 23.57808 23.57808
......
1910 24.57541 24.57541
1911 24.59433 24.59433
1912 24.59905 24.59905
```

We can plot the original time series against the forecasts as follows:

```r
plot(rainseriesforecasts)
```

As a measure of the forecast accuracy, we can calculate the sum of squared errors (SSE) for the in-sample forecast errors:

```r
> rainseriesforecasts$SSE
[1] 1828.855
```

The higher the value of \(\alpha\), the less smoothing, as shown below

```r
layout(1:4)
plot(HoltWinters(rainseries, alpha=0.25, beta=FALSE, gamma=FALSE), main="Alpha=0.25")
plot(HoltWinters(rainseries, alpha=0.5, beta=FALSE, gamma=FALSE), main="Alpha=0.5")
plot(HoltWinters(rainseries, alpha=0.75, beta=FALSE, gamma=FALSE), main="Alpha=0.75")
plot(HoltWinters(rainseries, alpha=1, beta=FALSE, gamma=FALSE), main="Alpha=1")
```

We can use the ‘l.start’ parameter of the *HoltWinters()* function to set the initial value for the level. For example:

```r
plot(HoltWinters(rainseries, beta=FALSE, gamma=FALSE, l.start=35))
```

Note, if \(\alpha\,\) is not specified, the \(\alpha\,\) value calculated by R will be different for different specified initial values.
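A quick way to see this dependence is to fit the same series twice with different initial levels and compare the optimized \(\alpha\) values. The sketch below uses a simulated stand-in for a roughly constant-level series rather than the rainfall data, so the exact numbers are illustrative:

```r
# Illustrative: the optimized alpha depends on the supplied initial level.
set.seed(1)
x <- ts(rnorm(100, mean = 25, sd = 4), start = 1813)
a1 <- HoltWinters(x, beta = FALSE, gamma = FALSE)$alpha            # default start
a2 <- HoltWinters(x, beta = FALSE, gamma = FALSE, l.start = 35)$alpha
c(default_start = a1, l_start_35 = a2)  # the two estimates generally differ
```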

By default, the *HoltWinters()* function makes forecasts only for the time period covered by the original time series. However, we can use the *forecast.HoltWinters()* function in the ‘**forecast**’ package to make forecasts for further time points in the future.

```r
> rainseriesforecasts2 <- forecast.HoltWinters(rainseriesforecasts, h=8)
> rainseriesforecasts2
     Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
1913       24.67819 19.17493 30.18145 16.26169 33.09470
1914       24.67819 19.17333 30.18305 16.25924 33.09715
1915       24.67819 19.17173 30.18465 16.25679 33.09960
1916       24.67819 19.17013 30.18625 16.25434 33.10204
1917       24.67819 19.16853 30.18785 16.25190 33.10449
1918       24.67819 19.16694 30.18945 16.24945 33.10694
1919       24.67819 19.16534 30.19105 16.24701 33.10938
1920       24.67819 19.16374 30.19265 16.24456 33.11182
> plot.forecast(rainseriesforecasts2)
```

Note, the 80% and 95% prediction intervals shown here are based on the assumptions that there are no autocorrelations in the forecast errors, and that the forecast errors are normally distributed with mean zero and constant variance.
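Under those assumptions, the one-step-ahead interval can be reproduced approximately by hand: estimate the error standard deviation as \(\sqrt{SSE/n}\) and take the point forecast plus or minus the appropriate normal quantile. A sketch using the numbers reported above (the slight disagreement with the table is expected, since the *forecast* package uses a somewhat different variance estimate):

```r
# Approximate 95% interval for the first forecast (1913), assuming
# normal, uncorrelated errors; numbers copied from the output above.
sse   <- 1828.855            # in-sample sum of squared errors
n_err <- 99                  # one-step errors, 1814 through 1912
sigma <- sqrt(sse / n_err)   # estimated error standard deviation
point <- 24.67819            # point forecast for 1913
point + c(-1, 1) * qnorm(0.975) * sigma
```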

‘Forecast error’ is defined as the observed value minus the predicted value for each time point. We can calculate the forecast errors only for the time period covered by the original data. The accuracy of prediction is represented by the sum of squared errors (SSE) of the in-sample forecast errors. The in-sample errors can be obtained as follows:

```r
> rainseriesforecasts2$residuals
Time Series:
Start = 1814
End = 1912
Frequency = 1
 [1]  2.5100000 -1.7605450  7.6619220 -0.1128951  0.1198281  2.6469377 -1.1569105  7.8909960 -0.1293468  0.1237733  8.4407877
[12] -0.9328169 -1.6003159 -1.1317139  3.7755848  1.1245120  0.8573870  3.5167056 -4.5081227  0.5606200 -4.1129030  0.2063065
[23]  3.2813300 -4.7778206 -2.4725723  3.4470698 -4.6960787  7.1171978 -1.0944797  1.6919208 -1.5488909 -1.4115293  2.2325189
[34] -6.4813328  5.7850067 -1.2345364 -4.9147575 -3.3862062 11.4054742  1.6803570 -5.6001757 -1.0550911 -1.8796407 -1.8643009
[45] -5.2293312  4.3368082  8.2621979 -1.9070988  3.4389033 -2.6240483 -7.2207523  5.5034232  7.4906723  1.9599860 -0.9372918
[56]  1.1053171 -3.0213449  0.7515345  9.5734064 -1.8475186 -5.6529537  4.1034041  1.7244238  3.6928281  9.5137515  9.0242655
[67]  5.2665866  2.7795486  1.9325016 -0.8541132 -4.8835107  1.5242869  1.8575188 -5.9872873  2.6871351 -1.2676827 -3.8571042
[78]  3.1559349 -2.4601910 -5.2108475  3.0548460 -3.4888415 -1.3546853 -1.9820083 -7.1041993 -2.0828352 -1.2925941 -2.3714148
[89] -3.6442127 13.7036912 -4.0768625 -1.6585224 -0.3285164 -1.5705920 -0.8727070  2.2283440  0.7845930  0.1956674  3.2809476
```
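As a sanity check, the SSE reported by *HoltWinters()* is simply the sum of these squared one-step errors. The sketch below uses a simulated series so that it runs stand-alone; the same check should hold for the rainfall fit above:

```r
# The SSE component equals the sum of squared one-step-ahead errors.
set.seed(2)
x <- ts(rnorm(50, mean = 25, sd = 4))
fit <- HoltWinters(x, beta = FALSE, gamma = FALSE)
res <- x[-1] - fit$fitted[, "xhat"]   # observed minus one-step forecast
all.equal(sum(res^2), fit$SSE)        # should be TRUE
```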

If the predictive model cannot be improved upon, there should be no correlations between forecast errors for successive predictions. In other words, if there are correlations between forecast errors for successive predictions, it is likely that the simple exponential smoothing forecast could be improved upon by another forecasting techniques.

```r
acf(rainseriesforecasts2$residuals, lag.max=20)
```

As can be seen, the correlation at lag 3 is just touching the significance bounds. To test whether there is significant evidence of non-zero correlation at lags 1~20, we can carry out a Ljung-Box test using R’s *Box.test()* function.

```r
> Box.test(rainseriesforecasts2$residuals, lag=20, type="Ljung-Box")

	Box-Ljung test

data:  rainseriesforecasts2$residuals
X-squared = 17.4008, df = 20, p-value = 0.6268
```

Here the Ljung-Box test statistic is 17.4, and the p-value is 0.6, so there is little evidence of non-zero autocorrelations in the in-sample forecast errors at lags 1~20.

To be sure that the predictive model cannot be improved upon, it is also a good idea to check whether the forecast errors are normally distributed with mean zero and constant variance. We plot the in-sample forecast errors

```r
plot(rainseriesforecasts2$residuals)
```

The plot shows that the in-sample forecast errors seem to have roughly constant variance over time, although the fluctuations at the start may be slightly larger than those at later dates.

We can also plot a histogram to check whether the forecast errors are normally distributed with mean zero. Here we first create an R function and save it to the file ‘Plot_Forecast_Errors_Histogram.R’.

```r
# R function saved as 'Plot_Forecast_Errors_Histogram.R'
PlotForecastErrors <- function(forecasterrors)
{
  # choose a bin size based on the spread of the errors
  mybinsize <- IQR(forecasterrors)/4
  mysd  <- sd(forecasterrors)
  mymin <- min(forecasterrors) - mysd*5
  mymax <- max(forecasterrors) + mysd*5
  # generate normally distributed data with mean 0 and the same sd
  mynorm <- rnorm(10000, mean=0, sd=mysd)
  mymin2 <- min(mynorm)
  mymax2 <- max(mynorm)
  # widen the range if needed so the bins cover both samples
  if (mymin2 < mymin) { mymin <- mymin2 }
  if (mymax2 > mymax) { mymax <- mymax2 }
  mybins <- seq(mymin, mymax, mybinsize)
  # red histogram of the forecast errors
  hist(forecasterrors, col="red", freq=FALSE, breaks=mybins)
  # overlay the density of the reference normal sample in blue
  myhist <- hist(mynorm, plot=FALSE, breaks=mybins)
  points(myhist$mids, myhist$density, type="l", col="blue", lwd=2)
}
```

Then we call this function and plot the histogram

```r
source("c:/users/MyName/desktop/Plot_Forecast_Errors_Histogram.R")
PlotForecastErrors(rainseriesforecasts2$residuals)
```

The histogram shows that the distribution of forecast errors is roughly centered on zero, and is more or less normally distributed, although it seems to be slightly skewed to the right compared to a normal curve. However, the right skew is relatively small, so it is plausible that the forecast errors are normally distributed with mean zero.

In summary: The Ljung-Box test showed that there is little evidence of non-zero autocorrelations in the in-sample forecast errors, and the distribution of forecast errors seems to be normally distributed with mean zero. This suggests that simple exponential smoothing provides an adequate predictive model for the rainfall, which probably cannot be improved upon. Furthermore, the assumptions that the 80% and 95% prediction intervals were based on are probably valid.

### 3 Holt’s Exponential Smoothing

If you have a time series that can be described using an additive model with increasing or decreasing trend and no seasonality, you can use Holt’s exponential smoothing to make short-term forecasts.

Holt’s exponential smoothing is controlled by two parameters: \(\alpha\in[0,1]\,\) for estimating the level at the current time point, and \(\beta\in[0,1]\,\) for estimating the slope of the trend component at the current time point. Parameters close to zero mean that little weight is placed on the most recent observations when making forecasts of future values.

```r
www <- "http://library.quantlego.com/Howto/R/skirts.dat"
skirts <- scan(www, skip=5)
skirts.ts <- ts(skirts, start=c(1866))
plot(skirts.ts)
```

We can see an obvious trend in the curve. To use the *HoltWinters()* function for Holt’s exponential smoothing, we need to set the parameter **gamma=FALSE**.

```r
> skirts.ts.forecasts <- HoltWinters(skirts.ts, gamma=FALSE)
> skirts.ts.forecasts
Holt-Winters exponential smoothing with trend and without seasonal component.

Call:
HoltWinters(x = skirts.ts, gamma = FALSE)

Smoothing parameters:
 alpha: 0.8383481
 beta : 1
 gamma: FALSE

Coefficients:
        [,1]
a 529.308585
b   5.690464

> skirts.ts.forecasts$SSE
[1] 16954.18
> plot(skirts.ts.forecasts)
```

Note that the estimated values of \(\alpha=0.84\,\) and \(\beta=1\,\) are both high, meaning that both the estimate of the current value of the level, and of the slope of the trend component, are based mostly on very recent observations in the time series.

You can use the ‘l.start’ and ‘b.start’ arguments to specify the initial values of the level and the slope of the trend component. It is common to set these initial values as follows:

- Use the first value in the time series as the initial value of the level;
- Use the second value minus the first value in the time series as the initial value of the slope of the trend component.
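A minimal sketch of those hand-chosen start values, using a simulated trending series (so the numbers are illustrative):

```r
# Holt's smoothing with explicit start values: level = first observation,
# slope = first difference (second minus first observation).
set.seed(3)
x <- ts(cumsum(rnorm(60, mean = 1)))    # simulated upward-trending series
fit <- HoltWinters(x, gamma = FALSE,
                   l.start = x[1],
                   b.start = x[2] - x[1])
fit$coefficients   # final estimates of the level (a) and slope (b)
```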

The *forecast.HoltWinters()* function can be used to forecast for future times not covered by the original time series

```r
> skirts.ts.forecast2 <- forecast.HoltWinters(skirts.ts.forecasts, h=19)
> skirts.ts.forecast2
     Point Forecast      Lo 80     Hi 80      Lo 95     Hi 95
1912       534.9990  509.55210  560.4460  496.08130  573.9168
1913       540.6895  491.01052  590.3685  464.71204  616.6670
1914       546.3800  465.36129  627.3987  422.47258  670.2874
1915       552.0704  434.40205  669.7388  372.11216  732.0287
1916       557.7609  398.94120  716.5806  314.86713  800.6547
1917       563.4514  359.47147  767.4313  251.49103  875.4117
1918       569.1418  316.34076  821.9429  182.51596  955.7677
1919       574.8323  269.81480  879.8498  108.34829 1041.3163
1920       580.5228  220.10648  940.9391   29.31362 1131.7319
1921       586.2132  167.39191 1005.0345  -54.31870 1226.7452
1922       591.9037  111.82029 1071.9871 -142.32052 1326.1279
1923       597.5942   53.52019 1141.6681 -234.49517 1429.6835
1924       603.2846   -7.39593 1213.9652 -330.67069 1537.2399
1925       608.9751  -70.82861 1288.7788 -430.69495 1648.6451
1926       614.6655 -136.68903 1366.0201 -534.43211 1763.7632
1927       620.3560 -204.89720 1445.6092 -641.75986 1882.4719
1928       626.0465 -275.38060 1527.4736 -752.56728 2004.6602
1929       631.7369 -348.07309 1611.5470 -866.75319 2130.2271
1930       637.4274 -422.91396 1697.7688 -984.22478 2259.0796
> plot(skirts.ts.forecast2)
```

```r
> acf(skirts.ts.forecast2$residuals, lag.max=20)
> Box.test(skirts.ts.forecast2$residuals, lag=20, type="Ljung-Box")

	Box-Ljung test

data:  skirts.ts.forecast2$residuals
X-squared = 19.7312, df = 20, p-value = 0.4749
```

The correlogram shows that the sample autocorrelation for the in-sample forecast errors at lag 5 exceeds the significance bounds. However, we would expect one in 20 of the autocorrelations for the first 20 lags to exceed the 95% significance bounds by chance alone. The *p*-value from the Ljung-Box test is 0.47, indicating that there is little evidence of non-zero autocorrelation in the in-sample forecast errors at lags 1~20.

```r
layout(1:2)
plot(skirts.ts.forecast2$residuals)
source("c:/MyRCodes/Plot_Forecast_Errors_Histogram.R")
PlotForecastErrors(skirts.ts.forecast2$residuals)
```

As can be seen, the forecast errors have roughly constant variance over time. The histogram shows that it is plausible that the forecast errors are normally distributed with mean zero and constant variance.

In summary: The Ljung-Box test shows that there is little evidence of autocorrelations in the forecast errors, while the time plot and histogram of forecast errors show that it is plausible that the forecast errors are normally distributed with zero mean and constant variance. Therefore, we can conclude that Holt’s exponential smoothing provides an adequate predictive model which probably cannot be improved upon. In addition, it means that the assumptions that the 80% and 95% prediction intervals were based on are probably valid.

### 4 Holt-Winters Exponential Smoothing

If you have a time series that can be described using an additive model with an increasing or decreasing trend and seasonality, then you can use Holt-Winters exponential smoothing to make short-term forecasts.

Holt-Winters exponential smoothing estimates the level, slope, and seasonal component at the current time point. Smoothing is controlled by \(\alpha\in[0,1],\,\beta\in[0,1],\,\text{and}\,\gamma\in[0,1]\) for the estimates of the level, slope of the trend, and seasonal component, respectively. Parameter values close to zero mean that relatively little weight is placed on the most recent observations when making forecasts of future values.

```r
> souvenir <- scan("http://library.quantlego.com/Howto/R/fancy.dat")
Read 84 items
> souvenir.ts <- ts(souvenir, start=c(1987,1), freq=12)
> log.souvenir.ts <- log(souvenir.ts)
> plot(log.souvenir.ts)
```

```r
> souvenir.forecast <- HoltWinters(log.souvenir.ts)
> souvenir.forecast
Holt-Winters exponential smoothing with trend and additive seasonal component.

Call:
HoltWinters(x = log.souvenir.ts)

Smoothing parameters:
 alpha: 0.413418
 beta : 0
 gamma: 0.9561275

Coefficients:
           [,1]
a   10.37661961
b    0.02996319
s1  -0.80952063
s2  -0.60576477
s3   0.01103238
s4  -0.24160551
s5  -0.35933517
s6  -0.18076683
s7   0.07788605
s8   0.10147055
s9   0.09649353
s10  0.05197826
s11  0.41793637
s12  1.18088423

> souvenir.forecast$SSE
[1] 2.011491
```

- The estimated \(\alpha=0.41\,\) is relatively low, indicating that the estimate of the level at the current time point is based upon both recent observations and some observations in the more distant past.
- The estimated \(\beta=0\,\) indicates that the estimate of the slope of the trend component is not updated over the time series; instead, it is set equal to its initial value. This makes good intuitive sense, as the level changes quite a bit over the time series, but the slope of the trend component remains roughly the same.
- The estimated \(\gamma=0.96\,\) is high, indicating that the estimate of the seasonal component at the current time point is based on just very recent observations.

```r
plot(souvenir.forecast)
```

We can forecast for future time points not included in the original time series

```r
> souvenir.forecast2 <- forecast.HoltWinters(souvenir.forecast, h=48)
> souvenir.forecast2
         Point Forecast     Lo 80     Hi 80     Lo 95     Hi 95
Jan 1994       9.597062  9.381514  9.812611  9.267409  9.926715
Feb 1994       9.830781  9.597539 10.064024  9.474068 10.187495
......
Nov 1997      12.202826 11.481562 12.924089 11.099748 13.305903
Dec 1997      12.995737 12.268989 13.722485 11.884272 14.107202
> plot(souvenir.forecast2)
```

We investigate whether the predictive model can be improved by checking the autocorrelations of the forecast errors and carrying out a Ljung-Box test:

```r
> acf(souvenir.forecast2$residuals, lag.max=20)
> Box.test(souvenir.forecast2$residuals, lag=20, type="Ljung-Box")

	Box-Ljung test

data:  souvenir.forecast2$residuals
X-squared = 17.5304, df = 20, p-value = 0.6183
```

As can be seen, the autocorrelations of the in-sample forecast errors do not exceed the significance bounds for lags 1~20. Furthermore, the *p*-value for the Ljung-Box test is 0.6, indicating that there is little evidence of non-zero autocorrelations at lags 1~20.

We also check whether the forecast errors have constant variance and are normally distributed with mean zero

```r
layout(1:2)
plot(souvenir.forecast2$residuals)
PlotForecastErrors(souvenir.forecast2$residuals)
```

As can be seen, it is plausible that the forecast errors have constant variance over time, and they are normally distributed with mean zero.

In summary: There is little evidence of autocorrelation at lags 1~20 for the forecast errors, and the forecast errors appear to be normally distributed with mean zero and constant variance over time. This suggests that Holt-Winters exponential smoothing provides an adequate predictive model which probably cannot be improved upon. Furthermore, the assumptions upon which the prediction intervals were based are probably valid.

Another example:

```r
> www <- "http://library.quantlego.com/Howto/R/wine.dat"
> wine.dat <- read.table(www, header = T)
> sweetw.ts <- ts(wine.dat$sweetw, start=c(1980,1), freq=12)
> sweetw.hw <- HoltWinters(sweetw.ts, seasonal="mult")
> sweetw.hw
Holt-Winters exponential smoothing with trend and multiplicative seasonal component.

Call:
HoltWinters(x = sweetw.ts, seasonal = "mult")

Smoothing parameters:
 alpha: 0.4086698
 beta : 0
 gamma: 0.4929402

Coefficients:
           [,1]
a   285.6890314
b     1.3509615
s1    0.9498541
s2    0.9767623
s3    1.0275900
s4    1.1991924
s5    1.5463100
s6    0.6730235
s7    0.8925981
s8    0.7557814
s9    0.8227500
s10   0.7241711
s11   0.7434861
s12   0.9472648

> sweetw.hw$coef
          a           b          s1          s2          s3          s4          s5          s6          s7          s8
285.6890314   1.3509615   0.9498541   0.9767623   1.0275900   1.1991924   1.5463100   0.6730235   0.8925981   0.7557814
         s9         s10         s11         s12
  0.8227500   0.7241711   0.7434861   0.9472648
> sweetw.hw$SSE
[1] 477693.9
> plot(sweetw.hw)
> legend("topleft", c("Raw data", "HW Smoothing"), lty=c(1,1), col=c("black","red"))
```

Here \(\alpha=0.41,\,\beta=0,\,\gamma=0.49\), meaning that the level and seasonal variation adapt rapidly whereas the trend is slow to do so.

The coefficients are the estimated values of the level, slope, and the 12 multiplicative seasonal effects from January to December, all as available at the latest time point.
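These coefficients are enough to reproduce the point forecasts by hand: for the multiplicative model, the k-step-ahead forecast is \((a_n+k\cdot b_n)\cdot s_{n+k-p}\), i.e., \((a+kb)\cdot s_k\) for \(k\le 12\) here. A sketch with the values copied from the output above, so it runs without refitting:

```r
# Reconstruct the next 12 monthly point forecasts from the coefficients
# (values copied from the sweetw.hw output above).
a <- 285.6890314
b <- 1.3509615
s <- c(0.9498541, 0.9767623, 1.0275900, 1.1991924, 1.5463100, 0.6730235,
       0.8925981, 0.7557814, 0.8227500, 0.7241711, 0.7434861, 0.9472648)
k <- 1:12
round((a + k * b) * s, 2)   # should match predict(sweetw.hw, n.ahead = 12)
```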

To see the predictive performance of the model, we can compare the root mean squared one-step-ahead prediction error with the standard deviation of the original time series.

```r
> sqrt(sweetw.hw$SSE/length(sweetw.ts))
[1] 50.54219
> sd(sweetw.ts)
[1] 121.3908
```

As can be seen, the prediction error of 50.54 is a substantial decrease from the raw standard deviation of 121.39.

By plotting the fitted component of the Holt-Winters smoothing result, we can get the decomposed plots:

```r
> sweetw.hw$fitted
              xhat    level    trend    season
Jan 1981 138.66081 117.9028 1.350962 1.1627376
Feb 1981 125.84473 123.2391 1.350962 1.0100700
Mar 1981 151.06741 127.0805 1.350962 1.1762492
......
May 1995 172.69087 243.4367 1.350962 0.7054721
Jun 1995 179.12698 254.2353 1.350962 0.7008474
Jul 1995 262.73782 279.4197 1.350962 0.9357739
> plot(sweetw.hw$fitted)
```

```r
library(forecast)
sweetw.hw2 <- forecast.HoltWinters(sweetw.hw, h=4*12)
plot(sweetw.hw2)
```

Last example:

```r
AP <- AirPassengers
AP.hw <- HoltWinters(AP, seasonal="mult")
plot(AP.hw)
plot(AP.hw$fitted)
AP.predict <- forecast.HoltWinters(AP.hw, h = 4*12)
plot(AP.predict)
```