Remove glossary

Alexander Hess 2020-11-30 18:42:54 +01:00
commit 96a3b242c0
Signed by: alexander
GPG key ID: 344EA5AB10D868E0
16 changed files with 40 additions and 193 deletions


@@ -15,21 +15,21 @@ As the models in this family do not include the test day's demand data in
 The models in this family are as follows; we use prefixes, such as "h" here,
 when methods are applied in other families as well:
 \begin{enumerate}
-\item \textit{\gls{naive}}:
+\item \textit{naive}:
 Observation from the same time step one week prior
-\item \textit{\gls{trivial}}:
+\item \textit{trivial}:
 Predict $0$ for all time steps
-\item \textit{\gls{hcroston}}:
+\item \textit{hcroston}:
 Intermittent demand method introduced by \cite{croston1972}
-\item \textit{\gls{hholt}},
-\textit{\gls{hhwinters}},
-\textit{\gls{hses}},
-\textit{\gls{hsma}}, and
-\textit{\gls{htheta}}:
+\item \textit{hholt},
+\textit{hhwinters},
+\textit{hses},
+\textit{hsma}, and
+\textit{htheta}:
 Exponential smoothing without calibration
-\item \textit{\gls{hets}}:
+\item \textit{hets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{harima}}:
+\item \textit{harima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 \textit{naive} and \textit{trivial} provide an absolute benchmark for the

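As an aside, the two absolute benchmarks in the hunk above are straightforward to express in code. The following Python sketch is not part of this commit; the series layout, the number of time steps per week, and the helper names are assumptions for illustration:

# Minimal sketch of the two benchmark models (hypothetical helpers).
# `demand` is a pandas Series of past demand, one entry per time step.
import pandas as pd

STEPS_PER_WEEK = 7 * 12  # assumption: 12 time steps per day

def naive_forecast(demand: pd.Series, h: int) -> pd.Series:
    """Repeat the observations from the same time steps one week prior."""
    start = len(demand) - STEPS_PER_WEEK
    return demand.iloc[start : start + h].reset_index(drop=True)

def trivial_forecast(demand: pd.Series, h: int) -> pd.Series:
    """Predict zero demand for every time step."""
    return pd.Series([0.0] * h)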

@@ -16,17 +16,17 @@ By decomposing the raw time series, all long-term patterns are assumed to be
 a potential trend and auto-correlations.
 The models in this family are:
 \begin{enumerate}
-\item \textit{\gls{fnaive}},
-\textit{\gls{pnaive}}:
+\item \textit{fnaive},
+\textit{pnaive}:
 Sum of STL's trend and seasonal components' na\"{i}ve forecasts
-\item \textit{\gls{vholt}},
-\textit{\gls{vses}}, and
-\textit{\gls{vtheta}}:
+\item \textit{vholt},
+\textit{vses}, and
+\textit{vtheta}:
 Exponential smoothing without calibration and seasonal
 fit
-\item \textit{\gls{vets}}:
+\item \textit{vets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{varima}}:
+\item \textit{varima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 As mentioned in Sub-section \ref{unified_cv}, we include the sum of the

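The fnaive idea from the hunk above, summing naive forecasts of STL's trend and seasonal components, can be sketched in Python. This is not from the repository; statsmodels' STL and the weekly period length are assumptions:

# Sketch of fnaive: decompose, then sum the components' naive forecasts.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def fnaive_forecast(demand: pd.Series, h: int, period: int = 84) -> np.ndarray:
    """Sum of naive forecasts of STL's trend and seasonal components."""
    res = STL(demand, period=period).fit()
    trend_fc = np.repeat(res.trend.iloc[-1], h)  # naive: repeat last trend value
    seasonal = res.seasonal.to_numpy()
    # seasonal naive: reuse the value from one full period earlier
    seasonal_fc = np.array([seasonal[i % period - period] for i in range(h)])
    return trend_fc + seasonal_fc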

@@ -8,13 +8,13 @@ Instead of obtaining an $H$-step-ahead forecast, we retrain a model after
 every time step and only predict one step.
 The remainder is as in the previous sub-section, and the models are:
 \begin{enumerate}
-\item \textit{\gls{rtholt}},
-\textit{\gls{rtses}}, and
-\textit{\gls{rttheta}}:
+\item \textit{rtholt},
+\textit{rtses}, and
+\textit{rttheta}:
 Exponential smoothing without calibration and seasonal fit
-\item \textit{\gls{rtets}}:
+\item \textit{rtets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{rtarima}}:
+\item \textit{rtarima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 Retraining \textit{fnaive} and \textit{pnaive} did not increase accuracy, and

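A minimal sketch of the rolling one-step-ahead scheme described above, with statsmodels' SimpleExpSmoothing standing in for rtses; this shows the loop structure only, not the study's actual implementation:

# Refit after every time step and predict only one step ahead (rtses).
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

def rtses_forecasts(history: pd.Series, actuals: pd.Series) -> list[float]:
    data = history.copy()
    predictions = []
    for t, y in actuals.items():
        model = SimpleExpSmoothing(data.to_numpy()).fit()  # retrain from scratch
        predictions.append(float(model.forecast(1)[0]))
        data.loc[t] = y  # fold the newly observed value into the training data
    return predictions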

@@ -10,7 +10,7 @@ Based on the seasonally-adjusted time series $a_t$, we employ the feature
 The ML models are trained once before a test day starts.
 For training, the matrix and vector are populated such that $y_T$ is set to
 the last time step of the day before the forecasts, $a_T$.
-As the splitting during CV is done with whole days, the \gls{ml} models are
+As the splitting during CV is done with whole days, the ML models are
 trained with training sets consisting of samples from all times of a day
 in an equal manner.
 Thus, the ML models learn to predict each time of the day.
@@ -20,8 +20,8 @@ For prediction on a test day, the $H$ observations preceding the time
 As a result, real-time data are included.
 The models in this family are:
 \begin{enumerate}
-\item \textit{\gls{vrfr}}: RF trained on the matrix as described
-\item \textit{\gls{vsvr}}: SVR trained on the matrix as described
+\item \textit{vrfr}: RF trained on the matrix as described
+\item \textit{vsvr}: SVR trained on the matrix as described
 \end{enumerate}
 We tried other ML models such as gradient boosting machines but found
 only RFs and SVRs to perform well in our study.
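The lag-matrix setup and the two estimators behind vrfr and vsvr can be sketched with scikit-learn. Defaults only; the window length, the synthetic series, and the helper name are assumptions, not the study's configuration:

# Sketch: lag-based feature matrix, then RF and SVR trained once.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

H = 12  # assumption: window length / time steps per day

def make_matrix(a: np.ndarray, n_lags: int):
    """Each row holds `n_lags` consecutive observations; the target is the next one."""
    X = np.array([a[i : i + n_lags] for i in range(len(a) - n_lags)])
    y = a[n_lags:]
    return X, y

# `a` stands in for the seasonally-adjusted series up to a_T.
rng = np.random.default_rng(0)
a = rng.poisson(3.0, size=500).astype(float)

X, y = make_matrix(a, n_lags=H)
vrfr = RandomForestRegressor(random_state=0).fit(X, y)  # trained once before the test day
vsvr = SVR().fit(X, y)

# On the test day, every prediction uses the H observations immediately
# preceding the time step, so real-time observations enter the features.
next_step = vrfr.predict(a[-H:].reshape(1, -1))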