Remove glossary

parent f1844f8407
commit 96a3b242c0

16 changed files with 40 additions and 193 deletions
@@ -41,7 +41,7 @@ These numerical instabilities occurred so often in our studies that we argue
 against using such measures.
 \item \textbf{Scaled Errors}:
 \cite{hyndman2006} contribute this category and introduce the mean absolute
-scaled error (\gls{mase}).
+scaled error (MASE).
 It is defined as the MAE from the actual forecasting method on the test day
 (i.e., "out-of-sample") divided by the MAE from the (seasonal) na\"{i}ve
 method on the entire training set (i.e., "in-sample").
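Spelled out, the MASE definition above amounts to the following; a sketch in conventional notation, where $H$ is the test day's length, $T$ the length of the training set, $m$ the seasonal period, and $\hat{y}_t$ the forecast (these symbols are ours, not the excerpt's):

```latex
\text{MASE}
  = \frac{\frac{1}{H} \sum_{t=T+1}^{T+H} \lvert y_t - \hat{y}_t \rvert}
         {\frac{1}{T-m} \sum_{t=m+1}^{T} \lvert y_t - y_{t-m} \rvert}
```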
@@ -84,4 +84,4 @@ We conjecture that percentage error measures may be usable for UDPs facing a
 higher overall demand with no intra-day down-times in between but have to
 leave that to a future study.
 Yet, even with high and steady demand, divide-by-zero errors are likely to
-occur.
+occur.
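The divide-by-zero risk follows directly from the shape of percentage errors such as the MAPE, which scale each error by the actual observation (same assumed notation as above):

```latex
\text{MAPE}
  = \frac{100\%}{H} \sum_{t=T+1}^{T+H}
    \left\lvert \frac{y_t - \hat{y}_t}{y_t} \right\rvert
```

Whenever some $y_t = 0$, as is common in intermittent demand series, the corresponding summand is undefined.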
@@ -15,21 +15,21 @@ As the models in this family do not include the test day's demand data in
 The models in this family are as follows; we use prefixes, such as "h" here,
 when methods are applied in other families as well:
 \begin{enumerate}
-\item \textit{\gls{naive}}:
+\item \textit{naive}:
 Observation from the same time step one week prior
-\item \textit{\gls{trivial}}:
+\item \textit{trivial}:
 Predict $0$ for all time steps
-\item \textit{\gls{hcroston}}:
+\item \textit{hcroston}:
 Intermittent demand method introduced by \cite{croston1972}
-\item \textit{\gls{hholt}},
-\textit{\gls{hhwinters}},
-\textit{\gls{hses}},
-\textit{\gls{hsma}}, and
-\textit{\gls{htheta}}:
+\item \textit{hholt},
+\textit{hhwinters},
+\textit{hses},
+\textit{hsma}, and
+\textit{htheta}:
 Exponential smoothing without calibration
-\item \textit{\gls{hets}}:
+\item \textit{hets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{harima}}:
+\item \textit{harima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 \textit{naive} and \textit{trivial} provide an absolute benchmark for the
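For concreteness, a minimal sketch of the two benchmark forecasts; the array interface and the `steps_per_week` granularity are our assumptions, not the study's code:

```python
import numpy as np

def naive_forecast(series: np.ndarray, horizon: int, steps_per_week: int) -> np.ndarray:
    """Repeat the observations from the same time steps one week prior.

    Assumes horizon <= steps_per_week, i.e., at most one week is forecast.
    """
    start = len(series) - steps_per_week
    return series[start : start + horizon].copy()

def trivial_forecast(horizon: int) -> np.ndarray:
    """Predict 0 for every time step of the horizon."""
    return np.zeros(horizon)
```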
@@ -16,17 +16,17 @@ By decomposing the raw time series, all long-term patterns are assumed to be
 a potential trend and auto-correlations.
 The models in this family are:
 \begin{enumerate}
-\item \textit{\gls{fnaive}},
-\textit{\gls{pnaive}}:
+\item \textit{fnaive},
+\textit{pnaive}:
 Sum of STL's trend and seasonal components' na\"{i}ve forecasts
-\item \textit{\gls{vholt}},
-\textit{\gls{vses}}, and
-\textit{\gls{vtheta}}:
+\item \textit{vholt},
+\textit{vses}, and
+\textit{vtheta}:
 Exponential smoothing without calibration and seasonal
 fit
-\item \textit{\gls{vets}}:
+\item \textit{vets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{varima}}:
+\item \textit{varima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 As mentioned in Sub-section \ref{unified_cv}, we include the sum of the
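One plausible reading of \textit{fnaive} as runnable code; the \textit{fnaive}/\textit{pnaive} distinction is not visible in the excerpt, and the use of statsmodels' STL is our assumption, not the authors' implementation:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def fnaive_forecast(series: pd.Series, horizon: int, period: int) -> np.ndarray:
    """Sum of naive forecasts of the STL trend and seasonal components."""
    res = STL(series, period=period).fit()
    # Naive forecast of the trend component: repeat its last value.
    trend_fc = np.repeat(res.trend.iloc[-1], horizon)
    # Seasonal naive forecast: repeat the last full seasonal cycle.
    last_cycle = res.seasonal.iloc[-period:].to_numpy()
    seasonal_fc = np.tile(last_cycle, horizon // period + 1)[:horizon]
    return trend_fc + seasonal_fc
```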
@@ -8,13 +8,13 @@ Instead of obtaining an $H$-step-ahead forecast, we retrain a model after
 every time step and only predict one step.
 The remainder is as in the previous sub-section, and the models are:
 \begin{enumerate}
-\item \textit{\gls{rtholt}},
-\textit{\gls{rtses}}, and
-\textit{\gls{rttheta}}:
+\item \textit{rtholt},
+\textit{rtses}, and
+\textit{rttheta}:
 Exponential smoothing without calibration and seasonal fit
-\item \textit{\gls{rtets}}:
+\item \textit{rtets}:
 ETS calibrated as described by \cite{hyndman2008b}
-\item \textit{\gls{rtarima}}:
+\item \textit{rtarima}:
 ARIMA calibrated as described by \cite{hyndman2008a}
 \end{enumerate}
 Retraining \textit{fnaive} and \textit{pnaive} did not increase accuracy, and
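The retraining scheme is a rolling one-step-ahead loop; a sketch in which `fit_and_predict_one` stands in for any of the methods listed above (a hypothetical callback, not the paper's interface):

```python
import numpy as np

def rolling_one_step(train: np.ndarray, test: np.ndarray, fit_and_predict_one) -> np.ndarray:
    """Retrain after every time step and predict only the next step."""
    history = list(train)
    forecasts = []
    for actual in test:
        # Refit on all data seen so far and predict one step ahead.
        forecasts.append(fit_and_predict_one(np.asarray(history)))
        # The realized observation becomes available before the next refit.
        history.append(actual)
    return np.asarray(forecasts)
```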
@@ -10,7 +10,7 @@ Based on the seasonally-adjusted time series $a_t$, we employ the feature
 The ML models are trained once before a test day starts.
 For training, the matrix and vector are populated such that $y_T$ is set to
 the last time step of the day before the forecasts, $a_T$.
-As the splitting during CV is done with whole days, the \gls{ml} models are
+As the splitting during CV is done with whole days, the ML models are
 trained with training sets consisting of samples from all times of a day
 in an equal manner.
 Thus, the ML models learn to predict each time of the day.
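Our reading of how the matrix and vector are populated: each row holds consecutive observations of the seasonally-adjusted series and the target is the step that follows, with the last target being $a_T$; a sketch (names are ours):

```python
import numpy as np

def make_training_set(a: np.ndarray, n_lags: int):
    """Sliding-window design matrix: row i holds a[i : i + n_lags],
    and the target y[i] is the observation one step later, a[i + n_lags]."""
    X = np.array([a[i : i + n_lags] for i in range(len(a) - n_lags)])
    y = a[n_lags:]
    return X, y
```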
@@ -20,8 +20,8 @@ For prediction on a test day, the $H$ observations preceding the time
 As a result, real-time data are included.
 The models in this family are:
 \begin{enumerate}
-\item \textit{\gls{vrfr}}: RF trained on the matrix as described
-\item \textit{\gls{vsvr}}: SVR trained on the matrix as described
+\item \textit{vrfr}: RF trained on the matrix as described
+\item \textit{vsvr}: SVR trained on the matrix as described
 \end{enumerate}
 We tried other ML models such as gradient boosting machines but found
 only RFs and SVRs to perform well in our study.
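A minimal scikit-learn sketch of the two listed models on such a matrix; the series, horizon, and default hyper-parameters are stand-ins, as the paper's tuning is not shown in the excerpt:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

a = np.sin(np.linspace(0, 20, 400))  # stand-in for the adjusted series a_t
H = 12                               # stand-in forecast horizon

# Sliding-window matrix of H lags, as in the sketch above.
X = np.array([a[i : i + H] for i in range(len(a) - H)])
y = a[H:]

vrfr = RandomForestRegressor(random_state=0).fit(X, y)  # RF on the matrix
vsvr = SVR().fit(X, y)                                  # SVR on the matrix

# One-step prediction from the H observations preceding the target step.
x_new = a[-H:].reshape(1, -1)
print(vrfr.predict(x_new), vsvr.predict(x_new))
```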