
Remove glossary

Alexander Hess 2020-11-30 18:42:54 +01:00
commit 96a3b242c0
Signed by: alexander
GPG key ID: 344EA5AB10D868E0
16 changed files with 40 additions and 193 deletions

@@ -40,8 +40,7 @@ Their main advantages stem from the fact that the models calibrate themselves
 \cite{cleveland1990} introduce a seasonal and trend decomposition using a
 repeated locally weighted regression - the so-called Loess procedure - to
 smoothen the trend and seasonal components, which can be viewed as a
-generalization of the methods above and is denoted by the acronym
-\gls{stl}.
+generalization of the methods above and is denoted by the acronym STL.
 In contrast to the X11, X13, and SEATS methods, the STL supports seasonalities
 of any lag $k$ that must, however, be determined with additional
 statistical tests or set with out-of-band knowledge by the forecaster

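To make the STL hunk concrete: a minimal sketch of such a decomposition with statsmodels' STL implementation. The file name, series, and seasonal lag $k$ (period=7, i.e., weekly) are illustrative assumptions, not taken from the paper.

import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical daily demand series; the seasonal lag k (period=7) must be
# chosen by the forecaster, as the text notes.
demand = pd.read_csv("demand.csv", index_col=0, parse_dates=True).squeeze()
result = STL(demand, period=7).fit()  # repeated Loess smoothing
trend, seasonal, resid = result.trend, result.seasonal, result.resid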

@@ -4,8 +4,7 @@
 ML methods have been employed in all kinds of prediction tasks in recent
 years.
 In this section, we restrict ourselves to the models that performed well in
-our study: Random Forest (\gls{rf}) and Support Vector Regression
-(\gls{svr}).
+our study: Random Forest (RF) and Support Vector Regression (SVR).
 RFs are in general well-suited for datasets without a priori knowledge about
 the patterns, while SVR is known to perform well on time series data, as
 shown by \cite{hansen2006} in general and \cite{bao2004} specifically for

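Both models referenced in this hunk are available in scikit-learn; a minimal sketch with default hyperparameters and synthetic placeholder data (both are assumptions, not the study's setup):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

# Placeholder feature matrix X and target y standing in for the study's data.
rng = np.random.default_rng(42)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

rf = RandomForestRegressor(random_state=42).fit(X, y)
svr = SVR().fit(X, y)  # RBF kernel by default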

@@ -5,7 +5,7 @@ Because ML models are trained by minimizing a loss function $L$, the
 resulting value of $L$ underestimates the true error we see when
 predicting into the actual future by design.
 To counter that, one popular and model-agnostic approach is cross-validation
-(\gls{cv}), as summarized, for example, by \cite{hastie2013}.
+(CV), as summarized, for example, by \cite{hastie2013}.
 CV is a resampling technique, which randomly splits the samples into a
 training and a test set.
 Trained on the former, an ML model makes forecasts on the latter.

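The CV procedure this hunk describes (random splits into training and test sets, fitting on one and scoring on the other) looks roughly like the following scikit-learn sketch; the model choice, fold count, metric, and data are assumptions for illustration:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Placeholder data; shuffle=True gives the random split the text describes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                         cv=cv, scoring="neg_mean_absolute_error")
print(scores.mean())  # average held-out error across the five folds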

@@ -2,7 +2,7 @@
 \label{rf}
 \cite{breiman1984} introduce the classification and regression tree
-(\gls{cart}) model that is built around the idea that a single binary
+(CART) model that is built around the idea that a single binary
 decision tree maps learned combinations of intervals of the feature
 columns to a label.
 Thus, each sample in the training set is associated with one leaf node that

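A single CART can be fit with scikit-learn's DecisionTreeRegressor; the sketch below (placeholder data, arbitrary depth) also exposes the sample-to-leaf mapping this hunk describes via .apply():

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Placeholder data; each training sample is associated with exactly one leaf.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
leaves = tree.apply(X)  # leaf node id per training sample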

@@ -2,7 +2,7 @@
 \label{svm}
 \cite{vapnik1963} and \cite{vapnik1964} introduce the so-called support vector
-machine (\gls{svm}) model, and \cite{vapnik2013} summarizes the research
+machine (SVM) model, and \cite{vapnik2013} summarizes the research
 conducted since then.
 In its basic version, SVMs are linear classifiers, modeling a binary
 decision, that fit a hyperplane into the feature space of $\mat{X}$ to
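In the basic linear case this hunk sketches, the fitted hyperplane is directly inspectable; a minimal example with synthetic binary labels (all data and parameters are illustrative assumptions):

import numpy as np
from sklearn.svm import SVC

# Synthetic, linearly separable binary decision problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

svm = SVC(kernel="linear").fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]  # separating hyperplane: w @ x + b = 0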