
Adjust placement of tables

Alexander Hess 2020-10-05 01:03:06 +02:00
parent 25b577c30f
commit 4784f76ec8
Signed by: alexander
GPG key ID: 344EA5AB10D868E0
2 changed files with 20 additions and 20 deletions

File 1 of 2:

@@ -44,13 +44,6 @@ As the non-seasonal \textit{hses} reaches a similar accuracy as its
So, in the absence of seasonality, models that only model a trend part are
the least susceptible to the noise.
-For medium demand (i.e., $10 < \text{ADD} < 25$) and training horizons up to
-six weeks, the best-performing models are the same as for low demand.
-For longer horizons, \textit{hets} provides the highest accuracy.
-Thus, to fit a seasonal pattern, longer training horizons are needed.
-While \textit{vsvr} enters the top three, \textit{hets} has the edge as it
-requires neither parameter tuning nor real-time data.
\begin{center}
\captionof{table}{Top-3 models by training weeks and average demand
($1~\text{km}^2$ pixel size, 60-minute time steps)}
@@ -206,6 +199,13 @@ While \textit{vsvr} enters the top three, \textit{hets} has the edge as it
\end{tabular}
\end{center}
+For medium demand (i.e., $10 < \text{ADD} < 25$) and training horizons up to
+six weeks, the best-performing models are the same as for low demand.
+For longer horizons, \textit{hets} provides the highest accuracy.
+Thus, to fit a seasonal pattern, longer training horizons are needed.
+While \textit{vsvr} enters the top three, \textit{hets} has the edge as it
+requires neither parameter tuning nor real-time data.
In summary, except for high demand, simple models trained on horizontal time
series work best.
By contrast, high demand (i.e., $25 < \text{ADD} < \infty$) and less than
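For illustration, the demand clusters referenced in the hunks above partition pixels by their average demand (ADD), with $10 < \text{ADD} < 25$ labeled medium and $25 < \text{ADD} < \infty$ labeled high. A minimal Python sketch of that bucketing follows; the function name, the assumption that the low cluster covers everything up to 10, and the treatment of the boundary values are illustrative guesses rather than the paper's definitions.

def demand_cluster(add: float) -> str:
    """Map a pixel's average demand (ADD) to a demand cluster label."""
    if add <= 10:    # low demand; the "<= 10" lower band is an assumption
        return "low"
    if add < 25:     # medium demand: 10 < ADD < 25
        return "medium"
    return "high"    # high demand: 25 < ADD < infinity (boundary handling assumed)

print(demand_cluster(17.5))  # -> "medium"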

File 2 of 2:

@@ -1,19 +1,6 @@
\subsection{Results by Model Families}
\label{fams}
-Besides the overall results, we provide an in-depth comparison of the models
-within each family.
-Instead of reporting the MASE per model, we rank the models while holding the
-training horizon fixed to make comparison easier.
-Table \ref{t:hori} presents the models trained on horizontal time series.
-In addition to \textit{naive}, we already include \textit{fnaive} and
-\textit{pnaive} here as more competitive benchmarks.
-The tables in this section report two rankings simultaneously:
-The first number is the rank resulting from lumping the low and medium
-clusters together, which yields almost the same rankings as analyzing them
-individually.
-The ranks from only high demand pixels are in parentheses if they differ.
\begin{center}
\captionof{table}{Ranking of benchmark and horizontal models
($1~\text{km}^2$ pixel size, 60-minute time steps):
@@ -47,6 +34,19 @@ The ranks from only high demand pixels are in parentheses if they differ.
\end{center}
\
+Besides the overall results, we provide an in-depth comparison of the models
+within each family.
+Instead of reporting the MASE per model, we rank the models while holding the
+training horizon fixed to make comparison easier.
+Table \ref{t:hori} presents the models trained on horizontal time series.
+In addition to \textit{naive}, we already include \textit{fnaive} and
+\textit{pnaive} here as more competitive benchmarks.
+The tables in this section report two rankings simultaneously:
+The first number is the rank resulting from lumping the low and medium
+clusters together, which yields almost the same rankings as analyzing them
+individually.
+The ranks from only high demand pixels are in parentheses if they differ.
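The ranking procedure described in the paragraph above, ranking models by MASE while holding the training horizon fixed, once for the lumped low and medium demand clusters and once for high-demand pixels only, could be sketched as follows. This is a minimal sketch: the long-format table with columns model, training_weeks, demand_cluster, and mase, and the use of a plain mean to lump clusters, are assumptions for illustration, not the authors' implementation.

import pandas as pd

def rank_models(df: pd.DataFrame) -> pd.DataFrame:
    """Rank models by MASE within each training horizon (lower MASE = rank 1)."""

    def ranks(sub: pd.DataFrame) -> pd.Series:
        # Average the MASE per (training horizon, model), then rank per horizon.
        avg = sub.groupby(["training_weeks", "model"])["mase"].mean()
        return avg.groupby(level="training_weeks").rank(method="min")

    low_medium = df[df["demand_cluster"].isin(["low", "medium"])]
    high = df[df["demand_cluster"] == "high"]

    out = ranks(low_medium).rename("rank_low_medium").to_frame()
    out["rank_high"] = ranks(high)  # reported in parentheses in the tables
    return out

Under these assumptions, the resulting frame mirrors the tables in this section: rank_low_medium supplies the first number, and rank_high is appended in parentheses where it differs.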
A first insight is that \textit{fnaive} is the best benchmark in all
scenarios:
Decomposing flexibly by tuning the $ns$ parameter is worth the computational