From 4784f76ec87779c2ef3a76c8509b9f0c302bc1ca Mon Sep 17 00:00:00 2001
From: Alexander Hess
Date: Mon, 5 Oct 2020 01:03:06 +0200
Subject: [PATCH] Adjust placement of tables

---
 tex/4_stu/4_overall.tex | 14 +++++++-------
 tex/4_stu/6_fams.tex    | 26 +++++++++++++-------------
 2 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/tex/4_stu/4_overall.tex b/tex/4_stu/4_overall.tex
index dff9559..4d11718 100644
--- a/tex/4_stu/4_overall.tex
+++ b/tex/4_stu/4_overall.tex
@@ -44,13 +44,6 @@ As the non-seasonal \textit{hses} reaches a similar accuracy as its
 So, in the absence of seasonality, models that only model a trend part are
   the least susceptible to the noise.
 
-For medium demand (i.e., $10 < \text{ADD} < 25$) and training horizons up to
-  six weeks, the best-performing models are the same as for low demand.
-For longer horizons, \textit{hets} provides the highest accuracy.
-Thus, to fit a seasonal pattern, longer training horizons are needed.
-While \textit{vsvr} enters the top three, \textit{hets} has the edge as they
-  neither require parameter tuning nor real-time data.
-
 \begin{center}
 \captionof{table}{Top-3 models by training weeks and average demand
                   ($1~\text{km}^2$ pixel size, 60-minute time steps)}
@@ -206,6 +199,13 @@ While \textit{vsvr} enters the top three, \textit{hets} has the edge as they
 \end{tabular}
 \end{center}
 
+For medium demand (i.e., $10 < \text{ADD} < 25$) and training horizons up to
+  six weeks, the best-performing models are the same as for low demand.
+For longer horizons, \textit{hets} provides the highest accuracy.
+Thus, to fit a seasonal pattern, longer training horizons are needed.
+While \textit{vsvr} enters the top three, \textit{hets} has the edge as they
+  neither require parameter tuning nor real-time data.
+
 In summary, except for high demand, simple models trained on horizontal
   time series work best.
 By contrast, high demand (i.e., $25 < \text{ADD} < \infty$) and less than
diff --git a/tex/4_stu/6_fams.tex b/tex/4_stu/6_fams.tex
index 8fdd17d..c398824 100644
--- a/tex/4_stu/6_fams.tex
+++ b/tex/4_stu/6_fams.tex
@@ -1,19 +1,6 @@
 \subsection{Results by Model Families}
 \label{fams}
 
-Besides the overall results, we provide an in-depth comparison of models
-  within a family.
-Instead of reporting the MASE per model, we rank the models holding the
-  training horizon fixed to make comparison easier.
-Table \ref{t:hori} presents the models trained on horizontal time series.
-In addition to \textit{naive}, we include \textit{fnaive} and \textit{pnaive}
-  already here as more competitive benchmarks.
-The tables in this section report two rankings simultaneously:
-The first number is the rank resulting from lumping the low and medium
-  clusters together, which yields almost the same rankings when analyzed
-  individually.
-The ranks from only high demand pixels are in parentheses if they differ.
-
 \begin{center}
 \captionof{table}{Ranking of benchmark and horizontal models
                   ($1~\text{km}^2$ pixel size, 60-minute time steps):
@@ -47,6 +34,19 @@ The ranks from only high demand pixels are in parentheses if they differ.
 \end{tabular}
 \end{center}
 \ 
+Besides the overall results, we provide an in-depth comparison of models
+  within a family.
+Instead of reporting the MASE per model, we rank the models holding the
+  training horizon fixed to make comparison easier.
+Table \ref{t:hori} presents the models trained on horizontal time series.
+In addition to \textit{naive}, we include \textit{fnaive} and \textit{pnaive}
+  already here as more competitive benchmarks.
+The tables in this section report two rankings simultaneously:
+The first number is the rank resulting from lumping the low and medium
+  clusters together, which yields almost the same rankings when analyzed
+  individually.
+The ranks from only high demand pixels are in parentheses if they differ.
+
 A first insight is that \textit{fnaive} is the best benchmark in all scenarios:
 Decomposing flexibly by tuning the $ns$ parameter is worth the computational