NEC Laboratories Europe

Artificial Intelligence Innovation
Publications

Julia Gastinger, Timo Sztyler, Lokesh Sharma, Anett Schuelke: "On the Evaluation of Methods for Temporal Knowledge Graph Forecasting," NeurIPS 2022 Temporal Graph Learning Workshop.

Abstract:
Due to its ability to incorporate and leverage time information in relational data, Temporal Knowledge Graph (TKG) learning has become an increasingly studied research field. With the goal of predicting the future, researchers have presented innovative methods for what is called Temporal Knowledge Graph Forecasting. However, the experimental procedures in this line of work show inconsistencies that strongly influence empirical results and thus lead to distorted comparisons among models. This work focuses on the evaluation of TKG Forecasting models: we describe the evaluation settings commonly used in this research area and shed light on its scholarly issues. Further, we provide a unified evaluation protocol and re-evaluate state-of-the-art models on the most common datasets under this protocol. Finally, we show the differences in results caused by different evaluation settings. We believe that this work provides a solid foundation for future evaluations of TKG Forecasting models and can thus contribute to the development of this growing research area.

Presented at: NeurIPS 2022 Temporal Graph Learning Workshop
In collaboration with: University of Mannheim

Full paper download: On_the_Evaluation_of_Methods_for_Temporal_Knowledge_Graph_Forecasting.pdf
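
As an illustration of the kind of setting the paper compares (raw vs. filtered rankings), here is a minimal sketch, in our own Python and with illustrative names, of a time-aware filtered rank computation; it is not the authors' code.

import numpy as np

def time_aware_filtered_rank(scores, true_obj, query, known_facts):
    """Rank of the true object after masking other objects that are also
    correct at the SAME timestamp (the 'time-aware filtered' setting)."""
    s, r, t = query                  # test query (subject, relation, ?, timestamp)
    masked = scores.copy()           # one score per candidate object entity
    for o in range(len(scores)):
        # Mask competitors that are genuine facts at timestamp t,
        # but never mask the ground-truth object itself.
        if o != true_obj and (s, r, o, t) in known_facts:
            masked[o] = -np.inf
    # Rank = 1 + number of candidates scored strictly higher than the truth.
    return 1 + int((masked > masked[true_obj]).sum())

def mrr_and_hits(ranks, k=10):
    """Aggregate a list of ranks into MRR and Hits@k."""
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), float((ranks <= k).mean())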

Julia Gastinger, Sébastien Nicolas, Dusica Stepić, Mischa Schmidt, Anett Schülke: "A study on Ensemble Learning for Time Series Forecasting and the need for Meta-Learning," International Joint Conference on Neural Networks (IJCNN), 2021.

Abstract:
The contribution of this work is twofold: (1) We introduce a collection of ensemble methods for time series forecasting that combine predictions from base models, and demonstrate the power of ensemble learning for forecasting with experimental results on about 16,000 openly available datasets from the M3, M4, and M5 competitions, as well as FRED (Federal Reserve Economic Data). While the experiments show that ensembles improve forecasting results, no single ensemble strategy (and hyperparameter configuration) wins across the board. Thus, (2) we propose a meta-learning step that chooses, for each dataset, the most appropriate ensemble method and its hyperparameter configuration based on dataset meta-features.

Published in: International Joint Conference on Neural Networks
Paper available at: https://arxiv.org/abs/2104.11475
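
A minimal sketch of the two-step idea summarized above: (1) combine base-model forecasts with simple ensemble strategies, and (2) learn, from dataset meta-features, which strategy to apply. The strategy names, meta-features, and the random-forest meta-learner are illustrative assumptions, not the paper's exact setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# (1) Candidate ensemble strategies over a (n_models, horizon) forecast matrix F;
#     w holds per-model weights, e.g. inverse validation errors.
ENSEMBLES = {
    "mean":     lambda F, w: F.mean(axis=0),
    "median":   lambda F, w: np.median(F, axis=0),
    "weighted": lambda F, w: (w[:, None] * F).sum(axis=0) / w.sum(),
}

# (2) Meta-learning: map dataset meta-features (series length, seasonality
#     strength, ...) to the ensemble that performed best on similar datasets.
def fit_meta_learner(meta_features, best_ensemble_names):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(meta_features, best_ensemble_names)
    return clf

def ensemble_forecast(clf, meta_feats, base_forecasts, weights):
    choice = clf.predict(np.asarray(meta_feats).reshape(1, -1))[0]
    return ENSEMBLES[choice](base_forecasts, weights)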

M. Schmidt, J. Gastinger, S. Nicolas, A. Schuelke: "HAMLET - A Learning Curve-Enabled Multi-Armed Bandit for Algorithm Selection," International Joint Conference on Neural Networks (IJCNN), 2020.

Abstract:
Automated algorithm selection and hyperparameter tuning facilitate the application of machine learning. Traditional multi-armed bandit strategies look to the history of observed rewards to identify the most promising arms for maximizing expected total reward in the long run. Under limited time budgets and computational resources, this backward view of rewards is inappropriate: the bandit should instead look into the future and anticipate the highest final reward at the end of a specified time budget. This work builds on that insight by introducing HAMLET, which extends the bandit approach with learning curve extrapolation and computation-time awareness for selecting among a set of machine learning algorithms. Results show that HAMLET Variants 1-3 perform as well as or better than other bandit-based algorithm selection strategies in experiments with recorded hyperparameter tuning traces for the majority of the considered time budgets. The best-performing HAMLET Variant 3 combines learning curve extrapolation with the well-known upper confidence bound exploration bonus and outperforms all non-HAMLET policies with statistical significance at the 95% level across 1,485 runs.

Published in: International Joint Conference on Neural Networks
Paper available at: https://arxiv.org/abs/2001.11261
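
A minimal sketch of the selection rule described above, under our own assumptions (a saturating power-law curve model and a standard UCB bonus; the paper's exact variants differ): each arm is a tuning algorithm, and the bandit pulls the arm whose learning curve, extrapolated to the end of the time budget plus an exploration bonus, looks best.

import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # Saturating curve commonly used for learning-curve extrapolation.
    return a - b * np.power(t, -c)

def extrapolated_ucb(arm_times, arm_scores, total_pulls, budget_end, c_ucb=1.0):
    """Score one arm: predicted accuracy at budget_end + UCB exploration bonus."""
    n = len(arm_scores)
    if n < 3:
        return np.inf  # too few points to fit a curve: force initial exploration
    try:
        params, _ = curve_fit(power_law, arm_times, arm_scores,
                              p0=(max(arm_scores), 1.0, 0.5), maxfev=2000)
        predicted = power_law(budget_end, *params)
    except RuntimeError:
        predicted = max(arm_scores)  # fall back if the curve fit fails
    return predicted + c_ucb * np.sqrt(2.0 * np.log(total_pulls) / n)

# At each step: pull the argmax arm, observe a new (time, validation accuracy)
# point on its learning curve, and repeat until the time budget is exhausted.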

M. Schmidt, S. Safarani, J. Gastinger, T. Jacobs, S. Nicolas, A. Schuelke: "On the Performance of Differential Evolution for Hyperparameter Tuning," International Joint Conference on Neural Networks (IJCNN), 2019.

Abstract:
Automated hyperparameter tuning aspires to facilitate the application of machine learning for non-experts, and the literature applies a range of optimization approaches for that purpose. This paper investigates the performance of Differential Evolution for tuning the hyperparameters of supervised learning algorithms on classification tasks. The empirical study covers machine learning algorithms and datasets with various characteristics, comparing Differential Evolution against Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC for most datasets when tuning a given machine learning algorithm, particularly when breaking ties in a first-to-report fashion; only for the tightest computational budgets does SMAC perform better. On small datasets, Differential Evolution outperforms SMAC by 19% (37% after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15% (23% after tie-breaking) more wins than SMAC.

Presented at: International Joint Conference on Neural Networks 2019
Paper available at: https://arxiv.org/abs/1904.06960
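
As an illustration of the technique studied above (not the paper's own experimental setup), here is a minimal sketch that tunes an SVM with SciPy's stock Differential Evolution implementation; the search bounds and the choice of model and dataset are our assumptions.

from scipy.optimize import differential_evolution
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(params):
    # DE minimizes, so return the negative mean cross-validated accuracy.
    log_C, log_gamma = params
    model = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return -cross_val_score(model, X, y, cv=3).mean()

# Search log10(C) in [-3, 3] and log10(gamma) in [-5, 0].
result = differential_evolution(objective, bounds=[(-3, 3), (-5, 0)],
                                maxiter=20, popsize=10, seed=0)
print("best C=%.3g  gamma=%.3g  CV accuracy=%.4f"
      % (10.0 ** result.x[0], 10.0 ** result.x[1], -result.fun))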
