Zhao Xu, Carolin Lawrence, Ammar Shaker, Raman Siarheyeu: “Uncertainty Propagation in Node Classification”, International Conference on Data Mining (ICDM) 2022
Quantifying the predictive uncertainty of neural networks has recently attracted increasing attention. In this work, we focus on measuring the uncertainty of graph neural networks (GNNs) for the task of node classification. Most existing GNNs model message passing among nodes, and the messages are often deterministic. Questions naturally arise: Does there exist uncertainty in the messages? How could we propagate such uncertainty over a graph together with messages? To address these issues, we propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework and models the predictive uncertainty of node classification with the Bayesian confidence of the predictive probability and the uncertainty of messages. Our method introduces a novel uncertainty propagation mechanism inspired by Gaussian models. Moreover, we present an uncertainty-oriented loss for node classification that allows the GNNs to explicitly integrate predictive uncertainty into the learning procedure; consequently, training examples with large predictive uncertainty are penalized. We demonstrate BUP with respect to prediction reliability and out-of-distribution (OOD) predictions. The learned uncertainty is also analyzed in depth: the relation between uncertainty and graph topology, as well as predictive uncertainty in OOD cases, is investigated with extensive experiments. The empirical results on popular benchmark datasets demonstrate the superior performance of the proposed method.
Presented at: International Conference on Data Mining (ICDM) 2022
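The uncertainty-oriented loss described above penalizes training examples with large predictive uncertainty. Below is a minimal sketch of such a loss; the names (`predictive_variance`, `lam`) are illustrative assumptions and the actual BUP objective in the paper may differ.

```python
import torch.nn.functional as F

def uncertainty_oriented_loss(logits, labels, predictive_variance, lam=1.0):
    """Hypothetical sketch: cross-entropy plus a penalty on per-node uncertainty.

    `predictive_variance` stands in for the uncertainty estimate produced by a
    Bayesian GNN; the exact loss used by BUP may differ.
    """
    ce = F.cross_entropy(logits, labels, reduction="none")  # per-node loss
    penalty = lam * predictive_variance                     # penalize uncertain examples
    return (ce + penalty).mean()
```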
Ammar Shaker, Carolin Lawrence: “Multi-Source Survival Domain Adaptation”, 37th AAAI Conference on Artificial Intelligence (accepted)
Survival analysis is the branch of statistics that studies the relation between the characteristics of living entities and their respective survival times, taking into account the partial information held by censored cases. A good analysis can, for example, determine whether one medical treatment for a group of patients is better than another. With the rise of machine learning, survival analysis can be modeled as learning a function that maps studied patients to their survival times. To succeed with that, three crucial issues must be tackled. First, some patient data is censored: we do not know the true survival times for all patients. Second, data is scarce, which has led past research to treat different illness types as domains in a multi-task setup. Third, there is the need to adapt to new or extremely rare illness types, where little or no labels are available. In contrast to previous multi-task setups, we want to investigate how to efficiently adapt to a new survival target domain from multiple survival source domains. For this, we introduce a new survival metric and the corresponding discrepancy measure between survival distributions. These allow us to define domain adaptation for survival analysis while incorporating censored data, which would otherwise have to be dropped. Our experiments on two cancer data sets reveal superb performance on target domains, a better treatment recommendation, and a weight matrix with a plausible explanation.
To be presented at: AAAI Conference on Artificial Intelligence (AAAI-23)
Full paper download: Multi-Source_Survival_Domain_Adaptation_pre-print.pdf
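To illustrate how censoring is typically handled when scoring survival models, the sketch below computes the standard concordance index (Harrell's C-index), in which a censored case only forms a comparable pair when the other patient's event happened earlier. This is background illustration only, not the new survival metric or discrepancy measure proposed in the paper.

```python
def concordance_index(times, events, risk_scores):
    """Standard Harrell's C-index for censored survival data.

    times:       observed times
    events:      1 if the event (e.g. death) was observed, 0 if censored
    risk_scores: higher score means higher predicted risk (shorter survival)
    """
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if i's event was observed before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable > 0 else float("nan")
```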
Sascha Saralajew, Ammar Shaker, Zhao Xu, Kiril Gashteovski, Bhushan Kotnis, Wiem Ben Rim, Jürgen Quittek, Carolin Lawrence: “A Human-Centric Assessment Framework for AI”, International Conference on Machine Learning (ICML) Workshop on Human-Machine Collaboration and Teaming 2022
With the rise of AI systems in real-world applications comes the need for reliable and trustworthy AI. An important aspect of this is explainable AI systems. However, there is no agreed standard on how explainable AI systems should be assessed. Inspired by the Turing test, we introduce a human-centric assessment framework where a leading domain expert accepts or rejects the solutions of an AI system and another domain expert. By comparing the acceptance rates of provided solutions, we can assess how the AI system performs in comparison to the domain expert, and in turn whether or not the AI system’s explanations (if provided) are human understandable. This setup—comparable to the Turing test—can serve as a framework for a wide range of human-centric AI system assessments. We demonstrate this by presenting two instantiations: (1) an assessment that measures the classification accuracy of a system with the option to incorporate label uncertainties; (2) an assessment where the usefulness of provided explanations is determined in a human-centric manner.
Presented at: ICML 2022 Workshop on Human-Machine Collaboration and Teaming
Full paper download: A_Human-Centric_Assessment_Framework_for_AI_arxiv.pdf
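The comparison in this framework boils down to acceptance rates over the leading expert's accept/reject decisions. A trivial sketch with hypothetical variable names:

```python
def acceptance_rate(decisions):
    """decisions: booleans, True if the leading expert accepted a solution."""
    return sum(decisions) / len(decisions)

# Hypothetical usage: compare how often the AI's and the other expert's solutions are accepted.
ai_accepted = [True, True, False, True]
human_accepted = [True, False, True, True]
print(acceptance_rate(ai_accepted), acceptance_rate(human_accepted))
```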
Cheng Wang, Carolin Lawrence, Mathias Niepert: “State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, they are often treated as black-box models and as such it is difficult to understand what exactly they learn as well as how they arrive at a particular prediction. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) non-regular languages such as balanced parentheses and palindromes, where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition and text categorisation. We show that state-regularization (a) simplifies the extraction of finite state automata that display an RNN’s state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines, which potentially leads to a more structured memory; (c) leads to better interpretability and explainability of RNNs by leveraging the probabilistic finite state transition mechanism over time steps.
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
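The core of state-regularization is a stochastic transition onto a small set of learnable states. Below is a minimal sketch of such a layer in PyTorch; the class name, the soft mixing and the temperature handling are assumptions for illustration and may differ from the paper's exact formulation.

```python
import torch
import torch.nn as nn

class StateRegularization(nn.Module):
    """Sketch: map an RNN cell output onto a convex combination of k learnable states."""

    def __init__(self, hidden_size, num_states, temperature=1.0):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_states, hidden_size))
        self.temperature = temperature

    def forward(self, h):
        # probability of transitioning into each learnable state
        scores = h @ self.centroids.t() / self.temperature  # (batch, k)
        probs = torch.softmax(scores, dim=-1)
        # soft transition: the next hidden state is a mixture of the centroids
        return probs @ self.centroids                        # (batch, hidden)
```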
Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, Pengfei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig: “KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models”, EMNLP 2022 (accepted)
Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples. To augment KGs with new knowledge, researchers proposed models for KG Completion (KGC) tasks such as link prediction, i.e., answering (h, p, ?) or (?, p, t) queries. Such models are usually evaluated with averaged metrics on a held-out test set. While useful for tracking progress, averaged single-score metrics cannot reveal what exactly a model has learned—or failed to learn. To address this issue, we propose KGxBoard: an interactive framework for performing fine-grained evaluation on meaningful subsets of the data, each of which tests individual and interpretable capabilities of a KGC model. In our experiments, we highlight the findings that we discovered with the use of KGxBoard, which would have been impossible to detect with standard averaged single-score metrics.
Presented at: Conference on Empirical Methods in Natural Language Processing (EMNLP) 2022
Full paper download: KGxBoard_Explainable_and_Interactive_Leaderboard_for_Evaluation_Knowledge_Graph_Completion_Models.pdf
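Fine-grained evaluation of the kind KGxBoard supports amounts to reporting a metric per data bucket instead of one global average. The sketch below groups per-triple ranks into buckets and reports mean reciprocal rank (MRR) per bucket; the bucketing key is a hypothetical stand-in for the interpretable capabilities defined in the tool.

```python
from collections import defaultdict

def bucketed_mrr(ranks, bucket_keys):
    """Group per-triple ranks into buckets (e.g. by relation or entity frequency)
    and report MRR per bucket instead of a single averaged score."""
    buckets = defaultdict(list)
    for rank, key in zip(ranks, bucket_keys):
        buckets[key].append(1.0 / rank)
    return {key: sum(rr) / len(rr) for key, rr in buckets.items()}
```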
Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš, “AnnIE: An Annotation Platform for Constructing Complete Open Information Extraction Benchmark”, Annual Meeting of the Association for Computational Linguistics (ACL) 2022
Open Information Extraction (OIE) is the task of extracting facts from sentences in the form of relations and their corresponding arguments in a schema-free manner. Intrinsic performance of OIE systems is difficult to measure due to the incompleteness of existing OIE benchmarks: ground truth extractions do not group all acceptable surface realizations of the same fact that can be extracted from a sentence. To measure the performance of OIE systems more realistically, it is necessary to manually annotate complete facts (i.e., clusters of all acceptable surface realizations of the same fact) from input sentences.
We propose AnnIE: an interactive annotation platform that facilitates such challenging annotation tasks and supports the creation of complete fact-oriented OIE evaluation benchmarks. AnnIE is modular and flexible in order to support different use case scenarios (i.e., benchmarks covering different types of facts) and different languages. We use AnnIE to build two complete OIE benchmarks: one with verb-mediated facts and another with facts encompassing named entities. We evaluate several OIE systems on our complete benchmarks created with AnnIE. We publicly release AnnIE under a non-restrictive license.
Conference: Annual Meeting of the Association for Computational Linguistics (ACL) 2022
Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš (University of Mannheim), "BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation", Annual Meeting of the Association for Computational Linguistics (ACL) 2022
Intrinsic evaluations of OIE systems are carried out either manually—with human evaluators judging the correctness of extractions—or automatically, on standardized benchmarks. The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of the models’ performance. Moreover, the existing OIE benchmarks are available for English only. In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German. In contrast to existing OIE benchmarks, BenchIE is fact-based, i.e., it takes into account informational equivalence of extractions: our gold standard consists of fact synsets, clusters in which we exhaustively list all acceptable surface forms of the same fact. Moreover, having in mind common downstream applications for OIE, we make BenchIE multi-faceted; i.e., we create benchmark variants that focus on different facets of OIE evaluation, e.g., compactness or minimality of extractions. We benchmark several state-of-the-art OIE systems using BenchIE and demonstrate that these systems are significantly less effective than indicated by existing OIE benchmarks. We make BenchIE (data and evaluation code) publicly available.
Conference: Annual Meeting of the Association for Computational Linguistics (ACL) 2022
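Fact-based evaluation treats each gold fact as a synset of all acceptable surface forms and credits an extraction if it matches any of them. The following is a simplified sketch of such scoring; the matching and normalisation logic here is an assumption, and the official BenchIE evaluation code is more involved.

```python
def fact_based_scores(predicted, gold_synsets):
    """Sketch of fact-based OIE scoring in the spirit of BenchIE (simplified).

    predicted:    set of extracted triples (normalised strings or tuples)
    gold_synsets: list of sets; each set holds all acceptable surface forms of one fact
    """
    # a gold fact is recalled if any of its surface forms was extracted
    recalled = sum(1 for synset in gold_synsets if synset & predicted)
    recall = recalled / len(gold_synsets) if gold_synsets else 0.0
    # an extraction is correct if it matches some surface form of some gold fact
    all_forms = set().union(*gold_synsets) if gold_synsets else set()
    correct = sum(1 for p in predicted if p in all_forms)
    precision = correct / len(predicted) if predicted else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```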
Bhushan Kotnis, Kiril Gashteovski, Daniel Oñoro-Rubio, Vanesa Rodriguez-Tembras, Ammar Shaker, Makoto Takamoto, Mathias Niepert, Carolin Lawrence, "milIE: Modular & Iterative Multilingual Open Information Extraction", Annual Meeting of the Association for Computational Linguistics (ACL) 2022
Open Information Extraction (OpenIE) is the task of extracting (subject, predicate, object) triples from natural language sentences. Current OpenIE systems extract all triple slots independently. In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slots, followed by the difficult ones by conditioning on the easy slots, and therefore achieve a better overall extraction.
Based on this hypothesis, we propose a neural OpenIE system, MILIE, that operates in an iterative fashion. Due to the iterative nature, the system is also modular—it is possible to seamlessly integrate rule-based extraction systems with a neural end-to-end system, thereby allowing rule-based systems to supply extraction slots which MILIE can leverage for extracting the remaining slots. We confirm our hypothesis empirically: MILIE outperforms SOTA systems on multiple languages ranging from Chinese to Arabic. Additionally, we are the first to provide an OpenIE test dataset for Arabic and Galician.
Conference: Annual Meeting of the Association for Computational Linguistics (ACL) 2022
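The iterative idea, predicting easy slots first and conditioning later predictions on them, can be summarised in a few lines. In the sketch below, `extract_slot` is a placeholder for the underlying neural or rule-based extractor, and the slot order is only an example; the orders explored by MILIE may differ.

```python
def iterative_extraction(sentence, extract_slot, order=("predicate", "subject", "object")):
    """Sketch of iterative slot filling: each slot is predicted conditioned on
    the slots already extracted for the same triple."""
    triple = {}
    for slot in order:
        # condition the next prediction on everything extracted so far
        triple[slot] = extract_slot(sentence, dict(triple), slot)
    return triple
```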
Ammar Shaker, Shujian Yu, Daniel Oñoro-Rubio: “Learning to Transfer with von Neumann Conditional Divergence”, AAAI-22 (Accepted)
The similarity of feature representations plays a pivotal role in the success of problems related to domain adaptation. Feature similarity includes both the invariance of marginal distributions and the closeness of conditional distributions given the desired response y (e.g., class labels). Unfortunately, traditional methods always learn such features without fully taking into consideration the information in y, which in turn may lead to a mismatch of the conditional distributions or the mix-up of discriminative structures underlying data distributions. In this work, we introduce the recently proposed von Neumann conditional divergence to improve the transferability across multiple domains. We show that this new divergence is differentiable and suitable for easily quantifying the functional dependence between features and y. Given multiple source tasks, we integrate this divergence to capture discriminative information in y and design novel learning objectives assuming those source tasks are observed either simultaneously or sequentially. In both scenarios, we obtain favorable performance against state-of-the-art methods in terms of smaller generalization error on new tasks and less catastrophic forgetting on source tasks (in the sequential setup).
Conference: Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
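For background, the (unconditional) von Neumann divergence between symmetric positive definite matrices A and B is tr(A log A - A log B - A + B). A minimal numerical sketch is given below; the conditional variant used in the paper and its estimator are more involved.

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(A, B):
    """Von Neumann (Bregman) divergence between symmetric positive definite matrices:
    tr(A log A - A log B - A + B)."""
    return np.trace(A @ logm(A) - A @ logm(B) - A + B).real
```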
Wiem Ben Rim, Carolin Lawrence, Kiril Gashteovski, Mathias Niepert, Naoaki Okazaki: “Behavioral Testing of Knowledge Graph Embedding Models for Link Prediction”, Conference on Automated Knowledge Base Construction (AKBC) (accepted)
Knowledge graph embedding (KGE) models are often used to encode knowledge graphs in order to predict new links inside the graph. The accuracy of these methods is typically evaluated by computing an averaged accuracy metric on a held-out test set. This approach, however, does not allow the identification of where the models might systematically fail or succeed. To address this challenge, we propose a new evaluation framework that builds on the idea of (black-box) behavioral testing, a software engineering principle that enables users to detect system failures before deployment. With behavioral tests, we can specifically target and evaluate the behavior of KGE models on specific capabilities deemed important in the context of a particular use case. To this end, we leverage existing knowledge graph schemas to design behavioral tests for the link prediction task. With an extensive set of experiments, we perform and analyze these tests for several KGE models. Crucially, we for example find that a model ranked second to last on the original test set actually performs best when tested for a specific capability. Such insights allow users to better choose which KGE model might be most suitable for a particular task. The framework is extendable to additional behavioral tests and we hope to inspire fellow researchers to join us in collaboratively growing this framework. The framework is available at https://github.com/nec-research/KGEval.
Conference: Conference on Automated Knowledge Base Construction (AKBC)
A research collaboration between NEC Laboratories Europe and Tokyo Institute of Technology
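As an illustration of what a behavioral test for link prediction could look like, the sketch below checks whether a model that ranks (h, r, t) highly for a symmetric relation also ranks the reversed triple (t, r, h) highly. The test itself, the `model_rank` interface and the top-k threshold are hypothetical; the tests in the paper are derived from knowledge graph schemas.

```python
def symmetry_test(model_rank, test_triples, symmetric_relations, k=10):
    """Hypothetical behavioral test: for symmetric relations, top-k predictions
    should also hold for the reversed triple. `model_rank` returns a triple's rank."""
    passed, total = 0, 0
    for h, r, t in test_triples:
        if r in symmetric_relations and model_rank((h, r, t)) <= k:
            total += 1
            if model_rank((t, r, h)) <= k:
                passed += 1
    return passed / total if total else float("nan")
```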
Giuseppe Serra, Zhao Xu, Mathias Niepert, Carolin Lawrence, Peter Tiňo, Xin Yao: "Interpreting Node Embedding with Text-labeled Graphs", IEEE International Joint Conference on Neural Networks (IJCNN) 2021 (accepted)
Graph neural networks have recently received increasing attention. These methods often map nodes into latent spaces and learn vector representations of the nodes for a variety of downstream tasks. To gain trust and to promote collaboration between AIs and humans, it would be better if those representations were interpretable for humans. However, most explainable AI methods focus on a supervised learning setting and aim to answer the following question: "Why does the model predict y for an input x?". For an unsupervised learning setting such as node embedding, interpretation can be more complicated since the embedding vectors are usually not understandable for humans. On the other hand, nodes and edges in a graph are often associated with texts in many real-world applications. A question naturally arises: could we integrate the human-understandable textual data into graph learning to facilitate interpretable node embedding? In this paper we present interpretable graph neural networks (iGNN), a model that learns textual explanations for node representations by modeling the extra information contained in the associated textual data. To validate the performance of the proposed method, we investigate the learned interpretability of the embedding vectors and use functional interpretability to measure it. Experimental results on multiple text-labeled graphs show the effectiveness of the iGNN model at learning textual explanations of node embeddings while performing well in downstream tasks.
Full paper download: Interpreting_Node_Embedding_with_Text-labeled_Graph_090621.pdf
Ammar Shaker, Francesco Alesiani, Shujian Yu, Wenzhe Yin: “Bilevel Continual Learning”, International Joint Conference on Neural Networks (IJCNN) 2021 (accepted)
Continual learning (CL) studies the problem of learning a sequence of tasks, one at a time, such that the learning of each new task does not lead to a deterioration in performance on the previously seen ones. This paper presents Bilevel Continual Learning (BiCL), a general framework for continual learning that fuses bilevel optimization and recent advances in meta-learning for deep neural networks. BiCL is able to train both deep discriminative and deep generative models under the conservative setting of online continual learning. Experimental results show that BiCL provides competitive performance in terms of accuracy for the current task while reducing the effect of catastrophic forgetting.
Carolin Lawrence, Timo Sztyler and Mathias Niepert: “Explaining Neural Matrix Factorization with Gradient Rollback”, AAAI 2021
Explaining the predictions of neural black-box models is an important problem, especially when such models are used in applications where user trust is crucial. Estimating the influence of training examples on a learned neural model's behavior allows us to identify training examples most responsible for a given prediction and, therefore, to faithfully explain the output of a black-box model. The most generally applicable existing method is based on influence functions, which scale poorly for larger sample sizes and models.
We propose gradient rollback, a general approach for influence estimation, applicable to neural models where each parameter update step during gradient descent touches a smaller number of parameters, even if the overall number of parameters is large. Neural matrix factorization models trained with gradient descent are part of this model class. These models are popular and have found a wide range of applications in industry. Especially knowledge graph embedding methods, which belong to this class, are used extensively. We show that gradient rollback is highly efficient at both training and test time. Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent. This establishes that gradient rollback is robustly estimating example influence. We also conduct experiments which show that gradient rollback provides faithful explanations for knowledge base completion and recommender datasets.
Presented at: 35th Conference on Artificial Intelligence (AAAI-21)
Full paper download: 16632-Article_Text-20126-1-2-20210518.pdf
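Gradient rollback estimates an example's influence from the parameter updates it caused during training. The sketch below records per-example SGD updates so that they could later be subtracted ("rolled back") from the final model; the variable names and the plain SGD loop are assumptions, and the paper's bookkeeping for matrix factorization models is more refined.

```python
import torch

def train_with_rollback_log(model, examples, loss_fn, lr=0.01):
    """Sketch: one SGD pass that stores, per training example, the parameter
    updates it induced. Influence on a prediction can later be approximated by
    subtracting an example's stored updates and re-scoring."""
    influence_log = {}  # example id -> {parameter name: update applied}
    for ex_id, (x, y) in enumerate(examples):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        updates = {}
        with torch.no_grad():
            for name, p in model.named_parameters():
                if p.grad is not None:
                    delta = -lr * p.grad
                    p.add_(delta)                # SGD step
                    updates[name] = delta.clone()  # remember this example's contribution
        influence_log[ex_id] = updates
    return influence_log
```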
Bhushan Kotnis, Carolin Lawrence and Mathias Niepert: “Answering Complex Queries in Knowledge Graphs with Bidirectional Sequence Encoders”, AAAI 2021
Representation learning for knowledge graphs (KGs) has focused on the problem of answering simple link prediction queries. In this work, we address the more ambitious challenge of predicting the answers of conjunctive queries with multiple missing entities. We propose Bidirectional Query Embedding (BIQE), a method that embeds conjunctive queries with models based on bidirectional attention mechanisms. Contrary to prior work, bidirectional self-attention can capture interactions among all the elements of a query graph. We introduce two new challenging datasets for studying conjunctive query inference and conduct experiments on several benchmark datasets that demonstrate that BIQE significantly outperforms state-of-the-art baselines.
Presented at: 35th Conference on Artificial Intelligence (AAAI-21)
Shujian Yu, Ammar Shaker, Francesco Alesiani and Jose Principe: “Measuring the Discrepancy between Conditional Distributions: Methods, Properties and Applications”, IJCAI 2020
We propose a simple yet powerful test statistic to quantify the discrepancy between two conditional distributions. The new statistic avoids the explicit estimation of the underlying distributions in high-dimensional space and operates on the cone of symmetric positive semidefinite (SPS) matrices using the Bregman matrix divergence. Moreover, it inherits the merits of the correntropy function to explicitly incorporate high-order statistics in the data. We present the properties of our new statistic and illustrate its connections to prior art. We finally show the applications of our new statistic on three different machine learning problems, namely multi-task learning over graphs, concept drift detection, and information-theoretic feature selection, to demonstrate its utility and advantage. Code for our statistic is available at https://bit.ly/BregmanCorrentropy.
Presented at: International Joint Conference on Artificial Intelligence – Pacific Rim International Conference on Artificial Intelligence, 2020
Full paper download: Measuring the Discrepancy between Conditional Distributions: Methods, Properties and Applications (pdf)
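The statistic operates on kernel (correntropy-style) Gram matrices that live on the SPS cone. A minimal sketch of building such a matrix from a sample is shown below; the Gaussian kernel width, the trace normalisation and the subsequent Bregman-divergence comparison are simplifications of what the paper actually does.

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    """Sketch: Gaussian-kernel Gram matrix of a sample (n x d array X),
    normalised to unit trace so it lies on the SPS cone."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * sigma ** 2))
    return K / np.trace(K)
```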
Francesco Alesiani, Shujian Yu, Ammar Shaker: “Towards Interpretable Multi Task Learning”, ECML PKDD 2020
Interpretable multi-task learning can be expressed as learning a sparse graph of the task relationships based on the prediction performance of the learned models. Since many natural phenomena exhibit sparse structures, enforcing sparsity on learned models reveals the underlying task relationship. Moreover, different sparsification degrees from a fully connected graph uncover various types of structures, like cliques, trees, lines, clusters or fully disconnected graphs. In this paper, we propose a bilevel formulation of multi-task learning that induces sparse graphs, thus revealing the underlying task relationships, and an efficient method for its computation. We show empirically how the induced sparse graph improves the interpretability of the learned models and their relationship on synthetic and real data, without sacrificing generalization performance. Code at https://bit.ly/GraphGuidedMTL
Ammar Shaker, Shujian Yu, Xiao He, Christoph Gärtner: “Online Meta-Forest for Regression Data Streams”, IJCNN 2020 (part of IEEE WCCI 2020)
Stream learning is essential when there is limited memory, time and computational power. However, existing streaming methods are mostly designed for classification, with only a few exceptions for regression problems. Although being fast, the performance of these online regression methods is inadequate due to their dependence on merely linear models. Besides, only a few stream methods are based on meta-learning that aims at facilitating the dynamic choice of the right model. Nevertheless, these approaches are restricted to recommending learners on a window and not on the instance level. In this paper, we present a novel approach, named Online Meta-Forest, that incrementally induces an ensemble of meta-learners that selects the best set of predictors for each test example. Each meta-learner has the ability to find a non-linear mapping of the input space to the set of induced models. We conduct a series of experiments demonstrating that Online Meta-Forest outperforms related methods on 16 out of 25 evaluated benchmark and domain datasets in transportation.
Carolin Lawrence, Bhushan Kotnis, Mathias Niepert: “Attending to Future Tokens for Bidirectional Sequence Generation”, EMNLP 2019
Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated, only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.
Presented at: 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019)
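The placeholder mechanism can be pictured as padding the not-yet-generated positions with a special token so that bidirectional attention has something to attend to at future positions. A toy sketch follows; the token string "[PLH]" is a hypothetical name.

```python
def insert_placeholders(prefix_tokens, num_future, placeholder="[PLH]"):
    """Sketch: append placeholder tokens for not-yet-generated positions so a
    bidirectional model can attend to both past tokens and future placeholders."""
    return prefix_tokens + [placeholder] * num_future

# e.g. ["How", "are"] -> ["How", "are", "[PLH]", "[PLH]"]
print(insert_placeholders(["How", "are"], 2))
```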