NEC Laboratories Europe

Human-Centric AI
Blog


Learning to Transfer with von Neumann Conditional Divergence

Learning paradigms designed for multiple domains or tasks, such as multitask learning, continual learning and domain adaptation, aim to reduce the large amount of energy and manual labor needed to retrain machine learning models. In this work, we introduce a domain adaptation approach that exploits features learned on relevant source tasks to reduce the amount of data required for learning a new target task.
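To make the idea concrete, here is a minimal, generic feature-transfer sketch in PyTorch: a backbone trained on a related source task is frozen and only a small head is fitted on scarce target data. This illustrates why reusing source features reduces the target data requirement; it is not the von Neumann conditional divergence method from the post, and all layer sizes and data are placeholders.

```python
# Generic feature-transfer sketch (not the method from the post).
# A backbone assumed to be trained on a source task is frozen; only a small
# target head is trained, so far fewer labelled target examples are needed.
import torch
import torch.nn as nn

source_backbone = nn.Sequential(        # placeholder for a source-trained feature extractor
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
for param in source_backbone.parameters():
    param.requires_grad = False         # keep the transferred features fixed

target_head = nn.Linear(64, 10)         # new target task with 10 classes (placeholder)
model = nn.Sequential(source_backbone, target_head)

optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Tiny synthetic batch standing in for the scarce labelled target data.
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```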


Uncertainty Quantification in Node Classification

Modern neural networks are widely applied to a variety of learning tasks because of their exceptional performance, but they fail to express uncertainty about their predictions. For example, if a neural network is trained to predict whether an image contains a cat or a dog and is given an elephant as input, it will not admit that it is unsure; instead, it will still assign a relatively high probability to either cat or dog. For high-risk domains like healthcare and autonomous driving this behavior is unacceptable: the cost and damage caused by overconfident or underconfident predictions can be catastrophic.
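The structural limitation can be seen in a minimal sketch: a toy cat-vs-dog classifier (an untrained placeholder network here) can only distribute its softmax output over the two known classes, so it has no way to report that an out-of-distribution input is neither.

```python
# Toy illustration of the problem: a cat-vs-dog classifier's softmax output
# always sums to one over {cat, dog}, so it cannot express "I am unsure" when
# shown something else. The network and the "elephant" input are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

ood_input = torch.randn(1, 64)               # stand-in for an elephant image
probs = F.softmax(classifier(ood_input), dim=-1)

# The probability mass is forced onto "cat" or "dog"; there is no third option.
print({"cat": probs[0, 0].item(), "dog": probs[0, 1].item()})
```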


Understanding Gradient Rollback

For many, including scientific researchers, artificial intelligence (AI) is a mystery: its reasoning is opaque. AI systems and models are often referred to as “black boxes” because we do not understand the logic behind what they do. Neural networks are powerful artificial intelligence tools trained to recognize meaningful relationships in data and predict new knowledge. Nonetheless, it is not commonly understood how neural networks function or arrive at their predictions. When AI systems affect our lives, we need to ensure that their predictions and decisions are reasonable. NEC Laboratories Europe has recently achieved a milestone in explainable AI (XAI) research by developing Gradient Rollback, a method that opens neural “black box” models and explains their predictions. Gradient Rollback reveals the training data that has the greatest influence on a prediction. Users can then judge how plausible a prediction is by viewing its explanation, the training instances with the highest influence. The more plausible a prediction is, the more likely it is to be trusted, a key factor in the adoption of AI.
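As a rough illustration of what such an explanation looks like, the sketch below ranks training instances by a simple gradient-similarity influence proxy and returns the most influential ones. The proxy and all data are placeholders; this is not the Gradient Rollback algorithm described in the post.

```python
# Ranking training instances by an influence proxy to explain one prediction.
# NOT Gradient Rollback itself: the score below is a simple gradient-similarity
# stand-in used only to illustrate "training instances with the highest influence".
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 2)                      # toy stand-in for a trained model
criterion = nn.CrossEntropyLoss()

def loss_gradient(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Flattened gradient of the loss on a single example."""
    loss = criterion(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.flatten() for g in grads])

# Toy training set and one test prediction to be explained.
train_set = [(torch.randn(8), torch.tensor(i % 2)) for i in range(6)]
test_x, test_y = torch.randn(8), torch.tensor(1)
g_test = loss_gradient(test_x, test_y)

# Influence proxy: alignment between training-example and test-example gradients.
scores = [(i, torch.dot(loss_gradient(x, y), g_test).item())
          for i, (x, y) in enumerate(train_set)]

# The explanation is the handful of most influential training instances.
for idx, score in sorted(scores, key=lambda s: s[1], reverse=True)[:3]:
    print(f"training instance {idx}: influence={score:.3f}")
```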


Attending to Future Tokens for Bidirectional Sequence Generation

Accepted at Empirical Methods in Natural Language Processing (EMNLP) 2019.

NLP has undergone a major change in recent months. Previously, a separate neural model was defined and trained for each NLP task. Recently, however, various papers (ELMo [1], ULMFiT [2], GPT [3], BERT [4], GPT-2 [5]) have shown that it is possible to pre-train an NLP model on a language modelling task (more on this below) and then use this model as a starting point for fine-tuning on further tasks. Many have labelled this an important turning point for NLP ([6], [7], [8], inter alia).
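As a rough sketch of this pre-train-then-fine-tune workflow (not the code behind the paper or the cited models), the example below loads a pre-trained BERT checkpoint via the Hugging Face transformers library and takes one fine-tuning step on a toy classification task; the checkpoint name, labels and sentences are illustrative assumptions.

```python
# Pre-train/fine-tune sketch: start from a checkpoint pre-trained with a
# language-modelling objective and fine-tune it on a small downstream task.
# Checkpoint, labels and sentences are placeholders for illustration only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["a great movie", "a terrible movie"]     # toy downstream task
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

outputs = model(**inputs, labels=labels)          # pre-trained weights as starting point
outputs.loss.backward()
optimizer.step()
```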
