Evolutionary & Hybrid AI

a VUB AI Lab research team

Mission

We aim to build truly intelligent systems that are able to interact with and reason about their native environment in order to solve an open-ended set of tasks. Our systems are deeply inspired by evolutionary principles such as self-organisation, selection and emergent functionality, and are therefore adaptive by design. We adopt a hybrid approach that integrates symbolic and subsymbolic AI techniques, combining their strengths to achieve general, accurate and interpretable models. We focus in particular on tasks that require human language-like communication, involving advanced perception, reasoning and learning skills. We investigate fundamental research questions that have a tight connection to real-world problems.

Featured projects

A selection of exciting research projects that showcase our expertise and provide insight into our long-term research program.
  • Computational Construction Grammar: A Practical Introduction

    We are working on an introductory textbook that explains, step by step, how construction grammars can be computationally implemented. The book will serve both as a textbook for courses on construction grammar or computational linguistics and as a resource for construction grammar researchers who would like to run their analyses on corpora.

  • Grounded Concept Learning

    This project investigates how autonomous agents can distill meaningful concepts and words from continuous streams of perceptual data. The concepts are constructed through task-based communicative interactions and reflect properties of the world that are relevant for the task. The agents either learn the conceptual system of an existing natural language, or construct their own conceptual system and language.

  • Semantic Frame Extractor

    The semantic frame extractor combines dependency parsing with computational construction grammar to extract semantic frames from text corpora. We have applied the newly developed method to a corpus of English newspaper articles in order to study the causal relations expressed in the climate change debate.

  • Visual Question Answering

    In this project, we apply our language technologies to the problem of visual question answering. The task requires understanding a natural language question, reasoning about how it relates to a given image, and answering it accordingly. We adopt a hybrid and modular approach that parses the question into a procedural semantic representation and executes it on the image.

  • Visual Dialog

    We extend our hybrid approach to visual question answering to visual dialog tasks, where the goal is to answer a series of questions posed over the course of a conversation. To keep track of what has been said, we develop techniques to represent, store and query the dialogue history. This involves designing a procedural semantics that interfaces with both the image and the dialogue history.
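
The parse-then-execute pipeline behind these two projects can be sketched in a few lines. This is an illustration only, not the team's grammar or runtime: the primitives (`filter`, `count`), the symbolic scene format and the example program are assumptions standing in for the actual procedural semantic representations.

```python
# Hypothetical sketch: a question such as "How many red cubes are there?"
# is assumed to have been parsed into a small program of primitive
# operations, which is then executed against a symbolic scene
# representation (here, a list of attribute dictionaries).

def execute(program, scene):
    """Run a procedural semantic representation over a scene."""
    result = scene
    for op, *args in program:
        if op == "filter":      # keep objects whose attribute matches a value
            attribute, value = args
            result = [obj for obj in result if obj.get(attribute) == value]
        elif op == "count":     # reduce the current object set to a number
            result = len(result)
        else:
            raise ValueError(f"unknown primitive: {op}")
    return result

scene = [
    {"shape": "cube", "colour": "red"},
    {"shape": "cube", "colour": "blue"},
    {"shape": "sphere", "colour": "red"},
]

# "How many red cubes are there?"
program = [("filter", "colour", "red"), ("filter", "shape", "cube"), ("count",)]
print(execute(program, scene))  # 1
```

Because the meaning representation is an explicit program rather than an opaque vector, every intermediate result can be inspected, which is what makes this style of hybrid system interpretable.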

Selected Publications

These publications are representative of the research carried out by the EHAI team. An exhaustive list can be found on the team members' research pages.
Re-conceptualising the Language Game Paradigm in the Framework of Multi-Agent Reinforcement Learning
Paul Van Eecke and Katrien Beuls
The 2020 AAAI Spring Symposium on Challenges and Opportunities for Multi-Agent Reinforcement Learning (COMARL), 2020. arXiv:2004.04722.

From Continuous Observations to Symbolic Concepts: A Discrimination-Based Strategy for Grounded Concept Learning
Jens Nevens, Paul Van Eecke and Katrien Beuls
Frontiers in Robotics and AI 7, 2020. doi:10.3389/frobt.2020.00084.

A Practical Guide to Studying Emergent Communication through Grounded Language Games
Jens Nevens, Paul Van Eecke and Katrien Beuls
Proceedings of AISB '19, Falmouth, United Kingdom, 2019, 1-8.

Computational Construction Grammar for Visual Question Answering
Jens Nevens, Paul Van Eecke and Katrien Beuls
Linguistics Vanguard 5(1), 2019, 2018-0070, 1-16.

Exploring the Creative Potential of Computational Construction Grammar
Paul Van Eecke and Katrien Beuls
Zeitschrift für Anglistik und Amerikanistik 66(3), 2018, 341-355.

Meta-Layer Problem Solving for Computational Construction Grammar
Paul Van Eecke and Katrien Beuls
2017 AAAI Spring Symposium Series, 2017, 258-265.

Demos

The following web demonstrations offer more insight into the techniques and methods we develop.

Visual Dialog

Demonstration of a grounded conversational agent that answers a series of questions.

View demo ›

Visual Question Answering (VQA) on real-world images

Demonstration of a hybrid AI system for VQA on real-world images using computational construction grammar and executable meaning representations.

View demo ›

Grounded Colour Naming Game

Demonstration of the Babel software toolkit with a grounded colour naming game experiment using the new robot interface package.

View demo ›
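
The interaction dynamics behind a naming game of this kind can be sketched compactly. This is a minimal illustration, not the Babel toolkit: the `Agent` class, the lateral-inhibition scores and the parameter `DELTA` are assumptions, and in the actual grounded experiment the colour categories are distilled from robot perception rather than passed in symbolically.

```python
import random

DELTA = 0.1  # assumed score adjustment step

class Agent:
    def __init__(self):
        self.lexicon = {}  # category -> {word: score}

    def name(self, category):
        """Return the highest-scoring word for a category, inventing one if needed."""
        words = self.lexicon.setdefault(category, {})
        if not words:
            words["w%04d" % random.randrange(10000)] = 0.5  # invent a new word
        return max(words, key=words.get)

    def interpret(self, word, category):
        """Communicative success if the hearer knows this word for the category."""
        return self.lexicon.get(category, {}).get(word) is not None

    def align(self, category, word, success):
        words = self.lexicon.setdefault(category, {})
        if success:
            words[word] = min(1.0, words.get(word, 0.5) + DELTA)
            for w in list(words):  # lateral inhibition of competing words
                if w != word:
                    words[w] = max(0.0, words[w] - DELTA)
        else:
            words[word] = words.get(word, 0.5)  # hearer adopts the unknown word

def play(speaker, hearer, category):
    word = speaker.name(category)
    success = hearer.interpret(word, category)
    speaker.align(category, word, success)
    hearer.align(category, word, success)
    return success

a, b = Agent(), Agent()
for _ in range(50):
    play(*random.sample([a, b], 2), category="red")

# After repeated interactions, both agents prefer the same word.
print(a.name("red") == b.name("red"))  # True
```

The same reinforce-and-inhibit alignment loop drives convergence in larger populations; the grounded version adds a perception step in which each agent first discriminates the topic colour from the context.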

Semantic Parsing for VQA

A computational construction grammar for mapping between natural language questions and their executable meaning representations.

View demo ›

Smart Coffee Machine

A voice-activated coffee machine powered by Fluid Construction Grammar.

View demo ›

Software

The EHAI team co-develops a number of software tools that are available to the research community.

Team

  • Dr. Katrien Beuls

    Principal Investigator

  • Dr. Paul Van Eecke

    Postdoctoral researcher and lecturer

  • Dr. Tom Willaert

    Postdoctoral researcher

  • Jens Nevens

    PhD researcher and teaching assistant

  • Lara Verheyen

    PhD researcher

  • Jeroen Van Soest

    Developer

  • Jérôme Botoko Ekila

    PhD researcher

Interested in collaborating with us?
We're always looking for talented AI researchers to join our interdisciplinary team.
Get in touch ›