ESANN 2023

Special sessions

Special sessions are organized by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as any other submission. Authors who submit papers to one of these sessions are invited to mention it on the author submission form; submissions to the special sessions must follow the same format, instructions and deadlines as any other submission, and must be sent according to the same procedure.

The following special sessions will be organized at ESANN 2023:

  • Efficient Learning In Spiking Neural Networks
    Organized by Alex Rast (Oxford Brookes University, UK), Nigel Crook (Oxford Brookes University, UK)
  • Quantum Artificial Intelligence
    Organized by José D. Martín-Guerrero (Universitat de València, Spain), Lucas Lamata (Universidad de Sevilla, Spain), Thomas Villmann (University of Applied Sciences Mittweida, Saxon Institute for Computational Intelligence and Machine Learning, Germany)
  • Green Machine Learning
    Organized by Verónica Bolón-Canedo (CITIC, Universidade da Coruña, Spain), Laura Morán-Fernández (CITIC, Universidade da Coruña, Spain), Brais Cancela (Universidade da Coruña, Spain), Amparo Alonso-Betanzos (CITIC, Universidade da Coruña, Spain)
  • Graph Representation Learning
    Organized by Federico Errica (NEC Laboratories Europe GmbH, Germany), Davide Bacciu (Università di Pisa, Italy), Alessio Micheli (Università di Pisa, Italy), Luca Pasa (University of Padova, Italy), Nicolò Navarin (University of Padua, Italy), Marco Podda (Università di Pisa, Italy), Daniele Zambon (The Swiss AI Lab IDSIA, Switzerland)
  • Towards Machine Learning Models that We Can Trust: Testing, Improving, and Explaining Robustness
    Organized by Maura Pintor (University of Cagliari, Italy), Ambra Demontis (University of Cagliari, Italy), Battista Biggio (University of Cagliari, Italy)
  • Neuro-Symbolic AI: Techniques, Applications, and Challenges
    Organized by Michael Steininger (IAV, Germany), Christoph Raab (IAV GmbH, Germany), Max Maerker (IAV, Germany), Johannes Dornheim (IAV, Germany), Simon Olma (IAV, Germany), Thorben Menne (IAV, Germany), Christian Nabert (IAV, Germany)
  • Machine Learning Applied to Sign Language
    Organized by Benoit Frénay (Université de Namur, Belgium), Joni Dambre (Ghent University - imec - IDLab, Belgium), Jérôme Fink (University of Namur, Belgium), Mathieu De Coster (Ghent University, Belgium)

Efficient Learning In Spiking Neural Networks
Organized by Alex Rast (Oxford Brookes University, UK), Nigel Crook (Oxford Brookes University, UK)

Spiking neural networks have seen a recent resurgence in interest, owing particularly to their greater representational capacity and correspondingly (much) lower computational cost. On the other hand, reliable learning in such networks has historically been difficult to achieve, in contrast to conventional (non-spiking) networks that can use efficient gradient-descent algorithms, exemplified by backpropagation. Challenges in adopting key aspects of gradient-descent methods in spiking networks include the credit assignment problem, the absence of any known backward signal across synaptic connections to axons, and the non-differentiability of the activation function. There is a need for learning algorithms for spiking models that can compete favourably against conventional neural networks in terms of training time and computational power. This Special Session will bring together leading researchers to investigate efficient spike-based learning models that start to approach conventional backprop in terms of learning rate and accuracy, and provide a formal basis for evaluating their computational cost.
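
One widely used workaround for this non-differentiability is to keep the hard spike in the forward pass but substitute a smooth surrogate derivative in the backward pass. The following minimal PyTorch-style sketch illustrates the idea; the class name, the fast-sigmoid surrogate and its slope of 10 are illustrative assumptions rather than a method proposed by the session organizers, and the firing threshold is assumed to be already subtracted from the membrane potential.

    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""

        @staticmethod
        def forward(ctx, v):
            # v: membrane potential with the firing threshold already subtracted (assumption)
            ctx.save_for_backward(v)
            return (v > 0).float()  # emit a spike where the potential crosses the threshold

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            # Derivative of a fast sigmoid stands in for the undefined Heaviside derivative.
            surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
            return grad_output * surrogate

    spike = SurrogateSpike.apply  # drop-in nonlinearity for an otherwise standard PyTorch model

A network built around such a nonlinearity can then be trained end to end with ordinary backpropagation (through time), which is one route towards the efficiency comparisons this session calls for.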

Quantum Artificial Intelligence
Organized by José D. Martín-Guerrero (Universitat de València, Spain), Lucas Lamata (Universidad de Sevilla, Spain), Thomas Villmann (University of Applied Sciences Mittweida, Saxon Institute for Computational Intelligence and Machine Learning, Germany)

After a successful special session on Quantum Machine Learning (QML) at ESANN 2020, we take a step forward and propose a special session with a broader scope, Quantum Artificial Intelligence (QAI), in line with the significant evolution of the field in recent years. In the classical realm, Machine Learning (ML) is one of the approaches within the Artificial Intelligence (AI) field. While ML basically deals with models and algorithms that learn from data and experience, AI is a broader concept: it aims at solving problems that require intelligent behavior and decisions, arguably simulating human thinking capability. To achieve that goal, it makes use of ML and Deep Learning, but also of other techniques such as computer vision, computer audio, and sensing. It also encompasses concepts such as autonomy in decision making: an intelligent agent should be able to perform a variety of complex tasks, not only the few it has learned through a training process.

The scientific community of quantum science and technology is increasingly interested in its connection with AI, thus pouring intelligence and autonomy into quantum systems. This paves the way for a number of research avenues, including quantum versions of classical intelligent systems or quantum interpretations of human-brain interactions. In fact, a quantum description of the concept of intelligence is challenging in itself, as it may require many redefinitions and new conceptualizations to fulfill the requirements of a quantum paradigm.

The community working on QML has grown drastically in the last few years, as shown by the creation of new journals devoted to the field. This hectic research activity has produced many new ideas and has awakened interest in fields such as QAI, which can be seen as a natural evolution of QML.

QAI may come up with disruptive concepts. For instance, sentiment analysis or consciousness are well known to be analyzable by classical AI, but they will require reformulations to accommodate them to quantum systems; this is needed if one wants to implement systems on quantum computers that take intelligent and autonomous actions. Further, interpretable/explainable models as well as cognitive learning paradigms in ML may provide both inspiration and challenging model restrictions when transferring them to QAI/QML.

This special session is hence aimed at providing a discussion forum for researchers working on QML, QAI, ML and AI applications to Physics. The main topics of the session include, but are not limited to, theoretical developments in, and applications of:

  • Autonomous smart agents
  • Human-brain interaction
  • Interpretations of classical AI in the quantum realm
  • Physics-informed neural networks
  • Quantum biomimetics
  • Quantum clustering
  • Quantum computing
  • Quantum information
  • Quantum intelligence
  • Quantum learning
  • Quantum neural networks
  • Quantum reinforcement learning
  • Quantum machine learning and interpretable/explainable models in ML
  • Quantum technologies

Green Machine Learning
Organized by Verónica Bolón-Canedo (CITIC, Universidade da Coruña, Spain), Laura Morán-Fernández (CITIC, Universidade da Coruña, Spain), Brais Cancela (Universidade da Coruña, Spain), Amparo Alonso-Betanzos (CITIC, Universidade da Coruña, Spain)

In recent years we have witnessed impressive advances achieved by Artificial Intelligence (AI), in most cases by using deep learning models. However, it is undeniable that deep learning has a huge carbon footprint (a paper from 2019 stated that training a language model could emit nearly five times the lifetime emissions of an average car).
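
To make the kind of estimate quoted above concrete, the following back-of-the-envelope sketch converts training time and hardware power draw into an emissions figure. All numbers are illustrative assumptions and do not reproduce the figures of the cited 2019 study.

    # Back-of-the-envelope carbon accounting for a single training run.
    # Every constant below is an illustrative assumption, not a measured value.
    gpu_power_kw = 0.3           # assumed average draw of one accelerator, in kW
    num_gpus = 8                 # assumed size of the training cluster
    training_hours = 24 * 14     # assumed two-week training run
    grid_intensity = 0.4         # assumed kg CO2-eq emitted per kWh of electricity

    energy_kwh = gpu_power_kw * num_gpus * training_hours   # total energy consumed
    co2_kg = energy_kwh * grid_intensity                    # resulting emissions
    print(f"{energy_kwh:.0f} kWh -> {co2_kg:.0f} kg CO2-eq")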

The term Green AI refers to AI research that is more environmentally friendly and inclusive, not only by producing novel results without increasing the computational cost, but also by ensuring that any researcher with a laptop has the opportunity to perform high-quality research without the need for expensive cloud servers. Typical AI research (sometimes referred to as Red AI) aims to obtain state-of-the-art results at the expense of massive computational power, usually through an enormous quantity of training data and numerous experiments. Efficient machine learning approaches (especially in deep learning) are starting to receive some attention in the research community; the problem is that, most of the time, these works are not motivated by being green. It is therefore necessary to encourage the AI community to recognize the value of work by researchers who take a different path, optimizing efficiency rather than only accuracy. Topics such as low-resolution algorithms, edge computing, efficient platforms, and in general scalable and sustainable algorithms and their applications are of interest to complete a holistic view of Green AI.

In this special session, we invite papers on both practical and theoretical issues in developing new machine learning methods that are sustainable and green, as well as review papers on state-of-the-art techniques and the open challenges encountered in this field. In particular, topics of interest include, but are not limited to:

  • Developing energy-efficient algorithms for training and/or inference.
  • Investigating sustainable data management and storage techniques.
  • Exploring the use of renewable energy sources for machine learning.
  • Examining the ethical and social implications of green machine learning.
  • Investigating methods for reducing the carbon footprint of machine learning systems.
  • Studying the impact of green machine learning on various industries and applications.

Submitted papers will be reviewed according to the ESANN reviewing process and will be evaluated on their scientific value: originality, correctness, and writing style.

Graph Representation Learning
Organized by Federico Errica (NEC Laboratories Europe GmbH, Germany), Davide Bacciu (Università di Pisa, Italy), Alessio Micheli (Università di Pisa, Italy), Luca Pasa (University of Padova, Italy), Nicolò Navarin (University of Padua, Italy), Marco Podda (Università di Pisa, Italy), Daniele Zambon (The Swiss AI Lab IDSIA, Switzerland)

Most traditional machine learning approaches assume that data samples are represented as vectors, but real-world phenomena are oftentimes modeled as complex systems of interacting entities. Graphs are a convenient abstraction to model functional and/or structural dependencies between such entities and find application in fields where the intrinsic nature of the datum is relational, such as biology, network science, graphics, and chemistry. A graph may also serve as a proxy to embed a priori knowledge in a problem, e.g., to encode symmetries and constraints in combinatorial optimization. Hence, learning from graphs can be the key towards solving complex problems, and has indeed become a thriving research topic. The field of graph representation learning, in particular, studies the ability of deep neural and probabilistic networks to learn representations of graphs encoding the features of interest. Specifically, the class of models at the heart of graph deep learning extends and generalizes typical convolutional or recursive neural networks to process arbitrary graphs.
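
As a rough illustration of how such a generalization can look, the sketch below performs a single sum-aggregation message-passing step over a dense adjacency matrix. The function name, the aggregation rule and the tiny path graph are illustrative assumptions, not a specific model from this session.

    import torch

    def graph_conv(node_features, adjacency, weight):
        """One message-passing step: each node sums its neighbours' features,
        mixes them with its own through a shared linear map, and applies a ReLU."""
        aggregated = adjacency @ node_features                  # (n_nodes, in_dim)
        return torch.relu((node_features + aggregated) @ weight)

    # Tiny usage example: a 3-node path graph with 4-dimensional node features.
    adjacency = torch.tensor([[0., 1., 0.],
                              [1., 0., 1.],
                              [0., 1., 0.]])
    x = torch.randn(3, 4)
    w = torch.randn(4, 8)
    h = graph_conv(x, adjacency, w)                             # (3, 8) node embeddings

Stacking several such steps lets information propagate along longer paths in the graph, which is the basic mechanism that graph deep learning models build on.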

Towards Machine Learning Models that We Can Trust: Testing, Improving, and Explaining Robustness
Organized by Maura Pintor (University of Cagliari, Italy), Ambra Demontis (University of Cagliari, Italy), Battista Biggio (University of Cagliari, Italy)

In recent years, machine learning has become the most effective way to analyze massive data streams. However, machine learning is also subject to security and reliability issues. These aspects require machine learning models to be thoroughly tested before being deployed in unsupervised scenarios, such as services intended for consumers. The goal of this session is to discuss open challenges, both theoretical and practical, related to the security and safety of machine learning. In particular, the session will try to address the following challenges:

(i) the implementation of efficient tests for Machine Learning in the context of robustness to attacks and natural drifts of data; and

(ii) the design of robust and efficient models able to function in the wild and mitigate or detect adversarial attacks.
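
As one concrete instance of challenge (i), the sketch below estimates accuracy under a single-step FGSM-style perturbation. The model, the data loader and the epsilon budget are assumed placeholders, and FGSM is only a common baseline attack, not the evaluation protocol put forward by the organizers.

    import torch

    def fgsm_perturb(model, x, y, eps=0.03):
        """Perturb x within an L-infinity budget eps in the direction that increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def robust_accuracy(model, loader, eps=0.03):
        """Fraction of examples still classified correctly after the perturbation."""
        correct, total = 0, 0
        for x, y in loader:                                   # loader yields (inputs, labels)
            preds = model(fgsm_perturb(model, x, y, eps)).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
        return correct / total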

Neuro-Symbolic AI: Techniques, Applications, and Challenges
Organized by Michael Steininger (IAV, Germany), Christoph Raab (IAV GmbH, Germany), Max Maerker (IAV, Germany), Johannes Dornheim (IAV, Germany), Simon Olma (IAV, Germany), Thorben Menne (IAV, Germany), Christian Nabert (IAV, Germany)

Neuro-symbolic AI is a promising approach to artificial intelligence that aims to combine the strengths of symbolic reasoning and probabilistic systems, for example by combining inductive logic programming with deep learning, with applications in graphs, vision, reasoning, and explainability.

In this special session, we will provide an overview of neuro-symbolic AI, its key concepts, and current state-of-the-art techniques. We will also discuss the potential benefits and challenges of neuro-symbolic AI and its potential impact on various fields and applications. In addition to this tutorial, we welcome contributions from attendees in the context of neuro-symbolic AI. These include, but are not limited to:

  • Novel neuro-symbolic models and techniques
  • Applications of neuro-symbolic AI to real-world problems
  • Empirical evaluations and comparisons of neuro-symbolic AI approaches
  • Theoretical foundations and analysis of neuro-symbolic AI
  • Emerging trends and challenges in the field of neuro-symbolic AI

Machine Learning Applied to Sign Language
Organized by Benoit Frénay (Université de Namur, Belgium), Joni Dambre (Ghent University - imec - IDLab, Belgium), Jérôme Fink (University of Namur, Belgium), Mathieu De Coster (Ghent University, Belgium)

Deep learning has led to spectacular advances in many fields dealing with unstructured data, such as computer vision, natural language processing, and data generation. Recently, sign languages have drawn the attention of machine learning practitioners, as sign language recognition, translation, and synthesis raise interesting technical challenges and have a clear societal impact. The overarching domain of sign language processing is related to computer vision, natural language processing, computer graphics, and human-computer interaction. It brings together computer scientists and linguists to tackle interdisciplinary problems. Hence, it perfectly fits the focus of the ESANN conference and will be of interest not only to experts in sign language processing, but also to experts in related fields (deep learning, computer vision, action recognition, sequence processing, etc.). This special session aims to highlight recent advances made in sign language recognition, translation, and synthesis, as well as new datasets.

All submissions will be reviewed equally and accepted submissions will be published in the conference proceedings.  As for the presentation format, in order to favour interaction with the ESANN attendees, poster presentation will be preferred for application-oriented articles, whereas theoretical contributions to deep learning and machine learning techniques may be eligible for oral presentation.
