ESANN 2026

Special sessions

Special sessions are organized by renowned scientists in their respective fields. Papers submitted to these sessions are reviewed according to the same rules as any other submission. Authors who submit papers to one of these sessions are invited to mention it on the author submission form; submissions to the special sessions must follow the same format, instructions and deadlines as any other submission, and must be sent according to the same procedure.

The following special sessions will be organized at ESANN 2026:

  • Neuro Symbolic AI and Complex Data
    Organized by Luca Oneto (University of Genoa, Italy), Nicolò Navarin (University of Padua, Italy), Luca Pasa (University of Padua, Italy), Davide Rigoni (University of Padua, Italy), Davide Anguita (DIBRIS - University of Genoa, Italy)
  • Reliable counterfactuals for machine learning models
    Organized by Marika Kaden (University of Applied Sciences Mittweida, Saxon Institute for Computational Intelligence and Machine Learning, Germany), Benjamin Paassen (Bielefeld University, Germany), Barbara Hammer (CITEC - Bielefeld University, Germany), Thomas Villmann (Mittweida University of Applied Sciences, Saxon Institute for Computational Intelligence and Machine Learning, Germany)
  • Beyond Performance: Comprehensive Evaluation Strategies for Impactful Machine Learning
    Organized by Valerie Vaquet (CITEC, Bielefeld University, Germany), Ulrike Kuhl (Bielefeld University, Germany), Saša Brdnik (University of Maribor, Slovenia), Benjamin Paassen (Bielefeld University, Germany)
  • Learning and Reasoning on Knowledge and Heterogeneous Graphs
    Organized by Matteo Zignani (University of Milan, Italy), Pasquale Minervini (University of Edinburgh; Miniml.AI, United Kingdom), Roberto Interdonato (CIRAD, France), Manuel Dileo (University of Milan, Italy)
  • Reliability, Safety and Robustness of AI applications
    Organized by Caroline König (Universitat Politècnica de Catalunya, Spain), Cecilio Angulo (Universitat Politècnica de Catalunya, Spain), Pedro Jesús Copado (Universitat Politècnica de Catalunya, Spain), G. Kumar Venayagamoorthy (Clemson University, USA)
  • Efficient and Resilient Machine Learning for Industrial Applications
    Organized by Philipp Wissmann (Siemens AG, Germany), Marc Weber (Siemens AG, Germany), Simon Leszek (TU Berlin, Germany), Philip Naumann (TU Berlin, Germany), Daniel Hein (Siemens AG, Germany), Steffen Udluft (Siemens Technology, Germany), Thomas Runkler (Siemens AG / TU Munich, Germany)

Neuro Symbolic AI and Complex Data
Organized by Luca Oneto (University of Genoa, Italy), Nicolò Navarin (University of Padua, Italy), Luca Pasa (University of Padua, Italy), Davide Rigoni (University of Padua, Italy), Davide Anguita (DIBRIS - University of Genoa, Italy)

In the contemporary era of Artificial Intelligence (AI) based decision-making, the application of AI to complex data (e.g., nonlinear systems, images, text, sequences, trees, and graphs) has become increasingly pivotal, spanning domains such as drug discovery, industrial automation, and decision support systems. Yet purely data-driven methods often fall short in domains where structured reasoning, interpretability, and the integration of human knowledge are essential.

Neuro-symbolic AI emerges as a promising paradigm that combines the strengths of symbolic reasoning with sub-symbolic learning, bridging the gap between data-driven models and domain-specific knowledge, requirements, and constraints. This fusion allows for more generalizable, explainable, and trustworthy systems, capable of incorporating logical rules, expert knowledge, and domain constraints into complex data-driven tasks.

In this context, Neuro-Symbolic AI holds the potential to enhance model sustainability, robustness, transparency, and alignment with human-centric goals. This additional knowledge can take many forms, for example: 

  • Constraint-Aware AI, which embeds hard or soft logical constraints or verification rules in learning algorithms (a minimal sketch of this idea follows this list);
  • AI for science, where (possibly interpretable) models must comply with physical laws or symbolic expressions;
  • Socially responsible AI, where ethical frameworks and cultural principles shape decisions;
  • Applications (e.g., bioinformatics, software engineering, natural sciences, or legal informatics) where knowledge graphs and ontologies guide data-driven inference.
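
To make the constraint-aware setting concrete, the following is a minimal sketch (in PyTorch) of how a soft logical constraint can be added as a differentiable penalty on top of a standard classification loss. The rule used here (two hypothetical classes assumed to be mutually exclusive), the penalty weight, and all names are illustrative assumptions, not the method of any particular paper in this session.

    import torch
    import torch.nn.functional as F

    def constrained_loss(logits, targets, lam=0.5):
        # Cross-entropy (data-driven term) plus a soft penalty for violating
        # an assumed domain rule: classes 0 and 1 are mutually exclusive,
        # so their predicted probabilities should not both be high.
        ce = F.cross_entropy(logits, targets)
        probs = torch.softmax(logits, dim=-1)
        violation = (probs[:, 0] * probs[:, 1]).mean()  # soft "not (A and B)"
        return ce + lam * violation

    # Toy usage: 3 samples, 4 classes; gradients reflect data and constraint.
    logits = torch.randn(3, 4, requires_grad=True)
    targets = torch.tensor([0, 2, 3])
    constrained_loss(logits, targets).backward()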

This special session aims to gather valuable contributions and early findings in the field of Neuro-Symbolic AI for Complex Data. Our main objective is to showcase the potential and limitations of new ideas, improvements, and cross-disciplinary integrations of symbolic reasoning and machine learning for solving real-world problems. We welcome contributions across disciplines and encourage submissions that integrate symbolic reasoning, statistical learning, complex data, and domain-specific knowledge to advance the frontiers of AI research.

Reliable counterfactuals for machine learning models
Organized by Marika Kaden (University of Applied Sciences Mittweida, Saxon Institute for Computational Intelligence and Machine Learning, Germany), Benjamin Paassen (Bielefeld University, Germany), Barbara Hammer (CITEC - Bielefeld University, Germany), Thomas Villmann (Mittweida University of Applied Sciences, Saxon Institute for Computational Intelligence and Machine Learning, Germany)

Counterfactuals for model explanation and evaluation are gaining importance in machine learning. Particularly in crucial and challenging applications such as medical diagnosis support, credit scoring, or technical control systems, the evaluation of counterfactuals is a promising approach to explore the applicability and limits of AI systems and to empower users with actionable advice on how to affect automatic decisions. Reliable and faithful generation, as well as prudent interpretation, of counterfactuals thus contribute to more reliable AI systems and improve their trustworthiness.

The determination and generation of counterfactual samples can be motivated and approached from different perspectives: (I) the cognitive perspective considers trustworthiness and reliance based on empirical evidence about human reasoning, or model explanations inspired by findings in the social sciences; (II) the technical perspective is mainly driven by issues such as the plausibility and actionability of counterfactuals, as well as their efficient computation and evaluation. Current developments in counterfactual research provide substantial progress but are far from sufficient for the field. For example, the interplay between counterfactual approaches and causal reasoning is crucial for making reliable predictions and decisions but has not yet been explored satisfactorily.
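
As an illustration of the technical perspective, here is a minimal sketch of gradient-based counterfactual search in the spirit of Wachter et al.'s well-known formulation: minimize a trade-off between validity (the prediction reaching a target value) and proximity to the factual input. The toy logistic model, its weights, and all hyperparameter values are assumptions for illustration; a practical system would add plausibility and actionability constraints on top.

    import torch

    def counterfactual(model, x, target=1.0, lam=10.0, steps=500, lr=0.05):
        # Find x_cf close to x whose predicted score approaches `target`.
        x_cf = x.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([x_cf], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            pred = model(x_cf)  # differentiable model output in [0, 1]
            loss = lam * (pred - target) ** 2 + ((x_cf - x) ** 2).sum()
            loss.backward()     # validity term + proximity term
            opt.step()
        return x_cf.detach()

    # Toy usage: a fixed logistic "model" standing in for a trained classifier.
    w, b = torch.tensor([1.5, -2.0]), torch.tensor(0.3)
    model = lambda z: torch.sigmoid(z @ w + b)
    x = torch.tensor([0.2, 0.8])     # factual input, predicted negative
    x_cf = counterfactual(model, x)  # nearby input predicted positive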

Thus, this session invites researchers to present new ideas, findings and results in this interdisciplinary field. Papers may contribute to aspects of this area such as:

  • Cognitive aspects in counterfactual generation
  • Counterfactuals and interpretability for reasoning
  • Domain knowledge integration
  • Causal machine learning and counterfactuals
  • Types of counterfactuals (technical and cognitive perspectives)
  • Evaluation and plausibility of counterfactuals, counterfactual metrics
  • … 

Work from additional perspectives on counterfactuals is highly welcome.

Beyond Performance: Comprehensive Evaluation Strategies for Impactful Machine Learning
Organized by Valerie Vaquet (CITEC, Bielefeld University, Germany), Ulrike Kuhl (Bielefeld University, Germany), Saša Brdnik (University of Maribor, Slovenia), Benjamin Paassen (Bielefeld University, Germany)

Machine Learning plays a considerable role across many areas, ranging from industry and scientific research to sectors such as education and medicine. While applying performant models can benefit society, ensuring safe, reliable, and responsible usage requires evaluation beyond simple predictive performance.

As required by the European AI Act, effective evaluation must address not only safety and robustness but also incorporate aspects such as bias, fairness, transparency, and explainability to ensure responsible deployment in line with emerging regulatory standards. However, evaluating methods according to these criteria poses unique challenges. Some criteria, especially those tied to usability, interpretability, and real-world impact, simply cannot be meaningfully evaluated without human assessment: for instance, whether an explanation is interpretable, actionable, and contextually appropriate can only be validated through human experience and perception. And even if all criteria can be evaluated, they may be in conflict; for instance, different fairness notions are mutually contradictory, and implementing fairness can decrease predictive performance. Finally, evaluation protocols become more complex in dynamic and distributed environments, where the data distribution can change over time and across space. Assessing fairness, transparency, or robustness in such settings introduces new methodological challenges, calling for adaptive, context-aware approaches. Overall, comprehensive evaluation calls for interdisciplinary collaboration, spanning technical, legal, ethical, social, and domain-specific expertise.
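
To illustrate why fairness notions can conflict, consider a toy example (all numbers invented) in which a perfectly accurate classifier for two groups with different base rates satisfies equal opportunity, one component of equalized odds, while necessarily violating demographic parity:

    import numpy as np

    # Two hypothetical groups with different base rates (4/6 vs 2/6) and a
    # perfectly accurate classifier; the numbers are purely illustrative.
    group  = np.array([0]*6 + [1]*6)
    y_true = np.array([1,1,1,1,0,0,  1,1,0,0,0,0])
    y_pred = y_true.copy()  # predictions match the ground truth exactly

    def demographic_parity_gap(y_pred, group):
        # Difference in positive-prediction rates between the groups.
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def tpr_gap(y_true, y_pred, group):
        # Equal-opportunity gap: difference in true-positive rates.
        tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
        return abs(tpr(0) - tpr(1))

    print(demographic_parity_gap(y_pred, group))  # ~0.33: parity violated
    print(tpr_gap(y_true, y_pred, group))         # 0.0: equal opportunity holds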

This special session aims to create a platform for contributions on novel evaluation protocols, empirical user studies, and interdisciplinary evaluation strategies that critically examine and improve how we evaluate machine learning systems. Topics include, but are not limited to: 

  • Evaluation of fairness and bias mitigation methods
  • Evaluation frameworks for trust, safety, robustness, and uncertainty
  • Evaluation in dynamic, non-stationary, and distributed environments
  • Standardized user study frameworks for interpretability, usability, and practical impact
  • Methodologies for interdisciplinary evaluation, e.g., integrating legal or ethical perspectives
  • Human-centered evaluation of explainable AI
  • Benchmarks and reproducibility strategies for evaluation beyond performance
  • Evaluation of compliance with the EU AI Act and similar regulatory frameworks
  • Participatory evaluation and co-design with end-users and domain experts
  • Evaluation in real-world applications
  • Evaluation of LLMs
  • Assessment of user cognition and understanding of AI
  • Advances in user-focused evaluation metrics

Learning and Reasoning on Knowledge and Heterogeneous Graphs
Organized by Matteo Zignani (University of Milan, Italy), Pasquale Minervini (University of Edinburgh; Miniml.AI, United Kingdom), Roberto Interdonato (CIRAD, France), Manuel Dileo (University of Milan, Italy)

The study of Knowledge Graphs (KGs) and Heterogeneous Graphs is a major field that is changing how we represent, learn, and reason from complex, interconnected data. This area, which often brings together machine learning and symbolic AI, aims to develop powerful methods for navigating, understanding, and discovering new information from rich, multi-relational data structures. KGs, with their structured facts and relationships, and heterogeneous graphs, with their diverse node and edge types, provide a unified way to model real-world systems, from biological networks to social media platforms. The main challenge is creating computational models that can not only learn from this complexity but also reason and infer new knowledge that is easy for a human audience to interpret.

To this aim, we are witnessing in this field a connection between learning-based approaches and symbolic reasoning. Machine learning, especially with the rise of Graph Neural Networks (GNNs), offers new tools for tackling problems on these complex graph structures, being excellent at learning hidden representations (embeddings) of nodes and edges, which are fundamental for tasks such as predicting links, classifying nodes, and finding communities. However, learning-based methods can sometimes act as "black boxes," making it difficult to understand how they arrive at their conclusions. This is where symbolic reasoning becomes essential. Reasoning techniques, such as logical inference and rule mining, can use the structured nature of KGs to find new facts and check if the graph is consistent. When combined with learning models, these methods can improve explainability, make the models more robust, and help discover new knowledge beyond pattern recognition alone. This combined approach lets us not only predict but also understand why a prediction was made.

Many important areas are benefiting from the use of learning and reasoning on knowledge and heterogeneous graphs. A major challenge is representation learning for heterogeneous graphs, where the goal is to develop GNN architectures and embedding methods that can effectively handle different node and edge types. These models must learn to distinguish between various relationships and entities while integrating information from the entire graph. Another key area is Knowledge Graph Completion and link prediction, which involves using learning models to predict missing links or entities in a KG. By combining GNNs with logical reasoning rules, we can make predictions more accurate and ensure they are logically consistent with existing knowledge. It is also crucial to create methods that can explain the outputs of complex graph-based models, a field known as Explainable AI (XAI) on graphs, in order to build trust and allow these models to be used in critical fields. Symbolic reasoning can be used here to create clear, human-readable explanations by pointing to the specific paths or rules that influenced a model's decision.

A key research direction is exploring new architectures that combine neural components with symbolic reasoning engines. These models could learn hidden patterns from data while also performing logical inference, giving us the best of both worlds. Finally, these advanced techniques have significant applications in areas such as drug discovery, fraud detection, scientific discovery, and recommendation systems, to name a few examples.
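
As a small, self-contained illustration of embedding-based link prediction, here is a sketch of TransE-style triple scoring, where a triple (h, r, t) is considered plausible if the tail embedding lies near head + relation. The tiny vocabulary and the entity and relation names are invented, and the embeddings are random placeholders; in practice they would be trained, e.g., with a margin ranking loss over corrupted triples.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy KG vocabulary; names are invented purely for illustration.
    entities  = {"aspirin": 0, "headache": 1, "fever": 2}
    relations = {"treats": 0}
    dim = 8

    E = rng.normal(size=(len(entities), dim))   # entity embeddings
    R = rng.normal(size=(len(relations), dim))  # relation embeddings

    def transe_score(h, r, t):
        # TransE plausibility: higher (less negative) = more likely triple.
        return -np.linalg.norm(E[h] + R[r] - E[t])

    # Rank candidate tails for the query (aspirin, treats, ?):
    scores = {name: transe_score(entities["aspirin"], relations["treats"], i)
              for name, i in entities.items() if name != "aspirin"}
    print(max(scores, key=scores.get))  # best candidate (random embeddings here)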

Relevant topics include, but are not limited to:

  • Graph Neural Networks (GNNs) for Knowledge and Heterogeneous Graphs
  • Knowledge and Heterogeneous Graph Embedding Techniques
  • Neuro-symbolic AI for Knowledge Graph Reasoning
  • Rule Mining and Logical Inference on KGs
  • Explainability Methods for Knowledge and Heterogeneous Graphs
  • Knowledge-aware Pre-trained Language Models
  • Temporal and Dynamic Knowledge Graph Analysis
  • Learning on Temporal and Dynamic Knowledge Graphs
  • Graph-based Causal Discovery
  • Applications in Biomedical, Social, Financial, and Industrial Systems

Reliability, Safety and Robustness of AI applications
Organized by Caroline König (Universitat Politècnica de Catalunya, Spain), Cecilio Angulo (Universitat Politècnica de Catalunya, Spain), Pedro Jesús Copado (Universitat Politècnica de Catalunya, Spain), G. Kumar Venayagamoorthy (Clemson University, USA)

Reliability, safety, and robustness in AI models are critical for real-world applications. Machine learning models must be designed to operate reliably under practical conditions in real-world systems, including when those conditions differ from the training environment. This session will focus on recent advancements in applied AI for safety-critical applications, as well as methodological contributions that enhance reliability, support safe deployment, and enable robust testing of AI systems.

We invite submissions on (but not limited to) the following themes:

  • Safety-Critical AI Applications: Case studies and risk assessment frameworks.
  • Robustness Under Distribution Shifts: Techniques addressing open-set recognition, out-of-distribution detection, and domain adaptation.
  • Adversarial Robustness and Stress Testing: Evaluating model behavior under unknown or challenging inputs.
  • Reliability Testing and Evaluation Protocols: Systematic validation approaches for model trustworthiness.
  • Human-in-the-Loop Safety: Integrating expert oversight in high-risk AI deployments.
  • Explainability for Safety-Critical Decisions: Model transparency for accountability.
  • Formal Verification of AI Models: Methods to prove stability, fairness, and scalable verification.
  • Uncertainty Quantification and Calibration: Confidence-aware predictions to support safe decisions (a minimal calibration-metric sketch follows this list).
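
As one concrete instance of the calibration theme, here is a minimal sketch of the binned expected calibration error (ECE), which averages the gap between confidence and accuracy across confidence bins; the toy predictions are invented purely for illustration.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # Binned ECE: |accuracy - confidence| per bin, weighted by bin size.
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(correct[mask].mean() - confidences[mask].mean())
                ece += mask.mean() * gap  # weight = fraction of samples in bin
        return ece

    # Toy usage: high confidence but mediocre accuracy -> miscalibrated.
    conf = np.array([0.9, 0.95, 0.85, 0.9, 0.8, 0.92])
    hit  = np.array([1, 0, 1, 0, 1, 1], dtype=float)
    print(round(expected_calibration_error(conf, hit), 3))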

Efficient and Resilient Machine Learning for Industrial Applications
Organized by Philipp Wissmann (Siemens AG, Germany), Marc Weber (Siemens AG, Germany), Simon Leszek (TU Berlin, Germany), Philip Naumann (TU Berlin, Germany), Daniel Hein (Siemens AG, Germany), Steffen Udluft (Siemens Technology, Germany), Thomas Runkler (Siemens AG / TU Munich, Germany)

The integration of AI models into industrial applications holds significant opportunities for process optimization, automation and quality control, all contributing to increased efficiency, reduced costs, and improved product reliability. However, realizing these benefits involves overcoming challenges related to computational efficiency, system transparency, and reliability, which can hinder successful deployment and adoption. This special session delves into the complexities faced by real-world industrial applications utilizing intelligent technologies, emphasizing the need for efficient and resilient methods amid the rapid growth of these systems.

As an example, reinforcement learning offers the potential to enhance control and decision-making but requires overcoming obstacles such as learning from offline data and limited exploration capabilities. At the same time, recent advances in generative AI and the resulting foundation models offer promising avenues for promoting scalability and broad application across diverse industrial contexts. In this context, insightful and fair benchmarking against conventional methods remains a key challenge to effectively guide stakeholder decisions. Moreover, building trust and clarity in AI systems is crucial for successful adoption. This calls for explainable, interpretable, and trustworthy AI techniques to align operations with human expectations and regulatory requirements. Furthermore, the session will explore strategies to develop AI systems that are data-efficient and flexible, minimizing data requirements while maintaining adaptability to changing environments. Together, these aspects aim to balance innovation with practical application in order to maximize the effectiveness of AI in industrial environments.

Topics of interest include, but are not limited to:

  • Industrial challenges for AI
  • ML applications in industrial manufacturing
  • Offline reinforcement learning
  • Safe reinforcement learning
  • Foundation models/GenAI for industry applications
  • Benchmarking conventional methods and foundation models
  • Explainable and trustworthy AI
  • Data efficient simulation and forecasting
  • ML-assisted process modeling, simulation and monitoring
  • Multimodal modeling and anomaly detection
  • Modeling uncertainty in industrial settings
  • Deep-learning-based time-series and signal processing