Heterodox Methods for Interpretable and Efficient Artificial Intelligence
Monday, 13 June, HG-01A32 (full-day workshop)
In this workshop we will discuss (machine learning) architectures in which human involvement in the design of the model and its data ingestion process allows for outcomes that are both more energy efficient and more interpretable. Examples of such systems range from pure grammatical inference methods and probabilistic programming, where the model (family) is entirely constructed by human hands and only very specific model parameters are learned from data, to various types of interpretable neural network approaches, where the specific workings of the resulting system are much less defined a priori. The goal is to spread knowledge about lesser-known approaches to learning from data that rely on an increased level of human involvement, require less training data, and are tailored to achieve interpretable results more efficiently.
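To make the "model family built by humans, only parameters learned from data" end of this spectrum concrete, here is a minimal sketch in Python. The two-state automaton, its alphabet, and the toy data are hypothetical illustrations (not taken from any workshop contribution): the structure is fixed by hand, and the only learned quantities are per-state symbol probabilities obtained by counting.

```python
# Hypothetical toy example: a hand-designed two-state automaton over the
# alphabet {a, b}. The structure below is specified entirely by a human;
# only the per-state symbol probabilities are estimated from data.
from collections import defaultdict

# Hand-specified structure: state -> symbol -> next state
TRANSITIONS = {
    "q0": {"a": "q0", "b": "q1"},
    "q1": {"a": "q0", "b": "q1"},
}

def estimate_probabilities(sequences, start_state="q0"):
    """Count symbol emissions per state and normalise them to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        state = start_state
        for symbol in seq:
            counts[state][symbol] += 1
            state = TRANSITIONS[state][symbol]
    return {
        state: {sym: c / sum(sym_counts.values()) for sym, c in sym_counts.items()}
        for state, sym_counts in counts.items()
    }

if __name__ == "__main__":
    data = ["aab", "abab", "bba"]
    # The result is directly interpretable: a probability table per state.
    print(estimate_probabilities(data))
```

Because the model structure never changes, the learned parameters can be read off and inspected directly, which is the kind of interpretability-by-design the workshop contributions explore at larger scale.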
Schedule
09:00-09:15   Welcome and introductory information
09:15-10:00   Invited Talk by Prof. Ole-Christoffer Granmo: The Tsetlin Machine – From Arithmetic to Logic-based AI
10:00-11:30   Paper session 1:
              - Azqa Nadeem, Sicco Verwer & Shanchieh Jay Yang: Suffix-based Finite Automata for Learning Explainable Attacker Strategies
              - Petter Ericson & Anna Jonsson: Grammatical Inference: Strengths and Weaknesses
              - Enrique Valero-Leal, Pedro Larrañaga & Concha Bielza: Extending MAP-independence for Bayesian network explainability
              - Marco Virgolin, Eric Medvet, Tanja Alderliesten & Peter A.N. Bosman: Less is More: A Call to Focus on Simpler Models in Genetic Programming for Interpretable Machine Learning
11:30-12:30   Panel 1: Transparent models and explaining uncertainty
12:30-14:00   Lunch
14:00-14:45   Invited Talk by Dr. Anil Yaman: On the Emergence of Collective Intelligence
14:45-15:45   Paper session 2:
              - Leila Methnani, Andreas Antoniades & Andreas Theodorou: The AWKWARD Real-Time Adjustment of Reactive Planning
              - Krist Shingjergji, Deniz Iren, Felix Böttger, Corrie Urlings & Roland Klemke: Interpretable Explainability for Face Expression Recognition
              - Ronald Siebes, Victor de Boer, Roberto Reda & Roderick van der Weerdt: Learning and Reasoning over Smart Home Knowledge Graphs
15:45-16:45   Unconference/excursion (if the weather permits)
16:45-17:45   Panel 2: Transparency and efficiency in practice