Overview

Bio

Herke van Hoof is currently an assistant professor at the University of Amsterdam in the Netherlands, where he is part of the Amlab. He is interested in reinforcement learning with structured data and prior knowledge. Reinforcement learning is a very general framework, but this generality tends to result in extremely data-hungry algorithms. Exploiting structured prior knowledge, or using value function or policy parametrizations that respect known structural properties, is a promising avenue to learn more with less data. Examples of this line of work include reinforcement learning (RL) for combinatorial optimisation, RL with symbolic prior knowledge, and equivariant RL.

Before joining the University of Amsterdam, Herke van Hoof was a postdoc at McGill University in Montreal, Canada, where he worked with Professors Joelle Pineau, Dave Meger, and Gregory Dudek. He obtained his PhD at TU Darmstadt, Germany, under the supervision of Professor Jan Peters, graduating in November 2016. He received his bachelor's and master's degrees in Artificial Intelligence from the University of Groningen in the Netherlands.

Recent news

  • Webinar “Knowledge-assisted AI for real-world network infrastructure” (11/13/2024)

    Are you curious about how domain-specific knowledge can help shape AI applications within critical network infrastructures? We’re excited to invite you to our upcoming webinar, “Knowledge-Assisted AI Applications for Real-World Network Infrastructure”, where we’ll discuss potential applications of AI within the unique domains of the AI4REALNET project.

    More information, and a link to the mandatory registration form, can be found here.

  • Postdoc position (11/6/2024)

    Would you like to investigate how reinforcement learning agents can optimally support human decision making? Frans Oliehoek and I are looking for a postdoc at the University of Amsterdam.
    Apply directly, or find more information, here.
    The position is funded by the AI4REALNET Project and The Hybrid Intelligence Centre.

  • David & Guillermo present their work at ICAPS (6/3/2024)

    Tomorrow, June 4th, David & Guillermo will present their work at ICAPS. The paper proposes a new way to learn sub-policies that can optimally solve complex tasks expressed in linear temporal logic, even in stochastic environments. They'd love to tell you all about it, or you can read our paper here.

An archive of news items can be found on the News page.

Highlighted publications
Kuric, D.; Infante, G.; Gómez, V.; Jonsson, A.; van Hoof, H.: Planning with a Learned Policy Basis to Optimally Solve Complex Tasks. In: International Conference on Automated Planning and Scheduling, 2024.
Gagrani, M.; Rainone, C.; Yang, Y.; Teague, H.; Jeon, W.; van Hoof, H.; Zeng, W. W.; Zappi, P.; Lott, C.; Bondesan, R.: Neural Topological Ordering for Computation Graphs. In: Advances in Neural Information Processing Systems, 2022.
van der Pol, E.; van Hoof, H.; Oliehoek, F.; Welling, M.: Multi-Agent MDP Homomorphic Networks. In: International Conference on Learning Representations, 2022.
Kool, W.; van Hoof, H.; Welling, M.: Estimating Gradients for Discrete Random Variables by Sampling without Replacement. In: International Conference on Learning Representations, 2020.
Smith, M.; van Hoof, H.; Pineau, J.: An Inference-Based Policy Gradient Method for Learning Options. In: International Conference on Machine Learning, pp. 4703-4712, 2018.
van Hoof, H.; Neumann, G.; Peters, J.: Non-parametric Policy Search with Limited Information Loss. In: Journal of Machine Learning Research, vol. 18, no. 73, pp. 1-46, 2017.