Color is everywhere, and whether we perceive it consciously or not, each color we encounter provides an emotional experience. Our work on exploring and navigating emotion in art through color, named ACE: Art, Color & Emotion, has a technical demo slot at ACM Multimedia 2019 in Nice, France. We are happy to demonstrate its functionality and invite you to explore OmniArt together through ACE.
I am excited to announce that our work on routing data flows per task in Multi-Task Learning (MTL) models, in order to improve scalability in the number of tasks, has been accepted for an oral presentation at ICCV 2019 in Seoul, South Korea. This paper introduces Task Routing, our method for routing data within convolutional neural networks, along with the PyTorch layer enabling this functionality. As with any combinatorial problem, in MTL there exists an optimal but unknown combination of tasks and shared resources.
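To make the routing idea concrete, here is a minimal sketch of a task routing layer in PyTorch. It is an illustration under stated assumptions, not the paper's released implementation: the class name `TaskRoutingLayer`, the `sigma` sharing-ratio parameter, and the `active_task` attribute are names chosen for this example. The core idea shown is that each task receives a fixed random binary mask over the channels of a shared feature map, so tasks traverse overlapping but distinct sub-networks.

```python
import torch
import torch.nn as nn

class TaskRoutingLayer(nn.Module):
    """Illustrative sketch (not the official code): each task gets a
    fixed random binary mask over the channel dimension of a shared
    convolutional feature map."""

    def __init__(self, num_channels, num_tasks, sigma=0.5):
        super().__init__()
        # sigma is an assumed "sharing ratio": the fraction of channels
        # each task is allowed to use. Masks are sampled once and frozen.
        masks = (torch.rand(num_tasks, num_channels) < sigma).float()
        self.register_buffer("masks", masks)  # fixed, never trained
        self.active_task = 0  # set externally before each forward pass

    def forward(self, x):
        # x: (batch, channels, H, W); zero out the channels the
        # currently active task is not routed through.
        mask = self.masks[self.active_task].view(1, -1, 1, 1)
        return x * mask
```

In use, a training loop would set `layer.active_task = t` before forwarding a batch belonging to task `t`, so the same shared backbone computes a different task-specific sub-network per task.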
This week I will talk about my research at the 16th International Conference on Informatics and Information Technologies (CIIT). We will cover computer vision and multi-task learning fundamentals as well as state-of-the-art approaches in these fields. You can find more details about the talk and the complete conference programme on the CIIT website.
Our work on exploiting secondary latent features for task grouping has been accepted for oral presentation at ICMR 2019 in Ottawa, Canada. This paper introduces Selective Sharing, a method that uses the factorized gradients per task as a signal for grouping tasks that benefit each other's learning process. The grouping is conditioned on a predefined metric, so different strategies can be explored. We are preparing the repo for the code release, and the site will be updated with a link to the official proceedings.
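As a rough illustration of gradient-based task grouping, the sketch below computes a pairwise affinity between tasks from their gradients on shared parameters and greedily groups tasks whose gradients point in similar directions. This is a simplified stand-in, not the Selective Sharing implementation: the function names `task_affinity` and `group_tasks`, the cosine-similarity metric, and the `threshold` parameter are all assumptions for this example.

```python
import numpy as np

def task_affinity(grads):
    """grads: (num_tasks, num_params) array of per-task gradients on
    shared parameters. Returns the pairwise cosine-similarity matrix,
    one possible choice of grouping metric."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    unit = grads / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def group_tasks(affinity, threshold=0.5):
    """Greedy grouping: a task joins an existing group only if its
    affinity to every member exceeds the threshold; otherwise it
    starts a new group."""
    groups = []
    for t in range(affinity.shape[0]):
        for g in groups:
            if all(affinity[t, m] > threshold for m in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups
```

For example, two tasks whose gradients are nearly parallel would land in one group, while a task whose gradient opposes them would be kept separate and given its own parameters.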
Over the past two years my research interests have revolved around Multi-Task Learning (MTL) as a learning paradigm. It is a vast field of diverse research spanning all domains of computer science, from NLP and signal processing to computer vision and multimedia. In what follows I will motivate, describe and discuss an approach to MTL we developed called Task Routing.

Multi-Task and Many-Task Learning

By definition (Caruana, 1997), multi-task learning is a learning paradigm that seeks to improve the generalization performance of machine learning models by optimizing for more than one task simultaneously.
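The standard hard-parameter-sharing setup this definition describes can be sketched in a few lines of PyTorch: one shared trunk optimized for all tasks, one small head per task, and a joint loss summed over tasks. The class name `HardSharingMTL` and the layer sizes are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Minimal hard parameter sharing: a shared trunk feeds one head
    per task; training sums the per-task losses so all tasks shape
    the shared representation."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # One output head per task, e.g. task_out_dims = [3, 1] for a
        # 3-class classification task and a scalar regression task.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in task_out_dims)

    def forward(self, x):
        z = self.trunk(x)  # shared representation
        return [head(z) for head in self.heads]  # one output per task
```

A training step would compute each task's loss from its head's output and backpropagate their sum, which is exactly the "optimizing for more than one task simultaneously" in the definition above.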