Deep models have recently been at the heart of computer vision research. With a significant performance boost over conventional approaches, it is tempting to treat them as black boxes and simply enjoy the benefits they offer. However, if we are to improve and develop them further, understanding their reasoning process is key. Motivated by making that understanding effortless both for the scientists who develop these models and for the professionals who use them, in this paper we present an interactive, plug-and-play, web-based deep learning visualization system.
We created a DataCamp classroom for the students participating in the Fundamentals of Data Science course. Joining this classroom provides access to premium content and courses on DataCamp for the next six months. Follow this link to check out the classroom.
In the previous post I introduced a paper in which we build a shared representation of artistic data based on multiple related attributes. The data used in that paper has since become a full-featured, museum-centric dataset containing more than 1M photographic reproductions of artworks with rich metadata. The dataset can be obtained from this site, and it is still growing, with more images being gathered in the background.
OmniArt: Multi-task Deep Learning for Artistic Data Analysis is a paper about a multi-task deep architecture that learns joint representations of data across multiple tasks. The motivation behind this work is to determine whether looking at artistic data from multiple aspects is as beneficial to deep models as it is to human professionals, and whether that information can be captured in a learned representation of the data. In this paper we also introduce a new structured artistic dataset with rich metadata, dubbed OmniArt, which will be publicly released upon publication.
For some reason I had to create a first post… :)