Join us this October in Seoul, Korea, where we present ArtSight at ACM Multimedia 2018. ArtSight is a comprehensive query-by-color explorative interface built on top of the large-scale artistic dataset OmniArt. Color is of paramount importance in the artistic realm, and querying such large data collections by the colors that appear in their palettes allows for intuitive exploration. This demo allows users to browse the 3 million artwork items in the OmniArt collection by color and hierarchically filter each result set by multiple attributes existing in the collection itself.
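The core idea behind query-by-color can be sketched in a few lines: summarize each image by a palette statistic (here, simply its mean RGB color) and rank the collection by distance to the query color. This is a minimal illustration, not ArtSight's actual retrieval pipeline; the function names are hypothetical.

```python
import numpy as np

def mean_color(image):
    """Average RGB color of an image given as an H x W x 3 array."""
    return image.reshape(-1, 3).mean(axis=0)

def query_by_color(images, query_rgb):
    """Rank images by Euclidean distance of their mean color to query_rgb.

    Returns indices into `images`, closest match first.
    """
    query = np.asarray(query_rgb, dtype=float)
    dists = [np.linalg.norm(mean_color(img) - query) for img in images]
    return np.argsort(dists)
```

A production system would use a richer palette descriptor (e.g. a color histogram in a perceptual color space) and an index for fast lookup, but the ranking principle is the same.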
The VISTORY Project is on the cover of I/O Magazine in a featured article, The Science of Art. Behind the façade of the majestic Ateliergebouw in Amsterdam you can find a research institute that is unique in the world. At the Netherlands Institute for Conservation+Art+Science+ (NICAS), art historians, conservators, physicists, chemists, mathematicians and ICT researchers work together to better understand, access and preserve cultural heritage. Check out the full article on I/O Magazine’s website or order a printed copy.
Using the millions of images contained in the OmniArt dataset, we created humanity’s average artwork. After averaging every painting, sculpture, installation, costume, figurine and photograph in the dataset, we came up with a centrally symmetrical brown blur. This image can be interpreted in many different ways. The lighter central and peripheral regions may reflect the common light directions in painting and photography: light falls on the center as the focal point in portraits, or comes from the corners or the top in landscapes.
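The averaging itself is conceptually simple: resize every image to a common resolution and take the pixel-wise mean. A minimal sketch with NumPy, assuming the images have already been resized to the same shape:

```python
import numpy as np

def average_artwork(images):
    """Pixel-wise mean over a stack of same-sized RGB images.

    `images` is an iterable of H x W x 3 arrays. Casting to float64
    avoids overflow when summing uint8 pixel values.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stack.mean(axis=0)
```

Averaging millions of images this way would be done incrementally (a running sum) rather than stacking everything in memory, but the result is identical.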
The ICT.OPEN event is organised annually by the Netherlands Organisation for Scientific Research (NWO) under the auspices of ICT research Platform Netherlands (IPN). It showcases Dutch ICT research and presents active projects, among which is the VISTORY project with the OmniArt dataset. Our latest work on the OmniArt benchmark will be presented there. Join us at poster number 13 on March 19th and 20th at the Flint Theater in Amersfoort, The Netherlands.
Objects play a key role in understanding what is happening in an image. Using our object-level annotation tool, we have annotated 5000 data samples so far for objects that are uncommon or not easy to translate from real-world images. For common objects that do not change their appearance significantly, we can use the knowledge obtained from the real world. For example, bicycles have had the same primitive parts for a while now: two wheels, a frame, a seat and a mechanism that transfers power from the feet to the wheels.
Today OmniArt hit the 2.5M mark in the number of data samples it contains. Our system is listening for changes as you are reading this post and expanding the dataset even further. In its current form, OmniArt features more than 2 million different faces in paintings, sketches and drawings. Our model estimates that 70% of the faces are male and 30% are female. Portraits featuring women usually depict more than one person, while male subjects mostly appear alone.
Deep models have recently been at the heart of computer vision research. With a significant performance boost over conventional approaches, it is relatively easy to treat them as black boxes and enjoy the benefits they offer. However, if we are to improve and develop them further, understanding their reasoning process is key. Motivated by making that understanding effortless both for the scientists who develop these models and the professionals using them, in this paper we present an interactive, plug-and-play, web-based deep learning visualization system.
We created a DataCamp classroom for the students participating in the Fundamentals of Data Science course. Joining this classroom provides access to premium content and courses on DataCamp for the next 6 months. Follow this link to check out the classroom.
In the previous post I introduced a paper in which we build a shared representation of artistic data based on multiple attributes related to it. The data used in that paper is now a full-featured, museum-centric dataset containing more than 1M photographic reproductions of artworks with rich metadata. The dataset can be obtained from this site and is still growing by the minute, with more and more images being gathered in the background.
OmniArt: Multi-task Deep Learning for Artistic Data Analysis is a paper about a multi-task deep architecture that learns joint representations of data based on multiple tasks. The motivation behind this work is to determine whether looking at artistic data from multiple aspects benefits deep models as it does human professionals, and whether that information can be captured in a learned representation of the data. In this paper we also introduce a new structured artistic dataset with rich metadata, dubbed OmniArt, which will be publicly released upon publication.
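The multi-task idea can be illustrated with a toy model: a shared encoder produces one representation, and several task-specific heads predict different attributes from it. This is a deliberately simplified NumPy sketch of the pattern, not the paper's architecture; the layer sizes and task names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskNet:
    """Toy multi-task model: a shared linear encoder feeding several
    task-specific linear heads (e.g. artist, period). Forward pass only;
    training would backpropagate a weighted sum of the per-task losses
    through the shared encoder, which is what couples the tasks."""

    def __init__(self, in_dim, shared_dim, task_dims):
        self.W_shared = rng.normal(scale=0.1, size=(in_dim, shared_dim))
        self.heads = {name: rng.normal(scale=0.1, size=(shared_dim, out_dim))
                      for name, out_dim in task_dims.items()}

    def forward(self, x):
        # Shared representation, jointly shaped by all tasks during training.
        h = np.maximum(x @ self.W_shared, 0.0)  # ReLU
        return {name: h @ W for name, W in self.heads.items()}
```

Because every head reads from the same representation, gradients from all tasks would flow into `W_shared`, which is how a joint representation of the data emerges.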