Following the success of the previous editions of the Workshop on Computer VISion for ART Analysis, held in 2012, '14, '16, and '18, we present the VISART V workshop, in conjunction with the 2020 European Conference on Computer Vision (ECCV 2020). VISART will continue its role as a forum for the presentation, discussion, and publication of Computer Vision (CV) techniques for the analysis of art. As with the prior edition, VISART V offers two tracks:
1. Computer Vision for Art - technical work (standard ECCV submission, 14 pages excluding references)
2. Uses and Reflection of Computer Vision for Art (extended abstract, 4 pages excluding references)
The recent explosion in the digitisation of artworks highlights the concrete importance of applications at the overlap between CV and art, such as the automatic indexing of databases of paintings and drawings, or automatic tools for the analysis of cultural heritage. Such an encounter, however, also opens the door both to a wider computational understanding of the image beyond photo-geometry, and to a deeper critical engagement with how images are mediated, understood, or produced by CV techniques in the 'Age of Image-Machines' (T. J. Clark). Submissions to our first track should primarily consist of technical papers; our second track therefore encourages critical essays or extended abstracts from art historians, artists, cultural historians, media theorists, and computer scientists.
The purpose of this workshop is to bring together leading researchers in the fields of computer vision and the digital humanities with art and cultural historians and artists, to promote interdisciplinary collaborations, and to expose the hybrid community to cutting-edge techniques and open problems on both sides of this fascinating area of study.
This half-day workshop, held in conjunction with ECCV 2020, calls for high-quality, previously unpublished work related to Computer Vision and Cultural History. Submissions for both tracks should conform to the ECCV 2020 proceedings style and will be double-blind peer reviewed by at least three reviewers. Extended abstracts, however, will not appear in the conference proceedings. Papers must be submitted online through the CMT submission system at:
June 5, 2020, extended to June 12, 2020 (23:59 UTC-0)
July 13, 2020
September 5, 2020
Aaron Hertzmann is a Principal Scientist at Adobe, Inc., and an Affiliate Professor at the University of Washington. He received a BA in Computer Science and Art & Art History from Rice University in 1996, and a PhD in Computer Science from New York University in 2001. He was a professor at the University of Toronto for 10 years, and has also worked at Pixar Animation Studios and Microsoft Research. He has published over 100 papers in computer graphics, computer vision, machine learning, robotics, human-computer interaction, and art. He is an ACM Fellow and an IEEE Fellow.
Prof. Dr. Andreas Maier was born on 26 November 1980 in Erlangen. He studied Computer Science, graduated in 2005, and received his PhD in 2009. From 2005 to 2009 he worked at the Pattern Recognition Lab in the Computer Science Department of the University of Erlangen-Nuremberg. His major research subject was medical signal processing in speech data. In this period, he developed the first online speech intelligibility assessment tool, PEAKS, which has been used to analyze over 4,000 patients and control subjects to date. From 2009 to 2010, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012, he worked at Siemens Healthcare as an innovation project manager responsible for reconstruction topics in the Angiography and X-ray business unit. In 2012, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016, he has been a member of the steering committee of the European Time Machine Consortium. In 2018, he was awarded an ERC Synergy Grant, “4D nanoscope”. His current research interests focus on medical imaging, image and audio processing, digital humanities, interpretable machine learning, and the use of known operators.
Welcome, Alessio Del Bue
Keynote 1 - "Building blocks for a time machine", Andreas Maier
(USES) "EGO-CH: Dataset and Fundamental Tasks for Visitors Behavioral Understanding using Egocentric Vision", Francesco Ragusa
"Detecting Faces, Visual Medium Types, and Gender in Historical Advertisements, 1950-1995", Melvin Wevers
"Object Retrieval and Localization in Large Art Collections using Deep Multi-Style Feature Fusion and Iterative Voting", Nikolai Ufer
"Understanding Compositional Structures in Art Historical Images using Pose and Gaze Priors", Tilman Marquart
Session close
Keynote 2 - "Human visual perception of Art as Computation", Aaron Hertzmann
(USES) "On Style Transfer", Naja Grundtmann
"A Dataset and Baselines for Visual Question Answering on Art", Yuta Nakashima
"Demographic Influences on Contemporary Art with Unsupervised Style Embeddings", Nikolai J Huckle
"Geolocating Time: Digitisation and\\Reverse Engineering of a Roman Sundial", Filippo Bergamasco
Closing remarks, Stuart James
Workshop Close