Vision for Art

VISART VI

Workshop at the European Conference of Computer Vision (ECCV)

Tel Aviv, 23rd October 2022

(Hybrid)

Venue

The workshop takes place in:

Royal Ballroom I

InterContinental David Tel Aviv, an IHG Hotel
Kaufmann St 12, Tel Aviv-Yafo, 61501, Israel

Call for Papers


Following the success of the previous editions of the Workshop on Computer VISion for ART Analysis held in 2012, '14, '16, '18 and '20, we present the VISART VI workshop, in conjunction with the 2022 European Conference on Computer Vision (ECCV 2022). VISART will continue its role as a forum for the presentation, discussion and publication of Computer Vision (CV) techniques for the analysis of art. As with the prior edition, VISART VI offers two tracks:

1. Computer Vision for Art - technical work (standard ECCV submission, 14 pages excluding references)
2. Uses and Reflection of Computer Vision for Art (extended abstract, 4 pages excluding references)

The recent explosion in the digitisation of artworks highlights the concrete importance of applications at the overlap between CV and art, such as the automatic indexing of databases of paintings and drawings, or automatic tools for the analysis of cultural heritage. Such an encounter, however, also opens the door both to a wider computational understanding of the image beyond photo-geometry, and to a deeper critical engagement with how images are mediated, understood or produced by CV techniques in the 'Age of Image-Machines' (T. J. Clark). Submissions to our first track should primarily consist of technical papers; our second track therefore encourages critical essays or extended abstracts from art historians, artists, cultural historians, media theorists and computer scientists.

The purpose of this workshop is to bring together leading researchers in the fields of computer vision and the digital humanities with art and cultural historians and artists, to promote interdisciplinary collaborations, and to expose the hybrid community to cutting-edge techniques and open problems on both sides of this fascinating area of study.

This workshop, held in conjunction with ECCV 2022, calls for high-quality, previously unpublished work related to Computer Vision and Cultural History. Submissions for both tracks should conform to the ECCV 2022 proceedings style and will be double-blind peer reviewed by at least three reviewers. Extended abstracts, however, will not appear in the conference proceedings. Papers must be submitted online through the CMT submission system at:

https://cmt3.research.microsoft.com/VISART2022/

TOPICS include but are not limited to the following:

  • Art History and Computer Vision
  • 3D reconstruction from visual art or historical sites
  • Multi-modal multimedia systems and human machine interaction
  • Visual Question Answering (VQA) or Captioning for Art
  • Computer Vision and cultural heritage
  • Big-data analysis of art
  • Security and legal issues in the digital presentation and distribution of cultural information
  • Image and visual representation in art
  • 2D and 3D human pose and gesture estimation in art
  • Multimedia databases and digital libraries for artistic research
  • Interactive 3D media and immersive AR/VR for cultural heritage
  • Approaches for generative art
  • Media content analysis and search
  • Surveillance and Behavior analysis in Galleries, Libraries, Archives and Museums

Important Dates


  • Full & Extended Abstract Paper Submission:

    10th July 2022 (extended from 27th June 2022) (23:59 UTC-0)

  • Notification of Acceptance:

    8th August 2022

  • Camera-Ready Paper Due:

    15th August 2022

  • Workshop:

    23rd October 2022


Keynotes

Prof. Béatrice Joyeux-Prunel

Bio

Béatrice Joyeux-Prunel is an art historian, Full Professor at the University of Geneva (UNIGE) in Switzerland, and chair of Digital Humanities. She works on the social and global history of art in the contemporary period, and on globalisation through images. Since 2009 she has directed Artl@s (https://artlas.huma-num.fr), an international research platform on artistic globalisation. In this context, she founded the IMAGO Centre at Paris's École Normale Supérieure, where she was Assistant Professor in contemporary art history (2007–2019); the centre is dedicated to teaching, research and creation on the circulation of images in Europe (www.imago.ens.fr, Erasmus+ European Jean Monnet Excellence Centre, 2019–2022). At UNIGE she coordinates the SNF Visual Contagions project (https://visualcontagions.unige.ch, 2021–2024). Joyeux-Prunel's latest books are: Les avant-gardes artistiques – une histoire transnationale 1848–1918 (Gallimard, Folio histoire in paperback, 2016); Les avant-gardes artistiques – une histoire transnationale 1918–1945 (Gallimard, Folio histoire in paperback, 2017); and Naissance de l'art contemporain (1945–1970) – Une histoire mondiale (CNRS Éditions, 2021). She curated the exhibition « Contagions visuelles » for the Espace de Création numérique du Jeu de Paume, Paris (10 May – 31 December 2022).

Abstract for "Visual Globalization Through the Lens of Machine Vision. From Promises to Reality"

While the digitisation of works of art has exploded in recent years, the situation is even more striking for illustrated periodicals. The latter constitute an unprecedented source for studying the visual cultures of the century and cultural globalisation. Until recently, visual globalisation was approached mainly through case studies, or through quantitative approaches that chiefly considered texts (textual descriptions of images and metadata). Computer vision methodologies have made it possible to study its truly visual aspects (objects, colours, lines, layouts, patterns, styles), and their diffusion. Yet implementing a visual study of globalisation over one century pushes the limits of computer vision. Which algorithms can we use efficiently in a reasonable time, and on which formats? What are the best strategies to track the visual blockbusters of the century, map their circulation and understand its logic? How can we visualise this data in a relevant and heuristic way? This is what the Visual Contagions project at the University of Geneva is all about. This paper will present the project's issues, methods and results, as well as their limitations, advocating for advances in Machine Learning that take better account of the specificities of digital heritage and of the actual implementation of algorithms.

Prof. John Collomosse

Bio

John Collomosse is a Principal Scientist at Adobe Research, where he leads the deep learning group. John's research focuses on representation learning for creative visual search (e.g. sketch-, style- and pose-based search) and for robust image fingerprinting and attribution. He is a part-time Full Professor at the Centre for Vision, Speech and Signal Processing, University of Surrey (UK), where he founded and co-directs DECaDE, a multi-disciplinary research centre exploring the intersection of AI and Distributed Ledger Technology. John is part of the Adobe-led Content Authenticity Initiative (CAI) and a contributor to the technical working group of the C2PA open standard for digital provenance. He serves on the ICT and Digital Economy advisory boards of the UK research council EPSRC.

Abstract for "Content Provenance: To Authenticity and Beyond!"

Technologies for determining content provenance ('where did this image come from?', 'what was done to it, and by whom?') are critical to establishing attribution and trust in media. Provenance can help society fight fake news and misinformation by enabling users to make better trust decisions about content they encounter online. In a future metaverse where interoperating platforms generate value through the creation and exchange of digital assets, provenance can help creatives gain recognition for their work and open the door to decentralised markets for creative content. Tracing the provenance of synthetic media, too, enables apportionment of credit to those contributing their work for training generative AI. In this talk I will outline Adobe's role in work toward an open standard to support content provenance, and research addressing open challenges around provenance via content fingerprinting and hashing, generative/synthetic media attribution, as well as distributed trust models to underwrite the provenance of assets, e.g. via distributed ledger technology (DLT) and blockchain.

Prof. Ohad Ben-Shahar

Bio

Ohad Ben-Shahar is a Professor of Computer Science at the Computer Science department, Ben-Gurion University (BGU), Israel. He received his B.Sc. and M.Sc. in Computer Science from the Technion (Israel Institute of Technology) in 1989 and 1996, respectively, and his M.Phil. and PhD from Yale University, CT, USA in 1999 and 2003, respectively. He is a former chair of the Computer Science department and the present head of the School of Brain Sciences and Cognition at BGU.

Prof. Ben-Shahar's research focuses on computational vision, with interests that span all aspects of theoretical, experimental, and applied vision sciences and their relationship to cognitive science as a whole. He is the founding director of the interdisciplinary Computational Vision Laboratory (iCVL), where research involves theoretical computational vision, human perception and visual psychophysics, visual computational neuroscience, animal vision, applied computer vision, and (often biologically inspired) robot vision. He is a principal investigator in numerous research activities, from basic animal vision research projects through applied computer vision, data science, and robotics consortia, many of them funded by agencies such as the ISF, NSF, DFG, the National Institute for Psychobiology, the Israel Innovation Authority, and European frameworks such as FP7 and Horizon 2020.

Schedule


23

October

Sunday

Morning: 09:30–12:00 (UTC+3)

Afternoon: 13:00–17:30 (UTC+3)

Award

The Best Paper Award sponsored by Adobe was awarded to:

Artem Reshetnikov, Maria-Cristina Marinescu, Joaquim Moré

for their paper:

DEArt: Dataset of European Art

Sponsors

MEMEX Project
The MEMEX project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 870743.