Satellite Workshops

Workshops will take place on Saturday, September 2, 2017, at the Kos International Convention Center.

For any questions specific to a workshop, such as submission deadlines, please contact the organizers of that workshop. For general questions, please contact the workshop chairs, Kostas Berberidis and Iasonas Kokkinos, at


W1. IWCIM: International Workshop on Computational Intelligence for Multimedia Understanding

Contact Person

Behçet Ugur Töreyin

Workshop Organizers


Workshop Abstract

The International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM) is the annual workshop organized by the Multimedia Understanding through Semantics, Computation and Learning (MUSCLE) working group of the European Research Consortium for Informatics and Mathematics (ERCIM). This year, IWCIM takes place as a satellite workshop of EUSIPCO-2017, to be held in Kos, Greece, on Saturday, September 2, 2017.

Multimedia understanding is an important part of many intelligent applications in our social life, be it in our households or in commercial, industrial, service, and scientific environments. Nowadays, raw data typically come from a host of different sensors and other sources, and differ in nature, format, reliability, and information content. Multimodal and cross-modal analysis are the only ways to use them at their best. The special theme of this year’s IWCIM is “Signal Processing for Surveillance and Security Applications”. With advances in sensor technology and increased computational power, multimodal security and surveillance systems exploiting several modalities are becoming more prevalent than conventional surveillance systems relying solely on single-channel visible-range video. In that respect, methods and techniques for multimodal sensor and signal analysis play an instrumental role in surveillance and security applications.

This workshop aims at bringing together researchers working on different aspects of multimedia understanding, machine learning, multimodal systems and signal processing for surveillance and security applications.

Topics of Interest

General track

  • Multisensor systems
  • Multimodal analysis
  • Crossmodal data analysis and clustering
  • Mixed-reality applications
  • Activity and object detection and recognition
  • Text and speech recognition
  • Multimedia labeling, semantic annotation, and metadata
  • Multimodal indexing and searching in very large databases
  • Big and Linked Data
  • Search and mining Big Data
  • Large-scale recommendation systems
  • Multimedia and Multi-structured data
  • Semantic web and Linked Data
  • Case studies

Special track “Signal processing for surveillance”

  • Multimodal Surveillance and Security
  • Activity/interaction understanding
  • Intention estimation, situation awareness and decision making
  • Real time signal and image processing
  • Saliency analysis, compression and summarization
  • Smart camera networks and pervasive computing
  • Airborne and remote sensing
  • Environmental monitoring
  • Forensics
  • Applications and case studies

W2. Multi-Learn 2017: Multimodal processing, modeling and learning approaches for human-computer/robot interaction

Contact Person

Vassilis Pitsikalis

Workshop Organizers


Workshop Abstract

With this workshop we plan to bring together researchers from different disciplines spanning signal processing, machine learning, computer vision, and robotics, with applications in HRI/HCI fields as related to multimodal and multi-sensor processing.

Over the last decades, an enormous number of socially interactive systems have been developed, making the field of Human-Computer and Human-Robot Interaction (HCI/HRI) a genuinely motivating challenge. This challenge has become even greater as such systems move out of the lab environment and into real use cases. The growing potential of multimodal interfaces in human-robot and human-machine communication setups has stimulated people’s imagination and motivated significant research efforts in the fields of computer vision, speech recognition, multimodal sensing, fusion, and human-computer interaction, which nowadays lie at the heart of such interfaces. In parallel, we are interested in applications of multimodal modeling, fusion, and recognition seen from an interdisciplinary perspective, including assistive, clinical, affective, and psychological aspects, e.g. dealing with cognitive and/or mobility impairments. From the robotics perspective, designing and controlling robotic devices constitutes an emerging research field in its own right. The integration with multimodal machine learning models poses many challenging scientific and technological problems that need to be addressed in order to build efficient and effective interactive robotic systems.

These may include:

  • human motion tracking, multimodal action and gesture recognition, as well as intention prediction, while fusing multimodal sensory data
  • analysing and modelling human behaviour in the context of physical and non-physical human-robot interaction
  • developing context- and affect-aware, human-centred systems that act both proactively and adaptively in order to optimally combine physical, sensorial and cognitive modalities
  • fostering intuitive and natural human-robot communication ultimately achieving robotic behaviours that emulate the way humans operate and behave while taking into account social interaction and ethical constraints.

The above become even more challenging when considering special groups of interest, such as children, the aging population, or other cases that would benefit from assistive, educative, or entertainment-capable technologies based on multimodal sensing and natural HCI/HRI. This workshop seeks to bring together different communities to discuss and share the knowledge and experience of approaches that could be applicable across interdisciplinary domains. Arranging this workshop around EUSIPCO-2017 will make it possible to bring together many researchers from different backgrounds to discuss and advance the current state of the art w.r.t.

  • signal and speech processing, machine learning, computer vision and robotics with applications in HRI/HCI fields
  • studies and models addressing clinical and psychological issues related to real-life constraints and use cases such as cognitive impairments, autism, and dementia
  • effective usage of large datasets, corpora, communication and models on language, semantics and data annotations

This workshop is supported by the EU-funded H2020 projects I-Support and Baby-Robot.

W3. Deep Learning and Geometry

Contact Person

Or Litany

Workshop Organizers

  • Or Litany (Tel Aviv University)
  • Emanuele Rodolà (USI Lugano)
  • Michael Bronstein (USI Lugano / Tel Aviv University / Intel Perceptual Computing)
  • Alex Bronstein (Tel Aviv University / Technion / Intel Perceptual Computing)
  • Ron Kimmel (Technion / Intel Perceptual Computing)


Workshop abstract

The past decade of computer vision research has witnessed the re-emergence of “deep learning”, and in particular convolutional neural network (CNN) techniques, which make it possible to learn powerful image feature representations from large collections of examples. Such methods have achieved breakthrough performance in a wide range of applications such as image classification, segmentation, detection, and annotation. Nevertheless, when attempting to apply standard deep learning methods to geometric data, which is by nature non-Euclidean (e.g. 3D shapes and graphs), one has to face fundamental differences between images and geometric objects. Shape analysis, graph analysis, and geometry pose new challenges that do not exist in image analysis, and deep learning methods have only recently started penetrating the 3D vision, pattern recognition, multimedia, signal processing, and graphics communities. Deep learning has been applied to 3D data in recent works using standard (Euclidean) architectures on volumetric or view-based shape representations. Intrinsic versions of deep learning have also been proposed very recently, generalizing the CNN paradigm to non-Euclidean manifolds and allowing it to deal with domain deformations. These “generalized” CNNs can be used to learn invariant shape features and correspondence, achieving state-of-the-art performance in several shape analysis tasks while accommodating different shape representations, e.g. meshes, point clouds, or graphs.

The main focus of the workshop is the generalization of deep learning techniques beyond Euclidean settings, in order to apply them to geometric data. We aim to offer a forum for discussion and interaction among researchers interested in learning techniques applied to geometric data, and to favor cross-fertilization between fields such as Machine Learning, Signal Processing, Computer Graphics, Computer Vision, Pattern Recognition, and Multimedia. We believe that the workshop will give a new view on deep learning and thus be interesting to machine learning experts on the one hand, and offer new solutions to hard problems in signal processing, pattern recognition, and computer graphics, thus appealing to experts in those fields on the other.

W4. Creative Design and Advanced Manufacturing: An emerging application area for Signals and Systems

Contact Person

Asli Genctav

Workshop Organizers

  • Iestyn Jowers (Design Group, The Open University, Walton Hall)
  • Sibel Tari (Department of Computer Engineering, Middle East Technical University)


Workshop abstract

The aim is to bring together people from the architectural and industrial design communities with people from engineering, mathematics, and computer science to create synergy between inverse and forward modeling in the field of creative design and advanced manufacturing. We call for contributions exploring techniques rooted in signals and systems for solving novel emerging applications, as well as re-addressing traditional design and manufacturing problems with modern techniques. Discussions of inspiring unsolved applications or wide-coverage surveys are also welcome.

Workshop topics

  • Detection and estimation of novelty
  • Dimensionality reduction techniques for design/product categorization
  • Managing digital design information (storage, compression and transmission)
  • Distributed collaboration
  • Multisensory (audio, visual, haptic) integration
  • Human computer interaction
  • Eye Tracking
  • Programmable metamaterial design
  • Space syntax