Biomedical Imaging Group, EPFL,
CH-1015 Lausanne, Switzerland
Biomedical imaging plays a central role in medicine and biology, with its range of applications and level of sophistication having increased steadily during the past four decades. Modern developments include super-resolution fluorescence microscopy, which was recognized by the 2014 Nobel Prize in Chemistry. Part of the recent improvement in image quality and resolution is due to the use of advanced signal processing.
In a nutshell, classical imaging relies on 1st generation methods; that is, filtered backprojection and Tikhonov regularization, which are the techniques typically deployed in clinical scanners or commercial imagers. The past decade saw the development of 2nd generation methods, which are associated with l1-norm minimization, non-quadratic regularization, sparsity, and compressed sensing. The role of advanced signal processing there is obvious and rather dramatic, as it allows images to be reconstructed from fewer views, which translates into faster imaging and/or a reduction of the radiation dose for the patient. While sparsity-based methods are still at the forefront of research, there is growing evidence of the emergence of 3rd generation methods, which incorporate recent advances in machine learning and deep convolutional networks (ConvNets).
The tutorial will provide a progressive coverage of these developments, starting from an overview of imaging modalities in relation to their forward model. We shall briefly discuss the classical linear reconstruction methods that typically involve some form of backprojection (in the case of CT or PET) and/or the fast Fourier transform (in the case of MRI). We shall then move on to modern iterative schemes that can handle more challenging acquisition setups such as parallel MRI, non-Cartesian sampling grids, and/or missing views. We shall highlight sparsity-promoting methods supported by the theory of compressed sensing. We shall address implementation issues and present numerous illustrative examples. We shall conclude with a short presentation of challenges and opportunities for the design of future 3rd generation methods.
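To make the contrast between 1st- and 2nd-generation reconstruction concrete, here is a minimal sketch on a toy underdetermined inverse problem (all sizes and parameter values are illustrative, not from any actual scanner pipeline), comparing Tikhonov regularization with l1-norm minimization via ISTA:

```python
import numpy as np

# Toy comparison of 1st-generation (Tikhonov) and 2nd-generation (l1/ISTA)
# reconstruction on an underdetermined problem y = A x + noise with sparse x.
rng = np.random.default_rng(0)
m, n = 64, 128                         # fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# 1st generation: quadratic (Tikhonov) regularization, closed-form solution
x_tik = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y)

# 2nd generation: l1 regularization solved by ISTA (proximal gradient)
def ista(A, y, lam=0.02, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
    return x

x_l1 = ista(A, y)
err_tik = np.linalg.norm(x_tik - x_true)
err_l1 = np.linalg.norm(x_l1 - x_true)
```

On this kind of sparse toy problem, the l1 solution typically comes far closer to the ground truth than the quadratic one, mirroring the fewer-views advantage discussed above.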
Presenters: Luca Sanguinetti, University of Pisa, Italy
Next-generation wireless networks need to accommodate around 1000x higher data volumes and 50x more devices than current networks. Since spectral resources are scarce, particularly in bands suitable for wide-area coverage, the main improvements need to come from a more aggressive spatial reuse of the spectrum; that is, many more concurrent transmissions are required per unit area. This can be achieved by massive MIMO (massive multi-user multiple-input multiple-output) technology, where the access points are equipped with hundreds of antennas and can serve tens of users on each time-frequency resource by spatial multiplexing. The large number of antennas provides a strong separation of users in the spatial domain, which is a paradigm shift from conventional multi-user technologies that mainly rely on user separation in the time or frequency domains.
In recent years, massive MIMO has gone from being a mind-blowing theoretical concept to one of the most promising 5G-enabling technologies. Everybody seems to talk about massive MIMO, but do they all mean the same thing? What is the canonical definition of massive MIMO? What are the differences from the classical multi-user MIMO technology from the nineties? What are the key characteristics of the transmission protocol? How can massive MIMO be deployed? Are there any widespread misunderstandings?
This tutorial provides answers to all of these questions and other doubts that the attendees might have. We begin by covering the main motivation and properties of massive MIMO in depth. Next, we describe basic communication-theoretic results that are useful to quantify the fundamental gains, behaviors, and limits of the technology. The second half of the tutorial provides a survey of the state of the art regarding spectral efficiency, energy-efficient network design, and practical deployments.
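The spatial-separation argument can be checked numerically: for an i.i.d. Rayleigh channel, the normalized Gram matrix of the users' channel vectors approaches the identity as the number of antennas grows. A toy sketch (dimensions are hypothetical, not from any standard):

```python
import numpy as np

# As the number of base-station antennas M grows, the users' channel vectors
# become nearly orthogonal, so even simple maximum-ratio combining nearly
# separates the users in the spatial domain.
rng = np.random.default_rng(1)
K = 10                                     # single-antenna users
offs = []                                  # worst-case inter-user "leakage"
for M in (10, 100, 1000):                  # base-station antenna counts
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    G = H.conj().T @ H / M                 # normalized Gram matrix -> identity
    off = np.abs(G - np.diag(np.diag(G))).max()
    offs.append(off)                       # shrinks roughly like 1/sqrt(M)
```

The maximum off-diagonal entry, i.e. the residual inter-user interference after matched filtering, shrinks as M grows, which is precisely the favorable-propagation effect behind massive MIMO.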
Fellow of EURASIP, Life Fellow of the IEEE, FIET, FREng
Selex ES (retired), Rome, Italy
Chairperson of the IEEE AESS Outstanding Organizational Leadership Award
Visiting Professor at UCL, Electronics Department
Dipartimento di Ingegneria dell’Informazione (DINFO)
Università degli Studi di Firenze
Via Santa Marta 3, 50139 Firenze, Italy
The talk will describe the intertwined R&D activities, spanning several decades, between academia and industry in conceiving and implementing – on live radar systems – tracking algorithms for targets in civilian as well as defense and security applications.
We trace the path from the alpha-beta adaptive filter to modern random-set filters, passing through the Kalman algorithm (in its many embodiments), Multiple Model filters, Multiple Hypothesis Tracking, Joint Probabilistic Data Association, and Particle Filters for nonlinear, non-Gaussian models. Fusion of heterogeneous collocated as well as non-collocated sensor data is also addressed. Applications to land, naval, and airborne sensors are considered. Active as well as passive radar experiences are overviewed. The description will provide a balanced look at both mathematical aspects and practical implementation issues, including mitigation of real-life limitations.
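As a taste of the simplest member of this family, here is a minimal alpha-beta tracking filter on a synthetic constant-velocity target; the gains, noise level, and trajectory are illustrative, not those of any fielded system:

```python
import numpy as np

# Minimal alpha-beta tracking filter: predict along constant velocity,
# then correct position and velocity with fixed gains alpha and beta.
def alpha_beta_track(z, dt=1.0, alpha=0.5, beta=0.1):
    x, v = z[0], 0.0                  # position and velocity estimates
    est = []
    for zk in z:
        xp = x + dt * v               # predict along constant velocity
        r = zk - xp                   # innovation (measurement residual)
        x = xp + alpha * r            # correct position
        v = v + (beta / dt) * r       # correct velocity
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(2)
t = np.arange(50)
truth = 2.0 * t                        # constant-velocity target
z = truth + rng.standard_normal(50)    # noisy plots (measurements)
est = alpha_beta_track(z)
```

After the initial transient, the filtered track is smoother than the raw plots; the Kalman filter generalizes this by computing the gains adaptively from the assumed noise statistics.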
The intended audience comprises the signal processing and data processing communities related to radar, sonar, and localization systems. We will review a truly decades-long successful history of cooperation between industry and universities at the cutting edge of state-of-the-art signal/data processing.
The tutorial will cover the following topics.
Who we are (short introduction of the authors and their organizations).
Historical overview of the long-standing cooperation.
Retracing the evolution of the tracking algorithms (track initiation, data association filtering, adaptive features, track quality measure, plot-track fusion, track termination, sensor registration, grid locking, etc.).
More recent developments: random finite sets in multi-target tracking.
Performance evaluation tools.
Implementation on live systems: error budgets, hardware/software limitations, environmental limitations (clutter, multipath, ducting, atmospheric nonlinearity/anisotropy, etc.).
Mitigation techniques for the above-mentioned limitations.
Validation & test: lessons learned.
Conclusion and way ahead.
Patrick A. Naylor
Department of Electrical and Electronic Engineering
Imperial College London
Enzo De Sena
Institute of Sound Recording
University of Surrey
Toon van Waterschoot
Department of Electrical Engineering
Reverberation is the acoustic phenomenon that occurs whenever speech and audio signals are produced in an environment with reflective boundaries, such as a room, a hall, or a car. Depending on the context, reverberation can be considered either a desirable or an undesirable phenomenon. In the context of music, gaming, and movie production, for instance, synthetic reverberation is commonly added to provide spatial information for better understanding and enjoyment of the sound. In contrast, dereverberation is advantageous to speech intelligibility and quality in many applications, including hearing aids, hands-free telecommunications terminals, and distant-microphone (ambient) speech recognition. Interest in these research topics continues in the signal processing community, as evidenced by special issues and conference special sessions, either directly on this topic or in closely related signal processing areas in the general field of room acoustics.
This tutorial will present the latest research methods and outcomes related to reverberation and dereverberation in acoustics, from a signal processing perspective. We will include information covering (a) a brief summary of key relevant issues of acoustics, from a signal processing perspective, (b) recent advances in methods for simulating room reverberation, (c) measures of reverberation and their applications, (d) advanced approaches for acoustic transfer function and relative transfer function estimation and (e) example methods of dereverberation processing. The tutorial will include examples and illustrations of reverberation and dereverberation processing in speech enhancement, hearing assistance, spatial audio, and other applications.
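As a minimal illustration of point (b), one crude but common way to synthesize reverberation is to convolve the dry signal with an exponentially decaying noise tail standing in for a measured room impulse response; the sample rate, T60, and tail level below are illustrative:

```python
import numpy as np

# Crude synthetic reverberator: direct path plus an exponentially decaying
# diffuse noise tail, convolved with a dry signal.
fs = 16000
rng = np.random.default_rng(3)
t60 = 0.5                                      # target reverberation time (s)
n_ir = int(fs * t60)
decay = np.exp(-6.9 * np.arange(n_ir) / n_ir)  # ~60 dB drop over t60
h = np.zeros(n_ir)
h[0] = 1.0                                     # direct path
h[1:] = 0.3 * rng.standard_normal(n_ir - 1) * decay[1:]  # diffuse tail

dry = np.sin(2 * np.pi * 440.0 * np.arange(fs // 4) / fs)  # 0.25 s tone
wet = np.convolve(dry, h)                      # reverberant ("wet") signal
```

More faithful simulators (e.g., image-source models) produce structured early reflections before the diffuse tail; this statistical model captures only the late reverberation behavior.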
Miguel A. Vázquez
Universitat Politècnica de Catalunya
Centre Tecnologic de Telecomunicacions de Catalunya
This tutorial presents the configuration and application of next generation satellite (NGS) systems, mainly based on very high/high throughput satellites. The course covers the applications, methodological advances, and basic theory of signal processing and link-layer techniques in satellite communications. We plan to cover the schemes that are implemented in the current standards, as well as advanced techniques that will support next generation satellite systems. Key research directions are presented. NGS systems target capacities on the order of a Terabit per second. Compared to terrestrial systems, the channel impairments may be more adverse, mainly due to the low received signal-to-noise ratio, and the on-board processing must be kept at very low complexity. In spite of this, and in order to boost capacity, future satellite systems aim at non-orthogonal access schemes, thus shifting from a noise-limited to an interference-limited paradigm. NGS systems should also allow a seamless integration with terrestrial 5G systems, and this is also addressed by the tutorial.
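The noise-limited vs. interference-limited shift can be sketched with Shannon spectral efficiencies (purely illustrative link numbers): once co-channel interference grows in proportion to the signal, as under aggressive non-orthogonal reuse, extra power stops buying capacity:

```python
import numpy as np

# Orthogonal access: rate keeps growing with SNR (noise-limited).
# Aggressive reuse: interference grows with the signal, so the SINR, and
# hence the rate, saturates (interference-limited). Numbers are illustrative.
snr = 10 ** (np.arange(0, 31, 10) / 10)                    # 0, 10, 20, 30 dB
noise_limited = np.log2(1 + snr)                           # bits/s/Hz
interference_limited = np.log2(1 + snr / (1 + 0.5 * snr))  # INR = 0.5 * SNR
```

In the second curve, the rate is capped at log2(3) bits/s/Hz no matter how much power is spent, which is why interference management, rather than raw power, drives the design of non-orthogonal schemes.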
Presenters and their Affiliations:
– Marco Maso, Mathematical and Algorithmic Sciences Lab, Huawei Paris Research Center, France
– Marco di Renzo, Paris-Saclay University – Laboratory of Signals and Systems (CNRS – CentraleSupelec – Univ. Paris-Sud), France
– Samir M. Perlaza, Université de Lyon – CITI Laboratory (INRIA Univ. de Lyon – INSA de Lyon), France
– Bruno Clerckx, Communication and Signal Processing Group, Department of Electrical and Electronic Engineering, Imperial College London, UK
Efficient energy utilization is among the main challenges of future communication networks, both to extend their lifetime and to reduce operating costs. Networks are generally populated by battery-dependent devices, and in some cases this dependency is so critical that the battery lifetime represents a bottleneck for the network lifetime as well. Within this context, wireless energy transmission becomes an alternative that eliminates the need for in-situ battery recharging. Nonetheless, for decades, the traditional engineering perspective was to design information transmission systems and energy transmission systems separately. This approach, however, has been shown to be suboptimal. Indeed, a radio-frequency (RF) signal carries both energy and information. From this standpoint, a variety of modern wireless systems and proposals question the conventional separation approach and suggest that RF signals can be used simultaneously for information and energy transmission. This tutorial starts from this observation and aims at familiarizing the attendees with the new communication paradigm of simultaneous wireless information and energy transfer (SWIET) in wireless networks, and its associated challenges.
This in-depth tutorial is built upon the expertise of the speakers on transceiver and algorithm design, information theory, signal processing and network performance analysis. This research field is gradually reaching its maturity and is characterized by a significant heterogeneity. In this context, the tutorial is structured to:
– Provide a general and historical introduction of energy harvesting technologies with specific focus on radio-frequency (RF) energy harvesting;
– Detail recent developments in communications and signal design for wireless energy transmission;
– Highlight SWIET as a promising technique for increasing energy efficiency in future communication networks;
– Explain the main intuitions behind the conflicting aspects between information and energy transmission;
– Present a constructive set of tools for network performance analysis, with specific focus on energy-neutral networks;
– Discuss the main technical challenges and research opportunities for the future deployment of SWIET-empowered networks.
This tutorial is unique of its kind in that it systematically treats aspects that are generally treated independently in similar tutorials, such as system-level performance analysis, device-level transceiver and algorithm design, and theoretical performance bounds of SWIET-empowered devices and networks. In particular, it is divided into four parts. In the first part, a historical and technical introduction to wireless energy transfer is given. The second part focuses more specifically on communication techniques and fundamental limits of SWIET, both for point-to-point and multi-user scenarios. The third part extends the discussion to network-related aspects, and presents the modeling and analysis of energy-neutral networks. Finally, the fourth part is devoted to the introduction of the concept of self-sustainable systems and energy-recycling strategies, with practical examples.
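The conflict between information and energy transmission can be sketched with a simple power-splitting receiver model, in which a fraction rho of the received RF power feeds the energy harvester and the remainder the information decoder (the powers, conversion efficiency, and noise figure below are made-up illustrative values):

```python
import numpy as np

# Power-splitting receiver: rho of the received power is harvested,
# (1 - rho) is decoded. Sweeping rho traces a rate-energy trade-off.
def rate_energy(rho, p_rx=1e-3, eta=0.5, noise=1e-6):
    harvested = eta * rho * p_rx                    # harvested power (W)
    rate = np.log2(1 + (1 - rho) * p_rx / noise)    # decoder rate (bits/s/Hz)
    return rate, harvested

rhos = np.linspace(0.0, 1.0, 11)
pairs = [rate_energy(r) for r in rhos]              # (rate, energy) pairs
```

Moving rho from 0 to 1 monotonically trades information rate for harvested energy; characterizing and approaching the boundary of this rate-energy region is one of the fundamental-limits questions covered in the second part.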
Zai Yang (Nanjing University of Science and Technology, China)
Gongguo Tang (Colorado School of Mines, USA)
Petre Stoica (Uppsala University, Sweden)
In the past two decades or so, sparse estimation and compressed sensing techniques have emerged as powerful tools for the recovery of sparse signals and have made a great impact on a variety of applications. For array processing (more specifically, direction-of-arrival (DOA) estimation), which has wide applications in radar, sonar, and wireless communications, sparse estimation methods have demonstrated their superiority in flexibility, accuracy, robustness, and reliability compared to conventional parametric and nonparametric approaches. Due to the discrete nature of sparse estimation, however, their application in array processing is usually based on approximations (e.g., gridding of the continuous parameter domain) and heuristic assumptions. This raises both practical and theoretical concerns. Great progress has been made in the past few years: the recently developed gridless sparse methods, e.g., those based on the atomic norm and covariance fitting, work directly in the continuous domain, can be cast as convex programs, and have strong theoretical guarantees. This tutorial will provide a unified presentation of the most recent advances in this area as well as the newest insights into sparse estimation, compressed sensing, and array processing.
In this tutorial, we will start with the basics of array processing and its applications. We will review conventional methods, with a focus on subspace-based methods and their limitations. We then introduce the techniques of sparse estimation and compressed sensing. Key differences between sparse estimation and DOA estimation will be highlighted. The main content of this tutorial is focused on how these differences can be resolved by using the gridless sparse methods. Furthermore, connections among the gridless sparse methods, as well as their connections to the conventional subspace-based methods, will be discussed to provide further insights. Computational issues will be discussed and numerical examples will be provided. Finally, some future research directions will be highlighted.
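For reference, here is a compact sketch of the classical subspace-based baseline (MUSIC) on a toy half-wavelength uniform linear array, the kind of grid-searched method whose limitations motivate the gridless sparse alternatives; all scenario parameters are illustrative:

```python
import numpy as np

# MUSIC DOA estimation: project steering vectors onto the noise subspace
# of the sample covariance and pick the peaks of the pseudo-spectrum.
rng = np.random.default_rng(4)
M, N = 10, 200                               # sensors, snapshots
doas = np.deg2rad([-20.0, 15.0])             # true directions of arrival
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                       # sample covariance
w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
En = V[:, :-2]                               # noise subspace (M - 2 vectors)

grid = np.deg2rad(np.arange(-90.0, 90.25, 0.25))   # angular search grid
Ag = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))
p_music = 1.0 / np.linalg.norm(En.conj().T @ Ag, axis=0) ** 2

# Keep local maxima of the pseudo-spectrum, then take the two largest peaks
peaks = np.where((p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:]))[0] + 1
top = peaks[np.argsort(p_music[peaks])[-2:]]
est = np.sort(np.rad2deg(grid[top]))         # estimated DOAs in degrees
```

Note the explicit grid in the final search step: gridless methods such as atomic-norm minimization avoid exactly this discretization, estimating the DOAs directly in the continuous domain.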
Abderrahim Elmoataz (University of Caen-Normandie, France)
Pierre Buyssens (University of Caen-Normandie, France)
Partial differential equations (PDEs) play a key role in mathematical modeling throughout the applied and natural sciences. In this context, many PDEs have been studied to describe important processes in, e.g., physics, biology, economics, image processing, and computer vision. In particular, they have been applied successfully in image and signal processing to a broad variety of applications, e.g., isotropic and anisotropic filtering, non-local filtering, image regularization and inpainting, or image segmentation (to name a few).
Recently, there has been high interest in adapting and solving PDEs on data given by arbitrary graphs and networks. The demand for such methods is motivated by existing and potential future applications, such as in machine learning and mathematical image processing. Indeed, any kind of data can be represented in abstract form by a graph in which the vertices are associated with the data and the edges correspond to relationships within the data. In order to translate and solve PDEs on graphs, different discrete vector calculi have been proposed in the literature in recent years. One simple discrete calculus on graphs is based on discrete partial differences, which enables one to solve PDEs on both regular and irregular data domains in a unified and simple manner. This mimetic approach consists of replacing continuous partial differential operators, e.g., gradient or divergence, by a reasonable discrete analogue, which makes it possible to transfer many important tools and results from the continuous setting.
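A minimal sketch of this mimetic approach: the continuous heat equation u_t = Δu becomes u_t = -Lu on a graph, with L the combinatorial graph Laplacian; the 5-node path graph, step size, and step count below are illustrative choices:

```python
import numpy as np

# Heat diffusion on a graph: u_t = -L u, integrated with explicit Euler.
W = np.zeros((5, 5))
for i in range(4):                       # path graph 0-1-2-3-4
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian

u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all heat initially at node 0
tau = 0.1                                # stable since tau < 2 / lambda_max(L)
for _ in range(200):
    u = u - tau * (L @ u)                # one explicit diffusion step
```

As in the continuous setting, the discrete diffusion conserves the total heat and flattens the signal toward its mean, which is the basic mechanism behind graph-based filtering and regularization.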
This tutorial aims to propose a comprehensive introduction to the field of partial difference equations (PdEs) on graphs and their applications in image, signal, and data processing, and machine learning. It lies at the interface of the following topics: local and non-local continuous PDEs, PDEs on graphs, Tug-of-War games, signal processing on graphs, and local and non-local manifold processing.
Presenters: Yang Yang (Intel Deutschland GmbH), Marius Pesavento (Technische Universität Darmstadt)
In the past two decades, convex optimization has gained increasing popularity in signal processing and communications, as many fundamental problems in this area can be modelled, analyzed, and solved using convex optimization theory and algorithms. In emerging large-scale applications such as compressed sensing, massive MIMO, and machine learning, the underlying optimization problems often exhibit convexity; however, the classic interior-point methods do not scale well with the problem dimensions. Furthermore, in applications that do not exhibit conventional convexity, general-purpose gradient methods often show slow convergence.
Recently, a variety of iterative descent direction methods have gained interest; these can be customized to the requirements of specific applications, accounting for their underlying problem structures and (parallel) hardware architectures. In this tutorial, we capture this trend and address the challenges in designing efficient customized optimization procedures.
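As one example of a structure-exploiting descent method, here is a sketch of cyclic coordinate minimization for the lasso, where each scalar subproblem has a closed-form soft-thresholding solution; the problem sizes and regularization weight are illustrative:

```python
import numpy as np

# Cyclic coordinate descent for the lasso 0.5*||A x - b||^2 + lam*||x||_1.
# Each coordinate update is an exact scalar minimization (soft threshold).
rng = np.random.default_rng(5)
m, n = 80, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:4] = [3.0, -2.0, 1.5, -1.0]
b = A @ x_true + 0.05 * rng.standard_normal(m)
lam = 1.0

x = np.zeros(n)
r = b - A @ x                            # residual, updated incrementally
col_sq = (A ** 2).sum(axis=0)
for _ in range(50):                      # cyclic sweeps over coordinates
    for j in range(n):
        rho = A[:, j] @ r + col_sq[j] * x[j]          # partial correlation
        xj = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        r += A[:, j] * (x[j] - xj)       # keep the residual consistent
        x[j] = xj
```

The per-coordinate cost is O(m), and the incremental residual update avoids recomputing A x from scratch; block variants of the same idea map naturally onto parallel hardware.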
George Tzanetakis, Professor
Canada Research Chair (Tier II) in the Computer Analysis of Audio and Music
University of Victoria
Music is a very complex signal with information spread across different hierarchical levels and temporal scales. In the last 15 years, the fields of Music Information Retrieval (MIR) and music signal processing have made solid progress in developing algorithms for understanding music signals, with applications such as music recommendation, classification, transcription, and visualization. Probabilities and probabilistic modeling play an important role in many of these algorithms. The goal of this tutorial is to explore how probabilistic reasoning is used in the analysis of music signals. The target audience is researchers and students interested in MIR, but the tutorial will also be of interest to participants from other areas of signal processing, as the techniques described have a wide variety of applications. More specifically, the tutorial will cover how basic discrete probabilities can be used for symbolic music generation and analysis, followed by how classification can be cast as a probability density function estimation problem through Bayes' theorem. Automatic chord detection and structure segmentation will be used as motivating problems for probabilistic reasoning over time and, more specifically, Hidden Markov Models. Kalman and particle filtering will be described through real-time beat tracking and score following. More complex models such as Bayesian Networks and Conditional Random Fields, and how they can be applied to music analysis, will also be presented. Finally, the tutorial will end with Markov Logic Networks, a formalism that subsumes all the previous models. Throughout the tutorial, the central concepts of Bayes' theorem, Markov assumptions, maximum likelihood estimation, and expectation maximization will be described.
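As a flavor of HMM-based reasoning over time, here is a toy Viterbi decoder in the spirit of chord detection; the states, transition matrix, and frame likelihoods are made-up illustrative numbers, not a real chord model:

```python
import numpy as np

# Toy Viterbi decoding: hidden states are chords, observations enter via
# per-frame emission log-likelihoods.
def viterbi(log_trans, log_emit, log_init):
    T, S = log_emit.shape
    delta = log_init + log_emit[0]        # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)     # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: state i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Two chord states (0 = "C", 1 = "G"); self-transitions favored: chords persist
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_init = np.log(np.array([0.5, 0.5]))
# First three frames favor "C", the last three favor "G"
log_emit = np.log(np.array([[0.8, 0.2]] * 3 + [[0.2, 0.8]] * 3))
best = viterbi(log_trans, log_emit, log_init)
```

The sticky self-transitions encode the prior that chords persist across frames, so the decoded path changes state only where the accumulated frame evidence justifies the transition penalty.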