We are surrounded by sounds in our daily lives. To understand the acoustic environment, Acoustic Scene Analysis and Signal Enhancement technologies are essential. Acoustic Scene Analysis and Signal Enhancement include (but are not limited to) event detection, audio content searching, acoustic scene classification, sound profiling, source localization, source separation, noise reduction, dereverberation, sound effect generation, virtual acoustic reproduction, and many others. These techniques form the core of state-of-the-art audio and acoustic signal processing and are indispensable to the realization of future communication, for both man-machine and human-human interfaces.
This special session is dedicated to recent advances in Acoustic Scene Analysis and Signal Enhancement using Microphone Arrays. The aim of this special session is to offer an opportunity to link these techniques across different areas and to find effective ways of achieving our goals. The special session represents a vehicle whereby researchers can present new studies, thus paving the way for future developments in the field. It will stimulate interest in this challenging area and help create a growing body of high-quality research around it.
Digital signal processing for personal communication devices has been an active field of research and industrial development for more than 30 years. While initially focused on mobile phone and conferencing applications, advances in wireless transmission, battery, and chip technology have spawned a more recent interest in wearable ear-mounted communication devices, commonly termed hearables. These include hearing aids for the hearing-impaired population, but also ear-mounted devices for normal-hearing persons, such as augmented-reality headphones, personal sound amplification systems, and assistive headsets for challenging acoustic environments with background noise and reverberation.
While hearables differ in their objectives and requirements in terms of available sensors and transducers, computational complexity, and system latency, the underlying acoustic signal processing problems show strong similarities. These problems relate, for example, to sound acquisition in adverse acoustic environments, where (active) noise reduction and dereverberation algorithms aim to increase speech intelligibility and quality; to audio rendering, which aims to provide an immersive listening experience; and to acoustic feedback reduction, made necessary by the coupling between closely located loudspeakers and microphones. Although these similarities have led to a highly fruitful exchange of ideas and algorithmic solutions between applications, high-quality, low-complexity, and robust speech acquisition and sound reproduction algorithms for hearables are not yet available.
The objective of this special session is to further facilitate this inter- and intra-application synergy by presenting recent advances in acoustic signal processing algorithms for hearables, contributed by leading international experts from academia and industry in the fields of both speech acquisition and sound reproduction.
From the Internet to large research infrastructures, the volume of data generated by our societies is continuously increasing, a deluge faced by the producers of these data as well as by their users. The big data issue is a significant scientific challenge that requires deep investigation in both engineering and fundamental science. Everyone is concerned, and it is urgent to answer questions such as: how can these huge amounts of data be stored? How can they be processed and analyzed? Recently, low-rank tensor methods, including low-rank tensor recovery and completion, new decompositions, and distributed/online adaptation algorithms, have received particular attention as solutions to a variety of mining tasks that are increasingly being applied to massive datasets.
This session aims to gather recent advances on tensor-based methods for large-scale problems. The invited contributions address a variety of problems, such as low-rank decomposition of incomplete tensors, updating algorithms for big data tensors, large tensor spectral theory and performance limits, and coupled tensor factorizations for the fusion of data models. The session offers a good balance between theoretical findings and applications. Moreover, its structure targets not only researchers working in the field but may also attract interest from the general EUSIPCO audience, which is aware of the relevance of big data signal processing.
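To make the notion of low-rank tensor decomposition concrete, the following sketch (Python with NumPy) implements a textbook alternating-least-squares CP decomposition of a 3-way tensor into a sum of rank-one terms. It is a minimal illustration of the model class, not the specific algorithms of the invited contributions.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R)
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    # Mode-n unfolding of a 3-way tensor into a matrix
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # Each factor update is a linear least-squares fit to an unfolding
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Tensor completion follows the same alternating pattern, with each least-squares fit restricted to the observed entries.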
Roads are important man-made infrastructures that play a crucial role in the mobility of people and goods. Appropriate maintenance is essential to ensure proper pavement performance and to preserve structural integrity. Hence, periodic road surveys are needed to evaluate pavement surface condition, and the inclusion of image analysis and processing techniques can considerably ease this condition assessment. This special session addresses the acquisition, detection, and characterization of road pavement surface defects using digital image processing techniques. The analysis of 2D images, the combined analysis of 2D and 3D images, and 3D laser imaging are discussed for crack detection and characterization. Application to large-scale road networks is also addressed.
This special session acknowledges the important role of personal recognition using biometric information and the need to protect personal data. New applications increasingly require user authentication for secure transactions or for access to information, and pervasive imaging devices allow security checks to be performed even at a distance. This calls for novel modalities and improved recognition algorithms. Additionally, there are many issues related to digital information privacy and protection that need to be addressed. This session discusses novel and improved biometric recognition systems, but also how image-based biometrics and privacy are linked, both in terms of privacy leakage and of tools for increasing privacy.
The special session is proposed in the context of the activities of the EURASIP SAT on Biometrics, Data Forensics, and Security.
This session will bring together research on an application domain of growing recent interest, and of high practical importance: signal processing and machine learning applied to the sounds of birds. Acoustic monitoring of species is an increasingly crucial tool in tracking population declines and migration movements affected by climate change. Detailed signal processing can also advance scientific understanding of the evolutionary mechanisms operating on bird acoustic communication. What is needed is a set of tools for scalable and fully-automatic detection and analysis across a wide variety of bird sounds.
A central component of this special session will be the outcomes of the Bird Audio Detection Challenge. This data challenge is supported by the IEEE Signal Processing Society, and has published new large open datasets to facilitate the development of methods. The special session will include outcomes of this challenge and invited submissions from groups who have demonstrated strong performance of their methods for bird audio detection. The session is not exclusively based on the challenge but will also invite new research contributions in the broader emerging topic of bird audio signal processing.
Cognitive modelling and learning has become a new trend for advanced signal analysis, especially for semantic content extraction and understanding. Various approaches have been proposed in recent years to address a range of underlying challenges, including data acquisition, denoising, feature extraction, dimensionality reduction, restoration, data compression, segmentation, detection and classification. In addition, fusion and big data mining are also receiving growing attention for enhanced modelling and analysis.
With rapid developments in machine learning, signal processing and big data analysis techniques, in particular compressed sensing, deep learning and multi-kernel based modelling, there are exciting new opportunities for exploiting these advances for semantic signal analysis and understanding in a range of inter-disciplinary research areas. Relevant applications can currently be found in areas ranging from communications, energy and manufacturing to health, security and remote sensing, among numerous others. As a result, it is timely to summarise recent progress and advancements, including new models, algorithms and innovative applications, particularly those focussed on the scalability, quality, efficiency and efficacy of solutions. To this end, in this Special Session, we aim to solicit state-of-the-art contributions, and also to provide a forum for both the academic and industrial research communities to report progress and exchange findings.
In this special session, we are particularly interested in fundamental models, algorithms, integrated solutions and novel applications as well as benchmark data and methods for performance assessment. Researchers in all areas of cognitive signal processing and analysis are invited to this special session.
Component Analysis (CA) comprises a set of statistical techniques which factorise data into components that are relevant to certain tasks, such as alignment, clustering, segmentation, classification, etc. CA methods are particularly well suited to performing visual learning from huge amounts of visual data (big data) and, in general, are very useful for dimensionality reduction and the discovery of latent spaces. CA techniques are extensively used in many scientific disciplines such as computer vision, speech analysis and machine learning. Traditional CA techniques are often criticized for (a) being largely affected by the presence of outliers in the data (a common phenomenon in computer vision applications) and (b) not being able to capture the non-linear structure of the data. Recently, due to the tremendous progress in robust learning and optimization, new component analysis techniques have been developed that are robust to gross errors. Furthermore, recent efforts have been made to marry classical component analysis with deep representation learning techniques. This special session aims to show recent works on both robust component analysis and the use of deep learning in component analysis.
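A canonical instance of robust component analysis is principal component pursuit, which splits a data matrix into a low-rank part plus a sparse outlier part. The sketch below (Python with NumPy) solves it with a basic augmented-Lagrangian iteration and the standard default parameters; it is a minimal illustration of the idea, not the method of any particular contribution to the session.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Soft thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, n_iter=500):
    """Decompose M into low-rank L plus sparse S (principal component
    pursuit, solved with a basic augmented-Lagrangian iteration)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))       # standard regularizer
    mu = mu or 0.25 * m * n / np.abs(M).sum()   # standard penalty weight
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                        # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (M - L - S)                # dual ascent on M = L + S
    return L, S
```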
Nowadays, computational methods are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One such field is the analysis of audio signals, which finds applications in communications, entertainment, security, forensics, and health, to name but a few.
The typical methodology adopted in these tasks consists of extracting and manipulating useful information from the audio stream to drive automated services. Such an approach is applied to different kinds of audio signals, from music to speech and from environmental sounds to general acoustic data. Computational methods can also characterize an audio stream by directly analyzing raw data. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have recently been investigated.
It is indeed of great interest for the scientific community to understand the effectiveness of novel computational methods for audio analysis, in the light of all aforementioned aspects. The aim of this session is therefore to focus on the most recent advancements and their applicability to a wide range of audio analysis tasks.
Advances in technology and the continuous drive toward the miniaturization of electronic devices that integrate sensing, processing and wireless communication capabilities have given rise to the Internet of Things (IoT) era. In such an environment, where micro-processor based systems surround us in our everyday lives, signal processing problems arise in many new and challenging tasks. Beyond the typical scenario, in which every device aims to solve signal processing problems of local interest (e.g., estimating a communication channel, deciding if a certain event has occurred or not, tracking multiple objects), superior performance can be achieved via the joint work of a number of such devices. In other words, the devices cooperate by exchanging suitable messages and by working together to better meet common or, more interestingly, different goals.
Today, more than ever before, there is a need for efficient, distributed and cooperative methods for information processing, since a centralized architecture does not scale with the abundance of devices. Modern ad-hoc networks should also enjoy self-organization characteristics that render them able to operate unattended under various types of harsh conditions, such as a dynamically changing network topology or even a number of nodes ceasing to function.
This special session focuses on methods and techniques suitable for inference over large datasets distributed across different geographic locations, addressing the new challenges of big data analytics. In addition, emphasis is given to online methods that fall into the above categories. Methods that tackle the important security and privacy concerns that arise in such distributed systems are also encouraged.
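A representative example of such cooperative processing is diffusion LMS, in which each node adapts a local estimate from its own data and then averages it with its neighbours' estimates. The sketch below (Python with NumPy) implements the adapt-then-combine variant over a ring topology; the network size, step size, and noise level are illustrative choices of ours.

```python
import numpy as np

def diffusion_lms(w0, n_nodes=6, n_iter=3000, mu=0.02, noise_std=0.01, seed=0):
    """Adapt-then-combine diffusion LMS: every node estimates the common w0."""
    rng = np.random.default_rng(seed)
    M = len(w0)
    # Ring topology: each node averages its estimate with its two neighbours
    C = np.zeros((n_nodes, n_nodes))
    for k in range(n_nodes):
        C[k, [k, (k - 1) % n_nodes, (k + 1) % n_nodes]] = 1.0 / 3.0
    W = np.zeros((n_nodes, M))                      # one estimate per node
    for _ in range(n_iter):
        U = rng.standard_normal((n_nodes, M))       # local regressors
        d = U @ w0 + noise_std * rng.standard_normal(n_nodes)
        err = d - np.sum(U * W, axis=1)
        Psi = W + mu * err[:, None] * U             # adapt (local LMS step)
        W = C @ Psi                                 # combine with neighbours
    return W
```

No node ever shares its raw data, only its current estimate, which is what makes this class of methods attractive for the privacy concerns mentioned above.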
Networks (social, biomedical, informational and technological) are ubiquitous in our societies. More importantly for the signal processing (SP) community, such networks generate massive amounts of “network data” that need to be processed and analyzed. Graphs play a crucial role in capturing the local interactions between the connected network nodes, and in explaining how the global network behavior arises from such local interactions. To address those challenges, graph SP has emerged as a new field whose goal is to understand and leverage the relationships between the graph topology and the properties of the network data (graph signals).
There is an evident mismatch between our scientific understanding of signals defined over regular domains (time or space) and signals defined over general graphs. Of course, this is not surprising. Human knowledge about time-varying signals was developed over the course of decades and boosted by real needs in areas such as communications, speech, video, and control. On the contrary, the prevalence of network-related SP problems and the access to quality network data are much more recent events. Nevertheless, there is a pressing need to better understand information in network settings, and this will invigorate the development of graph SP in the coming years.
The session aims to contribute to the development of graph SP (by bringing together experts in the field) and to increase the awareness of this emerging field in our community. Indeed, graph SP can be viewed as a generalization of classical SP (the discrete-time domain can be represented as a path, or chain, graph, and the image domain as a Cartesian product of two path graphs), so that anyone with a background in SP is in a position to contribute to this field.
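The path-graph observation can be made concrete in a few lines. In the sketch below (Python with NumPy; the function names are ours), the Laplacian eigenbasis of a path graph plays the role of the Fourier basis: its eigenvalues follow the classical formula 2 - 2cos(pi*k/n), and a constant graph signal has its entire spectrum in the zero-frequency coefficient.

```python
import numpy as np

def path_laplacian(n):
    # Combinatorial Laplacian L = D - A of an undirected path graph on n nodes
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def gft(L, x):
    # Graph Fourier transform: project x onto the Laplacian eigenbasis
    lam, U = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    return lam, U, U.T @ x       # lam: graph frequencies, U.T @ x: spectrum
```

For the path graph the eigenvectors coincide (up to sign) with the DCT basis, recovering classical spectral analysis as a special case.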
5G and beyond systems necessitate the exploitation of high-gain MIMO beamforming/precoding, using large antenna arrays at both the base stations and the mobile units, to deliver the promised high data rates. Large scale antenna systems (LSAS) have been under development since 2010, and important potential gains have been established theoretically. However, it is only recently that the practical challenges of deploying LSAS have become the focus of the relevant research. These include, but are not limited to: CSI acquisition and pilot contamination, hardware and RF-chain complexity and cost reduction, mixed baseband and RF signal processing with low complexity, and physical space constraints and antenna array topology optimization. In particular, the high cost and power consumption of RF components such as high-resolution ADCs makes dedicating a separate RF chain to each antenna prohibitive, and conventional, fully digital baseband (BB) processing infeasible. Hybrid analog-digital processing, in which the transceiver processing is divided between the analog and digital domains, provides a key solution that allows a reduced number of RF chains and low-spec RF components. Accordingly, this special session proposal has been motivated by the recent increasing interest in the practical challenges of LSAS deployment, the recent advances in analog-digital signal processing, and the ever-increasing interest in Energy Efficient ICT. Areas covered by the special session include (but are not limited to):
The proliferation of high spatial, spectral, and temporal resolution imaging data from laboriously engineered instruments such as telescopes, microscopes, and range imagers creates both challenges and opportunities. On the one hand, the massive amount of high-velocity data presents significant difficulties even for moderate-complexity methods. On the other hand, massive amounts of high-quality data are readily available for both unsupervised and supervised learning purposes. The aim of this special session is to bring together researchers from image processing, machine learning, and big data processing, in an effort to cross-fertilize different disciplines with novel problems, methods, and architectures. The special session focusses on emerging imaging applications in remote sensing, including Earth Observation and astronomical imaging. Emphasis is given to addressing real-life problems in image processing tasks, such as data-driven enhancement and denoising, as well as in learning tasks like dimensionality reduction, unmixing, and anomaly detection, on massive high-dimensional imaging data, exploiting novel computation platforms, e.g., FPGA-, GPU-, and distributed learning architectures.
Topics include, but are not limited to:
After two decades of studies on source separation, recent advances in the topic have paved the way towards reliable solutions for real-world applications of acoustic signal enhancement. Multivariate analysis through multidimensional homogeneous or heterogeneous models has been shown to be a promising approach to achieve consistent retrieval of wide-band acoustic sources. On the one hand, separation can be obtained through multivariate spectral or spatial models falling in the class of unsupervised methods. On the other hand, supervising modalities or prior information have been exploited, leading to informed techniques able to overcome the typical ambiguities and indeterminacies of purely blind methods.
This special session is dedicated to novel and recent advances in the field of multivariate analysis for both unsupervised and supervised/informed acoustic signal enhancement. The aim of this session is to provide an opportunity to collect in a single body the state-of-the-art contributions in this field, to create interest in the community, and to inspire research towards new emerging audio enhancement applications. We believe that the field of multivariate analysis is still in its early stages, especially for applications with multidimensional heterogeneous signals, i.e., when signals of different physical nature are linked through a common model with the final goal of better describing the acoustic signal for enhancement, without ambiguities. Furthermore, in the era of data-dependent learning, audio enhancement models supported by prior information are becoming a practical path towards the integration of heterogeneous signals.
Musical audio processing is often based on the modeling of physical systems, whether for the purpose of synthesizing musical sounds or of adding special effects inspired by nonlinear analog circuits. Modeling such systems is particularly challenging, as the final result is heavily dependent on the modeling nuances of the reference system and on the nonlinear interaction between its constituent parts. This area of research has grown at a formidable pace only in the past few years, thanks to research advancements in the area of Signal Processing. Two areas have received particular attention from the audio research community: modeling musical instruments, and Virtual Analog modeling. The former is essential for gaining insight into sound production mechanisms, and can be used for predicting the timbral and acoustic behavior of musical instruments, for developing better sound synthesis algorithms, for learning how to interactively sonify virtual environments, etc. Virtual Analog modeling concerns the development of advanced algorithms that are able to model/emulate analog circuitry for sound generation and processing. This latter area of exploration, however, also serves the purpose of developing new paradigms for the robust interactive modeling of physical systems that can be represented by equivalent circuits (a block-wise interconnection of lumped-parameter systems, or of distributed-parameter systems seen from a limited number of ports). These two areas of interest are intimately connected, as they are inspired by similar modeling principles.
This Special Session is aimed at presenting an update on this field of research, by portraying a number of examples of applications of Signal Processing to these aspects of musical acoustics, including examples of vibro-acoustic modeling of acoustic musical instruments; applications of timbral and acoustic analysis of acoustic musical instruments; solutions for the modeling and the implementation of virtual analog systems for musical audio processing; etc.
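As a minimal example of the virtual analog approach, the sketch below (Python with NumPy) discretizes a first-order analog RC low-pass, H(s) = wc/(s + wc), with the bilinear transform, pre-warping the cutoff so the digital filter matches the analog response exactly at fc. Real virtual analog models extend this same recipe to nonlinear, multi-port circuits.

```python
import numpy as np

def rc_lowpass_va(x, fc, fs):
    """First-order RC low-pass, H(s) = wc/(s + wc), discretized with the
    bilinear transform; the cutoff is pre-warped to land exactly at fc."""
    wc = 2 * fs * np.tan(np.pi * fc / fs)    # pre-warped analog cutoff (rad/s)
    a = wc / (2 * fs)
    b0 = a / (1 + a)                          # feedforward gain
    a1 = (a - 1) / (1 + a)                    # feedback coefficient
    y = np.zeros_like(np.asarray(x, dtype=float))
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = b0 * (xn + x1) - a1 * y1       # y[n] = b0(x[n]+x[n-1]) - a1 y[n-1]
        x1, y1 = xn, y[n]
    return y
```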
Networks are everywhere. They carry information or matter at all scales and across all domains of science. Analyzing the efficiency, robustness, and dynamics of networks is a domain of great current interest, as it brings system-level insights. In human technologies, for instance the Internet and social networks, methods to study the topology formed by these networks are developing rapidly. Such networks are mostly directly accessible because they are man-made, while this is not the case in the life sciences: information concerning biological structural networks must first be extracted before it becomes accessible to analysis. Bioimaging is a tool of choice for the non-destructive observation of such biological structural networks. At the human scale, one can think, for instance, of diffusion-weighted magnetic resonance imaging (MRI) for the observation of connected neural fibers, functional MRI for brain activity, perfusion imaging or angiography for the analysis of vascular networks, or X-rays for root systems. At the microscopic scale, new optical and X-ray imaging methods are becoming available to image complex cellular networks in 2D or 3D, providing large and complex data sets. The wealth and complexity of data made available by recent advances in imaging open new opportunities for methods and tools that quantify such information, providing new insight into the understanding of these networks.
In this session, we will consider recent advances in the quantification of biomedical networks. This will include various approaches based on skeletons, graphs or applied mathematics to extract information from these networks, and how this information can be modelled for the understanding of the underlying biological problem. Perspectives and future challenges in these areas will be discussed.
Nonlinear filters are applied in a wide range of real-world problems. Different learning or adaptation methods have been proposed over the years for their identification or operation. One of the most popular models used in nonlinear filtering is that of linear-in-the-parameters (LIP) nonlinear filters. These filters are characterized by a linear combination of a nonlinear representation or expansion of the input signal. The class of LIP filters includes several families of nonlinear filters, usually distinguished according to the nonlinear transformation they perform, such as adaptive Volterra filters, polynomial filters, kernel adaptive filters, Fourier nonlinear filters, functional link-based filters, Hammerstein spline adaptive filters, extreme learning machines, and many others. The linear filtering technique chosen to estimate the filter parameters is usually related to the application of interest. LIP nonlinear filters can be implemented with online, batch or semi-batch identification algorithms, and can be applied to problems ranging from regression to classification in different fields. In recent years, LIP-based nonlinear techniques have gained increasing interest in many diverse fields of signal processing, from audio processing to image and video processing to telecommunications, and they have been applied mainly to nonlinear modeling, nonlinear compensation, and signal enhancement.
This special session aims to bring together leading researchers in the fields of linear and nonlinear signal processing and machine learning for signal processing and to provide novel advances on LIP nonlinear filters and their applications.
Topics of interest may include, among others: nonlinear transformations for LIP filters, machine learning and adaptive algorithms for LIP nonlinear filters, sparse representations, complex-valued methods, Bayesian LIP nonlinear filters, nonlinear acoustic echo cancellation, nonlinear compensation, nonlinear enhancement, active noise control.
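The defining property of a LIP filter, a fixed nonlinear expansion followed by a linear combination of adaptive weights, can be sketched as follows (Python with NumPy; the trigonometric functional-link expansion and the parameter values are illustrative choices of ours, not those of any specific contribution):

```python
import numpy as np

def flann_expand(buf):
    # Trigonometric functional-link expansion: one of many possible LIP bases
    return np.concatenate([buf, np.sin(np.pi * buf), np.cos(np.pi * buf)])

def lip_lms(x, d, mem=4, mu=0.05):
    """Identify a nonlinear system with a LIP filter adapted by LMS."""
    w = np.zeros(3 * mem)                # one weight per expansion term
    y = np.zeros(len(x))
    buf = np.zeros(mem)                  # tapped delay line of the input
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        phi = flann_expand(buf)          # fixed nonlinear expansion
        y[n] = w @ phi                   # output is LINEAR in the weights w
        w += mu * (d[n] - y[n]) * phi    # standard LMS update
    return y, w
```

Because the output is linear in the parameters, the whole toolbox of linear adaptive filtering (LMS, RLS, sparse and Bayesian variants) carries over unchanged.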
Broadband multichannel problems in the context of, e.g., microphone arrays, wideband MIMO, sonar or biomedical sensing can be formulated, and often solved, by polynomial matrix algebra as a straightforward extension of well-known narrowband approaches. With matrix decompositions such as the eigenvalue decomposition (EVD), singular value decomposition (SVD) or generalised eigenvalue decomposition (GEVD) taking centre stage in optimal narrowband techniques, their generalisation to polynomial matrix factorisations and the development of polynomial matrix EVD (PEVD) algorithms have triggered a number of subsequent algorithm developments and recent applications. This special session will draw on new developments and provide an overview of theoretical, algorithmic and application developments in this emerging area.
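To illustrate the objects involved, the sketch below (Python with NumPy) evaluates a para-Hermitian polynomial matrix on a DFT grid and performs an independent EVD in each frequency bin. Note that this bin-wise shortcut ignores the phase coherence across bins that dedicated PEVD algorithms such as SBR2 or SMD enforce; it only illustrates the structure these algorithms decompose.

```python
import numpy as np

def parahermitian_on_grid(R_taus, taus, n_fft=64):
    """Evaluate R(e^{jw}) = sum_tau R_tau e^{-jw tau} on an n_fft-point grid."""
    M = R_taus[0].shape[0]
    w = 2 * np.pi * np.arange(n_fft) / n_fft
    Rw = np.zeros((n_fft, M, M), dtype=complex)
    for R_tau, tau in zip(R_taus, taus):
        Rw += np.exp(-1j * w * tau)[:, None, None] * R_tau
    return Rw

def binwise_evd(Rw):
    # Independent EVD per bin: each R(e^{jw}) is Hermitian, so eigh applies
    lam = np.zeros(Rw.shape[:2])      # real eigenvalues, one row per bin
    U = np.zeros_like(Rw)             # eigenvector matrix per bin
    for k in range(Rw.shape[0]):
        lam[k], U[k] = np.linalg.eigh(Rw[k])
    return lam, U
```

The para-Hermitian constraint R_{-tau} = R_tau^H guarantees that every bin evaluation is a Hermitian matrix, which is what makes a frequency-dependent eigendecomposition well defined in the first place.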
Positioning technology is ubiquitous and widely used in various applications and infrastructures where precise position, navigation, and timing (PNT) of user equipment is required. As a matter of fact, Global Navigation Satellite Systems (GNSS) are the technology of choice in outdoor scenarios. In indoor environments, GNSS is typically not suitable, and multi-technology/multi-sensor solutions are considered instead. Even in outdoor scenarios, GNSS can be severely degraded by complex environments such as dense urban canyons, by malicious jamming attacks, or by ionospheric scintillation perturbations in the equatorial and polar regions of the Earth. In any case, signal processing has always played a key role in receiver design, performance enhancement, and the mitigation of vulnerabilities. This special session brings together contributions addressing several of these challenging and complex positioning scenarios, with a focus on both outdoor and indoor complex environments. The covered issues are crucial to address if one wants to leverage these technologies to provide location-based services or, simply, to navigate an area.
The upcoming fifth generation (5G) of cellular wireless communications is expected to support a wide range of different use-cases, ranging from mobile broadband applications, over massive machine type communications, to high-mobility vehicular scenarios. These use-cases all come with their own distinct key performance indicators as well as channel characteristics, leading to a correspondingly rich portfolio of unresolved problems in signal processing to efficiently support user demands. Key to providing satisfactory quality of service in such heterogeneous settings is a mix of different access technologies, as well as flexibility and adaptability of the communication links and transceiver designs with respect to user requirements and channel characteristics. This special session is dedicated to recent advances in physical layer improvements, as well as PHY-MAC cross-layer enhancements, that aim at realizing such demanding 5G targets. The special session brings together researchers and engineers working on signal processing methods and algorithms for 5G mobile communications technology within diverse fields of application to share their experience and report on latest findings.
The “Radio Equipment Directive (RED)” was recently published and replaces the previous “Radio and Telecommunication Terminal Equipment (R&TTE) Directive”, which had been in force since 1999. The RED creates a new regulatory framework in Europe and, in particular, includes clear provisions to enable the introduction of software reconfiguration technology in the Single European Market. Furthermore, software reconfigurability is going to be a substantial driver for future signal processing solutions, since it will make underlying radio platforms open and available for 3rd-party components, introducing new and competitive features across all layers, from the physical layer up to the application layer. The technology is clearly of utmost interest to the community. In the context of future 5G systems, it will be a key enabler for adapting generic hardware platforms to the needs of specific vertical markets.
This Special Session will gather cross-regional experts, including Korean and European thought leaders, to address state-of-the-art technological solutions enabling software reconfiguration in wireless radio equipment. Furthermore, challenges in the fields of security and certification will be discussed in order to provide the audience with a holistic picture of this multi-disciplinary challenge.
This proposal is supported by the 5G CHAMPION consortium, which is working towards a real-field proof-of-concept (PoC) of 5G network capabilities at the PyeongChang Olympic Games in 2018.
Sparsity- and rank-minimization-based techniques have been used extensively in various fields of wireless communications. Most communication signals are sparse in some domain, or can be modeled as having a low-rank property. These properties can be leveraged to process the signals more efficiently and accurately. Using modern techniques of sparse signal processing, rank minimization, and low-rank plus sparse decomposition, great progress can be made in wireless communication areas such as sparse channel estimation, compressive spectrum sensing and wireless parameter estimation, distributed networks, smart antennas and MIMO systems, wireless sensor networks, cognitive radio, smart grid networks, and green communications. This special session aims to discuss some of the recent advances in this area.
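As one concrete example of how sparsity is leveraged, sparse channel estimation is often posed as recovering a k-sparse vector from a small number of linear measurements. The sketch below (Python with NumPy, real-valued for simplicity) implements Orthogonal Matching Pursuit, a standard greedy solver for this problem:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    r = y.astype(float).copy()                    # residual
    support = []                                  # indices of selected atoms
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))       # most correlated atom
        support.append(j)
        # Least-squares fit of y on the atoms selected so far
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ sol               # update the residual
    x[support] = sol
    return x
```

For complex baseband channels the same loop applies with conjugate transposes in place of the plain transposes.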