There has been significant recent progress in the development of tools for graph signal processing, including methods for sampling and transforming graph signals. In many applications, a graph needs to be learned from data before these graph signal processing methods can be applied. A standard approach for graph learning is to estimate the empirical covariance from the data and then compute an inverse covariance (precision) matrix under desirable structural constraints. We present recent results that allow us to solve these problems under constraints that encompass a broad class of generalized graph Laplacians. These methods are computationally efficient, can incorporate sparsity constraints, and can also be used to optimize weights for a given known topology. We illustrate these ideas with examples in image processing and other areas.
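As a concrete illustration of the generic graph-learning pipeline, here is a minimal sketch that uses scikit-learn's graphical lasso as a stand-in for the generalized-Laplacian-constrained estimators discussed in the talk; the synthetic data, regularization value, and thresholding step are illustrative assumptions.

```python
# Minimal sketch (not the talk's exact estimator): learn a sparse precision
# matrix from graph-signal samples with the graphical lasso, then read off a
# weighted graph. A generalized graph Laplacian estimator would additionally
# constrain the off-diagonal entries; that step is omitted here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))          # 500 graph-signal samples, 8 nodes
model = GraphicalLasso(alpha=0.1).fit(X)   # sparse inverse-covariance estimate
Theta = model.precision_

# Edge weights from the (negated) off-diagonal precision entries
W = -Theta.copy()
np.fill_diagonal(W, 0.0)
W[W < 0] = 0.0                             # keep only Laplacian-consistent edges
print(np.round(W, 3))
```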
A network can be understood as a complex system formed by multiple nodes, where global network behavior arises from local interactions between connected nodes. Often, networks have intrinsic value and are themselves the object of study. On other occasions, the network defines an underlying notion of proximity or dependence, but the object of interest is a signal defined on top of the graph. This is the setting addressed by the field of graph signal processing (GSP). Graph-supported signals appear in many engineering and science fields, such as gene expression patterns defined on top of gene networks and the spread of epidemics over social networks. Regardless of the particular application, the philosophy behind GSP is to advance the understanding of network data by redesigning traditional tools originally conceived for signals defined on regular domains and extending them to analyze signals on the more complex graph domain. In this talk, we will introduce the main building blocks of GSP and illustrate the utility of these concepts through real-world applications. Our focus will be on the definition of stationary graph signals and the inference of underlying graph structures from graph signal observations.
We will consider the problem of distributed cooperative non-Bayesian learning in a network of agents, where the agents repeatedly gain partial information about an unknown random variable whose distribution is to be jointly estimated. The joint objective of the agent system is to globally agree on a hypothesis (distribution) that best describes the data observed by all agents in the network. Interactions between agents occur according to an unknown sequence of time-varying graphs. We highlight some interesting aspects of Bayesian learning and the stochastic approximation approach for the case of a single agent, which have not been observed before and allow for a new connection between optimization and statistical learning. Then, we discuss and analyze the general case where subsets of agents have conflicting hypothesis models, in the sense that the optimal solutions would differ if those subsets of agents were isolated. Additionally, we provide a new non-Bayesian learning protocol that converges an order of magnitude faster than the learning protocols currently available in the literature for arbitrary fixed undirected graphs. Our results establish consistency and a non-asymptotic, explicit, geometric convergence rate for the learning dynamics.
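To make the kind of update rule involved concrete, the sketch below runs a standard consensus-plus-Bayes protocol (geometric averaging of neighbors' beliefs followed by a local likelihood update); it is an assumed illustrative baseline, not the faster protocol proposed in the talk, and the mixing matrix and observation model are toy choices.

```python
# Minimal sketch (assumed baseline, not the talk's exact protocol): distributed
# non-Bayesian learning where each agent geometrically averages its neighbors'
# beliefs and then applies a local Bayesian update on its private observation.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_hyp, T = 4, 3, 200
A = np.full((n_agents, n_agents), 1.0 / n_agents)    # doubly stochastic mixing matrix

theta = np.array([0.2, 0.5, 0.8])                    # Bernoulli hypotheses
true_p = theta[1]                                    # hypothesis index 1 is the truth

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)
for _ in range(T):
    obs = rng.random(n_agents) < true_p              # private observations
    lik = np.where(obs[:, None], theta[None, :], 1.0 - theta[None, :])
    log_mix = A @ np.log(beliefs)                    # consensus on log-beliefs
    beliefs = np.exp(log_mix) * lik                  # local Bayesian update
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(np.round(beliefs, 3))                          # mass concentrates on hypothesis index 1
```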
Wireless Sensor Networks (WSNs) often operate in environments where available energy and bandwidth are limited. It is imperative that suitable resource management policies be adopted to maximize system performance while prolonging the lifetime of the WSN. This talk will provide a review of the current state of the art in sensor management approaches for distributed estimation problems. This will be followed by a more detailed discussion of the optimization of sensor management policies for distributed estimation, including sensor selection, sensor scheduling, and sensor collaboration. Sensor management for distributed estimation in crowdsourcing-based WSNs will also be discussed.
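As one concrete example of a sensor selection policy, the sketch below performs greedy selection for a linear estimation task using a log-determinant (information) criterion; the measurement model and budget are illustrative assumptions rather than any specific method from the talk.

```python
# Minimal sketch (illustrative only): greedy sensor selection for linear
# estimation, picking sensors that maximize the log-determinant of the
# Fisher information matrix -- one common proxy for estimation accuracy.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 30, 4, 6                       # candidate sensors, parameter dim, budget
H = rng.standard_normal((m, n))          # per-sensor measurement vectors (rows)

selected, F = [], 1e-6 * np.eye(n)       # small prior keeps F invertible
for _ in range(k):
    candidates = [i for i in range(m) if i not in selected]
    gains = [np.linalg.slogdet(F + np.outer(H[i], H[i]))[1] for i in candidates]
    best = candidates[int(np.argmax(gains))]
    selected.append(best)
    F += np.outer(H[best], H[best])      # update accumulated information

print("selected sensors:", sorted(selected))
```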
Recent Massive MIMO experiments have convincingly demonstrated the soundness of the underlying concept. Massive MIMO is poised to deliver spectacular improvements over 4G wireless technologies.
Massive MIMO creates virtual parallel circuits, each occupying the full spectral bandwidth, between a multiplicity of single-antenna terminals and an array of individually controlled antennas. Area spectral efficiency improvements over 4G technologies may range from a factor of ten to a factor of one thousand, depending on the mobility of the terminals. Other benefits include energy efficiency gains in excess of one thousand, and simple and effective power control that yields uniformly great service throughout the cell.
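A minimal numerical illustration of the scaling behind these claims, assuming an idealized i.i.d. Rayleigh channel and simple maximum-ratio combining on the uplink (not a calibrated comparison with 4G):

```python
# Minimal sketch (idealized model): uplink sum spectral efficiency with
# maximum-ratio combining as the number of base station antennas M grows,
# for K single-antenna terminals at a fixed per-terminal SNR.
import numpy as np

rng = np.random.default_rng(3)
K, snr = 10, 1.0                                     # terminals, per-terminal SNR
for M in (16, 64, 256):
    rates = []
    for _ in range(200):                             # Monte Carlo channel draws
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        G = H.conj().T @ H                           # K x K Gram matrix
        sig = snr * np.abs(np.diag(G)) ** 2
        interf = snr * (np.sum(np.abs(G) ** 2, axis=1) - np.abs(np.diag(G)) ** 2)
        noise = np.abs(np.diag(G))
        rates.append(np.sum(np.log2(1 + sig / (interf + noise))))
    print(M, "antennas:", round(float(np.mean(rates)), 1), "bit/s/Hz")
```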
Cellular deployment of Massive MIMO in the prime sub-5 GHz bands will be both hugely beneficial and highly disruptive, requiring either new TDD spectrum or the reassignment of existing FDD spectrum, and the replacement of all base station and user equipment. There are less disruptive, but still exciting, applications of Massive MIMO, including small-cell backhaul and fixed wireless access to homes, for which there are no backward compatibility issues.
Wireless communication system concepts for 5G include a variety of advanced physical layer algorithms to provide high data rates and increased efficiency. Each of these algorithms provides different challenges for real-time performance based on the tradeoffs between computation, communication, and I/O bottlenecks and area, time, and power complexity. In particular, Massive MIMO systems can provide many benefits for both uplink detection and downlink beamforming as the number of base station antennas increases. Similarly, channel coding, such as LDPC, can support high data rates in many channel conditions. At the RF level, limited available spectrum is leading to noncontiguous channel allocations where digital pre-distortion (DPD) can be used to improve power amplifier efficiency. Each of these schemes imposes complex system organization challenges in the interconnection of multiple RF transceivers with multiple memory and computation units operating at multiple data rates within the system. Parallel numerical methods can be applied to trade off computational complexity with minimal effect on error rate performance. Simulation acceleration environments can be used to provide thorough system performance analysis. In this talk, we will focus on design tools for high-level synthesis (HLS) to capture and express parallelism in wireless algorithms. This also includes the mapping to GPU and multicore systems for high-speed simulation. HLS can also be applied to FPGA and ASIC synthesis; however, there are tradeoffs in area versus the flexibility and reuse of designs. Heterogeneous system architectures, as exemplified by systems on chip (SoC), attempt to address these system issues. The talk will conclude with a discussion of computation testbeds from supercomputers through desktop GPUs to single-board systems. The integration with radio testbeds, from WARP and USRP to NI and Argos prototype massive MIMO systems, will be explored.
Millimeter wave is the future of cellular and local area networks. Though the main motivation for mmWave is the availability of large spectral channels, most signal processing work has focused on tractable narrowband signal models. In this talk I review the challenges associated with signal processing in broadband millimeter wave channels. Then I review recent developments on two important topics. First, I explain the design of hybrid precoding and combining algorithms, which use a mixture of frequency-flat analog and frequency-selective digital precoding and combining. Second, I show how to formulate hybrid frequency-selective channel estimation to exploit sparsity in the delay and angular domains. The hybrid precoders and combiners can then be configured based on the channel estimates to achieve high spectral efficiency in broadband MIMO channels.
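A much-simplified sketch of the sparse-recovery idea behind the channel estimation, assuming a narrowband angular-domain model and plain orthogonal matching pursuit rather than the wideband hybrid formulation discussed in the talk:

```python
# Minimal sketch (assumed narrowband angular-domain model): orthogonal matching
# pursuit recovers a sparse mmWave channel from a dictionary of array response
# vectors, exploiting sparsity in the angular domain.
import numpy as np

rng = np.random.default_rng(4)
N, G, paths = 32, 64, 3                      # antennas, angular grid size, paths
angles = np.arcsin(np.linspace(-1, 1, G, endpoint=False))
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(angles))) / np.sqrt(N)

x = np.zeros(G, complex)                     # sparse angular-domain channel
x[rng.choice(G, paths, replace=False)] = rng.standard_normal(paths) + 1j * rng.standard_normal(paths)
y = A @ x + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

support, resid = [], y.copy()
for _ in range(paths):                        # OMP: pick the best-matching atom
    support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
    As = A[:, support]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)
    resid = y - As @ coef

print("true support:   ", sorted(np.flatnonzero(x)))
print("recovered paths:", sorted(support))
```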
Cyber-physical systems (CPS) are engineered systems with built-in seamless integration of computational and physical components. Fundamental advances in sensing, learning, control, and information technologies are well motivated by the goal of endowing CPS with resilience, adaptability, scalability, and sustainability. In this context, the present talk will start with online convex optimization algorithms for estimating the state of future power grids. A framework will then be introduced for joint active and reactive power control in distribution grids, which also accounts for stochastic voltage and inverter constraints in order to reduce losses. The efficacy of the novel approaches will be assessed using standard IEEE benchmark distribution feeders. Leveraging statistical inference and stochastic optimization tools, the final topic will deal with state-of-the-art learning-aided management for sustainable data centers. Both analytical and empirical results will demonstrate how valuable insights from big data analytics can lead to markedly improved management policies by learning from historical user and network patterns.
Renewable integration is a century-long project. Over the past decade we have made impressive progress in integrating renewables, energy storage, and demand response into the existing power infrastructure. In this talk, we jump forward to a hypothetical final destination: power systems without fuel. In power systems without fuel, small, modular, renewable sources supply all power. In addition to sustainability and environmental benevolence, power systems without fuel offer superior operation to current power systems due to, for example, the obsolescence of unit commitment, the decreased importance of frequency, and the increased viability of direct current. We motivate several research problems under this umbrella, including electricity markets without fuel costs, decentralized control of direct current systems, and machine learning for demand response.
Information Theoretic Security (ITS) was introduced by Claude Shannon in 1949. In Shannon's setting, the legitimate parties share a common secret key but communicate over a public noiseless channel, which can be wiretapped by an eavesdropper. Shannon's main result was to establish the minimum key rate necessary to guarantee ITS against the eavesdropper. Wyner introduced the wiretap channel in 1975, where the legitimate parties communicate over a (possibly) noisy channel, which could be wiretapped by an eavesdropper over another noisy channel. Wyner established the maximum communication rate in this setting, while guaranteeing ITS (in an asymptotic sense) against the eavesdropper.
In this talk we will review the above results and then introduce a new setting where a single (common) message must be transmitted to two receivers over a wiretap channel. In addition, we assume that the transmitter shares with each of the two receivers an independent secret key that is not known to the eavesdropper. We will explain how the coding techniques developed by Shannon and Wyner can be unified in this setting. By focusing on the "degraded" channel model, we will discuss conditions under which the following approaches are optimal: (i) using the secret keys as one-time pads and ignoring the contribution of the noisy channel; (ii) ignoring the secret keys and relying only on the noisy channel; (iii) hybrid schemes that combine both approaches.
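As a concrete illustration of approach (i), here is a minimal one-time-pad sketch; the key is generated locally for the example, whereas in the setting above it would be the secret key shared with a receiver.

```python
# Minimal sketch of approach (i): use the shared secret key as a one-time pad,
# ignoring the noisy channel entirely. Perfect secrecy holds when the key is
# uniform, used once, and at least as long as the message.
import secrets

message = b"common message"
key = secrets.token_bytes(len(message))               # stand-in for the shared secret key
cipher = bytes(m ^ k for m, k in zip(message, key))   # one-time pad encryption
plain = bytes(c ^ k for c, k in zip(cipher, key))     # receiver decrypts with the key
assert plain == message
```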
While multiple antennas provide a natural mechanism for securing wireless communications at a physical layer, both the fundamental limits and practical coding schemes for the Multi-Input-Multi-Output-Multi-Eavesdropper (MIMOME) channel have only been developed in the last few years.
We first discuss how to design a layered coding scheme for the MIMOME channel that achieves the secrecy capacity. Our scheme uses only codes for the scalar wiretap channel, together with successive interference cancellation at the receiver, as in traditional V-BLAST schemes. Our approach is based on simultaneous joint unitary triangularization of the channel matrices of the legitimate user and the eavesdropper. As a byproduct, it also provides a more transparent understanding of the structure of the optimal covariance matrix for the MIMOME channel.
In the second part of the talk we will consider the case where there are only a limited number of RF chains in the MIMOME system. We will discuss how artificial-noise-based secure MIMO schemes can be used in such systems, discuss the constraints on the beamforming vectors, and propose some novel solutions to these problems.
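A minimal sketch of the generic artificial-noise idea (not the specific RF-chain-limited schemes proposed in the talk): data is beamformed along the legitimate channel's dominant direction, and noise is injected in that channel's null space so that only the eavesdropper is degraded.

```python
# Minimal sketch (generic artificial-noise beamforming, illustrative channels):
# send the data symbol along the legitimate channel's dominant direction and
# inject artificial noise in that channel's null space.
import numpy as np

rng = np.random.default_rng(5)
Nt, Nr, Ne = 4, 2, 2
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))   # legitimate channel
G = rng.standard_normal((Ne, Nt)) + 1j * rng.standard_normal((Ne, Nt))   # eavesdropper channel

_, _, Vh = np.linalg.svd(H)
w = Vh[0].conj()                     # data beam: dominant right singular vector of H
Z = Vh[Nr:].conj().T                 # basis of the null space of H for artificial noise

s = 1.0 + 0j                         # one data symbol
an = Z @ (rng.standard_normal(Nt - Nr) + 1j * rng.standard_normal(Nt - Nr))
x = w * s + 0.5 * an                 # transmitted vector: data plus artificial noise

print("noise power at legitimate Rx:", round(float(np.linalg.norm(H @ (0.5 * an)) ** 2), 6))
print("noise power at eavesdropper :", round(float(np.linalg.norm(G @ (0.5 * an)) ** 2), 3))
```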
Driven by the Internet and the Web, an increasing amount of multimedia data is generated and shared by a variety of sources, including Internet of Things (IoT) and mobile devices. The enormous amount of available multimedia data has created new challenges for the management, effective discovery, and utilization of this data. Fortunately, the same drivers have also enabled and facilitated the generation of accompanying auxiliary descriptive information through social networks and crowdsourcing. This combination of large annotated datasets and high-performance computing resources has given rise to a new generation of data-driven algorithms. Deep convolutional neural networks have generated impressive results in multimedia signal processing problems such as image classification, face processing, and speech recognition. This talk will focus mainly on visual information processing and will present progress over roughly the last decade in feature-based algorithms and in data-driven deep learning algorithms, which have surpassed previous approaches and, in some cases, even human performance on these visual tasks.
The application space of robotics sparks the imagination and provides a daunting set of challenges for any product developer. What was once fiction is now in our homes, and where fear of rejection once dominated the thoughts of robotics visionaries, the amazing reality is that we are not delivering new products fast enough into a diverse and growing market. This talk will provide some background about the consumer robotics market and outline the challenges that robot product developers face by describing the architecture and elements of a modern robot product. It will touch on hardware, sensing, and processors, up through the many interacting layers of signal and information processing that breathe life into a consumer robot system.
By illustrating key signal processing and information processing challenges that arise in such an integrated system, the talk will provide insights and feedback from the trenches of product development to the signal and information processing community on technical enablers that can help developers address the growing market of consumer robot products worldwide.
This paper focuses on some applications of cognitive radars. Cognitive radars are systems based on a perception-action cycle: they sense the environment, learn from it important information about the target and its background, and then adapt the transmitted waveform to optimally satisfy the needs of their mission according to a desired goal. Both active and passive radars are considered, highlighting the limits and the path forward. In particular, we consider here cognitive active radars that work in spectrally dense environments and change the transmitted waveform on the fly to avoid interference with the primary users of the channel, such as broadcast or communication systems.
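A toy sketch of the waveform-adaptation step, assuming a simple threshold occupancy detector and spectral notching of the occupied bins (a stand-in for the optimized waveform designs considered here):

```python
# Minimal sketch (illustrative spectral notching only, not a full cognitive
# waveform design): detect occupied bands from the sensed spectrum and
# synthesize a transmit waveform whose spectrum avoids them.
import numpy as np

rng = np.random.default_rng(11)
N = 256
sensed = 0.1 * rng.random(N)
sensed[40:60] += 5.0                            # primary user occupies these bins
occupied = sensed > 1.0                         # crude threshold detector

spectrum = np.exp(2j * np.pi * rng.random(N))   # constant-modulus random-phase spectrum
spectrum[occupied] = 0.0                        # notch the occupied band
waveform = np.fft.ifft(spectrum)                # adapted transmit waveform

leak = np.abs(np.fft.fft(waveform))[occupied].max()
print("max leakage into occupied band:", float(leak))   # ~0 by construction
```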
We also describe cognitive passive radars, which, unlike their active counterparts, cannot directly change the transmitted waveform on the fly but can instead select the best source of opportunity to improve detection and tracking performance.
Distributed machine learning and large-scale optimization methods are starting to play an increasingly central role in wireless sensor networks, particularly in data-adaptive and data-driven contexts such as cognitive radio. In this work we present a review of state-of-the-art machine learning techniques used in sensor networks. In particular, we focus on distributed and decentralized machine learning and optimization methods for wireless sensor networks and cognitive radio devices. We also introduce a series of recent developments and applications of alternating direction method of multipliers (ADMM) approaches to decentralized machine learning problems that can potentially be used for related cognitive radio problems.
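As one concrete instance, the sketch below runs generic consensus ADMM on a decentralized least-squares problem; the splitting and problem data are illustrative assumptions, not a cognitive-radio-specific formulation.

```python
# Minimal sketch (generic consensus ADMM): N nodes each hold a slice of a
# least-squares problem and agree on a common estimate without centralizing
# the raw data.
import numpy as np

rng = np.random.default_rng(6)
N, n, m, rho = 5, 3, 20, 1.0
x_true = rng.standard_normal(n)
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [Ai @ x_true + 0.05 * rng.standard_normal(m) for Ai in A]

x = np.zeros((N, n)); u = np.zeros((N, n)); z = np.zeros(n)
for _ in range(100):
    for i in range(N):                                  # local primal updates
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] + rho * (z - u[i]))
    z = (x + u).mean(axis=0)                            # consensus (averaging) step
    u += x - z                                          # dual updates

print("true:     ", np.round(x_true, 3))
print("consensus:", np.round(z, 3))
```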
Observing and analyzing human brain function is a truly interdisciplinary endeavor combining engineering, neuroscience, and medicine. State-of-the-art technologies such as functional magnetic resonance imaging (fMRI) allow us to non-invasively acquire a sequence of whole-brain snapshots that indirectly measure neuronal activity. Recent "big data" initiatives (e.g., the Human Connectome Project) provide us with large datasets reflecting the complex structure of human brain activity. Advanced signal processing plays a major role in extracting meaningful and interpretable features. Here we present one such example to characterize the dynamics of resting-state fMRI. Using state-of-the-art sparsity-driven deconvolution [1,2], we extract innovation-driven co-activation patterns (iCAPs) from resting-state fMRI [3]. The iCAPs' maps are spatially overlapping, and their activity-inducing signals are temporally overlapping. Decomposing resting-state fMRI in terms of iCAPs reveals the rich spatiotemporal structure of functional components that dynamically assemble known resting-state networks. The temporal overlap between iCAPs is substantial, which confirms that crosstalk occurs at the fMRI timescale; on average, three to four iCAPs occur simultaneously in specific combinations that are consistent with their behaviour profiles according to BrainMap. Intriguingly, in contrast to conventional connectivity analysis, which suggests a negative correlation between fluctuations in the default-mode network (DMN) and task-positive networks, we instead find evidence for two DMN-related iCAPs comprising the posterior cingulate cortex that differentially interact with the attention network. These findings illustrate how conventional correlational approaches might be misleading in terms of how task-positive and task-negative networks interact, and suggest that more detailed, dynamical decompositions can give more accurate descriptions of the functional components of spontaneous activity.
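To give a flavor of the deconvolution step, here is a much-simplified l1/ISTA sketch (not the sparsity-driven total-activation method of [1,2]); the HRF shape, regularization, and spike signal are illustrative assumptions.

```python
# Much-simplified sketch: recover a sparse innovation signal from a noisy,
# HRF-convolved time course via l1-regularized deconvolution (plain ISTA).
import numpy as np

rng = np.random.default_rng(7)
T = 200
t = np.arange(0, 25, 1.0)
hrf = t ** 5 * np.exp(-t); hrf /= hrf.sum()             # crude haemodynamic response
I = np.eye(T)
H = np.column_stack([np.convolve(I[:, i], hrf)[:T] for i in range(T)])  # convolution matrix

s_true = np.zeros(T); s_true[[30, 90, 150]] = [1.0, -0.7, 1.2]
y = H @ s_true + 0.005 * rng.standard_normal(T)

lam, step = 0.01, 1.0 / np.linalg.norm(H, 2) ** 2
s = np.zeros(T)
for _ in range(1500):                                    # ISTA iterations
    g = s - step * H.T @ (H @ s - y)                     # gradient step on the fit term
    s = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft thresholding

print("nonzeros in estimate:", int(np.sum(np.abs(s) > 1e-3)))
print("relative fit error:  ", round(float(np.linalg.norm(H @ s - y) / np.linalg.norm(y)), 3))
```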
Data-driven methods such as independent component analysis (ICA) have proven quite effective for the analysis of functional magnetic resonance imaging (fMRI) data and for discovering associations between fMRI and other medical imaging data types such as electroencephalography (EEG) and structural MRI data. Without imposing strong modeling assumptions, these methods effectively take advantage of the multivariate nature of fMRI data and are particularly attractive for use in cognitive paradigms where detailed a priori models of brain activity are not available.
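A toy spatial-ICA sketch with scikit-learn's FastICA on synthetic data (the dimensions, mixing model, and noise level are illustrative assumptions; this is not an fMRI preprocessing pipeline):

```python
# Minimal sketch (toy spatial ICA): FastICA recovers synthetic sparse "spatial
# maps" from their temporal mixtures, up to order and sign.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
T, V, C = 120, 2000, 3                        # time points, voxels, components
S = rng.laplace(size=(C, V))                  # non-Gaussian spatial maps
M = rng.standard_normal((T, C))               # component time courses
X = M @ S + 0.1 * rng.standard_normal((T, V)) # data matrix: time x voxels

ica = FastICA(n_components=C, random_state=0)
maps = ica.fit_transform(X.T).T               # estimated spatial maps (C x voxels)
courses = ica.mixing_                         # estimated time courses (T x C)

# Check each true map against its best-matching estimate (sign-invariant).
corr = np.abs(np.corrcoef(np.vstack([S, maps]))[:C, C:])
print(np.round(corr.max(axis=1), 2))
```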
This talk reviews major data-driven methods that have been successfully applied to fMRI analysis, presents recent examples of their application to studying brain function, and addresses current challenges and prospects.
Adapting sparse image models to the data has been shown to improve image reconstruction in several imaging modalities. However, synthesis or analysis dictionary learning involves approximations of NP-hard sparse coding and expensive learning steps. Recently, sparsifying transform learning (STL) has received interest for its cheap and exact closed-form solutions to the iteration steps. We describe the evolution of this framework and several variations as applied to biomedical imaging, including online STL for dynamic and big data; learning a union-of-transforms model for greater representation power; and a filter bank STL that provides more degrees of freedom in modeling by acting on entire images rather than on patches.
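For intuition, the sketch below alternates two exact closed-form steps for an orthonormal-transform variant (hard-thresholding sparse coding and a Procrustes transform update); it illustrates the flavor of STL rather than any specific published algorithm, and the random data stands in for image patches.

```python
# Minimal sketch (orthonormal-transform variant with exact closed-form steps):
# alternate cheap hard-thresholding sparse coding with an SVD-based transform
# update on vectorized patches.
import numpy as np

rng = np.random.default_rng(8)
n, N, s = 64, 2000, 8                       # patch dim (8x8), patches, sparsity level
X = rng.standard_normal((n, N))             # stand-in for vectorized image patches

def sparse_code(Z, s):
    # Exact closed form: keep the s largest-magnitude coefficients per column.
    thresh = -np.sort(-np.abs(Z), axis=0)[s - 1]
    return np.where(np.abs(Z) >= thresh, Z, 0.0)

W = np.eye(n)                               # initial transform
for _ in range(20):
    Z = sparse_code(W @ X, s)               # cheap sparse-coding step
    U, _, Vt = np.linalg.svd(X @ Z.T)       # Procrustes update: min ||WX - Z||_F
    W = Vt.T @ U.T                          # over orthonormal transforms W

Z = sparse_code(W @ X, s)
print("relative sparsification error:",
      round(float(np.linalg.norm(W @ X - Z) / np.linalg.norm(W @ X)), 3))
```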
Classical work in computer vision has emphasized the study of individual objects, e.g., object recognition or tracking. More recently, it has been realized that most of these approaches do not scale well to scenes that depict crowded environments. These are scenes with many objects that are imaged at low resolution and interact in complex ways. Solving vision problems in these environments requires the ability to model and reason about a crowd as a whole. I will review recent work in my lab in this area, including the design of statistical models for the appearance and dynamics of crowd video with multiple flows, and their application to the solution of problems such as crowd counting, dynamic background subtraction, anomaly detection, domain adaptation, and crowd activity analysis.
With the explosive growth of information and communication, signals are generated at an unprecedented rate from a variety of sources, including social, citation, biological, and physical infrastructure networks, among others.
Unlike time-series signals or images, these signals possess a complex, irregular structure, which requires novel processing techniques and has led to the emerging field of signal processing on graphs.
Signal processing on graphs extends classical discrete signal processing to signals with an underlying complex, irregular structure. The framework models that underlying structure by a graph and the data by graph signals, generalizing concepts and tools from classical discrete signal processing to graph signal processing. I will talk about graph signal processing and, in particular, the classical signal processing task of sampling and interpolation within the framework of signal processing on graphs. As the bridge connecting sequences and functions, classical sampling theory shows that a bandlimited function can be perfectly recovered from its sampled sequence if the sampling rate is high enough. I will follow up with a number of applications where sampling on graphs is of interest.
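A minimal sketch of the graph analogue of this recovery result, assuming a bandlimited model on a random graph and a random sampling set rather than an optimized one:

```python
# Minimal sketch: a graph signal spanned by the first K Laplacian eigenvectors
# is recovered from a subset of node samples by least squares (exact when the
# sampled rows of the low-frequency basis have full column rank).
import numpy as np

rng = np.random.default_rng(9)
n, K = 20, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random undirected graph
L = np.diag(A.sum(1)) - A                            # combinatorial Laplacian

eigval, V = np.linalg.eigh(L)
VK = V[:, :K]                                        # low graph-frequency basis
x = VK @ rng.standard_normal(K)                      # bandlimited graph signal

sampled = rng.choice(n, size=2 * K, replace=False)   # sampled node set
coef, *_ = np.linalg.lstsq(VK[sampled], x[sampled], rcond=None)
x_hat = VK @ coef                                    # interpolated graph signal

print("max reconstruction error:", float(np.max(np.abs(x_hat - x))))
```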
The social good movement has taken root with many a corporation, entrepreneur, and big thinker, with the simple aim of using technology to help create a better world. Data analytics, signal processing, and related disciplines present one increasingly important way in which social good can be made possible, and new communities are growing around it, fueled in large part by the fact that we are no longer constrained by the availability of data. Internet activity, satellite imagery, social media, health records, news, scientific publications, economic data, weather data, and government records are all at our fingertips, giving us an unprecedented opportunity to change the world for the better using data science. From reducing or eliminating inequalities, to improving access to health care and education, to reducing pollution and our carbon footprint, the opportunities are endless. In this talk, Saška will give an overview of the emerging area of data science for social good. She will illustrate how the state-of-the-art signal processing toolkit (e.g., prediction, classification, optimization, visualization, NLP) is driving new social good applications, and will present a broad range of innovative examples of doing good with data. She will explore the interdisciplinary nature of social good projects, and highlight data and algorithmic challenges that might call for new research directions.
Non-commutativity arises in many places in statistical signal processing, including information fusion, graphical models, and distributed estimation. Any problem where the model or the processing lacks symmetry, permutation invariance, or revocable actions will exhibit non-commutativity. This talk will discuss several signal processing areas where non-commutativity is manifested, along with some challenges and opportunities.
Viewed through a statistical inference lens, many network analytics challenges boil down to (non-)parametric regression and classification, dimensionality reduction, or clustering. Adopting such a vantage point, this keynote presentation will put forth novel learning approaches for comprehensive situation awareness of cognitive radio (CR) networks, including spatio-temporal sensing via RF spectrum and channel gain cartography, flagging of network anomalies, prediction of network processes, and dynamic topology inference. Key emphasis will be placed on parsimonious models leveraging sparsity, low rank, or low-dimensional manifolds, attributes that are instrumental for complexity reduction.