Math Seminar: Comparison of Markov chains via weak Poincaré inequalities with application to pseudo-marginal MCMC
Dr. Andi Wang, University of Bristol

Monday, October 24th, 2022
11:20 AM – BSB117
Also available on Zoom: https://tinyurl.com/9nrnveur 

Abstract: I will discuss the use of a certain class of functional inequalities known as weak Poincaré inequalities to bound convergence of Markov chains to equilibrium. We show that this enables the straightforward and transparent derivation of subgeometric convergence bounds. We will apply these to study pseudo-marginal methods for intractable likelihoods, which are subgeometric in many practical settings. We are then able to provide new insights into the practical use of pseudo-marginal algorithms, such as analysing the effect of averaging in Approximate Bayesian Computation (ABC) and studying the case of lognormal weights relevant to Particle Marginal Metropolis–Hastings (PMMH) for state space models. Joint work with Christophe Andrieu, Anthony Lee and Sam Power.
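
For readers unfamiliar with the pseudo-marginal construction, the following minimal Python sketch (illustrative only, not the speakers' code) shows the key mechanism: a Metropolis-Hastings chain in which the intractable likelihood is replaced by a non-negative unbiased estimate, with the current estimate stored and recycled until the next acceptance.

    import numpy as np

    rng = np.random.default_rng(0)

    def pseudo_marginal_mh(lik_hat, log_prior, theta0, n_iter, prop_sd=0.5):
        # lik_hat(theta): non-negative unbiased estimate of the intractable
        # likelihood (assumed almost surely positive here, e.g. an average
        # of ABC kernels or a particle filter estimate).
        theta, L = theta0, lik_hat(theta0)
        chain = [theta]
        for _ in range(n_iter):
            theta_new = theta + prop_sd * rng.standard_normal()
            L_new = lik_hat(theta_new)
            log_alpha = (np.log(L_new) - np.log(L)
                         + log_prior(theta_new) - log_prior(theta))
            if np.log(rng.random()) < log_alpha:
                theta, L = theta_new, L_new   # estimate recycled until next accept
            chain.append(theta)
        return np.array(chain)

Averaging more Monte Carlo samples inside lik_hat (as in ABC) reduces the variance of the estimates, which is the kind of trade-off the talk's bounds address.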

 

Math Seminar: The Clifford Monopole equations (joint work with N. Santana, E. Lopez and A. Quintero-Velez).
Dr. Rafael Herrera-Guzman, Centro de Investigación en Matemáticas, Guanajuato, Mexico

Friday, September 30th, 2022
11:20 AM
Available on Zoom: https://tinyurl.com/9nrnveur 

Abstract: The spin groups and the Clifford algebras have played a very important role in Differential Geometry and Physics. In the search for a unified spinorial approach to special Riemannian holonomy, we found a suitable notion of twisted pure spinor which generalizes the classical notion of pure spinor developed by Cartan. Along the way, we realized that parallel twisted pure spinors, besides satisfying the corresponding twisted Dirac equation, satisfy a curvature identity analogous to the second Seiberg-Witten equation in four dimensions. The Dirac equation and the curvature equation constitute what we call the Clifford monopole equations. We will describe the setup of these equations on manifolds of arbitrary dimension, show that they have solutions on certain spaces and that they restrict to the Seiberg-Witten equations, and sketch some aspects of the construction of the moduli space.
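
For orientation (standard background, not material from the talk): on a 4-manifold the Seiberg-Witten equations, to which the Clifford monopole equations restrict, read $D_A \Phi = 0$ and $F_A^+ = \sigma(\Phi)$, where $\Phi$ is a spinor, $D_A$ is the Dirac operator of a spin$^c$ connection $A$, $F_A^+$ is the self-dual part of its curvature, and $\sigma$ is a canonical quadratic map from spinors to self-dual 2-forms. The Clifford monopole equations keep this Dirac-plus-curvature shape in arbitrary dimension.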

 

Math Seminar: The bootstrap for dynamical systems
Dr. Buddhima Kasun Fernando, Scuola Normale Superiore di Pisa, Italy

Friday, September 23rd, 2022
11:20 AM
Available on Zoom: https://tinyurl.com/9nrnveur

Abstract: Despite their deterministic nature, chaotic dynamical systems often exhibit seemingly random behavior. Consequently, a dynamical system is usually represented by a probabilistic model whose unknown parameters must be estimated using statistical methods. When measuring the uncertainty of such parameter estimation, the bootstrap in statistics stands out as a simple but powerful technique. In this talk, I will introduce the bootstrap for dynamical systems and discuss its consistency and second-order efficiency using Edgeworth expansions. The talk is based on joint work with Nan Zou (Macquarie University) and will be accessible to anyone with a background in basic probability and statistics.
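
As a reminder of the basic technique, here is a generic Python sketch (for dependent observations from a dynamical system one would resample trajectories from a fitted model rather than i.i.d. points, as in the talk; this only shows the mechanism):

    import numpy as np

    rng = np.random.default_rng(1)

    def bootstrap_se(data, estimator, n_boot=1000):
        # Recompute the estimator on resampled data sets to gauge its
        # sampling variability.
        n = len(data)
        stats = np.array([estimator(data[rng.integers(0, n, size=n)])
                          for _ in range(n_boot)])
        return stats.std(ddof=1)

    x = rng.normal(size=200)
    print(bootstrap_se(x, np.mean))   # close to 1/sqrt(200), about 0.07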

Math Seminar: Understanding SARS-CoV-2 transmission in the early phases of the COVID-19 pandemic
Dr. Ilaria Dorigatti, Imperial College

Friday, July 22nd, 2022
12:00 PM
Available on Zoom: https://tinyurl.com/9nrnveur

Abstract: On 21st February 2020, the first Italian COVID-19 death was detected in the municipality of Vo’, a small town near Padua. At the time, the University of Padua conducted two sequential molecular swab surveys in the Vo’ population (February & March 2020), which were followed by three serological surveys (May & November 2020, and June 2021). In this talk, I will present the statistical and mathematical models developed in the early phases of the pandemic from the data collected in Vo’, to understand the epidemiology of SARS-CoV-2 (Lavezzo et al., Nature, 2020), quantify heterogeneities in transmission and the effectiveness of interventions (Dorigatti et al., Nature Communications, 2021), as well as antibody dynamics and neutralization reactivity in the absence and presence of vaccination (Lavezzo et al., Genome Medicine, 2022). I will also present the results of a recent analysis investigating the effects of different testing policies on variant emergence, where we show that surveillance using molecular testing is necessary to detect and reduce the transmission of an antigen-test-escaping variant which was detected in Veneto in 2020 (Del Vecchio et al., Research Square).

Math Seminar: Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness
Dr. Murat Erdogdu, University of Toronto

Friday, April 22nd, 2022
11:00 AM
Available on Zoom: https://tinyurl.com/9nrnveur

Abstract: We study sampling from a target distribution $e^{-f}$ using the Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like $|x|^\alpha$ for $\alpha \in [1,2]$ and whose gradient is $\beta$-Hölder continuous, we derive the sufficient number of steps to reach the $\varepsilon$-neighborhood of a $d$-dimensional target distribution as a function of $\alpha$ and $\beta$. Our result is the first convergence guarantee for LMC under a functional inequality interpolating between the Poincaré and log-Sobolev settings (also covering the edge cases).
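
Concretely, LMC is the Euler-Maruyama discretization of the Langevin diffusion $dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dW_t$; a minimal Python sketch (step size illustrative):

    import numpy as np

    rng = np.random.default_rng(2)

    def lmc(grad_f, x0, step, n_iter):
        # x <- x - step * grad f(x) + sqrt(2 * step) * standard Gaussian noise
        x = np.array(x0, dtype=float)
        for _ in range(n_iter):
            x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        return x

    # f(x) = |x|^2 / 2 (the alpha = 2 case): the target is the standard Gaussian.
    samples = np.array([lmc(lambda x: x, np.zeros(2), step=0.01, n_iter=2000)
                        for _ in range(500)])
    print(samples.std(axis=0))   # each coordinate close to 1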

 


Math Seminar: Convergence properties of shallow neural networks: implications and applications in scientific computing

Dr. Grant Rotskoff, Stanford University

Friday, April 1st, 2022
11:00 AM
BSB117

Abstract: The surprising flexibility and undeniable empirical success of machine learning algorithms have inspired many theoretical explanations for the efficacy of neural networks. Here, I will briefly introduce one perspective that provides not only asymptotic guarantees of trainability and accuracy in high-dimensional learning problems but also some prescriptions and design principles for learning. Bolstered by the favorable scaling of these algorithms in high-dimensional problems, I will turn to variational formulations of high-dimensional PDEs. From the perspective of an applied mathematician, these problems often appear hopeless; they are not only high-dimensional but also dominated by rare events. However, with neural networks in the toolkit, at least the dimensionality is somewhat less intimidating. I will describe an algorithm that combines stochastic gradient descent with importance sampling to optimize a function representation of the solution. Finally, I will provide numerical evidence of the power and limitations of this approach.
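
The talk's algorithm is not spelled out in the abstract; the toy Python sketch below only illustrates the generic pattern of stochastic gradient descent with importance-sampled, reweighted gradients (the distributions, objective and step sizes are our own illustrative choices):

    import numpy as np

    rng = np.random.default_rng(3)

    def normal_pdf(x, mu, sig):
        return np.exp(-(x - mu) ** 2 / (2 * sig ** 2)) / (sig * np.sqrt(2 * np.pi))

    # Minimize F(theta) = E_{x ~ p}[(theta - x)^2] with p = N(4, 1), while
    # drawing samples from a broader q = N(0, 2) and reweighting by p/q.
    theta = 0.0
    for t in range(20000):
        x = rng.normal(0.0, 2.0)
        w = normal_pdf(x, 4.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # importance weight
        theta -= 0.5 / (t + 100) * w * 2.0 * (theta - x)       # reweighted SGD step
    print(theta)   # close to the minimizer E_p[x] = 4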

Math Seminar: The Manifold Joys of Sampling
Dr. Santosh Vempala, Frederick G. Storey Chair of Computing and Professor, College of Computing, Georgia Tech

Friday, March 4th, 2022
11:00 AM
BSB 132

Abstract: Sampling high-dimensional sets and distributions is a fundamental problem with many applications. The state of the art is that arbitrary logconcave densities can be sampled to arbitrarily small error in time polynomial in the dimension using simple Markov chains based on Euclidean geometry. In this talk, we describe algorithms that exploit varying local geometry and can be viewed as sampling Riemannian manifolds. This approach lets us derive more efficient algorithms for some cases of interest, as well as analyze affine-invariant versions of Euclidean algorithms, such as the Dikin walk, Hamiltonian Monte Carlo and Riemannian Langevin.
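
As one concrete example of a chain adapted to local geometry, here is a Python sketch of the Dikin walk for the uniform distribution on a polytope $\{x : Ax \le b\}$ (radius parameter illustrative): proposals are Gaussian in the local norm given by the Hessian of the log-barrier, with a Metropolis correction for the state-dependent covariance.

    import numpy as np

    rng = np.random.default_rng(4)

    def barrier_hessian(x, A, b):
        # Hessian of the log-barrier of {x : Ax <= b} at an interior point x.
        s = b - A @ x                          # positive slacks
        return (A / s[:, None] ** 2).T @ A

    def dikin_step(x, A, b, r=0.4):
        d = x.size
        H = barrier_hessian(x, A, b)
        # Propose y ~ N(x, (r^2 / d) * H^{-1}) via a Cholesky solve.
        xi = rng.standard_normal(d)
        y = x + (r / np.sqrt(d)) * np.linalg.solve(np.linalg.cholesky(H).T, xi)
        if np.any(A @ y >= b):
            return x                           # proposal left the polytope
        Hy = barrier_hessian(y, A, b)
        # Metropolis ratio of the two state-dependent Gaussian proposal densities.
        log_ratio = (0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(H)[1])
                     - d / (2 * r ** 2) * ((x - y) @ Hy @ (x - y)
                                           - (y - x) @ H @ (y - x)))
        return y if np.log(rng.random()) < log_ratio else x

    # Uniform sampling of the square [-1, 1]^2.
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.ones(4)
    x = np.zeros(2)
    for _ in range(1000):
        x = dikin_step(x, A, b)
    print(x)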

Math Seminar: Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-concave Sampling
Dr. Yuansi Chen, Duke University, Department of Statistical Sciences

Friday, February 25th, 2022
11:00 AM
BSB 132

Abstract: We study the problem of using the Metropolis-adjusted Langevin algorithm (MALA) to sample from a log-smooth and strongly log-concave distribution in dimension $d$ with condition number $\kappa$. We establish its optimal minimax mixing time under a warm start. First, we demonstrate that MALA with a warm start mixes in $O(d^{1/2} \kappa)$ iterations up to logarithmic factors; this improves upon the previous work on the dependency of either the condition number $\kappa$ or the dimension $d$. Our proof relies on comparing the leapfrog integrator with the continuous Hamiltonian dynamics, where we establish a new concentration bound for the acceptance rate. Second, we provide an explicit mixing time lower bound for reversible MCMC algorithms on general state spaces. We use this result to show that MALA requires at least $\Omega(d^{1/2} \kappa)$ steps in the worst case, matching our upper bound in terms of both the condition number and the dimension.
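
For reference, one MALA iteration combines a Langevin proposal with a Metropolis accept/reject step; a minimal Python sketch (step size $h$ illustrative):

    import numpy as np

    rng = np.random.default_rng(5)

    def mala_step(x, log_pi, grad_log_pi, h):
        # Langevin proposal: y ~ N(x + h * grad log pi(x), 2h * I)
        y = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

        def log_q(frm, to):                    # log proposal density, up to a constant
            diff = to - (frm + h * grad_log_pi(frm))
            return -np.sum(diff ** 2) / (4 * h)

        log_alpha = log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)
        return y if np.log(rng.random()) < log_alpha else x

    # Standard Gaussian target: log pi(x) = -|x|^2 / 2.
    x = np.zeros(2)
    for _ in range(1000):
        x = mala_step(x, lambda z: -0.5 * np.sum(z ** 2), lambda z: -z, h=0.1)
    print(x)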

Math Seminar: Analysis of two-component Gibbs samplers using the theory of two projections
Dr. Qian Qin, University of Minnesota

Friday, February 18th, 2022
11:00 AM
BSB 132

Abstract: Gibbs samplers are a class of Markov chain Monte Carlo (MCMC) algorithms commonly used in statistics for sampling from intractable probability distributions. In this talk, I will demonstrate how Halmos’s (1969) theory of two projections can be applied to study Gibbs samplers with two components. I will first give an introduction to MCMC algorithms, particularly Gibbs algorithms. Then, I will explain how problems regarding the asymptotic variance and convergence rate of a two-component Gibbs sampler can be translated into simple linear algebraic problems through Halmos’s theory. In particular, a comparison is made between the deterministic-scan and random-scan versions of two-component Gibbs. It is found that in terms of asymptotic variance, the random-scan version is more robust than the deterministic-scan version, provided that the selection probability is appropriately chosen. On the other hand, the deterministic-scan version has a faster convergence rate. These results suggest that one may use the deterministic-scan version in the burn-in stage, and switch to the random-scan version in the estimation stage.
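
To fix ideas, here are the two scan orders for a toy bivariate Gaussian target with correlation rho (a Python sketch; the parameter p below plays the role of the tunable selection probability discussed in the talk):

    import numpy as np

    rng = np.random.default_rng(6)
    rho = 0.9

    def draw_conditional(other):
        # Conditional of one coordinate given the other for a standard
        # bivariate Gaussian with correlation rho.
        return rho * other + np.sqrt(1 - rho ** 2) * rng.standard_normal()

    def deterministic_scan(n):
        x = y = 0.0
        out = np.empty((n, 2))
        for t in range(n):
            x = draw_conditional(y)            # always update x first, then y
            y = draw_conditional(x)
            out[t] = x, y
        return out

    def random_scan(n, p=0.5):
        x = y = 0.0
        out = np.empty((n, 2))
        for t in range(n):
            if rng.random() < p:               # update one coordinate at random
                x = draw_conditional(y)
            else:
                y = draw_conditional(x)
            out[t] = x, y
        return out

    print(deterministic_scan(10000).mean(axis=0))   # both means close to 0
    print(random_scan(10000).mean(axis=0))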

Math Seminar: Unbiased Multilevel Monte Carlo methods for intractable distributions: MLMC meets MCMC
Dr. Guanyang Wang, Rutgers University, Department of Statistics

Friday, February 11th, 2022
11:00 AM
BSB 132

Abstract: Constructing unbiased estimators from MCMC outputs has recently attracted much attention in the statistics and machine learning communities. However, the existing unbiased MCMC framework only works when the quantity of interest is an expectation with respect to a certain probability distribution. In this work, we propose unbiased estimators for functions of expectations. Our idea is based on combining the unbiased MCMC and MLMC methods. We prove that our estimator has finite variance and finite computational complexity, and achieves ε-accuracy with O(1/ε²) computational cost under mild conditions. We also illustrate our estimator on several numerical examples. This is joint work with Tianze Wang.
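
The abstract does not spell out the construction, but the flavor can be conveyed by a standard single-term randomized MLMC estimator of g(E[X]) for a nonlinear g (a Python sketch under our own illustrative choices, with i.i.d. draws standing in for MCMC output):

    import numpy as np

    rng = np.random.default_rng(7)

    def unbiased_g_of_mean(sample, g, max_level=12, r=0.7):
        # Telescoping: g(E[X]) = E[g(X_1)] + sum_n E[Delta_n], where Delta_n
        # couples g of the mean of 2^n draws with g of the two half-means.
        # Drawing the level at random and dividing by its probability gives
        # a single-term estimator, unbiased up to the tiny truncation bias
        # at max_level.
        est = g(sample(1).mean())
        p = (1 - r) * r ** np.arange(max_level)
        p /= p.sum()
        n = rng.choice(max_level, p=p) + 1     # level in {1, ..., max_level}
        x = sample(2 ** n)
        half = 2 ** (n - 1)
        delta = g(x.mean()) - 0.5 * (g(x[:half].mean()) + g(x[half:].mean()))
        return est + delta / p[n - 1]

    # Estimate g(E[X]) = (E[X])^2 = 9 for X ~ N(3, 1).
    draws = [unbiased_g_of_mean(lambda m: rng.normal(3.0, 1.0, m), lambda s: s ** 2)
             for _ in range(20000)]
    print(np.mean(draws))   # close to 9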

Math Seminar: Quantitative convergence analysis of hypocoercive sampling dynamics
Dr. Lihan Wang, Carnegie Mellon University

Friday, February 4th, 2022
11:00 AM
BSB 132

Abstract: In this talk, we will discuss some advances on the quantitative analysis of convergence of hypocoercive sampling dynamics, including underdamped Langevin dynamics, randomized Hamiltonian Monte Carlo, the zigzag process and the bouncy particle sampler. The analysis is based on a variational framework for hypocoercivity which combines a Poincaré-type inequality in the time-augmented state space and an $L^2$ energy estimate. Joint work with Yu Cao (NYU) and Jianfeng Lu (Duke).
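
As a concrete example of such a dynamic, underdamped Langevin injects noise only through the velocity, so the position coordinate mixes hypocoercively rather than by direct diffusion; an Euler-type Python sketch (friction and step size illustrative):

    import numpy as np

    rng = np.random.default_rng(8)

    def underdamped_langevin(grad_f, x0, gamma=1.0, step=0.01, n_iter=10000):
        # dx = v dt,  dv = -grad f(x) dt - gamma * v dt + sqrt(2 * gamma) dW
        x = np.array(x0, dtype=float)
        v = np.zeros_like(x)
        traj = np.empty((n_iter, x.size))
        for t in range(n_iter):
            v += (-step * grad_f(x) - step * gamma * v
                  + np.sqrt(2 * gamma * step) * rng.standard_normal(x.shape))
            x += step * v
            traj[t] = x
        return traj

    traj = underdamped_langevin(lambda z: z, np.zeros(1))  # f(x) = x^2 / 2
    print(traj[2000:].std())   # close to 1 for the standard Gaussian target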