Pathfinder: Quasi-Newton Variational Inference

Event date:
28/10/2022

On Friday, 28 October 2022, at 4:30 p.m., in the Conference Room on the first floor of Building D, Department of Economic, Business, Mathematical and Statistical Sciences (DEAMS), on the Piazzale Europa university campus, the seminar "Pathfinder: Quasi-Newton Variational Inference" will take place.

The speaker will be Prof. Bob Carpenter, Flatiron Institute, Center for Computational Mathematics, New York.

The event is organized by Prof. Leonardo Egidi of the Department of Economic, Business, Mathematical and Statistical Sciences “Bruno de Finetti” of the University of Trieste.

Abstract: I will introduce the Pathfinder variational inference algorithm, which was motivated by the problem of finding good initializations for Markov chain Monte Carlo (i.e., solving the "burn-in" problem). It works by running quasi-Newton optimization (specifically, L-BFGS) on the target posterior (not on a stochastic ELBO, as in other black-box variational inference algorithms). At each iteration of the optimization, Pathfinder defines a variational approximation to the posterior in the form of a multivariate normal distribution whose covariance is the low-rank-plus-diagonal inverse Hessian estimate from the optimizer. It then selects the approximation with the lowest estimated KL divergence to the true posterior.
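To make the single-path procedure concrete, here is a minimal Python sketch on a toy 2-D Gaussian target. All names (log_p, pathfinder_single) are illustrative rather than taken from the authors' implementation, and for brevity the covariance is a dense finite-difference inverse Hessian instead of the low-rank-plus-diagonal L-BFGS estimate described above:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-D Gaussian posterior (precision matrix A); illustrative only.
A = np.array([[2.0, 0.9],
              [0.9, 1.0]])

def log_p(theta):
    return -0.5 * theta @ A @ theta  # unnormalized log posterior

def grad_log_p(theta):
    return -A @ theta

def pathfinder_single(theta0, n_elbo_draws=30, seed=0):
    """Simplified single-path Pathfinder: follow the L-BFGS path on the
    target log density, build a Gaussian approximation at each iterate,
    and keep the one with the highest Monte Carlo ELBO (i.e., lowest
    estimated KL divergence to the posterior)."""
    rng = np.random.default_rng(seed)
    path = [np.asarray(theta0, dtype=float)]
    minimize(lambda t: -log_p(t), theta0, jac=lambda t: -grad_log_p(t),
             method="L-BFGS-B", callback=lambda t: path.append(t.copy()))

    best = None
    for mu in path:
        # Finite-difference Hessian of -log_p at mu (dense; the paper
        # instead uses the low-rank-plus-diagonal L-BFGS estimate).
        eps, d = 1e-4, len(mu)
        H = np.empty((d, d))
        for j in range(d):
            e = np.zeros(d); e[j] = eps
            H[:, j] = (grad_log_p(mu - e) - grad_log_p(mu + e)) / (2 * eps)
        try:
            cov = np.linalg.inv(H)
            L = np.linalg.cholesky(cov)   # requires positive definite H
        except np.linalg.LinAlgError:
            continue                      # skip non-convex iterates
        # Monte Carlo ELBO: E_q[log p(theta)] + entropy of q.
        z = rng.standard_normal((n_elbo_draws, d))
        draws = mu + z @ L.T
        entropy = 0.5 * d * (1 + np.log(2 * np.pi)) + np.log(np.diag(L)).sum()
        elbo = np.mean([log_p(th) for th in draws]) + entropy
        if best is None or elbo > best[0]:
            best = (elbo, mu, cov)
    return best  # (elbo, mean, covariance) of the selected approximation

elbo, mu, cov = pathfinder_single(np.array([3.0, -2.0]))
print("selected mean:", mu)  # should land near the posterior mode (0, 0)
```

Even in this simplification the key structure is visible: candidate approximations come essentially for free along the optimization path, and the ELBO is used only to pick among them.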
Multi-path Pathfinder runs multiple instances of Pathfinder in parallel and then uses importance resampling to produce a final set of draws. The single-path algorithm provides much better approximations (measured by Wasserstein distance or KL divergence) than previous state-of-the-art mean-field or full-rank black-box variational inference schemes, and the multi-path algorithm is better still for posteriors with multiple modes or complex geometry. The computational bottleneck is evaluating the KL divergence through the evidence lower bound (ELBO), but this step is embarrassingly parallelizable. Even without parallelization, Pathfinder is one to three orders of magnitude faster than state-of-the-art black-box variational inference or than using the no-U-turn Hamiltonian Monte Carlo sampler for warmup. It is also much more robust. We will show the results of evaluations on dozens of models from the posteriordb test suite as well as on a range of high-dimensional and multimodal problems. This is joint work with Lu Zhang (first author, who did most of the hard work), Aki Vehtari, and Andrew Gelman.
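The importance-resampling step of multi-path Pathfinder can likewise be sketched in a few lines. This assumes the single-path runs have already returned Gaussian approximations (here hard-coded to sit on the two modes of a toy bimodal target); the names and the plain multinomial resampling are illustrative simplifications of the paper's scheme:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

def log_p(theta):
    # Toy bimodal target: equal mixture of two unit Gaussians.
    a = multivariate_normal.logpdf(theta, mean=[-3, 0])
    b = multivariate_normal.logpdf(theta, mean=[3, 0])
    return np.logaddexp(a, b) - np.log(2)

# Pretend two single-path Pathfinder runs each found one mode.
mus  = [np.array([-3.0, 0.0]), np.array([3.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
n_per_path = 500

draws, log_q = [], []
for mu, cov in zip(mus, covs):
    d = rng.multivariate_normal(mu, cov, size=n_per_path)
    draws.append(d)
    log_q.append(multivariate_normal.logpdf(d, mean=mu, cov=cov))
draws = np.vstack(draws)
log_q = np.concatenate(log_q)

# Importance resampling: weight each pooled draw by p/q, then resample.
log_w = np.array([log_p(t) for t in draws]) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()
idx = rng.choice(len(draws), size=400, replace=True, p=w)
final_draws = draws[idx]          # final approximate posterior sample
print(final_draws.mean(axis=0))  # near (0, 0) when both modes are covered
```

Reweighting the pooled draws by p/q before resampling is what lets the combined sample cover modes that any single optimization run would miss.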

The seminar can also be followed online via Microsoft Teams: Click here to join the meeting

Meeting ID: 363 706 022 306
Passcode: CeJuSn
