Speaker: Daniel Roy, University of Toronto
Title: Conditional Independence, Computability, and Measurability
A sub-community of machine learning, working in an area called probabilistic programming, is now using sampling programs as specifications for complex probability distributions over large collections of random variables, and is writing very general algorithms for computing conditional distributions. The efficiency of these algorithms generally relies on an abundance of conditional independences. In this work, we study the problem of representing the conditional independences that hold among these random variables, focusing on the setting of exchangeable sequences and arrays of random variables. A study of the computability of the representation theorems due to de Finetti, Aldous, and Hoover reveals that representing conditional independence can come at a steep cost, but also that a change of representation, to one allowing a small probability of error, lets us represent the conditional independences in these random structures.
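To illustrate the kind of representation de Finetti's theorem provides, here is a minimal sketch, not taken from the talk: an exchangeable Bernoulli sequence sampled as conditionally i.i.d. draws given a latent parameter. The function name and the choice of a uniform prior are illustrative assumptions.

```python
import random

def sample_exchangeable(n, rng=random):
    """Sample n exchangeable Bernoulli random variables in the style of
    de Finetti's representation: first draw a latent theta, then draw the
    variables i.i.d. Bernoulli(theta). Conditioned on theta, the variables
    are independent, which is the conditional independence structure the
    representation makes explicit."""
    theta = rng.random()  # latent "directing" parameter (uniform prior here)
    xs = [1 if rng.random() < theta else 0 for _ in range(n)]
    return theta, xs

theta, xs = sample_exchangeable(10)
```

Permuting the entries of `xs` leaves their joint distribution unchanged, which is what exchangeability means; the latent `theta` is the extra randomness whose conditioning restores independence.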
Joint work with Nate Ackerman, Jeremy Avigad, Cameron Freer and Jason Rute.