Title: Geometric Priors of Feedforward ReLU Networks
Presented By: Francis Williams, NYU
Recent empirical and theoretical results have shown that feedforward ReLU networks trained with gradient descent are biased towards well-behaved functions. In this talk, I discuss implicit geometric priors of these networks in the context of computing a parametric surface from a point cloud. For this task, the remarkable ability of neural networks to generalize manifests as reconstructed surfaces that capture salient details of the ground-truth functions from which the input point clouds were sampled. Finally, I briefly discuss recent theoretical results highlighting the implicit regularization properties of neural networks. In particular, I show that for the 1D curve interpolation problem, the choice of initialization relates to minimizing certain norms of curvature.
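As a rough illustration of the 1D interpolation setting mentioned above, the sketch below fits a one-hidden-layer ReLU network to samples of a curve by full-batch gradient descent. This is not the speaker's method, just a minimal toy: all function names and hyperparameters (including the initialization `scale`, the kind of knob the theoretical results concern) are illustrative assumptions.

```python
import math
import random


def relu(z):
    return max(z, 0.0)


def train_relu_net(xs, ys, width=32, lr=0.02, steps=3000, scale=0.5, seed=0):
    """Fit f(x) = sum_j v_j * relu(w_j * x + b_j) + c to 1D points (xs, ys)
    by full-batch gradient descent on mean squared error.

    `scale` sets the magnitude of the random initialization; the talk's
    theory relates this choice to curvature norms of the learned curve.
    """
    rng = random.Random(seed)
    w = [rng.gauss(0.0, scale) for _ in range(width)]
    b = [rng.gauss(0.0, scale) for _ in range(width)]
    v = [rng.gauss(0.0, scale) for _ in range(width)]
    c = 0.0
    n = len(xs)

    def predict(x):
        return sum(v[j] * relu(w[j] * x + b[j]) for j in range(width)) + c

    def loss():
        return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / n

    for _ in range(steps):
        # Accumulate full-batch gradients of the mean squared error.
        gw = [0.0] * width
        gb = [0.0] * width
        gv = [0.0] * width
        gc = 0.0
        for x, y in zip(xs, ys):
            e = 2.0 * (predict(x) - y) / n
            gc += e
            for j in range(width):
                pre = w[j] * x + b[j]
                gv[j] += e * relu(pre)
                if pre > 0:  # ReLU passes gradient only on its active side
                    gw[j] += e * v[j] * x
                    gb[j] += e * v[j]
        for j in range(width):
            w[j] -= lr * gw[j]
            b[j] -= lr * gb[j]
            v[j] -= lr * gv[j]
        c -= lr * gc
    return predict, loss()


# Interpolate a few samples of a smooth curve on [-1, 1].
xs = [i / 4.0 for i in range(-4, 5)]
ys = [math.sin(math.pi * x) for x in xs]
f, final_loss = train_relu_net(xs, ys)
```

After training, `f` is a piecewise-linear interpolant of the samples; changing `scale` changes which such interpolant gradient descent finds.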
Francis is a PhD student at NYU advised by Daniele Panozzo and Claudio Silva.
**Lunch will be provided**
If you'd like to meet with Francis, please contact email@example.com