Speaker: Dami Choi
Supervisor: David Duvenaud, Mentor: Geoff Roeder
Talk title: Common Cause Model for Fast Conditional Image Generation
Generating high-resolution, photo-realistic, and diverse images is a difficult task, especially when modeling images spanning many labels, as in ImageNet with its 1000 classes. Plug and Play Generative Networks (PPGNs) can generate such high-quality images class-conditionally, thanks to a learned prior and a carefully designed generator. However, sampling from the Plug and Play model is very slow because the gradient has to be back-propagated through both the generator and the classifier. This presentation will propose a new framework, based on the common-cause model, that makes sampling much faster than Plug and Play.
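To illustrate why that sampling step is costly, here is a minimal sketch (not the PPGN implementation) of conditional sampling by gradient ascent in a generator's latent space. The generator, classifier, and objective below are hypothetical toy linear maps, chosen so the chain rule through the generator is explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 3))   # hypothetical generator: G(z) = A z
W = rng.normal(size=(4, 6))   # hypothetical classifier: logits = W G(z)
target = 2                    # class label to condition on

def objective(z):
    # Surrogate for log p(target | G(z)) (the target logit) plus a
    # Gaussian log-prior on the latent code z.
    return W[target] @ (A @ z) - 0.5 * z @ z

z = rng.normal(size=3)
lr = 0.1
before = objective(z)
for _ in range(100):
    # Chain rule: d(objective)/dz = A.T @ W[target] - z. This is the
    # "back-propagate through the generator and the classifier" step
    # that makes each PPGN sampling iteration expensive for deep models.
    grad = A.T @ W[target] - z
    z = z + lr * grad
after = objective(z)
```

In a real PPGN both maps are deep networks, so every sampling step pays a full backward pass through each of them; avoiding that per-step cost is the motivation for a faster sampling scheme.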
Speaker: Weijie Xu
Supervisor: Roger Grosse, Mentor: James Lucas
Talk title: Defending Against Adversarial Inputs
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x′ that is similar to x but classified as t. This makes it difficult to apply neural networks in many areas. Recently, many methods have been proposed to generate adversarial examples, such as the Fast Gradient Sign Method (FGSM), the Jacobian Saliency Map Approach (JSMA), and the Basic Iterative Method.
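As a concrete illustration of the simplest of these attacks, the following sketch applies the FGSM perturbation x′ = x + ε · sign(∇ₓ loss) to a toy linear softmax classifier (the weights and dimensions here are made up for illustration; the gradient is written analytically rather than via autodiff):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy classifier: 3 classes, 4 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_input_grad(x, y):
    """Cross-entropy loss and its gradient w.r.t. the input x.

    For logits z = W x + b, d(loss)/dx = W.T @ (softmax(z) - onehot(y)).
    """
    p = softmax(W @ x + b)
    onehot = np.eye(3)[y]
    return -np.log(p[y]), W.T @ (p - onehot)

x = rng.normal(size=4)
y = 0
eps = 0.5
loss, grad_x = loss_and_input_grad(x, y)
x_adv = x + eps * np.sign(grad_x)        # FGSM: one signed gradient step
loss_adv, _ = loss_and_input_grad(x_adv, y)
# The perturbed input increases the loss on the true label y.
```

The Basic Iterative Method repeats this signed step with a small ε, clipping after each iteration; JSMA instead perturbs the few input coordinates with the largest saliency.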
When training neural networks, it is often desirable to keep the network’s Jacobian small in order to improve generalization and robustness to adversarial inputs. Recent research attempts to improve adversarial robustness through stochastic Jacobian-norm approximation. In this project, we plan to use Jacobian regularization to defend against adversarial examples generated by different attack methods. In addition, since the adversarial examples generated by the Carlini & Wagner method are effective against most existing defenses, we hope to modify Jacobian regularization to defend against this attack as well.
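The stochastic approximation mentioned above rests on the identity E[‖J v‖²] = ‖J‖_F² for v ~ N(0, I), which lets a few Jacobian-vector products estimate the regularizer without forming J explicitly. A minimal sketch, using a linear map (whose Jacobian is just its weight matrix) so the estimate can be checked against the exact value:

```python
import numpy as np

rng = np.random.default_rng(1)

# For a linear model f(x) = W x the Jacobian is W itself, which makes the
# stochastic Frobenius-norm estimate easy to verify.
W = rng.normal(size=(5, 8))

def jvp(v):
    # Jacobian-vector product; in a real network this would be one
    # forward-mode (or double-backward) pass instead of a matrix product.
    return W @ v

n_samples = 20000
estimates = []
for _ in range(n_samples):
    v = rng.normal(size=8)          # v ~ N(0, I)
    estimates.append(np.sum(jvp(v) ** 2))

frob_sq_estimate = np.mean(estimates)   # Monte Carlo estimate of ||J||_F^2
frob_sq_exact = np.sum(W ** 2)          # exact value, for comparison
```

During training, one or two such random projections per minibatch are typically enough, since the estimator's noise averages out across steps.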
Speaker: Jiaoquan Jeff Chen
Supervisor: Marsha Chechik, Mentor: Ramy Shahin
Talk title: Empirical Evaluation of Lifting DSLTrans Transformation
Model Driven Engineering (MDE) is a methodology that improves development productivity by using models to develop software at a higher level of abstraction. Model transformations transform one or more source models according to a set of specified transformation rules; they increase productivity by enabling the automation of various engineering tasks. Software Product Line Engineering (SPLE) is used to manage variability in software products by allowing developers to define and maintain sets of related models. SPLE techniques are widely used in industry to reduce the complexity of product portfolios. Difficulties arise, however, when applying model transformations to product lines. In order to make such transformations applicable to an entire product line in a single step, we modify them using a technique called lifting. DSLTrans is a graphical, Turing-incomplete model transformation language notable for guaranteeing the properties of termination and confluence for all transformations. In this project, we will empirically evaluate the lifting of DSLTrans transformations, in order to gather evidence on the viability of lifted DSLTrans transformations for practical use.