Schedule

8:00am

Registration opens

8:00am - 9:00am

Coffee & Light Breakfast

9:05am  

Welcome to BayLearn 2017! (Slides)
Jerremy Holland

9:15am

Keynote 1

Chair: Jerremy Holland

Technology meets Neuroscience - A Vision of the Future of Brain Optimization
Adam Gazzaley, UCSF & Neuroscape

10:00am  

Session 1

Session Chair: Jean-Francois Paiement

Context-aware Captions from Context-agnostic Supervision
Gal Chechik; Samy Bengio; Kevin Murphy; Devi Parikh; Ramakrishna Vedantam
Deep Reinforcement Learning of Bipedal Walking with Structured Locomotor Nets
Mario Srouji; Jian Zhang; Emilio Parisotto; Ruslan Salakhutdinov
Unsupervised deep clustering for semantic object retrieval
Steven Hickson; Rahul Sukthankar; Irfan Essa; Anelia Angelova

10:45am - 11:10am

Break

11:15am  

Keynote 2

Chair: David Grangier

Lessons from powering Facebook experiences at scale with AI
Joaquin Quiñonero Candela, Facebook

12:00pm  

Session 2

Session Chair: David Grangier

A deep generative model for gene expression profiles from single-cell RNA sequencing
Romain Lopez; Jeffrey Regier; Michael Jordan; Nir Yosef
Certified Defences against Adversarial Examples
Aditi Raghunathan; Jacob Steinhardt; Percy Liang

12:30pm - 1:50pm

Poster Session (Best 8) and Lunch

2:00pm

Keynote 3

Chair: Mohak Shah

Neural Map: Structured Memory for Deep Reinforcement Learning
Ruslan Salakhutdinov, Apple

2:45pm

Session 3

Session Chair: Mohak Shah

Large-Scale Quadratically Constrained Quadratic Program via Low-Discrepancy Sequences
Kinjal Basu; Ankan Saha; Shaunak Chatterjee
Model compression as constrained optimization, with application to neural nets
Miguel Carreira-Perpinan; Yerlan Idelbayev

3:15pm - 3:40pm

Break

3:45pm    

Keynote 4

Chair: Alexey Pozdnukhov

Defense Against the Dark Arts: Making Machine Learning Robust to Adversarial Examples
Ian Goodfellow, Google

4:45pm

Main Poster Session, Prizes, Beer Bash, Food and Refreshments

7:00pm

End of the Symposium

Poster contributions

Speculate-Correct Error Bounds for Local Classifiers
Eric Bax*, Verizon
 
maaGMA: Optimizing a Multi-Task Generator Against Several Discriminator Networks
Sahil Chopra*, Stanford University; Ryan Holmdahl, Stanford University
 
Analyzing global urbanization with remote-sensing data and generative adversarial networks
Adrian Albert*, MIT; Emanuele Strano, Massachusetts Institute of Technology; Marta Gonzalez, MIT
 
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
Fabian Pedregosa*, UC Berkeley
 
Learning Supervised Binary Hashing without Binary Code Optimization
Miguel Carreira-Perpinan*, UC Merced; Ramin Raziperchikolaei, UC Merced
 
Fusing Side Information for Transfer Learning
Yao-Hung Tsai*, Carnegie Mellon University; Ruslan Salakhutdinov, Carnegie Mellon University
 
A Relaxation Perspective on Policy Optimization
Daniel Levy*, Stanford University; Stefano Ermon, Stanford University
 
Iterative Refinement for Machine Translation
Roman Novak*, Google; Michael Auli, Facebook; David Grangier, Facebook
 
Stochastic Gradient Descent: Going As Fast As Possible But Not Faster
Alice Schoenauer-Sebag*, UCSF; Marc Schoenauer, INRIA; Michèle Sebag, CNRS
 
Deep Character-Level Click-Through Rate Prediction for Sponsored Search
Amin Mantrach*, Criteo; Bora Edizel, UPF; Xiao Bai, Yahoo
 
Off-Policy Actor-Critic with Function Approximation for Bidding in Computational Advertising
Hamid Maei*, Criteo
 
Learning with Abandonment
Sven Schmit*, Stanford University; Ramesh Johari, Stanford University
 
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot
Stanislav Fort*, Stanford University
 
Adversarial Spheres: Exploring Adversarial Examples on a Simple Dataset
Justin Gilmer*, Google Brain; Fartash Faghri, University of Toronto; Luke Metz, Google Brain;
Maithra Raghu, Google Brain / Cornell; Ian Goodfellow, Google Brain
 
Learning Dialog Policy in End-to-End Task-Oriented Neural Dialog Models
Bing Liu*, Carnegie Mellon University; Ian Lane, Carnegie Mellon University
 
Causal Generative Neural Networks
Isabelle Guyon*, UPSud, INRIA, University Paris-Saclay and ChaLearn
 
GCN-LSTM Framework For Real-Time Macroscopic Traffic Congestion Prediction
Sudatta Mohanty*, UC Berkeley; Alexei Pozdnukhov, Sidewalk Labs
 
Active Learning for Deep Convolutional Neural Networks using Dropout
Armin Kappeler*, Oath
 
The Effects of Memory Replay in Reinforcement Learning
Ruishan Liu*, Stanford University; James Zou, Microsoft
 
Active Learning for Training Deep Neural Networks
Tai-Peng Tian*, Apple Inc.; Wenda Wang, Apple Inc.; Yin Zhou, Apple Inc.; Oncel Tuzel, Apple Inc.
 
Deep Simultaneous Localization and Mapping
Emilio Parisotto, Carnegie Mellon University; Devendra Singh Chaplot, Carnegie Mellon University; 
Jian Zhang, Apple Inc.; Ruslan Salakhutdinov*, Carnegie Mellon University
 
Neural Program Synthesis with Policy Gradient
Daniel Abolafia*, Google Brain; Quoc Le, Google Brain; Mohammad Norouzi, Google
 
Learning from Simulated and Unsupervised Images through Adversarial Training
Ashish Shrivastava, Apple; Tomas Pfister, Apple; Oncel Tuzel*, Apple; Josh Susskind, Apple;
Wenda Wang, Apple Inc.; Russ Webb, Apple
 
Multi-Objective Optimization for Dynamic Pricing in the On-Demand Economy
Aayush Gupta*, Saratoga High School
 
Why adaptively collected data have negative bias and how to correct for it.
Xinkun Nie*, Stanford University; Xiaoying Tian, Stanford University;
Jonathan Taylor, Stanford University; James Zou, Stanford University
 
Verifying Properties of Binarized Neural Networks
Shiva Kasiviswanathan, Samsung Research; Nina Narodytska*, VMware;
Leonid Ryzhyk, VMware; Mooly Sagiv, VMware; Toby Walsh, UNSW
 
Deep Lattice Networks and Partial Monotonic Functions
Maya Gupta*, Google
 
Deep Multiple Instance Feature Learning via Variational Autoencoder
Nanxiang Li*; Shabnam Ghaffarzadegan
 
Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu*, Google Brain / Cornell
 
Towards a cosmology emulator using Generative Adversarial Networks
Mustafa Mustafa*, Berkeley Lab
 
Deep Gaussian Processes and Deep Neural Networks
Jaehoon Lee*, Google Brain; Yasaman Bahri, Google Brain
 
Improving transfer using augmented feedback in Progressive Neural Networks
Deepika Bablani*, Carnegie Mellon University; Parth Chadha, CMU
 
Simplicity and Generalization in Deep Neural Networks
Roman Novak*, Google; Jascha Sohl-Dickstein, Google Brain; Dan Abolafia, Google Brain;
Jeffrey Pennington, Google Brain; Yasaman Bahri, Google Brain
 
TransFlow: Unsupervised Motion Flow by Joint Geometric and Pixel-level Estimation
Luca Rigazio*, Panasonic Silicon Valley Laboratory; Stefano Alletto, Unimore

Submission abstracts