Boqing Gong
(bgong@google.com)
Label-efficient learning of visual models Google Research
● Current research
○ Making object classifiers & detectors robust against natural corruptions and
out-of-domain datasets
Related publications:
○ Efficient video recognition models http://boqinggong.info/publications.html
○ Visual relationship detection
○ Domain adaptation, multi-task/transfer learning, neural checkpoint ranking
○ Long-horizon, large-scale meta-learning
● Areas of interest
○ Adversarial and real-world robustness
○ Domain adaptation and multi-task/transfer learning
○ Vision + language
Googler name: Ofir Nachum
Research Topic: Reinforcement Learning, with a focus on how we can
use existing experience datasets to accelerate learning.
Areas of interest:
● All things RL, especially sub-topics mentioned above.
● My ideal result is finding methods and algorithms that are
theoretically grounded as well as have practical impact.
Alireza Fathi
3D Scene Understanding (alirezafathi@google.com)
Google Research
Current research:
● RL: theoretically grounded and efficient agents
● AL: classic imitation learning, and less standard settings (e.g., inverse RL from suboptimal
but improving demonstrations, exploration from demonstrations)
● Game theory, esp. Mean Field Games
● Field robotics (e.g., navigation in natural environments)
Areas of interest:
● RL, AL, Game theory
● Theoretically sound approaches leading to practical and efficient agents
● Practical applications (esp. field robotics)
Ibrahim Alabdulmohsin
ibomohsin@
Cross-Architecture Transfer Google Brain, Zürich
● Why?
○ Accelerate experimentation, architecture sweep, distillation, etc.
○ Improve understanding of how deep neural networks work.
● Examples of Evidence: Distillation and pretraining with random labels
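Distillation across architectures can be sketched as a soft-label matching loss. The following is a minimal illustrative example, not any particular internal setup: a student of one architecture is trained to match the temperature-softened outputs of a teacher of another.

```python
import math

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Illustrative sketch only: pure-Python, single-example version."""
    def softmax(logits, t):
        m = max(l / t for l in logits)          # subtract max for stability
        exps = [math.exp(l / t - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]
    p = softmax(teacher_logits, temperature)    # teacher soft labels
    q = softmax(student_logits, temperature)    # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the student exactly reproduces the teacher's soft labels and grows as the two distributions diverge; the temperature controls how much of the teacher's "dark knowledge" in the non-argmax classes is exposed.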
Efi Kokiopoulou
Robust deep classification under noisy labels (kokiopou@)
Google Research
Current research:
● Train deep classifiers to be robust against
input-dependent label noise
● Take class correlations in label noise into account
● Add domain-knowledge to the noise model
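One common way to encode class correlations in label noise, shown here purely as an illustrative sketch (not necessarily this team's method), is a forward loss correction with a noise transition matrix `T`, where `T[i][j] = P(observed label j | true label i)`:

```python
import math

def forward_corrected_nll(probs, noisy_label, T):
    """Negative log-likelihood of the *observed* (noisy) label under the
    clean-label prediction `probs`, pushed through the noise model T.
    Off-diagonal entries of T encode class-correlated label noise."""
    # q[j] = sum_i T[i][j] * probs[i]: predicted distribution over noisy labels
    q = sum(T[i][noisy_label] * probs[i] for i in range(len(probs)))
    return -math.log(q + 1e-12)
```

With `T` the identity matrix this reduces to ordinary cross-entropy; input-dependent label noise would make `T` a function of the input rather than a fixed matrix.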
Example: teleoperation
Related work:
Pensieve, Pyramid Snapshot Challenge,
Zero-Shot Learning - Chris Piech, Stanford
iSnap, SourceCheck - Thomas Price, NC State
Dr. Scratch - Jesús Moreno-León, URJC
Googler name: Caroline Pantofaru (cpantofaru@) & Michael Nechyba* (mnechyba@)
Research Topic: People-centric perception
Areas of interest:
● Fairness (see next slide as well)
● Modeling - detection, tracking, diarization, pose, etc.
● Context: people in video, media, robotics/ambient and HCI
● Synthetic data augmentation
● Metrics
Googler name: Susanna Ricco* (ricco@), Caroline Pantofaru (cpantofaru@)
Research Topic: Computer Vision - Fairness
Current research:
● Understanding bias propagation under partial/weak supervision or distillation.
● Learning approaches to mitigate bias propagation.
Areas of interest:
● Partial / weak supervision
● Dataset design
● Fairness
○ Metrics (including beyond group fairness)
○ Interventions (including beyond classifiers)
AutoML for MIP (mixed integer programming) Pawel Lichocki
(pawell@)
Operations Research
Context
1/ There are many MIP heuristics, e.g., randomized rounding, feasibility pump,
pivot-and-shift, fix-and-propagate.
2/ The heuristics iterate over LP-feasible or integral "solutions" in the hope of
stumbling upon a solution that is both LP-feasible and integral, e.g.:
for i = 1..N
  round(x[i])
  if Frac(x[i])
    propagate(i)
pump(x)
...
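A rounding heuristic of this kind can be sketched as runnable code. This is a minimal illustration with hypothetical names and a hypothetical constraint format `([coeffs], bound)`; it is not an actual MIP-solver API.

```python
def frac(v, tol=1e-6):
    """True if v is not (near-)integral."""
    return abs(v - round(v)) > tol

def round_heuristic(x, constraints, tol=1e-6):
    """Round every fractional entry of an LP-relaxation solution x to the
    nearest integer, then check whether the rounded point still satisfies
    all a.x <= b constraints (i.e., is both integral and LP-feasible)."""
    x = [round(v) if frac(v, tol) else v for v in x]
    feasible = all(
        sum(a * v for a, v in zip(coeffs, x)) <= b + tol
        for coeffs, b in constraints
    )
    return x, feasible
```

In a real feasibility pump, a rounded point that turns out infeasible would be projected back onto the LP relaxation (the `pump` step) and the round/propagate loop repeated.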
ML to design and control robots that can go anywhere
Googler: Tingnan Zhang (tingnan@)
● Motivation: we want our robots to be able to move on all
terrains: as fast as cars on paved roads, and as elegantly as
animals on complex natural surfaces like sand, snow, grass
and mud.
● Goal: Develop learning systems that automatically design
robot morphologies and discover control policies for
complex terrains.
● Challenges:
○ Few known principles or priors to guide the
design.
○ Sim-to-real transfer of control policies.
○ Natural substrates come in many types, with
diverse properties (e.g., loose and flowable).
Learning Robot Locomotion from Videos
Googler: Wenhao Yu (magicmelon@)
● Imitation learning from animals leads to efficient learning and natural motion.
● Abundant videos exist, but motion-capture data is scarce.
● Learning coordinated locomotion from unstructured videos presents unique challenges.
● Current research
○ Automatic decision making: reinforcement learning (NeurIPS 19, ICML 20a, ICLR 20,
NeurIPS 20a, NeurIPS 20b), optimization (NeurIPS 20c, AISTATS 21)
○ Learnable algorithm design: search (ICML 17, NeurIPS 20d, NeurIPS 20e), sampling
(AISTATS 19, NeurIPS 19), planning (ICLR 20, ICML 20b)
● Areas of interest
○ Ultimate goal: make the intractable (approximately) tractable
○ Foundation for algorithm design:
■ Reinforcement learning, learning to search/optimize
○ Application domains including:
■ Program/software understanding
■ Knowledge graph reasoning
■ Supply chain management
■ Scientific discovery
Robustness in Recommendation
alexbeutel@
● Current research
○ Robustness in Recommendation
○ Safe Multi-Objective RL for Recommendation
○ Fairness in Recommendation
● Areas of interest
○ How do we ensure recommender systems aren’t brittle or vulnerable to spurious correlations?
○ How should we make use of uncertainty in recommendation?
○ How can we make recommenders robust to adversarial attacks?
Karthik Raman
(karthikraman@)
Multilingual and Cross-Lingual learning Omniglot, Google Research
Areas of interest:
● Using interpretability as a microscope on scientific
phenomena modeled by complex ML models, to
discover things humans never knew before.
● Developing ways to detect the limitations of
interpretability methods
Qifei Wang
(qfwang@google.com)
Self-supervised Multi-task Learning Google Research
● Current research
○ Model unification via multi-task learning (WACV 21)
○ Multi-domain learning and domain generalization (CVPR 21)
○ Multi-modal learning for video understanding
● Areas of interest
○ Multi-task learning and multi-domain learning
○ Self-supervised and unsupervised learning
○ Few-shot learning
○ Multi-modal learning for ambient sensing
What do generative models understand? Alexey Dosovitskiy
(adosovitskiy@google.com)
Brain Berlin
● Current research
○ Object-centric models (Slot Attention)
○ Image generation (NeRF in the Wild)
○ Architectures for computer vision (Vision Transformer)
Relevant research:
Combinatorial and non-linear diffusion on graphs, (dynamic) Graph decomposition,
Packing / Covering problems (min-cost flow, Wasserstein distance), Preconditioning
and numerical primitives.
Areas of interest:
Theoretical and empirical study of non-linear diffusion
Sparsifying/sketching graphs while maintaining structures such as random walks and clustering
Graph-based semi-supervised learning
Jeremiah Harmsen
Google Brain
Systems and Tools for Machine Learning jeremiah@google.com
Topics (from slide graphic): Global ML, TensorFlow, Research, Storage Systems, Datasets, Velocity
Googler name: Yasemin Altun altun@google.com
Research Topic: Structure Aware Machine Learning for NLU
Areas of interest:
● (Conversational) knowledge-based question answering
● Task-oriented dialogue
● Reasoning over structured and semi-structured context
Human presence detection around machinery Stefan Welker (swelker@)
Robotics at Google