
Machine Learning at Berkeley’s research division provides a platform for students and Berkeley faculty to collaborate on cutting-edge machine learning research. As many of our research projects culminate in publication at an AI/ML conference, we offer funding for our members to attend these conferences, allowing them to connect with the research world outside of Berkeley.
Research
Our Publications
SPOTLIGHT
Orthogonal Projected Gradient Descent
ICLR 2022
We introduce an improved attack technique to subvert defenses to adversarial example detection. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate.
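For a rough sense of the core idea, here is a minimal NumPy sketch of one orthogonal-projection attack step: the classifier-loss gradient is projected onto the subspace orthogonal to the detector-loss gradient before the usual signed PGD update, so the attack makes progress against the classifier without moving in the direction the detector is most sensitive to. The gradient arrays, clipping, and hyperparameters below are illustrative placeholders, not the paper's exact procedure.

```python
import numpy as np

def orthogonal_pgd_step(x, classifier_grad, detector_grad,
                        step_size=0.01, eps=0.03, x_orig=None):
    """One illustrative Orthogonal-PGD-style update.

    classifier_grad, detector_grad: gradients of the classifier loss and the
    detector loss with respect to the input x (same shape as x).
    """
    d = detector_grad.ravel()
    g = classifier_grad.ravel()
    # Remove the component of the classifier gradient that lies along the
    # detector gradient, leaving only the orthogonal part.
    g_orth = g - (g @ d) / (d @ d + 1e-12) * d
    x_new = x + step_size * np.sign(g_orth.reshape(x.shape))
    # Keep the perturbation inside an L-infinity ball around the original input.
    if x_orig is not None:
        x_new = np.clip(x_new, x_orig - eps, x_orig + eps)
    return np.clip(x_new, 0.0, 1.0)
```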
NeurIPS 2022
Built to Last: Lessons on Fostering a Student ML Community
We presented a paper on fostering a successful student-led ML community at the NeurIPS 2022 Workshop on Broadening Research Collaborations.
This work discusses Machine Learning at Berkeley's approach to encouraging collaboration among undergraduates, and how we're working to narrow the accessibility gap in the AI/ML space.
IEEE DLS 2019
Targeted Adversarial Examples for Black Box Audio Systems
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial example generation, combining genetic algorithms and gradient estimation to solve the task.
Authors: Rohan Taori, Amog Kamsetty, Brenton Chu, Nikita Vemuri
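As a loose illustration of those two ingredients, the sketch below pairs a population-based (genetic) search over audio perturbations with a finite-difference gradient estimate, both driven only by black-box queries. The `score` function is a dummy stand-in for the real query (which would measure how close the model's transcription is to the target phrase), and the fitness definition and hyperparameters are invented for illustration rather than taken from the paper's attack.

```python
import numpy as np

def score(audio):
    """Stand-in for the black-box query. In the real attack this would return
    how close the ASR model's transcription of `audio` is to the target
    phrase; here it is a dummy so the sketch runs end to end."""
    return -float(np.abs(audio).mean())

def genetic_attack(x, pop_size=20, iters=100, sigma=0.001, elite_frac=0.2):
    """Toy genetic-algorithm search over audio perturbations."""
    pop = [x + np.random.randn(*x.shape) * sigma for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(iters):
        fitness = np.array([score(p) for p in pop])       # black-box queries only
        elite = [pop[i] for i in np.argsort(fitness)[-n_elite:]]
        children = []
        while len(children) < pop_size:
            a, b = np.random.choice(n_elite, 2, replace=True)
            mask = np.random.rand(*x.shape) < 0.5          # crossover
            child = np.where(mask, elite[a], elite[b])
            child = child + np.random.randn(*x.shape) * sigma  # mutation
            children.append(child)
        pop = children
    return max(pop, key=score)

def estimate_gradient(x, n_samples=50, delta=0.001):
    """Finite-difference estimate of the score gradient, useful for
    fine-tuning the perturbation once the genetic search gets close."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        grad += (score(x + delta * u) - score(x - delta * u)) / (2 * delta) * u
    return grad / n_samples
```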
ICLR 2019
Synthetic Datasets for Neural Program Synthesis
The goal of program synthesis is to automatically generate programs from corresponding input-output (I/O) behavior, with current approaches achieving impressive results after training on randomly generated I/O examples. However, we empirically discover a large class of test-time distributions on which current deep models perform poorly; to correct this, we propose a new methodology for controlling and evaluating the bias of synthetic data distributions.
Authors: Neel Kant, Christopher Bender, Brandon Trabucco
Advised By: Dawn Song (BAIR), Rishabh Singh (Google Brain), Richard Shin (PhD)
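To make the setup concrete, here is a toy sketch of what "randomly generated I/O examples" might look like: a program is a short sequence of list-manipulation primitives, and each training example is an input/output pair produced by running it on a uniformly random list. The DSL, sampling scheme, and parameters here are invented for illustration; the paper's point is that the choice of such sampling distributions biases what the trained model can solve at test time.

```python
import random

# Toy list-manipulation DSL: a "program" is a short sequence of primitive ops.
PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "sort":    lambda xs: sorted(xs),
    "head3":   lambda xs: xs[:3],
    "double":  lambda xs: [2 * v for v in xs],
}

def sample_program(length=3):
    return [random.choice(list(PRIMITIVES)) for _ in range(length)]

def run(program, xs):
    for op in program:
        xs = PRIMITIVES[op](xs)
    return xs

def sample_io_examples(program, n_examples=5, list_len=6, lo=0, hi=9):
    """Uniformly random inputs; the bias of this input distribution is the
    kind of synthetic-data property the paper proposes to control."""
    examples = []
    for _ in range(n_examples):
        xs = [random.randint(lo, hi) for _ in range(list_len)]
        examples.append((xs, run(program, xs)))
    return examples

prog = sample_program()
print(prog, sample_io_examples(prog, n_examples=2))
```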
IEEE DLS 2017
Transferability of Adversarial Attacks in MAML
Model-Agnostic Meta-Learning (MAML) has proven to be a powerful, lightweight framework for transferring learned knowledge during task-specific model adaptation. In parallel, adversarial attacks have proven effective against a wide range of model classes. In this paper, we propose a series of experiments designed to test the susceptibility of MAML to adversarial attacks.
Authors: Riley Edmunds, Noah Golmant, Vinay Ramasesh, Phillip Kuznetsov, Piyush Patil, Raul Puri
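The flavor of such an experiment can be sketched as follows, assuming a model that has already been meta-trained with MAML: adapt it to a new task with a few inner-loop gradient steps on a support set, then compare its query-set accuracy on clean inputs against FGSM-perturbed inputs. Everything below (the adaptation routine, the choice of FGSM, and the hyperparameters) is illustrative rather than the paper's actual protocol.

```python
import torch
import torch.nn.functional as F

def adapt(model, support_x, support_y, inner_lr=0.01, steps=5):
    """Inner-loop adaptation on the support set (modifies the model in place
    for simplicity; a full MAML setup would use a functional copy)."""
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(support_x), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method perturbation of the query inputs."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def clean_vs_adversarial_accuracy(model, query_x, query_y, eps=0.1):
    """Measure how much an FGSM attack degrades the adapted model."""
    x_adv = fgsm(model, query_x, query_y, eps)
    clean_acc = (model(query_x).argmax(1) == query_y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == query_y).float().mean().item()
    return clean_acc, adv_acc
```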