# Adversarially Robust Optimization with Gaussian Processes

@inproceedings{Bogunovic2018AdversariallyRO, title={Adversarially Robust Optimization with Gaussian Processes}, author={Ilija Bogunovic and Jonathan Scarlett and Stefanie Jegelka and Volkan Cevher}, booktitle={NeurIPS}, year={2018} }

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: the returned point may be perturbed by an adversary, and we require the function value to degrade as little as possible as a result. [...] Key Method: We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and give a novel confidence-bound-based algorithm StableOpt for this purpose.
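The selection rule behind this idea can be illustrated with a toy implementation: at each step, query the point whose *worst-case* upper confidence bound over the adversary's perturbation set is largest. This is only a minimal sketch of the StableOpt-style rule, not the authors' code; the objective `f`, the RBF kernel, the grid, and the parameters `delta` and `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x   # unknown objective (toy)
grid = np.linspace(-2.0, 2.0, 401)
delta, beta, noise = 0.2, 2.0, 0.05             # adversary radius, confidence width, noise std

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel with unit signal variance
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

X = list(rng.uniform(-2, 2, 3))                 # a few random initial queries
y = [f(x) + noise * rng.standard_normal() for x in X]

def posterior(q):
    # Standard GP posterior mean and std at query points q
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + noise**2 * np.eye(len(Xa))
    Ks = rbf(q, Xa)
    mu = Ks @ np.linalg.solve(K, ya)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def worst_case_ucb(x):
    # Minimum of the UCB over all perturbations within radius delta of x
    nbrs = grid[np.abs(grid - x) <= delta]
    mu, sd = posterior(nbrs)
    return np.min(mu + beta * sd)

for _ in range(15):
    x_next = max(grid, key=worst_case_ucb)      # StableOpt-style selection
    X.append(x_next)
    y.append(f(x_next) + noise * rng.standard_normal())
```

The reported point would analogously be the one maximizing the worst-case *lower* confidence bound; a plain UCB rule, by contrast, can latch onto a narrow peak whose value collapses under small perturbations.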

#### 53 Citations

Adversarial Robustness Guarantees for Gaussian Processes

- Computer Science, Mathematics
- ArXiv
- 2021

The empirical results suggest that the adversarial robustness of GPs increases with more accurate posterior estimation; a branch-and-bound scheme is used to refine the bounds, and the algorithm is guaranteed to converge to values ε-close to the actual values in finitely many iterations.

Corruption-Tolerant Gaussian Process Bandit Optimization

- Computer Science, Mathematics
- AISTATS
- 2020

It is observed that distinct algorithmic ideas are required depending on whether one must perform well in both the corrupted and non-corrupted settings, and on whether the corruption level is known.

On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization

- Computer Science, Mathematics
- ICML
- 2021

In this paper, we consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), which can…

Efficient Batch Black-box Optimization with Deterministic Regret Bounds

- Computer Science, Mathematics
- ArXiv
- 2019

A novel batch optimization algorithm is proposed, which jointly maximizes the acquisition function and selects points from a whole batch in a holistic way, and regret bounds are derived for both the noise-free and perturbation settings irrespective of the choice of kernel.

Adversarial Robustness Guarantees for Classification with Gaussian Processes

- Mathematics, Computer Science
- AISTATS
- 2020

The empirical analysis suggests that GPC robustness increases with more accurate posterior estimation; the method is applied to experimentally investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset, and a subset of the MNIST dataset.

Mixed Strategies for Robust Optimization of Unknown Objectives

- Computer Science, Mathematics
- AISTATS
- 2020

A novel sample-efficient algorithm, GP-MRO, sequentially learns about the unknown objective from noisy point evaluations and seeks to discover a robust, randomized mixed strategy that maximizes the worst-case expected objective value.

Robust Adaptive Decision Making: Bayesian Optimization and Beyond

- Computer Science
- 2019

This dissertation considers four research directions with the goal of enhancing robust and adaptive decision making in Bayesian optimization and associated problems, and proposes a novel robust confidence-bound-based algorithm that treats BO and LSE in a unified fashion.

Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks

- Computer Science
- ICML
- 2020

A principled optimization framework integrates a zeroth-order (ZO) gradient estimator with an alternating projected stochastic gradient descent-ascent method, where the former requires only a small number of function queries and the latter needs just a one-step descent/ascent update.
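The ZO building block mentioned here can be sketched as a random-direction finite-difference gradient estimate, here applied only to plain minimization rather than the full min-max scheme. The two-point form, the smoothing radius `mu`, the query budget, and the quadratic test objective are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def zo_gradient(f, x, mu=1e-3, n_queries=50):
    # Two-point zeroth-order estimator: average (f(x + mu*u) - f(x)) / mu * u
    # over random Gaussian directions u; its expectation approximates grad f(x).
    d = x.size
    g = np.zeros(d)
    for _ in range(n_queries):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_queries

f = lambda x: np.sum((x - 1.0)**2)   # smooth test objective, minimum at x = 1
x = np.zeros(3)
for _ in range(200):                 # plain gradient descent with ZO estimates
    x -= 0.05 * zo_gradient(f, x)
```

With only function evaluations, the iterate still approaches the minimizer; the estimator's variance, which grows with the dimension, is why such methods trade extra queries for the missing gradients.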

Active learning for distributionally robust level-set estimation

- Computer Science, Mathematics
- ICML
- 2021

A theoretically grounded and computationally efficient active learning method is proposed for efficiently identifying a reliable set H, defined as a region in which the DRPTR measure exceeds a certain desired probability α; the task can be interpreted as a level-set estimation (LSE) problem for DRPTR.

Saddle point optimization with approximate minimization oracle

- Computer Science, Mathematics
- GECCO
- 2021

A heuristic approach is proposed to adapt η in a derivative-free manner, implementing zeroth-order and first-order minimization algorithms, and reveals a possible shortcoming of recently developed robust adversarial reinforcement learning algorithms.

#### References

Showing 1–10 of 40 references

Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

- Computer Science, Mathematics
- ICML
- 2010

This work analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
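The GP-UCB rule analyzed in this reference is simple to state: query the point maximizing the posterior mean plus a scaled posterior standard deviation. A hedged sketch follows; the kernel, `beta`, and the 1-D objective are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sin(2 * x) + 0.5 * x          # unknown objective (toy)
grid = np.linspace(0.0, 3.0, 301)
noise, beta = 0.1, 4.0                          # noise std, confidence parameter

def k(a, b, ls=0.4):
    # Squared-exponential kernel with unit signal variance
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

X, y = [1.5], [f(1.5) + noise * rng.standard_normal()]
for _ in range(20):
    Xa, ya = np.array(X), np.array(y)
    Kinv = np.linalg.inv(k(Xa, Xa) + noise**2 * np.eye(len(Xa)))
    Ks = k(grid, Xa)
    mu = Ks @ Kinv @ ya                         # posterior mean on the grid
    sd = np.sqrt(np.maximum(1.0 - np.sum(Ks @ Kinv * Ks, axis=1), 1e-12))
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]   # UCB acquisition
    X.append(x_next)
    y.append(f(x_next) + noise * rng.standard_normal())
```

The regret analysis hinges on how fast the posterior standard deviation shrinks at queried points, which the paper quantifies via the maximal information gain of the kernel.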

Robust Maximization of Non-Submodular Objectives

- Mathematics, Computer Science
- AISTATS
- 2018

A new algorithm, Oblivious-Greedy, is presented, and the first constant-factor approximation guarantees are proved for a wider class of non-submodular objectives, i.e., when the number of deletions $\tau$ is linear in $k$.

Robust Submodular Observation Selection

- Mathematics
- 2008

In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to…

Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration

- Computer Science, Mathematics
- ECML/PKDD
- 2013

The Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE) is introduced, which combines the UCB strategy and pure exploration within the same batch of evaluations along the parallel iterations, and theoretical upper bounds on the regret are proved for batches of size K.

Safe Exploration for Optimization with Gaussian Processes

- Computer Science
- ICML
- 2015

This work develops an efficient algorithm called SAFEOPT, theoretically guarantees its convergence to a natural notion of optimum reachable under safety constraints, and evaluates it on two real applications: movie recommendation and therapeutic spinal cord stimulation.

Distributionally Robust Submodular Maximization

- Computer Science, Mathematics
- AISTATS
- 2019

This paper achieves better performance on the actual underlying function $f$ by directly optimizing a combination of bias and variance, and shows compelling empirical evidence that DRO improves generalization to the unknown stochastic submodular function.

Robust Optimization for Non-Convex Objectives

- Computer Science, Mathematics
- NIPS
- 2017

A reduction from robust improper optimization to Bayesian optimization is developed: given an oracle that returns $\alpha$-approximate solutions for distributions over objectives, it is shown that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks.

Certifying Some Distributional Robustness with Principled Adversarial Training

- Computer Science, Mathematics
- ICLR
- 2018

This work provides a training procedure that augments model parameter updates with worst-case perturbations of training data and efficiently certifies robustness for the population loss by considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball.

Batch Bayesian Optimization via Local Penalization

- Computer Science, Mathematics
- AISTATS
- 2016

A simple heuristic based on an estimate of the Lipschitz constant is investigated; it captures the most important aspect of this interaction at negligible computational overhead and compares well, in running time, with much more elaborate alternatives.

Certifiable Distributional Robustness with Principled Adversarial Training

- Computer Science, Mathematics
- ArXiv
- 2017

This work provides a training procedure that augments model parameter updates with worst-case perturbations of training data by considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball.