Adversarial for Classification Models

Provides a class that represents an adversarial example for image classification tasks.

class perceptron.utils.adversarial.classification.ClsAdversarial(model, criterion, original_image, original_pred, threshold=None, distance=MeanSquaredDistance, verbose=False)

Defines an adversarial example that should be found and stores the result once it is found.
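
A minimal construction sketch. The model and criterion objects are assumed to be an already-wrapped classification model and a criterion created elsewhere with this library; the predictions call on the model is likewise an assumption:

    import numpy as np
    from perceptron.utils.adversarial.classification import ClsAdversarial

    # Assumptions: `model` is a wrapped classification model and
    # `criterion` is a criterion object, both created elsewhere.
    image = np.random.rand(224, 224, 3).astype(np.float32)  # (height, width, channels)
    label = int(np.argmax(model.predictions(image)))        # original prediction (assumed model API)

    adv = ClsAdversarial(model, criterion, image, label)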

backward(self, gradient, image=None, strict=True)

Interface to model.backward for attacks.
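
A hedged sketch of one use of backward, continuing the example above; the 1000-class output size is an assumption:

    # Backpropagate a one-hot gradient on the logits to obtain the
    # gradient of that logit with respect to the input image.
    logit_grad = np.zeros(1000, dtype=np.float32)  # 1000 classes is an assumption
    logit_grad[label] = 1.0
    image_grad = adv.backward(logit_grad)          # same shape as the input image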

gradient(self, image=None, label=None, strict=True)

Interface to model.gradient for attacks.

Parameters:
image : numpy.ndarray

Image with shape (height, width, channels). Defaults to the original image.

label : int

Label used to calculate the loss that is differentiated. Defaults to the original label.

strict : bool

Controls whether the bounds for the pixel values are checked.
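
A usage sketch for gradient, continuing the example above; the step size and sign-gradient update are illustrative, not part of this API:

    # Gradient of the loss w.r.t. the pixels, evaluated by default at
    # the original image and original label.
    grad = adv.gradient()

    # One illustrative ascent step; strict=False relaxes the pixel-bound
    # check since the perturbed image may leave the valid range.
    perturbed = image + 0.01 * np.sign(grad)
    grad_at_perturbed = adv.gradient(image=perturbed, label=label, strict=False)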

model_task(self)

Interface to model.model_task for attacks.
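
A short sketch; the exact string returned for classification models ('cls' below) is an assumption, so check the source:

    # Attacks can branch on the task type; the exact string returned
    # ('cls' here) is an assumption for classification models.
    if adv.model_task() == 'cls':
        grad = adv.gradient()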

predictions_and_gradient(self, image=None, label=None, strict=True, return_details=False)

Interface to model.predictions_and_gradient for attacks.

Parameters:
image : numpy.ndarray

Image with shape (height, width, channels). Defaults to the original image.

label : int

Label used to calculate the loss that is differentiated. Defaults to the original label.

strict : bool

Controls whether the bounds for the pixel values are checked.
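
A combined sketch, continuing the example above. The exact layout of the returned tuple is an assumption (predictions first, gradient second), so the slice below only takes the first two elements:

    # One combined forward/backward call, typically cheaper than
    # separate calls to the prediction and gradient interfaces.
    predictions, grad = adv.predictions_and_gradient()[:2]
    adv_label = int(np.argmax(predictions))  # label the model currently predicts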