Ignore the noise: Using autoencoders against adversarial attacks in reinforcement learning (lightning talk)

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Reinforcement learning (RL) algorithms can learn from and explore nearly any state in their environment any number of times, yet minute adversarial attacks can cripple these agents. In this work, we define our threat model against RL agents as follows: adversarial agents introduce small perturbations to the input data via black-box models, with the goal of reducing the optimality of the agent. We focus on pre-processing adversarial images before they enter the network in order to reconstruct the ground-truth images.
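The abstract describes the defense only at a high level: a denoising autoencoder pre-processes each (possibly perturbed) observation and passes the reconstruction on to the agent's network. A minimal sketch of that idea in PyTorch follows; the DenoisingAutoencoder class, its layer sizes, and the training objective shown are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch: the talk does not publish its architecture, so all
# layer sizes and names below are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Reconstructs clean observations from adversarially perturbed ones."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # Encoder: compress the (possibly perturbed) observation.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: reconstruct an estimate of the ground-truth image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


# Usage: denoise each observation before it reaches the agent's network.
if __name__ == "__main__":
    dae = DenoisingAutoencoder()
    clean = torch.rand(1, 3, 84, 84)               # e.g. an Atari-style frame
    perturbed = (clean + 0.05 * torch.randn_like(clean)).clamp(0, 1)
    reconstructed = dae(perturbed)                 # fed to the policy network
    loss = nn.functional.mse_loss(reconstructed, clean)  # training objective

In such a setup the autoencoder would be trained on pairs of clean and perturbed frames, and at inference time it would sit between the environment and the policy network so the agent never sees the raw adversarial input.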

Original language: English
Title of host publication: Proceedings - 2018 4th International Conference on Software Security and Assurance, ICSSA 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 81
Number of pages: 1
ISBN (Electronic): 9781538692103
DOIs
State: Published - Jul 2018
Externally published: Yes
Event: 4th International Conference on Software Security and Assurance, ICSSA 2018 - Seoul, Korea, Republic of
Duration: 26 Jul 2018 – 27 Jul 2018

Publication series

Name: Proceedings - 2018 4th International Conference on Software Security and Assurance, ICSSA 2018

Conference

Conference: 4th International Conference on Software Security and Assurance, ICSSA 2018
Country/Territory: Korea, Republic of
City: Seoul
Period: 26/07/18 – 27/07/18

Keywords

  • Adversarial examples
  • Autoencoders
  • Reinforcement learning

