Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa or Siri; and, most interestingly, they power several experimental projects trying to emulate natural conversation, such as OpenAI’s GPT-3 and Google’s LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?