Every supposedly impenetrable LLM can be jailbroken. And every service agreement that guarantees the data you enter into a prompt window will not be used to train future models can be broken, loopholed, or hacked. Once you enter content into a large language model, or post anything on the web, it’s no longer yours. A guide to keeping your data safe in the AI landscape.