xxx
Indirect prompt injections, which are considered one of most serious AI security problems, take things up a notch. Instead of being entered by the user, the malicious prompt is inserted by an outside source. That could be a devious set of instructions included in text on a website that an AI summarizes; or text in a white font in a document that a human wouldn’t obviously see but a computer will still read. These kinds of attacks are a key concern as AI agents, which can let an LLM control or access other systems, are being developed and released.
xxx