Security for AI agents: How OpenAI prevents data theft via links

OpenAI details the security architecture behind its new “Operator” agent, which executes web interactions in an isolated cloud sandbox rather than locally on user devices. By signing requests cryptographically per RFC 9421 (HTTP Message Signatures), server operators and firewalls can verify that a request actually originates from an authorized AI agent. We analyze whether this server-side “walled garden” approach effectively eliminates the risk of SSRF attacks compared to open systems such as Claude Computer Use.
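To illustrate the mechanism the article refers to, here is a minimal sketch of RFC 9421-style request verification. RFC 9421 builds a canonical “signature base” from covered message components and signs it; the sketch below uses the spec's `hmac-sha256` algorithm so it stays stdlib-only, though an agent operator would more likely publish a public key and use an asymmetric algorithm such as `ed25519`. All names, keys, and parameter values are illustrative, not OpenAI's actual implementation.

```python
import base64
import hashlib
import hmac

# Hypothetical shared secret between the agent operator and the server.
# RFC 9421 also defines asymmetric algorithms (e.g. ed25519), which avoid
# sharing a secret; hmac-sha256 is used here only to keep the sketch runnable.
SHARED_KEY = b"demo-shared-secret"

def signature_base(method: str, authority: str, path: str, params: str) -> bytes:
    """Build an RFC 9421 signature base over a minimal set of derived
    components (@method, @authority, @path) plus @signature-params."""
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ]
    return "\n".join(lines).encode()

def sign(method: str, authority: str, path: str,
         created: int = 1735689600, keyid: str = "agent-key-1"):
    """Agent side: produce signature parameters and a base64 HMAC tag."""
    params = (f'("@method" "@authority" "@path")'
              f';created={created};keyid="{keyid}";alg="hmac-sha256"')
    base = signature_base(method, authority, path, params)
    tag = hmac.new(SHARED_KEY, base, hashlib.sha256).digest()
    return params, base64.b64encode(tag).decode()

def verify(method: str, authority: str, path: str,
           params: str, signature_b64: str) -> bool:
    """Server side: rebuild the signature base and compare in constant time."""
    base = signature_base(method, authority, path, params)
    expected = hmac.new(SHARED_KEY, base, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(signature_b64))

params, sig = sign("GET", "example.com", "/articles/agent-security")
print(verify("GET", "example.com", "/articles/agent-security", params, sig))  # True
print(verify("GET", "example.com", "/admin", params, sig))  # False: path changed
```

Because the signed components include the request path, a firewall that verifies the signature can reject requests an attacker has redirected elsewhere, which is the property the SSRF discussion above turns on.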

OpenAI unveils GPT-5.2 codex: New security standards for coding agents

With an addendum to the System Card, OpenAI radically shifts the security focus of GPT-5.2 codex from content moderation to the safety of functional capabilities. The updated model now blocks malware, obfuscation, and prompt injections directly during token generation rather than relying on external guardrails.
