Offensive prompt injection and agent exploitation
If you want permissionless research, radical creativity, and the freedom to break things the big labs would never let you touch, you'll feel at home here. If you'd rather discover new failure modes than sit in meetings discussing old ones, you'll fit right in.
OpenAI has guardrails. Anthropic has constraints. Google has committees.
We are the lab built for:
If you want a safe, structured, corporate research environment, this is not it. If you want to uncover failure modes no one has documented yet, you will fit in immediately.
Most labs optimize for safety optics and controlled research. We optimize for discovery, creativity, and speed.
This is a place for people who want to push the edge of what AI systems can do, and what they can break.
Breakthroughs are rewarded here. If you uncover:
You receive a direct cash bonus. We reward ingenuity, curiosity, and technical creativity. This model does not exist at other labs.
Because the people who secure autonomous agents will:
If you want the chance to build something the world will rely on, this is it. Early builders don't follow the field - they define and create it.
This role is full-spectrum adversarial exploration of large language models and agentic systems.
No PhD is required. Skill and curiosity matter more than credentials.
Send your resume and a brief note about what excites you most about breaking AI systems.
jobs@agenthacker.ai