Federal News
HackerOne Launches AI Prompt Injection Testing
March 21, 2026
HackerOne has introduced Agentic Prompt Injection Testing to address the rapidly escalating security risks associated with AI prompt injection vulnerabilities, which have increased by 540% year-over-year. This new testing service enables organizations, including government agencies and contractors, to validate the security of AI systems under realistic adversarial conditions, helping to prevent data breaches and misuse of AI-integrated tools in operational environments.
- Why this matters: As AI adoption grows in government operations, validated security testing against prompt injection attacks becomes critical to safeguarding sensitive data and maintaining system integrity.
- Procurement professionals should consider integrating Agentic Prompt Injection Testing into AI system acquisition and risk management strategies to ensure robust defense against emerging AI threats.
- Contractors offering AI solutions can leverage this testing capability to demonstrate security resilience and compliance with evolving cybersecurity expectations.
- This development signals increasing demand for specialized AI security services, highlighting opportunities for vendors focused on advanced threat detection and mitigation in AI deployments.
Prompt injection has quickly become a severe risk to deployed AI systems because it can transform a trusted application into an attack surface. Security teams canβt rely on static controls or runtime filters alone. They need validated proof of whether an AI system can be exploited once itβs connected to real data and tools. Agentic Prompt Injection Testing delivers that evidence, enabling organizations to identify confirmed exposure and reduce risk before it impacts the business.
— Nidhi Aggarwal, Chief Product Officer at HackerOne
Vendors
HackerOne
Sources
- HackerOne Introduces Agentic Prompt Injection Testing as AI Security Risks Accelerate - Cybersecurity Insiders · Cybersecurity Insiders · Mar 21