Silmaril
/sɪl.mə.rɪl/ · n. Quenya, “radiance of pure light”
The world's first self-healing firewall for AI.
Join Private BetaSILMARIL HACKED


Mission
Silmaril's mission is to win the AI security arms race through autonomous research and cutting-edge ML. AI-enabled attacks caused $70B in damages in 2025, with attackers 4.5× more likely to succeed using AI. Today's defenses can't keep up because they only recognize threats they've seen before. We're building the unified security layer for AI, starting with a firewall that understands what an attack does in context and retrains itself continuously against each customer's evolving threat landscape. Our team is composed of engineers and researchers from Amazon, MIT, Columbia, and high-growth startups.
Firewall
Novel threats surfaced before attackers find them
Research agents probe reverse-engineered replicas of your system around the clock, chaining prompt injection, tool abuse, memory poisoning, and privilege escalation. 10 novel attack chains discovered per month, equivalent to a full security team’s annual output.
A firewall that retrains on your threat landscape
Custom multi-head ensemble model sits in the request path at <20 ms overhead. Retrains continuously on application-specific threat intelligence. 99.7% of attacks blocked vs 52% for leading guardrails.
Defenses that propagate across every system
When one customer’s firewall blocks a novel technique, the defense signature is abstracted and propagated to every other firewall. Only generalized patterns are shared, never customer data. Every deployment strengthens the whole network.
Impact
Accuracy
Against emerging and contextual threats, vs 52% for leading guardrails.
Latency Overhead
Inline firewall, continuous retraining, zero downtime.
Novel Attacks Discovered
Matching a full security team’s annual output in 30 days.
Manual Hours Saved Monthly
Time to resolution drops from days to ~30 minutes.
Disclosures
15 critical vulnerabilities found across OpenAI, Anthropic, Google, and Microsoft in two weeks.
Open AI
Prompt injection chain converted an OpenAI agent into an attack vector for remote code execution, escalating to access internal artifacts including source code.
Microsoft
Critical prompt injection vulnerabilities using email as the entry vector, achieving data exfiltration through SSRF in Copilot. Microsoft patched the vulnerability for millions of users.