You're executing untrusted code and running blind.
Over 90% of the code executing in your environments is code your team doesn’t write or review. Third-party dependencies, build tools, and AI agents run automatically, with the exact same access to your secrets and infrastructure as your own code.
Static analysis tells you what code looks like. Nothing tells you what it actually does.
Supply Chain Dependencies
Code you import, executing with your secrets.
A dependency can spawn shell commands, phone home, or read secrets during a build while your CI logs still look clean. Static scans can't tell you what actually happens inside the runner; that blind spot is where supply chain attacks live.
postinstall spawned shell process · 2m
pip install made outbound curl · 8m
container build spawned background process · 15m
go build read sensitive host file · 22m
dependency install posted env data · 30m
npm install changed system permissions · 45m
Agentic Execution
Generated code and AI agents running without a runtime audit trail.
Coding agents like Cursor, Claude, and Codex install packages, invoke tools, and make decisions with the same privileges as you. When that behavior drifts from intent, isolation and static firewalls can't tell the difference. Runtime lineage can.
Reviewing code statically isn't enough. Runtime behavior is the only way to trust what actually happened, and recent incidents prove the gap.
See and assert runtime behavior in GitHub Actions.
You already review code and run tests before you merge. Garnet adds what's missing — what the code actually did when it ran, not just what it claimed to do.
Jibril Runtime Agent
v2.8
Connected
Workflows: 3
build.yml · PASS
test.yml · WARN
cursor-agent-pr.yml · FAIL
Install in 3 lines of YAML
Drop the action into your workflow. It hooks Jibril, our lightweight eBPF sensor, into the runner at the kernel level. No proxies, zero code changes, and sub-2% CPU overhead.
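As a sketch of what that setup could look like, here is a hypothetical workflow snippet; the action reference and step names are illustrative placeholders, not the actual published action:

```yaml
# Illustrative only: the action reference below is a placeholder,
# not the real published action name.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: garnet-org/runtime-agent@v1   # placeholder: the Garnet/Jibril action
      - run: npm ci && npm run build        # everything from here on is observed at the kernel
```

The sensor step runs before the build so every subsequent process spawn, network call, and file access happens under observation.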
Behavioral profiles for every run
Every workflow run produces a structured Run Profile. It is a tamper-proof, canonical record of what actually ran in the kernel—every network call, file access, and process spawn mapped to its exact lineage.
github-runner:1
├─ npm install:42
│  └─ postinstall → /bin/sh:87
│     ├─ curl exfil.sh:91 → webhook.site
│     └─ node payload.js:93 → 185.62.190.89
└─ node build.js:55
   └─ esbuild:60
Review and assert in your workflow
Runtime assertions are like unit tests for execution behavior. Treat execution boundaries, such as expected egress or protected file paths, as testable invariants of your software. Expected behavior passes silently; deviations fail the PR with the exact execution lineage.
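To make that concrete, here is an illustrative sketch of what such an assertion policy could look like; the file layout, keys, and values are invented for illustration and are not Garnet's actual schema:

```yaml
# Hypothetical assertion policy: keys and syntax are illustrative,
# not Garnet's real configuration format.
network:
  allow:
    - registry.npmjs.org        # expected egress: package downloads
  # any other destination fails the PR with its execution lineage
files:
  deny:
    - ~/.ssh/**                 # protected paths no build step should read
processes:
  deny:
    - /bin/sh                   # no shell spawns from install scripts
```

The idea mirrors unit testing: expected boundaries are declared once, and any run that crosses them becomes a failing check on the pull request.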
Ground truth for what code actually did.
Catch unexpected behavior early
Runtime profiles surface deviations you’d never spot in a code review—regressions, unexpected side effects, and attacks—before they become incidents.
Audit dependency and agent execution
See exactly what AI agents and third-party dependencies did during a run, with kernel-level evidence. Know instantly if you’re impacted by a supply-chain attack or rogue agent behavior.
Derive policy from observed behavior
Build firewall rules, egress policies, and allowlists from what your code actually does at runtime—not from assumptions or manual inventories.
Detect known threats automatically
Garnet’s managed threat feeds flag known C2 domains, cryptominer pools, and attack patterns at runtime—catching novel threats before they show up in advisories, no security team required.
"There are a lot of tools that process security advisory data, but Garnet is the first I’ve seen that goes a step further, applying behavioral analysis to find issues before they get reported to an advisory database. This is the kind of thing we’d always wanted to do at npm, Inc., but never got around to. It’s super exciting to see it come to fruition."
Isaac Z. Schlueter
Creator of npm, ex-Project Lead Node.js
A new primitive for the autonomous era of software
Making execution observable and trustworthy.
Logs, metrics, and traces were built for code you wrote and services you control. But software is no longer deterministic, and execution is the new attack surface. Garnet is the missing visibility substrate: runtime behavior captured at the kernel, structured as testable artifacts, and delivered where engineers already work.
Kernel-level visibility, near-zero overhead
Jibril is a lightweight eBPF agent that instruments at the kernel level: not by parsing logs after the fact, but by observing syscalls as they happen. No code changes, no proxies, no sidecars. A single binary drops into your workflow and captures every process spawn, network call, and file access with full ancestry. Sub-2% CPU overhead and zero configuration, built for modern ephemeral environments.
You already write tests to verify expected logic and run linters to enforce style. Runtime assertions extend that pattern to execution behavior—define which destinations are expected, which processes can spawn, which files can be touched. A failing assertion is a failing test in your PR. No new dashboards. The feedback loop engineers already trust, now covering what code actually does.
The source of truth for non-deterministic execution
You can’t predict behavior you didn’t author. AI agents, dependencies, and generated code make execution the attack surface—and static rules can’t cover it. Garnet captures what actually ran: a deterministic, structured record of every runtime decision. Not a scan. Not a heuristic. Ground truth for the era where code writes code.