SPR{K3 Defend is not a scanner you run once. It's a system that learns what normal looks like on your infrastructure — then fires when something happens that is structurally impossible under legitimate operation. Here's how it works in practice.
Installation is one command. The agent runs on each machine in your infrastructure — laptops, servers, training nodes, CI runners. It observes process execution, file operations, and network behavior. It never reads file contents, credentials, or model data. Metadata only.
Process names, parent-child relationships, network connection targets (IP:port, not payload), file paths written or executed, DNS queries. That's it. Your code, models, prompts, and data never leave your machine.
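In code, a metadata-only event might look like this. The field names here are illustrative, not the shipped schema; the point is what is present (names, paths, endpoints) and what is absent (contents, payloads, credentials):

```python
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    """One metadata-only observation. No file contents, no packet payloads."""
    kind: str     # "exec", "file_write", "net_connect", or "dns"
    pid: int      # process ID
    ppid: int     # parent process ID (for lineage)
    process: str  # process name, e.g. "python3"
    detail: str   # path, IP:port, or queried domain -- never content
    ts: float = field(default_factory=time.time)

# A process opening an outbound connection: only the endpoint is recorded.
ev = Event(kind="net_connect", pid=4242, ppid=1777,
           process="python3", detail="203.0.113.9:4444")
```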
For the first 48–72 hours, Defend runs in learning mode. It watches your infrastructure's normal behavior and builds a model of what "valid" looks like for your specific environment.
During learning, the system maps normal process lineages, file activity, and network destinations for each machine.
The system does not fire during the learning period. It accumulates the behavioral model silently. The agent computes a trust score locally. That score — just a number — is sent to the dashboard for visibility, but no alerts fire until the baseline is established and you make the deployment decision.
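A local trust score can be sketched as a penalty accumulator that decays back toward a clean 1.0 over time. The weights and half-life below are assumptions for illustration, not the shipped values:

```python
import math
import time

class TrustScore:
    """Local-only trust score: anomaly penalties decay back toward 1.0.
    half_life and event weights are illustrative, not production values."""
    def __init__(self, half_life: float = 300.0):
        self.penalty = 0.0
        self.last = time.monotonic()
        self.decay = math.log(2) / half_life  # exponential decay rate

    def observe(self, weight: float) -> None:
        """Decay the existing penalty, then add this event's weight."""
        now = time.monotonic()
        self.penalty *= math.exp(-self.decay * (now - self.last))
        self.last = now
        self.penalty += weight

    @property
    def score(self) -> float:
        return max(0.0, 1.0 - self.penalty)

t = TrustScore()
t.observe(0.3)  # one moderately suspicious event drops the score to ~0.7
```

Only the resulting number would cross the wire; the events that produced it stay on the machine.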
After learning completes, you choose how Defend operates. This is your decision — the system doesn't activate automatically.
Defend observes and alerts. Every impossible state is logged, scored, and surfaced on the dashboard. Nothing is blocked. Your infrastructure runs exactly as before — you just see what's happening inside it.
Defend can block specific actions when the trust score drops below your threshold. A process attempting to execute a file it just downloaded? Blocked before execution. A pickle.load on network-received data? Intercepted. You set the threshold.
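The passive/active distinction reduces to a small decision function. This is a sketch under assumed names; the real policy engine carries more context, but the shape is the same:

```python
THRESHOLD = 0.5  # operator-set trust threshold; value is illustrative

def decide(score: float, mode: str) -> str:
    """Map a trust score and deployment mode to an action.
    Passive mode never blocks; active mode blocks below threshold."""
    if mode == "passive":
        return "log"          # surfaced on the dashboard, nothing stopped
    if mode == "active" and score < THRESHOLD:
        return "block"        # intercepted before the action completes
    return "allow"
```

Usage: `decide(0.3, "passive")` logs, while the same score in active mode blocks.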
Most teams start passive. The value is visibility — knowing what's happening across your AI infrastructure in real time. Active mode comes later, after you trust the signal.
The correlator runs locally, on your machines. Events from your agents flow through the decay correlator on the same hardware that produced them: a spiking-neuron model that accumulates charge per process lineage and fires when structurally impossible behavior is detected.
Each event alone is benign. The combination within a process lineage is impossible.
Each event adds charge to a cell tied to its process lineage. Charge decays exponentially — half-life of 30 seconds for fast chains, 10 minutes for slow staging. If related events arrive before the charge decays, they compound. When charge crosses threshold, the alert fires. After firing, a refractory period prevents re-alerting on the same chain. Stale cells are garbage-collected. The mechanics mirror a leaky integrate-and-fire neuron: accumulate, decay, fire, rest.
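The mechanism above can be sketched in a few dozen lines. The half-life, threshold, refractory window, and event weights here are assumptions for illustration; the production correlator tunes them per chain type:

```python
import math
import time

class DecayCorrelator:
    """Leaky integrate-and-fire correlator, one cell per process lineage.
    Parameter values are illustrative, not the shipped configuration."""
    def __init__(self, half_life: float = 30.0,
                 threshold: float = 1.0, refractory: float = 60.0):
        self.decay = math.log(2) / half_life
        self.threshold = threshold
        self.refractory = refractory
        self.cells = {}  # lineage -> (charge, last_ts, refractory_until)

    def add(self, lineage: str, weight: float, now: float = None) -> bool:
        """Add an event's charge; return True if the cell fires."""
        now = time.monotonic() if now is None else now
        charge, last, hold = self.cells.get(lineage, (0.0, now, 0.0))
        if now < hold:  # refractory: no re-alerting on the same chain
            return False
        # decay the stored charge, then compound this event on top
        charge = charge * math.exp(-self.decay * (now - last)) + weight
        if charge >= self.threshold:
            self.cells[lineage] = (0.0, now, now + self.refractory)
            return True  # alert fires
        self.cells[lineage] = (charge, now, 0.0)
        return False

c = DecayCorrelator()
c.add("bash>python3", 0.6, now=0.0)   # benign alone: no fire
c.add("bash>python3", 0.6, now=10.0)  # compounds before decay: fires
```

Two half-charge events ten seconds apart compound to ~1.08 and cross the threshold; the same two events half an hour apart would not.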
When the correlator fires, Defend creates a cluster incident with the full event chain, lineage, and contributing evidence — all stored locally on your machine. In passive mode, alert metadata (rule ID, severity, timestamp) appears on your dashboard and optional webhook. In active mode, the triggering action is blocked.
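What leaves the machine in passive mode is small. A sketch of the webhook body, with hypothetical field names, makes the boundary concrete — the evidence chain stays local:

```python
import json
import time

def alert_payload(rule_id: str, severity: str) -> str:
    """Metadata-only alert body (field names are illustrative).
    The full event chain and lineage are stored locally, not sent."""
    return json.dumps({
        "rule_id": rule_id,
        "severity": severity,
        "ts": int(time.time()),
    })

body = alert_payload("impossible-state-exec-after-download", "high")
```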
The system gets sharper over time. Detection hypotheses that consistently identify real threats survive. Those that generate noise decay. Each confirmed incident feeds back into the pattern registry, which propagates to all agents across your mesh. The defense surface compounds — the same way the vulnerability research behind Ora compounds into better static detection.
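One simple way to model "hypotheses that identify real threats survive, noisy ones decay" is an exponential moving average toward confirmed outcomes. This is a sketch of the idea, not the shipped scoring function:

```python
def update_fitness(fitness: float, confirmed: bool, lr: float = 0.2) -> float:
    """Nudge a detection hypothesis's fitness toward 1.0 on confirmed
    threats and toward 0.0 on noise. lr is an illustrative learning rate."""
    target = 1.0 if confirmed else 0.0
    return fitness + lr * (target - fitness)

f = 0.5  # a new hypothesis starts at neutral fitness
for hit in (True, True, False, True):  # three confirmations, one miss
    f = update_fitness(f, hit)
# mostly-confirmed hypotheses drift upward; consistently noisy ones decay out
```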
Ora (static scanner) generates hypotheses about what's vulnerable. The correlator (runtime) validates those hypotheses against real behavior. Confirmed findings sharpen both systems. This is why the architecture is one system, not two products — the scanner proposes, the runtime disposes.
When the correlator fires and you have armed the system, Defend does one thing: it asks the operating system to terminate the offending process.
No kernel driver. No root access. No special privileges. The agent runs as a regular user-level process — the same user running your ML workloads. It uses a single standard operating system call to terminate the offending process. Nothing custom. Nothing invasive.
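On macOS and Linux, that single standard call is SIGKILL delivered to the offending PID, which a same-user process can send without any elevated privileges. A minimal sketch:

```python
import os
import signal
import subprocess
import sys

def terminate(pid: int) -> None:
    """The single standard OS call: SIGKILL cannot be trapped,
    delayed, or handled by the target process."""
    os.kill(pid, signal.SIGKILL)

# demo: spawn a long-running child process, then terminate it
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"])
terminate(child.pid)
child.wait()  # returns immediately; the child never finishes its sleep
```

Because SIGKILL is delivered by the kernel rather than handled by the target, the process gets no opportunity to run cleanup code.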
The process does not get to clean up, finish writing, or complete the exfiltration. It stops immediately. The agent logs the full event chain, the process lineage, and the reason for termination — all visible in your console.
Terminating a process is one line of code. The hard part — the part that took 14 NVIDIA CVEs and 760+ vulnerability patterns to build — is knowing which process to kill. The correlator fires only on structurally impossible behavior: actions that have no legitimate explanation regardless of context. A model loader reading SSH keys. A training script spawning a reverse shell. A checkpoint executing code during deserialization. Not suspicious. Not anomalous. Impossible.
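An "impossible state" is a predicate, not a statistical score. As a sketch, one of the examples above (a model loader reading SSH keys) might look like this; the process names and path patterns are assumptions for illustration:

```python
MODEL_LOADERS = {"python3", "torchserve"}  # illustrative process names

def impossible(process: str, kind: str, path: str) -> bool:
    """A model-loading process has no legitimate reason to read SSH
    private keys, regardless of arguments, user, or time of day.
    Note: this matches on the file *path* (metadata), never contents."""
    return (process in MODEL_LOADERS
            and kind == "file_read"
            and "/.ssh/" in path)
```

There is no threshold to tune on a rule like this: the behavior either occurred or it did not.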
This is why monitor mode comes first. You watch the correlator on your real workloads. You confirm it never fires on legitimate activity. Then you arm it. If it fires wrong in monitor mode, you tune it before it can kill anything.
The decision to arm the system is yours, when you are ready.
macOS 12+ or Linux (Ubuntu 20.04+, Debian 11+, RHEL 8+). Python 3.9+. Runs as a lightweight daemon — typically under 50MB RAM, negligible CPU.
Outbound HTTPS to defend.sprk3.com for event metadata and heartbeats. No inbound ports required. Works behind corporate firewalls and NAT.