Scan · Beta

Know if your ML code is safe to ship.

Point SPR{K3 Scan at a model file or an entire repository. It finds the vulnerability classes that exist in every ML codebase — unsafe deserialization, supply chain risks, hardcoded secrets, trust boundary violations. 760+ detection patterns from real CVE research, not theoretical signatures.

Two scanners, one binary

◈ Model File Analysis

Pass a .pt, .pkl, .bin, or .ckpt file. Instant verdict on whether it is safe to load. Detects pickle bombs, torch.load without weights_only, unsafe formats. Recommends safetensors migration.
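
For context, a minimal Python sketch of what those findings point at (file names are placeholders, not output from the tool):

import torch
from safetensors.torch import load_file

# Flagged (PKL-002): pickle-based load; a malicious file executes code here
state = torch.load("model.pt")

# Safer: restrict unpickling to tensors and primitive containers
state = torch.load("model.pt", weights_only=True)

# The recommended migration: safetensors files cannot execute code on load
state = load_file("model.safetensors")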

◇ Repository Audit

Full codebase scan. Hardcoded secrets, unsafe deserialization, remote code trust without version pinning, insecure transport, unpinned dependencies, CI/CD exposure. CVSS scoring per finding.
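
To make those classes concrete, here is a hypothetical snippet containing three of the findings from the sample output below (every name, key, and revision is invented):

import pickle
from transformers import AutoModel

API_KEY = "sk-live-1234"                      # SEC-002: hardcoded secret
user_path = input("checkpoint path: ")        # attacker-controlled input
obj = pickle.load(open(user_path, "rb"))      # PKL-001: pickle.load on a user path

# TRC-003: remote code runs at load time, with no pin on what that code is
model = AutoModel.from_pretrained("some/repo", trust_remote_code=True)

# Safer: pin the exact revision you audited
model = AutoModel.from_pretrained("some/repo", trust_remote_code=True,
                                  revision="audited-commit-sha")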

# Model file
$ ./sprk3_scan model.pt

SPR{K3 Scan v1.1.0 · Target: model.pt · Findings: 2
[CRITICAL] PKL-002 torch.load without weights_only · line 1
[HIGH]     PKL-005 Unsafe checkpoint format · line 1

# Repository
$ ./sprk3_scan /your/ml/repo

SPR{K3 Scan v1.1.0 · Target: /your/ml/repo · Findings: 7
[CRITICAL] SEC-002 Hardcoded API key · config.py:13
[CRITICAL] PKL-001 pickle.load on user path · loader.py:19
[HIGH]     TRC-003 trust_remote_code, no pin · model.py:10

How it works

01 · Sign up: get your API key and the compiled binary instantly.

02 · Download the zip. The binary comes pre-configured with your key; unzip and run.

03 · Scan: ./sprk3_scan /your/project or ./sprk3_scan model.pt

04 · View history and generate NIST AI RMF compliance reports at defend.sprk3.com.

Your code stays yours

Runs locally

The binary executes on your machine. Your source code never leaves it.

No file contents sent

Only hashed paths and finding metadata reach our server.
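
As an illustration of that contract, a sketch of such a payload, assuming a SHA-256 path hash and invented field names (this is not SPR{K3's actual wire format):

import hashlib, json

finding = {
    "path_sha256": hashlib.sha256(b"/repo/config.py").hexdigest(),
    "rule": "SEC-002",
    "severity": "CRITICAL",
    "line": 13,
}
print(json.dumps(finding))   # metadata only; no source, no secret values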

No credentials collected

We never see the secrets we find in your code.

Free during beta

Full access, no credit card, no limits.

Who this is for

ML Researchers

Downloading models from HuggingFace daily. Every torch.load() on an untrusted model is a potential RCE. Scan catches it before you run it.
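
If "torch.load is RCE" sounds abstract, the mechanism fits in a few lines: a pickle payload names a callable plus its arguments, and unpickling calls it. torch.load on a .pt file runs the same unpickler.

import os, pickle

class Payload:
    def __reduce__(self):
        return (os.system, ("echo pwned",))   # any shell command

blob = pickle.dumps(Payload())
pickle.loads(blob)   # prints "pwned" -- load an untrusted .pt and this runs instead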

Dev Teams Shipping ML

pip installing from requirements.txt you didn't audit, loading pickled datasets, running notebooks from Kaggle. Every trust decision flagged.
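
The pickled-dataset case is the same trust decision in different clothing (file names are placeholders):

import pandas as pd

df = pd.read_pickle("dataset.pkl")        # executes whatever code the file embeds
df = pd.read_parquet("dataset.parquet")   # columnar format; no code execution on load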

Compliance Officers

Your client or board asked if the ML code is secure. Scan gives you a NIST AI RMF report you can hand them.