Point SPRK3 Scan at a model file or an entire repository. It finds the vulnerability classes that exist in every ML codebase — unsafe deserialization, supply chain risks, hardcoded secrets, trust boundary violations. 760+ detection patterns from real CVE research, not theoretical signatures.
Pass a .pt, .pkl, .bin, or .ckpt file. Instant verdict on whether it is safe to load. Detects pickle bombs, torch.load without weights_only, unsafe formats. Recommends safetensors migration.
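Why does the migration to safetensors matter? Because the format is pure data: an 8-byte little-endian header length, a JSON metadata header, then raw tensor bytes — nothing executable anywhere. A minimal stdlib sketch (the blob below is a hypothetical hand-built example, not a real checkpoint):

```python
import json
import struct

def read_safetensors_header(data: bytes) -> dict:
    """Parse just the JSON header of a safetensors blob.
    Unlike pickle, parsing never runs code: the worst case is a JSON error."""
    (header_len,) = struct.unpack("<Q", data[:8])  # u64 LE header size
    return json.loads(data[8 : 8 + header_len])

# Build a tiny safetensors-style blob: one fp32 tensor "w" of shape [2]
header = json.dumps(
    {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
).encode()
blob = struct.pack("<Q", len(header)) + header + b"\x00" * 8

meta = read_safetensors_header(blob)
```

Compare that with a .pkl or legacy .pt file, where merely opening the file hands control to the pickle virtual machine.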
Full codebase scan. Hardcoded secrets, unsafe deserialization, remote code trust without version pinning, insecure transport, unpinned dependencies, CI/CD exposure. CVSS scoring per finding.
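To make "detection pattern" concrete, here is a toy illustration of the secret-scanning idea — a single regex in the spirit of these checks, not one of SPRK3's actual 760+ patterns:

```python
import re

# Toy pattern (illustrative only): AWS access key IDs start with "AKIA"
# followed by 16 uppercase alphanumerics. Real scanners layer hundreds of
# such patterns with entropy checks and context filters to cut false positives.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

source_line = 'aws_access_key_id = "AKIAABCDEFGHIJKLMNOP"'
finding = AWS_KEY.search(source_line)
```

A real engine also records file, line, and severity for each finding — which is what feeds the per-finding CVSS scoring above.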
./sprk3_scan /your/project or ./sprk3_scan model.pt
The binary executes on your machine. Your source code never leaves it.
Only hashed paths and finding metadata reach our server.
We never see the secrets we find in your code.
Full access, no credit card, no limits.
Downloading models from HuggingFace daily. Every torch.load() on an untrusted model is a potential RCE. Scan catches it before you run it.
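The RCE claim isn't hypothetical: pickle (which torch.load uses under the hood for non-weights_only loads) lets any object dictate a callable to run at deserialization time via __reduce__. A self-contained stdlib demo — the payload here just prints a marker, where a real attack would return something like (os.system, ("malicious command",)):

```python
import io
import pickle
from contextlib import redirect_stdout

class Malicious:
    # __reduce__ tells pickle "to rebuild me, call this callable with
    # these args" — the callable runs during loading, before you touch
    # the object at all.
    def __reduce__(self):
        return (print, ("code executed during pickle.loads",))

payload = pickle.dumps(Malicious())

buf = io.StringIO()
with redirect_stdout(buf):
    pickle.loads(payload)  # loading alone triggers the callable
```

This is why torch.load(path, weights_only=True) (the default in recent PyTorch releases) restricts what the unpickler may construct — and why a scan flags every call site that omits it.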
pip installing from requirements.txt you didn't audit, loading pickled datasets, running notebooks from Kaggle. Every trust decision flagged.
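One mitigation behind several of those flags is digest pinning: record a trusted hash of every artifact, and refuse to deserialize anything that doesn't match. A minimal sketch, assuming the expected digest comes from a lockfile or audit record you control:

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Check an artifact's digest before deserializing it.
    expected_hex should come from a trusted, version-controlled source."""
    return hashlib.sha256(data).hexdigest() == expected_hex

blob = b"pretend this is a downloaded pickled dataset"
pinned = hashlib.sha256(blob).hexdigest()  # recorded at audit time

ok = verify_sha256(blob, pinned)
tampered = verify_sha256(blob + b"!", pinned)
```

pip supports the same idea natively: pip install --require-hashes with sha256 entries in requirements.txt rejects any package whose archive doesn't match its pinned digest.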
Your client or board asked if the ML code is secure. Scan gives you a NIST AI RMF report you can hand them.