Anubis AI is a deterministic, auditable AI inference engine built entirely from scratch in C++17 and CUDA. No PyTorch. No TensorFlow. No external ML dependencies. Every operation is reproducible, traceable, and hash-audited.
The engine is designed to be lean at the base and deep at the edge. The base model knows how to reason, explain, and communicate in multiple languages. Domain knowledge is delivered through small, swappable adapter packs — rather than baking all knowledge into a single massive model.
Independently built and funded by a solo developer. Every line of C++ and every CUDA kernel was written by hand.
The Anubis base model (~832M–1.6B parameters) is trained to reason, explain, follow instructions, and communicate across 10+ languages. It contains no domain knowledge by design.
A small (5–50M parameter) adapter is trained on curated domain content. The adapter teaches the base model what to reason about — without modifying the base model itself.
Load the base model + adapter into the Anubis runtime. Query in any language. Swap adapters to change domains. No cloud, no internet dependency after download.
Every inference is reproducible. Every operation is hash-audited. Results are cryptographically traceable — no probabilistic drift between runs.
Model weights are stored in .anubis binary checkpoint files with per-block SHA256 verification.
Each delivery includes a manifest.json with metadata, checksums, and the training configuration.
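A manifest might look roughly like the following. Every field name below is an illustrative assumption; the source only states that the manifest carries metadata, checksums, and the training configuration.

```json
{
  "model": "anubis-base",
  "version": "1.0.0",
  "checksums": {
    "weights.anubis": "sha256:..."
  },
  "training_config": {
    "parameters": "832M",
    "languages": 10
  }
}
```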