Blog & Demos
Tutorials, case studies, benchmarks, and open-source demos — everything you need to build with small language models.
Knowunity — 50% LLM Cost Reduction
Replaced frontier model API calls with distilled SLMs, cutting inference costs by 50% without sacrificing quality.
Octodet — Customer Study
How Octodet uses distil labs to power its AI workflows.
Rocketgraph — Customer Study
How Rocketgraph works with distil labs to run private AI on IBM Power with small language models.
Distil Labs Enables Rocketgraph's Private AI on IBM Power with Small Language Models
Fine-tuned IBM Granite 3.3 8B for OpenCypher query generation. 85% of Claude 4 performance, 10x faster, 100x less energy. Runs on IBM Power — no data leaves the enterprise.
Uptime Industries — Custom Model in Days from ~100 Datapoints
Built a custom model from ~100 datapoints in days. Self-service retraining with no vendor dependency.
We Benchmarked 12 Small Language Models Across 8 Tasks to Find the Best Base Model for Fine-Tuning
Qwen3-4B ranks #1 overall. Fine-tuned 4B matches or exceeds a 120B+ teacher on 7 of 8 benchmarks. A well-tuned 1B outperforms a prompted 8B.
Train Your SLM with the distil CLI Claude Skill
Train a specialized small language model for text-to-SQL conversion — entirely from within Claude Code using the distil labs CLI skill.
Building a Local Agent for Email Classification Using distil labs & n8n
A 0.6B email classifier that auto-labels Gmail locally. 93% accuracy from 154 seed examples. Runs on localhost via Ollama + n8n.
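The classifier above is served locally through Ollama, which exposes an HTTP API on localhost. As a rough sketch of what calling such a model from Python could look like (the model tag, label set, and prompt format here are illustrative assumptions, not the post's actual configuration):

```python
# Hedged sketch: querying a local Ollama-served classifier.
# The model tag and labels below are assumptions for illustration.
import json
import urllib.request

LABELS = ["work", "personal", "newsletter", "spam"]  # hypothetical label set


def build_prompt(subject: str, body: str) -> str:
    """Build a constrained single-label classification prompt."""
    return (
        "Classify the email into exactly one of: "
        + ", ".join(LABELS)
        + f"\n\nSubject: {subject}\nBody: {body}\n\nLabel:"
    )


def classify(subject: str, body: str, model: str = "email-classifier-0.6b") -> str:
    """Send the prompt to Ollama's local generate endpoint, return the label."""
    payload = json.dumps({
        "model": model,            # hypothetical fine-tuned model tag
        "prompt": build_prompt(subject, body),
        "stream": False,           # ask for a single complete response
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()
```

In the post itself, n8n handles the Gmail trigger and labeling; a snippet like this would only cover the model call in the middle of that workflow.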
Distil-PII: Family of PII Redaction SLMs
A family of PII redaction SLMs from 135M to 3B parameters. The 1B model matches a 685B teacher — runs on laptops, data never leaves your machine.