Blog & Demos

Tutorials, case studies, benchmarks, and open-source demos — everything you need to build with small language models.

Case Study

Knowunity — 50% LLM Cost Reduction

Replaced frontier model API calls with distilled SLMs, cutting inference costs by 50% without sacrificing quality.

Classification
Read more →
Case Study

Octodet — Customer Study

How Octodet uses distil labs to power their AI workflows.

Classification
Read more →
Case Study

Rocketgraph — Customer Study

How Rocketgraph uses distil labs to power private AI with small language models.

Question Answering
Read more →
Case Study

Distil Labs Enables Rocketgraph's Private AI on IBM Power with Small Language Models

Fine-tuned IBM Granite 3.3 8B for OpenCypher query generation. 85% of Claude 4 performance, 10x faster, 100x less energy. Runs on IBM Power — no data leaves the enterprise.

Question Answering · On-Prem / Edge
Read more →
Case Study

Uptime Industries — Custom Model in Days from ~100 Datapoints

Built a custom model from ~100 datapoints in days. Self-service retraining with no vendor dependency.

Question Answering
Read more →
Benchmark

We Benchmarked 12 Small Language Models Across 8 Tasks to Find the Best Base Model for Fine-Tuning

Qwen3-4B ranks #1 overall. Fine-tuned 4B matches or exceeds a 120B+ teacher on 7 of 8 benchmarks. A well-tuned 1B outperforms a prompted 8B.

Classification · Question Answering
Read more →
Demo

Train Your SLM with the distil CLI Claude Skill

Train a specialized small language model for text-to-SQL conversion — entirely from within Claude Code using the distil labs CLI skill.

Question Answering
Read more →
Demo

Building a Local Agent for Email Classification Using distil labs & n8n

A 0.6B email classifier that auto-labels Gmail locally. 93% accuracy from 154 seed examples. Runs on localhost via Ollama + n8n.

Classification · Agentic AI
Read more →
Demo

Distil-PII: Family of PII Redaction SLMs

A family of PII redaction SLMs from 135M to 3B parameters. The 1B model matches a 685B teacher — runs on laptops, data never leaves your machine.

Information Extraction · On-Prem / Edge
Read more →