Question Answering
Articles about training small language models for question answering and text-to-SQL.
Rocketgraph — Customer Study
How Rocketgraph partnered with distil labs to run private AI on-premises.
Distil Labs Enables Rocketgraph's Private AI on IBM Power with Small Language Models
Fine-tuned IBM Granite 3.3 8B for OpenCypher query generation: 85% of Claude 4's performance, 10x faster, and 100x less energy. Runs on IBM Power — no data leaves the enterprise.
Uptime Industries — Custom Model in Days from ~100 Datapoints
Built a custom model from ~100 datapoints in days. Self-service retraining with no vendor dependency.
We Benchmarked 12 Small Language Models Across 8 Tasks to Find the Best Base Model for Fine-Tuning
Qwen3-4B ranks #1 overall. Fine-tuned 4B matches or exceeds a 120B+ teacher on 7 of 8 benchmarks. A well-tuned 1B outperforms a prompted 8B.
Train Your SLM with the distil CLI Claude Skill
Train a specialized small language model for text-to-SQL conversion — entirely from within Claude Code using the distil labs CLI skill.
Benchmarking the Platform
Distilled students match or exceed the teacher LLM on 8 of 10 datasets across classification, NER, open-book QA, tool calling, and closed-book QA.