On-Prem / Edge
Articles about deploying small language models on-premises and at the edge.
Case Study
Distil Labs Enables Rocketgraph's Private AI on IBM Power with Small Language Models
Fine-tuned IBM Granite 3.3 8B for OpenCypher query generation: 85% of Claude 4's performance, 10× faster, and 100× less energy. Runs on IBM Power, so no data leaves the enterprise.
Question Answering · On-Prem / Edge
Demo
Distil-PII: Family of PII Redaction SLMs
A family of PII-redaction SLMs from 135M to 3B parameters. The 1B model matches its 685B teacher while running on a laptop, so data never leaves your machine.
Information Extraction · On-Prem / Edge