Information Extraction
Articles about training small language models for NER, PII redaction, and structured extraction.
Demo
Distil-PII: Family of PII Redaction SLMs
A family of PII redaction SLMs from 135M to 3B parameters. The 1B model matches its 685B-parameter teacher while running on a laptop, so data never leaves your machine.
Information Extraction · On-Prem / Edge
Read more →
Benchmark
Benchmarking the Platform
Distilled students match or exceed the teacher LLM on 8 of 10 datasets across classification, NER, open-book QA, tool calling, and closed-book QA.
Classification · Tool Calling · Information Extraction · Question Answering
Read more →