How Knowunity used distil labs to cut their LLM bill by 50%

February 11, 2026
We show how Knowunity uses the distil labs platform to fine-tune and deploy their own models, significantly reducing their LLM costs.

Teaching Small Language Models New Skills - Training a Local Cybersecurity Agent

February 11, 2026
Learn how a fine-tuned Small Language Model (SLM) delivers better cybersecurity log analysis and threat classification than massive LLMs. Specialized on the MITRE ATT&CK framework, the model runs entirely in local environments, keeping log data private while outperforming generalist models.

Train your SLM with distill-cli Claude Skill

January 27, 2026
Train a custom Text2SQL model by chatting with Claude and the Distil Labs skill: no ML expertise, no data labeling, just a conversation and a few examples.

Building a local agent for email classification using distil labs & n8n

January 27, 2026
Automatically label your emails with a locally running, fine-tuned SLM and n8n. Keep your email data private. No cloud LLM APIs required.

We benchmarked 12 small language models across 8 tasks to find the best base model for fine-tuning

December 10, 2025
We fine-tuned 12 small models to find which are most tunable and which perform best after fine-tuning. Surprise finding: Llama-3.2-1B showed the biggest improvement (most tunable), while Qwen3-4B delivered the best final performance, matching a 120B teacher on 7 of 8 tasks and outperforming it by 19 points on the SQuAD 2.0 dataset.

Small expert agents from 10 examples

October 15, 2025
Distil labs turns a prompt and a few dozen examples into a small, accurate expert agent. Our platform automates data generation, curation, fine-tuning, and evaluation, so you can reach LLM-level results with models 50–400× smaller, deployable almost anywhere, in hours.