
Vibe-Tuning: The Art of Fine-Tuning Small Language Models with a Prompt

November 10, 2025
Fine-tuning is a pain – you need datasets, ML expertise, and a stack of GPUs just to get started. Not anymore. With model vibe-tuning, you go from prompt to production-ready model without these headaches. This blog post shows you exactly how to build one, starting with just a prompt.

Distil Labs Enables Rocketgraph’s Private AI on IBM Power with Small Language Models

October 31, 2025
In this blog, we discuss how we fine-tuned a small language model to generate OpenCypher queries for the Rocketgraph analytics platform, thereby enabling accurate, efficient and privacy-first AI-powered natural language querying capabilities for Rocketgraph's customers.

Small expert agents from 10 examples

October 15, 2025
Distil Labs turns a prompt and a few dozen examples into a small, accurate expert agent. Our platform automates data generation, curation, fine-tuning, and evaluation—so you can reach LLM-level results with models 50–400× smaller, deployable almost anywhere, in hours.

Distil-PII: family of PII redaction SLMs

October 21, 2025
We trained and released a family of small language models (SLMs) specialized for policy-aware PII redaction. After targeted fine-tuning on a compact, well-specified task, our SLMs dramatically outperform their pre-trained counterparts on an LLM-as-judge evaluation. Notably, the 1B model, which can be deployed on a laptop, achieves 0.81 ± 0.02, effectively matching frontier 600B+ class LLMs (e.g., DeepSeek 3.1 at 0.84 ± 0.03), while retaining tight latency, low cost, and on-device privacy.

Gitara: How we trained a 3B Function-Calling Git Agent for Local Use

October 16, 2025
How we created a local tool-calling language model that turns plain-English questions into git commands with the accuracy of a cloud LLM. You can check it out in our GitHub repo or get the model directly from Hugging Face.

Using custom SLMs in Agentic AI

October 16, 2025
In this post we explore how you can combine the best of both worlds: the development speed of LLMs with the control and efficiency of machine learning. This blueprint offers a more efficient and environmentally sustainable way of designing AI systems. Use SLMs for narrow agent tasks: faster, cheaper, private. Keep LLMs for open-ended work.

Benchmarking the platform

October 16, 2025
In this post we benchmark our distilled SLMs against a strong teacher across classification, extraction, QA (open and closed), and tool-calling. The punchline: trained students crush the baselines and match or beat the teacher's accuracy.
© 2025 distil labs. All rights reserved.
