Using distil labs, we were able to spin up highly accurate custom small models tailored to our workflows in no time. Those models cut our inference costs by roughly 50% without sacrificing quality. The distil labs team was incredibly supportive as we got started and helped us get to production smoothly.
Lucas Hild
Co-Founder & CTO at Knowunity
The collaboration with distil labs shows what's possible when you fine-tune small language models and deploy them on IBM Power. It is not just about the performance; it's about delivering private, accurate AI that's ready for enterprise scale.
Unnikrishnan Rajagopal
Director at IBM Power
We needed a small model that could power our product on an IBM P11, entirely on-premises. distil labs' fine-tuned models allowed us to ship a self-contained solution where the SLM and our graph platform coexist on the same hardware. For customers in regulated industries, this means AI-powered query generation with complete data privacy: nothing ever leaves their environment.
David J. Haglin
Co-Founder and CTO at Rocketgraph
With distil labs, we built a custom model using just ~100 datapoints in days. The self-service retraining has been especially valuable for our team: we can retrain the model ourselves with new data. The distil labs team was responsive and guided us through the entire process.