Saturday, 8 November 2025

Show HN: Serve 100 large AI models on a single GPU with low impact on TTFT https://bit.ly/4qNmEME

I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more backends coming soon. With this project you can hot-swap entire large models (32B) on demand. It's great for:

- Serverless AI inference
- Robotics
- On-prem deployments
- Local agents

And it's open source. Let me know if anyone wants to contribute :) https://bit.ly/4nKufsu

November 9, 2025 at 12:48AM
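For context on what "hot-swapping" means here, the sketch below shows the naive baseline pattern using plain Hugging Face transformers: load a model to VRAM, serve it, free the memory, and load the next one. The full SSD-to-VRAM load on each swap is exactly the cold-start cost the engine claims to cut by up to 10x. The model IDs are placeholders, and this is not the project's actual API, just an illustration of the swap cycle.

import gc
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load(model_id: str):
    """Load a model and tokenizer onto the GPU (the slow, cold-start step)."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    return model, tok

def unload(model):
    """Release the model's VRAM so the next model can be swapped in."""
    del model
    gc.collect()
    torch.cuda.empty_cache()

def generate(model, tok, prompt: str) -> str:
    inputs = tok(prompt, return_tensors="pt").to("cuda")
    out = model.generate(**inputs, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

# Swap between two models on demand on a single GPU.
# Placeholder model IDs; any causal LMs that fit in VRAM work.
for model_id in ["mistralai/Mistral-7B-v0.1", "Qwen/Qwen2.5-7B"]:
    model, tok = load(model_id)
    print(generate(model, tok, "Hello,"))
    unload(model)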
