@LiorOnAI
That's it. You can now build custom models without lifting a finger.

Catalyst records every API request automatically. It sits between your app and your LLM provider.

Most fine-tuned models fail in production because they train on synthetic data. Catalyst records what users actually do. No fake datasets or manual labeling needed.

Training happens in four steps:

1. Use GPT-4 or Claude normally
2. Requests get captured automatically
3. Traces become labeled training data
4. Smaller models distill from usage

The distilled models match frontier quality on your use case, at 95% lower inference cost and 150ms latency. Apps that were too expensive at frontier pricing can now scale.

It also creates a flywheel: more users means better data, which means better models. Every unrecorded request is training signal lost.

The result: a production-grade model trained on your actual usage, at a fraction of the cost, without building a single dataset.
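The capture step is essentially a logging proxy around your existing model call. Catalyst's actual API isn't shown in the post, so here is a minimal sketch of that pattern with hypothetical names (`record_traces`, `fake_model`): every request/response pair is appended to a JSONL file in the chat-message format commonly used for fine-tuning.

```python
import json

def record_traces(model_fn, log_path):
    """Wrap a chat-completion callable so every request/response
    pair is appended to a JSONL file as a labeled training example."""
    def wrapped(messages, **kwargs):
        response = model_fn(messages, **kwargs)
        # One JSONL line per trace: the user conversation plus the
        # assistant reply it produced in production.
        example = {"messages": messages + [{"role": "assistant", "content": response}]}
        with open(log_path, "a") as f:
            f.write(json.dumps(example) + "\n")
        return response
    return wrapped

# Stand-in for a real provider call (e.g. GPT-4 or Claude).
def fake_model(messages, **kwargs):
    return "Paris"

chat = record_traces(fake_model, "traces.jsonl")
chat([{"role": "user", "content": "Capital of France?"}])
```

Because the wrapper is transparent to the caller, the app keeps using the frontier model normally while the training set accumulates on the side, which is what makes step 4 (distilling a smaller model from usage) possible later.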