@orhundev
New TUI dropped for managing LLM traffic and GPU resources 🔥

ollamaMQ: Async message queue proxy for Ollama

🎯 Per-user queues, fair-share scheduling, OpenAI-compatible endpoints, streaming
📦 Written in Rust & built with @ratatui_rs

GitHub: https://t.co/0UthA7KPIg

#rustlang #ratatui #tui #gpu #llm #ollama #backend #proxy #terminal
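Since the proxy advertises OpenAI-compatible endpoints, any OpenAI-style client should be able to talk to it directly. A minimal sketch in Rust using reqwest and serde_json; the listen address (localhost:8080) and model name are placeholder assumptions, not documented values, and the /v1/chat/completions path follows the standard OpenAI API convention the proxy claims compatibility with.

```rust
// Cargo.toml dependencies (assumed):
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"
use reqwest::blocking::Client;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Standard OpenAI-style chat completion payload.
    let body = json!({
        "model": "llama3",  // placeholder: any model served by the Ollama backend
        "messages": [{ "role": "user", "content": "Hello!" }],
        "stream": false
    });

    // Requests land in a per-user queue and are fair-share scheduled by the
    // proxy before being forwarded to Ollama on the GPU.
    let resp = client
        .post("http://localhost:8080/v1/chat/completions") // assumed address
        .json(&body)
        .send()?;

    println!("{}", resp.text()?);
    Ok(())
}
```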