jundot/omlx: LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar