

I successfully ran local Llama with llama.cpp and an old AMD GPU. I’m not sure why you think there’s no other option.
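Roughly, the invocation looks like this (a sketch, not my exact command; the model path and -ngl layer count are placeholders, and the binary name has changed across llama.cpp versions):

    # -m: quantized GGUF model file; -ngl: number of layers to offload to the GPU
    ./build/bin/llama-cli -m ./models/llama-7b.Q4_K_M.gguf -p "Hello" -ngl 32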


Have they released the hi-fi tier they promised years ago?
Between Tidal’s high-quality streaming and my Jellyfin server with FLAC rips of my CDs, I’m happy.
Llama.cpp now has a Vulkan backend, so it largely doesn’t matter which vendor’s card you’re using.
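For anyone who wants to try it, the Vulkan build is roughly this (a sketch; it assumes the Vulkan SDK and drivers are installed, and the exact CMake flag has changed between versions, so check the repo's build docs):

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_VULKAN=1
    cmake --build build --config Release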