llama.cpp

Version 7526, updated 2 weeks, 3 days ago

LLM inference in C/C++

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

https://github.com/ggerganov/llama.cpp

To install llama.cpp, paste this into the macOS Terminal after installing MacPorts:

sudo port install llama.cpp


Installations: 10
Requested Installations: 10