text-generation-webui (oobabooga)
Also known as: oobabooga, ooba, textgen-webui
text-generation-webui (often called oobabooga) is a browser-based interface for running large language models locally. It wraps multiple backends (llama.cpp, ExLlamaV2, AutoGPTQ, Transformers) under a single Gradio UI, letting operators load, chat with, and configure models without writing code. Key features include model loading with quantization presets, multi-turn chat, instruction-following templates, and LoRA/QLoRA fine-tuning. Operators encounter it as a turnkey alternative to command-line tools like llama.cpp or Ollama, especially when they want a graphical interface for experimenting with different model architectures and quantization methods.
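Although the focus is the browser UI, the server can also be started with the --api flag, which exposes an OpenAI-compatible HTTP API (on port 5000 by default). Below is a minimal sketch of querying it from Python, assuming a model is already loaded and the requests package is installed:

```python
# Minimal sketch: query text-generation-webui's OpenAI-compatible API.
# Assumes the server was launched with the --api flag (API defaults to
# port 5000) and that a model is already loaded. The endpoint path and
# response shape follow the OpenAI chat-completions spec.
import requests

URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed default API port

payload = {
    "messages": [{"role": "user", "content": "Summarize what a GGUF file is."}],
    "max_tokens": 200,
    "temperature": 0.7,
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```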
Practical example
An operator with an RTX 3090 (24 GB VRAM) uses text-generation-webui to load Llama 3.1 70B at Q4_K_M (~40 GB) via the llama.cpp backend, keeping as much of the model as fits in the 24 GB of VRAM and offloading the remaining ~16 GB to system RAM. The UI shows tokens/sec dropping to ~4, versus the ~30 they see when a model fits entirely in VRAM. They switch to the ExLlamaV2 backend with a 13B model at 4-bit GPTQ (~7 GB), which runs entirely in VRAM at ~50 tok/s.
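The slowdown is easy to estimate with back-of-envelope arithmetic. The sketch below splits the model across VRAM and system RAM; the layer count is real for Llama 3.1 70B (80 transformer layers), but the per-layer size and the overhead reserved for KV cache and activations are rough assumptions, not measurements:

```python
# Back-of-envelope sketch of the GPU/CPU layer split for a GGUF model.
# Illustrative numbers: Llama 3.1 70B has 80 transformer layers, and a
# Q4_K_M quant of it is roughly 40 GB; the ~2 GB reserved for KV cache
# and activations is an assumed figure.
model_gb = 40.0
n_layers = 80
vram_gb = 24.0
overhead_gb = 2.0          # assumed KV cache + activation headroom

per_layer_gb = model_gb / n_layers
gpu_layers = int((vram_gb - overhead_gb) / per_layer_gb)
cpu_layers = n_layers - gpu_layers

print(f"~{per_layer_gb:.2f} GB per layer")
print(f"fits ~{gpu_layers} layers in VRAM (set n-gpu-layers accordingly)")
print(f"~{cpu_layers} layers ({cpu_layers / n_layers:.0%}) run from system RAM")
```

Every token must pass through the CPU-resident layers at system-RAM bandwidth, which is why throughput collapses well before the offloaded fraction reaches half the model.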
Workflow example
After installing via git clone https://github.com/oobabooga/text-generation-webui and running start_linux.sh, the operator opens http://localhost:7860. They select the Model tab, choose a backend (e.g., llama.cpp), enter a model path like ~/models/Llama-3.1-8B-Instruct-Q4_K_M.gguf, click Load, then switch to the Chat tab to interact. If VRAM is insufficient, they lower the n-gpu-layers slider so that fewer layers are placed on the GPU and the rest run from system RAM.
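The same load step can be scripted instead of clicked. Recent builds expose internal model-management routes alongside the OpenAI-compatible API when the server runs with --api; the /v1/internal/model/load path and the payload keys below follow the project's OpenAI extension but may vary between versions, so treat this as a sketch:

```python
# Hedged sketch: load a model over the API instead of the Model tab.
# Assumes the server runs with --api and that this build exposes the
# /v1/internal/model/load route from the OpenAI extension; the payload
# keys ("model_name", "args") may differ between versions.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/internal/model/load",
    json={
        "model_name": "Llama-3.1-8B-Instruct-Q4_K_M.gguf",  # file under models/
        "args": {
            "loader": "llama.cpp",
            "n_gpu_layers": 33,  # lower this if VRAM runs out
        },
    },
    timeout=600,  # large models can take a while to load
)
resp.raise_for_status()
print("model loaded")
```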