Orchestrator · Open source · Free (OSS, Apache 2.0) + Anyscale managed

Ray Serve

By Fredoline Eruo · Last verified May 6, 2026 · 33,000 GitHub stars

Overview

Ray Serve is distributed model serving built on top of Ray. It lets you stitch vLLM, SGLang, or custom runtimes into a multi-replica, multi-model deployment with autoscaling, traffic splitting, and pipeline composition: the orchestration layer above raw inference engines.
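
A minimal sketch of that pattern, assuming Ray Serve 2.x with vLLM as the engine underneath. The model name, replica count, and sampling parameters are illustrative placeholders, not a recommended configuration:

```python
# Sketch only: Ray Serve replicas each wrapping a vLLM engine.
from ray import serve
from vllm import LLM, SamplingParams  # any engine could sit here instead

@serve.deployment(num_replicas=2, ray_actor_options={"num_gpus": 1})
class Generator:
    def __init__(self):
        # Each replica owns one engine instance on its own GPU.
        self.llm = LLM(model="facebook/opt-125m")  # placeholder model

    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        # The blocking generate() keeps the sketch short; a production
        # deployment would use the engine's async API instead.
        out = self.llm.generate([prompt], SamplingParams(max_tokens=64))
        return {"text": out[0].outputs[0].text}

# Deploys on the current Ray cluster and exposes an HTTP endpoint.
serve.run(Generator.bind())
```

Scaling replicas, splitting traffic between versions, or swapping the engine all happen at this layer, without touching engine code.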

Stack & relationships

How Ray Serve relates to other entries in the catalog — recommended pairings, alternatives, dependencies, and edges to avoid. Each edge carries a one-line operator note from our editorial team.

Ray Serve ↔ ecosystem

Recommended stack

  • Commonly deployed with
    vLLM

    Ray Serve in front of vLLM is the canonical K8s production pattern: autoscaling replicas, traffic splitting, canary deploys. The classic 'we replaced our OpenAI bill' production stack.

  • Commonly deployed with
    SGLang

    Same orchestration layer above SGLang as above vLLM; Ray Serve doesn't care which engine is underneath, and that is the architectural point. SGLang's cross-replica RadixAttention sync compounds the cluster-level wins at multi-node scale. A composition sketch follows this list.
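
The engine-agnostic point is easy to show in code. A hypothetical composition sketch: an ingress deployment routes to two backends through Serve handles. The backend classes are stubs standing in for real vLLM and SGLang wrappers, and the engine request field is an invented convention, not a Ray Serve API:

```python
from ray import serve
from ray.serve.handle import DeploymentHandle

@serve.deployment
class VLLMBackend:                      # stub for a real vLLM wrapper
    async def __call__(self, prompt: str) -> str:
        return f"vllm: {prompt}"

@serve.deployment
class SGLangBackend:                    # stub for a real SGLang wrapper
    async def __call__(self, prompt: str) -> str:
        return f"sglang: {prompt}"

@serve.deployment
class Router:
    def __init__(self, vllm: DeploymentHandle, sglang: DeploymentHandle):
        self.vllm, self.sglang = vllm, sglang

    async def __call__(self, request):
        body = await request.json()
        # Both backends expose the same call shape, so the router never
        # needs to know which engine sits underneath.
        handle = self.vllm if body.get("engine") == "vllm" else self.sglang
        return await handle.remote(body["prompt"])

# Bound sub-deployments arrive in Router.__init__ as handles.
app = Router.bind(VLLMBackend.bind(), SGLangBackend.bind())
```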

Alternatives

  • Alternative to
    Exo

    Exo for Apple-Silicon LAN clusters; Ray Serve for datacenter multi-node. Different hardware targets; non-overlapping operating points.

Featured in this stack

The L3 execution stacks that pick this tool as a recommended component, with the one-line note explaining the role it plays in each.

  • Stack · L3 · Production tier · Role: Cluster orchestrator (head node + worker placement)
    Build a distributed inference homelab stack (May 2026)

    Ray Serve is the canonical orchestration layer above vLLM in distributed deployments. Handles worker placement, autoscaling, traffic splitting, canary deploys. Same Ray cluster scales to add SGLang or other engines later — pick the orchestrator first; pick the engine inside it.
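
To make the autoscaling piece concrete, a minimal sketch of a Serve deployment with an autoscaling policy. The autoscaling_config keys match recent Ray releases but have been renamed across versions, so treat the exact field names as assumptions and check the docs for your release:

```python
from ray import serve

@serve.deployment(
    ray_actor_options={"num_gpus": 1},
    autoscaling_config={
        "min_replicas": 1,
        "max_replicas": 8,
        # Add replicas once per-replica in-flight requests exceed this.
        "target_ongoing_requests": 16,
    },
)
class Engine:
    async def __call__(self, request) -> dict:
        # Engine call elided; a vLLM or SGLang wrapper slots in here.
        return {"ok": True}

serve.run(Engine.bind())
```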

Pros

  • Composes multiple runtimes (vLLM + custom) under one routing layer
  • Autoscaling + canary deploys built in
  • Same stack handles training + serving

Cons

  • Ray adds operational surface area
  • Overkill for single-model deployments

Compatibility

Operating systems
Linux
macOS
Kubernetes
GPU backends
NVIDIA CUDA
AMD ROCm
License
Open source · free (OSS, Apache 2.0) + Anyscale managed

Get Ray Serve
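
Install the open-source distribution with pip install "ray[serve]", or use Anyscale's managed platform.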

Frequently asked

Is Ray Serve free?

Yes. Ray Serve itself is open source under Apache 2.0 and free to self-host. The paid option is Anyscale's managed platform; check the pricing page for current terms.

What operating systems does Ray Serve support?

Ray Serve runs on Linux and macOS, and deploys to Kubernetes clusters via the KubeRay operator.

Which GPUs work with Ray Serve?

Ray Serve supports NVIDIA CUDA, AMD ROCm. CPU-only inference is also possible but slow.

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.