RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo

other · 0.57B parameters · Commercial OK · Reviewed May 2026

BGE Reranker v2 M3

BGE M3 reranker. Cross-encoder for re-ranking RAG candidates; multilingual.

License: MIT · Released Apr 15, 2024 · Context: 8,192 tokens

Our verdict

OP · Fredoline Eruo · Verified May 8, 2026
unrated

Positioning

BAAI's BGE Reranker V2 M3 is the canonical companion reranker to BGE-M3 and the default open-weight cross-encoder reranker for production RAG pipelines in 2026. ~568M parameters (XLM-RoBERTa base, same architecture as BGE-M3 but trained as a cross-encoder), 8K context, multilingual coverage matching BGE-M3 (100+ languages). Released under MIT license — fully permissive commercial use. The model takes (query, document) pairs and outputs a relevance score — used as the second stage in retrieve-then-rerank pipelines after fast first-stage retrieval via BGE-M3 or other dense embedders.
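The contract is simple: a list of (query, document) pairs goes in, one relevance score per pair comes out. A minimal sketch of that interface, using a toy word-overlap scorer as a stand-in (the function name and scoring rule are illustrative assumptions, not the model; with sentence-transformers installed, `CrossEncoder("BAAI/bge-reranker-v2-m3").predict` fills the same role with a real forward pass per pair):

```python
def overlap_score(pairs):
    """Stand-in relevance scorer: fraction of query terms present in the doc.
    A real cross-encoder replaces this with one transformer forward pass
    per (query, doc) pair."""
    scores = []
    for query, doc in pairs:
        q_terms = set(query.lower().split())
        d_terms = set(doc.lower().split())
        scores.append(len(q_terms & d_terms) / max(len(q_terms), 1))
    return scores

pairs = [
    ("giant panda habitat", "The giant panda lives in bamboo forests."),
    ("giant panda habitat", "Paris is the capital of France."),
]
print(overlap_score(pairs))  # the relevant document scores higher
```

Whatever implements the scorer, downstream code only sees the scores, which is why the same pipeline works with a local model or a hosted reranking API.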

Strengths

  • Best-in-class open-weight reranker for multilingual RAG pipelines.
  • Tight integration with BGE-M3: same architecture base, same multilingual coverage, designed to chain.
  • 8K context handling matches BGE-M3 — long-document chunks rerank without truncation issues.
  • MIT license = unconstrained commercial use.
  • Small and fast: at 568M parameters it reranks hundreds of (query, doc) pairs per second on a single GPU.
  • Real quality lift over no-reranker baseline. Adding BGE Reranker V2 M3 to a BGE-M3 retrieval pipeline typically improves NDCG@10 by 8-15% vs dense-only retrieval.

Limitations

  • Cross-encoder inference is more expensive than dense retrieval. Each (query, doc) pair requires a forward pass — only practical for re-ranking the top-N (typically 50-200) candidates from first-stage dense retrieval.
  • Not as strong as massive proprietary rerankers on specific English-domain tasks. Cohere Rerank 3, voyage-rerank-2, OpenAI's text-rerank API may win on English-only benchmarks.
  • Code reranking is not its strength. For code retrieval reranking, specialized code rerankers win.
  • Architecture is conservative. Newer cross-encoders may surpass it on specific MTEB reranking benchmarks, but BGE Reranker V2 M3 remains the default where "good enough" quality plus open weights is the requirement.
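The first limitation is easy to put numbers on: the cross-encoder runs one forward pass per candidate, so rerank latency scales linearly with N. A back-of-envelope sketch (the helper function is hypothetical, and the throughput figures are this review's ballpark ranges, not measurements):

```python
def rerank_latency_s(n_candidates: int, pairs_per_sec: float) -> float:
    """Linear cost model: one cross-encoder forward pass per (query, doc) pair."""
    return n_candidates / pairs_per_sec

# Reranking 100 candidates at ~200 pairs/sec (consumer GPU) vs ~20 pairs/sec (CPU).
print(rerank_latency_s(100, 200))  # 0.5 seconds
print(rerank_latency_s(100, 20))   # 5.0 seconds
```

This is why the reranker sits behind a fast first stage: reranking 100 candidates is cheap, reranking an entire corpus is not.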

Real-world performance

  • vs Cohere Rerank 3 (API): Cohere wins on best-in-class English. BGE Reranker V2 M3 wins on cost (self-hosted), multilingual, and unconstrained commercial use.
  • vs voyage-rerank-2 (API): voyage-rerank-2 wins on best English domain quality; BGE Reranker V2 M3 wins on cost + multilingual.
  • vs no-reranker dense retrieval: 8-15% NDCG@10 improvement on most retrieval tasks. Worth the inference cost for accuracy-sensitive pipelines.
  • vs older bge-reranker-large: Strict upgrade with multilingual + 8K context.

Should you run this locally?

Yes if you have any RAG pipeline where retrieval quality matters. The retrieve-then-rerank pattern (BGE-M3 dense retrieval → BGE Reranker V2 M3 cross-encoder reranking → top-K to LLM context) is the canonical open-weight RAG retrieval architecture in 2026.

Pair with: BGE-M3 for first-stage dense retrieval. The combination is the default open-weight RAG retrieval stack.

How it compares

  • vs BGE-M3: Different roles. BGE-M3 is the dense embedder (encoder); Reranker V2 M3 is the cross-encoder reranker. Use both in a retrieve-then-rerank pipeline.
  • vs older bge-reranker-large: V2 M3 is the strict upgrade — multilingual, 8K context.
  • vs Cohere Rerank 3 (API): API wins on English; BGE wins on cost + multilingual + unconstrained license.
  • vs cross-encoder/ms-marco-MiniLM-L-12-v2: Older smaller cross-encoder. BGE Reranker V2 M3 strict upgrade.

Run this yourself

  • CPU-only: Functional via the SentenceTransformers CrossEncoder API; 10-30 pairs/sec on a modern CPU.
  • Single GPU: Any modern GPU with 4+ GB VRAM. 100-500 pairs/sec on consumer GPU.
  • Production: Text Embeddings Inference (TEI) supports rerankers — same serving infrastructure as embeddings.
  • Pipeline pattern: BGE-M3 retrieves 100 candidates → BGE Reranker V2 M3 reranks → top-10 to LLM.
  • Vendor: BAAI / Hugging Face: BAAI/bge-reranker-v2-m3.
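The pipeline pattern bullet above reduces to one small function: score every (query, candidate) pair with the reranker, sort, keep the top k. A self-contained sketch (function and parameter names are my own; the scorer is injected so the sketch runs without downloading weights, and the commented lines show how the real model would plug in via sentence-transformers, an assumed dependency):

```python
from typing import Callable, Sequence

def rerank_top_k(
    query: str,
    candidates: Sequence[str],
    score_pairs: Callable[[list[tuple[str, str]]], Sequence[float]],
    k: int = 10,
) -> list[tuple[str, float]]:
    """Second-stage rerank: one cross-encoder score per (query, doc) pair,
    then keep the k highest-scoring candidates for the LLM context."""
    scores = score_pairs([(query, doc) for doc in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# With the real model (first call downloads ~1.1 GB of FP16 weights):
#   from sentence_transformers import CrossEncoder
#   reranker = CrossEncoder("BAAI/bge-reranker-v2-m3")
#   top10 = rerank_top_k(query, candidates_from_bge_m3, reranker.predict, k=10)
```

Because the scorer is a plain callable, the same function works whether scores come from a local CrossEncoder or from a TEI rerank service.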


Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
  • BGE M3 (0.57B)
Family siblings (bge)
  • BGE M3 (0.57B)
  • BGE Reranker v2 M3 (0.57B) · you are here

Strengths

  • MIT
  • Multilingual reranker

Weaknesses

  • Slower than embedding-only ranking

Quantization variants

Each quantization trades model quality for file size and VRAM. Only an FP16 release is listed for this model.

Quantization    File size    VRAM required
FP16            1.1 GB       2 GB

Get the model

HuggingFace

Original weights

huggingface.co/BAAI/bge-reranker-v2-m3

Source repository — direct quantization required.

Hardware that runs this

Cards with enough VRAM for at least one quantization of BGE Reranker v2 M3.

  • NVIDIA GB200 NVL72 · 13824 GB
  • AMD Instinct MI355X · 288 GB
  • AMD Instinct MI325X · 256 GB
  • AMD Instinct MI300X · 192 GB
  • NVIDIA B200 · 192 GB
  • NVIDIA H100 NVL · 188 GB
  • NVIDIA H200 · 141 GB
  • Intel Gaudi 3 · 128 GB

Frequently asked

What's the minimum VRAM to run BGE Reranker v2 M3?

2 GB of VRAM is enough to run BGE Reranker v2 M3 in FP16 (file size 1.1 GB), the only variant listed here.

Can I use BGE Reranker v2 M3 commercially?

Yes — BGE Reranker v2 M3 ships under the MIT license, which permits commercial use. Always read the license text before deployment.

What's the context length of BGE Reranker v2 M3?

BGE Reranker v2 M3 supports a context window of 8,192 tokens (about 8K).

Source: huggingface.co/BAAI/bge-reranker-v2-m3

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Related — keep moving

Compare hardware
  • 4060 Ti 16 GB vs 4070 Ti Super →
  • Arc B580 vs 4060 Ti 16 GB →
Buyer guides
  • Best budget GPU — for 7B-13B models →
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Recommended hardware
  • NVIDIA GB200 NVL72 →
  • AMD Instinct MI355X →
  • AMD Instinct MI325X →
  • AMD Instinct MI300X →
  • NVIDIA B200 →
Alternatives
BGE M3
Before you buy

Verify BGE Reranker v2 M3 runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →
Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • BGE M3
    other · 0.57B
    unrated
  • Llama 3.2 1B Instruct
    llama · 1B
    6.0/10
Step up
More capable — bigger memory footprint
  • Gemma 3 4B
    gemma · 4B
    7.5/10
  • Llama 3.2 3B Instruct
    llama · 3B
    7.4/10
Step down
Smaller — faster, runs on weaker hardware
No models with verdicts in the next tier down yet.