Fusion-aware ordering
Intelligently blends dense and sparse candidates instead of relying on fixed heuristics.
Fine-tuned on your data
Learns from historical dense and sparse outcomes so every blend reflects your domain.
Evidence-based scoring
Ranks by learned relevance, not raw similarity, giving the reranker a stronger shortlist.
Calibrated confidence
Produces scores with clear thresholds so downstream systems know when to trust the set.
How it works
We replace Reciprocal Rank Fusion (RRF)—a fixed heuristic—with a learned model that produces a calibrated score. For a query q and candidate d, we learn:
s_fuse(q, d) = g([s_dense, s_sparse, Δrank, features, φ(q)])
Key inputs to the model:
- s_dense, s_sparse: the raw dense and sparse retrieval scores for the candidate
- Δrank: the rank disagreement between the dense and sparse candidate lists
- features: additional candidate-level signals (e.g., document metadata)
- φ(q): a learned representation of the query
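As a minimal sketch of this input assembly and scoring, assuming a linear-logistic form for g and illustrative feature names (the production feature set would be richer):

```python
import numpy as np

def fusion_features(s_dense, s_sparse, rank_dense, rank_sparse, query_emb):
    """Assemble the input vector for the learned fusion model g(.).

    All argument names are illustrative placeholders, not the
    production feature set.
    """
    delta_rank = float(rank_dense - rank_sparse)  # Δrank: positional disagreement
    return np.concatenate([
        np.array([s_dense, s_sparse, delta_rank]),
        np.asarray(query_emb),  # φ(q): a compact query representation
    ])

def fuse_score(x, w, b):
    """g(.) as a sigmoid over a linear combination, yielding a score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))
```

A bounded (0, 1) output is what makes the downstream thresholding described above ("Calibrated confidence") straightforward.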
Training optimizes pairwise or listwise ranking losses that target metrics such as NDCG and MRR, so the fusion score mirrors real satisfaction outcomes while keeping downstream costs in check.
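For concreteness, a pairwise objective of this kind can be sketched as a RankNet-style logistic loss (an assumed choice; the document does not pin down the exact loss):

```python
import numpy as np

def pairwise_logistic_loss(s_pos, s_neg):
    """Pairwise ranking loss on fusion scores.

    Penalizes cases where the relevant candidate's score s_pos does not
    exceed the irrelevant candidate's score s_neg:
        loss = log(1 + exp(-(s_pos - s_neg)))
    Small when s_pos >> s_neg, large when the order is inverted.
    """
    return np.log1p(np.exp(-(s_pos - s_neg)))
```

Minimizing this over labeled (relevant, irrelevant) candidate pairs pushes the learned fusion score to rank real positives above negatives, which is what the NDCG/MRR targets measure.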