
zeroentropy/zerank-2

Text Ranking · sentence-transformers · Safetensors · English · qwen3 · finance · legal · code · stem · medical

Instructions to use zeroentropy/zerank-2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • sentence-transformers

    How to use zeroentropy/zerank-2 with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # Load the cross-encoder reranker from the Hugging Face Hub
    model = CrossEncoder("zeroentropy/zerank-2")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair; higher scores mean higher relevance
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
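The scores returned by predict() can then be paired with their passages and sorted to rank the candidates. A minimal sketch using numpy; the helper name rank_passages is illustrative, and the scores below are made-up stand-ins for real model output:

```python
import numpy as np

def rank_passages(passages, scores):
    """Pair each passage with its score and sort by descending relevance."""
    order = np.argsort(scores)[::-1]
    return [(passages[i], float(scores[i])) for i in order]

# Illustrative scores; in practice these come from model.predict(...)
passages = ["Venus ...", "Mars ...", "Jupiter ...", "Saturn ..."]
scores = [0.12, 2.31, 0.05, -0.40]

ranked = rank_passages(passages, scores)
print(ranked[0])  # → ('Mars ...', 2.31)
```

Recent versions of sentence-transformers also expose a CrossEncoder.rank(query, documents) convenience method that performs this scoring-and-sorting in one call.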
  • Notebooks
  • Google Colab
  • Kaggle

Community discussions

  • [pinned] Breaking change: predict() now returns raw logits (was sigmoid in [0, 1]) · #9 opened about 17 hours ago by dilawarm
  • where is weights · 2 · #6 opened 3 months ago by m1kk0n
  • vLLM support? · 1 · #5 opened 4 months ago by GLECO
  • Use torch.inference_mode() and disable gradient checkpointing · #4 opened 5 months ago by prathamj31
  • Fix config.json for batching · #3 opened 6 months ago by HatimF