How it works
From a vague idea to a ranked shortlist in minutes — here’s the flow.
1. Define criteria
Describe what a good paper looks like for your use case. Create weighted rubrics using natural language.
- Create multiple criteria with weights
- Reuse saved rubrics across projects
- Use AI to draft and refine
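A weighted rubric like the one described above can be thought of as a small list of named criteria, each with a natural-language prompt and a relative weight. This is only an illustrative sketch; the class and field names here are hypothetical, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str     # short label for the criterion
    prompt: str   # natural-language description of what "good" means
    weight: float # relative importance within the rubric

# A hypothetical rubric for a literature search on retrieval methods.
rubric = [
    Criterion("novelty", "Does the paper introduce a genuinely new method?", 0.5),
    Criterion("rigor", "Are the claims backed by experiments or proofs?", 0.3),
    Criterion("relevance", "Is the paper about retrieval for language models?", 0.2),
]

# Keeping weights summing to 1 makes overall scores comparable across rubrics.
total_weight = sum(c.weight for c in rubric)
```

Saving such a rubric as data is what makes it reusable across projects.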
2. Search and fetch
Query arXiv with topic-aware keywords. We fetch matching papers and their metadata, ready for evaluation.
- Smart query expansion
- Filter by date, authors, categories
- Deduplicate and normalize data
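To make the search step concrete, here is a minimal sketch of composing a query URL for the public arXiv API, with keyword expansion joined by `OR` and an optional category filter. The helper name and defaults are assumptions for illustration; no network call is made here.

```python
from urllib.parse import urlencode

def build_arxiv_query(keywords, category=None, max_results=25):
    """Compose an arXiv API query URL (construction only, no fetch)."""
    # Expand the topic into an OR of keyword clauses.
    clause = " OR ".join(f'all:"{k}"' for k in keywords)
    if category:
        clause = f"({clause}) AND cat:{category}"
    params = {
        "search_query": clause,
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",   # newest papers first
        "sortOrder": "descending",
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = build_arxiv_query(["retrieval augmented generation", "RAG"],
                        category="cs.CL")
```

Fetching the resulting Atom feed and parsing titles, authors, and abstracts is then a straightforward follow-up step.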
3. Evaluate and rank
Each paper is scored against your criteria. Get ranked results with concise AI-generated summaries.
- Per-criterion scores and weights
- Readable, focused summaries
- Export results and revisit later
Data sources
We integrate with arXiv to retrieve the latest abstracts and metadata relevant to your search.
LLM assistance
AI helps draft criteria, expand queries, and summarize results with an emphasis on your rubric.
Scoring model
Weighted criteria produce an overall score per paper so you can compare at a glance.
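The weighted-score idea reduces to a weighted average of per-criterion scores. A minimal sketch, assuming scores on a 0-10 scale and the hypothetical criterion names from earlier; the real system's scale and names may differ.

```python
def overall_score(scores, weights):
    """Weighted average of per-criterion scores."""
    total_weight = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total_weight

weights = {"novelty": 0.5, "rigor": 0.3, "relevance": 0.2}
paper = {"novelty": 8, "rigor": 6, "relevance": 9}

# 8*0.5 + 6*0.3 + 9*0.2 = 4.0 + 1.8 + 1.8 = 7.6
result = overall_score(paper, weights)
```

Because every paper is scored against the same weights, the resulting numbers can be sorted directly to produce the ranked shortlist.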
Ready to try it?
Create an account and start your first AI‑powered search today.