Help Us Grow ResearchCodeBench!
We’re looking for recent ML papers with available code to add to our benchmark. ResearchCodeBench is continuously evolving, and we welcome contributions from the research community to expand our collection of papers and implementation challenges.
Help us identify the core contributions within the code so we can use them to test LLMs’ ability to implement novel ideas. Your expertise will help shape the future of AI evaluation benchmarks.
What We Need
- Paper Link (arXiv, conference page, etc.)
- Official GitHub Repository Link
- Your thoughts on which code sections or lines implement the paper’s core contribution
Submission Guidelines
Paper Requirements
- Published in 2024 or later
- Available on arXiv or accepted at a top-tier ML conference
- Has an official code repository with open-source license
- Contains novel algorithmic contributions suitable for LLM evaluation
Code Requirements
- Well-documented and runnable
- Contains clear implementation of the paper’s main contributions
- Has identifiable core functions that represent the novel ideas (see the sketch after this list)
- Includes test cases or examples
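To make the last two requirements concrete, here is a minimal, purely hypothetical sketch of what an "identifiable core function" with a small test case might look like in a submitted repository. The paper, the function name margin_scaled_contrastive_loss, and the loss itself are invented for illustration only and do not come from any benchmark paper.

```python
# Hypothetical example of a core function a contributor might point us to.
# The function name and formula are illustrative, not from a real submission.
import torch
import torch.nn.functional as F


def margin_scaled_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """Core contribution of the (hypothetical) paper: a temperature-scaled
    contrastive loss between two batches of embeddings."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    labels = torch.arange(z1.size(0))    # matching pairs lie on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Minimal test case showing the core function runs end to end.
    a, b = torch.randn(8, 32), torch.randn(8, 32)
    print(margin_scaled_contrastive_loss(a, b).item())
```

A function like this, clearly documented and exercised by a small test, is exactly the kind of code section we ask contributors to flag.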
How to Submit
You can submit your contribution through our online form.
What Happens Next?
- Review Process: Our team will review your submission within 2-3 weeks
- Code Analysis: We’ll identify the key functions and create evaluation challenges
- Testing: We’ll test the challenges with existing LLMs to ensure quality
- Integration: Approved challenges will be added to the benchmark
- Credit: You’ll be acknowledged as a contributor in our publications and on our website
Contributors Get Credited!
All contributors will be acknowledged in our publications and on the official ResearchCodeBench website. Your contribution will help advance the field of AI research evaluation.
Why Your Contribution Matters
By contributing to ResearchCodeBench, you’re helping advance the development of AI systems that can understand and implement novel research ideas. This benchmark serves as a critical evaluation tool for measuring progress toward more capable AI research assistants.
Join us in building the future of AI research tools!
Contact Us
Have questions about the submission process? Reach out to us:
- Email: researchcodebench@gmail.com
- GitHub: github.com/researchcodebench
We appreciate your contribution to advancing AI research evaluation!