Contribute to ResearchCodeBench

Help Us Grow ResearchCodeBench!

We’re looking for recent ML papers with available code to add to our benchmark. ResearchCodeBench is continuously evolving, and we welcome contributions from the research community to expand our collection of papers and implementation challenges.

Help us identify the core contributions within the code that test LLMs’ ability to implement novel ideas; a hypothetical sketch of what such a contribution might look like follows below. Your expertise will help shape the future of AI evaluation benchmarks.
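
To make this concrete, here is an illustrative sketch of the kind of “core contribution” we mean. Everything in it (the paper, the function, and the comment markers) is hypothetical and is not an official ResearchCodeBench annotation format; it only shows that a good challenge isolates a short, self-contained span of code carrying the paper’s novel idea.

```python
# Hypothetical example only: the "paper", the function, and the comment
# markers are illustrative, not the official ResearchCodeBench format.

def smoothed_update(params, grads, state=None, lr=0.1, beta=0.9):
    """One optimizer step using a (made-up) paper's smoothing rule."""
    if state is None:
        state = [0.0] * len(params)
    # >>> core contribution: the span an LLM would be asked to
    # >>> re-implement from the paper's description alone
    state = [beta * s + (1 - beta) * g for s, g in zip(state, grads)]
    params = [p - lr * m for p, m in zip(params, state)]
    # <<< end core contribution
    return params, state

# Toy usage: one step on f(p) = p**2, whose gradient at p is 2*p.
params, state = smoothed_update([1.0], [2.0])
print(params)  # [0.98]
```

The key property to look for is exactly this shape: a few lines that carry the paper’s novel idea, surrounded by ordinary scaffolding that can be given to the model as context.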

What We Need

We need pointers to recent ML papers, with public code, whose core contributions can be turned into implementation challenges.

Submission Guidelines

Paper Requirements

The paper should be a recent machine learning paper that introduces a novel idea or method.

Code Requirements

The paper’s code should be publicly available and include an implementation of the paper’s core contribution.

How to Submit

You can submit your contribution through our online form.

What Happens Next?

  1. Review Process: Our team will review your submission within 2-3 weeks
  2. Code Analysis: We’ll identify the key functions and create evaluation challenges
  3. Testing: We’ll test the challenges with existing LLMs to ensure quality
  4. Integration: Approved challenges will be added to the benchmark
  5. Credit: You’ll be acknowledged as a contributor in our publications and on our website

Contributors Get Credited!

All contributors will be acknowledged in our publications and on the official ResearchCodeBench website. Your contribution will help advance the field of AI research evaluation.

Why Your Contribution Matters

By contributing to ResearchCodeBench, you’re helping advance the development of AI systems that can understand and implement novel research ideas. This benchmark serves as a critical evaluation tool for measuring progress toward more capable AI research assistants.

Join us in building the future of AI research tools!

Contact Us

Have questions about the submission process? Reach out to us.

We appreciate your contribution to advancing AI research evaluation!