Formalise Benchmarks #308


Open
9 tasks
pedromxavier opened this issue May 16, 2023 · 7 comments
Labels
priority Should be fixed or implemented soon

Comments

@pedromxavier
Contributor

pedromxavier commented May 16, 2023

Rationale

Create a formal benchmark pipeline to compare

  • Python
  • PythonCall (dev)
  • PythonCall (stable)
  • PyCall

Originally posted by @cjdoris in #300 (comment)

Requirements

  1. Match benchmark cases across suites
  2. Use the same Python executable across all interfaces
  3. Store multiple results or condensed statistics
  4. Track memory usage

Comments

Julia Side

Most benchmarking tools in Julia run atop BenchmarkTools.jl1, so using its interface to define test suites and store results is the way to go. Both PkgBenchmark.jl2 and AirspeedVelocity.jl3 provide functionality to compare multiple versions of a single package, but neither supports comparison across multiple packages out of the box. There will be some homework for us in building the right tools for this slightly more general setting.

It is worth noting that PkgBenchmark.jl exposes useful methods in its public API that we could leverage to build what we need, including methods for comparing suites and for exporting those results to Markdown. AirspeedVelocity.jl, by contrast, is only available through its CLI.

Python Side

To get the same level of detail provided by BenchmarkTools.jl on the Python side, we should adopt pyperf4.
There are many ways to use it, but a few experiments showed that the CLI + JSON interface is probably the best option.

For each test case, stored in the PY_CODE variable, we would then create a temporary path JSON_PATH and run

run(`$(PY_EXE) -m pyperf timeit "$(PY_CODE)" --append="$(JSON_PATH)" --tracemalloc`)

After that, we should be able to parse the output JSON and convert it into a PkgBenchmark.BenchmarkResults object. This makes it easier to integrate those results into the overall machinery, reducing the problem to setting the Python result as the reference value.
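Before wiring this into PkgBenchmark, it helps to know what the translator has to consume. Below is a minimal Python sketch that flattens the timing values out of a pyperf JSON dump. The `benchmarks -> runs -> values` nesting is an assumption based on inspecting pyperf output (verify it against the pyperf documentation), and `load_pyperf_values` is a hypothetical helper name, not part of any existing API:

```python
import json
import os
import tempfile
from statistics import mean

def load_pyperf_values(path):
    """Flatten all timing values (seconds) out of a pyperf JSON file.

    NOTE: the benchmarks -> runs -> values nesting is an assumption from
    inspecting pyperf output; check it against the pyperf docs.
    """
    with open(path) as f:
        data = json.load(f)
    values = []
    for bench in data.get("benchmarks", []):
        for run in bench.get("runs", []):
            values.extend(run.get("values", []))
    return values

# Synthetic stand-in for a file produced by `pyperf timeit --append=...`:
sample = {"benchmarks": [{"runs": [{"values": [0.010, 0.012]},
                                   {"values": [0.011]}]}]}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "bench.json")
    with open(path, "w") as f:
        json.dump(sample, f)
    vals = load_pyperf_values(path)

print(len(vals), round(mean(vals), 4))  # → 3 0.011
```

A translator on the Julia side would do the equivalent traversal and feed the values into a BenchmarkResults object.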

Tasks

  • Implement the reference Python benchmark cases
  • Implement the corresponding versions in the other suites
    • PythonCall (dev)
    • PythonCall (stable)
    • PyCall
  • Write a translator for pyperf JSON into BenchmarkResults
  • Write comparison tools
  • Write report generator
  • Set up GitHub Actions

Resources

References

Footnotes

  1. BenchmarkTools.jl

  2. PkgBenchmark.jl

  3. AirspeedVelocity.jl

  4. pyperf

@github-actions
Contributor

This issue has been marked as stale because it has been open for 60 days with no activity. If the issue is still relevant then please leave a comment, or else it will be closed in 7 days.

@github-actions github-actions bot added the stale Issues about to be auto-closed label Aug 19, 2023
@github-actions
Contributor

This issue has been closed because it has been stale for 7 days. You can re-open it if it is still relevant.

@github-actions github-actions bot closed this as not planned Aug 27, 2023
@LilithHafner
Contributor

IMO this is still relevant, it should be re-opened and added to a milestone so that it is not automatically re-closed as stale.

@cjdoris cjdoris reopened this Sep 21, 2023
@cjdoris cjdoris added the priority Should be fixed or implemented soon label Sep 21, 2023
@cjdoris
Collaborator

cjdoris commented Sep 21, 2023

Indeed, I like this PR, just haven't had a chance to properly review it.

@pedromxavier
Contributor Author

I had a similar task in another project and some of the ideas converged to slightly different approaches. I will be happy to update this PR soon, probably during the next weekend.

@cjdoris
Collaborator

cjdoris commented Sep 21, 2023

Sounds good!

@cjdoris cjdoris removed the stale Issues about to be auto-closed label Sep 22, 2023
@pedromxavier
Contributor Author

Since PythonCall is not at v1 yet, we have to decide how we want to compare the different branches under interface changes. Are we going to keep separate suites for dev and stable, or not?
