Running benchmarks during release #479
Comments
Potentially related: nodejs/benchmarking#293. Also, on what day did the changes that caused the regression land? I'm curious whether we could have caught it from benchmarking.nodejs.org. Throughput does look ~5% lower on 12.x than on 10.x in the graphs.
The benchmarks where we saw regressions at Google Cloud were under heavy CPU load. I have a way to reproduce it, but I am unsure whether our benchmark suite checks this specific case.
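For context, one way to approximate a heavy-CPU-load run locally might look like the sketch below; the use of stress-ng and the http benchmark group are illustrative assumptions, not the actual reproduction mentioned above.

    # Illustrative only: saturate most cores with artificial load,
    # then run one Node.js microbenchmark group under that load.
    stress-ng --cpu "$(($(nproc) - 1))" --timeout 300s &
    node benchmark/run.js http
    wait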
As discussed during the last meeting, the next step is to make sure we have a CI job that can run in a reasonable amount of time so we can use it for releases.
The CI job that we can use for our microbenchmarks is https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks. The only problem with the job is that we have to explicitly name a module that we want to check; we cannot just run all of them. I am not sure, but I believe our micro-benchmark setup does not currently provide a "run all" option, so we would have to fix that first (I will check on that again). We also likely have to trim the run time of some of our benchmarks, as they sometimes take a very long time to run.
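For reference, the local equivalent of that job is the in-tree benchmark runner, which likewise has to be pointed at a specific group; a minimal sketch (the http group is just an example):

    # Run every benchmark file in one group (here: http) against the local node binary.
    node benchmark/run.js http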
I removed it from the agenda since we already discussed it properly; we now just have to improve the way we do this.
@BridgeAR from what I understand a "run all" would take days, so we would very much need to strip it down to a subset.
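One way such a subset could be kept fast is to restrict both which files run and how many parameter combinations each file iterates over; a sketch (the filter pattern and the variable name are illustrative and depend on what the chosen benchmark files declare):

    # Run only benchmark files in the http group whose names match a pattern.
    node benchmark/run.js --filter simple http

    # Pin a configuration variable to a single value instead of iterating over all of them.
    node benchmark/run.js --set n=1000 http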


We recently ran into a significant performance drop due to changes between releases.
Some of our current microbenchmarks take a very long time, but we could start evaluating which microbenchmarks to run before we release a new version, so that significant performance drops are detected early on.
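As a rough illustration of what such a pre-release check could look like, the in-tree compare runner can pit an existing release binary against a candidate build on a chosen group and summarize the difference; a sketch (the binary paths, run count and benchmark group are assumptions):

    # Compare an old release binary against a candidate build on one benchmark group,
    # then summarize the per-benchmark differences with the bundled R script.
    node benchmark/compare.js --old ./node-old --new ./node-new --runs 10 --filter simple http > compare.csv
    Rscript benchmark/compare.R < compare.csv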