The Wayback Machine - https://web.archive.org/web/20200912033501/https://github.com/nodejs/Release/issues/479

Running benchmarks during release #479

Open
BridgeAR opened this issue Oct 1, 2019 · 6 comments

@BridgeAR (Member) commented Oct 1, 2019

We recently ran into a significant performance drop caused by changes that landed between releases.

Some of our current microbenchmarks take a very long time, but we could start evaluating which microbenchmarks to run before we release a new version, so that significant performance drops are detected early on.
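To illustrate the idea, a pre-release regression check boils down to timing the same operation on two builds and comparing the numbers. A minimal, self-contained sketch (this is not the actual Node.js benchmark harness in `benchmark/`; the function names and the 5% threshold are illustrative assumptions):

```javascript
'use strict';

// Hypothetical micro-benchmark helper: time a function for a fixed number
// of iterations and report operations per second.
function bench(name, fn, iterations = 1e5) {
  // Warm up so the JIT has settled before we start measuring.
  for (let i = 0; i < 1000; i++) fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const ns = Number(process.hrtime.bigint() - start);
  const opsPerSec = iterations / (ns / 1e9);
  console.log(`${name}: ${opsPerSec.toFixed(0)} ops/sec`);
  return opsPerSec;
}

// A release candidate would run the same benchmark as the previous release;
// if the new number drops beyond some threshold (say 5%), the release is
// flagged for investigation before shipping.
const current = bench('JSON.stringify small object', () =>
  JSON.stringify({ a: 1, b: 'two' })
);
```

In practice the comparison would be done across two separate binaries rather than inside one process, which is why a CI job that builds and runs both is needed.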

@mhdawson (Member) commented Oct 3, 2019

Potentially related: nodejs/benchmarking#293

Also, on what day did the changes that caused the regression land? I'm curious whether we could have caught it from benchmarking.nodejs.org. Throughput does look ~5% lower on 12.x than on 10.x in the graphs.

@MylesBorins (Member) commented Oct 3, 2019

@targos (Member) commented Nov 12, 2019

As discussed during the last meeting, the next step is to make sure we have a CI job that can run in a reasonable amount of time so we can use it for releases.

@BridgeAR (Member, Author) commented Nov 19, 2019

The CI job that we can use for our microbenchmarks is:

https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks

The only problem with the job is that we have to explicitly name a module to check; we cannot simply run all of them. I am not sure, but I believe our micro-benchmark setup does not currently provide a "run all" mode, so we would have to fix that first (I will check on that again).

We also likely have to trim the run time of some of our benchmarks, as they sometimes run for a very long time.
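One way to keep total job time predictable, sketched below, is to give each micro-benchmark a wall-clock budget instead of a fixed (and sometimes very large) iteration count. This is a hypothetical standalone sketch, not the mechanism Node's `benchmark/` directory actually uses:

```javascript
'use strict';

// Hypothetical time-budgeted runner: execute the benchmarked function
// repeatedly until the budget is exhausted, then report the rate. Total
// job time becomes roughly (budget × number of benchmarks) regardless of
// how slow any individual benchmark is.
function benchWithBudget(name, fn, budgetMs = 200) {
  const start = process.hrtime.bigint();
  const deadline = start + BigInt(budgetMs) * 1_000_000n;
  let iterations = 0;
  while (process.hrtime.bigint() < deadline) {
    fn();
    iterations++;
  }
  const ns = Number(process.hrtime.bigint() - start);
  const opsPerSec = iterations / (ns / 1e9);
  console.log(`${name}: ${opsPerSec.toFixed(0)} ops/sec ` +
              `(${iterations} iterations in ~${budgetMs}ms)`);
  return { opsPerSec, iterations };
}

const result = benchWithBudget('Buffer.from short string', () =>
  Buffer.from('hello world')
);
```

The trade-off is that shorter budgets make each measurement noisier, so a release check using this approach would want several repetitions per benchmark and a statistical comparison rather than a single number.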

@BridgeAR (Member, Author) commented Nov 19, 2019

I removed this from the agenda since we have already discussed it properly; now we just have to improve the way we do this.

@BridgeAR added the enhancement label and removed the Release-agenda label on Nov 19, 2019
@mhdawson (Member) commented Nov 21, 2019

@BridgeAR from what I understand, a "run all" would take days, so we would very much need to strip it down to a subset.

4 participants