Initial analysis is slow for diesel #4186
How long does the analysis take?
I've tried to run `rust-analyzer analysis-stats .` as well. (My old version does not complete this command either; it crashes with a trace similar to the one in #4053.)

Steps to reproduce:

git clone https://github.com/diesel-rs/diesel
git checkout be0854094c79ae99ba3c893242a4be358ac0b0c5
cd diesel/diesel
rust-analyzer analysis-stats .
On my system it gets a bit further:
A quick profile:
90f8378 is not the current weekly release, it's one week old and does not include the performance improvements.
@flodiebold it's still slow for me (on 5671bac)
So I've looked into the overflow crash we sometimes have in Diesel, and it's caused by these impls:

impl<DB: Backend> QueryFragment<DB> for star where
    <table as QuerySource>::FromClause: QueryFragment<DB>,

of which there are a lot. Currently, when we encounter that where clause, it causes us to try to enumerate all types that could implement
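A minimal, self-contained sketch of the shape being described, using simplified stand-ins for diesel's traits (the trait definitions, module name, and `main` below are illustrative assumptions, not diesel's actual code):

```rust
// Simplified stand-in for the impls described above. The names mimic
// diesel's generated code, but the trait definitions are illustrative only.
#[allow(non_camel_case_types, dead_code)]
mod users {
    pub trait Backend {}
    pub trait QuerySource {
        type FromClause;
    }
    pub trait QueryFragment<DB: Backend> {}

    // diesel's `table!` macro generates one module like this per table,
    // each with its own `table` and `star` types.
    pub struct table;
    pub struct star;

    impl QuerySource for table {
        type FromClause = table;
    }

    // The problematic shape: a blanket impl over every backend `DB`, gated on
    // a where clause about an associated type. A schema with many tables
    // produces many of these impls, and resolving the where clause is what
    // becomes expensive for the trait solver.
    impl<DB: Backend> QueryFragment<DB> for star
    where
        <table as QuerySource>::FromClause: QueryFragment<DB>,
    {
    }
}

fn main() {}
```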
Is there anything that could be done here from diesel's side to improve the situation?
I think they are basically fine, we're just handling them a bit wrong. There's a WIP PR in Chalk that might fix this.
This has definitely improved per my observations. It now only takes 5 mins for RA to be ready, compared to 30 mins before.
This has regressed since this week's release. The RA process is stuck at 100% when indexing diesel (I ran it for more than 30 mins).
@edwin0cheng Could this be caused by #7513? According to my last investigations into the rustc macro-by-example performance, this seems to be a major bottleneck for diesel (see here and here). As another note: it may be useful to run some sort of regular benchmark on a set of crates that are known to stress rustc/rust-analyzer, to prevent/detect such regressions earlier. Maybe it's even possible to use the rustc-perf test suite for this, as it is already a collection of crates with similar problems.
Yes, it is in fact caused by #7513; I am trying to figure out what happened now. [Update]
@weiznich at that point the
We actually have that, but only for two or three crates: https://rust-analyzer.github.io/metrics/. They're gathered on GitHub Actions (we don't have other infra) and they're pretty noisy as-is. I don't think #7513 ended up in there, but it wouldn't have been visible anyway (the big bump you see was an unrelated PR). I actually wanted to add
Before yesterday's release, analysis of diesel was only taking around 5 mins. I think it should be okay to add it to the benchmark suite with that in mind.
Can you provide a link to the actual GitHub Action running those benchmarks? I assume they are not really time critical, right? So adding more benchmark crates should be fine as long as the queue does not grow continuously. It might even be OK to run certain benchmarks just once a day or so to detect large regressions like this one, plus have some way to run them for specific PRs. I assume most PRs do not change performance that much, but a rewrite of some "core" component is certainly something that should maybe get a separate benchmark run. I've set up a similar "on-demand" benchmark action for diesel, so I can provide at least some knowledge on how to do that.
It happens in https://github.com/rust-analyzer/rust-analyzer/blob/master/.github/workflows/metrics.yaml, which points to https://github.com/rust-analyzer/rust-analyzer/blob/master/xtask/src/metrics.rs. I'm not sure what the free GitHub Actions budget is, but I assume that it stops working or gets throttled after a certain number of minutes. We already take about an hour for the nightly release artifacts (multiplied by ten platforms), and spending, say, half an hour more on each pull request would be a lot, since we merge about 30-60 of them each week. That's in addition to running the tests on PRs and bors r+ (about 10-15 minutes each). As for running these only on demand, I would prefer to avoid it. I can probably donate some CPU time on hardware I own to keep running them on every PR.
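For reference, a rough sketch of what timing a crate in such a metrics job could look like. This is an assumption-laden stand-in, not the actual code in xtask/src/metrics.rs, and the crate paths are hypothetical:

```rust
// Hedged sketch only: time `rust-analyzer analysis-stats` over a few
// checked-out stress-test crates and print the wall-clock duration.
// Not the real xtask metrics code; paths below are made up.
use std::process::Command;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let crates = ["./checkouts/diesel/diesel", "./checkouts/webrender"];
    for path in crates {
        let start = Instant::now();
        let status = Command::new("rust-analyzer")
            .args(["analysis-stats", path])
            .status()?;
        println!("{path}: {:?} (exit: {status})", start.elapsed());
    }
    Ok(())
}
```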
According to this GitHub documentation, the limits for the free tier are 20 concurrent jobs and a maximum of 6 hours per job. So from that point of view there would likely be quite a bit of room left for additional benchmarks.
The limit seems to be either 2000 minutes per month or unlimited: https://github.com/pricing. It says "free for public repositories" in the table, but not if you scroll down to "Code workflow".
That limit is only for private repos.
@edwin0cheng can we make an MBE benchmark out of those macro calls?
Probably interesting candidates from diesel:
One of the problems with turning these macro calls into an MBE benchmark is that they are all recursive macro calls, which MBE itself does not handle. I would recommend adding another macro benchmark at the IDE level instead.
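As a toy illustration of the general shape (this is not one of diesel's real macros and not the test case referenced in the surrounding comments), a recursive macro-by-example call re-runs the matcher once per recursion step, so the work grows with the number of items being peeled off:

```rust
// Toy example only: each step removes one identifier and re-invokes the
// macro, which is the kind of recursion that stresses the MBE matcher.
macro_rules! count_columns {
    () => { 0usize };
    ($head:ident, $($tail:ident,)*) => {
        1usize + count_columns!($($tail,)*)
    };
}

fn main() {
    // diesel's 32/64-column table macros recurse far deeper than this.
    let n = count_columns!(id, name, hair_color, created_at,);
    assert_eq!(n, 4);
}
```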
I am currently testing with this test case, which is much smaller to work with.
7994: Speed up mbe matching in heavy recursive cases r=edwin0cheng a=edwin0cheng

In some cases (e.g. #4186), mbe matching is very slow due to a lot of copying and allocation for bindings. This PR tries to solve the problem by introducing a semi "linked-list" approach to building bindings. I used this [test case](https://github.com/weiznich/minimal_example_for_rust_81262) (for `features(32-column-tables)`) and ran the following command to benchmark:

```
time rust-analyzer analysis-stats --load-output-dirs ./
```

Before this PR: 2 mins. After this PR: 3 seconds.

However, for the 64-column-tables case we still need 4 mins to complete. I will try to investigate in the following weeks.

bors r+

Co-authored-by: Edwin Cheng <[email protected]>
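To make the quoted description concrete, here is a small, self-contained sketch of the "share the tail instead of cloning the whole binding set" idea. It is not the actual data structure used in rust-analyzer's mbe crate; all names are invented for illustration:

```rust
// Hedged sketch of a "semi linked-list" bindings structure: each recursive
// match step pushes one link that points at the previous set via Rc, so
// shared prefixes are reused instead of copied.
use std::rc::Rc;

enum Bindings {
    Empty,
    Link {
        name: String,
        fragment: String, // stand-in for a matched token fragment
        parent: Rc<Bindings>,
    },
}

impl Bindings {
    /// Add one binding on top of an existing set; the parent is shared, not copied.
    fn push(parent: &Rc<Bindings>, name: &str, fragment: &str) -> Rc<Bindings> {
        Rc::new(Bindings::Link {
            name: name.to_owned(),
            fragment: fragment.to_owned(),
            parent: Rc::clone(parent),
        })
    }

    /// Walk the chain back towards the root to look a binding up.
    fn get(&self, name: &str) -> Option<&str> {
        let mut cur = self;
        loop {
            match cur {
                Bindings::Empty => return None,
                Bindings::Link { name: n, fragment, parent } => {
                    if n == name {
                        return Some(fragment.as_str());
                    }
                    cur = parent.as_ref();
                }
            }
        }
    }
}

fn main() {
    let root = Rc::new(Bindings::Empty);
    let a = Bindings::push(&root, "x", "1 + 2");
    let b = Bindings::push(&a, "y", "foo()"); // shares `a` as its parent instead of cloning it
    assert_eq!(b.get("x"), Some("1 + 2"));
    assert_eq!(b.get("z"), None);
}
```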
Now with #4053 fixed, I've tried the new weekly release (rust-analyzer 90f8378) on diesel again. It does work again, but the initial analysis now takes a couple of minutes (maybe 10 or so, I didn't measure it exactly) until completion works and all type hints are shown. As this was much faster with the previous version I used (rust-analyzer c388130), I would consider this a performance regression.
Pinging @flodiebold, as this is probably caused by the new Chalk recursive solver in the 2020-04-20 release.