This repository was archived by the owner on Dec 29, 2022. It is now read-only.

Integrate with large (non-cargo) build systems #401

Open
sunshowers opened this issue Jul 7, 2017 · 8 comments
Labels
enhancement Indicates new feature requests

Comments

@sunshowers
Contributor

While many Rust open source projects use Cargo, large organizations often use their own build systems. Supporting large build systems is on the Rust roadmap: rust-lang/rust-roadmap-2017#12

At Facebook we use Buck to build our Rust code. Our code is organized into crates (i.e. translation units) just like in open source, but we generally don't use Cargo.toml or Cargo.lock files -- instead, all our dependency management is done within Buck.

I'm trying to get RLS working against our internal Rust code. RLS easily recognizes intra-crate dependencies, but it can't work across crates.

I'm wondering if there's something we can do easily to make non-cargo build systems work. For example, RLS could call out to some code we provide (how? IPC? a dynamic lib loaded into RLS?) when it wonders where a crate is, and our code could return a path that RLS could look at.
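To make the idea concrete, one very rough shape such a callout could take is a small resolver program the RLS shells out to. This is purely a sketch of the proposal above; the command name, flag, and paths are all made up, and nothing like this exists in the RLS today.

```sh
# Hypothetical resolver the RLS could invoke when it needs to locate a crate.
# The command, flag, and paths are illustrative only.
./crate-resolver --crate serde
# Expected output (path to the crate's prebuilt artifact):
#   buck-out/gen/third-party/rust/serde/libserde.rlib
```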

@nrc
Member

nrc commented Jul 7, 2017

This is definitely something I'd love to support in the long run. Currently Cargo is really tightly integrated with the RLS, so it is going to be hard work to support anything else.

I expect the easiest way to add some kind of support is to support building without any build system, i.e., just rustc. You'd build the whole project outside of the RLS (one could easily wire up a build task to do this in VSCode) and then use rustc via the RLS to build the primary crate. This would still take a lot of setting up, but it at least seems do-able.
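As a sketch of what that flow might look like in practice (the Buck targets, paths, and crate names below are hypothetical; this is just one way the "build everything outside, then drive rustc for the primary crate" idea could be wired up):

```sh
# 1. Build all dependencies with the external build system, e.g. from a VSCode
#    build task, so their .rlib outputs exist on disk.
buck build //common/rust/...

# 2. Compile only the primary crate with plain rustc, pointing it at the
#    prebuilt dependencies. In the scheme described above, the RLS would be
#    the one issuing an invocation roughly like this.
rustc src/lib.rs \
  --crate-name my_crate \
  --crate-type lib \
  -L dependency=buck-out/gen/common/rust \
  --extern some_dep=buck-out/gen/common/rust/libsome_dep.rlib
```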

@sunshowers
Contributor Author

Thanks @nrc. I'm not sure I fully understand yet, but here are some questions that may help me. I'm interested in getting goto definition, symbol search and code completion working at first. (Reformatting is fully functional already, yay.)

For goto definition, it would seem like what RLS is really interested in is

(a) within a crate, performing static analysis to figure out the location of a definition, and
(b) if a symbol is in another crate, figuring out where the crate is located.

  1. Is this view correct?
  2. I would think (b) requires Cargo, or some sort of dependency resolution anyhow. Is that correct?
  3. I would think (a) does not fundamentally require Cargo, even if it might use Cargo today. Does (a) use Cargo today? If so, is that a fundamental requirement?
  4. What else does RLS need to use cargo for?

@sunshowers
Contributor Author

sunshowers commented Jul 7, 2017

I dug in a bit further, and it looks like the RLS builds dependencies with -Zsave-analysis and then builds the "primary crate" with another cargo build. Hmm, I guess the artifacts required for intra-crate goto definition can be built by invoking rustc with -Zsave-analysis directly.
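For reference, a direct invocation along those lines could look something like the following (nightly rustc, since -Zsave-analysis is an unstable flag; the crate name and paths are illustrative):

```sh
# Build one crate and ask the compiler to also emit save-analysis data.
rustc src/lib.rs \
  --crate-name my_crate \
  --crate-type lib \
  -Zsave-analysis \
  --out-dir buck-out/gen/my_crate
# The analysis is written as JSON into a save-analysis directory alongside the
# normal compiler output (the exact location can vary by toolchain version).
```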

@sunshowers
Contributor Author

One more question -- we use a monorepo with a bunch of crates checked in. Should RLS be pointed to each individual crate?

I guess this is similar to workspace support, which AFAICT the RLS doesn't have yet.

@nrc
Member

nrc commented Jul 8, 2017

Is this view correct?

Sort of; the RLS works at a level of abstraction a bit beyond crate files/locations (more below).

I would think (b) requires Cargo, or some sort of dependency resolution anyhow. Is that correct?

No, we only use Cargo for managing builds (for quick response we want to take control of builds in a deep-ish way).

I would think (a) does not fundamentally require Cargo, even if it might use Cargo today. Does (a) use Cargo today? If so, is that a fundamental requirement?

Correct. We only use data from rustc for that.

What else does RLS need to use cargo for?

Managing builds only.

So, -Zsave-analysis emits data for an entire crate. We take that data for each crate in the project and do some post-processing and cross-referencing. We do all lookups (for goto def, etc.) based on that data (i.e., once we have the data from a build, we no longer need Cargo/rustc).

For generating the data, we use rustc directly for the primary crate.

One more question -- we use a monorepo with a bunch of crates checked in. Should RLS be pointed to each individual crate?

Yes

I guess this is similar to workspace support, which AFAICT RLS doesn't yet support.

Yes. We're working on workspace support right now, but atm the RLS can handle only one crate at a time. With workspace support we will be able to handle multiple crates in a single workspace. Making this work with a different build system will be tricky though. If we support a 'cargo-less' RLS, you'll probably still need to point it at one crate at a time.

@Xanewok
Member

Xanewok commented Jul 8, 2017

So there's an interesting idea on the roadmap, namely a 'build plan' produced by Cargo, with some ideas about translating that to Buck/Bazel and similar: rust-lang/cargo#3815. When/if that lands, I suspect it'll be fairly easy to consume and to just use it for the builds. A two-way conversion (build plan <-> other build systems) will probably be needed to reliably perform the build according to the other build system's plan.
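If that proposal lands, consuming it could be as simple as asking Cargo to dump the plan as JSON and translating it into the other build system's rules (or back). The exact flag below is an assumption based on the shape discussed in rust-lang/cargo#3815:

```sh
# Hypothetical: dump Cargo's build plan as JSON on a nightly toolchain,
# then translate it to/from Buck or Bazel rules.
cargo build --build-plan -Z unstable-options > build-plan.json
```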

@Xanewok
Member

Xanewok commented Aug 16, 2018

Working on this - the first part is at #988. It adds a simple command that performs the build on the user's behalf and returns a list of save-analysis JSON files to be loaded.

However, this does not yet support diagnostics or in-memory files. For that, we'd need to know the actual rustc invocations for the packages that contain the source code.
Since there may be many crate targets we'd like analysis for (e.g. tests, or targets with certain cfg-specified features), the graph of those invocations would change depending on which view of the source the user is interested in.
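As a rough sketch of how an external build system could plug into the #988 command (assuming, per the description above, that the command's job is to run the build and report the generated save-analysis JSON files, e.g. by printing their paths; the Buck target and paths are hypothetical):

```sh
#!/bin/sh
# Hypothetical wrapper a user might register as the RLS external build command.
# The underlying build must be configured to pass -Zsave-analysis to rustc.
set -e

# Run the real build; send its log to stderr so stdout stays clean.
buck build //common/rust/my_crate:my_crate >&2

# Report the save-analysis JSON files the build produced.
find buck-out/gen/common/rust/my_crate -path '*save-analysis*' -name '*.json'
```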

@Xanewok
Member

Xanewok commented Oct 16, 2018

Update: with #1070 merged, the save-analysis files returned from running an external build command (implemented in #988) will be analyzed in order to create an in-memory build plan, which can be reused to run incremental, in-memory builds and display diagnostics.

It's worth noting that while this achieves the stated end goals, using save-analysis is not meant to be a long-term solution for integrating with other build systems.

@Xanewok added the enhancement label on Mar 3, 2019