Conversation

@Diggsey (Contributor) commented Sep 25, 2025

Thanks @AlexTMjugador for doing the hard work on this. We ran into the same problem.

This PR is an alternative fix that is a bit simpler, but the dot-env finding logic is shamelessly lifted from the original PR.

The improvement here is that we don't need to keep a global cache of env vars, and we don't need to specify in advance which env vars will be needed.

@AlexTMjugador (Contributor)

I was only able to revisit my PR a few days ago in response to @abonander’s insightful review comments, so the timing for this follow-up PR is actually nice! 😄

I should confess that while running some local experiments, I went down a bit of a rabbit hole trying to better understand the constrained environment that proc macros may run under, and exploring reliable ways to cache environment variable reads within it.

Overall, I think this PR is an interesting, fresh look at the problem. That said, I'm unsure whether it's acceptable to scan for env files and re-read them every time the proc macro runs, especially since, under rust-analyzer's execution model during completions, that can happen more frequently than during standard cargo builds. This SQLx macro already has a significant impact on compile times, and slow builds are one of the biggest pain points for Rust developers working on non-trivial projects, so I'd understand any hesitation about the lack of caching here.

More importantly, from a functional perspective, this PR still contains a call to env("SQLX_OFFLINE_DIR") in the expand_with_data function, which does not honor any values read into the DotEnvFile struct. We could restructure the code to pass the DotEnvFile into that function, but keeping track of it feels both annoying and error-prone. That's another reason I boxed myself into trying a more caching-oriented approach.

@Diggsey (Contributor, Author) commented Sep 25, 2025

Makes sense, but I think with this approach we can add caching on top more simply? For example, you could just slap a #[memoize] attribute on DotEnvFile::load and call it a day? There's no need to cache individual environment variables AFAICT.
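
Roughly what I have in mind, as a sketch only (names and signatures are illustrative rather than the actual code in this PR, and memoize works on free functions, so the loader is written as one):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

use memoize::memoize;

// Illustrative stand-in for this PR's DotEnvFile type.
#[derive(Clone)]
pub struct DotEnvFile {
    pub vars: HashMap<String, String>,
}

// Memoized for the lifetime of the proc-macro process, keyed by the
// function's arguments (here, the crate's manifest directory).
#[memoize]
pub fn load_dotenv(manifest_dir: PathBuf) -> DotEnvFile {
    // ... walk up from `manifest_dir`, locate the nearest .env file and
    // parse it, reusing the dot-env finding logic from the original PR ...
    DotEnvFile { vars: HashMap::new() }
}
```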

@AlexTMjugador (Contributor) commented Sep 25, 2025

Makes sense, but I think with this approach we can add caching on top more simply? For example, you could just slap a #[memoize] attribute on DotEnvFile::load and call it a day?

From the memoize crate docs:

Calls are memoized for the lifetime of a program, using a statically allocated, Mutex-protected HashMap.

And for the reasons stated in #4018 (comment), that's precisely the kind of caching we want to avoid, unless we take care to key the cache in a way that lets changes be detected reliably and cheaply. That keying is where the caching complications stem from, and memoize does not abstract it away.

@Diggsey (Contributor, Author) commented Sep 25, 2025

@AlexTMjugador I don't think that comment is relevant since the cache key here would be the manifest path.

@AlexTMjugador (Contributor)

I don't think that comment is relevant since the cache key here would be the manifest path.

Yeah, if you pass the cache key as a parameter to that function, memoize would be functionally equivalent, though it seems unergonomic to have to pass such a parameter every time you want to read an environment variable. That's why I said memoize alone only gets you so far in resolving the problem.
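
To make that concern concrete, every read would end up looking something like this (a sketch only, reusing the hypothetical load_dotenv from the snippet above; var_from is likewise an illustrative name, not code from either PR):

```rust
use std::path::Path;

// Hypothetical helper: check the real environment first, then the
// memoized .env contents, keyed by the manifest directory that has to
// be threaded through every call site.
fn var_from(manifest_dir: &Path, name: &str) -> Option<String> {
    std::env::var(name).ok().or_else(|| {
        load_dotenv(manifest_dir.to_path_buf())
            .vars
            .get(name)
            .cloned()
    })
}
```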

@Diggsey (Contributor, Author) commented Sep 25, 2025

it seems unergonomic to have to pass such a parameter every time you want to read an environment variable

I'm not sure I follow? You already need to pass the manifest path in order to load the dotenv file, so it's no more unergonomic than it is currently?
