cabal invokes pkg-config on every installed package, so cabal run or cabal install takes 20 minutes on my system #8930
Comments
@lschneiderbauer : Any difference with cabal-install 3.8.1.0 or 3.10.1.0?
I don't think there should be. But I also don't know if this is something we can or should fix. As the linked tickets describe: "We also want to be able to use the pkgdb purely, which means we can't push checking for versions to the call-site where we know which packages are required. Further we don't know which packages we need to query until midway in the solver (since that depends on which deps we pick), so we can't first winnow down the list."

The issue amounts to a problem with the configuration of the user's system, which in turn affects cabal. It would be nice if cabal wasn't affected by misconfigured systems, but architecturally the only way to do so seems to be to turn the entire currently pure solver into an impure one -- which would be a pretty adverse result.
@gbaz wrote:
I would think we could isolate queries to …
@lschneiderbauer : To make progress on this, we need a reproduction of this behavior.
I can test as soon as Gentoo offers those packages (afaik they are working on it).
@andreasabel It probably will take me some time, due to lack of experience and other distractions, but I will try.
It's understandable that you don't want to give that up. From that point of view, would it make sense if the user could influence the choice of tool to get the required information? Let's say pkg-config is the default, but I could say something along the lines of …
Cabal historically just left it to users to configure their systems so that desired libs were available in the libdirs, and allowed cabal files to set extra-libs and users to set --extra-lib-dirs. It later introduced pkgconfig-depends as a mechanism to provide auto-discovery of packages on systems supporting it. We recently fixed the way pkgconfig-depends interacts with the solver so that an adroit library author can first set the pkgconfig-depends field, and if that fails, it flips an auto-flag and instead tries to build using extra-libs.

I really want to argue against attempting to address this directly -- having an "ostensibly pure" interface wrapping unsafePerformIO calls that shell out to an external program is a pretty fragile and ugly approach. And again, what we have should work correctly, as long as the system is not in some sense overall misconfigured. I think cabal should try to work around common system misconfigurations where possible (this is, for example, how we fall back to package-by-package queries in the case of a completely broken package db), but where it's overly complicated I think we can just leave things be.
Had there not been several reports in this direction, I would agree. The question that should be answered here is how easy it is to get from a very unspecific symptom (a build taking long) to the specific cause (some misconfigured package somewhere), which is not really in scope for the user because it is detached from the task they wanted to perform.
The issue you linked to was fixed.
Hello, I have a similar problem. I am on a vanilla Arch system. I have tried to set pkg-config-location to /bin/false in ~/.cabal/config, but Cabal does not respect this setting.

I don't think it is a viable assumption that the pkg-config database is always in good shape – and pkg-config itself works fine in those situations. I believe that it is a mistake to query for packages that Cabal does not absolutely need. Even if the database is intact, this takes a very long time on systems which have a lot of .pc files. This bug hits hard on those of us who use a distribution which always installs development files – e.g. Arch or any source-based distribution. On those systems querying for all packages is going to take a long time regardless of any issues in the database. Also, many installed packages often have uninstalled optional dependencies, and .pc files for those packages will not be present on the system, which is an expected situation.

Here is an extract from my cabal build log with added comments:
Why does Cabal continue looking for a downloader even when it found its preferred one?
Why does Cabal run pkg-config twice – first to list all packages, and then again to query all their versions?
Because it collects downloaders into its programdb, and then queries the db to make a choice. In general, cabal attempts to factor its work into "effectful information gathering" and "pure logic" phases, which can regularize error handling, etc., and clarify the logic.
Because the first call lists all the packages, and the second call asks for the modversion of each entry on the list thus acquired. There's no way to get all packages and their modversion results in a single execution of pkg-config afaik. And again, if we didn't call this on all packages upfront, we would be required to interleave pkg-config logic into the cabal solver, which is currently, and thankfully, pure. Since the logic of that solver is so complex, making it still more complicated seems like a bad idea.
If this doesn't work, then that's a bug we should try to fix.
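(For illustration, here is a minimal sketch of what that two-step query amounts to, using plain `readProcess` calls from the `process` package. This is a deliberately simplified, hypothetical helper, not Cabal's actual code, and it omits error handling.)

```haskell
import System.Process (readProcess)

-- Hypothetical helper: list every package pkg-config knows about, then ask
-- for all of their versions in a single second invocation, pairing names
-- with the versions that come back one per line. Error handling is omitted:
-- readProcess throws if pkg-config exits non-zero, which is exactly what
-- happens when a single broken .pc file is in the list.
listAllPkgConfigVersions :: IO [(String, String)]
listAllPkgConfigVersions = do
  listing  <- readProcess "pkg-config" ["--list-all"] ""
  let names = [n | (n : _) <- map words (lines listing)]
  versions <- lines <$> readProcess "pkg-config" ("--modversion" : names) ""
  pure (zip names versions)
```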
> And again, if we didn't call this on all packages upfront, we would be required to interleave pkg-config logic into the cabal solver, which is currently, and thankfully, pure.
I understand that letting go of the purity of Cabal's solver is a compromise
that you are not willing to make. But aren't there any other options?
Things that come to my mind (without understanding the internals of Cabal at
all):
1) Add a separate pass (after resolving) that checks the presence of non-Haskell libraries; if it fails, cabal would just give up with an error (a rough sketch of such a check follows this list).
2) Defer checking the existence of non-Haskell libraries to the configuration
phase of a package. This might fail in the middle of a build, which is nasty,
but at least it does not slow down every build.
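(A rough sketch of option 1), under the assumption that the set of pkg-config names required by the chosen plan is already known; the helper below is hypothetical and not an existing Cabal function.)

```haskell
import Control.Monad (filterM, unless)
import System.Exit (ExitCode (ExitSuccess))
import System.Process (rawSystem)

-- Hypothetical post-solve pass: check only the pkg-config packages that the
-- selected plan actually needs, and fail once with a consolidated error if
-- any of them are missing. Version constraints are ignored for simplicity.
checkPlanPkgConfigDeps :: [String] -> IO ()
checkPlanPkgConfigDeps deps = do
  missing <- filterM (fmap not . pkgExists) deps
  unless (null missing) $
    fail ("Missing pkg-config packages: " ++ unwords missing)
  where
    pkgExists name =
      (== ExitSuccess) <$> rawSystem "pkg-config" ["--exists", name]
```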
The user could be given a choice about how they want this checking to be done.
There might be an option of not doing any checks at all, which obviously will
fail at link time at the latest, if a library is missing.
Cabal could also print a message (on verbosity level 0) that gives the user
some information about what is happening – such as "Querying pkg-config
database...". If this takes a long time, the user immediately has more
information about what is going wrong. I don't believe this would make
anyone's user experience worse in any way. This is already done for package
resolution – which often takes much less time than querying the pkg-config database.
I don't know how many builds really rely on pkg-config support, but I assume
that there are more builds which do not need pkg-config than those that do.
Therefore I believe that querying the pkg-config database in some other way would
improve build times for everybody.
> > I am on a vanilla Arch system. I have tried to set pkg-config-location to /bin/false in ~/.cabal/config, but Cabal does not respect this setting.
>
> If this doesn't work, then that's a bug we should try to fix.
Yes. However, if nothing else is done, I wish there would be an option like
`--disable-pkg-config`, which would be documented. It would feel much less hacky
than pointing `pkg-config-location` to an invalid or non-existing `pkg-config` executable.
Thank you for your time!
The reason it's part of the solver is that existence of pkg-config packages can be used to conditionalize build plans, e.g. falling back to a slower pure-Haskell package if a fast C binding isn't available on a system.
> The reason it's part of the solver is that existence of pkg-config packages can be used to conditionalize build plans, e.g. falling back to a slower pure-Haskell package if a fast C binding isn't available on a system.
Would it be possible to try resolving packages first, assuming that all
libraries are available? Then Cabal could check whether the packages selected for
the plan actually are available, and if not, rerun the solver, giving it
information about the missing packages. Of course this might lead to
several resolution attempts, but would that be so bad, given that it would not be
very likely?
It seems to me that the slowness here comes from the fallback path checking each pkg individually, no? And that if the pkg-config db didn't have errors this wouldn't happen? Also, when we do fall back, it does print a message that it is falling back, correct?
> It seems to me that the slowness here comes from the fallback path checking each pkg individually, no?
Well, of course it contributes to that.
I timed the execution of `pkg-config --modversion` for all packages on my
system. (I mean the execution of pkg-config with every installed pkg-config
package listed on the same command line. I'm not talking about the fallback of
running pkg-config separately for each package.) It took 19.5 seconds, and it
fails, as we already know.
Now, I removed all offending packages with broken dependencies from the list
(there were 10 of them), and ran the command again. Now it takes 34 seconds
and properly outputs the version numbers. It seems that pkg-config runs much
faster when it fails, probably because it avoids some work when it knows that
the whole process is not going to succeed anyway.
> And that if the pkg-config db didn't have errors this wouldn't happen?
It would not, but 34 seconds of pkg-config in most cabal runs is way too
much wasted time. On my system I have 1191 pkg-config packages. I don't even
have a single desktop environment installed. If I had a full installation of
GNOME or KDE, the number of these packages would be even higher.
I am running an Intel Core i7 machine with 4 cores clocked at 2 GHz and 8 GB of
RAM.
The situation is completely different on a distro which has separate -dev
packages for development files.
Another thing is that on Arch it is not likely that the package db ever has all
dependent packages present. As I explained, some dependencies are optional,
and if the user does not install them manually, the packages will not be
there. Installing more packages (which are not needed) in order to shorten
cabal build times would be a bad solution to the problem, especially because
the lack of these packages is not an issue for any other software that I have
ever used.
> Also, when we do fall back, it does print a message that it is falling back, correct?
Well yes, but one needs to pass -vX to cabal in order to see that. I don't
know what this X is.
A patch to bump the verbosity level on that output would be straightforward and welcome. (Edit: and if Arch provides a "broken" packagedb by default, I think they should patch their pkg-config to not break on it.)
> A patch to bump the verbosity level on that output would be straightforward and welcome.
Does this mean that the issue of a 34-second delay on cabal initialization
(assuming normal operation and an intact pkg-config database) is not considered
serious enough to be fixed?
No. It means that this is another proposal that would be welcome and straightforward to implement (just like the earlier mention of fixing the treatment of the pkg-config path in the cabal.config file). For reasons discussed above, the proposals thus far to actually address the core issue mainly seem to be nonstarters that would be architecturally complex and fragile. However, the proposal to "optimistically" just assume all the libraries are there is in fact the behavior cabal falls back to when it cannot find the pkg-config database. So perhaps a flag such as `--skip-pkgconfig`, which more obviously bypasses this, plus text in that warning suggesting that possibility, could work?
To add to @Merivuokko's data, I'm on a similarly-powerful machine, also running Arch, with 1757 pkg-config entries. The one-at-a-time fallback approach takes about 7s, but the happy path, after I've manually removed three broken packages from the invocation, still takes 3.5s, which is enough to dominate the time Cabal takes to perform simple operations. So I think regardless of the existence of broken packages, we need to find some way to avoid so much querying. If we don't want to impact the purity of the solver, then perhaps we could just do some caching? Recording timestamps for the directories on pkg-config's search path (`pc_path`), perhaps?

PS. Can someone please change the thread title to mention "pkg-config"? It would have made this easier to find.
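(A minimal sketch of that invalidation idea, assuming the cache records when it was written together with the directories on pkg-config's search path; the names and structure here are hypothetical, not what any actual Cabal patch implements.)

```haskell
import Data.Time.Clock (UTCTime)
import System.Directory (doesDirectoryExist, getModificationTime)

-- Hypothetical validity check: the cached pkg-config version list can be
-- reused as long as no directory on the search path has been modified since
-- the cache was written. A directory that has disappeared also invalidates.
cacheStillValid :: UTCTime -> [FilePath] -> IO Bool
cacheStillValid cachedAt searchPath = and <$> mapM dirUnchanged searchPath
  where
    dirUnchanged dir = do
      exists <- doesDirectoryExist dir
      if exists
        then (<= cachedAt) <$> getModificationTime dir
        else pure False
```

With something along these lines, a cabal invocation could skip the expensive version query entirely whenever nothing under the search path has changed.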
@gbaz wrote:
Maybe this design is outdated with the advent of using information from …
This is potentially a pretty annoying UX regression, so I'd see it as a last resort. I notice that you didn't include my suggestion of caching …
@andreasabel I veto adding more lazy IO 😂.
That seems like a great solution and I'd welcome a patch for it. I had considered caching before, but hadn't realized that there were potentially directories to check for invalidation on, so got stuck on that. The one caveat is that the standard cabal caching is per project, not global. However, that may actually be fine here, on first pass, since the main cost we're worried about is repeated runs on individual projects, not overall runs.
@georgefst wrote:
Well, it is in the spirit of …
Using …
@gbaz wrote:
If this is true, may we ask you for a PR?
Naive question: can the solver find all the pkg-config dependencies that it could care about and then return those to the rest of cabal to only look at those relevant packages? The performance problem here seems to be asking pkgconf for 1000 packages, so what if we only asked for the 50 that could ever influence solving?
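(A sketch of that idea with assumed, simplified types; Cabal's real package-description representation differs. The point is only that the union of names appearing in any candidate's pkgconfig-depends could be computed before solving, so that only those names need to be queried.)

```haskell
import qualified Data.Set as Set

-- Simplified stand-in for a candidate package's metadata; in Cabal proper
-- this information would come from the pkgconfig-depends fields of the
-- package descriptions under consideration.
data Candidate = Candidate
  { candidateName          :: String
  , candidatePkgConfigDeps :: [String]
  }

-- Union of every pkg-config name that could possibly influence solving.
relevantPkgConfigNames :: [Candidate] -> Set.Set String
relevantPkgConfigNames = Set.fromList . concatMap candidatePkgConfigDeps
```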
It seems the most affected users here are on Arch, both because of their pkg-config database policy and because of the typical size of Arch users' pkg-config databases, since library and development packages are not separated. However, just for context, it seems that Arch maintainers do not consider this a bug: https://bugs.archlinux.org/task/80171#comment223421

So for the benefit of users with this kind of system setup, an improvement in cabal execution time would really be nice here. (Personally I'd also feel for the lazy IO option, but that's just my opinion.)
On 2023-11-01 at 19:50 -0700, Andrea Bedini wrote:
> @lschneiderbauer @Merivuokko would you be so kind to see if using pkgconf helps in your situation?

I am already using pkgconf.

> (by the way, how do I get an environment similar to yours?)

Install Arch or any source-based distribution (such as Gentoo) with a lot of
library packages – e.g. install a desktop environment.
Looking at a trace, you can see that both pkgconf and pkg-config have to read
and parse all files in `pc_path` for `--list-package-names`, only to
re-resolve the path of each .pc file when asked for `--modversion`. I could
not find a way to get a list of all package names _along with their
versions_. This does not seem to be a common use-case.
This also demonstrates quite well that cabal tries to do something that
pkg-config was not designed to do.
I understand the desire for correctness, but I believe a simple, performant
solution is better than a mathematically correct, complicated, error-prone
solution. In my (admittedly uninformed) opinion, introducing a new cache falls
into the second category.
But maybe the easiest thing to do is to give the user an option to
disable all pkg-config querying altogether. I will at least stick to that
option if this problem ends up being tackled by introducing a cache.
I haven't looked into all of the trade-offs between caching and unsafe IO, but I wanted to mention that I think the solver is already reading the source package index through a data structure that uses lazy IO. It seems to work well, since the solver's algorithm only requires looking up packages as they are needed, in dependency order. The pkg-config database could be used similarly. The main difference is that it isn't owned by cabal, so there could be more types of unexpected errors.
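(For concreteness, a sketch of what a lazily populated pkg-config view could look like, using unsafeInterleaveIO so that a package's version is only queried when the solver forces that entry. This is illustrative only, not Cabal's code; error handling and exception safety are exactly the hard parts being debated here.)

```haskell
import qualified Data.Map.Lazy as Map
import System.IO.Unsafe (unsafeInterleaveIO)
import System.Process (readProcess)

-- Build a map whose values are deferred pkg-config queries: forcing a value
-- runs "pkg-config --modversion <name>" at that moment, not up front.
lazyPkgConfigDb :: [String] -> IO (Map.Map String String)
lazyPkgConfigDb names = Map.fromList <$> mapM entry names
  where
    entry name = do
      version <- unsafeInterleaveIO $
        takeWhile (/= '\n') <$> readProcess "pkg-config" ["--modversion", name] ""
      pure (name, version)
```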
I'm really excited to see the progress on fixing this in #9422. In case it's helpful, I have another data point. I'm running Arch as well, and … This issue roughly doubles my usual build times for clean builds. It takes up much more time proportionately if I have made a change to only one module and the rest of the modules are still cached. Sometimes it needs to do the … I tried examining my … Anyway, I'm really excited to see this resolved. It's one of my main pain points using Haskell right now.
It seems that the developers do not agree on (or do not have the resources for)
fixing this issue at the moment.
In the meanwhile, I wish that there would be a working configuration option to
disable all pkg-config checks. Could this be implemented and documented?
This issue is affecting many people, and it is possibly giving many users a
very bad user experience without them knowing what slows down Cabal.
For those who run a rolling-release OS, like Arch or Gentoo, disabling
pkg-config operations is possibly the best way to go in the future as well, if
this issue is going to be solved by introducing caching. On these
distributions, packages might be updated often, daily or even more frequently.
It is likely that every update of the system invalidates all pkg-config caches
kept by cabal, quite effectively defeating their purpose. (I mean that they
would be implemented to make life easier for those who use a rolling-release
distribution which installs all development files of every package by default,
but the caches would get invalidated so frequently that they may end up being quite
useless.)
I think it is really a good idea to be hesitant about implementing a suboptimal
solution – especially one that would be complicated to implement.
Defining pkg-config to be /bin/false in the Cabal configuration is not a good
solution either. I only want to prevent Cabal from querying a piece of
information for every installed package, the way the version check is currently
implemented. If cabal needs to acquire link flags for a particular package
(one that it knows it needs for the build), I am totally willing to allow it to
call pkg-config.
That's interesting. I wonder if there's any way of doing this currently and, if not, if there are any tickets open. Perhaps it's worth it to open one?

BTW, @Merivuokko, I had some trouble parsing your message. As if some negations were spurious, etc. You can edit and condense the message directly in the GitHub UI if you feel like it.
On 2023-11-17 at 01:33 -0800, Mikolaj Konarski wrote:
> > In the meanwhile, I wish that there would be a working configuration option to disable all pkg-config checks. Could this be implemented and documented?
>
> That's interesting. I wonder if there's any way of doing this currently and, if not, if there are any tickets open. Perhaps it's worth it to open one?

I created a new issue: #9458

> BTW, @Merivuokko, I had some trouble parsing your message. As if some negations were spurious, etc.

Maybe this is because of my native language, which is Finnish. The new issue
hopefully makes my point clear.
While #9422 is very welcome, and its implementation delightfully simple, it only solves about half of the issue for me, and I'm not sure this thread should be closed. Without a global cache and per-package invalidation like #9360, I fear that Cabal on a typical Arch system is still going to spend a lot of time re-querying the same package versions over and over. I think my complaints are essentially the same as @Merivuokko's, except that I wouldn't be satisfied by disabling the checks completely.
I think that we should implement the simple, safe, finished thing, and give it a shot for a release at least. If someone would like to add a flag for disabling the pkgdb in the solver pass, then that's welcome too and would be a simple change. Any other change would be complex and invasive, so we might as well see, in practice, how far we get with only these two simple ones.
I'm not against that. My main point was just that we should either keep this issue open or create a new one summarising what remains, since we already know that the implemented fix is far from perfect.
Another concern that's just occurred to me: since #9422's caching is per-project, does that mean it's ineffective for …
Cabal's … To my eyes, this seems like a perfectly sensible application of lazy I/O, especially as the solver already relies crucially on lazy I/O to load the index. I would be very curious to hear more concrete objections from those who are arguing against lazy I/O to address this, as it seems to me, as a fellow lazy I/O skeptic, to be the most natural solution to the problem at hand and can only be a strict improvement over the status quo.
@bgamari If I understand correctly, the main reason that we didn't use lazy IO is that it would be more difficult to implement than caching. I don't know of any objections to merging the code if someone implemented it.
There have been objections, actually, because it's really hard to get right, and the Haskell road is littered with failed lazy I/O implementations. That said, I personally have been resigned to it eventually happening, because …
I chatted with Ben and a good compromise might be that we move to lazy IO, but also move to parsing .pc files ourselves (reputedly straightforward), which should be more efficient and less flaky than shelling out. Having the flakiness of shelling out on top of the possible confusion of lazy IO seems particularly scary, but if we move to our own IO entirely there are at least fewer variables to control.
I bet that parsing the .pc files ourselves is sufficient to solve the problem altogether. We only need to extract the name and version, which is a very small part of what pkg-config actually does.

I have been vocal against lazy I/O, mostly because it is a "bad thing" ™️. I am not using a pure functional programming language to hide I/O where it does not belong. This hidden I/O will make the computation depend on when and where it runs, which is definitely not what I would want. As they say, functional core / imperative shell. If we want to interleave I/O and solving, let's do it the "right way" ™️. There are plenty of options to do that while keeping the solver pure (iteratees, continuations, callbacks, etc...).
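(A sketch of how little parsing is actually needed, under two simplifying assumptions: that the pkg-config package name is the .pc file's base name, and that the Version field does not rely on ${...} variable expansion. Real .pc files can violate the second assumption, so this is deliberately naive and not a complete replacement for pkg-config.)

```haskell
import Data.Char (isSpace)
import Data.List (stripPrefix)
import System.FilePath (takeBaseName)

-- Extract (package name, version) from the path and contents of a .pc file.
-- Only the "Version:" line is inspected; everything else pkg-config does
-- (Requires resolution, variable expansion, cflags/libs) is ignored.
pcNameAndVersion :: FilePath -> String -> (String, Maybe String)
pcNameAndVersion path contents = (takeBaseName path, version)
  where
    version =
      case [v | l <- lines contents, Just v <- [stripPrefix "Version:" l]] of
        (v : _) -> Just (dropWhile isSpace v)
        []      -> Nothing
```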
If I had to write a solver, I would write it against an abstract service monad.

```haskell
class MonadSolver m where
  askPkgConfig :: SystemLibraryName -> m InstalledVersion
  ...
```

I can then decide how I implement this monad and its runner.

```haskell
instance MonadSolver SolveM where
  ...

type SolverData

runSolveM :: SolveM a -> SolverData -> IO a
```

I can place …

If you ask me, Haskell is about programming with monads. I structure my programs this way: define service monads, and then implement them how I see fit. What could be bad about the solver doing system calls?
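(For concreteness, one hypothetical interpretation of such a service monad, with the class repeated so the snippet is self-contained and the types assumed to be plain strings; none of this is Cabal's actual API.)

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import System.Process (readProcess)

type SystemLibraryName = String  -- assumed representation
type InstalledVersion  = String  -- assumed representation

class MonadSolver m where
  askPkgConfig :: SystemLibraryName -> m InstalledVersion

-- One possible runner: interpret the abstract query by shelling out to
-- pkg-config at the moment the solver asks for a version.
newtype SolveIO a = SolveIO { runSolveIO :: IO a }
  deriving (Functor, Applicative, Monad)

instance MonadSolver SolveIO where
  askPkgConfig lib = SolveIO $
    takeWhile (/= '\n') <$> readProcess "pkg-config" ["--modversion", lib] ""
```

A pure test interpretation (reading versions from a fixed map) could implement the same class, which is presumably the appeal of keeping the solver abstract.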
While I agree with @andreasabel in principle, I have my doubts about the feasibility of his proposal. Lifting the solver into a monad does not sound like an easy task, and IMHO not one that is justified given the cost of introducing lazy I/O instead, and given that we have no shortage of arguably more important bugs already in need of hands.
Describe the bug
Some cabal commands, e.g. cabal v2-run, take around 20 minutes on my system (on a vanilla, unaltered cabal init project). I am using Gentoo Linux with cabal-install 3.6.2.0 and GHC 9.2.7.
The reason for the huge runtime seems to be that cabal calls something like
pkg-config --list-all | cut -f 1 -d ' ' | xargs pkg-config --modversion
(see the related discussion in pull request #8496). Executing this command explicitly takes around the same time. The reason for pkg-config taking so long is another matter: in this case there is a library installed whose .pc file seems to introduce some circular dependencies (see grpc/grpc#29137), which results in much longer waiting times for each library that depends on that "malfunctioning" library, adding up to around 20 minutes on my system.
It seems to me one should rethink the approach of using
pkg-config --list-all
to retrieve ALL system library versions beforehand. Just one misconfigured system library will prevent cabal from doing its job properly, resulting in the absurd situation that cabal requires a fix in an unrelated project (grpc) to work properly.