initial Build Compatible Extension API proposal #451
Conversation
FTR, the CI is failing on the JDK 11 javadoc build, which seems to be stricter than its JDK 8 counterpart.
This looks interesting. I assume there was a discussion about this on the mailing list, or in some other publicly reachable forum? Trade-offs, architecture, design and whatnot? Could you point me to that, please?
An announcement and a more "deep dive" elaboration should be coming soon. This is not a done deal, more like a starting point; everything is up for scrutiny.
Did I miss something? I thought we needed to work out what should be in CDI Lite first.
Is this PR trying to address #425? If yes, we need to agree on the scope. If not, what issue is this PR for?
Just to reiterate what others said: this is supposed to be a starting point for a discussion, not a resolved thing, so don't worry.
Ok, the promised bits are here:
As @manovotn said, there was a little synchronization issue, mostly caused by me being on vacation. This PR, and the linked article, is supposed to be a starting point for a discussion on the Lite extension API. It's an important part of the Lite effort, but not all of Lite. I'm sure you will agree that it's best to start a discussion with a concrete proposal, rather than some vague "we need to have extensions, what do you think they should look like?". I mentioned it above, and also in the article, but I'll repeat: everything is up for debate and change. This is not a "take it or leave it" situation; it's exactly the opposite.
I see a lot of value in how the extension points work, in terms of similar things that we do in Micronaut (adding annotations, modifying, removing, etc.). However, one issue I see with this proposal is that it assumes the classes are available on the classpath for an extension. This is problematic, as IMO one of the problematic areas in Quarkus is that it places the entire runtime classpath onto the build-time enhancement classpath. This is one of the things that annotation processors got right, by ensuring that the annotation processor classpath (where extensions would go) is isolated from the compilation and runtime classpaths. This is important because you want to ensure that dependency conflicts in extensions (which will happen) do not result in hard-to-debug compiler errors (nobody wants to remote debug a compiler). Why is there an assumption in this proposed spec that build-time enhancement is going to take place on the byte code and not on the source code? Processing the source code has significant advantages as well.
Any spec that is proposed should not assume the Quarkus approach, which has many disadvantages IMO.
That's a good point. There should be an abstraction similar to
That's not entirely true - see https://quarkus.io/guides/class-loading-reference for more details.
To be fair, it has significant disadvantages as well. Ideally, the API should not prefer one or the other.
Clearly. Each approach has trade-offs. Nevertheless, my point remains: a standard should not be Quarkus under another name and should allow for other approaches; otherwise it is yet another standard with significant overreach.
I'm sure you will all agree that we should strive to keep this discussion strictly technical. Also, I'd suggest filing a separate issue for this topic, as described in #452, because this is an important topic, and if we keep all discussions in one place, it will be a mess. That said, I can tell that I certainly had annotation processors in mind at the beginning (even thought about using
Now -- I really thought that using generics for expressing basic type queries is compatible with annotation processors. I confess I didn't try. If not, we'll have to come up with something different.
Ah, and one more thing: representations such as
Please, let's avoid pointless accusations. We have tried to make the API as general as we could, and if you did check Quarkus, you will know that it has a very different 'extension scheme' from what we have brought up in this PR. In fact, we started with just the API and only later got to creating a working Quarkus prototype, in order to be able to play with the plain Lite extension API, test out how user-friendly it is, and see if you can rewrite existing framework integrations from Quarkus-specific extensions into actual pure CDI Lite extensions. That being said, it is very well possible that we missed a use case or two, which is why we are looking for technical feedback and suggestions on how the API should look to support any given use case.
@graemerocher FWIW we aren't thinking of (or really wanting) Quarkus to be the "RI", just one compatible implementation. It would be great to have a Micronaut impl and an API that can support them both (along with other impls).
Thanks @n1hility. Nice to receive a rational response from a Red Hat engineer, and yes, we are absolutely on the same page on that front. My desire is to support whatever CDI Lite is, and that is a long-term goal, but it has to be something that is supportable and not tied to a single implementation with a single architecture (bytecode processing).
I am not sure I understand; what topic are you referring to? The desire to have an API that everyone can support seems pretty relevant to this.
@Ladicek @manovotn Please clarify where I accused anyone of anything or where I deviated from technical discussion; it seems both of you are a touch emotional. My concern, as stated, is merely that if the spec contains overreach, and it is my opinion that many specs unfortunately and irreversibly do (why is DI even included in JAX-RS, for example?), then it is not something that other implementations can viably support in the future, and it will end up as a spec with a single implementation (Quarkus), which has to be undesirable for all parties involved.
Annotation processors can in fact reference classes; however, you really don't want a massive annotation processor classpath, so there needs to be a way to do the same thing without generics and without forcing a user to depend on a concrete API. For example, with annotation processors you can match annotation patterns expressed as strings like "foo.bar.*".
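For reference, a minimal sketch of that string-based matching with a plain annotation processor; `MyProcessor` and the `foo.bar.*` pattern are placeholders, and the processor would still need to be registered via `META-INF/services/javax.annotation.processing.Processor`:

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Matches every annotation whose qualified name starts with "foo.bar." --
// no compile-time dependency on the annotation classes themselves is required.
@SupportedAnnotationTypes("foo.bar.*")
@SupportedSourceVersion(SourceVersion.RELEASE_11)
public class MyProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        "Found " + element + " annotated with " + annotation.getQualifiedName());
            }
        }
        return false; // don't claim the annotations, so other processors still see them
    }
}
```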
I agree those are unfriendly, although I understand the purpose, as the name of an inner class or array etc. is not the same in source code vs byte code. I would also say that having 2 ways to do things is confusing and just duplicates documentation effort. It would be better to just provide the fluent API and then be able to pass annotation/class names as strings if they don't exist on the classpath:

```java
@Enhancement
public void configure(AppArchiveConfig app) {
    app.classes()
       .subtypeOf("example.MyService")
       .configure()
       .stream()
       .filter(it -> !it.hasAnnotation(MyAnnotation.class))
       .forEach(it -> it.addAnnotation(MyAnnotation.class));
}
```

The name
Hi, I read the blog post but I fail to see why we need yet another duplication of the API (we already have some duplicate-like APIs in the extension area).
Awesome! Thank you for collaborating on this and sharing this valuable feedback and technical scrutiny; it will lead to a better standard and API.
Ah, I see. So in terms of referring to both strings and classes, in the scenario you have in mind, is it the extension author who decides (directly or indirectly) what's on that classpath / dependency path and whether they choose to refer to a type or not? So in the same CDI Lite application, one extension might just decide to refer to everything by strings, and another in the same application might use types, possibly even the same type name just referenced differently?
Unfortunately, that would not be a compatible solution. The lifecycles of portable extensions and the runtime application are intertwined by design, and it's common practice / expectation to pass state directly into the app. Since in a build-time model extensions and the app run in different "VMs", that can't work. (Extensions are not designed to be serializable, build and run don't even necessarily share the same classpath, there are expectations around passing reflective types, etc.)
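To make the "passing state directly into the app" point concrete, here is a hedged sketch of the kind of live hand-off that works with today's portable extensions precisely because extension and application share one JVM. The names `LiveStateExtension` and `ConnectionPool` are invented for illustration, and the extension would still be registered via `META-INF/services/jakarta.enterprise.inject.spi.Extension`:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.spi.AfterBeanDiscovery;
import jakarta.enterprise.inject.spi.Extension;

// Illustrative only: a portable extension handing a live, non-serializable object
// straight to the application -- possible because extension and application run in
// the same JVM and share a class path.
public class LiveStateExtension implements Extension {

    // Hypothetical placeholder for any stateful, non-serializable resource
    // created while the extension runs.
    static class ConnectionPool {
    }

    private final ConnectionPool pool = new ConnectionPool();

    void registerPool(@Observes AfterBeanDiscovery event) {
        event.addBean()
             .addType(ConnectionPool.class)
             .scope(ApplicationScoped.class)
             .createWith(ctx -> pool); // the very same instance becomes injectable at runtime
    }
}
```

In a build-time model, the extension runs in a different process than the application, so handing over a live object like this has no equivalent.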
We wanted to have separate issues for separate discussions, to avoid having several parallel discussions/ideas in one issue/PR.
I had a similar impression from your initial comment. Looks like we both misunderstood each other :-)
As was highlighted several times, names are up for changing, and this one in particular. We've had various ideas, none of which really captured its purpose, so this was the last attempt, which just went public with the API. We need a better name, but I don't really have any up my sleeve; it's supposed to be an entry point to the whole application, all possible classes that you might want to filter and transform, for cases where going with just method parameters isn't enough.
Hmm, you mean as in having the option to either have method parameters or just go via
I do agree with the point of specifying classes as
This is wrong: extensions don't need to be serializable; the "post deployment state" (you can see it as the bean manager, or annotated types + beans + config, minus interceptors, decorators, programmatic qualifiers, etc.) must be, and is. It is not a spec feature but a vendor feature, since CDI is an API and does not assume anything about build-time versus runtime-specific environments. For the good: look at JPA, which did make such assumptions with some annotation processor constraints and kind of failed to be usable at build time in 90% of apps. CDI just defines one API that vendors integrate as they want, as long as they respect the expected runtime. Once again, it already works today.
Another big issue with such a "light" API is that it will quickly end up requiring the same API as the standard one (I agree we can simplify it a bit due to the 1.0 -> 2.0 changes, but this is not critical), except it will replace all the native reflection parts with "copies" of these APIs (I guess fields will get name-to-type as String, methods will get name + parameters {String name, String type} + exceptions as strings, etc.). Not a real gain.
Now look at the runtime: if you get an interceptor, you still need reflection, or you break InvocationContext, which would once again break library writing (or require yet another facade and drop CDI's goal of being a primary API; in other words, CDI does not become the API but a technical stack detail, which is a failure IMHO since it only makes sense as an API and not as a vendor stack detail). You drop the reflection but still need the metadata to let the extension be useful, so at the end you just redo what is done today: dump the state at build time and replace most of the reflection work done at runtime with a lookup in this state (a bit like CDS works).
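To make the "dump the state at build time, look it up at runtime" pattern concrete, here is a purely generic sketch; every name below is invented and it is not tied to CDI, Quarkus or this proposal:

```java
import java.util.List;
import java.util.Map;

// Generic sketch of "dump at build time, look up at runtime"; all names are invented.
final class BeanMetadata {
    final String beanClass;                 // binary class name recorded as a string
    final List<String> qualifiers;          // qualifier annotation names
    final List<String> injectionPointTypes; // types this bean needs injected

    BeanMetadata(String beanClass, List<String> qualifiers, List<String> injectionPointTypes) {
        this.beanClass = beanClass;
        this.qualifiers = qualifiers;
        this.injectionPointTypes = injectionPointTypes;
    }
}

// A class of this shape would be *generated* during the build, so the runtime does a
// plain map lookup instead of classpath scanning and reflective discovery.
final class GeneratedBeanRegistry {
    static final Map<String, BeanMetadata> BEANS = Map.of(
            "com.example.MyService",
            new BeanMetadata("com.example.MyService",
                    List.of("jakarta.enterprise.inject.Default"),
                    List.of("com.example.MyRepository")));

    static BeanMetadata lookup(String className) {
        return BEANS.get(className);
    }
}
```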
@rmannibucau Not really. See 11.5. Container lifecycle events:
@mkouba yep, but the same will apply to the light extensions (or will move to registered beans, which is the same), so if you prefer this phrasing: "we can easily make today's spec build-time compatible and portable" (today it is portable if the extension is serializable somehow; I used some JSON, but it was extension-specific). So still, the step to make CDI "phasable" (build -> run) is easier and more consistent than the step to fork CDI core (extensions/lifecycle) to create another spec; this is the whole point. Today it is often compensated for by having a build scanning phase and dumping the result into a bean able to read this scanning result, but nothing to put in the spec IMHO, in particular not annotation processors, which are always a pain for users from what I have seen.
@n1hility Yes. For example, it is maybe OK to use class references when you are processing a stable API JAR that contains only interfaces and no third-party dependencies, but you wouldn't want to reference, say, Hibernate, which has many third-party dependencies and might introduce conflicts. However, even in the case of a "stable" API you can have changes. For example, look at the
@manovotn So again, this is an area where implementations may differ, since for example Micronaut does not dictate your application entry point. We do build a registry of compile-time computed beans, but that is separate from the entry point. I would say, if this is going to be generic, a name like
It is in fact similar to existing CDI extensions and the usage of wildcards in lifecycle method parameters. For instance, one pretty common (and poorly performing) case was to use such a wildcard observer.
I don't see it as dictating any entry point. You needn't use this as one; you can use method parameters giving you precisely the classes/fields/methods you want to modify. Also,
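For readers less familiar with that pattern, this is roughly what such a wildcard observer looks like in today's portable extensions (a sketch; the package filter is a made-up example):

```java
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.spi.Extension;
import jakarta.enterprise.inject.spi.ProcessAnnotatedType;

// A portable extension observing *every* type found during bean discovery.
// The wildcard makes the container invoke this observer once per discovered class,
// which is the common but poorly performing pattern referred to above.
public class WildcardObserverExtension implements Extension {

    void onAnyType(@Observes ProcessAnnotatedType<?> event) {
        // Inspect, modify or veto any class here; the package filter is illustrative only.
        if (event.getAnnotatedType().getJavaClass().getName().startsWith("com.example.internal")) {
            event.veto();
        }
    }
}
```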
Mailing list discussion: https://www.eclipse.org/lists/cdi-dev/msg00048.html
So, what I wanted to move into a separate issue is the concerns about annotation processor compatibility. There doesn't have to be a discussion about whether a CDI Lite extension mechanism, however we call it, must be implementable with annotation processors -- of course it must, that's a given. I would like to have a technical discussion about that topic and the possible trade-offs it might bring. But if we have it here, well, then we have it here.
The present objection, if I understand correctly, is that putting the types [to which extensions refer] on the compilation classpath can lead to compilation problems due to dependency misalignment. (Please correct me if I'm wrong.) To that, I'd say that you need to align those dependencies anyway, because you need to run the correct version of the extension. If you integrate Framework X Version 1.0 using an extension, you need to use an Extension For Framework X Version 1.0. Running an extension for a different version of the integrated framework is likely to lead to runtime problems, which I'm sure you'll agree is worse than compilation problems. So, again, those dependencies must be aligned anyway -- in which case, putting them on the compilation classpath seems like a minor thing to me. If there are other reasons not to put dependencies on the compilation classpath, I'd like to learn about them. (In other words, yes, I indeed made an assumption that classes referred to by an extension are available. I thought that's natural because of the above. I most definitely did not make an assumption that "all implementations of this API are Quarkus".)
Now, I can imagine a
with something like
and so on. That could be conceptually relatively close to the current proposal. (Or we can have something that is conceptually very different from the present proposal. That's good too.) I'd argue that such an API is harder to use ("stringly typed" vs. "strongly typed" and all that), but this is certainly a discussion we can have. (When it comes to
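Purely as an illustration of how the "stringly typed" and "strongly typed" styles could coexist as overloads (every name below is invented and is not the API proposed in this PR):

```java
import java.util.function.Consumer;

// Invented names, for illustration only -- not the API proposed in this PR.
interface ClassQuery {
    // "strongly typed": usable when the referenced type is on the extension's class path
    ClassQuery subtypeOf(Class<?> type);

    // "stringly typed": usable when it is not
    ClassQuery subtypeOf(String binaryName);

    ClassQuery annotatedWith(String annotationName);

    void forEach(Consumer<ClassConfig> action);
}

// Minimal stand-in for whatever mutable class representation such an API would expose.
interface ClassConfig {
    boolean hasAnnotation(String annotationName);

    void addAnnotation(String annotationName);
}
```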
There are multiple reasons to keep the compilation path and the annotation processor path separate, which have been established over the many years since annotation processing became a mature model used by many frameworks and tools (in particular on mobile, where the Android SDK is almost entirely based on them). Having a larger compilation classpath and/or annotation processor path will impact compiler performance, since you have more jars to load and scan. The Gradle engineers have done a lot of work on making annotation processing support incremental compilation, and this again requires that the annotation processor path is kept to a minimum.
In terms of alignment, normally it is better to have libraries separate their annotation jars (with a
It is not the case that you need the annotation processor classpath and the runtime classpath to be aligned; in fact, Micronaut has dozens of modules, none of which get included on the annotation processor classpath, which is testament to the fact that this is not necessary. If you decide "OK, I want to include the API on my extension classpath", extension authors would have to take doubly great care about transitives, to ensure that two APIs from two different extensions don't transitively depend on conflicting versions of something like Jackson, for example. Just look at all the transitives Hibernate pulls in, from dom4j to javassist, bytebuddy, etc. Why would you want those on your compilation classpath and risk a potential conflict that results in a user having to remote debug a compiler? At least when conflicts occur at runtime, you can use regular debugging techniques to analyze and resolve them.
I have marked the PR as a draft, because it certainly isn't supposed to be merged as-is, but it is also supposed to be discussed. Though at this point, I might have also just closed it, because the current proposal that we debate on the calls is somewhat different already. I'll think about that, but in the meantime, I'll keep it as a draft.
I agree, that makes more sense.
I have pushed a thoroughly revised initial proposal for the Build Compatible Extensions API, corresponding to the current state of the https://github.com/Ladicek/StillDI project, updated for the
I have also gone through all the files in this PR and added a number of TODO comments -- mostly to highlight all the places where I wasn't sure of something, or where I know there's more work required. I have, for now, intentionally abstained from making larger changes, even though I'm convinced of some (e.g. removal of
The current number of TODOs is 88, and they are far from minutiae (there's e.g. a TODO for the name "build compatible extensions"), which should clearly show that this isn't me throwing complete stuff over the wall and asking everyone else for ratification, but a genuine call for participation. Constructive feedback would be very much welcome (with the understanding that the present discussion has many examples of constructive, non-constructive and counter-constructive feedback, so let's try to be better this time).
This is present at jakartaee/cdi#451 and defines 2 packages: `jakarta.enterprise.lang.model` and `jakarta.enterprise.inject.build.compatible.spi`. This commit removes almost everything from the `api` module, which contained the proposed API under `cdi.lite.extension`.
Signed-off-by: Ladislav Thon <lthon@redhat.com>
Dismissing old review which is by now outdated.
LGTM.
Let's get this into the repo so that we can roll out some Alpha and start improving it iteratively.
Agree. Merging this is the beginning, not the end. Thanks!
@graemerocher Anyway, I would welcome an Alpha release even for Weld. It shouldn't be an issue either; I just wanted to wait until today's meeting to see if anyone has issues with it. If not, I should be able to release the artifact tomorrow :-)
BTW, I think any committer of this project should be able to access and execute the Jenkins job that does a release (actually two jobs: staging and then repo release), so you wouldn't even be blocked waiting on me, should you need more iterations out there.
Ok, sounds reasonable.
Hi,
Not really; experimental packages tend to do more harm than good, both for implementors and for anyone actually trying to adopt early and test new APIs. Besides, we are pretty sure an API like this will be needed and present in the next version of CDI, for Lite. Of course this isn't a final version, but we need a draft out so that implementors can try out its limits and usability.
So, I suppose by the so-called "no agreement" you mean CDI Lite and the presence of Build Compatible Extensions within it. This has been agreed on by committers (and the community) in a vote started by this email: https://www.eclipse.org/lists/cdi-dev/msg00361.html
Furthermore, this SPI is being iterated on continuously during CDI meetings and was also discussed multiple times on PRs and on the mailing list. Both future implementors (and not just them) agree that we need an SPI like this to supplement CDI extensions in a build-time environment. In fact, this SPI was drafted exactly because there was a technical need for an equivalent of portable extensions that would work in a build-time environment, i.e. that would work in Lite.
There is clearly no agreement on that point; a soft one may have been reached due to the investment required to push back against "big vendors", but in the end it is not a good decision for the community. Technically, it is proven that native applications are not going to be mainstream because:
So this flavor of CDI "Lite" is just unexpected and shows a lack of knowledge of what is already there in 99% of apps, and therefore shouldn't hit the jakarta package, IMHO.
There is, and your denying it won't change that, sorry.
Right, which is why all the committers acknowledged the need for it and voted +1?
Your knowledge of the whole Java ecosystem and your near-prophetic ability to foresee the future course and needs of development are truly astonishing, but I don't see any hard evidence supporting them, so I won't even comment on them one by one. The project still aims to support Jakarta EE as it is (a runtime-oriented setup), but that doesn't need to prevent it from having variant(s) that can be used outside of it. CDI can evolve in multiple ways, and in order to do so, an agreement of the majority of committers is needed, which, after a very long period of discussions, was achieved.
I'd like to point out one thing, too. It seems to be a popular misconception that the main goal of Lite is to enable native compilation. It is not. The main goal is to enable moving bean discovery and related tasks (such as running extensions) to build time. This is a popular implementation strategy for AtInject (see e.g. Dagger) and I can't see why it shouldn't be possible for CDI.
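As a point of reference for that build-time strategy, a minimal Dagger-style sketch (assumes Dagger 2 on the annotation processor path; `Greeter` and `AppComponent` are made-up names, and `DaggerAppComponent` is generated by Dagger's processor at compile time):

```java
import javax.inject.Inject;
import javax.inject.Singleton;
import dagger.Component;

// A bean wired entirely at compile time; no classpath scanning happens at runtime.
class Greeter {
    @Inject
    Greeter() {
    }

    String greet() {
        return "hello";
    }
}

// Dagger's annotation processor generates DaggerAppComponent for this interface
// during the build.
@Singleton
@Component
interface AppComponent {
    Greeter greeter();
}

class Main {
    public static void main(String[] args) {
        AppComponent component = DaggerAppComponent.create(); // generated at compile time
        System.out.println(component.greeter().greet());
    }
}
```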
Well, it is clearly not the case, and I didn't get such positive feedback from most people I asked, so once again it is a very biased vision; we should find a way to make it less vendor-oriented.
I'm actually speaking of what happened, and that the enthusiasm of the beginning has now died down; nothing prophetic, compared to this proposal of a new API without a need.
Actually no, since CDI support will need to cover both; this is why I asked why not do another project, since the proposal is CDI-unrelated in the end. Once again, I'm not against trying this, I am against doing it in CDI scope. @Ladicek :
Then you don't need any API, since it is already doable with all implementations without a single change. So you are right, there is no reason it is not done for CDI 2.0, but there is no need to reinvent the API from scratch either.
That is incorrect. For the record I deliberately did not vote +1, for many of the reasons @rmannibucau outlines above. (The outcome of the vote, of course, is not in question.) Respectfully: let's keep the tone professional here.
Not really, there is lazy consensus applied to votes, as described here: https://github.com/eclipse-ee4j/cdi/wiki#decision-making-process
However, the vote is there precisely for people to express their thoughts, regardless of whether they vote one way or another. And with all respect, it would be valuable if people did that, instead of coming up weeks later on PRs which are not nearly as visible as the mailing list, not to mention that by that time the vote has concluded. It is hard to incorporate such feedback, is all I am saying :-)
I think we should move on from the issue
Overall the vote was very positive: +12 (+6 committers), and no -1s. It's beneficial to open the door to new implementations, there is no real negative to those that aren't interested in Lite, and the spec restructuring is going to lead to an overall better and much cleaner spec. Sure, there may not be perfectly unanimous agreement, but that's OK. If we always agreed on everything, we wouldn't have a standard in the first place, since everyone would have agreed on the same implementation. A good standard will allow for flexibility, and I think this was accomplished. We should work together to move forward and continue the spec's improvement.
Guys, just move forward outside CDI, in an eclipse-ee4j/cdi-lite project or whatever you want to name it. Putting it inside CDI brings a ton of issues as soon as you start to use CDI as a spec (kind of the same issues that AtInject implementations have with each other), which is more than unlikely to serve anyone at all, except in terms of communication, which is pretty bad for end users.
EDITED by manovotn:
The issue for this can be found here: #452
The blog post can be seen here: http://www.cdi-spec.org/news/2020/09/15/CDI_Lite_extension/
Please do take a look at them, as they describe this in depth as well as define a way to provide feedback (in a separate issue, so that we don't clutter one issue/PR).