add Squeak4.6 and Squeak5.0 to ConfigurationOfMetacelloPreview #stable release #365
@dalehenrich Anything I can do on this to nudge it along so we can release the updated ConfigurationOfMetacelloPreview so Squeak 4.6 & 5.0 get a good version of Metacello?
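(For context, the preview configuration is what end users load today. A sketch of the load expressions, following the pattern documented in the Metacello README; treat the exact version symbols and repository spec as assumptions:)

```smalltalk
"Sketch, not authoritative: bootstrap Metacello Preview in a fresh image, then
 load the #stable preview configuration from the GitHub 'configuration' branch."
((Smalltalk at: #ConfigurationOfMetacello) project version: #'previewBootstrap') load.

(Smalltalk at: #Metacello) new
    configuration: 'MetacelloPreview';
    version: #'stable';
    repository: 'github://dalehenrich/metacello-work:configuration';
    load.
```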
the tests are failing on travis and while I think the changes could be causing the failures, I am also concerned about the volume of failures and the fact that I haven't seen 4.6 or 5.0 pass yet ...
While I agree that there's no need to merge until the tests pass, it seems like you triggering Travis is a bottleneck on this issue. Is there a way for me or @krono to keep fixing this without your intervention? I'd guess: check out the commit, make another issue, and make a PR. Does that seem right to you? I was looking at the massive failures and they seemed to be from the GitHub API rate limiting bug. Is builderCI using basic auth, OAuth, or unauthenticated calls to the GitHub API? Seems like it would have to be basic or OAuth, but I wanted to check with you before delving into that.
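(It isn't stated which of the three builderCI actually uses. For reference, a hedged sketch of probing the quota from an image with Zinc's ZnClient; at the time, unauthenticated GitHub API calls were limited to 60 requests/hour, while authenticated calls got 5000:)

```smalltalk
"Hypothetical quota probe; 'myuser' and 'mytoken' are placeholders.
 GET https://api.github.com/rate_limit returns the remaining quota as JSON."
ZnClient new
    url: 'https://api.github.com/rate_limit';
    username: 'myuser' password: 'mytoken';  "drop this line to see the unauthenticated quota"
    get.
```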
@krono and @pdebruic I am seeing these tests fail constantly for GemStone (not likely to be an API rate limit problem) and I am concerned that we have introduced a regression since the last release of the preview configuration. If it is indeed a regression then all of a sudden "every Metacello load in the universe will fail" ... so I have to be cautious, but of course it takes more time to be meticulous and up to now I haven't had the luxury ... I will eat dinner and see what I can find out tonight ...
... yeah those tests are failing on all platforms (almost) with the new configuration ...
did you find evidence? I looked for the explicit error messages and AFAIK I can't run tests on travis-ci using authentication (if you can ...). So I have to look closer on the off chance that we have a regression ...

Dale
Test failure reproduces locally so I'm debugging now ...
Side effect of an External project bugfix [1] ... with a fix for a ... I'll update the test on the branch then update the SHA ... we should see ...

Dale

[1] dalehenrich/external#2
…y to reduce sensitivity of test to new Seaside versions
…d so try to reduce sensitivity of test to new Seaside versions
Okay @pdebruic and @krono Squeak5.0 is producing an MNU:
@pdebruic and @krono at this point it looks like the only failures are Squeak5.0 and Pharo2.0 (occasionally) ... before going further I would like to know that Squeak5.0 is not going to require additional work --- I don't look forward to having to publish another SHA for Squeak5.0 (which requires a new version ... new tags ... new deploy) ...
I see. It seems to be a problem in the update procedure for Squeak 5.0, so it's nothing to do with metacello-work per se.
So, after fixing on the Squeak side, the MNU is gone and we are down to one error: MetacelloGithubIssue175TestCase>>testIssue175 (#175 is about GitHub caching stuff; I'll see to that)
Looking at the failure of #175: we had a clean slate for the whole github-cache stuff as of #345, but I don't see anything of that loaded. In my test image, replaying the Travis commands, I get ... Also, the error is a Heisenbug for me. When I run all tests for the first time, the error is present. When I try to debug or run it a second time, the error just disappears and everything works nicely. I am unsure about the origin of the error. Can we ignore this one? I think the cache dir switching issue is less important for the bootstrap and slipstreaming.
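(A minimal way to replay just that test in a workspace, using plain SUnit, nothing Metacello-specific; per the description above, the first run errors and the second passes, which points at leftover image state rather than the test itself:)

```smalltalk
"Run the suspect test twice in the same image and print both results."
2 timesRepeat: [
    | result |
    result := (MetacelloGithubIssue175TestCase selector: #testIssue175) run.
    Transcript show: result printString; cr ]
```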
regarding topa.49 ... this is the ConfigurationOfMetacelloPreview and it is an intermediate stepping stone to being able to load directly from github ... it is such a pain to maintain this whole bootstrapping cha cha that I minimize the changes that are back ported ... passing the tests at this point in time means that the latest version can be loaded ... Yes, I have seen the random errors and have been working on the master branch to eliminate them ... typically the problem has shown up because a particular package doesn't get unloaded correctly, which then upsets the image state for a follow-on test ... random failures can be ignored ... I've restarted the Squeak5.0 runs and if they pass I will move forward with the checklist ... I'm concerned about the Pharo2.0 failures, but I don't have the time to look into them ... I will cross my fingers:) Thanks @krono (and @pdebruic) for your help and patience ...
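(The end state that stepping stone is aiming at, per the Metacello README, is a direct GitHub load of the baseline; a sketch, with the branch and path as documented there:)

```smalltalk
"Where the bootstrap cha cha is headed: no ConfigurationOf at all, just the
 baseline loaded straight from the GitHub repository."
Metacello new
    baseline: 'Metacello';
    repository: 'github://dalehenrich/metacello-work:master/repository';
    load.
```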
Looks like the same error is hitting a slightly different test (previewBootstrap.st), being triggered in a CompiledMethodWithNode class>>generateMethodFromNode:trailer: call ...
@krono ... left you off the previous comment
@dalehenrich Where did you get the trigger message from? I can't find it in the log…
Hmmmm, there are too many things going on ... I swear that I re-ran the tests and then looked at the log file and saw that it was the same error (that's when I copied the ...). Apparently we're down to test failures? Perhaps you could double check the errors, since I distrust the ability of my browsers to show me the proper results from travis ... then we can call this fixed and I can move on to the next step ... I've got other things that I've got to get done, so I will move on when I have a few uninterrupted moments:)
Take your time.
So 2 Errors and 2 Failures
==== Startup Error: Error: Class category name 'Metacello-TestsPharo20MC' for the class 'MetacelloTestsPackageSet' is inconsistent with the package name 'Metacello-TestsCommonMC.pharo20'
Okay ... I've fixed the Pharo2.0 (and Pharo5.0) failure due to the package mismatch ... I've applied Paul's patch for Issue #369 on the issue_365 branch. The two errors do not reproduce locally, so they are probably due to random failures --- historically this is because a package was not cleaned up correctly by Squeak in one test, causing a failure in a subsequent test --- but random test failures won't prevent me from pushing this puppy ... assuming the tests will be green, I'll need to update the Preview config again ... we'll see if I get to this tomorrow:)
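(The startup error above is Monticello's category/package consistency check; the fix amounts to recategorizing the class so its category agrees with the owning package. A sketch using the names from the log; the exact target category is an assumption:)

```smalltalk
"Hypothetical repair: move the class into a category matching its Monticello
 package 'Metacello-TestsCommonMC.pharo20' (the .pharo20 suffix is the mcz
 branch convention, not part of the class category)."
MetacelloTestsPackageSet category: 'Metacello-TestsCommonMC'
```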
Okay, we're going to call the tests passing ... the only failure is with Squeak4.4 and I'm just seeing random test failures ... the master branch has much of the randomness squeezed out
yay :) |
Merge Issue #365 work into configuration branch
check off step 3
check off step 4
check off step 5: need PR #364 merged in to truly support Squeak4.6 and Squeak5.0
Basically slipstream the definitions onto the 1.0.0-beta.32.17 release
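(Here "slipstreaming" means editing the existing version method in ConfigurationOfMetacelloPreview rather than cutting a fresh version. For reference, the standard shape of such a method in the Metacello configuration idiom; the selector, imports, and mcz version below are placeholders, not the real definitions:)

```smalltalk
"Hypothetical version-method skeleton; package versions are placeholders."
version1000beta3217: spec
    <version: '1.0.0-beta.32.17' imports: #('1.0.0-beta.32-baseline')>
    spec for: #common do: [
        spec blessing: #release.
        spec package: 'Metacello-Core' with: 'Metacello-Core-dkh.100' ]
```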