fix: use ipfs peerId as pin location #966
Conversation
Cluster now provides the peerId for the underlying ipfs node in the pin status response, so we fix things to use that as our pin location peerId. Previously we used the cluster peerId, which should only be used for internal cluster admin. With this change users will be able to use the peerId in the /status response to connect to the ipfs node that has their stuff via `ipfs swarm connect <peerId>`.

- Fix toPin to use ipfsPeerId
- Update tests to verify that the status response for new content has the peerId of the ipfs node for one of the cluster nodes.
- Update ipfs-cluster client to [4.0.0](https://github.com/nftstorage/ipfs-cluster/releases/tag/v4.0.0)
- Update ipfs-cluster in docker-compose to [0.14.5-rc1](https://github.com/ipfs/ipfs-cluster/releases/tag/v0.14.5-rc1)
- Tweak test hooks to not wait for things to shut down before printing the test results overview, to reduce debugging friction.

Fixes: #414

TODO:

- [ ] db migration script to update PinLocation peerIds
- [ ] verify cluster infra is updated to ipfs-cluster >= 0.14.5-rc1
- [ ] schedule read-only maintenance

License: (Apache-2.0 AND MIT)
Signed-off-by: Oli Evans <oli@tableflip.io>
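For illustration, here is a minimal sketch of the kind of mapping the toPin fix describes: taking the ipfs peerId from the cluster status rather than the cluster peerId. The field names (`peerMap`, `ipfsPeerId`, `peerName`) and the `toDBPinStatus` helper are assumptions based on the description above, not the actual web3.storage source.

```js
/**
 * Map an ipfs-cluster pin status entry to pin location records.
 * Sketch only: the response shape and helper below are hypothetical.
 */
export function toPins (clusterStatus) {
  return Object.entries(clusterStatus.peerMap).map(([clusterPeerId, info]) => ({
    status: toDBPinStatus(info.status),
    location: {
      // Use the ipfs node's peerId (dialable via `ipfs swarm connect`),
      // not the cluster peerId, which is only for internal cluster admin.
      peerId: info.ipfsPeerId,
      peerName: info.peerName
    }
  }))
}

// Hypothetical status normaliser, just so the sketch is self-contained.
function toDBPinStatus (status) {
  if (status === 'pinned') return 'Pinned'
  if (status === 'pinning') return 'Pinning'
  return 'PinQueued'
}
```

The substance of the change is the `peerId: info.ipfsPeerId` line; everything else is scaffolding for the example.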
PeerId map extracted from …

Looks like the …

The thrashing of the … I think ideally that'd be something we fix on cluster. Either way, it's not really in scope for this PR, so I'll raise a separate issue to follow it up.
License: (Apache-2.0 AND MIT) Signed-off-by: Oli Evans <oli@tableflip.io>
Tested locally. New pin locations are created with the ipfs peerId. Next steps are …, and once that is done, test it out on staging.
Awesome, this is looking good! 👍🏼
We just need to figure out the remaining dependencies to move forward with release + maintenance window
@@ -0,0 +1,6 @@
UPDATE pin_location SET peer_id='12D3KooWPySxxWQjBgX9Jp6uAHQfVmdq8HG1gVvS1fRawHNSrmqW' WHERE peer_id='12D3KooWF6uxxqZf4sXpQEbNE4BfbVJWAKWrFSKamxmWm4E9vyzd';
Sadly we don't yet have a migration mechanism out of the box. Perhaps we can just create a migrations folder here and prefix the name of the SQL script with the date. What do you think?
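As an illustration of that suggestion, here is a minimal sketch of a runner that applies date-prefixed scripts from a `migrations/` folder in order. It assumes a Postgres database reachable via the `pg` client and a `DATABASE_CONNECTION` env var; neither the folder layout nor the env var name is confirmed by this PR.

```js
import fs from 'node:fs/promises'
import path from 'node:path'
import pg from 'pg'

// Apply every .sql file in ./migrations in lexicographic order, so files named
// like `2022-01-31-use-ipfs-peer-id-as-pin-location.sql` run in date order.
async function migrate (dir = './migrations') {
  const client = new pg.Client({ connectionString: process.env.DATABASE_CONNECTION })
  await client.connect()
  try {
    const files = (await fs.readdir(dir)).filter(f => f.endsWith('.sql')).sort()
    for (const file of files) {
      console.log('applying', file)
      await client.query(await fs.readFile(path.join(dir, file), 'utf8'))
    }
  } finally {
    await client.end()
  }
}

migrate().catch(err => { console.error(err); process.exit(1) })
```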
@@ -53,18 +53,19 @@ export const mochaHooks = () => {
     await delay(2000)
   },
   async afterAll () {
+    // Note: not awaiting promises here so we see the test results overview sooner.
This is fine for me. But one question first: is it "safe" to run the tests consecutively, or can we easily end up starting the tests while the containers are still being stopped/cleaned?
I think no, we might have to revert this if we ever need that.
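For context, the tweak under discussion looks roughly like the sketch below. The container teardown helpers are placeholders, not the actual project code.

```js
import { setTimeout as delay } from 'node:timers/promises'

// Placeholder teardown helpers standing in for the real docker-compose shutdown.
const stopContainers = async () => { /* e.g. spawn `docker-compose down` */ }
const removeVolumes = async () => { /* e.g. spawn `docker volume prune -f` */ }

export const mochaHooks = () => {
  return {
    async beforeAll () {
      // real container startup omitted
      await delay(2000)
    },
    async afterAll () {
      // Note: not awaiting promises here so we see the test results overview sooner.
      // Teardown keeps running in the background while mocha prints its summary.
      stopContainers()
      removeVolumes()
    }
  }
}
```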
@hsanjuan do you have a rough estimate for when you'll release v0.14.5 of ipfs-cluster? No rush, it's just to know when we could schedule landing this PR.

@olizilla today
Verified, cluster is ready for us to roll this out:

`ipfs-cluster-ctl --host /dnsaddr/web3.storage.ipfscluster.io version`
`0.14.5`

@olizilla let's schedule a maintenance window for next week
@@ -47,7 +47,7 @@ services:
   cluster0:
     container_name: cluster0
-    image: ipfs/ipfs-cluster:latest
+    image: ipfs/ipfs-cluster:v0.14.5-rc1
@alanshaw I'm going to revert this change to the docker-compose, as the latest is now v0.14.5
https://github.com/ipfs/ipfs-cluster/releases
License: (Apache-2.0 AND MIT) Signed-off-by: Oli Evans <oli@tableflip.io>
LGTM
Reverts #966 and instead stores the IPFS peer ID in the `pin_location` table.
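For illustration, the end-user flow this enables (looking up where an upload is pinned and dialling that ipfs node) might look roughly like the sketch below; the /status response shape shown is an assumption, not a documented contract.

```js
// Hypothetical usage sketch: the exact shape of the web3.storage
// /status/:cid response is assumed here, not guaranteed.
const cid = 'bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi' // example CID
const res = await fetch(`https://api.web3.storage/status/${cid}`)
const { pins = [] } = await res.json()

for (const pin of pins) {
  // pin.peerId is now the ipfs node's peerId, so it can be dialled directly:
  console.log(`ipfs swarm connect ${pin.peerId}`)
}
```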