example: Use js-ipfs to load the entire assets of a WebSite/WebApplication #127
Comments
Going to give this a shot.
Ok, so concerns burning my neurons: resolving dependency requests in a page. Initially I was thinking that the HTML5 LocalFileSystem API could be used to store a page and its dependencies. Turns out only one browser, Chrome, has or will implement it; in other browsers it's not available.

Then I thought this could be done using data URLs that encompass all the resources. Strangely, different browsers have idiosyncratic size limitations on these, so there may be a series of weird incompatibilities at the file level (but not my problem). As far as I am aware, CSS/HTML/SVG can all resolve paths that lead to network requests and can also take data URLs. So we could search the text/AST for relative and root-relative paths that match the resources in a page. This creates a text entry larger than the binary representation of the file everywhere it is referenced in the markup, which may be negligible considering we have already arrived at the browser. It also assumes we attempted to load a folder and are aware of its contents; a single HTML file with an unknown context for its dependencies may be invalidated by this approach. Still, this approach seems like it would work across browsers.

Another approach is that all pages have to be sanitized by their creators to be loaded by the browser and use JavaScript to load their IPFS-contained dependencies. This feels rigid and may be inconsistent with how the same resource might load from the gateway, unless there is a uniform method or implementation of this.

So am I missing anything here? Were there any other ideas you guys had or have envisioned?
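A rough sketch of the data-URL approach described above, assuming the folder's contents are already in memory; the `resources` shape and both function names are made up for illustration:

```js
// Sketch of the data-URL inlining idea. Assumes we already loaded a folder
// from IPFS and know its contents; `resources` (path, mimeType, bytes) is an
// illustrative shape, not an existing API.
function toDataUrl (mimeType, bytes) {
  // btoa expects a binary string, so convert the byte array first
  let binary = ''
  for (const b of bytes) binary += String.fromCharCode(b)
  return `data:${mimeType};base64,${btoa(binary)}`
}

function inlineResources (html, resources) {
  // Replace every known relative/root-relative path with its data URL.
  // This is why the rewritten markup grows: each reference now carries the
  // full base64-encoded payload.
  let out = html
  for (const { path, mimeType, bytes } of resources) {
    out = out.split(path).join(toDataUrl(mimeType, bytes))
  }
  return out
}
```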
Actually, there is another way you could handle this. As far as I understand, service workers allow you to proxy requests from your page, so if we had an IPFS proxy inside a service worker we could transparently proxy all the requests and resolve them through it. Service workers are available in FF and Chrome, so that's not too bad. Ref: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API
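A minimal sketch of that proxying idea in a hypothetical `service-worker.js`; `fetchFromIpfs` is a placeholder, not an actual js-ipfs call:

```js
// service-worker.js — intercept page requests and answer them from IPFS
// instead of the network. `fetchFromIpfs` is a stand-in for whatever
// js-ipfs API ends up being used to read the content for a path.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetchFromIpfs(event.request.url)
      .then((body) => new Response(body))
      .catch(() => fetch(event.request)) // fall back to the network on error
  )
})
```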
Ok, this is a way more natural approach. I'm going to start implementing based on workers. Thanks @dignifiedquire.
Ok, so progress... I was able to get the repo into the browser as well as get a router set up to respond to /ipfs/:hash with service workers. Caveat: service workers have to be used over SSL, which makes it not completely serverless. But now I don't know how to get the data back out of the repo. I don't believe there is export functionality yet, is there? (I'm professionally certified in being wrong.) The current work will be here as soon as I get to a more ssh-friendly network environment: https://github.com/vijayee/js-ipfs-example-browser
Not this time! haha. I am working on an export function in this PR. I'd like to work with you on this project as well, so look forward to hearing from me this week!
@nginnever how's the export coming?
I'm having trouble with the idb store. IPFS seems to expect it to have datastore methods, but it only has abstract blob-store methods, so I'm getting an error here:

```js
function BlockService(ipfsRepo, exchange) {
  var _this = this;

  this.addBlock = ipfsRepo.datastore.put.bind(ipfsRepo.datastore);

  this.addBlocks = function (blocks, callback) {
    if (!Array.isArray(blocks)) {
      return callback(new Error('expects an array of Blocks'));
    }
    async.eachLimit(blocks, 100, function (block, next) {
      _this.addBlock(block, next);
    }, callback);
  };
```

I assume it's a problem with the way I'm initializing the repo. Does anyone have any insight on how to do it properly? The following is a snippet of my approach:

```js
importScripts('./lib/IPFS.js')

const IPFSRepo = require('ipfs-repo')
const store = require('idb-plus-blob-store')

const options = {
  stores: store
}

const repo = new IPFSRepo('ipfs', options)
ServiceWorkerGlobalScope.IPFSNode = new Ipfs(repo)
```
In your example, the repo you create is not being used. `new Ipfs` takes one argument, which is either a path or a repo instance itself; you must pass it if you want to use a specific repo instance: https://github.com/ipfs/js-ipfs/blob/master/src/core/index.js#L28
@vijayee Any chance you've had time to look at this again? js-ipfs has improved a ton API-wise and examples-wise; it should help you get this done more easily.
@diasdavid @dignifiedquire I'm giving this a shot. So far I've run into a few issues, documented here: #718. Also, in order to save time, I'm using the voice memo demo (https://github.com/GoogleChrome/voice-memos), but I can do something else if that won't work.
Is this issue still open? Is anyone currently working on it?
Not currently. This ended up starting the webworker effort, but this issue didn't get much progress after that. If you're willing to give it a shot, now might be a good time; I can give some pointers as to what I was trying to accomplish if you're interested.
Maybe this could be developed as a webpack plugin? Where once the plugin is included + js-ipfs, it outputs a
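The comment above is cut off, but a webpack plugin along those lines might look roughly like the following sketch; the plugin name and the idea of emitting an asset manifest are assumptions rather than the author's actual design:

```js
// Hypothetical sketch of a webpack plugin (webpack 4+ hooks API) that
// collects the names of emitted assets into a manifest a service worker
// could later map to IPFS hashes. Plugin name and manifest format are
// illustrative assumptions only.
class IpfsManifestPlugin {
  apply (compiler) {
    compiler.hooks.emit.tapAsync('IpfsManifestPlugin', (compilation, callback) => {
      // List every asset webpack is about to write out
      const assets = Object.keys(compilation.assets)
      const manifest = JSON.stringify({ assets }, null, 2)

      // Emit the manifest as an additional build output
      compilation.assets['ipfs-manifest.json'] = {
        source: () => manifest,
        size: () => manifest.length
      }
      callback()
    })
  }
}

module.exports = IpfsManifestPlugin
```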
That could be one way. I kinda went a bit more ambitious with this one and was trying to get it working in service workers, with an eye on integrating it with the SW tools from Google. The good thing is that we ended up with WW support, so it should be possible now. But a webpack plugin would also do.
I was thinking of a different approach, but since I'm new to IPFS, I won't know if it will work until I try. Here are my thoughts: mimic how the browser works (fetch resources, build the rendering tree), but with a given set of DOM APIs. We can tweak/optimize the rendering method (possibly with webpack integration), but the goal should be to keep it as simple as possible. What do you guys think?
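A rough sketch of what "mimic how the browser works" could look like, using the standard `DOMParser`; `fetchFromIpfs` is again a placeholder rather than a real js-ipfs API:

```js
// Sketch of manually "rendering" a page fetched from IPFS: parse the HTML,
// find referenced sub-resources, and fetch each of them too. `fetchFromIpfs`
// is a placeholder, not an existing js-ipfs call.
async function loadPage (rootPath) {
  const html = await fetchFromIpfs(rootPath)
  const doc = new DOMParser().parseFromString(html, 'text/html')

  // Collect the obvious dependencies referenced by the markup
  const refs = [
    ...doc.querySelectorAll('script[src]'),
    ...doc.querySelectorAll('link[rel="stylesheet"][href]'),
    ...doc.querySelectorAll('img[src]')
  ].map((el) => el.getAttribute('src') || el.getAttribute('href'))

  // Fetch every dependency from IPFS as well
  const bodies = await Promise.all(refs.map(fetchFromIpfs))
  return { doc, refs, bodies }
}
```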
@tpae That might be a bit of overkill IMHO. What's needed is a way of loading those assets from IPFS as if they were fetched over HTTP. The nice thing about ServiceWorkers is that they can proxy requests transparently, so instead of doing a fetch to the original resource, you could fetch it from IPFS. From the perspective of an existing website (or the serviceworker flow in general), what's needed is a mapping from the original asset name to the IPFS hash. The basic flow would then be: the service worker intercepts each request, looks the asset name up in that mapping, fetches the content from IPFS by its hash, and responds with it.
This could later be integrated into the existing Google SW tools, which, if accepted, gives IPFS widespread support; take a look at https://github.com/GoogleChrome/sw-toolbox. I'm not sure how the webpack plugin would function, @victorbjelkholm, but I'd be interested to see that. One reason I went with the SW was that I couldn't see any other way of doing this; monkey-patching XHR/Fetch could get you halfway there, but the static resources would still have to be manually populated on page load, sort of like what @tpae is describing.
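A small sketch of that flow, with a made-up `manifest` mapping asset names to hashes and a placeholder `catFromIpfs`; neither comes from js-ipfs itself:

```js
// service-worker.js — resolve known assets through IPFS using a
// name → hash mapping. Both the manifest contents and `catFromIpfs`
// are illustrative placeholders, not part of any existing API.
const manifest = {
  '/app.js': 'QmHashOfAppJs...',   // hypothetical hashes
  '/style.css': 'QmHashOfCss...'
}

self.addEventListener('fetch', (event) => {
  const path = new URL(event.request.url).pathname
  const hash = manifest[path]
  if (!hash) return // not in the mapping: let the browser handle it normally

  event.respondWith(
    catFromIpfs(hash).then((body) => new Response(body))
  )
})
```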
@dryajov that makes sense. I was doing some reading and found this example: https://github.com/GoogleChrome/samples/blob/gh-pages/service-worker/mock-responses/service-worker.js So you want to simulate a cache hit with IPFS and return resources from IPFS directly, right?

From my understanding of service workers, they have limitations (no access to the DOM, HTTPS only, etc.), but they're still flexible enough to do what we need. I think your solution makes sense, but I'm wondering whether it will have implications for rendering that content onto the DOM. Maybe there are some security implications, since it feels like anyone could take advantage of this and possibly pull off some complicated phishing attacks. :)

I also think that if a user wants additional functionality in their web (IPFS) apps (loading custom JS, frameworks, etc.) or a possible future PaaS/Heroku-like service, we can utilize service workers to handle it. If it works exactly as you described, then we can create service workers that behave like the backend of the application, relaying data between IPFS and the browser seamlessly.
@tpae Yep, there should be a few more examples of how sw-precache does its thing, which is very close to what we're trying to accomplish here. Also, take a look at this example: https://github.com/GoogleChrome/voice-memos. I was using it as a test bed; it's a pretty popular SW example, and we could also take advantage of IPFS to store the voice memos themselves, which is a bonus.

As for security, yeah, we're kind of bypassing all the same-origin policies there, which does create some pretty big security issues, but for demo purposes it'd be ok ;) At this point, and until IPFS lands in the browser, whatever security we end up having will have to be handcrafted, which is not ideal, so beware. Also, @diasdavid @victorbjelkholm @jbenet @dignifiedquire @haadcode, re security with this approach: if you have any suggestions/ideas, it'd be great to hear them.
How do you go about registering the service worker? It needs to come from the same origin; do you suggest loading it locally for now and figuring it out later? Do you foresee IPFS becoming part of the browser anytime soon? I'm also concerned about mobile browser support, since service workers are not widely supported yet.
@tpae Yeah, it has to be a hybrid for now. The initial resource would be served over HTTP along with the SW code; from then on, IPFS would fetch the remainder of the resources. Note that the initial resource could also be hosted/stored on IPFS but served by an HTTP gateway. As for browser support, there is work being done to get browser vendors to start adopting this, but no idea on the timeline; hopefully fast enough. Also, here is the current state of SW support: http://caniuse.com/#feat=serviceworkers. Pretty much everyone is on board except Safari...
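For the registration half of that hybrid setup, the standard API suffices; a minimal sketch, assuming the worker script is served over HTTPS from the same origin as the page:

```js
// In the page served over HTTPS — register the service worker that will
// proxy subsequent asset requests through IPFS.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then((registration) => console.log('SW registered with scope', registration.scope))
    .catch((err) => console.error('SW registration failed', err))
}
```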
@tpae Thinking about security a bit more, there should really be no issues... The reason is that, first, you're only loading resources that you know about, and second, the hash guarantees that the resource can't be tampered with. However, if you try to proxy all calls to IPFS, then sure, anything could come back, but that is no different from calling some third-party resource: some trust scheme has to be established before you can safely interact with the resource (API keys, auth, certs, etc.). In the case of IPFS it could be a known root hash that you know and trust, but I'm getting a little outside my comfort zone on this one, so better have someone else clarify/confirm it ;)
@dryajov right, that makes sense. I think this is why they require HTTPS in order to run service workers: it adds some level of security, knowing that the data isn't tampered with in transit. I think having the initial bootstrap layer that loads resources makes sense. As a user, if I want to host my app on IPFS, I should be able to provide a proxy layer (either through GitHub Pages or other means).

We should also think about discoverability of resources; AFAIK, we can't necessarily predict which files/hashes we will need until we start rendering the content. I think this could also be used to improve security (see which resources will be loaded, and only load what is needed). Maybe we need a manifest file to point to resources (and also keep the ability to load existing external resources). You mentioned we need to define a mapping to the IPFS hash; we could potentially have dependency chains, like a resource loading another manifest of resources. I guess what I'm thinking of is something like a decentralized, IPFS-based npm.
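To make the "dependency chains" idea concrete, a manifest could itself reference other manifests; the format below is purely hypothetical:

```js
// Hypothetical manifest format: each entry maps an asset path to an IPFS
// hash, and a "manifests" section points at further manifests, giving the
// dependency-chain behavior described above. None of this is a real spec.
const manifest = {
  assets: {
    '/index.html': 'QmRootHtmlHash...',
    '/app.js': 'QmAppJsHash...'
  },
  // Nested manifests: e.g. a dependency that ships its own asset mapping
  manifests: {
    'some-library': 'QmDependencyManifestHash...'
  }
}
```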
Closing this issue. It is clear now that it makes sense to achieve this through a Service Worker with js-ipfs.
@victorbjelkholm I still think a webpack plugin would be worth implementing. Where can I find the source of ipfs-webpack-plugin to give it a try?
@mojoaxel I agree! It was private, but I've opened it up now. Consider the code very WIP, probably best to rewrite. I'm unsure if it actually does anything at this point...
Build a js-ipfs example where a browser page gets loaded, loads an IPFS node, and loads all of the page HTML through IPFS. Bonus points: