Using faster url parser #643
Comments
A pull request or some other request for specific action is likely to get more attention here, FWIW.
Well, I'm requesting your opinion before I make it PR-ready (it's a normal user module right now).
I like the idea of making…
I think there is nothing wrong with using a faster parser, as it's just an…
I would love to see this on io.js.
@xaka It's about subtle things, like data properties vs. accessor properties. It's hard to foresee everything once the whole module changes. This requires a major version bump at least.
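The data-property vs. accessor-property distinction is observable to user code, which is why it is a breaking change. A minimal sketch of the difference (hypothetical object shapes, not the real url module code):

```javascript
// Hypothetical objects illustrating the two property styles; not the real url module code.

// Data-property style: every field is a plain own property.
const dataUrl = { protocol: 'http:', host: 'nodejs.org' };

// Accessor style: fields are getters that live on the prototype.
class AccessorUrl {
  constructor(protocol, host) {
    this._protocol = protocol;
    this._host = host;
  }
  get protocol() { return this._protocol; }
  get host() { return this._host; }
}
const accUrl = new AccessorUrl('http:', 'nodejs.org');

// Both read the same way...
console.log(dataUrl.protocol, accUrl.protocol); // http: http:

// ...but code that inspects descriptors or own keys can tell them apart.
console.log(Object.getOwnPropertyDescriptor(dataUrl, 'protocol')); // a data descriptor with `value`
console.log(Object.getOwnPropertyDescriptor(accUrl, 'protocol'));  // undefined: it lives on the prototype
console.log(Object.keys(accUrl)); // [ '_protocol', '_host' ], not [ 'protocol', 'host' ]
```

Any consumer that enumerates own keys or serializes the object (e.g. JSON.stringify) sees a different shape, which is the subtlety being pointed out.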
That's a good point. Switching from properties to accessors means breaking…
this is one of the reasons why i'm trying to get rid of the…
Can't we lazily patch the stringify and json transforms to avoid an API change/break?
The getters can be made enumerable (but they're still on the prototype), and the Url class can implement… It could also be considered to make the properties just eager data properties; it would still be 11x faster. But often many of the properties are not needed (especially .href?), so that could be a bummer.
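The lazy-getter approach under discussion can be sketched roughly like this (class and field names are hypothetical, not the actual implementation):

```javascript
// Sketch of lazy accessor properties: the expensive parsing work is deferred
// until a property is first read, then cached on the instance.
// Hypothetical names; the regex is a toy stand-in for the real parser.
let parseCount = 0;

class LazyUrl {
  constructor(input) {
    this._input = input;
    this._parsed = null;
  }
  _parse() {
    if (this._parsed === null) {
      parseCount++;
      const m = /^(\w+:)\/\/([^/]+)(\/.*)?$/.exec(this._input);
      this._parsed = { protocol: m[1], host: m[2], pathname: m[3] || '/' };
    }
    return this._parsed;
  }
  get protocol() { return this._parse().protocol; }
  get host() { return this._parse().host; }
  get pathname() { return this._parse().pathname; }
}

const u = new LazyUrl('http://nodejs.org/en/');
console.log(parseCount); // 0 -- nothing parsed yet
console.log(u.host);     // nodejs.org
console.log(u.pathname); // /en/
console.log(parseCount); // 1 -- parsed exactly once, then cached
```

This is what makes the "not retrieving properties" case so much faster: constructing the object does almost no work, and callers that never touch a component never pay for it.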
fixed at #1561
Reopening since it was reverted 😄
I think we could probably plan to get this into 3.0.0, with time to let the ecosystem know / start updating.

My question would be how much of a perf difference switching to plain properties over accessors would have. If we can keep the API and get most of the perf, that'd be a win-win.
perf is not related to accessors, but the data-property API is terrible and not like the browser API at all :)
I'd question how browser-like we want to go. Do we really want something like this?

    > document.location.hash = null
    > document.location.hash
    "#null"
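That "#null" comes from the browser stringifying whatever is assigned to the setter. A minimal sketch of the coercion (FakeLocation is a hypothetical stand-in, not the real Location):

```javascript
// Hypothetical stand-in for the browser's Location.hash coercion: every
// assigned value is stringified, then prefixed with '#'. Not the real Location.
class FakeLocation {
  set hash(value) {
    this._hash = '#' + String(value);
  }
  get hash() {
    return this._hash;
  }
}

const loc = new FakeLocation();
loc.hash = null;
console.log(loc.hash); // #null -- the same surprise as in the browser
loc.hash = 42;
console.log(loc.hash); // #42
```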
I assume the intention to follow browsers comes from a wish of writing "isomorphic" apps and following standards in general (even though it gives you nonsense like…)
I'd prefer not to have to run everything through toString(), which is probably what the browser does. (I guess that's maybe not a bad thing?)
(copypasta'd from #1591) This will probably find its way back to the TC at some point, but as I explain in npm/npm#8163, @isaacs and I don't think the right thing to do is to change npm to match the new…
I will simply just improve the performance; if it weren't for the lazy .href it could even be done in a patch. But only making…
The point is not this, but that accessors would have allowed all components of the url to be automatically up to date when one is changed.
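A sketch of that auto-updating idea (hypothetical names; not how the data-property module actually behaves, where components go stale):

```javascript
// With accessors, href can be derived on read, so changing one component
// never leaves the others stale. SyncedUrl is a hypothetical sketch.
class SyncedUrl {
  constructor(protocol, host, pathname) {
    this.protocol = protocol;
    this.host = host;
    this.pathname = pathname;
  }
  // href is recomputed from the parts on every read.
  get href() {
    return this.protocol + '//' + this.host + this.pathname;
  }
}

const u = new SyncedUrl('http:', 'nodejs.org', '/en/');
console.log(u.href); // http://nodejs.org/en/

u.host = 'iojs.org';  // change one component...
console.log(u.href);  // ...and href is automatically up to date: http://iojs.org/en/
```

With plain data properties, href would have been a snapshot computed at parse time, and mutating host would silently leave it inconsistent.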
I agree it would be much better to match browsers (as @isaacs has always suggested; surprised to see the about-face) and to maintain a consistent state between the different properties. If we can't manage to do that with…
My suggestion would be: in io.js 2.x, use the same parser as before, but emit a warning on deletion. A few ideas: …

In io.js 3.x, drop support for that. If npm won't change its url interface by then, we should consider applying a floating patch.
In case anyone didn't notice, 2.0.0 was released, so anything backward-incompatible (or probably even annoying, like a warning) would have to be in 3.0.0.
Well, there is nothing wrong with introducing a warning in a minor release.
The intention is not to follow browsers for the good of writing isomorphic apps. The intention is to follow url resolution logic like browsers, so that crawlers and other Node.js programs behave reasonably. The url object parsing is based on the browser… A 100% isomorphic, browser-style, accessor-using, auto-updating url object is a great idea. It belongs in npm, not in core.
Will this make it into 4.0.0?

That depends on @petkaantonov right now. See #1650.
Refs: #7448 ... a WHATWG URL Parser implementation. Bit slower than…
Closing given the lack of continued activity and discussion on this. Can reopen if necessary.
@jasnell @petkaantonov Testing this using node 7.5.0 on 16.04 (dual-core KVM, Intel Xeon E3-12xx v2) still shows a huge difference. We found out about this after CPU profiling our application, which showed url.parse takes most of the time, and we can't avoid making requests. We have switched to…

    node benchmark/nodecore.js
    node benchmark/urlparser.js
I just hit an issue in production where extremely long referer urls (3000 chars, 5500 chars, and 9000 chars) were causing our node instances to spike CPU usage, and we tracked it down to the… Anyone have an update on when this will be merged? I'll switch to… Latest node.
Still 2.5x:

    $ node -v
    v16.14.0
    $ node benchmark/nodecore.js
    misc/url.js parse(): 78603.333
    misc/url.js format(): 72242.340
    misc/url.js resolve("../foo/bar?baz=boom"): 71972.167
    misc/url.js resolve("foo/bar"): 78404.141
    misc/url.js resolve("http://nodejs.org"): 78088.377
    misc/url.js resolve("./foo/bar?baz"): 78914.819
    $ node benchmark/urlparser.js
    misc/url.js parse(): 169323.52
    misc/url.js format(): 199909.74
    misc/url.js resolve("../foo/bar?baz=boom"): 198722.90
    misc/url.js resolve("foo/bar"): 226340.86
    misc/url.js resolve("http://nodejs.org"): 159880.62
    misc/url.js resolve("./foo/bar?baz"): 284006.64
(Original node issue nodejs/node-v0.x-archive#6788)
I have rewritten the url parser module of node core, as it was/is a serious bottleneck in some of the TechEmpower benchmarks.
Running node's urlparser benchmark under io.js, it's still 16x faster when not retrieving properties and 11x faster when retrieving all properties (which are lazy getters in my implementation). In absolute terms, the current io.js urlparser does 25k parses per second vs. 400k lazy / 270k eager parses per second.
format and resolve are also affected to similar magnitudes.