
Inability to compile nondigest and digest assets breaks compatibility with bad gems #49

Closed
gwcoffey opened this issue May 6, 2013 · 159 comments


@gwcoffey

gwcoffey commented May 6, 2013

For instance, the jquery_mobile-rails gem does not use image-url in its CSS. This is a bug really, but it "worked fine" in Rails 3. I assume they will fix the gem eventually. But in the meantime, in Rails 4, it is broken and as far as I can tell there's no way to work around it. The simple short term fix is to compile both digest and nondigest assets for this gem, but I don't see any way to do that. I can apparently only turn digest off or on globally for the application.

A rake assets:precompile:nondigest task or something similar would provide a workaround. But it seems to be gone now.

Is there a good way to deal with this, or do I just have to drop the gem and manage jquery mobile manually outside the asset pipeline, which is a pain?

@pelargir

pelargir commented Jul 5, 2013

+1

Removing generation of non-digest files also makes it impossible to have static 404 error pages that use the same images, JS, and CSS as the rest of the site (without making duplicate non-digest copies of those files in /public). But then I end up with two copies of the same file in my app.

It's really unfortunate that this feature would be yanked without an explanation of how to deal with these situations.

@rafaelfranca
Member

From the README: Only compiles digest filenames. Static non-digest assets should simply live in public/.

@pelargir

pelargir commented Jul 5, 2013

I saw that. So please explain how I'm supposed to reference my precompiled application.css file from my static 404 error page? Am I supposed to copy the precompiled file into /public and remove the fingerprint from the filename? So then, if I change the file in the future, I have to precompile, copy, and rename all over again? That's silly.

@guilleiguaran
Member

You can use a rake task like this for your case:

task non_digested: :environment do
  assets = Dir.glob(File.join(Rails.root, 'public/assets/**/*'))
  # matches the "-<32-char hex digest>." portion of a compiled filename
  regex = /-[0-9a-f]{32}\./
  assets.each do |file|
    next if File.directory?(file) || file !~ regex

    # strip the digest from the filename and copy the file alongside the original
    source = file.split('/')
    source.push(source.pop.gsub(regex, '.'))

    non_digested = File.join(source)
    FileUtils.cp(file, non_digested)
  end
end

@ryana

ryana commented Jul 21, 2013

FWIW, I disagree with the resolution of this issue. I'm all for software being opinionated, but I shouldn't have to create a rake task like this in order to compile assets without the digest. It is my application after all ;) The bigger issue from my perspective is that this breaks applications on Heroku that are doing things like serving JS widgets that can't have digests in the name.

@ryana

ryana commented Jul 21, 2013

https://gist.github.com/ryana/6049833

A monkeypatch for the current version of Sprockets. cc @pelargir

@tikaro

tikaro commented Jul 30, 2013

I also disagree with the resolution of this issue:
"So please explain how I'm supposed to reference my precompiled application.css file from my static 404 error page?" is more or less the use case I have - a centralized Rails single-sign-on utility provides precompiled CSS via absolute URLS to other client applications.

@rafaelfranca
Member

If you need to generate assets without digests, you have to do this on your own.

@tikaro

tikaro commented Jul 31, 2013

After talking this over, my big-chair developer decided to handle this by having Rails return a 301 redirect FROM the old, non-digested CSS location, TO whatever the latest digested CSS location is. That way, the client sites get to use a non-changing location and the rails app doesn't have to bake everything twice.
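
A minimal sketch of that redirect, assuming a plain route plus a tiny controller (the route and controller names below are made up for illustration):

# config/routes.rb
get '/stylesheets/application.css', to: 'asset_redirects#stylesheet'

# app/controllers/asset_redirects_controller.rb
class AssetRedirectsController < ApplicationController
  # Send clients of the fixed, non-digested URL to the current fingerprinted file.
  def stylesheet
    redirect_to view_context.asset_path('application.css'), status: :moved_permanently
  end
end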

@rafaelfranca
Member

That is exactly why we removed the non-digest version: the Rails app had to run the compilation twice.

@guilleiguaran
Member

@tikaro like https://github.com/sikachu/sprockets-redirect (it's for Rails 3.x but should be easy to port to Rails 4)

@ryana

ryana commented Jul 31, 2013

@tikaro clever, but you'll want to 302 instead of 301. 301s are permanent and some browsers will cache them.

@tikaro

tikaro commented Jul 31, 2013

@ryana good point, re: 302 and 301; I think I over-described what we've done, and may have gotten it wrong. Cloudfront is involved, too. Perhaps I can coax him in here so he can describe what he did.

@Intrepidd

Hey guys,

Just wanted to say that I liked the middleware solution, so I updated @sikachu's code for Rails 4: https://github.com/Intrepidd/sprockets-redirect

I'll issue a PR in no time.

Cheers

@will-r

will-r commented Aug 8, 2013

This seems like a real problem to me. It is now impossible to refer to Rails-generated assets from outside the Rails application. @Intrepidd's middleware is an elegant patch but sadly no use if you're delivering the assets through a CDN.

Would you accept a pull request that adds a config.assets.nondigest setting? There's no need to bake twice: we can just write duplicate files during Manifest#compile and remove them again during Manifest#remove. That's what I'm doing in a monkeypatch now and it seems to work well.

@ilyakatz

I concur, not having non-digest files without an alternative is pretty annoying. Maybe I missed something, but what problem does it solve that non-digest files are no longer generated?

@ilyakatz

For those of you looking for a way to make it work with error pages, I adapted a post on how to serve error pages from the asset pipeline (and added the bit about non-digest assets): https://gist.github.com/ilyakatz/6346020
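
The gist itself isn't reproduced here, but the general shape of that approach, routing error pages back through the app so the layout's stylesheet_link_tag resolves to the digested files, is roughly:

# config/application.rb
config.exceptions_app = self.routes

# config/routes.rb
get '/404', to: 'errors#not_found'
get '/500', to: 'errors#internal_error'

# app/controllers/errors_controller.rb
class ErrorsController < ApplicationController
  def not_found
    render status: 404
  end

  def internal_error
    render status: 500
  end
end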

@scottillogical

I also have a use case for the ability to precompile assets without digests - we need to reference a javascript file from a 3rd party app, and it would be nice to use the asset pipeline - just without the digests!

@alexspeller

Agreed @will-r, it seems easy to avoid compiling assets twice and still have this built in.

@tilsammans

I know it's unproductive to pile on, but I really agree with all the commenters here. A decision was made to remove this and, by gosh, we're gonna stick to that decision no matter what, it seems. Meanwhile, the use case for non-digest assets is completely valid IMO:

  1. download almost any javascript library, like fancybox
  2. put jquery.fancybox.min.js in vendor/assets/javascripts
  3. put related images in vendor/assets/images
  4. include the main javascript in your application.js
  5. watch all your images not loading.

What are we supposed to do in this case? Put fancybox in public, and so bypass the asset pipeline altogether? This does not seem like an improvement to me; in fact it's a regression. We had something that worked just fine, and now we are back to loading a bunch of javascripts separately. Should we ditch the asset pipeline altogether?

If I am missing something, I sure would like to be educated. But putting potentially a dozen scripts in public is no kind of solution; frankly I am shocked it is even being suggested.

@alexspeller

Yeah, I haven't yet seen a single Rails 4 project that doesn't need to work around this. This decision should be reconsidered, IMO.

@turlockmike

This was a bad decision and is negatively affecting multiple apps I'm working on for no reason. Rails should not be this opinionated.

@ryana

ryana commented Sep 20, 2013

@rafaelfranca are you ever gonna come comment on this issue again? I've moved on by using the monkeypatch above, but I find it funny that I get emails from this thread every few days and you guys are just ignoring it. I understand you have jobs and are probably busy, but you're stewards of this project. If you can't handle it, responding with a simple "ok cool, submit a pull request and we'll merge it in" would suffice. I'm sure the community would pick up the slack.

@will-r

will-r commented Sep 20, 2013

This is what I'm using at the moment. It's a little crude and doesn't tackle the slightly harder problem of what files to remove, but as you can see the added computation is very slight. All we have to do is build another filename and write that file too.

module Sprockets
  class Manifest

    def compile(*args)
      unless environment
        raise Error, "manifest requires environment for compilation"
      end

      paths = environment.each_logical_path(*args).to_a +
        args.flatten.select { |fn| Pathname.new(fn).absolute? if fn.is_a?(String)}

      paths.each do |path|
        if asset = find_asset(path)
          files[asset.digest_path] = {
            'logical_path' => asset.logical_path,
            'mtime'        => asset.mtime.iso8601,
            'size'         => asset.bytesize,
            'digest'       => asset.digest
          }
          assets[asset.logical_path] = asset.digest_path

          target = File.join(dir, asset.digest_path)
          # also compute the non-digested path so a duplicate copy can be written
          undigested_target = File.join(dir, asset.logical_path)

          if File.exist?(target)
            logger.debug "Skipping #{target}, already exists"
          else
            logger.info "Writing #{target}"
            asset.write_to target
            # duplicate write: the same compiled asset under its non-digested filename
            asset.write_to undigested_target
            asset.write_to "#{target}.gz" if asset.is_a?(BundledAsset)
            asset.write_to "#{undigested_target}.gz" if asset.is_a?(BundledAsset)
          end

          save
          asset
        end
      end

    end
  end
end

@alexspeller

Hot from the mines, there's now a gem for this:

https://github.com/alexspeller/non-stupid-digest-assets

gem 'non-stupid-digest-assets'

Problem solved.

@ryana

ryana commented Sep 20, 2013

lol

@jeremy
Member

jeremy commented Sep 20, 2013

We've had no trouble using digested assets throughout, and we've also updated our Rails 3 apps to do the same.

When an external app needs to reference a specific static asset, you can use a separate rake task that symlinks yourasset-<digest>.css to yourasset.css after precompile.
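
A minimal sketch of such a task, assuming the JSON manifest layout (manifest-<digest>.json with an "assets" map) that the copy-based tasks later in this thread also rely on:

require 'fileutils'

namespace :assets do
  desc 'Symlink digested assets to their logical (non-digested) names'
  task symlink: :environment do
    manifest_path = Dir.glob(Rails.root.join('public/assets/manifest-*.json').to_s).first
    manifest = JSON.parse(File.read(manifest_path))

    manifest['assets'].each do |logical_path, digest_path|
      source = Rails.root.join('public/assets', digest_path)
      link   = Rails.root.join('public/assets', logical_path)
      FileUtils.mkdir_p(link.dirname)
      FileUtils.ln_sf(source, link) # overwrite stale links on redeploy
    end
  end
end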

When you want to pull in an external lib that has its own scripts, images, and css, you can vendor the assets in public/somelib-v1/* and include the javascript in the asset pipeline. Then you're still free to put far-future expires on all static assets. When you upgrade to somelib v2, just add a public/somelib-v2/ directory.

Stable assets that can be publicly cached—and CDNed—forever. Now that's non-stupid 😁

@turlockmike

I don't have a problem using digested assets, but I do have a problem with rails being so opinionated about it and removing the choice. The biggest problem being with 3rd party javascript libraries or using my own app as a javascript source. Not every app is the same and that's why these choices need to exist.

@ryana

ryana commented Mar 21, 2014

Yes I know how proxies work. I've used nginx and varnish and apache. I understand that there can be complex caching strategies implemented at layers above the Rack app. But my statement stands: I don't understand how this affects the discussion on how to customize the headers that Sprockets sends.

If someone is using a CDN or a proxy, it is that person's responsibility to make sure the correct caching strategy is in place. For everyone else, if we're going to tell sprockets to not generate digests for certain files, we need the ability to tell Sprockets "Hello Mr. Sprocket, this file over here might change without a change in filename, so please don't send a public Cache-Control header. xoxo"
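
For reference, the only knob Rails exposes at this point is global, not per-file, which is exactly the gap being described here (a sketch of the existing setting, applied by ActionDispatch::Static to everything it serves out of public/):

# config/environments/production.rb
# one Cache-Control value for all statically served files; no per-file override
config.static_cache_control = 'public, max-age=31536000'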

@Intrepidd

@ryana I get your point now, sorry.

Indeed this would be a good addition imo.

@schneems
Member

Christ this thread is long. It’s going to get much longer.

Sprockets handles quite a few use cases; the hard part is that each app will only use one of many possibilities. You cannot just look at your own use case without considering all the others. I've gone through the (entire) thread and I'm trying to condense all the use cases. Below I've replied on why a middleware solution (as proposed earlier, I see) could work.

This seems to be the sentiment on this thread:

There hasn't been an explanation of why something so seemingly benign has been removed

I don’t think this feature was as benign as you might think. It can really trip people up when they’re setting a far future cache (as you should when serving assets). It may work for most use cases but when it blows up, it blows up in the worst way: silently due to improper cache invalidation.

Inline replies

middleware is an elegant patch but sadly no use if you're delivering the assets through a CDN.

@will-r I believe this is incorrect. A CDN would obey the cache-control rules here. If there are none, the CDN will fetch a new version every time, so your CDN still works; it just becomes less performant.

I'm not sure how Heroku works though.

We run assets:precompile and then let the app serve assets (config.serve_static_assets = true). The only downside here is that it's slow to serve. If you use a CDN (which any production app should be doing) it suddenly becomes faster than even serving via NGINX. It takes minutes to set up a CDN, and it's practically free performance. It's 2014; there's no reason you should be running in production without a CDN.

Here are the use cases.

Below each is how well a middleware solution would work.

  • Static 404 page referencing asset pipeline assets
    ✔️
  • Works with CDN
    ✔️
  • Reference a javascript file from a 3rd party app
    ✔️
  • jQuery libraries don't know anything about the asset pipeline (changing url(..) to image-url(".."))
    ✔️
  • Doesn't work with nginx when the app isn't serving static files (config.serve_static_assets = false)

I think this is the wrong default setting. If NGINX sees the file in public and serves it, then Rails/Sprockets will never see the request. If the file is not present I believe (will need to verify) that it will send the request to the application. Besides, you should be using a CDN in front of NGINX even if you are using it to serve assets.

  • CDN performance problem: not using cache-control means CDNs and browsers don’t cache at all, meaning every asset request hits our servers every time.

This is a problem with middleware or any other solution. Even in Jeremy's proposed symlinking solution you have to use a new version every time. That works for him, but won't work for the case of having a 3rd party reference a javascript file. Here's my proposal: allow setting a default cache-control for all middleware-served assets (keep it low, 10 minutes or so; by default it is 0), and allow the middleware to take a param like ?cache-control=10000 which dynamically sets cache control. As most CDNs treat different query params as different cache objects, this causes a new object lookup and the cache-control to be set to whatever you desire. Maybe this part is overkill or obviously a bad idea, but it is a place to start.

tl;dr

Symlinking or simply generating non-digest assets does not work (due to cache-invalidation) and it cannot be re-introduced. I think we can address all these use cases with a middleware option with some tweaking. If we choose not to fix this we should match production behavior to development to avoid #49 (comment) at the very least.

FWIW we had to add a section to the Heroku documentation to document this behavior https://devcenter.heroku.com/articles/rails-4-asset-pipeline#only-generate-digest-assets as it was so unexpected for many.

If I've left out a use case let me know.
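
To make the shape of that proposal concrete, here is a rough sketch of such a middleware (the class name, option, and wiring are made up for illustration; this is not an existing Sprockets API):

class NonDigestAssetRedirect
  # cache_control: what to send on redirects for non-digested asset URLs
  def initialize(app, cache_control = 'max-age=0, private, must-revalidate')
    @app = app
    @cache_control = cache_control
  end

  def call(env)
    path = env['PATH_INFO']
    if path.start_with?('/assets/')
      logical_path = path.sub('/assets/', '')
      digested = ActionController::Base.helpers.asset_path(logical_path)
      # only redirect if the helper resolved to a different (fingerprinted) path
      if digested != path
        return [302, { 'Location' => digested, 'Cache-Control' => @cache_control }, []]
      end
    end
    @app.call(env)
  end
end

# config/application.rb (illustrative)
# config.middleware.use NonDigestAssetRedirect, 'public, max-age=60'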

@ryana

ryana commented Mar 21, 2014

@schneems Much comment. Such A+. Would read again ;)

So as I understand it, your middleware solution would rewrite /assets/foo.jpg to asset_path('foo.jpg'). Am I also correct in assuming that when it had to do such a rewrite (and only when it had to do that rewrite) it would remove the Cache-Control header if one was sent up by a lower middleware layer (Sprockets), and ensure that an Etag was on the response?

The use case for that (which I actually just realized may not be in your response) is when you are Shopify and you want your users to include https://cdn.shopify.com/s/assets/external/app.js but app.js could change at any time.

@schneems
Member

@ryana s/assets/external/app.js would need to be served without cache-control (set to 0) to make it update in real time. If you're using CloudFront it will respect cache-control, so although the request goes through a CDN, the request immediately gets forwarded to your app server and doesn't get cached. This is less than ideal (the CDN now adds latency), but perhaps setting cache-control to 60 may be close enough to real time for your application; although the cache resets every 60 seconds, you at least get some benefit from the CDN.

So to clarify: by default we set cache-control to 0. This is the only way we can guarantee updated assets are served by CDN/browser/proxy/whatever. Allow the user to configure this with something like:

config.static_cache_control.non_digest = "public, max-age=60"

So by default we prefer usability and make performance configurable on a per-app basis by exposing a separate cache control setting.

@gwcoffey
Author

@schneems, are you suggesting that this behavior would be possible with configuration, or automatic for all assets? If it applies to all assets, then we're right back where we were in 3 (and partially where we are now): mistakenly not using asset-url() and its ilk silently fails. How do I as a developer easily catch in testing that I've forgotten an asset-url?

In 4, the asset isn't served at all, so I can see the failure (although not in development, which is a bug in my mind).

In 3, the asset is served with long cache expiration, and we have problems updating it in the future.

In your proposed model, it will be served, and the cache will be handled, but now my app is requesting the asset on every page view, and I'm oblivious to it unless I check the log, or get some FOUC or something.

I would much prefer that when I forget to use asset-url:

1: The asset doesn't serve in development.
2: The asset doesn't serve in production.

If I want to do something configuration wise to deliberately bypass that I'm all for it. But the default should be to fail fast.

Using something like this for a third party library (ie: jquery, etc…) seems short-sighted to me. You're deliberately defeating cache efficiency to save the developer a few minutes of effort. If you want to put a third party asset-heavy library in your project, you have two options:

1: Convert it to use asset-url. Really, in my experience, this usually takes 10 minutes, and then updates go through the pipeline and you get good caching and protection when things change.

2: Put it in public, in a versioned directory, and adjust the link/javascript tags accordingly. You still get excellent cache performance, and you still can swap in new versions fairly easily. You don't get the JS/CSS compiled into your application.* so you incur an extra request, but because you have long cache expirations, it is only one request per visitor.

If Rails wants to make it easier to incorporate third party libraries into the pipeline, it should make it automatically handle url(..) as asset-url(...) intelligently during compilation (which may be hard) rather than defeat the cache for every request. I'd much rather get good cache performance and good safety than "slightly easier".

So it seems the middleware would primarily be there to facilitate the "api.js" scenario, and configuring it would be fine.

Geoff

@schneems
Member

Thanks for your input, though this is a huge 🚲 🏠 and we have to pick a default. No default will make everyone happy. The premise of this thread is that people are not happy that there was a feature regression without any warning. You are suggesting that we ignore that problem, keep the existing 4.x functionality, and raise an error instead. A developer must then take action to get the prior behavior, which is no different from adding an extra gem, which everyone here seems to agree is not a good thing.

Your issue is this: you have relied on assets failing in production as a "feature", and my proposed change would take that feature away. We need a replacement way to make it obvious when you are doing the wrong thing(TM). A kneejerk workaround would be to remove the middleware (http://guides.rubyonrails.org/rails_on_rack.html#highlighter_76637), and then you're back at current functionality. Maybe a better solution would be to introduce a development dependency similar in nature to github.com/schneems/sprockets_better_errors that raises an error in development for performance-tuning reasons.

At the end of the day we cannot have it "just work", which would make everyone on this thread happy, and also have it "complain and blow up loudly when it's not performant". Active Record allows you to write really slow queries without complaint, yet if you are interested in tuning there are a variety of tools and techniques available. Setting a config.static_cache_control.non_digest would actually make your non-digest assets performant behind a CDN, so it's a pretty big leap of faith to assume that using one SASS helper or another was incorrect (by default). I do believe this is an issue that should be addressed, maybe even in Rails, but I do not believe this is the forum to discuss that particular need at length. The core issue is "Inability to compile nondigest and digest assets breaks compatibility with bad gems"; let's try to focus on that.

@karmajunkie

@schneems I don't want to speak for anyone else, but for me the immediate issue is not a change in the defaults. Defaults change for good reasons, and I believe and agree that the defaults should serve the masses and what is believed to be "best practice" for the majority.

My issue (and I believe most on this thread will agree with me) is that not only were the defaults changed, but it was done in a way that made it impossible to change them back; it wasn't just defaults, but capabilities themselves, that were changed or removed. On a meta level, for a platform as widely used as Rails, deprecating a feature like that is a policy mistake on the part of the maintainers and core team.

While I agree that the caching behavior for non-digested assets isn't desirable for most people, for some it's essential, and most of us think of ourselves as grownups who are capable of making a tradeoff intelligently. I think everyone probably agrees that the development vs. production divergence is also a "bug". But most of all, what I've found frustrating following this thread is the vocal feedback on legitimate use cases from plenty of developers, and the "too bad" attitude towards that point of view despite several people volunteering to submit PRs that would fix the problem in a non-obtrusive way.

To the maintainers:
I'd like to echo the sentiment that _your work is appreciated._ Seriously, thank you. When we've been banging our heads against a frustrating problem, it's easy for consternation to take over the keyboard. But we all know (or should) that maintaining a high-profile project requires quite a bit of effort. That said, taking over a component of a high-profile project like Rails may be a volunteer duty, but it's a duty you volunteered for, and the community of users rightly expects a level of interaction that rises above "this isn't a problem for me, so I don't want to maintain that somewhat dusty corner of the library." Lastly, I appreciate the movement in the last couple of days to try to resolve this issue in a way that at least attempts to address the problems highlighted in the thread.

@amacneil

I also want to add that I'm totally fine with the new default behavior. I'm more than happy to enable nondigest assets on a per-file basis, which hints to me that I'm doing the Wrong Thing, but still fixes the majority of use cases described above.

I think the middleware redirect above is smart. Although it won't fix the problem if people are using nginx to serve assets, it sounds like most people use a CDN directly in front of Rails these days anyway. I also think removing caching headers for non-digested files is sensible (the lack of cacheability is part of the trade-off).

Thanks again for all the work that has gone into this gem. It's really head and shoulders above anything else available out there.

@zarqman
Contributor

zarqman commented Mar 30, 2014

I've been bitten by the core issue here a few times: needing to reference consistently named .css and .js files from external sources, such as email or 3rd-party sites.

The middleware approach sounds like the best option on the whole (thanks for proposing it @schneems). It addresses multiple core issues rather elegantly:

  • The redirect avoids having to keep a non-digested version of the file in sync, which has proven to be a problem at times.
  • It will work with nginx/apache (albeit with potentially reduced performance), as the typical configuration will pass the request through and let the middleware handle the translation (this probably assumes config.serve_static_assets = true). It may be possible to integrate with config.action_dispatch.x_sendfile_header to minimize the performance hit in some situations.
  • It goes a long way toward ensuring that assets still work when misconfigured, while also addressing the cache-control matter. The cache expiry issue very likely would benefit from middleware in certain deployment stacks, such as Heroku, even if another approach were to be pursued for the filenames. Might as well have it all in one place.
  • It's easily backported to/enabled on older versions of Rails.
  • It offers a way to assure that development and production behave the same way.

On this last point, middleware is not the only potential solution of course. Other work is happening in #84, #117, etc. This is encouraging and useful, as it would save countless hours on the part of not only developers, but everyone they ask for help. Since I'm going to assume the middleware will be able to be disabled (and might even default to that way), this other work is definitely not wasted.

As just hinted at, this new middleware doesn't have to be enabled by default. There is a precedent for enabling/disabling middleware based on a config setting (config.force_ssl as one example), so I wouldn't think it'd be too bothersome to do it here too.

All that said, I'm encouraging the following:

  1. Create the middleware, hopefully as a default part of the Rails stack somewhere. It's a viable solution that addresses a couple real, in-the-wild problems.
  2. Continue the discussion on whether the best default behavior for Rails assets is to just-work or to strongly-encourage-good-behavior.

If the consensus is to just work, then enable the middleware by default. This should also implicitly solve the development/production behavioral divergence issue.

On the other hand, if the consensus is to encourage the right thing, then disable the middleware by default. The middleware will still exist for those that have more diverse needs or other personal reasons for wanting that behavior.

@zarqman
Contributor

zarqman commented Apr 1, 2014

For those interested, I've taken a stab at this middleware: https://github.com/zarqman/smart_assets

@paulkoegel

+1 for optional asset paths without a digest.

@arcmode

arcmode commented Apr 8, 2014

Another option is to create soft links to the latest assets:

namespace :assets do

  desc "Create soft links as non digested assets"
  task soft_links: :environment do
    Rake::Task['assets:precompile'].invoke
    assets = Dir.glob(File.join(Rails.root, 'public/assets/**/*'))
    # locate the JSON manifest (manifest-<digest>.json) written by precompile
    manifest_path = assets.find { |f| f =~ /manifest-[0-9a-f]{32}\./ }
    manifest_data = JSON.load(File.new(manifest_path))
    manifest_data["assets"].each do |asset_name, file_name|
      file_path  = File.join(Rails.root, 'public/assets', file_name)
      asset_path = File.join(Rails.root, 'public/assets', asset_name)
      # symlink the logical name to the digested file, replacing any stale link
      FileUtils.ln_s(file_path, asset_path, force: true)
    end
  end

end

@wuservices

I'm running into this now as I'm upgrading to Rails 4. I'm looking for non-digested assets for the reasons listed here, but another reason I need them is as a fallback for older assets.

I have pages and paths of digested assets cached in other non-Rails parts of my application. These pages may be cached by a reverse proxy and could thus serve up digests that no longer exist.

I used to solve this by catching a 404 for a stale asset and redirecting to the non-digested asset with a short cache-control header. Before I did this, I found that sending out a bunch of deploys during the middle of the day would lead to missing assets and a broken experience for many users.

I'm sure this scenario isn't common as I'm parsing the manifest and using it in my Java app, but I wanted to share this use case. For now, I'll be trying some of the workarounds listed here.
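
In the current digest-only setup, a rough equivalent of that fallback (illustrative route and controller names; it also assumes the missed request actually reaches Rails rather than being 404ed by the web server) could look like:

# config/routes.rb -- only reached when the static file middleware finds nothing on disk
get '/assets/*name', to: 'asset_fallbacks#show', format: false

# app/controllers/asset_fallbacks_controller.rb
class AssetFallbacksController < ApplicationController
  DIGEST = /-[0-9a-f]{32}(?=\.)/

  def show
    # strip the stale fingerprint and redirect to the current one, cached only briefly
    logical_path = params[:name].sub(DIGEST, '')
    expires_in 1.minute, public: true
    redirect_to view_context.asset_path(logical_path), status: :found
  end
end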

@brad-werth

+1 for optional asset paths without a digest.

@joevandyk

I don't really understand why the rake task that generates assets can't also make a copy of the file to a non-digested filename.

namespace :assets do
  # defining assets:precompile again appends this block, so it runs after the
  # standard compilation step
  task :precompile do
    assets = Dir.glob(File.join(Rails.root, 'public/assets/**/*'))
    # locate the JSON manifest (manifest-<digest>.json) written by precompile
    manifest_path = assets.find { |f| f =~ /manifest-[0-9a-f]{32}\./ }
    manifest_data = JSON.load(File.new(manifest_path))
    manifest_data["assets"].each do |asset_name, file_name|
      file_path  = File.join(Rails.root, 'public/assets', file_name)
      asset_path = File.join(Rails.root, 'public/assets', asset_name)
      # copy the digested file to its non-digested name
      FileUtils.cp(file_path, asset_path)
    end
  end
end

Adding that to my Rakefile solved the issue.

@BrandonMathis

I was able to sidestep this issue by just copying minified JS that I didn't want digested into a public/javascripts folder.

@turlockmike

I haven't chimed in in a while, but I want to definitely +1 the Rack Middleware solution.

It would help if we could add the must-revalidate directive and the Last-Modified header. Ideally, each file should have an associated last-modified date that gets updated when the file changes. This would make HTTP caching very valuable while still solving the issue for a lot of people. Of course this solution means an app is likely to receive a lot more requests than if we use digests in the file name, but it's probably the next best solution.

@saurabhnanda

Where can one read the rationale for why the default behaviour was changed? We lived with the previous behaviour for so many years; what exactly was the problem/bug/security issue this change fixed?

@akaspick

I'm not sure of the actual reason, but I do know that before it used to take a LONG time to compile assets on a deploy because it would look through basically everything in an app and gems and compile it. Now it's quick and speedy only compiling the desired assets. A deploy for one of my apps went from over 5 minutes to roughly 30 seconds.

@ryana

ryana commented May 28, 2014

@saurabhnanda I believe the rationale was that the previous behavior, despite being around for a while, was decided to be a poor pattern since it encouraged stale assets being cached on edge locations (i.e. the new application.js doesn't get picked up when it changes). The new changes make it impossible for an asset to be stale (since the name changes), while still letting Rails safely use cache headers that maximize end-user performance.

One of the discussions going on in this thread -- and IMO, the most important one ;) -- centers on the fact that in this new world it's not possible out of the box to use the asset pipeline to generate an asset (e.g. my-widget.js) that can be referenced by a 3rd party (e.g. if you have a JS tag generated by the asset pipeline, your customer cannot install that JS tag on their site because the name of the generated file keeps changing).

@akaspick I believe the decrease in compile time had little to no impact on why this change was made. Furthermore, Rails 4 brought in some optimizations from turbo-sprockets et al. that only compile assets when they need to be changed. That was a big performance boost.

@mlangenberg

Is it really that difficult to only set far future expire headers for filenames containing a fingerprint?

location ~ "^/assets/.*-[0-9a-f]{32}.*" {
  gzip_static on;
  expires max;
  add_header Cache-Control public;
  break;
}

Add this to your Nginx configuration and you can safely use cache headers to maximize end-user performance without risking application.js getting stale.

You would refer to digested filenames from your Rails app, while a 3rd party can still reference my-widget.js.

@wuservices

@mlangenberg, this won't work in situations where an asset isn't consistently available.
There are 2 scenarios for this in a scaled environment:

  1. The asset doesn't exist yet because it's too new. This can happen during a rolling deployment. Maybe you get HTML from one server that has just restarted and has the newest version of the page, with fingerprints for the latest assets. However, the browser then makes another request which is load balanced to a server that hasn't restarted yet. In a dumb horizontal architecture without any kind of stickiness, you'd get a 404 for the asset. As I mentioned above, my solution for this was to write a rule to catch the 404, then proxy the non-digested version with short cache-control headers (at the old URL to avoid an extra round trip).
  2. The asset is too old and has been cleaned up after multiple quick deployments. If Sprockets only keeps a few versions of each asset, what if you deploy 4 times in a row very quickly? What if your entire site is cached by a CDN and it is still serving slightly stale versions of the page while the cache is invalidated, even if only for 15-30 seconds? The HTML will then again refer to assets which potentially don't exist.

In either case, non-digested assets alone don't fix the issue, but they make serving the correct / most recent version of the file much more feasible. I've solved this with another gem for now, but hopefully this use case makes sense. I also agree with the difficulties serving static assets for JS libraries, but that hasn't been as big of an issue for me personally. I can just put the files outside of Rails.

@procommerz

This is so wrong... Anybody who believes this issue should remain closed must rethink. It's the single most powerful source of pain for thousands of Rails developers, all because of somebody's personal, subjective judgement of what is and isn't a bad pattern.

@rails rails locked and limited conversation to collaborators Jun 13, 2014
@rafaelfranca
Member

Thank you. I locked this conversation. We already have enough information to think about this issue, and in fact we are going to handle it in some way (we already made some improvements on the master branch to alleviate the pain). If we think we need more feedback from the community I'll reopen the thread.

m0gg added a commit to m0gg/dart-rails that referenced this issue Dec 4, 2014
  * due to sprockets digest method, we are forced to use a workaround to allow
  "precompilation" of .dart files, see [non-stupid-digest-assets](https://github.com/alexspeller/non-stupid-digest-assets)
  and [Inability to compile nondigest and digest assets breaks compatibility with bad gems #49](rails/sprockets-rails#49)
  it is optionally (and automatically) available by adding `gem 'non-stupid-digest-assets', '>= 1.1', github: 'm0gg/non-stupid-digest-assets'` to your Gemfile
  * added dart_app.js to precompile list
  * faced an issue with UglifyJs stackoverflowing with too large dart2js files
vuntz added a commit to vuntz/barclamp-crowbar that referenced this issue Feb 2, 2015
Based on
rails/sprockets-rails#49 (comment)

Without this, a lot of things just don't work (including static pages).