Timing out after 1.0 -> 1.1 upgrade #334

Closed
jakecraige opened this issue Aug 17, 2015 · 5 comments
Comments

@jakecraige

My Rails 4.2.2 application (Ruby 2.2.2) on Heroku is timing out after the upgrade. I'm using Rack::Timeout to kill requests after 10s, and it hits that limit every time.
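For context, a 10s limit like the one described is typically set in an initializer. This is a minimal sketch, not taken from the reporter's app, and the exact setter has varied across rack-timeout releases:

```ruby
# config/initializers/rack_timeout.rb -- illustrative only; later
# rack-timeout versions use Rack::Timeout.service_timeout or
# middleware options instead of this class-level setter.
Rack::Timeout.timeout = 10  # kill requests that take longer than 10 seconds
```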

Here's what I'm seeing in the logs:

2015-08-17T19:29:51.612196+00:00 app[web.1]:   Rendered a_template.html.slim (20.3ms)
2015-08-17T19:29:52.063606+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=2062ms state=active
2015-08-17T19:29:53.004007+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=3003ms state=active
2015-08-17T19:29:54.007696+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=4006ms state=active
2015-08-17T19:29:55.004989+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=5004ms state=active
2015-08-17T19:29:56.024033+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=6023ms state=active
2015-08-17T19:29:57.005402+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=7004ms state=active
2015-08-17T19:29:58.003664+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=8002ms state=active
2015-08-17T19:29:59.001756+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=9000ms state=active
2015-08-17T19:30:00.004667+00:00 app[web.1]: source=rack-timeout id=fbd02348-8cb0-4ac9-84af-c04d428c1a9c wait=306ms timeout=10000ms service=10003ms state=timed_out

When I roll back to 1.0 the problem goes away and I no longer time out. I tried setting the transformer to JSXTransformer to rule out Babel, and I still see the same issue, so it looks like it's something else.

Any idea what could be causing this?

@rmosolgo
Member

Hmm, thanks for reporting this.

Are you using server rendering? If so, maybe it was caused by our changes there. We switched the default pool size from 10 to 1, but that shouldn't have affected total time, because the 10-member "pool" was actually running one at a time anyway (unless you're on JRuby).
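The pool behavior described above can be sketched with a small stand-in class. This is not the actual react-rails implementation, just an illustration of checkout semantics: with MRI's GVL, renders in the JS runtime effectively serialize, so a pool of 10 still runs one render at a time and the size mostly bounds memory use.

```ruby
# A minimal sketch of a fixed-size renderer pool built on a Queue.
# "renderer-N" strings stand in for real JS contexts.
require "thread"

class RendererPool
  def initialize(size)
    @pool = Queue.new
    size.times { |i| @pool << "renderer-#{i}" }
  end

  # Check a renderer out, yield it, and always return it to the pool.
  def with_renderer
    renderer = @pool.pop   # blocks when every renderer is checked out
    yield renderer
  ensure
    @pool << renderer if renderer
  end
end

pool = RendererPool.new(1)   # the new 1.1 default size
results = []
pool.with_renderer { |r| results << "rendered with #{r}" }
pool.with_renderer { |r| results << "rendered with #{r}" }
```

With a pool size of 1, both renders reuse the same renderer, one after the other, which is also what a 10-member pool degenerates to under the GVL.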

I've also seen a couple of bugs in the past where Sprockets was re-compiling things in production (a server-rendering bug, I think). Could it be that?

If you're not using server rendering then, gosh, I'm not sure :S Could you share your configs and one example that times out?

@jakecraige
Author

I am using server rendering and MRI.

We actually have an interesting setup where we call react_component around 5+ times per request. I'm unsure whether that could be related to the pooling change.

For a sanity check I configured it with:

    config.react.server_renderer_pool_size = 10
    config.react.jsx_transformer_class = React::JSX::JSXTransformer

to mimic the previous version, and the same thing happens.

I disabled pre-rendering and the page loads fine, so it's definitely related to that. (We were occasionally seeing timeouts even before the upgrade to 1.1.)

Since this is tightly integrated into a private application, I'm not sure I can pull out a failing test. But here's a stack trace of one request that timed out (on 1.0): https://app.honeybadger.io/fault/44154/42efc9f4960c0e6686bd67844466a881

That prerender_component method looks like:

  def prerender_component(component_name, props)
    react_component(component_name, deep_camelize_keys(props), { prerender: false })
  end

Any thoughts?

@rmosolgo
Member

I saw you have rack-mini-profiler in there. Are you able to confirm that the largest part of that 10s is spent inside the prerender? Basically I'm wondering whether it's:

  • Slowness on prerender / ExecJS (is it just running slow? Is it over-worked?)
  • Somehow the asset pipeline getting involved, so we're losing time because things are being compiled during the request/response cycle

Another idea for a workaround: are you using NodeJS for ExecJS? If so, you could try therubyracer; we found it to be the fastest: #290
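For reference, that workaround amounts to a one-line Gemfile change; ExecJS's runtime auto-detection prefers an embedded V8 when therubyracer is present, so no further configuration should be needed. A sketch:

```ruby
# Gemfile -- lets ExecJS use the embedded V8 (therubyracer) rather
# than shelling out to an external JavaScript runtime for each
# server render.
gem 'therubyracer', platforms: :ruby
```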

@jakecraige
Author

Unfortunately I'm unable to look at the mini-profiler results, since the request times out before the page renders.

Happy news, though: adding therubyracer made it stop timing out. :) I had seen that issue but hadn't actually added the gem to my Gemfile to try it out...

I'll close this for now since that seems to have resolved it. I'll reopen if it comes up again.

Thanks for the quick responses and help @rmosolgo !

@rmosolgo
Member

Sure thing, glad that fixed it for now! 😬
