Use therubyracer or node.js ExecJS backend in production? #222
Comments
I removed it after reading this note: https://devcenter.heroku.com/articles/rails-asset-pipeline#therubyracer
Thanks. Have you noticed any performance issues on Heroku?
Sorry, I don't use Heroku and I don't use prerendering! I just wanted to take that warning into consideration. This guy might have more to say: #156. Sorry I don't have much!
@tomtaylor I've been using react-rails with pre-rendering on Heroku for a few months now. The sad part is I've been having memory-related performance issues with Ruby 2.1.2 for a while, so I didn't have a clean slate to compare against after adding therubyracer, but I don't think the memory usage changed dramatically (though with it being bad to begin with, it's hard to say). One thing I can say for sure is that using the "node.js" backend (which I did at first, before going live) is much, much slower. After I looked into it, it made sense to me, but I might be wrong, so take this with a grain of salt: react-rails works by keeping a pool of ExecJS JavaScript contexts to use for rendering. These are (usually) warmed up ahead of time by evaluating the components bundle. Again, this might be because of something wrong with my setup, but I remember following the code to the end back then. So with the node backend there's no warm in-process context to reuse, and each render pays that evaluation cost again.
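The warm-pool idea described above can be sketched in plain Ruby. This is a hypothetical illustration, not react-rails' actual implementation: `RendererPool` and `build_context` are made-up names, and the lambda stands in for an expensive `ExecJS.compile` of the components bundle.

```ruby
# Minimal sketch of a warm renderer pool, assuming each "context" is
# expensive to create (like evaluating a large JS bundle) but cheap to
# reuse. Names are hypothetical, not react-rails' real API.
class RendererPool
  def initialize(size)
    @pool = SizedQueue.new(size)
    size.times { @pool << build_context } # warm up once, at boot
  end

  def render(component)
    ctx = @pool.pop # check out a warm context; blocks if all are busy
    begin
      ctx.call(component)
    ensure
      @pool << ctx # always return the context to the pool
    end
  end

  private

  def build_context
    # Stand-in for something like ExecJS.compile(components_bundle_source)
    ->(component) { "<div>rendered #{component}</div>" }
  end
end
```

The point of the sketch: the expensive step happens once per context at boot, which only pays off if the runtime can keep a context alive in memory between renders — exactly what an external node process can't do.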
Wow, so maybe I was wrong to remove that suggestion from the README?
@rmosolgo Like I said, I might have been wrong about this, but that's my understanding right now, and the result of the last time I investigated it. If anybody knows whether ExecJS can keep a warm node context in memory, then I'm all ears 🍰
Thanks @bogdan-dumitru, that's really useful to know. The high memory usage we're seeing with therubyracer isn't causing particular performance issues; I was just surprised at it. We're using Ruby 2.2.1. (We'd like to run more processes in each dyno, if possible, but each process seems to stabilise around ~300MB at the moment, so we can't run more than 1 in a 1X dyno (512MB max). Switching to 2X dynos might help, and might give us enough room to run 3 worker processes in each, saving overall.)
@tomtaylor you might also wanna fiddle with the number of renderers you want react-rails to keep in the pool, 'cause that should have a linear impact on the extra memory. So if you're running a single process with X threads you don't need more than X renderers, and I think the default is 10.
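For reference, the pool is tunable from the Rails config. The option names below are as of react-rails 1.x; earlier releases used different names, so check the README for your version:

```ruby
# config/application.rb (react-rails 1.x option names; verify against
# your installed version's README)
config.react.server_renderer_pool_size = 5  # e.g. match your Puma thread count
config.react.server_renderer_timeout   = 20 # seconds to wait for a free renderer
```

Dropping the pool size from the default down to your actual thread count is a cheap way to trim per-process memory, since each renderer holds its own evaluated copy of the JS bundle.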
@bogdan-dumitru yeah, we're using 5 threads in Puma, and …
To add my anecdotal evidence to the conversation, we just deployed a change to our production (Heroku) app adding prerendering for all of our …
Check out #290 for some comparisons of ExecJS backends & discussion too |
Sure, this is covered over there, but I can definitely confirm that using therubyracer cleared the performance problem right up. It doesn't look to be using much more memory, either.
Seems like this has been settled :)
The README used to mention that a "high performance in process JS VM like therubyracer" should be used with react-rails, but I notice this has been removed in recent revisions. I'm currently running an app on Heroku and the memory usage is pretty high, something I attribute to therubyracer.
Does anyone have any experience of using react-rails in production with the node.js ExecJS backend? Is there a reason why this note was removed from the README? Thanks!
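For anyone wanting to compare the two backends, the switch is a one-line Gemfile change. ExecJS picks the best available runtime automatically, and the `EXECJS_RUNTIME` environment variable can force a specific one (e.g. `EXECJS_RUNTIME=Node`):

```ruby
# Gemfile
gem 'therubyracer', platforms: :ruby # embedded V8; ExecJS will prefer it when present
```

Removing the gem (with node installed on the dyno) falls back to the external node runtime, which is what this issue is asking about.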