H2o: Missing delay (sleep, Thread.sleep) in mruby?

Created on 2 May 2017  ·  6 comments  ·  Source: h2o/h2o

I wrote an mruby handler which uses http_request to keep my WordPress cache fresh, but I don't know of any way to make the handler sleep, so it fires too many requests at once and requests fail because PHP can't keep up.

Is there any way to sleep for a few seconds in h2o using mruby? (I tried installing a custom Thread mruby extension, but I can't seem to get the response data out of the thread.)

The error:
[lib/handler/fastcgi.c] in request:/index.php/tag/science:connection failed:failed to connect to host

The code:

if request_is_from_self and links_file_exist and req_is_get
    links = `php #{links_filepath}`

    for link in links.split(' ') do
        req = http_request(link)
        _, _, _ = req.join
    end
end
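
One way to pace the warm-up without a sleep call would be to fire the requests in small batches and join each batch before starting the next. A minimal sketch of that idea (not from the thread; the batch size is an arbitrary illustrative value):

all_links = links.split(' ')
batch_size = 5                                    # arbitrary; tune to what PHP can keep up with
i = 0
while i < all_links.size
  batch = all_links[i, batch_size]
  reqs = batch.map {|link| http_request(link) }   # fire the whole batch in parallel
  reqs.each {|req| req.join }                     # wait for this batch before firing the next one
  i += batch_size
end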
Labels: enhancement, mruby

All 6 comments

hi @taosx
I somewhat doubt that it's really PHP that can't keep up.
Are you sending out HTTP requests to the very same instance of h2o that then runs FastCGI?

Not sure if you did this intentionally to emulate pauses, but from what I read you join every request sequentially. The idea would be for the requests to go out in parallel, so you only start joining after all of them have been fired.

Can you try something like

links.split(' ').map{|l| http_request }.map{|r| r.join}

and post a more detailed error description?

Also, if it's always the same few links and they are more or less static, you could quickly implement a cheap in-memory cache.
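
A minimal sketch of such a cheap in-memory cache (not from the thread; the names CACHE and CACHE_TTL and the TTL value are illustrative, and Time.now assumes the mruby-time gem is available in the handler):

CACHE = {}
CACHE_TTL = 120                        # seconds; arbitrary illustrative value

# inside the handler, for each link:
entry = CACHE[link]
if entry.nil? || Time.now - entry[:fetched_at] > CACHE_TTL
  status, _, body = http_request(link).join
  buf = ""
  body.each {|chunk| buf << chunk }    # the returned body yields its chunks via #each
  entry = {fetched_at: Time.now, status: status, body: buf}
  CACHE[link] = entry
end

Note that if h2o runs one mruby VM per worker thread, each thread would keep its own copy of the cache.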

I believe your code is not passing an argument to http_request, so I modified it like this:
links.split(' ').map{|l| http_request(l) }.map{|r| r.join}
but it still seems to perform worse than the code I posted above.
I solved the problem by installing and enabling OPcache for PHP; the time it takes to do all the requests dropped from 27 seconds to 3.1 seconds without errors. With your code it goes to 4 seconds.

After I do some more tests I would like to open-source the whole setup for my WordPress site with h2o, which caches pages every 2 minutes, precompresses with both gzip and Brotli at maximum settings, and serves from the cache. Love h2o so far, thanks to all the contributors to h2o!!

In the future I would love to see more scripting features for h2o, like OpenResty but based on h2o :D

@taosx Ah sorry, I forgot the crucial part: .to_a

links.split(' ').map{|l| http_request }.to_a.map{|r| r.join}

Just out of curiosity, can you try that again? And how many requests do you make / how big is the links array?

@yannick I tried again; I had to modify http_request to pass it its argument (l).
The time dropped to 3.06 seconds :D and sometimes 2.9+ once the cache was warm.
The links array has 41 elements (AWS EC2 t2.micro).

Every time I add another post I get 1 link from the post itself + ~5 links from the tags added...
I think I'll have a small problem in the near future, so I'll drop mruby for now and go for a different approach later on.

Do you think I could use h2o with mruby as a reverse-proxy cache for WordPress? I believe it's possible.

It seems that you shell out to PHP via links = `php #{links_filepath}`. That is likely very slow; did you subtract that time?
For just h2o the t2.micro should be OK, but if you also run the PHP stuff on that machine they compete for the single CPU and performance drops further. So at least for testing I'd take something like a c4.large or c4.xlarge.

Yes, caching via mruby should be possible: either you do it in memory or you use Redis (which currently still needs a patch but should be merged soon, see https://github.com/h2o/h2o/pull/1152).
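
To see how much of the total time the shell-out accounts for, the two phases could be timed separately. A small sketch (variable names are illustrative; Time.now assumes the mruby-time gem is available):

t0 = Time.now
links = `php #{links_filepath}`        # the shell-out being measured
shellout_secs = Time.now - t0

t1 = Time.now
reqs = links.split(' ').map {|l| http_request(l) }
reqs.each {|r| r.join }
fetch_secs = Time.now - t1

The two numbers could then be written into the response body (or a log) to see which phase dominates.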

This is an interesting discussion!

Aside from how the issue should be resolved (e.g. by implementing a cache using mruby), I believe that there is no reason why we should not provide a sleep function in our mruby handler.
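
Purely as an illustration of what such a function would enable (no sleep exists in the mruby handler at the time of this thread, so the call below is hypothetical), pacing the warm-up requests might look like:

links.split(' ').each do |link|
  http_request(link).join
  sleep 0.5        # hypothetical non-blocking sleep provided by the mruby handler
end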
