Socket.io: AWS EC2 Behind ELB Always Prints Error Unexpected response code: 400

Created on 30 Oct 2014 · 66 Comments · Source: socketio/socket.io

Hey guys -

Can't seem to find out a solution for this, but when I run my app behind a load balancer, I get this error on the client:

WebSocket connection to 'wss://fakedomain.com/socket.io/?EIO=3&transport=websocket&sid=QH8VmXbiEcp3ZyiLAAAD' failed: Error during WebSocket handshake: Unexpected response code: 400 

I understand the error, since it's trying to talk to the load balancer and not the EC2 instance (I'm not great with AWS so feel free to offer help on this one!), but what I don't understand is how to make the error not show up!

I'd love to fix the root cause, but I'm guessing it involves a separate dedicated socket.io server to handle all the real-time stuff, which I don't have time for at the moment. Could someone please run me through suppressing this error?

I'm assuming it's falling back to polling, which seems to work just fine (I have the socket connection connected and it fires) but I don't want to launch with a red error in my console.
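On the suppression question specifically, one workaround sketch (not a fix; this assumes the socket.io 1.x client's `transports` option) is to disable the websocket transport so the failed upgrade is never attempted:

```javascript
// Restrict the client to long polling so the websocket upgrade attempt,
// and therefore the failed-handshake console error, never happens.
var socket = io('https://fakedomain.com', {
  transports: ['polling']
});
```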

Thanks in advance for any advice you might have!

Most helpful comment

I assume you're not using Elastic Beanstalk (the instructions there would be much easier).

Go to EC2->Network & Security->Load Balancers

Select your load balancer and go to Listeners. Ensure that both the Load Balancer protocol and the Instance Protocol are set to TCP for port 80 and SSL for port 443 rather than HTTP and HTTPS.

All 66 comments

Also, I've installed via bower if it matters:

"socket.io-client": "~1.1.0",

Do you have sticky connections enabled on your server?

I assume you're not using Elastic Beanstalk (the instructions there would be much easier).

Go to EC2->Network & Security->Load Balancers

Select your load balancer and go to Listeners. Ensure that both the Load Balancer protocol and the Instance Protocol are set to TCP for port 80 and SSL for port 443 rather than HTTP and HTTPS.

Oh man. This is solid advice I haven't seen elsewhere. I'll try in the AM and report back. Thank you!


So maybe I'm mistaken, but our site is running on HTTPS (with our SSL cert). Changing the listeners to TCP/SSL breaks a lot of stuff down the line. I was checking X-Forwarded-Proto in my app to make sure the request was HTTPS and, if not, forcing a redirect. Apparently there is no X-Forwarded-Proto header with SSL/TCP, and Express is reporting req.secure as false (even though I typed https).

Suggestion?

Not sure, sockets use TCP/SSL protocol not HTTP so you will need to change it to TCP/SSL in order to make web sockets work. As far as how to fix the rest of your app, I have no idea.

Thanks for the help. I'll have to make a new environment and mess around a bit. There's a lot of cogs in the machine!

Is it possible to have websockets run on their own ports? If so, could you link some documentation? (I'm pretty sure there's not but I could be wrong)

They can work on whatever port you want, just specify when you connect the port like so:

io('https://myserver:123');

I'm using Elastic Beanstalk and facing the same issue; I don't have SSL installed, just connecting through regular HTTP.

Hi,
I have the same problem, though I'm using SockJS.
My application is a Java Spring 4 application; it works on the development machine and
gets the same error on AWS.
It looks like the Upgrade header is being dropped by someone; I can see it on the client.

Couldn't find a solution yet...

I am in the same boat, on a live chat with an AWS engineer, and we are both stumped. I did confirm that the ELB needs to be changed from HTTP/HTTPS to TCP (or SSL, i.e. secure TCP), but I still get frequent 400 errors. I never had this problem until moving to an Elastic Beanstalk load-balanced cluster.

Please try commenting out any app.use middleware (header/CORS related) if any, or app.get("/", ...); not sure which one worked for me, but please post your results.

Update: I have confirmed it is the ELB, whether it is configured for HTTP or HTTPS traffic, socket.io fails. When I go directly to one of the EC2s everything works great.

Attached is what I am seeing when going through the ELB.
snipimage

I seem to have solved my problem by adding a Route 53 instance in front of the load balancer.
I now have a working websocket with secure connections in the AWS environment.

Try adding Route 53; if this doesn't solve your problem I'll be happy to share my configuration.

@yoav200 think you could share your whole configuration for what made things work for you? I.e., ports on EBS security + protocol (http/tcp), any .ebextensions you needed, etc. I tried a ton of different things to hack my nginx configs to keep the upgrade headers and send them to node, but nothing worked for me. FWIW, a wiki page would be really, really helpful for this. The only thing that did work was going directly to my EC2 instance.

I have set up a Route 53 instance that manages my domain.
I have configured a Hosted Zone with a record that points my domain directly to the load balancer.

The load balancer is configured to forward requests to my Tomcat instances, as shown here:
lb-listeners

the security group for ELB is pretty standard:
lb-security-group

Are you using nginx? Did you have to do anything special configuration-wise?

I am very close to having this working, spent almost 8 hours with an AWS support engineer over the last two days. Had to rip nginx out of the mix but am very close, sockets are working fine through the ELB over http, tomorrow I think I can handle the https stuff server-side instead of at the ELB-level.

I will report back with my findings.

I don't use nginx; I manage everything at Amazon, and I have moved my domain to Amazon as well.
However, I migrated my application from a hosted cloud environment that uses AWS for their instances but managed the load with nginx, and I had no problem there.

When I asked them how it works for them, their response was:

we don't use Amazon Elastic Load Balancer, we use our own NGinx load balancing layer.
A solution could be to use an Nginx or HAProxy server instead of the ELB and to use an autoscaling group with 1 node of the haproxy/nginx to ensure the high availability of the component.

Yeah, I was told that HAProxy was another option but I am quite certain I will have it working with the following services by the end of the day:

  • Elastic Beanstalk
  • ELB
  • EC2 Autoscaling group (created by EB)
  • RDS
  • ElastiCache

The node app is working with all these services currently but, like I said, only over HTTP, so there's not much left to do to get production ready. I'll report back with my findings/configs later.

@niczak would you be able to share a bit of how you have this working?

Right now I'm attempting to connect to Socket.IO on my Node instance on an EC2 along with using ElastiCache's Redis.

I opened up the EC2 instance to the public and gave it a public IP address so that I could connect to the socket directly (http://coding-ceo.ghost.io/how-to-run-socket-io-behind-elb-on-aws/). On my ELB I'm using sticky sessions (hoping this would stay connected); however, my socket still won't connect and gives me the lovely failed: Connection closed before receiving a handshake response.

Polling does work fine though; I'm just trying to get websockets, and the entire path, working.

Would love to see a use case here so that I can properly configure my instance (even if it's only on HTTP instead of HTTP and HTTPS), as I've been head-smashing for quite a while now.

Thank you so much for your help!

@petrogad

Hey Pete!

My ELB listeners are as simple as can be (see image) and by default on the EC2 port 80 is forwarded to 8080 using iptables. With that said, I can hit my server on HTTPS, then my node app listens on 8080 and serves up the SSL cert itself.

elb

Node code for handling certs is as follows:

    var fs = require('fs');
    var https = require('https');

    var files = ["COMODOHigh-AssuranceSecureServerCA.crt", "AddTrustExternalCARoot.crt"];
    // read the CA chain files
    var ca = files.map(function (file) {
        return fs.readFileSync('/etc/ssl/tcerts/' + file);
    });

    var httpsOptions = {
        ca: ca,
        key: fs.readFileSync('/etc/ssl/tcerts/multidomain.key', 'utf8'),
        cert: fs.readFileSync('/etc/ssl/tcerts/14432575.crt', 'utf8')
    };

    this.serverPort = 8080;
    var httpsServer = https.createServer(httpsOptions, apiInstance);
    httpsServer.listen(this.serverPort);
    startSockets(httpsServer);

In order for web sockets to work (without falling back to long polling) I _had_ to disable Cross-Zone Load Balancing. Unfortunately AWS does not claim to support web sockets through their ELB so these types of configuration "hacks" are necessary. Hopefully this will be enough to get you up and running!

Thanks @niczak !

Everything is working perfectly; I had HTTP traffic going through, which was throwing things off. Also got Redis across multiple instances working; the last thing is just working on reCluster and allowing a TCP route to stick with a particular worker.

Have you had much success with this? I'm using express, so sticky-session hasn't been working great. Have you had any luck with that?

Thanks again and hopefully once this is all complete can get a nice blog post out there for others struggling with a similar item.

Thanks @niczak !!

I finally got mine working as well. However, in addition to configuring
1) the load balancer to use the TCP protocol, and
2) the load balancer to disable cross-zone load balancing,

I also needed to set
3) the proxy server to "none" in the Software Configuration section.

screen shot 2015-01-09 at 10 28 56 am

That's great news @aidanbon and @petrogad, huzzah for sockets on AWS! :+1:

Thanks a lot! It solved my issue with Elastic Beanstalk. It makes sense to remove the nginx proxy.

An update from my original post:

I have changed HTTPS to SSL (443 ---> my custom port) and left HTTP (80 ---> my custom port).

I had code that was checking, with Express, !req.secure && req.get('X-Forwarded-Proto') !== 'https' and redirecting you to HTTPS if you came in on port 80...

HOWEVER

Changing to SSL makes X-Forwarded-Proto come back undefined, breaking this check. (Also, req.secure seems like it may be deprecated.)

So now:

    if (req.get('X-Forwarded-Proto') === 'http') {
        res.redirect('https://' + req.get('Host') + req.url);
    }

seems to work great, and I don't get the 400 error in the console any more.
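That check can also be factored into a small pure helper; this is an illustrative sketch (the function name and shape are mine, not from the thread), using the lowercase header keys Node exposes:

```javascript
// Decide whether a request arriving via the ELB should be redirected to
// HTTPS. With TCP/SSL listeners the ELB sets no X-Forwarded-Proto header,
// so only an explicit 'http' value triggers the redirect.
function httpsRedirectTarget(headers, url) {
  if (headers['x-forwarded-proto'] === 'http') {
    return 'https://' + headers['host'] + url;
  }
  return null; // already secure, or protocol unknown: no redirect
}

console.log(httpsRedirectTarget({ 'x-forwarded-proto': 'http', host: 'fakedomain.com' }, '/login'));
// → https://fakedomain.com/login
```

Keeping the decision separate from Express makes it easy to exercise both the HTTP and the no-header (SSL/TCP) cases.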

Hullo - I got this working using Node, NGINX, SSL, and an ELB on Elastic Beanstalk by doing the following:

Create a container command in .ebextensions to modify the nginx configuration script to proxy websockets:

container_commands:
    00proxy:
        command: sed -i 's/proxy_http_version.*/proxy_http_version\ 1.1\;\n\ \ \ \ \ \ \ \ proxy_set_header\ \ \ \ \ \ \ \ Upgrade\ \ \ \ \ \ \ \ \ \$http_upgrade\;\n\ \ \ \ \ \ \ \ proxy_set_header\ \ \ \ \ \ \ \ Connection\ \ \ \ \ \ \"upgrade\"\;/g' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
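For readability, the sed one-liner above is meant to leave the generated proxy config containing the following directives (a hedged reconstruction of the intended result, not a verbatim dump of the file):

```nginx
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```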

Install your SSL cert on the load balancer using the Elastic Beanstalk web console.

Then go to the EC2 web console and change the load balancer to listen on SSL 443 -> TCP 80. This did not work when I changed it in the EB web console; I had to do it directly in EC2.

That should get everything working. I haven't had to do any redirects and I managed to get NGINX working too.

@ChrisEdson is there anything else to include in the package to use this solution?
I gave it a try but the deployment failed due to errors. The log shows the following line:

/etc/init.conf: Unable to load configuration: No such file or directory

Can you post your code?


The app code is an extremely basic socket.io implementation, almost out-of-the-box example usage.
The only thing I added is the .config file in .ebextensions, in which I copied the code snippet you quoted earlier.
So I don't have much to post :)

No worries, it's just difficult to debug without the code or logs. Can you post the logs? Probably not appropriate to stick them up on here; maybe put them in a pastebin.


Hmm, I published my code via grunt instead of packaging it manually, and it works like a charm. Automation should be mandatory ;)

Glad to hear it!


Saved my day!

We saw this basic issue, too. Basically, because SocketIO sends two requests to establish the websocket connection, if those calls get federated across instances, the session ID is not recognized and the handshake fails, leading to SocketIO falling back to long polling. This is true whether you're using ws or wss; it assumes you've set up load balancing at layer 4 (TCP), as I could not get layer-7 (HTTP) load balancing to work with websockets.

The fix would be to enable sticky sessions, so that once a call goes across to an instance, the load balancer continues to send those requests to the same instance; however, if you're using AWS' ELBs, they do not allow you to use sticky sessions for layer-4 load balancing.

What we ended up doing was using raw websockets, rather than SocketIO, as then there is no session data that needed to be persisted across calls (just one call, the handshake, takes place, and that works).

A fix on AWS' part would be to allow sticky sessions to be used in some manner for layer-4 load balancing.

A fix on SocketIO's part would be to make it so the websocket handshake does not require the context of the prior sessionID (I don't know the purpose or lifecycle around this; I am assuming it exists both to help long polling, and to more quickly reconnect in the event of connection termination; if the client determined a connection had died it could generate a new session and reconnect successfully, rather than have the backend refuse the handshake because it believed a connection already existed).

A potential fix would be to use Redis to persist session state across all instances. I have not tested this.

A workaround, if your load allows it, is to also ensure only a single node instance per availability zone, and disable cross-zone load balancing. That way, all calls will default to a single node, avoiding this issue.
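For what it's worth, here is a minimal sketch of that untested Redis idea, assuming the `socket.io-redis` adapter and a shared ElastiCache/Redis endpoint (the hostname is a placeholder). The adapter is documented for sharing broadcasts across nodes; whether it also helps the handshake stickiness problem is exactly what remains untested:

```javascript
var http = require('http');
var socketio = require('socket.io');
var redisAdapter = require('socket.io-redis');

var server = http.createServer();
var io = socketio(server);

// Every instance behind the load balancer points at the same Redis
// endpoint, so pub/sub state is shared across nodes.
io.adapter(redisAdapter({ host: 'my-redis.example.com', port: 6379 }));

server.listen(8080);
```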

@aidanbon I could not find the Proxy server "none" setting in the Software Configuration section?

I'm simply using an Ubuntu server and MeteorJS with mup deployment. Could you please send me a link?

Cheers

@sahanDissanayake I found my Proxy Server config option via:

  1. Click "Configuration" at the Environment of interest
  2. Click "Software Configuration" from within the Configuration page
  3. The "Proxy server" drop-down as shown in my previous screenshot.

Note: My environment was provisioned almost a year ago; I'm not sure whether the AWS panel has been updated for newer environments. However, I can still see this drop-down today in my environment.

@aidanbon I don't use any other services besides EC2, so are your settings not relevant to me? Or do I need to start using this proxy server?

So the issue I have is: my web app's websockets work on port 8080, but NOT on port 80.

This is the issue.

@lostcolony Thank you for the summary. I have arrived at the same conclusion. Have you read this blog post: https://medium.com/@Philmod/load-balancing-websockets-on-ec2-1da94584a5e9#.hd2ectz8t? He's essentially saying one could pass the layer-4 (TCP) traffic through to an HAProxy instance, which then takes off the PROXY header and routes the traffic to the right instance using sticky sessions. However, it is not apparent to me how the two HAProxy instances get their "backend" server list configured, where it runs, and where the actual socket.io server runs. Can you make any sense of it?

EDIT: I think it clicked now. He knows the list because he's got a known set of servers and doesn't care about auto-scaling...

The server list would be configured on each HAProxy box. I'm not sure how to make that work with auto-scaling EC2 instances, unless you wrote some additional infrastructure to update that list when a new server came online. They're running socket.io on the same server as their web app, so connections come in through the ELB, are federated to an HAProxy instance, and the HAProxy instance(s) ensure that a given IP's connections always go to the same web/app server, which is where socket.io was set up.

@lostcolony OK, I saw there was an interesting project regarding auto-scaling on Amazon using HAProxy; at least useful for reference: https://github.com/markcaudill/haproxy-autoscale. What I don't get is why use an ELB in front of HAProxy? There must be only one HAProxy instance; otherwise, how would you guarantee that the ELB sends traffic to the right HAProxy instance? Unless they (the HAProxy instances) shared a common sticky table somehow using ElastiCache or Redis or something along those lines? ... I do realize we are going _way_ off topic.

If you have the HAProxies configured to route connections based on IP, you could guarantee a connection ends up at the same instance. That is, if the HAProxies all had the same rule saying that IP A.B.C.D is directed to instance one, then regardless of which HAProxy the ELB hit, the connection would end up at the same place. That's what is happening in the link you sent: they're balancing by source, meaning the HAProxy instance hashes the IP and uses that to determine which instance it goes to. Multiple HAProxies, provided they all know about the same instances, would thus send traffic from one location to the same backend instance.


@lostcolony I understand that the hash is derived deterministically for a given IP; however, there must be a mapping of "hash -> instance". In your case you are saying there is a rule that "A.B.C.D is directed to instance one", but would you not want HAProxy to determine where to redirect by itself, given a list of available servers? After all, "A.B.C.D" is just some client's IP and not known up front.

The rule for mapping the hash to an instance is the same across all HAProxy instances. That is, A.B.C.D will result in the same hash no matter the HAProxy instance it hits, and that hash will map to the same server, no matter the HAProxy instance in use, provided that every HAProxy knows about the same instances (and possibly in the same order/with the same names). The actual IPs do not have to be known up front, because they're immaterial. Any IP will hash to the same server regardless of the HAProxy it passes through.

For a completely faked example (HAProxy uses something less predictable, I'm sure), imagine we had three servers, configured in order as server0, server1, and server2 (the actual details don't matter, just that there is a clear ordering; implicit is fine). Every HAProxy instance has the same servers, in the same order. They also have the same rule for how to deal with an incoming IP address. For this trite example, it's: add up all four parts of the address as integers (easily extensible to support IPv6 by adding them all up as hex into base 10), divide by 3, and take the remainder. So IP 123.12.61.24 = (123+12+61+24) % 3, or 1. So it gets routed to server1, no matter which HAProxy instance the connection comes into.

Now, if server1 goes offline, the algorithm changes across all HAProxy instances to: add all the parts, modulus 2.

And this generally works -unless- you get a netsplit (or configure the HAProxy instances to have different server lists), where one HAProxy sees 3 instances and another sees 2. If that happens, you can't be guaranteed to reach the same backend. And indeed, that can prove to be a problem. But that's the solution being described in the link.
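The trite example above can be sketched in a few lines of JavaScript (a toy illustration only; HAProxy's real `balance source` hashing differs):

```javascript
// Toy version of the routing rule described above: any proxy that shares
// the same ordered server list maps a given client IP to the same backend.
function routeForIp(ip, servers) {
  // sum the four octets as integers, then take the remainder mod server count
  var sum = ip.split('.').reduce(function (acc, octet) {
    return acc + parseInt(octet, 10);
  }, 0);
  return servers[sum % servers.length];
}

var servers = ['server0', 'server1', 'server2'];
console.log(routeForIp('123.12.61.24', servers)); // → server1, matching the example
```

If server1 drops out, every proxy's `servers` array shrinks to two entries, so the modulus changes everywhere at once, which is exactly the netsplit hazard described above.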


I found that on Elastic Beanstalk, even though I had set up the load balancer to use TCP and SSL via the web interface, when I checked the load balancer config directly it was still using HTTPS for the secure listener. So you should probably check in the EC2 > Load Balancers section that the ports are set up as they should be:

screen shot 2016-03-08 at 16 08 34

Once that is sorted, add the following to .ebextensions, and all works well for me :)

container_commands:
  01_nginx_static:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
              proxy_set_header Upgrade $http_upgrade;\
              proxy_set_header Connection "upgrade";\
          ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf

This will only work 100% when you have a single instance, however, because sticky session support doesn't work over TCP.

@lostcolony, @williamcoates' post just now reminded me to thank you for the explanation. Can't believe I forgot to do that. Your explanation was tremendously helpful, so thank you for taking the time.

So I had the same problem, and it turned out I had set up the ECB to use HTTPS.
I changed HTTPS to SSL and I don't get the message anymore.
So: just don't use HTTP/HTTPS, use TCP/SSL.

As of Aug 2016 you can use Amazon's Application Load Balancer (ALB) instead of the "classic" ELB. Worked "out of the box" for me with listeners on HTTP 80 and HTTPS 443.

@trans1t Sweet man, thanks for the info! I will have to check that out.

@niczak - I also switched to the ALB for socket traffic stickiness. However, I first struggled with the ALB sticky sessions configuration (there wasn't much documentation on this subject, at least a few months ago). I finally figured it out. The key thing is that "stickiness is defined at a target group level", so make sure you create a target group to serve your traffic and then add stickiness to it.

@aidanbon That is fantastic information, thank you so much for sharing!

For anyone who is still having a problem with the AWS Application Load Balancer, try updating your Security Policy. I ran into the same problem even though the exact same setup was working perfectly fine for another website, but I noticed the security policy was about a year behind. Updating it to a newer one seems to have solved the problem.

@michaelmitchell did you change anything, or just update it without modification?

I didn't change anything in my application; the only additional step I took was to remove and re-add the HTTPS listener on the ALB, as changing to the new policy alone didn't appear to work.

I did learn, however, that seeing the error even after updating the security policy was likely my browser session holding onto the old security policy, so I'm not sure if removing/re-adding the listener was really needed. Opening incognito it worked fine, and after closing all my Chrome sessions and reopening, it worked fine and I haven't seen it since.

Good to know, thanks!


It seems like this thread has become the place for figuring out AWS/socket.io issues, so here's mine.

My setup is that the client makes an HTTPS request with data for a socket.io connection.
The HTTPS request should respond with the cookies needed for sticky sessions. The issue is that I'm unable to pass a cookie header in the socket.io connection.

I'm using an ALB with these listeners:
image

Which points to a target group with stickiness configured like this:
image

I hit the HTTPS endpoint first with:

    fetch(`https://${url}/hub/api/spinup/${uuid}`, {
        method: 'GET',
        headers: headers,
    })

then initialize the socket with

    connection = io(`wss://${url}:443/`, {
        path: `/user/${uuid}/spawn`,
    })

The instance the HTTPS request hits _must_ be the instance the socket connection hits.

Once a socket is established (if it hits the right instance by dumb luck) it stays there and things work well, the issue is getting the websocket to hit the same endpoint as the HTTPS request.

My first instinct is to use

    connection = io(`wss://${url}:443/`, {
        path: `/user/${uuid}/spawn`,
        extraHeaders: {
            cookie: cookieValueFromFetchResponse
        }
    })

This works in Node, but in the browser cookie is a forbidden header on XMLHttpRequests, so I'm unable to send it along.

Has anyone done something similar?

I haven't done similar, but while ALBs still don't have header-based routing, they -do- have host-based routing. This is extremely hackish, but you could basically make it so that each instance behind the ALB has an explicit path to it that the instance knows about. When the initial HTTP request goes out, part of the response includes the path, and the connection for the websocket includes that path. I.e., the HTTP request hits instance '5' (based on whatever criteria; generated UUIDs would be better), returns '5' as part of the response, and the websocket is opened to url/5. The ALB has a rule that url/5 gets routed to instance 5.

If you're using autoscaling (and your instances are cattle not pets), you'll need to have your code running on the instance modify the ALB settings to route the appropriate domain to it.

Read more here: https://aws.amazon.com/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/

I will also say, however, that you probably want to just avoid the pattern you're using if at all possible. The data coming in the HTTP request should be stored in a way that is accessible to every instance (DB or similar), or it should be sent over the websocket connection once it's established. As it is, you could still have an instance die in between the HTTP request and the websocket opening up, and you could run into the same issue (albeit far less often)

@lostcolony thanks for the advice; that's definitely worth trying.

The HTTP request is what triggers the websocket server to be spun up (it's actually in a Docker container, but that's abstracted away). The two requests need to be routed to the same instance so the websocket server exists on the instance that's being connected to.

With this line, it helps to put the Jupyter server behind the ELB:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp

in case somebody comes to this ticket with Kubernetes on AWS.

I just disabled HTTP/2 and then it was OK for me. You don't have to do anything else with the ELB.

Screen Shot 2019-12-13 at 16 12 49


Hi @yoav200,

SAVE ME.
I am stuck on the same problem, please help.
I am using Spring Boot 2.x and SockJS for websockets.
I have created a Tomcat application deployed on AWS,
Elastic Beanstalk with a classic load balancer.

Now the problem is:
the iOS developer is unable to connect; it reaches my backend but fails during the handshake.
Below are the logs I get with both the CLB and the ALB:


    2019-12-13 15:21:22.662 DEBUG 27543 --- [nio-8080-exec-2] o.s.w.s.DispatcherServlet : GET "/wtchat/websocket", parameters={}
    2019-12-13 15:21:22.701 DEBUG 27543 --- [nio-8080-exec-2] o.s.w.s.s.s.WebSocketHandlerMapping : Mapped to org.springframework.web.socket.sockjs.support.SockJsHttpRequestHandler@30c6a17
    2019-12-13 15:21:22.714 DEBUG 27543 --- [nio-8080-exec-2] o.s.w.s.s.t.h.DefaultSockJsService : Processing transport request: GET http://mydomain.com/wtchat/websocket
    2019-12-13 15:21:22.716 ERROR 27543 --- [nio-8080-exec-2] c.w.c.MyHandshakeHandler : Handshake failed due to invalid Upgrade header: null

    2019-12-13 15:21:22.717 DEBUG 27543 --- [nio-8080-exec-2] o.s.w.s.DispatcherServlet : Completed 400 BAD_REQUEST

I have followed what you have said
image
image

I created a record set in Route 53 which points directly to the load balancer.

After doing this I am still getting the same error as above.

What else do I need to do?
