Nginx 502 Bad Gateway auth0 nextjs

I have a nextjs app that I want to move from vercel to convox.

Unfortunately, the Auth0 login doesn’t work: I get a “502 Bad Gateway” error.

Looking through the nextjs-auth0 issues and other non-Next.js-specific reports, I think this is caused by large headers.

Using Lens, I’ve connected to my cluster and set the nginx.ingress.kubernetes.io/proxy-buffer-size: "128k" annotation on my ingress, which has resolved the issue for me.
I’m not sure whether this change will persist if I upgrade the rack, though.
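
For reference, here’s where the annotation sits in the ingress metadata (a sketch of what I set via Lens):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"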

Is there a recommended, permanent solution?

Hello Rhys,

I spoke with the dev team regarding this issue and they will be looking at it during the next planning session.

Unfortunately, right now there is no permanent solution. We would suggest NOT updating the rack, as Terraform will override the changes.

Regards,

Nick
Support Engineer
Convox

We’re hitting this issue as well. The default size makes it practically impossible to use ANY SAML-based SSO.

The ingress controller needs to have this setting applied by default if possible.
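
If Convox wanted to make this the default, ingress-nginx supports a global proxy-buffer-size key in the controller’s ConfigMap, which would apply to every ingress. A sketch (the ConfigMap name and namespace here are assumptions; check your rack’s system namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx            # assumed name
  namespace: <rack name>-system
data:
  proxy-buffer-size: "128k"      # global default for all proxied services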

In addition to this customization, we’re also having to add a POD_IP variable to our deployments in the container env:

        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP

in order to look up the pod’s IP address from inside the pod. It would be great to get this added to the standard config as well.
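
For anyone wondering where that snippet goes, here’s the surrounding Deployment context (the container name and image are just placeholders):

spec:
  template:
    spec:
      containers:
        - name: web                  # placeholder container name
          image: example/app:latest  # placeholder image
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:            # Downward API: injects the pod's own IP
                  apiVersion: v1
                  fieldPath: status.podIP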

I also really need this! I accidentally took my servers offline today with 502 errors after I deployed an update to our CSP headers. I had forgotten that large headers are an issue. (And I really need to add some kind of tests for this in the meantime.)

Thanks for the tip! For now I’m just going to disable the CSP headers, but I may end up using this workaround. It would be ideal if this were a Convox rack param.

EDIT: In case anyone else is deploying a Ruby on Rails application, here’s a Rack middleware you can use to crash the server if your request or response header data ever goes over 4k: Rack middleware to enforce header length · GitHub
(Only use this for test and development environments!) Make sure you also have some Capybara tests that run in a browser, and add integration tests for all the important flows. This will fail your CI builds in case you ever end up adding too many headers, or your session cookie gets too big, etc.
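
If you just want a quick manual check as well, you can count the response header bytes with curl (the URL is a placeholder):

curl -s -o /dev/null -D - https://your-app.example.com/ | wc -c

This dumps only the response headers and counts the bytes, so you can compare against Nginx’s proxy_buffer_size limit.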

EDIT 2: Arrrrrgh, this explains the reports of random 502 errors I sometimes hear about from my customers:

Failures:
  1) Signature Request for tenant agreement with 5 signers
     Failure/Error: raise HeadersTooLargeError, "Request headers too large: #{request_headers_string.length} > #{NGINX_HEADER_LENGTH_LIMIT}"
     Rack::HeaderLengthEnforcer::HeadersTooLargeError:
       Request headers too large: 4042 > 4000
     # ./app/lib/rack/header_length_enforcer.rb:25:in `call'

I figured something out. In case anyone else finds this and needs to increase the limit, here’s a quick step-by-step guide. (And for my own future reference.)

A few things to know:

  • The Kubernetes resource type for ingresses is ing (short for ingress)
  • You can run kubectl edit ing to edit the config and annotations in your text editor. I use VS Code for this, by running export KUBE_EDITOR="code --wait"
  • You can show a list of namespaces with kubectl get namespaces. We will be using one called <rack name>-system. In this example, my rack name is europe-v3.

Steps

  1. Switch to your Convox rack:

convox switch europe-v3

  2. Fetch kubeconfig:

convox rack kubeconfig > /tmp/kubeconfig

  3. Configure kubectl to use the settings in /tmp/kubeconfig:

export KUBECONFIG=/tmp/kubeconfig
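
To sanity-check that kubectl is now pointed at the right cluster, you can print the current context:

kubectl config current-context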

  4. Show a list of ingresses:

kubectl get ing -A

NAMESPACE             NAME         CLASS    HOSTS                                                                                           ADDRESS                                                                            PORTS     AGE
europe-v3-docspring   web          <none>   web.docspring.*******.convox.cloud,eu.docspring.com,app-eu.docspring.com + 2 more...   *******.elb.eu-central-1.amazonaws.com   80, 443   156d
europe-v3-system      api          <none>   api.*******.convox.cloud                                                               *******.elb.eu-central-1.amazonaws.com   80, 443   156d
europe-v3-system      kubernetes   <none>   api.*******.convox.cloud                                                               *******.elb.eu-central-1.amazonaws.com   80, 443   156d
  5. Show all the details about your web ingress:

kubectl describe ing web -n europe-v3-docspring

Here you can see all the Annotations that are used to configure Nginx:

Name:             web
Labels:           app=docspring
                  atom=***************
                  provider=k8s
                  rack=europe-v3
                  release=***************
                  service=web
                  system=convox
                  type=service
Namespace:        europe-v3-docspring
Address:          ***************.elb.eu-central-1.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
TLS:
  cert-web terminates web.docspring.***************.convox.cloud
  cert-web-domains terminates eu.docspring.com,app-eu.docspring.com,api-eu.docspring.com
Rules:
  Host                                         Path  Backends
  ----                                         ----  --------
  web.docspring.***************.convox.cloud
                                                  web:4001 (*.*.*.*:4001,*.*.*.*:4001,*.*.*.*:4001 + 1 more...)
  eu.docspring.com
                                                  web:4001 (*.*.*.*:4001,*.*.*.*:4001,*.*.*.*:4001 + 1 more...)
  app-eu.docspring.com
                                                  web:4001 (*.*.*.*:4001,*.*.*.*:4001,*.*.*.*:4001 + 1 more...)
  api-eu.docspring.com
                                                  web:4001 (*.*.*.*:4001,*.*.*.*:4001,*.*.*.*:4001 + 1 more...)
                                                  
Annotations:                                   alb.ingress.kubernetes.io/scheme: internet-facing
                                               cert-manager.io/cluster-issuer: letsencrypt
                                               convox.com/backend-protocol: http
                                               convox.com/idles: false
                                               kubernetes.io/ingress-class: nginx
                                               nginx.ingress.kubernetes.io/backend-protocol: http
                                               nginx.ingress.kubernetes.io/proxy-connect-timeout: 60
                                               nginx.ingress.kubernetes.io/proxy-read-timeout: 60
                                               nginx.ingress.kubernetes.io/proxy-send-timeout: 60
                                               nginx.ingress.kubernetes.io/server-snippet:
                                                 keepalive_timeout 60s;
                                                 grpc_read_timeout 60s;
                                                 grpc_send_timeout 60s;
                                                 client_body_timeout 60s;
                                               nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    12m (x58 over 156d)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    12m (x45 over 116d)  nginx-ingress-controller  Scheduled for sync
  6. Edit the ingress in your text editor:

kubectl edit ing web -n europe-v3-docspring

Add a nginx.ingress.kubernetes.io/proxy-buffer-size: "16k" line under annotations: in the metadata: section:

(Or set this to something like "128k" if you need larger buffers.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # ...
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
  7. Save and close the file.
  8. Show a list of your ingress controller pods:

kubectl get pods -l name=ingress-nginx -n europe-v3-system

NAME                            READY   STATUS    RESTARTS   AGE
ingress-nginx-bd6b85668-48cn8   1/1     Running   0          116d
ingress-nginx-bd6b85668-kzbz5   1/1     Running   0          156d
  9. Describe your ingress pod:

kubectl describe pod ingress-nginx-bd6b85668-48cn8 -n europe-v3-system

You should see an event at the bottom saying that Nginx has been reloaded with the new configuration. (You don’t need to restart anything, and there shouldn’t be any downtime.)

...
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  RELOAD  23m (x20 over 116d)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  10. If you want to be extra sure that this worked, you can confirm that the Nginx config has been updated by starting a bash session in the pod:

kubectl exec -it ingress-nginx-bd6b85668-48cn8 -n europe-v3-system -- bash

bash-5.1$ ls -l
total 116
...
-rw-r--r--    1 www-data www-data     41517 Sep 29 21:09 nginx.conf
-rw-r--r--    1 www-data www-data      2656 Mar 24  2021 nginx.conf.default
...
  11. Check the proxy_buffer_size and proxy_buffers values in nginx.conf:

grep "proxy_buffer_size\|proxy_buffers" nginx.conf

			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			proxy_buffer_size                       16k;
			proxy_buffers                           4 16k;
...

You’ll still see some 4k values, but these are just for some internal services. All the server { ... } configuration blocks for your application should be updated to use the new value (16k in this case).

Caveats

It looks like you will need to re-apply this annotation whenever you deploy your app or update Convox. This might cause some temporary downtime with 502 errors. (I’m not sure about this though.)
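
If you do need to re-apply it after a deploy, a one-liner like this should work without opening an editor (a sketch, using the same ingress name and namespace as above):

kubectl annotate ingress web -n europe-v3-docspring nginx.ingress.kubernetes.io/proxy-buffer-size=16k --overwrite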

Notes on Terraform

I also noticed that this doesn’t seem to be part of my Terraform configuration. I refreshed the state by running terraform refresh, and was hoping that I would see the new annotation in my terraform.tfstate file. But then I realized that the Terraform config doesn’t actually know anything about my apps, so this part must be managed directly by the Convox application and stored somewhere else.

I’d be interested to know if there’s a way to import all of my application resources into the Terraform .tfstate file, or maybe store a backup of the Convox data. (I’m still a bit fuzzy on how this works.)


Anyway, I’m really glad that I figured out how to fix the issue, and I hope this will be available as an option in the future. I’m going to go back to using CSP and rel=preload headers, and hopefully won’t see any more 502 errors.

Hi @mark, I was wondering if you have also been able to get the client IP forwarded correctly? I’m only seeing an internal IP address in my request headers:

X-Forwarded-For: 10.1.200.144
X-Forwarded-Host: example.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Real-Ip: 10.1.200.144
X-Request-Id: 239ae7d027f0e38160ce5156b575c3e0
X-Scheme: https

Is there a similar change I can make to fix this issue and get the real IP forwarded from the load balancer? Thanks!

CC @rhys @Nick-Convox


Good news: I discovered the proxy_protocol option in the Convox docs for AWS.

This is set to false by default. I believe this can be set by running convox rack params set proxy_protocol=true

(Unfortunately, it requires 5-10 minutes of downtime.)


Update from Nov 8, 2022

(I can’t comment any more since I already commented three times in a row.)

Just confirming that this annotation was still available on my web ingress after I upgraded Convox from v3.3.4 to v3.5.0.

nginx.ingress.kubernetes.io/proxy-buffer-size: 32k

I exported the Terraform config and am managing it myself on my local machine. I also initialized a git repo and am tracking the changes, so I have a backup stored in a private repo as well. This gives me a lot more confidence, and it feels a bit safer knowing that I can revert any changes in git and run terraform apply to fix any issues.
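
Roughly, the workflow looks like this (the directory name is just an example):

cd convox-rack-terraform   # wherever you exported the Terraform config
git init
git add . && git commit -m "Back up Convox rack Terraform config"
terraform plan             # review any drift before applying
terraform apply            # re-apply, or revert via git checkout first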