Solve the Identity Server signin-oidc 502 Bad Gateway error
It was time to run Identity Server 4 together with the rest of the microservices in the Kubernetes cluster on a dedicated machine. I am running a single-node cluster, since this is not production but just one of my pet projects, and it runs locally. Still, traffic goes through one more Nginx (on top of the Kubernetes ingress) that is used to expose the services over the internet and to create the certificates through Let’s Encrypt.
When I run Identity Server 4 in that Kubernetes cluster and debug the services that need it for authentication from Visual Studio 2022, all is great. I can log in and get the token in a cookie.
The problem appears when I run both services from the same Kubernetes cluster: the signin-oidc redirect fails with a 502 Bad Gateway error.
From the logs I saw some errors. During the investigation it turned out they were not related to the 502 Bad Gateway issue at all, but I will leave the note here for now.
After some searching on the internet I found that this is typically a problem with Nginx and the header size of the response. I guess the problem is somewhere along the proxy chain, because when I run the “dependent” service locally from Visual Studio it works just fine.
My Nginx is installed through Nginx Proxy Manager (NPM) in a container, but it seems the authors have thought about such situations and provided a way to extend the configuration. You can check the advanced configuration documentation, more precisely the “Custom Nginx Configuration“ section.
The first place to change in a plain Nginx installation is /etc/nginx/nginx.conf:
http {
    ...
    proxy_buffer_size           128k;
    proxy_buffers               4 256k;
    proxy_busy_buffers_size     256k;
    large_client_header_buffers 4 16k;
    ...
}
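If you edit /etc/nginx/nginx.conf directly (outside a container), remember to validate and reload the configuration afterwards. A minimal sketch, assuming the nginx binary is on the PATH and you have the required permissions:
# check the configuration for syntax errors
nginx -t
# reload the configuration without dropping active connections
nginx -s reload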
Translated to Nginx Proxy Manager (NPM), this means adding the directives to <path to the mapped folder>/data/nginx/custom/http.conf.
In my case that is ~./data/nginx/custom/http.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
large_client_header_buffers 4 16k;
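To confirm that NPM’s Nginx actually loads the custom file, you can dump the effective configuration from inside the container. A minimal sketch, assuming the container is named nginx-proxy-manager (adjust to your container name):
# dump the full effective configuration and look for the custom directive
docker exec nginx-proxy-manager nginx -T | grep proxy_buffer_size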
The second place is the proxy rules configuration. The relevant part is the last two lines (the fastcgi buffer directives).
location / {
    proxy_pass http://localhost:500;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
}
Translated to NPM, it goes to <path to the mapped folder>/data/nginx/custom/server_proxy.conf.
In my case that is ~./data/nginx/custom/server_proxy.conf:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
These buffer sizes might be overkill. If I see any problems with Nginx, I will revise the numbers.
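For the two custom files to take effect, Nginx inside the NPM container needs to reload its configuration. A minimal sketch, again assuming the container is named nginx-proxy-manager; restarting the container works just as well:
# reload Nginx inside the Nginx Proxy Manager container
docker exec nginx-proxy-manager nginx -s reload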
The change above was not enough, so I had to change the Kubernetes ingress as well by adding some annotations to the ingress configuration.
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/proxy-buffering: "on"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
  nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
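The same annotations can also be applied to an already deployed Ingress object without touching the Helm chart. A minimal sketch with kubectl, using the someservice-ingress name from the chart below:
kubectl annotate ingress someservice-ingress nginx.ingress.kubernetes.io/proxy-buffering="on" --overwrite
kubectl annotate ingress someservice-ingress nginx.ingress.kubernetes.io/proxy-buffer-size="128k" --overwrite
kubectl annotate ingress someservice-ingress nginx.ingress.kubernetes.io/proxy-buffers-number="4" --overwrite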
Here is the ingress.yaml I have in the Helm chart:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someservice-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: someservice-clusterip
                port:
                  number: 80
          - pathType: Prefix
            path: /
            backend:
              service:
                name: someservice-clusterip
                port:
                  number: 443
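To verify that the ingress-nginx controller picked up the buffer settings, you can inspect the annotations on the Ingress object and the controller’s generated configuration. A minimal sketch, assuming the controller runs as the deployment ingress-nginx-controller in the ingress-nginx namespace (adjust to your installation):
# check that the annotations landed on the Ingress object
kubectl get ingress someservice-ingress -o yaml
# grep the generated Nginx configuration inside the controller for the new buffer size
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- nginx -T | grep proxy_buffer_size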