Problem Description
Users who are members of hundreds of Active Directory groups receive a “400 Bad Request – Request header or cookie too large” error when trying to authenticate to web applications through Keycloak using OIDC/Kerberos authentication.
Root Cause
When using Kerberos/SPNEGO authentication with Active Directory, the Kerberos service ticket's PAC (Privilege Attribute Certificate) contains ALL of the user's AD group memberships, and the browser sends the entire ticket to Keycloak in an Authorization: Negotiate request header. For users with 200+ AD groups, this ticket can be 150-250KB or larger. The default nginx ingress controller buffer sizes (1KB initial, 4 × 8KB for large headers) cannot handle these oversized authentication headers, so requests are rejected before they ever reach Keycloak.
Important: This issue occurs during the initial authentication request, so affected users will not appear in Keycloak event logs – the request is rejected at the nginx layer before reaching Keycloak.
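To confirm a user falls in the affected range, you can count their AD group memberships. A minimal sketch using ldapsearch; the hostname, bind account, base DN, and username below are placeholders:
# Count direct memberOf entries for a user (all names are placeholders)
ldapsearch -LLL -H ldaps://dc.yourdomain.local \
  -D "svc-ldap@yourdomain.local" -W \
  -b "dc=yourdomain,dc=local" \
  "(sAMAccountName=jdoe)" memberOf | grep -c '^memberOf:'
Note that this counts direct memberships only; the PAC also includes nested and built-in groups, so the actual ticket can be larger than this count suggests.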
Symptoms
- Users with many AD group memberships cannot authenticate
- Error message: “400 Bad Request – Request header or cookie too large”
- No authentication events appear in Keycloak logs for affected users
- Users with fewer AD groups can authenticate successfully
- Problem occurs during initial login, not subsequent requests
Solution
Increase the nginx ingress controller buffer sizes and HTTP/2 header limits to accommodate large Kerberos authentication tickets.
Step 1: Identify the Nginx Ingress Controller Namespace
# Find where nginx ingress controller is deployed
kubectl get pods -A | grep ingress
# Common namespaces: default, ingress-nginx, kube-system, nginx-ingress
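If the controller uses the stock ingress-nginx labels, a one-liner can extract the namespace directly. A sketch, assuming a standard ingress-nginx install (the label selector is an assumption):
# Print the namespace of the first pod matching the standard ingress-nginx label
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.namespace}'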
Step 2: Locate the Nginx ConfigMap
# List ConfigMaps in the nginx namespace
kubectl get configmap -n <nginx-namespace> | grep -i nginx
# Common names:
# - ingress-nginx-controller
# - nginx-configuration
# - ingress-nginx-nginx-controller
# - ingress-nginx-nginx-ingress-controller
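Rather than guessing, you can read the ConfigMap name the controller was actually started with; ingress-nginx passes it via the --configmap argument. A sketch, assuming the same component label used elsewhere in this guide:
# The controller's --configmap argument names the ConfigMap it watches
kubectl get pods -n <nginx-namespace> \
  -l app.kubernetes.io/component=controller \
  -o jsonpath='{.items[0].spec.containers[0].args[*]}' | tr ' ' '\n' | grep configmap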
Step 3: Edit the ConfigMap
kubectl edit configmap <configmap-name> -n <nginx-namespace>
Add the following settings under the data: section:
data:
  client-header-buffer-size: "512k"
  large-client-header-buffers: "8 512k"
  proxy-buffer-size: "512k"
  proxy-buffers: "8 512k"
  http2-max-field-size: "512k"
  http2-max-header-size: "512k"
  http2-max-requests: "1000"
If the ConfigMap has no data: section, create it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: <nginx-namespace>
data:
  client-header-buffer-size: "512k"
  large-client-header-buffers: "8 512k"
  proxy-buffer-size: "512k"
  proxy-buffers: "8 512k"
  http2-max-field-size: "512k"
  http2-max-header-size: "512k"
  http2-max-requests: "1000"
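If you prefer a non-interactive change (for scripts or CI), kubectl patch applies the same keys in one step and creates the data section if it is missing:
# Merge-patch the buffer settings into the ConfigMap (creates data: if absent)
kubectl patch configmap <configmap-name> -n <nginx-namespace> --type merge -p '{
  "data": {
    "client-header-buffer-size": "512k",
    "large-client-header-buffers": "8 512k",
    "proxy-buffer-size": "512k",
    "proxy-buffers": "8 512k",
    "http2-max-field-size": "512k",
    "http2-max-header-size": "512k",
    "http2-max-requests": "1000"
  }
}'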
Step 4: Verify Changes Applied
The nginx controller will automatically detect the ConfigMap change and reload within 5-10 seconds. No pod restart is required.
Watch for the reload in the logs:
kubectl logs -n <nginx-namespace> -l app.kubernetes.io/component=controller -f
You should see:
Configuration changes detected, backend reload required
Backend successfully reloaded
Verify the settings in the nginx configuration:
# Get the nginx controller pod name
NGINX_POD=$(kubectl get pods -n <nginx-namespace> -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
# Check that buffer settings are applied
kubectl exec -n <nginx-namespace> $NGINX_POD -- cat /etc/nginx/nginx.conf | grep -E "client_header_buffer_size|large_client_header_buffers|http2_max"
Expected output:
client_header_buffer_size 512k;
large_client_header_buffers 8 512k;
http2_max_field_size 512k;
http2_max_header_size 512k;
http2_max_requests 1000;
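If the grep returns nothing, dumping the full configuration that nginx actually validated can help; nginx -T tests the configuration and prints the merged result:
# Validate and dump the full running configuration, then search it
kubectl exec -n <nginx-namespace> $NGINX_POD -- nginx -T 2>/dev/null | grep -E "buffer|http2"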
Step 5: Test the Fix
Have an affected user (with many AD groups) attempt to authenticate through Keycloak.
Optional – Simulate with curl:
# Test with an artificially large header (should return 302 or 200, not 000 or 400)
curl -s -o /dev/null -w "HTTP Status: %{http_code}\n" -k \
  -H "X-Test: $(python3 -c 'print("A"*50000)')" \
  https://keycloak.yourdomain.local
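Since the error message also mentions cookies, the same probe can exercise the cookie path; this variant sends one oversized Cookie header instead of a custom header:
# Same test, but with a large cookie instead of a custom header
curl -s -o /dev/null -w "HTTP Status: %{http_code}\n" -k \
  -H "Cookie: test=$(python3 -c 'print("A"*50000)')" \
  https://keycloak.yourdomain.local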
Configuration Explanation
| Setting | Purpose | Value |
| --- | --- | --- |
| client-header-buffer-size | Initial buffer for client request headers | 512k |
| large-client-header-buffers | Additional buffers for large headers (count × size) | 8 × 512k |
| proxy-buffer-size | Buffer for backend response headers | 512k |
| proxy-buffers | Buffers for backend responses | 8 × 512k |
| http2-max-field-size | Maximum size of a single HTTP/2 header field | 512k |
| http2-max-header-size | Maximum size of the entire HTTP/2 header block | 512k |
| http2-max-requests | Maximum requests per HTTP/2 connection | 1000 |
Why Not Use Ingress Annotations?
It is possible to add buffer settings via Ingress annotations, for example:
nginx.ingress.kubernetes.io/client-header-buffer-size: "512k"
nginx.ingress.kubernetes.io/large-client-header-buffers: "8 512k"
However, these annotations may not be respected by all nginx ingress controller versions. The ConfigMap approach is more reliable and applies globally to all Ingresses, which is appropriate for this authentication-related issue.
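For completeness, if you do try the annotation route, the keys belong under metadata.annotations of the Ingress. A minimal sketch; the name, namespace, and host are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    nginx.ingress.kubernetes.io/client-header-buffer-size: "512k"
    nginx.ingress.kubernetes.io/large-client-header-buffers: "8 512k"
# spec unchanged from your existing Ingress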
Performance Impact
The memory impact of these increased buffer sizes is minimal:
- Buffers are allocated per connection and only when needed
- Most connections will use far less than the maximum
- Rough arithmetic: 1000 concurrent connections each holding one 512k buffer is about 512MB; the theoretical worst case (all 8 large buffers in use per connection, 4MB each) is ~4GB, which is unrealistic in practice
- No measurable performance degradation
Increasing Buffer Sizes Further
If 512k is insufficient (users with 500+ AD groups), increase the values:
data:
  client-header-buffer-size: "1024k"     # 1MB
  large-client-header-buffers: "8 1024k" # 8 × 1MB
  http2-max-field-size: "1024k"
  http2-max-header-size: "1024k"
Because buffers are allocated on demand, larger maximums carry little practical penalty in enterprise environments; the main cost is additional memory under worst-case load.
Alternative Solutions (Not Recommended)
Disable Kerberos Authentication
If Kerberos/SPNEGO is not required, you can remove it from the authentication flow:
Keycloak Admin Console → Authentication → Flows → Browser
→ Remove or disable Kerberos execution
Pros: Eliminates large Kerberos tickets
Cons: Loses SSO functionality, requires manual username/password entry
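If you go this route and want to automate it, Keycloak's bundled admin CLI (kcadm.sh) can toggle the execution. A sketch only; the flow alias "browser" (the default) and the realm name "myrealm" are assumptions:
# Authenticate the admin CLI, then inspect the browser flow's executions
./kcadm.sh config credentials --server https://keycloak.yourdomain.local \
  --realm master --user admin
./kcadm.sh get "authentication/flows/browser/executions" -r myrealm
# Take the Kerberos execution's JSON from the output, change "requirement"
# to "DISABLED", and send it back:
./kcadm.sh update "authentication/flows/browser/executions" -r myrealm \
  -b '<modified-execution-json>'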
Limit Groups in Kerberos Tickets (AD-Side)
Active Directory administrators can configure Group Policy to limit the number of groups included in Kerberos tickets.
Pros: Reduces ticket size at the source
Cons: Requires AD admin access, may impact other systems, complex to configure
Troubleshooting
Changes Not Applied
If settings don’t appear in nginx.conf after editing the ConfigMap:
- Verify the correct ConfigMap was edited:
kubectl get configmap <configmap-name> -n <nginx-namespace> -o yaml
- Check nginx controller logs for errors:
kubectl logs -n <nginx-namespace> -l app.kubernetes.io/component=controller --tail=100
- Confirm the nginx controller is watching the ConfigMap – some controllers require specific naming conventions
Users Still Getting 400 Errors
- Verify settings applied (Step 4 above)
- Increase buffer sizes to 1024k (1MB)
- Check whether other proxies or load balancers in front of nginx impose their own header limits (see the header check after this list)
- Check Keycloak logs for any application-level errors
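To identify which layer is rejecting the request, inspect the response headers of a failing probe; the Server header (and the error page body) usually reveals whether nginx or an upstream load balancer answered:
# Show response headers for an oversized request; the Server header hints at the rejecting layer
curl -sk -o /dev/null -D - \
  -H "X-Test: $(python3 -c 'print("A"*200000)')" \
  https://keycloak.yourdomain.local | grep -iE '^(HTTP|server)'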
Testing Without Affected Users
# Test with various header sizes
for size in 10000 50000 100000 200000; do
  echo "Testing ${size} byte header:"
  curl -s -o /dev/null -w "HTTP Status: %{http_code}\n" -k \
    -H "X-Test: $(python3 -c "print('A'*${size})")" \
    https://keycloak.yourdomain.local
done
Expected: All tests should return 302 (redirect) or 200, not 000 or 400.
Related Issues
This same configuration resolves similar issues with:
- Large cookie sizes in authenticated sessions
- Applications that send many custom headers
- OAuth/OIDC flows with large state parameters
- SAML assertions in headers
References
- Nginx ngx_http_core_module – client_header_buffer_size
- Nginx ngx_http_v2_module – http2_max_field_size
- Kubernetes nginx-ingress ConfigMaps documentation
AVI E.