Unable to deploy nlb into a different VPC than the cluster #4121

Open
strelok1 opened this issue Apr 2, 2025 · 1 comment
Labels
triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

strelok1 commented Apr 2, 2025

Bug Description
(!) I know we're using a slightly outdated version of aws-load-balancer-controller

I have the following resource definition (generated by Terraform, but that is probably not important).

In this definition, subnet-a, subnet-b, and the IP addresses I specified are in a different VPC than the cluster. This is the setup I need, and the networking is configured so that connectivity works.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: '5'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: '30'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: '5'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: '2'
    service.beta.kubernetes.io/aws-load-balancer-name: myservice-uat-svc1-nlb-svc1
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: subnet-a-ip, subnet-b-ip
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-a, subnet-b
    service.beta.kubernetes.io/aws-load-balancer-type: external
  name: myservice-uat-nlb-service-svc1
  namespace: myservice-uat
  resourceVersion: '293664379'
  uid: 67119cb6-79d5-4c19-97cc-f3bc16a07c20
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: <redacted clusterIP>
  clusterIPs:
    - <redacted clusterIP>
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerClass: service.k8s.aws/nlb
  ports:
    - nodePort: 30853
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: myservice-uat-deployment
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
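
To confirm the annotated subnets really are in a different VPC than the cluster, their VPC membership can be checked with the AWS CLI; subnet-a and subnet-b below are placeholders for the real subnet IDs:

# Print the VPC each subnet belongs to; compare against the cluster's VPC.
aws ec2 describe-subnets --subnet-ids subnet-a subnet-b \
  --query 'Subnets[].[SubnetId,VpcId]' --output text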

The load balancer is correctly created in the VPC containing the specified subnets, but the target groups are not, and the following error appears in the aws-load-balancer-controller logs:

{"level":"error","ts":"2025-04-02T00:24:21Z","msg":"Reconciler error","controller":"service","object":{"name":"myservice-uat-svc1-nlb-svc1","namespace":"svc1-uat"},"namespace":"svc1-uat","name":"svc1-uat-nlb-service-ebix","reconcileID":"295532a5-92c7-4ea8-9b37-65f1e08d93bd","error":"InvalidConfigurationRequest: The following target groups are in a different VPC than load balancer 'arn:aws:elasticloadbalancing:ap-xxx-X:XXXXXXXXX:loadbalancer/net/myservice-uat-svc1-nlb-svc1/f0696a86256e46c6': arn:aws:elasticloadbalancing:ap-xxx-X:XXXXXXXXX:targetgroup/k8s-svcuat-svcuat-173712f63a/c418a4733372532e\n\tstatus code: 400, request id: e7812aeb-f255-4ee6-8a22-36fc69b1cff8"}

UPDATE

I upgraded aws-load-balancer-controller to the latest version, and now the load balancer is not created at all; I see the following error in the logs:

operation error Elastic Load Balancing v2: CreateLoadBalancer, https response error StatusCode: 400, RequestID: 27fd14e1-6a02-48e8-8101-774058a7dd6b, InvalidConfigurationRequest: One or more security groups are invalid
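
As a sanity check, the VPC of each security group the controller tried to attach can be listed as follows (sg-xxxxxxxx is a placeholder for the group ID in question):

# Print each security group together with the VPC it lives in:
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx \
  --query 'SecurityGroups[].[GroupId,VpcId]' --output text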

Steps to Reproduce

The manifest is above in the description. The specified subnets should be in a different VPC than the cluster.

Expected Behavior
The load balancer and target groups should be set up in the correct VPC (the one containing the specified subnets).

Actual Behavior

The load balancer is created in the correct VPC but the target groups are not.

Regression
Don't know

Current Workarounds

N/A

Environment

  • AWS Load Balancer controller version: 2.5.4
  • Kubernetes version: 1.30
  • Using EKS (yes/no), if so version?: yes 1.30
  • Using Service or Ingress: Service
@shraddhabang (Collaborator)

@strelok1 The error InvalidConfigurationRequest: One or more security groups are invalid during the AWS Load Balancer Controller's CreateLoadBalancer operation usually indicates that the security group referenced during creation is not within the target VPC. Can you please check whether the security group referenced by the controller is in your desired VPC?
You can check this in the logs by looking at the built model for your Service.
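
For example, assuming the default install (an aws-load-balancer-controller deployment in the kube-system namespace), the built model can be pulled out of the logs like this; the exact "successfully built model" message text may vary slightly between controller versions:

# Show the built-model entries, which include the resolved security
# group, subnet, and VPC IDs the controller intends to use:
kubectl -n kube-system logs deployment/aws-load-balancer-controller \
  | grep 'successfully built model'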

shraddhabang added the triage/needs-information label on Apr 9, 2025