Cluster Access

Overview

With infrastructure provisioned, you now need to configure access to the AKS cluster. This step covers:

  • Updating your kubeconfig to authenticate with AKS
  • Verifying cluster connectivity and node status
  • Confirming Terraform-deployed resources (Helm releases, service accounts, External Secrets)
  • Understanding private cluster access options (VPN, Azure Bastion, public API)
  • Granting access to additional team members

After this step, you will have working kubectl access and can verify all infrastructure components are healthy.

How AKS authentication works

AKS uses Azure Active Directory for authentication. When you run kubectl, it:

  1. Uses your Azure credentials to get a token
  2. Sends the token to the AKS API server
  3. AKS verifies the token against Azure AD
  4. If authorized, your command executes

This is why you need:

  • Working Azure credentials (configured earlier)
  • Your identity to be authorized in AKS
  • Network connectivity to the AKS API endpoint

Terraform automatically grants you access because it creates the cluster using your credentials. The cluster creator is automatically an admin. Other team members need to be added separately (covered below).
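For reference, the user entry that az aks get-credentials writes into your kubeconfig delegates token acquisition to your existing Azure login. On Azure AD-enabled clusters it typically looks something like the sketch below (the user name and server app ID are placeholders, not your actual values):

```yaml
# Illustrative kubeconfig user entry — values are placeholders.
users:
- name: clusterUser_confidentai-stage_confidentai-stage-aks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin        # exchanges your az login session for an AKS token
      args:
        - get-token
        - --login
        - azurecli
        - --server-id
        - <aks-aad-server-app-id>
```

This is why a valid az login session is a prerequisite: every kubectl call runs kubelogin, which in turn reads your Azure CLI credentials.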

Configure kubectl

Update your kubeconfig file with the AKS cluster credentials:

$ az aks get-credentials \
>   --resource-group $(terraform output -raw resource_group_name) \
>   --name $(terraform output -raw cluster_name)

This command:

  • Retrieves cluster connection information from AKS
  • Adds a new context to your ~/.kube/config file
  • Configures token generation using your Azure credentials

Expected output:

Merged "confidentai-stage-aks" as current context in /Users/you/.kube/config

“Could not connect to the endpoint URL” error?

This usually means:

  1. Wrong resource group: Ensure the resource group name matches your deployment
  2. AKS not ready: The cluster may still be provisioning—wait a few minutes
  3. Network issues: Your network may block HTTPS to Azure APIs

Verify your configuration matches your Terraform outputs.

Verify cluster access

Test that you can communicate with the cluster:

$ kubectl get nodes

Expected output:

NAME                               STATUS   ROLES    AGE   VERSION
aks-system-12345678-vmss000000     Ready    <none>   30m   v1.31.x
aks-system-12345678-vmss000001     Ready    <none>   30m   v1.31.x
aks-azewcais-12345678-vmss000000   Ready    <none>   30m   v1.31.x
aks-azewcais-12345678-vmss000001   Ready    <none>   30m   v1.31.x

You should see 2 system nodes plus your worker nodes (depending on confident_node_group_desired_size) in Ready status.
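If you script this health check (in CI, for example), a small helper can flag nodes that are not Ready from the plain kubectl output. This is an illustrative sketch, not part of the Terraform module; on a real cluster you would pipe `kubectl get nodes --no-headers` into it:

```shell
# Hypothetical helper: counts lines whose STATUS column (field 2) is not "Ready".
not_ready_count() {
  awk '$2 != "Ready" { n++ } END { print n + 0 }'
}

# Exercising it with sample output (one healthy node, one NotReady):
sample='aks-system-12345678-vmss000000 Ready <none> 30m v1.31.x
aks-system-12345678-vmss000001 NotReady <none> 30m v1.31.x'
printf '%s\n' "$sample" | not_ready_count   # prints 1
```

A result of 0 means every node reported Ready; anything else tells you how many nodes still need attention.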

Timeout or connection refused?

This typically means the AKS API is not accessible from your network:

Unable to connect to the server: dial tcp x.x.x.x:443: i/o timeout

If confident_public_aks = false (default): The AKS API is only accessible from within the VNet. You need VPN access or VNet peering to your corporate network. See “Private cluster access” below.

If confident_public_aks = true: The API should be publicly accessible. Check your NSG rules and network connectivity.

Check system pods

Verify core Kubernetes components are running:

$ kubectl get pods -n kube-system

You should see pods for:

  • coredns — DNS resolution within the cluster
  • kube-proxy — Network routing
  • azure-ip-masq-agent and related Azure CNI components — Azure CNI networking

All pods should be Running with all containers ready.

Verify Terraform-deployed resources

Terraform deployed several Kubernetes resources. Let’s verify they’re working correctly.

Helm releases

Check that all Helm charts installed successfully:

$ helm list -A

Name                  Namespace                    Expected Status
ingress-nginx         ingress-nginx                deployed
external-secrets      confident-ai                 deployed
argocd                argocd                       deployed
cert-manager          cert-manager                 deployed
clickhouse-operator   clickhouse-operator-system   deployed

Helm release shows “failed” or “pending-install”?

This sometimes happens when AKS wasn’t fully ready. Usually fixable by re-running:

$ terraform apply

Terraform will retry the failed Helm installations.
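To spot problem releases quickly across all namespaces, you can filter the `helm list -A` table for rows not marked deployed. This is a rough sketch: it matches on the word "deployed" anywhere in the row, so a chart whose name contains that word would slip through.

```shell
# Hypothetical filter: print release names (field 1) from `helm list -A`
# output for rows that do not contain the status "deployed".
failed_releases() {
  awk 'NR > 1 && $0 !~ /deployed/ { print $1 }'
}

# Sample table with one failed release:
sample='NAME NAMESPACE REVISION STATUS
ingress-nginx ingress-nginx 1 deployed
argocd argocd 1 failed'
printf '%s\n' "$sample" | failed_releases   # prints: argocd
```

An empty result means every release is in the expected deployed state.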

Confident AI namespace

Verify the namespace exists:

$ kubectl get namespace confident-ai

Service accounts

Check that the required service accounts are created:

$ kubectl get serviceaccounts -n confident-ai

Expected service accounts:

Service Account        Purpose
confident-storage-sa   Allows pods to access the Storage Account via Workload Identity
external-secrets-sa    Allows External Secrets Operator to read from Key Vault
ecr-credentials-sync   Used by the ECR credential rotation CronJob

Why service accounts? Service accounts enable Azure Workload Identity, which gives pods fine-grained Azure permissions. Instead of giving the whole cluster access to Storage, only pods using confident-storage-sa can access the blob containers. This follows the principle of least privilege.
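As a sketch of what that wiring looks like, a Workload Identity-enabled service account carries an annotation binding it to an Azure managed identity (the client ID below is a placeholder; Terraform fills in the real value):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: confident-storage-sa
  namespace: confident-ai
  annotations:
    # Binds this service account to an Azure managed identity (placeholder ID)
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
```

Pods must also opt in with the azure.workload.identity/use: "true" label for the identity token to be injected into them.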

External Secrets

External Secrets Operator syncs credentials from Azure Key Vault into Kubernetes secrets. Verify it’s working:

$ kubectl get clustersecretstore

Expected:

NAME                           AGE   STATUS   CAPABILITIES   READY
confident-clustersecretstore   30m   Valid    ReadWrite      True

Check the ExternalSecret:

$ kubectl get externalsecret -n confident-ai

Expected status: SecretSynced

NAME                       STORE                          REFRESH   STATUS
confident-externalsecret   confident-clustersecretstore   1h        SecretSynced
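For orientation, an ExternalSecret ties a Key Vault entry to a Kubernetes Secret via the ClusterSecretStore. The sketch below shows the general shape; the target secret name and key names are illustrative placeholders, not the actual values Terraform deploys:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: confident-externalsecret
  namespace: confident-ai
spec:
  refreshInterval: 1h                     # re-sync from Key Vault hourly
  secretStoreRef:
    kind: ClusterSecretStore
    name: confident-clustersecretstore
  target:
    name: confident-app-secrets           # Kubernetes Secret to create (placeholder)
  data:
    - secretKey: database-password        # key inside that Secret (placeholder)
      remoteRef:
        key: confident-database-password  # secret name in Key Vault (placeholder)
```

When the sync succeeds, the operator creates and keeps refreshing the target Secret, which application pods consume like any other Kubernetes Secret.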

ExternalSecret shows “SecretSyncedError”?

This means it couldn’t read from Key Vault. Common causes:

  1. Permissions: The external-secrets-sa managed identity may not have Key Vault Secrets User role
  2. Key Vault network ACLs: The Key Vault may be blocking access from the cluster
  3. Secret name mismatch: The ExternalSecret is looking for secrets that don’t exist in Key Vault

Check the error details:

$ kubectl describe externalsecret confident-externalsecret -n confident-ai

Private cluster access

By default (confident_public_aks = false), the AKS API server is only accessible from within the VNet. This is a security best practice—it prevents unauthorized access from the internet.

To access a private cluster, you need network connectivity to the VNet.

Option A: Corporate VPN

If your organization has VPN connectivity to Azure (via ExpressRoute, Site-to-Site VPN, or Virtual WAN):

  1. Connect to your corporate VPN
  2. Ensure the VPN routes include the Confident AI VNet address range
  3. Run kubectl commands normally

This is the recommended approach for production because it uses your existing network security infrastructure.

VPN routing must include the AKS VNet. If you configured a custom address space (e.g., 10.0.0.0/16) in Prerequisites, ensure your VPN routes include it. Work with your network team to add the route if needed.

Option B: Azure Bastion / Jump box

If you don’t have existing VNet connectivity, you can use an Azure VM within the VNet as a jump box:

  1. Create a VM in the Confident AI VNet
  2. SSH into the VM
  3. Install kubectl and az CLI on the VM
  4. Run kubectl commands from the VM

The Terraform code includes a commented-out bastion configuration in bastion.tf that you can enable as a starting point.
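If you write your own instead, a minimal Linux jump box is roughly the shape below. This is a sketch under assumed variable names (var.location, var.resource_group_name, var.private_subnet_id are not defined here), not the contents of bastion.tf:

```hcl
# Minimal jump box sketch — resource and variable names are illustrative.
resource "azurerm_network_interface" "jumpbox" {
  name                = "confident-jumpbox-nic"
  location            = var.location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = var.private_subnet_id  # a subnet in the AKS VNet
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "jumpbox" {
  name                  = "confident-jumpbox"
  location              = var.location
  resource_group_name   = var.resource_group_name
  size                  = "Standard_B2s"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.jumpbox.id]

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts-gen2"
    version   = "latest"
  }
}
```

Because the NIC sits in the AKS VNet, kubectl run from this VM reaches the private API server directly.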

Option C: Public API access (testing only)

If you’re just testing, you can enable public API access by setting confident_public_aks = true in your tfvars and re-running Terraform. This makes the AKS API accessible from the internet.

Public AKS API is a security risk. While authenticated by Azure AD, a publicly accessible API endpoint increases your attack surface. Only use this for temporary testing, never for production.

Grant access to team members

The person who ran Terraform is automatically an AKS admin. To grant access to other team members:

Add Azure AD group object IDs to your tfvars:

confident_aks_admin_group_object_ids = ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"]

Then re-run terraform apply. This grants cluster admin access to all members of that Azure AD group.

Using Azure CLI

For individual users:

$ az role assignment create \
>   --assignee "<user-object-id>" \
>   --role "Azure Kubernetes Service Cluster Admin Role" \
>   --scope "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"

Role assignments require the identity to exist in Azure AD. If you get errors, verify the user or group object ID is correct and exists in your tenant.
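Object IDs are GUIDs, so a quick format check catches copy-paste mistakes before you re-run Terraform. This is an illustrative helper only; it does not prove the ID exists in your tenant (use az ad group show or az ad user show for that):

```shell
# Hypothetical helper: succeeds if the argument is shaped like a GUID.
is_guid() {
  printf '%s' "$1" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'
}

is_guid "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" && echo "looks valid"   # prints: looks valid
```

A truncated or re-wrapped ID (a common artifact of copying from chat tools) fails this check immediately.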

ArgoCD access

ArgoCD is deployed for GitOps-based deployments. You can access it once your network has connectivity to the cluster:

# Get the ArgoCD URL
$ terraform output argocd_server_url

# Credentials:
#   Username: admin
#   Password: the argocd_admin_password you configured

ArgoCD runs inside the cluster behind an internal Azure Load Balancer, so it’s only accessible via the internal network. You’ll need VPN connectivity to access the dashboard.

Troubleshooting

“You must be logged in to the server (Unauthorized)”

error: You must be logged in to the server (Unauthorized)

Your Azure identity isn’t authorized to access the cluster:

  1. Verify your credentials: az account show
  2. Check you’re using the same identity that ran Terraform
  3. If using a different identity, have an admin add you (see above)

“Unable to connect to the server: dial tcp: i/o timeout”

You have no network path to the AKS API:

  1. For private clusters, ensure you’re connected to VPN
  2. Verify the VPN routes include the VNet address range
  3. Check no firewall is blocking HTTPS (port 443) to Azure

Nodes show “NotReady”

Nodes take a few minutes to fully initialize. Wait 2-3 minutes after the cluster is created. If they stay NotReady:

$ kubectl describe node <node-name>

Look at the “Conditions” section for clues. Common causes:

  • Azure CNI not configured correctly
  • Node can’t reach the AKS API
  • Node VM has insufficient resources

Next steps

With cluster access configured, proceed to Kubernetes Deployment to deploy the Confident AI application services.