Obtaining container logs

Hello there!

I’m in the middle of adapting Convox for the company I work for. I started by provisioning my cluster on DigitalOcean. Everything went more or less smoothly, however I needed to spend some time in the Kubernetes Dashboard (provided by DigitalOcean) to debug some internal app issues around missing ENV vars, etc.

When everything was in place and working, I removed the DO cluster and started provisioning on AWS (as my company uses AWS for everything). Using the exact same setup, my app does not start, and I’m not sure of the proper way to debug it, as I have no access to the Kubernetes Dashboard (AWS does not provide it out of the box like DO did; I would need to install it on my own). Without being able to access pod logs, I can’t really tell why my app is not starting:

➜  my-rails-app git:(CMS-834) ✗ convox releases promote RJMESMCGZYR
Promoting RJMESMCGZYR...
2020-03-06T08:10:33Z system/k8s/atom/app Status: Reverted => Pending
2020-03-06T08:10:34Z system/k8s/web Scaled up replica set web-6d88864d9f to 1
2020-03-06T08:10:34Z system/k8s/atom/app Status: Pending => Updating
2020-03-06T08:10:35Z system/k8s/web-6d88864d9f-s9k7v Container image "580245656094.dkr.ecr.us-east-2.amazonaws.com/company-name/my-rails-app:web.BBMOWHKESOO" already present on machine
2020-03-06T08:10:35Z system/k8s/web-6d88864d9f-s9k7v Created container main
2020-03-06T08:10:35Z system/k8s/web-6d88864d9f-s9k7v Started container main
2020-03-06T08:10:42Z system/k8s/web-6d88864d9f-s9k7v Readiness probe failed: Get http://10.1.70.33:3000/: dial tcp 10.1.70.33:3000: connect: connection refused
2020-03-06T08:10:45Z system/k8s/web-6d88864d9f-s9k7v Back-off restarting failed container
ERROR: rollback

Is there any way I could see what happened in the container during a deploy using the Convox CLI? Any command I’m not aware of? Using convox logs gives me the exact same output as above.

Thanks in advance!

Did you ever solve this? I’m having this same issue.

Apologies for bringing back an old thread. I ran into this and didn’t find an answer, so hoping the below might help future searchers. This assumes a V3 rack.

Find the crash logs with kubectl (a consolidated sketch of these commands follows the list):

  1. Install kubectl (on a Mac with Homebrew: brew install kubectl)
  2. Add the AWS CLI credentials for this account in $HOME/.aws/credentials
  3. Switch to it with export AWS_PROFILE=name_in_square_brackets
  4. Export the kubeconfig from Convox: convox rack kubeconfig > $HOME/.kube/app_name
    • Set app_name to exactly the same name as what is listed in convox apps
    • View the file to make sure it was exported: cat $HOME/.kube/app_name
  5. Switch to it with export KUBECONFIG=$HOME/.kube/app_name
  6. Confirm the connection with kubectl get pods --all-namespaces
    • Find the namespace for the app in the list. It should be in the form rackname-appname.
  7. Set that namespace for future commands: kubectl config set-context --current --namespace=namespace_name_here
  8. Start the deploy in Convox (convox deploy) and wait for Back-off restarting failed container
  9. Find the pod name with kubectl get pods
  10. Output the logs with kubectl logs pod_name_here
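For convenience, here is the same workflow as a single shell session. It is only a sketch of the steps above: name_in_square_brackets, app_name, rackname-appname, and pod_name_here are placeholders you need to substitute with the values from your own AWS profile, convox apps, and kubectl get pods output.

# 1-3: point the AWS CLI at the right account
brew install kubectl
export AWS_PROFILE=name_in_square_brackets

# 4-5: export the kubeconfig from Convox and make kubectl use it
convox rack kubeconfig > $HOME/.kube/app_name
export KUBECONFIG=$HOME/.kube/app_name

# 6-7: locate the app's namespace (rackname-appname) and set it as the default
kubectl get pods --all-namespaces
kubectl config set-context --current --namespace=rackname-appname

# 8-10: redeploy, then pull the crash logs from the failing pod
convox deploy
kubectl get pods
kubectl logs pod_name_here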

This should report the error message or what is causing the app to crash.
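Two standard kubectl tricks (not Convox-specific) can also help here: if the container has already restarted by the time you look, kubectl logs --previous prints the logs of the previous, crashed instance, and kubectl describe pod shows the probe failures and restart events. In the trace above, the readiness probe getting connection refused on port 3000 often means the process either died on boot or is listening on 127.0.0.1 instead of 0.0.0.0; the crash logs should tell you which.

# logs from the previous, crashed container instance
kubectl logs pod_name_here --previous

# events, probe results, and restart counts for the pod
kubectl describe pod pod_name_here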