57-argo-workflows.sh

source link: https://gist.github.com/vfarcic/28e2adb5946ca366d7845780608591d7
Hi. I get UNAUTHORIZED errors and I can't pull the images from the repo. I'm confused.

vagrant@vagrant:~/argo-workflows-demo$ argo --namespace workflows logs @latest --follow

toolkit-rvfbv-1809338610: Enumerating objects: 346, done.
Counting objects: 100% (64/64), done.
Compressing objects: 100% (51/51), done.
toolkit-rvfbv-1809338610: Total 346 (delta 26), reused 47 (delta 12), pack-reused 282
toolkit-rvfbv-1809338610: error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "vfarcic/devops-toolkit:1.0.0": POST https://index.docker.io/v2/vfarcic/devops-toolkit/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:vfarcic/devops-toolkit Type:repository] map[Action:push Class: Name:vfarcic/devops-toolkit Type:repository]]

Author

I forgot to add the command that would change the registry used from vfarcic to whatever is your user. I just added https://gist.github.com/vfarcic/28e2adb5946ca366d7845780608591d7#file-57-argo-workflows-sh-L100. That should fix the problem. Can you try it out and let me know whether it worked?

After removing the space before vfarcic and $REGISTRY_USER, it correctly edits the file:

cat workflows/cd-mock.yaml \
    | sed -e "s@value:vfarcic@value:$REGISTRY_USER@g" \
    | tee workflows/cd-mock.yaml

Other than that, I get a new error when I submit the workflow.

argo --namespace workflows submit workflows/cd-mock.yaml
FATA[2021-05-24T13:07:58.685Z] Failed to submit workflow: templates.full.tasks.build-container-image template reference container-image.build-kaniko-git not found

This might be related to my Kustomize installation. I'm looking into it.

Author

That's strange, since there is a space between value: and vfarcic. Take a look at the following commands and the output:

export REGISTRY_USER=xyz

cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"

The output:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: toolkit-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
spec:
  entrypoint: full
  serviceAccountName: workflow
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred
      items:
        - key: .dockerconfigjson
          path: config.json
  templates:
  - name: full
    dag:
      tasks:
      - name: build-container-image
        templateRef:
          name: container-image
          template: build-kaniko-git
          clusterScope: true
        arguments:
          parameters:
          - name: app_repo
            value: git://github.com/vfarcic/argo-workflows-demo
          - name: container_image
            value: xyz/devops-toolkit
          - name: container_tag
            value: "1.0.0"
      - name: deploy-staging
        template: echo
        arguments:
          parameters:
          - name: message
            value: Deploying to the staging cluster...
        dependencies:
        - build-container-image
      - name: tests
        template: echo
        arguments:
          parameters:
          - name: message
            value: Running integration tests (before, during, and after the deployment is finished)...
        dependencies:
        - build-container-image
      - name: deploy-production
        template: echo
        arguments:
          parameters:
          - name: message
            value: Deploying to the production cluster...
        dependencies:
        - tests
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine
      command: [echo]
      args:
      - "{{inputs.parameters.message}}"

You can see that the output now contains value: xyz/devops-toolkit instead of value: vfarcic/devops-toolkit.

I did not manage to complete the tutorial.

For what it's worth:

cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"

is working correctly.

While

cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
| tee workflows/cd-mock.yaml

is deleting the contents of the file.

Author

If the first command works, the second should work as well, since it is piping the output to tee, which writes it into the specified file (which happens to be the same one).
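That said, all commands in a pipeline start concurrently, so tee can truncate the file before cat has finished reading it; whether it works can be timing-dependent. A safer sketch (using the same filenames as the tutorial) writes to a temporary file and only then replaces the original:

```shell
# Hypothetical safer variant: write to a temporary file first, then
# replace the original, so the source file is never truncated while
# it is still being read by cat.
cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
    > workflows/cd-mock.yaml.tmp \
    && mv workflows/cd-mock.yaml.tmp workflows/cd-mock.yaml
```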

Would it help if we do a screen-sharing session and take a look at it together? If that sounds good, please pick any time that suits you from https://calendly.com/vfarcic/meet.

Thanks very much for your availability :) I went ahead and almost completed it, so hopefully I won't take up much more of your time.


First of all, the following command worked as expected when I used another terminal (my bad):

cat workflows/cd-mock.yaml | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" | tee workflows/cd-mock.yaml

Apart from that, building with Kustomize generates the following error (which seems related to kubernetes-sigs/kustomize#2538):

kustomize build argo-workflows/overlays/production | kubectl apply --filename -

Error: accumulating resources: accumulation err='accumulating resources from '../../base': '/home/vagrant/argocd-production/argo-workflows/base' must resolve to a file': recursed accumulation of path '/home/vagrant/argocd-production/argo-workflows/base': accumulating resources: accumulation err='accumulating resources from 'github.com/argoproj/argo/manifests/base': evalsymlink failure on '/home/vagrant/argocd-production/argo-workflows/base/github.com/argoproj/argo/manifests/base' : lstat /home/vagrant/argocd-production/argo-workflows/base/github.com: no such file or directory': git cmd = '/snap/kustomize/28/usr/bin/git init': exit status 1


Another approach

I used the -k flag of kubectl for building (since Kustomize is now integrated into kubectl):

kubectl apply -k argo-workflows/overlays/production/

For this to work, one must first create the namespace using the --save-config flag, like this:

kubectl create namespace workflows --save-config

Then I followed the next steps successfully.

Author

I'm not using kubectl apply -k because it bundles a very old version of Kustomize, with no sign that it'll ever be updated. You could also try upgrading Kustomize; I'm currently using 4+.

There should be no need to create the workflows Namespace separately. You can see that https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/kustomization.yaml has namespace.yaml as one of the resources. That file (https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/namespace.yaml) is the manifest that defines the workflows Namespace.
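For reference, that layout looks roughly like this (a sketch reconstructed from the description above, not copied verbatim from the repo):

```yaml
# argo-workflows/overlays/workflows/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: workflows
resources:
- namespace.yaml   # the manifest that defines the workflows Namespace
- ../../base
---
# argo-workflows/overlays/workflows/namespace.yaml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: workflows
```

Because namespace.yaml is listed as a resource, applying the overlay creates the Namespace along with everything else in a single step.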

I followed the steps exactly. Is there any reason that open http://$ARGO_WORKFLOWS_HOST renders a 502 Bad Gateway?

Author

That can indicate Ingress not being installed, application not running, and quite a few other server-side issues. Can you do curl -i http://$ARGO_WORKFLOWS_HOST and paste the output?

@vfarcic I figured it out. According to the Argo Workflows documentation, when creating the NGINX Ingress, nginx.ingress.kubernetes.io/backend-protocol: https needs to be added to the annotations.

Also, a client token is now needed to access argo-server. I used kubectl -n argo exec (argo-server pod name) -- argo auth token to generate the token.
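In Ingress terms, the fix described above looks roughly like this (a sketch; the Ingress name and host are assumptions, not taken from the tutorial, while 2746 is argo-server's default port):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server           # hypothetical name
  namespace: argo
  annotations:
    # argo-server serves HTTPS by default, so NGINX must talk
    # to the backend over TLS rather than plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: argo.example.com    # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argo-server
            port:
              number: 2746
```

Without the annotation, NGINX speaks plain HTTP to an HTTPS backend, which typically surfaces as a 502 Bad Gateway.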

Hi @vfarcic, Thanks for posting this, I have one issue where I get

error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "namespace/username/test-repo:1.0.0": POST https://index.docker.io/v2/namespace/username/test-repo/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/username/test-repo Type:repository] map[Action:push Class: Name:namespace/username/test-repo Type:repository]]

I'm using Oracle Container Registry, where the username consists of a namespace and a username. I did set up my regcred correctly, though I can't seem to get it working with the template.

I have this value in the template

      - name: container_image
        value: namespace/username/test-repo

but it says https://index.docker.io/v2 instead of syd.ocir.io, which is the registry in the regcred secret.

Author

@MazenElzanaty Can you confirm that the secret was created in the same namespace where Workflow build is running?

@vfarcic Yes. Actually, I think the issue is with kaniko itself.

Hi @vfarcic, I got this error when I submitted the workflow to Argo:

toolkit-v4l9s-2218966670: Enumerating objects: 350, done.
Counting objects: 100% (68/68), done.
Compressing objects: 100% (54/54), done.
toolkit-v4l9s-2218966670: Total 350 (delta 28), reused 50 (delta 13), pack-reused 282
toolkit-v4l9s-2218966670: kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

I have already retried everything (deleting the workflows namespace and re-adding it), but it still doesn't work. Can you help me with this? Thanks! :)

Author

@theodoreandrew I heard a similar complaint a week ago and, if I remember correctly, it was reproducible on Docker Desktop Kubernetes. Where are you running it?

@vfarcic Oh, I also use Docker Desktop. I am using minikube as the VM. I am not sure if that's what you're asking, since I'm also a bit new to k8s.

Author

Can you try it on, let's say, Rancher Desktop? I've been using it exclusively for a while now (approximately 6 months) and haven't seen any issues with it. Also, it's been working fine in "real" clusters like, for example, GKE and EKS.

If Rancher Desktop is not an option for you (even though I highly recommend it; watch https://youtu.be/evWPib0iNgY), I'll do my best to install whatever you're having and try to reproduce it. In that case, please let me know whether you're using Minikube or Docker Desktop. If it's minikube, please let me know which driver you're using (if it's the default one, you should see it in the output of minikube start).

@vfarcic I just ran minikube start and saw that it is using the docker driver.

Author

That (minikube with Docker) is the combination I've heard others complaining about. The workaround is to add the --force argument, at least until the "real" fix is done (if ever).
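Concretely, the workaround means passing --force to the kaniko executor in the build template. A sketch of what that might look like (the template structure here is an assumption based on the tutorial's ClusterWorkflowTemplate, not its actual contents):

```yaml
# Sketch: add --force to kaniko's arguments in the build template
- name: build-kaniko-git
  container:
    image: gcr.io/kaniko-project/executor
    args:
    - --context={{inputs.parameters.app_repo}}
    - --destination={{inputs.parameters.container_image}}:{{inputs.parameters.container_tag}}
    - --force   # skip the "kaniko should only be run inside of a container" check
```

The --force flag disables kaniko's container-environment detection, which misfires on some setups such as minikube with the Docker driver.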

Independently of that issue, I strongly recommend switching to Rancher Desktop as a local Kubernetes cluster.

@vfarcic Would you please help me fix this issue? error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"

Author

Where did you observe that error?

@vfarcic I saw it in the logs of the workflow pod that is created by the sensors.
❯ kubectl get workflow -n argo
NAME              STATUS      AGE
node-test-4cfkz   Succeeded   17h

kubectl logs workflow/node-test-4cfkz -n argo
error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"

Author

I haven't experienced that error myself. I'll do my best to reproduce it and, if I do, figure out what to do. However, I'm traveling with limited available time so I can't confirm when I'll get to it.

@vfarcic When I try to get the logs of the CRD, I see a similar error. Would you please check?

Author

Sorry for not responding earlier. I was (and still am) traveling with little to no free time. I'll do my best to double-check it soon.

@vfarcic Thanks for the response. Enjoy the trip!

