openSUSE:ALP/Workgroups/Git-Packaging-Workflow/WorkPackages/TriggerAndScheduling


Python-based webserver reacting to webhooks

https://github.com/dcermak/scm-staging took the route of running a Python webserver that handles webhooks. Currently no additional scheduling is necessary, as it simply reacts to each webhook request as it comes in.
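
For illustration, this is a minimal sketch of such a webhook-reacting webserver using only the Python standard library. It is not the scm-staging implementation; the port, the endpoint behaviour and the "submit to OBS" reaction are made up, only the X-Gitea-Event header corresponds to what Gitea sends with its webhook deliveries.

  # Minimal sketch of a webhook-driven service; NOT the scm-staging code.
  # The reaction ("would submit to OBS here") is a placeholder.
  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class WebhookHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          length = int(self.headers.get("Content-Length", 0))
          payload = json.loads(self.rfile.read(length) or b"{}")
          # React directly to each delivery; no queue or scheduler in between.
          if self.headers.get("X-Gitea-Event") == "pull_request":
              repo = payload.get("repository", {}).get("full_name", "?")
              print(f"pull request event for {repo}: would submit to OBS here")
          self.send_response(204)
          self.end_headers()

  if __name__ == "__main__":
      HTTPServer(("", 8000), WebhookHandler).serve_forever()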

Woodpecker

Another approach would be using a CI engine:

The focus of these tests was on CI engines that make using containers for jobs easy, as that would align with ALP. They were tested on Kubernetes.

Woodpecker didn't work on the first try in Kubernetes because it uses dind. There is now native Kubernetes support; it might be worth testing that. The advantage of Woodpecker over Concourse might be that it supports Gitea out of the box. The default integration via OAuth might be a hurdle, as the scopes common on forges lead to requesting too many permissions.

Concourse

Concourse worked well from its Helm chart. Using containers from registry.opensuse.org for jobs worked as expected. Each step in a job executes a command in a container, which gets scheduled onto one of a number of worker pods in Kubernetes (there are also non-Kubernetes worker implementations). It supports webhooks, but is architected so that it can notice when a webhook request was dropped, or not rely on webhooks at all, by polling and picking up triggers from e.g. the state of a Git repository. Job runs can also be triggered manually, including with local changes via the CLI, which makes testing work in progress easier. It focuses more on extensibility than on batteries included, where Woodpecker takes the opposite route. All extensibility is via container images that receive/send JSON on stdin/stdout, see https://concourse-ci.org/implementing-resource-types.html . One extension triggers on Git commits or tags. Triggering from merge requests wasn't tested, as nobody has finished an extension for Gitea merge requests yet (there are ones for merge requests on other forges).
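
To give an idea of that interface: a resource type is a container image that ships check/in/out executables under /opt/resource, each reading a JSON request on stdin and writing a JSON reply on stdout. Below is a rough sketch of a check script in Python; the polled URL and the "ref" field are hypothetical and only stand in for whatever the resource actually watches.

  # Sketch of a Concourse resource "check" script (would be shipped as
  # /opt/resource/check inside the resource type's container image).
  # The URL in source: and the "ref" version key are hypothetical.
  import json
  import sys
  import urllib.request

  def main():
      request = json.load(sys.stdin)   # {"source": {...}, "version": {...} or null}
      source = request["source"]

      # Hypothetical: ask some endpoint configured in the pipeline's
      # source: stanza for the newest revision identifier.
      with urllib.request.urlopen(source["url"]) as resp:
          latest = json.load(resp)["ref"]

      # Concourse reads a JSON array of version objects (string values only)
      # from stdout, ordered oldest to newest.
      json.dump([{"ref": latest}], sys.stdout)

  if __name__ == "__main__":
      main()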

I have not tried it, but running VMs might be possible in Concourse in the same way KubeVirt does it on Kubernetes, by running them in a container. Additionally, openQA can be triggered from any CI; currently people are using GitHub Actions for that.

There exists an extension for OBS, https://github.com/SUSE/open-build-service-resource , which supports triggering on changes to OBS packages and fetching their sources. AFAIK it relies on polling, as I do not think OBS has webhooks. OBS does have an event bus, but I am not sure that can easily integrate with Concourse, as it probably needs an open connection to listen for notifications.
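
As an illustration of what such polling boils down to, here is a rough sketch that watches a package revision via the OBS source listing (the same data "osc api /source/<project>/<package>" returns). Credentials, project and package names are placeholders, and a real resource would report Concourse versions instead of printing.

  # Sketch of polling OBS for new package revisions. Credentials, project
  # and package names are placeholders; the source listing's "rev"
  # attribute is used as the change indicator.
  import time
  import urllib.request
  import xml.etree.ElementTree as ET

  API = "https://api.opensuse.org"

  mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
  mgr.add_password(None, API, "myuser", "mypassword")
  opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

  def current_rev(project, package):
      # The root <directory> element of the source listing carries the
      # current revision in its "rev" attribute.
      with opener.open(f"{API}/source/{project}/{package}") as resp:
          return ET.fromstring(resp.read()).get("rev")

  last = None
  while True:
      rev = current_rev("openSUSE:Factory", "vim")
      if rev != last:
          print(f"new revision {rev}: a real resource would report this as a new version")
          last = rev
      time.sleep(300)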

TODO:

  • link to the Git repository of the configuration used.
  • set up an HTTPS certificate so others can use the test instance.
  • see about combining both approaches

GoCD

GoCD is already in use on botmaster.suse.de with jobs from https://github.com/openSUSE/openSUSE-release-tools/tree/master/gocd . The Ruby ERB templates are rendered into the YAML files due to GoCD's peculiarities with YAML. Plugins can be written by implementing Java APIs ( https://plugin-api.gocd.org/current/ ). There is no plugin for Gitea merge requests yet. There is no functionality for webhooks, however jobs can be started via specific web requests, as sketched below. Containers cannot be used as directly as in container-centric CI engines, but there is functionality to have prepared Kubernetes pods with specific containers running, called elastic agents, spawned from an elastic profile (see https://docs.gocd.org/current/gocd_on_kubernetes/importing_a_sample_workflow.html ). Jobs can then select these agents based on the elastic profile id (see https://docs.gocd.org/current/gocd_on_kubernetes/sample_pipelines_explained.html#elastic-profile-1 ). A better fit for GoCD's architecture might be a Kubernetes task plugin, but no such thing exists currently. There is currently no notification plugin for setting a Gitea status.
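
For the "started via specific web requests" part, this is roughly what triggering a pipeline run looks like against GoCD's pipeline scheduling API, if I recall the endpoint and headers correctly; the server URL, pipeline name and credentials are placeholders.

  # Sketch of starting a GoCD pipeline run over HTTP, e.g. from a forge
  # webhook receiver. Server, pipeline name and credentials are placeholders.
  import base64
  import urllib.request

  GOCD = "https://gocd.example.org/go"
  PIPELINE = "staging-bot-test"        # hypothetical pipeline name

  req = urllib.request.Request(
      f"{GOCD}/api/pipelines/{PIPELINE}/schedule",
      method="POST",
      headers={
          "Accept": "application/vnd.go.cd.v1+json",
          "X-GoCD-Confirm": "true",
          "Authorization": "Basic "
          + base64.b64encode(b"myuser:mypassword").decode(),
      },
  )

  with urllib.request.urlopen(req) as resp:
      print(resp.status, resp.read().decode())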

Tekton

Rancher-related projects might switch from Drone (which Woodpecker was forked from) to https://tekton.dev/ .