Contributing to Camel K

We love contributions!

The project is written in Go and contains some parts written in Java for the integration runtime. Camel K is built on top of Kubernetes through Custom Resource Definitions. The Operator SDK is used to manage the lifecycle of those custom resources.

Requirements

In order to build the project, you need to comply with the following requirements:

  • Go version 1.13+: needed to compile and test the project. Refer to the Go website for installation instructions.

  • Operator SDK v0.17.1+: used to build the operator and the Docker images. Instructions are on the Operator SDK website (binary downloads are available on the releases page).

  • GNU Make: used to define composite build actions. This should already be installed or available as a package if you have a good OS (https://www.gnu.org/software/make/).

The Camel K Java runtime (camel-k-runtime) requires:

  • Java 11: needed for compilation

  • Maven: needed for building
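
You can quickly verify that the required tools are available by checking their versions:

go version
operator-sdk version
make --version
java -version
mvn --version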

Running checks

Checks rely on golangci-lint being installed; to install it, follow the Local Installation instructions.

You can run the checks via make lint, or you can install a Git pre-commit hook so the checks run via pre-commit. In the latter case, after installing pre-commit, make sure to install the hooks by running:

$ pre-commit install
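
To run the checks manually at any time:

make lint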

Checking Out the Sources

You can create a fork of this project on GitHub, then clone your fork with the git command-line tool.
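
For example (your-username below is a placeholder for your GitHub account):

# your-username is a placeholder, not a real account
git clone https://github.com/your-username/camel-k.git
cd camel-k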

Structure

This is a high-level overview of the project structure:

  • /build: contains the Docker and Maven build configuration.

  • /cmd: contains the entry points (the main functions) for the camel-k binary (the manager) and the kamel client tool.

  • /deploy: contains Kubernetes resource files used by the kamel client during installation. The /deploy/resources.go file is kept in sync with the content of the directory (make build-embed-resources) so that the resources can be used from within the Go code.

  • /docs: contains the documentation website, based on Antora.

  • /e2e: contains integration tests that ensure the software interacts correctly with Kubernetes and OpenShift.

  • /examples: various examples of Camel K usage.

  • /pkg: where the Go code resides, divided into multiple subpackages.

  • /script: contains scripts used during make operations for building the project.

Building

To build the whole project, run:

make

This executes a full build of the Go code. If you need to build the components separately, you can execute:

  • make build-operator: to build the operator binary only.

  • make build-kamel: to build the kamel client tool only.

  • make build-runtime: to build the Java-based runtime code only.

After a successful build, if you’re connected to a Docker daemon, you can build the operator Docker image by running:

make images

The above command produces a camel-k image with the name docker.io/apache/camel-k. Sometimes you might need to produce camel-k images to be pushed to a custom repository, e.g. docker.io/myrepo/camel-k; to do that, you can pass the imgDestination parameter to make as shown below:

make imgDestination='docker.io/myrepo' images

Push runtime snapshot

For the Java bits in the runtime you can push snapshot JARs to the Apache Snapshots repository with:

  • make push-runtime-snapshot: to push the Java-based runtime snapshot JARs.

Don’t forget to run make build-runtime before pushing the snapshot.

Your settings.xml needs to contain the correct ASF credentials to push:

    <server>
      <id>apache.snapshots.https</id>
      <username>username</username>
      <password>password</password>
    </server>
    <server>
      <id>apache.releases.https</id>
      <username>username</username>
      <password>password</password>
    </server>

Testing

Unit tests are executed automatically as part of the build. They use the standard Go testing framework.
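
You can also run them directly with the standard Go tooling, for example:

go test ./pkg/...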

Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care.

The convention used in this repo is to name unit tests xxx_test.go and integration tests yyy_integration_test.go. Integration tests all live in the /e2e directory.

Since both names end with _test.go, both would be executed by Go during the build, so you need to add a special build tag to mark integration tests. An integration test should start with the following line:

// +build integration

Look into the /e2e directory for examples of integration tests.
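
A minimal sketch of such a test file (the package and test names here are illustrative, not taken from the repository):

// +build integration

package e2e

import "testing"

// TestSomething compiles only when the "integration" build tag is set,
// so it is skipped by plain go test runs during the build.
func TestSomething(t *testing.T) {
	t.Log("cluster-dependent assertions would go here")
}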

Before running an integration test, you need to be connected to a Kubernetes/OpenShift namespace. After you log in to your cluster, you can run the following command to execute all integration tests:

make test-integration
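
This relies on the integration build tag described above, so a rough manual equivalent (assuming you are already logged in to a cluster) looks like:

go test -tags=integration ./e2e/...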

Running

If you want to install everything you have in your source code and see it running on Kubernetes, follow the instructions below for your cluster type.

For Red Hat CodeReady Containers (CRC)

  • You need to have Docker installed and running (or connected to a Docker daemon)

  • You need to set up the Docker daemon to trust CRC’s insecure Docker registry, which is exposed by default through the route default-route-openshift-image-registry.apps-crc.testing. One way of doing that is to instruct the Docker daemon to trust the certificate:

    • oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator: to extract the certificate

    • sudo cp tls.crt /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing/ca.crt: to copy the certificate for Docker daemon to trust

    • docker login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing: to test that the certificate is trusted

  • Run make install-crc: to build the project and install it in the current namespace on CRC

  • You can specify a different namespace with make install-crc project=myawesomeproject

  • To uninstall Camel K, run kamel uninstall --all --olm=false

These commands assume you have a running CRC instance and are logged in correctly.

For Minishift

  • Run make install-minishift (or just make install): to build the project and install it in the current namespace on Minishift

  • You can specify a different namespace with make install-minishift project=myawesomeproject

This command assumes you have a running Minishift instance.

For Minikube

  • Run make install-minikube: to build the project and install it in the current namespace on Minikube

This command assumes you have a running Minikube instance.

Use

Now you can play with Camel K:

./kamel run examples/Sample.java

To add additional dependencies to your routes:

./kamel run -d camel:dns examples/dns.js

Debugging and Running from IDE

Sometimes it’s useful to debug the code from the IDE when troubleshooting.

Debugging the kamel binary

It should be straightforward: just execute the /cmd/kamel/main.go file from the IDE (e.g. GoLand) in debug mode.

Debugging the operator

It is a bit more complex (but not so much).

You are going to run the operator code outside OpenShift in your IDE, so first of all you need to stop the operator running inside the cluster:

# use kubectl in plain Kubernetes
oc scale deployment/camel-k-operator --replicas 0

When you’re done and have updated the operator image, you can scale the operator back up:
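
oc scale deployment/camel-k-operator --replicas 1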

You can set up the IDE (e.g. GoLand) to execute the /cmd/manager/main.go file in debug mode, with operator as the argument.

When configuring the IDE task, make sure to add all required environment variables in the IDE task configuration screen:

  • Set the KUBERNETES_CONFIG environment variable to point to your Kubernetes configuration file (usually <homedir>/.kube/config).

  • Set the WATCH_NAMESPACE environment variable to a Kubernetes namespace you have access to.

  • Set the OPERATOR_NAME environment variable to camel-k.
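
If you prefer launching the operator from a plain terminal instead of the IDE, an equivalent sketch (the namespace value is illustrative) is:

# the WATCH_NAMESPACE value is illustrative; use a namespace you can access
export KUBERNETES_CONFIG=$HOME/.kube/config
export WATCH_NAMESPACE=myproject
export OPERATOR_NAME=camel-k
go run ./cmd/manager operator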

After you set up the IDE task (with Java 11+ selected as the default JDK), you can run and debug the operator process.

The operator can be fully debugged in Minishift, because it uses OpenShift S2I binary builds under the hood. The build phase cannot currently be debugged in Minikube, because the Kaniko builder requires that the operator and the publisher pod share a common persistent volume.