Knative eventing connectors based on Apache Camel Kamelets.
Each connector project is able to act as a source or sink for the Knative eventing broker.
The projects create container images that get referenced in a Knative IntegrationSource or IntegrationSink custom resource.
You can use a Maven archetype to generate new connector modules in this project. The archetype gives you a good starting point for the new connector project as it generates the basic folder structure and some source files.
You need to build the Maven archetype locally before using it for the first time:
mvn install -pl :archetype-source
The command creates the Maven archetype artifact on your local machine, so you can use it in the Maven archetype:generate command:
Note: The Maven archetype archetype-source creates a new IntegrationSource connector. Please use archetype-sink to create a new IntegrationSink connector project.
mvn archetype:generate \
-DarchetypeGroupId=dev.knative.eventing \
-DarchetypeArtifactId=connector-archetype-source \
-DarchetypeVersion=1.0-SNAPSHOT \
-DoutputDirectory=integration-source
This command starts the archetype generation in interactive mode.
In interactive mode the prompt asks you to provide some more information such as the Kamelet name that you want to use in this kn-connector source.
Also, you may want to customize the Apache Camel component name that is used in the Kamelet (e.g. aws-s3 as Kamelet name and aws2-s3 as Apache Camel component name).
After the archetype:generate command has finished you will find a new project folder that is ready to be built with Maven.
You can also avoid the interactive mode by specifying all input parameters:
mvn archetype:generate \
-DarchetypeGroupId=dev.knative.eventing \
-DarchetypeArtifactId=connector-archetype-source \
-DarchetypeVersion=1.0-SNAPSHOT \
-DartifactId=aws-s3-source \
-Dkamelet-name=aws-s3 \
-Dcomponent-name=aws2-s3 \
-DoutputDirectory=integration-source
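Maven places the new project in a folder named after the artifactId inside the given output directory. Assuming the parameters above, you can then build the generated project right away:

cd integration-source/aws-s3-source
mvn package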
Each connector requires a set of Camel Quarkus dependencies in the Maven POM.
After the first build of the new Maven project you can review a generated list of dependencies in target/metadata/dependencies.json.
This generated info gives you a list of dependencies that you should consider adding to the Maven project POM.
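For instance, for an aws-s3 based source the list typically points to the matching Camel Quarkus extension. A hedged sketch of such a POM entry (the version is usually managed by the Camel Quarkus BOM) could look like this:

<dependency>
  <groupId>org.apache.camel.quarkus</groupId>
  <artifactId>camel-quarkus-aws2-s3</artifactId>
</dependency>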
Also, the metadata folder includes the information about supported Kamelet properties (target/metadata/properties.json).
You may review this list of properties and decide which of these the IntegrationSource CRD should include in its spec.
The generated info includes all property specifications such as enumeration constraints or formats.
Note: Each connector project generates an AsciiDoc file in its project root folder (properties.adoc) which holds a list of all supported Kamelet properties. This includes the environment variable name that you may set as part of a connector deployment in order to overwrite one of these Kamelet properties.
Note: The connector project contains a generated properties specification (properties.yaml) that you can use as a base for adding the connector to the IntegrationSource CRD specification as a typed API.
The generation of the info and spec files is done automatically when the connector Maven project is built.
Each kn-connector project uses Quarkus in combination with Apache Camel, Kamelets and Maven as a build tool.
You can use the following Maven commands to build the container image.
./mvnw package -Dquarkus.container-image.build=true
The container image uses the project version and image group defined in the Maven POM.
You can customize the image group with -Dquarkus.container-image.group=my-group.
By default, the container image looks like this:
quay.io/openshift-knative/{{source-name}}:1.0-SNAPSHOT
The project leverages the Quarkus Kubernetes and container image extensions so you can use Quarkus properties and configurations to customize the resulting container image.
See these extensions for details:
- https://quarkus.io/guides/deploying-to-kubernetes
- https://quarkus.io/guides/container-image
./mvnw package -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true
Pushes the image to the image registry defined in the Maven POM. The default registry is quay.io.
You can customize the registry with -Dquarkus.container-image.registry=localhost:5001 (e.g. when connecting to a local Kind cluster).
In case you want to connect to a local cluster like Kind or Minikube you may also need to set -Dquarkus.container-image.insecure=true.
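For example, to build and push the image to a local Kind registry you might combine these options:

./mvnw package -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true -Dquarkus.container-image.registry=localhost:5001 -Dquarkus.container-image.insecure=true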
The build produces a Kubernetes manifest in target/kubernetes/kubernetes.yml.
This manifest holds all resources required to run the application on your Kubernetes cluster.
You can customize the Kubernetes resources in src/main/kubernetes/kubernetes.yml.
This file is used as a basis and Quarkus generates the final manifest in target/kubernetes/kubernetes.yml during the build.
You can deploy the application to Kubernetes with:
./mvnw package -Dquarkus.kubernetes.deploy=true
This deploys the application to the current Kubernetes cluster that you are connected to (e.g. via kubectl config set-context --current and kubectl config view --minify).
You may change the target namespace with -Dquarkus.kubernetes.namespace=my-namespace.
Each Kamelet defines a set of properties. The user is able to customize these properties when running a connector deployment.
You can customize the properties via environment variables on the deployment. The environment variables that overwrite properties on the Kamelet source follow this naming convention:
- CAMEL_KAMELET_{{KAMELET_NAME}}_{{PROPERTY_NAME}}={{PROPERTY_VALUE}}
The Kamelet name part represents the name of the Kamelet as defined in the Kamelet catalog.
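For example, the bucketNameOrArn property of the aws-s3-source Kamelet translates into the following environment variable:

CAMEL_KAMELET_AWS_S3_SOURCE_BUCKETNAMEORARN=arn:aws:s3:::mybucket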
The environment variables may be set as part of the Kubernetes deployment or as an alternative on the Knative ContainerSource:
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: kamelet-source
  namespace: knative-samples
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/aws-s3-source:1.0
          name: aws-s3-source
          env:
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_BUCKETNAMEORARN
              value: "arn:aws:s3:::mybucket"
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_REGION
              value: "eu-north-1"
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
You can also set the environment variable on the running deployment:
kubectl set env deployment/{{source-name}} CAMEL_KAMELET_TIMER_SOURCE_MESSAGE="I updated it..."
You may also mount a configmap/secret to overwrite Kamelet properties with values from the configmap/secret resource.
As the Kamelet properties are configured via environment variables on the ContainerSource you can also use values referencing a configmap or secret.
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: kamelet-source
  namespace: knative-samples
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/aws-s3-source:1.0
          name: aws-s3-source
          env:
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_BUCKETNAMEORARN
              value: "arn:aws:s3:::mybucket"
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_REGION
              value: "eu-north-1"
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_ACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: aws.s3.accessKey
            - name: CAMEL_KAMELET_AWS_S3_SOURCE_SECRETKEY
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: aws.s3.secretKey
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
The example above references a secret called my-secret and loads the keys aws.s3.accessKey and aws.s3.secretKey.
You can also load and reference the values of the configmap/secret in the environment variables using Apache Camel property expressions:
Given a configmap named kn-source-config in Kubernetes that has two entries:
message = Knative rocks!
period = 3000
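Such a configmap can be created with kubectl, for example:

kubectl create configmap kn-source-config --from-literal=message="Knative rocks!" --from-literal=period=3000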
You can reference the values of the configmap in the environment variables like this:
- CAMEL_KAMELET_TIMER_SOURCE_MESSAGE={{configmap:kn-source-config/message}}
- CAMEL_KAMELET_TIMER_SOURCE_PERIOD={{configmap:kn-source-config/period}}
The configmap property function in Apache Camel follows this general syntax:
configmap:name/key[:defaultValue]
This means you can also set a default value in case the configmap is not present.
configmap:kn-source-config/period:5000
The configmap and secret based configuration requires adding a volume and volume mount configuration to the connector deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timer-source
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: timer-source
      app.kubernetes.io/version: 1.0-SNAPSHOT
  template:
    metadata:
      labels:
        app.kubernetes.io/name: timer-source
        app.kubernetes.io/version: 1.0-SNAPSHOT
    spec:
      containers:
        - image: localhost:5001/openshift-knative/timer-source:1.0-SNAPSHOT
          imagePullPolicy: Always
          name: timer-source
          env:
            - name: CAMEL_KAMELET_TIMER_SOURCE_MESSAGE
              value: "{{configmap:kn-source-config/message}}"
            - name: CAMEL_KAMELET_TIMER_SOURCE_PERIOD
              value: "{{configmap:kn-source-config/period:1000}}"
          volumeMounts:
            - mountPath: /etc/camel/conf.d/_configmaps/kn-source-config
              name: kn-source-config
              readOnly: true
      volumes:
        - name: kn-source-config
          configMap:
            name: my-timer-source-config
Camel is able to resolve the configmap mount path given in the volume mount.
The mount path is configurable via application.properties in the connector project:
- camel.kubernetes-config.mount-path-configmaps=/etc/camel/conf.d/_configmaps/kn-source-config
- camel.kubernetes-config.mount-path-secrets=/etc/camel/conf.d/_secrets/kn-source-config
The mount path configured on the Kubernetes deployment should match the configuration in the application.properties.
Instead of setting the mount paths statically in the application.properties you can also set these via environment variables on the Kubernetes deployment:
- CAMEL_K_MOUNT_PATH_CONFIGMAPS=/etc/camel/conf.d/_configmaps/kn-source-config
- CAMEL_K_MOUNT_PATH_SECRETS=/etc/camel/conf.d/_secrets/kn-source-config
The same mechanism applies to mounting and configuring Kubernetes secrets. The syntax for referencing a secret value via Apache Camel property function is as follows:
secret:name/key[:defaultValue]
This means you can overwrite Kamelet properties with the values from the secret like this:
- CAMEL_KAMELET_TIMER_SOURCE_MESSAGE=secret:kn-source-config/msg
- CAMEL_KAMELET_TIMER_SOURCE_PERIOD=secret:kn-source-config/period
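The referenced secret could be created with kubectl, for example:

kubectl create secret generic kn-source-config --from-literal=msg="Knative rocks!" --from-literal=period=3000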
Each connector produces/consumes events in CloudEvent data format. The connector uses a set of default values for the CloudEvent attributes:
- ce-type: dev.knative.eventing.{{source-type}}
- ce-source: dev.knative.eventing.{{source-name}}
- ce-subject: {{source-name}}
You can customize the CloudEvent attributes by setting environment variables on the deployment:
- KN_CONNECTOR_CE_OVERRIDE_TYPE=value
- KN_CONNECTOR_CE_OVERRIDE_SOURCE=value
- KN_CONNECTOR_CE_OVERRIDE_SUBJECT=value
You can set the CE_OVERRIDE attributes on a running deployment:
kubectl set env deployment/{{source-name}} KN_CONNECTOR_CE_OVERRIDE_TYPE=custom-type
You may also use the SinkBinding K_CE_OVERRIDES environment variable set on the deployment.
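As a sketch, a SinkBinding bound to the connector deployment injects K_CE_OVERRIDES for you; the deployment name and the extension attribute below are illustrative:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: timer-source-binding
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: timer-source
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  ceOverrides:
    extensions:
      # added or overridden as a CloudEvent extension attribute on outbound events
      priority: high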
Each connector sink consumes events in CloudEvent data format. By default, the connector receives all events on the Knative broker.
You may want to specify filters on the CloudEvent attributes so that the connector selectively consumes events from the broker. Just configure the Knative trigger to filter based on attributes:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  annotations:
    eventing.knative.dev/creator: kn-connectors
  labels:
    eventing.knative.dev/connector: log-sink
    eventing.knative.dev/broker: default
  name: log-sink
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.eventing.timer
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: log-sink
In this example the trigger filters the events by their type (ce-type=dev.knative.eventing.timer).
Knative brokers may use TLS encrypted transport options as described in https://knative.dev/docs/eventing/features/transport-encryption/#overview.
This means that event producers need to use proper SSL authentication to connect to HTTPS Knative broker endpoints with cluster-internal CA certificates.
The IntegrationSource may use a volume mount with the cluster-internal CA certificates being injected.
The integration source needs to enable the SSL client via environment variables and set the path to the injected CA certs and PEM files:
- CAMEL_KNATIVE_CLIENT_SSL_ENABLED=true
- CAMEL_KNATIVE_CLIENT_SSL_KEY_PATH=/knative-custom-certs/knative-eventing-bundle.pem
This enables the SSL options on the HTTP client that connects to the broker endpoint. The SSL client support provides these environment variables:
| EnvVar | Description |
|---|---|
| CAMEL_KNATIVE_CLIENT_SSL_ENABLED | Enable/disable SSL options on the HTTP client. Default value is false. |
| CAMEL_KNATIVE_CLIENT_SSL_VERIFY_HOSTNAME | Enable/disable hostname verification. |
| CAMEL_KNATIVE_CLIENT_SSL_KEY_PATH | Path to the key store options configuring a list of private key and its certificate based on Privacy-enhanced Electronic Email (PEM) files. |
| CAMEL_KNATIVE_CLIENT_SSL_KEY_CERT_PATH | Paths to client key CA certificates. Supports multiple paths as comma delimited String value. |
| CAMEL_KNATIVE_CLIENT_SSL_KEYSTORE_PATH | Java keystore (.jks) or (.p12) path as an alternative to using PEM files. |
| CAMEL_KNATIVE_CLIENT_SSL_KEYSTORE_PASSWORD | Keystore password. Value can be set via secretKeyRef. |
| CAMEL_KNATIVE_CLIENT_SSL_TRUST_CERT_PATH | Paths to trust CA certificates. Supports multiple paths as comma delimited String value. |
| CAMEL_KNATIVE_CLIENT_SSL_TRUSTSTORE_PATH | Java truststore (.jks) or (.p12) path as an alternative to using PEM files. |
| CAMEL_KNATIVE_CLIENT_SSL_TRUSTSTORE_PASSWORD | Truststore password. Value can be set via secretKeyRef. |
As you can see the SSL client support provides multiple ways to configure keystore and truststore options. It is recommended to set keystore/truststore passwords via secretKeyRef on the IntegrationSource spec. When no truststore configuration is given the SSL client support defaults to trust-all options.
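A hedged example of wiring the truststore password from a Kubernetes secret via secretKeyRef (the secret name and key are illustrative):

env:
  - name: CAMEL_KNATIVE_CLIENT_SSL_TRUSTSTORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-truststore-secret
        key: truststore.password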
The Knative broker may require a client to use proper OIDC (OpenID Connect) tokens for authorization. This means that event producers need to add an authorization header to events sent to the broker.
The IntegrationSource can use a volume mount with the OIDC token being injected via ConfigMap or Secret.
You need to enable the OIDC support on the integration and set the path to the OIDC token. You can do this via environment variables:
- CAMEL_KNATIVE_CLIENT_OIDC_ENABLED=true
- CAMEL_KNATIVE_CLIENT_OIDC_TOKEN_PATH=/oidc/token
This enables the OIDC options on the Knative HTTP client that connects to the broker endpoint. The OIDC client support provides these environment variables:
| EnvVar | Description |
|---|---|
| CAMEL_KNATIVE_CLIENT_OIDC_ENABLED | Enable/disable OIDC options on the HTTP client. Default value is false. |
| CAMEL_KNATIVE_CLIENT_OIDC_TOKEN_PATH | Path to the OIDC token. |
| CAMEL_KNATIVE_CLIENT_OIDC_RENEW_TOKENS_ON_FORBIDDEN | Enable/disable automatic token renewal when the client receives a forbidden response from the broker. Default is disabled (=false). |
| CAMEL_KNATIVE_CLIENT_OIDC_CACHE_TOKENS | Enable/disable token caching. When enabled the token is retrieved once and cached as long as the broker does not respond with a forbidden response. Default is disabled (=false). |
The OIDC tokens may expire and get renewed by Knative eventing. The renewal means that the volume mount is updated with the new token automatically.
In order to refresh the token the Camel Knative client must read the token again.
The Knative client options support token renewal on a forbidden response from the Knative broker.
Once the client has received the forbidden response it automatically reloads the token from the volume mount to perform the renewal.
As an alternative you may disable the token cache on the client so that the token is always read from the volume mount for each request.
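For illustration, the token could be provided through a secret volume so that it shows up at the configured token path (/oidc/token); the secret name and key below are assumptions:

volumeMounts:
  - mountPath: /oidc
    name: oidc-token
    readOnly: true
volumes:
  - name: oidc-token
    secret:
      secretName: my-oidc-token
      items:
        - key: token
          path: token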
The required Camel dependencies need to be added to the Maven POM before building and deploying. You can use one of the Kamelets available in the Kamelet catalog as a source or sink in this connector.
Typically, the Kamelet is backed by a Camel Quarkus extension component dependency that needs to be added to the Maven POM. The Kamelets in use may list additional dependencies that need to be included in the Maven POM.
Creating a new kn-connector project is very straightforward. You may copy one of the sample projects and adjust the reference to the Kamelets.
Also, you can use the Camel JBang kubernetes export functionality to generate a Maven project from a given Pipe YAML file.
camel kubernetes export my-pipe.yaml --runtime quarkus --dir target
This generates a Maven project that you can use as a starting point for the kn-connector project.
The connector is able to reference all Kamelets that are part of the default Kamelet catalog.
In case you want to use a custom Kamelet, place the *.kamelet.yaml file into src/main/resources/kamelets.
The Kamelet becomes part of the built container image, and you can reference the Kamelet in the Pipe YAML file as a source or sink.
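A minimal sketch of such a custom Kamelet, here a hypothetical my-timer-source placed in src/main/resources/kamelets/my-timer-source.kamelet.yaml:

apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: my-timer-source
  labels:
    camel.apache.org/kamelet.type: source
spec:
  definition:
    title: My Timer Source
    type: object
    properties:
      message:
        title: Message
        description: The message to emit periodically
        type: string
  template:
    from:
      uri: timer:tick
      parameters:
        period: 5000
      steps:
        # set the Kamelet property as the message body and forward to the bound sink
        - setBody:
            constant: "{{message}}"
        - to: kamelet:sink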
For more information about Apache Camel Kamelets and their individual properties see https://camel.apache.org/camel-kamelets/.
For a more detailed description of all container image configuration options please refer to the Quarkus Kubernetes extension and the container image guides linked above.