This article describes how to install IBM API Connect 2018 on a one-node Kubernetes cluster for personal/demo/PoC/MVP usage. I am going to create a VM with CentOS 7 in IBM Cloud and then deploy a one-node Kubernetes cluster and IBM API Connect 2018 on top of it. For DNS I will use a wildcard DNS service.

Provisioning of a VM in IBM Cloud

Log in to IBM Cloud, choose Services, check the Compute checkbox, and select the Virtual Server service.

Configure a VM:

  1. Choose a type for the Virtual Server. For personal usage I would consider Public or Transient. Transient is way cheaper, but IBM can delete it without notice if it needs the resources you are using.
  2. Select a location for your VM
  3. Select a profile. I would recommend nothing less than B1.16x32
  4. Select an OS. I used CentOS7-minimal-(64 bit)-HVM for this article.
  5. Change the size of the boot disk to 100GB and add a second 250GB disk, which we will use for Persistent Volumes.
  6. Configure security groups for your VM: at least allow SSH and HTTPS.

Then press Create and give it a few minutes to be provisioned. After that, configure SSH access to the VM by copying your public key there and enabling PubkeyAuthentication for the SSH daemon on the VM. Alternatively, you can log in with the password provided with your VM.
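The SSH setup above can be sketched as follows. This is a minimal sketch assuming the default CentOS sshd_config location; the ssh-copy-id line runs on your workstation and uses the YOUR_PUBLIC_IP_HERE placeholder, the rest runs on the VM:

```shell
# On your workstation: copy your public key to the VM (placeholder IP)
# ssh-copy-id root@YOUR_PUBLIC_IP_HERE

# On the VM: make sure public key authentication is enabled in sshd_config.
# SSHD_CONFIG defaults to the usual CentOS location but can be overridden.
SSHD_CONFIG="${SSHD_CONFIG:-/etc/ssh/sshd_config}"
if [ -f "$SSHD_CONFIG" ]; then
  # Uncomment (or force) 'PubkeyAuthentication yes'
  sed -i 's/^#\{0,1\}PubkeyAuthentication.*/PubkeyAuthentication yes/' "$SSHD_CONFIG"
  # Pick up the change without dropping existing sessions
  systemctl reload sshd 2>/dev/null || true
fi
```

Keep a second SSH session open while changing sshd settings so a mistake does not lock you out.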

When you log in to the newly created VM, the first thing to do is create a filesystem on the additional device and mount it to a directory.

Create a local storage folder. We will mount the filesystem there in a later step.
mkdir /root/storage

Find out your 250GB device name using the command below. In my case it was /dev/xvdc
fdisk -l

Format the /dev/xvdc device with an ext4 filesystem
mkfs.ext4 /dev/xvdc

Add it to /etc/fstab and mount it to the directory we created earlier:
echo "/dev/xvdc /root/storage ext4 defaults,relatime 0 2" >> /etc/fstab
mount -a

Increase the virtual memory map count limit
sysctl -w vm.max_map_count=1048575
echo "vm.max_map_count=1048575" >> /etc/sysctl.conf

Disable swapping
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

Configure the host locale
echo "LANG=en_US.utf-8" >> /etc/environment
echo "LC_ALL=en_US.utf-8" >> /etc/environment

Set the hostname (replace YOUR_HOSTNAME_HERE with a name of your choice)
hostnamectl set-hostname YOUR_HOSTNAME_HERE

Disable SELinux
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Reboot the VM
reboot

After it starts up, check that the disk was mounted to the /root/storage directory. The command below is expected to return something like: /dev/xvdc on /root/storage type ext4 (rw,relatime,seclabel,data=ordered)
mount | grep /root/storage

Installing and configuring Docker and Kubernetes

Configure Kubernetes Repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install Kubeadm and Docker
yum install kubeadm docker -y
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

Initialize Kubernetes master
kubeadm init --apiserver-advertise-address=YOUR_PUBLIC_IP_HERE --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set KUBECONFIG variable and create a kubectl alias

export KUBECONFIG=$HOME/.kube/config
alias k="kubectl -n apiconnect"
echo "export KUBECONFIG=$HOME/.kube/config" >> /root/.bashrc
echo 'alias k="kubectl -n apiconnect"' >> /root/.bashrc

Remove the master's taint so that pods can be scheduled on your single node
kubectl taint nodes --all node-role.kubernetes.io/master-

Deploy a pod network. I used Calico v3.13. Check which version is the newest and compatible with your Kubernetes at the moment.
kubectl apply -f https://docs.projectcalico.org/v3.13/manifests/calico.yaml

Make sure the node is Ready and all pods are up and running
kubectl get nodes
kubectl get po --all-namespaces

Installing Helm and Deploying Tiller

Download Helm v2.x. As of the time of writing, Helm 3 is not yet supported for this version of IBM API Connect.
wget https://get.helm.sh/helm-v2.16.5-linux-amd64.tar.gz
tar -zxvf helm-v2.16.5-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm help

Set variables
export NAMESPACE=apiconnect
export TILLER_NAMESPACE=apiconnect
echo "export NAMESPACE=apiconnect" >> /root/.bashrc
echo "export TILLER_NAMESPACE=apiconnect" >> /root/.bashrc

Create a Kubernetes Namespace for IBM API Connect. The IBM API Connect subsystems can be installed in separate Namespaces, but here I am going to install them all in the same Namespace.
kubectl create namespace $NAMESPACE

Deploy Tiller
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=apiconnect:default
helm init

Check that the Tiller pod is up and running
k get po | grep tiller

Installing NGINX Ingress Controller

Create a values file for Nginx Ingress Controller Helm Chart
vi nginx-ingress-values.yaml

controller:
  config:
    hsts-max-age: "31536000"
    keepalive: "32"
    log-format: '{ "@timestamp": "$time_iso8601", "@version": "1", "clientip": "$remote_addr",
      "tag": "ingress", "remote_user": "$remote_user", "bytes": $bytes_sent, "duration":
      $request_time, "status": $status, "request": "$request_uri", "urlpath": "$uri",
      "urlquery": "$args", "method": "$request_method", "referer": "$http_referer",
      "useragent": "$http_user_agent", "software": "nginx", "version": "$nginx_version",
      "host": "$host", "upstream": "$upstream_addr", "upstream-status": "$upstream_status"
      }'
    main-snippets: load_module "modules/ngx_stream_module.so"
    proxy-body-size: "0"
    proxy-buffering: "off"
    server-name-hash-bucket-size: "128"
    server-name-hash-max-size: "1024"
    server-tokens: "False"
    ssl-ciphers: HIGH:!aNULL:!MD5
    ssl-prefer-server-ciphers: "True"
    ssl-protocols: TLSv1.2
    use-http2: "true"
    worker-connections: "10240"
    worker-cpu-affinity: auto
    worker-processes: "1"
    worker-rlimit-nofile: "65536"
    worker-shutdown-timeout: 5m
  daemonset:
    useHostPort: false
  extraArgs:
    enable-ssl-passthrough: true
  hostNetwork: true
  kind: DaemonSet
  name: controller
rbac:
  create: "true"

Install the Helm chart (you can install it in another namespace if you want):
helm install stable/nginx-ingress --name ingress --values nginx-ingress-values.yaml --namespace kube-system

Validate that it was installed:
kubectl get po -n kube-system | grep ingress

Installing a Docker Registry

Create a Docker container with a Registry

docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /var/lib/registry:/var/lib/registry \
  registry:2

Check that the Registry is up and running:
docker ps | grep registry
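Beyond `docker ps`, you can also query the registry's HTTP API directly. A small sketch, assuming the registry listens on localhost:5000; if it is not reachable yet, the fallback writes an empty catalog so the chain never aborts:

```shell
# Ask the Docker Registry v2 API for its repository catalog;
# fall back to an empty catalog if the registry is not reachable.
curl -sf http://localhost:5000/v2/_catalog > /tmp/registry-catalog.json \
  || echo '{"repositories":[]}' > /tmp/registry-catalog.json
cat /tmp/registry-catalog.json
```

After the image uploads later in this article, the repositories list should contain entries such as the DataPower images.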

Configuring Dynamic provisioning of Kubernetes HostPath Volumes

As there is no storage provisioner on my VM, I will use a Deployment that dynamically creates hostPath PersistentVolumes based on my PersistentVolumeClaims. I am going to use this hostpath-provisioner:

Prepare required ClusterRole and ClusterRoleBinding file for the provisioner

vi storage-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]

  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hostpath-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: default
    namespace: apiconnect

Prepare a hostpath provisioner Deployment yaml. Correct the volume paths in the file if yours differ from /root/storage.

vi hostpath-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-provisioner
  labels:
    k8s-app: hostpath-provisioner
  namespace: apiconnect
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      k8s-app: hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: hostpath-provisioner
    spec:
      containers:
        - name: hostpath-provisioner
          image: mazdermind/hostpath-provisioner:latest
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /root/storage
            - name: PV_RECLAIM_POLICY
              value: Retain
          volumeMounts:
            - name: pv-volume
              mountPath: /root/storage
      volumes:
        - name: pv-volume
          hostPath:
            path: /root/storage

Prepare a StorageClass yaml

vi StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myblock
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hostpath

Create them all in Kubernetes
kubectl create -f storage-rbac.yaml -n apiconnect
kubectl create -f hostpath-provisioner.yaml -n apiconnect
kubectl create -f StorageClass.yaml -n apiconnect
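Before installing API Connect you can check that dynamic provisioning actually works by creating a small test claim against the myblock StorageClass. A sketch; the claim name test-claim is arbitrary, and the kubectl calls are skipped if kubectl is not on the PATH:

```shell
# Write a 1Gi test claim that uses the myblock StorageClass
cat > /tmp/test-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: apiconnect
spec:
  storageClassName: myblock
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f /tmp/test-pvc.yaml
  # The claim should reach the Bound state, and a PV directory
  # should appear under /root/storage on the node.
  kubectl get pvc test-claim -n apiconnect
  # Clean up the test claim afterwards
  kubectl delete -f /tmp/test-pvc.yaml
fi
```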

Uploading IBM API Connect images to the Docker Registry

I assume you have access to the images and the rights to use them. For this installation you need:
management-images-kubernetes_lts_v2018.4.1.10.tgz: images for the Management subsystem
analytics-images-kubernetes_lts_v2018.4.1.10.tgz: images for the Analytics subsystem
portal-images-kubernetes_lts_v2018.4.1.10.tgz: images for the Portal subsystem
an image archive for the Gateway subsystem
dpm20184110.lts.tar.gz: a DataPower Monitor image required for the Gateway
apicup-linux_lts_v2018.4.1.10: the apicup tool
toolkit-linux_lts_v2018.4.1.10.tgz: the apic command line tool

The fix pack version (the last two numbers) may differ.

Install the apicup and apic tools.
apicup is a utility shipped with IBM API Connect and required for IBM API Connect installation and configuration.
apic is a CLI client for IBM API Connect.

cp apicup-linux_lts_v2018.4.1.10 /usr/bin/
tar xvf toolkit-linux_lts_v2018.4.1.10.tgz
cp apic-slim /usr/bin/
mv /usr/bin/apicup-linux_lts_v2018.4.1.10 /usr/bin/apicup
mv /usr/bin/apic-slim /usr/bin/apic
chmod 755 /usr/bin/apicup
chmod 755 /usr/bin/apic

apicup version
apic version

Upload IBM API Connect images for Management, Analytics and Portal subsystems:
apicup registry-upload management management-images-kubernetes_lts_v2018.4.1.10.tgz localhost:5000

apicup registry-upload analytics analytics-images-kubernetes_lts_v2018.4.1.10.tgz localhost:5000

apicup registry-upload portal portal-images-kubernetes_lts_v2018.4.1.10.tgz localhost:5000

Upload the IBM API Connect Gateway images. In this case we have to set the correct tags first and then upload the images. Substitute the Gateway image archive name from your distribution:
docker load -i YOUR_GATEWAY_IMAGE_ARCHIVE_HERE.tar.gz

Check the exact name and tag of the loaded image with docker images, then tag it:
docker tag ibmcom/datapower:2018.4.1.10-318002-release-prod localhost:5000/datapower-api-gateway:2018.4.1.10-318002-release-prod

docker push localhost:5000/datapower-api-gateway:2018.4.1.10-318002-release-prod

docker load -i dpm20184110.lts.tar.gz

docker tag ibmcom/k8s-datapower-monitor:2018.4.1.10 localhost:5000/k8s-datapower-monitor:2018.4.1-1-18ca914

docker push localhost:5000/k8s-datapower-monitor:2018.4.1-1-18ca914

Busybox is also required
docker pull busybox:1.29-glibc
docker tag busybox:1.29-glibc localhost:5000/busybox:1.29-glibc
docker push localhost:5000/busybox:1.29-glibc

Configuring IBM API Connect subsystems

In this step we will prepare Helm charts for IBM API Connect using the apicup tool.

Create a directory and initialize a new installation. This creates an apiconnect-up.yaml file containing the variables for your installation.
mkdir myApic && cd myApic
apicup init

Run the following set of commands to configure your Management subsystem.

apicup subsys create mgmt management --k8s
apicup subsys set mgmt ingress-type=ingress
apicup subsys set mgmt mode=dev
apicup subsys set mgmt namespace=apiconnect
apicup subsys set mgmt registry=localhost:5000
apicup subsys set mgmt storage-class=myblock

apicup subsys set mgmt platform-api=platform.YOUR_DOMAIN_HERE
apicup subsys set mgmt api-manager-ui=apim.YOUR_DOMAIN_HERE
apicup subsys set mgmt cloud-admin-ui=cloud.YOUR_DOMAIN_HERE
apicup subsys set mgmt consumer-api=consumer.YOUR_DOMAIN_HERE

apicup subsys set mgmt cassandra-max-memory-gb=9 cassandra-cluster-size=1
apicup subsys set mgmt cassandra-volume-size-gb=50
apicup subsys set mgmt create-crd=true

apicup subsys get mgmt --validate

Run the following set of commands to configure your Analytics subsystem.

apicup subsys create analyt analytics --k8s
apicup subsys set analyt mode=dev
apicup subsys set analyt ingress-type=ingress

apicup subsys set analyt analytics-ingestion=ai.YOUR_DOMAIN_HERE
apicup subsys set analyt analytics-client=ac.YOUR_DOMAIN_HERE

apicup subsys set analyt registry=localhost:5000
apicup subsys set analyt namespace=apiconnect

apicup subsys set analyt storage-class=myblock

apicup subsys set analyt coordinating-max-memory-gb=12
apicup subsys set analyt data-max-memory-gb=8
apicup subsys set analyt data-storage-size-gb=200
apicup subsys set analyt master-max-memory-gb=8
apicup subsys set analyt master-storage-size-gb=5

apicup subsys get analyt --validate

Run the following set of commands to configure your Portal subsystem.

apicup subsys create ptl portal --k8s
apicup subsys set ptl ingress-type=ingress

apicup subsys set ptl portal-admin=padmin.YOUR_DOMAIN_HERE
apicup subsys set ptl portal-www=portal.YOUR_DOMAIN_HERE

apicup subsys set ptl registry=localhost:5000
apicup subsys set ptl namespace=apiconnect
apicup subsys set ptl storage-class=myblock

apicup subsys set ptl www-storage-size-gb=5
apicup subsys set ptl backup-storage-size-gb=5
apicup subsys set ptl db-storage-size-gb=12
apicup subsys set ptl db-logs-storage-size-gb=2
apicup subsys set ptl admin-storage-size-gb=1

apicup subsys get ptl --validate

Create a yaml extension file for the Gateway. This file is required if we want to expose the Gateway web GUI.
vi datapower-values.yaml

 # Gateway MGMT variables
 # This value should either be 'enabled' or 'disabled'. Default is disabled
 webGuiManagementState: "enabled"
 webGuiManagementPort: 9090
 # This value should either be 'enabled' or 'disabled'. Default is disabled
 gatewaySshState: "enabled"
 gatewaySshPort: 9022
 # This value should either be 'enabled' or 'disabled'. Default is disabled
 restManagementState: "enabled"
 restManagementPort: 5554

Run the following set of commands to configure your Gateway subsystem.

apicup subsys create gwy gateway --k8s
apicup subsys set gwy extra-values-file=datapower-values.yaml
apicup subsys set gwy mode=dev
apicup subsys set gwy ingress-type=ingress

apicup subsys set gwy api-gateway=gw.YOUR_DOMAIN_HERE
apicup subsys set gwy apic-gw-service=gwd.YOUR_DOMAIN_HERE

apicup subsys set gwy namespace=apiconnect
apicup subsys set gwy registry=localhost:5000

apicup subsys set gwy image-pull-policy=IfNotPresent
apicup subsys set gwy replica-count=1
apicup subsys set gwy max-cpu=4
apicup subsys set gwy max-memory-gb=12
apicup subsys set gwy storage-class=myblock
apicup subsys set gwy v5-compatibility-mode=false
apicup subsys set gwy enable-high-performance-peering=true
apicup subsys set gwy enable-tms=true
apicup subsys set gwy tms-peering-storage-size-gb=10

apicup subsys get gwy --validate

Installing IBM API Connect

Run the commands below one-by-one to install IBM API Connect
apicup subsys install mgmt --debug
apicup subsys install analyt --debug
apicup subsys install ptl --debug
apicup subsys install gwy --debug

Check that all pods are up and running
k get po

Create a service to expose the IBM DataPower Gateway (API Gateway) management port. The Gateway pod name below is from my installation; substitute the name shown by k get po:
kubectl expose pod r554d996560-dynamic-gateway-service-0 --port=9090 --target-port=9090 --type=NodePort -n apiconnect

You can find the port for the exposed Gateway pod using this command:
k get svc | grep -E 'gateway.*NodePort'
Use your public IP and a port printed by the command above to access your Gateway's web UI.

Configure an SMTP server

IBM API Connect requires a connected SMTP server. For our installation we will use fakesmtp running in a container.
mkdir /root/emails
docker run -d -p 2525:25 -v /root/emails:/var/mail munkyboy/fakesmtp
You can find all incoming emails in /root/emails. You can use any email domains and addresses you want with this server.
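You can verify the fake SMTP server with curl, which can speak SMTP. A minimal sketch with made-up addresses; if the container is not reachable, the send step just reports it:

```shell
# Compose a throwaway test message (addresses are arbitrary)
cat > /tmp/testmail.txt <<'EOF'
From: noreply@example.com
To: admin@example.com
Subject: fakesmtp test

Hello from fakesmtp.
EOF

# Try to deliver it through the fakesmtp container on port 2525
curl -s --url smtp://localhost:2525 \
     --mail-from noreply@example.com \
     --mail-rcpt admin@example.com \
     --upload-file /tmp/testmail.txt \
  || echo "fakesmtp not reachable"
```

If the delivery succeeds, the message shows up as a file under /root/emails.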

Configure IBM API Connect topology

Open the Cloud Manager UI.
For the first login use the default admin/7iron-hide credentials. You will be prompted to change the password right after that.

Go to Resources -> Notifications and configure SMTP. Use any password you want for SMTP authentication.

Set this SMTP configuration as the Notification mechanism for your installation.

Then configure your IBM API Connect Topology by connecting all subsystems together. Go to the Topology view and add subsystems (Register Service button) there.

Add the Analytics, Gateway, and Portal subsystems to IBM API Connect one after another.

The final configuration will look like this. Don't forget to associate your Analytics service with a Gateway Service.

At this point your local IBM API Connect 2018 installation is ready to use. Enjoy.