Quick notes on setting up an OpenShift cluster in CloudForms

Just some quick notes on how to set up an OpenShift cluster in CloudForms.

Versions

[root@openshift-master ~]# oadm version
oadm v3.1.0.4-16-g112fcc4
kubernetes v1.1.0-origin-1107-g4c8e6f4
CF version: nightly build, Aug 2016

Openshift API

(mainly from https://access.redhat.com/webassets/avalon/d/Red_Hat_CloudForms-4.0-Managing_Providers-en-US/Red_Hat_CloudForms-4.0-Managing_Providers-en-US.pdf)

26 July 2016: it seems that most of the setup is already done by the OpenShift Enterprise installation.
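
If everything is indeed in place, the checks detailed in the sections below can be run in one go (a quick sketch, using the object names from the CloudForms documentation):

oc get project management-infra
oc get -n management-infra sa/management-admin
oc get clusterrole management-infra-admin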

Project

Check if the project “management-infra” already exists with the “oc get projects” command:

[root@openshift-master ~]# oc get projects
NAME               DISPLAY NAME   STATUS
default                           Active
management-infra                  Active
openshift                         Active
openshift-infra                   Active

If not, create it with (not tested):

oadm new-project management-infra --description="Management Infrastructure"

Service account

Check if the service account “management-admin” already exists with the “oc get serviceaccounts” command:

[root@openshift-master ~]# oc get serviceaccounts
NAME               SECRETS   AGE
builder            3         1d
default            2         1d
deployer           2         1d
inspector-admin    3         1d
management-admin   2         1d

If not, create it with (not tested):

$ cat ServiceAccountIntegrationCloudForms.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "management-admin"
  }
}
$ oc create -f ServiceAccountIntegrationCloudForms.json -n management-infra
serviceaccounts/management-admin

Cluster Role

Check if the cluster role “management-infra-admin” already exists with the “oc get ClusterRole” command:

[root@openshift-master ~]# oc get ClusterRole | grep management
management-infra-admin

If not, create it with (not tested):

$ cat ClusterRoleIntegrationCloudForms.json
{
    "kind": "ClusterRole",
    "apiVersion": "v1",
    "metadata": {
        "name": "management-infra-admin",
        "creationTimestamp": null
    },
    "rules": [
        {
            "verbs": [
                "*"
            ],
            "attributeRestrictions": null,
            "apiGroups": null,
            "resources": [
                "pods/proxy"
            ]
        }
    ]
}
$ oc create -f ClusterRoleIntegrationCloudForms.json

Policies

Create the following policies to give enough permissions to your service account:

oadm policy add-role-to-user -n management-infra admin -z management-admin
oadm policy add-role-to-user -n management-infra management-infra-admin -z management-admin
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:management-infra:management-admin
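
To sanity check the bindings, “oc policy who-can” should now list the service account among the identities allowed to read pods in the project (a sketch, assuming the who-can subcommand is available on this oc build):

oc policy who-can get pods -n management-infra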

Token name:

[root@openshift-master ~]# oc get -n management-infra sa/management-admin --template='{{range .secrets}}{{printf "%s\n" .name}}{{end}}'
management-admin-token-wbj84
management-admin-dockercfg-0sgjy

Token

[root@openshift-master ~]# oc get -n management-infra secrets management-admin-token-wbj84 --template='{{.data.token}}' | base64 -d
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZX..............ZQBxIaWooQ_kwDsmJNcZJx7DkraoOdbgcmc5W2JYXW-IySxAr5wyVZv5dVP406w
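
The two steps above can also be combined into a single command (a sketch, assuming the token secret is the first one listed on the service account, as in the output above):

oc get -n management-infra secrets $(oc get -n management-infra sa/management-admin --template='{{(index .secrets 0).name}}') --template='{{.data.token}}' | base64 -d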

Then use this token in the CloudForms UI, in the default endpoint of the container provider setup.
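
Before pasting it into the UI, the token can be tested directly against the OpenShift API; with the cluster-reader role it should be able to list nodes (a sketch: reuse the token secret name from the step above, and adapt the master hostname and the default 8443 API port to your environment):

TOKEN=$(oc get -n management-infra secrets management-admin-token-wbj84 --template='{{.data.token}}' | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" https://openshift-master.example.com:8443/api/v1/nodes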

Hawkular