DNS challenge for Let’s Encrypt SSL certificates

Last week I had to generate an SSL certificate for a domain whose web server sits on a corporate network. The web server has outgoing internet access but cannot be reached from the Internet. I was not sure it was possible to generate a certificate with Let’s Encrypt in this case, since in my previous experience the web server had to be reachable from the Internet to answer the Let’s Encrypt challenge (http://djynet.net/?p=821).

Luckily I was wrong 😉 It is indeed possible to prove to Let’s Encrypt that you own the domain with a DNS challenge! Here are my notes on how to do it.

Download the client with:

wget https://dl.eff.org/certbot-auto
chmod a+x ./certbot-auto

Run the client in manual mode with the DNS challenge and wait for the client to provide you with the challenge:

[root@vps99754 ~]# ./certbot-auto certonly --manual --preferred-challenges dns --email <your email> -d <the domain>

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for <the domain>

NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're okay with that.

Are you OK with your IP being logged?
(Y)es/(N)o: Y

Please deploy a DNS TXT record under the name
_acme-challenge.<the domain> with the following value:

Once this is deployed,
Press Enter to Continue

At this point you just need to update your DNS with the entry provided, as shown in the following picture, and press Enter (maybe wait a few seconds after you have done the update if, like me, you use a web UI to update your DNS provider).
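Before pressing Enter, it is worth checking that the record is actually visible. Here is a small sketch (the helper name is mine; it assumes dig from bind-utils is installed):

```shell
# check_txt: hypothetical helper that checks the ACME challenge TXT record.
# $1 is the domain, $2 the value certbot asked you to publish.
check_txt() {
    dig +short TXT "_acme-challenge.$1" | tr -d '"' | grep -qx "$2"
}

# e.g. check_txt example.com "<challenge value>" returns 0 once the record is live
```

Run it in a loop with a short sleep if your DNS provider is slow to publish changes.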

Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0000_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0000_csr-certbot.pem

 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/<the domain>/fullchain.pem. Your cert will
   expire on 2017-07-23. To obtain a new or tweaked version of this
   certificate in the future, simply run certbot-auto again. To
   non-interactively renew *all* of your certificates, run
   "certbot-auto renew"
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

All set 😉 Pretty easy and a very nice feature to validate a web server not reachable from the Internet (as long as you have access to its DNS zone and that DNS server is reachable from the Internet).

Quick note on Angular UI templates

I was recently looking for a dashboard framework to be used with a MEAN stack. Here are my notes:

GitHub: https://github.com/angular-dashboard-framework/angular-dashboard-framework
Note: No update in 2017

GitHub: https://github.com/start-angular/sb-admin-angular
Note: dead, no update in 2017, Port to angular of SB-admin2

GitHub: https://github.com/blackrockdigital/startbootstrap-sb-admin-2/
Note: not angular

GitHub: https://github.com/start-angular/ani-theme
Note: the Angular version seems to be paid

Site: https://almsaeedstudio.com/themes/AdminLTE/index2.html
Note: non-Angular template

Site: https://www.patternfly.org/
Note: Made by RH

GitHub: https://github.com/akveo/ng2-admin
Note: Not really UI. Package with NodeJS and not working with Express

Monarch, Remark, Slant, Fuse, Clip-Two, Make, Materil, Materia, Materialism, Maverick, Clean UI, Urban, Piluku, Avenxo, xenon, Angle, Metronic, square, slim, flatify, Triangular, ANGULR
Note: Not free

I did not find one that I liked, so I decided to have a look at lower-level frameworks to design a UI myself:

GitHub: https://circlingthesun.github.io/angular-foundation-6/
Note: Based on Foundation, very similar to AngularUI

Site: http://ionicframework.com/
Note: More mobile oriented

Site: http://mobileangularui.com/
Note: More mobile oriented

GitHub: https://github.com/valor-software/ng2-bootstrap
Note: Same as AngularUI. Also based on bootstrap

Site: https://semantic-ui.com/
Note: Similar to AngularUI and Foundation UI but based on another framework.


GitHub: https://github.com/uoziod/suave-ui
Note: seems very very light

PrimeNG

Triangle Table

I wanted to do a prototype of the new table design I have had in mind for some time. That was a good occasion to try a new CAD tool: Onshape. Onshape is an online platform for CAD with a free plan for hobbyists that includes all the functionalities. The only limitation I see is that you cannot make private designs (all your designs are visible to anybody online). I have tried various other free CAD tools in the past (and mainly used OpenSCAD) but Onshape offers many more functionalities than all the others. You should definitely give it a try: https://www.onshape.com/

It takes some time to get familiar with Onshape but they have a great community and some good tutorials on YouTube. Most of my time was spent understanding the mate connector, construction mode (very useful) and extruding with an angle. Here is the actual version

Then I decided to print the 2 tables and use some wooden sticks for the feet, and here is the result

Now I need to figure out how to do it real size. The real difficulty here is the positioning of the feet on the 2 tables, which requires precise angles.

I may design a PLA piece for this joint and keep wood for all the rest of the table. It was a good opportunity for me to test Onshape and I would definitely recommend it to anybody who does CAD.

Raspberry Pi and HID Omnikey 5321 CLI USB

I recently came across a project where I needed to interact with some RFID tags. I wanted to retrieve the unique ID of each badge. I had absolutely no information on the badges except the string “HID iClass” written on them.

I started doing some research and found out that there are 2 main frequencies used in RFID: 125 kHz and 13.56 MHz. The iClass line seems mainly based on 13.56 MHz, so I decided to go for a reader on this frequency.

Then I found out that there are several standards on this frequency. The most used are (in order) ISO 14443A, ISO 14443B and ISO 15693. Nevertheless the iClass family includes tag variations using all of these standards. Finally I decided to buy the Adafruit reader, which handles both ISO 14443A and B: https://www.adafruit.com/products/364

I set it up with a Raspberry Pi 2 and was able to read the tag shipped with the reader, but sadly not the tag I wanted to read… Since I was unable to read my tag, I guessed they use the third protocol: ISO 15693.

I looked for a reader for ISO 15693 but the choice is very limited (since it is not widely used). In the meantime I found a cheap HID reader on Amazon (https://www.hidglobal.fr/products/readers/omnikey/5321-cli) which should be compatible with HID iClass cards, so I decided to buy it.

It works pretty well on Windows with their driver and software and gave me some useful information about my badge. It allowed me to confirm that it uses the ISO 15693 standard:

It’s a good start; nevertheless I wanted to use it on the Raspberry Pi. I did some research and found out that this type of RFID card reader is called “PC/SC”:

PC/SC (short for “Personal Computer/Smart Card”) is a specification for smart-card integration into computing environments. (wikipedia)

Moreover there is a USB standard for such device: CCID.

CCID (chip card interface device) protocol is a USB protocol that allows a smartcard to be connected to a computer via a card reader using a standard USB interface (wikipedia)

Most USB-based readers comply with a common USB CCID specification and therefore rely on the same driver (libccid under Linux), part of the MUSCLE project: https://pcsclite.alioth.debian.org/

There is plenty of software related to RFID reading on Linux that I found during my research before choosing to try CCID. Here are my raw notes for future reference:

  • PCSC lite project
  • PCSC-tools
  • librfid
    • Seems dead
    • https://github.com/dpavlin/librfid
    • low-level RFID access library
    • This library intends to provide a reader and (as much as possible) PICC / tag independent API for RFID applications
  • pcscd
  • libnfc
    • https://github.com/nfc-tools/libnfc
    • forum is dead
    • libnfc is the first libre low level NFC SDK and Programmers API
    • Platform independent Near Field Communication (NFC) library http://nfc-tools.org
    • libnfc seems to depend on libccid, but it actually depends on the hardware reader used: “Note: If you want all libnfc hardware drivers, you will need to have libusb (library and headers) plus, on *BSD and GNU/Linux systems, libpcsclite (library and headers). Because some dependencies (e.g. libusb and optional PCSC-Lite) are used…”
  • Opensc

I decided to go with the MUSCLE project available here: https://pcsclite.alioth.debian.org/ccid.html

After I installed the driver/daemon and the tools to interact with the reader, I had trouble: the reader was not detected by pcscd. Luckily there is a section “Check reader’s compliance to CCID specification” on the PC/SC page to know if the reader is supported. I followed it and sent the report to the main maintainer of the pcsc driver: Ludovic Rousseau.

He confirmed that the driver had never been tested with this reader and gave me instructions to try it:

Edit the file CCID/readers/supported_readers.txt and add the line:
0x076B:0x532A:5321 CLi USB
Then (re)install the CCID reader and try again to use the reader.
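As a side note, the 0x076B:0x532A pair is the USB vendor:product ID of the reader. A small sketch around lsusb (from usbutils; the helper name is mine) can extract it for any device:

```shell
# usb_id: print the vendor:product ID of the first USB device whose
# lsusb description matches $1 (case-insensitive); hypothetical helper
usb_id() {
    lsusb | grep -i "$1" | sed -n '1s/.* ID \([0-9a-fA-F:]*\).*/\1/p'
}

# e.g. "usb_id omnikey" should print something like 076b:532a
```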

I followed it and the reader got detected by the daemon. Nevertheless the card was not detected, so I provided more feedback/logs to Ludovic for debugging, and sadly the conclusion is that the reader cannot be supported:

The conclusion is that this reader is not CCID compliant. I am not surprised by this result.
You have to use the proprietary driver and no driver is provided for RaspberryPi.
If you are looking for a contactless reader have a look at https://pcsclite.alioth.debian.org/select_readers/?features=contactless

I will try to see if I can interact with the reader through libusb, and also find a cheap open source ISO 15693 reader to continue this project.

Update 23JAN2017

I contacted Omnikey support to use their reader for my project and they confirmed there is no driver for it on the Pi.

we don’t have any drivers for 5321 CLi on Raspberry Pi. Please have a look at OMNIKEY 5022 or OMNIKEY 5427 CK instead. They can be accessed through the native ccidlib.

In the meantime I also bought another reader compatible with the ISO 15693 standard: http://www.solutions-cubed.com/bm019/

I plugged it into an Arduino Uno thanks to their blog article: http://blog.solutions-cubed.com/near-field-communication-nfc-with-the-arduino/

Nevertheless I was still unable to read the tags. I started doing deeper research and found that ISO 15693 can have several settings, and I did not know which ones my iClass tags were using. I tried all the possible combinations that the BM019 handles:

Even with all the tests I made, I was still unable to read them. I dug deeper and found out that the BM019 module is built around the ST CR95HF chip. It seems that I’m not the only one trying to read iClass with this IC, and their support forum has several posts explaining that it is not possible since iClass does not properly follow the ISO 15693 standard:

the issue comes from Picopass which is not ISO 15693 compliant,
timings are not respected.
We have already implemented a tricky command which allows us to support Picopass; a new version of the CR95HF development software will soon be available, including a dedicated window for PICOPASS.

After 3 readers and countless hours of attempts, I’m still unable to read the iClass badges since they do not seem to implement any real standard.

Quick notes on setting up an OpenShift cluster in CloudForms

Just some quick notes on how to set up an OpenShift cluster in CloudForms.


[root@openshift-master ~]# oadm version
oadm v3.1.0.4-16-g112fcc4
kubernetes v1.1.0-origin-1107-g4c8e6f4
CF version: nightly Aug 2016

Openshift API

(mainly from https://access.redhat.com/webassets/avalon/d/Red_Hat_CloudForms-4.0-Managing_Providers-en-US/Red_Hat_CloudForms-4.0-Managing_Providers-en-US.pdf)

26 July 2016: it seems that most of the setup is already done by the OpenShift Enterprise installation.


Check if the project “management-infra” already exists with the “oc get projects” command:

[root@openshift-master ~]# oc get projects
default                           Active
management-infra                  Active
openshift                         Active
openshift-infra                   Active

if not, create it with (not tested):

oadm new-project management-infra --description="Management Infrastructure"

Service account

Check if the service account “management-admin” already exists with the “oc get serviceaccounts” command:

[root@openshift-master ~]# oc get serviceaccounts
NAME               SECRETS   AGE
builder            3         1d
default            2         1d
deployer           2         1d
inspector-admin    3         1d
management-admin   2         1d

if not, create it with (not tested):

$ cat ServiceAccountIntegrationCloudFroms.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "management-admin"
  }
}
$ oc create -f ServiceAccountIntegrationCloudFroms.json

Cluster Role

check if the cluster role “management-infra-admin” already exists with “oc get ClusterRole” command:

[root@openshift-master ~]# oc get ClusterRole | grep management

if not, create it with (not tested):

$ cat ClusterRoleIntegrationCloudFroms.json
{
    "kind": "ClusterRole",
    "apiVersion": "v1",
    "metadata": {
        "name": "management-infra-admin",
        "creationTimestamp": null
    },
    "rules": [
        {
            "verbs": [
                "*"
            ],
            "attributeRestrictions": null,
            "apiGroups": null,
            "resources": [
                "pods/proxy"
            ]
        }
    ]
}
$ oc create -f ClusterRoleIntegrationCloudFroms.json


Create the following policies to give enough permissions to your service account:

oadm policy add-role-to-user -n management-infra admin -z management-admin
oadm policy add-role-to-user -n management-infra management-infra-admin -z management-admin
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:management-infra:management-admin

Retrieve the token name:

[root@openshift-master ~]# oc get -n management-infra sa/management-admin --template='{{range .secrets}}{{printf "%s\n" .name}}{{end}}'


[root@openshift-master ~]# oc get -n management-infra secrets management-admin-token-wbj84 --template='{{.data.token}}' | base64 -d

Then use this token in the CloudForms UI, in the default endpoint of the container provider setup.
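The two lookups above can be folded into one helper. This is just a sketch; the function name is mine and it assumes the first secret listed on the service account is the token secret:

```shell
# get_cf_token: hypothetical helper that grabs the name of the first secret
# of the management-admin service account, then prints its decoded token
get_cf_token() {
    local secret
    secret=$(oc get -n management-infra sa/management-admin \
        --template='{{(index .secrets 0).name}}')
    oc get -n management-infra "secrets/$secret" \
        --template='{{.data.token}}' | base64 -d
}
```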


Let’s Encrypt TLS setup for nodejs

Following my first test setting up an HTTPS server to dialogue with the Facebook API, described in my previous article here, I had an error when trying to register the Facebook webhook:


I dug deeper and also verified the domain certificate with https://www.ssllabs.com/ssltest:


It seems good but there is a warning about the certificate chain… I did some quick research and it seems to be the root cause. After some investigation (mainly thanks to this post), it appears the error comes from my nodejs server setup, and more particularly the missing certificate authority (CA) certificate info. I missed it since it is not used in the official documentation; it is indeed an optional parameter:

If this is omitted several well-known “root” CAs will be used, like VeriSign

Let’s add it in the options:

var options = {
    key: fs.readFileSync('/etc/letsencrypt/live/djynet.xyz/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/djynet.xyz/cert.pem'),
    ca: fs.readFileSync('/etc/letsencrypt/live/djynet.xyz/chain.pem')
};
I ran the SSL check another time and the error is now gone… and the Facebook webhook works fine too:
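To double-check from a shell that the server now sends the intermediate certificate, one can count the certificates presented during the handshake. A sketch (the helper name is mine; it assumes the openssl CLI is installed; a full chain shows at least 2):

```shell
# cert_count: number of certificates the server at $1:443 presents
# during the TLS handshake (leaf + intermediates)
cert_count() {
    openssl s_client -connect "$1:443" -servername "$1" -showcerts \
        </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'
}
```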


HTTPS with let’s encrypt

If you want to try the new Facebook bot capability, you may come across the need for an HTTPS web server for the callback URL:


Anyway… since HTTPS is becoming the standard (http://trends.builtwith.com/ssl/SSL-by-Default, https://security.googleblog.com/2014/08/https-as-ranking-signal_6.html) it could be interesting to learn more about it and give it a try…

Want to know more about https? Google!

Next step… you need a certificate. It has to be provided by a certificate authority and it will cost you some money (depending on the authority and certificate type but, once again… Google). You could buy one from RapidSSL for a hundred dollars (https://www.rapidssl.com/) but since a few weeks there is a new player in town providing free certificates: Let’s Encrypt.

“Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG).”

The service went out of beta in April 2016 with some limitations, but the initiative is promising so I decided to try it.

The documentation is pretty good:

First you retrieve the client with:

wget https://dl.eff.org/certbot-auto
chmod a+x ./certbot-auto

Then you check the options:

$ ./certbot-auto --help
Usage: certbot-auto [OPTIONS]
A self-updating wrapper script for the Certbot ACME client. When run, updates
to both this script and certbot will be downloaded and installed. After
ensuring you have the latest versions installed, certbot will be invoked with
all arguments you have provided.
Help for certbot itself cannot be provided until it is installed.
  --debug                                   attempt experimental installation
  -h, --help                                print this help
  -n, --non-interactive, --noninteractive   run without asking for user input
  --no-self-upgrade                         do not download updates
  --os-packages-only                        install OS dependencies and exit
  -v, --verbose                             provide more output

You need to find the plugin to use depending on your web server (more info HERE). I used the standalone plugin since there is nothing for nodejs. With this plugin the client uses port 443 to act as a web server and answer a challenge proving that it owns the domain.
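Since the standalone plugin needs to bind the port itself, it is worth checking first that nothing else listens on 443 (a quick sketch; ss comes from iproute2 and no root is needed to list listeners):

```shell
# List TCP listeners bound to port 443; if nothing prints below
# the header line, the port is free for certbot's standalone plugin
ss -ltn 'sport = :443'
```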

./certbot-auto certonly --standalone --email charles.walker.37@gmail.com -d djynet.xyz

The output will give you information about where the certificate/key have been generated so you can use them:

Congratulations! Your certificate and chain have been saved at

Then we can try it with a simple page served by nodejs.

Here is a very simple HTTPS nodejs server (from the official doc: https://nodejs.org/api/https.html)

var fs = require('fs');
var https = require('https');

var options = {
    key: fs.readFileSync('/etc/letsencrypt/live/djynet.xyz/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/djynet.xyz/cert.pem')
};

https.createServer(options, function (req, res) {
    console.log(new Date() + ' ' +
        req.connection.remoteAddress + ' ' +
        req.method + ' ' + req.url);
    res.end("hello world\n");
}).listen(443);

Let’s run it with

$ sudo node main.js
 Fri Jun 03 2016 02:41:57 GMT+0000 (UTC) GET /
 Fri Jun 03 2016 02:41:57 GMT+0000 (UTC) GET /favicon.ico

And check the result


Nice green lock… we’re safe !


I discovered a few days later that it was not 100% working. The nodejs server does not provide the chain of certificates. See my follow-up article to fix it HERE.

Determine file system type

This will avoid me having to look it up on Stack Overflow every time…

[myuser@myserver ~]$ df -T
Filesystem     Type     1K-blocks    Used Available Use% Mounted on
/dev/sda1      xfs       10473900 2185416   8288484  21% /
/dev/sdb1      xfs      209611780  315256 209296524   1% /ephemeral

Works fine unless the FS is not yet mounted… otherwise use “file”:

[myuser@myserver ~]$ sudo file -sL /dev/sdb1
/dev/sdb1: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)
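For an already-mounted path there is a third option worth noting, stat from GNU coreutils:

```shell
# stat -f queries filesystem (not file) status; -c %T prints the
# filesystem type in human-readable form (e.g. xfs, ext2/ext3)
stat -f -c %T /
```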

Openshift installation on GCE using terraform

I wanted to try to install OpenShift on a GCE cluster with the official “ansible installer” available on GitHub: https://github.com/openshift/openshift-ansible. Nevertheless I did not manage to have the installer create the VMs on GCE, and I’m not even sure it is possible (even if it seems based on libcloud). In the meantime I discovered Terraform, which allows describing an infrastructure in a common language and deploying it on multiple clouds (including GCE and AWS).

Finally I decided to work on a project that would include these 2 topics, “OpenShift installation with Ansible” and “infrastructure creation with Terraform”.
I did not have to search too long before I found an open source project that aims to do that:

“This repo contains Ansible and terraform scripts for installing openshift onto OpenStack or AWS EC2.

The repo is organized into the different deployment models. Currently tested with EC2 and OpenStack, but can be extended to Google Compute, Digital Ocean, etc. Happy to take pull requests for additional infrastructure.”

That was perfect since I wanted to use GCE. I decided to contribute to this project by adding the GCE support.

Here is an overview of the whole process (more details on the GitHub project):

  1. Use Terraform to create the VM cluster on the cloud
    this is based on an infrastructure file and Terraform.
  2. Use Ansible to customize the VMs
    this part uses Ansible and an external open source project made by Cisco to dynamically create an Ansible inventory file from the Terraform files: https://github.com/CiscoCloud/terraform.py. This is not obvious today since the Cisco code is copied in the repo (see my comment later)
  3. Use the openshift-ansible installer to install OpenShift on these VMs
    this part uses the official installer but requires a manual action first to create the ansible inventory file.

Remove static “Terraform.py” script

During my changes on the repo I noticed that it was relying on a Cisco project to create an Ansible inventory from the Terraform files. Nevertheless, instead of cloning the Cisco repo (like it is done for the openshift-ansible repo), it was committed.
I think it was done like this because the original creator was planning to modify it later, but for now it prevents us from benefiting from the changes made on Cisco’s official GitHub repository. This is particularly true for my use case, since the committed version had a bug preventing the inventory file from being created for GCE (fixed in the latest GitHub versions).
I thus decided first to create a PR to clone the Cisco repo in the procedure and remove the old committed version.


GCE Terraform integration


Artifactory cache (no root/no internet)

Artifactory is a repository manager. It is the one used in my current company to store various packages like RPM/Puppet/PyPI/Vagrant… You can find more documentation on their website:

This post gathers my notes on installing an Artifactory cache connected to another instance (the master instance) to speed up package retrieval. This procedure is used on a target server where you have no internet connection and no root access.


1/Download

Download Java + Artifactory OSS on a computer with internet access (or from Artifactory) and transfer them to the target server.


unzip jfrog-artifactory-oss-4.5.2.zip
tar zxvf jre-8u73-linux-x64.gz

2/Java install

export PATH=/home/sbox/jre1.8.0_73/bin:$PATH
export JAVA_HOME=/home/sbox/jre1.8.0_73

3/Artifactory install

nohup ./bin/artifactory.sh &
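Since artifactory.sh is backgrounded, the instance takes a moment to come up. A hedged way to wait for it (the helper name is mine; the ping endpoint is part of Artifactory’s REST API and answers "OK" once the instance is ready, on the port 8080 used by the admin UI):

```shell
# wait_for_artifactory: poll the REST ping endpoint until the instance
# answers, then report; assumes the default port 8080
wait_for_artifactory() {
    until curl -sf "http://localhost:8080/artifactory/api/system/ping"; do
        sleep 5
    done
    echo "artifactory is up"
}
```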

4/Add the link to Master repo (to act as cache)

Use the admin UI on port 8080:


Be careful when choosing the KEY. I strongly suggest using the name of the remote repo you want to cache locally, otherwise you will have different URLs to download from depending on whether you download from the MASTER or this CACHE.

5/Test it

curl http://<target server IP>/repository/KEY/<stuff>

“Stuff” is an artifact that should exist on the MASTER Artifactory, otherwise it will fail…