Cheap stocks options finder

The idea is to create a bot to detect cheap options that may have high volatility and thus generate some gain. The financial data (stocks/options/earnings) are retrieved from the IEX API (I had to pay to get access to the options data).

For example, the following call for AGNC (which already had a significant value increase) costs only 30$ (more or less) and may pay off after the earnings.

In case you are not familiar with options trading, you can read: Info about option trading >>


Upcoming earnings

I want options that may have high volatility, and to achieve that I decided to look at the stocks that will report earnings soon and trade the options that expire a few days after that. I hope that the earnings report will have a significant impact on the options and thus generate a gain.

I use this endpoint to get the companies that report earnings in the next 7 days and then filter by market cap to keep only the 50 biggest stocks. I do that to ensure that the options are liquid enough and that there will be plenty of them.

aPotentialCompanyToCheck = getNextWeekEarning()
logging.info(f"We have {len(aPotentialCompanyToCheck)} potential Earnings before filtering")
# We keep the biggest companies only to have lot of options (thus data)
aPotentialCompanyToCheck = __filterPotentialEarning(aPotentialCompanyToCheck)
logging.info(f"We have {len(aPotentialCompanyToCheck)} potential Earnings after filtering")

Options finder

The next step of the process is to find all options expiring in the 7 days after the earnings and keep the ones with a strike price within 10% of the current stock value (so they have some chance to end up in the money). Among those options we keep the ones that cost less than 20$, since I don't want to gamble more than that 😉

Past earnings

The program will also get the stock prices around the last 2 earnings so that I can see if the stock is usually volatile around its earnings, to determine if our option has a chance to gain. For example, if we have a cheap option with a strike price 5% above the current stock price but in the past the stock never moved more than 3% after its earnings… we know there is very little chance that our option will end up in the money. On the opposite, if we have a company that was very volatile around its past earnings, with moves around 10%, then we know it has some chance to happen again.

To that end we get the stock prices over a 10-day range starting 3 days before each earnings. The values will be displayed.

Stock value

The last information the script extracts is the current stock price of the company, to see its trend and help me choose a good option.


All the information retrieved is presented to the user on a website. The graphics are made with the plotly library, which I strongly recommend.


Here is the page for one stock

It shows the different data gathered by the script:
1 – Stock ticker and earnings date
2 – Past earnings for the last 6 months
3 – Current stock value for the last months
4 – Possible cheap options with their details

In this example we could buy a call which needs a 4.9% move to generate a gain and only costs around 20/30 $.


I tried it for 3 weeks but did not make a gain. There were too many losses for the winners to compensate and I ended up losing around 10$.

Code is here >>

Machine Learning to reduce API calls – follow up

Comparing NN, Decision Tree and Random Forest

In addition to the Neural Network solution I explained in the previous article, I also tried other algorithms: Decision Tree and Random Forest.

Algorithm                Accuracy
Neural Network           96.63%
Random Forest            96.96%
Decision Tree            96.76%
Refined Decision Tree    97.64%

Refined decision tree?

One of the conclusions of all my tests with ML from the previous article is the complexity of choosing the parameters needed by each model. Luckily, a friend suggested a solution, “GridSearchCV”, which allows testing various parameters for an algorithm and finding the best ones.

The algorithm I called “refined decision tree” is a decision tree built with the best parameters “GridSearchCV” found.

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Now let's try to refine the Decision tree by trying several parameters
aGridSearchParams = {'max_features': [None, 'sqrt'], 'max_depth': [3, 5, 10, None], 'min_samples_leaf': [1, 2, 3, 5, 10], 'min_samples_split': [2, 4, 8, 16], 'max_leaf_nodes': [10, 20, 50, 100, 500, 1000]}
# instantiate the grid
aGridSearchResult = GridSearchCV(DecisionTreeClassifier(), aGridSearchParams, cv=5)
# fit the grid with the training data
aGridSearchResult.fit(atrainDataX, atrainDataY)
# let's see how good it is
aDecisionTreeRefinedPrediction = aGridSearchResult.best_estimator_.predict(atestDataX)

I used the Neural Network and the “refined decision tree” in the application to compare them and noticed that the neural network was slightly better. For example:

root - INFO - Checking if group should be refresh by calling ML with: [1, 47, 10, 0] 
root - INFO - We found 1 new message and the ML probability were NN: [[0.11245716]], DT[[1. 0.]] 

When trying to predict a group with the following characteristics:

  • Latest refresh done 1 day ago
  • 47 users in the chat room 
  • Latest message in the group was posted 10 days ago 
  • 0 messages posted in the last week in the chatroom 

The Neural Network predicted an 11% probability of new messages while the “refined decision tree” predicted a 0% chance. We did find new messages in the room, leading to a false negative for the “refined decision tree”, which we want to avoid at all cost. I will just stick to the neural network for now.

Optimizer change on NN 

When training the Neural Network, I sometimes end up with very poor results. The model seems stuck and always predicts the same output:

NN atestDataYPredictedKeras: [0.05339569 0.05339569 0.05339569 ... 0.05339573 0.05339573 0.05339573] 

even though the test set is composed of around 1500 lines.

NN len(atestDataYPredictedKeras): 1484 

This happens from time to time and it usually goes away if I retrain the model. Nevertheless, it makes the final results very bad if I do not check the training results every time.

Luckily, I’m not the only one in this case 😉 according to this github ticket.

I tried some of the suggestions proposed on the page, and the one that seemed to work best was to change the optimizer from SGD to Adam. After some reading, I decided to go with it since Adam seems to be a good choice according to the ML community. This youtube video explains some of the possible optimizer algorithms and also suggests Adam as the default choice. Nevertheless, like all topics/parameters in ML, you can always find arguments for the opposite, like this article:

“We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half.” 

I will still stick to Adam for now since it fixes my original issue while keeping the same accuracy and a smaller loss.


These are the 2 topics I wanted to follow up 😉

Machine Learning to reduce API calls

I have a bot that is part of several (around 250) chat groups (think Discord rooms). The bot connects every day through an undocumented API to get the new messages of each room. Since the API is not fully documented, I'm not sure it was designed for robotic access. I thus decided to try to predict whether a room will have new messages and reduce the number of calls. That was a fun opportunity to try some ML.


First, I needed to find some “features” that will be used to predict the output (whether there are new messages to get or not). I tried several versions and the current features are:

number of days since the last refresh

integer to indicate how many days have passed since the last time we called the API to refresh messages.
For example, if we do a refresh on 7 JAN and the latest refresh was done on 1 JAN, this field value will be 7 – 1 = 6 days.

number of users in the chat group

integer which indicates the number of users in the group for which we call the API to refresh messages.

number of days since the latest message was posted in this group

integer to indicate the number of days that have passed since the latest message was posted in the group (compared to the date of the refresh).
For example, if the last refresh for this chat room was done on 7 JAN and the most recent message in the chat group at that time was from 1 JAN, the value of this field will be 7 – 1 = 6 days.

number of messages in the latest 7 days

integer to indicate the number of messages posted in the chat room in the latest 7 days.

I logged the values of each of these features when calling the API for a few days, as well as the result of the call: were there new messages in the group or not. I wrote the results in 2 files which will be used to train and test the NN.



One day we called the refresh API on a chat room to get new messages and did not get anything. At this time the number of days since the latest refresh was 1 (we checked the day before) and the number of users in the chat room was 8. We also know that the most recent message was 555 days old and there were 0 messages in the latest week.


One day we called the refresh API on a chat room to get new messages and found some. At this time the number of days since the latest refresh was 1 (we checked the day before) and the number of users in the chat room was 10. We also know that the most recent message was 4 days old and there were 2 messages in the latest week.

This is still a work in progress and I'm getting feedback from other people I'm working with, so I share the files and feature explanations in a dedicated Google Drive folder. You should check it to get more info and the latest features used.


I decided to use Keras since it has good reviews. It works on top of various ML engines and allows fast experimentation: “Keras is a high-level API capable of running on top of TensorFlow, CNTK, Theano, or MXNet (or as tf.contrib within TensorFlow). Since its initial release in March 2015, it has gained favor for its ease of use and syntactic simplicity, facilitating fast development. It’s supported by Google.”


I split the data into 2 files with an 80/20% proportion. The data are csv files formatted as explained in the previous section. The latest data and info are available in the following folder.

We load the data using numpy loadtxt function:

import numpy as np

aTrainData = "mlDataTrain.csv" 
aTestData = "mlDataTest.csv" 

#Data have 4 fields
atrainDataX = np.loadtxt(aTrainData, delimiter=';',usecols=[1,2,3,4], dtype=int) 
atrainDataY = np.loadtxt(aTrainData, delimiter=';',usecols=[0], dtype=int)  

atestDataX = np.loadtxt(aTestData, delimiter=';', usecols=[1,2,3,4], dtype=int) 
atestDataY =np.loadtxt(aTestData, delimiter=';', usecols=[0], dtype=int) 


I decided to use a Neural Network with 4 inputs (since we have 4 features) in the input layer, connected to a single neuron as the output layer.

It's the most basic design I could imagine. I saw some articles where people suggest adding a hidden layer, but I was not sure how to decide. The “design” of the neural network was the first challenge I faced. I will do a dedicated post on this point later.

There are several other parameters, like the activation functions for each layer. I chose “relu” for the first layer and “sigmoid” for the output after some quick reading, but I'm clearly not sure it's the best choice. There are so many possibilities and no clear explanations on which to choose.

from keras.models import Sequential
from keras.layers import Dense

aKerasNnModel = Sequential() 
aKerasNnModel.add(Dense(4, input_dim=4, activation='relu'))
aKerasNnModel.add(Dense(1, activation='sigmoid'))


One of the reasons I chose Keras was the simplicity to get results (“developed with a focus on enabling fast experimentation”). Training is a simple call to the “fit” function:

aKerasNnModel.fit(atrainDataX, atrainDataY, epochs=100, verbose=1)

The model is trained with the training data and I chose a value of 100 epochs, which seems to be a good value from what I read in other articles (especially for the small amount of data I have).

When called, the python code will output the results of each epoch:

Epoch 1/100   5936/5936 [==============================] - 0s 46us/step - loss: 0.4604 - accuracy: 0.9252  
Epoch 100/100 
 5936/5936 [==============================] - 0s 31us/step - loss: 0.1513 - accuracy: 0.9559 
 1484/1484 [==============================] - 0s 26us/step 

At the end of the training the model has a 95% success rate in its predictions. The next step is to evaluate the model with unknown new cases from the testing set.


Once the model has been trained with the training data (as a reminder, I split my data 80% train / 20% test), we can evaluate how well it predicts on the test data. This is done with the evaluate method of Keras:

aKerasNnModelScore = aKerasNnModel.evaluate(atestDataX, atestDataY) 
print("NN algorithm results: {0} for folowwing metrics : {1}".format(aKerasNnModelScore,aKerasNnModel.metrics_names))

Which will output

NN algorithm results: [0.14495629791642295, 0.9595687389373779] for following metrics : ['loss', 'accuracy'] 

We achieve good results on the testing set too, with 96% accuracy.


The neural network outputs a probability as its prediction:

root - INFO - Checking if group should be refresh by calling ML with: [4, 123, 560, 0] 
root - INFO - We found 0 new message and the ML probability were NN: [0.01339133]

root - INFO - Checking if group should be refresh by calling ML with: [4, 16, 0, 9]
root - INFO - We found 1 new message and the ML probability were NN: [0.75237719]

Since I want to be sure to never miss a possible message, I decided to take a very low threshold of 2%, which means we will probably still make some calls and not find anything. I will review it after the ML results have been compared to reality for a few days. Nevertheless, if you have never heard the term “confusion matrix”, you may want to have a look at it now since we will use it later to review our threshold. There are some explanations about it here.


I saved the model with"model.h5")

And then used it in my real-life application. I logged the predictions of the model but still called the API to get the new messages from the chat rooms, so I could log a confusion matrix.

Here is the result for one day

aKerasConfusionMatrix: Counter({'TN': 238, 'FP': 18, 'TP': 17}) 

As explained previously, I chose a very low threshold to avoid any false negative, even if it means having a few false positives, because I do not want to miss any messages. In the end we reduced our number of calls to the API from 273 to only 35 and did not lose any messages. The threshold seems good enough for now.


I'm glad I had a project where I could have some fun discovering ML with a real-life application. As a non-expert and first-time user of neural networks I find it quite complicated and easy at the same time. It's easy since I managed to get good results very quickly without too much effort, but… it's hard because there are lots of unknown variables like the network shape or the different functions (activation, loss, optimizer). For most of these parameters I did not find any good documentation on which one to choose (and the articles sometimes contradict each other).

This article is just a short summary of my work on this project, since I did not discuss the other machine learning algorithms I tried (and compared to the NN): Decision Tree and Random Forest. I also did not discuss an issue I had when training the network, where I got stuck with a model that always answered the same prediction. I plan to do a follow-up to develop these issues later.


Mbed OS on Arduino Nano 33 BLE

Arduino released several new boards, including the Nano 33 BLE, which piqued my interest because it is based on the nRF52840 chip, which should be compatible with Bluetooth MESH. I decided to give it a try…

This is the first Arduino board based on the nRF52840, and thus there was no existing Arduino core for this chip, which means they faced some challenges to get Arduino running on it. They went for a smart solution by deciding to run the Arduino core on top of Mbed OS, to be able to reuse all the work already done by Mbed OS to integrate the nRF52840 chip. This is explained on their blog if you are curious.

That's when I discovered Mbed OS: “Arm Mbed OS is a free, open-source embedded operating system designed specifically for the ‘things’ in the Internet of Things.” It seems promising, especially since it has a very complete Bluetooth stack (Cordio) which includes Bluetooth MESH capability, something that is not offered today by the Arduino Bluetooth library. That's why I decided to give a try to programming with Mbed OS directly instead of Arduino. Nevertheless, my first idea was to keep the Arduino bootloader on the board so that I can program it WITHOUT any extra device and benefit from the Arduino bootloader.


I used Mbed Studio to avoid dealing with the setup of a toolchain on Windows.

The only “difficulty” is to use an mbed version recent enough (>=5.14) to have the Arduino Nano 33 in the list of targets.

I've written a very simple program that will blink an LED (on pin p47) 3 times:

#include "mbed.h"
#include "ble/BLE.h"

DigitalOut led1(p47);

// main() runs in its own thread in the OS
int main()
    //Pattern is 3 blink and wait 
    while (true) {
        // Turn led on
        led1 = 1;
        led1 = 0;
        led1 = 1;
        led1 = 0;
        led1 = 1;
        led1 = 0;

Also, in the folder of your project you should have an /mbed-os/targets/targets.json file with the NANO33 definition.

I added 2 parameters. The first one, "OUTPUT_EXT": "bin", generates a bin file instead of a hex file (the uploader uses bin files).

The second change is "mbed_app_start": "0x10000", so that the toolchain knows my program will be located in memory at address 0x10000. This info is part of the Arduino bootloader code and was explained to me here. The Arduino bootloader will call the code at this address after it boots.

After you compile the code with Mbed Studio you will have the resulting binary in “C:\Users\charl\Mbed Programs\mbed-os-example-blinky\BUILD\ARDUINO_NANO33BLE\ARMC6”, ready to be uploaded to the board.


This part is tricky since my original goal was to use the Arduino bootloader to upload my code to the board without any other tools. It took me some time to understand how to do it, but luckily I got some help and decided to use BOSSAC, which is the same tool used by the Arduino IDE. You can see it in the logs if you activate the extra logs in the IDE.

I reused the bossac binary that is shipped with the Arduino IDE and added an option to upload at address 0x10000. You need to press the reset button on the board twice to force the bootloader to run forever before starting the upload command.

C:\Users\charl\AppData\Local\Arduino15\packages\arduino\tools\bossac\1.9.1-arduino1> .\bossac.exe -d --port=COM5 -o 0x10000 -U -i -e -w 'C:\Users\charl\Mbed Programs\mbed-os-example-blinky\BUILD\ARDUINO_NANO33BLE\ARMC6\mbed-os-example-blinky.bin' -R
 Set binary mode
 version()=Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ]
 Connected at 921600 baud
 version()=Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ]
 Device       : nRF52840-QIAA
 Version      : Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ]
 Address      : 0x0
 Pages        : 256
 Page Size    : 4096 bytes
 Total Size   : 1024KB
 Planes       : 1
 Lock Regions : 0
 Locked       : none
 Security     : false
 Erase flash
 Done in 0.003 seconds
 Write 43884 bytes to flash (11 pages)
 writeBuffer(scr_addr=0x34, dst_addr=0x10000, size=0x1000)
 ...
 writeBuffer(scr_addr=0x34, dst_addr=0x1a000, size=0x1000)
 [==============================] 100% (11/11 pages)
 Done in 2.076 seconds
 reset()

Seems to be working… nevertheless the LED does not blink 🙁

In the meantime I bought a programmer for the chip, since the manufacturer has an “education/hobbyist” version for only 20$… I was not aware that I could get an official programmer for such a low price… which made my original idea of reusing the Arduino bootloader to avoid buying one useless…

Now that I have my hands on a programmer, I decided to try to understand why my program was not running… is it my code, did the upload fail, or something else? I modified my code to remove the offset and start at address 0x0, uploaded it directly to the chip, and the LED started blinking. The code is thus good and the issue seems located in the upload process… I decided to do another test: put back the 0x10000 offset in my program, erase the board, upload the Arduino bootloader to the board, and finally upload my program at address 0x10000, but this time with the J-Link tool instead of BOSSAC. The LED started blinking!

I compared the memory at address 0x10000 for both cases. First, the test where I upload with BOSSAC at address 0x10000 and my program does not work:

and when I upload with J-Link

It seems the upload with BOSSAC does not work, which confirms what I thought.


I stop here since I now have a programmer to directly program the chip, so I can move on with my Bluetooth MESH tests.

Thx to for helping me understand more about Arduino bootloader and Thx to for selling a powerful programmer for hobbyist at a very reasonable price.

If you want to try Mbed OS you should take a compatible board offering DAP-Link for USB upload: “Arm Mbed DAPLink is an open-source software project that enables programming and debugging application software running on Arm Cortex CPUs. Commonly referred to as interface firmware, DAPLink runs on a secondary MCU that is attached to the SWD or JTAG port of the application MCU. This configuration is found on nearly all development boards. It creates a bridge between your development computer and the CPU debug access port. DAPLink enables developers with drag-and-drop programming, a serial port and CMSIS-DAP based debugging.” I would suggest this one for the nRF52840 chip >>

Burn bootloader Arduino nano 33 BLE

Quick post to explain how to restore the original Arduino Nano 33 BLE bootloader in case you erased it by mistake (another story for another post)…


I have a J-Link EDU Mini as external programmer for the chip of the board (nRF52840). It's a very cheap programmer (15$) that can only be used for non-commercial applications but offers the same functionality as a 400$ programmer. Thx Nordic for offering that to hobbyists! More info about it here.
I would also suggest buying an adapter for it, “designed to make it easier to use ARM dev boards that use slimmer 2×5 (0.05″/1.27mm pitch) SWD cables for programming”, like this one.


There are 5 cables to connect between the Arduino Nano 33 BLE and the J-Link programmer, but first let's locate the probe pads on the Nano 33 BLE (I used a Nano 33 BLE Sense board in the picture but it is the same as the Nano 33 BLE).

The pins are:
1 – Vref (3.3V in our case)
2 – SWDIO (data)
3 – SWCLK (clock)
4 – Ground
5 – Reset

The tricky part will be to solder the cables, because all the points are small and close to each other (I wish they exposed them in a more practical way, but we are lucky to have access to them at all ;) )

The mapping of the J-Link pins is explained in the official doc, and it's even easier if you use the adapter since the names are written on it.

The mapping is thus

Nano 33 BLE    J-Link EDU
Vref           Vref
SWDIO          SWDIO
SWCLK          SWCLK
Ground         GND (any)
Reset          RESET


You need the full J-Link package (which is free), which you can download on the Nordic website. We are going to use J-Flash Lite to be specific, but better to install the whole package 😉

Turn on the 2 boards (don't forget to plug in the Nano 33, or Vref will be 0V and the J-Link will fail to connect to the Arduino board).

We need to locate the Arduino bootloader file for the Nano 33 BLE that we are going to upload with J-Flash Lite… It is located in C:\Users\charl\AppData\Local\Arduino15\packages\arduino\hardware\mbed\1.1.2\bootloaders\nano33ble (of course you need to adapt the path) and contains 3 files.

We are going to use the bin one (it should work with the hex one too) and give this path to J-Flash Lite. Be sure to choose nRF52840 in the list of devices (the rest of the parameters can keep their default values) and then click “Program Device”.

You should get some logs similar to these, and then the board should have its original Arduino bootloader back and should be usable again in the Arduino UI 😉

DEFCON scale

Let's start with an explanation of what the DEFCON scale is:

The DEFCON system was developed by the Joint Chiefs of Staff (JCS) and unified and specified combatant commands.[2] It prescribes five graduated levels of readiness (or states of alert) for the U.S. military. It increases in severity from DEFCON 5 (least severe) to DEFCON 1 (most severe) to match varying military situations.

and a picture of the final result to better understand what it looks like:


The scale is made of wood with 9 LEDs behind each level/number.

The logic of the scale comes from an Arduino Micro with a SparkFun BLE module, “SparkFun Bluetooth Mate Silver” (which is not used for now). The Arduino is powered by 9V regulated from the 12V main power (used for the LEDs). Each number's backlight is controlled by a MOSFET (P30N06LE) driven by the Arduino. There are also 2 buttons for tests, to increase/decrease the level.



The Arduino Micro communicates with a computer over USB to receive the level it should set on the scale. It does that by driving 5 MOSFETs to light the proper panel. It also listens for presses on the 2 buttons to raise/decrease the level (for test purposes). The code is quite simple.


The scale communicates with a computer to receive the level it should set. The level is computed from my company's issue-tracking tool. The computation code interfaces with some of my company's APIs and is thus not part of the shared code… You will have to code your own logic in the python function “getSeverity”, which should return an integer between 5 (low level) and 1 (critical level), following the DEFCON standard 😉

The python part should be put in a crontab to regularly update the scale 😉

More picture of the project HERE and the code is HERE.


The goal is still to create a process which gets a random cat picture and posts it on a Yammer group daily. Last week I tried with the Yammer connector available in MS Flow, but it does not support posting images, as I explained in my previous post. This time I decided to go one level deeper and use the Yammer REST API, which supports it.

Yammer REST API 

We want to use the /messages POST REST call described in the official REST API doc, which mentions the support of attachments: “Yammer provides two methods to associate attachments with a message. Both make use of multi-part HTTP upload (see RFC1867)”.

To be able to post we need to authenticate ourselves with the Yammer OAuth2 flow described in the official doc. I don't want to detail it too much since it's pretty standard, but basically our server offers a /login route which redirects to the Yammer authorization page.

 app.get('/login', (req, res) => {
    var aLoginUri = "";   // Yammer OAuth2 authorize URL goes here
    console.log('Sent login URI response');
    res.redirect(aLoginUri);
});

Yammer then redirects the user back to our server on the /callback route with a code that we exchange for a user token, which we can then use from our server when querying the API to post messages.

 // OAuth2 endpoint (callback)
app.get('/callback', (req, res) => {
    console.log('Received Oauth login callback with code ' + req.url);

    //Calling Oauth to authenticate the APP
    var aUriAuthent = "" + aClientSecret + "&code=" + req.query.code + "&grant_type=authorization_code";
    axios.get(aUriAuthent)
        .then((res) => {
            //console.log("Dumping response for debuging: " + res)
            aAUthTest2 =;
        })
        .catch((error) => {
            console.log(error);
        });
});

Getting random cat picture 

Of course there is an API for that 😉 The API is free but you need to register to get an API KEY that you specify in your header with ‘x-api-key’ when calling. The endpoint we need is called without any parameters and gives us a random cat URL.

 async function getCatUrl() {
    console.log('Entering getCatUrl');
    var aCatUrl = "";
    const aTemp = await axios.get("", { params: {}, headers: { 'x-api-key': aCatApiKey } });
    aCatUrl =[0].url
    console.log('Exiting getCatUrl with: ', aCatUrl);
    return aCatUrl;
}

One note here is the use of async/await so that we “wait” for the response of the cat API before we proceed and post our picture in the Yammer room. I will not detail async/await… there are so many good docs already (google it).

Posting the image 

We now have a token and a cat picture URL that we can use to post our message. This is the only complicated part of this whole project, due to “Both make use of multi-part HTTP upload (see RFC1867)”. I found the form-data NPM module, which should make this process easier: we can use it to create the “multipart/form-data” payload and then give it to another module to send to the Yammer API. Here is the form part:

var formData = new FormData(); 
formData.append('attachment1', Request(aCatUrl));  

Which is quite straightforward, as explained in their readme. Then we pass the form to another node module to send it.


The first module I tried to use to post the REST call is AXIOS, which we already use to get the random cat picture. Nevertheless, the form-data documentation for using AXIOS has a bug which I was unable to understand, so I opened a bug report and switched to another library. The bug has since been fixed by a documentation change.


Instead of axios we can use the native HTTPS nodejs module and pass it our form:

// Patch header to add the key
var aHeader = formData.getHeaders();
aHeader['Authorization'] = "Bearer " + aAUthTest2.access_token.token;

var request = https.request({
    method: 'post',
    host: '',
    //very dirty.... did not find a way to pass param otherwise :(
    path: '/api/v1/messages.json?body=Cat%20of%20the%20day%20&group_id=7799980032',
    headers: aHeader
});

//send it by piping the form into the request
formData.pipe(request);
Final touch 

I added a secret key check on the postcat route to ensure nobody else will use it to spam the room with cats:

 if (req.query.key !== aPostCatSecretKey) {
        console.log("Invalid key: ", req.query.key, " - send back 401")
        res.sendStatus(401);
        return;
    }

And then I added a crontab entry to call our API every day:

0 1 * * * curl 

All code is here: 

And the result:  

Yammer connector

The goal is to create an MS Office 365 Flow which will call a homemade connector to get a random cat picture and post it on a Yammer group.

Yammer connector 

The first thing to do is check if there is an MS Office 365 connector that can get us a random cat picture (who knows… maybe someone has already done one). After checking, it seems it's not the case, so we have to create one.

I will not go too much into the details since the MS doc is quite good. One interesting point is that the connector needs to be served over HTTPS, so I played with Let's Encrypt and their docker version of certbot… Nothing fancy apart from that… a classical NodeJS server with only one endpoint, hardcoded to start with:

app.get('/v2/cat', (req, res) => {
    var aCatUrl = "";   // hardcoded cat picture URL to start with
    res.send(aCatUrl);
    console.log('Sent response');
});

The full code is here 

Once the connector is ready to be used you can validate it with a quick postman call like: 

Then you can add it as an MS 365 personal connector as explained in the MS doc. I used the “from OpenAPI file” method with the following file.

Flow creation 

Now that the connector is published on your Office 365 space, you can create a flow using it. I copy-pasted an existing template, “Post an update to my company's Yammer page”, that I customized with an extra step to call my connector and post its response into the Yammer room.

Then you can test it and see that the result is not exactly what we expected…

I was hoping to have the picture posted on Yammer like it is the case when I post the URL myself in the UI, like:

The reason it did not work is that the Yammer connector does not allow posting pictures. There is already an open request on the MS side so that it is supported in the future (feel free to vote for it).


I will wait until MS implements that, like I did for the JS functions in Excel Online 😉, and in the meantime I will try to do a REST call directly to the Yammer API to see if I can post an image directly (without going through Flow).

Let’s encrypt with docker

A few years ago I wrote this post to explain how to get an SSL certificate with Let's Encrypt.

They now have a docker version which is even easier to use:

docker run -it --rm --name certbot -v "/etc/letsencrypt:/etc/letsencrypt" -p 80:80 -v "/var/lib/letsencrypt:/var/lib/letsencrypt" certbot/certbot certonly --standalone --email

Interaction with jobs on remote kube cluster

This demo is made on a Windows 10 computer. It shows how to interact with a Kube cluster in python, start a simple job on it, wait for the job to end and get its status/logs. The tricky part is getting the logs, since the job object does not directly contain the info, and thus we need to get the pod associated with the job and read the pod logs.

Kube setup (server) 

I use the Kube functionality of Docker for Windows: start Kubernetes, which is part of Docker for Windows.

Once the kube cluster is up and running you can interact with it from a terminal (I use PowerShell) that we will call T1 and that will be used for the kube server-side interaction.

Create service account 

PS C:\Users\charl> kubectl create serviceaccount jobdemo 
serviceaccount "jobdemo" created 

Get full permission to the SA (not clean but not the goal here) 

PS C:\Users\charl> kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --serviceaccount default:jobdemo "cluster-admin-binding" created

Get secret token of the SA (from the secrets) 

PS C:\Users\charl> kubectl get secret jobdemo-token-jk59q -o json 


It's base64… decode it into a string and save it.

Python script setup (client) 

Start a new PowerShell terminal (let's call it T2) to work on this part. Build the container from the dockerfile included in the repo:

PS C:\Code\kubejobs> docker build -t quicktest . 

Start the container and mount the repo in the container (not mandatory, but it allows editing the code from Windows):

PS C:\Code\kubejobs> docker run -it -v C:\Code\kubejobs:/mountfolder quicktest 

Export the token (the decoded version of the base64 token we retrieved previously):

[root@54a8362da7d1 mountfolder]# export KUBE_TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI...Es5howDOSTWqzl9JQAFH_xCpo1Q 

Start the python script 

[root@54a8362da7d1 mountfolder]# python3.6 
Starting job 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is still running. Sleep 1s 
Checking job status 
Job is over 
getting job pods 
Checking job status 
getting job logs 
Job is over without error. Here are the logs:  3.141592653589793 
Cleaning up the job 
[root@54a8362da7d1 mountfolder]# 

You can also check the job creation while the python script is running (but not after, because the job is deleted at the end) from the T1 terminal used before.

PS C:\Users\charl> kubectl get jobs 
pi        1         0            3s 

As you can see, we also print the logs of the job. I use this python script daily when I spawn jobs on a remote Kube cluster from a Jenkins server (my Jenkins jobs just spawn Kube jobs on a remote cluster and wait for them to be over). I'm sharing it hoping it can help some people.

The code is quite simple and the only tricky part is to get the pod associated with the job so that we can get the logs (BTW this may not work in case the job spawns several pods).

The Job-Pod link is done with a selector, since it was the recommended method when I wrote the script.

Full code is here: