Recent Posts (all)

Open-plan office

I forgot to link to this very good article from David. Having almost 6 kids, I am usually not bothered by noise outside my head, but by noise inside my head.

Noise inside my head comes mostly from not having a long or well-defined task to work on. Short, ill-defined tasks are exactly the kind that tend to come in through Slack1.

To fix it I offload most of these tasks — before they reach me — to people who are better at handling them.

Companies that are savvy about the hidden costs of interruptions should know to be penny foolish and pound wise on this one.


  1. Instant messaging in general: I have nothing against the company, besides the fact that they created a Mac application that loves to eat all my resources and doesn’t feel Mac-like. ↩︎

Posted on 02 Jul 2019

Netlified

A couple of days ago I moved the blog and the website over to Netlify.

The reasons are simple:

Every time I’ve read comments about Netlify, the message was the same: it’s easy to set up, and once you’ve set it up, you can forget about it.

I gave myself five minutes to try: if I could do it, good; otherwise I would stay on the current setup.

Well, not only could I do it, but the whole process was indistinguishable from magic. They took care of everything for blog.lanzani.nl and lanzani.nl, including serving the naked domain (previously it would be forwarded to www.lanzani.nl, something that always bothered me).

As they integrate with GitHub and hugo, I don’t even need to build the website anymore: they do it for me every time I push to the repo!
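
If you prefer to keep the build settings in the repository rather than in Netlify’s UI, a netlify.toml in the repository root does the trick. A minimal sketch for a Hugo site could look like this (the publish directory assumes Hugo’s default public/ output folder):

[build]
  command = "hugo"
  publish = "public"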

So the end result is that you can read this blog without fearing that someone has tampered with the content!

Posted on 30 Jun 2019

Get started with miniflux

This is another post that is totally a note for my future self.

I don’t write on this blog often. But what I do, a lot, is read what other people write on their blogs. I do that through the wonderful capabilities of RSS.

Doing so in a sane manner involves a few moving parts: something that fetches and stores the feeds, and something to read them with.

Up until a couple of weeks ago I was using a simple pair: Stringer, hosted on a spare GCP machine, and Unread on iOS. Stringer offers a nice reading experience on the web, so I didn’t need an app for my Mac.

However, once the spare machine wasn’t spare anymore, I started looking for something else; I also didn’t like that Stringer had become an unmaintained Ruby app. I have nothing against Ruby, but an unmaintained app means running potentially insecure software.

Plenty of RSS-readers-as-a-service have popped up since Google Reader shut down:

The only “problem” is that these services cost approximately $2 to $5 a month. Can I do something for free?

At first I thought about running stringer on one of my Raspberry Pis. They are pretty powerful and I don’t have that many feeds I need to read.

But if I do that, I ideally want to have everything working in a semi-automatic fashion, so that there’s little to no manual work if the SD card in my Raspberry Pi goes south.

The easiest solution — for single machine scenarios and where seconds of downtime are OK — is to use Docker with docker-compose.

This is where, however, Ruby and the Ruby version stringer uses (2.3.3) are painful:

If you’re a bit like me, the above feels like a chore and a source of many headaches (that’s probably why all those RSS-as-a-service offerings exist in the first place).

So I turned to Reddit to see what others were doing. While searching here and there, I came across a thread that mentioned miniflux.

When I looked at the website, I couldn’t believe it: it has everything I need and then some more:

Requirements

Now that I have settled on the server, what else do I need?

The solution

After a bit of googling, I’ve come up with the following folder structure and files to serve my needs:

.
├── data
│   └── nginx
│       └── app.conf
├── docker-compose.yml
└── init-letsencrypt.sh

Let’s see the content of each file, starting with docker-compose.yml:

version: '3'
services:
  database.postgres:
    image: postgres:9.6-alpine
    container_name: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=<insert_pg_password>
      - POSTGRES_USER=miniflux
      - POSTGRES_DB=miniflux
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    restart: always

  service.rss:
    image: miniflux/miniflux:latest
    container_name: miniflux
    ports:
      - 8080:8080
    depends_on:
      - database.postgres
    environment:
      - DATABASE_URL=postgres://miniflux:<insert_pg_password>@database.postgres:5432/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=<insert_miniflux_password>
    restart: always

  nginx:
    image: nginx
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

  certbot:
    image: tobi312/rpi-certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

  watchtower:
    image: v2tec/watchtower:armhf-latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    command: --interval 604800

The docker-compose.yml wires up quite a few images:

- postgres: the database where miniflux stores its data;
- miniflux: the RSS reader itself, listening on port 8080;
- nginx: the reverse proxy that will terminate TLS in front of miniflux;
- certbot: renews the Let’s Encrypt certificates every 12 hours;
- watchtower: keeps the images up to date, checking once a week.
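
Before throwing TLS into the mix, you can sanity-check the database and miniflux on their own. A quick check, assuming docker-compose.yml sits in the current directory, could be:

docker-compose config                               # validate the compose file
docker-compose up -d database.postgres service.rss  # start only the database and miniflux
docker-compose logs -f service.rss                  # watch miniflux run its migrations
curl -I http://localhost:8080                       # miniflux should answer on port 8080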

For nginx the app.conf file is needed. Its content is

server {
    listen 80;
    server_name <my_domain>;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name <my_domain>;
    server_tokens off;
    resolver 127.0.0.11;
    set $upstream service.rss:8080;

    ssl_certificate /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass  http://$upstream;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Ssl     on;
        proxy_set_header    X-Forwarded-Proto   $scheme;
        proxy_set_header    X-Frame-Options     SAMEORIGIN;

        client_max_body_size        100m;
        client_body_buffer_size     128k;

        proxy_buffer_size           4k;
        proxy_buffers               4 32k;
        proxy_busy_buffers_size     64k;
        proxy_temp_file_write_size  64k;
    }
}
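
Once the containers are up, nginx itself can tell you whether it is happy with this configuration; a quick check (using the service name from the compose file above) would be:

docker-compose exec nginx nginx -t         # test the configuration files
docker-compose exec nginx nginx -s reload  # reload if the test passes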

There’s not much to explain in the nginx configuration itself. The last snippet is init-letsencrypt.sh. The script “bootstraps” nginx for the first time: we want HTTPS, but we cannot have it without certificates, and we cannot request certificates without a running nginx. So the script creates fake certificates, starts nginx, removes the fake certificates, and then requests real ones through Let’s Encrypt. The content is quite long, but here you go:

#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=(<my_domain>)
rsa_key_size=4096
data_path="./data/certbot"
email="" # Adding a valid address is strongly recommended
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi


if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="$data_path/conf/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
openssl req -x509 -nodes -newkey rsa:1024 -days 1 \
  -keyout $path/privkey.pem \
  -out $path/fullchain.pem \
  -subj '/CN=localhost'
echo


echo "### Starting nginx ..."
docker-compose up --force-recreate -d nginx
echo

echo "### Deleting dummy certificate for $domains ..."
docker-compose run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo


echo "### Requesting Let's Encrypt certificate for $domains ..."
#Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

docker-compose run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
docker-compose exec nginx nginx -s reload

For this and for the app.conf file I took inspiration from the nginx-certbot repository, with some modifications: I’m using rpi-certbot instead of certbot, together with the openssl utility that comes with the Raspberry Pi (if yours doesn’t have it, sudo apt-get install openssl will get it).
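
If you are worried about hitting Let’s Encrypt rate limits on the first attempt, the staging variable at the top of the script is there exactly for that; a cautious first run could look like this:

sed -i 's/^staging=0/staging=1/' init-letsencrypt.sh  # dry run against the staging CA
bash init-letsencrypt.sh
sed -i 's/^staging=1/staging=0/' init-letsencrypt.sh  # switch back and request the real certificate
bash init-letsencrypt.sh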

Outside the Raspberry

The outside world needs to know where to find your Raspberry Pi and should be able to get there. Doing so is outside the scope of this post, but in general it boils down to a (sub)domain pointing at your home connection’s public IP (a dynamic DNS service helps if that IP changes) and your router forwarding ports 80 and 443 to the Pi.
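
A quick way to check both parts once everything is running (replace <my_domain> as in the files above):

dig +short <my_domain>      # should print your home connection's public IP
curl -I http://<my_domain>  # should reach the nginx container once port 80 is forwarded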

Start everything

Once all these files are in place, you are in the right folder, and you have updated the various variables marked with <> (passwords and domain name) in the files above, you can get rolling with:

curl -sSL https://get.docker.com | sh
pip install --user docker-compose
bash init-letsencrypt.sh
docker-compose up
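
Once everything comes up cleanly, you probably want to run the stack detached, so it keeps running after you log out (the restart policies in the compose file take care of reboots):

docker-compose up -d  # run the whole stack in the background
docker-compose ps     # all five containers should be listed as Up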

Now visit your (sub)domain and log in with admin as the user and the password you chose. Enjoy the rest!

Posted on 28 Jun 2019

Git patch workflow

This is totally a note for my future self.

Sometimes when working with git I find myself having to create a patch, because I had to merge master into my feature branch more than once, but I want to end up with a single commit when opening a PR.

Assuming I want to merge against master, and my branch is called feature, I can do the following

git checkout feature
git merge master
git diff master..feature > patch.diff
git checkout master  # the new branch should stem from master
git checkout -b feature-patch  # need a different name
git apply --ignore-space-change --ignore-whitespace patch.diff
git add .  # assuming it's a clear working directory, besides the branch
git commit -m "Add reassuring commit message here"
git push -f origin feature-patch:feature  # this will push feature-patch on the feature branch

It seems involved, but once you get the hang of it, it’s pretty fast.
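
If you want to be sure the squashed branch contains exactly the same changes as the merged one, a quick sanity check (using the branch names above) is:

git diff feature feature-patch  # no output means the two branches have identical trees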

Posted on 21 Jun 2018

A biased view of the whole Mac vs PC discussion

The Mac vs PC debate is practically as old as the younger of the two platforms. I’ve tried to take a biased look at the whole thing again.

My first machine was an IBM x286. I was 6 years old and our neighbor, who worked for IBM, thought my brother and I would be interested in playing with a computer. Boy, was he right!

For practically 15 years I used DOS, Windows 3.1, 95, 98, and Windows XP.

When I started university, I got myself an Acer laptop. What a piece of junk that was. After a couple of months (in 2005) I wiped it and installed the second Ubuntu version ever released1. At the university we were using Scientific Linux: Ubuntu felt like fresh air.

When I decided to move to the Netherlands in 2006, I bought my first Mac. What convinced me was the iPod: it was so much more intuitive than any other electronic device I had used up to that moment that I thought that if Macs were half that good, I was missing out.

I was right. My first Mac was an iBook G4. The battery would last 7 hours and after replacing the hard drive with an 80GB 7200rpm variant and upgrading the RAM to 1GB (IIRC) it was flying compared to the Acer.

That wasn’t the only great thing about the Mac. Everything felt as good as the iPod.

I kept using Macs during my PhD, with various iMacs, MacBook Pros, MacBook Airs, and whatnot. When I first got into industry I had to use a Dell with Windows 7 Enterprise Edition. It was a piece-of-junk, commodity enterprise laptop.

Once I joined GoDataDriven I immediately got a Macbook Pro.

In the meantime, however, something happened. Microsoft was changing course, developing an open-source-friendly attitude, and people were getting somewhat discontented with the hit Apple’s software quality was taking, reportedly due to the huge success of the iPhone.

After two years I decided to get a Dell XPS 15" for work. I wanted to challenge myself and see how far I could go using Windows (10). After two years of use, I went back to a Mac. Why?

Where the Dell shined

There are a number of areas where the Dell shined for me. The Dell:

On the other hand, the Dell runs Windows, and this also has a number of advantages:

Given all the above, I could work on my laptop just fine for two years. I installed all the various Python packages, virtualenvs and whatnot (without Anaconda), Scala, Spark, Docker, and databases such as Postgres and MySQL. I even got PyCuda working with the NVidia GPU I had.

Verdict: if you want to use a Windows machine for the above, you will be fine. Don’t drink too much Apple Kool-Aid.

Where the Mac shines

That said: why did I come back? The single biggest factor is applications. I think macOS has much better frameworks for developing applications.

It is also true that third party apps usually cost more, but they give me a much better experience. In particular I love:

Besides apps there are a lot of other touches that I really enjoy about the Mac:

Where the Mac falls short

Not all is good in macOS land however:

As for work: I could install all the stuff I wanted, excluding PyCuda because, guess what, these things don’t ship with an NVidia card, no matter how much money you have.

Conclusion

Well, I already gave it away: I’m back to Mac, apps being the biggest factor, but I gained a lot of nice touches in the switch!


  1. Feeling old now. ↩︎

  2. Some things are worse, such as the trackpad, some are better, such as the 4K screen. ↩︎

  3. There are third party applications in Windows that offer the same behavior, but having it built in the OS is always more stable. ↩︎

Posted on 03 Nov 2017

Unread for iOS got acquired by an awesome developer!

I always hoped app.net would have become a new Twitter.

Not only was it friendlier than Twitter towards developers and users, but I was delighted every time I interacted with it: I was using Riposte. If you followed the previous link, I can tell you that I share Manton’s opinion:

Riposte is arguably the best social networking client out there.

The developer behind Riposte, Jared Sinclair, also developed another delightful app: Unread, a beautiful RSS reader for iPhone and iPad.

Unread, luckily, didn’t end up the way Riposte did. It was acquired by Supertop, but, due to the success of Supertop’s other app, Castro, it did not get the attention it deserved.

I was therefore both delighted and scared when Supertop announced that they had sold Unread to Golden Hill Software. Delighted because the new owner has more time to work on it, and scared because change is always scary.

Being scared turned out not to be such an unjustified feeling: version 1.7 broke my setup with Stringer, a self-hosted, anti-social RSS reader1.

Basically, Stringer offers a Fever-compatible API. The new version of Unread tried to do some smart things with API features that Stringer was not offering.

I immediately wrote to John, Golden Hill Software’s owner. He quickly replied that he would look into it.

A few days later a new version of Unread came out but, alas, no fix for my issue.

I was kinda disappointed. I had to wait for the next release.

But no. John wrote me saying he pushed a PR to Stringer fixing the issue.

What a great developer! If you’re on iOS, please give it a spin. As a bonus, the newest version merges the iPhone and iPad versions, which is timely, as I realized some months ago that Mr. Reader, my favorite iPad RSS reader, was no longer being developed.

So give it a go, and support a great developer!


  1. I don’t think anyone is surprised when I characterize myself as anti-social. ↩︎

Posted on 08 Aug 2017

No system holding your private information is failing

Yesterday somebody called my wife. She didn’t pick up. Today (Saturday), upon picking up, the voice on the other end introduced himself as a Vodafone employee. He said one of Vodafone’s systems had suffered some data loss and they wanted to check some personal information.

We recently moved, so my wife assumed it might be related. On the other hand, she hears the worst stories about phishing and scamming from me, so she got suspicious. She thought: “Why aren’t they sending me an email or a letter?” and asked to be called one hour later, when I could pick up the phone. The caller pushed back, saying it would only take 20 seconds. She pushed back again, so he agreed to call back.

I know that no system containing the sort of information you could give over the phone would fail without a backup lying around. Especially not at a company the size of Vodafone.

But maybe my wife misunderstood something; it could have been related to a new contract, etc. She said that in the background many people were talking, like in a real call center.1

So I called Vodafone myself. The lady on the phone told me I was the third person that day notifying them about it. She told me no systems had failed.

“Good”, I thought. Let me handle the guy. An hour later I picked up the phone. “Zakaria from Vodafone” he told me.

At that point, I already had LinkedIn open. I wanted to ask him his family name to look him up there.

“What’s your family name Zakaria?”. “I’m just here to ask some details about… “.

“I said what’s your family name Zakaria”.

He hung up.

There was not much else to do, but what followed shows that Vodafone really handled it with class via their Twitter account. At 16:12 I tweeted:

@vodafoneNL someone is calling your clients, pretending to be you, and asking personal info. Time to send an SMS around?

At 16:45 they got back to me, and then via PM they asked:

Very good that you send us this message Giovanni. If I understand correctly someone called you pretending to be someone from Vodafone Customer Service and asks you for personal information. Can you tell me exactly how this conversation went ? And can you give me what day and time it was? I also need your mobile number so I can figure this out with our security department :-).

After telling them what happened, they promptly replied:

Thanks for sending the information. That does sound like a very strange conversation. I’ll send it right away to our security department for research. Good that you indicate this to us. If you encounter anything suspicious again let us know, then we investigate this immediately.

I have no idea what they’re going to do with it, but I really do hope they stop Zakaria and the likes of him!


  1. Although when I was on the phone later, the background noise from other people talking was much louder than in professional call centers. ↩︎

Posted on 25 Feb 2017

Use tab to cycle through Visual Studio Code completion

Sometimes, instead of using NeoVim, I do like to use Visual Studio Code (with Vim keybindings).

Visual Studio Code is a great editor with amazing Intellisense and debugging capabilities (for Python as well). There is however one thing that I could not swallow: the tab behavior when a completion was suggested.

With (Neo)Vim shift+tab and tab respectively cycle up and down the completion list.

I wanted to have the same experience in Visual Studio Code. After some Googling and a lot of trial and error, this is what I came up with (it works pretty nicely!). You can paste the text below into the file that opens after you click File -> Preferences -> Keyboard Shortcuts:

[
    {
        "key": "tab",
        "command": "selectNextQuickFix",
        "when": "editorFocus && quickFixWidgetVisible"
    },
    {
        "key": "shift+tab",
        "command": "selectPrevQuickFix",
        "when": "editorFocus && quickFixWidgetVisible"
    },
    {
        "key": "tab",
        "command": "selectNextSuggestion",
        "when": "editorTextFocus && suggestWidgetMultipleSuggestions && suggestWidgetVisible"
    },
    {
        "key": "shift+tab",
        "command": "selectPrevSuggestion",
        "when": "editorTextFocus && suggestWidgetMultipleSuggestions && suggestWidgetVisible"
    }
]

The commands should be pretty self-explanatory! To accept the suggestion, use enter (by default both tab and enter accept, but tab is now remapped to cycling).

Posted on 13 Oct 2016

Make jupyter notebook work in WSL

In case you are playing around with the Windows Subsystem for Linux and jupyter, you might have noticed this error:

Invalid argument (bundled/zeromq/src/tcp_address.cpp:171)

The issue, which arises because Bash on Windows does not currently expose any network interfaces, has been fixed by Adam Seering in his WSL PPA.

The fix he proposed, though, only works when you install jupyter from the Ubuntu repositories. In case you want to have it working with pip, I found the following to be helpful:

pip uninstall pyzmq
sudo add-apt-repository ppa:aseering/wsl
sudo apt-get update
sudo apt-get install libzmq3 libzmq3-dev
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
pip install --no-use-wheel -v pyzmq
pip install jupyter
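
To check that pyzmq actually picked up the libzmq from the PPA, and to make the library path stick for new shells, something along these lines should work:

echo 'export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu' >> ~/.bashrc  # persist the library path
python -c "import zmq; print(zmq.zmq_version())"  # prints the libzmq version in use
jupyter notebook --no-browser                     # the tcp_address error should be gone
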
Posted on 10 Aug 2016

Hugola

Approximately a month ago I fell for the new Dell XPS 15. It had the right price/spec combination (I went for the high-end model with the 512GB SSD), so I went for it. This had, of course, some drawbacks, the main one being leaving a *NIX-like system.

Of course I am not the first guy making the switch, so there are lots of helpful resources around (although I might write one for the data scientists using Python).

One thing struck me though: Jekyll has no official Windows version. There is an unofficial guide, but I wasn’t too eager to follow it. So I started playing around with Pelican, as I know some Python and as it is the tool I already use for the GoDataDriven blog and website.

Around the same time, I saw the announcement on Hacker News of Hugo 0.15, which introduces a Jekyll import tool. And, maybe more importantly, it offers a simple .exe binary that you can just drop anywhere and call with hugo (or hugo.exe).

So in between things (mostly on the train) I gave it a go and issued:

hugo import jekyll jekyll_blog_folder hugo_blog_folder
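
To check the result of the import, Hugo’s built-in server is enough (assuming the imported site ended up in hugo_blog_folder):

cd hugo_blog_folder
hugo server  # live preview on http://localhost:1313
hugo         # build the static site into public/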

Magically, all of my posts were converted. The code highlighting and the front matter were adjusted according to Hugo syntax (although I had to adjust the URLs). What was left out was the theme.

So the longish part of the transition began: rewriting my old theme for Hugo. Although the process was not difficult, some extra docs would have helped. Moreover, the way the two theming engines work is different. But in the end I finished, and now my blog (and website) is running with the hugola theme (from hugo + Lanzani, which also nicely resembles ugola, the Italian word for uvula).

In the process I removed some cruft: I dropped jQuery, and therefore Bigfoot, and rewrote the only other thing that was using jQuery: the activation of the blue highlight on the current page in the sidebar. The old code was a typical example of jQuery usage:

$( ".blog-active" ).addClass( "menu-item-divided pure-menu-selected" );
$( ".blog-active" ).removeClass( "blog-active" );

The above snippet searches for the blog-active element and adds the classes needed for the highlighting. It turns out that you can do the same in pure (modern) JavaScript if you turn blog-active, or whatever other class you’re using, into an id. Then you’re off with the following:

window.onload = function() {
    document.getElementById("blog-active").className = "menu-item-divided pure-menu-selected pure-menu-item";
};

While these changes do not make it easier to blog, they are a sign that I should have a bit more time to do so. So hopefully I can begin crunching out posts again!

Posted on 03 Dec 2015