Recent Posts (all)

Causes of burnout

Today HBR published an article about some causes of burnout1. One struck a chord with me, and, as a physicist who went down the managerial path, I’m sure I’m not the only one:

Workload […]: assess how well you’re doing in these key areas: planning your workload, prioritizing your work, delegating tasks, saying no, and letting go of perfectionism.

I think they’re all tightly coupled: if you’re good at planning, you must have prioritized properly, knowing what you can and cannot accomplish with your time; and if you have prioritized, you must have said no and delegated tasks. If you’re good at planning, you also can’t be a perfectionist, because perfection is difficult to plan.

I mostly struggle with three of them: delegating tasks, letting go of perfection, and saying no.

Delegating tasks is hard because I can’t let go of perfection, and because I am usually not good at communicating the end result. And I am not good at communicating the end result because I delegate too little: if I were to delegate more, I would learn — from all the times it went wrong — what things are important to communicate.

Since I know that, I also know that the first times I delegate, the end result will not be what I want: again, I can’t let go of perfection.

Luckily I’m learning the hard way that I need to let go quickly:

  • Before my last holiday I was really close to losing it; I felt it, and it scared me;
  • As the line of business I am running grew, I let potential opportunities slide, as I didn’t have time.

So, right before the summer, I tricked myself into starting to delegate. Two things helped me out:

  • My daughter was about to be born (she arrived yesterday), so if I wanted to enjoy time with her, I had to have my hands free from work;
  • I said to myself that delegating didn’t mean recognizing that somebody else was better than me at a task in absolute terms2, or that I couldn’t do the job just as well: other people simply had either more time, or more focus, or better tools, or more experience in doing it. In other words, I could do it myself, but it would not be efficient.

So here I am now, with time on my hands to write this post :)

  1. Six according to the Areas of Worklife model, but I’m sure there are more, depending on who you ask. ↩︎

  2. Though this is frequently the case. ↩︎

Algorithms to drive engagement

Brent Simmons doesn’t mince words when he talks about algorithms to drive engagement, honed and “abused” by companies such as Facebook and Twitter:

My hypothesis: these algorithms — driven by the all-consuming need for engagement in order to sell ads — are part of what’s destroying western liberal democracy, and my app will not contribute to that.

Open-plan office

I forgot to link to this very good article from David. Having almost 6 kids, I am usually not bothered by noise outside my head, but by noise inside my head.

Noise inside my head comes mostly from not having a long or well-defined task to work on. Short, ill-defined tasks are the kind that tends to come in through Slack1.

To fix it I offload most of these tasks — before they reach me — to people who are better at handling them.

Companies that are savvy about the hidden costs of interruptions should know to be penny foolish and pound wise on this one.

  1. Instant messaging in general: I have nothing against the company besides creating a Mac application that loves to eat all my resources and that doesn’t feel Mac-like. ↩︎


Moving to Netlify

A couple of days ago I moved the blog and the website over to Netlify.

The reasons are simple:

  • The site was previously hosted on S3 + Cloudfront, but I didn’t have https enabled;
  • I didn’t know how to enable https, although it must not be too hard;
  • The application I was using to deploy to S3 — called Stout — was unmaintained and growing old.

Every time I read comments about Netlify, the message was the same: it’s easy to set up, and once you’ve set it up, you can forget about it.

I gave myself 5' to try it: if I could do it, good; otherwise I would stay on the current setup.

Well, not only could I do it, but the whole process was indistinguishable from magic. They took care of everything for me, including serving the naked domain (previously it would just be forwarded, something that always bothered me).

As they integrate with GitHub and Hugo, I don’t even need to build the website anymore: they do it for me every time I push to the repo!

So the end result is that you can read this blog without fearing that someone has tampered with the content!

Get started with miniflux

This is another post that is totally a note for my future self.

I don’t write on this blog often. But what I do, a lot, is read what other people write on their blog. I do that through the wonderful capabilities of RSS.

Doing so in a sane manner involves a few moving parts:

  • One or multiple feeds you want to read. This is the easy part;
  • A server that keeps track of them;
  • The server should provide a good interface to read on the web;
  • Bonus points if the server also provides an API so that I can use apps to read the articles.

Up until a couple of weeks ago I was using a simple pair: Stringer, hosted on a spare GCP machine, and Unread on iOS. Stringer offers a nice reading experience on the web, so I didn’t need an app for my Mac.

However, as the spare machine wasn’t spare anymore, I started looking for something else; I also no longer liked the fact that Stringer was an unmaintained Ruby app. I have nothing against Ruby, but an unmaintained app means running potentially insecure software.

Many RSS readers as a service have sprung up since Google Reader shut down:

  • Feedbin
  • Newsblur
  • Bazqux
  • The Old Reader
  • And many more.

The only “problem” is that these services cost from approximately $2 to $5 a month. Can I do something for free?

At first I thought about running stringer on one of my Raspberry Pis. They are pretty powerful and I don’t have that many feeds I need to read.

But if I do that, I want everything working in a semi-automatic fashion, so that there’s little to no manual work if the SD card in my Raspberry Pi goes south.

The easiest solution — for single machine scenarios and where seconds of downtime are OK — is to use Docker with docker-compose.

This is where, however, Ruby and the Ruby version stringer uses (2.3.3) are painful:

  • There were no official Ruby 2.3.3 images for ARM (that I could find);
  • Updating to the latest 2.3 version (2.3.8 at the time of this writing) would trigger some bug that I was not able to fix;
  • Updating to 2.3.4 would have everything working, but:
  • The image for stringer using Ruby 2.3.4 is 857MB: not exactly small;
  • As I would need to customize the stringer Docker image heavily (to make it compatible with 2.3.4, plus some other small details), I would have to rebuild the image from time to time as new security updates get pushed (if they ever do);
  • Doing the above is doable with one of the many CI/CD services out there (for example Azure Pipelines), but since the build needs to target ARM from an x86 server, extra care and configuration is needed (buildx is not generally available yet).

If you’re a bit like me, the above feels like a chore and a source of many headaches (that’s probably why all those RSS-as-a-service offerings exist in the first place).

So I turned to Reddit to see what others are doing. While searching here and there, I came across a thread where they mentioned miniflux.

When I looked at the website, I couldn’t believe it: it has everything I need and then some:

  • Easy to get started with;
  • Provides ARM images out of the box;
  • Written in Go and hence very small to host: compared to stringer’s 857MB, the miniflux Docker image is 17MB, 50x smaller;
  • Written in Go and hence faster than non-optimized Ruby (and it certainly feels a lot faster than stringer);
  • Like stringer, it implements the Fever API, meaning I can use it with Unread.
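
If you want to check the size claim yourself, pulling the image and listing it is enough (a quick sketch; any recent tag will do):

docker pull miniflux/miniflux:latest
docker images miniflux/miniflux  # the SIZE column shows the uncompressed image size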


Now that I have settled on the server, what else do I need?

  • I already said that I want Docker support;
  • Everything should be scriptable, for at least 99% of it (I am OK running a couple of scripts manually);
  • I want https: if I’m entering a password it should be secure;
  • I (ideally) want the latest security patches quickly, without much maintenance;
  • It should be easy.

The solution

After a bit of googling, I’ve come up with the following folder structure and files to serve my needs:

├── data
│   └── nginx
│       └── app.conf
├── docker-compose.yml

Let’s see the content of each file.

version: '3'

services:
  database.postgres:
    image: postgres:9.6-alpine
    container_name: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=<insert_pg_password>
      - POSTGRES_USER=miniflux
      - POSTGRES_DB=miniflux
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    restart: always

  service.rss:
    image: miniflux/miniflux:latest
    container_name: miniflux
    ports:
      - 8080:8080
    depends_on:
      - database.postgres
    environment:
      - DATABASE_URL=postgres://miniflux:<insert_pg_password>@database.postgres:5432/miniflux?sslmode=disable
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=<insert_miniflux_password>
    restart: always

  nginx:
    image: nginx
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    # Reload nginx every 6 hours so renewed certificates get picked up
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

  certbot:
    image: tobi312/rpi-certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    # Try to renew the certificates every 12 hours
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

  watchtower:
    image: v2tec/watchtower:armhf-latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    command: --interval 604800

The docker-compose.yml contains quite a few services:

  • The postgres one is simple: we need a database with a user;
  • The miniflux one is configured as per their docs;
  • nginx is also pretty simple, with a couple of touches: we mount the configuration folder, the letsencrypt certificates, and the www folder that holds the letsencrypt verification files;
  • rpi-certbot is an ARM-ready image with certbot;
  • watchtower is a docker image that checks (in my case every 604800 seconds, i.e. weekly) whether all the images I’m using are up to date. If not, the image is updated. This is especially relevant for nginx and miniflux, which are the containers facing the users.
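
With the service names above, bringing everything up and checking that it is healthy looks roughly like this (run from the folder containing docker-compose.yml):

docker-compose up -d                 # start the whole stack in the background
docker-compose ps                    # every service should show as "Up"
docker-compose logs -f service.rss   # follow miniflux's logs while you test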

For nginx, the app.conf file is needed. Its content is:

server {
    listen 80;
    server_name <my_domain>;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name <my_domain>;
    server_tokens off;
    # Docker's embedded DNS; needed because proxy_pass uses a variable
    resolver 127.0.0.11 valid=30s;
    set $upstream service.rss:8080;

    ssl_certificate /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass  http://$upstream;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Ssl     on;
        proxy_set_header    X-Forwarded-Proto   $scheme;
        proxy_set_header    X-Frame-Options     SAMEORIGIN;

        client_max_body_size        100m;
        client_body_buffer_size     128k;

        proxy_buffer_size           4k;
        proxy_buffers               4 32k;
        proxy_busy_buffers_size     64k;
        proxy_temp_file_write_size  64k;
    }
}

There’s not much to explain here. The last snippet is the script that “bootstraps” nginx for the first time: we want https, but we cannot have it without certificates, and we cannot request certificates without a running nginx. So this script creates fake certificates, starts nginx, removes the fake certificates, and then requests real ones through letsencrypt. The content is quite long, but here you go:


#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=(<my_domain>)
rsa_key_size=4096
data_path="./data/certbot"
email="" # Adding a valid address is strongly recommended
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
fi

echo "### Creating dummy certificate for $domains ..."
path="$data_path/conf/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
openssl req -x509 -nodes -newkey rsa:1024 -days 1 \
  -keyout $path/privkey.pem \
  -out $path/fullchain.pem \
  -subj '/CN=localhost'

echo "### Starting nginx ..."
docker-compose up --force-recreate -d nginx

echo "### Deleting dummy certificate for $domains ..."
docker-compose run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

docker-compose run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot

echo "### Reloading nginx ..."
docker-compose exec nginx nginx -s reload
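
Assuming you saved the script as init-letsencrypt.sh (the name used by the nginx-certbot repository mentioned below), running it boils down to:

chmod +x init-letsencrypt.sh
./init-letsencrypt.sh  # prefix with sudo if your user is not in the docker group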

For this and for the app.conf file I took inspiration from the nginx-certbot repository, with some modifications: I’m using rpi-certbot instead of certbot, and the openssl utility that comes with the Raspberry (if yours doesn’t have it, use sudo apt-get install openssl to get it).

Outside the Raspberry

The outside world needs to know where to find your Raspberry and should be able to get there. Doing so is outside the scope of this post, but in general:

  • Find your external IP address (and hope it’s static, or use a service such as Dyn);
  • Update the DNS of your (sub)domain so that it points to that address;
  • Assign a static internal IP to your Raspberry from your router;
  • Route ports 80 and 443 in your router so that the traffic is handled by the Raspberry (port 80 is necessary for the letsencrypt verification); a quick sanity check follows below.
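
A minimal sanity check, assuming <my_domain> is your (sub)domain and you run it from outside your network (a phone hotspot works):

dig +short <my_domain>       # should print your external IP address
curl -I http://<my_domain>/  # should reach the nginx on the Raspberry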

Start everything

Once all these files are in place, you are in the right folder, and you have updated the various variables marked with <> (passwords and domain name) in the files above, you can get rolling with:

curl -sSL https://get.docker.com | sh  # install Docker via the convenience script
pip install --user docker-compose
docker-compose up

Now visit your (sub)domain and log in with admin as the user and the password you have chosen. Enjoy the rest!
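
If you want to double-check that the certificate being served is the real Let’s Encrypt one (and not the dummy one from the bootstrap script), openssl can tell you:

echo | openssl s_client -connect <my_domain>:443 -servername <my_domain> 2>/dev/null | openssl x509 -noout -issuer -dates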

Git patch workflow

This is totally a note for my future self.

Sometimes when working with git I find myself having to create a patch, because I had to merge master into my feature branch more than once, but I want to have a single commit when opening a PR.

Assuming I want to merge against master, and my branch is called feature, I can do the following:

git checkout feature
git merge master
git diff master..feature > patch.diff
git checkout master  # the new branch should stem from master
git checkout -b feature-patch  # need a different name
git apply --ignore-space-change --ignore-whitespace patch.diff
git add .  # assuming it's a clean working directory, besides the branch
git commit -m "Add reassuring commit message here"
git push -f origin feature-patch:feature  # this will push feature-patch on the feature branch

It seems involved, but once you get the hang of it, it’s pretty fast.
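
For comparison, git’s built-in squash merge reaches a similar end state with fewer steps (an alternative sketch, not the workflow above):

git checkout master
git checkout -b feature-patch
git merge --squash feature  # stages all of feature's changes as one pending commit
git commit -m "Add reassuring commit message here"
git push -f origin feature-patch:feature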

A biased view of the whole Mac vs PC discussion

The Mac vs PC debate is practically as old as the younger of the two platforms. I’ve tried to take a biased look at the whole thing again.

My first machine was an IBM x286. I was 6 years old and our neighbor was working for IBM and he thought my brother and I would be interested in playing with a computer. Boy, was he right!

For practically 15 years I’ve used DOS, Windows 3.1, 95, 98, and Windows XP.

When I started university, I got myself an Acer laptop. What a piece of junk that was. After a couple of months (in 2005) I wiped it and installed the second Ubuntu version ever released1. At the university we were using Scientific Linux: Ubuntu felt like fresh air.

When I decided to move to the Netherlands in 2006, I bought my first Mac. What convinced me was the iPod: it was so much more intuitive than any other electronic device I had used up to that moment that I thought that if Macs were half that good, I was missing out.

I was right. My first Mac was an iBook G4. The battery would last 7 hours and after replacing the hard drive with an 80GB 7200rpm variant and upgrading the RAM to 1GB (IIRC) it was flying compared to the Acer.

That wasn’t the only great thing about the Mac. Everything felt as good as the iPod.

I kept using Macs during my PhD, with various iMacs, Macbook Pros, Macbook Airs, and whatnot. When I first got into industry I had to use a Dell with Windows 7 Enterprise Edition. It was a piece-of-junk, commodity enterprise laptop.

Once I joined GoDataDriven I immediately got a Macbook Pro.

In the meantime, however, something happened. Microsoft was changing course, developing an open-source-friendly attitude, and people were growing discontented with the hit Apple’s software quality was taking, reportedly due to the huge success of the iPhone.

After two years I decided to get a Dell XPS 15" for work. I wanted to challenge myself and see how far I could go using Windows (10). After two years of use, I went back to a Mac. Why?

Where the Dell shined

There are a number of areas where the Dell shined for me. The Dell:

  • Came with an NVidia video card, a touch screen, and overall a sturdiness that made me feel more comfortable carrying it around. The new Macbook Pros are good, but they seem more fragile to me (it seems I’m not the only one);
  • Is considerably cheaper for similar specs2;
  • Comes with more ports than Thunderbolt 3 (it does have a single Thunderbolt 3/USB C port);
  • Has a physical ESC key. As a NeoVim user this is a big deal to me :)

On the other hand the Dell runs Windows, and this also has a number of advantages:

  • Window management is built into the OS;
  • The OS feels really fast, everywhere;
  • I was on the beta release of the OS and it felt really stable, as in one blue screen of death in several months (which is fine for a beta OS);
  • The OS has several features that made me very productive in it, such as remembering the most used folders and placing them in the favorites, letting me open Explorer and more nifty things with super easy shortcuts, etc.;
  • With the inclusion of WSL you could get a full-fledged Linux distribution inside Windows with minimal overhead;
  • Microsoft is actively listening to its users and releasing tons of stuff in each beta update. Contrast that to macOS, which feels mostly stagnant compared to its younger sibling iOS.

Given all the above, I could work on my laptop just fine for two years. I installed all the various Python packages, virtualenvs and whatnot (without Anaconda), Scala, Spark, Docker, and databases such as Postgres and MySQL. I even got PyCuda working with the NVidia GPU I had.

Verdict: if you want to use a Windows machine for the above, you will be fine. Don’t drink too much Apple Kool-Aid.

Where the Mac shines

That said: why did I come back? The single biggest factor is applications. I think macOS has much better frameworks for developing applications.

It is also true that third party apps usually cost more, but they give me a much better experience. In particular I love:

  • Mailmate: this is my favorite email client. It’s so good I don’t even know where to start;
  • Launchbar is an application launcher so complete you can hardly find something it can’t do. I’ve tried countless of these on Windows, but nothing comes close;
  • Vimr is a Neovim client. Again, nothing even remotely as polished on Windows;
  • Transmit is the golden reference for file transfer on macOS. The developers sweated every detail;
  • Things is a to-do app that doesn’t get in the way and it’s just so beautiful I can’t resist it :)
  • Soulver deserves so many words of praise I’d have to spend the night writing about it;
  • iTerm: you’d think it’s not so difficult to create a high quality terminal emulator. If I look at the state of terminal emulators on Windows, I’d say you’d be wrong.
  • is so invaluable that you realize how much you miss it only when you don’t have it anymore. Microsoft: build your own;
  • is another app that Windows should copy 1:1. There’s nothing close to it on Windows (I’ve tried!)
  • iMessage and Facetime. Sorry, my wife has an iPhone :)
  • Omnigraffle: Visio alternative, with a twist: it’s a joy to use!

Besides apps there are a lot of other touches that I really enjoy about the Mac:

  • Three-finger-selection and drag. You just drag with three fingers and you can select text and move windows around. So good!
  • HiDPI on macOS is way better than what you have on Windows; it seems a joke to leave HiDPI to apps, as Windows does;
  • The trackpad is just slightly better than the Dell’s, but vastly better than the majority of PCs;
  • Battery life is still better on the Mac, although the Dell I had had a much higher-res display, which used tons of energy. And colors on the Dell were better;
  • PDF support is everywhere, from screenshots to fonts, to every single thing in the OS;
  • Spell correction in n languages in basically every text field of the OS, without apps having to do anything fancy;
  • Access to accented letters using dead keys;3
  • When in an app, typing ⇧ ⌘ / opens up a dialog where you can search all menu items of an app. Not remembering a shortcut? Just type ⇧ ⌘ / and type it! Again, you don’t know how much you miss it until you don’t have it anymore.

Where the Mac falls short

Not all is good in macOS land however:

  • You can’t turn off the internal display when you connect an external display, unless you close the lid. I can’t imagine the difficult computer science challenges that make this still a thing;
  • No native window management. Third-party apps exist, but Windows does a much better job here;
  • The app switcher does not show a preview, and groups windows by app. I thought this wouldn’t bother me much, but after two years on Windows, it does;
  • (High) Sierra feels more unstable than the Windows beta I was running. It doesn’t feel like an OS coming out of the richest company on earth.

As for work: I could install all the stuff I wanted, excluding PyCuda because, guess what, these machines don’t ship with an NVidia card, no matter how much money you have.


Well, I already gave it away: I’m back to Mac, apps being the biggest factor, but I gained a lot of nice touches in the switch!

  1. Feeling old now. ↩︎

  2. Some things are worse, such as the trackpad, some are better, such as the 4K screen. ↩︎

  3. There are third party applications in Windows that offer the same behavior, but having it built in the OS is always more stable. ↩︎

Unread for iOS got acquired by an awesome developer!

I always hoped App.net would become a new Twitter.

Not only was it friendlier than Twitter towards developers and users, but I was delighted every time I interacted with it: I was using Riposte. If you followed the previous link, I can tell you that I share Manton’s opinion:

Riposte is arguably the best social networking client out there.

The developer behind Riposte, Jared Sinclair, also developed another delightful app: Unread, a beautiful RSS reader for iPhone and iPad.

Unread, luckily, didn’t end up the way Riposte did. It was acquired by Supertop but, due to the success of Supertop’s other app, Castro, it did not get the attention it deserved.

I was therefore delighted and scared when Supertop announced they had sold Unread to Golden Hill Software. Delighted because the new owner has more time to work on it, and scared because change is always scary.

Being scared turned out not to be such an unjustified feeling: the 1.7 version broke my setup with Stringer, a self-hosted, anti-social RSS reader1.

Basically, Stringer offers a Fever-compatible API. The new version of Unread tried to do some smart things with the API that Stringer was not offering.

I immediately wrote to John, Golden Hill Software’s owner. He quickly replied that he would look into it.

A few days later a new version of Unread came out but, alas, no fix for my issue.

I was kinda disappointed. I had to wait for the next release.

But no. John wrote me saying he pushed a PR to Stringer fixing the issue.

What a great developer! If you’re on iOS, please give it a spin. As a bonus, the newest version merges the iPhone and iPad versions, which is timely, as I realized some months ago that Mr. Reader, my favorite iPad RSS reader, was not being developed anymore.

So give it a go, and support a great developer!

  1. I don’t think anyone is surprised when I characterize myself as anti-social. ↩︎

No system holding your private information is failing

Yesterday somebody called my wife. She didn’t pick up. Today (Saturday), upon picking up, the voice on the other end introduced himself as a Vodafone employee. He said one of Vodafone’s systems had suffered some data loss and they wanted to check some personal information.

We recently moved, so my wife assumed it might have been related. On the other hand, she hears from me the worst stories about phishing and scamming, so she got suspicious. She thought: “Why aren’t they sending me an email or a letter?” and she asked to be called one hour later, when I could pick up the phone. The caller pushed back, saying it would only take 20''. She pushed back again, so he agreed to call back.

I know that no system containing the sort of information you could give over the phone would fail without a backup lying around. Especially at a company the size of Vodafone.

But maybe my wife misunderstood something; it could have been related to a new contract, etc. She said that in the background many people were talking, like in a real call center.1

So I called Vodafone myself. The lady on the phone told me I was the third person that day notifying them about it. She told me no systems had failed.

“Good”, I thought. Let me handle the guy. An hour later I picked up the phone. “Zakaria from Vodafone” he told me.

At that point, I already had LinkedIn open. I wanted to ask him his family name to look him up there.

“What’s your family name Zakaria?”. “I’m just here to ask some details about… “.

“I said what’s your family name Zakaria”.

He hung up.

There was not much to do, but what followed shows that Vodafone really handled it with class via their Twitter account. At 16:12 I tweeted:

@vodafoneNL someone is calling your clients, pretending to be you, and asking personal info. Time to send an SMS around?

At 16:45 they got back to me, and then via PM they asked:

Very good that you send us this message Giovanni. If I understand correctly someone called you pretending to be someone from Vodafone Customer Service and asks you for personal information. Can you tell me exactly how this conversation went ? And can you give me what day and time it was? I also need your mobile number so I can figure this out with our security department :-).

After telling them what happened, they promptly replied:

Thanks for sending the information. That does sound like a very strange conversation. I’ll send it right away to our security department for research. Good that you indicate this to us. If you encounter anything suspicious again let us know, then we investigate this immediately.

I have no idea what they’re going to do with it, but I really do hope they stop Zakaria and the likes of him!

  1. Although when I was on the phone later, the background noise from other people talking was much louder than in a professional call center. ↩︎

Use tab to cycle through Visual Studio Code completion

Sometimes, instead of using NeoVim, I do like to use Visual Studio Code (with Vim keybindings).

Visual Studio Code is a great editor with amazing Intellisense and debugging capabilities (for Python as well). There is however one thing that I could not swallow: the tab behavior when a completion was suggested.

With (Neo)Vim shift+tab and tab respectively cycle up and down the completion list.

I wanted the same experience in Visual Studio Code. After some Googling and a lot of trial and error, this is what I came up with, and it works pretty nicely! You can paste the text below in the file that opens after you click on File -> Preferences -> Keyboard Shortcuts:

        "key": "tab",
        "command": "selectNextQuickFix",
        "when": "editorFocus && quickFixWidgetVisible"
        "key": "shift+tab",
        "command": "selectPrevQuickFix",
        "when": "editorFocus && quickFixWidgetVisible"
        "key": "tab",
        "command": "selectNextSuggestion",
        "when": "editorTextFocus && suggestWidgetMultipleSuggestions && suggestWidgetVisible"
        "key": "shift+tab",
        "command": "selectPrevSuggestion",
        "when": "editorTextFocus && suggestWidgetMultipleSuggestions && suggestWidgetVisible"

The commands should be pretty self-explanatory! To accept the suggestion, you can use enter (that’s the default together with tab).