I recently received the Preonic keyboard, as my two Model 01s are not really portable.
The Preonic is a super keyboard: highly customizable, and it connects either wired or over Bluetooth. I couldn't pass up the opportunity.
Since it's customizable, one of the first things I wanted to do was change the layout a bit. The suggested way is to use Chrysalis but, since I don't use a Chrome-based browser, I wanted to do it from the terminal.
As there were no step-by-step instructions, I’m using this blog to document how I’ve done it.
Getting started
The first step is to clone the repository and set things up. From your terminal, type:
git clone https://github.com/keyboardio/Kaleidoscope
cd Kaleidoscope
export KALEIDOSCOPE_DIR=${HOME}/git/Kaleidoscope # or, if you use fish, use set -x KALEIDOSCOPE_DIR ${HOME}/git/Kaleidoscope
make setup # this is done the first time
Customizing the layout
If everything goes well, we should be ready to customize our layout. To do so:
# for some reason, it's not where their other firmwares are
cd plugins/Kaleidoscope-Hardware-Keyboardio-Preonic/examples/Devices/Keyboardio/Preonic
vim Preonic.ino # edit the layout with whatever editor you have
make compile
If everything went well (i.e., you didn't mess anything up), we should be ready to flash the layout.
Flashing the layout into the Preonic
To flash the layout, we need to put the keyboard in bootloader mode (these instructions, minus the last step, come from their website):
First, make sure the power switch on your keyboard is turned off (with the white side of the power switch on the back hidden and the black side of the switch visible).
Disconnect the keyboard’s USB cable from the computer.
Hold down the “Hyper” key in the bottom left corner of the keyboard.
Plug the USB cable back into the computer.
The butterfly on the top of the keyboard should glow green.
You can now release the “Hyper” key.
In the terminal, type make flash
That’s it! You should now have a working layout. If you want to generate an SVG of the layout, you can type
python generate_layout.py
There are a couple of quirks in the Python file, but it will output an SVG diagram of the layout.
Yesterday, I attended the AWS GenAI for Healthcare Summit in Lisbon, hosted by the Champalimaud Foundation.
Here are my impressions from the day.
Where is the Medical Sector using (Gen)AI
A non-exhaustive list:
1. Clinical Decision Support
2. AI Buddy in Diagnostics
3. Enhancing Imaging: for example, using AI to adjust X-ray imaging when there's missing data, low resolution, cropping, or low contrast.
4. Data Storage Management: some institutions have kept all their digital X-ray images. Since storage is so costly, they started leveraging AI to discard 70% of their data and reconstruct it, when needed, from the remaining 30% without information loss.
5. Reducing Invasive Examinations: research suggests up to 5% of all cancers may be linked to exposure from CT scans. To reduce that exposure, less precise scans can be used, making diagnosis harder. TheraPanacea trained a variational autoencoder on 250,000 CT scans to generate high-quality images from lower-quality scans. The demo results were impressive!
I found it fascinating that items 3–5 are “human-out-of-the-loop” applications, offering scalable, software-like gains!
The most moving example, however, was a highly “human-in-the-loop,” non-scalable solution. In Africa, there’s a shortage of radiologists and widespread tuberculosis. To help, a new portable X-ray box with built-in AI models is being used. It’s transported village-to-village by motorbike. Everyone in the village can be scanned, and if the AI flags someone’s scan, they’re referred to a hospital for specialized care. The motorcyclist doesn’t need medical training—the X-ray box and AI do the work!
Challenges
Customized/fine-tuned foundation models for the medical subdomain are a must. That is, not only should models be specialized in health care: they should be specialized in the discipline within health care where they will be deployed. Why?
Medical accuracy (goes without saying)
Clinical relevance
Direct access to, and quoting of, medical sources and evidence
Healthcare context understanding (steering the model towards prioritizing clinical information: something the models should already be able to do, but I suspect there's so much improvised medical advice on the internet that prompting alone won't steer a non-specialized model away from it enough)
A didactic approach in the replies, geared towards what health professionals are used to. I cannot validate this, but I've heard that general models tend to be terser in their replies by default. I don't know to what extent prompting alone can address that!
All these items contribute to trust, one of the big challenges in deploying AI in the medical sector. And with trust, adoption follows, and, from adoption, impact.
Which brings me to the observation that if AI doesn't become operational, there's no value. You could tell that health care professionals are not looking for the n-th pilot or PoC. They're looking for stuff that makes their job easier, improves patients' lives, and doesn't require an Einstein to operate.
Closing thoughts
To close: everybody was screaming “agents”, but there were no agentic demos.
Last year, I blogged about the Binepad BNK8, a macropad by Binepad. This week, I received its larger brother, the Binepad BNK9. It sports 9 buttons and a larger knob1.
Since the firmware is customizable, I started exploring it through VIA. I could create new layers, control the light effects, etc. But, once I started adding layers, I had a dilemma:
Either keep a key pressed to activate a layer. That, however, required some finger gymnastics, especially if I wanted to use the knob while keeping a key pressed. Or
Be left wondering which layer I was in, as there is no visual cue.
I then started looking around, and it soon turned out I had to write a custom QMK firmware by hand.
The getting started guide is straightforward. On macOS, just fire up a terminal and type:
brew install qmk/qmk/qmk
qmk setup # take note of where qmk_firmware is cloned. It will be your $QMK_FIRMWARE_HOME
qmk compile -kb binepad/bnk9 -km default
qmk config user.keyboard=binepad/bnk9
Afterwards, I created a copy of the keymap into $QMK_FIRMWARE_HOME/keyboards/binepad/bnk9/keymaps/gglanzani, where $QMK_FIRMWARE_HOME is whatever folder qmk_firmware was cloned into.
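If you want to do the same, something like this should work (a sketch that starts from the stock default keymap; replace gglanzani with your own keymap name):
cd $QMK_FIRMWARE_HOME
cp -r keyboards/binepad/bnk9/keymaps/default keyboards/binepad/bnk9/keymaps/gglanzani # copy the default keymap as a starting point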
After some trial and error, I ended up with a keymap.c, config.h, and rules.mk that work the way I want. I've uploaded them to GitHub and you're free to use them.
To compile the custom firmware, type in a terminal:
cd $QMK_FIRMWARE_HOME
qmk compile -kb binepad/bnk9 -km gglanzani # use your own if you don't use my repo!
This will create a binepad_bnk9_gglanzani.uf2 file in your $QMK_FIRMWARE_HOME folder.
But how do you get it on your macropad? To do so, disconnect the USB cable, press the knob, and then connect the cable. That will mount an RPI-RP2 volume on your computer. Once you copy the uf2 file into it, the volume will unmount and your macropad will be ready to use!
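On macOS, that last copy step boils down to something like this (assuming the volume mounts under /Volumes with the default RPI-RP2 name):
cp $QMK_FIRMWARE_HOME/binepad_bnk9_gglanzani.uf2 /Volumes/RPI-RP2/ # the volume unmounts by itself once the copy finishes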
If you look into my repository, there should be enough comments to understand what's going on and adapt it to your needs!
For those curious about the quality: the product finish of the knob and the buttons is nice, while the USB-C port feels finicky at times ↩︎
Note that when you have an ordered list, the trick will put the same number (e.g., 1.) on each line. That’s not a problem though because they will still be rendered properly with ascending numbers.
Reading Simon Willison's excellent blog post “Things we learned about LLMs in 2024” made me realize why we're not seeing much economic benefit1 from #llms yet.
Simon writes:
Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.
What should we work on, then?
As has always been the case with AI, work on the pains and the gains of the end users, and think about a UI and UX they can use and be more productive with!
Don’t expect them to change how they work simply because you turned on a feature on your Office or GitHub subscription!
Quoting The Economist's World in Brief from about a week ago: Artificial intelligence has already made many people—particularly shareholders in AI firms or chipmakers—very rich. But so far it has had little impact on the global economy. ↩︎
Recently, the major browsers introduced the capability to open a webpage with a particular piece of text highlighted and scrolled into view. Once you see it in action, it really is neat.
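For example, a made-up link like https://example.com/#:~:text=some%20phrase scrolls to the first occurrence of “some phrase” on that page and highlights it.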
The way I engage with it is simple: on a page, I copy some text, and then, in another application (it doesn't work in Safari, although with some tweaks it might), I type ;frag and the URL, including the #:~:text=something fragment, appears instead.
The script works as follows:
Line 2 expands the clipboard and URI encodes it (for example, replacing spaces with %20).
Line 3 grabs an object that lets you manipulate the Safari application. This JavaScript automation support built into macOS is one of those big details that sets it apart from the competition, in my opinion.
Line 4 gets the URL of the active tab, and appends first #:~:text= and then the encoded text.
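Since the script itself isn't shown here, below is a minimal sketch of what those lines describe, runnable with osascript from a terminal. It's my reconstruction, not the original script, and it leaves out the ;frag snippet expansion: it simply prints the fragment URL for the front Safari tab.
osascript -l JavaScript <<'EOF'
// read the clipboard via the Standard Additions and URI-encode it (the "Line 2" step)
const app = Application.currentApplication();
app.includeStandardAdditions = true;
const fragment = encodeURIComponent(app.theClipboard());
// grab a scripting handle on Safari (the "Line 3" step)
const safari = Application("Safari");
// take the front tab's URL and append the text fragment (the "Line 4" step)
safari.windows[0].currentTab.url() + "#:~:text=" + fragment;
EOF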
A thoughtful article, as is often the case with The Economist, on how the usual defenders of free speech (the liberals) are not defending it, leaving the task to the (far) right.
The killer quote:
Our long-standing position is clear: only with the freedom to be wrong can societies advance slowly towards what is right. What has changed is that today the loudest objections to the crackdown on free speech come from right-wingers such as Elon Musk, X’s boss, while many self-described liberals applaud what they see as a blow against Trump-supporting billionaires. As speech becomes a culture-war battleground, those who disagree with the politics of Mr Musk and his allies have become relaxed about the onslaught.
Until today, I was bothered by local HTTPS websites served by Caddy whose certificates were not trusted by macOS. Today, I rectified it.
For macOS (and Safari) to trust what Caddy deploys locally, I had to:
Find the root certificate (if you're using the Docker image, it's in /data/caddy/pki/authorities/local/root.crt).
Copy its content into, say, caddy.pem (it's just a name that Keychain Access understands, and it avoids cluttering the keychain with an uninformative name like root.crt).
Double-click caddy.pem to open it in Keychain Access.app.
Double-click the certificate name, open up the Trust “tab”, and click on “Always Trust”.
Then, all the local websites Caddy is serving will be trusted automatically.
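If you'd rather stay in the terminal, something along these lines should achieve the same result. It's a sketch: the container name caddy is an assumption, and the security command needs sudo because it writes to the System keychain.
docker cp caddy:/data/caddy/pki/authorities/local/root.crt caddy.pem # copy the root certificate out of the container (container name assumed)
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain caddy.pem # mark it as a trusted root system-wide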