Yesterday, I attended the AWS GenAI for Healthcare Summit in Lisbon, hosted by the Champalimaud Foundation.
Here are my impressions of the day.
Where is the Medical Sector using (Gen)AI?
A non-exhaustive list:
1. Clinical Decision Support
2. AI Buddy in Diagnostics
3. Enhancing Imaging
For example, using AI to adjust X-ray imaging when there’s missing data, low resolution, cropping, or low contrast.
4. Data Storage Management
Some institutions have kept all their digital X-ray images. Since storage is so costly, they started leveraging AI to discard 70% of their data and reconstruct it when needed from the remaining 30% without information loss.
5. Reducing Invasive Examinations
Research suggests up to 5% of all cancers may be linked to radiation exposure from CT scans. To reduce that exposure, less precise scans can be used, but they make diagnosis harder. TheraPanacea trained a variational autoencoder on 250,000 CT scans to generate high-quality images from lower-quality scans. The demo results were impressive!
I found it fascinating that items 3–5 are “human-out-of-the-loop” applications—offering scalable, software-like gains!
The most moving example, however, was a highly “human-in-the-loop,” non-scalable solution. In Africa, there’s a shortage of radiologists and widespread tuberculosis. To help, a new portable X-ray box with built-in AI models is being used. It’s transported village-to-village by motorbike. Everyone in the village can be scanned, and if the AI flags someone’s scan, they’re referred to a hospital for specialized care. The motorcyclist doesn’t need medical training—the X-ray box and AI do the work!
Challenges
Customized/fine-tuned foundation models for the medical subdomain are a must. That is, models should not only be specialized in health care: they should be specialized in the discipline within health care where they will be deployed. Why?
Medical accuracy (goes without saying)
Clinical relevance
Direct access and quoting medical sources and evidence
Healthcare context understanding (steering the model towards prioritizing clinical info—something models should already be able to do, but I suspect there’s so much improvised medical advice on the internet that prompting alone won’t steer a non-specialized model enough)
A didactic approach in the replies, geared towards what health professionals are used to. I cannot validate this, but I’ve heard that general models tend to be terser in their replies by default. I don’t know to what extent prompting can address this!
All these items contribute to trust, one of the big challenges in deploying AI in the medical sector. And with trust, adoption follows, and, from adoption, impact.
Which brings me to the observation that if AI doesn’t become operational, there’s no value. You could tell that health care professionals are not looking for the n-th pilot or PoC. They’re looking for something that makes their job easier, improves patients’ lives, and doesn’t require an Einstein to operate.
Closing thoughts
To close: everybody was shouting “agents,” but there were no agentic demos.
Last year, I blogged about the Binepad BNK8 macropad. This week, I received its larger brother, the Binepad BNK9. It sports 9 buttons and a larger knob1.
Since the firmware is customizable, I started exploring it through VIA. I could create new layers, control the light effects, etc. But, once I started adding layers, I had a dilemma:
Either keep a key pressed to activate a layer. That, however, required some finger gymnastics, especially if I wanted to use the knob while keeping a key pressed. Or
Be left wondering which layer I was in, as there is no visual clue.
I then started looking around, and it soon turned out I had to write a custom QMK firmware by hand.
The getting started guide is straightforward. On macOS, just fire up a terminal and type:
brew install qmk/qmk/qmk
qmk setup  # take note of where qmk_firmware is cloned: it will be your $QMK_FIRMWARE_HOME
qmk compile -kb binepad/bnk9 -km default
qmk config user.keyboard=binepad/bnk9
Afterwards, I created a copy of the keymap into $QMK_FIRMWARE_HOME/keyboards/binepad/bnk9/keymaps/gglanzani, where $QMK_FIRMWARE_HOME is whatever folder qmk_firmware was cloned into.
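The copy step above can be sketched as a couple of shell commands. This is a hedged sketch: QMK_FIRMWARE_HOME falls back to a scratch directory here so the snippet runs stand-alone; in a real clone, the default keymap already exists and the mkdir is a no-op.

```shell
# Point QMK_FIRMWARE_HOME at your qmk_firmware clone; scratch fallback for the sketch.
QMK_FIRMWARE_HOME="${QMK_FIRMWARE_HOME:-/tmp/qmk_firmware_demo}"
KEYMAPS="$QMK_FIRMWARE_HOME/keyboards/binepad/bnk9/keymaps"
mkdir -p "$KEYMAPS/default"                    # already present in a real clone
cp -r "$KEYMAPS/default" "$KEYMAPS/gglanzani"  # pick your own keymap name
```

In a real setup you’d skip the mkdir and just copy the default keymap under your own name.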
After some trial and error, I ended up with a keymap.c, config.h, and rules.mk that work the way I want. I’ve uploaded them to GitHub, and you’re free to use them.
To compile the custom firmware, type in a terminal:
cd $QMK_FIRMWARE_HOME
qmk compile -kb binepad/bnk9 -km gglanzani # use your own if you don't use my repo!
This will create a binepad_bnk9_gglanzani.uf2 file in your $QMK_FIRMWARE_HOME folder.
But how do you get it onto your macropad? To do so, disconnect the USB cable, press the knob, and then connect the cable. That will mount an RPI-RP2 volume on your computer. Once you copy the uf2 file into it, the volume will unmount and your macropad will be ready to use!
If you look into my repository, there should be enough comments to understand what’s going on and adapt it to your needs!
For those curious about the quality: the finish of the knob and the buttons is nice, while the USB-C port feels finicky at times ↩︎
Reading Simon Willison’s excellent blog post “Things we learned about LLMs in 2024” made me realize why we’re not seeing much economic benefit1 from #llms yet.
Simon writes:
Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.
What should we work on, then?
As has always been the case with AI: work on the pains and the gains of the end users, and think about a UI and UX they can use to be more productive!
Don’t expect them to change how they work simply because you turned on a feature in your Office or GitHub subscription!
Quoting the Economist’s world in brief from a week ago or so: Artificial intelligence has already made many people—particularly shareholders in AI firms or chipmakers—very rich. But so far it has had little impact on the global economy. ↩︎
Recently, the major browsers introduced the capability to link to a webpage while highlighting and scrolling to a particular piece of text. Once you see it in action, it really is neat.
The way I engage with it is simple: on a page, I copy some text, and then, in another application (it doesn’t work in Safari—although with some tweaks it might) I type ;frag and the URL, including the #:~:text=something fragment, appears instead.
The script works as follows:
Line 2 grabs the clipboard and URI-encodes it (for example, replacing spaces with %20).
Line 3 grabs an object that lets you manipulate the Safari application. This JavaScript automation built into the system is one of those details in macOS that set it apart from the competition, in my opinion.
Line 4 gets the URL of the active tab, and appends first #:~:text= and then the encoded text.
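The steps above can be sketched as a JXA (JavaScript for Automation) snippet. This is a hedged reconstruction, not the original script; the buildFragmentURL helper name is mine.

```javascript
// Hypothetical sketch of the text-fragment script described above.
// Pure helper: append a text fragment to a URL (spaces become %20, etc.).
function buildFragmentURL(pageURL, selection) {
  return pageURL + "#:~:text=" + encodeURIComponent(selection);
}

// In the JXA environment (e.g. run via `osascript -l JavaScript`) you could do:
// const app = Application.currentApplication();
// app.includeStandardAdditions = true;        // needed for theClipboard()
// const safari = Application("Safari");       // object manipulating Safari
// const url = safari.windows[0].currentTab().url();
// buildFragmentURL(url, app.theClipboard());
```

For example, buildFragmentURL("https://example.com/", "hello world") yields https://example.com/#:~:text=hello%20world.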
A thoughtful article, as is often the case, by the Economist on how the usual defenders of free speech (the liberals) are not defending it, leaving the task to the (far) right.
The killer quote:
Our long-standing position is clear: only with the freedom to be wrong can societies advance slowly towards what is right. What has changed is that today the loudest objections to the crackdown on free speech come from right-wingers such as Elon Musk, X’s boss, while many self-described liberals applaud what they see as a blow against Trump-supporting billionaires. As speech becomes a culture-war battleground, those who disagree with the politics of Mr Musk and his allies have become relaxed about the onslaught.
Up to today, I’d been bothered by local https websites served by Caddy whose certificates were not trusted by macOS. Today, I rectified that.
For macOS (and Safari) to trust what Caddy deploys locally, I had to:
Find the root certificate (if you’re using the Docker image, it’s in /data/caddy/pki/authorities/local/root.crt).
Copy its contents into, say, caddy.pem (just a name that Keychain Access understands and that won’t clutter it with a non-informative name like root.crt).
Double-click caddy.pem to open it in Keychain Access.app.
Double-click the certificate name, open up the Trust “tab”, and click on “Always Trust”.
Then, all the local websites Caddy is serving will be trusted automatically.
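As an alternative to the Keychain Access clicks, the trust step can also be done from the command line with macOS’s security tool. A hedged sketch: the stand-in certificate is generated locally so the snippet is self-contained; with the real Caddy root you’d point the command at your caddy.pem instead.

```shell
# Generate a stand-in root certificate (replace with the real caddy.pem in practice).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Caddy Local Demo CA" \
  -keyout /tmp/caddy.key -out /tmp/caddy.pem 2>/dev/null
# On macOS, this one-liner replaces the double-click-and-Always-Trust steps
# (System keychain, so it needs sudo):
# sudo security add-trusted-cert -d -r trustRoot \
#   -k /Library/Keychains/System.keychain /tmp/caddy.pem
openssl x509 -in /tmp/caddy.pem -noout -subject   # sanity-check the certificate
```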
Insightful linked post from John Gruber on why the AI Pin flopped. This bit stood out to me in particular:
It is the kiss of death for any endeavor, creative or technical, to have a culture where brutally honest internal criticism is not welcome, especially when it goes up the chain. In fact it needs to be the expectation, if you’re pursuing excellence.
After setting it up to run on a non-Gmail email host, however, I noticed the logs were complaining that isbg couldn’t find Gmail-specific folders. That meant that, somehow, my configuration was telling isbg that I was on a Gmail host, even though the line "isGmail": "no" was in my config.
After some looking around, I saw the offending piece of code in docker_isbg (newlines added for clarity):
if ( confLoader.tableHasKey( config, "isGmail" )
     and config.isGmail )
then gmailOption = " --gmail"
else gmailOption = ""
end
The code checks whether config (a dictionary-like object resulting from parsing the above JSON) has an isGmail key, and whether it is set to anything truthy. In Lua, every string is truthy—including "no"—so whether the value is yes, no, or any other string, the --gmail option will be passed to isbg.
To fix it, I opened a PR that changes the above lines to
if ( confLoader.tableHasKey( config, "isGmail" )
     and config.isGmail == "yes" )
then gmailOption = " --gmail"
else gmailOption = ""
end
Since the project seems mostly abandoned, I don’t expect a prompt merge, but there’s another easy fix: change the "isGmail": "no", line to "isGmail": false in the configuration file, as in that case config.isGmail will evaluate to false.
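For reference, the config-side workaround is just this one-line change in the isbg JSON config. The fragment below shows only the relevant key (the rest of the file stays as-is); note that false is a bare JSON boolean, not the string "false".

```json
{
  "isGmail": false
}
```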