Links, April 18, 2025
§Xenographics
(from the about page)
Xeno.graphics is a collection of unusual charts and maps, managed by Maarten Lambrechts. Its objective is to create a repository of novel, innovative and experimental visualizations to inspire you, to fight xenographphobia and popularize new chart types.
If, like me, you tend to push the edge with data visualizations, this is a great source of ideas and inspiration.
§Solving The Trolley Problem: Towards Moral Abundance
The Trolley Problem is, as it is usually presented, a false dilemma. There is no correct answer to the problem; both options are tragedies, neither better, regardless of permutations. Even being asked to make the choice is morally corrosive.
The framing presented here helped something click for me: people who argue that morality and ethics must be a zero-sum game do so to justify their lack of commitment to morals and ethics.
The box-which-one-must-not-think-outside-of in the question Do you divert the trolley to kill one person instead of many? is Why is the trolley in this situation to begin with? What can we do to prevent this situation from occurring at all?
If you start asking these sorts of questions in response to the false dilemmas so often presented as trolley problems in emerging technology, you’ll often find the answer is because this dilemma benefits the people with money.
§I’m getting fed up of making the rich, richer
I would like to co-sign this.
§Without a Trace: How to Take Your Phone Off the Grid
The last straw?
When the federal government traced my phone number back to me and blocked me from communicating with incarcerated people during the COVID-19 pandemic.
I know a lot of people who’ve started to think more about this kind of stuff lately. If you’re taking the route of having a burner phone, you need to be thorough — a false sense of confidence can lead to you overlooking something that will be your undoing.
§No I Won’t Stop Dissing “AI”
§The Rise of Slopsquatting: How AI Hallucinations Are Fueling a New Class of Supply Chain Attacks
One such risk is slopsquatting, a new term for a surprisingly effective type of software supply chain attack that emerges when LLMs “hallucinate” package names that don’t actually exist. If you’ve ever seen an AI recommend a package and thought, “Wait, is that real?”—you’ve already encountered the foundation of the problem.
And now attackers are catching on.
If I were to make one general characterization of people who call themselves coders, especially the kind who champion code above all else, it would be that they often spectacularly fail to understand the social systems in which they and their projects operate. Package repositories and the resulting ecosystems are one such social system.
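The “Wait, is that real?” instinct the article describes can be turned into a habit, or even a tiny bit of tooling. As a minimal sketch (the allowlist below is a hypothetical stand-in for whatever vetted dependency set you actually maintain), the idea is simply: never install a package on an LLM’s say-so; check the name against something you trust first.

```python
# Hypothetical defensive check against slopsquatting: before installing a
# package an LLM suggested, verify it against a pinned allowlist instead of
# trusting the suggestion. APPROVED_PACKAGES is an illustrative stand-in for
# your organization's vetted dependency set.
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def is_vetted(package_name: str) -> bool:
    """Return True only if the suggested package name is on the vetted list."""
    return package_name.lower() in APPROVED_PACKAGES

print(is_vetted("requests"))         # True: a real, vetted package
print(is_vetted("flask-gpt-utils"))  # False: plausible-sounding but unvetted
```

In practice the “allowlist” might be a pinned, hash-checked requirements file rather than a Python set, but the social point stands: the check has to live somewhere outside the model.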
§another mildly silly article about AI disinformation, but at least this one gives you something to think about
This is a reaction to a paywalled Washington Post piece about Russian disinformation efforts to skew LLM training:
second of all, it should be obvious — not just to experts in the field, but to the average person on the street — that the problem of ensuring that this thing doesn’t “believe” disinformation is an even harder variant of the problem of ensuring that humans don’t spread disinformation manually, the old fashioned way, with hands on a keyboard and eyes on a monitor
I am reminded at times of the distinction between Intelligence and Wisdom, and how many people working in technology seem to have the former but not the latter. Again, it comes back to thinking about social systems.
I love the way the AI field feels compelled to invent new, cutting-edge-sounding phrases like “generative engine optimization” and “LLM grooming” to make these problems sound new rather than absurd hyperintensifications of the same problem that has existed for god knows how long.