I’ve gone snowboarding about 30 times since I started learning a few years ago, but every time I’m on a lift, most of the other riders have been out 90 days just this season. In fact, almost everyone I see has been skiing or snowboarding for decades, and comes out almost every day.
It’s hard to stay motivated when I’m the worst snowboarder on the mountain.
This might seem like a big coincidence, but I’m also one of the worst runners I know AND one of the worst writers that I’m aware of.
I used to get carpal tunnel symptoms while working on a computer all day, and the thing that finally solved it was a vertical mouse. Unfortunately, there are only a couple of options, and the one I like best has an annoying issue where the wheel wears out after a year or so. It’s cheap enough that this wasn’t a huge deal, but I finally got around to trying to fix it and realized it’s stupidly easy.
Claude has trouble playing Pokemon partially because it can’t see the screen very well. This made me wonder if Claude would be better at an ASCII game like Dwarf Fortress, where it doesn’t need to rely on image recognition.
To check this, I built an MCP server to let Claude control an interactive terminal, and installed a text version of Dwarf Fortress.
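The post doesn’t show the server code, but the core trick of “let a model control an interactive terminal” is just driving a program through a pseudo-terminal: write keystrokes to the pty master, read whatever the program draws back. A minimal sketch of that piece in Python, using only the standard library (the MCP tool wiring and names are omitted; this is just the terminal plumbing):

```python
import os
import pty
import select
import subprocess
import time

def spawn_terminal(cmd):
    """Start a command attached to a pseudo-terminal, so it behaves as if
    a human were at an interactive session (prompts, echo, ANSI output)."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(
        cmd, stdin=slave, stdout=slave, stderr=slave, close_fds=True
    )
    os.close(slave)  # the child holds its own copy of the slave end
    return proc, master

def send_keys(master, keys):
    """Send keystrokes to the program, exactly as typed input."""
    os.write(master, keys.encode())

def read_screen(master, timeout=0.5):
    """Drain whatever the program has written since the last read."""
    chunks = []
    while select.select([master], [], [], timeout)[0]:
        try:
            chunks.append(os.read(master, 4096))
        except OSError:  # child exited and closed its end
            break
    return b"".join(chunks).decode(errors="replace")

# Drive an interactive shell the same way Claude would drive a game.
proc, master = spawn_terminal(["/bin/sh", "-i"])
send_keys(master, "echo hello\n")
time.sleep(0.5)
screen = read_screen(master)
proc.terminate()
os.close(master)
```

An MCP server would then expose `send_keys` and `read_screen` as tools, letting the model loop: read the screen as text, decide on keystrokes, send them.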
There’s a semi-common meme on Twitter where people share their most X opinion, where X is a group the poster doesn’t identify with, or sometimes their least X opinion, where X is a group they do identify with. In that spirit, my least libertarian opinion is that exclusivity deals with sufficiently entrenched companies* are bad and should be illegal.
AI training data comes from humans, not AIs, so every piece of training data for “What would an AI say to X?” is from a human pretending to be an AI. The training data does not contain AIs describing their inner experiences or thought processes. Even synthetic training data only contains AIs predicting what a human pretending to be an AI would say. AIs are trained to predict the training data, not to learn unrelated abilities, so we should expect an AI asked to predict the thoughts of an AI to describe the thoughts of a human pretending to be an AI.