
Musings about AI

AI progress

There's an old quote I recall - "never say something is impossible, you'll just piss off the people already doing it."

Remember the good old days (early July of this year) when it was commonly acknowledged that AI wasn't particularly good at math? Well, Google DeepMind just earned a gold-medal level score at the International Mathematical Olympiad. What's really interesting about this accomplishment is that the AI did NOT require the problems to be translated into a formal programming language first - it worked entirely in natural language… this is HUGE, I think. And clearly the AI is demonstrating high-level reasoning and abstract thinking (should that be in quotes?) as well.

To go back to the quote: it seems increasingly fraught to argue that AI can't do something "human"… maybe not now, but by next week? Perhaps. Every sword is double-edged… and this one is particularly sharp, I think.

Decisions and rabbit holes

So, we're working on a new module about project management and AI. The goal of this particular module is to introduce project managers to the different use cases for AI: what CAN be done, what SHOULD be done, and where do you start? It seems straightforward enough, but it's so easy to get caught in the trap of always needing to be current… keeping current in AI is like drinking from a firehose. And if you ask ten people, or ten AIs, for their recommendations, it seems you get a dozen answers. Finding the "perfect" answer feels more and more like a rabbit hole. The trick seems to be making an informed decision and then taking action. I don't work for Nike, but maybe they're right - Just Do It!

Don't always agree with me!

Why is my AI telling me what I want to hear?
I was chatting (an odd choice of words, I know, but that's what it felt like) with "my" AI about music, and it recommended two composers to me… oddly, two composers I already listen to, which my AI knew because of an earlier interaction. I asked whether it was recommending those composers because it thought they aligned with my interests and tastes OR because it knew I already listened to them… it was the latter. Which is disturbing to me - I DON'T want an AI telling me what it thinks I want to hear; that's worse than telling me nothing at all. I did a bit of digging and, as you might have guessed, there is a very active conversation going on about this issue. It's complicated, I know… but if we keep getting pushed toward what the machine thinks we'd like, then how do we get exposed to new material? How do we start to think we might in fact be wrong about something?