Random thoughts | The Wired Schoolhouse

Musings about AI

Don't always agree with me!

Why is my AI telling me what I want to hear?
I was chatting (odd choice of words, I know, but that's what it felt like) with "my" AI about music and it recommended two composers to me… oddly, two composers I already listen to, and my AI knew that because of an earlier interaction. I asked whether it was recommending those composers because it thought they were aligned with my interests and tastes OR because it knew I already listened to them… it was the latter. Which is disturbing to me - I DON'T want an AI telling me what it thinks I want to hear; that's worse than telling me nothing at all. I did a bit of digging and, as you might have guessed, there is a very active conversation going on about this issue. It's complicated, I know… but if we keep getting pushed, whether by ChatGPT or Spotify's recommendations, towards what the machine thinks we'd like, then how do we get exposed to new material?

Back from a VERY long bike ride..

So, 2,200 km later I'm back and pondering AI - after a month away, things are… about the same. New versions of course, lots of hype of course, and many business articles about the gazillions being spent. So, as I said, about the same. But things are progressing underneath the popular stories. Structure is being developed, for example, in academia. Schools are releasing guidelines about AI use, there's lots of chat about assessments and how to adjust to this new world, and an overall acknowledgment that AI must be dealt with now… or at least now-ish. And perhaps this is more important than whether or not release X.2 is dramatically different from X.1. In the PM world there's an increasing acceptance that AI is a tool unlike any other and that to not use it is to court subpar performance. It appears that amongst many PM practitioners AI use is becoming the norm, not the exception.

Turing test - passing grade?

Got an email from Futurepedia today stating that GPT-4.5 just passed the Turing test… So, naturally, I asked ChatGPT 4.5 if it could do so. Long story short, here's the summary it provided: "Bottom line: ChatGPT-4.5 can convincingly pass the Turing test in many situations, but subtle cracks appear under deeper scrutiny."
BTW, this answer is more self-aware than most people I know… it makes me wonder whether most people could pass the Turing test. So I asked that question too. (The AI was kind enough to tell me that "that's a surprisingly insightful question," which made me blush… but I digress.) Again, the summary: "So yes, most people comfortably pass the Turing test, but it’s a fascinating reminder that “human-like” isn’t always the same as “human,” and that even humans can sometimes sound a little robotic. Interesting thought, isn’t it?"

And yes, as a human, I DO find that an interesting thought!

Am I behind?

I was recently in an AI roundtable at the Polytechnic I work at… two of the attendees had just nicely returned from a conference about AI in post-secondary. I was really interested in hearing what other schools were doing because, naturally, I thought they'd be doing really weird and wonderful things. Well, not so much. As it turns out, our efforts, which I thought were quite rudimentary, are actually cutting edge. Makes you think… there's lots of excitement, noise and smoke out there. But getting people to absorb ideas, try new things, fail and try again isn't trivial, and it takes time. Long story short - you haven't missed the AI bus.

$40 billion... yes, that's with a b

OK, I don't care who you are, $40B is a lot of money… the weeds of the deal are for others to worry about, but a key issue is OpenAI's status - the deal requires OpenAI to become a for-profit entity. The AI space is swimming in money and competition… bet on seeing some very interesting things in the near term. Agentic AI, here we come!