Decisions and rabbit holes
14/07/25 10:31
So, we're working on a new module about project management and AI. The goal of this particular module is to introduce project managers to the different use cases for AI: what CAN be done, what SHOULD be done, and where do you start? Seems straightforward enough, but it's so easy to fall into the trap of always trying to be current… keeping current in AI is like drinking from a firehose. And if you ask ten people, or ten AIs, for their recommendations, it seems you get a dozen answers. Finding the "perfect" answer feels more and more like a rabbit hole. The trick seems to be making an informed decision and then taking action. I don't work for Nike, but maybe they're right - Just Do It!
Don't always agree with me!
09/07/25 10:23
Why is my AI telling me what I want to hear?
I was chatting (odd choice of words, I know, but that's what it felt like) with "my" AI about music, and it recommended two composers to me… oddly, two composers that I already listen to, which my AI knew because of an earlier interaction. I asked it if it was recommending those composers because it thought they were aligned with my interests and tastes OR because it knew I already listened to them… it was the latter. Which is disturbing to me - I DON'T want an AI telling me what it thinks I want to hear; that's worse than telling me nothing at all. I did a bit of digging and, as you might have guessed, there is a very active conversation going on about this issue. It's complicated, I know… but if we keep getting pushed towards what the machine thinks we'd like, then how do we get exposed to new material? How do we start to think we might in fact be wrong about something?
Back from a VERY long bike ride…
11/06/25 10:11
So, 2,200 km later, I'm back and pondering AI - after a month away, things are… about the same. New versions of course, lots of hype of course, and many business articles about the gazillions being spent. So, as I said, about the same. But things are progressing underneath the popular stories. Structure is being developed, for example, in academia. Schools are releasing guidelines about AI use, there's lots of chat about assessments and how to adjust to this new world, and an overall acknowledgment that AI must be dealt with now… or at least now-ish. And perhaps this is more important than whether or not release X.2 is dramatically different from X.1. In the PM world there's an increasing acceptance that AI is a tool unlike any other and that to not use it is to court subpar performance. It appears that among many PM practitioners, AI use is becoming the norm and not the exception.
Turing test - passing grade?
10/04/25 10:26
Got an email from Futurepedia today stating that GPT-4.5 just passed the Turing test… So, naturally, I asked ChatGPT 4.5 if it could do so. Long story short, here's the summary it provided: "Bottom line: ChatGPT-4.5 can convincingly pass the Turing test in many situations, but subtle cracks appear under deeper scrutiny."
BTW, this answer is more self-aware than most people I know… makes me wonder if most people could pass the Turing test? So I asked that question too. (The AI was kind enough to tell me that "that's a surprisingly insightful question," which made me blush… but I digress.) Again, the summary: "So yes, most people comfortably pass the Turing test, but it’s a fascinating reminder that “human-like” isn’t always the same as “human,” and that even humans can sometimes sound a little robotic. Interesting thought, isn’t it?"
And yes, as a human, I DO find that an interesting thought!
Am I behind?
03/04/25 14:15
Recently, at an AI roundtable at the Polytechnic where I work, two of the attendees were just nicely back from a conference about AI in post-secondary education. I was really interested in hearing what other schools were doing because, naturally, I thought they'd be doing really weird and wonderful things. Well, not so much. As it turns out, our efforts, which I thought were quite rudimentary, are actually cutting edge. Makes you think… there's lots of excitement, noise and smoke out there. But getting people to absorb ideas, try new things, fail and try again isn't trivial, and it takes time. Long story short - you haven't missed the AI bus.