Chain of thought prompting
16/02/25 11:55
There is LOTS of conversation "out there" about prompting. And there's a good reason why: better questions result in better answers! It's tempting to ask your AI of choice a single question and see what you get back… tempting, but not optimal, it turns out. Chain of Thought (CoT) prompting is when you ask the AI to provide a rationale for the intermediate steps it takes to answer the question. By asking the AI to break the problem down into steps and explain each one, the quality of the response goes up. (Don't take my word for it; the key paper here is Wei et al., 2022, "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".)
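To make that concrete, here's a minimal sketch in Python. Nothing in it is tied to a particular model: the word problem is invented for illustration, and call_model is a hypothetical stand-in for whatever LLM client you actually use.

```python
# A direct prompt versus a chain-of-thought prompt for the same question.
# The question is invented for illustration; call_model() is a hypothetical
# placeholder for your real LLM client.

QUESTION = (
    "A cafeteria had 23 apples. It used 20 to make lunch "
    "and then bought 6 more. How many apples does it have now?"
)

# Direct prompt: just ask and hope for the best.
direct_prompt = QUESTION

# CoT prompt: ask for intermediate steps, each with a rationale.
cot_prompt = (
    QUESTION
    + "\nBreak the problem into steps, explain the reasoning behind "
    "each step, and only then state the final answer."
)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    raise NotImplementedError("Wire this up to your model of choice.")

print(cot_prompt)  # Inspect the prompt before sending it via call_model().
```

The only difference between the two prompts is that one instruction asking for stepwise reasoning; that small addition is the whole technique.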
But here's the thing: CoT comes in two flavours. In few-shot CoT, we provide the logic to the AI through worked examples that demonstrate the reasoning we want. In zero-shot CoT, we skip the examples and simply ask the AI to think through the problem step by step.
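Here's what those two flavours look like side by side, again as a sketch: the worked example in the few-shot prompt is made up for illustration, and the zero-shot trigger phrase is the one popularized by Kojima et al. (2022).

```python
# Few-shot CoT: prepend a worked example that demonstrates the kind of
# step-by-step reasoning we want, then pose the new question.
few_shot_cot = """\
Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. \
How many tennis balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 more balls. \
5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. It used 20 to make lunch and then bought \
6 more. How many apples does it have now?
A:"""

# Zero-shot CoT: no examples at all, just a trigger phrase appended to
# the question (the phrase popularized by Kojima et al., 2022).
zero_shot_cot = (
    "A cafeteria had 23 apples. It used 20 to make lunch and then bought "
    "6 more. How many apples does it have now?\n"
    "Let's think step by step."
)

print(few_shot_cot)
print("---")
print(zero_shot_cot)
```

Either prompt goes to the model unchanged; the trade-off is whether you spend prompt tokens on worked examples or rely on the trigger phrase alone.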
Why CoT works isn't completely clear, but it does work. So, next time you're querying your AI, remember to ask it to solve the problem step by step; it certainly can't hurt, and you'll likely get a better answer.