I hate the phrase “prompt engineering.”
It turns the simple act of creating a prompt into a scientific process reserved for folks with IQs above 135.
It overcomplicates prompting, an activity that is actually quite straightforward.
But I want to help you simplify, without sacrificing your results.
Here’s a simple tactic that you can use to rapidly improve your prompts — and the resulting outputs from your LLMs.
Introducing “pre-prompting”
The first conversation I ever had with ChatGPT was about the voice in my head. Specifically, I wanted to know:
How can I quiet my inner critic?
ChatGPT’s response was serviceable. But it wasn’t better than an SEO-optimized blog post.
I was so young and innocent back then — I didn’t know that a bad prompt would lead to a bad answer.
And with that first query, I failed to recognize that ChatGPT knew nothing about me.
ChatGPT didn’t know:
My age
The typ…