
What we can learn from AI by ‘prompting ourselves’
Ideas are multidimensional. Language itself is a very powerful description of the abstract concepts an idea can encapsulate. To me it seems like the rise of Large Language Models (LLMs), like ChatGPT, has placed a spotlight on how special language is to us humans.
With language we can describe the world we live in, and if you master language as a whole, with all the interconnected meanings between words, you seem to get an understanding of basic things like logic, reasoning and even humor. After all, that's what happened with LLMs: they can solve very specific tasks just from 'understanding' language.
However, I feel like some things are really hard to express in language. Sometimes I have concepts or ideas in my head that I 'think' about 'visually'. That's what I mean by 'ideas are multidimensional'.
Recently there has been a lot of discussion in the AI space about whether predicting the next token / word in a sequence (like LLMs do) is enough to achieve Artificial General Intelligence (AGI). Some believe it is, others don't.
The camp that does not believe in that hypothesis points to the 'multidimensional' aspect of our human thinking process and gives a thought experiment as an example.
I feel like there's definitely something to that idea, and that Yann LeCun constructed a good example in his tweet to demonstrate this. When thinking about the given problem, I just visually see a point moving according to the instructions, in bird's-eye view, and come to a conclusion. I do not describe it in an internal monologue.
Another thing that fascinates me about LLMs is the idea that one can, and has to, learn how to 'talk' to these models in order to leverage their true power. This ability or process has been coined 'Prompt Engineering', and it got me thinking. When using LLMs, we often try to 'help the model think'.
We do that via a conversation and instructions like 'explain it step by step' or 'think it through before you answer'. This works because every token we feed into the model changes the probabilities for the next output token, and thus steers the produced text (conversation) in a different direction. A sophisticated response is simply more probable in a conversation where the thinking steps are spelled out explicitly. This also reminds me of people who talk a lot and say about themselves, 'I need to talk in order to think', and of the proverb that goes something like 'How do I know what I think before I hear what I say?'
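To make the conditioning idea concrete, here is a toy sketch using a bigram word model over a made-up corpus. This is not how real LLMs work (they use neural networks over subword tokens and vastly more context), but it illustrates the same principle: the tokens already in the prompt determine the probability distribution over the next token.

```python
from collections import Counter, defaultdict

# Made-up toy corpus, purely for illustration.
corpus = (
    "let us think step by step . "
    "think step by step then answer . "
    "answer quickly . "
    "answer now ."
).split()

# Bigram counts: for each word, count which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_distribution(prev):
    """Conditional probabilities P(next word | previous word)."""
    counts = follows[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The same model proposes different continuations depending on
# what the prompt so far ends with:
print(next_token_distribution("step"))    # 'by' is most likely
print(next_token_distribution("answer"))  # '.', 'quickly', 'now' are equally likely
```

Every extra word shifts these conditional distributions, which is, in miniature, why phrasing a prompt differently steers the whole conversation down a different path.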
So, what about prompt engineering yourself? Formulating ideas, sentences and questions out loud or on paper, and seeing what you come up with next?
I have found this approach to be incredibly helpful, and it often surprises me what my mind comes up with. After all, this blog post is the result of exactly that. I had an idea in my mind and just began to formulate it in words; it's not as if I had all the text already allocated in my brain and simply dumped it here.
With that being said, writing is a powerful way to prompt yourself, formulate and develop your ideas and basically ‘think on paper’. The tool being used is language.
I’d encourage you to give it a try yourself. Can you make it a habit? It could change your life by giving you a great source for creative ideas. Just write it down and think on paper! Then imagine all the related things you cannot put into words. Maybe you can draw it?
If you have read this far, big thanks for reading, and if you found what I had to say somewhat interesting, I'd love to hear from you!