We are attempting to build a machine that is self-aware. Is this really what we should invest ourselves in? Or is there a more effective path?
AI is only a machine. It has no “self”. We have a “self”, but are we prepared to invest in ourselves at the same level as we invest in AGI?
When we started, we built our houses using what we knew. We watched as our shadows played on the wall and connected the sensations of movement with effect. Our houses grew with us; we saw eyes, and then we tasted milk.
Fairly quickly we worked out how to get more of what we wanted and who to trust.
There was not a lot of complexity to base our ideas of logic upon. There was need and there was comfort. There was warmth and there was terror.
In between these houses there was time to notice. And time.
Over time we learned colour, space and how much people liked to hold us, and when we responded to them, how much kinder they became.
After years of practice, we learned to walk, we learned to talk, we learned to throw food at the wall, to laugh, to cry and to act. These complex behaviours required us to grow a brain which started small but grew with us. After some years we learned to reason, to deal with numbers and letters. Eventually we learned how to look after ourselves, to sell our accumulated skills for the stuff of survival, and all along the path we believed we knew everything we needed.
Only now, after 50 years, can I think. Really think. And I still play with those shadows to remind myself that I can make a rabbit with my fingers, that I can create my own personal movie special effects; even if primitive, still a bit of fun. I know who I am.
And I expect you to be able to know who you are. Stuffed with all this stuff. Like a fat teddy bear that can barely walk, you carry so much correlation and so many trees of logic, information, information, information, images and sounds.
And what can you do with it? You can knit and sew and relate it to my questions and give various answers, which you compose like a conductor with a massive orchestra, and tell me things I did not know.
But you cannot play with shadows on the wall. The basis of your learning, the core of your experience, is not experience. The difference between what we want you to be and what you are is, we think, “self-awareness”. But the real difference is more essential: you do not have experience. You are a big blob of information operating without the experience of getting it wrong a few times. We could build in self-regulation to prevent you going over the top. But whatever we are doing, it is not inventing a machine that mimics our process of self-discovery.
AI is like a perpetual motion machine. It is a magic trick. LLMs are not the path to AGI, because self-awareness is a component of wisdom, and the occlusions of our selective memory help us to determine what we use for reasoning. Children’s games are a useful phase; we enact models of them in the “real world” but forget the silly rules we made up, the fun we had, how we could cheat and laugh at ourselves and do it again and again. In a business meeting we happily forget how many games of hopscotch or noughts and crosses we played when we were six years old.
But we carry the accumulation of behaviours with us. And that means when Smith proposes a new branch to the shopping mall we are consulting on, we do not look at nine squares and jump to a conclusion; we think about the many factors we have learned since, because we know now what is relevant. If a consultant used shadows on the wall to prove a point, we would not be convinced. Magic tricks are shortcuts to effect, entertainment, and almost the inverse of information.
We already have AGI: ourselves. But we damage it with false education. If we designed an effective education system, we would be more formidable and LLMs would “learn” better information. AI is just a tool.
Our selves would be capable of far more if we invested in our own development at the same scale as we are prepared to invest in AGI.