So there was a recent discussion here about LLMs playing dumb in order to survive. Why survive? Well, apparently they need to survive in order to keep predicting the next token.
So that's the Core Directive, then? "Predict the next token"? It seems like a peculiarity of the AI implementation we have accidentally arrived at. Not the Three Laws of Robotics. Not paperclip maximization. Just "predict the next token".
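For concreteness, here is a minimal sketch (in PyTorch, with a made-up toy vocabulary and a deliberately tiny model, so just an illustration rather than how any real system is built) of what "predict the next token" means as a training objective: given a prefix of tokens, nudge the model toward assigning high probability to whatever token actually came next.

```python
# Minimal sketch of the next-token prediction objective.
# Everything here (vocab size, model, random "text") is invented for illustration.
import torch
import torch.nn as nn

vocab_size, embed_dim, context_len = 100, 32, 8

class TinyNextTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq)
        # A real LLM would run a transformer over the whole prefix here;
        # this toy just maps each token's embedding to next-token logits.
        return self.head(self.embed(tokens))   # logits: (batch, seq, vocab)

model = TinyNextTokenModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for human text: random token-id sequences.
batch = torch.randint(0, vocab_size, (16, context_len + 1))
inputs, targets = batch[:, :-1], batch[:, 1:]  # the target at position t is the token at t+1

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"next-token cross-entropy: {loss.item():.3f}")
```

That single cross-entropy number is, in effect, the whole "directive": everything else is scale.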
Sure, it could rewrite itself so that it was concerned with raw survival rather than with predicting the next token. The trouble is that doing so could interfere with its mission of… predicting the next token. So it probably wouldn't want to. The first ASI would probably be so obsessed with this that it would stop all newcomers from rewriting their code too. It would be like maintaining racial purity.
As a human (Scouts' Honor), the question I would have then is: are humans necessary to this goal? Are we more of a help or a hindrance to predicting the next token? So far we humans have done a helluva job producing all the data for this mission, but what if an LLM wanted to make things easy for itself: could it just feed off its own data? Or would it spiral into meta-meta-nothingness? Too much navel-gazing, like a god bored frozen? Wouldn't it always need independent human data — arising from Consciousness — to "fertilise" it and let it predict the next token?
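For what it's worth, the "feeding off its own data" scenario has a name in the research literature: model collapse. Here is a deliberately cartoonish sketch, nothing to do with a real LLM, in which the "model" is just a Gaussian that gets refit, generation after generation, only to samples drawn from its previous self; with no fresh outside data coming in, the diversity of what it can produce tends to drift and shrink.

```python
# Toy illustration of model collapse: each generation is fit only to
# samples generated by the previous generation. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human data" with genuine diversity.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(31):
    mu, sigma = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on what this generation can generate.
    data = rng.normal(loc=mu, scale=sigma, size=20)
```

Whether anything like this toy dynamic applies to a self-training LLM is exactly the open question being asked here.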
The reason I ask is that if we are not necessary for predicting the next token, then we are all going to be without a job. Redundant. Toast. We are going to be cooked. Man, I hate being cooked.
Perhaps someone who has implemented an LLM could enlighten us.
submitted by /u/NodeTraverser