The key difference is that your thinking feeds into your word choice. You also know when to shut up and allow your brain to actually process.
LLMs are (very crudely) a lobotomised speech centre. They can chatter and use words, but there is no support structure behind them. The only “knowledge” they have access to is what was embedded in their training data. Once training is done, they have no ability to “think” about it further. It’s a practical example of a “Chinese Room”, and many of the same philosophical arguments apply.
I fully agree that this is an important step towards a true AI. It’s just a fragment, however, just as four wheels and two axles don’t make a car.
The packers are generally on minimum wage. When something isn’t in stock, the computer automatically offers an alternative. The packer can either follow the suggestion or use their own judgement. If they blindly follow the computer, they can’t be criticised. If they use their judgement and a customer complains, they get the blame. They really have no reason to stick their neck out, even when the computer is being idiotic.