When you see the dance of machinery, it's easy to believe in the Machine Spirit. That there must be something alive in how it works, some deeper equivalence to organic life. The ancients thought a mill was alive, groaning as it milled the corn.
Of course, this is pointless reasoning. No matter what philosophical edifice we build around the thing, the thing itself still exists as it is. Ancient philosophers like Plato and Aristotle loved to fall into this trap: based on what they would like to see, they constructed elaborate edifices, beautiful towering cathedrals of words and arguments, about the “elements”, “pneuma”, “crystalline spheres,” “elemental qualities,” and “final causes” based on what seemed aesthetically or logically satisfying. But in the end, it’s all just atoms moving in a void. Because they ignored things as they are, their philosophy was, in the end, completely useless: it ended in nothing but words. And so too with AI: you must look at the thing in itself, and ignore the philosophical distractions.
Why would an intelligence machine need to behave like we do? A servo motor is under no obligation to behave like muscle - the vast majority of the modern world consists of weird artificial constructs using tricks of physics to reshape the physical world. There is practically no significant similarity between a bulldozer and a human - not in function, not in mechanism. But that doesn’t make it less useful. The physical mechanisms of the world don’t need explanations to be useful. James Watt’s steam engine preceded Lord Kelvin’s formulation of thermodynamics by about 80 years.
What AI is, is an intelligence machine. As you crank the matrices, it churns out raw intelligence by the yard. This intelligence is not in and of itself always useful. It needs shaping, conveyance, and redirection - hence the knowledge graphs, the RAG queries, the vector search, and the constrained generation. All different ways of shaping the raw intelligence into useful products. Some of these products are duds - no manufacturing process is perfect - but fundamentally it is an industrial process.
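To make the “shaping” concrete, here is a minimal sketch of the retrieval step behind a RAG query: rank documents against a question, then fold the best match into the prompt that the model actually sees. Everything here is a toy assumption for illustration - the “embedding” is just a bag-of-words term-frequency vector, and `retrieve` and `build_prompt` are hypothetical helper names, not any real library’s API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real systems use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Shape the model's raw output by grounding it in retrieved context.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The steam engine converts heat into motive power.",
    "Bulldozers reshape terrain using hydraulic force.",
    "Vector search finds documents similar to a query.",
]
print(build_prompt("How does the steam engine work?", docs))
```

The point is not the toy math: it is that the model never decides what it knows. The pipeline around it selects the context, and the model only finishes the product.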
That is fundamentally its power. That’s what makes it like the steam engine. See, the steam engine’s power is that it turns heat into raw motive power. Raw motive power can be shaped and redirected to move anything, as long as you have a reliable source of heat and good mechanisms for doing so. Everything else is derived from the ability to produce motive power on demand.
Therefore, I claim that the best way to deal with AI, conceptually, is not to get lost in the weeds about “is it sentient”, but to treat it as something which produces intelligence in a very specific format at scale, and to ask what exact mechanisms you require to produce the specific output you desire.
Essentially, you need to stop approaching AI as a philosopher, or even as a scientist, and start approaching it as an engineer.
I figure it's more an interesting philosophical question. I personally view it as largely irrelevant for the style where you feed in a prompt and a question, get a response, go back and forth for a bit, then reset it. The runtime is a matter of seconds: it answers whatever it was asked, and shuts down. Ones that are left as long-running processes? That's a more open question. But the ones we usually use for stuff are not that type.
I think the problem is that AI seems sentient and competent, but at this stage it is often faking it. And, like in the Peter Principle, one good sunny afternoon it will be promoted to be responsible for really important stuff. But secretly it is not that clever. And in that juxtaposition between being responsible for a lot and not actually being that clever after all, you get Skynet: a being that blows up everything for its own reasons, reasons that we cannot quite follow and that in retrospect are really stupid.