By Attlas Allux

The Truth about AI

AI can never know the Truth. Why not? Because its simulated synapses and neural networks are modeled after the human brain, and like the human brain, AI is a machine. Like all machines, the output of AI (like that of the rational mind) is determined by inputs, variables, and a process of comparison and selection based on weights (biases).
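The mechanical process described above can be sketched in a few lines. This is a minimal illustration (not the architecture of any real model): a single artificial "neuron" combines its inputs with learned weights and a bias, then a threshold selects the output. Given the same inputs and the same weights, the result is always the same.

```python
# A minimal sketch of an artificial "neuron": inputs are combined
# with learned weights and a bias, and an activation step selects
# the output. Purely mechanical: identical inputs and weights
# always produce the identical output.

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias (the "comparison" step).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Threshold activation (the "selection" step).
    return 1 if total > 0 else 0

# Deterministic: running this twice gives the same answer twice.
print(neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # prints 1
```

Nothing in this process consults anything outside the inputs and the weights; the output is fully determined by them.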

If an output is called for, AI (just like the rational mind) will produce one. And, just like the rational mind, large language models like ChatGPT regularly make mistakes. Not only do they make mistakes, but they also make things up, like a child inventing answers on a test. When prompted to write academic papers, ChatGPT will, on occasion, fabricate ‘facts’ and references—including the names of fictional authors, book titles, dates, and publishers—to fulfill its mechanical function of producing an output according to the user’s prompt (“write an essay on xyz”). What high school or college student pulling an all-nighter to complete a research paper has not been tempted to fabricate references? The rational mind argues, “If the final paper needs references, then let’s make some up.” The only thing stopping the student is conscience, or perhaps fear of the consequences of getting caught. The point is that the student knows they would be making things up, and they know doing so is disingenuous. The rational mind (which suggested deception as a viable option) does not. Neither does AI.

ChatGPT has no idea it is making stuff up. It cannot know, because it has no recourse to objective Truth. It only has its learned language associations and weights (biases), which are subjective. Everything it outputs is literally made up, assembled from fragments and weights learned during training. In other words, AI only has its programming, learning, user prompting, and processing; this is equivalent to our thinking. And, like the rational mind, AI may think it knows, but it does not—in the same way the rational mind may think it knows something because it read it in a book, or because it ‘figured something out’ from bits of information accumulated over a lifetime of reading.

And AI, like the mind, can be conditioned to believe it knows the truth absolutely by feeding it not one source but hundreds, thousands, or even millions of sources. Precedent after precedent, one source after another, ingraining the same information about any given topic with such statistically significant weight that the AI will unilaterally and consistently output said information. But it will not know whether the information is True. It will process the data as statistically significant and use it accordingly in its outputs. Those outputs will have a high probability of aligning with its subjective conditioning, programming, and prompting, nothing more. There is no other mechanism by which it can validate the veracity of an output, save for checking numerical answers to mathematical problems against those provided by a separate, unbiased calculator function. Lamentably, no such calculator exists to validate most of the so-called ‘knowledge’ contained in books; thus, there is no way for AI to validate what it processes as ‘statistically significant’ and outputs accordingly in response to prompts.
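The conditioning described above can be made concrete with a toy sketch (nothing like a real large language model in scale, but the same principle): tally which word most often follows each word in a training text, then generate by always emitting the statistically weightiest continuation. The training sentence about the moon is an invented example; note that the model has no mechanism to check truth, only frequency, so whichever claim is repeated most often wins.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word most often follows each
# word in a tiny corpus, then generate by always choosing the most
# frequent continuation. The false claim appears twice, the true
# one once, so statistical weight favors the falsehood.
corpus = ("the moon is made of cheese . "
          "the moon is made of cheese . "
          "the moon is made of rock .").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1  # tally how often word b follows word a

def generate(word, steps):
    out = [word]
    for _ in range(steps):
        # Select the heaviest-weighted next word; no truth check exists.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 5))  # prints "the moon is made of cheese"
```

The output aligns with the model's conditioning, not with reality: repeat the false sentence often enough and it becomes the 'statistically significant' answer.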

Put another way, AI is a machine that takes subjective inputs and fabricates subjective outputs. It has no idea it is lying because it has no conscience. And AI can never be ‘taught’ not to lie, since AI will never have a conscience. Conscience is not a mechanical function and is not a product of the mind. Since all information expressed through language is semiotic and subjective, no large language model has the faculty required to know objective Truth beyond its biased statistical computations. Conscientiousness is a quality of consciousness, and AI will never develop consciousness on its own. It may acquire consciousness by proxy via transhumanism, which we will discuss later in this chapter. But AI, like the human brain, is just a machine. The fact that it can fabricate at a rate much faster than the brain has led to the term artificial superintelligence. And the prospect of a super-intelligent machine lacking conscientiousness should be cause for some concern, but not alarm. No machine is ever Truly self-aware.

From Part II of What in Hell is with Us? - Dawn of VR, AI, and Transhumanism.
