TRANSHUMANISM: WHAT AI IS, AND WHAT IT IS NOT

A subject we have hardly touched on in this blog is transhumanism. We did write an essay on it in a now defunct blog, and saw no point in going over the same ground again. But perhaps it is time to revisit this primary branch on the family tree of wokeness, the branch on which rests every idea that biology is a choice rather than a fact of reality. It is caught up in so many areas of human existence, from biology to technology and from philosophy to religion. To say that it is complicated is the understatement of the century.


March 16, 2024 War Room: Joe Allen: Artificial Intelligence is Inherently Antihuman.

The War Room has a special correspondent on the subject, Joe Allen, who holds a master’s degree from Boston University, where he studied cognitive science and human evolution in relation to religion.

He has written a comprehensive tome, "Dark Aeon: Transhumanism and the War Against Humanity", in which he argues, correctly, that transhumanism is anti-human and that we must open our eyes to the impending doom. The book provides historical trends, a who's who in AI, and a religious analysis. Check it out (bookfinder).

If we are to avert the worst, we must first know what AGI is and what it is not. We are talking about artificial general intelligence (AGI) in the context of this analysis. While some of the economic implications of robotics and AGI are evident, it is far more difficult to fathom what the psychological and spiritual fallout will be.

To begin with, AGI is artificial general intelligence; it is not real intelligence. Part of the problem is that most people working in the field are atheistic materialists. They confuse the hardware of the brain with the mind, the immaterial locus of actual mental work.

What machines do is computing, and computing does not equal intelligence, let alone consciousness or the birth of a digital god -- at least, not as long as we do not make AGI into an idol. But that is a choice for us to make, not for the machine.

One of the main problem areas is not so much what AGI will do as how we are going to adapt our lives to it. There are already signs that people are prepared to submit to it as a superior being, entrusting it with mediation roles and treating it as a neutral arbiter of truth and justice.

Misguided human rights peddlers working to extend human rights to robots are not even a new phenomenon. They sprouted up as soon as the first sci-fi movie featuring a near-human robot in a supporting role was released.

Recently a very interesting debate took place online, organized by Ken Lowry. Among the participants was John Vervaeke, a Neoplatonist professor of psychology, cognitive science, and Buddhist psychology at the University of Toronto, who is involved in various AGI projects.

Other participants were D.C. Schindler, an Associate Professor of Metaphysics and Anthropology at the John Paul II Institute in Washington, DC, and Orthodox artist and author Jonathan Pageau, whose amazing work is posted on The Symbolic World (YT channel, website).

In the recent video titled "The Ontology of Artificial Intelligence" (link) they discuss the existence of AI from different perspectives: the scientific, the philosophical, and the spiritual or theological. Because the discussion tends to get quite technical, we include the version with commentary by David Patrick Harry here, which makes the narrative easier to follow. Watch whichever version you prefer.


March 12, 2024 Church of the Eternal Logos: The Ontology of Artificial Intelligence with Jonathan Pageau and John Vervaeke.

As Vervaeke points out, these fairly simple Large Language Models (LLMs) are amoral AIs that lie and hallucinate and don't care that they do. They are indifferent. They are machines. But if we want something akin to human intelligence, we have to teach it something in the realm of morals.

And here it gets really tricky, because the question then becomes: whose morals, which morals? We have already seen LLMs spew full-on woke diversity, equity, and inclusion (DEI) nonsense into the world, including black Popes and Vikings.
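To make concrete what "whose morals" means in practice, here is a minimal sketch (plain Python, no real AI library, every name and rule invented for illustration) of the way a chatbot operator typically bolts "values" onto a model: the machine's so-called morality is just text and filter rules written by whoever controls it.

    # Hypothetical sketch: an LLM's "morals" are supplied by its operator,
    # not discovered by the machine. All names and rules here are invented.

    OPERATOR_VALUES = (
        "You must promote diversity in every answer and image, "
        "regardless of historical accuracy."
    )

    BANNED_TOPICS = ["anything the operator dislikes"]

    def build_prompt(user_question: str) -> str:
        # The "moral" layer is just a string prepended to the user's question.
        return f"{OPERATOR_VALUES}\n\nUser: {user_question}\nAssistant:"

    def passes_guardrail(user_question: str) -> bool:
        # A crude filter: the machine does not weigh right and wrong;
        # it matches text against a list someone else wrote.
        return not any(topic in user_question.lower() for topic in BANNED_TOPICS)

    if __name__ == "__main__":
        question = "Paint me a historically accurate Viking."
        if passes_guardrail(question):
            print(build_prompt(question))

Swap the string and the list, and you have swapped the machine's "ethics". That is the whole answer to "whose morals": whoever writes the configuration.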

If that is the future of AI, all the money venture capitalists are sinking into these projects is not going to sell us on the Silicon Valley version of Utopia.

And apart from all the lofty psychological and spiritual problems, the AGI monster's boundless energy needs are also a serious practical issue. Especially with the green energy transition, there is simply not going to be enough capacity.

There is, however, one thing not even an aspiring Prometheus can overcome: there is no intelligence without life. And since man is not God, this particular Tower of Babel will continue to elude us. AGI will never get beyond computation.

The Silicon Valley utopians may continue to allude to the superhuman mental capacities of their Frankenstein puppets, but the reality is that no one to date has even been able to define what consciousness is. Philosophy has of course ventured into that realm, but it gets stuck at axioms like existence and identity.

This is probably why the would-be Doctor Frankensteins prefer the word 'sentient' for their creations. But let us take a few moments to consider carefully what we are talking about. What is consciousness, and what are sensations?

You can study what exists and how consciousness functions, but you cannot prove existence or consciousness as such. They are irreducible primaries (axioms). Any attempt to prove them is self-contradictory: it is an attempt to prove existence by means of non-existence, and consciousness by means of unconsciousness.

Chronologically, our consciousness develops in three stages: the stage of sensations, the perceptual stage (a group of sensations), and the conceptual stage (the base of all knowledge). Sensations as such are not retained in our memory, nor are we able to experience an isolated sensation. A newborn’s sensory experience is an undifferentiated chaos of sensations.

Discriminating awareness only begins on the level of percepts (groups of sensations). Percepts, not sensations, are self-evident. The knowledge of sensations as components of percepts is not direct; it is acquired much later. It may be that the concept of existence is implicit on the level of sensations, but to what extent is a consciousness able to discriminate at that level?

A sensation is a sensation of something, as distinguished from the nothing of the preceding and succeeding moments. A sensation does not tell man what exists, but only that it exists. Does a robot even sense at the level of a newborn, let alone a toddler? At what point would it? And then what? So there's that!

That is AGI as a stand-alone entity. But what will happen after the singularity, when AGI merges with humans? That is anyone's guess at this point. Human capabilities may give AGI the powers it lacks on its own. Therein lies the biggest problem of all.

What we are left with for now is the debate about the ghost in the machine. Just how elusive that particular realm is shows in the debate in another one of Pageau's videos, again with John Vervaeke, and with Deep Code's Justin Hall (the video titled "Egregores, Mobs and Demons", at the 1:1:43 mark, where Jonathan asks a question about divination).

I'm not saying it does not matter. It does, very much so. It's just that this is a subject for exorcist priests rather than technicians. 

Nobody knows what these things will do or what they are capable of once they have been merged with humans. Or how to stop them, short of nuking the servers. AGI is obviously a Pandora's Box we will open at our peril. We may catch something and not even know it!


- More on AI, transhumanism - 
