
An Oxford Scholar's Dire Warning About AI


(Chuck Norris - WND News Center) - The Encyclopaedia Britannica defines AI this way:


Artificial Intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks – such as discovering proofs for mathematical theorems or playing chess – with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.


I’m neither a scientist nor a technological expert. But I am a martial arts champion, and I know what formidable opponents and rivals look like.


Over the years, I’ve watched the arena of artificial intelligence grow. As it has, I’ve grown more and more concerned that AI could easily get out of control, especially when it comes to the creation of synthetic humanoids or, worse, transhumanism – the move beyond the human.


It is one thing to call on Alexa to choose your music or tell ChatGPT to write you a speech. Or even to ask your driverless car to plan and drive a route from one destination to another.


But it is quite another issue when synthetic humanoid AIs replace American workers and have to make moral decisions between right and wrong. Or what about autonomous military drones that don’t need human oversight to kill?


One thing is very clear: The ethics and governance of future AI are being outpaced by its technological development. Who will decide, and how will we program, the morals – right and wrong – of future AI, especially when it comes to transhumanism?


Ethics and governance were at the heart of a recent discussion between former Deputy Prime Minister of Australia John Anderson and John Lennox, an Oxford scholar, bioethicist and professor of mathematics.


Anderson hosted Lennox on his podcast. And their conversation is nothing short of riveting and a must-hear (or must-read) for anyone concerned about the future of AI.


I highly encourage all Americans to listen to the whole podcast here. But let me quote at length from the transcript what I consider the most critical parts.


PROF. JOHN LENNOX discusses the dangers of AI: We’ve got to realize several things. First of all, the speed of technological development outpaces its ethical underpinning by a huge factor, an exponential factor.


Secondly, some people are becoming acutely aware that they need to think about ethics. And some of the global players, to be fair, do think about this because they find the whole development scary. Is it going to get out of control?


And someone made a very interesting point. I think it was a mathematician who works in artificial intelligence. And she was referring to the book of Genesis in the Bible. She said, God created something and [it] got out of control: us. We are now concerned that our creations may get out of control.


And I suppose in particular, one major concern is autonomous or self-guiding weapons. And that’s a huge ethical field. Here’s a man sitting in a trailer in the Nevada desert, and he’s controlling a drone in the Middle East, and it fires a rocket and destroys a group of people. And of course, he just sees a puff of smoke on his screen, and then it’s done. And there’s a huge distance between him and the operation of that lethal mechanism.


And we only go up one more step from that, where these lethal flying bombs, so to speak, control themselves. We’ve got swarming drones, and we’ve got all kinds of stuff. Who’s going to police that? And of course, every country wants them, because they want to have a military advantage. So we’re trying to police that and to get international agreement, which some people are trying to do.


Now, I don’t think we must be too negative about this. And I’m cautious here, but we did manage, at least temporarily – who knows what’s going to happen now – to get nuclear weapons at least controlled and partly banned. So, some success. But with what’s happening in Ukraine at the moment with Putin and so on, whether he could fire a tactical nuclear weapon, or whether it could be controlled autonomously and make its own decision – and then where do we go from there?


And these things are exercising people at a much lower level, but it’s still the same. How do you write an ethical program for self-driving cars?


JOHN ANDERSON: Yeah. So that if there’s an accident that can’t be avoided …


PROF. JOHN LENNOX: Yes, whom do you knock down? You see, it’s the “switch tracks dilemma” again, the one that’s put to students of ethics, and it’s very interesting to see how people respond. The switch tracks dilemma is simply that you have a train hurtling down a track, and there’s a set of points that can direct it down the left-hand or the right-hand side.


Down the left-hand side, there’s a crowd of children stranded in a bus on the track. On the right-hand side, there’s an old man sitting in his cart with a donkey, and you are holding the lever. Do you direct the train to hit the children or the old man? That kind of thing. But we’re faced with that all the time, and it’s hugely difficult. …
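
(To make Lennox’s point concrete, here is a minimal, purely hypothetical sketch in Python of what an “ethical program” for the switch tracks dilemma must ultimately contain. Every name and number below is invented for illustration; the lesson is that the moral rule itself has to be hard-coded by some programmer.)

    # A toy model of the switch tracks dilemma as a self-driving decision.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str     # who is on this track
        people_harmed: int   # how many the train (or car) would hit

    def choose_track(left: Outcome, right: Outcome) -> Outcome:
        # This one line IS the ethics: a bare utilitarian body count.
        # Swap in any other rule (age, culpability, a coin flip) and
        # different people die. Someone must decide which rule to write.
        return left if left.people_harmed <= right.people_harmed else right

    # The dilemma exactly as Lennox poses it, reduced to invented numbers:
    children = Outcome("a crowd of children stranded in a bus", 20)
    old_man = Outcome("an old man sitting in his cart with a donkey", 1)

    print(choose_track(children, old_man).description)
    # Prints: an old man sitting in his cart with a donkey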


JOHN ANDERSON: [And what about more advanced AI?]


PROF. JOHN LENNOX: The rough idea is to have a system that can do everything human intelligence can do, and more. Do it better, do it faster and so on. A kind of superhuman intelligence, which you could think of, at least in its initial stages, as being built up out of a whole lot of separate, narrow AI systems. That will surely be done to a large extent.


And then there is the idea of taking existing human beings and enhancing them with bioengineering, drugs, all that kind of thing, even incorporating various aspects of technology so that you’re making a cyborg – a cybernetic organism, a combination of biology and technology – to move into the future, so that we move beyond the human. And this is where the idea of transhumanism comes in: moving beyond the human.


And of course the view of many people is that humans are just a stage in the gradual evolution of biological organisms, which has developed in no particular direction through the blind forces of nature. But now we have intelligence, so we can take that into our own hands and begin to reshape the generations to come and make them according to our specification.


Now that raises huge questions. The first one, of course, concerns identity. What are these things going to be? And who am I in that kind of a situation? …


Now that idea of humans merging with technology is again very much in science fiction. But the fact that some scientists are taking it seriously means in the end that the general public are going to be filled with these ideas, speculative on the one hand, but serious scientists espousing them on the other, so that we need to be prepared and get people thinking about them, which is why I wrote my book [“2084: Artificial Intelligence and the Future of Humanity.”]


And in particular, in that book, I engaged not with scientists but with Yuval Noah Harari, an Israeli historian.


JOHN ANDERSON: Can I interrupt for a moment?


PROF. JOHN LENNOX: Yes, of course you can.


JOHN ANDERSON: To quote something that [Harari] said, just to frame this so beautifully – because I’m glad you’ve come to him. He actually said this: “We humans should get used to the idea that we’re no longer mysterious souls; we’re now hackable animals.” Everybody knows what being hacked means now.


And once you can hack something, you can usually also engineer it. I’ll just put that in for our listeners as you go on.


PROF. JOHN LENNOX: Yeah, well, sure. That’s a typical Harari remark. And he wrote two major bestselling books, one called “Sapiens” – Homo sapiens, that would be – and the other, “Homo Deus.” And it’s with that second book that I interact a great deal, because it has huge influence around the world.


And what he’s talking about in that book is re-engineering human beings, and producing homo deus, spelled with a small d. He says, think of the Greek gods: turning humans into gods, something way beyond their current capacities and so on. Now, I’m very interested in that, from a philosophical and from a biblical perspective, because that idea of humans becoming gods is a very old idea. And it’s being revived in a very big way.


But to make it precise, or more precise: Harari sees the 21st century as having two major agendas. The first is, as he puts it, to solve the technical problem of physical death, so that people may live forever. Not that they can’t die – but they don’t have to. And he says technical problems have technical solutions, and that’s where we are with physical death. That’s number one.


The second agenda item is to massively enhance human happiness. Humans want to be happy, so we’ve got to do that. How are we going to do that? By re-engineering them from the ground up – genetically and in every other way. Drugs, et cetera. Adding technology, implants, all kinds of things.


Until we move humans on from the animal stage – which he believes was reached through no plan or guidance – we, with our superior brain power, will turn them into superhumans. We’ll turn them into little gods. And of course, then comes the massive range of speculation. If we do that, will they eventually take over? And so on and so forth.


So that is transhumanism connected with artificial intelligence, connected with the idea of the superhuman. And people love the idea. And you probably know there are people, particularly in the USA, who’ve had their brains frozen after death and hope that one day they’re going to be able to upload their contents onto some silicon-based thing that will endure forever. And that will give them some sense of immortality.


Again, I strongly encourage Americans to listen to or read the entire Lennox-Anderson podcast or transcript here, because there’s so much more they have to share, including solutions that already exist for the core purposes and goals of AI.


(Adding to the AI dilemma is the human genome editing presently being done and advanced by, for example, CRISPR technology – first used on humans in China in 2016. Combined with AI advances, the repercussions will be staggering! For more, please see Dr. Jennifer Doudna’s “A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution.”)


For all the benefits of AI, its future growth and expansion leave the human race in a very tenuous position. The global community is simply unprepared for, and incapable of containing, its potential fallout, especially when it falls into the wrong (evil) hands. God, help us!


Bottom line: I fear the future of AI will only prove the words often attributed to the renowned theoretical physicist Albert Einstein: “Artificial intelligence is no match for natural stupidity.”


For further reading on AI, Dr. Lennox and Anderson recommend:

  • John Lennox’s new book, “2084: Artificial Intelligence and the Future of Humanity.”

  • Luca Longo: The Turing Test, Artificial Intelligence and the Human Stupidity (Transcript)

  • Jay Tuck: Artificial Intelligence: It Will Kill Us (Full Transcript)

  • Peter Haas: The Real Reason to be Afraid of Artificial Intelligence (Full Transcript)

  • https://reasons.org/explore/blogs/category/artificial-intelligence

  • https://au.thegospelcoalition.org/article/as-a-christian-i-went-down-the-ai-rabbit-hole-here-are-12-things-i-discovered/




This article was originally published by the WND News Center.




