
Better Natural Intelligence Or Better Artificial Intelligence?


When I was a kid, I was a big fan of Sherlock Holmes (Amazon). I admired the way his brain worked and how he could take the smallest of clues, put them together, and arrive at the solution to the mystery facing him.

I was also a fan of Thomas Jefferson and Leonardo da Vinci, both polymaths who had a deep knowledge of a wide range of subjects.

I’m still a fan of all three of them. They all embody a trait I’ve always wanted – to be able to solve all the problems. Okay, maybe they couldn’t solve all the problems, but they did a pretty good job with the multitude they did tackle.

Then there were the Mentats in Frank Herbert’s Dune (Amazon) who trained their minds to the utmost degree possible. And later there was the drug NZT in Limitless (Amazon) which turned a nobody slacker into the smartest man on the planet.

For most of my life, artificial intelligence has been strictly in the realm of science fiction. HAL 9000, the computer in 2001: A Space Odyssey (IMDB) (Amazon) was the best known example of an artificially intelligent computer. Of course, I think it also set off the general feeling of mistrust of artificial intelligence prevalent in our society, and later reinforced by The Terminator (IMDB) and its sequels.

Now, we have ChatGPT and Google Gemini (previously Bard) and other Large Language Model (LLM) apps that are very good at providing information but aren’t yet ‘artificially intelligent’ the way we think of HAL 9000. These apps are trained on data sets of trillions of words. They use this data to predict the most likely next word of a response, based on all the examples they have at their disposal.

Predictive text is not as sorcerous as it might seem. Try finishing these phrases yourself:

A stitch in time saves _____.

A bird in the hand is worth two in the _____.

What doesn’t kill you, makes you _____.

(The answers being ‘nine’, ‘bush’ and ‘stronger’ for those not up on their clichés.)
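To make the idea concrete, here’s a minimal sketch of next-word prediction using simple bigram counts over a tiny made-up corpus. (Real LLMs use neural networks trained on trillions of words, not raw counts – this just illustrates the core idea of picking the most likely next word from observed examples. The corpus and function names are my own invention.)

```python
from collections import Counter, defaultdict

# A tiny, hypothetical corpus; real LLMs train on trillions of words.
corpus = (
    "a stitch in time saves nine . "
    "a bird in the hand is worth two in the bush . "
    "what doesn't kill you makes you stronger . "
).split()

# Count which word follows each word (a bigram model --
# far simpler than a neural network, but the same basic idea:
# predict the most likely next word from the examples seen).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("saves"))   # -> nine
print(predict_next("makes"))   # -> you
```

With enough data, the same trick completes the clichés above just as you did – which is both why these apps feel smart and why they aren’t HAL.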

ChatGPT, et al., are a nifty achievement to be sure. But they are not true ‘artificial intelligence’ yet.

All of which brings me to the question – which is better? Improving natural human intelligence, or improving artificial intelligence?

Should we be working on developing Mentats whose brains have been trained to rival that of a computer for sheer computing power, synthesizing information, and keeping it all in their memories?

Or should we be aiming to create HAL 9000 to keep track of all the mundane things, to free up people to do more important things like be more creative, and design new things? (Or maybe just watch cat videos on the Internet all day.)

I’m not sure if this is strictly an either/or situation. I’d say let’s develop both and see which one is better at solving the problems of the world today. Let’s face it – there are plenty of problems to go around.

More on this topic tomorrow.

Note: The Amazon links above are affiliate links. If you make a purchase from clicking one of those links, you don’t get charged anything extra, and I make a small commission from the sale.

If you’d like to support my efforts, why not buy me a chocolate chip cookie through my Ko-Fi page?
