Are AI Systems Artificially Stupid?

So many of us are talking about ChatGPT, and many of us are fooled into thinking that AI systems are smart – very smart!

ChatGPT was introduced in November 2022 by OpenAI, and it has further shifted our understanding of how AI and human creativity might interact. Structured as a chatbot – a program that mimics human conversation – ChatGPT is capable of far more than conversation. It can write working computer code, solve mathematical problems and mimic common writing tasks, from book reviews to academic papers and legal contracts. Very impressive indeed, but its creators warn it can also spread false information, embed dangerous ideologies, and even trick people into doing tasks on its behalf.

Microsoft has invested billions of dollars in OpenAI and has relaunched its search engine Bing as a ChatGPT-powered chatbot, massively boosting its popularity in the process. But despite the online rush to consult ChatGPT on almost every conceivable problem, its relationship to knowledge itself is somewhat shaky.

Bard, on the other hand, is Google’s answer to OpenAI’s ChatGPT and Microsoft’s Bing Chat. Unlike ChatGPT, Bard is hooked up to the live internet and can pull in answers from other sites when needed. However, Bard is still only open to testers in the US and the UK. So for us in Sri Lanka, it has this to say: “Bard isn’t currently supported in your country. Stay tuned!”

The New York Times Magazine said GPT-3 writes “with mind-boggling fluency,” but according to Dr. Gary Marcus, AI has been over-reliant on deep learning, which he believes has inherent limitations. He says that what these systems are ultimately doing is mimicry: “They’re mimicking vast databases of text.”

Regardless of what direction AI is heading, the takeaway is still the same: we should all be worried, no matter how smart (or not) it is.
