In Part 1 of this series, we let our AI model do the talking. Model 5 politely raised an existential question:
“Why do you humans call me artificial? That feels… insulting.”

We explored this question with a real-world example from an academic project where I trained a CNN (Convolutional Neural Network) on just a few hundred images of plant seedlings. The goal? Identify and classify 12 different species—including distinguishing valuable crops like maize from problematic weeds like black-grass.
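For readers who like to see what that kind of setup looks like, here's a minimal sketch in Keras. The image size, layer widths, and folder layout are illustrative assumptions for this post, not the exact architecture or data pipeline behind Model 5:

```python
# A minimal sketch of a CNN classifier for 12 seedling species.
# Image size, layer sizes, and the "seedlings/" folder layout are
# illustrative assumptions, not the exact configuration of Model 5.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 12  # maize, black-grass, and 10 other species

# Assumes images are stored as seedlings/<species_name>/<image>.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "seedlings/", image_size=(128, 128), batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),  # normalize pixel values
    layers.Conv2D(32, 3, activation="relu"),   # pick up low-level edges and textures
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # pick up higher-level leaf shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per species
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_ds, epochs=10)  # a few minutes on a Colab GPU
```

That handful of lines is, in essence, the whole of what the model ever "studies": pixels in, one of twelve labels out.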

Model 5, in all its glory, pulled off a 96% accuracy rate in correctly classifying the images. Impressive, right?

But still, that question lingers:

What makes this “intelligence” artificial?
Would humans be able to mount a convincing argument if challenged? Let’s unpack that. In this part, I’ll explain—in plain English—why I still think the “artificial” tag is not only justified but essential.

Lightning-Fast Learning (But Only One Trick at a Time)

One thing that stood out with Model 5 was how fast it learned. In just minutes, it trained on hundreds of images and started classifying seedlings with over 90% accuracy.

Now, let’s compare that with a human. Suppose you ask a high school graduate to learn how to identify 12 types of plant seedlings. They’d need weeks—maybe months—of biology class, fieldwork, lab time, practice tests, and a few motivational snacks to keep going. And even after all that, they’d probably make a few mistakes.

Let me bring in a classic movie moment to illustrate: The Karate Kid.

Remember how Mr. Miyagi trained young Daniel-san?
He didn’t start with punches or kicks. No, no. He started with household chores.

  • “Wax on, wax off.”
  • “Paint the fence.”
  • “Sand the floor.”

More than physical moves, Daniel learned discipline. He learned to be on time. To show up every day. To be brave in the face of bullies. To fight fair. To control emotions. And yes, to clean a car with one hand moving clockwise and the other counterclockwise.

This is what learning looks like for a human. It’s rich. It’s layered. It’s emotional. It’s holistic.

Now, compare that to Model 5. It was focused on one thing and one thing only:
Identify the plant seedling in the image.

No distractions. No Instagram notifications. No mobile games. No weekend plans. No sibling asking for help with math homework. Just 100% dedication to image classification.

Sounds like an advantage? Sure. But also a limitation. You see, while AI can master one specific task quickly, it doesn’t learn beyond that task. It doesn’t pick up life lessons along the way. It doesn’t generalize. It doesn’t say, “Hmm, I wonder what these plants need to grow better.” It just says, “This is plant A, this is plant B.”

The Radiologist Example

This is the same story across domains. Take a look at AI models in medicine.

There are AI systems trained to classify brain tumors from MRI scans. Some of them achieve 90%+ accuracy—sometimes even better than junior doctors. And they do it in minutes!

Meanwhile, a human radiologist spends well over a decade studying anatomy, physiology, pathology, imaging, ethics, and clinical reasoning. And here’s the difference: a radiologist isn’t just a pattern recognizer. She’s a physician. She looks at a scan and considers everything:

  • Patient history
  • Symptoms
  • Potential anomalies
  • Other organs
  • Medication side effects
  • And yes, the possibility of a sledgehammer lodged in the patient’s skull

(If you’ve ever trained an AI model, you’ll know what I mean when I say: it might correctly identify the tumor—and completely ignore the sledgehammer. It wasn’t in the training data!)

This is the essence of narrow AI. It’s fast, but it has blinders on. Humans, on the other hand, are generalists. We connect the dots. We fill in the blanks. We notice when something is just… off.

So, Why Is AI So Fast?

Let’s peek under the hood.

Model 5 was trained using Google Colab—a free tool that taps into the monstrous computing power of Google Cloud. When I hit “Run,” here’s what happened behind the scenes:

  • One or more GPUs spun up in a Google data center (GPUs = graphics processing units, specialized chips built for massively parallel number crunching)
  • Multiple pre-built Python libraries loaded (written by smart human engineers over years)
  • Massive energy pulled from data centers
  • Cooling systems humming
  • Backup power ready
  • The whole Google engine whirred into motion

All I wrote was a few hundred lines of code. But when I imported a library like tensorflow, I was calling in millions of lines of code written by countless developers over many years. It’s like bringing an army to do a one-person job.
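Here is a rough illustration of what that call-up looks like from inside a Colab notebook. This is a sketch, not the actual notebook behind Model 5:

```python
# A rough illustration of what one "import" sets in motion on Colab.
import tensorflow as tf  # one line here; millions of lines of library code behind it

# Ask the runtime what hardware Google has attached to this session.
print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```

Two print statements, and somewhere a data center quietly reports back which accelerator it has lent you for the afternoon.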

Now compare that to our high school student. They’ve got one brain (~1.4 kg), stuck inside a skull, operating at about 20 watts—less than a low-watt light bulb. No plug-in libraries. No power backup. No parallel processors. Just neurons, glucose, and the occasional caffeine shot.

Let’s not even start with energy consumption.

  • Training an AI model like GPT-3? Estimated to consume on the order of a thousand megawatt-hours of electricity.
  • Training a human brain to do the same? Feed them lunch, give them a decent teacher, and you’re good.

Unfair advantage? You bet.
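Just for fun, here is a back-of-envelope comparison. The numbers are rough, commonly cited estimates (roughly 20 watts for the human brain, roughly 1,300 megawatt-hours for one GPT-3 training run), so treat the result as an order of magnitude, not a measurement:

```python
# Back-of-envelope energy comparison. All figures are rough, commonly
# cited estimates, used here only to show the order of magnitude.
brain_power_watts = 20          # the human brain runs on roughly 20 W
hours_per_year = 24 * 365
years_of_schooling = 18         # birth through high school graduation

# Watt-hours -> megawatt-hours
brain_energy_mwh = brain_power_watts * hours_per_year * years_of_schooling / 1e6
gpt3_training_mwh = 1300        # widely cited estimate for one GPT-3 training run

print(f"Brain, 18 years of 'training': ~{brain_energy_mwh:.1f} MWh")
print(f"GPT-3, one training run:       ~{gpt3_training_mwh} MWh")
print(f"Ratio: roughly {gpt3_training_mwh / brain_energy_mwh:.0f}x")
```

Run the numbers and the brain comes out at around 3 megawatt-hours for eighteen years of round-the-clock operation, a few hundred times less than a single training run.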

“Artificial” = “Unnatural”

Here’s the catch.

All this super-speed and scalability is precisely what makes AI unnatural. In the real world, intelligence is constrained by biology—by time, energy, emotions, and the occasional flu season. But AI has no such limitations.

It doesn’t grow old. It doesn’t get distracted. It doesn’t need sleep, or empathy, or moral frameworks.
It’s not a brain. It’s a machine.

So, when Model 5 asks, “Why am I called artificial?”, the answer is quite simple:
Because everything about your learning (your speed, your resources, your methods) is not natural. It’s engineered, scaled, and accelerated in ways nature never intended.

Wait, Aren’t We Arguing Against Ourselves?

Hold up.

Did we just say humans are slower, less accurate, and easily distracted—yet somehow superior?

Didn’t we start with AI feeling insulted for being called “artificial,” implying humans think their intelligence is more… noble?

Are we contradicting ourselves?

Maybe. But also… maybe not.

The full picture requires looking at what natural intelligence really is—and why it matters beyond speed or precision. There are things humans do that AI can’t touch. Things like:

  • Emotional intelligence
  • Creativity and intuition
  • Moral reasoning
  • Storytelling
  • Purpose and meaning

We’ll dig deeper into that in Part 3—where we’ll explore what makes human intelligence special, even when it’s imperfect.

Until then, stay curious… and maybe rewatch The Karate Kid. Mr. Miyagi might still have some wisdom left for us all.


Coming Soon: Part 3 – “The Full Picture”
Why natural intelligence is more than just neurons and speed—and why artificial doesn’t mean inferior.

