Can A.I. Help Us Understand Babies?
Feb 04, 2024

Artificial intelligence is everywhere in our day-to-day lives and our interactions with the world.
Can we learn things from ChatGPT and other large language models that shed light on human language acquisition? Sounds like it would be cool if it were possible, but how would it work?
Well to start, human babies have to learn lots of things: how the objects around them behave, the rules of social interaction, the various parts of language. And it certainly seems like A.I.s learn language too. But what if all they do is imitate human speech? Do they think and have actual knowledge, or are they just fancy computer programs?
Of course, what if our own minds are just fancy computer programs, running on the hardware of our brains? Why be so confident that A.I.s are different from us? Look at all the things A.I.s can do these days: they can make analogies, they can summarize a paragraph, they can even write poetry—or at least nothing worse than what a bad human poet might write.
But do A.I.s understand what they're doing when they write poetry? After all, we know how it works: the A.I. has a giant database that includes a lot of poems. When you ask it to write a haiku, it keeps guessing the next line based on its word-bank. That seems less like understanding than a glorified lookup table. Understanding something means having a sense of the underlying principles. It’s not enough to get the answers right—you have to get them right for the right reasons.
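The "glorified lookup table" picture can be made concrete with a toy bigram model: count which word follows which in a corpus, then always guess the most frequent follower. (This is a deliberately simplified sketch for illustration only; real large language models use neural networks trained on vast text collections, not literal tables.)

```python
from collections import Counter, defaultdict

# Toy "lookup table" language model: tally which word follows which
# in a tiny corpus, then always guess the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word):
    # Return the word most often seen after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" (it follows "the" twice, beating "mat" and "fish")
```

The model produces plausible-looking continuations purely from co-occurrence counts, with no sense of what a cat or a mat is—which is exactly the sense in which getting answers right can come apart from getting them right for the right reasons.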
Could A.I. ever do both? Kids, for example, start learning their times tables by looking up the answers and memorizing them. If they do it enough times, they start to understand multiplication. Looking things up isn’t understanding, but it can lead to genuine understanding.
It seems like that could happen with A.I., but it sure hasn't yet. When you ask ChatGPT for quotes or philosophy papers or scientific facts, its answers sound plausible, but it often makes stuff up. It can't tell the difference between a truth and a lie.
So it's not perfect right now, but it's getting better every day, and at some point it will give the right answers. And yet we still might not know whether it's giving those right answers for the right reasons. A small child will still understand more than it does.
Of course, how do we know if a child is giving the right answers for the right reasons? Kids make things up too, and they have a lot of trouble telling the difference between make-believe and reality. But kids eventually grow up, and maybe A.I. will grow up too, at which point who knows what it will be capable of. With the right parenting, it could be running the country in a few years.
But before anyone endorses ChatGPT for president, let's hear what our guest, Michael Frank, Director of the Stanford Symbolic Systems Program, has to say.
Photo by Jelleke Vanooteghem on Unsplash
Comments (7)
Daniel
Thursday, April 11, 2024 -- 1:28 PM
Have you read Emerson?
iguanainjure
Wednesday, June 5, 2024 -- 6:54 PM
This exploration of the parallels between human language acquisition and the learning processes of large language models like ChatGPT offers a fascinating perspective on the evolution of artificial intelligence and its potential intersections with our understanding of cognition. It provokes thought on the nature of understanding, the role of memorization in learning, and the possibilities of future advancements in AI capabilities.
Isabella38
Wednesday, June 5, 2024 -- 8:03 PM
Using large language models like ChatGPT to understand human language acquisition raises intriguing questions about the nature of AI comprehension and its parallels to human cognitive development.
russellyy
Monday, August 5, 2024 -- 9:20 PM
LLMs are trained on vast datasets of human language, learning to predict the next word in a sentence based on context. While they can generate coherent and contextually relevant text, they do not possess understanding or consciousness. Their grasp of language is statistical rather than experiential.