Artificial intelligence is everywhere in our day-to-day lives and our interactions with the world.
Can we learn things from ChatGPT and other large language models that shed light on human language acquisition? Sounds like it would be cool if it were possible, but how would it work?
Well to start, human babies have to learn lots of things: how the objects around them behave, the rules of social interaction, the various parts of language. And it certainly seems like A.I.s learn language too. But what if all they do is imitate human speech? Do they think and have actual knowledge, or are they just fancy computer programs?
Of course, what if our own minds are just fancy computer programs, running on the hardware of our brains? Why be so confident that A.I.s are different from us? Look at all the things A.I.s can do these days: they can make analogies, they can summarize a paragraph, they can even write poetry—at least nothing worse than what a bad human poet might write.
But do A.I.s understand what they're doing when they write poetry? After all, we know how it works: the A.I. has a giant database that includes a lot of poems. When you ask it to write a haiku, it keeps guessing the most likely next word based on its word-bank. That seems less like understanding than a glorified lookup table. Understanding something means having a sense of the underlying principles. It’s not enough to get the answers right—you have to get them right for the right reasons.
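To make the "glorified lookup table" picture concrete, here is a deliberately simple sketch of that kind of text generator: a bigram table that records which word follows which in a tiny made-up corpus, then generates text by repeated lookup. (The corpus and function names are invented for illustration; real language models learn statistical patterns over billions of examples rather than storing a literal table, but the next-word-guessing idea is the same.)

```python
from collections import defaultdict
import random

# A tiny toy corpus (invented for this example).
corpus = "an old silent pond a frog jumps in sound of water".split()

# The "lookup table": map each word to the words that follow it.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, length=5):
    """Produce text by repeatedly looking up a plausible next word."""
    words = [start]
    for _ in range(length - 1):
        options = table.get(words[-1])
        if not options:  # no known continuation; stop here
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("an", 4))  # e.g. "an old silent pond"
```

Everything this generator "knows" is which words have followed which, which is exactly why it looks more like retrieval than comprehension.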
Could A.I. ever do both? Kids, for example, start learning their times tables by looking up the answers and memorizing them. If they do it enough times, they start to understand multiplication. Looking things up isn’t understanding, but it can lead to genuine understanding.
It seems like that could happen with A.I., but it sure hasn’t yet. When you ask ChatGPT for quotes or philosophy papers or scientific facts, its answers sound plausible, but it often makes stuff up. It can’t tell the difference between a truth and a lie.
So it's not perfect right now, but it's getting better every day, and at some point it will give the right answers. And yet we still might not know whether it’s giving those right answers for the right reasons. A small child will still understand more than it does.
Of course, how do we know if a child is giving the right answers for the right reasons? Kids make things up too, and they have a lot of trouble telling the difference between make-believe and reality. But kids eventually grow up, and maybe A.I. will grow up too, at which point who knows what it will be capable of. With the right parenting, it could be running the country in a few years.
But before anyone endorses ChatGPT for president, let's hear what our guest, Michael Frank, Director of the Stanford Symbolic Systems Program, has to say.