Computers have been beating the best human Go players since 2016. The Go world champion retired in part because AI is “an entity that cannot be defeated.”
But a human just trounced one of the world’s best Go AIs 14 games to 1: https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai
I think this news story is more interesting than it might first appear (I don’t know all the details, so grain of salt). It isn’t just a gaming curiosity; it points to a fundamental flaw in “deep learning” approaches in general.
1/
The Go AI was trained by feeding it a huge number of Go games. It built a model based on w̶h̶a̶t̶ ̶h̶u̶m̶a̶n̶s̶ ̶d̶o̶. CORRECTION: KataGo is trained by playing against itself; the model input is “past AI games.”
The human beat it by doing something so obvious that no human (and no sensibly trained AI) would ever fall for it — which means nobody ever plays that way, so it wasn’t in the training data, and the AI didn’t know how to counter it.
(Basically, the human forms a conspicuous giant capture ring while distracting the AI with tactical battles the AI knows how to counter.)
#ai
2/
Why is this interesting?
The recent eye-popping advances in AI have come from models that scan huge datasets, generated either by humans (e.g. “all competitive Go games” or “all the text we could find on the web”) or by computer (“the AI plays itself a billion times”), and imitate the patterns in that data — with no underlying model of meaning, no experience to check against, no theory formation, just parroting.
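To see “imitating patterns with no model of meaning” in miniature, here’s a toy sketch (my illustration, not how KataGo or an LLM is actually built): a bigram Markov chain that learns only which word follows which, then parrots locally plausible sequences with zero understanding.

```python
import random
from collections import defaultdict

# Toy "pattern imitator": a bigram Markov chain. It records only which word
# follows which in its training text -- no meaning, no theory, just patterns.
corpus = ("the AI plays the game and the AI learns the game "
          "and the game teaches the AI").split()

follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # parrot a transition seen in training
    output.append(word)

print(" ".join(output))
# Locally fluent word salad: every adjacent pair appeared in the training
# text, but nothing checks whether the whole makes sense.
```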
3/
These systems produce such striking results that we have to ask whether our brains are any different. Are we also just fancy pattern mimics?
Would an LLM gain human-like intelligence if only we had more processing power?
This result suggests not. The tactic the AI missed would be painfully obvious to a human player. There’s still something our brains do — theorizing, generalizing, reasoning through the unfamiliar — that these AIs don’t.
4/
As usual on Ars, there are actually some good comments.
This person wisely reminds us that AI history is littered with bold predictions that human-like AI is just around the corner — and every time, we realized we’d failed to understand what the hard part even was. The Marvin Minsky quote here is eye-popping:
https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/?comments=1&post=41644794
5/
But does this matter? Sure, the AI did a faceplant on some bizarro strategy that would never fool a competent human. So what? It’s just a board game.
OK, what if it’s a self-driving car?
Have you ever encountered a traffic situation that was just totally bizarre, but had a common-sense solution like “wait” or “just go around”? What would an AI do in that situation?
Think of the reports of self-driving Teslas suddenly swerving or accelerating straight into an obvious crash.
6/
Or as this comment remarks: what if it’s an autonomous killbot? (arguably a superset of the previous item, I know) https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/?comments=1&post=41644720
Several comments point out the parallels to the recent story about Marines defeating an AI with Jim-Carrey-style nonsense antics that bore no relationship to the training data: https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/?comments=1&post=41644757
The linked article: https://taskandpurpose.com/news/marines-ai-paul-scharre/
7/
This is probably what’s going on with the hilarious ChatGPT faceplants making the rounds on social media.
People try to fool GPT with esoteric questions, but those are easy for it: if anybody anywhere on the web already answered the question, no problem — and making it esoteric just narrows the search space.
But give it a three-digit addition problem, and odds are there’s no specific example anywhere in its training data to match. And GPT can’t turn all the addition examples it has seen into a generalized theory of how to do addition.
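To make that concrete, here’s a toy contrast (my sketch, not a claim about GPT’s internals): a pure memorizer can only answer sums it has literally seen before, while the grade-school carry algorithm, a tiny generalized theory of addition, handles every case.

```python
import random

# "Training data": a random sample of three-digit addition problems.
# There are 900 * 900 = 810,000 possible pairs; a sample can't cover them all.
random.seed(0)
training = {}
for _ in range(100_000):
    a, b = random.randint(100, 999), random.randint(100, 999)
    training[(a, b)] = a + b

def memorizer(a, b):
    """Pattern-matching with no theory: answers only problems it has seen."""
    return training.get((a, b))  # None if this exact pair wasn't in the data

def carry_add(a, b):
    """The grade-school algorithm: a tiny 'theory' that covers every pair."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        digit = a % 10 + b % 10 + carry
        result += (digit % 10) * place
        carry = digit // 10
        a, b, place = a // 10, b // 10, place * 10
    return result

misses = sum(memorizer(a, b) is None
             for a in range(100, 1000) for b in range(100, 1000))
print(f"memorizer is stumped by {misses:,} of 810,000 possible problems")
print(carry_add(487, 956))  # 1443 -- correct even if this pair was never seen
```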
8/
What’s the moral here?
1. Beware the AI hype.
2. Know your AI history, at least a little.
3. If AI ever does become intelligent (whatever that means), it’s going to be •weird• for a long time first.
4. Beware your own heuristics about what “intelligence” looks like, because •that• is a place where our brains are as easily fooled as the Go AI was. Our brains didn’t evolve around things like GPT, and we’re easily bamboozled by them.
/end
@inthehands A lot of the challenge AI specialists face in trying to achieve humanlike intelligence is the classic problem of trying to answer the questions of philosophy with engineering. Some of the greatest minds in history have wrestled with what understanding is and how consciousness operates. Many modern engineers respond with "lol philosophy doesn't answer anything" before crashing into the most basic philosophical problems and completely failing to solve or even understand them.
@glenatron @inthehands Engineers that dismiss philosophy as worthless should not be allowed to work on anything related to humans or the environment (i.e. anything). Dangerous people susceptible to the dumbest scams and capable of inhuman horrors without a shred of conscience or second thoughts.
@arclight @inthehands I don't disagree, but it is very common. Every couple of years there's a new book from a physicist claiming to have "solved" philosophy, and it's always misbegotten from the start, because they haven't grasped that the scientific method, logic, and the nature of numbers are all philosophy. If you can solve it with physics, it's not metaphysics; that's right there in the name! Other fields do the same, and as a philosophy grad I find it infuriating.