
@mcc @PTR_K @futurebird Exactly! When LLM developers say, “Well, LLMs’ underlying tech means they can’t verify the information they provide,” my response is, “Then they’re a terrible tool, and you shouldn’t use that tech for that purpose.” Don’t push AI on us if you know it doesn’t work — get us tools that *do* work.

LLMs might be great *front-ends* that allow natural conversation with separate actual expert systems. But they aren’t experts on their own.
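A minimal sketch of that "front-end" idea (all names here are hypothetical, and `llm_parse` is just a stand-in for a real model): the language model only maps free text to a structured query, and a deterministic backend computes the answer, so correctness doesn't depend on the model "knowing" facts.

```python
# Hypothetical sketch: LLM as a natural-language front-end to an expert system.

def llm_parse(user_text: str) -> dict:
    """Stand-in for an LLM extracting a structured query from free text."""
    # A real system would prompt a model to emit structured output like this.
    if "times" in user_text:
        nums = [int(w) for w in user_text.replace("?", "").split() if w.isdigit()]
        return {"op": "mul", "args": nums}
    return {"op": "unknown", "args": []}

def expert_backend(query: dict) -> str:
    """Deterministic system the answer actually comes from -- verifiable."""
    if query["op"] == "mul":
        a, b = query["args"]
        return str(a * b)
    return "I can't verify that."

print(expert_backend(llm_parse("what is 17 times 23?")))  # → 391
```

The point of the split: if the backend is wrong, you can debug it; if the parse is wrong, the failure is visible in the structured query rather than buried in fluent prose.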

@michaelgemar @mcc @futurebird
Not sure if this is exactly part of the same issue, but I've heard there is actually a "black box problem" for AI.

Basically: by what exact process did the AI make its decisions, and what specific aspects of the data presented did it latch onto in order to produce its output in any given case?

This seems to be a problem for researchers themselves and they're trying to come up with ways to figure it out.

@PTR_K @michaelgemar @mcc @futurebird It's actually worse than that. You can ask the AI for its reasoning easily enough, but it can't actually answer, because the way these models work internally doesn't retain that information. Instead, they will just generate a new answer to that question, based only on their previous answer. A generative predictor actually has no memory at all, other than its own output.
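That statelessness can be sketched in a few lines (a toy illustration, with `generate` as a stand-in for any next-token predictor; nothing here is a real API): the only "memory" is the transcript that gets resent every turn, so asking "why?" just produces new text conditioned on the earlier answer.

```python
# Toy sketch: chat "memory" is just the transcript resent each turn.

def generate(prompt: str) -> str:
    """Stand-in for an LLM: maps a text prompt to a text continuation."""
    return f"<continuation of {len(prompt)} chars of context>"

history = []

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The model sees ONLY this flattened text. No reasoning trace from
    # earlier turns survives; its past output is just more input text.
    reply = generate("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply

chat_turn("What's the capital of France?")
# This doesn't retrieve the earlier reasoning -- it generates fresh text
# conditioned on the transcript, including the model's own prior answer.
chat_turn("Why did you give that answer?")
```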

@Qybat @PTR_K @michaelgemar @mcc

Whoa. It's obvious to me that asking something like ChatGPT "Where did you get that answer?" will only produce text that sounds like what GPT's matrices say ought to be the response to that question... and it couldn't possibly be an actual answer to the question.

But if many or most people don't see this, it shows a deep, fundamental misunderstanding of what these tools are doing... and it might explain why people keep trying to get them to do things they can't.

@futurebird @Qybat @PTR_K @michaelgemar @mcc yes that's absolutely it!

I think the trouble is that, for us humans, language is our interface to the world. So much of our understanding of reality is communicated through language that it's kind of like our single point of failure, the perfect hack. We can't conceive of something being able to say all those clever words without actually being smart, because words are also the only way we have of telling whether other people are smart.

@nottrobin @mcc @michaelgemar @PTR_K @Qybat @futurebird the way I describe it, LLMs are "intelligent" but not necessarily "smart", with both terms being complete junk to begin with which is why people argue over them constantly.

They possess certain cognitive abilities around language processing, but not a full set of cognitive abilities, and many fall in the "sub-human" or "lower end of human" range. (One ability they're good at is executive function, which gets them used by a lot of ADHD/autistic people who have an impairment in our executive functioning.)

I definitely agree, regardless, that they're either applied poorly or presented poorly in most cases (i.e. applied poorly meaning cases like customer-service LLMs that get companies sued, and presented poorly meaning cases like search, where people treat them as authoritative rather than supplementary).

And the programming assistant side gets wildly misrepresented (as someone who happily uses AI as a programming assistant, but never in the ways people seem to think it gets used...)

@shiri @futurebird

I'm afraid I just disagree.

LLMs aren't a "lower end of human" intelligence. They're completely different in kind.

Someone's written a complex formal model of language and run it over insanely huge amounts of text to calculate millions of statistical data points about what text comes next. Then they wrote a program to receive a blob of text input and use the statistical graph to generate a blob of text in response.

Intelligence means many things, but this is none of them.
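The process described above can be illustrated with a deliberately tiny toy (a bigram model, vastly simpler than a real LLM, but the same "what text comes next" statistics): record which word follows which, then generate by sampling those counts, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy bigram model: pure next-word statistics, no understanding involved.
def train(text: str) -> dict:
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # count each observed successor
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample from the statistics
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

Real LLMs condition on long contexts with learned parameters rather than raw counts, but the output is still a sample from "what text tends to come next", which is the poster's point.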

@shiri @futurebird There is nothing like "understanding". That's the language trick I was talking about.

When it says "I'm sorry that was my mistake", it's just regurgitating what some humans have said before in a slightly different order.

When you ask it what it's like to be an AI, it regurgitates an amalgam of the sci fi & fan fic people have written about what an AI might say.

It's what Emily Bender and Timnit Gebru called a stochastic parrot.

@nottrobin @shiri

I remember being surprised to learn that some people never think without hearing words, a kind of narrated version of their thoughts.

My thoughts don't work like that all the time; they don't always have narration.

It seems to vary from person to person. I wonder if people who always hear their thoughts as words are more likely to see an LLM as "thinking"?

Artemis

@futurebird @nottrobin @shiri
I dunno...why would you think that would be the case?

My thoughts are all verbal. I think and interact with the world almost entirely through words (I *can't* think visually—I appear to have some form of aphantasia), and I find LLMs to be total horseshit.