If AI flunks François Chollet’s test, maybe it just struggles with colorful grids—not intelligence itself.
https://www.theatlantic.com/technology/archive/2025/04/arc-agi-chollet-test/682295/
#AI #AGI #Intelligence #Chollet #ARCAGI #PhilosophyOfAI

The debate over the advent of #AGI is heating up: whether you are apocalyptic or integrated, a society-wide conversation is needed. One based neither on fear nor on underestimation. What will the future look like? The dystopian one described in the AI 2027 report? Let's talk about it.
"AI as Normal Technology" — a timely and important inquiry into the social hazards of #AI. Among other points, the authors reject "fast takeoff" scenarios and describe what's dangerous about the "superintelligence" framing. TL;DR: "drastic interventions premised on the difficulty of controlling superintelligent AI will, in fact, make things much worse if AI turns out to be normal technology."
Google DeepMind is hiring a Research Scientist to explore what comes after AGI.
From law to machine consciousness, help map AGI’s societal impact & shape future outcomes.
PhD + LLM experience preferred.
http://job-boards.greenhouse.io/deepmind/jobs/6789253
#AI #AGI #DeepMind #AIethics #ResearchJobs #LLMs #PostAGI
(3/3)
"... the belief that AGI can be realized is harmful. If the power of technology is overestimated and human skills are underestimated, the result will in many cases be that we replace something that works well with something that is inferior."
#RagnarFjelland, 2020
https://doi.org/10.1057/s41599-020-0494-4
This is what's happening, e.g. governments thinking that replacing human judges with Trained MOLEs allows them to cut costs *and* get more "rational" judgments. It does neither.
@animalculum Unfortunately, it fits perfectly into the concept of the techbros with their #TESCREAL ideologies. They dream of #eugenics to ‘improve’ a "surviving human elite" genetically and with #AI or even #AGI. I would like to see ethical debates in science back (they still existed in my childhood).
At the same time, we debate #invasiveSpecies searching for their habitats.
@academicchatter
I recommend this paper on LLMs from Murray Shanahan of Imperial College.
Talking About Large Language Models
https://arxiv.org/pdf/2212.03551
It was recommended to me by one of his ex-students after I made similar points, no doubt more clumsily, as we discussed LLMs, AI, etc.
It's good on the inherent limitations of the underlying mechanism and on the distinction between the underlying LLM and the larger system in which the LLM is embedded (the chatbot, or whatever). I continue to believe that if LLMs end up being part of a real A("G")I system, they will play the role of random idea generators, and most of the intelligence will come from the rest of the system.
Artificial intelligence: AGIs could cause serious damage - Golem.de
https://glm.io/195037?n #KünstlicheIntelligenz #KI #AGI
@billbennett's 2021 piece on AI hype references a paper by Ragnar Fjelland entitled "Why general artificial intelligence will not be realized." Here's a permanent link to it:
It's amazing to me that both the secular and evangelical cults within MAGA are apocalyptic in nature, each with its own version of "the rapture." For TESCREALists, it's the singularity. For Evangelicals, it's disappearing into the ether to "meet with Jesus."
»#Google #Deepmind says #AGI might #outthink #humans by 2030, and it's planning for the #risks detailing its approach to developing #safe artificial general intelligence (#AGI).« https://the-decoder.com/google-deepmind-says-agi-might-outthink-humans-by-2030-and-its-planning-for-the-risks/?eicker.news #tech #media
"AI firms are interested in developing tools and marketing strategies that revolve around the allure of AGI—around a stillborn god that will transform large swaths of society into excessively profitable enterprises and incredibly efficient operations. Think of it as a desperate attempt to defend capitalism, to preserve the status quo (capitalism) while purging recent reforms that purportedly undermine it (democracy, liberalism, feminism, environmentalism, etc.). Sam Altman, OpenAI’s co-founder, has repeatedly called for “a new social contract,” though most recently has insisted the “AI revolution” will force the issue on account of “how powerful we expect [AGI] to be.” It doesn’t take much to imagine that the new social contract will be a nightmarish exterminist future where AI powers surveillance, discipline, control, and extraction, instead of “value creation” for the whole of humanity.
The subsuming of art springs out of the defense of capitalism—more and more will have to be scavenged and cannibalized to sustain the status quo and somehow, someday, realize this supposedly much more profitable horizon. The ascendance of fascism comes with the purge—the attempt to rollback institutions and victories seen as shackles on the ability of capitalism to deliver prosperity (and limiters on the inordinate power and privilege for an unimaginably pampered and cloistered elite).
Both are part and parcel to what’s going on, but one project is objectively more dangerous (and ambitious) than the other. In that way, then, all of this is a distraction."
https://thetechbubble.substack.com/p/does-openais-latest-marketing-stunt
@theaigrid #ai #llm #yannlecun $META
AI Godfather Stuns AI Community: "No Way In HELL #AGI In 2 Years"
( Ed : does it have to be full #AGI to be economically useful? )