#neuralnetworks

EuroSciPy
4/97
1️⃣ First, we have "#LLMs, #NeuralNetworks and #AI Development." This track is for the #builders—those creating the new architectures, frameworks, and foundational models. 🏗️

a
New on 'Stuff': Book Review - Animals, Robots, Gods
https://stuff.graves.cl/posts/2025-07-20_12_53-book-review-animals,-robots,-gods.html
#anthropology #ethnography #psychology #neuralnetworks #philosophy #cogsci #ethics #moral #book-review #⭐️⭐️⭐️⭐️ #english

US
https://www.europesays.com/us/77625/ The analysis of learning investment effect for artificial intelligence English translation model based on deep neural network
#ai #ArtificialIntelligence #FutureInformationGuidance #HumanitiesAndSocialSciences #LearningInvestmentEffect #multidisciplinary #MultimodalTranslation #NeuralNetworks #Science #Technology #UnitedStates #US #VisualInformation

Miguel Afonso Caetano
Gary Marcus is onto something here. Maybe true AGI is not so impossible to reach after all. Just probably not in the near future, but likely within 20 years.

"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.

The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."

https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated

#AI #NeuralNetworks #DeepLearning #SymbolicAI #NeuroSymbolicAI #AGI

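To make the quoted premise concrete, here is a minimal neurosymbolic sketch (a toy illustration, not taken from the essay): a neural component supplies learned, uncertain predicates, and a hand-written symbolic rule generalizes over them exactly. The names, scores, and threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    label: str
    confidence: float  # in a real system this comes from a trained neural network

def neural_scorer(image_id: str) -> list[Perception]:
    # Stand-in for a learned classifier; returns soft (uncertain) predicates.
    fake_outputs = {"img1": [Perception("cat", 0.93), Perception("indoors", 0.81)]}
    return fake_outputs.get(image_id, [])

def symbolic_rules(facts: set[str]) -> set[str]:
    # Exact, hand-written rule that generalizes regardless of training data.
    derived = set(facts)
    if "cat" in derived and "indoors" in derived:
        derived.add("house_pet")  # rule: cat AND indoors -> house_pet
    return derived

def hybrid(image_id: str, threshold: float = 0.5) -> set[str]:
    # Symbolize the neural outputs above a confidence threshold, then reason.
    facts = {p.label for p in neural_scorer(image_id) if p.confidence >= threshold}
    return symbolic_rules(facts)

print(hybrid("img1"))  # {'cat', 'indoors', 'house_pet'} (set order may vary)
```
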
Lord Caramac the Clueless, KSC
If we ever see a real artificial mind, some kind of LLM will probably be a small but significant component of it, but the current wave of machine learning will most likely come to a grinding halt very soon because of a lack of cheap training data. The reason all of this is happening now is simple: the technologies behind machine learning have been around for decades, but computers weren't fast enough and didn't have enough memory for those tools to become really powerful until the early 2000s. Around the same time, the Internet went mainstream and got filled with all kinds of data that could be mined for training sets. Now there is so much synthetic content out there that automated data mining won't work much longer; you need humans to curate and clean the training data, which makes the process slow and expensive. I expect to see another decades-long AI winter after the commercial hype is over.

If you look for real intelligence, look at autonomous robots and computer-game NPCs. There you can find machine learning and artificial neural networks applied to actual cognitive tasks in which an agent interacts with its environment. Those things may not even be as intelligent as a rat yet, but they are actually intelligent, unlike LLMs.

#llm #LLMs #ai #machinelearning #neuralnetworks

Soh Kam Yung
Looks interesting.

"[N]eural networks are compositions of differentiable primitives, and studying them means learning how to program and how to interact with these models, a particular example of what is called differentiable programming.

This primer is an introduction to this fascinating field imagined for someone, like Alice, who has just ventured into this strange differentiable wonderland."

https://arxiv.org/abs/2404.17625

Via Hacker News [ https://news.ycombinator.com/item?id=44426153 ]

#NeuralNetworks #Primer

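The "compositions of differentiable primitives" framing is easy to see in a tiny worked example. This numpy sketch (an illustration, not taken from the primer) composes an affine map with tanh and applies the chain rule by hand to get the Jacobian of the composition.

```python
import numpy as np

# Two differentiable primitives and their composition: f(x) = tanh(W x + b).
def affine(x, W, b):
    return W @ x + b

def tanh_grad(h):
    return 1.0 - np.tanh(h) ** 2  # derivative of tanh evaluated at h

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
b = np.zeros(3)
x = rng.normal(size=2)

h = affine(x, W, b)   # primitive 1: affine map
y = np.tanh(h)        # primitive 2: elementwise nonlinearity

# Chain rule: Jacobian of the composed function with respect to the input x.
dy_dx = np.diag(tanh_grad(h)) @ W  # shape (3, 2)
print(y.shape, dy_dx.shape)
```

Autodiff frameworks automate exactly this bookkeeping, which is what makes programming directly with such compositions practical.
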
Harald Klinke
I thoroughly enjoyed Antonio Somaini's lecture tonight on the Politics of Latent Spaces at the conference "Art in the Age of Average. The new AI-thoritarians."

His reflections on compression as a cultural and epistemic process were truly inspiring — and the sources cited were excellent, too ;)

#AI #LatentSpaces #DigitalCulture #Compression #Vectorization #NeuralNetworks #ArtAndAI #MachineVision #epistemiccompression #AIAesthetics @databasecultures

Nicola Fabiano :xmpp:
📘 My new book is out today — in Italian:
"Intelligenza Artificiale, Privacy e Reti Neurali: L'equilibrio tra innovazione, conoscenza ed etica nell'era digitale" (Artificial Intelligence, Privacy and Neural Networks: the balance between innovation, knowledge and ethics in the digital era)

With forewords by Danilo Mandic and Carlo Morabito, and an introduction by Guido Scorza.

🗣️ The English edition will be available soon.

Now available at all major online bookstores.

#AI #privacy #NeuralNetworks #Ethics #DigitalRights #AIAct #NewBook

Gert :debian: :gnu: :linux:
An "old style" introduction to neural networks: simple, well made, and understandable for anyone who doesn't want to rely on prepackaged libraries and applications and wants to try implementing, on their own, the system that best suits their needs and tastes.
(Free resource)
#ai #neuralnetworks #books
https://books.ugp.rug.nl/index.php/ugp/catalog/book/130

Dr. Carlotta A. Berry, PhD
#BlackInRobotics workshop series #ROS #ROS2 #Robot #Robotics #STEM #STEAM #BlackSTEM #BlackSTEAM #Drone #ComputerVision #Drones #AI #MachineLearning #Neuralnetworks #ReinforcementLearning #Learning

🧠 Neural networks can ace short-horizon predictions — but quietly fail at long-term stability.

A new paper dives deep into the hidden chaos lurking in multi-step forecasts:
⚠️ Tiny weight changes (as small as 0.001) can derail predictions
📉 Near-zero Lyapunov exponents don’t guarantee system stability
🔁 Short-horizon validation may miss critical vulnerabilities
🧪 Tools from chaos theory — like bifurcation diagrams and Lyapunov analysis — offer clearer diagnostics
🛠️ The authors propose a “pinning” technique to constrain output and control instability

Bottom line: local performance is no proxy for global reliability. If you care about long-horizon trust in AI predictions — especially in time-series, control, or scientific models — structural stability matters. A toy weight-perturbation sketch follows below.

#AI #MachineLearning #NeuralNetworks #ChaosTheory #DeepLearning #ModelRobustness
sciencedirect.com/science/arti
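
As a rough illustration of the weight-sensitivity point above (a toy of my own, not the paper's models or method), the snippet below rolls out a small recurrent map twice, once with a single weight nudged by 0.001, and compares short-horizon and long-horizon divergence.

```python
import numpy as np

# Toy setup: x_{t+1} = tanh(W x_t), rolled out with the original weights
# and with one weight perturbed by 0.001.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.9, size=(8, 8))
W_pert = W.copy()
W_pert[0, 0] += 1e-3  # the tiny weight change

def rollout(weights, x0, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(np.tanh(weights @ xs[-1]))
    return np.array(xs)

x0 = rng.normal(size=8)
traj_a = rollout(W, x0)
traj_b = rollout(W_pert, x0)
sep = np.linalg.norm(traj_a - traj_b, axis=1)  # trajectory separation per step

# Short-horizon error can look negligible while long-horizon error drifts or
# explodes, which is why short-horizon validation alone can mislead.
print("separation @ step 5:  ", sep[5])
print("separation @ step 200:", sep[200])
```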

"Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.
(...)
We introduce the Continuous Thought Machine (CTM), a novel neural network architecture designed to explicitly incorporate neural timing as a foundational element. Our contributions are as follows:

- We introduce a decoupled internal dimension, a novel approach to modeling the temporal evolution of neural activity. We view this dimension as that over which thought can unfold in an artificial neural system, hence the choice of nomenclature.

- We provide a mid-level abstraction for neurons, which we call neuron-level models (NLMs), where every neuron has its own internal weights that process a history of incoming signals (i.e., pre-activations) to activate (as opposed to a static ReLU, for example).

- We use neural synchronization directly as the latent representation with which the CTM observes (e.g., through an attention query) and predicts (e.g., via a projection to logits). This biologically-inspired design choice puts forward neural activity as the crucial element for any manifestation of intelligence the CTM might demonstrate."

pub.sakana.ai/ctm/

Continuous Thought Machines · Introducing Continuous Thought Machines: a new kind of neural network model that unfolds and uses neural dynamics as a powerful representation for thought.
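
For a rough feel of the neuron-level-model (NLM) idea quoted above, here is a minimal numpy sketch of that single ingredient, with illustrative names and sizes; it is not the authors' implementation and it omits the synchronization-based latent representation.

```python
import numpy as np

# Sketch of a neuron-level model layer: each neuron keeps its own private
# weights and computes its activation from a short history of its
# pre-activations, instead of applying a static pointwise ReLU.
class NeuronLevelLayer:
    def __init__(self, n_neurons: int, history: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.per_neuron_w = rng.normal(scale=0.5, size=(n_neurons, history))
        self.buffer = np.zeros((n_neurons, history))  # rolling pre-activation history

    def __call__(self, pre_activations: np.ndarray) -> np.ndarray:
        # Shift the history buffer left and append the newest pre-activations.
        self.buffer = np.roll(self.buffer, shift=-1, axis=1)
        self.buffer[:, -1] = pre_activations
        # Each neuron mixes its own history with its own weights, then squashes.
        return np.tanh(np.sum(self.per_neuron_w * self.buffer, axis=1))

layer = NeuronLevelLayer(n_neurons=4, history=5)
rng = np.random.default_rng(42)
for _ in range(5):
    out = layer(rng.normal(size=4))
print(out.shape)  # (4,)
```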

Human consciousness is a ‘controlled hallucination,’ scientist says — and AI can never achieve it

popularmechanics.com/science/a

Popular Mechanics · Human Consciousness Is a ‘Controlled Hallucination,’ Scientist Says—And AI Can Never Achieve It · By Darren Orf