Neural networks can ace short-horizon predictions — but quietly fail at long-term stability.
A new paper dives into the hidden chaos lurking in multi-step forecasts:
- Tiny weight changes (as small as 0.001) can derail predictions
- Near-zero Lyapunov exponents don't guarantee system stability
- Short-horizon validation may miss critical vulnerabilities
- Tools from chaos theory, such as bifurcation diagrams and Lyapunov analysis, offer clearer diagnostics
- The authors propose a "pinning" technique to constrain outputs and control instability
Bottom line: local performance is no proxy for global reliability. If you care about long-horizon trust in AI predictions — especially in time-series, control, or scientific models — structural stability matters.
#AI #MachineLearning #NeuralNetworks #ChaosTheory #DeepLearning #ModelRobustness
https://www.sciencedirect.com/science/article/abs/pii/S0893608025004514
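The failure mode the post describes can be reproduced with a toy stand-in. This is a minimal sketch, not the paper's code: the logistic map acts as a surrogate for a trained one-step forecaster rolled out autoregressively, and a 0.001 parameter change plays the role of the tiny weight change. Early steps match closely; the long-horizon trajectories decorrelate.

```python
import numpy as np

# Hypothetical illustration (not the paper's experiment): iterate a
# chaotic one-step map autoregressively, once with nominal parameters
# and once with a single parameter nudged by 0.001.
def rollout(r, x0, steps):
    xs = np.empty(steps)
    x = x0
    for t in range(steps):
        x = r * x * (1.0 - x)  # logistic map as a stand-in forecaster
        xs[t] = x
    return xs

base = rollout(3.9, 0.4, 100)          # nominal "weights"
pert = rollout(3.9 + 1e-3, 0.4, 100)   # "weight" perturbed by 0.001
err = np.abs(base - pert)

# Short-horizon error stays near the perturbation scale; long-horizon
# error grows to the scale of the attractor itself.
print(f"|error| at step 3:  {err[2]:.1e}")
print(f"max |error|, steps 50-99: {err[50:].max():.1e}")
```

A short-horizon validation loop would see the first few steps agree and pass this model; only a long rollout (or a Lyapunov-style analysis, as the paper suggests) exposes the divergence.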
"Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.
(...)
We introduce the Continuous Thought Machine (CTM), a novel neural network architecture designed to explicitly incorporate neural timing as a foundational element. Our contributions are as follows:
- We introduce a decoupled internal dimension, a novel approach to modeling the temporal evolution of neural activity. We view this dimension as that over which thought can unfold in an artificial neural system, hence the choice of nomenclature.
- We provide a mid-level abstraction for neurons, which we call neuron-level models (NLMs), where every neuron has its own internal weights that process a history of incoming signals (i.e., pre-activations) to activate (as opposed to a static ReLU, for example).
- We use neural synchronization directly as the latent representation with which the CTM observes (e.g., through an attention query) and predicts (e.g., via a projection to logits). This biologically-inspired design choice puts forward neural activity as the crucial element for any manifestation of intelligence the CTM might demonstrate."
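A minimal sketch of the neuron-level-model idea quoted above. The shapes, history length, and tanh readout here are my assumptions for illustration, not the CTM's actual design: the point is only that each neuron applies its own private weights to a history of its pre-activations, rather than a shared static nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, history = 16, 8

# One private weight vector per neuron (assumed linear-in-history form;
# the CTM's real NLMs may differ).
nlm_weights = rng.normal(size=(n_neurons, history)) / np.sqrt(history)

def nlm_activate(pre_history):
    """pre_history: (n_neurons, history) recent pre-activations per neuron.
    Each neuron i mixes its own history with its own weights, then
    squashes with tanh (an assumed readout)."""
    return np.tanh(np.einsum("nh,nh->n", nlm_weights, pre_history))

# Contrast with a static ReLU, which sees only the latest pre-activation
# and is identical for every neuron.
pre_history = rng.normal(size=(n_neurons, history))
post_nlm = nlm_activate(pre_history)
post_relu = np.maximum(pre_history[:, -1], 0.0)

print(post_nlm.shape, post_relu.shape)
```

The contrast makes the quote's "as opposed to a static ReLU" concrete: the NLM's output depends on when inputs arrived over the history window, not just on the most recent value.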
Human consciousness is a ‘controlled hallucination,’ scientist says, and AI can never achieve it
https://www.popularmechanics.com/science/a64555175/conscious-ai-singularity
Large Language Models don't work the way anyone expected. They're not just fancy prediction engines; there's far more going on than that. Read about it here:
https://www.anthropic.com/research/tracing-thoughts-language-model
How do #NeuralNetworks generate diverse and variable outputs? This study of the larval #fruitfly locomotor system shows that relatively simple sets of #inhibitory circuit motifs can generate a surprising degree of diversity & variability in motor programs @PLOSBiology https://plos.io/3GpT3pR
https://www.europesays.com/uk/35017/ Growth Opportunities in Neuromorphic Computing 2025-2030 | #ComputerVision #Computing #DeepLearning #ImageAnalysis #NeuralNetworks #Neuromorphic #NeuromorphicChip #NeuromorphicComputing #ResearchAndMarkets #Technology #UK #UnitedKingdom
A Hierarchical conv-LSTM and LLM Integrated Model for Holistic Stock For... https://youtu.be/G-MLSchOaCo?feature=shared via @YouTube #neuralnetworks #stockmarketpredictions #llm #lstm #cnn