dice.camp is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon server for RPG folks to hang out and talk. Not owned by a billionaire.


#inference

Continued thread

Day 19 cont 🙏⛪🕍🕌⛩️🛕 💽🧑‍💻

“The #LiberalParty has accidentally left part of its email provider’s #subscriber details exposed, revealing the types of #data harvested by the party during the #election campaign.

This gives rare #insight into some of the specific kinds of data the party is keeping on voters, including whether they are “predicted Chinese”, “predicted Jewish”, a “strong Liberal” and other #PersonalInformation.”

#AusPol / #DataScience / #inference / #voters / #Liberal / #LNP / #Nationals <crikey.com.au/2025/04/17/victo>

Crikey · ‘Predicted Chinese’, ‘predicted Jewish’: Liberals accidentally leave voter-tracking data exposed · By Cam Wilson
Replied in thread

"In real life, we weigh the anticipated consequences of the decisions that we are about to make. That approach is much more rational than limiting the percentage of making the error of one kind in an artificial (null hypothesis) setting or using a measure of evidence for each model as the weight."
Longford (2005) stat.columbia.edu/~gelman/stuf

I've made a small interactive web app with @observablehq in #javascript to help students visualise and elicit a Beta prior for #Bayesian #inference on a proportion.

There are sliders that control the mean and precision of the distribution and automatically update the sliders for the standard shape parameters.
One can also set alpha and beta directly, but I could not figure out how to make those update the mean and precision sliders in turn without getting a recursive definition error 😞.
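The slider linkage boils down to the standard mean/precision reparameterization of the Beta distribution. A minimal sketch of the two-way conversion, in Python rather than the app's Observable/JavaScript (function names are mine, not the app's):

```python
# Beta(alpha, beta) reparameterized by mean m in (0, 1) and
# precision k = alpha + beta > 0:
#   alpha = m * k,  beta = (1 - m) * k

def beta_from_mean_precision(mean, precision):
    """Map (mean, precision) sliders to the shape parameters."""
    alpha = mean * precision
    beta = (1.0 - mean) * precision
    return alpha, beta

def mean_precision_from_beta(alpha, beta):
    """Inverse map: recover (mean, precision) from (alpha, beta)."""
    precision = alpha + beta
    return alpha / precision, precision

# Example: a prior centred at 0.25 with precision 8
a, b = beta_from_mean_precision(0.25, 8.0)  # -> (2.0, 6.0)
```

The recursive-definition error the post mentions is the usual hazard of wiring both directions of this map to the same reactive cells: each pair of sliders triggers an update of the other.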

astre.gitlab.cirad.fr/training

The summer is coming to an end... let's see what publications it brought to Computo!

First, "AdaptiveConformal: An R package for adaptive conformal inference" by Herbert Susmann, Antoine Chambaz and Julie Josse is available with (you guessed it) R code at doi.org/10.57750/edan-5f53

The authors put together a detailed review of 5 algorithms for adaptive conformal inference (used to provide prediction intervals for sequentially observed data), complete with theoretical guarantees and experimental results both in simulations and on a real case study of producing prediction intervals for influenza incidence in the United States.

The paper highlights the importance of properly choosing tuning parameters to obtain good utility, and of having access to good point predictions.

As the title implies, the paper comes with an R package, AdaptiveConformal, available from GitHub at github.com/herbps10/AdaptiveCo. It provides implementations of the 5 algorithms, as well as tools for visualization and summarization of prediction intervals.
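For readers unfamiliar with the family of methods being reviewed, the basic adaptive conformal inference update (Gibbs & Candès, 2021) is short enough to sketch. This is a generic illustration in Python, not the package's R API, and the window-based residual quantile is my simplification:

```python
import numpy as np

def adaptive_conformal(y, preds, alpha=0.1, gamma=0.01, window=50):
    """Basic ACI sketch: maintain an adaptive miscoverage level alpha_t,
    widening intervals after misses and narrowing them after hits.
    y, preds: sequences of observations and point predictions."""
    alpha_t = alpha
    scores, intervals, errs = [], [], []
    for yt, ft in zip(y, preds):
        if scores:
            # quantile of recent absolute residuals at the adaptive level
            level = min(max(1.0 - alpha_t, 0.0), 1.0)
            q = np.quantile(scores[-window:], level)
        else:
            q = np.inf  # no calibration data yet: infinite interval
        lo, hi = ft - q, ft + q
        err = 0.0 if lo <= yt <= hi else 1.0
        # ACI step: alpha_{t+1} = alpha_t + gamma * (alpha - err_t)
        alpha_t = alpha_t + gamma * (alpha - err)
        scores.append(abs(yt - ft))
        intervals.append((lo, hi))
        errs.append(err)
    return intervals, errs
```

The key property, which the paper's theoretical guarantees formalize, is that the long-run miscoverage rate tracks the target alpha regardless of how the data sequence evolves.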

Replied in thread

@carnage4life In the near future, what will change this is generative AI gaining the ability to run cross-discipline inferences across many diverse sources and points of view, which, unlike this, would actually add something useful to the output.

At the moment generative AI is very much a style formulator. Like the earlier experimental style GANs, it can generate something asked for in the style of something else asked for; what we have now with LLMs in general is retrieval of stuff the model ‘knows’, rendered in the style you want to see it in (or, increasingly, not that at all, but a vapid, harmless, soft-touch, safe-option style of answer).

In the future it should be possible to generate a précis or summarised pile of words that not only has something to do with what was asked for, but also brings in viewpoints from stances other than the ones normally associated with the existing literature. With cross-discipline inferencing, it could actually invent new knowledge by comparing one scenario with similarly-patterned or similarly-shaped scenarios in totally different topic areas. An AI would see those far more easily than a human could; so far, the only times a human invents something new by crossing topic boundaries is through flashes of insight, or episodes of divine inspiration, drunkenness or sleep. It is genuinely difficult for us, but for a system that could see all the options at once in parallel, inventing the novel would be just another process instead of something mysterious.
#GenerativeAI #inference #inspiration

Huge congrats to the amazing @Karyna_mi for her new paper! She found that mice need the ventral hippocampus to perform hidden state inference while performing a 2-armed bandit task. #neuroscience #hippocampus #inference

biorxiv.org/content/10.1101/20

bioRxiv · Hidden state inference requires abstract contextual representations in ventral hippocampus
The ability to form and utilize subjective, latent contextual representations to influence decision making is a crucial determinant of everyday life. The hippocampus is widely hypothesized to bind together otherwise abstract combinations of stimuli to represent such latent contexts, and to allow their use to support the process of hidden state inference. Yet, direct evidence for this remains limited. Here we show that the CA1 area of the ventral hippocampus is necessary for mice to perform hidden state inference during a 2-armed bandit task. vCA1 neurons robustly differentiate between the two abstract contexts required for this strategy in a manner similar to the differentiation of spatial locations, despite the contexts being formed only from past probabilistic outcomes. These findings offer insight into how latent contextual information is used to optimize decision-making processes, and emphasize a key role of the hippocampus in hidden state inference.
Competing Interest Statement: The authors have declared no competing interest.
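"Hidden state inference" in a 2-armed bandit has a compact normative form: the agent tracks a belief over which arm is currently the good one, updating from probabilistic outcomes. A hypothetical toy model in Python (this is my illustration of the general idea, not the paper's analysis code; the reward probabilities and hazard rate are assumptions):

```python
def update_belief(belief, arm, reward, p_good=0.8, p_bad=0.2, hazard=0.05):
    """Bayesian belief update for a 2-armed bandit with a hidden state.
    belief: prior P(state 0), i.e. P(arm 0 is the good arm).
    arm: which arm was chosen (0 or 1); reward: outcome (0 or 1).
    hazard: probability the hidden state switches between trials."""
    # likelihood of the observed outcome under each hidden state
    p_r_s0 = p_good if arm == 0 else p_bad   # state 0: arm 0 is good
    p_r_s1 = p_bad if arm == 0 else p_good   # state 1: arm 1 is good
    if reward == 0:
        p_r_s0, p_r_s1 = 1.0 - p_r_s0, 1.0 - p_r_s1
    # Bayes rule over the two hidden states
    post = belief * p_r_s0 / (belief * p_r_s0 + (1.0 - belief) * p_r_s1)
    # allow for unsignalled state switches between trials
    return (1.0 - hazard) * post + hazard * (1.0 - post)
```

Starting from an uninformative belief of 0.5, a rewarded choice of arm 0 pushes the belief toward state 0; the hazard term keeps the agent ready to re-infer the context after a block switch, the computation the paper argues depends on vCA1.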