#ramalama

Berkubernetus
@TheNewStack interviews Eric and Dan, maintainers of RamaLama, about containerizing #AI development. If you haven't heard of the #RamaLama project before, this is a quick intro:
https://thenewstack.io/ramalama-project-brings-containers-and-ai-together/
#containers #Kubernetes
Eric Curtin
🚀 Exciting news! ramalama.ai is officially live! 🎉
RamaLama makes AI inferencing boring. Just OCI containers handling AI models seamlessly.
Huge thanks to Jessica Chitas & Cara Delia for making this happen! 🙌
#AI #ML #RamaLama
Adam :redhat: :ansible: :bash:
How RamaLama runs AI models in isolation by default
https://developers.redhat.com/articles/2025/02/20/how-ramalama-runs-ai-models-isolation-default
#ramalama #podman #cncf #AI #artificialintelligence #security #opensource #containers #deepseek
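
A minimal sketch of what "isolation by default" looks like in practice, not taken from the article above: it assumes a model is already running via something like "ramalama serve MODEL" and that RamaLama launched it in a Podman container, as the article describes, so listing the running containers should show the model workload there rather than as a bare host process.

    # Minimal sketch (assumption: a RamaLama-served model is already running
    # and was launched in a Podman container, per the linked article).
    # Listing the running containers should show the model workload there.
    import subprocess

    result = subprocess.run(
        ["podman", "ps", "--format", "{{.Names}}\t{{.Image}}"],
        capture_output=True, text=True, check=True,
    )
    print("Running containers (the RamaLama-launched one should be listed):")
    print(result.stdout)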
Markus Eisele
Tired of the AI hype? 😴 I am too! 😅
RamaLama makes working with AI models boring (and that's a GOOD thing!).
Check out the latest #redhat #developers article to learn how RamaLama simplifies AI workflows:
- Streamline model deployment
- Reduce boilerplate code
- Focus on results, not infrastructure
#AI #MachineLearning #RamaLama
https://developers.redhat.com/articles/2024/11/22/how-ramalama-makes-working-ai-models-boring
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#RedHat is developing #Ramalama to "make #AI boring" by offering simplicity and ease of use.
Ramalama leverages #OCI #containers and makes it easy to run AI inferencing on a GPU, falling back seamlessly to the CPU when no GPU support is present. It interfaces with #Podman and #Llamacpp to do the heavy lifting while fetching models from #HuggingFace and #Ollama. The goal is native GPU support across Intel, NVIDIA, Arm, and Apple; CPU support includes AMD, Intel, RISC-V, and Arm.
https://www.phoronix.com/news/Red-Hat-Ramalama
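
Hands-on, the workflow described above amounts to pulling a model from Ollama or Hugging Face, serving it locally, and talking to the resulting endpoint. The sketch below is an assumption-laden illustration rather than official usage: it presumes that "ramalama serve" exposes a llama.cpp-style, OpenAI-compatible API on localhost port 8080, and the model name in the comment is only a placeholder.

    # Minimal sketch. Assumptions: a model is being served with something like
    #   ramalama serve ollama://tinyllama      (placeholder model name)
    # and the server exposes a llama.cpp-style OpenAI-compatible endpoint on
    # port 8080 -- both the port and the API shape are assumptions here.
    import json
    import urllib.request

    request = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps({
            "messages": [
                {"role": "user", "content": "In one sentence, what is an OCI container?"}
            ]
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
        print(reply["choices"][0]["message"]["content"])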