dice.camp is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon server for RPG folks to hang out and talk. Not owned by a billionaire.

Server stats: 1.7K active users

#pytorch

Anita Graser 🇪🇺🇺🇦🇬🇪
#torchGeo Release v0.7.0
https://www.osgeo.org/community-news/torchgeo-release-v0-7-0/
torchGeo is a #PyTorch domain library designed to make it simple for #machineLearning experts to work with #EO data, providing datasets, samplers, transforms, and pre-trained models specific to #geospatial data.
#GISChat #GeoAI

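For a flavour of the workflow torchGeo enables, here is a minimal, hedged sketch using the EuroSAT dataset and the pre-trained Sentinel-2 ResNet weights the library ships; the dataset choice, root path, and weight name are illustrative assumptions, not taken from the post or the release notes.

```python
# Hedged sketch: load a torchGeo dataset and a geospatial pre-trained backbone.
# Dataset, root path, and weight enum are assumptions for illustration.
import torch
from torch.utils.data import DataLoader
from torchgeo.datasets import EuroSAT
from torchgeo.models import ResNet18_Weights, resnet18

dataset = EuroSAT(root="data/eurosat", download=True)   # Sentinel-2 land-cover patches
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = resnet18(weights=ResNet18_Weights.SENTINEL2_ALL_MOCO)  # weights pre-trained on Sentinel-2 imagery
batch = next(iter(loader))
features = model(batch["image"].float())   # torchGeo samples are dicts with an "image" key
print(features.shape)
```
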
Ryan Daws 🤓
Security flaws hit PyTorch Lightning deep learning framework
https://www.developer-tech.com/news/security-flaws-pytorch-lightning-deep-learning-framework/
#pytorch #developers #security #infosec #coding #programming #ai #tech #news #technology

Amartya
My brain is absolutely fried.
Today is the last day of coursework submissions for this semester. What a hectic month.
DNN with PyTorch, brain model parallelisation with MPI, SYCL and OpenMP offloading of percolation models, hand-optimizing serial codes for performance.
Two submissions due today. Submitted one and finalising my report for the second one.
Definitely having a pint after this.
#sycl #hpc #msc #epcc #cuda #pytorch #mpi #openmp #hectic #programming #parallelprogramming #latex

LavX News
Unlocking Deep Learning Performance: Understanding Compute, Memory Bandwidth, and Overhead
In the quest for optimizing deep learning models, developers often resort to trial-and-error methods that can lead to suboptimal performance. This article explores the critical factors affecting deep ...
https://news.lavx.hu/article/unlocking-deep-learning-performance-understanding-compute-memory-bandwidth-and-overhead
#news #tech #DeepLearning #PyTorch #OperatorFusion

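To make the compute / memory-bandwidth / overhead distinction concrete, here is a small, hedged sketch of operator fusion with torch.compile; the #OperatorFusion tag suggests this is the kind of optimisation the article covers, but the specific function and shapes below are illustrative, not from the article.

```python
# A chain of pointwise ops is memory-bandwidth-bound when run eagerly:
# each op reads and writes the whole tensor. Fusing the chain into one
# kernel (here via torch.compile / TorchInductor) removes the extra round-trips.
import torch

def bias_gelu_scale(x, bias):
    return torch.nn.functional.gelu(x + bias) * 0.5

fused = torch.compile(bias_gelu_scale)  # Inductor may fuse the three pointwise ops

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
bias = torch.randn(4096, device=device)
out = fused(x, bias)   # first call compiles; later calls reuse the fused kernel
print(out.shape)
```
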
LavX News
Unlocking AI's Potential with Scallop: The Neurosymbolic Programming Revolution
Scallop, a groundbreaking declarative language, merges symbolic reasoning with AI applications, offering developers a powerful tool to enhance their machine learning models. By integrating seamlessly ...
https://news.lavx.hu/article/unlocking-ai-s-potential-with-scallop-the-neurosymbolic-programming-revolution
#news #tech #PyTorch #NeurosymbolicAI #Scallop

Titus von der Malsburg 📖👀💭
I'd like to buy #GPU cloud compute, with SSH access, for #LLM inference (#pytorch). 32GB GPU memory is enough because I'm working with smaller models. Any recommendations?

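As a side note on why roughly 32 GB suffices for smaller models, here is a hedged back-of-envelope sketch; the bytes-per-parameter and overhead figures are rough assumptions, not numbers from the post.

```python
# Rough rule of thumb: fp16/bf16 weights take ~2 bytes per parameter,
# plus a few GB of headroom for activations and KV cache.
def fits_in_vram(params_billions: float, vram_gb: float = 32,
                 bytes_per_param: int = 2, overhead_gb: float = 4) -> bool:
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes per GB
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(7))    # True:  ~14 GB of weights fits comfortably
print(fits_in_vram(13))   # True:  ~26 GB of weights, little headroom left
print(fits_in_vram(30))   # False: ~60 GB of weights cannot fit in 32 GB
```
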
LavX News
TorchServe: The Future of PyTorch Model Deployment Faces Limited Maintenance
TorchServe, a pivotal tool for serving PyTorch models in production, has announced that it is no longer actively maintained. This development raises concerns about the future of model serving in AI ap...
https://news.lavx.hu/article/torchserve-the-future-of-pytorch-model-deployment-faces-limited-maintenance
#news #tech #PyTorch #TorchServe #ModelServing

Pyrzout :vm:
Import GPU: Python Programming with CUDA
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/
#graphicsprocessing #parallelprocessing #developer #pytorch #NVIDIA #python #torch #News #CUDA #gpu

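For a taste of what CUDA-from-Python looks like, here is a minimal, hedged sketch using Numba's CUDA JIT; this is one common route and may not be the tooling the linked article actually uses.

```python
# Write and launch a CUDA kernel from Python with Numba.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

a = np.arange(1_000_000, dtype=np.float32)
b = np.ones_like(a)
out = np.zeros_like(a)

threads = 256
blocks = (a.size + threads - 1) // threads
add_kernel[blocks, threads](a, b, out)   # Numba copies host arrays to the GPU automatically
print(out[:5])
```
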
LavX News
Revolutionizing AI Performance: The Launch of AI CUDA Engineer for Optimizing Large Language Models
The AI CUDA Engineer is set to transform how developers optimize large language models by automating CUDA kernel discovery and optimization. With impressive speedups and a robust dataset, this tool pr...
https://news.lavx.hu/article/revolutionizing-ai-performance-the-launch-of-ai-cuda-engineer-for-optimizing-large-language-models
#news #tech #PyTorch #CUDAOptimization #AICUDAEngineer

Steve Leach
@rotopenguin Could be that??? Dunno... some instructions I copied and pasted with no luck nor understanding were to do with secure boot and adding something called kam or pam or something... it didn't work, of course.
And of course I "need" #pytorch+cuda, and don't "need" an external display... which means I just wind up not using the machine at all. It has an 8GB GPU + 64GB RAM... I use an older system with a 4GB GPU + 16GB RAM instead. And for the most part my "good" machine just sits unused.

Steve Leach
Well damn. So my #Alienware m15 R5 *seemed* to finally have a working HDMI port after a fresh install of #Debian stable. Then I installed the #Nvidia drivers and... poof... no more external display.
So I still have a two-year-old laptop with a great GPU and tons of RAM that can't be used with a monitor.
And my main workhorse, an #Asus from 2019, works just fine.
Sometimes I literally just SSH into the "good machine" to put it to work with #Pytorch.

Custard! 🍮:fedora:
Well, #Fedora really is great for everything, and it's stable, but #AMD decided that focusing its #OpenComputing effort on #Linux only on #RHEL and #Ubuntu was a good idea.
So bye-bye for now; I'll have to install #Kubuntu to keep working with #LLM and #Pytorch 🤷‍♂️

:rss: Qiita - Popular Articles
I built a simple image-generation web app that runs locally on a GPU
https://qiita.com/kaz_prg/items/f2d1039a4f07462e512d?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #Python #CUDA #PyTorch #huggingface #StableDiffusion

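As a hedged sketch of the kind of local pipeline such a web app would wrap, here is a minimal Stable Diffusion generation with Hugging Face diffusers; the checkpoint ID, dtype, and prompt are assumptions for illustration and the article's actual stack may differ.

```python
# Minimal local Stable Diffusion generation with Hugging Face diffusers.
# Checkpoint ID, dtype, and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # run on the local GPU

image = pipe("a watercolor fox in a misty forest").images[0]
image.save("fox.png")
```
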
rich
Homebrew BLIP captioning going well... 😅
#AI #pytorch

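For context, a minimal hedged sketch of BLIP image captioning via the Hugging Face transformers port of the model; the checkpoint and image path are illustrative assumptions, and a "homebrew" setup may load the weights differently.

```python
# BLIP image captioning; checkpoint name and image path are assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```
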
ao
It finally happened :blobcatfrowningbig:
My hard drive is bogged down by all the #python installs.
*Curse you #pytorch* and I need to upgrade my HDD
https://ry3yr.github.io/mypc
to a 1 TB model: https://www.compare.de/datacontent.php?EAN=7636490060670
Sigh. Why does Python require so much space :Wah:

:rss: Qiita - Popular Articles
Asynchronous processing in Python: the essentials you need to know!
https://qiita.com/Leapcell/items/d379cdb66c7cd3828b66?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #Python #DeepLearning #PyTorch #LLM

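A minimal sketch of the core asyncio pattern such introductions usually cover, namely running several I/O-bound waits concurrently instead of sequentially; the function names and delays are illustrative, not from the article.

```python
# Await several I/O-bound tasks concurrently with asyncio.gather.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)        # stand-in for network / disk I/O
    return f"{name} done after {delay}s"

async def main() -> None:
    results = await asyncio.gather(
        fetch("a", 1.0),
        fetch("b", 1.0),
        fetch("c", 1.0),
    )                                  # ~1s total wall time, not 3s
    print(results)

asyncio.run(main())
```
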
LavX News
Unlocking JAX: A PyTorch User's Guide to High-Performance Machine Learning
Transitioning from PyTorch to JAX can be daunting, but a new tutorial bridges the gap, showcasing how to leverage JAX's powerful features while drawing parallels to familiar PyTorch constructs. Dive i...
https://news.lavx.hu/article/unlocking-jax-a-pytorch-user-s-guide-to-high-performance-machine-learning
#news #tech #JAX #Flax #PyTorch

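A hedged sketch of one parallel such tutorials typically draw: PyTorch's loss.backward() maps to jax.grad, and torch.compile maps roughly to jax.jit. The toy loss and shapes below are illustrative, not taken from the tutorial.

```python
# Gradients in JAX are functions of functions rather than tensor attributes.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))      # compiled gradient of the loss w.r.t. w

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (3,))
x = jnp.ones((8, 3))
y = jnp.zeros((8,))
print(grad_fn(w, x, y))                # plays the role of w.grad after backward()
```
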
:rss: Qiita - Popular Articles
If you're short on VRAM, why not just change PyTorch's memory allocation scheme itself?
https://qiita.com/SuperHotDogCat/items/d4637dde013dd609b8f9?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #Python #CUDA #機械学習 (machine learning) #MachineLearning #PyTorch

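One well-known knob of this kind is the PYTORCH_CUDA_ALLOC_CONF setting for PyTorch's caching CUDA allocator; the sketch below is a hedged illustration, and the specific values, and whether the article uses this exact mechanism, are assumptions.

```python
# Reconfigure PyTorch's caching CUDA allocator before CUDA is initialised.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True,max_split_size_mb:128"

import torch

x = torch.randn(1024, 1024, device="cuda")          # allocations now use the tweaked allocator
print(torch.cuda.memory_summary(abbreviated=True))  # inspect reserved memory / fragmentation
```
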
:rss: 窓の杜
Machine learning library PyTorch to retire its official Anaconda channel: too much maintenance effort for too little use
https://forest.watch.impress.co.jp/docs/news/1640199.html
#forest_watch_impress #Python #Anaconda #PyTorch

InfoQ
#PyTorch 2.5 is here!
Highlights include Intel GPU support, the new FlexAttention API, TorchInductor CPU backend optimizations, and a regional compilation feature that reduces compilation time.
Check it out: https://bit.ly/3AiO7Ax
#InfoQ

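To illustrate the FlexAttention API mentioned above, here is a hedged sketch based on the documented score_mod pattern; the shapes and the relative-position bias are illustrative, and running it needs PyTorch 2.5+ plus, for real performance, torch.compile and a supported GPU.

```python
# FlexAttention: a user-defined score_mod customises attention scores
# without a hand-written kernel. Shapes and the bias are illustrative.
import torch
from torch.nn.attention.flex_attention import flex_attention

def rel_pos_bias(score, b, h, q_idx, kv_idx):
    # Add a simple relative-position bias to each query/key score.
    return score + (q_idx - kv_idx)

B, H, S, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))
out = flex_attention(q, k, v, score_mod=rel_pos_bias)  # wrap in torch.compile for speed
print(out.shape)  # torch.Size([2, 4, 128, 64])
```
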