#lxc


I have finally caved in and dived down the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers, and... I totally understand now why some people's Proxmox setups are made up exclusively of LXCs rather than VMs lol - it's just so pleasant to set up and use, and, superficially at least, very efficient.

I now have a #Jellyfin and #ErsatzTV setup running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm prolly gonna hold off on adding it until I decide which node to add it to and schedule the shutdown, etc. In the future, I might even consider exploring (re)building a #Kubernetes #RKE2 cluster on LXC nodes instead of VMs - and seeing whether that's viable, or perhaps better.
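For reference, the raw-config shape of that iGPU passthrough is roughly the below - a sketch from my notes, not gospel; the card/renderD minor numbers come from `ls -l /dev/dri` on the host and will differ per machine:

```
# /etc/pve/lxc/<vmid>.conf on the Proxmox host (illustrative values)
# allow the container to use the DRM character devices (major 226)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
# bind the host's GPU nodes into the container
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```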

Anyway, I've updated my #Homelab Wiki with guides pertaining to LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, likewise, on unprivileged LXC containers.
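The SMB part boils down to two commands; a rough sketch (storage ID, share name, container ID and paths here are placeholders, not the exact ones in the wiki):

```
# add the SMB/CIFS share as cluster-wide storage; PVE mounts it under /mnt/pve/nas-media on each node
pvesm add cifs nas-media --server 192.168.1.20 --share media --username smbuser --password 'secret'

# bind-mount that path into container 101; on an unprivileged CT you still need
# matching uid/gid mappings (or permissive share permissions) for write access
pct set 101 -mp0 /mnt/pve/nas-media,mp=/mnt/media
```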

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc


I... actually managed to do this, and while it was somewhat messy to get through, I did it. My 'stoppers' were initially just needing to update some of #Jellyfin's xml configs for any wrong/old paths/values, and lastly the #SQLite DBs themselves, which had old paths as well - most were easy to fix since they're text values, but some were (JSON) blobs. Using an SQLite extension in #VSCode, those weren't hard either: export the blob, edit the blob's JSON text value, and reimport the blob into the column.
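For the plain-text path columns, a bulk REPLACE in sqlite3 saves the hand-editing; a sketch of the idea below (the table/column names are what I recall from Jellyfin's library.db - verify them, stop Jellyfin, and back the DB up first):

```
# back up, then rewrite the old path prefix in place
cp library.db library.db.bak
sqlite3 library.db \
  "UPDATE TypedBaseItems SET Path = REPLACE(Path, '/old/media', '/mnt/media') WHERE Path LIKE '/old/media%';"
```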

Now my Jellyfin #LinuxServer.io container, sitting in an unprivileged (#Debian #Linux) #LXC container on #Proxmox, is set up with hardware transcoding using the #AMD Ryzen 5 5600G onboard iGPU (cos I'm getting impatient waiting for my #Intel #ArcA380 to arrive). I'll update my #ErsatzTV container to do the same. Everything's perfect now, 'cept I still wouldn't recommend users stream Jellyfin on the web or a web-based client with transcoding, cos while the transcoding itself is perfect, Jellyfin seems to have an issue (that I never got on #Plex) where subtitles desync pretty consistently when not direct playing - with external or embedded subs, regardless. Dunno if that can ever be fixed though, considering the issue has been open since 2023 with no fix whatsoever.

There's also a separate issue I'm having where Jellyfin does not seem to support discovering/serving media files that are contained in a symlinked directory (even though some people on their forums have reported in the past that it should) - I reported it last week, but it's not going anywhere for now. Regardless, I'm absolutely loving Jellyfin despite some of its rough edges, and my users are loving it too. I think I'd consider myself 'migrated' from Plex to Jellyfin, but I'll still keep Plex around as a backup for these two cases/issues I've mentioned, for now.

🔗 https://github.com/jellyfin/jellyfin-web/issues/4346

🔗 https://github.com/jellyfin/jellyfin/issues/13858

RE:
https://sakurajima.social/notes/a6j9bhrbtq

GitHub · jellyfin/jellyfin-web · Issue #4346: "[Issue]: Subtitle desync JF 10.8.9" (by MrToast99) - "Please describe your bug: Upgraded 10.8.8 > 10.8.9 and now subtitles desync or loop if you jump ahead in a file. Steps: Start a show with subs, jump ahead a few mins. This will cause the subs to loop..."

Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test it yet, 'cept for the latter) - all by doing some seemingly magic #Linux fu with user/group mappings and custom configs - if it turns out you could actually achieve the same result just as easily, graphically, using a standard wizard in PVE.

It's 4am, so I'll prolly try to find time later in the day, or rather the evening (open house to attend at noon), to try the wizard to 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see if the root user + service user in the container can access it/use it for transcoding in #Jellyfin/#ErsatzTV, and 2) add SMB/CIFS storage at the Proxmox Datacenter level, tho my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?), and see if I could mount that storage into the LXC container that way.
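If the wizard route pans out, my understanding is it's just the newer dev[n] container option under the hood (added around PVE 8.2, if I'm not mistaken), which can also be set from the CLI - gid being the group that should own the node inside the CT:

```
# hand the host's render node to CT 101, owned by GID 104 (render on Debian) inside the CT
pct set 101 --dev0 /dev/dri/renderD128,gid=104
```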

#Homelab folks who have done this, feel free to give some tips or wtv if you've done this before!

I'm writing a guide on sharing a single GPU passthrough across multiple #Proxmox #LXC containers, based on a few resources, including the amazing Jim's Garage video.

Does anyone know the answer to this question of mine, though: why might he have chosen to map a seemingly arbitrary GID 107 on the LXC container to the Proxmox host's render group GID of 104 - instead of mapping 104 -> 104, as he did with the video group, where he mapped 44 -> 44 (which makes sense to me)?

I've watched his video seemingly a million times, and referred to his incredibly simplified guide on his GitHub that's mostly only meant for copy-pasting purposes, and I couldn't quite understand why yet - I'm not sure if it really is arbitrary and 107 on the LXC container could be anything, including 104 if we wanted... or if it (i.e. 107) should've been the LXC container's actual render group GID, in which case it should've also been 104 instead of 107 on his Debian LXC container, as it is on mine.
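For what it's worth, here's the mapping I'd expect if the container's render GID really is 104 (as on stock Debian) - in each `g` line the first number is the GID inside the container and the second is the host GID it maps to, so if a template used 107 for render, that line would become `g 107 104 1` and the surrounding ranges would shift to stay contiguous. Treat this as a sketch, not a verified config:

```
# /etc/pve/lxc/<vmid>.conf - map the container's video (44) and render (104) groups onto the host's
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431
# plus, on the host, /etc/subgid needs: root:44:1 and root:104:1
```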

Anyway, super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my nodes' Ryzen 5 5600G iGPUs, but I worry I'd screw something up, seeing that it's the only graphics onboard that node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141

GitHub · JamesTurland/JimsGarage · Issue #141: "[QUESTION] Clarification on GID mapping choice for render group" (by irfanhakim-as) - "Referencing the following resources: https://youtu.be/0ZDr5h52OOE https://github.com/JamesTurland/JimsGarage/tree/main/LXC/Jellyfin May I know the reasoning behind the GID mapping choice for the..."

I love #Podman, but gosh is it needlessly complicated (to set up correctly) compared to #Docker. I'll continue using it over Docker on my own systems, but for recommending/advocating to other people (when it comes to containerisation), maybe I'll stick with Docker.

If you're just setting it up on your personal machine, it's easy - some aspects may even be simpler than Docker - but the moment you start getting into things like getting it to work in a #Proxmox #LXC container... it gets messy real fast.

Hey networking/LXC specialists.

I have NextCloudPi running as an LXC container.

To access it, I set up routing on my Mikrotik router (screenshot).

The problem is that accessing NCP this way is very slow, I need to wait 5-10 seconds for the page to load.

I have Tailscale installed in the container, and accessing NCP using the Tailscale host name is nearly instantaneous.

Some people who aren't familiar with #Alpine Linux are sceptical when you tell them that, if you want to work light, there's nothing better.
Even more so when it's a container! The first two images show a freshly installed #LXC Alpine Linux 3.21; the next two show the resources used with #SSH and #Docker installed and running.

So I have this idea to move at least some of my self-hosted stuff from Docker to LXC.

Correct me if I'm wrong, dear Fedisians, but I feel that LXC is better than Docker for services that are long-lived and keep internal state, like Jellyfin or Forgejo or Immich? Docker containers are ephemeral by nature, whereas LXC containers are, from what I understand, somewhere between Docker containers and VMs.

OK, for many IT people this will be obvious, but it blew my mind just now.

I have NextCloud running in an LXC container in my NAS, and I was looking for an easy way to access it over the LAN.

I usually access it over Tailscale but it feels silly to access a local service over a VPN.

And thanks to this video I learnt that I can add a route in my Mikrotik router config to just route the LXC network through my NAS as a gateway. And it works!
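For anyone wanting to copy this, the route itself is a one-liner on the Mikrotik (the subnet and gateway below are placeholders for my LXC bridge network and the NAS's LAN address; the NAS also needs IP forwarding enabled):

```
/ip route add dst-address=10.0.3.0/24 gateway=192.168.88.10 comment="LXC bridge via NAS"
```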

#mikrotik #lxc #nextcloud

youtube.com/watch?v=TmGvbXfwJE

ProxLB 1.0.7 (an opensource DRS alike solution for #Proxmox clusters) is just around the corner!

1.0.7 will be the last version before I publish the newly refactored code base, rewritten in a modern, object-oriented way. Version 1.1.0 squashes some more bugs that were postponed on the current code base and makes future handling much easier overall (including new features).

Website: proxlb.de
GitHub: lnkd.in/eEZWEU7s
Blog post: lnkd.in/e5_b6u-A
Tags: #ProxmoxVE #DRS #Loadbalancer #opensource #virtualization #gyptazy #Proxmox #ProxLB #homelab #enterprise #balancer #balancing #virtualmachines #VM #VMs #VMware #LXC #container #cluster

After taking the nickel tour of #Qubes, my hasty conclusion is that it is anti-#KISS; there are seemingly many moving parts under the surface, and many scripts to grok to comprehend what is going on.

I plan to give it some more time, if only to unwrap how it launches programs in a VM and shares them with dom0's X server and audio and all that; perhaps it's easier than I think.

I also think #Xen is a bit overkill, as the claim is that it has a smaller kernel and therefore a smaller attack surface than the seemingly superior alternative, #KVM. Doing some rudimentary searching of identified/known VM escapes, there seem to be many more that impact Xen than KVM in the first place.

Sure, the #Linux kernel may be considerably larger than the Xen kernel, but it does not need to be (a lot can be trimmed from the Linux kernel if you want a more secure hypervisor), and the Linux kernel is arguably more heavily audited than the Xen kernel.

My primary concern is compartmentalization of 'the web', which is the single greatest threat to my system's security, and while #firejail is a great solution, I have run into issues keeping my qutebrowser.local and firefox.local files tuned to work well, and it's not the simplest of solutions.

Qubes offers great solutions to the compartmentalization of data and so on, and for that, I really like it, but I think it's over-kill, even for people that desire and benefit from its potential security model, given what the threats are against modern workstations, regardless of threat actor -- most people (I HOPE) don't have numerous vulnerable services listening on random ports waiting to be compromised by a remote threat.

So I am working to refine my own security model, with the lessons I'm learning from Qubes.

Up to this point, my way of using a system is a bit different than most. I have 2 non-root users, neither has sudo access, so I do the criminal thing and use root directly in a virtual terminal.

One user is my admin user that has ssh keys to various other systems, and on those systems, that user has sudo access. My normal user has access to some hosts, but not all, and has no elevated privileges at all.

Both users occasionally need to use the web. When I first learned about javascript, years and years ago, it was a very benevolent tool. It could alter the web page a bit, and make popups and other "useful" things.

At some point, #javascript became a beast, a monster, something capable of scooping up your password database and your ssh keys, and probing your local network with port scans.

In the name of convenience.

As a result, we have to take browser security more seriously, if we want to avoid compromise.

The path I'm exploring at the moment is to run a VM or two as a normal user, using KVM, and then use SSH X forwarding to run firefox from the VM, which I can more easily firewall; this ensures that if someone escapes my browser or abuses JS in a new and unique way, no credentials are accessible unless they are also capable of breaking out of the VM.
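Concretely, that looks something like the below (hostname is a placeholder; the VM's sshd needs X11Forwarding enabled and xauth installed):

```
# run Firefox inside the VM, display it on the local X server
# -X = X11 forwarding, -C = compress the stream
ssh -X -C user@browser-vm firefox
```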

What else might I want to consider? I 'like' the concept of dom0 having zero network access, but I don't really see the threat actor that is stopping. Sure, if someone breaks from my VM, they can then call out to the internet, get a reverse shell, download some payloads or build tools, etc.

But if someone breaks out of a Qubes VM, they can basically do the same thing, right? Because they theoretically 'own' the hypervisor, and can restore network access to dom0 trivially, or otherwise get data onto it. Or am I mistaken?

Also, what would the #LXC / #LXD approach look like for something like this? What's its security record like, and would it provide an equivalent challenge to someone breaking out of a web browser (or other program I might use but am not thinking of at the moment)?

I need to take a step back and reassess my network setup. Here’s what I have:
• Proxmox VE running on a mini PC, directly connected to my router (no VLANs).
• The Proxmox host has a single virtual adapter with a static private IP, which is also reserved on the router.
• A Cloudflared LXC (running in Proxmox) with its own reserved private IP on the same subnet as the Proxmox host.
• A VM on the same subnet running Docker, where the containers are on a user-defined bridge network, but this bridge network is on a different subnet than the host.

My goal:

I want the Cloudflared LXC to properly route public hostname(s) to the appropriate Docker containers (which provide public services) on the VM.

The challenge:

Since the Docker containers are on a different subnet than the VM itself, how should I structure my networking so that:
1. Cloudflared can route requests correctly to the Docker services.
2. The setup remains clean and maintainable.

What’s the best approach to configure this? Should I adjust Proxmox networking, use additional routes, or take a different approach?
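One approach I'm leaning towards: keep cloudflared pointing at the VM's LAN IP and the ports the containers publish, rather than trying to reach the Docker bridge subnet directly (it's NATed behind the VM anyway). A sketch, with placeholder hostnames/IPs:

```
# /etc/cloudflared/config.yml inside the LXC
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com
    service: http://192.168.1.60:8080   # VM's LAN IP + published container port
  - service: http_status:404
```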

I know it's normal and not all that exciting, but the simple pleasure of moving my #LXC containers from my old server to my new server and having them work with zero touch is just lovely.

It's also kinda neat contending with Alma #Linux — my first RHEL-like in about fifteen years — then having all my lovely Arch containers to hand.

Replied in thread

@kde@floss.social @kde@lemmy.kde.social

Can you tell us what happens on the "sandbox all the things" goal?

I think this is a pretty crucial step forward, even though #sandbox technologies (most often through user namespaces) are more problematic than I initially thought.

(Basically, user #namespaces open up #privesc dangers to the monolithic #kernel, which is incredible. #Android and #ChromeOS use #LXC, mounts and #SELinux for #sandboxing)