dice.camp is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon server for RPG folks to hang out and talk. Not owned by a billionaire.

#SoHo

9 AM the morning before the Easter long weekend, what are your plans?

Me?

Two days of family events.

Getting one of my 'toy' cars out of winter storage (B8.5 Audi A4).

Working on a different toy car (Miata gets new Xida coilovers).

More winter cleanup yard work and burning.

Architecting a #soho edge router, file server plus local services, and a disaster recovery strategy for my niece's new business; ordering parts.

Realistically, the car stuff probably won't get done.

Off to grind out that last work day before the weekend.

Finally upgraded my Mastodon instances from v4.3.2 to v4.3.4.

With containers it's relatively painless.
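For reference, the dance is roughly this - a sketch only, assuming a stock docker-compose.yml deployment (service names per Mastodon's own compose file) with the image tags already bumped:

docker compose pull                                          # fetch the new images
docker compose run --rm web bundle exec rails db:migrate     # run pending DB migrations
docker compose up -d                                         # restart everything on the new version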

That said, I really would like the Mastodon admin interface to have settings for post length and the number of poll options.

It's a bit of a pain to check whether ./app/javascript/mastodon/features/compose/containers/compose_form_container.js, ./app/validators/status_length_validator.rb, and ./app/validators/poll* need to be changed/updated/migrated for every new minor release. Not a big deal, truly, but it's something to fat-finger and one more thing to track in your environment.
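The vetting itself boils down to a diff between release tags - something like this in a checkout of the mastodon repo (a sketch; tag names per the versions above):

git diff v4.3.2 v4.3.4 -- \
  app/javascript/mastodon/features/compose/containers/compose_form_container.js \
  app/validators/status_length_validator.rb \
  'app/validators/poll*'

If that comes back empty the local patches can be reapplied as-is; if not, they need a manual once-over.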

Mastodon is fabulous and complex, but a little addition like this would go a long way toward easing admin burdens.

cc: @Gargron

Friends...

#Colocation and/or #VPS #options in eastern #Ontario #Canada run by a #Canadian company?

Preferably centred around or near #Ottawa for colo. For VPS I'm less concerned about location, as long as it's in the #Toronto / #Ottawa / #Montreal backbone areas.

Any options? Web searching seems to pull up either enterprise-grade ($$$) offerings, mickey-mouse stuff, or US companies with Toronto PoPs.

For frame of reference I currently have stuff on Linode/Akamai and Ionos.

Alternatively, if anyone is operating or interested in starting a colo cooperative in the #Almonte/ #Renfrew/ #Arnprior/ #CarletonPlace/ #Perth areas please reach out. Maybe there's critical mass to build one, or I can help with something already operating...

#homelab #soho #Linux

Remember to audit your backups!

When I migrated my old Arch servers to Debian with ZFS last year I also changed the entire data layout on the arrays. I broke things out into smaller chunks to make ad hoc backups run faster, and modularized things so I stood a chance of remembering what was what just by doing a directory listing, that kind of thing.
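Concretely that meant a handful of per-purpose datasets instead of one big blob - a sketch, with made-up names rather than my real layout:

zfs create tank/docs
zfs create tank/media
zfs create tank/webdev
zfs list -o name,used,mountpoint   # the listing itself documents what lives where

Each one is small enough to rsync ad hoc on its own.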

All week I've been working on finally getting a web site going for the root of this domain. Basically that's consisted of trying out a bunch of Hugo themes, loading some markdown files from another live site into the content directory to see how they render, yadda, yadda.

In the process of doing this, at some point I must have fat-fingered something and deleted/moved the source files for the active site. I only discovered this when trying to copy the posts from the active site to a new one I was testing with a new theme.

rsync -av ~oldsite/site/content/posts ~newsite/site/content/.
"... No such file or directory ..."

Wat...?

Ok, whatever, start looking at the incremental backups... hmm, not there... not there... not there... wat? List the backup crontabs again... no reference. Other host with backups that don't match this pattern... not there... WTAF?!

The location where I keep my web dev stuff is not backed up at all!

In the root of the array where this lives there are four other directories that are backed up, but for some reason this one is not. Either I had a blind spot on it, or I forgot, or I never set it up because there were no web projects when the server was built, or...

I got lucky - I had made a bunch of copies while experimenting during the week and was able to find 'out of band' copies of the files. But if, during testing, I had blown away all my test data without knowing there were no backups, I would not have been happy!

Anyway...

Audit your backups!

Test restoring from your backups!
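Even a crude check would have caught this - a sketch, with placeholder names (array root and cron job files are not my real layout): walk the top-level directories and make sure each one shows up in some backup job definition.

for d in /tank/*/; do
  grep -qs "${d%/}" /etc/cron.d/backup-* || echo "not covered: ${d%/}"
done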

Comet C/2024 G3 (ATLAS) - seen by the C3 coronagraph on SOHO.

Download these images from:
umbra.nascom.nasa.gov/pub/lasc
...specifically, go to the desired date, and camera (C3), such as:
umbra.nascom.nasa.gov/pub/lasc

Images are in FITS format, which can be processed by GIMP:
gimp.org/

en.wikipedia.org/wiki/Solar_an

The comet is within hours of perihelion.
en.wikipedia.org/wiki/C/2024_G

1st image is in IR. Three tails seen.

2nd image is visible light. The brightest part of the comet is grossly overexposed, but faint tail details are seen. I enhanced visibility with unsharp masking.

When you download FITS files, also get a copy of the img_hdr.txt file because it tells you which file was taken with which filter.
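If you want a whole day's worth in one go, something like this works - a sketch, where $LASCO_DIR stands in for the date/camera directory URL above, and the accept pattern may need adjusting to whatever FITS extension that directory actually uses:

wget -r -np -nd -A 'img_hdr.txt,*.fts' "$LASCO_DIR"

That pulls the frames plus the header index into the current directory without recreating the remote directory tree.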

3rd image is what NASA puts up for public consumption, but the scaling/histogram manipulation makes all parts of the comet white and featureless.
soho.nascom.nasa.gov/data/real

Anyone have experience getting #Mediawiki to work properly behind #nginx proxy manager?

I've got a certificate set up for MediaWiki in the nginx config so that wiki.host.mydomain.tld points to https://wiki.mydomain.tld:10443.

wiki.host.mydomain.tld is fronted by nginx with a Let's Encrypt cert.

wiki.mydomain.tld:10443 is using a self-signed cert.

If I access the site directly via wiki.mydomain.tld:10443 it runs like a scalded cat, in spite of being a very heavy MediaWiki site (many years of rich content).

If I access it "properly" via the nginx reverse proxy setup (wiki.host.mydomain.tld) there is a 5-10s delay before it starts rendering. This is inconsistent - it happens maybe 90% of the time.

I feel like it's getting lost between the layers and something dumb like name resolution is getting in the way.
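One way to check that theory - a sketch using the hostnames/ports above - is to time each hop with curl and see where the seconds actually go:

curl -sk -o /dev/null -w 'dns:%{time_namelookup} tls:%{time_appconnect} ttfb:%{time_starttransfer} total:%{time_total}\n' https://wiki.mydomain.tld:10443/
curl -sk -o /dev/null -w 'dns:%{time_namelookup} tls:%{time_appconnect} ttfb:%{time_starttransfer} total:%{time_total}\n' https://wiki.host.mydomain.tld/

If the direct hop has a fast time to first byte but the proxied hop doesn't, the missing seconds are being spent between nginx and the backend (resolving it, connecting, TLS to the self-signed cert) rather than inside MediaWiki.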

I've hard-coded the IP addresses in the different containers (nginx, mariadb, mediawiki). nginx and mariadb are using host networking; MediaWiki is not. The MW container is the official image, based on Apache.

Nothing special in the nginx config: wiki.host.mydomain.tld -> https://wiki.mydomain.tld:10443

Pointing at non-SSL on the receiving host makes no difference... it's nginx that is causing the delay.

It does always work - just slow.

I'm at the "can't see the trees for the forest" stage. I always suck at networking in general. Ha!

Ideas?