#postgresql


On June 12, join our CEO @c2main at 11:30 AM CEST for a FREE livestream at @posetteconf about Pressure Stall Information in the world of #PostgreSQL.

Use this link to add the livestream to your calendar and optionally register, or follow us to get the recording once it becomes available after the conference!

posetteconf.com/2025/talks/resource-control-admission-i-have-a-date-with-my-psi/
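For context, Pressure Stall Information (PSI) is the Linux kernel interface the talk title refers to: it reports how long tasks were stalled waiting for CPU, memory, or I/O. As a rough illustration (not material from the talk), here is a minimal Python sketch that reads it, assuming a kernel built with PSI support:

```python
# Minimal sketch: read Pressure Stall Information from /proc/pressure/.
# Requires a Linux kernel with PSI enabled; not code from the talk.

def read_psi(resource: str) -> dict:
    """Parse /proc/pressure/<resource>, where resource is 'cpu', 'memory' or 'io'."""
    metrics = {}
    with open(f"/proc/pressure/{resource}") as f:
        # Each line looks like: "some avg10=0.12 avg60=0.05 avg300=0.01 total=123456"
        for line in f:
            kind, *fields = line.split()
            metrics[kind] = {k: float(v) for k, v in (field.split("=") for field in fields)}
    return metrics

if __name__ == "__main__":
    for res in ("cpu", "memory", "io"):
        print(res, read_psi(res))
```

Sustained non-zero avg10/avg60 values mean tasks are actively stalling on that resource, which is the raw signal PSI-based resource control builds on.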


#appy's technical stack runs on Python, FastAPI, #Postgresql (database) and #Redis (cache).
Everything is asynchronous to avoid blocking points: a lot happens in parallel, simultaneously, which is essential for keeping interactions fluid both with other servers and with client applications.
appy runs well even on a #Raspberry 4B, so it is also a valid option for a self-hosted fediverse profile at home.
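To make that pattern concrete, here is a minimal sketch (not appy's actual code) of an async FastAPI endpoint that checks a Redis cache before falling back to PostgreSQL; the DSN, table, and route names are hypothetical, and it assumes the asyncpg and redis packages are installed:

```python
# Sketch of the async FastAPI + PostgreSQL + Redis pattern described above:
# check the cache first, fall back to the database, never block the event loop.
import asyncpg
import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis()  # hypothetical local Redis instance

@app.on_event("startup")
async def startup() -> None:
    # Hypothetical DSN; a connection pool keeps queries non-blocking.
    app.state.pool = await asyncpg.create_pool(dsn="postgresql://appy@localhost/appy")

@app.get("/profiles/{handle}")
async def get_profile(handle: str) -> dict:
    # Cache hit: answer without touching PostgreSQL.
    if (cached := await cache.get(f"profile:{handle}")) is not None:
        return {"handle": handle, "bio": cached.decode(), "cached": True}
    # Cache miss: query the database asynchronously, then populate the cache.
    row = await app.state.pool.fetchrow("SELECT bio FROM profiles WHERE handle = $1", handle)
    bio = row["bio"] if row else ""
    await cache.set(f"profile:{handle}", bio, ex=60)  # cache for 60 seconds
    return {"handle": handle, "bio": bio, "cached": False}
```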

Some time ago I mentioned here, half-jokingly, the self-fixing software I work with. I said Patroni #Postgres has the best regeneration ability I've ever seen. So far, "the best ability" includes:

> After a network migration the servers changed IP addresses. That broke the etcd config, so I had to delete it completely and initialize the etcd cluster again, which also meant wiping and recreating the Patroni config, because Patroni depends heavily on etcd. Even while the configuration temporarily didn't exist, the connection to the WAL archives (technically another, separate server) was never interrupted (I'm not even sure any real data transfer could have happened at that point). That was apparently enough to start a new #database cluster from the last timeline. I don't know WHAT made the servers immediately pull that data on a fresh start. At migration time there was no real production data, so I didn't even deliberately try to restore anything.

> Not long after (and now with real production data), some test scripts generating lots of database changes in a short time, beyond the old server's capacity, killed the master server. Patroni failed over as intended, and I could work on increasing the server's capacity (I had to do it live, which was not very convenient). The first server eventually decided the data corruption was too severe and, to fix it, automatically deleted the whole /var/lib/postgresql/* directory and started rebuilding everything from scratch using data from the new master server (at 2 GB/s or more, because why not? :blobcatjoy:).

> While that was still in progress, an impatient tester hit the cluster again with their unoptimised scripts and finally killed the whole thing. Swearing silently, I scaled up the remaining servers, since that was the only thing I really could do. The PostgreSQL API was mostly unresponsive; it had only limited information about the last state before the final failure, and it wasn't possible to force any change or affect it in any way.
The first server decided to delete its whole data directory again and recreate it (at least this time I caught the exact moment in the logs), while at the same time the second server rewound itself to the state of the third server (why??). All of this happened automatically, without my help. I wouldn't even have known what to do :blobcatsweat:

And this is only the beginning of running it in production. Now I'm waiting for stubborn users to run some more unintended durability tests... maybe I'll find out it's even more invincible :blobcatamused:
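For anyone who wants to watch this kind of self-healing as it happens: Patroni exposes a REST API (port 8008 by default) that reports each node's role, state, and timeline, plus the cluster topology it sees through the DCS (etcd here). A small sketch using only the Python standard library; the hostnames are hypothetical and this is not the author's setup:

```python
# Poll each Patroni node's REST API to see who is primary, who is recovering,
# and which timeline everyone is on. Hostnames are hypothetical.
import json
from urllib.request import urlopen

PATRONI_NODES = ["db1:8008", "db2:8008", "db3:8008"]  # hypothetical hosts

def node_state(endpoint: str) -> dict:
    """GET /patroni returns the node's own view: role, state, timeline, etc."""
    with urlopen(f"http://{endpoint}/patroni", timeout=5) as resp:
        return json.load(resp)

def cluster_state(endpoint: str) -> dict:
    """GET /cluster returns the topology as stored in the DCS."""
    with urlopen(f"http://{endpoint}/cluster", timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for node in PATRONI_NODES:
        try:
            s = node_state(node)
            print(node, s.get("role"), s.get("state"), "timeline", s.get("timeline"))
        except OSError as exc:  # node down or API unresponsive
            print(node, "unreachable:", exc)
```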

#admin #sysadmin #it

New episode of #TalkingPostgres #podcast 🎙️

Bruce Momjian, #PostgreSQL core team member, joined me on Ep26 to talk about open source leadership—with rabbitholes on servant leadership, the art of public speaking, the power of gratitude, & bow ties

Boosts appreciated 🚀 to help more people discover the podcast

🎧 talkingpostgres.com/episodes/o

📺 youtu.be/gYG3FP-Csho?feature=s

Hello, hachyderm! We've been working hard on building up our ansible runbooks and improving hachyderm's overall resilience. Recently, we've been focusing on database resilience.

We're getting close to retiring our original database server (finally!) and preparing to move to a fully ansible-managed set of database servers, primary and replica, on new hardware. We'll send another announcement when we do the cutover. The team has done excellent work to make this highly automated, quick, and painless! :blobfoxscience:
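As an example of the kind of check that makes a cutover boring (a sketch, not Hachyderm's actual tooling): before switching, confirm the new replica is attached to the primary and measure its replay lag via pg_stat_replication. The DSN is hypothetical and it assumes psycopg (v3):

```python
# Query the primary's pg_stat_replication view to verify the replica is
# streaming and to see how many bytes of WAL it still has to replay.
import psycopg

PRIMARY_DSN = "host=db-primary dbname=mastodon user=monitor"  # hypothetical DSN

QUERY = """
SELECT application_name,
       state,
       sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
"""

with psycopg.connect(PRIMARY_DSN) as conn:
    for name, state, sync_state, lag in conn.execute(QUERY):
        print(f"{name}: state={state} sync={sync_state} lag={lag} bytes")
```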

Done:

✅ author ansible roles for managing postgresql, pgbackrest (backups), pgbouncer, and primary/replica failover
✅ decide to continue with pgbouncer and *not* use pgcat
✅ rotate database passwords
✅ order new replica database hardware
✅ order new future primary database hardware

To do soon:

🟨 rebuild replica database with ansible scripts
🟨 prepare primary database with ansible scripts
🟨 start replicating to new database replica
🟨 cut over to new database server 🎉

We're also planning on open-sourcing our ansible roles in the coming weeks - just a little housekeeping & tidying up before we do!