Just another Swedish programming sysadmin person.
Coffee is always the answer.
And beware my spaghet.
The EU AI Act classifies AI systems based on the risk they pose (in case of mistakes, etc.), and things like criminality assessment are classed as an unacceptable risk and are therefore prohibited without exception.
There’s a great high-level summary of the act available, if you don’t want to read the hundreds of pages of text.
They couldn’t possibly do that; the EU has banned it, after all.
To quote Microsoft themselves on the feature:
“No content moderation” is the most important part here; it will happily steal any and all corporate secrets it can see, since Microsoft haven’t given it a way not to.
If you’re going to post release notes for random self-hostable projects on GitHub, could you at least add the GitHub About text for the project - or the synopsis from the readme - to the post?
I’ve been looking at the rewrite of ownCloud, but unfortunately I really do need either SMB or SFTP for one of the most critical storage mounts in my setup.
I don’t particularly feel like giving ownCloud a win either; they haven’t been behaving in a friendly manner toward the community, and their track record with open core isn’t good, so I really don’t want to end up with a decent product that then steadily mutilates itself to try and squeeze money out of me.
The ownCloud team actually had a stand at FOSDEM a couple of years back, right across from the Nextcloud team, and chatting with them really didn’t give me much confidence in the project. I’ve since heard that they’re apparently not going to be allowed to return either, due to how poorly they handled it.
I’ve been hoping to find a non-PHP alternative to Nextcloud for a while, but unfortunately I’ve yet to find one which supports my base requirements for the file storage.
Due to some quirks with my setup, my backing storage consists of a mix of local folders, S3 buckets, SMB/SFTP mounts (with user credential login), and even an external WebDAV server.
Nextcloud handles such a mix phenomenally well, while all the alternatives I’ve tested (including a Radicale backed by rclone mounts) tend to fall completely to pieces as soon as more than one storage backend gets involved, especially when some of those backends need to be accessed with user-specific credentials.
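For reference, here’s roughly what that backend mix looks like expressed as rclone remotes - every remote name, host, and credential below is made up, it’s just to illustrate the spread of storage types involved:

```ini
# Illustrative only - all names, hosts, buckets, and credentials are invented.
[local-scratch]
type = local

[archive-s3]
type = s3
provider = Other
endpoint = https://s3.storage.internal
access_key_id = EXAMPLEKEY
secret_access_key = EXAMPLESECRET

[dept-share]
type = smb
host = files.storage.internal
user = alice
domain = EXAMPLE
# pass = (output of `rclone obscure`)

[ingest-sftp]
type = sftp
host = sftp.storage.internal
user = alice
key_file = ~/.ssh/id_ed25519

[external-dav]
type = webdav
url = https://dav.example.net/webdav/alice/
vendor = other
user = alice
# pass = (output of `rclone obscure`)
```

The user-specific credential part is what really hurts; remotes like these are defined statically, so as far as I can tell there’s no clean equivalent to Nextcloud’s “log-in credentials” external storage option, where each user’s own SMB/SFTP login gets passed through to the backend.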
I feel like this could go really well together with Piet.
Just imagine: an album consisting of a bunch of Velato programs with Piet code as the artwork.
The first official implementation of directly connecting WhatsApp to another chat system - using purpose-built APIs instead of third-party bridges - was indeed done against the Matrix protocol, as part of a collaboration to test ways of satisfying the interoperability requirements of the EU Digital Markets Act.
So not a case of a third-party bridge pretending to be a WhatsApp client well enough to funnel communication through, but instead an official WhatsApp endpoint developed - by them - explicitly for interoperation with another chat system.
I think the latest update on the topic is the FOSDEM talk that Matthew held this February.
Edit: It’s worth noting that the goal here is to support even direct E2EE communication between WhatsApp and Matrix users, though that’s not likely to happen in the first consumer-available release.
Well, the first tests for interconnected communication with WhatsApp were done with Matrix, so that’s a safe bet.
Haven’t really used any proper JMAP clients - the setup is broken anyway - so mainly just curl.
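The protocol itself is pleasant to poke at by hand though; a session fetch plus one method call covers most of it, whether you do it with curl or a few lines of script. A rough sketch (the server URL and credentials are placeholders, obviously):

```python
# Minimal JMAP round-trip per RFC 8620/8621; the server and credentials
# below are placeholders, not a real setup.
import requests

BASE = "https://mail.example.org"
AUTH = ("alice@example.org", "app-password")

# 1. Fetch the session object to discover the API endpoint and account id.
session = requests.get(f"{BASE}/.well-known/jmap", auth=AUTH).json()
api_url = session["apiUrl"]
account_id = session["primaryAccounts"]["urn:ietf:params:jmap:mail"]

# 2. One method call: list the account's mailboxes.
resp = requests.post(api_url, auth=AUTH, json={
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [["Mailbox/get", {"accountId": account_id}, "c0"]],
}).json()

for box in resp["methodResponses"][0][1]["list"]:
    print(box["name"], box.get("totalEmails"))
```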
You could also just run IMAP/JMAP/SMTP as separate components; I can’t see any place in the Stalwart documentation - or in the Docker image itself - where the monolith is the only option.
I haven’t tested it myself yet, but another root and I are planning to trial Stalwart as a replacement for a semi-broken IMAP/JMAP setup at a computer club, keeping the SMTP side as is.
I’ve personally been using KDE’s Itinerary app, but it might not be what you’re looking for.
We’ve recently kicked out our entire Cisco networking core because it actively refused to interoperate with other hardware we need, which forced us to run an almost entirely separate second core network. Switching it out for ALE has been really nice in that regard; SPB scales like a dream even between locations and cities, and we get working L2 routes all the way out to some of our sites almost half a country away.
For us, Dell has been the far better of the two big server-providing beasts (HPE/Dell) in terms of just being able to use the hardware they provide, but they’re very close to getting a complete block from future procurement due to how they’ve been treating us.
Honestly, Fujitsu is probably our best current provider; their hardware is reasonably solid, their rack kits aren’t insane, their BMC doesn’t do a bunch of stupid things, they don’t do arbitrary vendor locking on expansion cards, etc. Unfortunately their UEFI/BIOS is a complete mess, especially with regard to boot ordering and network boot, and they’ve so far not been able to provide us with Linux-based firmware upgrade packages - despite using a RHEL image in their own BMC-orchestrated offline firmware upgrade process.
Got a pair of old HPE Gen8 1U servers that are chewing through fan packages like nobody’s business; I’ve replaced at least five burnt-out fans on them in a similar number of years.
We’re running a mix of HPE, Dell, and Fujitsu servers, and they all absolutely suck in their own individual ways - HP(E) adds a bunch of arbitrary hardware limitations we have to work around, Dell intentionally degrades our multi-system setups with firmware updates, and Fujitsu’s boot firmware completely loses its mind if you do netboot first.
We’ve gotten some Supermicro systems now as well, and they’ve been a real treat in comparison, though their software UX feels like it’s about two decades behind.
It’s great to hear that they’re not just giving up. And it’s also definitely good to hear that they’re not sticking with PHP either, that language is a true bane to modern hosting - and especially Kubernetes.
I’ll remain cautiously optimistic that they’ll manage to stay relevant, and not go hard again on cutting away core functionality in the name of enterprise offerings - which is what caused the Nextcloud split in the first place.
Has anything actually happened in ownCloud’s development?
The last I saw of them was FOSDEM a few years back, where Nextcloud were handing out whitepapers and showing off their new Hub, chat, VoIP stack, group sharing system, and more. ownCloud, meanwhile, sat somewhat opposite with two people and a screen showing a screenshot of a default ownCloud install, along with a big sign hanging from the ceiling saying “Join the winning team.”
Lots of people instantly think of security when they look at WiFi-connected IoT devices, but they rarely think of the WiFi signal itself - what with all the added communication noise and airtime limitations that come from having lots of small devices.
Especially with regular consumer equipment, it doesn’t actually require that many devices to fully saturate a regular home router or AP.
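As a rough illustration of the airtime side (every number below is a ballpark assumption, not a measurement):

```python
# Back-of-envelope airtime estimate for chatty 2.4 GHz IoT devices.
# Every figure here is an illustrative assumption, not a measurement.

FRAME_BITS = 180 * 8         # ~100 B sensor report + TCP/IP + 802.11 headers
LEGACY_RATE = 1e6            # bit/s; worst-case legacy fallback rate at range
PREAMBLE = 192e-6            # long PHY preamble
DIFS = 50e-6
MEAN_BACKOFF = 15.5 * 20e-6  # average of CWmin = 31 slots at 20 us each
SIFS = 10e-6
ACK = 304e-6                 # 14 B ACK + preamble at the same legacy rate

frame_airtime = PREAMBLE + FRAME_BITS / LEGACY_RATE + DIFS + MEAN_BACKOFF + SIFS + ACK
reports_per_second = 5       # assumed per-device chattiness

per_device_share = frame_airtime * reports_per_second
print(f"~{frame_airtime * 1e3:.1f} ms of airtime per report")        # ~2.3 ms
print(f"~{1 / per_device_share:.0f} devices to eat the whole channel")  # ~87
```

And that’s before retries, management frames, and the neighbours’ networks take their share, so the practical ceiling sits well below that.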
I don’t get the “Game Porting Toolkit” they made; content-wise it basically looks like a regular Wine packaging - much like what Proton is - but then it has one of the strangest licenses I’ve ever seen for something designed to help development and shipping.
To paraphrase, you can’t include any part of the toolkit with your product. Not the development components, the runtime components, the translation layers, nothing. So good luck using it to actually ship game ports, since that would be a license violation.
Well, one available case you can look at is Uru: Live / Myst Online, currently running under the name Myst Online: Uru Live: Again.
They open-sourced their Dirt/Headspin/Plasma engine, which required stripping out - among other things - the PhysX code from it.