> Even better, there is a little-advertised utility called extrepo that has a curated list of external repositories users might want to enable for Debian. To enable the Mozilla repository, for example, a user only needs to install extrepo, run "extrepo enable mozilla" as root (or with sudo), update the package cache, and look for the regular Firefox package. In all, extrepo includes more than 160 external repositories for applications like Docker CE, Signal, and Syncthing. Unfortunately, the extrepo utility does not have a separate "list" command to show the available repositories, though running "extrepo search" with no search parameter will return all of its DEB822-formatted repository entries.
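The workflow described above can be sketched as follows (commands as given in the article; run as root or with sudo):

```shell
# Enable Mozilla's repository via extrepo, then install Firefox
apt install extrepo
extrepo enable mozilla
apt update
apt install firefox

# There is no dedicated "list" subcommand; an empty search dumps
# every DEB822-formatted repository entry extrepo knows about
extrepo search
```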
TIL. What a superpower!
psittacus 6 hours ago [-]
I was browsing through the issues of extrepo and found deb-get, seems pretty useful too:
> deb-get makes it easy to install and update .debs published in 3rd party apt repositories or made available via direct download on websites or GitHub release pages.
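Typical usage looks something like this (subcommand names as documented in the deb-get README; the package name is just an example and may not be in its catalogue):

```shell
sudo deb-get update          # refresh metadata for the configured apps
sudo deb-get install code    # install an app it knows about (example name)
sudo deb-get upgrade         # update everything deb-get manages
deb-get list                 # show all software deb-get can install
```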
I learned about extrepo when trying out LibreWolf. Then I realized there's plenty of stuff on extrepo I could install. No more curl-pipe installs or hand-added third-party package sources for me, mostly.
I was thrilled to learn about this, too. I wasn't aware of it even as a long-time Debian user.
However, trying the specific example listed in the article, I installed extrepo and enabled the mozilla repo. Unfortunately, firefox is not installable on trixie in its current form because it depends on libasound2, and the trixie package is called libasound2t64. : (
jzb 4 hours ago [-]
That's odd... I had no problem installing Firefox.
skydhash 5 hours ago [-]
Wait, what! I have firefox installed through extrepo and it worked well as of yesterday.
geokon 8 hours ago [-]
what's the difference with PPAs?
The main reason I always fall back to Ubuntu is that everyone has a PPA for it. Sometimes the PPA also works for Debian, but it's 50/50 (from what I understand, PPAs are not an official thing under Debian?).
AppImages have alleviated this, but appimagelauncher is broken under Ubuntu and they're annoying to integrate manually.
Generally, these are repos maintained by the upstreams themselves, e.g. Docker, Tor, Armbian, Dovecot... My guess (and it's just a guess, I haven't used Ubuntu PPAs much lately) is that the PPAs are maintained by not-the-upstreams. Or perhaps some upstreams maintain both a PPA and their own hosted repo.
jjice 7 hours ago [-]
I've only used AppImages for software I didn't care about ever being updated - can they do updates without having to manually redownload the AppImage?
hk1337 10 hours ago [-]
Does it enable access to software-properties-common?
scorpioxy 14 hours ago [-]
Installed trixie a few days ago, have been test-driving it, and it's been going very well. Coming from Ubuntu, it wasn't a big change, but I originally went with Ubuntu many years ago due to its reputation for making Debian a more user-friendly distribution. I can say that my experience with trixie was quite friendly. This may have been the case for a few releases, but I was invested in the Ubuntu platform so didn't see the need to switch.
Was bummed to see Firefox at version 128, as I've been missing features from the more recent versions. I don't know how I'm going to address that yet, as I prefer not to add external apt sources if I can avoid it. This is a desktop system, so somewhat recent versions of software are desirable.
What do other people do for desktop systems? Go with testing/unstable or just another distro for desktops?
theandrewbailey 10 hours ago [-]
I work in the refurb division of an ewaste recycling company[0]. Due to the certifications we have (and the lack of MS licensing), we can't install Windows on anything we sell. I started off installing Debian on the things I list, but switched to Mint. I've fallen in love with the OEM install option they have[1]. It sets up a pre-OOBE environment, letting me run things like fastfetch to get the system specs, then click the 'prepare for shipping to end user' command to trigger the user and password setup on next boot (so I don't have to set a password, write it on a Post-it note on the laptop, and hope it doesn't get lost).
I believe that is because Debian ships Firefox "Extended Support Release" (ESR) as a security precaution, and the firefox-esr package[1] is quite out of date in absolute terms.
I tried Debian several times over the years, but it was with bookworm (Debian 12) that I decided to make the switch on all my PCs and laptops, MacBook included.
(Mainly, it was the fact that the installer finally included firmware out of the box, which made installing much, much easier on laptops.)
Because I want updated packages, the first thing I do is enable backports (otherwise I think trixie still comes with KiCad 5? Ugh!) and do a full upgrade.
As for Firefox: Debian's repositories use Firefox ESR, which is why you are still on 128. There are instructions on Firefox's site on how to switch to the regular release channel; just do that. If you can't trust Mozilla's own sources, I don't know how you can trust Debian's.
Debian + KDE is my favourite combo. I don't do anything different for desktop. When the Debian 13 freeze happened, I simply waited a couple of days, edited the sources to point at trixie, did a full-upgrade, and ran an autoremove to clean out old stuff. That's it.
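That upgrade procedure, sketched as commands (assuming the classic one-line sources format; adjust paths if you already use DEB822 .sources files):

```shell
# Point every apt source at the new release, then upgrade in place
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt full-upgrade
apt autoremove --purge   # clean out packages the new release no longer needs
```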
jbstack 9 hours ago [-]
How do you find the hardware compatibility to be? I've been keen to switch away from Ubuntu for years now and Debian would be my first choice but I'm wary of having problems e.g. with Nvidia GPUs, random peripheral devices such as printers and scanners, all of which mostly "just work" with Ubuntu. For this reason I'm leaning more towards Linux Mint but I'd like to be persuaded on Debian.
I don't consider outdated packages to be a problem on any distro because I just use Nix (which doesn't interfere with other package managers) whenever I want a more recent package.
lgeorget 10 hours ago [-]
I know that's not your point, and I'm not saying this to cherry-pick your argument, but in case it's particularly relevant to you: Debian trixie ships with KiCad 9: https://packages.debian.org/trixie/kicad. If you're stuck with an earlier version, maybe you have a dependency blocking your updates.
kevin_thibedeau 4 hours ago [-]
Kicad is easy to compile manually. The build process is surprisingly smooth for something so complex.
cricalix 14 hours ago [-]
I've been happy with Fedora for my personal systems, and it's the only blessed distro at work for those who don't want Windows or Mac.
Heck, I use Fedora Server as my homelab OS to run Incus. Works For Me.
scorpioxy 14 hours ago [-]
Nothing against Fedora and the RPM-based platforms, but I prefer the Debian-derived distros. My preference is due to Debian feeling like a community project rather than being driven by corporate interests. Ubuntu was doing that for a while, but that started changing a few years ago.
hnarn 13 hours ago [-]
> Heck, I use Fedora Server as my homelab OS to run Incus. Works For Me.
In your case I guess it makes sense since you have to run Fedora at work, but I was under the impression that the support for Incus (i.e. official packaging etc) was better on Debian.
trklausss 9 hours ago [-]
You can always work with backports! It's Debian's way of bringing more recent packages to older stable versions.
This will install the backported package _and_ its dependencies, so you will be good to go :)
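Roughly, and assuming trixie's default archive keyring path, enabling and using backports looks like this (the package name is a placeholder):

```shell
# Add the backports suite (DEB822 style, matching trixie's defaults)
cat > /etc/apt/sources.list.d/trixie-backports.sources <<'EOF'
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie-backports
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
apt update

# Explicitly request the backported version; any dependencies it needs
# from backports come along automatically
apt install -t trixie-backports some-package
```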
phillybass 4 hours ago [-]
I prefer to install Firefox directly from Mozilla. I've had too many issues with the Debian ESR version being too far behind mainline in features.
kiney 11 hours ago [-]
if you want a newer firefox use flatpak, don't pollute your system with unofficial debs or source installs
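For reference, the Flatpak route is short (remote URL as published by Flathub):

```shell
apt install flatpak
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox
flatpak run org.mozilla.firefox
```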
hsbauauvhabzb 11 hours ago [-]
These kinds of statements suit servers with high-availability requirements, but really shouldn't be made without context. Adding Mozilla's Debian Firefox repo probably won't break anything catastrophic, and the time cost / risk of containers is non-zero too.
I’ve had more trouble and time wasted with snap Firefox than I’ve had with official Mozilla repos under both Debian and Ubuntu.
kiney 8 hours ago [-]
flatpak is not snap
kevin_thibedeau 4 hours ago [-]
You can manually install Mozilla's Linux binary and it will update itself as on other platforms. I've been doing this since the Iceweasel days and it's always been solid.
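A minimal sketch of that approach; unpacking under your home directory (rather than /opt) keeps the files user-writable, so the built-in updater can replace them:

```shell
# Download firefox-<version>.tar.xz from mozilla.org first, then:
mkdir -p ~/.local/opt
tar xJf firefox-*.tar.xz -C ~/.local/opt/
~/.local/opt/firefox/firefox &   # self-updates in place from here on
```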
notherhack 5 hours ago [-]
Firefox 140 ESR was released this month. I expect Debian will publish an update to it soon.
sandrello 12 hours ago [-]
I see testing as a better fit for general-purpose desktop setups, stable is a bit too conservative in that regard.
trklausss 9 hours ago [-]
That is true, at least for laptops that came to market after the respective Debian release.
You can, however, get all the stability of a released version with newer packages if you use stable+backports. This gives you a stable system while allowing you to upgrade selected packages to newer versions. This can be tedious, so running testing is also possible.
And, of course, you can also install other distributions that are bleeding-edge (Arch-based?). That's what I like about the distro ecosystem :)
giancarlostoro 6 hours ago [-]
If you want Arch that's easy to set up and manage, try EndeavourOS. It's the first time I've tried Arch and stuck with it. I tried Manjaro, but it was a nightmare for me: I had just installed it and ran an update command, and it broke everything. I think it was my lack of understanding of Pacman. I have to wonder if people mostly break Arch because of Pacman nuances.
Protip: don't use Pacman directly, just use 'yay', which comes with EndeavourOS. Yay is an interface to Pacman; while it may sound silly, it's totally worth its salt. I'm probably still on Endeavour because of yay.
To update your system, just type 'yay' into a terminal and it does the work, prompting you for confirmation.
If you want to install anything, it's as simple as 'yay packagename', and then it gives you options, including from the user repository (AUR), which is like Ubuntu's PPAs.
I spent probably 15 years on Debian / Ubuntu (though it mostly became Ubuntu, even for servers; I got too used to Ubuntu over the years). I installed Arch this past year because I wanted more up-to-date packages. I didn't want bleeding edge, but it hasn't been so much bleeding, so I'm okay with it. I update every few days, or when Discord decides to tell me to download the DEB package or it won't open.
pmontra 14 hours ago [-]
I've been on Debian 11 for a few years and I'm installing 13 on another disk (dual booting until it's ready for my job.)
I did not use the Firefox coming with 11 and I won't use the ESR version in 13. I downloaded the deb from Mozilla's site once and it autoupdated itself up to the current version. No problem at all. I'll do the same on 13.
hsbauauvhabzb 11 hours ago [-]
Mozilla have an apt source you can add. No manual dpkg required.
eadmund 10 hours ago [-]
Doesn’t that give Mozilla the ability to replace any package on one’s computer?
I trust Debian, and I trust the Debian Firefox team to secure Firefox, but I do not trust Mozilla.
You can tell apt to prefer a given source list only for a few packages.
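One way to express that with apt preferences; this is a sketch (pin syntax per apt_preferences(5); the origin host matches Mozilla's repo, and the glob is illustrative):

```
# /etc/apt/preferences.d/mozilla
# Never take anything from Mozilla's repo by default...
Package: *
Pin: origin packages.mozilla.org
Pin-Priority: -1

# ...except the Firefox packages themselves
Package: firefox*
Pin: origin packages.mozilla.org
Pin-Priority: 500
```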
brnt 13 hours ago [-]
Same here. The only sticking point is that for those with Nvidia GPUs (yes...), for some reason the kernel headers don't install when you install the driver, plus the Secure Boot signing simply does not work (Ubuntu and Fedora both manage it OOTB). I see it is generating MOK stuff, but it does not work, and because of that it's pretty hard to troubleshoot. Plus, the Debian-provided drivers do not work/enable at all on Optimus machines, and not a single option on the Debian wiki helped (I tried them all, except those no longer available). (Let's hope the Arch collaboration works out.)
I solved all of the above by switching to the NVidia Cuda repo (well, I did not reenable Secure Boot, so not sure if that would work now).
ThisNameIsTaken 13 hours ago [-]
While being an avid Debian user on both server and desktop, I had never heard of the extrepo[0] package mentioned in the article. It would be great if the repositories included there suggested this way of adding their repo. While it cannot guarantee the safety of added packages, it at least adds an extra layer of checks.
Another useful thing from the article for me was `apt modernize-sources` to update the existing sources.list to the new structure. Now I need to check if scripts like this run automatically on my parents' auto-updating desktop.
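For anyone who hasn't run it yet, the conversion looks roughly like this (paths as on a default install; a sketch, not verbatim tool output):

```
# Before, one-line style in /etc/apt/sources.list:
#   deb http://deb.debian.org/debian trixie main contrib non-free-firmware

# After `apt modernize-sources`, the DEB822 equivalent in
# /etc/apt/sources.list.d/debian.sources:
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main contrib non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```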
What I lack with the "modern" `sources.list.d/` file schema is a command to perform common types of edits. Something like `extrepo` but generic and with knowledge of Debian repos/dists. It's a small thing but I want to be able to type commands like
apt-sources available # prints known dists, marked by their support status
apt-sources list # prints all active dists
apt-sources add trixie # or "testing", "unstable", "sid"
apt-sources remove bookworm
apt-sources dist-upgrade # combo of the previous two
Perhaps `extrepo` would be extended to include Debian-proper or this hypothetical `apt-sources` would be kept Debian-repo-only or perhaps it would cover extrepo's scope.
pja 13 hours ago [-]
No mention of backports in this article as an alternative to tracking testing if you want (some) newer packages.
LibreOffice 25.8 (which was just released very recently) is already packaged in backports for trixie, for instance. Did things like updated KDE desktops make it to backports for bookworm?
genpfault 4 hours ago [-]
> Did things like updated KDE desktops make it to backports for bookworm?
I love Debian, but it does have some weaknesses. For example, with virtualization, when you enable SR-IOV, AppArmor goes bananas. With AlmaLinux + SELinux there are no problems. I use both Debian and AlmaLinux on my servers, and with that combo I feel I get the best of both worlds. But I think AlmaLinux is more polished and that SELinux is superior to AppArmor.
mixmastamyk 5 hours ago [-]
Anyone happen to have a good primer on these? Have been around a decade or two but I know almost nothing about them.
jychang 14 hours ago [-]
The biggest problem with Debian 13 is not with Debian, it's with people like Google and Cloudflare.
Come on guys, Debian 13 has been in testing for months, and you can't be arsed to update your apt repos from bookworm to trixie by release, or even weeks after release? That's embarrassing.
~ sudo apt update --audit
[...]
Hit:8 https://packages.cloud.google.com/apt google-compute-engine-bookworm-stable InRelease
Hit:10 https://packages.cloud.google.com/apt cloud-sdk-bookworm InRelease
Hit:11 https://pkg.cloudflareclient.com bookworm InRelease
Hit:12 https://pkg.cloudflare.com/cloudflared bookworm InRelease
[...]
Fetched 407 kB in 2s (222 kB/s)
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
Warning: https://pkg.cloudflare.com/cloudflared/dists/any/InRelease: Policy will reject signature within a year, see --audit for details
Audit: https://pkg.cloudflare.com/cloudflared/dists/any/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on FBA8C0EE63617C5EED695C43254B391D8CACCBF8 is not bound:
No binding signature at time 2025-08-21T15:58:52Z
because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
These apt repos are still bookworm-only after the trixie release, and it's been weeks. And Cloudflare is still stuck on SHA1.
tremon 4 hours ago [-]
You are explicitly requesting the bookworm versions; I'm not sure what you're expecting?
At least google's got you covered, if you simply ask nicely:
$ curl -s https://packages.cloud.google.com/apt/dists/cloud-sdk-trixie/InRelease | grep -E Suite:\|Date:
Suite: cloud-sdk-trixie
Date: Wed, 27 Aug 2025 18:45:14 UTC
$ curl -s https://packages.cloud.google.com/apt/dists/google-compute-engine-trixie-stable/InRelease | grep -E Suite:\|Date:
Suite: google-compute-engine-trixie-stable
Date: Mon, 25 Aug 2025 21:24:03 UTC
iotku 14 hours ago [-]
The Nvidia CUDA repos are still on Debian 12 as well which was a blocker for me. (Some claim it works fine anyways, but not in my experience.)
It's not like the Debian release schedule is a secret, I suspect there's just less corporate pressure to prioritize Debian.
brnt 13 hours ago [-]
The Nvidia bookworm repo worked fine on all my machines. What did not work for you? I deduced there wasn't really anything Debian-12-specific in there (it's still a Linux kernel with systemd).
_JamesA_ 7 hours ago [-]
Trixie has been great except for the proprietary Nvidia driver. The upgraded 550 driver has known problems with 4K @ 120Hz that cause crazy flickering [1].
I tried the 580 bundle with the same problem. I had to revert to the 535 bundle.
> Truly adventurous users may take their chances with the unstable ("sid") release.
It's been years since I've run Linux as a daily driver, but when I did it was Sid, and it didn't feel particularly adventurous. Over a 10-15 year timespan, I think there were 2 breakages, one of them being the difficult KDE 3.x transition.
I've long meant to try Fedora, but apt/dpkg is in my muscle memory, and I never got the hang of dnf/rpm.
sorrythanks 7 hours ago [-]
It's so cool that there's now a major Linux distro that has RISC-V as a target in its stable release.
smjburton 7 hours ago [-]
I've been running trixie for a few weeks now and it's been solid, great release so far.
One of the features I'm most excited about is access to Podman 5.4.2, and the ability to use Podman Quadlets. It'll be nice to start transitioning my systemd service units over to the new format for my containers.
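For anyone curious, a minimal Quadlet unit looks something like this (file name and image are hypothetical; key names per podman-systemd.unit(5)):

```
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example container managed as a systemd service

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates a `web.service` you can start and stop like any other unit.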
DyslexicAtheist 14 hours ago [-]
> Truly adventurous users may take their chances with the unstable ("sid") release.
Been running "unstable" since 2007 as my daily driver, workhorse, dev machine... Not once have I faced a "problem" I couldn't recover from. Not once a restore from backup of the main OS due to something the upgrade or OS had caused, no booting from a rescue image. For something that comes without warranty and has "unstable" in its name, it's pretty solid.
Apples and oranges, of course, but it also holds up well compared to Windows (which, to be fair, has gotten more stable since Win98), or even compared to macOS, which still crashes at times, even after macOS 9.x (which was when macOS became usable in the sense of "stability").
12345ieee 13 hours ago [-]
Yeah, the distro for "truly adventurous users" has never broken in a decade of my own use and is essentially as bleeding-edge as Arch.
It's just old ideas that keep getting repeated even once they stop being true.
hnarn 13 hours ago [-]
Just to play devil's advocate here (and pedantically point out that Debian Sid is not a "distro"): I don't think it's correct to say that Debian unstable is "actually stable", because it's "unstable" from the perspective of Debian, not from a subjective, individual experience.
Debian release cycles have a strong focus on stability, and in situations where that matters, like running a production server, it is a pretty important feature. Just because your desktop never broke doesn't mean it's not "unstable"; it's more of a disclaimer that if you put serious things on top of it and it breaks, that's much more on you, because you chose to go against maintainer advice.
For me personally, with the exception of the Enterprise Linux family (Alma, Rocky, etc.), there's no Linux distribution I'd rather run on a workhorse, production, long-term-deployment server than Debian.
Dunedan 12 hours ago [-]
> been running "unstable" since 2007 as my daily driver, work-horse, dev-machine, ... Not once faced a "problem" I couldn't recover from.
To be fair, sid has had various bugs leading to unbootable systems since then. While it's possible to recover from such situations without re-installation or data loss, I believe that makes the term "unstable" quite fitting.
eMPee584 11 hours ago [-]
Well, with a lot of packages, including from third-party repos, and only seldom doing upgrades, one can get pretty stuck in resolver hell... Of course, no one to blame for the FrankenDebian approach but myself xD
Nursie 13 hours ago [-]
Never been a Sid user (except occasionally for specific packages), but I do find articles like these amusing: for me, the transition from testing to stable is usually where I say goodbye to a Debian release. So farewell, trixie! On to forky I go.
I’ve had a few instances of X not starting, over the years. Nothing terrible, and that’s as much down to me using nvidia cards as anything.
yepguy 7 hours ago [-]
Lots of Debian Developers run unstable, and stable gets the most QA, but I'd be careful about running testing until it gets closer to the next freeze. When I used to daily drive testing, there was a period when it was completely broken. Stable and unstable were fine, but testing was borked.
hsbauauvhabzb 11 hours ago [-]
Isn't it also unstable in the sense that packages may be removed or updated in ways that could break your workflows?
There's a small number of packages unavailable in Debian 13 that exist in 12. I assume at some point all of them existed in pre-stable trixie.
bandrami 10 hours ago [-]
(AFAIK that's the only thing "unstable" means: no guarantee any given package will stay there)
yepguy 7 hours ago [-]
Unstable means that updates could change things about your system that you rely on. This could be a package getting removed, but it could also be a package upgrade that necessitates a change to your workflow or to code running on the system.
trabant00 15 hours ago [-]
I was hoping for a review from a server perspective. That's where Debian shines, in my opinion. I feel like the desktop part is a secondary priority for them. That's not a criticism; there's no other distribution I would use in production if it were my choice. On the desktop, though, they are a bit too stable. Even if one uses testing or unstable, the focus on long-term versions is still there.
jillesvangurp 4 hours ago [-]
Long-term usually equates to a bit stale/out of date with distributions that only release every few years. Appropriate for stuff you don't really care about.
That's why I use rolling-release distributions on my desktop. For Debian, people usually recommend Debian testing, and that's fine. Maybe they should just call it Debian Rolling and rename stable to Debian LTS. I think that's more appropriate to how people actually use these things.
Manjaro is not without issues but I've had it on one of my laptops for the last four years and it's nice to have the latest driver updates, kernels, etc. working together. It also helps that the community is just focused on current versions of stuff and fixing minor integrations with released packages rather than working around issues in some long forgotten release with distribution specific patches, etc. You find relatively little of that in Arch (which underlies Manjaro).
For production servers, the server just needs to boot my Docker containers and get out of the way. IMHO there's no need to support >10K packages for god knows what there. Most of that stuff probably has no business being installed on a server. I'm actually leaning towards immutable distributions and servers for that reason. The business of manually fiddling with servers in a production environment is something I'm trying to avoid/do less of. They shouldn't need a package manager if they are properly immutable.
goku12 14 hours ago [-]
> On the desktop though they are a bit too stable.
You're obviously correct here. But perhaps there are users who prefer stable packages on the desktop too. Corporate users, most likely (yes, such users exist). It helps with their security strategy and gives a development environment similar to their servers.
To be very honest, I think the stable, security-oriented approach is better than that of a rapid-update distro. You should probably use an overlay package manager like Flatpak, mise (for dev tools), or even Nix/Guix for anything modern. Preferably something with minimal installs and good sandboxing features. Please let us know if anybody has better suggestions to offer.
scbrg 14 hours ago [-]
I'm such a user. I've been mostly running Debian stable since the '90s, at work and privately. I cheated when I got a new computer at the beginning of August this year and installed trixie a couple of weeks before release.
My reasoning is quite simple: I really don't need the latest versions of everything. Were computers useful two years ago? Yeah? OK then, then a computer is obviously useful today with software that is two years old. I'll get the new software eventually, with most of the kinks ironed out. And I've had time to read up on the changes before they just hit me in the face.
Sure, it was a bit painful with hardware support some twenty years ago or so, but I can barely remember the last time that was an issue.
For the very few select pieces of software where stable doesn't quite cut it there's backports, fasttrack and other side channels.
eadmund 10 hours ago [-]
I prefer stable packages on my desktop and laptop, both for professional and personal use. I hate the current JavaScript/Python/Rust bleeding-edge, left-pad, if-you-haven't-updated-to-yesterday's-latest-version-which-breaks-compatibility-with-everything culture.
I like to build things which last. I like to craft a software system and then use it for decades, moving it from machine to machine and intentionally upgrading the components at my pace.
skydhash 5 hours ago [-]
Same opinion. I tried Fedora and I really liked it, but the constant cache updating frustrated me quickly. I just want something that works, that I can update without doing more than running the command.
extraisland 11 hours ago [-]
I use Debian Stable on my laptop and workstation. Most packages you don't need newer versions. I don't need the latest version of Gnome or Gedit or whatever.
I don't understand why people like the rigmarole of constantly updating their systems. The only things that come down the wire are security updates.
Installing newer software can be managed. I use the following strategy:
- For Discord / Slack / <something that needs to be the newest>. I can normally use Flatpak.
- Use a third party repo. For Brave, Node and some other things. I use their repository.
- Open source stuff. For smaller stuff that is easy to compile from source e.g. vim / neo-vim I just compile from source so I have the newest versions.
- Python Apps / NPM tooling. I install them in my local user directory.
- Docker is installed in rootless mode.
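A couple of those items sketched as commands (tool and package names are examples; the rootless-Docker helper ships with the docker-ce-rootless-extras package):

```shell
# Python CLI tools isolated in the user directory
pipx install httpie

# Keep npm's global installs out of the system prefix
npm config set prefix ~/.local   # then `npm install -g` stays in $HOME

# Rootless Docker setup for the current user
dockerd-rootless-setuptool.sh install
```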
hnarn 13 hours ago [-]
> On the desktop though they are a bit too stable.
>> You're obviously correct here.
It's neither obvious nor correct; the expected balance of "stability vs. features" is completely subjective. I run Debian stable on my desktop because I've almost never needed newer versions of anything, and when I did, I could usually jump to testing (i.e. the upcoming release) rather than unstable, and even then the next release usually wasn't that far away, so it was still very stable.
As other commenters have pointed out, you can run Debian Sid (unstable), but I'll also agree that if that is what you want long-term then maybe running something like Arch makes more sense anyway.
pmontra 14 hours ago [-]
I'm one of those users, but only because I don't need to be on the bleeding edge.
The only problem I had on Debian 11 desktop was related to the new OpenSSL libraries. I could not install the latest Node and Ruby versions because 11 had older libraries. However, there are workarounds involving some environment variables (from memory: some legacy_providers_*), so after a little googling I made them work on my dev machine (and on an old server of a customer of mine). I'm installing Debian 13 these days, so no more workarounds for a few years.
Everything else worked fine. I don't install much on this machine: no Flatpaks, no AppImages, no snaps (I left Ubuntu because of them). Only debs and Docker images. I install languages through their version managers, never through the OS: the OS could give me only one version of each, which is useless. Same with databases; hardly any two projects use the same language and DB versions. I could be using the LibreOffice and GIMP from 20 years ago: they already had all the features I need.
skydhash 5 hours ago [-]
I use incus for my dev needs. But for work computers, I’ve mostly needed one version of everything.
scorpioxy 14 hours ago [-]
In my experience, corporate users have moved on to using containers(or VMs) for their development environments.
It's a tricky thing to solve. On the one hand, you don't want your system to stop working due to an update, but you also want to keep the software you use updated, both in terms of security and functionality.
Mark Shuttleworth talked about this many years ago, before snaps were introduced as a solution to it. The idea at the time was that a rolling-release distro is too much of a hassle to maintain and even the six-month cycle was getting to be too much. So he talked about having a stable core with a long release cycle and rolling releases for software that needs to be frequently updated, both desktop and server software. The idea was great, but the details of the execution left a bitter taste for many users.
skydhash 5 hours ago [-]
Atomic distributions can be a nice solution for that. But the current portal ecosystem is a bit poor for integration between Flatpaks.
fh973 12 hours ago [-]
Indeed, though with the tmpfs move (/tmp in RAM) it sounds like they have more desktops in mind.
You don't want to use RAM for tmp files whose size you probably can't capacity-plan, and you don't want to enable swap on a server either.
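If the default doesn't suit a machine, it can be overridden; masking the mount unit is the approach I've seen documented (a sketch; drop-in contents are illustrative):

```shell
# Keep /tmp on disk instead of tmpfs (takes effect on next boot)
systemctl mask tmp.mount

# Or keep the tmpfs but cap its size via a drop-in:
systemctl edit tmp.mount   # add:  [Mount]
                           #       Options=mode=1777,nosuid,nodev,size=2G
```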
Dunedan 12 hours ago [-]
I honestly don't understand that change, as most desktops are RAM-limited as well, especially since Debian is regularly used on older machines that aren't supported by Windows 11 anymore.
Mashimo 10 hours ago [-]
Is it common for scripts to download multiple gigabytes to /tmp?
I sometimes manually changed /tmp to be in memory, or used /dev/shm, which is in memory by default. I haven't run into any problems just yet, but then again, it's just a home server.
Dunedan 4 hours ago [-]
Not sure about scripts, but I download and store everything I know I'll only need until the next reboot in /tmp, and naturally that tends to be quite a lot from time to time. That worked fine for decades, so I'm not sure what the benefit of storing the contents of /tmp in memory is instead.
h43z 13 hours ago [-]
Can someone explain why we are still using a umask of 022 in ubuntu and debian?
Would it really be so hard to make that switch to a more privacy focused umask?
JdeBP 12 hours ago [-]
Because in June 2005 the simple response to the Debian bug filed in September 2004 was to comment the global setting out of /etc/login.defs rather than change it to 0027. And after some back and forth there's now the explanation in /etc/login.defs that you can read today (q.v.).
# UMASK is the default umask value for pam_umask and is used by
# useradd and newusers to set the mode of the new home directories.
# 022 is the "historical" value in Debian for UMASK
# 027, or even 077, could be considered better for privacy
# There is no One True Answer here : each sysadmin must make up his/her
# mind.
0x0dd 10 hours ago [-]
That comment was in Bullseye. In Trixie's /etc/login.defs the comment is gone.
With Trixie, PAM's "User Private Groups" are by default enabled and default umask thus is 002 instead of 022.
(Personally, I'm irritated by the rather silent way this invasive change got introduced -- it is mentioned in /usr/share/doc/libpam-modules/NEWS.Debian.gz together with instructions to restore the old behavior.)
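Restoring the old behavior is roughly a one-line PAM change (a sketch; pam_umask option syntax per pam_umask(8), file layout as on Debian, and the exact steps are the ones in that NEWS file):

```
# /etc/pam.d/common-session (and common-session-noninteractive):
# force a fixed umask instead of the User Private Groups default
session optional pam_umask.so umask=022
```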
IshKebab 11 hours ago [-]
Ah the classic "There is no One True Answer so it's ok to default to a bad answer".
eurg 10 hours ago [-]
And also, some tools still break when using a non-default umask.
Yes, yes, we all run Postgres in containers, but if you don't, and you upgrade to a new Postgres major version, gladly using the Debian scripts that make it all more comfortable, while using umask 027, you will enjoy your day. Though I don't remember if those upgrade scripts were from Debian proper or from Postgres.
Since that experience I always wondered what other tools may have such bugs lurking around.
ggm 14 hours ago [-]
Does it have BBR3? Serious q. Have tried home-brew kernels for bookworm but want factory paint on the car.
hnarn 13 hours ago [-]
As far as I know, BBR3 is not in the mainline Linux kernel, so obviously it will not be in Debian by default.
> To make sure we're all on the same page: currently the TCP BBR code in Linux is BBRv1. We are working on getting BBRv3 upstream into Linux TCP.
> BBRv1 is definitely not ready to be the default on any Linux distribution. Whether BBRv3 is ready to be a distribution default is arguable.
ggm 13 hours ago [-]
Wish it were a loadable kernel module. The FreeBSD model is (in that regard) easier to work with, from the Netflix stuff.
I'm running somebody's rebase tracking things, 6.13 I believe. Worked on one box, not on another. Oh well. Doubly irritating is that the sysctl only flags bbr, not which version.
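For reference, checking and switching to whatever BBR the running kernel ships (mainline is still v1, as quoted above) is just a sysctl away:

```shell
# Algorithms the kernel currently offers (bbr appears once tcp_bbr is loaded)
sysctl net.ipv4.tcp_available_congestion_control
# Load the module if needed, then make BBR the default
sudo modprobe tcp_bbr
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
# As noted above, this reports only "bbr", not which BBR version is compiled in
sysctl net.ipv4.tcp_congestion_control
```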
TIL. What a superpower!
> deb-get makes it easy to install and update .debs published in 3rd party apt repositories or made available via direct download on websites or GitHub release pages.
https://github.com/wimpysworld/deb-get/
In particular, its list of software is a bit longer than extrepo's (e.g. it includes Zoom):
https://github.com/wimpysworld/deb-get/blob/main/01-main/REA...
This repo has a list of extrepo stuff - https://salsa.debian.org/extrepo-team/extrepo-data/-/tree/ma...
However, trying the specific example that was listed in the article, I installed extrepo and enabled the mozilla repo. Unfortunately, firefox is not installable on trixie in its current form because it depends on libasound2, and the trixie package is called libasound2t64. : (
the main reason I always fall back to Ubuntu is because everyone has a PPA for it. Sometimes the PPA also works on Debian, but it's 50/50 (from what I understand, PPAs are not an official thing under Debian?)
AppImages have alleviated this, but appimagelauncher is broken under Ubuntu and they're annoying to integrate manually.
Generally, these are repos maintained by the upstreams themselves, e.g. Docker, Tor, Armbian, Dovecot... my guess (and it's just a guess, I haven't used Ubuntu PPAs much lately) is that the PPAs are maintained by not-the-upstreams. Or perhaps some upstreams maintain both a PPA and their own hosted repo.
Was bummed to see Firefox at version 128, as I've been missing features from the more recent versions. I don't know how I'm going to address that yet, as I prefer not to add external apt sources if I can avoid it. This is a desktop system, so somewhat recent versions of software are desirable.
What do other people do for desktop systems? Go with testing/unstable or just another distro for desktops?
I believe that is because Debian ships Firefox "Extended Support Release" (ESR) as a security precaution, and the firefox-esr package[1] is quite out of date in absolute terms.
If you want the newest Firefox (not ESR), just add Mozilla's own repo instead: https://blog.mozilla.org/en/mozilla/4-reasons-to-try-mozilla...
[1]: https://packages.debian.org/trixie/firefox-esr
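Putting the earlier extrepo tip and this together, the whole switch from ESR is just a few commands (package names as per the article; double-check on your system):

```shell
sudo apt install extrepo
sudo extrepo enable mozilla
sudo apt update
sudo apt install firefox   # the regular release channel, not firefox-esr
```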
(mainly, it was the fact that the installer finally included firmware out of the box, which made installing on laptops much, much easier)
Because I want updated packages, the first thing I do is enable backports (otherwise I think trixie still comes with KiCad 5? ugh!) and do a full upgrade.
As for Firefox, Debian's repositories use Firefox ESR, which is why you are still on 128. There are instructions on Firefox's site for switching to the regular release channels; just do that. If you can't trust Firefox's own sources, I don't know how you can trust Debian's.
Debian + KDE is my favourite combo. I don't do anything different for desktop. When the Debian 13 freeze happened, I simply waited a couple of days, edited the sources to point at trixie, and did a full-upgrade and an autoremove to clean up old stuff. That's it.
I don't consider outdated packages to be a problem on any distro because I just use Nix (which doesn't interfere with other package managers) whenever I want a more recent package.
Heck, I use Fedora Server as my homelab OS to run Incus. Works For Me.
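The Nix side of this is roughly the following (flakes-style CLI; the package name is just an example):

```shell
# Install a newer one-off tool into your user profile, leaving apt untouched
nix profile install nixpkgs#ripgrep
rg --version
```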
In your case I guess it makes sense since you have to run Fedora at work, but I was under the impression that the support for Incus (i.e. official packaging etc) was better on Debian.
https://backports.debian.org/Instructions/
If I'm not mistaken, the repo is already included by default, so you just need:
`# apt install -t trixie-backports <package>`
This will install the backported package _and_ its dependencies, so you will be good to go :)
I’ve had more trouble and time wasted with snap Firefox than I’ve had with official Mozilla repos under both Debian and Ubuntu.
You can however get all stability of a released version with newer packages if you use stable+backports. This would give you a stable system, and allow you to upgrade selected packages to newer versions. This can be tedious, so running testing is also possible.
And well, overall, you can also install other distributions that are bleeding-edge (Arch-based?). That's what I like about the distro ecosystem :)
Protip: don't use Pacman directly; just use 'yay', which comes with EndeavourOS. Yay is an interface to Pacman, and while it may sound silly, it's totally worth its salt. I'm probably still on Endeavour because of yay.
In order to update your system just type 'yay' into a terminal and it does the work prompting you for confirmation.
If you want to install anything, it's as simple as 'yay packagename', and then it gives you options, including from the user repos (AUR), which are like Ubuntu's PPAs.
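So the whole workflow described above boils down to (assuming an Arch-based install with yay present):

```shell
yay               # sync and fully upgrade the system, repos and AUR alike
yay packagename   # search repos and the AUR, then pick what to install
```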
I spent probably 15 years on Debian / Ubuntu (though it mostly became Ubuntu, even for servers; I got too used to Ubuntu over the years). I installed Arch this past year because I wanted more up-to-date packages. I didn't want bleeding edge, but it hasn't been so much bleeding, so I'm okay with it. I update every few days, or when Discord decides to tell me to download the DEB package or it won't open.
I did not use the Firefox coming with 11, and I won't use the ESR version in 13. I downloaded the deb from Mozilla's site once, and it auto-updated itself up to the current version. No problem at all. I'll do the same on 13.
I trust Debian, and I trust the Debian Firefox team to secure Firefox, but I do not trust Mozilla.
You can tell apt to prefer a given source list only for a few packages.
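A sketch of what that looks like for the Mozilla repo case discussed above (file name and priority are illustrative; Mozilla's own instructions use a similar pin):

```shell
# Prefer Mozilla's repo for firefox* only; everything else stays on Debian
sudo tee /etc/apt/preferences.d/mozilla >/dev/null <<'EOF'
Package: firefox*
Pin: origin packages.mozilla.org
Pin-Priority: 1000
EOF
sudo apt update
```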
I solved all of the above by switching to the NVidia Cuda repo (well, I did not reenable Secure Boot, so not sure if that would work now).
Another useful thing from the article for me was `apt modernize-sources` to update the existing sources.list to the new structure. Now I need to check whether scripts like this run automatically on my parents' auto-updating desktop.
[0]: https://packages.debian.org/trixie/extrepo
What I lack with the "modern" `sources.list.d/` file schema is a command to perform common types of edits. Something like `extrepo` but generic and with knowledge of Debian repos/dists. It's a small thing but I want to be able to type commands like
Perhaps `extrepo` could be extended to include Debian proper, or this hypothetical `apt-sources` would be kept Debian-repo-only, or perhaps it would cover extrepo's scope.
LibreOffice 25.8 (which was just released very recently) is already packaged in backports for trixie, for instance. Did things like updated KDE desktops make it to backports for bookworm?
If only :(
https://packages.debian.org/bookworm-backports/kde/
Come on guys, Debian 13 has been in testing for months, and you can't be arsed to update your apt repos from bookworm to trixie by release, or even weeks after release? That's embarrassing.
These apt repos are still bookworm-only after the trixie release, and it's been weeks. And Cloudflare is still stuck on SHA-1. At least Google's got you covered, if you simply ask nicely:
It's not like the Debian release schedule is a secret, I suspect there's just less corporate pressure to prioritize Debian.
I tried the 580 bundle with the same problem. I had to revert to the 535 bundle.
[1]: https://forums.developer.nvidia.com/t/nvidia-555-58-4k-120hz...
> Truly adventurous users may take their chances with the unstable ("sid") release.
It's been years since I've run Linux as a daily driver, but when I did it was Sid, and it didn't feel particularly adventurous. Over a 10-15 years timespan, I think there were 2 breakages, one of them being the difficult KDE 3.x transition.
I've long meant to try Fedora, but apt/dpkg is in my muscle memory, and I never got the hang of dnf/rpm.
One of the features I'm most excited about is access to Podman 5.4.2, and the ability to use Podman Quadlets. It'll be nice to start transitioning my systemd service units over to the new format for my containers.
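For anyone curious what that migration looks like, a minimal Quadlet sketch (unit and image names are made up; see podman-systemd.unit(5) for the full syntax):

```shell
# Quadlet units live as plain files; Podman generates systemd services from them
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload      # Quadlet generates web.service from the file
systemctl --user start web.service
```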
I've been running "unstable" since 2007 as my daily driver, workhorse, dev machine... Not once have I faced a "problem" I couldn't recover from. Not once a restore of the main OS from backup due to something the upgrade or the OS had caused, no booting from a rescue image. For something that comes without warranty and has "unstable" in its name, it's pretty solid.
Apples and oranges of course, but it also holds up well compared to Windows (which, to be fair, has gotten more stable since Win98), or even compared to macOS, which also crashes at times, even after Mac OS 9.x (which was when it became usable in the sense of "stability").
It's just old ideas that get repeated even once they stop being true.
Debian release cycles have a strong focus on stability, and for those situations where it matters, like running a production server, that is a pretty important feature. Just because your desktop never broke doesn't mean it's not "unstable", it's more of a disclaimer that if you put serious things on top of it and it breaks, that's much more on you because you chose to go against maintainer advice.
For me personally, with exception of the Enterprise Linux family (Alma, Rocky etc.), there's no Linux distribution I'd rather run on a workhorse, production, long term deployment server than Debian.
To be fair, sid had various bugs leading to unbootable systems since then. While it's possible to recover in such situations without re-installation or data loss, I believe that makes the term "unstable" quite fitting.
I’ve had a few instances of X not starting, over the years. Nothing terrible, and that’s as much down to me using nvidia cards as anything.
There’s a small number of packages unavailable in Deb 13 that exist in 12. I assume at some point all of them existed in pre-stable trixie.
That's why I use rolling release distributions on my Desktop. For Debian, people recommend Debian testing usually. And that's fine. Maybe they should just call it Debian rolling releases and rename stable to Debian LTS. I think it's more appropriate to how people actually use these things.
Manjaro is not without issues but I've had it on one of my laptops for the last four years and it's nice to have the latest driver updates, kernels, etc. working together. It also helps that the community is just focused on current versions of stuff and fixing minor integrations with released packages rather than working around issues in some long forgotten release with distribution specific patches, etc. You find relatively little of that in Arch (which underlies Manjaro).
For production servers, the server just needs to boot my docker containers and get out of the way. IMHO there's no need to support >10K packages for god knows what there. Most of that stuff probably has no business being installed on a server. I'm actually leaning towards immutable distributions and servers for that reason. The business of manually fiddling with servers in a production environment is something I'm trying to avoid/do less of. They shouldn't need a package manager if they are properly immutable.
You're obviously correct here. But perhaps there are users who prefer stable packages on the desktop too. Corporate users most likely (yes, there are such users too). It helps with their security strategy and a development environment similar to their server.
To be very honest, I think the stable security-oriented approach is better than that of a rapid update distro. You should probably use an overlay package manager like flatpak, mise (for dev tools) or even Nix/Guix for anything modern. Preferably something with minimal installs and good sandboxing features. Please let us know if anybody has better suggestions to offer.
My reasoning is quite simple: I really don't need the latest versions of everything. Were computers useful two years ago? Yeah? OK then, then a computer is obviously useful today with software that is two years old. I'll get the new software eventually, with most of the kinks ironed out. And I've had time to read up on the changes before they just hit me in the face.
Sure, it was a bit painful with hardware support some twenty years ago or so, but I can barely remember the last time that was an issue.
For the very few select pieces of software where stable doesn't quite cut it there's backports, fasttrack and other side channels.
I like to build things which last. I like to craft a software system and then use it for decades, moving it from machine to machine and intentionally upgrading the components at my pace.
I don't understand why people like the rigmarole of constantly updating their systems. The only things that come down the wire are security updates.
Installing newer software can be managed. I use the following strategy:
- For Discord / Slack / <something that needs to be the newest>. I can normally use Flatpak.
- Use a third party repo. For Brave, Node and some other things. I use their repository.
- Open source stuff. For smaller stuff that is easy to compile from source e.g. vim / neo-vim I just compile from source so I have the newest versions.
- Python Apps / NPM tooling. I install them in my local user directory.
- Docker is installed in rootless mode.
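For the first bullet, the Flatpak route is typically (Flathub remote and app ID as published on flathub.org):

```shell
# Add the Flathub remote once, then install/run apps from it
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install -y flathub com.discordapp.Discord
flatpak run com.discordapp.Discord
```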
>> You're obviously correct here.
It's neither obvious nor correct; the expected "stability vs. features" balance is completely subjective. I run Debian Stable on my desktop because I've almost never needed newer versions of anything, and when I did I could usually jump to testing (i.e. the upcoming release) rather than unstable, and even then the next release usually wasn't that far away, so it was still very stable.
As other commenters have pointed out, you can run Debian Sid (unstable), but I'll also agree that if that is what you want long-term then maybe running something like Arch makes more sense anyway.
The only problem I had on the Debian 11 desktop was related to the OpenSSL libraries: I could not install the latest Nodes and Rubies because 11 shipped older libraries. However, there are workarounds based on providing some environment variables (from memory: some legacy_providers_*), so after a little googling I made them work on my dev machine (and on an old server of a customer of mine). I'm installing Debian 13 these days, so no more workarounds for a few years.
Everything else worked fine. I don't install much on this machine: no flatpaks, no AppImages, no snaps (I left Ubuntu because of them). Only debs and docker images. I install languages through their version managers, never through the OS: the OS would give me only one version of each, which is useless. Same for databases; there are hardly two projects on the same language and db version. I could be using the LibreOffice and GIMP from 20 years ago: they already had all the features I need.
It's a tricky thing to solve. On the one hand, you don't want your system to stop working due to an update, but you also want to keep the software you use updated, both in terms of security and functionality.
Mark Shuttleworth talked about this many years ago before snaps were introduced as a solution to this. The idea at the time was that a rolling release distro is too much of a hassle to maintain and even the 6-month cycle was getting to be too much. So he talked about having a stable core with a long release cycle and rolling releases for software that need to be frequently updated, both desktop and server software. The idea was great but the details of the execution left a bitter taste for many users.
You don't want to use RAM for tmp files for which you probably can't do capacity planning, and you don't want to enable swap on a server either.
(The umask bug referenced above: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=269583 )
(Source of the BBR quote above: https://groups.google.com/g/bbr-dev/c/i-sZpfwPx-I/m/0jmNry0A... )