Ignoring SMS, which is vulnerable to SIM-swapping attacks, TOTP (Time-based One-Time Password) is probably the most popular second-factor authentication method at the moment. While reviewing a pull request adding support for TOTP, I decided to investigate the current state of authenticators in 2025 with regard to their support for the various security parameters.
A previous analysis from 2019 found that many popular authenticators were happy to accept parameters they didn't actually support and then generate the wrong codes. At the time, a service wanting to offer TOTP to its users had to stick to the default security parameters or face major interoperability issues with common authenticator clients. Has the landscape changed or are we still stuck with security decisions made 15 years ago?
As an aside: yes, everybody is linking to a wiki page for an archived Google repo because there is no formal spec for the URI format.
Test results
I tested a number of Android authenticators against the oathtool client:
/usr/bin/oathtool --totp=SHA1 --base32 JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI
/usr/bin/oathtool --totp=SHA256 --base32 JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI
1Password:
- SHA1 (52 chars): yes
- SHA256: not available
Authy (Twilio):
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: no (treats it as SHA1)
- Note: they also pick random logos to attach to your brand.
Bitwarden Authenticator:
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: yes
Duo Security:
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: no (treats it as SHA1)
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: yes
Google Authenticator:
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: yes
LastPass Authenticator:
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: yes
Microsoft Authenticator:
- SHA1 (32 chars): yes
- SHA1 (52 chars): yes
- SHA256: no (treats it as SHA1)
I also tested the infamous Google Authenticator on iOS:
otpauth://totp/francois+1%40brave.com?secret=JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJ&issuer=Brave%20Account&algorithm=SHA1&image=https://account.brave.com/images/email/brave-41x40.png
otpauth://totp/francois+1%40brave.com?secret=JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI&issuer=Brave%20Account&algorithm=SHA1&image=https://account.brave.com/images/email/brave-41x40.png
- SHA1 (32 chars): yes
- SHA1 (52 chars): no (rejects it)
- SHA256 (32 chars): yes
Recommendations to site owners
So unfortunately, the 2019 recommendations still stand:
- Algorithm: SHA1
- Key size: 32 characters (equivalent to 20 bytes / 160 bits)
- Period: 30 seconds
- Digits: 6
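If you are implementing TOTP on the server side, you can sanity-check your code generation against oathtool using these exact parameters (the extra flags simply spell out the defaults recommended above):
/usr/bin/oathtool --totp=SHA1 --digits=6 --time-step-size=30s --base32 JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJ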
You should also refrain from putting the secret parameter last in the URI to avoid breaking some versions of Google Authenticator, which parse these URIs incorrectly.
Other security and user experience considerations:
- Keep track of codes that have already been used for as long as they are valid, since these codes are meant to be one-time credentials.
- Avoid storing the TOTP secret directly in plaintext inside the main app database and instead store it in some kind of secrets manager. Note: it cannot be hashed because the application needs the secret to generate the expected codes.
- Provide a recovery mechanism since users will lose their authenticators. This is often done through the use of one-time "scratch codes".
- Consider including in generated URIs two parameters introduced by the best Android client: image and color. Most clients will ignore them, but they also don't hurt.
Despite comments on my ikiwiki blog being fully moderated, spammers have been increasingly submitting link spam comments.
It turns out that there is a relatively simple way to drastically reduce the amount of spam submitted to the moderation queue: ban the datacentre IP addresses that spammers are using.
Looking up AS numbers
It all starts by looking at the IP address of a submitted comment, in this case 2a0b:7140:1:1:5054:ff:fe66:85c5. From there, we can look it up using whois:
$ whois -r 2a0b:7140:1:1:5054:ff:fe66:85c5
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See https://docs.db.ripe.net/terms-conditions.html
% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '2a0b:7140:1::/48'
% Abuse contact for '2a0b:7140:1::/48' is 'abuse@servinga.com'
inet6num: 2a0b:7140:1::/48
netname: EE-SERVINGA-2022083002
descr: servinga.com - Estonia
geoloc: 59.4424455 24.7442221
country: EE
org: ORG-SG262-RIPE
mnt-domains: HANNASKE-MNT
admin-c: CL8090-RIPE
tech-c: CL8090-RIPE
status: ASSIGNED
mnt-by: MNT-SERVINGA
created: 2020-02-18T11:12:49Z
last-modified: 2024-12-04T12:07:26Z
source: RIPE
% Information related to '2a0b:7140:1::/48AS207408'
route6: 2a0b:7140:1::/48
descr: servinga.com - Estonia
origin: AS207408
mnt-by: MNT-SERVINGA
created: 2020-02-18T11:18:11Z
last-modified: 2024-12-11T23:09:19Z
source: RIPE
% This query was served by the RIPE Database Query Service version 1.114 (SHETLAND)
The important bit here is this line:
origin: AS207408
which refers to Autonomous System 207408, owned by a hosting company in Germany called Servinga.
Alternatively, you can use this WHOIS server with much better output:
$ whois -h whois.cymru.com -v 2a0b:7140:1:1:5054:ff:fe66:85c5
AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name
207408 | 2a0b:7140:1:1:5054:ff:fe66:85c5 | 2a0b:7140:1::/48 | DE | ripencc | 2017-07-11 | SERVINGA-EE, DE
Looking up IP blocks
Autonomous Systems are essentially organizations to which IPv4 and IPv6 blocks have been allocated.
These allocations can be looked up easily on the command line either using a third-party service:
$ curl -sL https://ip.guide/as207408 | jq .routes.v4 >> servinga
$ curl -sL https://ip.guide/as207408 | jq .routes.v6 >> servinga
or a local database downloaded from IPtoASN.
This is what I ended up with in the case of Servinga:
[
"45.11.183.0/24",
"80.77.25.0/24",
"194.76.227.0/24"
]
[
"2a0b:7140:1::/48"
]
Preventing comment submission
While I do want to eliminate this source of spam, I don't want to block these datacentre IP addresses outright since legitimate users could be using these servers as VPN endpoints or crawlers.
I therefore added the following to my Apache config to restrict the CGI endpoint (used only for write operations such as commenting):
<Location /blog.cgi>
    Include /etc/apache2/spammers.include
    Options +ExecCGI
    AddHandler cgi-script .cgi
</Location>
and then put the following in /etc/apache2/spammers.include:
<RequireAll>
    Require all granted
    # https://ipinfo.io/AS207408
    Require not ip 45.11.183.0/24
    Require not ip 80.77.25.0/24
    Require not ip 194.76.227.0/24
    Require not ip 2a0b:7140:1::/48
</RequireAll>
Finally, I can restart the web server and commit my changes:
$ apache2ctl configtest && systemctl restart apache2.service
$ git commit -a -m "Ban all IP blocks from Servinga"
Future improvements
I will likely automate this process in the future, but at the moment my blog can go for a week without a single spam message (down from dozens every day). It's possible that I've already cut off the worst offenders.
I have published the list I am currently using.
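As a rough sketch of what that automation could look like (untested, with the AS number hard-coded and the output path matching the Apache config above):
#!/bin/sh
# Regenerate the Apache block list for a given autonomous system.
set -e
AS=207408
{
    echo "<RequireAll>"
    echo "    Require all granted"
    echo "    # https://ipinfo.io/AS${AS}"
    curl -sL "https://ip.guide/as${AS}" \
        | jq -r '(.routes.v4 + .routes.v6)[] | "    Require not ip \(.)"'
    echo "</RequireAll>"
} > /etc/apache2/spammers.include
apache2ctl configtest && systemctl restart apache2.service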
While most podcasts are available on multiple platforms and either offer an RSS feed or have one that can be discovered, some are only available in the form of a YouTube channel. Thankfully, it's possible to both monitor them for new episodes (i.e. new videos), and time-shift the audio for later offline listening.
Subscribing to a channel via RSS is possible thanks to the built-in, but not easily discoverable, RSS feeds. See these instructions for how to do it. As an example, the RSS feed for the official Government of BC channel is https://www.youtube.com/feeds/videos.xml?channel_id=UC6n9tFQOVepHP3TIeYXnhSA.
When it comes to downloading the audio, the most reliable tool I have found is yt-dlp. Since the exact arguments needed to download just the audio as an MP3 are a bit of a mouthful (see the sketch after the list below), I wrote a wrapper script which also does a few extra things:
- cleans up the filename so that it can be stored on any filesystem
- adds ID3 tags so that MP3 players can have the metadata they need to display and group related podcast episodes together
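For reference, the heart of that wrapper is an invocation along these lines (a sketch of the relevant yt-dlp flags, not the actual script; $VIDEO_URL is a placeholder):
# Download just the audio as an MP3, with a filesystem-safe name and embedded metadata
yt-dlp --extract-audio --audio-format mp3 --restrict-filenames --embed-metadata "$VIDEO_URL"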
If you find that script handy, you may also want to check out the script I have in the same GitHub repo to turn arbitrary video files into a podcast.
A GitHub gist is backed by a regular git repository, but it's not exposed explicitly via the user interface.
For example, this "secret" gist can be cloned using this command:
git clone https://gist.github.com/fmarier/b652bad2e759675e8650f3d3ee81ab08.git test
Within this test directory, the normal git commands can be used:
touch empty
git add empty
git commit -a -m "Nothing to see here"
A gist can contain multiple files just like normal repositories.
In order to push to this repo, add the following pushurl:
git remote set-url --push origin git@gist.github.com:b652bad2e759675e8650f3d3ee81ab08.git
before pushing using the regular command:
git push
Note that the GitHub history UI will not show you the normal commit details such as commit message and signatures.
If you want to access the latest version of a file contained within this gist, simply access https://gist.githubusercontent.com/fmarier/b652bad2e759675e8650f3d3ee81ab08/raw/readme.md.
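This also works from the command line or a script:
curl -sL https://gist.githubusercontent.com/fmarier/b652bad2e759675e8650f3d3ee81ab08/raw/readme.md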
Using NetworkManager and systemd-resolved together in Debian bookworm does not work out of the box. The first sign of trouble was these constant messages in my logs:
avahi-daemon[pid]: Host name conflict, retrying with hostname-2
Then I realized that CUPS printer discovery didn't work: my network printer could not be found. Since this discovery now relies on Multicast DNS, it would make sense that both problems are related to an incompatibility between NetworkManager and Avahi.
What didn't work
The first attempt I made at fixing this was to look for known bugs in Avahi. Neither of the work-arounds I found worked:
the one proposed in https://github.com/avahi/avahi/issues/117#issuecomment-1651475104:
[publish]
publish-aaaa-on-ipv4=no
publish-a-on-ipv6=no
nor the one proposed in https://github.com/avahi/avahi/issues/117#issuecomment-442201162:
[server]
cache-entries-max=0
What worked
The real problem turned out to be the fact that NetworkManager turns on full mDNS support in systemd-resolved, which conflicts with the mDNS support in avahi-daemon.
You can see this in the output of resolvectl status:
Global
Protocols: -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (enp6s0)
Current Scopes: DNS mDNS/IPv4 mDNS/IPv6
Protocols: +DefaultRoute -LLMNR +mDNS -DNSOverTLS
DNSSEC=no/unsupported
Current DNS Server: 192.168.1.1
DNS Servers: 192.168.1.1
DNS Domain: lan
which includes +mDNS for the main network adapter.
I initially thought that I could just uninstall avahi-daemon and rely on the systemd-resolved mDNS stack, but it's not actually compatible with CUPS.
The solution was to tell NetworkManager to set mDNS to resolve-only mode in systemd-resolved by adding the following to /etc/NetworkManager/conf.d/mdns.conf:
[connection]
connection.mdns=1
leaving /etc/avahi/avahi-daemon.conf to the default Debian configuration.
Verifying the configuration
After rebooting, resolvectl status now shows the following:
Global
Protocols: -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (enp6s0)
Current Scopes: DNS mDNS/IPv4 mDNS/IPv6
Protocols: +DefaultRoute -LLMNR mDNS=resolve -DNSOverTLS
DNSSEC=no/unsupported
Current DNS Server: 192.168.1.1
DNS Servers: 192.168.1.1
DNS Domain: lan
Avahi finally sees my printer (called hp in the output below):
$ avahi-browse -at | grep Printer
+ enp6s0 IPv6 hp @ myprintserver Secure Internet Printer local
+ enp6s0 IPv4 hp @ myprintserver Secure Internet Printer local
+ enp6s0 IPv6 hp @ myprintserver Internet Printer local
+ enp6s0 IPv4 hp @ myprintserver Internet Printer local
+ enp6s0 IPv6 hp @ myprintserver UNIX Printer local
+ enp6s0 IPv4 hp @ myprintserver UNIX Printer local
and so does CUPS:
$ sudo lpinfo --include-schemes dnssd -v
network dnssd://myprintserver%20%40%20hp._ipp._tcp.local/cups?uuid=d46942a2-b730-11ee-b05c-a75251a34287
Firewall rules
Since printer discovery in CUPS relies on mDNS, another thing to double-check is that the correct ports are open on the firewall.
This is what I have in /etc/network/iptables.up.rules:
# Allow mDNS for local service discovery
-A INPUT -d 100.64.0.0/10 -p udp --dport 5353 -j ACCEPT
-A INPUT -d 192.168.1.0/24 -p udp --dport 5353 -j ACCEPT
and in /etc/network/ip6tables.up.rules:
# Allow mDNS for local service discovery
-A INPUT -d ff02::/16 -p udp --dport 5353 -j ACCEPT
I know that people rave about Gmail's spam filtering, but it didn't work for me: I was seeing too many false positives. I personally prefer to see some false negatives (i.e. letting some spam through), but to reduce false positives as much as possible (and ideally have a way to tune this).
Here's the local SpamAssassin setup I have put together over many years. In addition to the parts I describe here, I also turn off greylisting on my email provider (KolabNow) because I don't want to have to wait for up to 10 minutes for a "2FA" email to go through.
This setup assumes that you download all of your emails to your local machine. I use fetchmail for this, though similar tools should work too.
Three tiers of emails
The main reason my setup works for me, despite my receiving hundreds of spam messages every day, is that I split incoming emails into three tiers via procmail:
- not spam: delivered to inbox
- likely spam: quarantined in a soft_spam/ folder
- definitely spam: silently deleted
I only ever have to review the likely spam tier for false positives, which is on the order of 10-30 spam emails a day. I never even see the hundreds that are silently deleted due to a very high score.
This is implemented based on a threshold in my .procmailrc:
# Use spamassassin to check for spam
:0fw: .spamassassin.lock
| /usr/bin/spamassassin

# Throw away messages with a score of > 12.0
:0
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*
/dev/null

:0:
* ^X-Spam-Status: Yes
$HOME/Mail/soft_spam/

# Deliver all other messages
:0:
${DEFAULT}
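When tuning these thresholds, it helps to see exactly what score a given message gets. SpamAssassin's test mode can be used for that (standard usage; message.eml is a placeholder for a saved message):
spamassassin -t < message.eml | grep X-Spam-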
I also use the following ~/.muttrc configuration to easily report false negatives/positives and examine my likely spam folder via a shortcut in mutt:
unignore X-Spam-Level
unignore X-Spam-Status
macro index S "c=soft_spam/\n" "Switch to soft_spam"
# Tell mutt about SpamAssassin headers so that I can sort by spam score
spam "X-Spam-Status: (Yes|No), (hits|score)=(-?[0-9]+\.[0-9])" "%3"
folder-hook =soft_spam 'push ol'
folder-hook =spam 'push ou'
# <Esc>d = de-register as non-spam, register as spam, move to spam folder.
macro index \ed "<enter-command>unset wait_key\n<pipe-entry>spamassassin -r\n<enter-command>set wait_key\n<save-message>=spam\n" "report the message as spam"
# <Esc>u = unregister as spam, register as non-spam, move to inbox folder.
macro index \eu "<enter-command>unset wait_key\n<pipe-entry>spamassassin -k\n<enter-command>set wait_key\n<save-message>=inbox\n" "correct the false positive (this is not spam)"
Custom SpamAssassin rules
In addition to the default ruleset that comes with SpamAssassin, I've also accrued a number of custom rules over the years.
The first set comes from the (now defunct) SpamAssassin Rules Emporium. The second set is the one that backs bugs.debian.org and lists.debian.org. Note that this second one includes archived copies of some of the SARE rules, so I only use some of the rules in the common/ directory.
Finally, I wrote a few custom rules of my own based on specific kinds of emails I have seen slip through the cracks. I haven't written any of those in a long time and I suspect some of my rules are now obsolete. You may want to do your own testing before you copy these outright.
In addition to rules to match more spam, I've also written a ruleset to remove false positives in French emails coming from many of the above custom rules. I also wrote a rule to get a bonus to any email that comes with a patch:
describe FM_PATCH Includes a patch
body FM_PATCH /\bdiff -pruN\b/
score FM_PATCH -1.0
since it's not very common in spam emails.
SpamAssassin settings
When it comes to my system-wide SpamAssassin configuration in /etc/spamassassin/, I enable the following plugins:
loadplugin Mail::SpamAssassin::Plugin::AntiVirus
loadplugin Mail::SpamAssassin::Plugin::AskDNS
loadplugin Mail::SpamAssassin::Plugin::ASN
loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold
loadplugin Mail::SpamAssassin::Plugin::Bayes
loadplugin Mail::SpamAssassin::Plugin::BodyEval
loadplugin Mail::SpamAssassin::Plugin::Check
loadplugin Mail::SpamAssassin::Plugin::DKIM
loadplugin Mail::SpamAssassin::Plugin::DNSEval
loadplugin Mail::SpamAssassin::Plugin::FreeMail
loadplugin Mail::SpamAssassin::Plugin::FromNameSpoof
loadplugin Mail::SpamAssassin::Plugin::HashBL
loadplugin Mail::SpamAssassin::Plugin::HeaderEval
loadplugin Mail::SpamAssassin::Plugin::HTMLEval
loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch
loadplugin Mail::SpamAssassin::Plugin::ImageInfo
loadplugin Mail::SpamAssassin::Plugin::MIMEEval
loadplugin Mail::SpamAssassin::Plugin::MIMEHeader
loadplugin Mail::SpamAssassin::Plugin::OLEVBMacro
loadplugin Mail::SpamAssassin::Plugin::PDFInfo
loadplugin Mail::SpamAssassin::Plugin::Phishing
loadplugin Mail::SpamAssassin::Plugin::Pyzor
loadplugin Mail::SpamAssassin::Plugin::Razor2
loadplugin Mail::SpamAssassin::Plugin::RelayEval
loadplugin Mail::SpamAssassin::Plugin::ReplaceTags
loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
loadplugin Mail::SpamAssassin::Plugin::SpamCop
loadplugin Mail::SpamAssassin::Plugin::TextCat
loadplugin Mail::SpamAssassin::Plugin::TxRep
loadplugin Mail::SpamAssassin::Plugin::URIDetail
loadplugin Mail::SpamAssassin::Plugin::URIEval
loadplugin Mail::SpamAssassin::Plugin::VBounce
loadplugin Mail::SpamAssassin::Plugin::WelcomeListSubject
loadplugin Mail::SpamAssassin::Plugin::WLBLEval
Some of these require extra helper packages or Perl libraries to be installed. See the comments in the relevant *.pre files or use this command to install everything:
apt install spamassassin pyzor razor libencode-detect-perl liblog-log4perl-perl libgeoip-dev libmail-dkim-perl libarchive-zip-perl libio-string-perl libmail-dmarc-perl fuzzyocr
My ~/.spamassassin/user_prefs file contains the following configuration:
required_hits 5
ok_locales en fr
# Bayes options
score BAYES_00 -4.0
score BAYES_40 -0.5
score BAYES_60 1.0
score BAYES_80 2.7
score BAYES_95 4.0
score BAYES_99 6.0
bayes_auto_learn 1
bayes_ignore_header X-Miltered
bayes_ignore_header X-MIME-Autoconverted
bayes_ignore_header X-Evolution
bayes_ignore_header X-Virus-Scanned
bayes_ignore_header X-Forwarded-For
bayes_ignore_header X-Forwarded-By
bayes_ignore_header X-Scanned-By
bayes_ignore_header X-Spam-Level
bayes_ignore_header X-Spam-Status
as well as manual score reductions due to false positives, and manual score increases to help push certain types of spam emails over the 12.0 definitely spam threshold.
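These adjustments are plain score lines. For illustration, they look like this (the rule names are stock SpamAssassin rules, but the values here are made-up examples rather than the ones I actually use):
score FREEMAIL_FROM 1.5
score HTML_MESSAGE 0.4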
Finally, I have the FuzzyOCR package installed since it has occasionally flagged some spam that other tools had missed. It is a little resource intensive though and so you may want to avoid this one if you are filtering spam for other people.
As always, feel free to leave a comment if you do something else that works well and that's not included in my setup. This is a work-in-progress.
I use reboot-notifier on most of my servers to let me know when I need to reboot them for kernel updates since I want to decide exactly when those machines go down. On the other hand, my home backup server has very predictable usage patterns and so I decided to go one step further there and automate these necessary reboots.
To do that, I first installed reboot-notifier, which puts the following script in /etc/kernel/postinst.d/reboot-notifier to detect when a new kernel was installed:
#!/bin/sh
if [ "$0" = "/etc/kernel/postinst.d/reboot-notifier" ]; then
    DPKG_MAINTSCRIPT_PACKAGE=linux-base
fi

echo "*** System restart required ***" > /var/run/reboot-required
echo "$DPKG_MAINTSCRIPT_PACKAGE" >> /var/run/reboot-required.pkgs
Note that unattended-upgrades puts a similar script in /etc/kernel/postinst.d/unattended-upgrades:
#!/bin/sh
case "$DPKG_MAINTSCRIPT_PACKAGE::$DPKG_MAINTSCRIPT_NAME" in
    linux-image-extra*::postrm)
        exit 0;;
esac

if [ -d /var/run ]; then
    touch /var/run/reboot-required
    if ! grep -q "^$DPKG_MAINTSCRIPT_PACKAGE$" /var/run/reboot-required.pkgs 2> /dev/null ; then
        echo "$DPKG_MAINTSCRIPT_PACKAGE" >> /var/run/reboot-required.pkgs
    fi
fi
and so you only need one of them to be installed since they both write to /var/run/reboot-required. It doesn't hurt to have both of them though.
Then I created the following cron job (/etc/cron.daily/reboot-local) to actually reboot the server:
#!/bin/bash
REBOOT_REQUIRED=/var/run/reboot-required

if [ -s $REBOOT_REQUIRED ] ; then
    cat "$REBOOT_REQUIRED" | /usr/bin/mail -s "Rebooting $HOSTNAME" root
    /bin/systemctl reboot
fi
With that in place, my server will send me an email and then automatically reboot itself.
This is a work in progress because I'd like to add some checks later on to make sure that no backup is in progress during that time (maybe by looking for active ssh connections?), but it works well enough for now. Feel free to leave a comment if you've got a smarter script you'd like to share.
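For example, a guard along the following lines near the top of the cron job could skip the reboot while anyone is connected over ssh (an untested sketch of that idea):
# Skip the reboot if there are established incoming ssh connections,
# e.g. a backup still in progress.
if ss -Htn state established '( sport = :22 )' | grep -q . ; then
    exit 0
fi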
Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful (besides the asterisk issue described below), but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.
Logcheck
I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:
apt purge rsyslog
Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:
#/var/log/syslog
#/var/log/auth.log
I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:
--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+]\ IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9. ]+]\ )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:
rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*
Finally, I had to fix any unescaped | characters in my local rules. For example, error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.
Networking
After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:
apt purge isc-dhcp-client
This however meant that I needed to ensure that my network configuration software does not depend on the now-deprecated DHCP client.
On my laptop, I was already using NetworkManager for my main network interfaces and that has built-in DHCP support.
Migration to systemd-networkd
On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:
apt purge ifupdown
rm /etc/network/interfaces
putting the following in /etc/systemd/network/20-wired.network:
[Match]
Name=eno1

[Network]
DHCP=yes
MulticastDNS=yes
and then enabling/starting systemd-networkd:
systemctl enable systemd-networkd
systemctl start systemd-networkd
I also needed to install polkit:
apt install --no-install-recommends policykit-1
in order to allow systemd-networkd to set the hostname.
In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.
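That dispatcher script is not reproduced here, but a minimal version could look something like this (assuming the networkd-dispatcher package is installed; this is a sketch, not the exact script):
#!/bin/sh
# /etc/networkd-dispatcher/routable.d/50-firewall
# Restore firewall rules whenever an interface becomes routable.
iptables-restore < /etc/network/iptables.up.rules
ip6tables-restore < /etc/network/ip6tables.up.rules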
Migration to predictable network interface names
On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:
[Match]
Name=enp0s4

[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1
and switched to predictable network interface names by deleting these two files:
/etc/systemd/network/50-virtio-kernel-names.link
/etc/systemd/network/99-default.link
and then changing eth0 to enp0s4 in:
/etc/network/iptables.up.rules
/etc/network/ip6tables.up.rules
/etc/rc.local (for OpenVPN)
/etc/logcheck/ignored.d.*/*
Then I regenerated all initramfs:
update-initramfs -u -k all
and rebooted the virtual machine.
Giving systemd-resolved control of /etc/resolv.conf
After reading this history of DNS resolution on Linux, I decided to modernize my resolv.conf setup and let systemd-resolved handle /etc/resolv.conf.
I installed the package:
apt install systemd-resolved
and then removed no-longer-needed packages:
apt purge openresolv resolvconf avahi-daemon
I also disabled support for Link-Local Multicast Name Resolution (LLMNR) after reading this person's reasoning by putting the following in /etc/systemd/resolved.conf.d/llmnr.conf:
[Resolve]
LLMNR=no
I verified that mDNS is enabled and LLMNR is disabled:
$ resolvectl mdns
Global: yes
Link 2 (enp0s25): yes
Link 3 (wlp3s0): yes
$ resolvectl llmnr
Global: no
Link 2 (enp0s25): no
Link 3 (wlp3s0): no
Note that if you want auto-discovery of local printers using CUPS, you need to keep avahi-daemon and ensure that systemd-resolved does not conflict with it.
DNS resolution problems with ifupdown
Also, if you haven't migrated to systemd-networkd yet and are still using ifupdown with a static IP address, you will likely run into DNS problems which can be fixed using the following patch to /etc/network/if-up.d/resolved:
@@ -43,11 +43,11 @@ if systemctl is-enabled systemd-resolved > /dev/null 2>&1; then
fi
if [ -n "$NEW_DNS" ]; then
cat <<EOF >"$mystatedir/ifupdown-${ADDRFAM}-$interface"
-"$DNS"="$NEW_DNS"
+$DNS="$NEW_DNS"
EOF
if [ -n "$NEW_DOMAINS" ]; then
cat <<EOF >>"$mystatedir/ifupdown-${ADDRFAM}-$interface"
-"$DOMAINS"="$NEW_DOMAINS"
+$DOMAINS="$NEW_DOMAINS"
EOF
fi
fi
@@ -66,7 +66,7 @@ EOF
# ignore errors due to nonexistent file
md5sum "$mystatedir/isc-dhcp-v4-$interface" "$mystatedir/isc-dhcp-v6-$interface" "$mystatedir/ifupdown-inet-$interface" "$mystatedir/ifupdown-inet6-$interface" > "$newstate" 2> /dev/null || true
if ! cmp --silent "$oldstate" "$newstate" 2>/dev/null; then
- DNS DNS6 DOMAINS DOMAINS6 DEFAULT_ROUTE
+ unset DNS DNS6 DOMAINS DOMAINS6 DEFAULT_ROUTE
# v4 first
if [ -e "$mystatedir/isc-dhcp-v4-$interface" ]; then
. "$mystatedir/isc-dhcp-v4-$interface"
and make sure you have nameservers set up in your static config; for example, one of my servers' /etc/network/interfaces looks like this:
iface enp4s0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 149.112.121.20
    dns-nameservers 149.112.122.20
    pre-up iptables-restore /etc/network/iptables.up.rules
Dynamic DNS
I replaced ddclient with inadyn, since ddclient doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.
chkrootkit
I moved my customizations in /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:
WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.
ssh
As mentioned in Debian bug#1018106, to silence the following warnings:
sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled
I changed the following in /etc/pam.d/sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@ session required pam_limits.so
session required pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
-session required pam_env.so user_readenv=1 envfile=/etc/default/locale
+session required pam_env.so envfile=/etc/default/locale
# SELinux needs to intervene at login time to ensure that the process starts
# in the proper default security context. Only sessions which are intended
I also made the following changes to /etc/ssh/sshd_config.d/local.conf
based on the advice of ssh-audit 2.9.0:
-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
Unwanted power management
I ran into a problem with one of my servers where it would suspend itself after a certain amount of time. It turns out this was due to default GDM behaviour, and while I could tell gdm not to sleep on inactivity, I instead put the following in /etc/systemd/sleep.conf.d/nosuspend.conf to fully disable systemd-based suspend or hibernate:
[Sleep]
AllowSuspend=no
AllowHibernation=no
AllowSuspendThenHibernate=no
AllowHybridSleep=no
Asterisk has been removed from Debian
The only major problem I ran into while upgrading to bookworm is that I discovered that Asterisk has been removed from stable and testing. For some reason, this was not mentioned in the release notes and I have not yet found a good solution.
If you upgrade to bookworm, be warned that the bullseye packages will remain installed (and will work fine in my experience) unless you accidentally "clean them up" with apt purge '~o', in which case you'll have to fetch these old debs manually.
Using mitmproxy to intercept your packets is a convenient way to inspect a browser's network traffic.
It's pretty straightforward to set up on a desktop computer:
- Install mitmproxy (apt install mitmproxy on Debian) and start it:
mitmproxy --mode socks5 --listen-port 9000
- Start your browser specifying the proxy to use:
chrome --proxy-server="socks5://localhost:9000"
brave-browser --proxy-server="socks5://localhost:9000"
- Add its certificate authority to your browser.
At this point, all of the traffic from that browser should be flowing through your mitmproxy instance.
Android setup
On Android, it's a little less straightforward:
- Start mitmproxy on your desktop:
mitmproxy --mode regular --listen-port 9000
- Open that port on your desktop firewall if needed.
- On your Android device, change your WiFi settings for the current access point:
  - Proxy: Manual
  - Proxy hostname: 192.168.1.100 (IP address of your desktop)
  - Proxy port: 9000
- Turn off any VPN.
- Turn off WiFi.
- Turn WiFi back on.
- Open http://mitm.it in a browser to download the certificate authority file.
- Open the system Settings, Security and privacy, More security and privacy, Encryption & credentials, Install a certificate and finally choose CA certificate.
- Tap Install anyway to dismiss the warning and select the file you just downloaded.
Once you have gone through all of these steps, you should be able to monitor (on your desktop) the HTTP and HTTPS requests made inside of your Android browsers.
Note that many applications will start failing due to certificate pinning.
Enabling AppArmor on a Debian Linode VPS is not entirely straightforward. Here's what I had to do in order to make it work.
Packages to install
The easy bit was to install a few packages:
apt install grub2 apparmor-profiles-extra apparmor-profiles apparmor
and then adding apparmor=1 security=apparmor to the kernel command line (GRUB_CMDLINE_LINUX) in /etc/default/grub.
Move away from using Linode's kernels
As mentioned in this blog post, I found out that these parameters are ignored by the Linode kernels.
I had to:
- login to the Linode Manager (i.e. https://cloud.linode.com/linodes/<linode ID>/configurations),
- click the relevant node,
- click "Edit" next to the configuration profile, and
- change the kernel to "GRUB 2".
Fix grub
Next I found out that grub doesn't actually install itself properly because it can't be installed directly on the virtual drives provided by Linode (KVM). Manually running this hack worked for me:
grub-install --grub-setup=/bin/true /dev/null
Unbound + Let's Encrypt fix
Finally, my local Unbound installation stopped working because it couldn't access the Let's Encrypt certificates anymore.
The solution to this was pretty straightforward. All I needed to do was to add the following to /etc/apparmor.d/local/usr.sbin.unbound:
/etc/letsencrypt/archive/** r,
/etc/letsencrypt/live/** r,
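After adding those lines, the updated profile needs to be reloaded, which can be done with the standard AppArmor tooling:
apparmor_parser -r /etc/apparmor.d/usr.sbin.unbound
systemctl restart unbound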