modified: images/nobody_know_you_are_a_dog.webp

modified:   pandoc_templates/style.css
modified:   setup/contributor_code_of_conduct.md
modified:   setup/set_up_build_environments.md
modified:   setup/wireguard.md
This commit is contained in:
reaction.la 2023-02-18 21:04:50 +08:00
parent 9ce5bfc939
commit 6fc26cc9d0
No known key found for this signature in database
GPG Key ID: 99914792148C8388
5 changed files with 241 additions and 39 deletions

images/nobody_know_you_are_a_dog.webp
Binary file not shown. (Before: 6.5 KiB. After: 38 KiB.)

pandoc_templates/style.css

@ -45,6 +45,7 @@ td, th {
text-align: left;
}
pre.terminal_image {
font-family: 'Lucida Console';
background-color: #000;
color: #0F0;
font-size: 75%;

setup/contributor_code_of_conduct.md

@ -73,9 +73,28 @@ Login identities shall have no password reset, because that is a security
hole. If people forget their password, they should just create a new login
that uses the same GPG key.
Every pull request should be made using `git pull-request` (rather than
some web UI, for the web UI is apt to identify people through the domain
name system and their login identities).
The start argument of `git pull-request` should correspond to a signed
commit by the person requested, and the end argument to a signed and
tagged commit by the person requesting.
When creating the tag for a pull request, git drops one into an editor and
asks one to describe the tag. One should then give a lengthy description of
one's *pull request* documenting the changes made.
When accepting a pull request, the information provided by the requestor
through the tag and elsewhere should be duplicated by the acceptor into
the (possibly quite lengthy) merge message.
Thus all changes should be made, explained, and approved by persons
identified cryptographically, rather than through the domain name system.
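A minimal sketch of the requesting side, assuming `git pull-request` takes the same arguments as the standard `git request-pull`, with «their-signed-commit», «public-repo-url», and «my-change-v1» as placeholders:
```bash
# the -s tag is signed; describe the changes at length in the editor it opens
git tag -s «my-change-v1»
git push origin «my-change-v1»
# generate the pull request from their signed commit to our signed, tagged commit
git pull-request «their-signed-commit» «public-repo-url» «my-change-v1»
```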
# No race, sex, religion, nationality, or sexual preference
![On the internet nobody knows you are a dog](./images/nobody_know_you_are_a_dog.webp)
![On the internet nobody knows you are a dog](../images/nobody_know_you_are_a_dog.webp)
Everyone shall be white, male, heterosexual, and vaguely Christian, even
if they quite obviously are not, but no one shall unnecessarily and

setup/set_up_build_environments.md

@ -9,7 +9,7 @@ For a gpt partition table, sixteen MiB fat32 partition with boot and efi flags
set, one gigabyte linux swap, and the rest your ext4 root file system.
With an efi-gpt partition table, efi handles multiboot, so if you have
windows, going to need a biggger boot-efi partition. (grub takes a bit over
windows, going to need a bigger boot-efi partition. (grub takes a bit over
four MiB)
For an ms-dos (non-efi) partition table, five hundred and twelve MiB ext4
@ -30,7 +30,7 @@ And a gpt partition table for a linux system should look something like this
To build a cross platform application, you need to build in a cross
platform environment.
## Setting up Ubuntu in Virtual Box
## Setting up Ubuntu in VirtualBox
Having a whole lot of different versions of different machines, with a
whole lot of snapshots, can suck up a remarkable amount of disk space
@ -66,18 +66,19 @@ Debian especially tends to have security in place to stop random people
from sticking in CDs that get root access to the OS to run code to amend
the OS in ways the developers did not anticipate.
## Setting up Debian in Virtual Box
## Setting up Debian in VirtualBox
### Guest Additions
To install guest additions on Debian:
```bash
su -l root
sudo -i
apt-get -qy update && apt-get -qy install build-essential module-assistant git dnsutils curl sudo dialog rsync
apt-get -qy full-upgrade
m-a -qi prepare
mount -t iso9660 /dev/sr0 /media/cdrom
apt autoremove
mount /media/cdrom0
cd /media/cdrom0 && sh ./VBoxLinuxAdditions.run
usermod -a -G vboxsf cherry
```
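If the install worked, then after a reboot the guest kernel modules and the shared folder group should be present; a quick check (a sketch, assuming the login `cherry` used above):
```bash
lsmod | grep -i vbox   # vboxguest and vboxsf modules should be loaded
groups cherry          # cherry should now be a member of the vboxsf group
```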
@ -209,14 +210,113 @@ mkcd() { mkdir -p "$1" && cd "$1"; }
Setting them in `/etc/bash.bashrc` sets them for all users, including root. But the default `~/.bashrc` is apt to override the change of `H` for `h` in `PS1`
### fstab
The line in fstab for optical disks needs to be given the options `udf,iso9660 ro,users,auto,nofail` so that it automounts, and so that any user can eject it.
Confusingly, `nofail` means that it is allowed to fail, which of course it will
if there is nothing in the optical drive.
`user,noauto` means that the user has to mount it, and only the user that
mounted it can unmount it. `user,auto` is likely to result in root mounting it,
and if `root` mounted it, as it probably did, you have a problem. That
problem is fixed by saying `users` instead of `user`.
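Such a line in `/etc/fstab` looks something like this (a sketch; the device and mount point are assumptions, adjust to your system):
```terminal_image
/dev/sr0   /media/cdrom0   udf,iso9660   ro,users,auto,nofail   0   0
```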
## Setting up OpenWrt in VirtualBox
OpenWrt is a router, and needs a network to route. So you use it to route a
virtual box internal network.
Ignore the instructions on the OpenWrt website for setting up in Virtual
Box. Those instructions are wrong and do not work. Kind of obvious that
they are not going to work, since they do not provide for connecting to an
internal network that would need its own router. They suffer from a basic
lack of direction, purpose, and intent.
Download the appropriate gzipped image file, expand it to an image file, and convert to a vdi file.
You need an [x86 64 bit version of OpenWrt](https://openwrt.org/docs/guide-user/installation/openwrt_x86). There are four versions of them, squashed and not squashed, efi and not efi. Not efi is more likely to work and not squashed is more likely to work, but only squashed supports automatic updates of the kernel.
In git bash terminal
```bash
gzip -d openwrt-*.img.gz
/c/"Program Files"/Oracle/VirtualBox/VBoxManage convertfromraw --format VDI openwrt-22.03.3-x86-64-generic-ext4-combined.img openwrt-generic-ext4-combined.vdi
```
Add the vdi to the available media using the VirtualBox Virtual Media Manager.
The resulting vdi file may have things wrong with it that would prevent it from booting, but viewing it in gparted will normalize it.
Create a virtual computer, name openwrt, type linux, version Linux 2.6, 3.x, 4.x, 5.x (64 bit). The first network adaptor in it should be internal, the second one should be NAT or bridged.
Boot up openwrt headless, and any virtual machine on the internal network should just work. From any virtual machine on the internal network, configure the router at http://192.168.1.1
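The same virtual computer can instead be created from the git bash terminal; a sketch, assuming the vdi created above and an internal network named «intnet»:
```bash
# use the full path to VBoxManage, as in the convertfromraw command above
VBoxManage createvm --name openwrt --ostype Linux26_64 --register
VBoxManage modifyvm openwrt --nic1 intnet --intnet1 «intnet» --nic2 nat
VBoxManage storagectl openwrt --name SATA --add sata --controller IntelAhci
VBoxManage storageattach openwrt --storagectl SATA --port 0 --device 0 \
    --type hdd --medium openwrt-generic-ext4-combined.vdi
VBoxManage startvm openwrt --type headless
```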
## Virtual disks
The first virtual disk attached to a virtual machine is `/dev/sda`, the second
is `/dev/sdb`, and so on and so forth.
Be warned that the default debian setup, when it encounters multiple
partitions that map to the same mount points, is apt to make surprising and
seemingly random decisions as to which partitions to mount to what.
This does not necessarily correspond to the order in which virtual drives have
been attached to the virtual machine.
Be warned that the debian setup, when it encounters multiple partitions
that have the same UUID, is apt to make seemingly random decisions as to which partitions to mount to what.
The problem is that a virtual box clone does not change the partition UUIDs. To address this, attach the disk to another linux system without mounting it, and change the UUIDs with `gparted`. `gparted` will frequently refuse to change a UUID, because it knows
better than you do: it will not do anything that would screw up grub.
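If `gparted` refuses, the UUIDs can also be changed from the command line; a sketch, assuming the cloned disk is attached but not mounted as `/dev/sdb`:
```bash
blkid /dev/sdb3                    # show the current filesystem UUID
tune2fs -U random /dev/sdb3        # give the cloned ext4 root a fresh UUID
mkswap -U "$(uuidgen)" /dev/sdb2   # give the cloned swap partition a fresh UUID
# then edit /etc/fstab on the clone to match the new UUIDs
```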
`boot-repair` can fix a `grub` on the boot drive of a linux system different
from the one it itself booted from, but to boot a cdrom on an oracle virtual
box efi system, you cannot have anything attached to SATA. Attach the disk
immediately after the boot-repair grub menu comes up.
The resulting repaired system may nonetheless take a strangely long time
to boot, because it is trying to resume a suspended linux, which may not
be supported on your device.
`boot-repair` and `update-initramfs` make a wild-assed guess that if they see
what looks like a swap partition, they are probably on a laptop that supports
suspend/resume. If this guess is wrong, you are in trouble.
If suspend/resume is not in fact supported, this leads to a strangely long boot delay while grub
waits for the resume data that was stored to the swap file:
```bash
#to fix long waits to resume a nonexistent suspend
sudo -i
swapoff -a
update-initramfs -u
shutdown -r now
```
If you have a separate boot partition in an `efi` system, then the `grub.cfg` in `/boot/efi/EFI/debian` (not to be confused with all the other `grub.cfg`s)
should look like
```terminal_image
search.fs_uuid «8943ba15-8939-4bca-ae3d-92534cc937c3» boot hd0,gpt«4»
set prefix=($boot)'/grub'
configfile $prefix/grub.cfg
```
Where the «funny brackets», as always, indicate mutatis mutandis.
Should you dig all the way down to the efi boot menu, which boots grub,
which then boots the real grub, the device identifier used corresponds to
the PARTUUID in
`lsblk -o name,type,size,fstype,mountpoint,UUID,PARTUUID` while linux uses the UUID.
If you attach two virtual disks representing two different linux
systems, with the same UUIDs, to the same sata controller while powered
down, a big surprise is likely on powering up. Attaching one of them to
virtio will evade this problem.
But a better solution is to change all the UUIDs, since every piece of software expects them to be unique, and edit `/etc/fstab` accordingly. This will probably stop grub from booting your system, because in grub.cfg it is searching for /boot or / by UUID.
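A sketch of the usual repair, run from inside the system whose UUIDs were changed (the «uuid» placeholders are assumptions):
```bash
sed -i "s/«old-uuid»/«new-uuid»/g" /etc/fstab
update-grub          # regenerate grub.cfg so it searches for the new UUID
update-initramfs -u  # so the initramfs also looks for the new root and swap
```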
However, sometimes one can add one additional virtual disk to a sata
controller after the system has powered up, which will produce no
surprises, for the disk will be attached but not mounted.
So cheerfully attaching one linux disk to another linux system so that you
can manipulate one system with the other may well have surprising,
@ -224,12 +324,24 @@ unexpected, and highly undesirable results.
What decisions it has in fact made are revealed by `lsblk`
So when you attach a foreign linux disk to another linux system, attach
after it has booted, and detach when you are done, to ensure predictable
and expected behavior.
If one wants to add several attached disks without surprises, then while
the virtual machine is powered down, attach the virtio-scsi controller,
and a bunch of virtual hard disks to it. The machine will then boot up with
only the sata disk mounted, as one would expect, but the disks attached to
the virtio controller will get attached as the ids /dev/sda, /dev/sdb,
/dev/sdc, etc., while the sata disk gets mounted, but surprisingly gets the
last id, rather than the first.
The first partition on the first virtual disk is `/dev/sda1`, the third partition
on the second virtual disk is `/dev/sdb3`, and so on and so forth.
After one does what is needful, power down and detach the hard disks, for
if a hard disk is attached to multiple systems, unpleasant surprises are
likely to ensue.
So when you attach a foreign linux disk by sata to another linux system,
attach after it has booted, and detach before you shutdown, to ensure
predictable and expected behavior.
This however only seems to work with efi sata drives, so one can only
attach one additional disk after it has booted.
Dynamic virtual disks in virtual box can be resized, and copied to a
different (larger) size.
@ -259,7 +371,7 @@ but not mounted, as `/dev/sdb1`.
You can then shrink it in the host OS with
```bash
VBoxManage modifyhd -compact thediskfile.vdi`
VBoxManage modifyhd -compact thediskfile.vdi
```
or make a copy that will be smaller than the original.
@ -281,13 +393,13 @@ create a fixed size copy of it using virtual media manager in the host
system. This, however, is an impractically slow and inefficient process for
any large disk. For a one terabyte disk, it takes a couple of days: a day or
so to initialize the new virtual disk, during which the progress meter shows
zero progress, and another day or so to do actually the copy, during which
zero progress, and another day or so to actually do the copy, during which
the progress meter very slowly increases.
For big disk images, it is a whole lot faster to create a new system, attach
the old system to it, mount the old system, and copy the files that you care about.
Cloning a fixed sized disk is quite fast, and a quite reasonable way of
backing stuff up.
To list block devices `lsblk`.
To list block devices `lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID`.
To mount an attached disk, create an empty directory, normally under
`mnt`, and `mount /dev/sdb3 /mnt/newvm`
@ -295,17 +407,17 @@ To mount an attached disk, create an empty directory, normally under
For example:
```terminal_image
root@example.com:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 24G 0 disk
├─sda1 8:1 0 23G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 975M 0 part [SWAP]
sdb 8:16 0 46G 0 disk
├─sdb1 8:17 0 36M 0 part
├─sdb2 8:18 0 45G 0 part
└─sdb3 8:19 0 1G 0 part
sr0 11:0 1 484M 0 rom
root@example.com:~#lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID
NAME TYPE SIZE FSTYPE MOUNTPOINT UUID
sda disk 20G
├─sda1 part 33M vfat /boot/efi E470-C4BA
├─sda2 part 3G swap [SWAP] 764b1b37-c66f-4552-b2b6-0d48196198d7
└─sda3 part 17G ext4 / efd3621c-63a4-4728-b7dd-747527f107c0
sdb disk 20G
├─sdb1 part 33M vfat E470-C4BA
├─sdb2 part 3G swap 764b1b37-c66f-4552-b2b6-0d48196198d7
└─sdb3 part 17G ext4 efd3621c-63a4-4728-b7dd-747527f107c0
sr0 rom 1024M
root@example.com:~# mkdir -p /mnt/sdb2
root@example.com:~# mount /dev/sdb2 /mnt/sdb2
root@example.com:~# ls -hal /mnt/sdb2
@ -319,14 +431,29 @@ drwxr-xr-x 2 root root 4.0K Dec 12 06:27 mnt
drwxr-xr-x 11 root root 4.0K Dec 12 06:27 var
```
# Actual server
When backing up from one virtual hard drive to another very similar one,
mount the source disk with `mount -r`.
## disable password entry
We are not worried about permissions and symlinks, so use `rsync -rcv --inplace --append-verify`
If worried about permissions and symlinks `rsync -acv --inplace --append-verify`
There is some horrid bug with `rsync -acv --inplace --append-verify` that makes it excruciatingly slow if you are copying a lot of data.
`cp -vuxr «source-dir»/«.bit*» «dest-dir»` should have a similar effect,
and is perhaps considerably faster, but it checks only the times, which may
be disastrous if you have been using your backup live any time after you
used the master live. After backing up, run your backup live once briefly,
before using the backed up master, then never again till the next backup.
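Putting this together, a sketch of backing up one attached disk to another, with the device names as assumptions to be checked first with `lsblk`:
```bash
mkdir -p /mnt/src /mnt/dst
mount -r /dev/sdb2 /mnt/src        # source mounted read-only
mount /dev/sdc2 /mnt/dst
rsync -rcv --inplace --append-verify /mnt/src/ /mnt/dst/
umount /mnt/src /mnt/dst
```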
# Actual server
Setting up an actual server is similar to setting up the virtual machine
modelling it, except you have to worry about the server getting overloaded
and locking up.
## disable password entry
On an actual server, it is advisable to enable passwordless sudo for one user.
Issue the command `visudo` and edit the sudoers file to contain the line:
@ -509,19 +636,53 @@ of (multi-)user utilities and applications.
## Setting up ssh
When your hosting service gives you a server, you will probably initially
have to control it by password. And not only is this unsafe and lots of
utilities fail to work with passwords, but your local ssh client may well fail
to do a password login, endlessly offering public keys, when no
`~/.ssh/authorized_keys` file yet exists on the freshly created server.
To force your local client to employ passwords:
```bash
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -o StrictHostKeyChecking=no root@«server»
```
And then the first thing you do on the freshly initialized server is
```bash
apt update -qy
apt upgrade -qy
shutdown -r now && exit
```
And the *next* thing you do is login again and set up login by ssh key,
because if you make changes and *then* update, things are likely to break
(because your hosting service likely installed a very old version of linux).
Login by password is second class, and there are a bunch of esoteric
special cases where it does not quite 100% work in all situations,
because stuff wants to auto log you in without asking for input.
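A sketch of that next step, assuming a key pair already exists on the local machine (the key path is an assumption):
```bash
# on the local machine: install your public key on the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@«server»
# then, on the server: turn password login off
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```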
Putty is the windows ssh client, but you can use the Linux ssh client in
windows in the git bash shell, and the Linux remote file copy utility
`scp` is way better than the putty utility PSFTP.
windows in the git bash shell, which is way better than putty, and the
Linux remote file copy utility `scp` is way better than the putty utility
`PSFTP`, and the Linux remote file copy utility `rsync` way better than
either of them, though unfortunately `rsync` does not work in the windows bash shell.
The filezilla client works natively on both windows and linux, and it is a very good gui file copy utility that, like scp and rsync, works by ssh (once you set up the necessary public and private keys). Unfortunately on windows, it insists on putty format private keys, while the git bash shell for windows wants linux format keys.
Usually a command line interface is a pain and error prone, with a
multitude of mysterious and inexplicable options and parameters, and one
typo or out of order command causing your system to unrecoverably die,but even though Putty has a windowed interface, the command line
typo or out of order command causing your system to unrecoverably
die, but even though Putty has a windowed interface, the command line
interface of bash is easier to use.
(The gui interface of filezilla is the easiest to use, but I tend not to bother
setting up the putty keys for it, and wind up using rsync linux to linux,
which, like all command line interfaces, is more powerful, but more difficult
and dangerous.)
It is easier in practice to use the bash (or, on Windows, git-bash) to manage keys than PuTTYgen. You generate a key pair with
```bash
@ -1287,7 +1448,8 @@ map to the old server, until the new server works.)
```bash
apt-get -qy install certbot python-certbot-nginx
certbot register --register-unsafely-without-email --agree-tos
certbot run -a manual --preferred-challenges dns -i nginx -d reaction.la -d blog.reaction.la
certbot run -a manual --preferred-challenges dns -i nginx \
-d reaction.la -d blog.reaction.la
nginx -t
```
@ -1295,13 +1457,23 @@ This does not set up automatic renewal. To get automatic renewal going,
you will need to renew with the `webroot` challenge rather than the `manual`
once DNS points to this server.
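A sketch of that webroot renewal, once DNS points here (the webroot path is an assumption; it must be whatever directory nginx serves for these domains):
```bash
certbot certonly --webroot -w /var/www/html -d reaction.la -d blog.reaction.la
certbot renew --dry-run    # confirm that unattended renewal will now succeed
```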
This, `--preferred-challenges dns`, also allows you to set up wildcard
certificates, but it is a pain, and does not support automatic renewal.
Automatic renewal of wildcards requires the cooperation of
certbot and your dns server, and is different for every organization, so only
the big boys can play.
But if you are doing this, not on your test server, but on your live server, the easy way, which will also set up automatic renewal and configure your webserver to be https only, is:
```bash
certbot --nginx -d mail.reaction.la,blog.reaction.la,reaction.la
certbot --nginx -d \
mail.reaction.la,blog.reaction.la,reaction.la,\
www.reaction.la,www.blog.reaction.la,\
gitea.reaction.la,git.reaction.la
```
If instead you already have a certificate, because you copied over your `/etc/letsencrypt` directory
If instead you already have a certificate, because you copied over your
`/etc/letsencrypt` directory
```bash
apt-get -qy install certbot python-certbot-nginx

setup/wireguard.md

@ -247,13 +247,18 @@ Next, find the name of your servers main network interface.
```bash
ip addr | grep BROADCAST
server_network_interface=$(ip addr | grep BROADCAST |sed -r "s/.*:[[:space:]]*([[:alnum:]]+)[[:space:]]*:.*/\1/")
echo $server_network_interface
```
As you can see, it's named `eth0` on my Debian server.
```terminal_image
:~# ip addr | grep BROADCAST
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
:~# server_network_interface=$(ip addr | grep BROADCAST |sed -r "s/([[:alnum:]]+):[[:space:]]*(.*)[[:space:]]*:(.*)/\2/")
:~# echo $server_network_interface
eth0
```
To configure IP masquerading, we have to add an iptables command to a UFW configuration file.
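Such a rule typically takes the following form near the top of `/etc/ufw/before.rules` (a sketch; the wireguard subnet is an assumption, and `eth0` is the interface found above):
```terminal_image
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
COMMIT
```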
@ -651,6 +656,11 @@ You can also run the following command to get the current public IP address.
curl https://icanhazip.com
```
To get the geographic location
```bash
curl https://www.dnsleaktest.com |grep from
```
# Troubleshooting
## Check if UDP port «51820» is open