In the `[Seat:*]` section of the configuration file (there is another section of this configuration file where these changes have no apparent effect) edit
The full configuration generated by `grub2-mkconfig` is built from the file `/etc/default/grub`, the file `/etc/fstab`, and the files in `/etc/grub.d/`.
Among the generated files, the key file is `menu.cfg`, which will contain a boot entry for any additional disk containing a linux kernel that you have attached to the system. You might then be able to boot into that other linux, and recreate its configuration files within it.
To set things to autostart on gui login under Mate and KDE Plasma create
the directory `~/.config/autostart` and copy the appropriate `*.desktop`
files into it from `/usr/share/applications` or
`~/.local/share/applications`.
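A sketch of the copy, where the `.desktop` file name is just an example:

```bash
mkdir -p ~/.config/autostart
cp /usr/share/applications/firefox-esr.desktop ~/.config/autostart/
```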
### Don't let the screen saver log you out.
On Debian with lightdm and Mate, go to System / Control Center / Look and Feel / Screensaver and turn off the screensaver screen lock.
Then go to System / Control Center / Hardware / Power Management and turn off computer and screen sleep.
### setup ssh server
In the shared directory, I have a copy of `/etc` and `~/.ssh` ready to roll, so I just go into the shared directory, copy them over, `chmod` `.ssh`, and reboot.
If the host has a domain name, the default in `/etc/bash.bashrc` will not display it in full at the prompt, which can lead to you being confused about which host on the internet you are commanding.
Setting them in `/etc/bash.bashrc` sets them for all users, including root. But the default `~/.bashrc` is apt to override the change of `\H` for `\h` in `PS1`.
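A sketch of the change being described, assuming the stock Debian prompt:

```bash
# in /etc/bash.bashrc (and again in ~/.bashrc, which is apt to override it)
# \H gives the full host name, \h only the part before the first dot
PS1='${debian_chroot:+($debian_chroot)}\u@\H:\w\$ '
```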
### fstab
The line in fstab for optical disks needs to be given the filesystem types `udf,iso9660` and the options `ro,users,auto,nofail`, so that the disk automounts and any user can eject it.
Confusingly, `nofail` means that it is allowed to fail, which of course it will
if there is nothing in the optical drive.
`user,noauto` means that the user has to mount it, and only the user that
mounted it can unmount it. `user,auto` is likely to result in root mounting it,
and if `root` mounted it, as it probably did, you have a problem. That
problem is fixed by saying `users` instead of `user`.
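A sketch of such a line, assuming the drive is `/dev/sr0` and the usual Debian mount point:

```default
/dev/sr0  /media/cdrom0  udf,iso9660  ro,users,auto,nofail  0  0
```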
## Setting up OpenWrt in VirtualBox
OpenWrt is a router, and needs a network to route. So you use it to route a
VirtualBox internal network.
Ignore the instructions on the OpenWrt website for setting up in VirtualBox.
Those instructions are wrong and do not work. It is kind of obvious that
they are not going to work, since they do not provide for connecting to an
internal network that would need its own router. They suffer from a basic
lack of direction, purpose, and intent.
Download the appropriate gzipped image file, expand it to an image file, and convert to a vdi file.
You need an [x86 64 bit version of OpenWrt](https://openwrt.org/docs/guide-user/installation/openwrt_x86). There are four versions of them, squashed and not squashed, efi and not efi. Not efi is more likely to work and not squashed is more likely to work, but only squashed supports automatic updates of the kernel.
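A sketch of the expand-and-convert step; the file name is whatever you actually downloaded:

```bash
gunzip openwrt-x86-64-generic-ext4-combined.img.gz
VBoxManage convertfromraw openwrt-x86-64-generic-ext4-combined.img openwrt.vdi --format VDI
```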
Add the vdi to VirtualBox using the Virtual Media Manager.
The resulting vdi file may have things wrong with it that would prevent it from booting, but viewing it in gparted will normalize it.
Create a virtual computer, named openwrt, type Linux, version Linux 2.6 / 3.x / 4.x / 5.x (64-bit). The first network adapter should be internal, and the second one should be NAT or bridged.
Boot up openwrt headless, and any virtual machine on the internal network should just work. From any virtual machine on the internal network, configure the router at http://192.168.1.1
## Virtual disks
The first virtual disk attached to a virtual machine is `/dev/sda`, the second
is `/dev/sdb`, and so on and so forth.
This does not necessarily correspond to the order in which virtual drives have
been attached to the virtual machine.
Be warned that the debian setup, when it encounters multiple partitions
that have the same UUID, is apt to make seemingly random decisions as to which partitions to mount where.
The problem is that a VirtualBox clone does not change the partition UUIDs. To address this, attach the clone to another linux system without mounting it, and change the UUIDs with `gparted`. Gparted will frequently refuse to change a UUID because it knows
better than you do: it will not do anything that would screw up grub.
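If gparted refuses, the UUIDs can also be changed from the command line; a sketch, assuming the clone is attached unmounted as `/dev/sdb`, with an ext4 filesystem on the first partition and swap on the second:

```bash
tune2fs -U random /dev/sdb1          # fresh random UUID for the ext4 filesystem
swaplabel -U "$(uuidgen)" /dev/sdb2  # fresh random UUID for the swap partition
```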
`boot-repair` can fix `grub` on the boot drive of a linux system different
from the one it itself booted from, but to boot a cdrom on an Oracle VirtualBox
efi system, you cannot have anything attached to SATA. Attach the disk
immediately after the boot-repair grub menu comes up.
The resulting repaired system may nonetheless take a strangely long time
to boot, because it is trying to resume a suspended linux, which may not
be supported on your device.
`boot-repair` and `update-initramfs` make a wild-assed guess that if they see
what looks like a swap partition, it is probably on a laptop that supports
suspend/resume. If this guess is wrong, you are in trouble.
If resume is not supported, this leads to a strangely long boot delay while the
boot waits for resume data that was never stored to the swap partition.
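The usual fix (an assumption, not from the original) is to tell the initramfs not to wait for a resume device:

```bash
echo "RESUME=none" > /etc/initramfs-tools/conf.d/resume
update-initramfs -u
```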
If you have a separate boot partition on an `efi` system, then the `grub.cfg` in `/boot/efi/EFI/debian` (not to be confused with all the other `grub.cfg`s) also refers to filesystems by UUID. If you amend the filesystem UUIDs referenced in fstab and boot, you have to amend `/etc/fstab` and `/boot/efi/EFI/debian/grub.cfg`, then rerun `update-grub`.
But a better solution is to change all the UUIDs, since every piece of software expects them to be unique, and edit `/etc/fstab` accordingly. This will probably stop grub from booting your system, because `grub.cfg` searches for `/boot` or `/` by UUID.
However, sometimes one can add one additional virtual disk to a sata
controller after the system has powered up, which will produce no
surprises, for the disk will be attached but not mounted.
So cheerfully attaching one linux disk to another linux system so that you
can manipulate one system with the other may well have surprising,
unexpected, and highly undesirable results.
What decisions it has in fact made are revealed by `lsblk`.
If one wants to add several attached disks without surprises, then while
the virtual machine is powered down, attach a virtio-scsi controller,
and a bunch of virtual hard disks to it. The machine will then boot up with
only the sata disk mounted, as one would expect, but the disks attached to
the virtio controller will get the ids /dev/sda, /dev/sdb,
/dev/sdc, etc, while the sata disk gets mounted but, surprisingly, gets the
last id rather than the first.
After one does what is needful, power down and detach the hard disks, for
if a hard disk is attached to multiple systems, unpleasant surprises are
likely to ensue.
So when you attach a foreign linux disk by sata to another linux system,
attach after it has booted, and detach before you shutdown, to ensure
predictable and expected behavior.
This however only seems to work with efi sata drives, so one can only
attach one additional disk after it has booted.
Dynamic virtual disks in VirtualBox can be resized, or copied to a
different (larger) size.
Confusingly, the documentation and the UI does not distinguish between
dynamic and fixed sized virtual disks - so the UI to change a fixed sized
disks size, or to copy it to a disk of different size is there, but has
absolutely no effect.
Having changed the virtual disk size in the host system, you then want to
change the partition sizes using gparted, which requires the virtual disk to
be attached, but not mounted, to another guest virtual machine in which
you will run `gparted`.
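The host-side resize is done with `VBoxManage`; a sketch, with the new size given in MB:

```bash
VBoxManage modifymedium disk thediskfile.vdi --resize 65536   # grow to 64 GB; dynamic disks only
```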
Over time, dynamic virtual disks occupy more and more physical storage,
because more and more sectors become non zero, even though unused.
You attach the virtual disk that you want to shrink to another guest OS as
`/dev/sdb`, which is attached but not mounted, and, in the other guest OS
`zerofree /dev/sdb1` which will zero the free space on partition 1. (And
similarly for any other linux file system partitions)
You run `zerofree`, like gparted, in the other guest OS, which is mounted
on `/dev/sda`, while the disk whose partitions you are zeroing is attached,
but not mounted, as `/dev/sdb`.
You can then shrink it in the host OS with
```bash
VBoxManage modifyhd -compact thediskfile.vdi
```
or make a copy that will be smaller than the original.
To resize a fixed sized disk you have to make a dynamic copy, then run
gparted (on the other guest OS, you don't want to muck with a mounted
file system using gparted, it is dangerous and broken) to shrink the
partitions if you intend to shrink the virtual disk, resize the dynamic copy
in the host OS, then, if you expanded the virtual disk run gparted to expand
the partitions.
To modify the size of a guest operating system virtual disk, you need that
OS not running, and two other operating systems, the host system and a
second guest operating system. You attach, but not mount, the disk to a
second guest operating system so that you can run zerofree and gparted in
that guest OS.
And now that you have a dynamic disk that is a different size, you can
create a fixed size copy of it using virtual media manager in the host
system. This, however, is an impractically slow and inefficient process for
any large disk. For a one terabyte disk, it takes a couple of days: a day or
so to initialize the new virtual disk, during which the progress meter shows
zero progress, and another day or so to actually do the copy, during which
the progress meter very slowly increases.
Cloning a fixed sized disk is quite fast, and a quite reasonable way of
backing stuff up.
To list block devices `lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID`.
To mount an attached disk, create an empty directory, normally under `/mnt`, and mount the disk's partition on it.
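A sketch, assuming the attached disk's partition is `/dev/sdb1`:

```bash
mkdir -p /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
# ... do what is needful ...
umount /mnt/sdb1
```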
When the OS detects the cpu idling while waiting for pages to be loaded
into memory, it should disable one process so that its pages do not get loaded for
a while, and derank all pages in memory that belong to that process, and
all pages that belong to processes waiting on that process. When the
cpu has idle time, and nothing to do for enabled processes, because
everything they need has been done and they are only waiting for disabled
processes to get their pages loaded, then the OS can re-enable a disabled
process, whereupon its virtual pages get loaded back into physical
memory, possibly resulting in some other process starting to thrash and
getting disabled. So instead of paging out the least recently used page, the OS pages out an entire process, and stalls it until the cpu is adequately responsive to the remaining processes, and has been adequately responsive for a little while.
This is inefficient, but it is a lot more efficient than a computer
thrashing on paging. If the computer is stalling waiting on page loads, then
it is just running more processes than it can run, and the least recently used page algorithm is not going to accomplish anything useful. Some entire
processes just have to be paged out, and stay paged out, until the
remaining processes have completed and are idling. A thrashing computer
is not running anything at all. Better that it run some things, and from time
to time change which things.
When the cpu has nothing to do because all the processes are waiting for pages to be loaded, something has to be done.
On windows I use ssh in the git bash shell, which is way better than putty, and the
Linux remote file copy utility `scp` is way better than the putty utility
`PSFTP`, and the Linux remote file copy utility `rsync` is way better than
either of them, though unfortunately `rsync` does not work in the windows bash shell.
The filezilla client works natively on both windows and linux, and it is a very good gui file copy utility that, like scp and rsync, works over ssh (once you set up the necessary public and private keys). Unfortunately on windows, it insists on putty format private keys, while the git bash shell for windows wants linux format keys.
`PermitRootLogin` defaults to prohibit-password, but it is best to set it
explicitly. Within `/etc/ssh/sshd_config`, find the line that includes
`PermitRootLogin` and, if password login is enabled, modify it to ensure that users can only
connect with their ssh key.
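A sketch of the relevant lines in `/etc/ssh/sshd_config`:

```default
PermitRootLogin prohibit-password
PasswordAuthentication no
```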
`ssh` out of the box by default allows every cryptographic algorithm under the sun, but we know the NSA has been industriously backdooring cryptographic code, sometimes at the level of the algorithm itself, as with their infamous elliptic curves, but more commonly at the level of implementation and api, ensuring that secure algorithms are used in a way that is insecure against someone who has the backdoor: insecurely implementing secure algorithms. On the basis of circumstantial evidence
and social connections, I believe that much of the cryptographic code used
by ssh has been backdoored by the nsa, and that this is a widely shared
secret.
They structure the api so as to make it overwhelmingly likely that the code
will be used insecurely, and subtly tricky to use securely, and then make
sure that it is used insecurely. It is usually not that the core algorithms are
backdoored, as that the backdoor is on a more human level, gently steering
the people using core algorithms into a hidden trap.
The backdoors are generally in the interfaces between layers, the apis,
which are subtly mismatched, and if you point at the backdoor they say
"that is not a backdoor, the code is fine, that issue is out of scope. File a
bug report against someone else's code. Out of scope, out of scope."
And if you were to file a bug report against someone else's code, they
would tell you they are using this very secure NSA approved algorithm
with the approved and very secure api, the details of the cryptography are
someone else's problem, "out of scope, out of scope", and they have
absolutely no idea what you are talking about, because what you are
talking about is indeed very obscure, subtle, complicated, and difficult to
understand. The backdoors are usually where one api maintained by one
group is using a subtly flawed api maintained by another group.
The more algorithms permitted, the more places for backdoors. The
certificate algorithms are particularly egregious. Why should we ever
allow more than one algorithm, the one we most trust?
Therefore, I restrict the allowed algorithms to those that I actually use, and
only use the ones I have reason to believe are good and securely
implemented. Hence the lines:
```default
HostKey /etc/ssh/ssh_host_ed25519_key
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
kexalgorithms curve25519-sha256
pubkeyacceptedkeytypes ssh-ed25519
hostkeyalgorithms ssh-ed25519
hostbasedacceptedkeytypes ssh-ed25519
casignaturealgorithms ssh-ed25519
```
Not all ssh servers recognize all these configuration options, and if you
give an unrecognized configuration option, the server dies, and then you
cannot ssh in to fix it. But they all recognize the first three, `HostKey`,
`ciphers`, and `macs`, which are the three that matter the most.
To put these changes into effect:
```bash
shutdown -r now
```
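A less drastic alternative (an assumption, not in the original) is to check the config for typos first and restart only the ssh service, so that a bad option does not lock you out:

```bash
sshd -t && systemctl restart ssh
```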
Now that putty can do a non interactive login, you can use `plink` to have a
script in a client window execute a program on the server, and echo the
output to the client, and psftp to transfer files, though `scp` in the Git Bash
window is better, and `rsync` (Unix to Unix only, requires `rsync` running on both machines).
On windows, FileZilla uses putty private keys to do scp. This is a much
more user friendly and safer interface than using scp – it is harder to
issue a catastrophic command, but rsync is more broadly capable.
Life is simpler if you run FileZilla under linux, whereupon it uses the same
keys and config as everyone else.
All in all, on windows, it is handier to interact with Linux machines
using the Git Bash command window, than using putty, once you have set
up `~/.ssh/config` on windows.
Of course windows machines are insecure, and it is safer to have your
keys and your `~/.ssh/config` on Linux.
Putty on Windows is not bad when you figure out how to use it, but ssh
in Git Bash shell is better:\
You paste stuff into the terminal window with right click, drag stuff
out of the terminal window with the mouse, you use nano to edit stuff in
the ssh terminal window.
Once you can ssh into your cloud server without a password, you need to update it and secure it with ufw. You also need rsync, to move files around.
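A sketch of that minimal setup:

```bash
apt update && apt upgrade -y
apt install -y ufw rsync
ufw allow ssh    # do this before enabling, or you lock yourself out
ufw enable
```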
### Remote graphical access over ssh
```bash
ssh -CX root@reaction.la
```
`C` stands for compression, and `X` for X11 forwarding.
-X overrides the per host setting in `~/.ssh/config`:
```default
ForwardX11 yes
ForwardX11Trusted yes
```
The per-host setting overrides the `Host *` setting in `~/.ssh/config`, which overrides the settings for all users in `/etc/ssh/ssh_config`.
If ForwardX11 is set to yes, as it should be, you do not need the `-X`. Running a gui app over ssh just works. There is a collection of useless toy
apps, `x11-apps`, for test and demonstration purposes.
I never got this working in windows, because of no end of mystery
configuration issues, but it works fine on Linux.
Then, as root on the remote machine, you issue a command to start up the
graphical program, which runs as an X11 client on the remote
machine, as a client of the X11 server on your local machine. This is a whole lot easier than setting up VNC.
If your machine is running inside an OracleVM, and you issue the
command `startx` as root on the remote machine to start the remote
machines desktop in the X11 server on your local OracleVM, it instead
seems to start up the desktop in the OracleVM X11 server on your
OracleVM host machine. Whatever, I am confused, but the OracleVM
X11 server on Windows just works for me, and the Windows X11 server
just does not. On Linux, just works.
Everyone uses VNC rather than SSH, but configuring login and security
on VNC is a nightmare. The only usable way to do it is to turn off all
security on VNC, use `ufw` to shut off outside access to the VNC host's port,
and access the VNC host through SSH port forwarding.
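A sketch, assuming the VNC server listens on the conventional port 5901 for display :1:

```bash
# on the VNC host: block direct access from outside
ufw deny 5901/tcp
# on your local machine: tunnel the port over ssh ...
ssh -L 5901:localhost:5901 user@vnchost
# ... and point the VNC client at localhost:5901
```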
X11 results in a vast amount of unnecessary round tripping, with the result
that things get unusable when you are separated from the other computer
by a significant ping time. VNC has less of a ping problem.
X11 is a superior solution if your ping time is a few milliseconds or less.
VNC is a superior solution if your ping time is humanly perceptible, fifty
milliseconds or more. In between, it depends.
I find no solution satisfactory. Graphic software really is not designed to be used remotely. Javascript apps are. If you have a program or
functionality intended for remote use, the gui for that capability has to be
javascript/css/html. Or you design a local client or master that accesses
and displays global host or slave information.
The best solution if you must use graphic software remotely and have a
significant ping time is to use VNC over SSH. Albeit VNC always exports
an entire desktop, while X11 exports a window. Though really, the best solution is to not use graphic software remotely, except for apps.
## Install minimum standard software on the cloud server
OK, MariaDB is working. We will use this trivial database and easily
guessed `example_user` with the easily guessed password
`mypassword` for more testing later. Delete him and his database
when your site has your actual content on it.
### domain names and PHP under nginx
Check again that the default nginx web page comes up when you browse to the server.
Create the directories `/var/www/blog.reaction.la` and `/var/www/reaction.la` and put some html files in them, substituting your actual domains for the example domains.
The first server is the default if no domain is recognized, and redirects the
request to an actual server, the next two servers are the actual domains
served, and the last server redirects to the second domain name if the
domain name looks a bit like the second domain name. Notice that this
eliminates those pesky `www`s.
The root tells it where to find the actual files.
The first location tells nginx that if a file name is not found, give a 404 rather than doing the disastrously clever stuff that it is apt to do, and the second location tells it that if a file name ends in `.php`, pass it to `php7.3-fpm.sock` (you did substitute your actual php fpm service for `php7.3-fpm.sock`, right?)
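A minimal sketch of the kind of configuration being described; domain names, paths, and the php socket are examples, and your actual config will have more server blocks:

```text
server {
    listen 80 default_server;
    return 301 $scheme://reaction.la$request_uri;
}
server {
    listen 80;
    server_name reaction.la;
    root /var/www/reaction.la;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
}
server {
    listen 80;
    server_name www.reaction.la;
    return 301 $scheme://reaction.la$request_uri;
}
```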
Now check that your configuration is OK with `nginx -t`, and restart nginx to read your configuration.
```bash
nginx -t
systemctl restart nginx
```
Browse to those domains, and check that the web pages come up, and that
www gets redirected.
Now we will create some php files in those directories to check that php works.
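For example (delete the file once the test passes, since `phpinfo` reveals a lot about your setup):

```bash
echo '<?php phpinfo(); ?>' > /var/www/reaction.la/info.php
```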
But if you are doing this, not on your test server, but on your live server, the easy way, which will also setup automatic renewal and configure your webserver to be https only, is:
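The elided recipe is presumably certbot with the nginx plugin; a sketch, not necessarily what the original had in mind:

```bash
apt install -y certbot python3-certbot-nginx
certbot --nginx -d reaction.la -d blog.reaction.la   # choose the redirect-to-https option when asked
```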
Adjust `$table_prefix = 'wp_';` in `wp-config.php` if necessary.
```bash
systemctl start php7.3-fpm.service
systemctl start nginx
```
Your blog should now work.
## Logging and awstats.
### Logging
First create, in the standard and expected location, a place for nginx to log stuff.
```bash
mkdir /var/log/nginx
chown -R www-data:www-data /var/log/nginx
```
Then edit the virtual servers to be logged, which are in the directory `/etc/nginx/sites-enabled` and in this example in the file `/etc/nginx/sites-enabled/config`
```text
server {
server_name reaction.la;
root /var/www/reaction.la;
…
access_log /var/log/nginx/reaction.la.access.log;
error_log /var/log/nginx/reaction.la.error.log;
…
}
```
The default log file format logs the ips, which in a server located in the cloud might be a problem. People who do not have your best interests at heart might get them.
So you might want a custom format that does not log the remote address. On the other hand, Awstats is not going to be happy with that format. A compromise is to create a cron job that cuts the logs daily, a cron job that runs Awstats, and a cron job that then deletes the cut log when Awstats is done with it.
There is no point to leaving a gigantic pile of data, that could hang you and your friends, sitting around wasting space.
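A sketch of a custom format that drops the client address (the `log_format` goes in the `http` block of `/etc/nginx/nginx.conf`, and each virtual server's `access_log` line references it):

```text
log_format noip '- - [$time_local] "$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/reaction.la.access.log noip;
```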
## Postfix and Dovecot
[Postfix and Dovecot are a pile of matchsticks and glue] from which you are
expected to assemble a boat.
Probably I should be using one of those email setup packages that set up
everything for you. [Mailinabox] seems to be primarily tested and
developed on ubuntu, and is explicitly not supported on debian.
[Mailcow] however, is Debian. But [Mailcow] wants 6GiB of ram, plus one
GiB swap, plus twenty GiB disk. Ouch. [Mailinabox] can get by with one
GiB of ram, plus one GiB of swap. Says 512MiB is OK, though two GiB
of ram is strongly recommended.
[Mailinabox] wants the domain name `box.yourdomain.com`, and, after it is
set up, wants the nameservers `ns1.box.yourdomain.com` and
`ns2.box.yourdomain.com`. They, fortunately, have a namecheap tutorial.
An SPF record tells recipients that mail from your domain name will be sent from one and only one network address.
A DKIM record publishes a digital signature for sent mail, to prevent mail
from being modified or fake mails being injected as it goes through the
multiple intermediate servers.
By the time the ultimate recipient sees the email, no end of intermediate
computers have had their hands on it, but pgp signing is obviously
superior, since that is controlled by the actual sender, not one more
intermediary. DKIM is the not quite good enough being the enemy of the
good enough.
Worse, DKIM means that any email sent from your server is signed by
your server, so if you send a private message, and someone defects on you
by making it public, hard to claim that he is making it up. Sometimes you
want your emails signed with a signature verifiable by third parties, and
sometimes this is potentially dangerous. Gpg allows you to sign some
things and not other things. DKIM means that everything gets signed,
without you being aware of it.
DKIM renders all messages non repudiable, and some messages vitally
need to be repudiable.
Gpg is better than DKIM, but has the enormous disadvantage that it
cannot authenticate except by signing. If you send a message to a single
recipient or a quite small number of recipients, you usually want him to
know for sure it is from you, and has not been altered in transit, but not be
able to prove to the whole world that it is from you.
A DMARC record can tell the recipient that mail from
`rhocoin.org` will always and only come from senders like
`user@rhocoin.org`. This can be an inconvenient restriction on
one's ability to use a more relevant identity.
Further, intermediate servers keep mangling messages sent through them,
breaking the DKIM signatures, resulting in no end of spurious error messages.
You want to stop other people's email servers from misbehaving on the
sender addresses. You don't want to stop your server from misbehaving.
Trouble with SPF and DKIM is that, without DMARC, they have no
impact on the sender address, thus don't stop spearphishing attacks using
your identity. They do stop spam attacks from getting your server
blacklisted by using your server identity to send junk mail.
SPF is sufficient to stop your server from getting blacklisted, and largely
irrelevant to preventing spearphishing. But SPF with DMARC at least
does something about spearphishing, while DKIM with DMARC does a far more
thorough job on spearphishing, but produces a lot of false warnings due to
intermediate servers mangling the email in ways that invalidate the signature.
The only useful thing that DMARC can do is ensure *address alignment*
with SPF and DKIM, so that only people who can perform an email login
to your server can send email as someone who has such a login.
But DKIM is complicated to install, and a lot more complicated to manage
because you will be endlessly struggling to resolve the problem of
signatures being falsely invalidated. It may be far more effective against
spearphishing if the end user pays attention, but if people keep seeing
false warnings, they are being trained to ignore valid warnings.
A solution to all these problems is to [use the value `ed25519-sha256` for `a=` in the DKIM header](https://www.mailhardener.com/kb/how-to-use-dkim-with-ed25519)
(which ensures that obsolete intermediaries will
ignore your DKIM, thus third parties will see fewer false warnings) and to
have a [cron job that regularly rotates your DKIM keys](https://rya.nc/dkim-privates.html "DKIM: Show Your Privates"), *and publishes the
old secret key on the DNS 36 hours after it has been rotated out* under the
`n=OldSecret_...` field of the DKIM record, thus rendering your emails
deniable. (RSA keys are inconveniently large for this protocol, since they
do not fit in a DNS record)
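A sketch of what such a DKIM record looks like in the zone file; the selector and key are placeholders:

```text
20240101._domainkey.rhocoin.org. IN TXT "v=DKIM1; k=ed25519; p=«base64-public-key»"
```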
But that is quite a bit of work, so I just have not gotten around to it. No one
seems to have gotten around to it. Needs to be part of [Mailinabox] so that it
becomes a standard.
Until you can install DKIM with a cron job that renders email repudiable, do
not install DKIM. (And disable it if [Mailinabox] enables it.)
### Install Postfix
Here we will install postfix on debian so that it can be used to send emails
by local users and applications only - that is, those installed on the same
server as Postfix, such as your blog, mailutils, and Gitea, and can receive mail addressed to them.
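A sketch of the install itself, assuming you also want the `mail` command for testing:

```bash
apt install -y postfix mailutils
# choose "Internet Site" when the installer asks for the general mail configuration type
```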
The `System mail name` should be the same as the name you assigned to the server when you were creating it. When you’ve finished, press `TAB`, then `ENTER`.
You now have Postfix installed and are ready to modify its configuration settings.
### configuring postfix
```bash
postconf -e home_mailbox=Maildir/
```
This is incompatible with lots of modern mail software, but a lot more
compatible with all manner of programs trying to use Postfix, including
Dovecot.
Your forwarding file is, by default, broken. It forwards all administrative
system generated email to the nonroot local user `deb10` who probably does
not exist on your system.
Set up forwarding, so you’ll get emails sent to `root` on the system at your
personal, external email address or to a suitable nonroot local user, or
create the local user `deb10`.
To configure Postfix so that system-generated emails will be sent to your
email address or to some other non root local user, you need to edit the
`/etc/aliases` file.
```bash
nano /etc/aliases
```
```default
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
root: «your_email_address»
```
After changing `/etc/aliases` you must issue the command `newaliases` to inform the mail system. (Rebooting does not do it.)
`/etc/aliases` remaps mail to users on your internal mail server, but likely your mail server is also the MX host for another domain. For this, you are going to need a rather more powerful tool, which I address later.
The `postmaster: root` setting ensures that system-generated emails are sent
to the `root` user. You want to edit these settings so these emails are rerouted
to your email address. To accomplish that, replace «your_email_address»
with your actual email address, or the name of a non root user. Most systems do not allow email clients to
login as root, so you cannot easily access emails that wind up as `root@mail.rhocoin.org`.
Probably you should create a user `postmaster`.
If you’re hosting multiple domains on a single server, the other domains
must be passed to Postfix using the `mydestination` directive if other people are going to send email addressed to users on those domains. But
chances are you also have other domains on another server, which declare in
their DNS this server as their MX record. `mydestination` is not the place
for the domain names of those servers, and putting them in `mydestination`
is apt to result in mysterious failures.
Those other domains, not hosted on this physical machine, but whose MX
record points to this machine, are [virtual_alias_domains](#virtual-domains-and-virtual-users), and postfix has to
handle messages addressed to such users differently.
Set the mailbox limit to an appropriate fraction of your total available disk space, and the attachment limit to an appropriate fraction of your mailbox size limit.
Check that `myhostname` is consistent with reverse ip search. (It should already be if you setup reverse IP in advance)
Set `mydestination` to all dns names that map to your server (it probably already does)
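A sketch of those settings with `postconf`; the values are examples, not recommendations:

```bash
postconf -e 'myhostname = mail.rhocoin.org'
postconf -e 'mydestination = $myhostname, rhocoin.org, localhost.localdomain, localhost'
postconf -e 'mailbox_size_limit = 1073741824'    # ~1 GiB
postconf -e 'message_size_limit = 104857600'     # ~100 MiB, a fraction of the mailbox limit
```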
If you see a pile of warnings `warning symlink leaves directory: /etc/postfix/./makedefs.out`, that is just noise. Turn it off by replacing the symbolic link with a hard link.
```default
# You don't want an error message response for invalid email
# addresses, as this may reveal too much to your enemies.
```
Every time your `/etc/postfix/virtual` is changed, you have to recompile it
into a hash database that postfix can actually use, with the command:
```bash
postmap /etc/postfix/virtual && postfix reload
```
#### set up an email client for a virtual domain
We have setup postfix and dovecot so that clients can only use ssl/tls, and not starttls.
On thunderbird, we go to account settings / account actions / add mail account
We then enter the email address and password, and click on `configure manually`
Select SSL/TLS and normal password
For the server, thunderbird will incorrectly propose `.blog.reaction.la`
Put in the correct value, `rhocoin.org`, then click on re-test. Thunderbird will then correctly set the port numbers itself, which are the standard port numbers.
But the problem is, we might have an actual host running postfix, which wants to ask the host to which its MX record points, to send emails for it.
Configuring postfix as a satellite system just works, at least for emails generated by services running on the same machine, but postfix does not provide for it logging in. Instead, postfix assumes it has been somehow authorized, typically in `mynetworks` to relay.
Another way of setting it up, which I have not checked out, is [postfix_relaying_through_another_mailserver](https://www.howtoforge.com/postfix_relaying_through_another_mailserver){target="_blank"}
## Your ssh client
Your cloud server is going to keep timing you out and shutting you down,
so if using OpenSSH need to set up `~/.ssh/config` to read
```default
ForwardX11 yes
Protocol 2
TCPKeepAlive yes
ServerAliveInterval 10
```
Putty has this stuff in the connection configuration, not in the
config file. Which makes it easier to get wrong, rather than harder.
### A cloud server that does not shut you down
Your cloud server is probably a virtual private server, a vps running on KVM, XEN,
or OpenVZ.
KVM is a real virtual private server, XEN is sort of almost a virtual
server, and OpenVZ is a jumped up guest account on someone else’s
server.
A KVM vps is more expensive, because when they say you get 2048 meg,
you actually do get 2048 meg. OpenVZ will allocate up to 2048 meg if it
has some to spare – which it probably does not. So if you are running
OpenVZ you can, and these guys regularly do, put far too many virtual
private servers on one physical machine. Someone can have a 32 gigabyte
bare metal server with eight cores, and then allocate one hundred virtual
servers each supposedly with two gigabytes and one core on it, while if he
is running KVM, he can only allocate as much ram as he actually has.
## Debian on the cloud
Debian is significantly more lightweight than Ubuntu, but harder to
configure and use, and will crash and burn if you connect up to a software
repository configured for Ubuntu in order to get the latest and greatest
software. You generally cannot get the latest and greatest software, and
if you try to do so, likely your system will die on its ass, or plunge
you into expert mode, where no one sufficiently expert can be found.
Furthermore, in the course of setting up Debian, you are highly likely to break
it irretrievably and have to restart from scratch. After each change,
reboot, and after each successful reboot, take a snapshot, so that you
do not have to restart all the way from scratch.
But, running stuff that is supposed to run, which is not always the latest and
greatest, it is more stable and reliable than Ubuntu. Provided it boots up
successfully after you have done configuring, it will likely go on booting up
reliably and not break for strange and unforeseeable reasons. It will only
break because you try to install or reconfigure software and somehow
screw up. Which you will do with great regularity.
On a small virtual server, Debian provides substantial advantages.
Go Debian with ssh and no GUI for servers, and Debian with lightdm
Mate for your laptop, so that your local environment is similar to your
server environment.
On any debian you need to run through the apt-get update and upgrade cycle till it stops finding anything to update.
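The cycle being, roughly:

```bash
apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get autoremove -y
```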
However, this memory leak detection is incompatible with wxWidgets,
which does its own memory leak detection.
And since you are going to spend a lot of time in the debugger, Windows
Visual Studio recommended as the main line of development. But, for
cross compilation, wxWidgets recommended.
Code::Blocks (with the wxSmith GUI developer) is one open source cross platform option, which
may be out of date.
wxSQLite3 incorporates SQLite3 into wxWidgets, also provides ChaCha20
Poly1305 encryption. There is also a wrapper that wraps SQLite3 into
modern (memory managed) C++.
wxSQLite3 is undergoing development right now, indicating that wxWidgets
and SQLite3 are undergoing development right now. `wxSmith` is dead.
`Tk` is still live, but you get confusingly directed to the dead version.
## Model View Controller Architecture
This design pattern separates the UI program from the main program, which is
thus more environment independent and easier to move between different
operating systems.
The Model-view-controller design pattern makes sense with peers on the
server, and clients on the end user’s computer. But I am not sure it makes
sense in terms of where the power is. We want the power to be in the
client, where the secrets are.
Model
: The central component of the pattern. It is the application’s dynamic data
structure, independent of the user interface. It directly manages the
data, logic and rules of the application.
View
: Any representation of information such as a chart, diagram or table.
Multiple views of the same information are possible, such as a bar chart for
management and a tabular view for accountants.
Controller
: Accepts input and converts it to commands for the model or view.
So, a common design pattern is to put as much of the code as possible into
the daemon, and as little into the gui.
Now it makes sense that the Daemon would be assembling and checking large
numbers of transactions, but the client has to be assembling and checking
the end user’s transaction, so this model looks like massive code
duplication.
If we follow the Model-View-Controller architecture then the Daemon provides
the model, and, on command, provides the model view to a system running on
the same hardware, the model view being a subset of the model that the view
knows how to depict to the end user. The GUI is View and Command, a
graphical program, which sends binary commands to the model.
Store the master secret and any valuable secrets in GUI, since wxWidgets
provides a secret storing mechanism. But the daemon needs to be able to run
on a headless server, so needs to store its own secrets – but these secrets
will be generated by and known to the master wallet, which can initialize a
new server to be identical to the first. Since the server can likely be
accessed by lots of people, we will make its secrets lower value.
We also write an (intentionally hard to use) command line view and command
line control, to act as prototypes for the graphical view and control, and
test beds for test instrumentation.
## CMake
CMake is the best cross platform build tool, but my experience is that it is
too painful to use, is not genuinely cross platform, and that useful projects
rely primarily on autotools to build on linux, and on Visual Studio to build
on Windows.
And since I rely primarily on a pile of libraries that rely primarily on
autotools on linux and Visual Studio on windows ...
## Windows, Git, Cmake, autotools and Mingw
Cmake in theory provides a universal build that will build stuff on both
Windows and linux, but when I tried using it, it was a pain and created
extravagantly fragile and complicated makefiles.
Libsodium does not support CMake, but rather uses autotools on linux like
systems and visual studio project files on Windows systems.
wxWidgets in theory supports CMake, but I could not get it working, and most people use wxWidgets with autotools on linux like systems, and visual studio project files on Windows systems. Maybe they could not get it working either.
Far from being robustly environment agnostic and shielding you from the
unique characteristics of each environment, CMake seems to require a whole
lot of hand tuning to each particular build environment to do anything useful.
One way to make a repository publicly clonable over plain http is to have the post-update hook turn it into old plain dumb files, and
then put a symlink to the repository directory in your apache
directories, whereupon the clone command takes as its argument the
directory url (with no trailing backslash).
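A sketch of the hook part, run inside the bare repository on the server (the sample hook shipped with git just runs `git update-server-info`; the path is an example):

```bash
cd /path/to/myrepo.git
mv hooks/post-update.sample hooks/post-update
chmod +x hooks/post-update
git update-server-info          # generate the dumb-http info files once by hand
```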
## Sharing git repositories
### Git Daemon
git-daemon will listen on port 9418. By default, it will allow access to any directory that looks like a git directory and contains the magic file git-daemon-export-ok.
This is by far the simplest and most direct way of allowing the world to get at your git repository.
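A sketch, with example paths:

```bash
touch /srv/git/myrepo.git/git-daemon-export-ok
git daemon --base-path=/srv/git --reuseaddr --detach
# clients can now: git clone git://yourhost/myrepo.git
```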
### Gitweb
Does much the same thing as git-daemon: makes your repository public with a
prettier user interface, and a somewhat less efficient protocol.
Gitweb provides a great deal of UI for viewing and interacting with your
repository, while git-daemon just allows people to clone it, and then they can
look at it.
### [gitolite](https://gitolite.com/gitolite/)
It seems that the lightweight way for small group cooperation on public
projects is Gitolite, git-daemon, and Gitweb.
Gitolite allows you to easily give people, identified by their ssh public key
and the filename of the file containing their public key, write capability to
certain branches and not others.
On Debian host `apt-get install gitolite3`, though someone complains this version is not up to date and you should install from github.
It then requests your public key, and you subsequently administer it through
the cloned repository `gitolite-admin` on your local machine.
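Branch permissions of the kind described above are granted by editing `conf/gitolite.conf` in that admin repository and pushing; a sketch with hypothetical users and repo:

```default
repo myproject
    RW+  master       =  alice          # alice can push and rewind master
    RW   dev/         =  bob carol      # bob and carol can push branches under dev/
    R                 =  @all           # everyone with a key can read
```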
It likes to start with a brand new empty git account, because it is going to
manage the authorized-keys file and it is going to construct the git
repositories.
Adding existing bare git repositories (and all git repositories it manages
have to be bare) is a little bit complex.
So, you give everyone working on the project their set of branches on your
repository, and they can do the same on their repositories.
This seems to be a far simpler and more lightweight solution than Phabricator
or Gitlab. It also respects Git’s inherently decentralized model. Phabricator
and Gitlab provide a great big pile of special purpose collaboration tools,
which Gitolite fails to provide, but you have to use those tools and not other tools. Gitolite seems to be overtaking Phabricator. KDE seems to be abandoning Phabricator:
> The KDE project [uses](https://community.kde.org/Sysadmin/GitKdeOrgManual)
> gitolite (in combination with redmine for issue tracking and reviewboard for
> code review). Apart from the usual access control, the KDE folks are heavy
> users of the "ad hoc repo creation" features enabled by wildrepos and the
> accompanying commands. Several of the changes to the "admin defined
> commands" were also inspired by KDE’s needs. See
> [section 5](https://community.kde.org/Sysadmin/GitKdeOrgManual#Server-side_commands) and
> [section 6](https://community.kde.org/Sysadmin/GitKdeOrgManual#Personal_repositories) of the above linked page for details.
So they are using three small tools, gitolite, redmine, and reviewboard,
instead of one big monolithic highly integrated tool. Since we are creating a messaging system where messages can carry money and prove promises and
context, the eat-your-own-dogfood principle suggests that pull requests and
code reviews should come over that messaging system.
Gitolite is designed around giving identified users controlled read and
write access, but [can provide world read access] through gitweb and git-daemon.
[can provide world read access]:https://gitolite.com/gitolite/gitweb-daemon#git-daemon
"gitweb and git-daemon – Gitolite"
### [Gitea] and [Gogs]
[Gitea]:https://gitea.io/en-us/
[Gogs]:https://gogs.io
"Gogs: A painless self-hosted Git service"
Gitea is the fork, and Gogs is abandonware. Installation seems a little
scary, but far less scary than Gitlab or Phabricator. Like Gitolite, it expects
an empty git user, and, unlike Gitolite, it expects a minimal ssh setup. If
you have several users with several existing keys, pain awaits.
It expects to run on lemp. [Install Lemp stack on Debian](#lemp-stack-on-debian)
Its default identity model is username/password/email, but it supports the
username/ssh/gpg user identity model. The user can, and should, enter an
ssh and gpg key under profile and settings / ssh gpg keys, and, to
prevent the use of https/certificate authority as a backdoor, require
commits to be gpg signed by people listed as collaborators.
If email is enabled, password reset is by default enabled. Unfortunately
email password reset makes CA system the root of identity so we have to
disable it. We need to make gpg the root of identity, as a temporary
measure until our own, better, identity system is working.
Its development model is that anyone can fork a repository, then submit a pull request, which request is then handled by someone with authority to push to that repository.
gpg signatures should, of course, have a completely fake email address,
because the email address advertised in the gpg certificate will be
spammed and spearphished, rendering it useless. But gpg is completely
designed around being used with real email addresses. If you are using it
to sign, encrypt, and verify emails through its nice integration with the
thunderbird mail client, you have to use real email addresses.
Uploading a repository to Gitea is problematic, because it only accepts
repositories on other Gitea, Gog, Gitlab, and Github instances. So you have
to create a fresh empty repository on Gitea, set it to accept an ssh key
from your `.ssh/config`, and merge your existing repository into it.
Download the empty gitea repository. Make it even emptier than it already
is with `git rm`. Because the histories are unrelated, almost anything with
the same name will cause a merge conflict.
```bash
git rm *
git commit -am "preparing to merge into empty repository"
git push
```
Go to the real repository, and merge the empty gitea repository on top of it:
```bash
# The histories are no longer unrelated, so push will work.
```
We are doing this in the real repository, so that the original history remains
unchanged - we want the new empty gitea repository on top, not underneath.
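The elided merge is presumably something like the following sketch; the remote name and URL are examples, not the author's exact commands:

```bash
git remote add gitea git@gitea.example.com:user/repo.git
git fetch gitea
git merge --allow-unrelated-histories gitea/master
git push gitea HEAD:master
```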
Gitea comes with password based membership utilities and a web UI ready to
roll, unlike Gitolite, which for all its power just declares all the key
management stuff that it is so deeply involved in out of scope, with the result
that everyone rolls their own solution ad-hoc.
These involve fewer hosting headaches than the great weighty offerings of
GitLab and Phabricator. They can run on a raspberry pi, and are great ads for
the capability of the Go language.
Gitea, like Gitolite, likes to manage people’s ssh keys. You have to upload
your ssh and gpg public key as part of profile settings. Everything else, is
unfortunately, password based and email based, which is to say based on the
domain system and certificate authority system, which is an inherent massive
security hole, since the authorities can seize the website and put anything
on it they want. Therefore, need to use the [Gitea signing feature](
https://docs.gitea.io/en-us/signing/
"gpg Commit Signatures - Docs")
This seems to require Gitea to do a lot of signing, which is a gaping
security hole, though it can be used with the gpg feature allowing short
lived subkeys.
We will sign with a subkey, because the master key should not be available on the server.
### To set up a system with gpg subkeys but without the master key
This is horribly painful, and we need to create a better system and eat our
own dogfood.
We ignore the Gpg Web of Trust model and instead use the Zooko identity model.
We use Gpg signatures to verify that remote repository code is coming from an unchanging entity, not for Gpg Web of Trust. Web of Trust is too complicated and too user hostile to be workable or safe.
Never --sign any Gpg key. --lsign it.
Never use any public gpg key repository.
Never use any email address on a gpg key unless it is only used for messages relating to a single group and a single purpose, or a fake email.
Gpg fundamentally lacks the concept of one entity acting for and on behalf
of another. A subkey should be identified by the name of the master key
followed by a subname intelligible to humans, and a sequence of grant of
authority values intelligible to computers, represented by either a variable
precision integer representing a bit string, each bit corresponding to a an
authority value, which bitstring may contain a flag saying that further
authorities are represented by a null terminated Dewey decimal sequence rather
than by one bits in a sparse bitstring.
But for the moment:
```bash
gpg --expert --full-gen-key
```
Select 9 ECC and ECC, then select curve25519, then 0, key does not expire.
(This is going to be the rarely used master key, whose secret will be kept in
a safe place and seldom used)
Much open source cryptography has been backdoored. I have no reason to
suppose that gpg is backdoored, other than that there is a great deal of
backdooring going around, but I have positive reason to suppose that
curve25519 has *not* been backdoored in gpg, and even if I did not, the
fewer cryptographic algorithms we use, the smaller the attack surface.
Gitlab strongly recommends using only ED25519 ssh keys to interact with
git repositories, and I strongly recommend using only ED25519 ssh keys
to interact with repositories and only 25519 gpg keys to authenticate
commits and identify committers. I mention Gitlab not because I regard
them as a highly authoritative source, but to show I am not the only one
around who is paranoid about broken and corrupted cryptographic code.
Everyone should use the same algorithm to reduce the attack surface.
Name it `«master key»` (use a fake email address). Gpg was designed to
make email secure, but email is not secure. We will be using this as the
root of identities in Git, rather than to authenticate email. Use this root of
identity only for project related matters, only for the authentication of code
and design documents. Don't use this identity for other purposes, as this
will increase the risk that pressure or unpleasant consequences will be
applied to you through those other activities. Don't link this identity to
your broader identity and broader activities, as pressure is likely to applied
to introduce hostile code and strange design decisions that facilitate other
people's hostile code. This happens all the time in projects attempting to
implement cryptography. They get one funny feature after another whose
only utility is facilitating backdoors.
gpg will ask you for a passphrase. If your keyfile falls into enemy hands,
your secret key is subject to offline dictionary attack, against which only a
very strong passphrase, with about 128 bits of entropy, can protect it.
Which is a problem that crypto wallets address, and gpg fails to address.
Either use a strong non human memorable passphrase or else use a trivial
passphrase or no passphrase, and instead export and hide the secret key
file and delete it from gpg when you are done. A human cannot remember
a strong passphrase. Write it down, in pencil, and hide it somewhere.
(Pencil does not fade, but some inks fade.) If you can remember the
passphrase, and someone gets at your keyfile, the passphrase is unlikely to protect your
keyfile. A strong passphrase looks like a wallet master secret.
Gpg's passphrases are merely a nuisance. They fail to serve a useful
purpose if used in the manner intended. Wallets came under real attack,
and now do things correctly. One of our objectives is to replace gpg for git
and gpg for secure messaging with something that actually works, with a
wallet, but for the moment, we use what we have.
Now create another similar subkey. This time give it an expiry date in the
near future.
```bash
gpg --expert --edit-key «master key»
addkey
```
Rather than protecting your primary keys with a useless password, you
should export them to a safe place, such as a pair of thumbdrives, and then
delete them, to be re-imported when you need them to add a new subkey.
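A sketch of that export-and-delete dance, keeping only the subkey secrets on the working machine:

```bash
gpg --export-secret-keys --armor «master key» > master-secret.asc    # hide this offline
gpg --export-secret-subkeys --armor «master key» > subkeys.asc
gpg --delete-secret-keys «master key»
gpg --import subkeys.asc    # the primary key now shows as "sec#": secret not present
rm subkeys.asc
```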
Phabricator, being written in PHP, assumes a Lamp stack: apache2, php 5.2 or later,
mysql 5.5 or later.
Phabricator assumes and enforces a single truth, it throws away a lot of the
inherent decentralization of git. You always need to have one copy of
the database, which is the King – which people spontaneously do with git,
but with git it is spontaneous, and subject to change.
KDE seems to be moving away from Phabricator to Gitolite, but they have an
enormously complex system, written in house, for managing access, of which
Gitolite is part. Perhaps, Phabricator, being too big, and doing too much was
inflexible and got in their way.
Github clients spontaneously choose one git repository as the single truth, but then you have the problem that Github itself is likely to be hostile. An easy solution is to have the Github repository be a clone of the remote repository, without write privileges.
### Gitlab repository
Git, like email, automatically works – provided that all users have ssh
login to the git user, but it is rather bare bones, better to fork out
the extra cash and support gitlab – but gitlab is far from automagic,
and expects one git address for git and one chat address for matterhorn,
and I assume expects an MX record also.
Gitlab is a gigantic memory hog, and needs an absolute minimum of one core
and four gig, with two cores and eight gig strongly recommended for
anything except testing it out and toying with it. It will absolutely
crash and burn on less than four gig. If you are running gitlab, there is no cost
advantage to running it on debian. But for my own private vpn, there is a huge cost advantage.
Every Linux desktop is different, and programs written for one desktop
have a tendency to die, mess up, or crash the desktop when running on
another Linux desktop.
Flatpack is a sandboxing environment designed to make every desktop
look to the program like the program’s native desktop (which for
wxWidgets is Gnome), and every program look to the desktop like a
program written for that particular desktop.
Flatpack simulates Gnome or KDE desktops to the program, and then translates
Gnome or KDE behaviour to whatever the actual desktop expects. To do this, it
requires some additional KDE configuration for Gnome desktop programs, and some
additional Gnome information for KDE desktop programs, and some additional
information to cover the other 101 desktops.
WxWidgets tries to make all desktops look alike to the programmer, and Flatpack
tries to make all desktops look alike to the program, but they cover different
aspects of program and desktop behaviour, so both are needed. Flatpack covers
interaction with the launcher, iconization, the install procedure, and such,
which wxWidgets does not cover.
Linux installs tend to be wildly idiosyncratic, and the installed program winds
up never being integrated with the desktop, and never updated.
Flatpack provides package management and automatic updates, all the way from the
git repository to the end user’s desktop, which wxWidgets cannot.
This is vital, since we want every wallet to talk the same language as every
other wallet.
Flatpack also makes all IPC look alike, so you can have your desktop program
talk to a service, and it will be talking Gnome IPC on every linux.
Unfortunately Flatpack does all this by running programs inside a virtual
machine with a virtual file system, which denies the program access to the
real machine, and denies the real machine access to the program’s
environment. So the end user cannot easily do stuff like edit the program’s
config file, or copy bitcoin’s wallet file or list of blocks.
A program written for the real machine, but actually running in the emulated
flatpack environment, is going to not have the same behaviours. The programmer
has total control over the environment in which his program runs – which means
that the end user does not.
# Censorship resistant internet
## [My planned system](social_networking.html)
## Jitsi
Private video conferencing
[To set up a Jitsi meet server](andchad.net/jitsi)
## [Zeronet](https://zeronet.io/docs)
Namecoin plus bittorrent based websites. Allows dynamic content.
Not compatible with wordpress or phabricator. You have to write your own dynamic site in python and coffeescript.
## [Bitmessage](https://wiki.bitmessage.org/)
Messaging system, email replacement, with proof of work and hidden servers.
Non instant text messages. Everyone receives all messages in a stream, but only the intended recipients can read them, making tracing impossible, hence no need to hide one’s IP. Proof of work requires a lot of work, to prevent streams from being spammed.
Not much work has been done on this project recently, though development and maintenance continues in a desultory fashion.
# Tcl Tk
An absolutely brilliant and ingenious language for producing cross
platform UI. Unfortunately I looked at a website that was kept around for
historical reasons, and concluded that development had stopped twenty years ago.
In fact development continues, and it is very popular, but being absolutely
typeless (everything is conceptually a string, including running code)
any large program becomes impossible for multiple people to work on.
Best suited for relatively small programs that hastily glue stuff together - it
is, more or less, a better bash, capable of running on any desktop, and