new file:   images/gpt_partitioned_linux_disk.webp

new file:   images/msdos_linux_partition.webp
modified:   images/nobody_know_you_are_a_dog.webp
modified:   libraries.md
new file:   notes/merkle_patricia_dag.md
modified:   pandoc_templates/style.css
new file:   pandoc_templates/vscode.css
modified:   scale_clients_trust.md
modified:   setup/contributor_code_of_conduct.md
modified:   setup/set_up_build_environments.md
modified:   setup/wireguard.md
modified:   social_networking.md
modified:   ../libsodium
modified:   ../wxWidgets
This commit is contained in:
reaction.la 2023-02-19 15:15:25 +08:00
parent 900696f685
commit 5e116784bf
No known key found for this signature in database
GPG Key ID: 99914792148C8388
14 changed files with 875 additions and 149 deletions

View File

@ -445,7 +445,25 @@ environment without MSVC present.
choco install mingw pandoc git vscode gpg4win -y
```
Cmake does not really work all that well with the MSVC environment.\
If we eventually take the CMake path, it will be after we build on
MinGW, not before.
## vscode
Vscode has taken the correct path, for one always winds up with a full
language and a full program running the build from source, and they went
with javascript. Javascript is an unworkable language that falls apart on
any large complex program, but one can use typescript, which compiles to javascript.
A full language is needed to govern the compile from source of a large
complex program - and none of the ad hoc languages have proven very useful.
So, I now belatedly conclude the correct path is to build everything under vscode.
On the other hand, the central attribute of both the makefile language and
the cmake language is dependency scanning, and we shall have to see how
good vscode's toolset is at this big central job.
## The standard Linux installer

View File

@ -0,0 +1,23 @@
---
title:
Big Circ notation
# katex
...
The definition of $\bigcirc$ used by mathematicians is not convenient for engineers.
So in practice we ignore that definition and use our own.
The mathematical definition is, roughly, that if $f(n)=\bigcirc\big(g(n)\big)$ then $f(n)$ grows no faster than $g(n)$: there exists some value $K$ such that, for values of $n$ of interest and larger than of interest, $f(n)\le Kg(n)$.
Which is kind of stupid for engineers, because by that definition an algorithm that takes time $\bigcirc(n)$ also takes time $\bigcirc(n^2)$, $\bigcirc(n!)$, etcetera.
So, Knuth defined $\large\Omega$, which means, roughly, that there exists some value $K$ such that, for values of $n$ of interest and larger than of interest, $f(n)\ge Kg(n)$.
Which is also stupid for the same reason.
So what all engineers do in practice is use $\bigcirc$ to mean that the mathematical definition of $\bigcirc$ is true, *and* Knuth's definition of $\large\Omega$ is also largely true, so when we say that an operation takes that much time, we mean that it takes no more than that much time, *and frequently takes something like that much time*.
So, by the engineer's definition of $\bigcirc$, if an algorithm takes $\bigcirc(n)$ time it does *not* take $\bigcirc(n^2)$ time.
Which is why we never need to use Knuth's $\large\Omega$.
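A rough formalization of this engineering usage (my paraphrase of the above, the lower bound hedged because it need only hold frequently):
$$f(n)=\bigcirc\big(g(n)\big)\iff\exists K_1,K_2>0:\ K_1g(n)\le f(n)\le K_2g(n)$$
for the values of $n$ of interest and larger, the lower bound holding for typical rather than all inputs.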

View File

@ -6,7 +6,7 @@ body {
font-variant: normal;
font-weight: normal;
font-stretch: normal;
font-size: 100%;
}
table {
border-collapse: collapse;
@ -45,7 +45,9 @@ td, th {
text-align: left;
}
pre.terminal_image {
font-family: 'Lucida Console';
background-color: #000;
color: #0F0;
font-size: 75%;
white-space: nowrap;
}

View File

@ -0,0 +1,3 @@
body {
font-size: 100%;
}

View File

@ -2,6 +2,28 @@
title: Scaling, trust and clients
...
The fundamental flaw of the blockchain architecture is that it is an immutable public ledger.
Every full peer on the blockchain has to know every transaction and every output of every transaction that ever there was.
As someone said when Satoshi first proposed what became bitcoin: “it does not seem to scale to the required size.”
And here we are now, fourteen years later, at rather close to that scaling limit. And for fourteen years, very smart people have been looking for a way to scale without limits.
And, at about the same time as we are hitting scalability limits, “public” is becoming a problem for fungibility. The fungibility crisis is hitting eth and is threatening bitcoin.
That the ledger is public enables the blood diamonds attack on crypto currency. Some transaction outputs could be deemed dirty, and rendered unspendable by centralized power. Eventually, to avoid being blocked, you have to make everything KYC, and then even though you are fully compliant, you are apt to get arbitrarily and capriciously blocked because the government, people in quasi government institutions, or random criminals on the revolving door between regulators and regulated decide they do not like you for some whimsical reason. I have from time to time lost small amounts of totally legitimate fiat money in this fashion, as international transactions become ever more difficult and dangerous, and recently lost an enormous amount of totally legitimate fiat money in this fashion.
Eth is highly centralized, and the full extent to which it is centralized and in bed with the state is now being revealed, as tornado eth gets demonetized.
Some people in eth are resisting this attack. Some are not.
Bitcoiners have long accused eth of being a shitcoin, which accusation is obviously false, but with the blood diamonds attack under way on eth, it is likely to become true. I have long regarded eth as likely to become a shitcoin, and that expectation may well come true shortly.
A highly centralized crypto currency is closer to being an unregulated bank than a crypto currency. Shitcoins are fraudulent unregulated banks posing as crypto currencies. Eth may well be about to turn into a regulated bank. When bitcoiners accuse eth of being a shitcoin, the truth in their accusation is dangerous centralization, and dangerous closeness to the authorities.
The advantage of crypto currency is that as elite virtue collapses, the regulated banking system becomes ever more lawless, arbitrary, corrupt, and unpredictable. An immutable ledger ensures honest conduct. But if a central authority has too much power over the crypto currency, they get to retroactively decide what the ledger means. Centralization is a central point of failure, and in a world of ever more morally debased and degenerate elites, will fail. Maybe Eth is failing now. If not, it will likely fail by and by.
# Scaling
The Bitcoin blockchain has become inconveniently large, and evaluating it
@ -155,11 +177,9 @@ with both privacy and scaling.
## zk-snarks
Zk-snarks, zeeks, are not yet a solution. They have enormous potential
benefits for privacy and scaling, but as yet, no one has quite found a way.
[performance survey of zksnarks](https://github.com/matter-labs/awesome-zero-knowledge-proofs#comparison-of-the-most-popular-zkp-systems)
A zk-snark is a succinct proof that code *was* executed on an immense pile
of data, and produced the expected, succinct, result. It is a witness that
someone carried out the calculation he claims he did, and that calculation
@ -167,24 +187,103 @@ produced the result he claimed it did. So not everyone has to verify the
blockchain from beginning to end. And not everyone has to know what
inputs justified what outputs.
As "zk-snark" is not a pronounceable work, I am going to use the word "zeek"
to refer to the blob proving that a computation was performed, and
produced the expected result. This is an idiosyncratic usage, but I just do
not like acronyms.
The innumerable privacy coins around based on zk-snarks are just not
doing what has to be done to make a zeek privacy currency that is viable
at any reasonable scale. They are intentionally scams, or by negligence,
unintentionally scams. All the zk-snark coins are doing the step from a set
$N$ of valid coins, valid unspent transaction outputs, to set $N+1$, in the
old fashioned Satoshi way, and sprinkling a little bit of zk-snark magic
privacy pixie dust on top (because the task of producing a genuine zeek
proof of coin state for step $N$ to step $N+1$ is just too big for them).
Which is, intentionally or unintentionally, a scam.
Not yet an effective solution for scaling the blockchain, for to scale the
blockchain, you need a concise proof that any spend in the blockchain was
only spent once, and while a zk-snark proving this is concise and
capable of being quickly evaluated by any client, generating the proof is
an enormous task.
### what is a Zk-stark or a Zk-snark?
Zk-snark stands for “Zero-Knowledge Succinct Non-interactive Argument of Knowledge.”
A zk-stark is the same thing, except “Transparent”, meaning it does not have
the “toxic waste problem”, a potential secret backdoor. Whenever you create
zk-snark parameters, you create a backdoor, and how do third parties know that
this backdoor has been forever erased?
zk-stark stands for Zero-Knowledge Scalable Transparent ARguments of Knowledge, where “scalable” means the same thing as “succinct”.
Ok, what is this knowledge that a zk-stark is an argument of?
Bob can prove to Carol that he knows a set of boolean values that
simultaneously satisfy certain boolean constraints.
This is zero knowledge because he proves this to Carol without revealing
what those values are, and it is “succinct” or “scalable”, because he can
prove knowledge of a truly enormous set of values that satisfy a truly
enormous set of constraints, with a proof that remains roughly the same
reasonably small size regardless of how enormous the set of values and
constraints are, and Carol can check the proof in a reasonably short time,
even if it takes Bob an enormous time to evaluate all those constraints over all those booleans.
Which means that Carol could potentially check the validity of the
blockchain without having to wade through terabytes of other people's
data in which she has absolutely no interest.
Which means that each peer on the blockchain would not have to
download the entire blockchain, keep it all around, and evaluate from the beginning. They could just keep around the bits they cared about.
The peers as a whole have to keep all the data around, and make certain
information about this data available to anyone on demand, but each
individual peer does not have to keep all the data around, and not all the
data has to be available. In particular, the inputs to the transaction do not
have to be available, only that they existed, were used once and only once,
and the output in question is the result of a valid transaction whose outputs
are equal to its inputs.
Unfortunately producing a zeek of such an enormous pile of data, with
such an enormous pile of constraints, could never be done, because the
blockchain grows faster than you can generate the zeek.
### zk-stark rollups, zeek rollups
Zk-stark rollups are a privacy technology and a scaling technology.
A zeek rollup is a zeek that proves that two or more other zeeks were verified.
Instead of Bob proving to Alice that he knows the latest block was valid, having evaluated every transaction, he proves to Alice that *someone* evaluated every transaction.
Fundamentally a ZK-stark proves to the verifier that the prover who generated the zk-stark knows a solution to an NP-complete problem. Unfortunately the proof is quite large, and the relationship between that problem and anything that anyone cares about is extremely elaborate and indirect. The proof is large and costly to generate, even if not that costly to verify, not that costly to transmit, not that costly to store.
So you need a language that will generate such a relationship. And then you can prove, for example, that a hash is the hash of a valid transaction output, without revealing the value of that output, or the transaction inputs.
But if you have to have such a proof for every output, that is a mighty big pile of proofs, costly to evaluate, costly to store the vast pile of data. If you have a lot of zk-snarks, you have too many.
So, rollups.
Instead of proving that you know an enormous pile of data satisfying an enormous pile of constraints, you prove you know two zk-starks.
Each of which proves that someone else knows two more zk-starks. And the generation of all these zk-starks can be distributed over all the peers of the entire blockchain. At the bottom of this enormous pile of zk-starks is an enormous pile of transactions, with no one person or one computer knowing all of them, or even very many of them.
Instead of Bob proving to Carol that he knows every transaction that ever there was, and that they are all valid, Bob proves that for every transaction that ever there was, someone knew that that transaction was valid. Neither Carol nor Bob knows who knew, or what was in that transaction.
You produce a proof that you verified a pile of proofs. You organize the information about which you want to prove stuff into a merkle tree, and the root of the merkle tree is associated with a proof that you verified the proofs of the direct children of that root vertex. And the proof of each of the children of that root vertex proves that someone verified their children. And so forth all the way down to the bottom of the tree, the origin of the blockchain, proofs about proofs about proofs about proofs.
And then, to prove that a hash is a hash of a valid transaction output, you just produce the hash path linking that transaction to the root of the merkle tree. So with every new block, everyone has to just verify one proof once. All the child proofs get thrown away eventually.
Which means that peers do not have to keep every transaction and every output around forever. They just keep some recent roots of the blockchain around, plus the transactions and transaction outputs that they care about. So the blockchain can scale without limit.
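A toy sketch of the hash-path check described above, in shell with sha256, ignoring the left/right ordering information a real Merkle path would carry (file names and variables are illustrative, not the actual on-chain format):
```bash
h=$(sha256sum transaction.bin | cut -d' ' -f1)  # leaf: hash of our transaction
for sibling in "${siblings[@]}"; do             # sibling hashes along the path to the root
    h=$(printf '%s%s' "$h" "$sibling" | sha256sum | cut -d' ' -f1)
done
[ "$h" = "$root" ] && echo "transaction is in the block"
```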
ZK-stark rollups are a scaling technology plus a privacy technology. If you are not securing people's privacy, you are keeping an enormous pile of data around that nobody cares about (except a hostile government), which means your scaling does not scale.
And, as we are seeing with Tornado, some people in Eth do not want that vast pile of data thrown away.
To optimize scaling to the max, you optimize privacy to the max. You want all data hidden as soon as possible, as completely as possible, so that everyone on the blockchain is not drowning in other people's data. The less anyone reveals, and the fewer the people they reveal it to, the better it scales, and the faster and cheaper the blockchain can do transactions, because you are pushing the generation of zk-starks down to the parties who are themselves directly doing the transaction. Optimizing for privacy is almost the same thing as optimizing for scalability.
The fundamental problem is that in order to produce a compact proof that
the set of coins, unspent transaction outputs, of state $N+1$ was validly
@ -205,21 +304,20 @@ problem of factoring, dividing the problem into manageable subtasks, but
it seems to be totally oblivious to the hard problem of incentive compatibility at scale.
Incentive compatibility was Satoshi's brilliant insight, and the client trust
problem, too many people running client wallets and not enough people
running full peers, is failure of Satoshi's solution to that problem to scale.
Existing zk-snark solutions fail at scale, though in a different way. With
zk-snarks, the client can verify the zeek, but producing a valid zeek in the
first place is going to be hard, and will rapidly get harder as the scale
increases.
A zeek that succinctly proves that the set of coins (unspent transaction
outputs) at block $N+1$ was validly derived from the set of coins at
block $N$, and can also prove that any given coin is in that set or not in that
set is going to have to be a proof about many, many, zeeks produced by
many, many machines, a proof about a very large dag of zeeks, each zeek
a vertex in the dag proving some small part of the validity of the step from
consensus state $N$ of valid coins to consensus state $N+1$ of valid coins,
and the owners of each of those machines that produced a tree vertex for
the step from set $N$ to set $N+1$ will need a reward proportionate
to the task that they have completed, and the validity of the reward will
need to be part of the proof, and there will need to be a market in those
rewards, with each vertex in the dag preferring the cheapest source of
@ -227,16 +325,6 @@ child vertexes. Each of the machines would only need to have a small part
of the total state $N$, and a small part of the transactions transforming state
$N$ into state $N+1$. This is hard but doable, but I am just not seeing it done yet.
I see good [proposals for factoring the work], but I don't see them
addressing the incentive compatibility problem. It needs a whole picture
design, rather than a part of the picture design. A true zk-snark solution
has to shard the problem of producing state $N+1$, the set of unspent
transaction outputs, from state $N$, so it should also shard the problem of
producing a consensus on the total set and order of transactions.
[proposals for factoring the work]:https://hackmd.io/@vbuterin/das
"Data Availability Sampling Phase 1 Proposal"
### The problem with zk-snarks
Last time I checked, [Cairo] was not ready for prime time.
@ -362,6 +450,20 @@ rocket and calling it a space plane.
[a frequently changing secret that is distributed]:multisignature.html#scaling
### How a fully scalable blockchain running on zeek rollups would work
A blockchain is of course a chain of blocks, and at scale, each block would be far too immense for any one peer to store or process, let alone the entire chain.
Each block would be a Merkle patricia tree, or a Merkle tree of a number of Merkle patricia trees, because we want the block to be broad and flat, rather than deep and narrow, so that it can be produced in a massively parallel way, created in parallel by an immense number of peers. Each block would contain a proof that it was validly derived from the previous block, and that the previous block's similar proof was verified. A chain is narrow and deep, but that does not matter, because the proofs are “scalable”. No one has to verify all the proofs from the beginning, they just have to verify the latest proofs.
Each peer would keep around the actual data and actual proofs that it cared about, and the chain of hashes linking the data it cared about to the Merkle root of the latest block.
All the immense amount of data in the immense blockchain that anyone
cares about would need to exist somewhere, but it would not have to exist
*everywhere*, and everyone would have a proof that the tiny part of the
blockchain that they keep around is consistent with all the other tiny parts
of the blockchain that everyone else is keeping around.
# sharding within each single very large peer
Sharding within a single peer is an easier problem than sharding the

View File

@ -15,6 +15,13 @@ that frequently strange and overcomplicated design decisions are made,
decisions), decisions whose only apparent utility is to provide paths for
hostile organizations to exploit subtle, complex, and unobvious security holes.
McAfee reported that this is a result of plants - the state plants engineers
in nominally private organizations to create backdoors. Shortly after he
reported this, he was arrested and murdered by the US government. (To be
precise, he was arrested at the instigation of the US government, and then
"mysteriously" murdered while in prison. Prison murders remain
"mysterious" only if carried out by the state.)
These holes are often designed so that they can only be utilized efficiently
by a huge organization with a huge datacentre that collects enormous
numbers of hashes and enormous amounts of data, and checks enormous
@ -66,9 +73,37 @@ Login identities shall have no password reset, because that is a security
hole. If people forget their password, they should just create a new login
that uses the same GPG key.
Every pull request should be made using `git request-pull` (rather than
some web UI, for the web UI is apt to identify people through the domain
name system and their login identities).
The start argument of `git request-pull` should correspond to a signed
commit by the person requested, and the end argument to a signed and
tagged commit by the person requesting.
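For example, a sketch (the tag name and repository url are illustrative; «start» is the signed commit of the person requested):
```bash
# tag the tip of the branch to be pulled, signing it and describing the changes
git tag -s «my-feature» -m "Lengthy description of the changes made"
git push origin «my-feature»
# generate the pull-request text against the signed commit «start»
git request-pull «start» «https://example.com/my-repo.git» «my-feature»
```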
When creating the tag for a pull request, git drops one into an editor and
asks one to describe the tag. One should then give a lengthy description of
one's pull request documenting the changes made. Or, better, the tag
should already contain a lengthy description of the pull request containing
the changes made.
When accepting a pull request, the information provided by the requestor
through the tag and elsewhere should be duplicated by the acceptor into
the (possibly quite lengthy) merge message. Or, better, if he can fast
forward the pull request, the tag will be the merge message, which will
lead to git recording a more intelligible and linear project history.
Thus all changes should be made, explained, and approved by persons
identified cryptographically, rather than through the domain name system.
It is preferable to simplify the history recorded in git by rebasing the
changes in the branch that you want pulled to the most recent version of the branch that you want it pulled into. This produces an artificially linear
history, which is likely to be more intelligible and informative than the
actual history. In particular, more intelligible to the person pulling.
# No race, sex, religion, nationality, or sexual preference
![On the internet nobody knows you are a dog](../images/nobody_know_you_are_a_dog.webp)
Everyone shall be white, male, heterosexual, and vaguely Christian, even
if they quite obviously are not, but no one shall unnecessarily and

View File

@ -2,19 +2,42 @@
title:
Set up build environments
...
# partitioning for linux
For a gpt partition table: a sixteen MiB fat32 partition with boot and efi flags
set, one gigabyte of linux swap, and the rest your ext4 root file system.
With an efi-gpt partition table, efi handles multiboot, so if you have
windows, you are going to need a bigger boot-efi partition. (grub takes a bit
over four MiB.)
For an ms-dos (non efi) partition table: a five hundred and twelve MiB ext4
partition with the boot flag set (linux uses 220 MiB), one gigabyte of linux swap,
and the rest your ext4 root file system.
In `gparted`, an msdos partition table for a linux system should look
something like this:
![msdos partition table](../images/msdos_linux_partition.webp)
And a gpt partition table for a linux system should look something like this:
![gpt partition table](../images/gpt_partitioned_linux_disk.webp)
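A sketch of the gpt layout using `parted`, assuming the target disk is `/dev/sda` and you are working from a live environment (the device name is an assumption):
```bash
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary fat32 1MiB 17MiB          # sixteen MiB boot-efi
parted -s /dev/sda set 1 boot on
parted -s /dev/sda set 1 esp on
parted -s /dev/sda mkpart primary linux-swap 17MiB 1041MiB  # one gigabyte swap
parted -s /dev/sda mkpart primary ext4 1041MiB 100%         # the rest: ext4 root
mkfs.vfat /dev/sda1 && mkswap /dev/sda2 && mkfs.ext4 /dev/sda3
```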
# Virtual Box
To build a cross platform application, you need to build in a cross
platform environment.
## Setting up Ubuntu in VirtualBox
Having a whole lot of different versions of different machines, with a
whole lot of snapshots, can suck up a remarkable amount of disk space
mighty fast. Even if your virtual disk is quite small, your snapshots wind
up eating a huge amount of space, so you really need some capacious disk
drives. And you are not going to be able to back up all this enormous stuff,
so you have to document how to recreate it.
Each snapshot that you intend to keep around long term needs to
correspond to a documented path from install to that snapshot.
@ -43,16 +66,19 @@ Debian especially tends to have security in place to stop random people
from sticking in CDs that get root access to the OS to run code to amend
the OS in ways the developers did not anticipate.
## Setting up Debian in VirtualBox
### Guest Additions
To install guest additions on Debian:
```bash
sudo -i
apt-get -qy update && apt-get -qy install build-essential module-assistant git dnsutils curl sudo dialog rsync
apt-get -qy full-upgrade
m-a -qi prepare
apt autoremove
mount /media/cdrom0
cd /media/cdrom0 && sh ./VBoxLinuxAdditions.run
usermod -a -G vboxsf cherry
```
@ -65,9 +91,7 @@ system updates in the background, the system will not shut
down correctly, and guest additions has to be reinstalled with a
`shutdown -r`. Or copy and paste mysteriously stops working.
### auto gui login
To set automatic login on lightdm-mate
@ -91,23 +115,33 @@ autologin-user=cherry
autologin-user-timeout=0
```
### grub timeout
```bash
nano /etc/default/grub
```
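Presumably to shorten the boot menu delay: set, for example, `GRUB_TIMEOUT=1` (the exact value is a matter of taste, not from the source), then apply the change with:
```bash
update-grub
```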
### autostart preferred programs
To set things to autostart on gui login under Mate and KDE Plasma create
the directory `~/.config/autostart` and copy the appropriate `*.desktop`
files into it from `/usr/share/applications` or
`~/.local/share/applications`.
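For example (the `.desktop` file name is illustrative):
```bash
mkdir -p ~/.config/autostart
cp /usr/share/applications/«some-program».desktop ~/.config/autostart/
```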
### Don't let the screen saver log you out.
On Debian lightdm mate, go to system/ control center/ Look and Feel/ Screensaver and turn off the screensaver screen lock.
Go to system/ control center/ Hardware/ Power Management and turn off the computer and screen sleep.
### setup ssh server
In the shared directory, I have a copy of /etc and ~/.ssh ready to roll, so I just go into the shared directory, copy them over, `chmod` .ssh and reboot.
```bash
chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
```
I cannot do it all from within the destination machine, because linux cannot follow windows symbolic links.
### Set the hostname
Check the hostname and dns domain name with
@ -119,8 +153,9 @@ hostname && domainname -s && hostnamectl status
And if need be, set them with
```bash
fn=reaction.la
domainname -b $fn
hostnamectl set-hostname $fn
```
Your /etc/hosts file should contain
@ -152,32 +187,273 @@ ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
Note that visual studio remote compile requires an `ecdsa-sha2-nistp256` key on the host machine that it is remote compiling for. If it is nist, it is
backdoored.
### .bashrc
If the host has a domain name, the default in `/etc/bash.bashrc` will not display it in full at the prompt, which can lead to you being confused about which host on the internet you are commanding.
```bash
nano /etc/bash.bashrc
```
Change the lower case `h` in `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '` to an upper case `H`
```text
PS1='${debian_chroot:+($debian_chroot)}\u@\H:\w\$ '
```
I also like the bash aliases:
```text
alias ll="ls -hal"
mkcd() { mkdir -p "$1" && cd "$1"; }
```
Setting them in `/etc/bash.bashrc` sets them for all users, including root. But the default `~/.bashrc` is apt to override the change of `H` for `h` in `PS1`.
### fstab
The line in fstab for optical disks needs to be given the options `udf,iso9660 ro,users,auto,nofail` so that it automounts, and any user can eject it.
Confusingly, `nofail` means that it is allowed to fail, which of course it will
if there is nothing in the optical drive.
`user,noauto` means that the user has to mount it, and only the user that
mounted it can unmount it. `user,auto` is likely to result in root mounting it,
and if `root` mounted it, as it probably did, you have a problem. Which
problem is fixed by saying `users` instead of `user`.
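For example, a sketch of such a line (device and mount point assumed from a stock Debian install):
```default
/dev/sr0  /media/cdrom0  udf,iso9660  ro,users,auto,nofail  0  0
```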
## Setting up OpenWrt in VirtualBox
OpenWrt is a router, and needs a network to route. So you use it to route a
virtual box internal network.
Ignore the instructions on the OpenWrt website for setting up in Virtual
Box. Those instructions are wrong and do not work. Kind of obvious that
they are not going to work, since they do not provide for connecting to an
internal network that would need its own router. They suffer from a basic
lack of direction, purpose, and intent.
Download the appropriate gzipped image file, expand it to an image file, and convert to a vdi file.
You need an [x86 64 bit version of OpenWrt](https://openwrt.org/docs/guide-user/installation/openwrt_x86). There are four versions of them, squashed and not squashed, efi and not efi. Not efi is more likely to work and not squashed is more likely to work, but only squashed supports automatic updates of the kernel.
In git bash terminal
```bash
gzip -d openwrt-*.img.gz
/c/"Program Files"/Oracle/VirtualBox/VBoxManage convertfromraw --format VDI openwrt-22.03.3-x86-64-generic-ext4-combined.img openwrt-generic-ext4-combined.vdi
```
Add the vdi to oracle media using the oracle media manager.
The resulting vdi file may have things wrong with it that would prevent it from booting, but viewing it in gparted will normalize it.
Create a virtual computer, name openwrt, type linux, version Linux 2.6, 3.x, 4.x, 5.x (64 bit). The first network adaptor in it should be internal, the second one should be NAT or bridged.
Boot up openwrt headless, and any virtual machine on the internal network should just work. From any virtual machine on the internal network, configure the router at http://192.168.1.1
## Virtual disks
The first virtual disk attached to a virtual machine is `/dev/sda`, the second
is `/dev/sdb`, and so on and so forth.
This does not necessarily correspond to the order in which virtual drives have
been attached to the virtual machine.
Be warned that the debian setup, when it encounters multiple partitions
that have the same UUID, is apt to make seemingly random decisions as to which partitions to mount to what.
The problem is that virtual box clone does not change the partition UUIDs. To address this, attach the disk to another linux system without mounting it, and change the UUIDs with `gparted`. Which will frequently refuse to change a UUID, because it knows
better than you do: it will not do anything that would screw up grub.
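A command line alternative, a sketch assuming the cloned disk is attached, but not mounted, as `/dev/sdb`:
```bash
tune2fs -U random /dev/sdb1          # fresh UUID for an ext4 partition
swaplabel -U "$(uuidgen)" /dev/sdb2  # fresh UUID for a swap partition
```
After which the `/etc/fstab` on that disk, and any grub.cfg that searches by UUID, must be edited to match.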
`boot-repair` can fix a `grub` on the boot drive of a linux system different
from the one it itself booted from, but to boot a cdrom on an oracle virtual
box efi system, you cannot have anything attached to SATA. Attach the disk
immediately after the boot-repair grub menu comes up.
The resulting repaired system may nonetheless take a strangely long time
to boot, because it is trying to resume a suspended linux, which may not
be supported on your device.
`boot-repair` and `update-initramfs` make a wild assed guess that if it sees
what looks like a swap partition, it is probably on a laptop that supports
suspend/resume. If this guess is wrong, you are in trouble.
If it is not supported, this leads to a strangely long boot delay while grub
waits for the resume data that was stored to the swap file:
```bash
#to fix long waits to resume a nonexistent suspend
sudo -i
swapoff -a
update-initramfs -u
shutdown -r now
```
If you have a separate boot partition in an `efi` system then the `grub.cfg` in `/boot/efi/EFI/debian` (not to be confused with all the other `grub.cfgs`)
should look like
```terminal_image
search.fs_uuid «8943ba15-8939-4bca-ae3d-92534cc937c3» boot hd0,gpt«4»
set prefix=($boot)'/grub'
configfile $prefix/grub.cfg
```
Where the «funny brackets», as always, indicate mutatis mutandis.
Should you dig all the way down to the efi boot menu, which boots grub,
which then boots the real grub, the device identifier used corresponds to
the PARTUUID in
`lsblk -o name,type,size,fstype,mountpoint,UUID,PARTUUID` while linux uses the UUID.
If you attach two virtual disks representing two different linux
systems, with the same UUIDs, to the same sata controller while powered
down, a big surprise is likely on powering up. Attaching one of them to
virtio will evade this problem.
But a better solution is to change all the UUIDs, since every piece of software expects them to be unique, and edit `/etc/fstab` accordingly. Which will probably stop grub from booting your system, because in grub.cfg it is searching for the /boot or / by UUID.
However, sometimes one can add one additional virtual disk to a sata
controller after the system has powered up, which will produce no
surprises, for the disk will be attached but not mounted.
So cheerfully attaching one linux disk to another linux system so that you
can manipulate one system with the other may well have surprising,
unexpected, and highly undesirable results.
What decisions it has in fact made are revealed by `lsblk`.
If one wants to add several attached disks without surprises, then while
the virtual machine is powered down, attach the virtio-scsi controller,
and a bunch of virtual hard disks to it. The machine will then boot up with
only the sata disk mounted, as one would expect, but the disks attached to
the virtio controller will get attached as the ids /dev/sda, /dev/sdb,
/dev/sdc, etc, while the sata disk gets mounted, but surprisingly gets the
last id, rather than the first.
After one does what is needful, power down and detach the hard disks, for
if a hard disk is attached to multiple systems, unpleasant surprises are
likely to ensue.
So when you attach a foreign linux disk by sata to another linux system,
attach after it has booted, and detach before you shutdown, to ensure
predictable and expected behavior.
This, however, only seems to work with efi sata drives, so one can only
attach one additional disk after it has booted.
Dynamic virtual disks in virtual box can be resized, and copied to a
different (larger) size.
Confusingly, the documentation and the UI do not distinguish between
dynamic and fixed sized virtual disks - so the UI to change a fixed sized
disk's size, or to copy it to a disk of different size, is there, but has
absolutely no effect.
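For example, a sketch (the file name is illustrative, and the size is given in MB):
```bash
# grow a dynamic vdi to roughly 64 GB; on a fixed-size disk this has no effect
VBoxManage modifyhd thediskfile.vdi --resize 65536
```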
Having changed the virtual disk size in the host system, you then want to
change the partition sizes using gparted, which requires the virtual disk to
be attached, but not mounted, to another guest virtual machine in which
you will run `gparted`.
Over time, dynamic virtual disks occupy more and more physical storage,
because more and more sectors become non-zero, even though unused.
You attach the virtual disk that you want to shrink to another guest OS as
`/dev/sdb`, which is attached but not mounted, and, in the other guest OS
`zerofree /dev/sdb1` which will zero the free space on partition 1. (And
similarly for any other linux file system partitions)
You run `zerofree`, like gparted, in another guest OS, one that is mounted
on `/dev/sda`, while the disk whose partitions you are zeroing is attached,
but not mounted, as `/dev/sdb`.
You can then shrink it in the host OS with
```bash
VBoxManage modifyhd -compact thediskfile.vdi
```
or make a copy that will be smaller than the original.
To resize a fixed sized disk you have to make a dynamic copy, then run
gparted (on the other guest OS, you don't want to muck with a mounted
file system using gparted, it is dangerous and broken) to shrink the
partitions if you intend to shrink the virtual disk, resize the dynamic copy
in the host OS, then, if you expanded the virtual disk run gparted to expand
the partitions.
To modify the size of a guest operating system virtual disk, you need that
OS not running, and two other operating systems, the host system and a
second guest operating system. You attach, but not mount, the disk to a
second guest operating system so that you can run zerofree and gparted in
that guest OS.
And now that you have a dynamic disk that is a different size, you can
create a fixed size copy of it using virtual media manager in the host
system. This, however, is an impractically slow and inefficient process for
any large disk. For a one terabyte disk, it takes a couple of days: a day or
so to initialize the new virtual disk, during which the progress meter shows
zero progress, and another day or so to actually do the copy, during which
the progress meter very slowly increases.
Cloning a fixed sized disk is quite fast, and a quite reasonable way of
backing stuff up.
To list block devices `lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID`.
To mount an attached disk, create an empty directory, normally under
`mnt`, and `mount /dev/sdb3 /mnt/newvm`
For example:
```terminal_image
root@example.com:~#lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID
NAME TYPE SIZE FSTYPE MOUNTPOINT UUID
sda disk 20G
├─sda1 part 33M vfat /boot/efi E470-C4BA
├─sda2 part 3G swap [SWAP] 764b1b37-c66f-4552-b2b6-0d48196198d7
└─sda3 part 17G ext4 / efd3621c-63a4-4728-b7dd-747527f107c0
sdb disk 20G
├─sdb1 part 33M vfat E470-C4BA
├─sdb2 part 3G swap 764b1b37-c66f-4552-b2b6-0d48196198d7
└─sdb3 part 17G ext4 efd3621c-63a4-4728-b7dd-747527f107c0
sr0 rom 1024M
root@example.com:~# mkdir -p /mnt/sdb3
root@example.com:~# mount /dev/sdb3 /mnt/sdb3
root@example.com:~# ls -hal /mnt/sdb3
drwxr-xr-x 20 root root 4.0K Dec 12 06:55 .
drwxr-xr-x 5 root root 4.0K Dec 20 16:02 ..
drwxr-xr-x 4 root root 4.0K Dec 12 06:27 dev
drwxr-xr-x 119 root root 4.0K Dec 20 12:58 etc
drwxr-xr-x 3 root root 4.0K Dec 12 06:32 home
drwxr-xr-x 3 root root 4.0K Dec 12 06:27 media
drwxr-xr-x 2 root root 4.0K Dec 12 06:27 mnt
drwxr-xr-x 11 root root 4.0K Dec 12 06:27 var
```
When backing up from one virtual hard drive to another very similar one,
mount the source disk with `mount -r`.
We are not worried about permissions and symlinks, so use `rsync -rcv --inplace --append-verify`
If worried about permissions and symlinks, `rsync -acv --inplace --append-verify`.
There is some horrid bug with `rsync -acv --inplace --append-verify` that makes it excruciatingly slow if you are copying a lot of data.
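So in practice, a sketch of a backup pass using the first form (device names and mount points are illustrative):
```bash
mkdir -p /mnt/src /mnt/dst
mount -r /dev/sdb3 /mnt/src    # source, mounted read-only
mount /dev/sdc3 /mnt/dst       # destination
rsync -rcv --inplace --append-verify /mnt/src/ /mnt/dst/
```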
`cp -vuxr «source-dir»/«.bit*» «dest-dir»` should have similar effect,
but perhaps considerably faster, but it checks only the times, which may
be disastrous if you have been using your backup live any time after you
used the master live. After backing up, run your backup live once briefly,
before using the backed up master, then never again till the next backup.
# Actual server
Setting up an actual server is similar to setting up the virtual machine
modelling it, except you have to worry about the server getting overloaded
and locking up.
## disable password entry
On an actual server, it is advisable to enable passwordless sudo for one user.
Issue the command `visudo` and edit the sudoers file to contain the line:
@ -186,25 +462,16 @@ issue the command `visudo` and edit the sudoers file to contain the line:
cherry ALL=(ALL) NOPASSWD:ALL
```
That user can now sudo any root command, with no password login nor
ssh in for root. And can also get into the root shell with `sudo su -l root`.
On an actual server, you may want to totally disable passwords to accounts
that have sensitive information. Unfortunately any method for totally
disabling passwords is likely to totally disable ssh login, because the
people writing the software have "helpfully" decided that that is what you
probably intended, even though it is seldom what people want, intend, or
expect. So the nearest thing you can do is set a long, random, non
memorable password, and forget it.
## never enough memory
@ -369,19 +636,53 @@ of (multi-)user utilities and applications.
## Setting up ssh
When your hosting service gives you a server, you will probably initially
have to control it by password. And not only is this unsafe, but lots of
utilities fail to work with passwords, and your local ssh client may well fail
to do a password login, endlessly offering public keys, when no
`~/.ssh/authorized_keys` file yet exists on the freshly created server.
To force your local client to employ passwords:
```bash
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -o StrictHostKeyChecking=no root@«server»
```
And then the first thing you do on the freshly initialized server is
```bash
apt update -qy
apt upgrade -qy
shutdown -r now && exit
```
And the *next* thing you do is login again and set up login by ssh key,
because if you make changes and *then* update, things are likely to break
(because your hosting service likely installed a very old version of linux).
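One way to set up login by ssh key, a sketch (assumes you generated an ed25519 key pair as described below):
```bash
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@«server»
```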
Login by password is second class, and there are a bunch of esoteric
special cases where it does not quite 100% work in all situations,
because stuff wants to auto log you in without asking for input.
Putty is the windows ssh client, but you can use the Linux ssh client in
windows in the git bash shell, which is way better than putty, and the
Linux remote file copy utility `scp` is way better than the putty utility
`PSFTP`, and the Linux remote file copy utility `rsync` way better than
either of them, though unfortunately `rsync` does not work in the windows bash shell.
The filezilla client works natively on both windows and linux, and it is a very good gui file copy utility that, like scp and rsync, works by ssh (once you set up the necessary public and private keys). Unfortunately on windows, it insists on putty format private keys, while the git bash shell for windows wants linux format keys.
Usually a command line interface is a pain and error prone, with a
multitude of mysterious and inexplicable options and parameters, and one
typo or out of order command causing your system to unrecoverably
die, but even though Putty has a windowed interface, the command line
interface of bash is easier to use.
(The gui interface of filezilla is the easiest to use, but I tend not to bother
setting up the putty keys for it, and wind up using rsync linux to linux,
which, like all command line interfaces, is more powerful, but more difficult
and dangerous.)
It is easier in practice to use the bash (or, on Windows, git-bash) to manage keys than PuTTYgen. You generate a key pair with
```bash
@ -419,7 +720,7 @@ I make sure auto login works, which enables me to make `ssh` do all sorts of
things, then I disable ssh password login, restrict the root login to only be
permitted via ssh keys.
In order to do this, open up the `sshd_config` file (which is ssh daemon
config, not ssh_config. If you edit this into the ssh_config file
everything goes to hell in a handbasket. ssh_config is the global
.ssh/config file):
@ -431,18 +732,18 @@ nano /etc/ssh/sshd_config
Your config file should have in it
```default
X11Forwarding yes
AllowAgentForwarding yes
AllowTcpForwarding yes
TCPKeepAlive yes
HostKey /etc/ssh/ssh_host_ed25519_key
PermitTunnel yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
PermitRootLogin prohibit-password
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
kexalgorithms curve25519-sha256
@ -450,6 +751,16 @@ pubkeyacceptedkeytypes ssh-ed25519
hostkeyalgorithms ssh-ed25519
hostbasedacceptedkeytypes ssh-ed25519
casignaturealgorithms ssh-ed25519
# no default banner path
Banner none
PrintMotd no
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
```
`PermitRootLogin` defaults to prohibit-password, but best to set it
@ -1137,7 +1448,8 @@ map to the old server, until the new server works.)
```bash
apt-get -qy install certbot python-certbot-nginx
certbot register --register-unsafely-without-email --agree-tos
certbot run -a manual --preferred-challenges dns -i nginx \
    -d reaction.la -d blog.reaction.la
nginx -t
``` ```
@ -1145,13 +1457,23 @@ This does not set up automatic renewal. To get automatic renewal going,
you will need to renew with the `webroot` challenge rather than the `manual`
once DNS points to this server.
This, `--preferred-challenges dns`, also allows you to set up wildcard
certificates, but it is a pain, and does not support automatic renewal.
Automatic renewal of wildcards requires the cooperation of
certbot and your dns server, and is different for every organization, so only
the big boys can play.
But if you are doing this, not on your test server, but on your live server, the easy way, which will also setup automatic renewal and configure your webserver to be https only, is:
```bash
certbot --nginx -d \
mail.reaction.la,blog.reaction.la,reaction.la,\
www.reaction.la,www.blog.reaction.la,\
gitea.reaction.la,git.reaction.la
```
If instead you already have a certificate, because you copied over your
`/etc/letsencrypt` directory
```bash
apt-get -qy install certbot python-certbot-nginx
@ -1749,16 +2071,16 @@ apt -qy install postfix
```
Near the end of the installation process, you will be presented with a window that looks like the one in the image below:
![Initial Config Screen](../images/postfix_cfg1.webp){width=100%}
If `<Ok>` is not highlighted, hit tab.
Press `ENTER` to continue.
The default option is **Internet Site**, which is preselected on the following screen:
![Config Selection Screen](./images/postfix_cfg2.webp){width=100%} ![Config Selection Screen](../images/postfix_cfg2.webp){width=100%}
Press `ENTER` to continue. Press `ENTER` to continue.
After that, youll get another window to set the domain name of the site that is sending the email: After that, youll get another window to set the domain name of the site that is sending the email:
![System Mail Name Selection](./images/postfix_cfg3.webp){width=100%} ![System Mail Name Selection](../images/postfix_cfg3.webp){width=100%}
The `System mail name` should be the same as the name you assigned to the server when you were creating it. When youve finished, press `TAB`, then `ENTER`. The `System mail name` should be the same as the name you assigned to the server when you were creating it. When youve finished, press `TAB`, then `ENTER`.
You now have Postfix installed and are ready to modify its configuration settings. You now have Postfix installed and are ready to modify its configuration settings.
@@ -2855,7 +3177,7 @@ when your subkey expires.

```bash
save
gpg --list-keys --with-subkey-fingerprints --with-keygrip «master key»
gpg -a --export «master key»
gpg -a --export-secret-keys «master key»
```
setup/wireguard.md
@@ -43,6 +43,11 @@ Supersedes OpenVPN and IPSec, which are obsolete and insecure.

I assume you have a host in the cloud, with world accessible network address and ports, that can access blocked websites freely outside of your country or Internet filtering system.
We are going to enable IPv4 and IPv6 on our vpn. The tutorial assumes IPv6 is working. Check that it *is* working by pinging your server, `ping -6 «server»`, then ssh in to your server and attempt to `ping -6 «something»`.

It may well happen that your server is supposed to have an IPv6 address and /64 IPv6 subnet, but something is broken.
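To see what global IPv6 addresses and default route the server actually has, the standard iproute2 commands suffice (a quick sanity check, not part of the setup proper):

```bash
ip -6 addr show scope global
ip -6 route show default
```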
The VPN server is running the Debian 11 operating system. This tutorial is not
going to work on Debian 10 or lower. Accessing your vpn from a windows
client, however, is easy, since the windows wireguard client is very
@@ -50,6 +55,77 @@ friendly. Setting up wireguard on windows is easy. Setting up a wireguard
VPN server on windows is, on the other hand, very difficult. Don't even
try. I am unaware of anyone succeeding.
## Make sure you have control of nameservice

No end of people are strangely eager to provide free nameservice. If it is a
free service, you are the product. And some of them have sneaky ways to get
you to use their nameservice whether you want it or not.
Nameservice reveals which websites you are visiting. We are going to set up
our own nameserver for the vpn clients, but it will have to forward to a
bigger nameserver, thus revealing which websites the clients are visiting,
though not which client is visiting them. Lots of people are strangely eager
to know which websites you are visiting. If you cannot control your
nameservice, then when you set up your own nameserver, it is likely to
behave strangely.
No end of people's helpful efforts to help you automatically set up
nameservice are likely to foul up your nameservice for your vpn clients.
```bash
cat /etc/resolv.conf
```
Probably at least two of them are google, which logs everything and
shares the data with the Global American Empire, and the other two are
mystery meat. Maybe good guys provided by your good guy ISP, but I
would not bet on it. Your ISP probably went along with his ISP, and his
ISP may be in the pocket of your enemies.
I use Yandex.com resolvers, since Russia is currently in a state of proxy
war with the Global American Empire which is heading into flat out war,
and I do not care if the Russian government knows which websites I visit,
because it is unlikely to share that data with the five eyes.
So for me:
```terminal_image
cat /etc/resolv.conf
nameserver 2a02:6b8::feed:0ff
nameserver 2a02:6b8:0:1::feed:0ff
nameserver 77.88.8.8
nameserver 77.88.8.1
```
Of course your mileage may vary, depending on which enemies you are
worried about, and what the political situation is when you read this (it
may well change radically in the near future). Read up on the resolvers'
privacy policies, but apply appropriate cynicism. Political alignments and
vulnerability to power matter more than professed good intentions.
We are going to change this when we set up our own nameserver for the
vpn clients, but if you don't have control, things are likely to get strange.
You cannot necessarily change your nameservers by editing
`/etc/resolv.conf`, since no end of processes are apt to rewrite that file
during boot up. Changing your nameservers depends on how your linux is
set up, but editing `/etc/resolv.conf` currently works on the standard
distribution, though it may well cease to work when you add more software.
If it does not work, maybe you need to subtract some software, but it is
hard to know what software. A clean fresh install may be needed.
It all depends on which module of far too many modules gets the last
whack at `/etc/resolv.conf` on bootup. Far too many people display a
curious and excessive interest in controlling what nameserver you are
using, and if they have their claw in your linux distribution, you are going
to have to edit the configuration files of that module.
If something is whacking your `/etc/resolv.conf`, install `openresolv`,
which will generally make sure it gets the last whack, and edit its
configuration files. Or install a distribution where you *can* control
nameservice by editing `/etc/resolv.conf`.
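A minimal sketch of the `openresolv` route, assuming its usual `/etc/resolvconf.conf` configuration file; the resolver addresses here are the Yandex examples above, substitute your own:

```bash
apt-get -qy install openresolv
# resolvconf.conf is sourced as shell; name_servers pins the resolver list
echo 'name_servers="77.88.8.8 77.88.8.1 2a02:6b8::feed:0ff"' >> /etc/resolvconf.conf
resolvconf -u   # regenerate /etc/resolv.conf from that configuration
cat /etc/resolv.conf
```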
# Install WireGuard on Debian Client and server

@@ -79,6 +155,8 @@ sudo chmod 600 /etc/wireguard/ -R

## Create WireGuard Server Configuration File
This configuration file is for several clients, one of which is a bitcoin peer for which port forwarding is provided. It gives them nat translated IPv4 addresses, and IPv6 addresses on a random /112 subnet of the vpn server's /64 subnet. Adjust to taste. IPv6 is tricky.
Use a command-line text editor like Nano to create a WireGuard configuration file on the Debian server. `wg0` will be the network interface name.

@@ -89,6 +167,30 @@ Copy the following text and paste it to your configuration file. You need to use

The curly braces mean that you do not copy the text inside the curly braces, which is only there for example. You have to substitute your own private key (since everyone now knows this private key), and your own client public key, mutatis mutandis.
```default
[Interface]
# public key = CHRh92zutofXTapxNRKxYEpxzwKhp3FfwUfRYzmGHR4=
Address = 10.10.10.1/24, 2405:4200:f001:13f6:7ae3:6c54:61ab:0001/112
ListenPort = 115
PrivateKey = iOdkQoqm5oyFgnCbP5+6wMw99PxDb7pTs509BD6+AE8=
[Peer]
PublicKey = rtPdw1xDwYjJnDNM2eY2waANgBV4ejhHEwjP/BysljA=
AllowedIPs = 10.10.10.4/32, 2405:4200:f001:13f6:7ae3:6c54:61ab:0009/128
[Peer]
PublicKey = YvBwFyAeL50uvRq05Lv6MSSEFGlxx+L6VlgZoWA/Ulo=
AllowedIPs = 10.10.10.8/32, 2405:4200:f001:13f6:7ae3:6c54:61ab:0019/128
[Peer]
PublicKey = XpT68TnsSMFoZ3vy/fVvayvrQjTRQ3mrM7dmyjoWJgw=
AllowedIPs = 10.10.10.12/32, 2405:4200:f001:13f6:7ae3:6c54:61ab:0029/128
[Peer]
PublicKey = f2m6KRH+GWAcCuPk/TChzD01fAr9fHFpOMbAcyo3t2U=
AllowedIPs = 10.10.10.16/32, 2405:4200:f001:13f6:7ae3:6c54:61ab:0039/128
```
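If you still need to generate the key pairs that the `PrivateKey` and `PublicKey` fields refer to, the standard wireguard-tools commands are as follows, one pair per machine (the file names are illustrative):

```bash
wg genkey | tee server_private.key | wg pubkey > server_public.key
wg genkey | tee client0_private.key | wg pubkey > client0_public.key
chmod 600 *.key
```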
```default
[Interface]
Address = 10.10.10.1/24
```

@@ -145,13 +247,18 @@ Next, find the name of your server's main network interface.

```bash
ip addr | grep BROADCAST
server_network_interface=$(ip addr | grep BROADCAST |sed -r "s/.*:[[:space:]]*([[:alnum:]]+)[[:space:]]*:.*/\1/")
echo $server_network_interface
```

As you can see, it's named `eth0` on my Debian server.
```terminal_image
:~# ip addr | grep BROADCAST
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
:~# server_network_interface=$(ip addr | grep BROADCAST |sed -r "s/.*:[[:space:]]*([[:alnum:]]+)[[:space:]]*:.*/\1/")
:~# echo $server_network_interface
eth0
```
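If that grep-and-sed pipeline misfires on your interface naming, an alternative is to key off the default route instead, on the assumption that the default route goes out the main interface:

```bash
server_network_interface=$(ip route show default | awk '{print $5}')
echo $server_network_interface
```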
To configure IP masquerading, we have to add an iptables command in a UFW configuration file.

@@ -202,7 +309,7 @@ The above lines will append `-A` a rule to the end of the `POSTROUTING` chain of

Like your home router, it means your client system behind the nat has no open ports.

If you want to open some ports, for example the bitcoin port 8333 so that you can run bitcoin core, and the monero ports:
```terminal_image
NAT table rules
@@ -210,8 +317,11 @@ NAT table rules
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p tcp --dport 8333 -j DNAT --to-destination 10.10.10.«5»:8333
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p udp --dport 8333 -j DNAT --to-destination 10.10.10.«5»:8333
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p tcp --dport 18080 -j DNAT --to-destination 10.10.10.«5»:18080
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p tcp --dport 18089 -j DNAT --to-destination 10.10.10.«5»:18089
COMMIT
```
@@ -220,20 +330,28 @@ Then open the corresponding ports in ufw

```bash
ufw allow in 8333
ufw enable
ufw status verbose
```
If you have made an error in `/etc/ufw/before6.rules`, enable will fail.

If you have enabled UFW before, then you can use systemctl to restart UFW.
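If you also forwarded the monero ports in the example above, allow them too, mutatis mutandis (18080 and 18089 are the ports assumed in that example):

```bash
ufw allow in 18080
ufw allow in 18089
ufw reload
```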
## Configure forwarding on the Server

### Allow routing

By default, UFW forbids packet forwarding. We can allow forwarding for our private network, mutatis mutandis.
```bash
ufw route allow in on wg0
ufw route allow out on wg0
ufw allow in on wg0
ufw allow in from 10.10.10.0/24
ufw allow in from 2405:4200:f001:13f6:7ae3:6c54:61ab:0001/112
ufw allow «51820»/udp
ufw allow to 10.10.10.1/24
ufw allow to 2405:4200:f001:13f6:7ae3:6c54:61ab:0001/112
```
As always «...» means that this is an example value, and you need to substitute your actual value. "_Mutatis mutandis_" means "changing that which should be changed", in other words, watch out for those «...».
@@ -250,6 +368,28 @@ windows, mac, and android clients in the part that is not open.

`wg0` is the virtual network card that `wg0.conf` specifies. If you called it `«your name».conf` then mutatis mutandis.
### Enable routing
You just told ufw to allow your vpn clients to see each other on the internet, but allowing routing does not in itself result in any routing.
To actually enable routing, edit the system kernel configuration file, and uncomment the following lines. `nano /etc/sysctl.conf`
```terminal_image
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
```
For these changes to take effect:
```bash
sysctl -p
```
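To confirm the settings took, both of these should now print 1:

```bash
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding
```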
Now if you list the rules in the POSTROUTING chain of the NAT table by using the following command:
@@ -284,14 +424,25 @@ Sample output:

```terminal_image
:~$ systemctl status bind9
● named.service - BIND Domain Name Server
     Loaded: loaded (/lib/systemd/system/named.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-09-21 20:14:33 EDT; 6min ago
       Docs: man:named(8)
   Main PID: 1079 (named)
      Tasks: 5 (limit: 1132)
     Memory: 16.7M
        CPU: 86ms
     CGroup: /system.slice/named.service
             └─1079 /usr/sbin/named -f -u bind
Sep 21 20:14:33 rho.la named[1079]: command channel listening on ::1#953
Sep 21 20:14:33 rho.la named[1079]: managed-keys-zone: loaded serial 0
Sep 21 20:14:33 rho.la named[1079]: zone 0.in-addr.arpa/IN: loaded serial 1
Sep 21 20:14:33 rho.la named[1079]: zone 127.in-addr.arpa/IN: loaded serial 1
Sep 21 20:14:33 rho.la named[1079]: zone 255.in-addr.arpa/IN: loaded serial 1
Sep 21 20:14:33 rho.la named[1079]: zone localhost/IN: loaded serial 2
Sep 21 20:14:33 rho.la named[1079]: all zones loaded
Sep 21 20:14:33 rho.la named[1079]: running
Sep 21 20:14:33 rho.la named[1079]: managed-keys-zone: Initializing automatic trust anchor management for zone '.'; >
Sep 21 20:14:33 rho.la named[1079]: resolver priming query complete
```

If it's not running, start it with:

@@ -300,30 +451,74 @@ If it's not running, start it with:

```bash
systemctl start bind9
```
Check that lookups still work:
```bash
curl -6 icanhazip.com
curl -4 icanhazip.com
```
See what dns server you are in fact using:
```bash
dig icanhazip.com
```
You will notice that you are not yet using your own bind9.
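The `SERVER:` line in the dig output shows which resolver answered. Compare the default against a query aimed explicitly at your own bind9:

```bash
dig icanhazip.com | grep SERVER
dig @127.0.0.1 icanhazip.com | grep SERVER
```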
Edit the BIND DNS server's configuration file.

```bash
nano /etc/bind/named.conf.options
```
Add some acls above the options block, one for your networks, and one for potential attackers.

Add some real forwarders.

And add allow recursion for your subnets.

After which it should look something like this:
```terminal_image
:~# cat /etc/bind/named.conf.options
acl bogusnets {
    0.0.0.0/8; 192.0.2.0/24; 224.0.0.0/3;
    10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16;
};
acl my_net {
    127.0.0.1;
    ::1;
    116.251.216.176;
    10.10.10.0/24;
    2405:4200:f001:13f6::/64;
};
options {
    directory "/var/cache/bind";
    forwarders {
        2a02:6b8::feed:0ff;
        2a02:6b8:0:1::feed:0ff;
        77.88.8.8;
        77.88.8.1;
    };
    //==========================
    // If BIND logs error messages about the
    // root key being expired,
    // you will need to update your keys.
    // See https://www.isc.org/bind-keys
    //==========================
    dnssec-validation auto;
    listen-on-v6 { any; };

    allow-recursion { my_net; };
    blackhole { bogusnets; };
};
```
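Before restarting, it is worth checking that the configuration still parses; `named-checkconf` ships with bind9 and prints nothing when the files are clean:

```bash
named-checkconf /etc/bind/named.conf
```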
Then edit the `/etc/default/named` file.

@@ -332,28 +527,29 @@ Then edit the `/etc/default/named` file.

```bash
nano /etc/default/named
```
If on an IPv4 network, add `-4` to the `OPTIONS` to ensure BIND can query root DNS servers.

    OPTIONS="-u bind -4"
If, on the other hand, you are on a network that supports both IPv6 and
IPv4, this will cause unending havoc and chaos, as bind9's behavior
comes as a surprise to other components of the network, and bind9 crashes
on IPv6 information in its config files.
Save and close the file.
By default, BIND enables DNSSEC, which ensures that DNS responses are correct and not tampered with. However, it might not work out of the box due to *trust anchor rollover* and other reasons. To make it work properly, we can rebuild the managed key database with the following commands.
```bash
rndc managed-keys destroy
rndc reconfig
```
Restart `bind9` for the changes to take effect.

```bash
systemctl restart bind9
systemctl status bind9
dig -t txt -c chaos VERSION.BIND @127.0.0.1
```
Your ufw firewall will allow vpn clients to access `bind9` because you earlier allowed everything from `wg0` in.

## Start WireGuard on the server
Run the following command on the server to start WireGuard.

@@ -437,19 +633,19 @@ chmod 600 /etc/wireguard/ -R

Start WireGuard.

```bash
systemctl start wg-quick@wg-client0.service
```
Enable auto-start at system boot time.

```bash
systemctl enable wg-quick@wg-client0.service
```
Check its status:

```bash
systemctl status wg-quick@wg-client0.service
```
Now go to this website: `http://icanhazip.com/` to check your public IP address. If everything went well, it should display your VPN server's public IP address instead of your client computer's public IP address.
@@ -460,6 +656,11 @@ You can also run the following command to get the current public IP address.

```bash
curl https://icanhazip.com
```
To get the geographic location:
```bash
curl https://www.dnsleaktest.com |grep from
```
# Troubleshooting

## Check if UDP port «51820» is open
social_networking.md
@@ -5,10 +5,29 @@ title: >-

...

# the crisis of censorship
If we have a mechanism capable of securely handling arbitrary free form
metadata about transactions, it can handle arbitrary free form information
about anything, and people are likely to use it for information the
government does not like. It is not only transaction data that the
government wants to control.
We have a crisis of censorship.

Every uncensored medium of public discussion is getting the treatment.
In a world where truth and reality are massively suppressed, forbidden truth
should migrate to a platform resistant to Global American Empire domination.
The Global American Empire is at war with truth and reality. A
communications platform should support truth and reality, thus must be at
war with the Global American Empire. A crypto currency needs what
Urbit was supposed to be, its own communications and publishing
protocol, in order that you can have transaction metadata protected, and
thus needs its own truth and reality system. And thus it needs to be willing
to be at war with the Global American Empire. Its developers need to
figure on a significant probability of being arrested, murdered or forced to
flee, as Satoshi figured.
We need a pseudonymous social network on which it is possible to safely
discuss forbidden topics.
@@ -264,6 +283,7 @@ of a million shills, scammers, and spammers.

So, you can navigate to the whole world's public conversation through
approved links and reply-to links, but not every spammer, scammer, and
shill in the world can fill your feed with garbage.
## Algorithm and data structure for Zooko name network address

For this to work, the underlying structure needs to be something based on
../libsodium

@@ -1 +1 @@
-Subproject commit 012e892841ed6edc521f88a23b55863c7afe4622
+Subproject commit 8cbcc3ccccb035b1a976c053ab4de47b7f0b9352
../wxWidgets

@@ -1 +1 @@
-Subproject commit 8880bc88ff6c2cfcd72c3fcd3ef532b5470b2103
+Subproject commit 2648eb4da156a751a377cfe96b91faa03e535c10