Merge remote-tracking branch 'origin/docs' into attempt-to-fix-studio-update

This commit is contained in:
Cheng 2023-02-24 11:34:29 +08:00
commit 55929ac9c8
No known key found for this signature in database
GPG Key ID: 571C3A9C3B9E6FCA
25 changed files with 1163 additions and 71 deletions

View File

@ -97,7 +97,7 @@ leading to the renewal, or the collapse, of civilization.
name system rooted in the blockchain, with encryption rooted in
Zookos triangle, as with crypto currency
- - [Zookos triangle](zookos_triangle.html), The solution is an ID system based on Zookos
+ - [Zookos triangle](names/zookos_triangle.html), The solution is an ID system based on Zookos
triangle, allowing everyone to have as many IDs as they want, but
no one else can forge their IDs, ensuring that each identity has a
corresponding public key, thus making end to end encryption easy.

View File

@ -340,6 +340,8 @@ All wallets now use random words - but you cannot carry an eighteen word random
Should use [grammatically correct passphrases](https://github.com/lungj/passphrase_generator).
That library does not contain a collection of words organized by part of speech. Instead it calls a python library (wordnet) of english, which has the information you actually need.
Using those dictionaries, the phrase (adjective noun adverb verb adjective
noun) can encode sixty eight bits of entropy. Two such phrases suffice,
being stronger than the underlying elliptic curve. With password
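The entropy arithmetic can be sketched as follows. The pool sizes here are hypothetical placeholders, not the actual WordNet counts the generator draws from:

```python
import math

# Hypothetical pool sizes per part of speech -- illustrative only,
# not the real WordNet-derived counts used by passphrase_generator.
pools = {"adjective": 2000, "noun": 4000, "adverb": 1000, "verb": 2000}
pattern = ["adjective", "noun", "adverb", "verb", "adjective", "noun"]

# Each independently chosen word contributes log2(pool size) bits.
bits = sum(math.log2(pools[part]) for part in pattern)
print(f"one phrase: {bits:.1f} bits, two phrases: {2 * bits:.1f} bits")
```

With pools of this size one phrase carries roughly 67 bits, and two phrases together exceed the ~128-bit security level of the usual elliptic curves.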

View File

@ -6,7 +6,7 @@ title: Building the project and its libraries in Visual Studio
## Setting wxWidgets project in Visual Studio
First set up your environment variables as described in [Directory Structure
- Microsoft Windows](../set_up_build_environments.html#dirstruct).
+ Microsoft Windows](../setup/set_up_build_environments.html#dirstruct).
Run the wxWidgets windows setup, wxMSW-X.X.X-Setup.exe. The project will
build with wxMSW-3.1.2, and will not build with earlier versions. Or just

View File

@ -13,6 +13,32 @@ generating responses, and interacting with the recipient.
There is a [list](https://github.com/dbohdan/embedded-scripting-languages){target="_blank"} of embeddable scripting languages.
# Javascript
Javascript has a vast ecology, an army of midwit programmers, and a vast
collection of tools and libraries.
Vscode is written in typescript, which compiles to javascript, and runs
under Electron. We already have a similar embedding tool in wxWidgets,
wxWebview, which allows javascript to call C++ through custom schemes
and virtual file systems, and C++ to call javascript through
`RunScriptAsync()` and `RunScript()`.
Large projects tend to be written in a framework that supports typescript:
* Create React App
* Next.js
* Gatsby
What can be done in Electron can be done in wxWebview, though not
necessarily as easily.
WxWebview is a somewhat messier and trickier-to-use Electron, with
considerably inferior performance in script execution time, but with vastly
better integration with C++.
# Lua and Python
Lua and python are readily embeddable, but [the language shootout](https://benchmarksgame-team.pages.debian.net/benchmarksgame/) tells us
they are terribly slow.
@ -74,11 +100,12 @@ Anecdotal data on speed for LUA JIT:
* I am personally surprised at luajit's performance. We use it in the network space, and its performance is outstanding. I had used it in the past in a different manner, and its performance was 'ok'. Implementation architecture really is important with this framework. You can get C perf, with minimal effort. There are limitations, but we have worked around all of them now. Highly recommended.
# Lisp
Lisp is sort of embeddable, startlingly fast, and is enormously capable, but
it is huge, and not all that portable.
- ES (JavaScript) is impressively fast in its node.js implementation, which does
- not necessarily imply the embeddable versions are fast.
+ # Angelscript
Very few of the scripting languages make promises about sandbox
capability, and I know there is enormous grief over sandboxing JavaScript.
@ -87,6 +114,8 @@ It can be done, but it is a big project.
Angelscript *does* make promises about sandbox capability, but I have
absolutely no information about its capability and performance.
# Tcl
Tcl is event loop oriented.
But hell, I have an event loop. I want my events to put data in memory,
@ -94,6 +123,8 @@ then launch a script for the event, the script does something with the data,
generates some new data, fires some events that will make use of the data, and
finishes.
# Memory management
Given that I want programs to be short and quickly terminate, maybe we
do not need dynamic memory management and garbage collection.

View File

@ -12,7 +12,7 @@ elif [[ "$OSTYPE" == "msys" ]]; then
osoptions="--fail-if-warnings --eol=lf "
fi
templates=$(pwd)"/pandoc_templates"
- options=$osoptions"--toc -N --toc-depth=5 --wrap=preserve --metadata=lang:en --include-in-header=icon.pandoc --include-before-body=$templates/before.pandoc --css=$templates/style.css -o"
+ options=$osoptions"--toc -N --toc-depth=5 --from markdown+smart --wrap=preserve --metadata=lang:en --include-in-header=icon.pandoc --include-before-body=$templates/before.pandoc --css=$templates/style.css -o"
pwd
for f in *.md
do
@ -67,7 +67,32 @@ do
done
cd ..
cd names
- options=$osoptions"--toc -N --toc-depth=5 --wrap=preserve --metadata=lang:en --include-in-header=./icon.pandoc --include-before-body=$templates/before.pandoc --css=$templates/style.css --include-after-body=$templates/after.pandoc -o"
+ options=$osoptions"--toc -N --toc-depth=5 --from markdown+smart --wrap=preserve --metadata=lang:en --include-in-header=./icon.pandoc --include-before-body=$templates/before.pandoc --css=$templates/style.css --include-after-body=$templates/after.pandoc -o"
pwd
for f in *.md
do
len=${#f}
base=${f:0:($len-3)}
if [ $f -nt $base.html ];
then
katex=""
for i in 1 2 3 4
do
read line
if [[ $line =~ katex ]];
then
katex=" --katex=./"
fi
done <$f
echo "generating $base.html from $f"
pandoc $katex $options $base.html $base.md
#else
# echo " $base.html up to date"
fi
done
cd ..
cd setup
options=$osoptions"--toc -N --toc-depth=5 --from markdown+smart --wrap=preserve --metadata=lang:en --include-in-header=./icon.pandoc --include-before-body=$templates/before.pandoc --css=$templates/style.css --include-after-body=$templates/after.pandoc -o"
pwd
for f in *.md
do

View File

@ -299,7 +299,7 @@ do little harm with it, so that no one wants to create confusible names.
Spaces are forbidden in a uri, because one routinely passes uris as
arguments on the command line, and arguments are separated by
whitespace. But lack of spaces is a severe blow to intelligibility, and
- in particular a severe blow against [Zooko](./zookos_triangle.html) nicknames. One way around this
+ in particular a severe blow against [Zooko](names/zookos_triangle.html) nicknames. One way around this
rule is to have a rule for interpreting command line arguments that if
an item is contained in angle quotes or brackets, it is one item, and to
have as part of the scheme syntax and schema a rule for interpreting the
@ -314,7 +314,7 @@ we should be using parsers. Lexers fail to provide sufficient expressive
power. We should break compatibility by demanding that anything that can
handle our expressions uses a command line parser rather than a lexer,
because you are just not going to be able to handle
- [Zooko](./zookos_triangle.html) nicknames in a
+ [Zooko](names/zookos_triangle.html) nicknames in a
lexer.
The uri syntax is written for a lexer, not a parser, and the command
@ -535,7 +535,7 @@ class and parse type, some host, possibly far away, has given it that
existence and parse type then for the parse to succeed, the parser has
to connect to that host.
- The straightforward case of [Zooko](./zookos_triangle.html) cryptographic resource identifiers is
+ The straightforward case of [Zooko](names/zookos_triangle.html) cryptographic resource identifiers is
that your equivalent of a domain name is a public key. You look up the
network address and the public key actually used for communication in
the equivalent of the domain name system, and get a public key signed by
@ -631,7 +631,7 @@ We need Zookos quadrangle.
Obviously you need to be able to reference human readable names on the
blockchain, which is the fourth corner of [Zookos triangle].
- [Zookos triangle] : ./zookos_triangle.html
+ [Zookos triangle] : names/zookos_triangle.html
# Location

View File

@ -443,7 +443,7 @@ These invitation only blogs and messaging groups will exist with and
within open searchable blogs and messaging groups, hidden by the secret
handshake protocol.
- The structure will be ultimately rooted in [Zookos triangle](./zookos_triangle.html), but normal
+ The structure will be ultimately rooted in [Zookos triangle](names/zookos_triangle.html), but normal
people will most of the time sign in by zero knowledge password
protocol, your identity will be derivative from someone elses Zooko
based identity.
@ -538,8 +538,13 @@ To test the signature, check that
$A*(h(Message)B_{base})=B_{base}*M$
- Which it should because $(a*B_{base})*(h($Message$)*B_{base}) =
- B_{base}*(a*h($Message$)*B_{base})$ by bilinearity.
+ Which it should because
+ $(a*B_{base})*(h($Message$)*B_{base}) = B_{base}*(a*h($Message$)*B_{base})$
+ by bilinearity.
The threshold variant of this scheme is called GDH threshold signature
scheme, Gap Diffie Hellman threshold signature scheme.
@ -555,6 +560,12 @@ corresponding to a well formed token, that token unknown to the signer.
[Paraphrasing](./secret_handshakes.pdf)
We can prove, without revealing, shared knowledge of the master secret of
the secret society, without third parties cluing it, using ordinary
elliptic curve cryptography.
Trouble is that this requires wide sharing of the secret, which is thus unlikely to remain very secret for very long.
The Secret society of evil has a frequently changing secret key
$k_{evil}$. Ann has a secret key $k_{Ann}$ and public key $K_{Ann} =
k_{Ann}B$, Bob has a secret key $k_{Bob}$ and public key $K_{Bob} =
@ -562,8 +573,6 @@ b_{Bob}B$
Let $h(…)$ represent a hash of the serialized arguments of H, which
hash is an integer modulo the order of the group. Let $H(…) = h(…)B$.
- Streams are concatenated with a boundary marker, and accidental
- occurrences of the boundary marker within a stream are escaped out.
The evil overlord of the evil society of evil issues Ann and Bob a
signed hash of their public keys. For Ann, the signature is
@ -576,11 +585,12 @@ secret society of evil, but is not going to be recognizable to third
parties as signed by the secret society of evil.
So, they use as part of their shared secret whereby they encrypt
- messages, the secret that Ann can calculate:\ $[k_{evil}H($“Ann”,
+ messages, the secret that Ann can calculate:\
+ $[k_{evil}H($“Ann”,
$K_{Ann},$ “evil secret society of evil”$)]*H($“Bob”, $K_{Bob},$ “evil
secret society of evil”$)$
- Bob calculates:
+ Bob calculates:\
$H($“Ann”, $K_{Ann},$ “evil secret society of evil”$)
*[k_{evil}H($“Bob”, $K_{Bob},$ “evil secret society of evil”$)]$
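The two calculations above can be checked with toy numbers. This is a hypothetical model, not the real construction: modular exponentiation stands in for the bilinear pairing (a real implementation would need a pairing-friendly curve such as BLS12-381), and hashes-to-point are plain scalars mod q:

```python
import hashlib

# Toy parameters, illustrative only.
q = 2**127 - 1            # toy group order (a Mersenne prime)
p = 2**521 - 1            # toy field for the "pairing" values
g = 3

def h(*args) -> int:      # hash the serialized arguments to a scalar mod q
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def pairing(x: int, y: int) -> int:   # toy bilinear map: e(xB, yB) = g^(xy)
    return pow(g, (x * y) % q, p)

k_evil = h("frequently changing master secret")
H_Ann = h("Ann", "K_Ann", "evil secret society of evil")
H_Bob = h("Bob", "K_Bob", "evil secret society of evil")

# The overlord issues each member k_evil times the hash of their key.
sig_Ann = (k_evil * H_Ann) % q
sig_Bob = (k_evil * H_Bob) % q

# Each side combines its own signature with the other's public hash.
secret_Ann = pairing(sig_Ann, H_Bob)
secret_Bob = pairing(H_Ann, sig_Bob)
assert secret_Ann == secret_Bob       # equal by bilinearity
```

Both sides land on $e(H_{Ann}, H_{Bob})^{k_{evil}}$, which neither can compute without a signature from the overlord.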
@ -593,8 +603,51 @@ secret society, you have a secret chatroom, in which case the routing
metadata on messages going to and from the chatroom are traceable if
they pass through enemy networks.
# IFF
But suppose you want battlespace iff (Identify Friend or Foe). You want to
send a message directly to an unknown target that will identify as friendly
if the other guy is a friendly, but not necessarily let him know enemy if he
is enemy. If he already knows you are enemy, you dont want to give him
the means to identify enemy iff from your friends.
We need an IFF ping that can be distributed to a very large number of people. Which means that the means for identifying it will leak. Which means it has to be possible to generate a thousand variants on the ping, so that if it leaks, the leadership hierarchy can identify the leaker, but no one else can.
If a member of a sub subgroup leaks, then the leader of the group should be able to identify the subgroup that leaked, the leader of the subgroup should be able to identify the sub subgroup that leaked, and the leader of the sub sub group should be able to identify the member that leaked.
For large groups, the ping has to mutually identify the two subgroups to
each other, and also supply information the chain of command could use to
identify the individual, but to a random member of a one subgroup, should
only identify the other party's subgroup.
Let us figure out how to do this in a single level hierarchy, one leader, plus regular members.
Suppose the evil overlord frequently signs a fresh secret key of each
member, and a randomized list of the corresponding public keys get
distributed to every member.
Then a member simply does authentication without signing on his secret key, and sends the sequence number of his secret key. Which is useless information to anyone who does not know the list.
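The single-level scheme can be sketched as follows. This is a minimal illustration under stated assumptions: a Schnorr-style identification proof over integers mod a prime stands in for an elliptic curve group, and the parameters, names, and list layout are hypothetical:

```python
import hashlib, secrets

# Toy group: integers mod a prime p, generator g, exponent modulus n = p - 1.
# A real implementation would use an elliptic curve group instead.
p = 2**127 - 1
g = 3
n = p - 1

def H(*parts) -> int:     # Fiat-Shamir challenge hash
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# The overlord issues each member a fresh secret key; the randomized
# list of corresponding public keys is distributed to every member.
secret_keys = [secrets.randbelow(n - 1) + 1 for _ in range(5)]
public_keys = [pow(g, x, p) for x in secret_keys]

# Member 3 proves knowledge of secret key number 3 without signing
# anything: a Schnorr identification proof plus the sequence number.
i, x = 3, secret_keys[3]
r = secrets.randbelow(n - 1) + 1
R = pow(g, r, p)                      # commitment
c = H(R, i)                           # challenge
s = (r + c * x) % n                   # response
ping = (i, R, s)

# A fellow member, holding the list, verifies the ping. To an outsider
# without the list, the sequence number is useless information.
i, R, s = ping
assert pow(g, s, p) == (R * pow(public_keys[i], H(R, i), p)) % p
```

The proof reveals only the sequence number and knowledge of the matching key, which is the property the paragraph above asks for.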
But this method fails to scale to an enormous hierarchical group of groups, because that information is going to leak very fast, and there is no way of
tracing who leaked it.
So we need a ping that the leader can identify as coming from a particular member, but a random member cannot. But a random member _can_ recognize it as coming from _a_ member.
So Bob, who is a member sub-sub-sub-group CAFD, which is a subgroup of sub-sub-group CAF, which is a subgroup of sub-group CA, which is a subgroup of C, wants to know if a mystery target is fellow member of C.
Trouble is, he wants to do so without revealing he is a member of group C
_except_ to a fellow member of group C. But information that could allow
any member of group C to recognize the ping is going to leak. So we need a
multi stage ping, where you don't know who you are talking to until both
parties have revealed a fair bit of zero knowledge proofs of knowledge.
We want both parties to reveal information that the leadership hierarchy
could use to identify both parties, but after they have both revealed that
information, it is meaningless to them, though potentially meaningful to the
leadership. They still do not know whether the other is a member of group C,
until they reveal further information.
So if there is a leak that defeats the IFF, the leadership hierarchy can
track that leak.
If everyone can IFF, everyone can leak data that will enable enemy fake IFF, so neither party can be allowed to id the ping until after both sides have received information that would allow a much smaller group, much less likely to leak, to id the ping.

View File

@ -309,7 +309,7 @@ payment using cryptography, but cannot create secure delivery of goods
and services using cryptography. The other side of the transaction needs
reputation.
- [sovereign corporations]:social_networking.html#many-sovereign-corporations-on-the-blockchain
+ [sovereign corporations]:../social_networking.html#many-sovereign-corporations-on-the-blockchain
So the other side of the transaction needs to be authenticated by a secret
that *he* controls, not a secret controlled by registrars and certificate

View File

@ -0,0 +1,23 @@
---
title:
Big Circ notation
# katex
...
The definition of $\bigcirc$ used by mathematicians is not convenient for engineers.
So in practice we ignore that definition and use our own.
The mathematical definition is, roughly, that if $f(n)=\bigcirc\big(g(n)\big)$ then $f(n)$ grows no faster than $g(n)$, that there exists some value K such that for values of $n$ of interest and larger than of interest $f(n)\le Kg(n)$
Which is kind of stupid for engineers, because by that definition an algorithm that takes time $\bigcirc(n)$ also takes time $\bigcirc(n^2)$, $\bigcirc(n!)$, etcetera.
So, Knuth defined $\large\Omega$, which means, roughly, that there exists some value K such that for values of $n$ of interest and larger than of interest $f(n)\ge Kg(n)$
Which is also stupid for the same reason.
So what all engineers do in practice is use $\bigcirc$ to mean that the mathematical definition of $\bigcirc$ is true, *and* Knuths definition of $\large\Omega$ is also largely true, so when we say that an operation take that much time, we mean that it takes no more than that much time, *and frequently takes something like that much time*.
So, by the engineer's definition of $\bigcirc$, if an algorithm takes $\bigcirc(n)$ time it does *not* take $\bigcirc(n^2)$ time.
Which is why we never need to use Knuth's $\large\Omega$.
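Stated formally, the engineer's usage described above amounts (to the extent the "frequently takes something like that much time" hedge allows) to Knuth's $\Theta$, the conjunction of the two one-sided bounds:

```latex
% Mathematician's upper bound:
f(n) = \bigcirc\big(g(n)\big) \iff \exists K>0 :\ f(n) \le K\,g(n) \text{ for all sufficiently large } n
% Knuth's lower bound:
f(n) = \Omega\big(g(n)\big) \iff \exists K>0 :\ f(n) \ge K\,g(n) \text{ for all sufficiently large } n
% Engineer's usage: both bounds at once, i.e. Theta:
f(n) = \Theta\big(g(n)\big) \iff f(n) = \bigcirc\big(g(n)\big) \text{ and } f(n) = \Omega\big(g(n)\big)
```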

View File

@ -6,7 +6,7 @@ body {
font-variant: normal;
font-weight: normal;
font-stretch: normal;
- font-size: 16px;
+ font-size: 100%;
}
table {
border-collapse: collapse;
@ -44,3 +44,9 @@ td, th {
padding: 0.5rem;
text-align: left;
}
pre.terminal_image {
background-color: #000;
color: #0F0;
font-size: 75%;
white-space: pre;
}

View File

@ -0,0 +1,3 @@
body {
font-size: 85%;
}

View File

@ -349,7 +349,7 @@ We build a system of payments and globally unique human readable names
mapping to [Zookos triangle] names (Zookos quadrangle) on top of this
public notary functionality.
- [Zookos triangle]: ./zookos_triangle.html
+ [Zookos triangle]: names/zookos_triangle.html
All wallets shall be client wallets, and all transaction outputs shall
be controlled by private keys known only to client wallets, but most
@ -654,7 +654,7 @@ To better support the corporate form, the crypto currency maintains a
name system, of globally unique human readable names on top of [Zookos
triangle] names.
- [Zookos triangle]: ./zookos_triangle.html
+ [Zookos triangle]: names/zookos_triangle.html
Transactions between [Zookos triangle] identities will be untraceable,
because amounts will be in fixed sized amounts, and transactions will

View File

@ -19,4 +19,4 @@ This product includes several packages, each with their own free software licenc
Or, in the case of Sqlite, the Sqlite blessing in place of a license, which is
morally though not legally obligatory on those that obey the
- commandments of Gnon. See also the [contributor code of conduct](docs/contributor_code_of_conduct.html).
+ commandments of Gnon. See also the [contributor code of conduct](docs/setup/contributor_code_of_conduct.html).

View File

@ -74,7 +74,7 @@ git config --local include.path ../.gitconfig
this will substantially mitigate the problem of submodules failing to
update in pushes, pulls, checkouts, and switches.
- [cryptographic software is under attack]:./docs/contributor_code_of_conduct.html#code-will-be-cryptographically-signed
+ [cryptographic software is under attack]:./docs/setup/contributor_code_of_conduct.html#code-will-be-cryptographically-signed
"Contributor Code of Conduct"
{target="_blank"}

View File

@ -2,6 +2,28 @@
title: Scaling, trust and clients title: Scaling, trust and clients
... ...
The fundamental strength of the blockchain architecture is that it is an immutable public ledger. The fundamental flaw of the blockchain architecture is that it is an immutable public ledger.
This is a problem for privacy and fungibility, but what is really biting is scalability, the sheer size of the thing. Every full peer has to download every transaction that anyone ever did, evaluate that transaction for validity, and store it forever. And we are running hard into the physical limits of that. Every full peer on the blockchain has to know every transaction and every output of every transaction that ever there was.
As someone said when Satoshi first proposed what became bitcoin: “it does not seem to scale to the required size.”
And here we are now, fourteen years later, at rather close to that scaling limit. And for fourteen years, very smart people have been looking for a way to scale without limits.
And, at about the same time as we are hitting scalability limits, “public” is becoming a problem for fungibility. The fungibility crisis and the scalability crisis are hitting at about the same time. The fungibility crisis is hitting eth and is threatening bitcoin.
That the ledger is public enables the blood diamonds attack on crypto currency. Some transaction outputs could be deemed dirty, and rendered unspendable by centralized power, and eventually, to avoid being blocked, you have to make everything KYC, and then even though you are fully compliant, you are apt to get arbitrarily and capriciously blocked because the government, people in quasi government institutions, or random criminals on the revolving door between regulators and regulated decide they do not like you for some whimsical reason. I have from time to time lost small amounts of totally legitimate fiat money in this fashion, as international transactions become ever more difficult and dangerous, and recently lost an enormous amount of totally legitimate fiat money in this fashion.
Eth is highly centralized, and the full extent that it is centralized and in bed with the state is now being revealed, as tornado eth gets demonetized.
Some people in eth are resisting this attack. Some are not.
Bitcoiners have long accused eth of being a shitcoin, which accusation is obviously false, but with the blood diamonds attack under way on eth, likely to become true. It is not a shitcoin, but I have long regarded it as likely to become one. Which expectation may well come true shortly.
A highly centralized crypto currency is closer to being an unregulated bank than a crypto currency. Shitcoins are fraudulent unregulated banks posing as crypto currencies. Eth may well be about to turn into a regulated bank. When bitcoiners accuse eth of being a shitcoin, the truth in their accusation is dangerous centralization, and dangerous closeness to the authorities.
The advantage of crypto currency is that as elite virtue collapses, the regulated banking system becomes ever more lawless, arbitrary, corrupt, and unpredictable. An immutable ledger ensures honest conduct. But if a central authority has too much power over the crypto currency, they get to retroactively decide what the ledger means. Centralization is a central point of failure, and in a world of ever more morally debased and degenerate elites, will fail. Maybe Eth is failing now. If not, it will likely fail by and by.
# Scaling
The Bitcoin blockchain has become inconveniently large, and evaluating it
@ -155,11 +177,9 @@ with both privacy and scaling.
## zk-snarks
- Zk-snarks are not yet a solution. They have enormous potential
+ Zk-snarks, zeeks, are not yet a solution. They have enormous potential
benefits for privacy and scaling, but as yet, no one has quite found a way.
- [performance survey of zksnarks](https://github.com/matter-labs/awesome-zero-knowledge-proofs#comparison-of-the-most-popular-zkp-systems)
A zk-snark is a succinct proof that code *was* executed on an immense pile
of data, and produced the expected, succinct, result. It is a witness that
someone carried out the calculation he claims he did, and that calculation
@ -167,24 +187,103 @@ produced the result he claimed it did. So not everyone has to verify the
blockchain from beginning to end. And not everyone has to know what
inputs justified what outputs.
As "zk-snark" is not a pronounceable word, I am going to use the word "zeek"
to refer to the blob proving that a computation was performed, and
produced the expected result. This is an idiosyncratic usage, but I just do
not like acronyms.
The innumerable privacy coins around based on zk-snarks are just not
doing what has to be done to make a zeek privacy currency that is viable
at any reasonable scale. They are intentionally scams, or by negligence,
unintentionally scams. All the zk-snark coins are doing the step from a set
$N$ of valid coins, valid unspent transaction outputs, to set $N+1$, in the
old fashioned Satoshi way, and sprinkling a little bit of zk-snark magic
privacy pixie dust on top (because the task of producing a genuine zeek
proof of coin state for step $N$ to step $N+1$ is just too big for them).
Which is, intentionally or unintentionally, a scam.
Not yet an effective solution for scaling the blockchain, for to scale the
blockchain, you need a concise proof that any spend in the blockchain was
only spent once, and while a zk-snark proving this is concise and
capable of being quickly evaluated by any client, generating the proof is
- an enormous task. Lots of work is being done to render this task
- manageable, but as yet, last time I checked, not manageable at scale.
- Rendering it efficient would be a total game changer, radically changing
- the problem.
+ an enormous task.
+ ### what is a Zk-stark or a Zk-snark?
Zk-snark stands for “Zero-Knowledge Succinct Non-interactive Argument of Knowledge.”
A zk-stark is the same thing, except “Transparent”, meaning it does not have
the “toxic waste problem”, a potential secret backdoor. Whenever you create
zk-snark parameters, you create a backdoor, and how do third parties know that
this backdoor has been forever erased?
zk-stark stands for Zero-Knowledge Scalable Transparent ARguments of Knowledge, where “scalable” means the same thing as “succinct”.
Ok, what is this knowledge that a zk-stark is an argument of?
Bob can prove to Carol that he knows a set of boolean values that
simultaneously satisfy certain boolean constraints.
This is zero knowledge because he proves this to Carol without revealing
what those values are, and it is “succinct” or “scalable”, because he can
prove knowledge of a truly enormous set of values that satisfy a truly
enormous set of constraints, with a proof that remains roughly the same
reasonably small size regardless of how enormous the set of values and
constraints are, and Carol can check the proof in a reasonably short time,
even if it takes Bob an enormous time to evaluate all those constraints over all those booleans.
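What that means can be made concrete without any zero-knowledge machinery at all. Here is a minimal Python sketch of just the *statement* being proved, with invented variables and constraints; a real zk-stark would let Bob convince Carol that `satisfies` returns true without ever showing her `witness`:

```python
# A sketch of the *statement* a zk proof is about: knowledge of booleans
# satisfying public constraints. A real zk-stark lets Bob prove he knows
# such an assignment WITHOUT revealing it; here we just show the check
# Carol would do if Bob simply revealed his secret assignment.

# Public constraints over variables a, b, c (invented for illustration).
constraints = [
    lambda v: v["a"] or v["b"],          # a OR b
    lambda v: not (v["a"] and v["c"]),   # NOT (a AND c)
    lambda v: v["b"] != v["c"],          # b XOR c
]

def satisfies(assignment, constraints):
    """True if every constraint holds for this assignment."""
    return all(c(assignment) for c in constraints)

# Bob's secret witness.
witness = {"a": True, "b": True, "c": False}

print(satisfies(witness, constraints))  # the fact Bob can prove in zero knowledge
```

The point of the zk-stark is that the proof stays small and cheap for Carol to check even when the constraint set has billions of entries.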
Which means that Carol could potentially check the validity of the
blockchain without having to wade through terabytes of other people's
data in which she has absolutely no interest.
Which means that each peer on the blockchain would not have to
download the entire blockchain, keep it all around, and evaluate it from the beginning. They could just keep around the bits they cared about.
The peers as a whole have to keep all the data around, and make certain
information about this data available to anyone on demand, but each
individual peer does not have to keep all the data around, and not all the
data has to be available. In particular, the inputs to the transaction do not
have to be available, only that they existed, were used once and only once,
and the output in question is the result of a valid transaction whose outputs
are equal to its inputs.
Unfortunately producing a zeek of such an enormous pile of data, with
such an enormous pile of constraints, could never be done, because the
blockchain grows faster than you can generate the zeek.
### zk-stark rollups, zeek rollups
Zk-stark rollups are a privacy technology and a scaling technology.
A zeek rollup is a zeek that proves that two or more other zeeks were verified.
Instead of Bob proving to Alice that he knows the latest block was valid, having evaluated every transaction, he proves to Alice that *someone* evaluated every transaction.
Fundamentally a ZK-stark proves to the verifier that the prover who generated the zk-stark knows a solution to an NP-complete problem. Unfortunately the proof is quite large, and the relationship between that problem and anything that anyone cares about is extremely elaborate and indirect. The proof is large and costly to generate, even if not that costly to verify, not that costly to transmit, and not that costly to store.
So you need a language that will generate such a relationship. And then you can prove, for example, that a hash is the hash of a valid transaction output, without revealing the value of that output, or the transaction inputs.
But if you have to have such a proof for every output, that is a mighty big pile of proofs, costly to evaluate, costly to store the vast pile of data. If you have a lot of zk-snarks, you have too many.
So, rollups.
Instead of proving that you know an enormous pile of data satisfying an enormous pile of constraints, you prove you know two zk-starks.
Each of which proves that someone else knows two more zk-starks. And the generation of all these zk-starks can be distributed over all the peers of the entire blockchain. At the bottom of this enormous pile of zk-starks is an enormous pile of transactions, with no one person or one computer knowing all of them, or even very many of them.
Instead of Bob proving to Carol that he knows every transaction that ever there was, and that they are all valid, Bob proves that for every transaction that ever there was, someone knew that that transaction was valid. Neither Carol nor Bob knows who knew, or what was in that transaction.
You produce a proof that you verified a pile of proofs. You organize the information about which you want to prove stuff into a merkle tree, and the root of the merkle tree is associated with a proof that you verified the proofs of the direct children of that root vertex. And proof of each of the children of that root vertex proves that someone verified their children. And so forth all the way down to the bottom of the tree, the origin of the blockchain, proofs about proofs about proofs about proofs.
And then, to prove that a hash is a hash of a valid transaction output, you just produce the hash path linking that transaction to the root of the merkle tree. So with every new block, everyone has to just verify one proof once. All the child proofs get thrown away eventually.
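The hash-path check in that last step is plain Merkle inclusion verification, which is cheap. A minimal sketch in Python, assuming a simple binary SHA-256 Merkle tree (the real chain would use its own hash and Merkle patricia trees):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree, duplicating the last
    node when a level has odd length."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """Carol recomputes the root from the leaf and the sibling path."""
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
proof = merkle_path(txs, 2)
print(verify(b"tx2", proof, root))    # True: tx2 is under this root
print(verify(b"bogus", proof, root))  # False
```

The path is logarithmic in the number of leaves, which is why a peer can check that its tiny fragment of data is committed to by the latest block without seeing anything else.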
Which means that peers do not have to keep every transaction and every output around forever. They just keep some recent roots of the blockchain around, plus the transactions and transaction outputs that they care about. So the blockchain can scale without limit.
ZK-stark rollups are a scaling technology plus a privacy technology. If you are not securing people's privacy, you are keeping an enormous pile of data around that nobody cares about (except a hostile government), which means your scaling does not scale.
And, as we are seeing with Tornado, some Eth people do not want that vast pile of data thrown away.
To optimize scaling to the max, you optimize privacy to the max. You want all data hidden as soon as possible as completely as possible, so that everyone on the blockchain is not drowning in other people's data. The less anyone reveals, and the fewer the people they reveal it to, the better it scales, and the faster and cheaper the blockchain can do transactions, because you are pushing the generation of zk-starks down to the parties who are themselves directly doing the transaction. Optimizing for privacy is almost the same thing as optimizing for scalability.
The fundamental problem is that in order to produce a compact proof that
the set of coins, unspent transaction outputs, of state $N+1$ was validly

problem of factoring, dividing the problem into manageable subtasks, but
it seems to be totally oblivious to the hard problem of incentive compatibility at scale.
Incentive compatibility was Satoshi's brilliant insight, and the client trust
problem, too many people running client wallets and not enough people
running full peers, is the failure of Satoshi's solution to that problem to scale.
Existing zk-snark solutions fail at scale, though in a different way. With
zk-snarks, the client can verify the zeek, but producing a valid zeek in the
first place is going to be hard, and will rapidly get harder as the scale
increases.

A zeek that succinctly proves that the set of coins (unspent transaction
outputs) at block $N+1$ was validly derived from the set of coins at
block $N$, and can also prove that any given coin is in that set or not in that
set is going to have to be a proof about many, many, zeeks produced by
many, many machines, a proof about a very large dag of zeeks, each zeek
a vertex in the dag proving some small part of the validity of the step from
consensus state $N$ of valid coins to consensus state $N+1$ of valid
coins, and the owners of each of those machines that produced a tree
vertex for the step from set $N$ to set $N+1$ will need a reward proportionate
to the task that they have completed, and the validity of the reward will
need to be part of the proof, and there will need to be a market in those
rewards, with each vertex in the dag preferring the cheapest source of
child vertexes. Each of the machines would only need to have a small part
of the total state $N$, and a small part of the transactions transforming state
$N$ into state $N+1$. This is hard but doable, but I am just not seeing it done yet.
I see good [proposals for factoring the work], but I don't see them
addressing the incentive compatibility problem. It needs a whole picture
design, rather than a part of the picture design. A true zk-snark solution
has to shard the problem of producing state $N+1$, the set of unspent
transaction outputs, from state $N$, so it should also shard the problem of
producing a consensus on the total set and order of transactions.
[proposals for factoring the work]:https://hackmd.io/@vbuterin/das
"Data Availability Sampling Phase 1 Proposal"
### The problem with zk-snarks

Last time I checked, [Cairo] was not ready for prime time.
rocket and calling it a space plane.
[a frequently changing secret that is distributed]:multisignature.html#scaling
### How a fully scalable blockchain running on zeek rollups would work
A blockchain is of course a chain of blocks, and at scale, each block would be far too immense for any one peer to store or process, let alone the entire chain.
Each block would be a Merkle patricia tree, or a Merkle tree of a number of Merkle patricia trees, because we want the block to be broad and flat, rather than deep and narrow, so that it can be produced in a massively parallel way, created in parallel by an immense number of peers. Each block would contain a proof that it was validly derived from the previous block, and that the previous block's similar proof was verified. A chain is narrow and deep, but that does not matter, because the proofs are “scalable”. No one has to verify all the proofs from the beginning, they just have to verify the latest proofs.
Each peer would keep around the actual data and actual proofs that it cared about, and the chain of hashes linking the data it cared about to the Merkle root of the latest block.
All the immense amount of data in the immense blockchain that anyone
cares about would need to exist somewhere, but it would not have to exist
*everywhere*, and everyone would have a proof that the tiny part of the
blockchain that they keep around is consistent with all the other tiny parts
of the blockchain that everyone else is keeping around.
# sharding within each single very large peer

Sharding within a single peer is an easier problem than sharding the

docs/setup/icon.pandoc (new file):

    <link rel="shortcut icon" href="../rho.ico">
platform environment.
Having a whole lot of different versions of different machines, with a
whole lot of snapshots, can suck up a remarkable amount of disk space
mighty fast. Even if your virtual disk is quite small, your snapshots wind
up eating a huge amount of space, so you really need some capacious disk
drives. And you are not going to be able to back up all this enormous stuff,
so you have to document how to recreate it.

Each snapshot that you intend to keep around long term needs to
correspond to a documented path from install to that snapshot.
To install guest additions on Debian:

```bash
su -l root
apt-get -qy update && apt-get -qy install build-essential module-assistant git dnsutils curl sudo dialog rsync
apt-get -qy full-upgrade
m-a -qi prepare
mount -t iso9660 /dev/sr0 /media/cdrom
```
accounts that have sensitive information by corrupting the shadow file

```bash
usermod -L cherry
```
But this tactic is very risky, because it can, due to a bug in Linux, disable
ssh public key login. And then you are really hosed. Better to use a very
long random password, and then throw it away.
When an account is disabled in this manner, you cannot login at the
terminal, and may be unable to ssh in, but you can still get into it by
`su -l cherry` from the root account. And if you have disabled the root account,
but have enabled passwordless sudo for one special user, you can still get
into the root account with `sudo -s` or `sudo su -l root`. But if you disable
the root account in this manner without creating an account that can sudo
account, and disable password and ssh access to the root account.

You can always undo the deliberate corruption by setting a new password,
providing you can somehow get into root.
## never enough memory
nano /etc/ssh/sshd_config
Your config file should have in it

```default
PubkeyAuthentication yes
ChallengeResponseAuthentication no
PrintMotd no
PasswordAuthentication no
UsePAM no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
HostKey /etc/ssh/ssh_host_ed25519_key
X11Forwarding yes
AllowAgentForwarding yes
TCPKeepAlive yes
AllowStreamLocalForwarding yes
GatewayPorts yes
PermitTunnel yes
PermitRootLogin prohibit-password
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
pubkeyacceptedkeytypes ssh-ed25519
hostkeyalgorithms ssh-ed25519
hostbasedacceptedkeytypes ssh-ed25519
casignaturealgorithms ssh-ed25519
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
```
`PermitRootLogin` defaults to prohibit-password, but best to set it

docs/setup/wireguard.md (new file):
---
title: Wireguard
...
Setting up your own vpn using wireguard and a Debian 11 server in the cloud.
This tutorial is largely stolen from [Linuxbabe](https://www.linuxbabe.com/debian/wireguard-vpn-server-debian){target="_blank"}. It is slightly
more up to date than her version at the time of writing.
# WireGuard VPN
Openvpn uses ssl (not to be confused with ssh). Wireguard uses algorithms
and code developed by privacy advocates. SSL has numerous well known
vulnerabilities, notably that it is subject to active attack by any group
that has a CA in its pocket. The NSA has a passive attack, which is not
understood, but OpenSSL has an enormous codebase that
is impossible to audit, with an architecture that seems designed for hiding
obfuscated vulnerabilities, and the NSA has contributed much to its codebase
through innumerable proxies, who are evasive when I talk to them.
Wireguard uses cryptographic libraries developed by our allies, rather than our enemies.
Wireguard is lightweight and fast, blowing OpenVPN out of the water.
Openvpn is a gigantic pile of code, a maze of strange architectural
decisions that slow things down, a vast complicated pile of
incomprehensible things that seems to serve no useful purpose other
than providing places to hide backdoors in.
Wireguard is open source and cross-platform. WireGuard can run on
Linux, BSD, macOS, Windows, Android, iOS, and OpenWRT.
User authentication is done by exchanging public keys, similar to SSH keys.
Assigns static tunnel IP addresses to VPN clients.
Mobile devices can switch between Wi-Fi and mobile network seamlessly
without dropping any connectivity.
Supersedes OpenVPN and IPSec, which are obsolete and insecure.
# Requirements
I assume you have a host in the cloud, with world accessible network address and ports, that can access blocked websites freely outside of your country or Internet filtering system.
We are going to enable ip4 and ipv6 on our vpn. The tutorial assumes ipv6 is working. Check that it *is* working by pinging your server `ping -6 «server»`, then ssh in to your server and attempt to `ping -6 «something»`
It may well happen that your server is supposed to have an ipv6 address and /64 ipv6 subnet, but something is broken.
The VPN server is running Debian 11 operating system. This tutorial is not
going to work on Debian 10 or lower. Accessing your vpn from a windows
client, however, is easy, since the windows wireguard client is very
friendly. Setting up a wireguard
VPN server on windows is, on the other hand, very difficult. Don't even
try. I am unaware of anyone succeeding.
## Make sure you have control of nameservice
No end of people are strangely eager to provide free nameservice. If it is a
free service, you are the product. And some of them have sneaky ways to get
you to use their nameservice whether you want it or not.
Nameservice reveals which websites you are visiting. We are going to set up
our own nameserver for the vpn clients, but it will have to forward to a
bigger nameserver, thus revealing which websites the clients are visiting,
though not which client is visiting them. Lots of people are strangely eager
to know which websites you are visiting. If you cannot control your
nameservice, then when you set up your own nameserver, it is likely to
behave strangely.
No end of people's helpful efforts to help you automatically set up
nameservice are likely to foul up your nameservice for your vpn clients.
```bash
cat /etc/resolv.conf
```
Probably at least two of them are google, which logs everything and
shares the data with the Global American Empire, and the other two are
mystery meat. Maybe good guys provided by your good guy ISP, but I
would not bet on it. Your ISP probably went along with his ISP, and his
ISP may be in the pocket of your enemies.
I use Yandex.com resolvers, since Russia is currently in a state of proxy
war with the Global American Empire which is heading into flat out war,
and I do not care if the Russian government knows which websites I visit,
because it is unlikely to share that data with the five eyes.
So for me
```terminal_image
cat /etc/resolv.conf
nameserver 2a02:6b8::feed:0ff
nameserver 2a02:6b8:0:1::feed:0ff
nameserver 77.88.8.8
nameserver 77.88.8.1
```
Of course your mileage may vary, depending on which enemies you are
worried about, and what the political situation is when you read this (it
may well change radically in the near future). Read up on the resolver's
privacy policies, but apply appropriate cynicism. Political alignments and
vulnerability to power matter more than professed good intentions.
We are going to change this when we set up our own nameserver for the
vpn clients, but if you don't have control, things are likely to get strange.
You cannot necessarily change your nameservers by editing
`/etc/resolv.conf`, since no end of processes are apt to rewrite that file
during boot up. Changing your nameservers depends on how your linux is
set up, but editing `/etc/resolv.conf` currently works on the standard
distribution. But may well cease to work when you add more software.
If it does not work, maybe you need to subtract some software, but it is
hard to know what software. A clean fresh install may be needed.
It all depends on which module of far too many modules gets the last
whack at `/etc/resolv.conf` on bootup. Far too many people display a
curious and excessive interest in controlling what nameserver you are
using, and if they have their claw in your linux distribution, you are going
to have to edit the configuration files of that module.
If something is whacking your `/etc/resolv.conf`, install `openresolv`,
which will generally make sure it gets the last whack, and edit its
configuration files. Or install a distribution where you *can* control
nameservice by editing `/etc/resolv.conf`.
# Install WireGuard on Debian Client and server
```bash
apt update -qy
apt full-upgrade -qy
apt install -qy wireguard wireguard-tools linux-headers-$(uname -r)
```
## Generate Public/Private Keypairs
On the server
```bash
mkdir -p /etc/wireguard
wg genkey | sudo tee /etc/wireguard/server_private.key | wg pubkey | sudo tee /etc/wireguard/server_public.key
sudo chmod 600 /etc/wireguard/ -R
```
On the client
```bash
mkdir -p /etc/wireguard
wg genkey | sudo tee /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
sudo chmod 600 /etc/wireguard/ -R
```
# Configure Wireguard on server
## Create WireGuard Server Configuration File
Use a command-line text editor like Nano to create a WireGuard configuration file on the Debian server. `wg0` will be the network interface name.
```bash
sudo nano /etc/wireguard/wg0.conf
```
Copy the following text and paste it to your configuration file. You need to use your own server private key and client public key.
The «…» marks mean that you do not copy the text inside them, which is only there for example. You have to substitute your own private key (since everyone now knows this private key), and your own client public key, mutatis mutandis.
```default
[Interface]
Address = 10.10.10.1/24
ListenPort = «51820»
PrivateKey = «cD+ZjXiVIX+0iSX1PNijl4a+88lCbDgw7kO78oXXLEc=»
[Peer]
PublicKey = «AYQJf6HbkQ0X0Xyt+cTMTuJe3RFwbuCMF46LKgTwzz4=»
AllowedIPs = 10.10.10.2/32
```
As always «...» means that this is an example value, and you need to
substitute your own actual value. "_Mutatis mutandis_" means "changing that
which ought to be changed". In other words, watch out for those «...» .
Or, as those that want to baffle you would say, metasyntactic variables are enclosed in «...» .
Where:
- **Address**: Specify the private IP address of the VPN server. Here I'm using the 10.10.10.0/24 network range, so it won't conflict with your home network range. (Most home routers use 192.168.0.0/24 or 192.168.1.0/24). 10.10.10.1 is the private IP address for the VPN server.
- **PrivateKey**: The private key of VPN server, which can be found in the `/etc/wireguard/server_private.key` file on the server.
- **ListenPort**: WireGuard VPN server will be listening on UDP port 51820, which is the default.
- **PublicKey**: The public key of VPN client, which can be found in the `/etc/wireguard/public.key` file on the client computer.
- **AllowedIPs**: IP addresses the VPN client is allowed to use. In this example, the client can only use the 10.10.10.2 IP address inside the VPN tunnel.
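For orientation, the client side mirrors this configuration. A hedged sketch of what a matching client `wg0.conf` could look like, with example placeholder values in «...» as always, mutatis mutandis (the actual client setup is covered later):

```default
[Interface]
Address = 10.10.10.2/24
PrivateKey = «client private key, from /etc/wireguard/private.key»
DNS = 10.10.10.1

[Peer]
PublicKey = «server public key, from /etc/wireguard/server_public.key»
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = «your server's public IP»:51820
PersistentKeepalive = 25
```

Note how the roles flip: the server config holds the client's public key, and the client config holds the server's.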
Change the file permission mode so that only root user can read the files. Private keys are supposed to be _private_,
```bash
sudo chmod 600 /etc/wireguard/ -R
```
## Configure IP Masquerading on the Server
We need to set up IP masquerading in the server firewall, so that the server becomes a virtual router for VPN clients. I will use UFW, which is a front end to the iptables firewall. Install UFW on Debian with:
``` bash
apt -qy install ufw
```
If ufw is already installed and running
``` bash
ufw disable
```
First, you need to allow SSH traffic.
```bash
ufw allow 22/tcp
```
Next, find the name of your server's main network interface.
```bash
ip addr | grep BROADCAST
```
As you can see, it's named `eth0` on my Debian server.
```terminal_image
:~# ip addr | grep BROADCAST
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
```
To configure IP masquerading, we have to add iptables command in a UFW configuration file.
```bash
nano /etc/ufw/before.rules
```
By default, there are some rules for the `filter` table. Add the following
lines at the end of these default rules. Replace `eth0` with your own
network interface name.
    # NAT table rules
    *nat
    :POSTROUTING ACCEPT [0:0]
    -F
    -A POSTROUTING -o «eth0» -j MASQUERADE
    # End each table with the 'COMMIT' line or these rules
    # won't be processed
    COMMIT
"MASQUERADE" is NAT packet translation. This puts your IPv4
forwarded network addresses behind a NAT firewall, so that they appear
on the internet with the network address of the server.
If you want to NAT translate your IPv6 addresses, you will have to do
something similar in `/etc/ufw/before6.rules`. But you usually have lots
of IPv6 addresses, so you seldom want to nat translate IPv6.
In Nano text editor, you can go to the end of the file by pressing `Ctrl+W`, then pressing `Ctrl+V`.
```terminal_image
-A ufw-before-input -p udp -d 239.255.255.250 --dport 1900 -j ACCEPT
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
-F
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
```
The above lines will append (`-A`) a rule to the end of the `POSTROUTING` chain of the `nat` table. It will link your virtual private network with the Internet. And also hide your network from the outside world. So the Internet can only see your VPN server's IP, but can't see your VPN client's IP, just like your home router hides your private home network.
As with your home router, this means your client systems behind the nat have no open ports.
If you want to open some ports, for example the bitcoin port 8333 so that you can run bitcoin core
```terminal_image
# NAT table rules
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p tcp --dport 8333 -j DNAT --to-destination 10.10.10.2:8333
-A PREROUTING -d «123.45.67.89»/32 -i eth0 -p udp --dport 8333 -j DNAT --to-destination 10.10.10.2:8333
COMMIT
```
Then open the corresponding ports in ufw
```bash
ufw allow in 8333
ufw enable
```
If you have enabled UFW before, then you can use systemctl to restart UFW.
## Configure forwarding on the Server
By default, UFW forbids packet forwarding. We can allow forwarding for our private network, mutatis mutandis.
```bash
ufw route allow in on wg0
ufw route allow out on wg0
ufw allow in on wg0
ufw allow «51820»/udp
ufw allow to «2405:4200:f001:13f6:7ae3:6c54:61ab:1/112»
```
As always «...» means that this is an example value, and you need to substitute your actual value. "_Mutatis mutandis_" means "changing that which should be changed", in other words, watch out for those «...» .
Note that the last line is intended to leave your clients naked on the IPv6
global internet with their own IPv6 addresses, as if they were in the cloud
with no firewall. This is often desirable for linux systems, but dangerous
for windows, android, and mac systems which always have loads of
undocumented closed source mystery meat processes running that do who
knows what.
You could open only part of the IPv6 subnet to incoming, and put
windows, mac, and android clients in the part that is not open.
`wg0` is the virtual network card that `wg0.conf` specifies. If you called it `«your name».conf` then mutatis mutandis.
You just told ufw to allow your vpn clients to see each other on the internet, but allowing routing does not in itself result in any routing.
To actually enable routing, edit the system kernel configuration file, and uncomment the following lines. `nano /etc/sysctl.conf`
```terminal_image
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
```
Now if you list the rules in the POSTROUTING chain of the NAT table by using the following command:
```bash
iptables -t nat -L POSTROUTING
```
You can see the Masquerade rule.
```terminal_image
:~# iptables -t nat -L POSTROUTING
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- anywhere anywhere
```
## Install a DNS Resolver on the Server
Since we will specify the VPN server as the DNS server for clients, we need to run a DNS resolver on the VPN server. We can install the bind9 DNS server.
```bash
apt install bind9
```
Once it's installed, BIND will automatically start. You can check its status with:
```bash
systemctl status bind9
```
Sample output:
```terminal_image
:~$ systemctl status bind9
● named.service - BIND Domain Name Server
Loaded: loaded (/lib/systemd/system/named.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-05-17 08:11:26 UTC; 37s ago
Docs: man:named(8)
Main PID: 13820 (named)
Tasks: 5 (limit: 1074)
Memory: 14.3M
CPU: 8.709s
CGroup: /system.slice/named.service
└─13820 /usr/sbin/named -f -u bind
```
If it's not running, start it with:
```bash
systemctl start bind9
```
Edit the BIND DNS server's configuration file.
```bash
nano /etc/bind/named.conf.options
```
Add the following line to allow VPN clients to send recursive DNS queries.
```default
allow-recursion { 127.0.0.1; 10.10.10.0/24; ::1/128; };
```
Save and close the file.
```terminal_image
:~# cat /etc/bind/named.conf.options | tail -n 9
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
listen-on-v6 { any; };
allow-recursion { 127.0.0.1; 10.10.10.0/24; ::1/128; };
};
```
Then edit the `/etc/default/named` file.
```bash
nano /etc/default/named
```
If on an IPv4-only network, add `-4` to the `OPTIONS` line to ensure BIND can query the root DNS servers.
```default
OPTIONS="-u bind -4"
```
If, on the other hand, you are on a network that supports both IPv6 and
IPv4, adding `-4` will cause unending havoc and chaos, as bind9's behavior
comes as a surprise to other components of the network, and bind9 crashes
on IPv6 information in its config files.
Save and close the file.
Restart `bind9` for the changes to take effect.
```bash
systemctl restart bind9
```
Your ufw firewall will allow vpn clients to access `bind9` because you earlier allowed everything from `wg0` in.
## Start WireGuard on the server
Run the following command on the server to start WireGuard.
```bash
wg-quick up /etc/wireguard/wg0.conf
```
To stop it, run
```bash
wg-quick down /etc/wireguard/wg0.conf
```
You can also use systemd service to start WireGuard.
```bash
systemctl start wg-quick@wg0.service
```
Enable auto-start at system boot time.
```bash
systemctl enable wg-quick@wg0.service
```
Check its status with the following command. Its status should be `active (exited)`.
```bash
systemctl status wg-quick@wg0.service
```
Now WireGuard server is ready to accept client connections.
# Configure Wireguard on Debian 11 client.
```bash
apt -qy install openresolv
nano /etc/wireguard/wg-client0.conf
```
You will edit the wireguard client config file so that the client will use
`openresolv` to use your server's resolver to find the network addresses of
domain names instead of leaking your activities all over the internet.
Copy the following text and paste it to your configuration file. You need to
use your own client private key and server public key, _and your own endpoint and port_.
Remember, the «» quotation marks mean that the material is only an
example, and has to be customized. Mutatis mutandis. Metasyntactic variables.
```default
[Interface]
Address = 10.10.10.2/24
DNS = 10.10.10.1
PrivateKey = «cOFA+x5UvHF+a3xJ6enLatG+DoE3I5PhMgKrMKkUyXI=»
[Peer]
PublicKey = «kQvxOJI5Km4S1c7WXu2UZFpB8mHGuf3Gz8mmgTIF2U0=»
AllowedIPs = 0.0.0.0/0
Endpoint = «123.45.67.89:51820»
PersistentKeepalive = 25
```
Where:
- `Address`: Specify the private IP address of the VPN client.
- `DNS`: specify 10.10.10.1 (the VPN server) as the DNS server. It will be configured via the `resolvconf` command. You can also specify multiple DNS servers for redundancy like this: `DNS = 10.10.10.1 8.8.8.8`
- `PrivateKey`: The client's private key, which can be found in the `/etc/wireguard/private.key` file on the client computer.
- `PublicKey`: The server's public key, which can be found in the `/etc/wireguard/server_public.key` file on the server.
- `AllowedIPs`: 0.0.0.0/0 represents the whole Internet, which means all traffic to the Internet should be routed via the VPN.
- `Endpoint`: The public IP address and port number of the VPN server. Replace 123.45.67.89 with your server's real public IP address and the port number with your server's real port number.
- `PersistentKeepalive`: Send an authenticated empty packet to the peer every 25 seconds to keep the connection alive. If PersistentKeepalive isn't enabled, the VPN server might not be able to ping the VPN client.
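Rather than hand-editing, the config file can be generated from shell variables, so each value is typed exactly once. A sketch; the key, endpoint, and address values below are placeholders from the example above that you must replace with your own, and it writes to `/tmp` for illustration rather than `/etc/wireguard/wg-client0.conf`:

```bash
# Placeholder values -- substitute your own keys and endpoint.
CLIENT_ADDR="10.10.10.2/24"
DNS_SERVER="10.10.10.1"
CLIENT_PRIVATE_KEY="cOFA+x5UvHF+a3xJ6enLatG+DoE3I5PhMgKrMKkUyXI="
SERVER_PUBLIC_KEY="kQvxOJI5Km4S1c7WXu2UZFpB8mHGuf3Gz8mmgTIF2U0="
SERVER_ENDPOINT="123.45.67.89:51820"

# Write the client config from the variables above.
cat > /tmp/wg-client0.conf <<EOF
[Interface]
Address = $CLIENT_ADDR
DNS = $DNS_SERVER
PrivateKey = $CLIENT_PRIVATE_KEY

[Peer]
PublicKey = $SERVER_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0
Endpoint = $SERVER_ENDPOINT
PersistentKeepalive = 25
EOF
```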
Save and close the file.
Change the file mode so that only the root user can read the files.
```bash
chmod 600 /etc/wireguard/ -R
```
Start WireGuard.
```bash
systemctl start wg-quick@wg-client0.service
```
Enable auto-start at system boot time.
```bash
systemctl enable wg-quick@wg-client0.service
```
Check its status:
```bash
systemctl status wg-quick@wg-client0.service
```
Now go to this website: `http://icanhazip.com/` to check your public IP address. If everything went well, it should display your VPN server's public IP address instead of your client computer's public IP address.
You can also run the following command to get the current public IP address.
```bash
curl https://icanhazip.com
```
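That check can be wrapped in a small script. A sketch; the server address is a placeholder you must replace with your server's real public IP, and `icanhazip.com` is simply the lookup service used above:

```bash
# Hypothetical check: is traffic leaving via the VPN server?
tunnel_active() {
  # $1 = the server's known public IP, $2 = the observed public IP
  [ "$1" = "$2" ]
}

SERVER_IP="123.45.67.89"
OBSERVED=$(curl -s --max-time 5 https://icanhazip.com || echo unknown)

if tunnel_active "$SERVER_IP" "$OBSERVED"; then
  echo "traffic is going through the VPN"
else
  echo "traffic is NOT going through the VPN (observed: $OBSERVED)"
fi
```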
# Troubleshooting
## Check if UDP port «51820» is open
Install the `tshark` network traffic analyzer on the server. Tshark is the command-line version of Wireshark.
```bash
apt install -qy tshark
adduser «your-username» wireshark
su -l «your-username»
tshark -i «eth0» "udp port «51820»"
```
If you are asked “_Should non-superusers be able to capture packets?_”,
answer _Yes_. The `adduser` command above adds your user account to the
`wireshark` group so that you can capture packets without root.
If the WireGuard client is able to connect to UDP port «51820» of the server, then you will see packets being captured by tshark like below. As you can see, the client started the handshake initiation, and the server sent back a handshake response. Once the connection is established, the client sends keepalive packets.
```terminal_image
Capturing on 'eth0'
1 105.092578905 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x3F1A04AB
2 110.464628716 12.34.56.78 → 11.22.33.44 WireGuard 134 Handshake Response, sender=0x34ED7471, receiver=0xD4B23800
3 110.509517074 11.22.33.44 → 12.34.56.78 WireGuard 74 Keepalive, receiver=0x34ED7471, counter=0
```
If the WireGuard client cannot connect to UDP port 51820 of the server, then you will only see handshake initiation packets. There's no handshake response.
```terminal_image
Capturing on 'eth0'
1 105.092578905 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x3F1A04AB
2 149.670118573 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x7D584974
3 152.575188680 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x8D2407B9
4 153.706876729 12.34.56.78 → 11.22.33.44 WireGuard 190 Handshake Initiation, sender=0x47690027
5 154.789959772 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x993232FC
6 157.956576772 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x06AD433B
7 159.082825929 12.34.56.78 → 11.22.33.44 WireGuard 190 Handshake Initiation, sender=0x8C089E1
```
## Ping test
You can ping from the VPN server to the VPN client (`ping 10.10.10.2`) to see if the tunnel works. If you see the following error message in the ping,
```terminal_image
ping: sendmsg: Required key not available
```
it might be that the `AllowedIPs` parameter is wrong, e.g. a typo.
If the ping error message is
```terminal_image
ping: sendmsg: Destination address required
```
it could be that the private/public key is wrong in your config files.
## Not able to browse the Internet
If the VPN tunnel is successfully established, but the client public IP
address doesn't change, that's because the masquerading or forwarding
rule in your UFW config file is not working, typically a typo in the
`/etc/ufw/before.rules` file.
## Enable Debug logging in Linux Kernel
If you use Linux kernel 5.6+, you can, as root, enable debug logging for
WireGuard with the following commands. A non-root user cannot change
kernel logging.
```bash
sudo su -
echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control
```
Then you can view the debug logs with
```bash
sudo dmesg -wH
```
or
```bash
sudo journalctl -kf
```
## Restart
If your VPN still doesn't work, try restarting the VPN server and client.
# Adding Additional VPN Clients
WireGuard is designed to associate one IP address with one VPN client. To add more VPN clients, you need to create a unique private/public key pair for each client, then add each VPN client's public key in the server's config file (`/etc/wireguard/wg0.conf`) like this:
```default
[Interface]
Address = 10.10.10.1/24
PrivateKey = «UIFH+XXjJ0g0uAZJ6vPqsbb/o68SYVQdmYJpy/FlGFA=»
ListenPort = «51820»
[Peer]
PublicKey = «75VNV7HqFh+3QIT5OHZkcjWfbjx8tc6Ck62gZJT/KRA=»
AllowedIPs = 10.10.10.2/32
[Peer]
PublicKey = «YYh4/1Z/3rtl0i7cJorcinB7T4UOIzScifPNEIESFD8=»
AllowedIPs = 10.10.10.3/32
[Peer]
PublicKey = «EVstHZc6QamzPgefDGPLFEjGyedJk6SZbCJttpzcvC8=»
AllowedIPs = 10.10.10.4/32
```
Each VPN client will have a static private IP address (10.10.10.2,
10.10.10.3, 10.10.10.4, etc). Restart the WireGuard server for the changes
to take effect.
Then add WireGuard configuration on each VPN client as usual.
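Picking the next free address and appending the peer block can be scripted. A sketch of a hypothetical helper: it counts the existing `AllowedIPs` lines to choose the next `10.10.10.x` host address, and operates on a throwaway sample in `/tmp` for illustration; on the real server, point `CONF` at `/etc/wireguard/wg0.conf` and skip the sample-creation step:

```bash
CONF=/tmp/wg0.conf
NEW_PUBLIC_KEY="YYh4/1Z/3rtl0i7cJorcinB7T4UOIzScifPNEIESFD8="

# Sample existing server config, for illustration only.
cat > "$CONF" <<'EOF'
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820

[Peer]
PublicKey = 75VNV7HqFh+3QIT5OHZkcjWfbjx8tc6Ck62gZJT/KRA=
AllowedIPs = 10.10.10.2/32
EOF

# Count existing peers; the server itself holds .1, clients start at .2.
next=$(( $(grep -c '^AllowedIPs' "$CONF") + 2 ))

# Append the new peer block with the next free /32 address.
cat >> "$CONF" <<EOF

[Peer]
PublicKey = $NEW_PUBLIC_KEY
AllowedIPs = 10.10.10.$next/32
EOF

grep '^AllowedIPs' "$CONF"
```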
## Configure VPN Client on iOS/Android
Install the `WireGuard` app from the App store. Then open this app and click the `Add a tunnel` button.
You have 3 methods to create a new WireGuard tunnel.
- create from file or archive
- create from QR code
- Create from scratch
"Create from scratch" means that the WireGuard app gives you a private and
public key pair and an empty wg-client.conf file that you populate in
the WireGuard app UI. This is likely to result in a lot of typing, where you
are bound to make a typo, even though the correct and working information
is already on your Debian server and client and you would like to just copy
and paste it.
"Create from QR code" means that you create `ios.conf` on your client as before for Debian, add the public key to your server's `wg0.conf` as before for Debian, restart the server as before, and then generate the QR code with
```bash
apt install -qy qrencode
cat /etc/wireguard/ios.conf | qrencode -t ansiutf8
```
This is apt to be easier, because it is hard to transfer information onto Android systems, which tend to be locked down against ordinary users transferring computer data. Qrencode is very useful for that.
## Configure VPN Client on Windows
Download the [WireGuard installer for Windows](https://www.wireguard.com/install/).
Once it's installed, start the WireGuard program. You need to right-click
on the left sidebar to _create a new empty tunnel_. It will automatically
create a public/private key for the Windows client.
And from there on, it is the same as with the Android client.
On Windows, you can [use the PowerShell program to SSH into your
Linux server](https://www.linuxbabe.com/linux-server/ssh-windows), so you do not have the problem you had with android.
# Policy Routing, Split Tunneling & VPN Kill Switch
It's not recommended to use _policy routing_, _split tunneling_, or _VPN kill switch_ in conjunction with each other. If you use policy routing, then you should not enable split tunneling or VPN kill switch, and vice versa.
## Policy Routing
By default, all traffic on the VPN client will be routed through the VPN
server. Sometimes you may want to route only a specific type of traffic,
based on the transport layer protocol and the destination port. This is
known as policy routing.
Policy routing is configured on the client computer, and we need to stop
the WireGuard client process and edit the client configuration file.
```bash
systemctl stop wg-quick@wg-client0.service
nano /etc/wireguard/wg-client0.conf
```
For example, if you add the following 3 lines in the `[interface]` section,
then WireGuard will create a routing table named “1234” and add the ip rule
into the routing table. In this example, traffic will be routed through VPN
server only when TCP is used as the transport layer protocol and the
destination port is 25, i.e., when the client computer sends email.
```default
Table = 1234
PostUp = ip rule add ipproto tcp dport 25 table 1234
PreDown = ip rule delete ipproto tcp dport 25 table 1234
```
```terminal_image
[Interface]
Address = 10.10.10.2/24
DNS = 10.10.10.1
PrivateKey = «cOFA+x5UvHF+a3xJ6enLatG+DoE3I5PhMgKrMKkUyXI=»
Table = 1234
PostUp = ip rule add ipproto tcp dport 25 table 1234
PreDown = ip rule delete ipproto tcp dport 25 table 1234
[Peer]
PublicKey = «kQvxOJI5Km4S1c7WXu2UZFpB8mHGuf3Gz8mmgTIF2U0=»
AllowedIPs = 0.0.0.0/0
Endpoint = «123.45.67.89:51820»
PersistentKeepalive = 25
```
Save and close the file. Then start WireGuard client again.
## Split Tunneling
By default, all traffic on the VPN client will be routed through the VPN
server. Here's how to enable split tunneling, so only traffic to the
`10.10.10.0/24` IP range will be tunneled through WireGuard VPN. This is useful when you want to build a private network for several cloud servers, because VPN clients will run on cloud servers and if you use a full VPN tunnel, then you will probably lose connection to the cloud servers.
Edit the client configuration file.
```default
nano /etc/wireguard/wg-client0.conf
```
Change
```default
AllowedIPs = 0.0.0.0/0
```
To
```default
AllowedIPs = 10.10.10.0/24
```
So traffic will be routed through VPN only when the destination address is
in the 10.10.10.0/24 IP range. Save and close the file. Then restart WireGuard client.
```bash
sudo systemctl restart wg-quick@wg-client0.service
```
## VPN Kill Switch
By default, your computer can access the Internet via the normal gateway
when the VPN connection is disrupted. You may want to enable the kill switch
feature, which prevents the flow of unencrypted packets through
non-WireGuard interfaces.
Stop the WireGuard client process and edit the client configuration file.
```bash
systemctl stop wg-quick@wg-client0.service
nano /etc/wireguard/wg-client0.conf
```
Add the following two lines in the `[interface]` section.
```default
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
```
Like this:
```terminal_image
[Interface]
Address = 10.10.10.2/24
DNS = 10.10.10.1
PrivateKey = cOFA+x5UvHF+a3xJ6enLatG+DoE3I5PhMgKrMKkUyXI=
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
[Peer]
PublicKey = kQvxOJI5Km4S1c7WXu2UZFpB8mHGuf3Gz8mmgTIF2U0=
AllowedIPs = 0.0.0.0/0
Endpoint = 12.34.56.78:51820
PersistentKeepalive = 25
```
Save and close the file. Then start the WireGuard client.
### running in schism, with many approximately equal branches
Centralized databases are a single point of failure. They are also extremely
convenient, because they enable many humans to leverage the judgment of
a single human, rather than needing to exercise their own judgement.
With Git, you usually have one master repository. Sometimes you do not,
and have to exercise your own judgement. I have often enough tripped
over this, and often enough managed fine.
Under attack, the system may well schism, with no one source that lists all
or most Zooko identities that people are interested in contacting, but it
should, like Git, be designed to schism, and work well enough while
schismed. That is what makes Git centralization truly decentralized.
Sometimes, often, there is no one authoritative branch, and things still work.
katex, and in such cases, one should generate the html
```bash
fn=filename
pandoc --toc --eol=lf --wrap=preserve --from markdown+ascii_identifiers+smart --to html --metadata=lang:en --verbose --include-in-header=./pandoc_templates/header.pandoc --include-before-body=./pandoc_templates/before.pandoc --include-after-body=./pandoc_templates/after.Pandoc -o $fn.html $fn.md
```
Since markdown has no concept of a title, Pandoc expects to find the
In bash
```bash
fn=foobar
git mv $fn.html $fn.md && cp $fn.md $fn.html && pandoc -s --to markdown-smart+raw_html+native_divs+native_spans+fenced_divs+bracketed_spans --eol=lf --wrap=preserve --verbose -o $fn.md $fn.html
```
## Math expressions and katex