modified: docs/pandoc_templates/style.css

modified:   docs/scale_clients_trust.md
modified:   docs/setup/set_up_build_environments.md
modified:   docs/setup/wireguard.md
This commit is contained in:
reaction.la 2022-09-18 22:08:33 +10:00
parent 900696f685
commit 3f196cc5b3
No known key found for this signature in database
GPG Key ID: 99914792148C8388
4 changed files with 272 additions and 63 deletions


@ -6,7 +6,7 @@ body {
font-variant: normal;
font-weight: normal;
font-stretch: normal;
font-size: 100%;
}
table {
border-collapse: collapse;
@ -48,4 +48,5 @@ pre.terminal_image {
background-color: #000;
color: #0F0;
font-size: 75%;
white-space: nowrap;
}


@ -2,6 +2,28 @@
title: Scaling, trust and clients
...
The fundamental strength of the blockchain architecture is that it is an immutable public ledger. The fundamental flaw of the blockchain architecture is that it is an immutable public ledger.
This is a problem for privacy and fungibility, but what is really biting is scalability, the sheer size of the thing. Every full peer has to download every transaction that anyone ever did, evaluate that transaction for validity, and store it forever. And we are running hard into the physical limits of that.
As someone said when Satoshi first proposed what became bitcoin: “it does not seem to scale to the required size.”
And here we are now, fourteen years later, at rather close to that scaling limit. And for fourteen years, very smart people have been looking for a way to scale without limits.
And, at about the same time as we are hitting scalability limits, “public” is becoming a problem for fungibility. The fungibility crisis and the scalability crisis are hitting at about the same time. The fungibility crisis is hitting eth and is threatening bitcoin.
That the ledger is public enables the blood diamonds attack on crypto currency. Some transaction outputs could be deemed dirty, and rendered unspendable by centralized power. Eventually, to avoid being blocked, you have to make everything KYC, and then even though you are fully compliant, you are apt to get arbitrarily and capriciously blocked because the government, people in quasi government institutions, or random criminals on the revolving door between regulators and regulated decide they do not like you for some whimsical reason. I have from time to time lost small amounts of totally legitimate fiat money in this fashion, as international transactions become ever more difficult and dangerous, and recently lost an enormous amount of totally legitimate fiat money in this fashion.
Eth is highly centralized, and the full extent that it is centralized and in bed with the state is now being revealed, as tornado eth gets demonetized.
Some people in eth are resisting this attack. Some are not.
Bitcoiners have long accused eth of being a shitcoin, which accusation is obviously false, but with the blood diamonds attack under way on eth, it is likely to become true. It is not a shitcoin, but I have long regarded it as likely to become one, and that expectation may well come true shortly.
A highly centralized crypto currency is closer to being an unregulated bank than a crypto currency. Shitcoins are fraudulent unregulated banks posing as crypto currencies. Eth may well be about to turn into a regulated bank. When bitcoiners accuse eth of being a shitcoin, the truth in their accusation is dangerous centralization, and dangerous closeness to the authorities.
The advantage of crypto currency is that as elite virtue collapses, the regulated banking system becomes ever more lawless, arbitrary, corrupt, and unpredictable, while an immutable ledger ensures honest conduct. But if a central authority has too much power over the crypto currency, it gets to retroactively decide what the ledger means. Centralization is a central point of failure, and in a world of ever more morally debased and degenerate elites, it will fail. Maybe Eth is failing now. If not, it will likely fail by and by.
# Scaling
The Bitcoin blockchain has become inconveniently large, and evaluating it
@ -155,11 +177,9 @@ with both privacy and scaling.
## zk-snarks
Zk-snarks, zeeks, are not yet a solution. They have enormous potential
benefits for privacy and scaling, but as yet, no one has quite found a way.
[performance survey of zksnarks](https://github.com/matter-labs/awesome-zero-knowledge-proofs#comparison-of-the-most-popular-zkp-systems)
A zk-snark is a succinct proof that code *was* executed on an immense pile
of data, and produced the expected, succinct, result. It is a witness that
someone carried out the calculation he claims he did, and that calculation
@ -167,24 +187,103 @@ produced the result he claimed it did. So not everyone has to verify the
blockchain from beginning to end. And not everyone has to know what
inputs justified what outputs.
As "zk-snark" is not a pronounceable word, I am going to use the word "zeek"
to refer to the blob proving that a computation was performed, and
produced the expected result. This is an idiosyncratic usage, but I just do
not like acronyms.
The innumerable privacy coins around based on zk-snarks are just not
doing what has to be done to make a zeek privacy currency that is viable
at any reasonable scale. They are intentionally scams, or by negligence,
unintentionally scams. All the zk-snark coins are doing the step from a set
$N$ of valid coins, valid unspent transaction outputs, to set $N+1$, in the
old fashioned Satoshi way, and sprinkling a little bit of zk-snark magic
privacy pixie dust on top (because the task of producing a genuine zeek
proof of coin state for step $N$ to step $N+1$ is just too big for them).
Which is, intentionally or unintentionally, a scam.
Not yet an effective solution for scaling the blockchain, for to scale the
blockchain, you need a concise proof that any spend in the blockchain was
only spent once, and while a zk-snark proving this is concise and
capable of being quickly evaluated by any client, generating the proof is
an enormous task.
### What is a zk-stark or a zk-snark?
Zk-snark stands for “Zero-Knowledge Succinct Non-interactive Argument of Knowledge.”
A zk-stark is the same thing, except “Transparent”, meaning it does not have
the “toxic waste problem”, a potential secret backdoor. Whenever you create
zk-snark parameters, you create a backdoor, and how do third parties know that
this backdoor has been forever erased?
zk-stark stands for Zero-Knowledge Scalable Transparent ARguments of Knowledge, where “scalable” means the same thing as “succinct”.
Ok, what is this knowledge that a zk-stark is an argument of?
Bob can prove to Carol that he knows a set of boolean values that
simultaneously satisfy certain boolean constraints.
This is zero knowledge because he proves this to Carol without revealing
what those values are, and it is “succinct” or “scalable”, because he can
prove knowledge of a truly enormous set of values that satisfy a truly
enormous set of constraints, with a proof that remains roughly the same
reasonably small size regardless of how enormous the set of values and
constraints are, and Carol can check the proof in a reasonably short time,
even if it takes Bob an enormous time to evaluate all those constraints over all those booleans.
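Stated slightly more formally (my sketch, not the source's notation, and the exact asymptotics vary by construction): Bob knows a witness $w$ for a constraint system $C$, and hands Carol a proof $\pi$ that is small and fast to check, even though producing it costs Bob roughly the full work of evaluating $C$:

$$C(w)=1,\qquad |\pi| = O(\operatorname{polylog}|C|),\qquad T_{\text{verify}} = O(\operatorname{polylog}|C|),\qquad T_{\text{prove}} = \tilde{O}(|C|)$$

So the cost of checking barely grows, even as the constraint system becomes enormous.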
Which means that Carol could potentially check the validity of the
blockchain without having to wade through terabytes of other people's
data in which she has absolutely no interest.
Which means that each peer on the blockchain would not have to
download the entire blockchain, keep it all around, and evaluate it from the beginning. They could just keep around the bits they cared about.
The peers as a whole have to keep all the data around, and make certain
information about this data available to anyone on demand, but each
individual peer does not have to keep all the data around, and not all the
data has to be available. In particular, the inputs to the transaction do not
have to be available, only that they existed, were used once and only once,
and the output in question is the result of a valid transaction whose outputs
are equal to its inputs.
Unfortunately producing a zeek of such an enormous pile of data, with
such an enormous pile of constraints, could never be done, because the
blockchain grows faster than you can generate the zeek.
### zk-stark rollups, zeek rollups
Zk-stark rollups are a privacy technology and a scaling technology.
A zeek rollup is a zeek that proves that two or more other zeeks were verified.
Instead of Bob proving to Alice that he knows the latest block was valid, having evaluated every transaction, he proves to Alice that *someone* evaluated every transaction.
Fundamentally a zk-stark proves to the verifier that the prover who generated it knows a solution to an NP-complete problem. Unfortunately the proof is quite large, and the relationship between that problem and anything that anyone cares about is extremely elaborate and indirect. The proof is large and costly to generate, even if not that costly to verify, transmit, or store.
So you need a language that will generate such a relationship. And then you can prove, for example, that a hash is the hash of a valid transaction output, without revealing the value of that output, or the transaction inputs.
But if you have to have such a proof for every output, that is a mighty big pile of proofs, costly to evaluate, costly to store the vast pile of data. If you have a lot of zk-snarks, you have too many.
So, rollups.
Instead of proving that you know an enormous pile of data satisfying an enormous pile of constraints, you prove you know two zk-starks.
Each of which proves that someone else knows two more zk-starks. And the generation of all these zk-starks can be distributed over all the peers of the entire blockchain. At the bottom of this enormous pile of zk-starks is an enormous pile of transactions, with no one person or one computer knowing all of them, or even very many of them.
Instead of Bob proving to Carol that he knows every transaction that ever there was, and that they are all valid, Bob proves that for every transaction that ever there was, someone knew that that transaction was valid. Neither Carol nor Bob know who knew, or what was in that transaction.
You produce a proof that you verified a pile of proofs. You organize the information about which you want to prove stuff into a merkle tree, and the root of the merkle tree is associated with a proof that you verified the proofs of the direct children of that root vertex. And proof of each of the children of that root vertex proves that someone verified their children. And so forth all the way down to the bottom of the tree, the origin of the blockchain, proofs about proofs about proofs about proofs.
And then, to prove that a hash is a hash of a valid transaction output, you just produce the hash path linking that transaction to the root of the merkle tree. So with every new block, everyone has to just verify one proof once. All the child proofs get thrown away eventually.
Which means that peers do not have to keep every transaction and every output around forever. They just keep some recent roots of the blockchain around, plus the transactions and transaction outputs that they care about. So the blockchain can scale without limit.
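The hash-path mechanics can be sketched in a few lines of shell (a toy, not the real chain format: four dummy transactions, plain sha256 as the tree hash):

```bash
# Toy Merkle tree over four dummy transactions, and a hash-path check.
h() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }      # hex digest of a string
l0=$(h "tx0"); l1=$(h "tx1"); l2=$(h "tx2"); l3=$(h "tx3") # leaves
n01=$(h "$l0$l1"); n23=$(h "$l2$l3")                       # interior nodes
root=$(h "$n01$n23")                                       # Merkle root
# To prove tx2 is in the block, reveal only its siblings l3 and n01,
# and recompute up the path to the root:
recomputed=$(h "$n01$(h "$(h "tx2")$l3")")
[ "$recomputed" = "$root" ] && echo "tx2 is in the block"
```

Anyone holding the root needs only the sibling hashes, not the other transactions; at scale the path is logarithmic in the number of leaves.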
Zk-stark rollups are a scaling technology plus a privacy technology. If you are not securing people's privacy, you are keeping around an enormous pile of data that nobody cares about (except a hostile government), which means your scaling does not scale.
And, as we are seeing with Tornado, some people in Eth do not want that vast pile of data thrown away.
To optimize scaling to the max, you optimize privacy to the max. You want all data hidden as soon as possible as completely as possible, so that everyone on the blockchain is not drowning in other people's data. The less anyone reveals, and the fewer the people they reveal it to, the better it scales, and the faster and cheaper the blockchain can do transactions, because you are pushing the generation of zk-starks down to the parties who are themselves directly doing the transaction. Optimizing for privacy is almost the same thing as optimizing for scalability.
The fundamental problem is that in order to produce a compact proof that
the set of coins, unspent transaction outputs, of state $N+1$ was validly
@ -205,21 +304,20 @@ problem of factoring, dividing the problem into manageable subtasks, but
it seems to be totally oblivious to the hard problem of incentive compatibility at scale.
Incentive compatibility was Satoshi's brilliant insight, and the client trust
problem, too many people running client wallets and not enough people
running full peers, is failure of Satoshi's solution to that problem to scale.
Existing zk-snark solutions fail at scale, though in a different way. With
zk-snarks, the client can verify the zeek, but producing a valid zeek in the
first place is going to be hard, and will rapidly get harder as the scale
increases.
A zeek that succinctly proves that the set of coins (unspent transaction
outputs) at block $N+1$ was validly derived from the set of coins at
block $N$, and can also prove that any given coin is in that set or not in that
set is going to have to be a proof about many, many, zeeks produced by
many, many machines, a proof about a very large dag of zeeks, each zeek
a vertex in the dag proving some small part of the validity of the step from
consensus state $N$ of valid coins to consensus state $N+1$ of valid coins, and the owners of each of those machines that produced a tree vertex for the step from set $N$ to set $N+1$ will need a reward proportionate
to the task that they have completed, and the validity of the reward will
need to be part of the proof, and there will need to be a market in those
rewards, with each vertex in the dag preferring the cheapest source of
@ -227,16 +325,6 @@ child vertexes. Each of the machines would only need to have a small part
of the total state $N$, and a small part of the transactions transforming state
$N$ into state $N+1$. This is hard but doable, but I am just not seeing it done yet.
I see good [proposals for factoring the work], but I don't see them
addressing the incentive compatibility problem. It needs a whole picture
design, rather than a part of the picture design. A true zk-snark solution
has to shard the problem of producing state $N+1$, the set of unspent
transaction outputs, from state $N$, so it should also shard the problem of
producing a consensus on the total set and order of transactions.
[proposals for factoring the work]:https://hackmd.io/@vbuterin/das
"Data Availability Sampling Phase 1 Proposal"
### The problem with zk-snarks
Last time I checked, [Cairo] was not ready for prime time.
@ -362,6 +450,20 @@ rocket and calling it a space plane.
[a frequently changing secret that is distributed]:multisignature.html#scaling
### How a fully scalable blockchain running on zeek rollups would work
A blockchain is of course a chain of blocks, and at scale, each block would be far too immense for any one peer to store or process, let alone the entire chain.
Each block would be a Merkle patricia tree, or a Merkle tree of a number of Merkle patricia trees, because we want the block to be broad and flat, rather than deep and narrow, so that it can be produced in a massively parallel way, created in parallel by an immense number of peers. Each block would contain a proof that it was validly derived from the previous block, and that the previous block's similar proof was verified. A chain is narrow and deep, but that does not matter, because the proofs are “scalable”. No one has to verify all the proofs from the beginning; they just have to verify the latest proofs.
Each peer would keep around the actual data and actual proofs that it cared about, and the chain of hashes linking the data it cared about to Merkle root of the latest block.
All the immense amount of data in the immense blockchain that anyone
cares about would need to exist somewhere, but it would not have to exist
*everywhere*, and everyone would have a proof that the tiny part of the
blockchain that they keep around is consistent with all the other tiny parts
of the blockchain that everyone else is keeping around.
# sharding within each single very large peer
Sharding within a single peer is an easier problem than sharding the


@ -11,10 +11,10 @@ platform environment.
Having a whole lot of different versions of different machines, with a
whole lot of snapshots, can suck up a remarkable amount of disk space
mighty fast. Even if your virtual disk is quite small, your snapshots wind
up eating a huge amount of space, so you really need some capacious disk
drives. And you are not going to be able to back up all this enormous stuff,
so you have to document how to recreate it.
Each snapshot that you intend to keep around long term needs to
correspond to a documented path from install to that snapshot.
@ -49,7 +49,7 @@ To install guest additions on Debian:
```bash
su -l root
apt-get -qy update && apt-get -qy install build-essential module-assistant git dnsutils curl sudo dialog rsync
apt-get -qy full-upgrade
m-a -qi prepare
mount -t iso9660 /dev/sr0 /media/cdrom
@ -194,8 +194,14 @@ accounts that have sensitive information by corrupting the shadow file
```bash
usermod -L cherry
```
But this tactic is very risky, because it can, due to a bug in Linux, disable
ssh public key login. And then you are really hosed. Better to use a very
long random password, and then throw it away.
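A sketch of that tactic (`cherry` being the example account above; applying the password needs root, so that line is left commented):

```bash
# Generate a long random throwaway password (64 base64 characters).
pw="$(openssl rand -base64 48)"
# echo "cherry:$pw" | chpasswd   # as root: set it, then `unset pw` to throw it away
echo "${#pw}"                    # prints 64
```

Because the password was never written down anywhere, the account is effectively locked without corrupting the shadow file.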
When an account is disabled in this manner, you cannot login at the
terminal, and may be unable to ssh in, but you can still get into it by
`su -l cherry` from the root account. And if you have disabled the root account,
but have enabled passwordless sudo for one special user, you can still get
into the root account with `sudo -s` or `sudo su -l root`. But if you disable
the root account in this manner without creating an account that can sudo
@ -206,6 +212,7 @@ but have enabled passwordless sudo for one special user, you can still get
You can always undo the deliberate corruption by setting a new password,
providing you can somehow get into root.
## never enough memory
If a server is configured with an [ample swap file] an overloaded server will
@ -431,6 +438,13 @@ nano /etc/ssh/sshd_config
Your config file should have in it
```default
PubkeyAuthentication yes
ChallengeResponseAuthentication no
PrintMotd no
PasswordAuthentication no
UsePAM no
HostKey /etc/ssh/ssh_host_ed25519_key
X11Forwarding yes
AllowAgentForwarding yes
@ -439,9 +453,6 @@ TCPKeepAlive yes
AllowStreamLocalForwarding yes
GatewayPorts yes
PermitTunnel yes
PermitRootLogin prohibit-password
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
@ -450,6 +461,11 @@ pubkeyacceptedkeytypes ssh-ed25519
hostkeyalgorithms ssh-ed25519
hostbasedacceptedkeytypes ssh-ed25519
casignaturealgorithms ssh-ed25519
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
```
`PermitRootLogin` defaults to prohibit-password, but best to set it


@ -43,6 +43,11 @@ Supersedes OpenVPN and IPSec, which are obsolete and insecure.
I assume you have a host in the cloud, with world accessible network address and ports, that can access blocked websites freely outside of your country or Internet filtering system.
We are going to enable ipv4 and ipv6 on our vpn. The tutorial assumes ipv6 is working. Check that it *is* working by pinging your server `ping -6 «server»`, then ssh in to your server and attempt to `ping -6 «something»`.
It may well happen that your server is supposed to have an ipv6 address and /64 ipv6 subnet, but something is broken.
The VPN server is running the Debian 11 operating system. This tutorial is not
going to work on Debian 10 or lower. Accessing your vpn from a windows
client, however, is easy, since the windows wireguard client is very
@ -50,6 +55,77 @@ friendly. Setting up wireguard on windows is easy. Setting up a wireguard
VPN server on windows is, on the other hand, very difficult. Don't even
try. I am unaware of anyone succeeding.
## Make sure you have control of nameservice
No end of people are strangely eager to provide free nameservice. If it is a
free service, you are the product. And some of them have sneaky ways to get
you to use their nameservice whether you want it or not.
Nameservice reveals which websites you are visiting. We are going to set up
our own nameserver for the vpn clients, but it will have to forward to a
bigger nameserver, thus revealing which websites the clients are visiting,
though not which client is visiting them. Lots of people are strangely eager
to know which websites you are visiting. If you cannot control your
nameservice, then when you set up your own nameserver, it is likely to
behave strangely.
No end of people's helpful efforts to help you automatically set up
nameservice are likely to foul up your nameservice for your vpn clients.
```bash
cat /etc/resolv.conf
```
Probably at least two of the nameservers listed are google, which logs everything and
shares the data with the Global American Empire, and the other two are
mystery meat. Maybe good guys provided by your good guy ISP, but I
would not bet on it. Your ISP probably went along with his ISP, and his
ISP may be in the pocket of your enemies.
I use Yandex.com resolvers, since Russia is currently in a state of proxy
war with the Global American Empire which is heading into flat out war,
and I do not care if the Russian government knows which websites I visit,
because it is unlikely to share that data with the five eyes.
So for me
```terminal_image
cat /etc/resolv.conf
nameserver 2a02:6b8::feed:0ff
nameserver 2a02:6b8:0:1::feed:0ff
nameserver 77.88.8.8
nameserver 77.88.8.1
```
Of course your mileage may vary, depending on which enemies you are
worried about, and what the political situation is when you read this (it
may well change radically in the near future). Read up on the resolver's
privacy policies, but apply appropriate cynicism. Political alignments and
vulnerability to power matter more than professed good intentions.
We are going to change this when we set up our own nameserver for the
vpn clients, but if you don't have control, things are likely to get strange.
You cannot necessarily change your nameservers by editing
`/etc/resolv.conf`, since no end of processes are apt to rewrite that file
during boot up. Changing your nameservers depends on how your linux is
set up, but editing `/etc/resolv.conf` currently works on the standard
distribution. But may well cease to work when you add more software.
If it does not work, maybe you need to subtract some software, but it is
hard to know what software. A clean fresh install may be needed.
It all depends on which module of far too many modules gets the last
whack at `/etc/resolv.conf` on bootup. Far too many people display a
curious and excessive interest in controlling what nameserver you are
using, and if they have their claw in your linux distribution, you are going
to have to edit the configuration files of that module.
If something is whacking your `/etc/resolv.conf`, install `openresolv`,
which will generally make sure it gets the last whack, and edit its
configuration files. Or install a distribution where you *can* control
nameservice by editing `/etc/resolv.conf`.
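For the openresolv route, a minimal sketch of its configuration file (the `name_servers` variable is openresolv's own; the addresses here are the Yandex resolvers mentioned above, swap in whichever you trust):

```default
# /etc/resolvconf.conf
name_servers="77.88.8.8 77.88.8.1"
```

Then run `resolvconf -u` to regenerate `/etc/resolv.conf` from this file.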
# Install WireGuard on Debian Client and server
```bash
@ -250,6 +326,21 @@ windows, mac, and android clients in the part that is not open.
`wg0` is the virtual network card that `wg0.conf` specifies. If you called it `«your name».conf` then mutatis mutandis.
You just told ufw to allow your vpn clients to see each other on the internet, but allowing routing does not in itself result in any routing.
To actually enable routing, edit the system kernel configuration file, and uncomment the following lines. `nano /etc/sysctl.conf`
```terminal_image
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
```
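After uncommenting those lines, the settings can be applied without a reboot; `sysctl -p` echoes each setting it applies (run as root):

```terminal_image
:~# sysctl -p
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```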
Now if you list the rules in the POSTROUTING chain of the NAT table by using the following command:
```bash
@ -309,20 +400,21 @@ nano /etc/bind/named.conf.options
Add the following line to allow VPN clients to send recursive DNS queries.
```default
allow-recursion { 127.0.0.1; 10.10.10.0/24; ::1/128; };
```
Save and close the file.
```terminal_image
:~# cat /etc/bind/named.conf.options | tail -n 9
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
listen-on-v6 { any; };
allow-recursion { 127.0.0.1; 10.10.10.0/24; ::1/128; };
};
```
@ -332,28 +424,26 @@ Then edit the `/etc/default/named` files.
nano /etc/default/named
```
If on an IPv4 network, add `-4` to the `OPTIONS` to ensure BIND can query root DNS servers.
OPTIONS="-u bind -4"
If, on the other hand, you are on a network that supports both IPv6 and
IPv4, this will cause unending havoc and chaos, as bind9's behavior
comes as a surprise to other components of the network, and bind9 crashes
on IPv6 information in its config files.
Save and close the file.
By default, BIND enables DNSSEC, which ensures that DNS responses are correct and not tampered with. However, it might not work out of the box due to *trust anchor rollover* and other reasons. To make it work properly, we can rebuild the managed key database with the following commands.
```bash
rndc managed-keys destroy
rndc reconfig
```
Restart `bind9` for the changes to take effect.
```bash
systemctl restart bind9
```
Your ufw firewall will allow vpn clients to access `bind9` because you earlier allowed everything from `wg0` in.
## Start WireGuard on the server
Run the following command on the server to start WireGuard.
@ -437,19 +527,19 @@ chmod 600 /etc/wireguard/ -R
Start WireGuard.
```bash
systemctl start wg-quick@wg-client0.service
```
Enable auto-start at system boot time.
```bash
systemctl enable wg-quick@wg-client0.service
```
Check its status:
```bash
systemctl status wg-quick@wg-client0.service
```
Now go to this website: `http://icanhazip.com/` to check your public IP address. If everything went well, it should display your VPN server's public IP address instead of your client computer's public IP address.
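The same check can be done from a shell on the client, assuming `curl` is installed (the Debian install line earlier in these docs pulls it in):

```terminal_image
:~$ curl -4 https://icanhazip.com
«your VPN server's public IPv4 address»
```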