forked from cheng/wallet

Merge remote-tracking branch 'origin/docs'

This commit is contained in:
Cheng 2022-06-15 17:14:32 +10:00
commit 180b3edf41
No known key found for this signature in database
GPG Key ID: D51301E176B31828
18 changed files with 798 additions and 4440 deletions

.gitattributes vendored

@ -36,47 +36,33 @@ Makefile text eol=lf encoding=utf-8
*.vcxproj.filters text eol=crlf encoding=utf-8 whitespace=trailing-space,space-before-tab,tabwidth=4
*.vcxproj.user text eol=crlf encoding=utf-8 whitespace=trailing-space,space-before-tab,tabwidth=4
#Don't let git screw with pdf files
*.pdf -text
# Force binary files to be binary
*.gif -text -diff
*.jpg -text -diff
*.jpeg -text -diff
*.png -text -diff
*.webp -text -diff
###############################################################################
# Set default behavior for command prompt diff.
#
# This is needed for earlier builds of msysgit that do not have it on by
# default for csharp files.
# Note: This is only used by command line
###############################################################################
#*.cs diff=csharp
# Archives
*.7z filter=lfs diff=lfs merge=lfs -text
*.br filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
###############################################################################
# Set the merge driver for project and solution files
#
# Merging from the command prompt will add diff markers to the files if there
# are conflicts (Merging from VS is not affected by the settings below, in VS
# the diff markers are never inserted). Diff markers may cause the following
# file extensions to fail to load in VS. An alternative would be to treat
# these files as binary and thus will always conflict and require user
# intervention with every merge. To do so, just uncomment the entries below
###############################################################################
#*.sln merge=binary
#*.csproj merge=binary
#*.vbproj merge=binary
#*.vcxproj merge=binary
#*.vcproj merge=binary
#*.dbproj merge=binary
#*.fsproj merge=binary
#*.lsproj merge=binary
#*.wixproj merge=binary
#*.modelproj merge=binary
#*.sqlproj merge=binary
#*.wwaproj merge=binary
# Documents
*.pdf filter=lfs diff=lfs merge=lfs -text
# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.ico filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.pdf filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.psd filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Fonts
*.woff2 filter=lfs diff=lfs merge=lfs -text
# Other
*.exe filter=lfs diff=lfs merge=lfs -text
###############################################################################
# diff behavior for common document formats


@ -138,41 +138,49 @@ build the program and run unit test for the first time, launch the Visual
Studio X64 native tools command prompt in the cloned directory, then:</p>
<pre class="bat"><code>winConfigure.bat</code></pre>
<p>Should the libraries change in a subsequent <code>pull</code> you will need</p>
<pre class="bat"><code>git pull
rem you get a status message indicating libraries have been updated.
git pull --force --recurse-submodules
winConfigure.bat</code></pre>
<p>in order to rebuild the libraries.</p>
<p>The <code>--force</code> is necessary, because <code>winConfigure.bat</code> changes
many of the library files, and therefore git will abort the pull.</p>
<p><code>winConfigure.bat</code> also configures the repository you just created to use
<code>.gitconfig</code> in the repository, causing git to implement GPG signed
commits because <a href="./docs/contributor_code_of_conduct.html#code-will-be-cryptographically-signed" target="_blank" title="Contributor Code of Conduct">cryptographic software is under attack</a> from NSA
entryists and shills, who seek to introduce backdoors.</p>
<p>This may be inconvenient if you do not have <code>gpg</code> installed and set up.</p>
<p><code>.gitconfig</code> adds several git aliases:</p>
<ol type="1">
<li><code>git lg</code> to display the gpg trust information for the last four commits.
For this to be useful you need to import the repository public key
<code>public_key.gpg</code> into gpg, and locally sign that key.</li>
<li><code>git fixws</code> to standardise white space to the project standards</li>
<li><code>git graph</code> to graph the commit tree with signing status</li>
<li><code>git alias</code> to display the git aliases.</li>
</ol>
<div class="sourceCode" id="cb4"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a><span class="co"># To verify that the signature on future pulls is</span></span>
<span id="cb4-2"><a href="#cb4-2" aria-hidden="true" tabindex="-1"></a><span class="co"># unchanged.</span></span>
<span id="cb4-3"><a href="#cb4-3" aria-hidden="true" tabindex="-1"></a><span class="ex">gpg</span> <span class="at">--import</span> public_key.gpg</span>
<span id="cb4-4"><a href="#cb4-4" aria-hidden="true" tabindex="-1"></a><span class="ex">gpg</span> <span class="at">--lsign</span> 096EAE16FB8D62E75D243199BC4482E49673711C</span></code></pre></div>
<p>We ignore the Gpg Web of Trust model and instead use the Zooko
identity model.</p>
<p>We use Gpg signatures to verify that remote repository code
is coming from an unchanging entity, not for Gpg Web of Trust. Web
of Trust is too complicated and too user hostile to be workable or safe.</p>
<p>Never sign (<code>--sign</code>) any Gpg key related to this project; locally sign (<code>--lsign</code>) it.</p>
<p>Never check any Gpg key related to this project against a public
gpg key repository. It should not be there.</p>
<p>Never use any email address on a gpg key related to this project
unless it is only used for project purposes, or a fake email, or the
email of an enemy. We don't want Gpg used to link different email
addresses as owned by the same entity, and we don't want email
addresses used to link people to the project, because those
identities would then come under state and quasi state pressure.</p>
<p>To build the documentation in its intended html form from the markdown
files, execute the bash script file <code>docs/mkdocs.sh</code>, in an environment where
<code>pandoc</code> is available. On Windows, if Git Bash and Pandoc
have been installed, you should be able to run this shell
file in bash by double clicking on it.</p>
<p><a href="./RELEASE_NOTES.html">Pre alpha release</a>, which means it does not yet work even well enough for
it to be apparent what it would do if it did work.</p>
</body>


@ -64,7 +64,7 @@ text-align: left;
<h1 class="title">Release Notes</h1>
</header>
<p>To build and run <a href="./README.html">README</a></p>
<p><a href="docs/index.htm">pre alpha documentation (mostly a wish list)</a> (In order to read these on this local system, you must first execute the document build script <code>mkdocs.sh</code>, with <code>bash</code>, <code>sed</code> and <code>pandoc</code>)</p>
<p>This software is pre alpha and should not yet be released. It does
not work well enough even to show what it would do if it were
working.</p>

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -46,6 +46,31 @@ off. If someone is on his buddy list, people whitelisted the global
consensus name is turned off, unless it is the same, in which case it is
turned on, and if the consensus changes, the end user sees that change.
# Existing cryptographic social software
Maverick says:
[Manyverse]:https://www.manyver.se/
{target="_blank"}
[Scuttlebutt]:https://staltz.com/an-off-grid-social-network.html
{target="_blank"}
If looking for something to try out that begins to have the right shape, see [Manyverse] that uses the [Scuttlebutt] protocol. Jim is fond of Bitmessage and it is quite secure, but it has a big weakness in that it needs to flood every message to all nodes in the world so everyone can try their private key to see if it works for decryption. That won't scale. (Can't avoid passing every message on to all callers even if you know it's yours, as you don't want to let someone snooping see that you absorbed a message which is an information disclosure.)
Instead [Manyverse] and [Scuttlebutt] allow for publishing and reputation for a public key. The world can see what a key (its author really) publishes. Can publish public posts like a blog, signed by the private key for authenticity and verifiability. Also can publish private messages (DMs), visible to the world but whose contents are encrypted. Weakness in private messages is that the recipient public key is visible on the message - good for routing and avoiding testing every message, bad for privacy. Would be better to have a 3rd mode, private for "someone" but you have to test your private key to see if it's for you. That should not be hard to add to [Scuttlebutt] and [Manyverse].
Reputation is similar to what Jim has proposed: You can specify a list of primary keys/people (friends) you want to listen to and watch. You can also in the interface specify how many degrees of separation you want to see outward from your friends - the public messages from their friends and friends' friends. Presumably specifying 6 degrees gets you Kevin Bacon and the rest of the [Manyverse]. You can also block friends' friends so their connections are not visible to you - so if you don't like a friend's friend you at least don't have to listen any more.
Another advantage is that [Manyverse] works in a sometimes-connected universe: Turn off your computer or phone for days, turn back on, catch up to messages. You really don't even have to be on the public Internet, you could sneakernet or local/private-net messages which is nice for, say, messaging in a disaster or SHTF scenario where you have a local wifi network while the main network connections are down. Bitmessage has a decay/lifetime for messages that means you need to be connected at least every 2-3 days.
Biggest weakness is hosting. Your service can be hosted by 3rd parties like any service, and you can host your own. Given the legal landscape as well as susceptibility to censorship via DDoS and hack attacks, you want to have your own server. There are some public servers but sensibly they don't want a rando or glowie from the net jumping on there to drop dank memes. But hosting is nontrivial to carve out your own network bubble that can see the Internet (at least periodically) while being fully patched and DDoS resistant.
Of course missing from this from Jim's long list of plans are DDoS protection, a name service that provides name mapping to key hierarchies for messaging and direct communications, and a coin tie-in. But [Manyverse] at least has the right shape for passing someone a message with a payment inside, while using a distributed network and sometimes connection with store-and-forward to let you avoid censorship-as-network-damage. A sovereign corporation can also message publicly or privately using its own sovereign name and key hierarchy and private ledger-coin.
The net is vast and deep. Maybe we need to start cobbling these pieces together. The era of centralized censorship needs to end. Musk will likely lose either way, and he's only one man against the might of so many paper tigers that happen to be winning the information war.
# Consensus
I have no end of smart ideas about how a blockchain should work, but no
@ -283,12 +308,28 @@ is a much worse idea it is the usual “embrace and extend” evil plot by
Microsoft against open source software, considerably less competently
executed than in the past.
## The standard gnu installer from source
```bash
./configure && make && make install
```
## The standard cmake installer from source
```bash
cmake .. && cmake --build . && cmake --install .
```
To support this on linux, CMakeLists.txt needs to contain
```default
project (Test)
add_executable(test main.cpp)
install(TARGETS test)
```
On linux, `install(TARGETS test)` is equivalent to `install(TARGETS test DESTINATION bin)`
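As a sketch of how the destination can be made explicit, the stock `GNUInstallDirs` module supplies the conventional directory variables (the project and target names here are the same hypothetical ones as above):

```default
cmake_minimum_required(VERSION 3.14)
project (Test)
add_executable(test main.cpp)
# GNUInstallDirs defines CMAKE_INSTALL_BINDIR (bin), CMAKE_INSTALL_LIBDIR, etc.
include(GNUInstallDirs)
install(TARGETS test RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
```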
## The standard Linux installer
`*.deb`
@ -314,50 +355,276 @@ But other systems like a `*.rpm` package, which is built by `git-buildpackage-rp
But desktop integration is kind of random.
Under Mate and KDE Plasma, run-on-login is done by placing your
`*.desktop` file in `~/.config/autostart`
Under Mate and KDE Plasma, bitcoin implements run-on-login by generating a
`bitcoin.desktop` file and writing it into `~/.config/autostart`
It does not, however, place the `bitcoin.desktop` file in any of the
expected other places. Should be in `/usr/share/applications`
The wasabi desktop file `/usr/share/applications/wassabee.desktop` is
```config
[Desktop Entry]
Type=Application
Name=Wasabi Wallet
StartupWMClass=Wasabi Wallet
GenericName=Bitcoin Wallet
Comment=Privacy focused Bitcoin wallet.
Icon=wassabee
Terminal=false
Exec=wassabee
Categories=Office;Finance;
Keywords=bitcoin;wallet;crypto;blockchain;wasabi;privacy;anon;awesome;qwe;asd;
```
To be in the menus for all users, should be in
`/usr/share/applications` with its `Categories=` entry set appropriately. Wasabi appears in the category `Office` on mate.
But what about the menu for just one user?
The documentation says `~/.local/share/applications`. Which I
do not entirely trust.
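The install locations discussed above can be sketched in shell (the entry itself is a made-up minimal `wallet.desktop`, not the project's real one):

```shell
# Minimal hypothetical desktop entry.
entry='[Desktop Entry]
Type=Application
Name=Wallet
Exec=wallet
Categories=Office;Finance;'
# Run on login (Mate, KDE Plasma):
mkdir -p ~/.config/autostart
printf '%s\n' "$entry" > ~/.config/autostart/wallet.desktop
# Menu entry for just this user:
mkdir -p ~/.local/share/applications
printf '%s\n' "$entry" > ~/.local/share/applications/wallet.desktop
# Menu entry for all users (needs root):
#   install -m 644 wallet.desktop /usr/share/applications/
```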
### autotools
autotools is linux standard, is said to have a straightforward pipeline
into making `*.deb` files, and everyone uses it, including most of your
libraries, but I hear it cursed as a complex mess, and no one wants to
get into it. They find the far from easy `cmake` easier. And `cmake`
runs on all systems, while autotools only runs on linux.
I believe `cmake` has a straightforward pipeline into `*.deb` files, but if it has, the autotools pipeline is far more common and widely used.
## The standard windows installer
Requires an `*.msi` file. If the install is something other than an msi
file, it is broken.
The `*.msi` file can be wrapped in an executable, but there is no sane
reason for this and you are likely to wind up with installs that consist of an
executable that wraps an msi that wraps an executable that wraps an msi.
[Help Desk Geek reviews tools for creating `*.msi`]: https://helpdeskgeek.com/free-tools-review/4-tools-to-create-windows-installer-packages/
{target="_blank"}
To build an `*.msi`, you need to download the Wix toolset, which is referenced in the relevant Visual Studio extensions, but cannot be downloaded from within the Visual Studio extension manager.
[Help Desk Geek reviews tools for creating `*.msi`]
1. First and foremost, Nullsoft Scriptable Install System (NSIS). Small, simple, and powerful.
There is a [good web page](https://stackoverflow.com/questions/1042566/how-can-i-create-an-msi-setup) on WIX resources
1. Last and least Wix and Wax: it requires the biggest learning
curve. You can create some very complex installers with it, but you'll be coding quite a bit and using a command line often.\
And word on the internet is that complex installs created with
Wix and Wax create endless headaches and even if you get it
working in your unit test environment, it then breaks your
customer's machine irreversibly and no one can figure out why.
There is an automatic wix setup: Visual Studio -> Tools -> Extensions&updates -> search Visual Studio Installer Projects,
which is the Microsoft utility for building wix files. It creates a quite adequate wix setup by gui, in the spirit of the skeleton windows gui app.
## [NSIS] Nullsoft Scriptable Install System
[NSIS]:https://nsis.sourceforge.io/Download
{target="_blank"}
[NSIS Open Source repository]:https://sourceforge.net/projects/nsis/files/NSIS%203/3.08/RELEASE.html/view
{target="_blank"}
[NSIS Open Source repository]
NSIS can create msi files for windows, and is open source.
People who know what they are doing seem to use this open
source install system, and they write nice installs with it.
When I most recently checked, the most recent release was thirty
five days previous, which is moderately impressive, given that their
release process is somewhat painful and arduous.
Unlike `Wix`, I hear no whining that any attempt to use its power will
leave you buggered and hopeless.
### Wix
`Wix` is suffering from bitrot. The wix toolset relies on a framework
that is no longer default installed on windows, and has not been for
a very very long time.
But no end of people say that sucky though it is, it is the standard
way to create install files.
[Hello World for Wix]:https://stackoverflow.com/questions/47970743/wix-installer-msi-not-installing-the-winform-app-created-with-visual-studio-2017/47972615#47972615
{target="_blank"}
[Hello World for Wix] is startlingly nontrivial. It does not default create
a minimal useful install for you. So even if you get it working, it still
looks like it is broken.
[Common Design Flaws]:https://stackoverflow.com/questions/45840086/how-do-i-avoid-common-design-flaws-in-my-wix-msi-deployment-solution
{target="_blank"}
[Common Design Flaws] do not sound entirely like design flaws. It
sounds like it is easy to create `*.msi` files whose behaviour is
complex, unpredictable, unexpected, and apt to vary according to
circumstances on the target machine in incomprehensible and
unexpected ways. "Works great when we test it. Passes unit test."
[Some practical Wix advice]:https://stackoverflow.com/questions/6060281/windows-installer-and-the-creation-of-wix/12101548#12101548
{target="_blank"}
[Some practical Wix advice] advises that trying to do anything
complicated on Wix is hell on wheels, and will lead to unending
broken installs out in the field that fuck over the target systems.
While Wix in theory permits arbitrarily complex and powerful
installs, in practice, no one succeeds.
"certain things are still coded on a case by case basis. These ad hoc
solutions are implemented as 'custom actions' in Windows Installer,"
And custom actions that involve writing anything other than file
properties, die horribly.
Attempts to install Wix on Visual Studio repeatedly failed, and
sometimes trashed my Visual Studio installation.
After irreversibly destroying Visual Studio far too many times,
attempted to install on a fresh clean virtual machine.
Clean install of Visual Studio on a vm worked, loaded my project,
compiled and built it almost as fast as my real machine. The
program it built ran fine and passed unit test. And then Visual
Studio crashed on close. Investigating the hung Visual Studio, it had
freed up almost all memory, and then just stopped running. Maybe
the problem is not Wix bitrot, but Visual Studio bitrot, since I did
not even get as far as trying to install Wix.
If the Wix installer is horribly broken, is it not likely that any install
created by Wix will be horribly broken?
The Wix Toolset requires the net framework 3.5 in order to install it
and use it, which is the cobblers children going barefoot. You want
a banana, and have to install a banana tree, a monkey, and a jungle.
Net Framework 3.5.1 can be installed from Control
Panel / Programs / Programs and Features.
You have to install the extension after the framework in that order,
or else everything breaks. Or maybe everything just breaks anyway
far too often and people develop superstitions about how to avoid
such cases.
## Choco
Choco, Chocolatey, is the Windows Package manager system. It does not use `*.msi` as its packaging system. A chocolatey package consists of a `*.nuget`, `chocolateyInstall.ps1`, `chocolateyUninstall.ps1`, and `chocolateyBeforeModify.ps1` (the latter script is run before upgrade or uninstall, and is to reverse stuff done by its accompanying
`chocolateyInstall.ps1`)
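The layout described above can be sketched as a directory skeleton (the package name `mypkg` is made up; the metadata file is conventionally a `.nuspec`, which `choco pack` turns into the nuget-format package):

```shell
# Skeleton of a chocolatey package, laid out in a scratch directory.
mkdir -p mypkg/tools
touch mypkg/mypkg.nuspec                      # package metadata
touch mypkg/tools/chocolateyInstall.ps1       # runs on install
touch mypkg/tools/chocolateyUninstall.ps1     # runs on uninstall
touch mypkg/tools/chocolateyBeforeModify.ps1  # runs before upgrade/uninstall
ls mypkg/tools
```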
Interaction with stuff installed by `*.msi` is apt to be bad.
The community distribution redirects requests to particular servers,
which have to be maintained by particular people - which requires
an 8GB ram, 50GB disk Windows server. I could have `nginx` in the
cloud reverse proxying that to a physically local server over
wireguard, which solves the certificate problem, or I could use a
commercial service, which is cheap, but leaks identity all over the
place and is likely to be subject to hostile interdiction and state sponsored identity theft.
Getting on the `choco` list is largely automatic. Your package has to
install on their standard image, which is a deliberately obsolete
2012 windows server - and your install script may have to install
windows update packages. Your package is unlikely to successfully
install until you have first tested it on an imitation of their test
environment, which is a great deal of work and skill to set up.
Human curation exists, but is normally routine and superficial.
Installs, has license, done.
[whole lot more checks]:https://docs.chocolatey.org/en-us/information/security#chocolatey.org-packages
{target="_blank"}
[whole lot more rules]:https://docs.chocolatey.org/en-us/community-repository/moderation/package-validator/rules/
{target="_blank"}
Well, actually there are a [whole lot more checks], which enforce a [whole lot more rules], sixty eight rules and growing, but they are robotically checked and the outcome reported to a human. If the robot OKs it, it normally goes through automatically into the community distribution.
A Choco package is immutable. It can be superseded, but cannot
change. The program could check for a Zooko signature of its package file against a list, and look for indications of broad
approval, thus solving the identity problem and eating my own dogfood.
Choco packages would be very handy to automatically install my build environment.
### Cmake
`cmake` has a pipeline for building choco files.
[wxWidgets has instructions for building with Cmake]:https://docs.wxwidgets.org/trunk/overview_cmake.html
{target="_blank"}
[wxWidgets has instructions for building with Cmake]. My other
libraries do not, and require their own idiosyncratic build scripts,
and I doubt that I can do what the authors were disinclined to do.
Presumably I could fix this with `add_custom_target` and
`add_custom_command`, where the custom command is a bash script
that just invokes the author's scripts, but I just do not understand
the documentation for these commands, which documentation
presupposes knowledge of the incomprehensible domain specific language.
`Cmake` runs on both Windows and Linux, and is a replacement for autotools, that runs only on Linux.
Going with `cmake` means you have a defined standard cross platform development environment, `vscode` which is wholly open source, and a defined standard cross platform packaging system, or rather four somewhat equivalent standard packaging systems, two for each platform.
Instead of
```bash
./configure
make
make install
```
We have
```bat
cmake ..
cmake --build .
cmake --install .
```
`cmake --install` installs from source, and has a pipeline (`cpack`)
to generate `*.msi` through [NSIS]. Notice it does *not* have a pipeline
through Wix and Wax. It also has a pipeline to Choco, and, on linux,
to `*.deb` and `*.rpm`.
No uninstall, which has to be hand written for your distribution.
`cmake` has the huge advantage that with certain compilers, far from
all of them, it integrates with the vscode ide, including a graphical
debugger that runs on both windows and linux. Which otherwise
you really do not have on linux.
It thus provides maximum cross platform portability. On the other
hand, all of my libraries rely on `./configure && make && make install`
on linux, and on visual studio on Windows. In my previous
encounter with `cmake`, I found mighty good reason for doing it that
way. The domain specific language of `CMakeLists.txt` is arcane,
unreadable, unwriteable, and subject to frequent, arbitrary,
inconsistent, and illogical ad hoc change. It inexplicably does
remarkably complicated things without obvious reason or purpose,
which strange complexity usually does things you do not want.
Glancing through their development blog, I keep seeing major
breaking changes being corrected by further major breaking
changes. Internals are undocumented, subject to surprising change,
and likely to change further, and you have to keep editing them,
without any clearly knowable boundary between what is internal
stuff that you should not need to look at and edit, and what is the
external language that you are supposed to use to define what
`cmake` is supposed to accomplish. It is not obvious how to tell `cmake` to do a certain thing, and looking at a `CMakeLists.txt` file, not at all obvious what `cmake` is going to do. And when the next
version comes out, it is probably going to do something different.
But allegedly the domain specific language of `./configure` has
grown a multitude of idiosyncrasies, making it even worse.
`ccmake` is a graphical tool that will do some editing of
`CMakeLists.txt` with respect for the mysterious undocumented
arcane syntax of the nowhere explained or documented domain
specific language.
# Library Package managers
Lately, however, library package managers have appeared: Conan and [vcPkg](https://blog.kitware.com/vcpkg-a-tool-to-build-open-source-libraries-on-windows/). Conan lacks wxWidgets, and has far fewer packages than [vcpkg](https://libraries.io/github/Microsoft/vcpkg).
I have attempted to use package managers, and not found them very useful. It
is easier to deal with each package as its own unique special case. The


@ -4,15 +4,249 @@ title: Review of Cryptographic libraries
# Noise Protocol Framework
[Noise](https://noiseprotocol.org/) is an architecture and a design document,
not source code. Example source code exists for it, though the
[C example](https://github.com/rweather/noise-c) uses a build
architecture that may not fit with what you want, and uses protobuf,
while you want to use Cap'n Proto or roll your own serialization. It also is designed to use several
different implementations of the core crypto protocols, one of them being
libsodium, while you want a pure libsodium only version. It might be easier
to implement your own version, using the existing version as a guide.
Probably have to walk through the existing version.
The Noise Protocol Framework matters because it is used by Wireguard to do something related to what we intend to accomplish.
Noise is an already existent messaging protocol, implemented in
Wireguard as a UDP only protocol.
My fundamental objective is to secure the social net, particularly the
social net where the money is, the value of most corporations being
the network of customer relationships, employee relationships,
supplier relationships, and employee roles.
This requires that instead of packets being routed to network
addresses identified by certificate authority names and the domain
name system, they are routed to public keys that reflect a private
key derived from the master secret of a wallet.
## Wireguard Noise
Wireguard maps network addresses to public keys, and then to the
possessor of the secret key corresponding to that public key. We
need a system that maps names to public keys, and then packets to
the possessor of the secret key. So that you can connect to service
on some port of some computer, which you locate by its public key.
Existing software looks up a name, finds a thirty two bit or one
hundred twenty eight bit value, and then connects. We need that name to
map through software that we control to a durable and attested
public key, which is then, for random strangers not listed in the conf
file, locally, arbitrarily and temporarily mapped into Wireguard
subnets, which mapping is actually a local and temporary handle to
that public key, which is then mapped back to the public key, which
is then mapped to the network address of the actual owner of that
secret key by software that we control. So that software that we do
not control thinks it is using network addresses, but is actually using
local handles to public keys which are then mapped to network
address supported by our virtual network card, which sends them off,
encapsulated in Wireguard style packets identified by the public
key of their destination to a host in the cloud identified by its actual
network address, which then routes them by public key, either to a
particular local port on that host itself by public key, or to another
host by public key which then routes them eventually by public key
to a particular port.
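The name to key to handle to address pipeline described above can be sketched as a few table lookups. This is a minimal illustration, not Wireguard code; all names, keys, addresses, and the `10.77.0.0` handle subnet are hypothetical placeholders.

```python
# Sketch of the pipeline: name -> public key -> local handle -> real address.
# Every name, key, and address here is a hypothetical placeholder.

name_to_key = {"ann.example": "PUBKEY_ANN"}              # our name system, not DNS
key_to_address = {"PUBKEY_ANN": ("203.0.113.7", 51820)}  # learned, may change

handle_to_key = {}   # local, temporary: fake subnet address -> public key
next_handle = [1]

def handle_for(pubkey):
    """Give legacy software a local 'network address' that is really a key handle."""
    for h, k in handle_to_key.items():
        if k == pubkey:
            return h
    h = f"10.77.0.{next_handle[0]}"   # arbitrary local Wireguard-style subnet
    next_handle[0] += 1
    handle_to_key[h] = pubkey
    return h

def resolve(name):
    """What legacy software sees when it 'looks up a name'."""
    return handle_for(name_to_key[name])

def route(handle):
    """What our network layer does with a packet sent to that handle."""
    key = handle_to_key[handle]          # back to the public key
    return key, key_to_address[key]      # then to the current real address
```

Legacy software only ever sees the handle; the public key and the real address stay inside software we control.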
For random strangers on the internet, we have to in effect NAT
them into our Wireguard subnets, and we don't want them able to
connect to arbitrary ports, so we in effect give them NAT type port forwarding.
It will frequently be convenient to have only one port forwarded
address per public key, in which case our Wireguard fork needs to
accept several public keys, one for each service.
The legacy software process running on the client initiates a connection to a name and a port, from a random client port. The legacy server process receives it on the whitelisted port, ignoring the requested port if only one incoming port is whitelisted for this key, or on the requested whitelisted port if more than one port is whitelisted. It replies to the original client port, which was encapsulated, with the port being replied to encapsulated in the message secured and identified by public key, and the receiving networking software on the client has temporarily whitelisted that client port for messages coming from that server key. Such "temporary" whitelisting should last for a very long time, since we might have quiet but very long lived connections. We do not want random people on the internet messaging us, but we do want people that we have messaged to be able to message, at random times, the service that messaged them.
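The reply-port whitelisting described above can be sketched as a table keyed on (server key, client port) with a very long expiry. The expiry constant and names are illustrative assumptions, not from any implementation.

```python
# Sketch of reply-port whitelisting: when a local client port sends to a
# server key, that (server key, client port) pair is whitelisted for replies,
# with a very long expiry. Times are in seconds; the constant is arbitrary.

LONG_LIVED = 90 * 24 * 3600   # assumed: "temporary" entries last about 90 days

class ReplyWhitelist:
    def __init__(self):
        self.entries = {}   # (server_pubkey, client_port) -> expiry time

    def outbound(self, server_pubkey, client_port, now):
        # Sending to a server key whitelists replies from that key.
        self.entries[(server_pubkey, client_port)] = now + LONG_LIVED

    def inbound_allowed(self, server_pubkey, client_port, now):
        expiry = self.entries.get((server_pubkey, client_port))
        return expiry is not None and now <= expiry

wl = ReplyWhitelist()
wl.outbound("SERVER_KEY", 49152, now=0)
```

Random strangers who were never messaged have no entry, so their packets are dropped.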
One confusing problem is that stable ports are used to identify a
particular service, and random ports a particular connection, and we
have to disentangle this relationship and distinguish connection
identifiers from service identifiers. We would like public keys to identify services rather than hosts, but sometimes they will not. Whitelist and history help us disentangle them when connecting to
legacy software, and, within the protocol, they need to be
distinguished even though they will be lumped back together when
talking to legacy software. Internally, we need to distinguish
between connections and services. A service is not a connection.
Note that the new Google https (QUIC) allows many short lived streams, hence many connections, identified by a single server service port and a single random client port, which ordinarily would identify a
single connection. A connection corresponds to a single concurrent
process within client software, and single concurrent process within
server software, and many messages may pass back and forth between
these two processes and are handled sequentially by those
processes, who have retrospective agreement about their total shared state.
So we have four very different kinds of things, which old type ports mangle together:
1. a service, which is always available as long as the host is up
and the internet is working, which might have no activity for
a very long time, or might have thousands of simultaneous
connections to computers from all over the internet
1. a connection, which might live while inactive for a very long time,
or might have many concurrent streams active simultaneously
1. a stream which has a single concurrent process attached to it
at both ends, and typically lives only to send a message and
receive a reply. A stream may pass many messages back and
forth, which both ends process sequentially. If a stream is
inactive for longer than a quite short period, it is likely to be
ungracefully terminated. Normally, it does something, and
then ends gracefully, and the next stream and the next
concurrent process starts when there is something to do. While a stream lives, both ends maintain state, albeit in a simple request and reply, that state lives only briefly.
1. A message.
Representing all this as a single kind of port, with packets going between ports of a single kind, inherently leads to the mess that we now have. They should have been thought of as different classes derived from a common base class.
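The four kinds the list above distinguishes can be sketched as exactly the class hierarchy the text suggests. This is an illustration of the conceptual distinction, not protocol code; all names are made up.

```python
# The four kinds of thing that old type ports mangle together, sketched as
# derived classes from a common base, as the text proposes.

class Endpoint:                     # common base: anything messages flow through
    pass

class Service(Endpoint):            # long lived, may be idle or very busy
    def __init__(self, pubkey):
        self.pubkey = pubkey        # a service is ideally identified by a public key
        self.connections = []

class Connection(Endpoint):         # may live while inactive for a very long time
    def __init__(self, service):
        self.service = service
        self.streams = []
        service.connections.append(self)

class Stream(Endpoint):             # short lived, one concurrent process each end
    def __init__(self, connection):
        self.connection = connection
        self.messages = []
        connection.streams.append(self)

class Message:                      # the unit actually sent; not an endpoint
    def __init__(self, stream, payload):
        self.stream = stream
        self.payload = payload
        stream.messages.append(self)
```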
[Endpoint-Independent Mapping]:https://datatracker.ietf.org/doc/html/rfc4787
{target="_blank"}
Existing software is designed to work with the explicit white listing
provided by port forwarding through NATs with [Endpoint-Independent Mapping],
and the implicit (but inconveniently
transient) white listing provided by NAT translation, so we make it
look like that to legacy software. To legacy client software, it is as if it were sending its packets through a NAT, and to legacy server software, as if it were sending its packets through a NAT with port forwarding. Albeit
we make the mapping extremely long lived, since we can rely on
stable identities and have no shortage of them. And we also want
the port mappings (actually internal port whitelistings, they would
be mappings if this was actual NAT) associated with each such
mapping to be extremely stable and long lived.
[Endpoint-Independent Mapping] means that the NAT reuses the
address and port mapping for subsequent packets sent from the
same internal port (X:x) to any external IP address and port (Y:y).
X1':x1' equals X2':x2' for all values of Y2:y2, which our architecture
inherently tends to force unless we do something excessively clever,
since we should not muck with ports randomly chosen. For us, [Endpoint-Independent Mapping] means that the mapping between
external public keys of random strangers not listed in our
configuration files, and the internal ranges of the Wireguard fork
interface is stable, very long lived and *independent of port numbers*.
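The Endpoint-Independent Mapping rule quoted above can be shown in a few lines: the external mapping for an internal endpoint X:x depends only on X:x, never on the destination Y:y. This is a toy model of the RFC 4787 behaviour, not a NAT implementation; the addresses and port range are arbitrary.

```python
# Toy model of Endpoint-Independent Mapping: the mapping for internal (X, x)
# is reused for every destination (Y, y), so X1':x1' equals X2':x2'.

class EIMNat:
    def __init__(self, external_ip):
        self.external_ip = external_ip
        self.mappings = {}        # (X, x) -> external port
        self.next_port = 40000    # arbitrary starting port

    def map(self, internal, destination):
        # destination is deliberately ignored: that is what makes it EIM
        if internal not in self.mappings:
            self.mappings[internal] = self.next_port
            self.next_port += 1
        return (self.external_ip, self.mappings[internal])

nat = EIMNat("198.51.100.1")
a = nat.map(("192.168.0.2", 5000), ("203.0.113.7", 443))   # to Y1:y1
b = nat.map(("192.168.0.2", 5000), ("203.0.113.99", 80))   # to Y2:y2
```

The same property, with public keys in place of external ports, is what keeps the mapping between stranger keys and the Wireguard fork's internal ranges stable.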
## Noise architecture
[Noise](https://noiseprotocol.org/) is an architecture and a design document, not source code.
Example source code exists for it, though the [C example](https://github.com/rweather/noise-c) uses a build architecture that may not fit with what I want, and uses protobuf, enemy software. It
also is designed to use several different implementations of the
core crypto protocols, one of them being libsodium, while I want a
pure libsodium only version. It might be easier to implement my
own version, using the existing versions as a guide, in particular and
especially Wireguard's version, since it is in wide use. Probably have
to walk through the existing version.
Noise is built around the ingenious central concept of using as the
nonce the hash of past shared and acknowledged data, which is
AEAD secured but sent in the clear. This saves significant space on very short messages, since you have to secure shared state
anyway. It regularly and routinely renegotiates keys, thus has no $2^{64}$
limit on messages. A 128 bit hash sample suffices for the nonce,
since the nonce of the next message will reflect the 256 bit hash of
the previous message, hence contriving a hash that has the same
nonce does the adversary no good. It is merely a denial of service.
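The idea described above, deriving each nonce from the hash of the shared transcript so that it never travels on the wire, can be sketched as follows. This is an illustration of the principle only, not the actual Noise construction, and the hash choice and initial state are assumptions.

```python
# Sketch: the nonce is a 128 bit sample of the 256 bit running hash of all
# past shared, acknowledged data, so it never needs to be transmitted.
import hashlib

def advance(transcript_hash, message):
    """Absorb an acknowledged message into the running transcript hash."""
    return hashlib.sha256(transcript_hash + message).digest()

def nonce_from(transcript_hash):
    """A 128 bit sample of the 256 bit hash suffices as the nonce."""
    return transcript_hash[:16]

h = hashlib.sha256(b"protocol-name").digest()   # arbitrary initial state
n1 = nonce_from(h)
h = advance(h, b"hello")                        # both ends absorb the same data
n2 = nonce_from(h)
```

Both ends compute the same sequence of hashes from the same transcript, so the nonces agree without ever being sent.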
I initially thought that this meant it had to be built on top of a
reliable messaging protocol, and it tends to be described as if it did,
but Wireguard uses a bunch of designs and libraries in its protocol, with Noise pulling most of them together, and I need to copy, rather than re-invent, their work.
On the face of it, Wireguard does not help with what I want to do.
But I am discovering a whole lot of low level stuff related to
maintaining a connection, and Wireguard incorporates that low level stuff.
Noise goes underneath, and should be integrated with, reliable
messaging. It has a built in message limit of $2^{16}$ bytes. It is not
just an algorithm, but very specific code.
Noise is messaging code. Here now, and present in Wireguard,
as a UDP only cryptographic protocol. I need to implement my
messaging system as a fork of Wireguard.
Wireguard uses base64, and my bright idea of slash6 gets in the way. I am going to use base52 for any purpose for which my bright idea would have been useful, so it should be rewritten to base64 regardless.
Using the hash of shared state goes together with immutable
append only Merkle-patricia trees like ham and eggs, though you
don't need to keep the potentially enormous data structure around.
When a connection has no activity for a little while, you can discard
everything except a very small amount of data, primarily the keys,
the hash, the block number, the MTU, and the expected timings.
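The minimal state the text says survives idleness can be written down as a record. The field names are illustrative assumptions, not from any actual implementation.

```python
# Sketch of the small per-connection record that survives idle periods:
# keys, transcript hash, block number, MTU, and timing estimates.
from dataclasses import dataclass

@dataclass
class IdleConnectionRecord:
    send_key: bytes
    receive_key: bytes
    transcript_hash: bytes   # running hash of all past shared, acknowledged data
    block_number: int        # position in the append-only transcript
    mtu: int
    rtt_estimate: float      # expected timings, for sane resumption

rec = IdleConnectionRecord(b"k1", b"k2", b"\x00" * 32, 42, 1280, 0.08)
```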
The Noise system for hashing all past data is complicated and ad hoc. For greater generality and a simpler, more systematic fundamental structure with fewer arbitrary decisions about particular types of data, it needs to be rewritten to hash like an immutable append only Merkle-Patricia tree. That instantly and totally breaks interoperability with existing Wireguard, so to talk to the original Wireguard, our fork has to know what it is talking to.
Presumably Wireguard has a protocol negotiation mechanism that you can hook. If it does not, well, it breaks; the nature of the thing that a public key addresses has to be flagged anyway, since I am using Ristretto public keys, and existing Wireguard keys are not. Also, we have to move Wireguard from NaCl encryption to libsodium encryption, because NaCl is an attack vector.
Wireguard messages are distinguishable on the wire, which is odd, because Noise messages are inherently white noise, and destination keys are known in advance. Looks like enemy action by the bad guys at NaCl.
I think we want a fork that, if a key is a legacy key type, talks legacy Wireguard, and if it is a new type (probably coming from our domain name system, though it can also be placed in `.conf` files) talks with packets indistinguishable from white noise to an adversary that does not know the key.
Old type session initiation messages are distinguishable from random noise. For new type session initiation messages to a server with an old type id and a new type id on the same port, make sure that the new type session initiation packet does not match the old format, which may require both ends to try a variety of guesses when their expectations are violated. That opens a DOS attack, but that is OK: you just shut down that connection. DOS resistance is going to require messages readily distinguishable from random noise, but we do not send those messages unless facing workloads suggestive of DOS, that is, unless under heavy session initiation load.
Ristretto keys are uncommon, and are recognizable as Ristretto keys, but not if they are sent in unreduced form.
Build on top of a fork of Wireguard a messaging system that delivers messages not to network addresses, but to Zooko names (which might well map to a particular port on a particular host, but whose network address and port may change without people noticing or caring).
Noise is a messaging protocol. Wireguard is a messaging protocol
built on top of it that relies on public keys for routing messages.
Most of the work is done. It is not what I want built, but it has an enormous amount in common with it. I plan a very different architecture, but one that is a re-arrangement of existing, already implemented structures. I am going to want Kademlia and a blockchain for the routing, rather than a pile of local text files mapping IPs to nameless public keys. Wireguard is built on `.conf` text files the way the domain name system was built on `host` files. It almost does the job; it needs a Kademlia based domain name system on top of and integrated with it.
# [Libsodium](./building_and_using_libraries.html#instructions-for-libsodium)
Amber library packages all these in what is allegedly easy to incorporate form,
The fastest library I can find for pairing based crypto is [herumi](https://github.com/herumi/mcl). 
How does this compare to [Curve25519](https://github.com/bernedogit/amber)?
There is a good discussion of the performance tradeoff for crypto and IOT in [this Internet Draft](https://datatracker.ietf.org/doc/draft-ietf-lwig-crypto-sensors/), currently in IETF last call: 
that document, nor any evaluations of the time required for pairing based
cryptography in that document. Relic-Toolkit is not Herumi and is supposedly
markedly slower than Herumi. 
Looks like I will have to compile the libraries myself and run tests on them.


scripts that can interact with the recipient within a sandbox. Not wanting
to repeat the mistakes of the internet, we will want the same bot language
generating responses, and interacting with the recipient.
There is a [list](https://github.com/dbohdan/embedded-scripting-languages){target="_blank"} of embeddable scripting languages.
Lua and python are readily embeddable, but [the language shootout](https://benchmarksgame-team.pages.debian.net/benchmarksgame/) tells us
they are terribly slow.
[Embedding LuaJIT in 30 minutes]:https://en.blog.nic.cz/2015/08/12/embedding-luajit-in-30-minutes-or-so/
{target="_blank"}
Lua, however, has `LuaJIT`, which is about ten times faster than `Lua`, which
makes it only about four or five times slower than JavaScript under
`node.js`. It is highly portable, though I get the feeling that porting it to
windows is going to be a pain, but then it is never going to be expected to
call the windows file and gui operations.
Other people say it is faster than JavaScript, but avoid comparison to `node.js`. But it is allegedly faster than any JavaScript engine I am likely to be able to embed.
[Embedding LuaJIT in 30 minutes] gives as its example code a DNS
server written in embedded Lua. Note that it is very easy to embed
LuaJIT in such a way that the operations run amazingly slowly.
The web application firewall used by Cloudflare was rewritten from
37000 lines of C to 2000 lines of Lua, and it handles requests in
2ms. But it needed profiling and the like to find the slowdown
gotchas: everyone who uses LuaJIT winds up spending some time and
effort to avoid horribly and pointlessly slow operations, and they need
to know what they are doing.
He recommends embedded LuaJIT as easier to embed and
interface to C than straight Lua. Well, it would be easier for
someone who has a good understanding of what is happening
under the hood.
He also addresses sandboxing. It seems that LuaJIT sandboxes just
fine (unlike JavaScript).
Checking his links, it seems that embedded LuaJIT is widely used in
many important applications that need to be fast and have a great
deal of money behind them.
LuaJIT is not included in the computer benchmarks shootout. The
LuaJIT people say the shootout maintainer is being hostile and difficult;
the shootout maintainer is apt to change the subject and
imply the LuaJIT people are not getting off their asses, but I can see
they have done a decent amount of work to get their stuff included.
LuaJIT is a significantly different dialect from Lua, and tends to adopt
rather different idioms as a result of profiling driven optimization.
It looks like common Lua idioms are apt to bite in LuaJIT for
reasons that are not easy to intuit. Thus there is in effect some hand
precompiling for LuaJIT.
Anecdotal data on LuaJIT speed:
* lua-JIT is faster than Java-with-JIT (the sun Java), lua-JIT is faster than V8 (Javascript-with-JIT), etc, ...
* As Justin Cormack notes in the comments to the answer below, it is crucial to remark that JITed calls to native C functions (rather than lua_CFunctions) are extremely fast (zero overhead) when using the LuaJIT ffi. That's a double win: you don't have to write bindings anymore, and you get on-the-metal performance. Using LJ to call into a C library can yield spooky-fast performance even when doing heavy work on the Lua side.
* I am personally surprised at luajit's performance. We use it in the network space, and its performance is outstanding. I had used it in the past is a different manner, and its performance was 'ok'. Implementation architecture really is important with this framework. You can get C perf, with minimal effort. There are limitations, but we have worked around all of them now. Highly recommended.
Lisp is sort of embeddable, startlingly fast, and is enormously capable, but
it is huge, and not all that portable.


do
# echo " $base.html up to date"
fi
done
cd ..
cd names
templates="../pandoc_templates"
options=$osoptions"--toc -N --toc-depth=5 --wrap=preserve --metadata=lang:en --include-in-header=$templates/icondotdot.pandoc --include-before-body=$templates/beforedotdot.pandoc --css=$templates/style.css --include-after-body=$templates/after.pandoc -o"
for f in *.md
do
len=${#f}
base=${f:0:($len-3)}
if [ $f -nt $base.html ];
then
katex=""
for i in 1 2 3 4
do
read line
if [[ $line =~ katex ]];
then
katex=" --katex=./"
fi
done <$f
pandoc $katex $options $base.html $base.md
echo "$base.html from $f"
#else
# echo " $base.html up to date"
fi
done
cd ..
cd rootDocs
templates="../pandoc_templates"
for f in *.md
do


---
title: Name System
...
We intend to establish a system of globally unique wallet names, to resolve
the security hole that is the domain name system, though not all wallets will
have globally unique names, and many wallets will have many names.
Associated with each globally unique name is a set of name servers. When one's
wallet starts up, then if the wallet has a globally unique name, it logs in
to its name server, which will henceforth direct people to that wallet. If
the wallet has a network accessible TCP and/or UDP address, it directs people
to that address (one port only; protocol negotiation will occur once the
connection is established, rather than protocols being defined by the port
number). If not, it will direct them to a UDT4 rendezvous server, probably itself.
We probably need to support [uTP for the background download of bulk data].
This also supports rendezvous routing, though perhaps in a different and
incompatible way, excessively married to the bittorrent protocol. We might
find it easier to construct our own throttling mechanism in QUIC,
accumulating the round trip time and square of the round trip time excluding
outliers, to form a short term and long term average and variance of the
round trip time, and throttling lower priority bulk downloads and big
downloads when the short term average rises above the long term average by
more than the long term variance. The long term data is zeroed when the IP
address of the default gateway(router) is acquired, and is timed out over a
few days. It is also ceilinged at a couple of seconds.
[uTP for the background download of bulk data]: https://github.com/bittorrent/libutp
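The throttling rule described above can be sketched with exponential moving averages. The smoothing constants are arbitrary assumptions, and the comparison uses the square root of the variance so the units match; the text's own rule says "variance".

```python
# Sketch of the proposed throttle: keep short term and long term averages of
# round trip time (and of its square, to get a variance estimate), and
# throttle bulk downloads when the short term average rises above the long
# term average by more than the long term deviation. Constants are arbitrary.

class RttThrottle:
    def __init__(self):
        self.short_avg = None
        self.long_avg = None
        self.long_sq = None     # long term average of rtt squared

    def sample(self, rtt):
        if self.short_avg is None:
            self.short_avg = self.long_avg = rtt
            self.long_sq = rtt * rtt
            return
        self.short_avg += 0.3 * (rtt - self.short_avg)    # fast moving average
        self.long_avg += 0.01 * (rtt - self.long_avg)     # slow moving average
        self.long_sq += 0.01 * (rtt * rtt - self.long_sq)

    def throttle_bulk(self):
        variance = max(self.long_sq - self.long_avg ** 2, 0.0)
        return self.short_avg > self.long_avg + variance ** 0.5

t = RttThrottle()
for _ in range(100):
    t.sample(0.05)            # steady 50 ms baseline: no throttling
calm = t.throttle_bulk()
for _ in range(10):
    t.sample(0.5)             # sudden congestion: short term average shoots up
busy = t.throttle_bulk()
```

Zeroing the long term state when the gateway address changes, and timing it out over a few days, would be handled outside this sketch.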
In this day and age, a program that lives only on one machine really is not
much of a program, and the typical user interaction is a user driving a gui
on one machine which is a gui to program that lives on a machine a thousand
miles away.
We have a problem with the name system, the system for obtaining network
addresses, in that the name system is subject to centralized state control,
and the TCP-SSL system is screwed by the state, which is currently seizing
crimethink domain names, and will eventually seize untraceable crypto
currency domain names.
In today's environment, it is impossible to speak the truth under one's true
name, and dangerous to speak the truth even under any durable and widely used
identity. Therefore, people who post under names tend to be unreliable.
Hence the term “namefag”. If someone posts under his true name, he is a
“namefag” probably unreliable and lying. Even someone who posts under a
durable pseudonym is apt show excessive restraint on many topics.
The AIDS virus does not itself kill you. The AIDS virus "wants" to stick
around to give itself lots of opportunities to infect other people, so it wants
to disable the immune system for obvious reasons. Then, without an immune
system, something else is likely to kill you.
When I say "wants", of course the AIDS virus is not conscious, and does not
literally want anything at all. Rather, natural selection means that a virus
that disables the immune system will have many opportunities to spread, while a
virus that fails to disable the immune system only has a short window of
opportunity to spread before the immune system kills it, unless it is so
virulent that it likely kills its host before it has the opportunity to
spread.
Similarly, a successful memetic disease that spreads through state power,
through the state system for propagation of official truth, "wants" to disable
truth speaking and truth telling, hence the replication crisis, peer
review, and the death of science. We are now in the peculiar situation that
truth is best obtained from anonymous sources, which is seriously suboptimal.
Namefags always lie. The drug companies are abandoning drug development,
because science just does not work any more. No one believes their research,
and they do not believe anyone else's research.
It used to be that there were a small number of sensitive topics, and if you
stayed away from those, you could speak the truth on everything else, but now
it is near enough to all of them that it might as well be all of them, hence
the replication crisis. Similarly, the AIDS virus tends to wind up totally
suppressing the immune system, even though more selective shutdown would
serve its interests more effectively; indeed, the AIDS virus starts by
shutting down the immune system in a more selective fashion, but in the end
cannot help itself from shutting down the immune system totally.
The memetic disease, the demon, does not “want” to shut down truth telling
wholesale. It “wants” to shut down truth telling selectively, but inevitably,
there is collateral damage, so it winds up shutting down truth telling
wholesale.
To exorcise the demon, we need a prophet, and since the demon occupies the
role of the official state church, we need a true king. Since there is a
persistent shortage of true kings, I am here speaking as an engineer rather than
a prophet, so I am discussing the anarcho agorist solution to anarcho
tyranny, the technological solution, not the true king solution.
Because of the namefag problem and the state snatching domain names, we need,
in order to operate an untraceable blockchain based currency not only a
decentralized system capable of generating consensus on who owns what cash,
we need a system capable of generating consensus on who owns which human
readable globally unique names, and on the mapping between human readable names,
Zooko triangle names (which correspond to encryption public keys), and
network addresses: a name system resistant to the state's attempts to link
names to jobs, careers, and warm bodies that can be beaten up or imprisoned,
and to link names to property that can be confiscated or destroyed.
A transaction output can hold an amount of currency, or a minimum amount of
currency and a name. Part of the current state, which every block contains,
is unused transaction outputs sorted by name.
If we make unused transaction outputs sorted by name available, we might as well
make them available sorted by key.
In the hello world system, we will have a local database mapping names to
keys and to network addresses. In the minimum viable product, a global
consensus database. We will, however, urgently need a rendezvous system that
allows people to set up wallets and peers without opening ports on stable
network address to the internet. Arguably, the minimum viable product will
have a global database mapping between keys and names, but also a nameserver
system, wherein a host without a stable network address can login to a host
with a stable network address, enabling rendezvous. When one identity has its
name servers registered in the global consensus database, it always tries to
log in to those and keep the connection alive with a ping that starts out
frequent, and then slows down along the Fibonacci sequence, to one ping every
1024 seconds plus a random number modulo 1024 seconds. At each ping, the client
tells the server when the next ping is coming, and if the server does not get
the expected ping, the server sends a nack. If the server gets no ack, it logs
the client out. If the client gets no ack, it retries; if still no ack, it
tries to log in to the next server.
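The keepalive schedule described above, intervals growing along the Fibonacci sequence and then capped at 1024 seconds plus a random offset modulo 1024 seconds, can be sketched as a generator. The cap and the randomness follow the text; everything else is an assumption.

```python
# Sketch of the ping schedule: frequent at first, slowing down along the
# Fibonacci sequence, capped at 1024 seconds plus a random number mod 1024.
import random

def ping_intervals(seed=0):
    rng = random.Random(seed)   # seeded only so the sketch is reproducible
    a, b = 1, 1
    while True:
        if a >= 1024:
            yield 1024 + rng.randrange(1024)   # capped, jittered interval
        else:
            yield a                            # Fibonacci ramp-up
        a, b = b, a + b

gen = ping_intervals()
first_ten = [next(gen) for _ in range(10)]
```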
In the minimum viable product, we will require everyone operating a peer
wallet to have a static IP address and port forwarding for most functionality
to work, which will be unacceptable or impossible for the vast majority of
users. Necessarily, we will need them to be able to receive money
without port forwarding, a static IP, or a globally identified human readable
name, by hosting their client wallet on a particular peer; otherwise, no one
could get the crypto currency they would need to set up a peer.
Because static IP is a pain, we should also support nameserver on the state
run domain name system, as well as nameserver on our peer network, but that
can wait a while. And in the end, when we grow so big that every peer is
itself a huge server farm, when we have millions of users and a thousand or
so peers, the natural state of affairs is for each peer to have a static IP.
Eventually we want people to be able to do without static IPs and
port forwarding, which is going to require a UDP layer. On the other hand, we
only intend to have a thousand or so full peers, even if we take over and
replace the US dollar as the world monetary system. Our client wallets are
going to be the primary beneficiaries of rendezvous UDT4.8 routing over UDP.
We also need names that you can send money to, and names under which you can
receive. The current cryptocash system involves sending money to
cryptographic identifiers, which is a pain. We would like to be able to send
and receive money without relying on identifiers that look like line noise.
So we need a system similar to namecoin, but namecoin relies on proof of
work, rather than proof of stake, and the state's computers can easily mount
a fifty one percent attack on proof of work. We need a namecoin like system
based on proof of stake rather than proof of work, so that for the state
to take it over, it would need to pay off fifty one percent of the
stakeholders, and thus pay off the very people who are hiding behind the name
system to perform untraceable crypto currency transactions and to speak the
unspeakable.
For anyone to get started, we are going to have to enable them to operate a
client wallet without IP and port forwarding, by logging on to a peer wallet.
The minimum viable product will not be viable without a client wallet that
you can use like any networked program. A client wallet logged onto a peer
wallet automatically gets the name `username.peername`. The peer could give
the name to someone else through error, malice or equipment failure, but the
money will remain in the clients wallet, and will be spendable when he
creates another username with another peer. Money is connected to wallet
master secret, which should never be revealed to anyone, not with the
username. So you can receive money with a name associated an evil nazi
identity as one username on one peer, and spend it with a username associated
with a social justice warrior on another peer. No one can tell that both
names are controlled by the same master secret. You send money to a username,
but it is held by the wallet, in effect by the master secret, not by the
user name. That people have usernames, that money goes from one username to
another, makes transferring money easy, but by default the money goes through
the username to the master secret behind the quite discardable username,
thus becomes anonymous, not merely pseudonymous after being received. Once
you have received the money, you can lose the username, throw it away, or
suffer it being confiscated by the peer, and you, not the username, still
have the money. You only lose the money if someone else gets the master
secret.
You can leave the money in the username, in which case the peer hosting your
username can steal it, but for a hacker to steal it he needs to get your
master secret and logon password, or you transfer it to the master secret on
your computer, in which case a hacker can steal it, but the peer cannot, and
also you can spend it from a completely different username. Since most people
using this system are likely to be keen on privacy, and have no good reason
to trust the peer, the default will be for the money to go from the username
to the master secret.
Transfers of money go from one username to another username, and this is
visible to the person who sent it and the person who received it, but if the
transfer is to the wallet and the master secret behind the username, rather
than to the username, this is not visible to the hosts. Money is associated
with a host and this association is visible, but it does not need to be the
same host as your username. By default, money is associated with the host
hosting the username that receives it, which is apt to give a hint to which
username received it, but you can change this default. If you are receiving
crypto currency under one username, and spending it under another username on
another host, it is apt to be a good idea to change this default to the host
that is hosting the username that you use for spending, because then spends
will clear more quickly. Or if both the usernames and both the hosts might
get investigated by hostile people, change the default to a host that is
hosting your respectable username that you do not use much.
We also need a state religion that makes pretty lies low status, but that is
another post.
# Mapping between globally unique human readable names and public keys
The blockchain provides a Merkle-Patricia dag of human readable names. Each
human readable name links to a list of signatures transferring ownership from
one public key to the next, terminating in an initial assignment of the name
by a previous block chain consensus. A client typically keeps a few leaves
of this tree. A host keeps the entire tree, and provides portions of the tree
to each client.
When two clients link up by human readable name, they make sure that they are
working off the same early consensus, the same initial grant of user name by
an old blockchain consensus, and also off the same more recent consensus,
for possible changes in the public key that has rightful ownership of that
name. If they see different Merkle hashes at the root of their trees, the
connection fails. Thus the blockchain they are working from has to be the
same originally, and also the same more recently.
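The connection rule above, that both clients must agree on the root hash of the early consensus and of the recent consensus, can be sketched as a comparison of root hashes. The `root` function is a toy stand-in for a real Merkle-Patricia root, and the leaf strings are made up.

```python
# Sketch of the link-up check: both clients must see the same Merkle root for
# the early consensus (the original grant of the name) and the recent
# consensus (the current key), or the connection fails.
import hashlib

def root(leaves):
    """Toy stand-in for a Merkle root: hash of the sorted leaves."""
    return hashlib.sha256(b"".join(sorted(leaves))).digest()

def may_connect(mine_early, mine_recent, theirs_early, theirs_recent):
    return mine_early == theirs_early and mine_recent == theirs_recent

early = root([b"alice->key1", b"bob->key2"])    # old consensus granting names
recent = root([b"alice->key3", b"bob->key2"])   # recent consensus on current keys
ok = may_connect(early, recent, early, recent)
forked = may_connect(early, recent, early, root([b"alice->keyX"]))
```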
This system ensures we know and agree what the public key associated with a
name is, but how do we find the network address?
# Mapping between public keys and network addresses
## The Nameserver System
Typically someone is logged in to a host with an identity that looks like an
email address, `paf.foo.bar`, where `bar` is the name of a host that is
reliably up, reliably on the network, and relatively easy to find.
You can ask the host `bar` for the public key and *the network address* of
`foo.bar`, or conversely the login name and network address associated with
this public key. Of course these values are completely subject to the caprice
of the owner of `bar`. And, having obtained the network address of `foo.bar`,
you can then get the network address of `paf.foo.bar`
Suppose someone owns the name `paf`, and you can find the global consensus as
to what public key controls `paf`, but he does not have a stable network
address. He can instead provide a nameserver, another entity that will
provide a rendezvous. If `paf` is generally logged in to `foo`, you can
contact `foo` to get rendezvous data for `paf.foo`, which is, supposing `foo`
to be well behaved, rendezvous data for `paf`.
Starting from a local list of commonly used nameserver names, keys, and
network addresses, you eventually get a live connection to the owner of that
public key, who tells you that, at the time he received your message, the
information is up to date, and that, for any globally unique human readable
names involved in setting up the connection, he is using the same blockchain
as you are.
Your local list of network addresses may well rapidly become out of date.
Information about network addresses flood fills through the system in the
form of assertions about network addresses, signed by the owners of public
keys, with timeouts on those assertions and pointers to more up to date
information for when an assertion has timed out; but we do not attempt to
create a global consensus on network addresses. Rather, the authoritative
information about the network address of a public key comes from successfully
performing a live connection to the owner of that public key. You can, and
probably should, choose some host as the decider on the current tree of
network addresses, but we don't need to agree on the host. People can work
from slightly different mappings of network addresses with no global and
complete consensus. Mappings are always incomplete and out of date, and
usually incomplete and out of date in a multitude of slightly different ways.
We need a global consensus, a single hash of the entire blockchain, on what
public keys own what crypto currency and what human readable names. We do not
need a global consensus on the mapping between public keys and network
addresses.
What you would like to get is an assertion that `paf.foo.bar` has public key
such and such, plus whatever you need to make a network connection to
`paf.foo.bar`. But `paf.foo.bar` likely has a transient public key, because
his identity is merely a username and login at `foo.bar`, and a transient
network address, because he is behind NAT translation. So you ask `bar` about
`foo.bar`, and `foo.bar` about `paf.foo.bar`, and when you actually contact
`paf.foo.bar`, then, and only then, do you know you have reliable
information. But you don't know how long it is likely to remain reliable,
though `paf.foo.bar` will tell you (and no other source of information is
authoritative, or as likely to be accurate).
Information about the mapping between public keys and network addresses that
is likely to be durable flood fills through the network of nameservers.
# Logon identity
Often, indeed typically, `ann.foo` contacts `bob.bar`, and `bob.bar` needs
continuity information: he needs to know that this is truly the same
`ann.foo` that contacted him last time, which is what we currently do with
usernames and passwords.
The name `foo` is rooted in a chain of signatures of public keys and requires
a global consensus on that chain. But the name `ann.foo` is rooted in logon
on `foo`. So `bob.bar` needs to know that `ann.foo` can log on with `foo`,
which `ann.foo` proves by providing `bob.bar` with a public key signed by
`foo`. That might be a transient public key generated the last time she
logged on, which will disappear the moment her session on her computer shuts
down, or it might be a durable public key. But a durable public key gives her
no added security, since `foo` can always make up a new public key for
anyone he decides to call `ann.foo` and sign it. So he might as well put a
timeout on the key, and `ann.foo` might as well discard it when her computer
turns off or goes into sleep mode. It is therefore in everyone's interests
(except that of attackers) that only root keys are durable.
`foo`'s key is durable, and information about it is published. `ann.foo`'s
key is transient, and information about it is always obtained directly from
`ann.foo`, either as a result of `ann.foo` logging in with someone, or as a
result of someone contacting `foo` with the intent of logging in to `ann.foo`.
But suppose, as is likely, the network address of `foo` is not actually all
that durable, and is perhaps behind a NAT. In that case, it may well be that
to contact `foo`, you need to contact `bar`.
So `foo!bar` is `foo` logged in on `bar`, not by a username and password,
but rather by his durable public key, attested by the blockchain consensus.
You then get an assertion, flood filled through the nameservers, that the
network address of the public key that the blockchain asserts is the rightful
controller of `foo` is likely to be found at `foo!`(public key of `bar`), or
equivalently at `foo!bar`.
Logons by durable public key will work exactly like logons by username and
password, or logons by derived name. It is just that the name of the
logged-on entity has a different form.
Just as openssh has logons by durable public key, logons by public key
continuity, and logons by username and password, and once you are logged on
it is all the same, you will be able to log on to `bob.bar` as `ann.bob.bar`,
meaning a username and password at `bob.bar`; as `ann.foo`, meaning `ann` has
a single signon at `foo`, a username and password at `foo`; or as `ann`,
meaning `ann` logs on to `bob.bar` with a public key attested by the
blockchain consensus as belonging to `ann`.
And if `ann` is currently logged on to `bob.bar` with a public key attested
by the blockchain consensus as belonging to `ann`, you can find the current
network address of `ann` by asking `bob.bar` for the network address of
`ann!bob.bar`.
`ann.bob.bar` is whosoever `bob.bar` decides to call `ann.bob.bar`, but
`ann!bob.bar` is an entity that controls the secret key of `ann`, who is at
this moment logged onto `bob.bar`.
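The two name forms above are syntactically distinguishable, which a client must do before deciding whom to trust. A minimal sketch, with an assumed return shape:

```python
# Sketch: `ann.bob.bar` is whoever `bob.bar` decides to call "ann", while
# `ann!bob.bar` is the holder of ann's own key, currently logged on at
# `bob.bar`. The dictionary shape returned here is an assumption.

def classify(name: str) -> dict:
    if "!" in name:
        key_name, host = name.split("!", 1)
        return {"form": "key-logon", "key_owner": key_name, "via_host": host}
    local, _, host = name.partition(".")
    return {"form": "local-name", "local": local, "host": host}
```

A `key-logon` name is rooted in the blockchain consensus on `key_owner`'s key; a `local-name` is rooted only in the say-so of `host`.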
If `ann` asserts that her current network address is likely to last a long
time, and is accessible without going through `bob.bar`, then that network
address information will flood fill through the network. Less useful network
address information, however, will not get far.
---
lang: en
title: Peering through NAT
...
A library to peer through NAT is a library to replace TCP, the domain
name system, SSL, and email. This is covered at greater length in
[Replacing TCP](replacing_TCP.html).
# Implementation issues
There is a great [pile of RFCs](./replacing_TCP.html) on issues that arise
when using UDP and ICMP to communicate.
## Timeout
The NAT mapping timeout is officially 20 seconds, but I have no idea
what this means in practice. I suspect each NAT discards port mappings
according to its own idiosyncratic rules, but 20 seconds may be a widely
respected minimum. The official maximum time that should be assumed is two
minutes, but this is far from widely implemented, so keepalives often run
faster.
The minimum socially acceptable keepalive time is 15 seconds. To avoid
synch loops, random jitter in keepalives is needed. This is discussed at
length in [RFC 5405](https://datatracker.ietf.org/doc/html/rfc5405).
An experiment on [hole punching] showed that most NATs had a far
longer timeout, and concluded that the way to go was to just repunch as
needed. They never bothered with keepalives. They also found that a lot of
the time both parties were behind the same NAT, sometimes because of
NATs on top of NATs.
[hole punching]:http://www.mindcontrol.org/~hplus/nat-punch.html
"How to communicate peer-to-peer through NAT firewalls"
{target="_blank"}
Another source says that "most NAT tables expire within 60 seconds, so
NAT keepalive allows phone ports to remain open by sending a UDP
packet every 25-50 seconds".
The no-brainer way is for each party to ping the other at a mutually agreed
time every 15 seconds, which is a significant cost in bandwidth. A server
with 4Mib/s of internet bandwidth can support keepalives for a couple of
million clients, but someone on cell phone data with thirty peers is going
to make a significant dent in his bandwidth.
With client to client keepalives, a client will probably seldom have more
than a dozen peers. Suppose each keepalive is sent 15 seconds after the
counterparty's previous packet, or when an expected keepalive is not
received, and each keepalive acks received packets. If we are not receiving
expected acks or expected keepalives, we send nack keepalives
(hello-are-you-there packets), one per second, until we give up.
This algorithm should not be baked in stone, but rather should be an
option in the connection negotiation, so that we can do new algorithms as
the NAT problem changes, as it continually does.
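The keepalive rule above can be sketched as a small decision function. The 15 second and 1 second intervals come from the text; the give-up threshold and the function's shape are illustrative assumptions.

```python
# Sketch of the keepalive rule: send a keepalive 15 s after the
# counterparty's last packet; if expected traffic goes missing, send
# hello-are-you-there probes once per second until giving up.

KEEPALIVE_AFTER = 15.0   # seconds of silence before we ping (from the text)
PROBE_INTERVAL = 1.0     # nack keepalives, one per second (from the text)
GIVE_UP_AFTER = 5        # probes before declaring the peer gone (assumed)

def next_action(now: float, last_heard: float, probes_sent: int) -> str:
    """Return what the peer loop should do at time `now`."""
    silence = now - last_heard
    if silence < KEEPALIVE_AFTER:
        return "wait"
    if probes_sent >= GIVE_UP_AFTER:
        return "give-up"
    # the first overdue check sends a keepalive; later ones are 1 s probes
    due = KEEPALIVE_AFTER + probes_sent * PROBE_INTERVAL
    return "send-probe" if silence >= due else "wait"
```

Since the text says the algorithm should be negotiable rather than baked in, the constants here would in practice come from connection negotiation.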
If two parties are trying to set up a connection through a third party
broker, they both fire packets at each other (at each other's IP as seen by
the broker) at the same broker time minus half their respective broker round
trip times. If they don't get a packet within the sum of the broker round
trip times, they keep firing with slow exponential backoff until the
connection is achieved, or until the backoff approaches the twenty second
limit.
Their initial setup packets should be steganographed as TCP startup
handshake packets.
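The timing and backoff rule above can be sketched as follows. The base delay and growth factor are invented for illustration; only the half-round-trip offset and the 20 second cap come from the text.

```python
# Sketch of the broker-coordinated simultaneous open: each side fires its
# first packet at agreed broker time T minus half its own broker round trip,
# then retries with slow exponential backoff up to the 20 s NAT limit.

def first_send_time(agreed_time: float, my_broker_rtt: float) -> float:
    """Fire at broker time T minus half this side's broker round trip."""
    return agreed_time - my_broker_rtt / 2

def retry_schedule(start: float, base: float = 1.0,
                   factor: float = 1.5, cap: float = 20.0):
    """Yield send times, backing off until the delay approaches the cap."""
    t, delay = start, base
    while delay < cap:
        yield t
        t += delay
        delay *= factor
```

Because both sides compute `first_send_time` against the same broker clock, their opening packets cross in flight and each punches the hole the other's packet needs.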
We assume a global map of peers that form a mesh whereby you can get
connections, but not everyone has to participate in that mesh. They can be
clients of such a peer, and only inform selected counterparties as to whom
they are a client of.
The protocol for a program to open port forwarding is part of Universal Plug
and Play (UPnP), which was invented by Microsoft but is now ISO/IEC 29341 and
is implemented in most SOHO routers. But it is generally turned off, by
default or manually. Needless to say, if relatively benign Bitcoin software
can poke a hole in the firewall and set up a port forward, so can botnet
malware.
The standard for poking a transient hole in a NAT is STUN, which only works for UDP, and works most of the time, though not always. This is a problem everyone has dealt with, and there are standards, but not libraries, for dealing with it. There should be a library for dealing with it, but then you have to deal with names and keys, and have a reliability and bandwidth management layer on top of UDP.
But if our messages are reasonably short and not terribly frequent, as client messages tend to be, link level buffering at the physical level will take care of bandwidth management, and reliability consists of message received, or message not received. For short messages between peers, we can probably go UDP and retry.
STUN and ISO/IEC 29341 are incomplete, and most libraries that supply implementations are far too complete: you just want a banana, and you get the entire jungle.
Ideally we would like a fake or alternative TCP session setup, using raw
sockets, after which you get a regular standard TCP connection on a random
port, assuming that the target machine has that service running, and the
default path for exporting that service results in a window with a list of
accessible services and how busy they are. Real polish would be hooking
domain name resolution so that looking up names in the peer top level
domain creates a hole, using fake TCP packets sent through a raw socket,
then returns the IP of that hole. One might have the hole go through a
wireguard-like network interface, so that you can catch them coming and
going.
Note that the internet does not in fact use the OSI model, though everyone talks as if it did. Internet layers correspond only vaguely to OSI layers, being instead:
1. Physical
2. Data link
3. Network
4. Transport
5. Application
And I have no idea how one would write or install one's own network or
transport layer, but something is installable, because I see no end of
software that installs something, as every vpn does, wireguard being the
simplest.
------------------------------------------------------------------------
Assume an identity system that finds the entity you want to
talk to.
If it is behind a firewall, you cannot notify it, cannot
send an interrupt, cannot ring its phone.
Assume the identity system can notify it. Maybe it has a
permanent connection to an entity in the identity system.
Your target agrees to take the call. Both parties are
informed by the identity system of each other's IP address
and the port number on which they will be taking the call.
Both parties send off introduction UDP packets to the
other's IP address and port number, thereby punching holes
in their firewalls for return packets. When a party gets
a return packet, an introduction acknowledgement, the
connection is assumed established.
It is that simple.
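The exchange really is that small. A minimal sketch with plain UDP sockets follows; message contents, retry count, and timeout are illustrative, and in real deployment each side would target the address the identity system reported for the other.

```python
# Sketch of the introduction-packet exchange: send toward the peer (opening
# our own NAT outward), and treat the first inbound datagram from the peer
# as proof the hole is punched.
import socket

def punch(local_port: int, peer: tuple[str, int], tries: int = 5,
          timeout: float = 1.0) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    for _ in range(tries):
        sock.sendto(b"introduction", peer)      # opens our NAT outward
        try:
            _data, addr = sock.recvfrom(1024)   # peer's packet = hole punched
            if addr[0] == peer[0]:
                sock.sendto(b"introduction-ack", peer)
                return True
        except socket.timeout:
            continue                            # retry; first packets often lost
    return False
```

Run on two hosts (or two ports on one host) pointing at each other, both sides return `True` once either introduction gets through.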
Of course networks are necessarily nondeterministic;
therefore all beliefs about the state of the network need to
be represented in a Bayesian manner, so that any
assumption is handled in such a way that the
computer is capable of doubting it.
We have a finite, and slowly changing, probability that our
packets get into the cloud, and a finite and slowly changing
probability that our messages get from the cloud to our
target. We have a finite probability that our target
has opened its firewall, and a finite probability that our
target can open its firewall, which transitions to
extremely high probability when we get an
acknowledgement, a probability which then diminishes over
time.
As I observe in [Estimating Frequencies from Small Samples](./estimating_frequencies_from_small_samples.html), any adequately flexible representation of the state of
the network has to be complex, a fairly large body of data,
more akin to a spam filter than a Boolean.
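As a toy illustration of a belief the computer can doubt, consider confidence that the peer's firewall hole is open: it jumps on an acknowledgement and decays back toward a prior as silence grows. The prior, decay constant, and exponential form are all invented for illustration; a real representation would be far richer, as the text says.

```python
# Sketch of a doubting belief: high right after an acknowledgement,
# relaxing back toward a baseline prior as time passes in silence.
import math

PRIOR = 0.2    # baseline chance the hole is open, absent evidence (assumed)
DECAY = 60.0   # seconds for evidence to fade substantially (assumed)

def hole_open_belief(seconds_since_ack: float) -> float:
    """Probability estimate that the punched hole is still open."""
    evidence = math.exp(-seconds_since_ack / DECAY)
    return PRIOR + (0.99 - PRIOR) * evidence
```

The point is only the shape: no acknowledgement is ever treated as certainty, and even a fresh one decays.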
This may be inconvenient if you do not have `gpg` installed and set up.
It also means that subsequent pulls and merges will require `gpg` to trust the key `public_key.gpg`, and if you submit a pull request, the puller will need to trust your `gpg` public key.
`.gitconfig` adds several git aliases:
1. `git lg` to display the gpg trust information for the last four commits.
To install guest additions on Debian:
```bash
su -l root
apt-get -qy update && apt-get -qy install build-essential module-assistant git sudo dialog rsync
apt-get -qy full-upgrade
m-a -qi prepare
mount -t iso9660 /dev/sr0 /media/cdrom
Under Mate and KDE Plasma, bitcoin implements run-on-login by generating a `bitcoin.desktop` file.
It does not, however, place the `bitcoin.desktop` file in any of the
other expected places. It should be in `/usr/share/applications`,
with its `Categories=` entry set to whatever Wasabi sets its
`Categories=` entry to.
The following works:
```config
$ cat ~/.local/share/applications/bitcoin.desktop
[Desktop Entry]
Type=Application
Name=Bitcoin
Exec=/home/cherry/bitcoin-22.0/bin/bitcoin-qt -min -chain=main
GenericName=Bitcoin core peer
Comment=Bitcoin core peer.
Icon=/home/cherry/bitcoin-22.0/bin/bitcoin-qt
Categories=Office;Finance
Terminal=false
Keywords=bitcoin;crypto;blockchain;qwe;asd;
Hidden=false
cat ~/.config/autostart/bitcoin.desktop
[Desktop Entry]
Type=Application
Name=Bitcoin
Exec=/home/cherry/bitcoin-22.0/bin/bitcoin-qt -min -chain=main
Terminal=false
Hidden=false
```
Under Mate and KDE Plasma, bitcoin stores its configuration data in
`~/.config/Bitcoin/Bitcoin-Qt.conf`, rather than in `~/.bitcoin`
We also have a crisis of shills, spamming, and scamming.
[lengthy battleground report]:
images/anon_report_from_people_who_tried_to_keep_unmoderated_discussion_viable.webp
"anon report from people who tried to keep unmoderated discussion viable"
{target="_blank"}
Here is a [lengthy battleground report] from some people who were on the front lines in that battle, and stuck it out a good deal longer than I did.
# Many sovereign corporations on the blockchain
We have to do something about the enemy controlling speech. No
namefag can mention or acknowledge any important truth, as more
and more things, even in science, technology, and mathematics,
become political. Global warming and the recent horrifyingly
ineffective and dangerous vaccinations are just the tip of the iceberg:
every aspect of reality is being politicized. Our capability to monitor
reality is being rapidly and massively eroded.
Among those capabilities are bookkeeping and accounting, which
are becoming "Environmental, Social, and Governance" and
increasingly detached from reality. The latter is an area of truth that
can get us paid for securing the capability to communicate truth.
Information wants to be free, but programmers want to be paid.
Increasingly, the value of shares is not physical things, but "goodwill".
Domino's does not sell pizzas, and Apple does not sell computers. Each
sets standards, and sells the expectation that stuff sold under its
brand name will conform to expectations. Domino's does not make the
pizza dough, does not make the pizzas. It sells the brand.
The latest, and one of the biggest, jewels in Apple's tech crown at
the time of writing is the M1 chip, which is *designed* by Apple. It is
not *built* by Apple. Similarly, if you buy a Domino's pizza, it was
cooked according to a Domino's recipe from Domino's approved
ingredients. But it was not cooked in a Domino's owned oven, was
not cooked by a Domino's employee, and it is unlikely that any of
the ingredients were ever anywhere near Domino's owned
physical property or a Domino's direct employee. Domino's does
not cook pizzas, and Apple does not build computers. It designs
computers and sets standards.
Most businesses are in practice distributed over a network, and
their capital is less and less big centralized physical things like steel
refineries that can easily be found and coerced, and more and more
“goodwill”. “Goodwill” being the network of relationships and social
roles within the corporation and between its customers and
suppliers, and customer and supplier expectations of employee
roles enforced by the corporation. *This*, we can move to the
blockchain and protect from governments.
A huge amount of what matters, a major proportion of the value
represented by shares, is in the social network. Which is
increasingly, like Apple and Google, scarcely attached to anything
physical and coercible, and is getting less and less attached.
It is mostly information, which is at present organized in a highly
coercible form. It does not have to be.
There are a whole lot of government hooks into the corporation
associated with physical things, but as more and more capital takes
the form of “goodwill”, the most important hooks in the Global
American Empire are Human Resources and Accounting.
We have to attach employee roles and brand names to individually
held unshared secrets, derived from a master secret of a master
wallet, rather than the domain name system.
With SSL, governments can, and routinely do, seize that name, but
that is a flaw in the SSL design, the sort of thing blockchains were
originally invented to prevent. The original concept of an
immutable append only data structure, what we now call, not very
accurately, a blockchain, was invented to address this flaw in SSL,
though its first widely used useful application was bitcoin. It is now
way past time to apply it to its original design purpose.
These days, most of the wealth of the world is no longer in physical
things, and increasingly companies are outsourcing that stuff to
large numbers of smaller businessmen, each of whom owns some
particular physical thing that is directly under his physical control,
and who are therefore hard for the state to coerce. One is more
likely to get shot when one attempts to coerce the actual owner
who is physically present on his property than when coercing his
employee who is supervising and guarding his property. And who
can switch from Domino's at the cost of taking down one sign and
putting up another. (Also at the cost of confusing his customers.)
Uber does not own any taxis, nor move any passengers, and
Domino's bakes no pizzas. Trucking companies are converging to
the Uber model. Reflect on the huge revolutionary problem the
Allende government got when attempting to coerce large numbers
of truckers who each owned their own truck. The coup was in large
part the army deciding it was easier and less disturbing to do
something about Allende than to do something about truckers. One
trucker is no problem. Many truckers are a problem. One Allende …
The value of a business is largely the value of its social net of suppliers, customers, and employee roles. Our job is protecting social nets, among them social nets that will pay us.
And with Information Epoch warfare looming, the surviving
governments will likely be those that are good at protecting their
social graph information from enemies internal and external, who
will enjoy ever increasing capability to reach out and kill one
particular man a thousand miles away. Though governments are
unlikely to pay us. They are going to try to make us pay them. And
in the end, we probably will, in return for safe locations where we
have freedom to operate. Which we will probably lease from
sovereign corporations who leased the physical facilities from
small owners, and the freedom to operate from governments.
## Source of corporateness
State incorporated corporations derive their corporateness from the
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="text/html; charset=UTF-8" http-equiv="content-type">
<style>
body {
max-width: 30em;
margin-left: 2em;
}
p.center {text-align:center;}
</style>
<link rel="shortcut icon" href="../rho.ico">
<title>Enough with the ICO-Me-So-Horny-Get-Rich-Quick-Lambo Crypto</title>
</head>
<body>
<p><a href="./index.html"> To Home page</a> </p>
<h1>Enough with the ICO-Me-So-Horny-Get-Rich-Quick-Lambo Crypto</h1>
<p><em>CoinDesk asked cypherpunk legend Timothy May, author of the &#8220;<strong><a href="https://www.activism.net/cypherpunk/crypto-anarchy.html" data-wpel-link="external" target="_blank" rel="external noopener noreferrer">Crypto Anarchist Manifesto</a>,</strong>&#8221; to write his thoughts on the bitcoin white paper on its 10th anniversary. What he sent back was a sprawling 30-page evisceration of a technology industry he feels is untethered from reality.</em></p>
<p><em>The original message is presented here as a fictional Q&amp;A for clarity. The message remains otherwise unchanged. Read more in our <strong><a href="/category/bitcoin-white-paper-reviewed/" data-wpel-link="internal">White Paper Reflections</a></strong> series.</em></p>
<hr />
<p><strong>CoinDesk: Now that bitcoin has entered the history books, how do you feel the white paper fits in the pantheon of financial cryptography advances?</strong></p>
<p><strong>Tim: </strong>First, I&#8217;ll say I&#8217;ve been following, with some interest, some amusement and a lot of frustration for the past 10 years, the public situation with bitcoin and all of the related variants.</p>
<p>In the pantheon, it deserves a front-rank place, perhaps the most important development since the invention of double-entry book-keeping.</p>
<p>I can&#8217;t speak for what Satoshi intended, but I sure don&#8217;t think it involved bitcoin exchanges that have draconian rules about KYC, AML, passports, freezes on accounts and laws about reporting &#8220;suspicious activity&#8221; to the local secret police. There&#8217;s a real possibility that all the noise about &#8220;governance,&#8221; &#8220;regulation&#8221; and &#8220;blockchain&#8221; will effectively create a surveillance state, a dossier society.</p>
<p>I think Satoshi would barf. Or at least work on a replacement for bitcoin as he first described it in 2008-2009. I cannot give a ringing endorsement to where we are, or generate a puff-piece about the great things already done.</p>
<p>Sure, bitcoin and its variants (a couple of forks and many altcoin variants) more or less work the way it was originally intended. Bitcoin can be bought or mined, can be sent in various fast ways, small fees paid and recipients get bitcoin and it can be sold in tens of minutes, sometimes even faster.</p>
<p>No permission is needed for this, no centralized agents, not even any trust amongst the parties. And bitcoin can be acquired and then saved for many years.</p>
<p>But this tsunami that swept the financial world has also left a lot of confusion and carnage behind. Detritus of the knowledge-quake, failed experiments, Schumpeter&#8217;s &#8220;creative destructionism.&#8221; It&#8217;s not really ready for primetime. Would anyone expect their mother to &#8220;download the latest client from Github, compile on one of these platforms, use the Terminal to reset these parameters?&#8221;</p>
<p>What I see is losses of hundreds of millions in some programming screw-ups, thefts, frauds, initial coin offerings (ICOs) based on flaky ideas, flaky programming and too few talented people to pull off ambitious plans.</p>
<p>Sorry if this ruins the narrative, but I think the narrative is fucked. Satoshi did a brilliant thing, but the story is far from over. She/he/it even acknowledged this, that the bitcoin version in 2008 was not some final answer received from the gods.</p>
<p><strong>CoinDesk: Do you think others in the cypherpunk community share your views? What do you think is creating interest in the industry, or killing it off?<br />
</strong></p>
<p><strong>Tim: </strong>Frankly, the newness in the Satoshi white paper (and then the early uses for things like Silk Road) is what drew many to the bitcoin world. If the project had been about a &#8220;regulatory-compliant,&#8221; &#8220;banking-friendly&#8221; thing, then interest would&#8217;ve been small. (In fact, there were some yawn-inducing electronic transfer projects going back a long time. &#8220;SET,&#8221; for Secure Electronic Transfer, was one such mind-numbingly-boring project.)</p>
<p>It had no interesting innovations and was 99 percent legalese. Cypherpunks ignored it.</p>
<p>It&#8217;s true that some of us were there when things in the &#8220;financial cryptography&#8221; arena really started to get rolling. Except for some of the work by David Chaum, Stu Haber, Scott Stornetta, and a few others, most academic cryptographers were mainly focused on the mathematics of cryptology: their gaze had not turned much toward the &#8220;financial&#8221; aspects.</p>
<p>This has of course changed in the past decade. Tens of thousands of people, at least, have flocked into bitcoin, blockchain, with major conferences nearly every week. Probably most people are interested in the &#8220;Bitcoin Era,&#8221; starting roughly around 2008-2010, but with some important history leading up to it.</p>
<p>History is a natural way people understand things… it tells a story, a linear narrative.</p>
<p>About the future I won&#8217;t speculate much. I was vocal about some &#8220;obvious&#8221; consequences from 1988 to 1998, starting with &#8220;The Crypto Anarchist Manifesto&#8221; in 1988 and the Cypherpunks group and list starting in 1992.</p>
<p><strong>CoinDesk: It sounds like you don&#8217;t think that bitcoin is particularly living up to its ethos, or that the community around it hasn&#8217;t really stuck to its cypherpunk roots.</strong></p>
<p><strong>Tim: </strong>Yes, I think the greed and hype and nattering about &#8220;to the Moon!&#8221; and &#8220;HODL&#8221; is the biggest hype wagon I&#8217;ve ever seen.</p>
<p>Not so much in the &#8220;Dutch Tulip&#8221; sense of enormous price increases, but in the sense of hundreds of companies, thousands of participants, and the breathless reporting. And the hero worship. This is much more hype than we saw during the dot-com era. I think far too much publicity is being given to talks at conferences, white papers and press releases. A whole lot of &#8220;selling&#8221; is going on.</p>
<p>People and companies are trying to stake-out claims. Some are even filing for dozens or hundreds of patents in fairly-obvious variants of the basic ideas, even for topics that were extensively-discussed in the 1990s. Let&#8217;s hope the patent system dismisses some of these (though probably only when the juggernauts enter the legal fray).</p>
<p>The tension between privacy (or anonymity) and &#8220;know your customer&#8221; approaches is a core issue. It&#8217;s &#8220;decentralized, anarchic and peer-to-peer&#8221; versus &#8220;centralized, permissioned and back door.&#8221; Understand that the vision of many in the privacy community — cypherpunks, Satoshi, other pioneers — was explicitly of a permission-less, peer-to-peer system for money transfers. Some had visions of a replacement for &#8220;fiat&#8221; currency.</p>
<p>David Chaum, a principal pioneer, was very forward-thinking on issues of &#8220;buyer anonymity.&#8221; Where, for example, a large store could receive payments for goods without knowing the identity of a buyer. (Which is most definitely not the case today, where stores like Walmart and Costco and everybody else compiled detailed records on what customers buy. And where police investigators can buy the records or access them via subpoenas. And in more nefarious ways in some countries.)</p>
<p>Remember, there are many reasons a buyer does not wish to disclose buying preferences. But buyers and sellers BOTH need protections against tracking: a seller of birth control information is probably even more at risk than some mere buyer of such information (in many countries). Then there&#8217;s blasphemy, sacrilege and political activism. Approaches like Digicash concentrated on *buyer* anonymity (as with shoppers at a store or drivers on a toll-road), but were missing a key ingredient: that most people are hunted-down for their speech or their politics on the *seller* side.</p>
<p>Fortunately, buyers and sellers are essentially isomorphic, just with some changes in a few arrow directions (&#8220;first-class objects&#8221;).</p>
<p>What Satoshi did essentially was to solve the &#8220;buyer&#8221;/&#8221;seller&#8221; track-ability tension by providing both buyer AND seller untraceability. Not perfectly, it appears. Which is why so much activity continues.</p>
<p><strong>CoinDesk: So, you&#8217;re saying bitcoin and crypto innovators need to fight the powers that be, essentially, not align with them to achieve true innovation?</strong></p>
<p><strong>Tim: </strong>Yes, there is not much of interest to many of us if cryptocurrencies just become Yet Another PayPal, just another bank transfer system. What&#8217;s exciting is the bypassing of gatekeepers, of exorbitant fee collectors, of middlemen who decide whether Wikileaks — to pick a timely example — can have donations reach it. And to allow people to send money abroad.</p>
<p>Attempts to be &#8220;regulatory-friendly&#8221; will likely kill the main uses for cryptocurrencies, which are NOT just &#8220;another form of PayPal or Visa.&#8221;</p>
<p>More general uses of &#8220;blockchain&#8221; technology are another kettle of fish. Many uses may be compliance-friendly. Of course, a lot of the proposed uses — like putting supply chain records — on various public or private blockchains are not very interesting. Many point that these &#8220;distributed ledgers&#8221; are not even new inventions, just variants of databases with backups. As well, the idea that corporations want public visibility into contracts, materials purchases, shipping dates, and so on, is naive.</p>
<p>Remember, the excitement about bitcoin was mostly about bypassing controls, to enable exotic new uses like Silk Road. It was some cool and edgy stuff, not just another PayPal.</p>
<p><strong>CoinDesk: So, you&#8217;re saying that we should think outside the box, try to think about ways to apply the technology in novel ways, not just remake what we know?</strong></p>
<p><strong>Tim: </strong>People should do what interests them. This was how most of the innovative stuff like BitTorrent, mix-nets, bitcoin, etc. happened. So, I&#8217;m not sure that &#8220;try to think about ways&#8221; is the best way to put it. My hunch is that ideologically-driven people will do what is interesting. Corporate people will probably not do well in &#8220;thinking about ways.&#8221;</p>
<p>Money is speech. Checks, IOUs, delivery contracts, Hawallah banks, all are used as forms of money. Nick Szabo has pointed out that bitcoin and some other cryptocurrencies have most if not all of the features of gold, plus features gold lacks: it weighs nothing, it&#8217;s difficult to steal or seize, and it can be sent over the crudest of wires. And in minutes, not on long cargo flights as when gold bars are moved from one place to another.</p>
<p>But, nothing is sacred about either banknotes, coins or even official-looking checks. These are &#8220;centralized&#8221; systems dependent on &#8220;trusted third parties&#8221; like banks or nation-states to make some legal or royal guaranty.</p>
<p>Sending bitcoin, in contrast, is equivalent to &#8220;saying&#8221; a number (the math is more complicated than this, but this is the general idea). To ban saying a number is equivalent to a ban on some speech. That doesn&#8217;t mean the tech can&#8217;t be stopped. There was the &#8220;printing out PGP code&#8221; episode, and the Cody Wilson, Defense Distributed case, where a circuit court ruled this way:</p>
<p>Printed words are very seldom outside the scope of the First Amendment.</p>
<p><strong>CoinDesk: Isn&#8217;t this a good example of where you, arguably, want some censorship (the ability to force laws), if we&#8217;re going to rebuild the whole economy, or even partial economies, on top of this stuff?</strong></p>
<p><strong>Tim: </strong>There will inevitably be some contact with the legal systems of the U.S., or the rest of the world. Slogans like &#8220;the code is the law&#8221; are mainly aspirational, not actually true.</p>
<p>Bitcoin, qua bitcoin, is mostly independent of law. Payments are, by the nature of bitcoin, independent of charge-backs, &#8220;I want to cancel that transaction,&#8221; and other legal issues. This may change. But in the current scheme, it&#8217;s generally not know who the parties are, which jurisdictions the parties live in, even which laws apply.</p>
<p>This said, I think nearly all new technologies have had uses some would not like. Gutenberg&#8217;s printing press was certainly not liked by the Catholic Church. Examples abound. But does this mean printing presses should be licensed or regulated?</p>
<p>There have usually been some unsavory or worse uses of new technologies (what&#8217;s unsavory to, say, the U.S.S.R. may not be unsavory to Americans). Birth control information was banned in Ireland, Saudi Arabia, etc. Examples abound: weapons, fire, printing press, telephones, copier machines, computers, tape recorders.</p>
<p><strong>CoinDesk: Is there a blockchain or cryptocurrency that&#8217;s doing it right? Is bitcoin, in your opinion, getting its own vision right?</strong></p>
<p><strong>Tim: </strong>As I said, bitcoin is basically doing what it was planned to do. Money can be transferred, saved (as bitcoin), even used as a speculative vehicle. The same cannot be said for dozens of major variants and hundreds of minor variants where a clear-cut, understandable &#8220;use case&#8221; is difficult to find.</p>
<p>Talk of &#8220;reputation tokens,&#8221; &#8220;attention tokens,&#8221; &#8220;charitable giving tokens,&#8221; these all seem way premature to me. And none have taken off the way bitcoin did. Even ethereum, a majorly different approach, has yet to see interesting uses (at least that I have seen, and I admit I don&#8217;t have the time or will to spend hours every day following the Reddit and Twitter comments).</p>
<p>&#8220;Blockchain,&#8221; now its own rapidly-developing industry, is proceeding on several paths: private blockchains, bank-controlled blockchains, public blockchains, even using the bitcoin blockchain itself. Some uses may turn out to be useful, but some appear to be speculative, toy-like. Really, marriage proposals on the blockchain?</p>
<p>The sheer number of small companies, large consortiums, alternative cryptocurrencies, initial coin offerings (ICOs), conferences, expos, forks, new protocols, is causing great confusion and yet there are new conferences nearly every week.</p>
<p>People jetting from Tokyo to Kiev to Cancun for the latest 3-5 day rolling party. The smallest only attract hundreds of fanboys, the largest apparently have drawn crowds of 8,000. You can contrast that with the straightforward roll-out of credit cards, or even the relatively clean roll-out of bitcoin. People cannot spend mental energy reading technical papers, following the weekly announcements, the contentious debates. The mental transaction costs are too high, for too little.</p>
<p>The people I hear about who are reportedly transferring &#8220;interesting&#8221; amounts of money are using basic forms of bitcoin or bitcoin cash, not exotic new things like Lightning, Avalanche, or the 30 to 100 other things.</p>
<p><strong>CoinDesk: It sounds like you&#8217;re optimistic about the value transfer use case for cryptocurrencies, at least then.</strong></p>
<p><strong>Tim: </strong>Well, it will be a tragic error if the race to develop (and profit from) the things that are confusingly called &#8220;cryptocurrencies&#8221; ends up developing dossiers or surveillance societies such as the world has never seen. I&#8217;m just saying there&#8217;s a danger.</p>
<p>With &#8220;know your customer&#8221; regulations, crypto monetary transfers won&#8217;t be like what we have now with ordinary cash transactions, or even with wire transfers, checks, etc. Things will be <em>worse</em> than what we have now if a system of &#8220;is-a-person&#8221; credentialing and &#8220;know your customer&#8221; governance is ever established. Some countries already want this to happen.</p>
<p>The &#8220;Internet driver&#8217;s license&#8221; is something we need to fight against.</p>
<p><strong>CoinDesk: That&#8217;s possible, but you could make a similar claim about the internet: it isn&#8217;t exactly the same today as the original idea, yet it&#8217;s still been useful in driving human progress.</strong></p>
<p><strong>Tim: </strong>I&#8217;m just saying we could end up with a regulation of money and transfers that is much the same as regulating speech. Is this a reach? If Alice can be forbidden from saying &#8220;I will gladly pay you a dollar next week for a cheeseburger today,&#8221; is this not a speech restriction? &#8220;Know your customer&#8221; could just as easily be applied to books and publishing: &#8220;Know your reader.&#8221; Gaaack!</p>
<p>I&#8217;m saying there are two paths: freedom vs. permissioned and centralized systems.</p>
<p>This fork in the road was widely discussed some 25 years ago. Government and law enforcement types didn&#8217;t even really disagree: they saw the fork approaching. Today, we have both sides of it: tracking, the wide use of scanners (at elevators, chokepoints), forced decryption, backdoors, escrow, alongside tools for encryption, cash, and privacy.</p>
<p>In an age where a person&#8217;s smartphone or computer may carry gigabytes of photos, correspondence, and business information, much more than an entire house carried back when the Bill of Rights was written, the casual interception of phones and computers is worrisome. A lot of countries are even worse than the U.S. New tools to secure data are needed, and lawmakers need to be educated.</p>
<p>Corporations are showing signs of corporatizing the blockchain: there are several large consortiums, even cartels who want &#8220;regulatory compliance.&#8221;</p>
<p>It is tempting for some to think that legal protections and judicial supervision will stop excesses… at least in the US and some other countries. Yet, we know that even the US has engaged in draconian behavior (purges of Mormons, killings and death marches for Native Americans, lynchings, illegal imprisonment of those of suspected Japanese ancestry).</p>
<p>What will China and Iran do with the powerful &#8220;know your writers&#8221; (to extend &#8220;know your customer&#8221; in the inevitable way)?</p>
<p><strong>CoinDesk: Are we even talking about technology anymore though? Isn&#8217;t this just power and the balance of power. Isn&#8217;t there good that has come from the internet even if it&#8217;s become more centralized?</strong></p>
<p><strong>Tim: </strong>Of course, there&#8217;s been much good coming out of the Internet tsunami.</p>
<p>But, China already uses massive databases with the aid of search engine companies to compile &#8220;citizen trustworthiness&#8221; ratings that can be used to deny access to banking, hotels, travel. Social media corporate giants are eagerly moving to help build the machinery of the Dossier Society (they claim otherwise, but their actions speak for themselves).</p>
<p>Not to sound like a Leftist ranting about Big Brother, but any civil libertarian or actual libertarian has reason to be afraid. In fact, many authors decades ago predicted this dossier society, and the tools have jumped in quantum leaps since then.</p>
<p>In thermodynamics, and in mechanical systems, with moving parts, there are &#8220;degrees of freedom.&#8221; A piston can move up or down, a rotor can turn, etc. I believe social systems and economies can be characterized in similar ways. Some things increase degrees of freedom, some things &#8220;lock it down.&#8221;</p>
<p><strong>CoinDesk: Have you thought about writing something definitive on the current crypto times, sort of a new spin on your old works?</strong></p>
<p><strong>Tim: </strong>No, not really. I spent a lot of time in the 1992-95 period writing for many hours a day. I don&#8217;t have it in me to do this again. That a real book did not come out of this is mildly regrettable, but I&#8217;m stoical about it.</p>
<p><strong>CoinDesk: Let&#8217;s step back and look at your history. Knowing what you know about the early cypherpunk days, do you see any analogies to what&#8217;s happening in crypto now?</strong></p>
<p><strong>Tim: </strong>About 30 years ago, I got interested in the implications of strong cryptography. Not so much about the &#8220;sending secret messages&#8221; part, but the implications for money, bypassing borders, letting people transact without government control, voluntary associations.</p>
<p>I came to call it &#8220;crypto anarchy&#8221; and in 1988 I wrote &#8220;The Crypto Anarchist Manifesto,&#8221; loosely-based in form on another famous manifesto. And based on &#8220;anarcho-capitalism,&#8221; a well-known variant of anarchism. (Nothing to do with Russian anarchists or syndicalists, just free trade and voluntary transactions.)</p>
<p>At the time, there was one main conference, Crypto, and two less-popular conferences, EuroCrypt and AsiaCrypt. The academic conferences had few if any papers on any links to economics and institutions (politics, if you will). Some game theory-related papers were very important, like the mind-blowing &#8220;Zero Knowledge Interactive Proof Systems&#8221; work of Micali, Goldwasser and Rackoff.</p>
<p>I explored the ideas for several years. In my retirement from Intel in 1986 (thank you, 100-fold increase in the stock price!), I spent many hours a day reading crypto papers, thinking about new structures that were about to become possible.</p>
<p>Things like data havens in cyberspace, new financial institutions, timed-release crypto, digital dead drops through steganography, and, of course, digital money.</p>
<p>Around that time, I met Eric Hughes and he visited my place near Santa Cruz. We hatched a plan to call together some of the brightest people we knew to talk about this stuff. We met in his newly-rented house in the Oakland Hills in the late summer of 1992.</p>
<p><strong>CoinDesk: You mentioned implications for money&#8230; Were there any inclinations then that something like bitcoin or cryptocurrency would come along?</strong></p>
<p><strong>Tim: </strong>Ironically, at that first meeting, I passed out some Monopoly money I bought at a toy store. (I say ironically because years later, when bitcoin was first being exchanged in around 2009-2011, it looked like play money to most people; cue the pizza story!)</p>
<p>I apportioned it out and we used it to simulate what a world of strong crypto, with data havens and black markets and remailers (Chaum&#8217;s &#8220;mixes&#8221;) might look like. Systems like what later became &#8220;Silk Road&#8221; were a hoot. (More than one journalist has asked me why I did not widely-distribute my &#8220;BlackNet&#8221; proof of concept. My answer is generally &#8220;Because I didn&#8217;t want to be arrested and imprisoned.&#8221; Proposing ideas and writing is protected speech, at least in the U.S. at present.)</p>
<p>We started to meet monthly, if not more often at times, and a mailing list rapidly formed. John Gilmore and Hugh Daniel hosted the mailing list. There was no moderation, no screening, no &#8220;censorship&#8221; (in the loose sense, not referring to government censorship, of which of course there was none.) The &#8220;no moderation&#8221; policy went along with &#8220;no leaders.&#8221;</p>
<p>While a handful of maybe 20 people wrote 80 percent of the essays and messages, there was no real structure. (We also thought this would provide better protection against government prosecution).</p>
<p>And of course this fits with a polycentric, distributed, permission-less, peer to peer structure. A form of anarchy, in the &#8220;an-arch,&#8221; or &#8220;no top,&#8221; true meaning of the word anarchy. This had been previously explored by David Friedman, in his influential mid-70s book &#8220;The Machinery of Freedom.&#8221; And by Bruce Benson, in &#8220;The Enterprise of Law.&#8221;</p>
<p>He studied the role of legal systems absent some ruling top authority. And of course anarchy is the default and preferred mode of most people—to choose what they eat, who they associate with, what they read and watch. And whenever some government or tyrant tries to restrict their choices they often find ways to route around the restrictions: birth control, underground literature, illegal radio reception, copied cassette tapes, thumb drives ….</p>
<p>This probably influenced the form of bitcoin that Satoshi Nakamoto later formulated.</p>
<p><strong>CoinDesk: What was your first reaction to Satoshi&#8217;s messages, do you remember how you felt about the ideas?</strong></p>
<p><strong>Tim: </strong>I was actually doing some other things and wasn&#8217;t following the debates. My friend Nick Szabo mentioned some of the topics in around 2006-2008. And like a lot of people I think my reaction to hearing about the Satoshi white paper and then the earliest &#8220;toy&#8221; transactions was only mild interest. It just didn&#8217;t seem likely to become as big as it did.</p>
<p>People had debated aspects of how a digital currency might work, what it needed to make it interesting. Then, in 2008, Satoshi Nakamoto released &#8220;their&#8221; white paper. A lot of debate ensued, but also a lot of skepticism.</p>
<p>In early 2009 an alpha release of &#8220;bitcoin&#8221; appeared. Hal Finney had the first bitcoin transaction with Satoshi. A few others. Satoshi himself (themselves?) even said that bitcoin would likely either go to zero in value or to a &#8220;lot.&#8221; I think many were either not following it or expected it would go to zero, just another bit of wreckage on the Information Superhighway.</p>
<p>The infamous pizza purchase shows that most thought of it as basically toy money.</p>
<p><strong>CoinDesk: Do you still think it&#8217;s toy money? Or has the slowly increasing value sort of put that argument to rest, in your mind?</strong></p>
<p><strong>Tim: </strong>No, it&#8217;s no longer just toy money. Hasn&#8217;t been for the past several years. But it&#8217;s also not yet a replacement for money, for folding money. For bank transfers, for Hawallah banks, sure. It&#8217;s functioning as a money transfer system, and for black markets and the like.</p>
<p>I&#8217;ve never seen such hype, such mania. Not even during the dot-com bubble, the era of Pets.com and people talking about how much money they made by buying stocks in &#8220;JDS Uniphase.&#8221; (After the bubble burst, the joke around Silicon Valley was &#8220;What&#8217;s this new start-up called &#8216;Space Available&#8217;?&#8221; Empty buildings all around.)</p>
<p>I still think cryptocurrency is too complicated…coins, forks, sharding, off-chain networks, DAGs, proof-of-work vs. proof-of-stake; the average person cannot plausibly follow all of this. What use cases, really? Talk about the eventual replacement of the banking system, or credit cards, PayPal, etc. is nice, but what does it do NOW?</p>
<p>The most compelling cases I hear about are when someone transfers money to a party that has been blocked by PayPal, Visa (etc), or banks and wire transfers. The rest is hype, evangelizing, HODL, get-rich lambo garbage.</p>
<p><strong>CoinDesk: So, you see that as bad. You don&#8217;t buy the argument that that&#8217;s how things get built though, over time, somewhat sloppily…</strong></p>
<p><strong>Tim: </strong>Things sometimes get built in sloppy ways. Planes crash, dams fail, engineers learn. But there are many glaring flaws in the whole ecology. Programming errors, conceptual errors, poor security methods. Hundreds of millions of dollars have been lost, stolen, locked in time-vault errors.</p>
<p>If banks were to lose this kind of money in &#8220;Oops. My bad!&#8221; situations there&#8217;d be bloody screams. When safes were broken into, the manufacturers studied the faults — what we now call &#8220;the attack surface&#8221; — and changes were made. It&#8217;s not just that customers — the banks — were encouraged to upgrade, it&#8217;s that their insurance rates were lower with newer safes. We desperately need something like this with cryptocurrencies and exchanges.</p>
<p>Universities can&#8217;t train even basic &#8220;cryptocurrency engineers&#8221; fast enough, let alone researchers. Cryptocurrency requires a lot of unusual areas: game theory, probability theory, finance, programming.</p>
<p>Any child understands what a coin like a quarter &#8220;does.&#8221; He sees others using quarters and dollar bills and the way it works is clear.</p>
<p>When I got my first credit card I did not spend a lot of time reading manuals, let alone downloading wallets, cold storage tools or keeping myself current on the protocols. It just worked, and money didn&#8217;t just vanish.</p>
<p><strong>CoinDesk: It sounds like you don&#8217;t like how innovation and speculation have become intertwined in the industry…</strong></p>
<p><strong>Tim: </strong>Innovation is fine. I saw a lot of it in the chip industry. But we didn&#8217;t have conferences EVERY WEEK! And we didn&#8217;t announce new products we had only the sketchiest ideas about. And we didn&#8217;t form new companies with such abandon. And we didn&#8217;t fund by &#8220;floating an ICO&#8221; and raising $100 million from what are, bluntly put, naive speculators who hope to catch the next bitcoin.</p>
<p>Amongst my friends, some of whom work at cryptocurrency companies and exchanges, the main interest seems to be in the speculative stuff. Which is why they often keep their cryptocurrency at the exchanges: for rapid trading, shorting, hedging, but NOT for buying stuff or transferring assets outside of the normal channels.</p>
<p><strong>CoinDesk: Yet, you seem pretty knowledgeable on the whole about the subject area&#8230; Sounds like you might have a specific idea of what it &#8220;should&#8221; be.</strong></p>
<p><strong>Tim: </strong>I probably spend way too much time following the Reddit and Twitter threads (I don&#8217;t have an actual Twitter account).</p>
<p>What &#8220;should&#8221; it be? As the saying goes, the street will find its own uses for technology. For a while, Silk Road and its variants drove wide use. Recently, it&#8217;s been HODLing, aka speculating. I hear that online gambling is one of the main uses of ethereum. Let the fools blow their money.</p>
<p>Is the fluff and hype worth it? Will cryptocurrency change the world? Probably. The future is no doubt online, electronic, paperless.</p>
<p>But bottom line, there&#8217;s way too much hype, way too much publicity and not very many people who understand the ideas. It&#8217;s almost as if people realize there&#8217;s a whole world out there and thousands start building boats in their backyards.</p>
<p>Some will make it, but most will either stop building their boats or will sink at sea.</p>
<p>We were once big on manifestos. These were ways not of enforcing compliance, but of suggesting ways to proceed. A bit like advising a cat… one does not command a cat, one merely suggests ideas, which it sometimes goes along with.</p>
<p><strong>Final Thoughts:</strong></p>
<ul>
<li>Don&#8217;t use something just because it sounds cool…only use it if it actually solves some problem (To date, cryptocurrency solves problems for few people, at least in the First World).</li>
<li>Most things we think of as problems are not solvable with crypto or any other such technology (crap like &#8220;better donation systems&#8221; is not something most people are interested in).</li>
<li>If one is involved in dangerous transactions (drugs, birth control information), practice intensive &#8220;operational security&#8221;…look at how Ross Ulbricht was caught.</li>
<li>Mathematics is not the law.</li>
<li>Crypto remains very far from being usable by average people (even technical people).</li>
<li>Be interested in liberty and the freedom to transact and speak to get back to the original motivations. Don&#8217;t spend time trying to make government-friendly financial alternatives.</li>
<li>Remember, there are a lot of tyrants out there.</li>
</ul>
<p style="background-color : #ccffcc; font-size:80%">These documents are
licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">Creative Commons Attribution-Share Alike 3.0 License</a></p>
</body>
</html>

<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style>
body {
max-width: 30em;
margin-left: 2em;
}
p.center {
text-align:center;
}
</style>
<link rel="shortcut icon" href="../rho.ico">
<title>
True Names and TCP
</title>
</head><body>
<h1>True Names and TCP
</h1><p>
Vernor Vinge <a
href="http://www.amazon.com/True-Names-Opening-Cyberspace-Frontier/dp/0312862075">made
the point</a> that true names are an instrument of
government oppression. If the government can associate
your true name with your actions, it can punish you for
those actions. If it can find the true names associated
with a transaction, it is a lot easier to tax that
transaction. </p><p>
Recently there have been moves to make your cell phone
into a wallet. A big problem with this is that cell
phone cryptography is broken. Another problem is that
cell phones are not necessarily associated with true names, and as soon as the government hears that they might control money, it starts insisting that cell phones <em>are</em> associated with true names. The phone companies don't like this, for if money is transferred from true name to true name, rather than cell phone to cell phone, it will make them servants of the banking cartel, and the bankers will suck up all the gravy; but once people start stealing money through flaws in the encryption, they will be depressingly grateful that the government can track account holders down and punish them, except, of course, the government probably will not be much good at doing so. </p><p>
TCP is all about creating connections.  It creates connections between network addresses, but network addresses correspond to the way networks are organized, not the way people are organized, so on top of networks we have domain names.  </p><p>
TCP therefore establishes a connection <em>to</em> a domain name rather than a mere network address, but there is no concept of the connection coming <em>from</em> anywhere humanly meaningful. </p><p>
Urns are “uniform resource names”, and uris are “uniform resource identifiers” and urls are “uniform resource locators”, and that is what the web is built out of. </p><p>
There are several big problems with urls: </p><ol><li><p>
They are uniform: Everyone is supposed to agree on one domain name for one entity, but of course they don't.  There is honest and reasonable disagreement as to which jim is the “real” jim, because in truth there is no one real jim, and there is fraud, as in lots of people pretending to be Paypal or the Bank of America, in order to steal your money.</p></li><li><p>
They are resources: Each refers to only a single interaction, but of course relationships are built out of many interactions.  There is no concept of a connection continuing throughout many pages, no concept of logon.  In building urls on top of TCP, we lost the concept of a connection.  And because urls are built out of TCP there is no concept of the content depending on both ends of the connection; that a page at the Bank might be different for Bob than it is for Carol, that it does in reality depend on who is connected, is a kluge that breaks the architecture. </p><p>
Because security (ssl, https) is constructed below the level of a connection, because it lacks a concept of connection extending beyond a single page or a single url, a multitude of insecurities result. We want https and ssl to secure a connection, but https and ssl do not know there are such things as logons and connections. </li></ol><p>
Because domain names and hence urls presuppose agreement, agreement which can never exist, we get cybersquatting and phishing and suchlike. </p><p>
That connections and logons exist but are not explicitly addressed by the protocol leads to such attacks as cross site scripting and session fixation. </p><p>
A proposed fix for this problem is yurls, which apply Zooko's triangle to the web: One adds to the domain name a hash of a rule for validating the public key, making it into Zooko's globally unique identifier.  The nickname (non unique global identifier) is the web page title, and the petname (locally unique identifier) is the title under which it appears in your bookmark list, or the link text under which it appears in a web page. </p><p>
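The three corners of the triangle can be sketched in a few lines of code. This is a hypothetical illustration, not any real yurl implementation: the class, method names, and stand-in "public keys" are all invented. The secure, globally unique corner is a hash of the key; the nickname is merely claimed by the site, while the petname is chosen locally and kept unique locally.

```python
import hashlib

def global_id(pubkey: bytes) -> str:
    """Globally unique, self-authenticating identifier: a hash of the key."""
    return hashlib.sha256(pubkey).hexdigest()[:16]

class PetnameStore:
    """Maps secure global identifiers to locally unique petnames."""
    def __init__(self):
        self._by_id = {}  # global id -> petname (locally unique)

    def add(self, pubkey: bytes, nickname: str, petname: str) -> str:
        gid = global_id(pubkey)
        if petname in self._by_id.values():
            raise ValueError("petname must be locally unique")
        self._by_id[gid] = petname
        return gid

    def petname_for(self, pubkey: bytes) -> str:
        # Lookup is keyed by the key hash, so a phisher claiming the same
        # nickname but holding a different key resolves to nothing.
        return self._by_id.get(global_id(pubkey), "<unknown site>")

store = PetnameStore()
bank_key = b"bank-public-key-bytes"  # stand-in for a real public key
store.add(bank_key, nickname="Bank of America", petname="my bank")
```

The point of the sketch: the human-memorable petname is bound to the unforgeable key hash, never to the contested nickname, which is what dissolves the "which jim is the real jim" problem.
</p><p>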
This, however, breaks normal form.  The public key is an attribute of the domain, while the nickname and petnames are attributes of particular web pages: a breach of normal form related to the loss of the concept of connection, a breach of normal form reflecting the fact that urls provide no concept of a logon, a connection, or a user.  </p><p>
OK, so much for “uniform”.  Instead of uniform identifiers, we should have zooko identifiers, and zooko identifiers organized in normal form.  But what about “resource”, for “resource” also breaks normal form. </p><p>
Instead of “resources”, we should have “capabilities”.  A resource corresponds to a special case of a capability: a resource is a capability that resembles a read only file handle. But what exactly are “capabilities”? </p><p>
People with different concepts about what is best for computer security tend to disagree passionately and at considerable length about what the word “capability” means, and will undoubtedly tell me I am a complete moron for using it in the manner that I intend to use it, but barging ahead anyway: </p><p>
A “capability” is an object that represents one end of a communication channel, or information that enables an entity to obtain such a channel, or the user interface representation of such a channel, or such a potential channel. The channel enables the possessor of the capability to do stuff to something, or get something.  Capabilities are usually obtained by being passed along the communication channel. Capabilities are usually obtained from capabilities, or inherited by a running instance of a program when the program is created, or read from storage after originally being obtained by means of another capability. </p><p>
This definition leaves out the issue of security: to provide security, capabilities need to be unforgeable or difficult to guess.  Capabilities are usually defined with the security characteristics central to them, but I am defining capabilities so that what is central is connections and managing lots of potential connections.  Sometimes security and limiting access is a very important part of management, and sometimes it is not.</p><p>
A file handle could be an example of a capability: it is a communication channel between a process and the file management system.  Suppose we are focussing on security and access management to files: A file handle could be used to control and manage permissions if a program that has the privilege to access certain files could pass an unforgeable file handle to one of those files to a program that lacks such access, and this is the only way the less privileged program could get at those files. </p><p>
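The file-handle scenario can be made concrete with a short sketch (the function and file names here are invented for illustration): the privileged caller opens the file and hands the unprivileged routine only the open handle, which acts as the capability. The routine never sees a path and holds no standing authority to open anything else by name.

```python
import os
import tempfile

def unprivileged_report(handle):
    """Runs with no file-opening authority of its own: it can only use
    the one open handle (the capability) it was explicitly passed."""
    return len(handle.read())

# Privileged code: decides which file to expose, then delegates the handle.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("secret ledger contents")

with open(path) as grant:            # privileged open
    n = unprivileged_report(grant)   # capability passed down the call chain

os.remove(path)
print(n)  # 22: length of the granted file's contents
```

In a real capability system the operating system, not programmer discipline, would guarantee that `unprivileged_report` cannot call `open` itself; the sketch only shows the shape of the delegation.
</p><p>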
Often the server wants to make sure that the client at one end of a connection is the user it thinks it is, which fits exactly into the usual definitions of capabilities.  But more often, the server does not care who the client is, but the client wants to make sure that the server at the other end of the connection is the server he thinks it is, which, since it is the client that initiates the connection, does not fit well into many existing definitions of security by capabilities.
</p>
<p style="background-color : #ccffcc; font-size:80%">These documents are
licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">Creative
Commons Attribution-Share Alike 3.0 License</a></p>
</body></html>

So general Malloc might send general Bob the message:
> facing overwhelming enemy attack, falling back. You and general Dave may soon be cut off.
and general Dave the message:
> enemy collapsing. In pursuit.
yields an advantage of at least two to one in getting one's way.
This is a Byzantine fault. And if people get away with it, pretty soon no
one is following process, and the capacity to act as one collapses. Thus
process becomes bureaucracy. Hence today's American State Department
and defence policy. Big corporations die of this, though states take longer
to die, and their deaths are messier. It is a big problem, and people, not
just computer programs, fail to solve it all the time.