forked from cheng/wallet
Merge remote-tracking branch 'origin/docs'
commit 78a14309e5
@ -3,19 +3,29 @@
title: Blockdag Consensus
---

# The problem

For the reasons discussed, proof of work is incapable of handling a very large number of transactions per second. To replace fiat money, we need a consensus algorithm capable of a thousand times greater consensus bandwidth. There are plenty of consensus algorithms that can handle much higher consensus bandwidth, but they do not scale to large numbers of peers. They are usually implemented with a fixed number of peers, usually three, perhaps five, all of which have high reliability connections to each other in a single data centre.

In a decentralized open entry peer to peer network, you are apt to get a very large number of peers, which keep unpredictably appearing and disappearing, and which frequently have unreliable and slow connections.

Existing proof of stake crypto currencies handle this by "staking", which is in practice centralization swept under the rug. They are not really decentralized peer to peer networks with open entry.

## The solution outlined

### A manageable number of peers forming consensus

In a decentralized peer to peer network, it is impossible to avoid forks. Even Practical Byzantine Fault Tolerant Consensus and Raft, which have a vast, complex, and ingenious mathematical structure to prove that forks are impossible, and which assume a known and fixed very small number of peers all inside a single data centre with very good network connections, wound up "optimizing" the algorithm to furtively allow forks through the back door, to be resolved later, possibly much later, because getting a fork free answer could sometimes take a very long time. And though the network was theoretically always live, in that it would theoretically deliver a definitive result eventually as long as $f+1$ non faulty peers were still at it, "eventually" was sometimes longer than people were willing to wait.

Practical Byzantine Fault Tolerant Consensus is horrendously complicated and hard to understand, and becomes a lot more complicated when you

@ -30,8 +40,9 @@
So, your consensus mechanism must reduce, but cannot eliminate, forks, and therefore should not try. (There are a bunch of impossibility proofs, which any proposed consensus mechanism must thread a very narrow path between.) And when a fork happens, as it likely very often will, it has to resolve that fork by everyone eventually moving to the prong of the fork that has the most support, as happens in Bitcoin proof of work. Which is why Bitcoin proof of work scales to large numbers of peers.

In proof of work consensus, you slow down the production of forks by making everyone wade through molasses, and you can tell how much

@ -50,10 +61,18 @@
power. There are very few big miners.

But you don't want all the peers to report in on which fork they are on, because that is too much data, which is the problem with all the dag consensus systems. They all, like Practical Byzantine Fault Tolerant Consensus and Byzantine Fault Tolerant Raft, of which they are variants, rely on all the peers telling all the other peers what branch they are on. Proof of work produces a very efficient and compact form of evidence of how much support there is for a prong of a fork.

### Sampling the peers

So we have to sample the peers, or rather have each peer draw consensus from the same representative sample. And then we implement something similar to Paxos and Raft within that small sample. And sometimes peers will disagree about which sample to use, resulting in a fork, which has to be resolved.

For each peer that could be on the network, including those that have been sleeping in a cold wallet for years, each peer keeps a running cumulative

@ -62,38 +81,79 @@
its total.

On each block of the chain, a peer's rank is the bit position of the highest bit of the running total that rolled over when its stake was added for that block.

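A minimal sketch of one plausible reading of that rule (the function name and the handling of a zero stake are illustrative assumptions, not part of the design above): the rank is the highest bit position that changes when the peer's stake is added to its running total.

```python
def rank_for_block(running_total: int, stake: int) -> int:
    """Rank = highest bit position of the running total that rolled over
    (i.e. changed) when this peer's stake was added for this block."""
    new_total = running_total + stake
    changed_bits = running_total ^ new_total   # bits affected by the carry
    return changed_bits.bit_length() - 1       # highest changed bit position

# Example: a peer with a tiny stake occasionally triggers a high carry.
# total = 0b0111_1111, stake = 1  ->  new total = 0b1000_0000, rank 7.
print(rank_for_block(0b0111_1111, 1))   # 7
```
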
So if Bob has a third of the stake of Carol, and $R$ is a rank that corresponds to a bit position higher than the stake of either of them, then Bob gets to be rank $R$ or higher one third as often as Carol. But even if his stake is very low, he gets to be high rank every now and then.

A small group of the highest ranking peers get to decide on the next block, and the likelihood of being a high ranking peer depends on stake.

Each peer is going to use a consensus created by those few peers that are high ranking at this block height of the blockchain, and since there are few such peers, there will usually only be one such consensus.

Each peer gets to be rank $R+1$ half as often as he gets to be rank $R$, and he gets to be a rank higher than $R$ as often as he gets to be rank $R$.

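A short hedged check that this follows from the bit rollover rule above, assuming a peer's stake $s$ is much smaller than $2^R$, so that the low bits of the running total are close to uniformly distributed:

$$\Pr[\text{rank} \ge R] \approx \frac{s}{2^R},\qquad \Pr[\text{rank} = R] \approx \frac{s}{2^R} - \frac{s}{2^{R+1}} = \frac{s}{2^{R+1}} \approx \Pr[\text{rank} > R]$$

so rank $R+1$ turns up about half as often as rank $R$, a rank above $R$ about as often as rank $R$ itself, and the frequency of reaching any given rank scales linearly with stake, as in the Bob and Carol example.
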
The algorithm for producing the next block ensures that if the Hedera assumptions are true (that everyone of high rank knows which high ranking peers are online, and everyone always gets through), a single next block will be produced by the unanimous agreement of the highest ranked peers, but we don't rely on this always being true. Sometimes, probably often, there will be multiple next blocks, and entire sequences of blocks. In which case each peer has to rank them and weight them and choose the highest ranked and weighted block, or the highest weighted sequence of blocks.

A peer cannot tell the difference between a network bisection and a whole lot of peers being offline.

We have to keep producing a result when a whole lot of peers are offline, rather than just stop working as Practical Byzantine Fault Tolerant Consensus and its Raft equivalent do. Which means that in a long lasting network bisection, there is going to be a fork, which will have to be resolved when the network recovers.

[running average]:./running_average.html
"how to calculate the running average"
{target="_blank"}

The rank of a block is the rank of the lowest rank peer of the group forming the block, or one plus the [running average] of the ranks of previous blocks rounded up to the nearest integer, whichever is less.

The weight of a block is $m \times 2^R$, where $m$ is the size of the group forming the block and $R$ is the rank of the block.

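A minimal sketch of those two definitions, assuming (purely for illustration) that the member ranks and the running average are already known; `ceil` stands in for "rounded up to the nearest integer":

```python
from math import ceil

def block_rank(member_ranks: list[int], running_average_rank: float) -> int:
    """Rank of a block: the lowest rank in the forming group, capped at
    one plus the running average of previous block ranks, rounded up."""
    return min(min(member_ranks), ceil(running_average_rank) + 1)

def block_weight(member_ranks: list[int], running_average_rank: float) -> int:
    """Weight of a block: m * 2**R, m the group size, R the block rank."""
    m = len(member_ranks)
    return m * 2 ** block_rank(member_ranks, running_average_rank)

# Example: three peers of ranks 5, 7, 9 with a running average rank of 3.8
# give a block of rank min(5, ceil(3.8) + 1) = 5 and weight 3 * 2**5 = 96.
print(block_weight([5, 7, 9], 3.8))   # 96
```
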
If the number and weight of online peers is stable, there will be two or more peers higher than the running average rank as often as there are fewer than two peers of that rank or higher. So the next block will typically be formed by the consensus of two peers of the running average rank, or close to that.

If a peer is of sufficient rank to form a block of higher or equal rank to any existing block, and believes that there are enough peers online to form a block with higher rank, or the same rank and greater weight, than any existing block, it attempts to contact them and form such a block.

If the group forming a block contains two or more members capable of forming a block with higher rank (because it also contains members of lower rank than those members and lower than the running average plus one), the group is invalid, and if it purportedly forms a block, the block is invalid.

This rule is to prevent peers from conspiring to manufacture a fork with a falsely high weight under the rule for evaluating forks.

The intention is that in the common case, the highest ranked and weighted block possible will be one that can and must be formed by very few peers, so that most of the time, they likely succeed in forming the highest possible weighted and ranked block. To the extent that the Hedera assumptions hold true, they will succeed.

This does not describe how they form consensus. It describes how we get the problem of forming consensus down to sufficiently few peers that it becomes manageable, and how we deal with a group failing to form consensus in a reasonable time. (Another group forms, possibly excluding the worst behaved or least well connected members of the unsuccessful group.)

Unlike Paxos and Raft, we don't need a consensus process for creating the next block that is guaranteed to succeed eventually, which is important

@ -101,11 +161,27 @@
for if one of the peers has connection problems, or is deliberately trying to foul things up, "eventually" can take a very long time. Rather, should one group fail, another group will succeed.

Correctly behaving peers will wait longer the lower their rank before attempting to participate in block formation, will wait longer before participating in an attempt to form a low weighted block, and will not attempt to form a new block if a block of which they already have a copy would be higher rank or weight. (Because they would abandon it immediately in favour of a block already formed.)

To ensure that forks and network bisections are resolved, the timeout before attempting to form a new block increases exponentially with the difference between the rank of the proposed block and the running average, and inverse linearly with the running average.

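A hedged illustration of that back-off rule; the constants, the base timeout, and the use of the proposed block's rank are assumptions made for the sake of the sketch, not part of the design above:

```python
def formation_timeout(proposed_block_rank: int,
                      running_average_rank: float,
                      base_seconds: float = 1.0) -> float:
    """Wait time before trying to form a block: grows exponentially as the
    proposed block falls further below the running average rank, and shrinks
    inversely with the running average itself."""
    shortfall = max(0.0, running_average_rank - proposed_block_rank)
    return base_seconds * (2.0 ** shortfall) / max(running_average_rank, 1.0)

# A block two ranks below a running average of 4 waits 1 * 2**2 / 4 = 1 second;
# a block at or above the running average waits only 1 / 4 = 0.25 seconds.
print(formation_timeout(2, 4.0), formation_timeout(4, 4.0))
```
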
High ranking peers try to form a new block first, and in the event of network bisection, the smaller part of the bisection will on average have fewer peers, therefore the highest ranking peers available will on average be lower rank, so the smaller portion of the bisection will form new blocks more slowly on average.

In the course of attempting to form a group, peers will find out what other high ranking peers are online, so, if the Hedera assumption (that everyone always gets through eventually and that everyone knows who is online) is true, or true enough, there will only be one block formed, and it will be formed by the set of peers that can form the highest ranking block. Of course, the Hedera assumption is not always going to be true.

Suppose that two successor blocks of equal weight are produced: well, then we have enough high ranking peers online and active to produce a higher weighted block, and some of them should do so. And if they fail to do so, then we take the hash of the public keys of the peers forming the block, their ranks, and the block height of the preceding block, and the largest hash wins.

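A minimal sketch of such a deterministic tie break (the serialization and the choice of SHA-256 are assumptions; any hash and encoding that all peers agree on would do), which applies equally to the same-weight prong tie break described further down:

```python
import hashlib

def tie_break_digest(public_keys: list[bytes], ranks: list[int],
                     block_height: int) -> bytes:
    """Hash the forming peers' public keys, their ranks, and the height of
    the preceding block; the largest digest wins."""
    h = hashlib.sha256()
    for key, rank in sorted(zip(public_keys, ranks)):
        h.update(key)
        h.update(rank.to_bytes(2, "big"))
    h.update(block_height.to_bytes(8, "big"))
    return h.digest()

def winner(candidates: list[dict]) -> dict:
    """Pick the candidate block with the lexicographically largest digest."""
    return max(candidates, key=lambda c: tie_break_digest(
        c["public_keys"], c["ranks"], c["block_height"]))
```
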
When a peer signs the proposed block, he will attach a sequence number to his signature. If a peer encounters two inconsistent blocks that have the

@ -117,32 +193,40 @@
have the same sequence number, he discounts both signatures, lowering the weight.

That is how we resolve two proposed successor blocks of the same blockchain.

Fork production is limited, because there are not that many high ranking peers, and because low ranking peers hold back for higher ranking peers to take care of block formation.

But we are going to get forks anyway. Not often, but sometimes. I lack confidence in the Hedera assumptions.

What do we do if we have a fork, and several blocks have been created on one or both prongs of the fork?

Then we calculate the weight of the prong:

$$\displaystyle\sum\limits_{i}{\large 2^{R_i}}$$

where $i$ ranges over the peers that have formed blocks on that prong, and $R_i$ is the rank of the last block on that prong of the fork that peer $i$ participated in forming. This weight, in the event of a fork, will on average reflect the total stake of the peers on that prong.

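A minimal sketch of that calculation, assuming each block on a prong records the set of peers that formed it and its rank (the data layout and names are illustrative only):

```python
def prong_weight(blocks: list[dict]) -> int:
    """Weight of a prong: sum over peers of 2**R_i, where R_i is the rank of
    the last block on the prong that the peer helped to form."""
    last_rank: dict[str, int] = {}
    for block in blocks:                    # blocks in chain order
        for peer in block["peers"]:
            last_rank[peer] = block["rank"]
    return sum(2 ** r for r in last_rank.values())

# Example fork: prong A has two blocks formed by three distinct peers,
# prong B has one block formed by two peers of higher rank.
prong_a = [{"rank": 4, "peers": ["alice", "bob"]},
           {"rank": 5, "peers": ["bob", "carol"]}]
prong_b = [{"rank": 6, "peers": ["dave", "eve"]}]
print(prong_weight(prong_a), prong_weight(prong_b))   # 80 128
```
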
If two prongs have the same weight, take the prong with the most transactions. If they have the same weight and the same number of transactions, hash all the public keys of the signatories that formed the blocks, their ranks, and the block height of the root of the fork, and take the prong with the largest hash.

This value, the weight of a prong of the fork, will, over time for large deep forks, approximate the stake of the peers online on that prong, without the painful cost of taking a poll of all peers online, and without the considerable risk that that poll will be jammed by hostile parties.

Each correctly behaving peer will circulate the highest weighted prong of the fork of which it is aware, ignore all lower weighted prongs, and only attempt to create successor blocks for the highest weighted prong.

To add a block to the chain requires a certain minimum time, and the lower the rank of the group forming that block, the longer that time. If a peer sees a new block that appears to it to be unreasonably and improperly early, or a fork prong with more blocks than it could have, it will ignore it.

Every correctly behaving peer will copy, circulate, and attempt to build on the highest weighted chain of blocks of which he is aware, and will ignore all others.

This produces the same effect as wading through molasses, without the

@ -260,8 +344,7 @@
to add another block, rather than attempt to replace an existing block.

In Paxos, once one peer accepts one value, it will eventually become the consensus value, assuming that everyone eventually gets through and that the usual network problems do not foul things up. Thus Paxos can provide a definitive result eventually, while the results of Bitcoin type consensus are never definitive, merely exponentially probable.

In Paxos, a peer learns of the definitive and final consensus when it discovers that a majority of peers have accepted one value. Which if

@ -273,14 +356,34 @@
other, but there is no moment when a peer discovers that one branch is definitive and final. It just finds that one branch is becoming more and more likely, and all the other branches less and less likely.

Thus Paxos type algorithms (such as Paxos, Raft, and their Byzantine fault tolerant equivalents) have a stronger liveness property than bitcoin type algorithms, but this difference is in practice not important, for Paxos may take an indefinitely long time before it can report a definite and final consensus, while a Bitcoin like consensus algorithm takes a fairly definite time to report that it is nearly certain about the consensus value and that that value is unlikely to change.

Further, in actual practice, particularly during a view change, Paxos and Raft are frequently in a state where a peer knows that one view is overwhelmingly likely to become final, and another view highly unlikely, but the state takes a while to finalize. And the client is waiting. So it winds up reporting a provisional result to the client.

Thus in practice, the liveness condition of Paxos type algorithms, which eventually provide a definitive proof that consensus has been achieved, is not usefully stronger than the weaker liveness condition of consensus algorithms that merely provide an ever growing likelihood that consensus has been achieved, and which, instead of telling the client that consensus has been achieved or not, merely tell the client how strong the evidence is that consensus has been achieved.

And in practice, Byzantine Fault Tolerant Paxos like algorithms tend to be "optimized" by behaving like bitcoin type algorithms, making their implementation considerably more complicated.

Crypto currency consensus algorithms derived from Paxos equivalent consensus fail to scale, because they actually have to find out that a majority of peers are on the same consensus. And there could be a lot of peers, so this is a lot of data, and it is quite easy to maliciously attack the network in ways that make this data difficult to obtain, if someone found it convenient to stall a transaction from going through. "The cheque is in the mail."

# Bitcoin does not scale to competing with fiat currency

Bitcoin is limited to ten transactions per second. Credit card networks

@ -856,100 +856,161 @@
solve the problem of the number of items not being a power of two?

*(Embedded `svg` figures: "Immutable append only file as a collection of balanced binary Merkle trees in postfix order" and "Immutable append only file as a Merkle chain".)*

This is a Merkle binary dag, not a Merkle-patricia tree. I found that generalizing Merkle patricia trees to handle the case of immutable append only files led to more complicated data structures and more complicated descriptions. It has a great deal in common with a Merkle-patricia tree, but a Merkle vertex is its own type, even though it is convenient for it to have much in common with a Merkle patricia vertex. Putting it into the patricia straightjacket generated a lot of mess. A Merkle patricia vertex is best handled as a special case of a Merkle vertex, and a Merkle patricia tree as special vertices within a Merkle dag.

The superstructure of balanced binary Merkle trees allows us to verify any part of it with only $O(\log n)$ hashes, and thus to verify that one version of

@ -1,5 +1,5 @@
---
lang: en
# katex
title: Number encoding
---

@ -10,6 +10,38 @@
in protocols tend to become obsolete. Therefore, for future upwards compatibility, we want to have variable precision numbers.

Secondly, to represent integers within a Merkle patricia tree representing a database index, we want all values to be left field aligned, rather than right field aligned.

## Compression algorithm preserving sort order

We want to represent integers by byte strings whose lexicographic order reflects their order as integers, which is to say, when sorted as a left aligned field, they sort like integers represented as a right aligned field. (Because a Merkle patricia tree has a hard time with right aligned fields.)

To do this we have a field that is a count of the number of bytes, and the size of that field is encoded in unary.

Thus a single byte value, representing integers in the range $0\le n \lt 2^7$, starts with a leading zero bit.

A two byte value, representing integers in the range $2^7\le n \lt 2^{13}+2^7$, starts with the bits 100.

A three byte value, representing integers in the range $2^{13}+2^7 \le n \lt 2^{21}+2^{13}+2^7$, starts with the bits 101.

A four byte value, representing integers in the range $2^{21}+2^{13}+2^7 \le n \lt 2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 11000.

A five byte value, representing integers in the range $2^{27}+2^{21}+2^{13}+2^7 \le n \lt 2^{35}+2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 11001.

A six byte value, representing integers in the range $2^{35}+2^{27}+2^{21}+2^{13}+2^7 \le n \lt 2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 11010.

A seven byte value, representing integers in the range $2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7 \le n \lt 2^{51}+2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 11011.

An eight byte value, representing integers in the range $2^{51}+2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7 \le n \lt 2^{57}+2^{51}+2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 1110000.

A nine byte value, representing integers in the range $2^{57}+2^{51}+2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7 \le n \lt 2^{65}+2^{57}+2^{51}+2^{43}+2^{35}+2^{27}+2^{21}+2^{13}+2^7$, starts with the bits 1110001.

Similarly, the bits 111 0111 indicate a fifteen byte value representing 113 bit integers.

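A minimal sketch of an encoder for the unsigned case, built directly from the prefix table above (the function names and the error handling are illustrative assumptions; the real implementation may differ):

```python
# Unary-coded length prefixes for 1..9 byte encodings, from the list above.
LAYOUT = ["0", "100", "101", "11000", "11001", "11010", "11011",
          "1110000", "1110001"]

def encode(n: int) -> bytes:
    """Encode an unsigned integer as a byte string whose lexicographic
    order matches numeric order (offset binary, unary-coded length field)."""
    base = 0                                   # smallest value of this length
    for nbytes, prefix in enumerate(LAYOUT, start=1):
        payload_bits = 8 * nbytes - len(prefix)
        if n < base + (1 << payload_bits):
            bits = prefix + format(n - base, "0{}b".format(payload_bits))
            return int(bits, 2).to_bytes(nbytes, "big")
        base += 1 << payload_bits
    raise ValueError("value too large for this sketch")

# Lexicographic order of the encodings matches numeric order:
samples = [0, 127, 128, 8_319, 8_320, 2_105_471, 2_105_472]
assert sorted(samples) == sorted(samples, key=encode)
print([encode(v).hex() for v in samples])
```
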
To represent signed integers so that signed integers sort correctly with each other (but not with unsigned integers), the leading bit indicates the sign, a one bit for positive signed integers, and a zero bit for negative integers, and if the signed integer is negative, we invert the bits of the byte count. Thus signed integers in the range $-2^6\le n \lt 2^6$ are represented by the corresponding eight bit value with its leading bit inverted.

This is perhaps a little too much cleverness, except for the uncommon case where we actually need a representation that sorts correctly.

## Use case

QR codes and prefix free number encoding are useful in cases where we want data to be self describing – this bunch of bits is to be interpreted in a certain way, used in a certain action, means one thing, and not another thing. At present there is no standard for self description. QR codes are given meanings by the application, and could carry completely arbitrary data whose meaning and purpose comes from outside, from the context.

docs/running_average.md (new file)
@ -0,0 +1,32 @@

---
# katex
title: running average
---

The running average $a_n$ of a series $R_n$

$$a_n =\frac{a_{n-1}\times u+R_n\times v}{u+v}$$

The larger $u$ and the smaller $v$, the longer the period over which the running average is taken.

We don't want to do floating point arithmetic, because different peers might get different answers.

It is probably harmless if they get different answers, provided that this happens rarely, but it is easier to ensure that this never happens than to think about all the possible circumstances where this could become a problem, where malicious people could contrive inputs to make sure it becomes a problem.

Unsigned integer arithmetic is guaranteed to give the same answers on all hardware under all compilers. So, when we can, use unsigned integers.

$R_n$ is a very small integer, and we are worried about very small differences, so we rescale to represent arithmetic with eight or so bits after the binary point in integer arithmetic.

$\widetilde {a_n}$ represents the rescaled running average $a_n$:

$$\widetilde {a_n}=\frac{u\widetilde{a_{n-1}} +v R_n\times 2^8}{u+v}$$

$$a_n = \bigg\lfloor\frac{\widetilde {a_n}+2^7}{2^8}\bigg\rfloor$$

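A minimal sketch of that computation in exact unsigned integer arithmetic (the weights $u=15$, $v=1$ and the helper names are illustrative assumptions):

```python
SCALE = 1 << 8          # eight fractional bits after the binary point

def update_scaled_average(prev_scaled: int, sample: int,
                          u: int = 15, v: int = 1) -> int:
    """One step of the rescaled running average, integers only."""
    return (u * prev_scaled + v * sample * SCALE) // (u + v)

def rounded_average(scaled: int) -> int:
    """Recover a_n by rounding the scaled value to the nearest integer."""
    return (scaled + SCALE // 2) // SCALE

# Feed in a constant series of 5s: the running average converges to 5
# identically on every platform, since only integer operations are used.
scaled = 0
for _ in range(100):
    scaled = update_scaled_average(scaled, 5)
print(rounded_average(scaled))   # 5
```
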
@ -32,6 +32,11 @@
and twist the arms of the miners to exclude transactions in those bitcoins, with the result that bitcoins cease to be fungible and thus cease to be money, as uncut diamonds ceased to be money.

We need untraceability to prevent the blood diamonds attack. Even if you do not need your transactions to be untraceable, if they are traceable, ham fisted government intervention is likely to make your money disappear under you, as the value of uncut diamonds was lost.

Bitcoin is vulnerable to the one third attack. If one third of miners exclude "tainted" bitcoins and refuse to add to a chain ending in a block containing "tainted" bitcoins, other miners have an incentive to exclude "tainted"

@ -65,7 +70,8 @@
recording the accumulated effect of many transactions.

A true lightning network functions as full reserve correspondence banking with no central authority. The existing Bitcoin lightning network functions as marginal reserve banking with good conduct enforced by central authority. Taproot was designed to make this fixable, and plans are afoot to fix it. At the time of writing, it was not clear to me that it has been fixed.

But even with sidechaining and the lightning network, we are going to need the capacity to put thousands of transactions per second on the

@ -106,7 +112,7 @@
connection to the internet, or someone's computer goes down), or, if

someone breaks the circle in a Byzantine failure suggestive of Byzantine defection, then, and only then, it goes through on the primary blockchain.

The problem with the existing Bitcoin lightning network is, or recently was, that it has a hidden and unexplained central authority, whom you have to trust, and which does stuff that is never explained or revealed. This is not stable, and does not scale. Not only is it evil, it is incapable of connecting everyone

@ -176,7 +176,7 @@
$$k \approx \frac{m\,l\!n(2)}{n}%uses\, to increase spacing, uses \! to merge letters$$

$$k \approx\frac{m\>\ln(2)}{n}%uses\> for a marginally larger increase in spacing and uses \ln, the escape for the well known function ln $$

$$ \exp\bigg(\frac{a+bt}{x}\bigg)=\huge e^{\bigg(\frac{a+bt}{x}\bigg)}%use the escape for well known functions, use text size sets$$

$$k\text{, the number of hashes} \approx \frac{m\ln(2)}{n}% \text{} for render as text$$

@ -192,14 +192,76 @@
scalable vector graphics files (`svg` files) to html can get complicated, and interfacing the resulting complicated html to markdown can get more complicated.

Inkscape files are unreadable, and once they are cleaned up, Inkscape cannot read them. To (irreversibly) clean up an Inkscape file, minify it in Visual Studio Code to get rid of all the confusing mystery cruft inserted by Inkscape, edit it back into markdown compatible form, and reinsert it in the markdown file.

Which assumes my images, once done, are rarely or never going to change.

A sequence of straight lines is `M` point, `L` point, `L` point.

Z and z draw a straight line back to the beginning; use in conjunction with a fill colour, for example `fill="#FF0000"`. If the line is open, use `fill="none"`.

Drawing smooth curves by typing in text is painful and slow, but so is drawing them in Inkscape. Inkscape is apt to do a series of C beziers with sharp corners between them, and when I try to fix the sharp corners, the bezier goes weird.

What Inkscape should do is let you manipulate a sequence of control points, and draw an M C S S ... S bezier to the point halfway between the final even numbered control point and the preceding control point, creating an additional control point by mirroring if the user only provides an odd number of control points.

What it does instead is complicated and mysterious.

To convert a sequence of straight lines into a smooth curve, you encounter much grief matching the end of one bezier to the start of the next.\
And if you do not match them, you get corners between beziers.

* find the midpoint of each edge, or the midpoint of every second edge\
  When I say midpoint, I mean somewhere along the line, close to where
  you want the turn to be sharp, and far from where you want the turn to be gentle.

* A smooth curve starts at the midpoint of the first edge, then:\
  C first vertex, second vertex, next midpoint S second next vertex,
  next midpoint S second next vertex, next midpoint ... S second next
  vertex, next midpoint ...\
  When I say second next vertex, I mean you skip a vertex, and thus
  do not have to find the preceding midpoint. But the implied vertex
  is the mirror of the preceding vertex, which, if your midpoint is off,
  is not where you expect it to be.\
  If it fails to go through the midpoint parallel to what you think the
  endpoints are, you need to tinker with the control point to reduce the
  discrepancy, or else it is apt to be off at subsequent midpoints.

* Or it starts at the midpoint of the first edge, then:\
  Q vertex midpoint T midpoint T midpoint ...\
  But the Q T T T chain is apt to act weird unless your midpoints are
  near the middle, because the implied vertex is the mirror of the
  preceding vertex, which may not be the vertex that you intended if
  your midpoint is not near the centre of the line segment to that vertex.

Using S and T has the enormous advantage that if your midpoint is a bit off to one side of the line segment or the other, you still get a smooth curve, with no kinks between beziers. But if it is too far off from the centre, your curve will be off from the line segments, often in weird, surprising, and complicated ways.

On the other hand, if you have a C or a Q bezier following a previous C or Q bezier, getting the join smooth is a bitch.

To find the midpoint of a line segment, I use `M` approximate midpoint `v6 v-12 v6 h6 h-12 h6` to mark the location with a cross.

To draw a long twisty curve, mark your intended path with a sequence of line segments, with your final line segment passing through your destination, because the destination will be the midpoint of that line segment. Then do an M first point, C second point, third point, midpoint, then S even numbered vertex, midpoint following the even numbered vertex, starting with the fourth vertex and the midpoint to the fifth vertex. The midpoint of the final line segment will be your destination.

Scalable vector graphics are dimensionless, and the `<svg>` tag's height, width, and ViewBox properties translate the dimensionless quantities into

@ -221,23 +283,17 @@
take its defaults from the element selected (which can be an entire group, or the entire high top level group, in which case it will pick up sane and appropriate properties from the first relevant item in the group).

And how did you set those sane and relevant properties in the first place?

By editing that element as text, very likely in your markdown file.

The enormous advantage of scalable vector graphics is that it handles repetitious items in diagrams beautifully, because you can define an item by reference to another item, thus a very large hierarchical structure can be defined by very small source code.

Scalable vector graphics is best edited as text, except that one needs to draw lines and splines graphically, and in the process, all your nicely laid out text gets mangled into soup, and has to be unmangled.

You might decide to keep it around as soup, in an `svg` file that only Inkscape ever tries to read, but then you are going to have to edit it as text again. And I wound up embedding my vector graphics files in markdown rather than invoking them as separate graphics files because my last step was apt to be editing them as text. Which irreversibly made them uneditable by Inkscape.

It is convenient to construct and position all the graphical elements in Inkscape, then edit the resulting `svg` file in Visual Studio Code with the

@ -284,8 +340,8 @@
graphic in the markdown preview pane.

font-weight="400"
stroke-width="2">
<path fill="none" stroke="#00f000"
    d="M14 101, c40 -20, 30 -56,
    54 -18, s60 15, 40 15"/>
<ellipse cx="60" cy="85" rx="12" ry="5" style="fill:red" />
<text x="60" y="82" text-anchor="middle" style="fill:#A050C0;" >
A simple scalable vector graphic