cgra's

2021-12-01

TRB Defect Exhibition - OOM and Other Spam

Filed under: TRB — cgra @ 15:15:18


This is a continuation of the two previous articles (here and here) in a series presenting some potentially annoying TRB defects.

These are all the issues I'm currently aware of and believe are worth mentioning separately, but I have no reason to believe there aren't more to be found.

We'll take a look at the std::vector specifics first, because they become interesting when figuring out how TRB might occupy memory.

1 TRB std::vector’s memory footprint

Because TRB is built with a specific toolchain, we are, in theory, able to say exactly how std::vector works in TRB's case, even though no particular C++ standard, book, or compiler documentation ever completely specifies it.

It's still not an easy task to tell with 100% accuracy what this accumulated pile of computing history does, and I'm not attempting that here. But a perhaps reasonable explanation is given for the behavior I observed but couldn't find an exact "paper specification" of.

C++ std::vector has two separate, same-sounding concepts: size and capacity. Capacity is how much contiguous memory it occupies, i.e. reserves, while size is how much of that reserved capacity is currently in use. Size cannot be bigger than capacity. The unit for both size and capacity is the size of an item. For example, if the item size is 8 bytes, a vector capacity of 10 would mean 10*8=80 bytes of occupied memory.
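As a minimal illustration (my own example, assuming an 8-byte item type as above):

  #include <cstdint>
  #include <iostream>
  #include <vector>

  int main()
  {
      std::vector<uint64_t> v;  // item size: 8 bytes
      v.reserve(10);            // capacity: 10 items = 80 bytes of occupied memory
      v.push_back(1);           // size: 1 item, only 8 of the 80 bytes in use
      std::cout << "size: " << v.size()                                      // 1
                << ", capacity: " << v.capacity()                            // 10
                << ", bytes occupied: " << v.capacity() * sizeof(uint64_t)   // 80
                << std::endl;
      return 0;
  }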

The std::vector abstraction allows inserting new items even when the vector is already at full capacity. When this happens, the vector's capacity is first automatically doubled by allocating another memory block twice as big, copying the old data there, and deallocating the original memory block. The exception: if the requested increase would more than double the old capacity, the new capacity will instead be the old capacity + the increase. Note that the peak memory consumption during a size increase is therefore: old capacity + new capacity!
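A sketch of this growth behavior (my own illustration; the exact capacities are up to the implementation, but the comments assume the doubling rule described above):

  #include <cstdio>
  #include <vector>

  int main()
  {
      std::vector<char> v;
      std::size_t old_cap = v.capacity();
      for (int i = 0; i < 1000000; i++) {
          v.push_back('x');
          if (v.capacity() != old_cap) {
              // During this reallocation both blocks exist at once,
              // so the peak is old capacity + new capacity.
              std::printf("capacity %zu -> %zu, peak during copy: %zu bytes\n",
                          old_cap, v.capacity(), old_cap + v.capacity());
              old_cap = v.capacity();
          }
      }
      return 0;
  }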

An interesting consequence is that if a vector ends up any larger than 1/3 of the available memory, it cannot grow in capacity anymore: growth requires the old and the new block to exist simultaneously, i.e. roughly 3x the old capacity. Every capacity-increasing vector operation will then throw an allocation error.

Finally, std::vector's capacity doesn't decrease automatically, or in any clean way. The abstraction does provide multiple operations that sound like they'd reduce the object's memory footprint: clear(), erase(), resize(). In reality, they only affect the size.
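For example (again my own sketch; with the TRB toolchain's libstdc++ I'd expect none of these calls to release the allocation):

  #include <cstdio>
  #include <vector>

  int main()
  {
      std::vector<char> v(1000000, 'x');   // size 1000000, capacity >= 1000000
      v.resize(10);                        // size 10, capacity unchanged
      v.erase(v.begin(), v.end());         // size 0,  capacity unchanged
      v.clear();                           // size 0,  capacity still >= 1000000
      std::printf("size: %zu, capacity: %zu\n", v.size(), v.capacity());
      return 0;
  }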

However, capacity reduction may not be that interesting to an ideal TRB, because everything either can have a hard upper boundary (mempool, block size, tx size, peer count, etc.) or grows at a predictable rate (block count). When you know how much memory will ever be needed at most, you just keep that much always available and are done with it.

Clues for the std::vector behavior can be found in TRB’s build directory: build/buildroot-2015.05/output/build/host-gcc-final-4.9.2/libstdc++-v3/include/bits/, in files: stl_vector.h and vector.tcc.

A handful of std::vector code shortcuts:

Other sources used:

  • The C++ Programming Language, 3rd edition, by Bjarne Stroustrup
  • The C++ Programming Language, 4th edition, by Bjarne Stroustrup
  • ISO/IEC 14882:1998, Programming languages -- C++ (the C++98 standard)

1.1 Other TRB STL containers

I haven't thoroughly checked or tested, but to my understanding, the following C++ STL containers in TRB don't have a capacity-doubling property similar to std::vector's:

  • std::multimap,
  • std::multiset,
  • std::map,
  • std::list,
  • std::deque, or
  • std::set.

2 vRecv burst processing defect

This defect has two key factors. The first is that there are amplifying messages an attacking peer can send us, which require way more memory and/or CPU time than their size would suggest is reasonable. Such messages are: getdata, getblocks, and getheaders. Possibly addr, inv, and tx also belong in the same category, but with a much more limited effect.

The other key factor is that, for maximum impact, the attacker can fill up the victim node's vRecv while the node is busy processing something else1. When the victim node resumes processing the attacking peer's messages, it will process the whole vRecv in one go, before anything else happens. If vRecv is filled with the previously mentioned amplifying messages, the combined impact can be massive. The default vRecv upper limit is 10e6 bytes.

Let’s take a closer look at each message type, and start with a familiar one.

getdata
Roughly speaking, receiving one getdata of 10 block hashes may force TRB to read up to ~10e6 bytes from disk and store them into the variable vSend. The message itself is 24 + 1 + 10 * (4 + 32) = 385 bytes, so the amplification would be 10e6 / 385 ~ 26000. The required CPU time will also be significant2.
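The arithmetic spelled out (my own sketch; the 24-byte header, 1-byte count and 36-byte inventory entries follow the standard p2p message layout, and the worst case assumes 10 full ~1 MB blocks):

  #include <cstdio>

  int main()
  {
      const double message = 24 + 1 + 10 * (4 + 32);  // 385-byte getdata for 10 block hashes
      const double load    = 10 * 1000000.0;          // ~10e6 bytes of block data read into vSend
      std::printf("message: %.0f bytes, load: %.0f bytes, amplification: ~%.0f\n",
                  message, load, load / message);      // ~26000
      return 0;
  }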

While the patch obey_sendbuffersize prevents a single getdata message from causing more than the -maxsendbuffer amount of memory load, a bursted series of 10-block getdata’s should bypass the limit without a problem.

Besides block hashes, transaction hashes should work too, although it is harder to achieve as high an amplification. If large transactions do not show up naturally as often as needed, the anyone-can-spend angle may come into play: the attacker can first create and feed the victim node a fresh and beefy anyone-can-spend transaction (without a fee cost), before bombing it with getdata messages requesting that same transaction multiple times per message.

getblocks
In my case, TRB will process this purpose-built getblocks message in ~140 ms3, while the message is only 861 bytes in size. The message is as follows:

  • Message header: 24 bytes
  • Locator version: 4 bytes
  • Locator size: 1 byte
  • Fake hashes: 25 * 32 = 800 bytes
  • Stop hash: 32 bytes

Fake hashes and stop hash have the same purpose: None of them should actually exist. I used null hashes explicitly.

The message makes TRB think the block-requesting peer is at height 22397, and 100% in a losing branch. In response, TRB packs an inventory of the first 22397 blocks of the main chain, to be advertised as a cure to the poor, lost peer.

Each message will cause a load of 22397 * (4 + 32) = 806292 bytes (see vInventoryToSend). The amplification would be, in the worst case4, 4 * 806292 / 861 ~ 3700.

getheaders
This is similar to getblocks, but more severe. While getblocks loads CInv objects into vInventoryToSend, getheaders instead loads CBlock objects (without transactions) into vSend. The attack message is almost exactly the same as for getblocks, with only two differences: the header obviously has a different command in it ("getheaders" != "getblocks"), and it has 30 fake hashes instead of 25. Therefore, this message weighs 1021 bytes.

This many hashes is enough to make the victim node believe the attacking peer is up to 1050585 blocks deep in a losing branch, and needs to be corrected with the entire main chain, which at the time of writing is ~700000 items. Every item is 81 bytes in size5, so the memory amplification would be 700000 * 81 / 1021 ~ 55500.

On my computer, processing each of these messages takes ~300 ms6.

inv
An inventory of 50000 new items will take 24 + 3 + 50000 * (4 + 32) = 1800027 bytes of space. Processing such a message will reserve 50000 * (3 * (4 + 32) + 2 * 8) = 6200000 bytes of memory7, which makes an amplification of 6200000 / 1800027 ~ 3.44.
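Spelled out (my own sketch, following my reading of footnote 7): each new item is a 36-byte inventory key stored three times (setInventoryKnown, mapAlreadyAskedFor, mapAskFor), plus two 8-byte timestamps.

  #include <cstdio>

  int main()
  {
      const double message = 24 + 3 + 50000.0 * (4 + 32);       // 1800027-byte inv of 50000 items
      const double load    = 50000.0 * (3 * (4 + 32) + 2 * 8);  // 3 stored copies + 2 timestamps per item
      std::printf("load: %.0f bytes, amplification: ~%.2f\n",
                  load, load / message);                         // 6200000 and ~3.44
      return 0;
  }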

addr
Sending TRB a 10-address message will trigger the node's address relaying function and increase the memory amplification by two copies of each address. Additionally, each address is stored into the global mapAddresses (first an AddAddress() call, then insert), and into the attacking peer's setAddrKnown. This makes an amplification of ~4. (See also peer addresses in detail.)

tx
Perhaps a bit more interesting than inv or addr, because the memory amplification is a function of the connected peer count. Besides adding the transaction hash to the attacking peer's setInventoryKnown, a transaction not seen before will be queued for relay to the other peers as well8. If a node has 125 simultaneously connected peers (the default maximum for TRB), the multiplier will be ~124.

A likely key to make this attack work is the anyone-can-spend transaction spam.

3 Peer-to-peer protocol state

The state TRB keeps track of per peer is found in class CNode. A number of these state variables can be remotely made to occupy more RAM than is reasonable.

The rest of the state data is common to all peers. Whether a variable is peer-specific or common to all peers will be mentioned separately for each variable discussed below.

3.1 High-impact variables

vNodesDisconnected (declaration, patch article)
Every disconnected peer is put on hold in this dumpster for 15 minutes, before finally being garbage-collected. That enables an attacker to temporarily waste the target node's memory just by repeatedly connecting and disconnecting. The effect can be amplified by filling any peer-specific variables before disconnecting, creating larger garbage to collect.

This variable is common to all peers. In theory, the vector capacity doubling issue also applies.

mapAlreadyAskedFor (declaration)
Inventory items from all peers, for supposed actual items we don't have. Every inventory item weighs 32 + 4 bytes. The life-cycle of each inventory item is as follows:

  • One added on every received inv item that is new and interesting (AskFor() call, then insert)
  • When accepting a transaction to the mempool, its hash is removed from this collection
  • When accepting a block, its hash is removed from this collection

Bogus entries never get removed, which makes this an easy spam target: an attacker who keeps on sending garbage inventory adverts will eventually fill the victim node's entire memory.

For completeness, even normal operation may, in theory, leave dangling transaction entries: if a block including a specific transaction was processed between the tx inv and the tx message itself, the transaction didn't get included into the mempool, but was already advertised to us.

This variable is common to all peers.

setInventoryKnown (declaration)
A supposed spam guard for TRB itself. The problem is, this variable may only grow for as long as the peer stays connected.

This is a set of inventory items that we know the peer in question knows of. Records are added when the peer advertises items (blocks and transactions) to us, or when we advertise items to him. Records are only removed when the peer disconnects and the whole peer data eventually gets garbage-collected.

While this variable is "designed" to grow indefinitely, without an upper boundary, even in normal operation, there's also nothing that prevents peers from sending us bogus inventory. That's how this variable provides another easy way to make TRB run out of memory.

This variable is peer-specific.

vInventoryToSend (declaration)
A collection of inventory items queued for being sent to the peer. These could be:

The most troublesome are the hash responses to getblocks requests. See the bursted getblocks exploit.

What’s more, depending on the underlying vector capacity of the vInventoryToSend, the direct+indirect memory footprint of this collection will peak at 3 to 4 times the contained items’ actual size.

This variable is peer-specific. The vector capacity doubling issue applies.

3.2 Peer addresses

Nodes advertise other nodes' network addresses to each other. An address is a bunch of constant bytes, a timestamp indicating currentness or age, an IP address and a 16-bit port number. The total size of one peer address is 30 bytes.
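My breakdown of those 30 bytes, assuming the standard p2p addr serialization (the struct is only an illustration of the sizes; TRB serializes the fields one by one):

  #include <cstdint>
  #include <cstdio>

  struct SerializedAddr {
      uint32_t nTime;      // 4 bytes: the timestamp indicating currentness or age
      uint64_t nServices;  // 8 bytes: effectively constant
      uint8_t  ip[16];     // 16 bytes: IPv4-mapped IPv6 address (12 constant prefix bytes + 4 IP bytes)
      uint16_t port;       // 2 bytes: the port, big-endian on the wire
  } __attribute__((packed));

  int main()
  {
      std::printf("one peer address: %zu bytes\n", sizeof(SerializedAddr));  // 30
      return 0;
  }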

Uniqueness of a peer address is determined by both the IP and the port value, which makes the nominally valid address space huge (equality operator, GetKey()): 2^(32+16) total peer addresses to spam with, of which only a handful are considered invalid by TRB.

Every peer address a node is advertised that is at most 10 minutes old, and whose containing addr message was at most 10 items in size, is further queued for relaying to up to two other peers. The addresses are relayed to only one peer per ThreadMessageHandler2 round9, which takes 100 ms + a variable processing time. On relay, one peer's vAddrToSend queue is completely dumped into its vSend (push 1, push 2).

The peer address relaying target is randomized10 per target peer / relayed IP pair / 24h period. What this means from the attacker's perspective is that if a certain IP address relays to peer A, there are 65535 addresses in total that can be deterministically aimed at the same peer A -- just take every possible port number for that same IP address. This makes a 1966050-byte payload of somewhat aimable peer address spam.

Garbage peer addresses get cleaned up when older than ~two weeks. The cleanup routine triggers every 10 minutes, and each time runs for 20 seconds at most. Given that the attacker can set the advertised addresses' age himself, this could make another way to arrange a buffer-filling window for the burst processing defect exploit.

3.2.1 Peer address attacks

Forced peer disconnect
It appears possible to force the victim node's other peers to disconnect by sending multiple, full-buffer batches of young garbage peer addresses, eventually causing a "vSend flood control disconnect" on the other peers. As a preparation stage, or possibly as a continuous operation, the attacker may:

  • Create as many connections to the peer as possible.
  • Test a sufficient set of individual IP addresses to see which are relayed back to the attacker's peer connections, and which are not. The ones that do not relay back must relay to the other peers, and therefore each provides an aimable spam unit.

Out-of-memory
Simply by advertising young garbage peer addresses to a node, the attacker has ~two weeks to fill mapAddresses (a variable common to all peers), or until the memory runs out. If the attacking peer remains connected the whole time, setAddrKnown (a peer-specific variable) will also keep filling for a maximum of 24 hours, depending on the time of day the victim node was started at (see nLastRebroadcast11). Up to two other peers' setAddrKnown's will also fill per each garbage address, once addresses are being relayed.

Not to forget that an actively updated copy of mapAddresses also resides on disk.

Burst-amplified out-of-memory
Doing the above with possibly multiple attacking peer connections, and in maximal, full-buffer bursts, could provide a further vAddrToSend (peer-specific variable) amplification. In addition, the vector capacity doubling issue applies, and there's also a temporary memory peak when vAddrToSend is dumped into vSend.

Self-propagating, network-wide address spam
Because TRB will soon relay the peer addresses it was fed by the attacker, and given the garbage addresses will stay around for ~two weeks, this appears to be an automatic spam machine targeting the whole TRB network, with a mere single victim node as an entry point.

3.3 Lower-impact, amplifying variables

These variables provide a simple way to create large, dead peer connections when exploiting the vNodesDisconnected dumpster vulnerability. The larger the dead connections are, the higher the probability of the victim node running out of memory within the 15 min dumpster cooldown.

The dumpster-amplifying variables work equally well for amplifying a coordinated sybil attack, where the attacker may utilize every peer slot the target TRB node still has available to simultaneously cause as high a memory load as possible, using multiple peer connections.

All variables mentioned here are peer-specific.

strSubVer
Represents the agent string of the peer, such as "/therealbitcoin.org:0.9.99.99/", or "/Satoshi:0.16.3/".

TRB doesn't place an explicit limit on its length, and while the value isn't used for anything, it's nevertheless kept in memory until the peer connection is closed and garbage-collected.

In practice, the agent string length is limited by whatever other, more general limits first affect the incoming version message. Reception of messages is by default limited to ~10e6 bytes at once, per peer, so a new peer could, if it so wished, greet us with a ~10 MB user agent string!

mapAskFor
A collection of peer-advertised inventory, each item consisting of a type code and a hash. Every item we've been advertised, and haven't yet received the corresponding actual item for, is kept in RAM for up to two minutes.

The peer may choose to dump us up to 50000 items per inventory message, and always send each unique item (or list) twice.

vRecv
The underlying type is CDataStream, which is a "stream-like" wrapper around a std::vector. Because of the internal std::vector capacity doubling mechanism, this variable may eventually become up to twice12 the user's parameter-specified size.

vSend
Similar to vRecv above, this also may eventually become up to twice13 the user's parameter-specified size.

Because the vSend flood control check will not interrupt an on-going vRecv processing run, even if it does disconnect the peer, vSend may grow as large as a full vRecv input can generate data for. This is not exactly "low-impact", but it's specific to the burst processing defect. Otherwise vSend should remain low in impact.

There is also a vSend-specific peculiarity: every time the buffer size grows past a multiple of 2^32 bytes, CNode::nMessageStart will wrap around and make checksumming pushed messages 4 GB heavier (see here and here).

3.4 Peer statistics

For completeness, if an attacker could spoof his IP address without limits, there are three variables to fill with new peer IP addresses, or timestamps:

  • setBanned On every peer ban, a 4-byte IP address and a timestamp (8 bytes) are added.
  • setKnown On every new, unique (since restart) peer IP, a 4-byte IP address is added.
  • vTimeOffsets On every new, unique (since restart) peer IP, an 8-byte timestamp is added. The vector capacity doubling issue applies. Also related to debug.log spamming is how the time offsets' log print grows linearly with the number of unique peer IPs seen.

4 Spamming debug.log

There are possibly other ways to do this, too, but here is an example.

When a message header (empty payload), with a 12-character command and a 10-digit size, is sent to TRB, a text like the following will print to the log:

11/27/21 16:14:09 CMessageHeader::IsValid() : (burnburnburn, 2147483648 bytes) nMessageSize > MAX_SIZE
11/27/21 16:14:09 

PROCESSMESSAGE: ERRORS IN HEADER burnburnburn

The print size is 171 bytes, and the header size is 24 bytes. When sending such headers in a continuous stream, this should give roughly a disk spam amplification of 171/24 = 7.125.
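For reference, the 24-byte header those numbers refer to (standard p2p layout; my own sketch -- the 12-byte command field fits the 12-character "burnburnburn", and 2147483648 is a 10-digit size above MAX_SIZE):

  #include <cstdint>
  #include <cstdio>

  struct MessageHeader {
      uint8_t  magic[4];     // 4 bytes: network magic
      char     command[12];  // 12 bytes: e.g. "burnburnburn" uses the full field
      uint32_t size;         // 4 bytes: claimed payload size, e.g. 2147483648 (> MAX_SIZE)
      uint32_t checksum;     // 4 bytes: checksum of the (here empty) payload
  } __attribute__((packed));

  int main()
  {
      // 171-byte log print per 24-byte header: ~7.125x disk spam amplification.
      std::printf("header: %zu bytes, amplification: %.3f\n",
                  sizeof(MessageHeader), 171.0 / sizeof(MessageHeader));
      return 0;
  }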

  1. Such as 1) other peers' messages, 2) cleaning up garbage peer addresses, 3) possibly devising your own tactic []
  2. See also vSend, CPU time will dramatically increase every 4GB step []
  3. Naturally, this depends on the hardware []
  4. See vInventoryToSend for impact multiplier []
  5. See CBlock IMPLEMENT_SERIALIZE, notice that a 1-byte vtx size is still serialized, when vtx is empty []
  6. See also vSend, CPU time will dramatically increase every 4GB step []
  7. 1: Add to setInventoryKnown. AskFor(inv). 2: AskFor() mapAlreadyAskedFor insert, 3: AskFor() mapAskFor insert []
  8. First RelayMessage(), then RelayInventory(), then PushInventory() for every peer []
  9. pnodeTrickle selected and passed, addr's conditionally pushed []
  10. Expect some slow read ahead []
  11. This may, or may not also prove helpful in figuring out the address cleanup cycle’s timing []
  12. + another 64kB, but with eventual flood disconnect []
  13. + whatever the size of the bite the OS eats up the send buffer in, which may not be very interesting... []
