
Hive: How we strived for a clean fork

The DAO soft-fork attempt was troublesome. Not only did it turn out that we underestimated its side effects on the consensus protocol (i.e. DoS vulnerability), but we also managed to introduce a data race into the rushed implementation that was a ticking time bomb. It was not ideal, and even though it was averted at the last instance, the fast-approaching hard-fork deadline looked eerily bleak to say the least. We needed a new strategy…

The stepping stone towards this was an idea borrowed from Google (courtesy of Nick Johnson): writing up a detailed postmortem of the event, aiming to assess the root causes of the issue, focusing solely on the technical aspects and the appropriate measures to prevent recurrence.

Technical solutions scale and persist; blaming people doesn't. ~ Nick

From the postmortem, one interesting discovery from the perspective of this blog post was made. The soft-fork code inside [go-ethereum](https://github.com/ethereum/go-ethereum) seemed solid from all perspectives: a) it was thoroughly covered by unit tests with a 3:1 test-to-code ratio; b) it was thoroughly reviewed by six foundation developers; and c) it was even manually live tested on a private network… Yet still, a fatal data race remained, which could have potentially caused severe network disruption.

It transpired that the flaw could only ever occur in a network consisting of multiple nodes, multiple miners and multiple blocks being minted simultaneously. Even if all of those scenarios held true, there was only a slight chance for the bug to surface. Unit tests cannot catch it, code reviewers may or may not catch it, and manual testing catching it would be unlikely. Our conclusion was that the development teams needed more tools to perform reproducible tests that would cover the intricate interplay of multiple nodes in a concurrent networked scenario. Without such a tool, manually checking the various edge cases is unwieldy; and without doing these checks continuously as part of the development workflow, rare errors would become impossible to discover in time.

And thus, hive was born…

What is hive?

Ethereum grew large to the point where testing implementations became a huge burden. Unit tests are fine for checking various implementation quirks, but validating that a client conforms to some baseline quality, or validating that clients can play nicely together in a multi-client environment, is all but simple.

Hive is meant to serve as an easily expandable test harness where anyone can add tests (be those simple validations or network simulations) in any programming language that they are comfortable with, and hive should simultaneously be able to run those tests against all potential clients. As such, the harness is meant to do black-box testing where no client-specific internal details/state can be tested and/or inspected; rather, the emphasis is put on adherence to official specs or behaviors under different circumstances.

Most importantly, hive was designed from the ground up to run as part of any client's CI workflow!

How does hive work?

Hive's body and soul is [docker](https://www.docker.com/). Every client implementation is a docker image; every validation suite is a docker image; and every network simulation is a docker image. Hive itself is an all-encompassing docker image. This is a very powerful abstraction…

Since Ethereum clients are docker images in hive, developers of the clients can assemble the best possible environment for their clients to run in (dependency-, tooling- and configuration-wise). Hive will spin up as many instances as needed, all of them running in their own Linux systems.

Similarly, as the test suites validating Ethereum clients are docker images, the writer of the tests can use whichever programming environment he is most familiar with. Hive will ensure a client is running when it starts the tester, which can then validate whether the particular client conforms to some desired behavior.

Lastly, network simulations are yet again defined by docker images, but compared to simple tests, simulators not only execute code against a running client, but can actually start and terminate clients at will. These clients run in the same virtual network and can freely (or as dictated by the simulator container) connect to one another, forming an on-demand private Ethereum network.
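To make the black-box idea a bit more concrete, below is a minimal sketch, in Go, of what a validator container might do once the harness has started a client: dial the client's exposed JSON-RPC endpoint and query only externally observable state. The environment variable name and the endpoint address are assumptions made for illustration, not hive's actual wiring.

```go
// Minimal illustrative validator sketch: assumes the harness exposes the
// client's JSON-RPC endpoint via an environment variable (HIVE_CLIENT_RPC is
// a hypothetical name) and checks that the client answers basic queries.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Endpoint discovery is an assumption for this example; the real harness
	// decides how validators reach the client container.
	endpoint := os.Getenv("HIVE_CLIENT_RPC")
	if endpoint == "" {
		endpoint = "http://client:8545"
	}
	client, err := ethclient.Dial(endpoint)
	if err != nil {
		log.Fatalf("failed to dial client: %v", err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Black-box checks only: no client internals, just RPC-visible state.
	id, err := client.NetworkID(ctx)
	if err != nil {
		log.Fatalf("network id query failed: %v", err)
	}
	head, err := client.HeaderByNumber(ctx, nil) // nil means the latest header
	if err != nil {
		log.Fatalf("head header query failed: %v", err)
	}
	log.Printf("client ok: network=%v head=%v", id, head.Number)
}
```

Because only RPC-level behavior is inspected, a container like this could in principle be pointed at any client image without modification, which is the whole point of the black-box design.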

How did hive help the fork?

Hive is neither a replacement for unit testing nor for thorough reviewing. All currently employed practices are essential to get a clean implementation of any feature. Hive can provide validation beyond what's feasible from an average developer's perspective: running extensive tests that may require complex execution environments, and checking networking corner cases that may take hours to set up.

In the case of the DAO hard-fork, beyond all the consensus and unit tests, we most importantly needed to ensure that nodes partition cleanly into two subsets at the networking level: one supporting and one opposing the fork. This was essential since it's impossible to predict what adverse effects running two competing chains in a single network might have, especially from the minority's perspective.

As such, we implemented three specific network simulations in hive:

  • The first, to check that miners running the full Ethash DAGs generate correct block extra-data fields for both pro-forkers and no-forkers, even when trying to naively spoof (a sketch of this check follows the list).

  • The second, to verify that a network consisting of mixed pro-fork and no-fork nodes/miners correctly splits into two when the fork block arrives, and maintains the split afterwards.

  • The third, to check that, given an already forked network, newly joining nodes can sync, fast sync and light sync to the chain of their choice.
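As an illustration of the first simulation's core assertion, here is a hedged Go sketch that fetches the headers at and after the fork point from a pro-fork client over JSON-RPC and verifies that each carries the pro-fork extra-data marker. The endpoint address, the checked block range and the harness wiring are assumptions made for the example; the actual simulation also drives miners and checks the no-fork side.

```go
// Illustrative sketch (not the actual hive simulator) of the extra-data check
// on a pro-fork client: the DAO fork block and a handful of blocks after it
// are expected to embed the "dao-hard-fork" marker in their extra-data field.
package main

import (
	"bytes"
	"context"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	const (
		forkBlock  = 1920000 // DAO hard-fork block on the main network
		extraRange = 10      // number of blocks checked; range assumed for illustration
	)
	proForkExtra := []byte("dao-hard-fork") // marker pro-fork miners embed

	client, err := ethclient.Dial("http://client:8545") // endpoint assumed
	if err != nil {
		log.Fatalf("failed to dial client: %v", err)
	}
	ctx := context.Background()

	for i := int64(0); i < extraRange; i++ {
		header, err := client.HeaderByNumber(ctx, big.NewInt(forkBlock+i))
		if err != nil {
			log.Fatalf("failed to fetch header %d: %v", forkBlock+i, err)
		}
		if !bytes.Equal(header.Extra, proForkExtra) {
			log.Fatalf("block %v: unexpected extra-data %x", header.Number, header.Extra)
		}
	}
	log.Printf("all %d post-fork blocks carry the pro-fork extra-data marker", extraRange)
}
```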

The interesting question though is: did hive actually catch any errors, or did it just act as an extra confirmation that everything's all right? And the answer is: both. Hive caught three fork-unrelated bugs in Geth, but it also heavily aided Geth's hard-fork development by continuously providing feedback on how changes affected network behavior.

There was some criticism of the go-ethereum team for taking their time on the hard-fork implementation. Hopefully people will now see what we were up to, alongside implementing the fork itself. All in all, I believe hive turned out to play quite an important role in the cleanness of this transition.

What’s hive’s future?

The Ethereum GitHub organization features [4 test tools already](https://github.com/ethereum?utf8=%E2%9C%93&query=test), with at least one EVM benchmark tool cooking in some external repository. They are not being utilized to their full extent. They have a ton of dependencies, generate a ton of junk and are very complicated to use.

With hive, we're aiming to aggregate all the various scattered tests under one universal client validator that has minimal dependencies, can be extended by anyone, and can run as part of the daily CI workflow of client developers.

We welcome anyone to contribute to the project, be that adding new clients to validate, validators to test with, or simulators to find interesting networking issues. In the meantime, we'll try to further polish hive itself, adding support for running benchmarks as well as mixed-client simulations.

With a bit of work, maybe we'll even have support for running hive in the cloud, allowing it to run network simulations at a much more interesting scale.

