By requiring each unique value to be submitted only once, the endorsement model is efficient compared to every user providing their own copy of the data. Additionally, the new model considers all endorsements ever sent in and operates on a +/- system, which carries minimal overhead. The main values considered are the endorsement counts of the two most-endorsed values. If the most-endorsed data has N endorsements and the second-most-endorsed data has M endorsements, then the difference N - M determines when consensus is reached. Once the data with the most endorsements has a large enough lead (required_lead), consensus is reached. Periodically, the lead the data must hold increases by one.
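The lead condition described above can be sketched in a few lines. This is a hypothetical illustration, not the actual implementation; the function name and the `required_lead` parameter are assumptions drawn from the description.

```python
from collections import Counter

def consensus_reached(endorsements, required_lead):
    """Return the winning value if the most-endorsed value (N endorsements)
    leads the runner-up (M endorsements) by at least required_lead,
    otherwise return None."""
    counts = Counter(endorsements)
    if not counts:
        return None
    top_two = counts.most_common(2)
    n = top_two[0][1]                              # N: most-endorsed count
    m = top_two[1][1] if len(top_two) > 1 else 0   # M: runner-up count
    return top_two[0][0] if n - m >= required_lead else None

# With required_lead = 2: "a" has 3 endorsements, "b" has 1, so 3 - 1 >= 2.
winner = consensus_reached(["a", "a", "b", "a"], required_lead=2)  # "a"
```

The periodic increase of `required_lead` mentioned above would simply mean calling this check with a larger threshold as time passes.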

Again, this model mitigates the poor experience of users getting slashed for sending in data that is essentially correct. The goal is to prevent data from being treated as incorrect because of something like a small formatting difference from the accepted value. Endorsing data that already exists removes the possibility of a typo when re-submitting an existing value. Additionally, users who are more nervous about getting slashed never have to send in data at all: they can participate solely as endorsers of existing values, which reduces the perceived difficulty of participating, although it remains just as important to ensure the endorsed data is correct and correctly formatted.

In addition to protecting users' assets, the endorsement model strikes a good balance between reaching a validated result quickly and still providing a measure of security. Each single deviation on the way to consensus opens two extra endorsement slots: the mistaken endorsement raises the runner-up count M by one, so one additional correct endorsement is needed to restore the required lead. This benefits endorsers by creating more opportunities, but it can slightly slow consensus compared to a group-based model (depending on the threshold used) when a single mistake occurs. That slowdown, however, buys a reduced risk of accepting an incorrect value; combined with potentially quicker and more reliable results when multiple errors are made, this appears to be the superior model.
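The "two extra slots" effect can be seen with a small simulation of the lead rule. This is a hypothetical sketch under the N - M ≥ required_lead rule described earlier; the function name and endorsement streams are illustrative.

```python
from collections import Counter

def endorsements_until_consensus(stream, required_lead):
    """Feed endorsements in one at a time and return how many arrive
    before the lead condition N - M >= required_lead is first met."""
    counts = Counter()
    for i, value in enumerate(stream, start=1):
        counts[value] += 1
        top = counts.most_common(2)
        n = top[0][1]
        m = top[1][1] if len(top) > 1 else 0
        if n - m >= required_lead:
            return i
    return None

# No deviations: consensus after 2 endorsements (2 - 0 >= 2).
clean = endorsements_until_consensus(["v", "v"], required_lead=2)
# One deviation ("x"): the mistake itself plus one offsetting correct
# endorsement push the total to 4 -- the two extra slots.
noisy = endorsements_until_consensus(["v", "x", "v", "v"], required_lead=2)
```

Here `clean` is 2 and `noisy` is 4: a single mistake delays consensus by exactly two endorsements rather than resetting any group.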

In addition to the above considerations, a plus/minus system is easier to reason about than a group-based approach: in a group-based model, a consensus failure can reset the group regardless of what data was provided, including correct data. That behavior is confusing, which is why the simpler plus/minus model is easier to understand.
