Endorsing

By requiring each unique value to be submitted only once, the endorsement model is more efficient than having every user provide their own copy of the data. The model also considers every endorsement ever sent in and operates on a plus/minus system with minimal overhead. The main values considered are the endorsement counts of the two most-endorsed values. If the most-endorsed data has N endorsements and the second-most-endorsed data has M endorsements, then the difference N - M determines when consensus is reached. Once the data with the most endorsements has a large enough lead (required_lead), consensus is reached. Periodically, the required lead increases by one.
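The lead check described above can be sketched as follows. This is an illustrative model only, assuming the names `consensus_reached`, `endorsements`, and `required_lead`; the actual on-chain implementation may differ.

```python
from collections import Counter

def consensus_reached(endorsements, required_lead):
    """Return True once the most-endorsed value leads the
    second-most-endorsed value by at least required_lead.
    Illustrative sketch; names are assumptions, not the real API."""
    counts = Counter(endorsements)
    if not counts:
        return False
    ranked = counts.most_common(2)
    n = ranked[0][1]                             # leader's endorsement count
    m = ranked[1][1] if len(ranked) > 1 else 0   # runner-up's count (0 if none)
    return n - m >= required_lead

# Example: "42.17" has 3 endorsements, "42.18" has 1,
# so with required_lead = 2 consensus is reached (3 - 1 >= 2).
votes = ["42.17", "42.17", "42.18", "42.17"]
print(consensus_reached(votes, 2))  # True
```

Note that only the gap between the top two values matters, not the total number of endorsements, which is what keeps the bookkeeping overhead minimal.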

This model also mitigates the poor experience of being slashed for submitting data that is essentially correct. The idea is to prevent data from being treated as incorrect merely because of something like a small formatting difference from the accepted value. Endorsing a value that already exists also removes the possibility of introducing a typo when re-submitting it. Additionally, users who are nervous about getting slashed never have to send in data at all: they can participate solely as endorsers of existing values, which lowers the perceived difficulty of participating, although it remains just as important to ensure the data is correct and correctly formatted.

In addition to protecting users' assets, the endorsement model strikes a good balance between reaching a validated result quickly and still providing a degree of security. On one hand, each deviating endorsement on the course to consensus opens two extra endorsement spots: the spot consumed by the deviation itself, plus one more endorsement of the leading value needed to restore its margin. This is good for endorsers, as it creates more opportunities, but in the case of a single mistake it can slightly slow down reaching consensus versus a group-based model (depending on the threshold used). That slowdown, however, buys a reduced risk of accepting an incorrect value, and when multiple errors are made the plus/minus model can actually deliver quicker and more reliable results, which makes it the superior model overall.
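The "two extra spots per deviation" arithmetic above can be made concrete. This is hypothetical bookkeeping to illustrate the point, assuming the function name and that every deviation lands on the runner-up value (the worst case for the leader's margin):

```python
def total_endorsements_needed(required_lead, deviations):
    """Total endorsements before the leader's margin reaches
    required_lead, given `deviations` endorsements cast on the
    runner-up value. Each deviation adds two endorsements to the
    total: its own, plus one more on the leader to offset it.
    Hypothetical arithmetic, not part of the actual protocol."""
    return required_lead + 2 * deviations

# With a required lead of 3: a clean run takes 3 endorsements,
# while a run with one deviation takes 5 (3 + 2 * 1).
print(total_endorsements_needed(3, 0))  # 3
print(total_endorsements_needed(3, 1))  # 5
```

A group-based model with the same threshold would instead discard progress on a failed group, which is the slowdown trade-off the paragraph above describes.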

Beyond the considerations above, a plus/minus system is also easier to reason about than a group-based approach, because in a group-based approach a consensus failure can reset the group on any data that was provided, including correct data. That behavior is confusing, so the simpler plus/minus model is easier for participants to understand.


Last updated 1 year ago