RAI Controller Ungovernance

This post presents the options we have for ungoverning the RAI controller and explains a drawback with the current implementation.

Current RAI Controller

The system currently uses the absolute error between the market and redemption prices to compute rates. Using a percent error instead would produce consistent rates under RAI/USD price changes.

Rate Calculation

The mainnet RAI controller is currently only a P-controller, so the redemption rate is calculated as:

redemption_rate = Kp * (redemption_price - market_price)

where error = redemption_price - market_price.

Since the error term is the absolute error between redemption and market prices, the resulting rate is dependent on the RAI/USD price.

In other words, a 1% market fluctuation will produce different rates at different RAI target prices ($2, $3, $4, etc.).
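As a minimal sketch of this dependence (Python; the function name and prices are illustrative, while Kp = 7.5e-8 is the mainnet value quoted below):

```python
KP = 7.5e-8  # mainnet proportional gain (per-second rate units)

def redemption_rate(redemption_price: float, market_price: float) -> float:
    """Current P-controller: per-second rate from the absolute error."""
    return KP * (redemption_price - market_price)

# The same 1% market deviation at two different redemption prices:
print(redemption_rate(1.00, 0.99))  # error = 0.01 -> 7.5e-10
print(redemption_rate(3.00, 2.97))  # error = 0.03 -> 2.25e-9 (3x the rate)
```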

Example

To demonstrate the issue with using absolute error, consider the two cases below, each with a different starting redemption price.

Each case shows the redemption and market prices (top), the absolute error (middle), and the resulting redemption rate (bottom).

You can see that, while both market prices experience fluctuations of 1%, the scenario shown on the right produces much higher rates because of the larger redemption price.

Implications and Potential Solution

If the target price were to drift considerably from its current value of approximately $3.0383, the system would produce rates of a much different magnitude.

Instead of an absolute error, the RAI controller could use a percent error to calculate a redemption rate:

pct_error = (redemption_price - market_price) / redemption_price

This would produce the same rates irrespective of the current redemption price. There's one caveat, though.

If we want to match the rate behavior of the current controller at the ~$3.00 redemption price, we simply need to multiply the original Kp by 3:

rate_pct = rate_absolute

Kp_new * pct_error = Kp * absolute_error

Kp_new * 0.01 = Kp * 0.03

Kp_new = Kp * 0.03 / 0.01

Kp_new = 3 * Kp = 3 * 7.5e-8 = 2.25e-7
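A quick numeric check of the rescaling (a sketch; names are illustrative and the ~$3.00 redemption price is assumed):

```python
KP = 7.5e-8
KP_NEW = 3 * KP  # 2.25e-7

def rate_absolute(rp: float, mp: float) -> float:
    return KP * (rp - mp)

def rate_pct(rp: float, mp: float) -> float:
    return KP_NEW * (rp - mp) / rp

# At rp = 3.00 the two controllers agree...
print(rate_absolute(3.00, 2.97), rate_pct(3.00, 2.97))  # both ~2.25e-9
# ...but only the percent-error controller is invariant to the price scale:
print(rate_pct(1.00, 0.99))  # also ~2.25e-9 for a 1% deviation
```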

And with these formulas, we now get the following scenarios:

Ungoverning the Controller

Controller ungovernance is set to happen in late summer 2022. It may seem there's a long way to go, but we must have a clear idea of the type of controller to use (absolute vs percentage error), whether the community may want to leverage an integral term, and whether the Kp and Ki parameters can be changed within specific limits.

There are nine scenarios for the RAI controller:

  • Use an absolute proportional controller with no option to change its parameters, ever
  • Use an absolute proportional controller with the option to change its Kp value within specific bands (to be discussed which bands)
  • Use a percentage proportional controller with no option to change its parameters, ever
  • Use a percentage proportional controller with the option to change its Kp value within specific bands (to be discussed which bands)
  • Use a proportional-integral controller (same four variations as above with percentage vs absolute and fixed vs governable Kp & Ki within certain bands)
  • Deploy a controller and allow its implementation to be changed using predeployed smart contracts; here, RAI can have P or PI controllers already deployed where the community can change Kp/Ki values within certain bands as well as swap the controller logic between P, PI and their absolute and percentage error variations

This is a big decision for RAI's future so we'd like to hear more people pitch in, specifically on whether controller parameters may be changed within certain bands and on whether the controller can have several predeployed implementations or a single one that's set in stone.

3 Likes

The big question is: do we know enough today to set things in stone now? It seems that even just recently we learned about the fine distinction between absolute and percentage error. The integral term likely has a use, but we are all scared of it.

I wonder if there are more unknown albatrosses in wait, and whether the ability to adapt therefore remains necessary. But then in turn, how many other albatrosses lie in wait within the protocol's ability to change/govern itself, and what does that unknown ratio look like?

I have no clue, I'm just a rockstar singer :notes:

2 Likes

So from this I get that the last option with multiple predeployed implementations would be best?

I am with @pelvis4084 on this one.

I would personally go with:
" * Deploy a controller and allow its implementation to be changed using predeployed smart contracts; here, RAI can have P or PI controllers already deployed where the community can change Kp/Ki values within certain bands as well as swap the controller logic between P, PI and their absolute and percentage error variations"

I am all for governance minimization as an end goal, but I feel we still need time to understand the unexpected scenarios that might hit us in the future and require human intervention.

The thing is, multiple central banks have different policies that change all the time and are not very efficient, so that gives us a chance to learn and adapt accordingly.

1 Like

A few thoughts, with the caveats that it's been a while since I've done any controls, and I haven't been following RAI's controller performance as closely as I'd like to:

  1. The concern about very different rates at different target prices is valid, but I think this is usually solved through the integral term. @pelvis4084 mentioned the community is scared of it, but I'm curious as to why, since controllers with integral terms are very common
  2. Using a relative error instead of an absolute one could be a good workaround there, but my concerns would be:
    1. If the price drifts very low, small changes in absolute error cause large changes in relative error, which would result in rates that are likely undesirable or problematic
    2. I believe most controllers use absolute error, not relative error. I'd guess the prior point is the reason for this, though it may be worth investigating whether there are other reasons before deciding to go down this route

You can probably figure out that I think it's likely too early to remove governance over the controller. In the scheme of things, RAI is still young and I'd be hesitant to remove the ability to change things too soon

Thank you everyone, we'll wait for more people to pitch in, although so far it seems you all agree with having multiple pre-deployed implementations

Yes, this is the intended effect. For example, if RAI = $0.10, a 'small' $0.03 deviation is 33% and should invoke a strong rate.

Why? The P/L of RAI participants depends on the relative error, not the absolute error.

Consider the same absolute market price deviations at different RAI price scales.

#1 Target Price = 3.00, Market Price = 3.05, deviation = -0.05

In this scenario, SAFE owners who have minted and sold RAI now have to overcome this 0.05 premium when buying back RAI to repay their debt. Assuming they minted and sold at target/market parity of 3.00, they now have to achieve 0.05/3.00 = 1.67% profit on their farming or leveraging opportunity just to break even when repaying. This is not good for them.

#2 Target Price = 0.10, Market Price = 0.15, deviation = -0.05

The same absolute deviation as before, but when buying at the market and repaying debt, the SAFE owner must pay a 0.05/0.10 = 50% premium. This is much worse for them and thus the system should more aggressively correct this situation with stronger negative rates.
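The premium arithmetic from both scenarios, as a tiny sketch (the function name is illustrative):

```python
def repay_premium(target_price: float, market_price: float) -> float:
    """Premium (relative to target) a SAFE owner pays when buying RAI
    above the redemption price to repay debt."""
    return (market_price - target_price) / target_price

print(repay_premium(3.00, 3.05))  # ~0.0167 -> ~1.67% (scenario #1)
print(repay_premium(0.10, 0.15))  # 0.5    -> 50%    (scenario #2)
```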

With that said, we should consider if there are other factors that would prevent the market volatility from following the scale of the target price. For example, Coinbase pricing RAI at only two decimals would make volatility at $0.10 higher than current. However, I think that if RAI were to go to $0.10, Coinbase would increase granularity. No other factors come to mind right now.

1 Like

Why? The P/L of RAI participants depends on the relative error, not the absolute error.

This is a great point, thanks for the context. I can now see why relative error would be preferable in RAIā€™s case

With that said, we should consider if there are other factors that would prevent the market volatility from following the scale of the target price. For example, Coinbase pricing RAI at only two decimals would make volatility at $0.10 higher than current. However, I think that if RAI were to go to $0.10, Coinbase would increase granularity. No other factors come to mind right now.

Agreed, worth trying to think of other factors (though I also can't think of any at the moment). It does feel like having the integral term as an option would be a useful tool to counter these kinds of situations, and you can always turn it off by setting Ki to zero, which is why I'd be hesitant about choosing any of the "only proportional controller" options

1 Like

I'm in favor of using % error over absolute error in order to be future-proof. We don't want to have the price drift in either direction and be forced to update Kp/Ki as a result.

Of the RAI controller scenarios, I currently like the last option best:

  • Deploy a controller and allow its implementation to be changed using predeployed smart contracts; here, RAI can have P or PI controllers already deployed where the community can change Kp/Ki values within certain bands as well as swap the controller logic between P, PI and their absolute and percentage error variations

I don't think we need the absolute error versions, and I think, in the spirit of ungovernance, that it might be ideal to have P & PI controllers where you can not only set the params within a range, but narrow the parameter range as well. So for example, every year that the controller params are deemed successful, we could narrow the parameter range by 20%. If that's too much work, I think we should just use the P & PI controllers with fixed param ranges.

That said, even though we shipped a PI controller, the I-term is still an unknown since we haven't actually tried setting it to anything other than 0. I think it's imperative that we at least try the I-term in prod before we make any decisions about setting the controller type in stone. Given that ungovernance is coming in August, and because these kinds of real-world tests take time, I propose we start an experiment to turn on the I-term on RAI's 1st birthday: February 17th, 2022. We can run it until August, and then have more information with which to decide on the controller type and params.

To be clear about how I expect the PI controller to behave differently from P-only:

  • with P-only, the controller experiences "steady-state error", i.e. extended market price (MP) deviations from the redemption price (RP)
  • with PI, the controller would accumulate and persist steady-state error via the I-term
  • for example, we've had on average a -6% redemption rate over the past month; if the I-term accumulated that at a rate of 1/3 the P-term per month (a simplified way of expressing the Ki constant), then the redemption rate from the I-term would reach -2% by the end of the month, and the total redemption rate would be the P-term + I-term, so -8% (see the sketch after this list)
  • if at the end of that month the MP reached equilibrium with the RP, the P-term rate would be 0%, but the I-term rate would still be -2%
  • this persisted rate is what provides the incentive for RAI arbers to actually bring the RAI price back to the redemption price (and not just hover above/below it, as we see with P-only), because they are incentivized by the non-zero rate at equilibrium (which is 0% for P-only)
  • it also allows RAI arbers to arb via longer time-horizon positions, since they can count on the I-term rate to persist for some time, or at least dissipate gradually
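Here is the toy arithmetic from the example above as a sketch (monthly granularity; the 1/3-per-month accrual is the stated simplification, not the actual Ki):

```python
P_RATE = -0.06       # ~ -6% average P-only redemption rate over the month
ACCRUAL = 1.0 / 3.0  # simplified: I-term gains 1/3 of the P-term per month

i_rate = P_RATE * ACCRUAL  # ~ -2% accumulated by month's end
total = P_RATE + i_rate    # ~ -8% while the error persists
print(i_rate, total)

# If the market then reaches the redemption price, the P-term drops to 0%
# but the accumulated I-term persists (decaying only gradually):
print(0.0 + i_rate)        # ~ -2% rate at equilibrium
```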

The risk of the I-term is that it introduces second-order dynamics into the system, which can be scary. But with RAI in prod for nearly 1 year, the community having largely come to grips with the P-only controller, and 6 months before ungovernance proper, I think now is the time to run the I-term experiment and see if our fears are founded.

I'll follow up in a separate post with a proposal for what the Ki & I-term controller params could be!

2 Likes

Percentage error seems to be clearly better than absolute error.

As for adding other terms (i.e. not just P), I do think the benefits of other terms need to be weighed against the unknown-unknown risks of creating a more complicated system that is more vulnerable to attack. A big difference between traditional PID controller theory and the crypto world is that in the crypto world, there will be actors trying to manipulate the outcome. The IMO quite small gains of ensuring that the target price more closely follows the market price instead of drifting off by 1-2% should be weighed against the greater risks of attack that come from more complexity. The best argument for adding an I term would be if you discover a reason why adding it would lead to less risk of attack. I'm sure an experiment of running the I term live will teach us things, but I don't think it would give sufficient information to answer that question, as attacks often don't come until years into a system's evolution.

3 Likes

Thanks for the response Vitalik! I appreciate you making the argument for conservatism.

I do think the benefits of other terms need to be weighed against the unknown-unknown risks of creating a more complicated system that is more vulnerable to attack.

I think, for the simple reason that the risks are unknown-unknown, we should run the experiment in prod, while the system is still young and small (<$100M RAI) so we can be informed by it for any controller decisions we make for ungovernance.

A big difference between traditional PID controller theory and the crypto world is that in the crypto world, there will be actors trying to manipulate the outcome.

Part of the experiment would be to better understand how people could manipulate it, and what the impact of such a manipulation might be.

The IMO quite small gains of ensuring that the target price more closely follows the market price instead of drifting off by 1-2% should be weighed against the greater risks of attack that come from more complexity. The best argument for adding an I term would be if you discover a reason why adding it would lead to less risk of attack.

It depends what you consider "an attack". In some sense, the price drifting off by 1-2% for extended periods of time is an attack, in that it can deter RAI users from arbing the price back to the target price, potentially weakening the stability of the system. The upside of this experiment would be to find out if the system can defend itself against these kinds of long-run imbalances, and bonus points for discovering new attack vectors.

I'm sure an experiment of running the I term live will teach us things, but I don't think it would give sufficient information to answer that question, as attacks often don't come until years into a system's evolution.

I don't expect a 6-month experiment to be able to tell us all the answers, but I would expect that you, me, and the rest of the RAI community would be better informed about locking in the P-term indefinitely than we are now, with only a vague fear of second-order dynamics. Also, second-order systems, while scary, can be useful. After all, EIP-1559 is basically an integral controller :slight_smile:

2 Likes

The best argument for adding an I term would be if you discover a reason why adding it would lead to less risk of attack

Agree and yes, I believe it will.

Adding an I-term will create more persistent rates during times of imbalance, which increases the incentives to correct the imbalance (mint and sell, or market-buy and hold). The increased incentive comes from participants having a better forecast of future rates (their return) vs ephemeral Kp-only rates. This increased incentive stabilizes RAI.

The trade-off is a possibly decreased response to a current shock, in the case where the integral error opposes and cancels out the current error. We are working on selecting parameters to balance these two requirements: response time and rate stability.

I also view adding the I-term as less governance. With Kp-only we have to be pretty close in our estimate of what rate will correct a specific deviation. Adding an I-term will automatically increase the rate in times of deviation until the market responds.

Currently the RAI controller only considers the current market deviation of RAI when deciding system rates. However, the system will benefit from also considering historical error when determining the rate needed to converge market and redemption prices. This is similar to the methodologies of central banks, which examine recent macro-economic indicators when setting interest rates.

Considering historical error will produce stronger rates in times of persistent market deviation (steady-state error). This will create stronger incentives for system participants, RAI holders and SAFE owners, to act and correct the supply/demand imbalance.

To consider historical error, the RAI controller will be switched from a P-only to a PI controller. The PI controller is the most common type of closed-loop feedback mechanism, used for control of fluid, temperature and even biological systems. Further, research has shown that some modern monetary policies resemble PI control (Hawkins et al. 2015).

Calculating the effect of historical error on the rate requires calculating the integral of the error (the sum of the errors in a discrete system). The Ki and alpha parameters are then selected to set the final contribution of the historical error to the system rate. The Kp parameter is still used to set the current error's contribution to the rate and will remain unchanged.


Here are the Kp, Ki and alpha (decay) parameters that will be tested on Kovan and then deployed to mainnet.

Integral Decay

The error integral is implemented with an alpha parameter that decays the impact of older errors on the controller response. It is a discrete implementation of a leaky integrator.

Thus, newer errors are weighted higher than older errors. E.g., with a decayed integral, a market deviation from 180 days ago has less effect on the controller response than one that occurred 7 days ago.

Decaying the error integral also mitigates the risk of any slight bias in the system introduced by code or measurement error. With the decay, this bias cannot accumulate forever and ultimately, errantly, dominate the rate calculation.

The formula for calculating the error integral is:

error_integral_new = (alpha ^ delta_t) * error_integral_old + delta_t * error

In plain language:
"The integral of the error is the time-decayed value of the existing integral plus the area of the new error"

Integral Calculation Example
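A minimal sketch of the update rule in Python (ALPHA is the 120-day value introduced below; the function name, hourly step size and prices are illustrative assumptions):

```python
ALPHA = 0.9999997112  # per-second decay (the "120-day alpha" defined below)

def update_integral(error_integral: float, error: float, delta_t: float) -> float:
    """Decay the existing integral over delta_t seconds, then add the
    area of the new error (error * delta_t)."""
    return (ALPHA ** delta_t) * error_integral + error * delta_t

# Accumulate a constant -1% error ($-0.03 at a $3.00 redemption price)
# in hourly steps for 30 days:
integral = 0.0
for _ in range(30 * 24):
    integral = update_integral(integral, -0.03, 3600.0)
print(integral)  # ~ -5.5e4 (decayed dollar-seconds of error)
```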

The Effect of Decay

Since the decay only reduces old errors' impact and can't completely eliminate it, we must find an alpha term that approximates the time period we care about.

This is a plot of the cumulative error weight of the new alpha term over a period of 120 days. Notice the weight approaches but doesn't reach 1. There will still be errors older than 120 days that contribute, albeit very slightly, to the error integral and the controller response.

For the above alpha = 0.9999997112:
95% of the error sum occurs within the first 120 days.
5% of the error sum is attributed to errors older than 120 days.

We refer to this as the "120-day alpha".

Definition:

n-day alpha = an alpha that creates a sum where 95% of the sum comes from the most recent n days

E.g., here is a plot of the 30-day alpha, alpha = 0.999998845
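The n-day alpha can be computed directly: for a geometric per-second decay, the weight of all error older than T seconds in a constant-error sum is alpha^T, so an n-day alpha solves alpha^T = 0.05. A minimal sketch under that assumption, which reproduces the two quoted values to within rounding:

```python
def n_day_alpha(n_days: float, tail_weight: float = 0.05) -> float:
    """Per-second alpha such that only `tail_weight` of a constant-error
    integral comes from errors older than n_days."""
    seconds = n_days * 24 * 60 * 60
    return tail_weight ** (1.0 / seconds)

print(n_day_alpha(120))  # ~0.9999997111 (quoted 120-day alpha: 0.9999997112)
print(n_day_alpha(30))   # ~0.9999988442 (quoted 30-day alpha: 0.999998845)
```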

The value of alpha determines the age and relative weighting of error (the shape of the curve) to use when calculating rates from historical market deviations.

Ki selection

While alpha determines the age of the historical error to use, the Ki parameter determines the magnitude of the rate that comes from the historical error.

When selecting Ki, a major consideration is the length of time it would take for a constant market deviation to double the system rate.

In other words:
"How many days until the rate from the integral equals the rate from the current error?"
Equivalently,
"How many days until the Ki rate/Kp rate ratio equals 1?"

For a given Ki, the number of days is dependent upon the alpha parameter.

Given:
Kp = 7.5E-8
Ki = 2.4E-14
alpha = 0.9999997112

Table showing the Ki rate/Kp rate ratios for the new parameters over time, for a constant error:

Days of constant error     10     20     30     60     90
Ki rate / Kp rate ratio    0.26   0.45   0.60   0.87   1.0

As expected, the rate from the integral gets stronger the longer a steady error exists.
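Assuming 1-second updates, these ratios follow from the closed form of the decayed (geometric) error sum; a sketch (values land within rounding of the table):

```python
KP, KI, ALPHA = 7.5e-8, 2.4e-14, 0.9999997112

def ki_over_kp(days: float) -> float:
    """Integral-term rate over proportional-term rate after a constant
    error held for `days` (the error magnitude cancels out)."""
    t = days * 86400
    integral_per_error = (1 - ALPHA ** t) / (1 - ALPHA)  # geometric sum
    return KI * integral_per_error / KP

for d in (10, 20, 30, 60, 90):
    print(d, round(ki_over_kp(d), 2))
# ~0.24, 0.44, 0.58, 0.86, 0.99 (table: 0.26, 0.45, 0.60, 0.87, 1.0)
```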

Controller Responses

Given these new parameters, here is the new system's response to step and impulse inputs.

redemption price = 3.0
Kp = 7.5E-8
Ki = 2.4E-14
alpha = 0.9999997112

Step Input (Constant Error)

-1% error

-5% error

Table of expected system rates (annual, %) given various levels of constant error (step input) over multiple days. Rows marked * correspond to the plots above.

constant error, $   constant error, %   annual rate after 30 days   after 60 days   after 90 days
 0.03                 1%                  11.9                        14.2            15.3
 0.09                 3%                  40.1                        48.8            53.1
 0.15                 5%                  75.4                        94.0           103.4
-0.03                -1% *               -10.6                       -12.4           -13.2
-0.09                -3%                 -28.6                       -32.8           -34.7
-0.15                -5% *               -43.0                       -48.4           -50.8

Note: Because of the conversion to APY, the above rates are not symmetrical for positive and negative error. However, the per-second rates in the system are symmetrical and have the same magnitude.
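A sketch that approximately reproduces the step-response table and illustrates the APY asymmetry, again assuming 1-second updates (function names are illustrative):

```python
KP, KI, ALPHA = 7.5e-8, 2.4e-14, 0.9999997112
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def step_apy(error: float, days: float) -> float:
    """Annualized PI rate (%) after `days` of a constant error."""
    t = days * 86400
    integral = error * (1 - ALPHA ** t) / (1 - ALPHA)  # decayed error sum
    per_second = KP * error + KI * integral
    return ((1 + per_second) ** SECONDS_PER_YEAR - 1) * 100

print(step_apy(0.03, 30))   # ~11.9, matching the 1% row above
print(step_apy(-0.03, 30))  # ~-10.6: same per-second magnitude, smaller APY
```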

Impulse Input (Instantaneous Error)

-3% impulse

-10% impulse

Table of expected system rates (annual, %) given various levels of instantaneous error (impulse input). Rows marked * correspond to the plots above.

1-day impulse error, $   impulse error, %   annual rate after 30 days   after 60 days   after 90 days
-0.09                    -3% *              -0.3                        -0.1            -0.1
-0.15                    -5%                -0.5                        -0.2            -0.1
-0.30                    -10% *             -1.0                        -0.5            -0.2
 0.09                     3%                 0.3                         0.1             0.1
 0.15                     5%                 0.5                         0.2             0.1
 0.30                    10%                 1.0                         0.5             0.2

Note: Because of the conversion to APY, the above rates are not symmetrical for positive and negative error. However, the per-second rates in the system are symmetrical and have the same magnitude.

References

This is a grid showing step responses for various settings of alpha and Ki, for Kp = 7.5E-8.

7 Likes

Thanks for following up with the integral-term controller param explanations, Bert!

I want to add some flavor about our methodology and thought process while evaluating the leak and Ki params.

Initially, we made a grid of possible parameters we could evaluate (at the end of Bert's post, reposted here).

Looking at the numbers, we tried to answer what the leak and Ki should be, and initially narrowed in on Ki = 2.23E-14 with a 90-day leak. Notes below.

[screenshot: notes from the parameter evaluation]

Rationale for the 90-day leak:

  • 90-day leak = the weight of all data older than 90 days is 5%, and the weight of everything in the last 90 days is 95%
  • RAI is new, so making historical inferences over too long a timeline is dangerous; once RAI is 5 years old, maybe we'll move to a 1-2 year leak
  • we're running a 5-month experiment, so a 90-day leak gives us time to evaluate the entire integral leak lifecycle
  • the leak also mitigates overshoot: if there were no leak, overshoot would be required to reach a new equilibrium; with a leak, less overshoot is required, so a 90-day leak can limit I-term-driven overshoot to roughly 1-2 months tops
  • if the leak is too fast (e.g. 30 days), then it doesn't really act as an integrator over a long time period, only a few weeks, which seems pointless
  • due to the non-linear nature of our integral leak, if the leak is 60 days, then it mostly integrates over the last 30 days
  • so 90 days seems like a sweet spot where we integrate everything over 3 months, weight the last 1-2 months highest, and ignore everything older than 3 months

Rationale for Ki = 2.23E-14:

  • the I-term should be tuned to correct steady-state error, and largely ignore short-term impulses
  • our leak makes our I-term weird, because it doesn't weight all the times in the given window the same; it exponentially decays historical error
    • so with a 90-day leak, 86% of the I-term is from the latest 60 days, and only 14% from errors older than 60 days
    • this means our integral term is always somewhat "front-loaded", which we should keep in mind
  • the last month has been a prod experiment with +1% error and -7% (P-only) rates
  • we can ask "what should rates be today" after a constant 1% error for 30 days
  • I would think that after 1 month of +1%, rates should be around 10-11%, or roughly +50% over P-only
  • this is also roughly based on the fact that CVXRAI returns are about 15%, meaning the real rates on RAI this month were about +8%; if the I-term reached -7-8%, then real rates on RAI would be 0%, there would be no reason to hold RAI deposits in CVXRAI, and people would definitely sell their RAI for other coins…
    • so hypothetically, slowly raising rates by 3-4% over the past month may have caused some farmers to dump RAI as the farming rewards became less competitive, which would help bring RAI back down closer to equilibrium…
    • for example, rates on FRAX are around 10%, so even a few % drop in real RAI rates would incentivize people to switch over
  • total rates of 10-11% after 30 days of constant 1% error with a 90-day leak → corresponds to a Ki of 2.23E-14 (these numbers are reproducible with the sketch after this list)
  • with Ki = 2.23E-14 at 60/90 days, the rates would be 12.6% and 13.2%, so it gets almost up to 100% of Kp (it maxes out at around 80%)
    • tbh this feels a bit weak to me; I think I would prefer the I-term to be able to overcome the P-term eventually (especially after 2-3 months)
    • like, if increasing rates by 3-4% didn't work in month 1, why would raising rates by an additional 1% in months 2 and 3 solve anything?
    • the 180-day leak sort of fixes this: the 30-day rates are only somewhat larger than the equivalent 30-day rates with a 90-day leak, but with a 180-day leak the 60/90-day rates are much higher
  • comparing to Ki = 4.46E-14:
    • after 30 days of constant 1%, the total rate is 15%, so the I-term is already ~100% of the P-only response → feels… too fast to me
    • and then after 90 days it maxes out at a constant 19.3%, or 163% of the P-value, meaning the maximum PI response is about 2.5x stronger than P-only
    • this feels… maybe a little too strong, but maybe right?
    • I would probably have a preference for a non-leaky integral with evenly weighted error sampling over 90 days, because that would more closely do what I want, which is "get stronger over time, but not too fast"
    • because on one hand it feels weird to look at a rate of 19.3% resulting from only a 1% price gap, but it's also a 1% price gap that persisted for 90 days even while the rate increased from 7% to 19%…
    • so I guess I wouldn't mind the Ki = 4.46E-14 if it ramped up slower, even if I do think its terminal strength is probably about right
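For reference, the candidate rates quoted in this list can be approximated with a small sketch (1-second updates assumed; function names are illustrative):

```python
KP = 7.5e-8
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def n_day_alpha(n_days: float, tail_weight: float = 0.05) -> float:
    """Per-second alpha with `tail_weight` of the sum older than n_days."""
    return tail_weight ** (1.0 / (n_days * 86400))

def total_apy(ki: float, leak_days: float, error: float, days: float) -> float:
    """Annualized P+I rate (%) after `days` of a constant error."""
    alpha = n_day_alpha(leak_days)
    integral = error * (1 - alpha ** (days * 86400)) / (1 - alpha)
    per_second = KP * error + ki * integral
    return ((1 + per_second) ** SECONDS_PER_YEAR - 1) * 100

# 1% error ($0.03 at $3.00), Ki=2.23E-14 with a 90-day leak:
print([round(total_apy(2.23e-14, 90, 0.03, d), 1) for d in (30, 60, 90)])
# ~[11.1, 12.6, 13.1] -- the "10-11%" 30d rate and the ~12.6/13.2% 60/90d rates
# Ki=4.46E-14 with a 90-day leak reaches ~15% after 30 days:
print(round(total_apy(4.46e-14, 90, 0.03, 30), 1))
```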

After our initial parameter discussion, we decided to look more closely at intermediate Ki/leak steps, adding leak = 120 days and Ki = 3.35E-14 to the chart. We looked for params that, compared to (leak = 90, Ki = 2.23E-14), would result in:

  • a stronger maximum controller I-term response (>78% of P-value)
  • about the same 30d rate (50% of P-value)
  • round numbers, to make it easy to reason about :]

We arrived at leak=120 and Ki=2.4E-14, with the rationale as follows:

  • due to the nature of the leak exponentially decaying historical error and front-loading the I-term, it is difficult to increase the maximum controller response without also increasing the 30d rate…
  • increasing the leak to 120 days and holding Ki constant (top box) increases the 1% error 30/60/90d rates by 0.4%/0.9%/1.5% respectively, which seems like an acceptable tradeoff
  • increasing Ki slightly from 2.23E-14 to 2.4E-14 increases the 1% error 30/60/90d rates by 0.3%/0.5%/0.6% when compared to the above, which is more front-loaded than is ideal…
  • however, increasing Ki to 2.4E-14 also results in nice round numbers when comparing Ki/Kp over time:

Days of constant error     10     20     30     60     90
Ki rate / Kp rate ratio    0.26   0.45   0.60   0.87   1.0

  • so in summary, we like Ki = 2.4E-14 with leak = 120 days, because it has a 90-day Ki, meaning that after 90 days of constant error, the I-term should be 100% of the P-term
  • and while the I-term being 60% of the P-term after only 30 days is higher than I would generally want, it seems like an OK trade-off to get a stronger 60d/90d response and nice round numbers

Conclusion

Of course, tuning RAI has thus far involved a great deal of trial and error. The current Kp is about 1/4 as strong as it was when RAI launched, and it was only by testing in prod that we were able to determine that the current levels seem to work well. I don't expect we'll get the Ki params "right" on our first try either, but we will hopefully learn enough to make a more informed decision about how the Ki should work before our scheduled ungovernance in August.

2 Likes