A Mathematical View of Automated Market Maker (AMM) Algorithms and Its Future

April 20, 2023

Leo Lau, G.W. XIE at AnchorDAO Lab

In this paper, we will first review four Automated Market Maker (AMM) algorithms that are implemented by protocols like Bancor, Uniswap, Balancer and Curve. Recent developments, possible improvements, and the future of AMM algorithms will also be discussed.

Contents

  1. Bancor’s bonding curve and trading formulas
  2. Impermanent loss calculations for Uniswap
  3. Positive gains when price is within 2ρ
  4. Liquidity distribution, liquidity deposition, range order in Uniswap V3
  5. Balancer’s market maker function and trading formulas
  6. Smart Order Router (SOR) algorithm
  7. Curve’s StableSwap and trading formulas
  8. Dynamic weight, customizable price pegs and smooth price transition of Curve V2
  9. Market maker function of Curve V2 in a 2-token pool setting
  10. Price function of Curve V2 compared to CPMM and StableSwap
  11. Repegging process: Xcp criteria, EMA price oracle, relative price change step size s
  12. Dynamic transaction fees of Curve V2
  13. DEX aggregator: general solution to Balancer’s SOR algorithm
  14. Pivot algorithm: an attempt to solve the impermanent loss problem
  15. Single-sided liquidity solutions
  16. Designing better dynamic weights for Curve V2
  17. Applying price range when the price function is not analytical
  18. Clipper: an AMM algorithm optimized for small trades
  19. TWAMM: an AMM algorithm optimized for large, long-term orders
  20. Application of TWAMM on Constant Product Market Maker (CPMM) and Logarithmic Market Scoring Rule (LMSR)
  21. Application of TWAMM on time-dependent AMMs such as YieldSpace
  22. Conclusions and future work

Bancor

Bancor¹ utilizes the concept of a bonding curve to determine price. A bonding curve is the relation between the price of a token and its total supply.

The invariant chosen by Bancor is F, called the connector weight, which is the ratio between R (the reserve-token balance in the liquidity pool) and the product of S (the BNT supply outside the liquidity pool) and P (the price of BNT relative to the reserve token). Substituting the expression for P and integrating² both sides gives the relation between P and S. It is a power-law expression whose exponent α is determined by the connector weight F (F between 0 and 1). The smaller F is, the bigger α will be, which means the price changes more rapidly with respect to BNT's total supply.
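For concreteness, a minimal reconstruction of that relation from the definitions above (α is our label for the exponent):

$$F = \frac{R}{S\,P} = \text{const}, \qquad P = \frac{R}{F\,S}.$$

Each purchase adds reserve tokens and mints BNT at the current price, so $dR = P\,dS$, and therefore

$$\frac{dR}{dS} = \frac{R}{F\,S} \;\Longrightarrow\; R = R_0\left(\frac{S}{S_0}\right)^{1/F} \;\Longrightarrow\; P = P_0\left(\frac{S}{S_0}\right)^{\alpha}, \qquad \alpha = \frac{1-F}{F}.$$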

Using this expression and simple integrations, we can derive the relation between T (BNT token bought) and E (reserve token paid), where R0, S0 are the current values of R and S.
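The resulting purchase and sale relations (a reconstruction from the bonding curve above, consistent with the Bancor formulas²):

$$T = S_0\left[\left(1 + \frac{E}{R_0}\right)^{F} - 1\right], \qquad E = R_0\left[\left(1 + \frac{T}{S_0}\right)^{1/F} - 1\right].$$

The sale direction (destroying T BNT and withdrawing reserve tokens) follows the same way with S decreasing: the reserve returned is $R_0\left[1 - \left(1 - T/S_0\right)^{1/F}\right]$.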

Suppose we want to exchange token A for token B. We first need to buy BNT from pool A using token A (assuming we do not already hold BNT), and then buy token B from pool B using that BNT. Below are the exact formulas needed to calculate how many tokens we will receive. The relative price between token A and token B can be expressed in terms of the relative prices between BNT and token A / token B.
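A minimal sketch of that two-hop calculation in Python, assuming fee-less pools in which BNT is the smart token and token A / token B is the reserve, and using the purchase and sale formulas above (the pool numbers are made up for illustration):

```python
def buy_bnt(amount_in, R, S, F):
    """Pay `amount_in` reserve tokens (token A) into a pool with reserve
    balance R, outside BNT supply S, connector weight F; returns BNT minted."""
    return S * ((1 + amount_in / R) ** F - 1)

def sell_bnt(bnt_in, R, S, F):
    """Destroy `bnt_in` BNT against a pool with reserve balance R (token B),
    outside BNT supply S, connector weight F; returns reserve tokens released."""
    return R * (1 - (1 - bnt_in / S) ** (1 / F))

# Made-up pool states: reserve balance R, BNT supply S, connector weight F.
pool_a = dict(R=100_000.0, S=50_000.0, F=0.5)   # token A <-> BNT
pool_b = dict(R=200_000.0, S=60_000.0, F=0.5)   # token B <-> BNT

amount_a = 1_000.0
bnt = buy_bnt(amount_a, **pool_a)        # hop 1: token A -> BNT (slippage #1)
amount_b = sell_bnt(bnt, **pool_b)       # hop 2: BNT -> token B (slippage #2)
print(f"{amount_a} A -> {bnt:.2f} BNT -> {amount_b:.2f} B")
```

Note that slippage is incurred on both hops, which is exactly the drawback discussed in the Cons below.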

Pros: Bancor allows single-sided liquidity deposition in certain pools, as determined by Bancor governance. There is a limit to how much single-sided liquidity can be deposited, also set by governance. Within this limit, Bancor supplements an equal value of BNT when users deposit single-sided liquidity in the form of the other token, which doubles the effective liquidity. Once the limit is reached, anyone who wants to deposit single-sided liquidity has to wait for someone else to withdraw single-sided liquidity, or to deposit single-sided liquidity in BNT.

The Bancor protocol also compensates impermanent loss (discussed later) out of the transaction fees earned on the BNT side when users deposit single-sided liquidity. If the transaction fees do not fully compensate the impermanent loss, Bancor mints BNT to bring the impermanent loss to zero. As a result, liquidity providers enjoy a stable income once they have deposited liquidity for a certain time (100 days to be fully compensated).

Cons: All swaps need BNT as an intermediary, as explained above, so we experience slippage twice. For the same reason, all liquidity pools consist of BNT plus one other token and thus lack diversity. The BNT price can also be affected by the elastic supply needed to enable impermanent loss compensation and single-sided liquidity deposition.

Bancor introduces the idea of a network token, BNT, which is connected to all other tokens with different connector weights, each corresponding to a different price-determining bonding curve.

Uniswap

Uniswap uses Constant Product Market Maker (CPMM) to determine price. Before we dive into the algorithms Uniswap V2 and V3 use, let us first understand what Impermanent Loss (IL) is and how to calculate it.

If the AMM function is convex (the price increases as we buy and decreases as we sell), then a single trade with no transaction fee compensation will always cause liquidity providers to lose money. Suppose one trade moves the AMM function from point 1 to point 2. The spot prices (the absolute value of the function's derivative) at point 1 and point 2 are P1 and P2, and P3 is the actual trading price. Due to the nature of convex functions, P1 > P3 > P2. Impermanent loss is defined as the difference between the current value of the liquidity provider's tokens in the pool after trading and the current value of his tokens had he simply held them instead of providing liquidity, denoted V and Vheld respectively. For convenience, impermanent loss and prices are expressed in terms of token Y (the price of 1 token Y is 1). After some simple derivations, it is easy to show that impermanent loss is always less than zero (x2 > x1, P2 < P3).
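A one-line reconstruction of that derivation in the notation above (the trade sells Δx = x₂ − x₁ of token X into the pool at average price P₃, so y₁ − y₂ = P₃(x₂ − x₁), and both portfolios are valued at the post-trade spot price P₂):

$$V - V_{held} = (x_2 P_2 + y_2) - (x_1 P_2 + y_1) = (x_2 - x_1)P_2 - (x_2 - x_1)P_3 = (x_2 - x_1)(P_2 - P_3) < 0.$$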

Uniswap³ V2 utilizes a simple but powerful formula to determine trades. The product of pool token reserve numbers is a constant. Compared to Bancor, it gets rid of the network token. The trades are fully determined by the token numbers in the liquidity pool.

Due to the nature of this function, the values of the two tokens in the pool are always equal (prices in this paper are always relative prices).

Using the same logic, it is not hard to compute the impermanent loss of a single trade with and without fee in Uniswap V2. Suppose the trade changes the price from P to Pk. The impermanent loss, measured in percentage, can be solely expressed as a function of k.
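The result of that calculation without fee (a reconstruction: with invariant $xy = L^2$, the reserves at price $P$ are $x = L/\sqrt{P}$, $y = L\sqrt{P}$):

$$V = 2L\sqrt{kP}, \qquad V_{held} = \frac{L}{\sqrt{P}}\,kP + L\sqrt{P} = L\sqrt{P}\,(1+k), \qquad IL(k) = \frac{V}{V_{held}} - 1 = \frac{2\sqrt{k}}{1+k} - 1 \le 0.$$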

This function, not surprisingly, is always less than or equal to zero, as we can see from the impermanent-loss-without-fee figure above. IL(k) is symmetric if the horizontal axis is plotted in logarithmic space. The takeaway is: the larger the relative price change, the bigger the impermanent loss. Intuitively, the liquidity providers' more valuable token is bought out of the pool, leaving them holding more of the less valuable token.

Next, let us look at how IL(k) behaves if we add a transaction fee:

The impermanent loss function IL(k, ρ) derived this way looks very similar to the impermanent loss function without fee. We can do a sanity check by setting ρ to zero, which recovers the previous result. A typical Uniswap V2 fee is ρ = 0.3%. When plotting the impermanent loss function, we can see there is an above-zero part between roughly k = 0.994 and 1 (a span of roughly 2ρ). In this region, impermanent loss is positive, meaning liquidity providers actually gain value (the transaction fees earned outweigh the loss). By introducing a transaction fee, liquidity providers gain when the price moves within a certain range.

In the above discussion, we only considered the case where the relative price goes down. We can also calculate the exact range of k in which liquidity providers will have a positive gain.

When ρ is small, the total range of k, considering both directions (price going up and down), is approximately 4ρ (2ρ each). This means that when the price moves within 2ρ of the original price, liquidity providers have a positive gain. We can also calculate the maximum trading quantity, in terms of the token reserve, which, not surprisingly, is approximately ρ when ρ is small.
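A small numerical sketch of this claim, assuming the Uniswap V2 fee convention (the fee is taken from the input amount and stays in the pool); the reserves are illustrative:

```python
def il_after_fee_trade(dx, x=1.0, y=1.0, rho=0.003):
    """Sell dx of token X into an x*y CPMM pool with input fee rho.
    Returns (k, IL) where k = P_after / P_before and IL = V/V_held - 1,
    both valued in token Y at the post-trade spot price."""
    x_new = x + dx                          # the full input (incl. fee) stays in the pool
    y_new = x * y / (x + dx * (1 - rho))    # invariant applied to the fee-adjusted input
    p_before, p_after = y / x, y_new / x_new
    v_pool = x_new * p_after + y_new        # LP's tokens left in the pool
    v_held = x * p_after + y                # LP simply holding the original tokens
    return p_after / p_before, v_pool / v_held - 1

for dx in (0.001, 0.002, 0.003, 0.004):
    k, il = il_after_fee_trade(dx)
    print(f"dx = {dx:.3f}  k = {k:.4f}  IL = {il:+.8f}")
# IL stays positive until dx ~ rho (i.e. k ~ 1 - 2*rho ~ 0.994), then turns negative.
```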

When the price movement is more volatile, it seems like liquidity providers will always come out on the losing end. However, in reality this is not the case. We are aware of the work by Dave White et al.⁴, which addresses this conundrum, but sadly it is out of the scope of this introductory-level Medium paper. We intend to study this problem further in the future.

As for impermanent loss derivations for other popular AMM algorithms, including Uniswap V3, we refer readers to this lovely paper by Jiahua Xu et al⁵. Those derivations will be the topic for another day.

Pros: The first to use a convex function of the token reserves in the pool to determine prices.

Cons: Liquidity provision is even across all price ranges, meaning capital efficiency is low.

To increase the liquidity utility and reduce the impermanent loss risk, Uniswap⁶ V3 allows users to provide liquidity only within certain price ranges.

[Figure from the Uniswap V3 whitepaper]

This is achieved by translation of the Uniswap V2 function:

[Figure from the Uniswap V3 whitepaper]

Translating the function downward by the y value of point a and leftward by the x value of point b, as depicted in the figure and equation above, gives the same effective trading outcome between a and b as if we were using the green curve as our price-determining function. When the price goes out of this range, one of the token reserves is completely sold out, effectively concentrating liquidity into this price range.
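For reference, the translated invariant from the Uniswap V3 whitepaper, with liquidity L, price range [p_a, p_b], and real reserves x, y:

$$\left(x + \frac{L}{\sqrt{p_b}}\right)\left(y + L\sqrt{p_a}\right) = L^2.$$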

There is an excellent paper by Dan Robinson⁷ on calculating liquidity distributions of many AMMs.

It can also be trivially shown that two liquidity providers’ liquidity in the same price range can be simply added together.

When depositing liquidity, as shown above, the values of the two assets are not necessarily equal in Uniswap V3. Only when P equals the geometric mean of Pa and Pb are the two asset values equal. When P is less than the geometric mean, the value of asset X is larger than the value of asset Y; when P is greater than the geometric mean, the value of asset X is smaller than the value of asset Y.
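A quick check of this statement using the translated invariant above: for a current price $P \in [p_a, p_b]$, a position with liquidity $L$ holds

$$x = L\left(\frac{1}{\sqrt{P}} - \frac{1}{\sqrt{p_b}}\right), \qquad y = L\left(\sqrt{P} - \sqrt{p_a}\right),$$

so the two values (in token Y) satisfy $P\,x = y$ exactly when $P/\sqrt{p_b} = \sqrt{p_a}$, i.e. $P = \sqrt{p_a\,p_b}$.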

When the current price is completely outside the price range set by the liquidity provider, Uniswap V3 treats the deposit as a range order and only allows him to deposit one type of token (which type depends on whether the price range is entirely above or below the current price). For instance, consider a liquidity pool consisting of ETH and DAI. If the price range is entirely above the current price of ETH, users can only deposit ETH; if it is entirely below the current price of ETH, users can only deposit DAI. When the price moves across the entire price range, the deposited asset is fully converted into the other token. Because users can only deposit one type of token, a range order can only realize two out of the four traditional limit orders (take-profit order and buy-limit order). Buy-stop and stop-loss orders, on the other hand, cannot be realized. As of now, we do not know the purpose of restricting the token type for range orders.

Pros: Uniswap V3 introduces the concept of a liquidity distribution by allowing its users to deposit liquidity in price ranges. By concentrating liquidity, it improves capital efficiency: higher liquidity and lower slippage are achieved for the same value of deposited assets. Providing liquidity in a price range also, to some extent, lowers the risk of impermanent loss.

Cons: Users can only deposit certain types of tokens when placing range orders. Buy-stop and stop-loss orders therefore cannot be realized.

Uniswap V2 and V3 introduce CPMM and liquidity distribution in their AMM algorithms. Providing liquidity in price ranges essentially enables Uniswap V3 to be a universal AMM, with the ability to become any possible AMM by changing its liquidity distribution.

Balancer

Balancer⁸ extends the 2-token pools of Uniswap V2 to multi-token pools. Each asset in a Balancer pool is assigned a constant value weight, and the weights add up to 1. It is not hard to show that this is equivalent to requiring that the weighted power product of the asset reserves is a constant. The price of asset n relative to asset t can then be derived as the ratio between the reserves of asset t and asset n, each normalized by its weight.
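In formulas, using the Balancer whitepaper's balances B and weights W:

$$V = \prod_i B_i^{W_i} = \text{const}, \qquad P = \frac{B_t / W_t}{B_n / W_n} \quad \text{(price of asset } n \text{ in units of asset } t\text{)}.$$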

Based on this constant invariant, we can derive trading formulas for different inputs (trading between asset o and asset i). In this notation, asset o is always the asset bought out of the pool and asset i is the asset sent in; A denotes the amount sent in or received, and B the current reserve of the corresponding token. We can also calculate the amount of token i sent in or token o bought out, given a desired price change.
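For example, the out-given-in formula (the amount $A_o$ of asset o received for sending in $A_i$ of asset i), which follows directly from the invariant above:

$$A_o = B_o\left[1 - \left(\frac{B_i}{B_i + A_i}\right)^{W_i / W_o}\right].$$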

Balancer also introduces the Smart Order Router (SOR⁹) algorithm.

https://docs.balancer.fi/v/v1/smart-contracts/sor/

The general idea of this algorithm is to split an order into several smaller pieces that trade in different Balancer pools, to achieve a better overall swap result. Suppose we want to trade in pool 1 and pool 2. If the total amount N we want to trade is below A in the figure above, we only trade in pool 1, since the price in pool 1 is always better than the price in pool 2. If the total amount exceeds A, we trade part of the order in pool 1 and part in pool 2, with the amounts chosen so that the final prices in the two pools are equal (B + C = N).

It is easy to prove that the optimal strategy is always the one that makes the final price in each pool equal (if the prices are not equal, we can always route more volume to a pool with a better price and improve the swap result).

The price as a function of trade amount is, in general, nonlinear. Balancer approximates each price function as linear. If there are n pools, the optimal strategy can be expressed as:
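A sketch of the linearized solution (our reconstruction under that approximation): writing pool $i$'s final price as $P_i(x_i) \approx P_i(0) + P_i'(0)\,x_i$, and requiring all final prices to equal some $P^*$ with $\sum_i x_i = N$, gives

$$x_i = \frac{P^* - P_i(0)}{P_i'(0)}, \qquad P^* = \frac{N + \sum_i P_i(0)/P_i'(0)}{\sum_i 1/P_i'(0)}.$$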

One edge case needs to be handled first: if some pool's price, even after routing the entire order N through it, still cannot reach the other pools' initial prices, then the trivial optimal strategy is to route everything through that pool. Before doing the more complicated calculation, we need to check whether this condition holds. If only some pools' initial prices cannot be matched, then only those pools should be removed from the calculation.

In this calculation, gas fees are not considered. In reality, the optimal strategy should balance the routing gain against the gas cost.

The SOR algorithm, we believe, could be used in a wider context; for instance, the price functions could come from other AMM protocols' pools. Due to the limitations of our current knowledge, we are not sure whether actual AMM aggregators use the same kind of logic to achieve better prices. A more general solution without any price function approximation will be discussed later in this paper.

Pros: Balancer generalizes 2-token pools to multi-token pools, and introduces the SOR algorithm to achieve better prices for its users.

Cons: “A liquidity pool is only as strong as its weakest asset.” The more types of tokens in one pool, the higher the risk.

Balancer is a multi-token portfolio management tool that allows flexible token value distributions, with a price optimization algorithm.

Curve

Curve merges Constant Sum Market Maker (CSMM) and Constant Product Market Maker (CPMM) together to achieve lower price slippage. We can think of this algorithm as adding a constant price part to the Uniswap/Balancer model to make the resulting function pegged to a certain price.

Curve¹⁰ V1, known as StableSwap, designs its algorithm for stablecoin trading. It multiplies the CSMM with a weight and adds CPMM:

First we consider the special case where the number of each token in the liquidity pool is the same. It is trivial to show that the equation holds at equilibrium (χ is the weight; Dⁿ⁻¹ is multiplied in so that the CSMM and CPMM terms have the same order of magnitude). However, when the liquidity pool is out of equilibrium, the equation no longer holds if χ is a constant. Therefore, we need to make χ dynamic. Curve V1 chooses a functional form of χ that goes to zero at extreme imbalance, so that the equation is dominated by the CPMM term, and equals A at equilibrium. A is a constant, optimized by simulating historical data. Substituting this χ gives an equation that holds at all times.
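Written out (a reconstruction matching the StableSwap whitepaper), the combined invariant and the choice of χ are

$$\chi D^{n-1}\sum_i x_i + \prod_i x_i = \chi D^{n} + \left(\frac{D}{n}\right)^{n}, \qquad \chi = A\,\frac{\prod_i x_i}{(D/n)^{n}},$$

which, after substitution, gives the final StableSwap invariant

$$A\,n^{n}\sum_i x_i + D = A\,D\,n^{n} + \frac{D^{\,n+1}}{n^{n}\prod_i x_i}.$$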

Next, let us derive how StableSwap actually calculates swap outcomes. Based on the current token numbers in the pool, we can calculate D. For instance, if we want to swap for token j, we can separate xⱼ and solve the equation for xⱼ:

The equation reduces to a quadratic in xⱼ. Sadly, there is currently no math library for solving quadratic equations in Vyper, so StableSwap uses Newton's method to solve for xⱼ. The iteration roughly doubles its precision every step, so an acceptable xⱼ can be computed within the gas limit. Finally, the difference between xⱼ before and after the swap is the amount of token j bought out.
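A minimal Python sketch of that iteration (float arithmetic for readability; the on-chain implementation works with integers, and the names here are ours). Writing Ann = A·nⁿ, S′ = Σ_{i≠j} x_i and P′ = ∏_{i≠j} x_i, the invariant above becomes x_j² + (b − D)x_j = c with b = S′ + D/Ann and c = D^{n+1}/(nⁿ P′ Ann), and Newton's method iterates x_j ← (x_j² + c)/(2x_j + b − D):

```python
def stableswap_y(xp, j, amp, D, n_iter=255, tol=1e-12):
    """Solve the StableSwap invariant for the post-swap balance of token j.

    xp  : pool balances with the *input* token's balance already increased;
          the entry at index j is ignored (it is the unknown).
    amp : amplification coefficient A.
    D   : invariant computed from the pre-swap balances.
    """
    n = len(xp)
    Ann = amp * n ** n
    S = sum(x for k, x in enumerate(xp) if k != j)   # sum over i != j
    P = 1.0
    for k, x in enumerate(xp):
        if k != j:
            P *= x                                   # product over i != j
    b = S + D / Ann
    c = D ** (n + 1) / (n ** n * P * Ann)
    y = D                                            # starting guess
    for _ in range(n_iter):
        y_next = (y * y + c) / (2 * y + b - D)       # Newton step on y^2+(b-D)y-c=0
        if abs(y_next - y) < tol:
            break
        y = y_next
    return y

# amount of token j received = old_balance_j - stableswap_y(...)
```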

[Figures from the StableSwap whitepaper]

The StableSwap market maker, compared to the CPMM, is pressed flat against the line x + y = const. This keeps the swap price at or close to 1 with very small slippage in the vicinity of the equilibrium point (as long as neither token in the pool is close to being sold out). When one token in the pool is almost sold out, the price starts to drop drastically. This is easy to understand: the curvature (and hence the slippage) of the function is concentrated elsewhere in order to keep it small near the equilibrium.

The CPMM term and the dynamic weight in this model are used to punish informed, extremely large orders, preventing the tokens in the pool from being completely sold out.

Pros: By adding CSMM and CPMM together with dynamic weight, Curve’s StableSwap achieves very small slippage, ideal for stablecoins.

Cons: The price is always pegged at 1. The pool will be bought out if the market price significantly differs from the pool price. Therefore, StableSwap only works for stablecoins.

To ensure a smoother price transition and customizable price pegs, Curve¹¹ V2 modifies the dynamic weight χ into K, as shown below:
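Written out (a reconstruction matching the Curve V2 whitepaper):

$$K_0 = \frac{\prod_i x_i}{(D/n)^{n}}, \qquad K = A\,K_0\,\frac{\gamma^{2}}{\left(\gamma + 1 - K_0\right)^{2}}.$$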

K0 varies between 0 (imbalance) and 1 (equilibrium). χ and K (normalized by A) are plotted below as functions of K0:

We can get a sense of how Curve V2 smooths the price transition from the figure above. Basically, it makes the dynamic weight decline quickly as the pool moves away from equilibrium; the lower γ is, the more rapid the decline. Making the dynamic weight quickly decay to zero is essentially equivalent to forcing the function to behave much more like a CPMM, even when the pool is only slightly imbalanced.

There is an awesome tweet by DW on twitter¹² that explains the same concept.

The price transition problem is solved. Now we discuss how Curve V2 implements price pegs other than 1. Having a price peg (called the price scale in the whitepaper) means there exists an equilibrium point on the market maker curve where the scaled token numbers are equal:

The scaled token numbers satisfy an equation similar to StableSwap's. Taking the simplest 2-token pool as an example, the market maker function can be expressed in terms of A, γ, p, D, x, y. It simplifies to a cubic equation with respect to x, y (a sextic with respect to D).
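For reference, with scaled balances $x_i' = p_i\,x_i$ (the $p_i$ being the price scales), the invariant keeps the same shape as StableSwap's, with $K$ in place of $\chi$ (a reconstruction consistent with the whitepaper):

$$K\,D^{n-1}\sum_i x_i' + \prod_i x_i' = K\,D^{n} + \left(\frac{D}{n}\right)^{n},$$

where $K$ is the dynamic weight defined above (itself a function of the $x_i'$ and $D$ through $K_0$). For $n = 2$ this is cubic in the balances and sextic in $D$, as stated.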

A plot of this function with typical values is shown below:

The price of token x relative to token y can also be plotted. There is a nearly constant part of the Curve V2 price function near the equilibrium point (1000, 1000). Curve V2 delays the price movement slightly, rather than almost completely as StableSwap does. As the trading amount increases, the price starts to react, with smaller slippage than a CPMM. To summarize, Curve V2 achieves very small slippage near the equilibrium point and better slippage than a CPMM elsewhere. For price pegs other than 1, we simply change p in the cubic / sextic equation above; the price peg problem is therefore also solved.

We can use a Newton's method similar to StableSwap's to calculate swap results. First, we calculate D from the current token numbers in the pool (this time using Newton's method, since the equation is far more complicated). Second, if we want to swap for token i, we use Newton's method again to solve for xᵢ. As before, the difference (normalized by its price scale) is the amount of token i bought out (all the xᵢ here are scaled token numbers).

To ensure the roots of the polynomial can be found within the gas limit, the Curve whitepaper discusses the starting guesses they choose, as well as the parameters in the function. They use a method called fuzzing (the hypothesis framework) to determine those optimal values. Currently, we do not know the details of this method and would love to learn more.

In order to keep trades near the equilibrium point (and thus keep slippage small), Curve V2 constantly repegs the market maker function by changing the price scale. However, repegging can lead to losses borne by liquidity providers. Curve V2 introduces a quantity called Xcp to mitigate this problem:
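As far as we can tell from the whitepaper, Xcp is the value of a constant-product invariant evaluated at the equilibrium implied by the current D and price scales $p_i$,

$$X_{cp} = \left(\prod_i \frac{D}{n\,p_i}\right)^{1/n},$$

and its growth since the last adjustment is used as the profit measure in the repegging criterion below.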

If the loss after one repeg would be larger than half of the accumulated Xcp gains (value gained relative to the original Xcp), the algorithm keeps the market maker function unchanged. There are several questions about this that we would like to answer in the future, since the whitepaper only briefly discusses Xcp; a look at the source code may help.

  1. Is the Xcp value proportional to the value calculated from the current token numbers in the pool?
  2. Does depositing or withdrawing liquidity count towards Xcp?
  3. If withdrawing liquidity counts towards Xcp, will it be stopped if the decrease in Xcp is too large?

For repegging, Curve V2 uses an EMA (Exponential Moving Average) price oracle to determine the oracle price. The new oracle price vector is a linear combination of the last swap price vector and the previous oracle price vector. The new price scale vector moves in the direction of the oracle price, but does not jump all the way to it: the price scale is lagged behind the oracle price by the relative price change step size s. The update equation can be derived with simple Euclidean geometry. The EMA price oracle and the price scale delay are there to damp volatile recent price movements and better represent the long-term market price.

Regarding the relative price change step size s: based on our experience of refreshing the Curve finance webpage, s changes on the scale of at least tens of minutes for some pools. How Curve V2 updates s is an interesting question that is beyond our current knowledge; looking at the source code would help here as well.
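A minimal sketch of the repegging price update described above, in Python (our own notation; the EMA weight alpha, the step size s, and the exact step rule are illustrative assumptions, not Curve's actual parameters or formulas):

```python
import numpy as np

def update_oracle(p_oracle, p_last, alpha=0.1):
    """EMA oracle: linear combination of the last swap price vector and the
    previous oracle price vector (alpha is an illustrative smoothing weight)."""
    return alpha * p_last + (1 - alpha) * p_oracle

def update_price_scale(p_scale, p_oracle, s=0.001):
    """Move the price scale toward the oracle price, lagging it behind by
    limiting the relative step to s (a simplified step rule, not Curve's)."""
    direction = p_oracle / p_scale - 1.0            # relative distance to the oracle
    norm = np.linalg.norm(direction)
    if norm <= s:                                   # already close: snap to the oracle
        return p_oracle.copy()
    return p_scale * (1.0 + s * direction / norm)   # relative step of size s

# 2-token pool: a single relative price. Feed in a stream of last-trade prices.
p_oracle = np.array([1.00])
p_scale = np.array([1.00])
for p_swap in [0.99, 0.98, 0.95, 0.97]:
    p_oracle = update_oracle(p_oracle, np.array([p_swap]))
    p_scale = update_price_scale(p_scale, p_oracle)
    print(p_oracle.round(4), p_scale.round(4))
```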

A plot demonstrating one single repegging process is shown below:

Suppose we start our swap at x = 1000 and end it at x = 1400. Originally, the price is pegged at 1; after the swap, the price moves to 0.6. To simplify, and only for demonstration purposes, we set the new price scale equal to the spot price (so the price is now pegged at 0.6) and solve the sextic equation for D. The market maker function is now pegged at 0.6, as shown above.

Repegging is essentially equivalent to finding a new market maker function that passes through the current reserve point ((x, y) in the 2-token case), with an equilibrium point (x0, y0) such that y0/x0 equals the absolute value of the derivative at (x0, y0). A fun project would be to fetch real Curve finance pool parameters and build a better demonstration (possibly an animation) of the repegging process.

Due to the shape of the Curve V2 market maker discussed above, it makes sense to set the transaction fee to a linear combination of two fee tiers, with dynamic weights measuring how far we are from the equilibrium point (i.e., whether the current price behavior is more like StableSwap or like a CPMM). The fmid and fout values chosen by Curve V2 are 0.04% and 0.4%. A figure demonstrating how the fee changes in a 2-token pool is plotted below (assuming no repegging or liquidity change):
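Schematically, with $g \in [0, 1]$ a measure of how close the pool is to equilibrium ($g = 1$ at equilibrium; the whitepaper uses a $K_0$-like expression for $g$, which we do not reproduce exactly here):

$$f = g\,f_{mid} + (1 - g)\,f_{out}, \qquad f_{mid} = 0.04\%, \quad f_{out} = 0.4\%.$$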

Pros: The market maker function can be pegged to any price, which suits all tokens rather than only stablecoins. The price transition is smoother than StableSwap's. Curve V2 also constantly updates the price scale, according to its internal price oracle, to better track the market price and keep trading near the equilibrium point. Dynamic fees ensure an even better price on top of this.

Cons: Gas fees can be higher due to solving cubic and sextic equations. Repegging based solely on the internal price oracle could also be risky: we wonder whether there are scenarios where the price scale is noticeably different from the market price while still passing the Xcp criterion. Cross-checking the price with external oracles could help if that is the case.

Curve’s StableSwap and dynamic peg V2 are here to make the trading slippage as small as possible. StableSwap always pegs at 1 while V2 makes the pegs follow the market price.

Some recent advancements and possible improvements in AMM algorithms will be discussed in the following.

DEX Aggregators

DEX aggregators are protocols that aggregate existing AMM protocols to achieve better swap results. Balancer’s SOR algorithm, as explained above, works in DEX aggregators as well, ensuring a mathematically optimal swap strategy.

The general solution of Balancer’s SOR algorithm, without any price function approximation, can be expressed below:

Because the price functions can take any form, depending on the AMM algorithms they come from, the equations expressing conditions like total-token conservation and equal final prices may not have an analytical solution.

Therefore, we use a technique common in fields like machine learning: gradient descent. We define the loss function as the variance of the final prices across the different price functions. After choosing a starting guess (a trivial, uninformed guess would be an equal swap amount N/n in each pool), we iterate (changing each swap amount by the partial derivative of the loss function with respect to that variable, multiplied by the learning rate l) until we reach an optimum within a set error tolerance.
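A minimal sketch of that procedure in Python (the projection step that keeps the amounts summing to N, the finite-difference gradient, and the learning rate are our own choices; the example pools are illustrative):

```python
import numpy as np

def split_order(price_fns, N, lr=1e5, iters=500, tol=1e-10):
    """Split a total trade N across pools so their final prices are equal.

    price_fns : list of callables; price_fns[i](x) is the final price in pool i
                after routing amount x through it (monotonic in x).
    Loss = variance of the final prices; we descend along the gradient
    projected onto the constraint sum(x) == N.
    """
    n = len(price_fns)
    x = np.full(n, N / n)                      # uninformed start: equal split
    eps = 1e-6 * N                             # finite-difference step
    for _ in range(iters):
        prices = np.array([f(xi) for f, xi in zip(price_fns, x)])
        loss = prices.var()
        if loss < tol:
            break
        grad = np.empty(n)
        for i in range(n):                     # numerical gradient of the variance
            bumped = x.copy()
            bumped[i] += eps
            p = np.array([f(xi) for f, xi in zip(price_fns, bumped)])
            grad[i] = (p.var() - loss) / eps
        grad -= grad.mean()                    # project: keep sum(x) == N
        x = np.clip(x - lr * grad, 0.0, None)  # lr is tuned for the example below
        x *= N / x.sum()                       # re-normalize after clipping
    return x

def cpmm_price(X0, Y0):
    """Final price of X (in Y) after selling x of X into a CPMM pool (X0, Y0)."""
    return lambda x: X0 * Y0 / (X0 + x) ** 2

pools = [cpmm_price(1000.0, 1000.0), cpmm_price(500.0, 520.0)]
split = split_order(pools, N=100.0)
print(split, [f(s) for f, s in zip(pools, split)])   # amounts and (equal) final prices
```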

Since the total trade amount is a monotonic function of the final equal price, this method should be able to find the global minimum (variance = 0). Again, the calculation above assumes there is no trivial solution (i.e., there is no pool such that routing the entire order through it still cannot bring its price to the other pools' initial prices).

Pivot Algorithm

The Pivot algorithm tries to pivot the market maker function by making it go through a fixed point (x0, y0).

The price at (x0, y0) is always the current market price Pt by design. In concept, this ensures that arbitrage always brings the pool back to the point (x0, y0), and the impermanent loss is zero because of this feature. However, in reality this algorithm does not have enough parameters to fit both the current reserves (x, y) and (x0, y0). This means we have to wait for the pool to return to (x0, y0) before changing the market maker function.

As we can see from the figure above, the post-swap point is not on the new market maker function (the blue and dashed-blue curves). The pool may not have any incentive to return to (x0, y0) either, if the current market price is smaller than the spot price at the current reserves.

We wonder whether there exists a function that passes through both (x, y) and (x0, y0) with a tunable derivative at (x0, y0) to fit the market price. If we require the function to be convex, then the market price cannot be smaller than the slope of the line segment between those two points. Thus, there might not be a complete solution to this problem if the market maker function has to be convex.

Single-Sided Liquidity

It can be inconvenient for liquidity providers to deposit all types of assets when providing liquidity. We wonder whether there exist mechanisms different from the elastic-supply approach taken by Bancor. Intuitively, there are two solutions: 1. swap part of the tokens first using the same protocol; 2. deposit the single-sided liquidity regardless and let arbitrage bring the price back to the market price.

For instance, we want to deposit liquidity in a 2-token pool with equal value.

Suppose we only have token X. It is not hard to calculate how much we need to swap so that the values of the two tokens are equal after the swap, and it is easy to show that the swapped fraction β always lies between 0 and 1, which is a reasonable result. However, the price after the swap can differ from the price at which the liquidity is deposited, so we wonder whether protocols actually make the swap and the liquidity deposition one atomic operation. There is also price slippage when doing the swap. How protocols like Balancer and Curve handle single-sided liquidity deposition remains a question to us as of right now. The operation described above makes sense when the slippage is small.
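As a concrete example (our own derivation, assuming a fee-less CPMM pool with reserves (x, y) and a deposit of Δx of token X, a fraction β of which is swapped first): the swap returns $\frac{y\,\beta\Delta x}{x + \beta\Delta x}$ of token Y and moves the spot price to $\frac{xy}{(x+\beta\Delta x)^{2}}$. Equating the post-swap values of the two legs gives

$$(1-\beta)\,\Delta x\cdot\frac{xy}{(x+\beta\Delta x)^{2}} = \frac{y\,\beta\Delta x}{x+\beta\Delta x} \;\Longrightarrow\; \beta^{2}\Delta x + 2\beta x - x = 0 \;\Longrightarrow\; \beta = \frac{\sqrt{x^{2}+x\,\Delta x}-x}{\Delta x},$$

which indeed lies in (0, 1) (in fact in (0, 1/2] for this fee-less CPMM case).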

The second approach, as described in the Balancer and Curve whitepapers, is to deposit regardless. This can move the price quite a bit, and the resulting arbitrage may make the impermanent loss significant too. We personally do not see any countermeasure in the Balancer whitepaper and docs. Curve, on the other hand, charges an imbalance fee, ranging from 0% to 0.02%, when depositing single-sided liquidity. In reality, there is little incentive to deposit single-sided liquidity under the second approach, due to arbitrage and impermanent loss.

It would be interesting to learn more about other innovations related to single-sided liquidity.

γ Value

In Curve V2, there is a constant called γ. What would happen if we made it dynamic as well? For example, we could make it a function of K0; the simplest case would be setting it equal to K0. The motivation is to make the function behave even more like StableSwap close to equilibrium and even more like a CPMM far away from it.

The purple dashed curve, which is between the StableSwap and small γ curve, should give us a market maker function in between StableSwap and Curve V2. However, when we plot the market maker function, it behaves exactly like StableSwap:

There are two possible fixes: 1. make A smaller; 2. choose a higher power of K0 to represent γ. Both seem viable; however, the first ruins the purpose of A being a big number, namely making the market maker function peg to a price. Further testing on our side suggests that changing A does not change the qualitative behavior anyway (the market maker function still looks like StableSwap after changing A).

The second fix would make gas fees higher: a higher power of K0 corresponds to a higher-order polynomial equation to solve. In fact, the reason Curve V2 chooses that particular form of the dynamic weight K is to mimic the behavior of K0 raised to a large power without raising the order of the polynomial.

The interesting question is: can we find a better dynamic weight that simplifies the equation to solve while maintaining the same or better functionality as Curve V2? When designing such a dynamic weight, we also have to balance small slippage against the market maker function's ability to react to informed large orders. StableSwap with only a price peg clearly does not work in this regard, because almost all the tokens would be bought out whenever the pegged price differs from the market price. Only when this balance is maintained is repegging viable.

Price Range

We can apply the price range concept to Curve V2. Since there is no analytical expression for the price in terms of the token numbers in the pool, we need to interpolate the relation between price and token numbers numerically. The shift applied to the market maker function is determined by the price range. Writing such a program could push capital efficiency even higher.

Clipper

Clipper¹³ uses an AMM algorithm that best suits the need of small trades. It generalizes Constant Product Market Maker (CPMM) and Constant Sum Market Maker (CSMM) as its two extreme cases (k = 1 and k = 0).

When there are only 2 types of tokens (X and Y), the invariant reduces to a simpler form in which x0 and y0 are the token numbers set by the initial liquidity provider. Below is how the pool behaves for different values of k; the x- and y-axes are normalized by x0 and y0.

Smaller k values correspond to lower slippage (the function is less convex) in the vicinity of (1, 1). When k is between 0 and 1, the invariant curve can intersect the x- and y-axes, which means the tokens in the pool can be sold out. The price at such an intersection is zero, implying the price is better than the CPMM price only until a turning point; past that point, the CPMM price is better. This is illustrated in the figure below:

Again x-axis is normalized. The price of X token relative to Y decreases as we move away from the initial point (1, 1). We can precisely calculate where the intersection happens:

Pros: By introducing k, Clipper achieves lower slippage (better price) when trading quantity is small. The following chart from the Clipper whitepaper further demonstrates this point.

[Chart from the Clipper whitepaper]

Cons: When trading quantity passes a certain threshold, the price will become significantly worse than CPMM.

In order to guarantee a better price, the algorithm has to constantly re-peg (changing x0 and y0) to keep the current pool reserves near the (1, 1) point. It could use the same mechanism Curve uses: re-peg by following an internal price oracle that tracks the market price. Essentially, this is equivalent to solving the same invariant, but this time with x, y known and P given by the price oracle; solving this equation for x0 gives us the new equilibrium point.

This ensures we always trade close to the market price with smaller slippage. Currently we have not investigated whether Clipper implements this or not, as this is not explained in the Clipper whitepaper. A further look at its source code is needed.

The price range concept can also be applied to Clipper:

TWAMM

In all the AMMs discussed above, we can only trade in one direction at one instance. What if I tell you there is an algorithm that came out recently that allows trading in both directions simultaneously?

The TWAMM¹⁴ (Time-Weighted Automated Market Maker, pronounced "tee-wham") algorithm transforms a long-term order over a period of time into an integral of infinitely small virtual orders, and the orders can go both ways at the same time. Additionally, orders executed over the same time range and in the same trading direction are pooled together to simplify the calculation. As a result, long-term orders over a time period are executed at a price equal to the time-weighted market price of that period.

As of right now, there only exist closed-form TWAMM solutions for two types of AMMs, CPMM and LMSR (Logarithmic Market Scoring Rule).

Let us consider the general case where, during a period of time, the total sale of token X is xin and the total sale of token Y is yin. The selling rate of X is f(t) and the selling rate of Y is g(t). The net change in the amount of token X in the pool from time t to t + dt is the amount of token X sold in, minus the amount of token X bought out during this interval at the exchange rate dy/dx. Because the quantity of token Y sold during this interval is infinitesimal, the spot price can be used as the actual exchange rate.

Thus, we arrive at a nonlinear first order differential equation. Depending on the form of dy/dx, f(t) and g(t), the equation may or may not have a closed-form solution.
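Written out (our reconstruction from the description above, with x(t) the pool's reserve of token X and |dy/dx| its spot price in units of token Y):

$$\frac{dx}{dt} = f(t) - \frac{g(t)}{\left|dy/dx\right|}, \qquad \text{which for the CPMM } xy = k \text{ becomes} \qquad \frac{dx}{dt} = f(t) - \frac{g(t)\,x^{2}}{k}.$$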

When applied to CPMM, the equation can be integrated, if f(t)/g(t) is a constant, meaning the selling strategies of token X and Y are the same. We can further simplify the expression:

There is an analytical expression for the integral. Using properties of hyperbolic functions, we get a nice-looking final solution for the token X reserve after trading, which depends only on the original position of the pool (x0, y0) and on xin, yin. The final token Y reserve can be obtained by swapping xin with yin and x0 with y0 in the expression for xend, since the CPMM market maker function is completely symmetric in x and y. The product of the final token X and token Y reserves equals k, as expected.
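For completeness, one way to carry out that integration (our own derivation under the assumption $f(t)/g(t) = x_{in}/y_{in}$): substituting $\tau = \int_0^t f(t')\,dt'$ (which runs from 0 to $x_{in}$) turns the equation into the separable form $dx/d\tau = 1 - \frac{y_{in}}{k\,x_{in}}\,x^{2}$, whose solution can be written as

$$x_{end} = \sqrt{\frac{k\,x_{in}}{y_{in}}}\cdot\frac{e^{2\sqrt{x_{in}y_{in}/k}} - c}{e^{2\sqrt{x_{in}y_{in}/k}} + c}, \qquad c = \frac{\sqrt{k\,x_{in}/y_{in}} - x_{0}}{\sqrt{k\,x_{in}/y_{in}} + x_{0}},$$

with $y_{end}$ obtained by swapping $x \leftrightarrow y$ throughout; one can check that $x_{end}\,y_{end} = k$, as stated.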

This form of differential equation derived from the CPMM actually has a technical name: it is a Riccati equation. The general form of the Riccati equation looks like:
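In standard notation,

$$\frac{dx}{dt} = P(t) + Q(t)\,x + R(t)\,x^{2},$$

which matches our CPMM equation above with $P(t) = f(t)$, $Q(t) = 0$, $R(t) = -g(t)/k$.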

There is no general closed-form solution to the Riccati equation. However, there are special cases where the Riccati equation can be solved. There is a paper¹⁵ discussing those cases. If the coefficients of the Riccati equation satisfy this condition:

Then the Riccati equation can be transformed into a Bernoulli-type equation, which can be solved quite easily and gives the same result as before. As we can see, satisfying this condition is the same as keeping f(t)/g(t) constant, which is what we assumed in the first method of solving the differential equation.

When f(t)/g(t) is not constant, which forms of f(t) and g(t) make the differential equation admit closed-form solutions is still an open question. Finding such solutions would give us more options (the selling strategies of token X and token Y would not have to be the same).

Now let us apply TWAMM to LMSR:

Again, we assume the selling strategies are the same. Then the differential equation can be integrated. We can further simplify the final token X and Y number expression as:

Similarly, the differential equation is not guaranteed to have closed-form solutions when the selling strategies are different.

Once we obtain xend and yend, we can calculate how much of the token X and token Y each side will receive:

Since all the orders in the same trading direction are pooled together during this period, each individual trader gets his fair share of the output token based on the percentage he contributed to xin or yin.
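Concretely, by conservation of tokens in the pool, the total token X paid to the Y-sellers is $x_{0} + x_{in} - x_{end}$ and the total token Y paid to the X-sellers is $y_{0} + y_{in} - y_{end}$; a trader who contributed a share $\Delta x / x_{in}$ of the pooled X sales receives that fraction of $y_{0} + y_{in} - y_{end}$ (and symmetrically for the Y side).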

Pros: TWAMM reduces the price slippage of large orders by allowing counter-parties to trade against them simultaneously. In the most ideal case (xin/yin = x0/y0), zero-slippage trading is achieved: xend = x0, yend = y0, and TWAMM effectively serves as an order book, exchanging tokens between the two sides without drawing on the pool's liquidity. Long-term orders are broken into infinitely small virtual orders executed between blocks. Because of this, TWAMM is less susceptible to sandwich attacks, since the attacker would have to place one order at the end of a block and another at the beginning of the following block.

Cons: The gas cost could be very high if orders are allowed to expire at any time, because the integral results (in the paper this is called "lazy evaluation") must be computed many times, in the worst case once per block. In practice, orders therefore have to expire at specific blocks to simplify the calculation. Besides, the liquidity pools TWAMM uses have to be separate from existing liquidity pools, since those have no concept of virtual orders and lazy evaluation, and regular traders would not want to pay the extra gas incurred by lazy evaluation when they interact with a TWAMM pool (the pool is updated whenever someone interacts with it).

We can also apply TWAMM to time-dependent AMMs such as YieldSpace¹⁶:

There are two forms of the market making function, both of which lead to differential equations that currently we do not know how to solve. The differential equations can be reduced to a single differential equation in the second form case.

Conclusions and Future Work

We hope this comprehensive-ish, introductory, study note style of paper can provide some insights to both people who do not know anything about AMM algorithms and people with more experience.

To summarize, the core of AMM algorithms is essentially the design of market maker functions and the manipulation of their curvature distributions. There is obviously another paper we need to read, by Guillermo Angeris and Tarun Chitra¹⁷, which discusses this in detail. On top of this, there are efficient price solutions such as DEX aggregators and efficient liquidity provision solutions such as price ranges. The recent TWAMM algorithm sheds light on how AMM algorithms can achieve the order-book style of matchmaking common in centralized exchanges. We believe the future of AMM algorithms will move closer to the order-book style.

As for future works, we plan to dig deeper on some of the problems mentioned in this paper. This includes reading papers [4], [5], [7], [16], [17], deriving impermanent loss formulas for other AMMs, deriving liquidity distributions for other AMMs, understanding how fuzzing works, answering the 3 questions we asked about Xcp, learning more about how to choose the most effective price oracle and s value, making an animation of the Curve repegging process, learning more about innovations on single-sided liquidity, trying to design a better dynamic weight K, applying price range to other AMMs, applying TWAMM to other AMMs and finding more closed-form solutions.

Acknowledgements

The authors thank Fangyuan Zhao, Showen Peng, DW for useful discussions on the topics of this paper. The authors would also like to thank Lianxuan Li at Huobi Research. The authors specially thank Dave White and Dan Robinson at Paradigm for the invitation to the TWAMM discussion group and their insightful discussions.

References

[1] Bancor Protocol Continuous Liquidity for Cryptographic Tokens through their Smart Contracts

https://storage.googleapis.com/website-bancor/2018/04/01ba8253-bancor_protocol_whitepaper_en.pdf

[2] Formulas for Bancor system

https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA

[3] Uniswap V2 Core

https://uniswap.org/whitepaper.pdf

[4] Uniswap’s Financial Alchemy

https://research.paradigm.xyz/uniswaps-alchemy

[5] SoK: Decentralized Exchanges (DEX) with Automated Market Maker (AMM) protocols

https://arxiv.org/abs/2103.12732

[6] Uniswap V3 Core

https://uniswap.org/whitepaper-v3.pdf

[7] Uniswap V3: The Universal AMM

https://www.paradigm.xyz/2021/06/uniswap-v3-the-universal-amm/

[8] A non-custodial portfolio manager, liquidity provider, and price sensor

https://balancer.fi/whitepaper.pdf

[9] Smart Order Router V2

https://docs.balancer.fi/developers/smart-order-router

[10] StableSwap — efficient mechanism for Stablecoin liquidity

https://curve.fi/files/stableswap-paper.pdf

[11] Automatic market-making with dynamic peg

https://curve.fi/files/crypto-pools-paper.pdf

[12] https://twitter.com/dken_w/status/1422623679150649345

[13] New Invariants for Automated Market Making

https://github.com/shipyard-software/market-making-whitepaper/blob/main/paper.pdf

[14] TWAMM

https://www.paradigm.xyz/2021/07/twamm/

[15] Analytical solutions of the Riccati equation with coefficients satisfying integral or differential conditions with arbitrary functions

https://arxiv.org/abs/1311.1150

[16] YieldSpace: An Automated Liquidity Provider for Fixed Yield Tokens

https://yield.is/YieldSpace.pdf

[17] Improved Price Oracles: Constant Function Market Makers

https://arxiv.org/abs/2003.10001

Disclaimer: This paper is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. This paper reflects the current opinions of the authors and is not made on behalf of AnchorDAO Lab or its affiliates and does not necessarily reflect the opinions of AnchorDAO Lab, its affiliates or individuals associated with AnchorDAO Lab. The opinions reflected herein are subject to change without being updated.