Permissioned distributed ledgers and the governance of money

We explore the economics and optimal design of “permissioned” distributed ledger technology (DLT) in a credit economy. Designated validators verify transactions and update the ledger at a cost that is derived from a supermajority voting rule, thus giving rise to a public good provision game. Without proper incentives, however, validators' records cannot be trusted, because validators cannot commit to verifying trades and can accept bribes to incorrectly validate histories. Both frictions challenge the integrity of the ledger on which credit transactions rely. In this context, we examine the conditions under which the process of permissioned validation supports decentralized exchange as an equilibrium, and analyze the optimal design of the trade and validation mechanisms. We solve for the optimal fees, number of validators, supermajority threshold, and transaction size. A stronger consensus mechanism requires higher rents to be paid to validators. Our results suggest that a centralized ledger is likely to be superior, unless weaknesses in the rule of law and contract enforcement necessitate a decentralized ledger.

1 Introduction

Money is a social convention. People accept money in payment in the expectation that others will do so in the future. Within an equilibrium with monetary exchange, holding money is a record of goods sold or services rendered in exchange for money. In this sense, money is a record-keeping device. Kocherlakota (1998) and Kocherlakota and Wallace (1998) showed that money as a record-keeping device is capable of doing the job of a complete ledger of all past transactions in the economy. The motto is that “money is memory”. In this spirit, monetary theorists have regarded money as performing the role of a publicly available and freely accessible record-keeping device.
While the concept of money as memory has been well known in theoretical circles, advances in cryptography and digital technology have opened the possibility of taking the idea of a complete digital ledger more literally, and of building a monetary system around such a ledger. With a public ledger, however, the issues that loom large are who should have the authority to update the ledger and how. This is all the more so given the incentive problems that arise to misrepresent ownership of funds. Under traditional account-based money overseen by an intermediary, for instance a bank, this authority is delegated to intermediaries: the bank updates the ledger by debiting the account of the payer and crediting the account of the receiver. In monetary systems without a central intermediary, however, the ledger must be updated by other means, such as distributed ledger technology (DLT), as exemplified by Bitcoin. DLT is a record-keeping device in the spirit of Kocherlakota’s analysis of money as memory. In permissioned DLT, a known network of validators can update the distributed ledger via the agreement of a supermajority of the validators. The aim is to achieve agreement on the state of some data in a network of nodes that constantly exchange the underlying data file, the ledger.1 Applications of permissioned DLT are being explored for securities settlement systems, trade finance solutions, “stablecoins”, and central bank digital currencies.2

1 For example, in a trade finance application, the state might be the delivery status of a set of shipments and respective payments, and the network includes anyone authorized to access the system. Users can transact in this system, for example by initiating a new shipping order. The purchase instruction needs to be written into the ledger, which means that validators need to read the past ledger and verify that the new transaction is indeed genuine. Once this has happened, they vote for the transaction to be included in the ledger. The set of rules by which agreement is achieved is called the consensus algorithm. This is a computer protocol that specifies the conditions under which a ledger is considered valid. Importantly, the consensus mechanism also guides how to choose between multiple versions of a ledger if conflicts emerge.

2 See e.g. Townsend (2020), Baudet et al (2020), Arner et al (2020), Auer et al (2020), and, on how to use blockchain technology to settle assets, Chiu and Koeppl (2019). A recent survey indicates that 86% of central banks are conducting research or development in the area of CBDC (Boar and Wehrli, 2021).

In this paper, we present what we believe is a first economic analysis of permissioned DLT in a monetary economy. We examine the economic opportunities and challenges of this technology, focusing on the strategic elements underlying its optimal design.3 We abstract from the details of the computing or cryptographic implementation and focus on the incentives of the validators that are needed to sustain mutually beneficial exchange as an equilibrium of a game. The validation protocol is constrained by two technological limitations. First, there is no technical way of forcing a validator to verify and sign any given transaction. Second, nothing can technically prevent a validator from validating multiple ledgers with conflicting histories. We examine theoretically how the optimal validation protocol deals with these constraints and derive the optimal number of validators, their compensation, and the optimal voting rule. In turn, we can determine how the optimal validation protocol affects the level of trade in the economy.

3 Of course, a decentralized payment system too will rely on central bank money for the underpinning of a stable value and an elastic supply; see Frost et al (2020). The decentralization concerns how this money circulates in the economy once it is issued by the central bank.

A model of credit

Our model has three building blocks. The first consists of an intertemporal model of exchange involving credit. Our economy has two types of infinitely lived agents, early and late producers. In each period, an early producer is randomly matched with a late producer and the pair engages in two subsequent production stages. In the early stage, the early producer produces goods for the late producer. In the late stage, the late producer should reciprocate and produce some goods for the early producer. We impose two main frictions on producers. First, there is private information: late producers can be faulty and unable to produce, but early producers cannot tell the difference between faulty and other producers. Second, late producers cannot commit to reciprocating. Therefore there is no trade unless a record-keeping device – the memory of our economy – tracks the actions of late producers and, in particular, whether a late producer has ever defaulted in the past.

Just as in Kocherlakota (1998), rather than users owning and paying with monetary tokens, the ledger’s memory of the production history suffices to allow for trustless exchange: it is well known (see e.g. Rocheteau and Nosal, 2017) that there is an equilibrium with trade when the trading history of a late producer is publicly and freely observable and automatically updates itself according to the behavior of the late producer. In turn, since the ledger’s memory is the essential value underpinning the economy, ensuring its integrity is the quintessential design issue to be solved. The second modeling block endogenizes the process of updating the history of trades, that is, the validation of records on the ledger. We assume that a number of agents known as “validators” are in charge of reading and updating the ledger of trade histories.
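As a minimal illustration of the supermajority rule described above, the following sketch shows how validators' reported labels could be aggregated into a ledger entry. This is our own stylized rendering, not the paper's formal mechanism; the 2/3 threshold, the labels, and all names are illustrative assumptions.

```python
from collections import Counter

def supermajority_outcome(votes, n_validators, threshold=2/3):
    """Return the label reported by at least a supermajority of the
    n_validators, or None if no label clears the bar.
    `votes` maps validator id -> reported label; validators who
    abstain simply do not appear in the mapping."""
    tally = Counter(votes.values())
    if not tally:
        return None
    label, count = tally.most_common(1)[0]
    # The bar is measured against the full validator set, so
    # abstentions count against reaching consensus.
    return label if count >= threshold * n_validators else None

# 7 validators: 5 report "good", 1 reports "default", 1 abstains.
# 5 >= (2/3) * 7, so the history is recorded as "good".
print(supermajority_outcome(
    {1: "good", 2: "good", 3: "good", 4: "good", 5: "good", 6: "default"},
    n_validators=7))
```

Note that counting the threshold against the full validator set, rather than against voters only, is what makes abstention costly for consensus, which is the friction the model exploits.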
For each trade involving a late and an early producer, some validators have to verify the history of the late producer and communicate the result to the early producer. The history is understood to be “good” (that is, without default) whenever a supermajority of validators say it is so. We assume that verifying histories has a known common cost, while the cost of communicating the result varies across validators, reflecting, e.g., the possibility of operational failures for some validators. Validators privately learn their cost of communicating histories, which consists of a common component and an idiosyncratic one. Since verification and communication are both costly activities, validators must be compensated for their efforts, and they will expect a payment from the pair of early and late producers whose history they have to validate. Since validation requires a supermajority of validators, this structure gives rise to a global game that we analyze using the approach of Morris and Shin (1998, 2003). As we explained above, validators cannot be trusted because (1) they cannot commit to verifying histories – while messages sent by validators are observable, checking is a costly, non-observable action, which raises a moral hazard problem – and (2) they can accept side payments to record a false entry. Our third and final building block is the analysis of the optimal design of the trade and validation mechanisms. The optimal mechanism chooses the number of validators, the supermajority threshold, the compensation of validators, as well as the trade allocation that maximizes the gains from trade subject to incentive compatibility conditions.

Results

We first show that reaching decentralized consensus among validators – as a unique equilibrium – entails paying rents to validators: sustaining a higher level of decentralized consensus as an equilibrium outcome requires higher rents.
Given their private costs, validators play a game that has the attributes of a public good provision game – the public good is provided if and only if a supermajority provides it – which we proceed to solve using global game methods (see Carlsson and van Damme, 1993 and Morris and Shin, 1998, 2003). We show that there is a unique, dominance-solvable equilibrium if and only if the rewards to being a validator exceed a threshold that increases in the average cost of validation. We show that decentralized consensus fails to be sustained when the rents accruing to validators fall below this threshold. This is so even when validation would be a possible equilibrium in a complete information game. The reason is that the uncertainty surrounding the fundamental communication cost can reverberate throughout the validation process: validators may choose to abstain from validating a trade when, given their private cost, they believe that other validators will also abstain. However, there is an equilibrium where validators validate a trade as long as their private cost is below some level. Driving the idiosyncratic component of the cost to zero, we find that the validation process “works” whenever the fundamental communication cost is below that level. In turn, this gives the probability that the validation process will be successful. That success probability falls with the supermajority threshold, but increases with the payments validators obtain. Therefore a higher supermajority requires higher payments to validators in order to guarantee the same success probability. In other words, reaching a higher level of consensus among validators requires higher rents to be paid to validators. Our second result is that, despite strategic uncertainty, a trading equilibrium can arise where the ledger truthfully reflects the history of trades.
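The comparative statics just described can be illustrated with a stylized Monte Carlo sketch. The uniform cost distributions, the cutoff rule (a validator votes whenever its private cost is below the reward), and all parameter values are our own illustrative assumptions, not the paper's calibration.

```python
import random

def success_probability(n, m, reward, trials=20000, seed=0):
    """Stylized sketch: each validator's communication cost is a common
    fundamental theta ~ U(0,1) plus small idiosyncratic noise; a
    validator votes iff its private cost is below the reward.
    Validation succeeds iff at least a supermajority m of the n
    validators vote."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, 1.0)  # common cost component
        votes = sum(
            1 for _ in range(n)
            if theta + rng.uniform(-0.05, 0.05) < reward  # private cutoff rule
        )
        successes += votes >= m
    return successes / trials

# A higher supermajority lowers the success probability at a given
# reward, and a higher reward restores it.
p_low  = success_probability(n=9, m=6, reward=0.5)
p_high = success_probability(n=9, m=8, reward=0.5)
p_paid = success_probability(n=9, m=8, reward=0.7)
print(p_low, p_high, p_paid)
```

With a shared random seed the three runs face identical cost draws, so the monotonicity (success probability falling in the supermajority m, rising in the reward) holds draw by draw, not just on average.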
We characterize the optimal trade size, supermajority, and number of validators, where optimality is defined as maximizing the surplus from trade net of validation costs. Naturally, the optimal solution is constrained by the incentives of late producers and validators. We find that intertemporal incentives are key to characterizing the optimal solution. When intertemporal incentives are strong, in the sense that the present values of future rewards are high, validators can be trusted, as they would have much to lose from accepting a bribe. In this case, the supermajority threshold should be high, there should be few validators, and they should earn high rents. The trade size is large in this case. In the limit, the supermajority tends to unanimity with few (measure zero) validators, trade is efficient, but the rent to the few validators is arbitrarily large. By contrast, when intertemporal incentives are weak, even high future rents are not enough to deter validators from accepting a bribe. Since the bribe size falls with the number of validators, there should be many of them. However, the supermajority should be relatively low, so that consensus is weak, and as a result the validators’ rents should be relatively small. In this case, the trade size will also be small. Our findings therefore suggest a number of initial conclusions. While it is costly to duplicate verification and communication across many validators, we find conditions under which many validators are better than one. Therefore, to use the terminology of Aymanns et al (2020), we find conditions under which a (trading) platform should be vertically disintegrated – a group of agents should handle the interaction between users – rather than vertically integrated, where a single intermediary has a monopoly over managing the interactions of platform users.
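The bribe-deterrence logic above can be illustrated with back-of-the-envelope arithmetic. The condition below is our own stylized rendering (the paper derives the actual incentive constraint in the model): a validator stays honest when the per-validator share of the bribe does not exceed the discounted stream of future validation rents.

```python
def bribe_proof(w, beta, total_bribe, n):
    """Stylized no-bribe condition (our notation, not the paper's):
    taking a bribe forfeits the discounted stream of future rents w,
    worth beta * w / (1 - beta).  If the briber's budget must be split
    across the n validators needed to flip the record, honesty requires
        total_bribe / n <= beta * w / (1 - beta)."""
    future_rents = beta * w / (1.0 - beta)
    return total_bribe / n <= future_rents

# Patient validators (high beta): few validators suffice.
print(bribe_proof(w=1.0, beta=0.95, total_bribe=30.0, n=2))   # 15 vs 19
# Impatient validators: the same future rent no longer deters...
print(bribe_proof(w=1.0, beta=0.5, total_bribe=30.0, n=2))    # 15 vs 1
# ...so the bribe must be diluted across many validators.
print(bribe_proof(w=1.0, beta=0.5, total_bribe=30.0, n=40))   # 0.75 vs 1
```

This is the trade-off in the text: strong intertemporal incentives allow few, highly rewarded validators, while weak ones call for many validators so that each one's share of any bribe is small.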
There are also economies of scope in trading and validation: achieving good governance and honest record-keeping is made easier by having validators who also participate in the market themselves and thus have an intrinsic interest in keeping it running smoothly. This also implies that validators should be selected from among the market participants. Our results on the supermajority naturally depend on the communication cost being stochastic and unknown. When the communication cost is common knowledge, unanimity is optimal and, to reduce the incentive to bribe validators, validators should be sufficiently numerous. Hence it is typically sub-optimal to have a central validator whenever validators have to satisfy incentive constraints and the communication cost is common knowledge. However, while it is optimal to have many validators absent any other frictions, doing so gives rise to a free-rider problem in solving information frictions, similar to the one in the seminal paper of Grossman and Stiglitz (1976). Since verifying a label is costly, we show that, under some conditions, validators have an incentive to skip verification while still communicating a good label, which jeopardizes the whole legitimacy of the ledger. To resolve this free-rider problem and maintain the integrity of the ledger, we show that (absent a unanimity rule where all validators are pivotal) validators' allocations should depend on which label they communicate to the ledger and how their communication compares with the supermajority. When the label they communicate differs from the supermajority outcome (which is observable and verifiable), validators should be excluded from trading and validating in the future. In this context, we derive a folk theorem of sorts for validators: as validators become more patient, the free-rider problem has no bite and any allocation satisfying the validators’ participation constraint can be implemented in our strategic set-up.
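The exclusion rule just described can be sketched mechanically (an illustration with our own naming, not the paper's formal punishment scheme): a validator whose report disagrees with the supermajority outcome is dropped from the active set and forfeits future rents.

```python
def surviving_validators(votes, consensus_label, validators):
    """Sketch of the exclusion rule (illustrative): a validator whose
    reported label differs from the supermajority outcome -- which is
    observable and verifiable ex post -- is excluded from trading and
    validating in the future.  Validators who did not vote are treated
    as deviating here; the rule could equally spare abstainers."""
    return {v for v in validators if votes.get(v) == consensus_label}

# Validator 3 reported "default" against a "good" consensus and is dropped.
print(sorted(surviving_validators(
    votes={1: "good", 2: "good", 3: "default", 4: "good"},
    consensus_label="good",
    validators={1, 2, 3, 4})))
```

Because the comparison is against the publicly observable consensus rather than against the (unverifiable) true history, the punishment can be automated without a trusted arbiter, which is what makes it compatible with the permissioned setting.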
Literature

A sizable literature analyzes the incentives of miners in Bitcoin and similar cryptocurrencies to follow the proof-of-work protocol.4 Kroll et al (2013) and Prat and Walters (2020) examine free entry and the dynamics of the “mining” market,5 while Easley et al (2019) and Huberman et al (2021) examine the economics of the transaction market. Budish (2018) and Chiu and Koeppl (2019) show that ensuring the finality of transactions in Bitcoin is very costly, as so-called “majority” or “history reversion” attacks are inherently profitable, while Auer (2019) examines whether the transaction market can generate sufficient miner income to ensure finality.6 Leshno and Strack (2020) present a generalization of such analyses, demonstrating that no other anonymous, proof-of-work-based cryptocurrency can improve upon the performance of Bitcoin. This serves as a benchmark for the analysis at hand to compare permissioned and permissionless market designs. Further to this, even in the absence of incentives to reverse history, sunspot equilibria can arise in proof-of-work-based blockchains (Biais et al 2019).7

4 Also the variant with betting on the truth instead of costly computation, i.e. proof-of-stake, is attracting increased attention (see Abadi and Brunnermeier 2018, Saleh 2021, and Fanti et al 2020). However, proof-of-stake can also be attacked via so-called “long-run attacks” (see Deirmentzoglou et al 2018 for a survey). Therefore, proof-of-stake implicitly assumes the existence of some overarching social coordination (see Buterin, 2014).

5 See also Cong et al. (2019) for an analysis of the concentration of mining and efficiency.

6 Such attacks are outlined in Nakamoto (2008). In these, the majority of computing power is used to undo a transaction in the blockchain by creating an alternative transaction history that does not contain the transaction. It is noteworthy that other attacks on cryptocurrencies are possible, including the possibility of “selfish” mining analyzed in Eyal and Sirer (2014). Gervais et al. (2016) present a dynamic analysis of the costs and benefits of various attack vectors. Garratt and van Oordt (2020) examine the role of fixed costs of capital formation for the security of proof-of-work-based cryptocurrencies. Böhme et al. (2015) and Schilling and Uhlig (2019) present discussions of broader economic implications and governance issues, respectively.

The literature on validator incentives and the design of permissioned versions of distributed ledgers is sparser. Most closely related to our analysis is Amoussou-Guénou et al (2019), who first modeled the interaction between validators as a game entailing non-observable effort to check transactions and costly voting. They also analyzed that game in terms of moral hazard and public good provision. Relative to their analysis, our contribution is to link the ledger validation game to monetary exchange, establish the uniqueness of the equilibrium via a global game approach, and characterize the optimal mechanism design, in particular in terms of the number of validators, the size of transactions, and the optimal supermajority voting threshold. In our work, all validators are profit-seeking, and the issue at the heart of the paper is how the market can be designed so that profit-seeking validators actually verify the ledger and validate only correct histories.8 The focus on dealing with free-riding and coordination relates to several classical strands of work on coordination among many actors. Reminiscent of Grossman and Stiglitz (1976), free riding can prevail in the case of multiple validators. Consistent with Biais et al (2019) and Amoussou-Guénou et al (2019), we also derive a folk theorem. Last, we note that we do not model monopoly power in the market for transactions. Our paper also has ramifications in the banking literature, starting with Diamond (1984) and Williamson (1986, 1987), where banks are modeled as a way to save on monitoring costs.
Another approach, pioneered by Leland and Pyle (1977) and developed by Boyd and Prescott (1986), models banks as information-sharing coalitions. Gu et al. (2016) show that higher rents can discipline intermediaries, while Huang (2019) uses that model to study the optimal number of intermediaries when they have an incentive to divert deposits. Related analyses studying the optimal composition of the money stock between inside and outside money can be found in Monnet (2006), Cavalcanti and Wallace (1999a, b), and Wallace (2005). Global games techniques have also been introduced in the banking literature to study the probability that a bank run occurs, by Rochet and Vives (2004) and Goldstein and Pauzner (2005). In game theory, the literature on incentives with public and private monitoring is large, and it is beyond the scope of this paper to summarize it all (see Kandori, 2001 for an early survey). However, we would like to mention Rahman (2012), who studies a problem of private monitoring where the observation of the monitor(s) is not verifiable. He shows that sending a false positive to test the “attention” of the monitor can be optimal, or in his own words, “the principal allocates private information to provide incentives.” When considering the free-rider problem, we also find that there must be enough faulty producers to induce the correct behavior from validators. However, our planner does not know whether a match involves a faulty producer, while the principal in Rahman (2012) knows when the false positive is sent.

7 See Carlsten et al (2016) for a related argument based on simulations and Pagnotta (2021) for an examination of multiple equilibria in the presence of a feedback loop between blockchain security and cryptocurrency valuation.

8 Note that Amoussou-Guénou et al (2019) do not examine history reversion attacks; rather, byzantine attackers are assumed to attempt bringing the system to a halt for exogenous reasons.
Section 2 describes the main features of permissioned DLT that we think any model of permissioned DLT should capture. Section 3 then lays down the basic set-up and characterizes benchmark allocations both absent a record-keeping device and with a freely accessible one. Section 4 defines incentive-feasible allocations with DLT and characterizes the optimal allocation, including the optimal number of validators. We analyze the free-rider problem in Section 5.

Conclusion

In this paper, we have presented an economic analysis of permissioned distributed ledger technology in an economy where money is essential. To our knowledge, our analysis is the first economic analysis of permissioned DLT in such a context. It links a ledger validation game to monetary exchange, establishes the uniqueness of the equilibrium via a global game approach, and characterizes the optimal mechanism design, examining the optimal supermajority voting rule, number of validators, and size of transactions. We believe our analysis is a timely one, as permissioned DLT is rapidly becoming an industry standard for digital currencies and other applications. In particular, our results can shed light on the burgeoning literature on central bank digital currency insofar as they give conditions under which a central authority should manage the ledger of transactions.33 The economic discussion of the technology and economics of central bank digital money has thus far centered on balance sheet effects and related systemic implications.34 Here, we focus not on balance sheets and the issue of how the value of a currency can be guaranteed (central bank backing is of the essence for a CBDC irrespective of our analysis), but on the governance of money when it is used in exchange as the record-keeping device of society. Of course, we have made simplifying assumptions in order to better grasp the basic economics of memory. Future work should relax some of these.
For instance, we have assumed that one individual can hold only one account, so that the reputation of the individual and that of his/her account are intertwined. In practice, ledgers only record transactions for a given account, and it is usually difficult to trace the identity of the account's owner. Our analysis, however, would extend directly to reputation attached to accounts rather than to individuals.35 Also, to simplify the analysis, we have assumed that validators all agree to accept bribes in unison. It would be interesting to study cooperative games between validators in more detail. We have also taken as given that agents use a private permissioned ledger because they want to preserve their anonymity in trades. Tirole (2020) and Chiu and Koeppl (2020) make progress on this front. In order to better compare the different types of ledger, future work should also include the benefit from preserving anonymity. Finally, our mechanism design approach means that we have abstracted from the industrial organization aspects of DLT, which might be significant if this technology were to be widely adopted in the future.
