Questions and Answers

On Monday, November 6th, we held a Discord AMA on the recent series of videos, blog posts, and wiki pages that introduce the upcoming IOTA 2.0 protocol update. Community members put several questions to Andrew Cullen, Andrea Villa, Billy Sanders, Darcy Camargo, Nikita Polianskii, and Piotr Macek from our team of researchers and engineers, and added their own insights.

We’ve reproduced the questions and answers (lightly edited for clarity) here for those of you who missed it. To explore the issues raised here further, check out our series of blog posts introducing IOTA 2.0, listed at the bottom of this page. And for those of you who prefer to watch rather than read, we’ve also shared some of the best questions with long-time community member Linus Naumann in the video below.

Thirty Questions About IOTA 2.0

1) What’s your expectation of how the Mana market will develop? How much monetary value compared to what you stake are you expected to receive in return for staking (APY)? 1-5%?

Andrew: Firstly, the Mana market isn’t something the IOTA Foundation intends to invest resources into developing, but is something that we intend to leave to the community. We have put a lot of thought into how we’ve implemented Mana itself, to ensure that it is flexible enough to be used in lots of different ways, from its primary use for block issuance to Layer 1 and Layer 2 smart contracts, to a measure of reputation, to buying and selling on a "Mana market". I could foresee the earliest embodiments of Mana markets taking the form of third-party providers developed by the community that sell Mana for fiat currency or IOTA tokens. This version of a Mana market would exist alongside block issuance service providers that simply issue blocks on users' behalf and don't even require Mana from the user. Offering block issuance as a service is a form of Mana market as well because you’re paying for a service that requires Mana. More sophisticated DEX-based Mana marketplaces would evolve after the arrival of L1 smart contracts.

Unfortunately, I can't speculate about the expected APY for staking, but I can tell you that there is a simple 1:2:3 rule for calculating how much additional Mana you can earn by delegating or staking rather than simply holding tokens. Holding Y tokens earns you X Mana, delegating those tokens earns you at least 2X Mana, and staking those tokens earns you at least 3X Mana (provided you do your job correctly as a validator).
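To make the 1:2:3 rule concrete, here is a minimal sketch in Go; the generation rate is a made-up placeholder, not a protocol parameter:

```go
package main

import "fmt"

// A made-up base generation rate, purely for illustration.
const manaPerTokenPerSlot = 0.001

// holdingMana is the baseline: holding tokens earns X Mana.
func holdingMana(tokens, slots float64) float64 {
	return tokens * slots * manaPerTokenPerSlot
}

// delegatingMana earns at least 2X.
func delegatingMana(tokens, slots float64) float64 {
	return 2 * holdingMana(tokens, slots)
}

// stakingMana earns at least 3X, provided validation duties are performed.
func stakingMana(tokens, slots float64) float64 {
	return 3 * holdingMana(tokens, slots)
}

func main() {
	tokens, slots := 1_000.0, 8_640.0 // e.g. one day of 10-second slots
	fmt.Printf("hold: %.0f, delegate: %.0f, stake: %.0f\n",
		holdingMana(tokens, slots), delegatingMana(tokens, slots), stakingMana(tokens, slots))
}
```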

2) What are the hardware requirements to run a node as a validator?

Piotr: Running a validator node will not require significantly more resources than running a regular node. The only difference is that a validator will need to run the inx-validator INX extension, which doesn’t use a lot of resources. We’ve only just implemented the validator plugin, so we don't yet know the exact requirements, but the plugin basically constructs and issues a block at regular intervals, which doesn't consume a lot of resources.

3) Is the monetary value of Mana taken into account for the development parameters; e.g. artificially reducing capacity to increase the value of Mana?

Andrew: The supply of and demand for Mana are certainly taken into account in choosing the parameters of the network. For example, in the early stages of the network, the decay of Mana will be very slow, which allows participants to hoard Mana while it is still of low value. However, when the network is more mature and Mana becomes more valuable, the decay rate will increase so that Mana can no longer be hoarded. So we will not artificially limit supply just to make it valuable; it's sort of the opposite, actually. Mana will become valuable when the network is useful, but there won't be any tricks to make it valuable before that.

4) How does IOTA 2.0 address the scalability challenges that the previous version faced, and what innovative features have been implemented to ensure faster and more efficient transactions?

Andrea: The IOTA 2.0 protocol has been redesigned from the ground up around a leaderless and parallel voting mechanism. All validators, regardless of the mechanism involved in selecting them, can simultaneously vote on blocks and transactions appearing on the Tangle. This is in contrast to the current milestone approach, where a section of the Tangle can only be evaluated upon receiving a milestone. In IOTA 2.0, nodes keep live track of the weight of any object until it reaches the acceptance threshold.

5) Can you provide insights into the security enhancements of IOTA 2.0, particularly in relation to preventing potential attacks and securing the network for long-term viability?

Andrea: Unlike the linear confirmation process in the previous version, IOTA 2.0 allows multiple parts of the Tangle to be validated concurrently, significantly increasing transaction throughput. The same is true of its validation and voting process: it is parallel and decentralized, as there is no single Coordinator to rely upon. In IOTA 2.0, a set of validators is selected through a committee selection process backed by delegated Proof of Stake. A committee of validators is selected every epoch (a multiple of slots, roughly a day in length). In this scheme, a candidate can become an actual validator by providing enough stake, receiving stake delegations from users in the network, and proving its ability to follow the protocol before the selection process of the following epoch. This way, the network is not only secured by sufficient stake but is also resilient against validators becoming malicious or going offline.

6) What strategies has IOTA 2.0 adopted to reduce its carbon footprint and contribute to more environmentally friendly blockchain technology?

Piotr: Compared to the previous version, the improvement is that we no longer rely on Proof of Work for congestion control, which reduces the amount of computation. A detailed analysis was performed on the GoShimmer prototype some time ago; the energy usage and the overall approach haven't changed, so it's still roughly up to date: https://blog.iota.org/energy-consumption-of-iota-2-0/.

7) How does IOTA 2.0 address the potential threat of adversarial validators attempting to manipulate the confirmation process in the Tangle, and could you provide more details on the preventive measures that have been implemented to safeguard the network?

Nikita: The protocol itself serves as our safeguard, enabling us to tolerate up to a third of adversarial validators in the committee. This protocol can be segmented into various components, including the congestion control mechanism, the approval weight module, the tip selection algorithm, the chain switching rule, etc.

Each of these components contributes protection in distinct ways. Our tip selection algorithm enables the selection of all blocks issued by honest nodes, ensuring that no honest block can be censored or disregarded by adversarial validators; conversely, adversarial blocks will eventually be ignored by honest nodes. Our tip selection also allows all honest nodes to consistently generate slot commitments. A block issued X seconds ago is expected to have received an approval weight of F(X) from the committee, preventing blocks with an old timestamp from infiltrating the Tangle.
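To illustrate that last point, here is a hedged sketch with an invented linear F(X); the slope and cap are not the protocol's actual values:

```go
package main

import "fmt"

// expectedWeight is a stand-in for F(X): the approval weight (as a
// fraction of total committee weight) a block of age X seconds is
// expected to have accumulated. Monotonically increasing, capped at 1.
func expectedWeight(ageSeconds float64) float64 {
	w := 0.05 * ageSeconds // invented slope, for illustration only
	if w > 1.0 {
		w = 1.0
	}
	return w
}

// isEligibleTip rejects blocks whose observed weight lags behind their
// age, which keeps blocks with old timestamps out of the Tangle.
func isEligibleTip(observedWeight, ageSeconds float64) bool {
	return observedWeight >= expectedWeight(ageSeconds)
}

func main() {
	fmt.Println(isEligibleTip(0.10, 1))  // true: fresh blocks need little weight
	fmt.Println(isEligibleTip(0.10, 60)) // false: too little weight for its age
}
```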

The congestion control mechanism is designed to prevent spamming by adversarial validators and nodes. Using the approval weight module, each node logically interprets its local DAG, the Tangle, and arrives at the same conclusions concerning the confirmation and finalization of blocks and transactions.

For more in-depth insights, we recommend referring to the Wiki articles Communication Layer: Congestion Control, Block Creation and Processing, and Consensus on a DAG.

8) Regarding the alignment of goals and incentives between users and validators in IOTA 2.0, can you elaborate on the specific mechanisms that enable users to produce their own blocks and how this eliminates the extraction of value from users through inflation, ultimately benefiting the entire network?

Andrew: The "specific mechanisms" in question pretty much span the entire protocol, so it isn’t easy to point to one protocol feature that enables this. First of all, it’s the Mana-based congestion control that enables users to create their own accounts and issue and pay for their own blocks. They also generate their own Mana by simply holding tokens (we consider holding tokens valuable to the protocol, as it makes the tokens scarce and the network more secure), and they can increase their generation rate by delegating (considered a more valuable use of tokens, and so earning more rewards). Validators are user accounts that have locked up their tokens; they issue validator blocks periodically to essentially vote for the parts of the Tangle they see as correct, which is an extra-valuable service and earns even more Mana. These different classes of users earning different amounts of Mana have an effect similar to inflation, because Mana (access to issue blocks) is redistributed from those who contribute least to the protocol to those who contribute most. However, this occurs in Mana only, and not in the base token. Also, users never pay a validator to approve their transactions, as happens in many protocols with fees for prioritization, and this eliminates other forms of value extraction as well as inflationary ones.

9) Could you explain the role of Mana burn in preventing spam and regulating the number of blocks users can issue within a specific time interval, and how this mechanism differs from other similar approaches, such as Ethereum's EIP-1559?

Andrew: Our Mana burn mechanism has some similarities with EIP-1559, although the idea came from our earlier adaptive PoW proposal rather than being inspired by EIP-1559. We define a reference Mana cost (RMC) that is equivalent to the base fee in EIP-1559, but the main difference in our approach is that there is no additional tip for prioritization. The RMC adapts slowly to increasing demand for access: it can increase each 10-second slot to raise the cost of a transaction in response to persistent increases in demand. However, in spikes of congestion this mechanism is of little help, which is where the tip comes in for EIP-1559, to prioritize traffic during those spikes. In our case, this prioritization is instead done with a Deficit Round-Robin (DRR) scheduler, which allows transactions through for each account holder at a rate proportional to their Mana holdings. Users can't pay for priority, because that leads to the unpredictable fees and delays we see in other networks. Instead, users take what they can get based on their Mana holdings during these peaks, and the payoff is predictable delays and fees based entirely on the protocol-defined RMC. Check out the Wiki article on congestion control for a detailed explanation if you haven't already.
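To illustrate the difference from EIP-1559, here is a minimal sketch of a tip-less RMC controller; the target load and adjustment step are hypothetical, not protocol parameters:

```go
package main

import "fmt"

// Hypothetical controller parameters, for illustration only.
const (
	targetBlocksPerSlot = 100.0
	maxStepPerSlot      = 0.125 // at most 12.5% change per 10-second slot
)

// nextRMC nudges the reference Mana cost toward the target load once per
// slot. There is no tip on top: this is the entire price of access.
func nextRMC(rmc, blocksLastSlot float64) float64 {
	delta := (blocksLastSlot - targetBlocksPerSlot) / targetBlocksPerSlot
	if delta > 1 {
		delta = 1
	} else if delta < -1 {
		delta = -1
	}
	return rmc * (1 + maxStepPerSlot*delta)
}

func main() {
	rmc := 100.0
	for slot := 1; slot <= 5; slot++ {
		rmc = nextRMC(rmc, 150) // persistent 50% over-demand
		fmt.Printf("slot %d: RMC %.2f\n", slot, rmc)
	}
}
```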

10) How does IOTA 2.0's congestion control mechanism, which allocates throughput based on users' Mana holdings, ensure consistent and predictable fees while preventing unpredictable delays in transaction processing, as typically seen in congested blockchain networks?

Andrew: Firstly, our congestion control doesn’t use a priority queue. Priority queues are the primary cause of unpredictable fees and delays in most blockchain networks, i.e. validators choose the highest-value transactions to include. In IOTA 2.0, we set the fee at the protocol level using a mechanism very similar to Ethereum's base fee (EIP-1559), but we don’t allow any additional priority tip on top of that fee. In times of high congestion, transactions are prioritized based on users' total Mana holdings using a DRR scheduler. The DRR scheduler offers far more consistent and predictable delays than a priority queue, but comes with the trade-off that you can't "buy priority" in times of high congestion: you have to work with the rate you have available based on your total Mana holdings.

11) The section about the scheduler mentions that it iterates through block issuers based on their deficit, which is proportional to their Mana. How does this approach eliminate the need for users to bid for transaction priority, ensuring efficient block processing without excessive fees or delays?

Andrew: There is simply no way to increase your transaction's priority in the scheduler in times of high congestion. The DRR scheduler works like a bucket with a hole in it for each account holder, and the size of the hole is proportional to your Mana holdings, so your transactions get through at a rate proportional to those holdings. There is no way around this by paying extra; it's baked into the protocol. We have done lots of research on scheduling policies, and this one provides by far the best consistency, fairness, and security properties, so user experience won't suffer as it does in other projects with unpredictable priority fees in times of congestion.
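Here is a minimal sketch of that bucket analogy as deficit round-robin; the names and numbers are illustrative, not the actual node implementation:

```go
package main

import "fmt"

// issuer is one account holder with a queue of pending blocks.
type issuer struct {
	id      string
	mana    float64
	deficit float64
	queue   []int // pending block sizes, in work units
}

// schedule runs the DRR loop: each round, every issuer's deficit grows by
// a quantum proportional to its Mana share (the "size of the hole"), and
// blocks are scheduled while the deficit covers them.
func schedule(issuers []*issuer, totalMana float64, rounds int) {
	for r := 1; r <= rounds; r++ {
		for _, is := range issuers {
			is.deficit += is.mana / totalMana
			for len(is.queue) > 0 && is.deficit >= float64(is.queue[0]) {
				is.deficit -= float64(is.queue[0])
				is.queue = is.queue[1:]
				fmt.Printf("round %d: scheduled block from %s\n", r, is.id)
			}
		}
	}
}

func main() {
	// The whale drains blocks ~9x faster; the minnow still gets through.
	a := &issuer{id: "whale", mana: 9, queue: []int{1, 1, 1, 1}}
	b := &issuer{id: "minnow", mana: 1, queue: []int{1, 1}}
	schedule([]*issuer{a, b}, 10, 20)
}
```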

12) Is there a cap on the number of positions for IOTA 2.0 node validators, and what is the maximum number of positions in the validator committee?

Andrew: Anyone can register as a validator, and there is no limit on the number of registered validators. However, there is a cap on the size of the validator committee (the active validators) in any epoch (roughly one day), and that cap is a protocol parameter. We have a parameters taskforce currently determining the optimal combination of parameters like these for launch, but there will probably be around 50 validators per epoch in the committee. Selection for the committee will initially be based entirely on staked tokens, i.e., only the top stakers will be included in the committee each epoch.

13) What level of MPS/TPS was achieved during internal stress tests and what level would you expect for the testnet? Are there any (hard/soft) caps in place? Could you please provide an overview of the expected finality times? Are they fixed or might they evolve over time (if dependent on parameters)?

Piotr: We haven't performed any actual stress testing yet, as the software hasn't reached a state where those kinds of things can be tested. Internally on our local machines, we've been running up to 500 BPS; however, this number doesn't say much on its own because those were simple data blocks that didn't execute the VM and didn't mutate the ledger. We have yet to design a proper stress test, taking into account different block and transaction types and the hardware on which we're going to run those tests. The only cap is the scheduler rate, which has not yet been chosen. Finality (meaning that something will never, under any circumstances, be reversed) will be a matter of minutes, as this happens at the commitment level, and generating a commitment is delayed by a minute. We will, however, have a confirmation flag at the block level as well, which will also mean that a supermajority of validators have seen the block; under normal conditions, that's going to be as good as final. However, we don't call this finality, because there are some edge conditions under which such blocks could, in theory, be reversed. This is extremely unlikely, though. This paper analyzes the confirmation time. It doesn’t use the latest version of the acceptance gadget, but the dependencies still hold.

Community Member: Adding on to this, the hardware plays a big factor in the throughput a node will reach. When we spam-tested Chrysalis, smaller servers could sustain a constant 1,200 MPS, and bigger servers had no problem going way over 3,000. Pure MPS numbers don't mean anything. It will take an effort from testers to find out what bottlenecks the RC will have and how to mitigate them. Back then, starting at three to four vCPUs, disk would be the limiting factor.

Piotr: Right now we hope that IO won’t be a major problem, as we operate mostly in memory, but we will have to find out what the actual bottleneck is.

14) After the launch of IOTA 2.0, what advantages will we have compared with other cryptocurrencies (e.g. ADA, SOL)? What are the disadvantages? In the meantime, after the launch of IOTA 2.0, what is our roadmap for the next steps? What directions need to be taken after IOTA 2.0 to strengthen it?

Andrea: IOTA 2.0 stands out in the DLT landscape primarily because the Mempool is inherently integrated into its data structure, unlike in traditional blockchains. This integration facilitates continuous and parallel voting on transactions throughout the network, which enables a more dynamic and fluid consensus mechanism compared to the block-by-block approach of conventional blockchains.

The seamless inclusion of the Mempool within the Tangle also means that conflict resolution becomes a more streamlined process. There is no waiting for the next block to address conflicts; they are resolved in real time as they arise, allowing for a system that is more responsive and agile.

Looking towards the future with the advent of extended Layer 1 programmability, IOTA 2.0 is poised to mitigate critical issues such as Miner Extractable Value (MEV). Traditional blockchains are susceptible to MEV because miners can manipulate transaction order within a block to their advantage, potentially destabilizing the network and introducing unfairness. IOTA's architecture naturally avoids such pitfalls as it doesn't rely on block or miner-based transaction ordering, which could lead to a more stable and equitable network environment. In a nutshell: block proposer and builder separation arises from the data structure itself instead of being an addendum on top of an existing mechanism.

15) How decentralized will the validator committee really be if the committee consists of for example 10 of the richest wallets or if all people delegate to a few validators? What I was hoping to see with IOTA 2.0, is a system where, in theory, multiple small validators can outvote the largest validator. For example, 1000 nodes with 100,000 IOTA each have the same voting weight as one node with 100,000,000 IOTA. The more validators, the more secure the network is. I was hoping to run my own IOTA node and to help secure the network. But if I'm not one of the top IOTA holders or a famous community member who gets a lot of delegations, my node will never be included in the validator committee, and therefore my node doesn't raise the strength and security of the whole network. Is this correct?

Nikita: That is a good question, as it is about true decentralization. In the current version of the protocol, the committee will be formed by taking the active validators with the most stake. The size will probably be around 32 validators. So, if you don’t have enough delegation plus your own stake, you might not be included in the committee. We reduce the power of the richest validators by capping the voting weight of each committee member. In general, to improve decentralization, we expect to introduce a fair randomized lottery into the committee selection process in a future upgrade. Then every stakeholder's chance of sitting on the committee will be fair and proportional to the amount of their stake.

Community Member: Doesn't anything but one token = one vote (e.g. a linear relation between tokens and voting power) open you up to Sybil attacks, since one user with 100 IOTA can easily make 100 accounts with one IOTA each? Reducing the power of the richest validators only works if they are honest (and in that case, who cares?). If they aren’t honest, they can easily work around it.

Nikita: There are two properties that we are talking about here. The first property is to have a committee selection procedure that is robust to splitting and merging staked tokens, i.e. it gives a fair opportunity for everyone to participate in the committee. The second property is for smaller users with smaller stakes (100 users with 10 staked IOTAs) to be able to outweigh the influence of the wealthiest stakeholder (one user with 1000 staked IOTAs). Satisfying the first property negates the second because the richest user could simply create 100 entities with 10 staked IOTAs each and control them all. But we can achieve certain tradeoffs between both properties, e.g. we can make the expected voting weight of a user fair (proportional to the staked tokens) and cap the power of every single user in the committee.
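A small sketch of that capping tradeoff, with a hypothetical 20% cap on each committee member's weight:

```go
package main

import "fmt"

// cappedWeights assigns each member a weight proportional to stake, but
// no single member may exceed maxShare. The cap value is hypothetical.
func cappedWeights(stakes []float64, maxShare float64) []float64 {
	var total float64
	for _, s := range stakes {
		total += s
	}
	weights := make([]float64, len(stakes))
	for i, s := range stakes {
		w := s / total
		if w > maxShare {
			w = maxShare // the richest member's influence is bounded
		}
		weights[i] = w
	}
	return weights
}

func main() {
	// One whale and four small validators: the whale's ~96% raw share
	// is clipped to 20%, while the small validators keep theirs.
	fmt.Println(cappedWeights([]float64{1000, 10, 10, 10, 10}, 0.20))
}
```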

16) On a similar note, it seems like network bandwidth is a limiting factor for every DLT if it is truly distributed on a global scale. Decreasing the messaging overhead, allowing asynchronous and parallel transactions, optimizing node software to multithread, and keeping compute requirements low are ways to increase network throughput. Yet, in any DLT, a supermajority of the network still needs to see a transaction before it is confirmed. This makes network latency a big factor for confirmation and finalization. Currently, slot commitments in IOTA 2.0 occur at 10-second intervals, and this provides finality. Is a "confirmation" probabilistic before this 10-second interval, and deterministic after a slot commitment? The deCOO is running at roughly two-second intervals on Shimmer. This provides a "clock" similar in some ways to the slots in IOTA 2.0. Could the slot commitments run at intervals of less than two seconds, or do they need to be longer to allow for heterogeneous networking conditions or other factors? Could there be cases where confirmed but not finalized transactions would be "good enough"?

Nikita: In the current protocol version, the confirmation process is deterministic, assuming the network remains synchronous for a minute or so and experiences no unexpected congestion. This assumption is generally reasonable in most use cases; for instance, it is totally fine to rely on confirmation or even acceptance when the transaction's value is relatively small.

Issues may arise when the network becomes asynchronous. Specifically, the only scenario where a confirmed transaction might not transition to a finalized state is if, immediately after confirmation, the network becomes fully partitioned, resulting in validators producing diverging commitments. In cases involving slow or adversarial validators, it's possible that a commitment might lack the confirmed transaction, despite a supermajority of the network having acknowledged its confirmation. If a network disruption occurs, there is the potential for the network to adopt the commitment chain of these slow or adversarial validators, excluding the confirmed transaction. While we haven't observed such cases in our test scenarios, it's theoretically possible to design artificial examples where this issue could occur.

We could make the slot duration as small as we like, but this would not completely solve the "slow finalization" problem, since nodes don’t produce and reveal slot commitments immediately after the slot ends but after specifically designed delays. These delays are important for consistency because they ensure that all honest nodes produce the same commitments for slots under synchronous settings. You can read more details about the consensus flags in our Wiki article on consensus.

17) What were the main barriers that caused so much delay in IOTA 2.0? What's your least favorite part of IOTA 2.0 that you think needs to be optimized?

Andrea: The properties that arise from our data structure are awesome: a common and causal Mempool is directly part of the data structure itself. But that data structure is very hard to get right! The DAG arising from the blocks in the Tangle needs to logically co-exist with a nested and orthogonal DAG living within it: the UTXO DAG, which represents the causality of the spends involved in the transactions observed on the Tangle. While these processes are, to some extent, orthogonal to each other, they must also be coordinated for convergence to arise across the network. The interplay between this interaction and the tip selection mechanism required us to reconsider very fundamental concepts: time perception, tip selection, etc.

The current IOTA 2.0 protocol is the result of a long series of failed ideas and refinements of successful ones. I believe that the way we internally model conflict preference across conflict sets could be simplified by handy new primitives we introduced in other sections of the code. In addition, the "explicit voting" via specialized reference types that every block carries came into existence because of the need to identify portions of the Tangle to be orphaned; unless this need proves useful again in the future, I believe a simplified, "implicit" voting mechanism is possible, reducing complexity in the tip selection and booking components.

18) How is the strategy of holding and selling Mana in IOTA different from the conventional approach of selling block reward tokens immediately? Given that block producers in traditional blockchains like Ethereum can (1) use ETH rewards as a proxy for network throughput by bidding up GAS prices and (2) delay selling their rewards in anticipation of higher demand and prices, is there an aspect of Mana's design that inherently differs or offers advantages in this regard?

Andrew: There are two ways that our approach is fundamentally different. The first is that the rewards in question are not the base token, and the second is that there is no straightforward option for "bidding up" the price of access. The only way you can single-handedly increase the price of issuance is by congesting the network, which is tremendously expensive and completely infeasible. Delaying selling rewards in anticipation of higher demand would be no different from other cryptos.

Billy: Also, our system encourages users to hold Mana rather than dump it: Mana is not worth anything until there is use of the ledger. When there is use, there is adoption, and the system can sustain the value extraction. In most DPoS systems, people buy coins, earn some rewards, and then cash out, so those systems are constantly bleeding value.

Community Member: This does, however, raise the question of whether users will want to run a validator, since it is essentially gambling on higher prices, as the rewards won't be enough to pay for node operation. Also, there might be competition from other income methods like yield farming.

Billy: There are certainly some tradeoffs. But this is a tradeoff for people who want to be early adopters. The first Bitcoin miners had no guarantee they weren’t wasting their time. But in the end, we want to encourage people to be validators who want to use the system and want to stick around rather than people who want to make a quick buck.

19) You are introducing a new asset, "Mana". Now the ledger, clients, and end users have to track more complicated stuff to be able to use the protocol. Do you think this can be hidden from the end user?

Piotr: It can be hidden in part, so that the regular end user doesn’t need to see how much Mana they need to issue a block, what the generation and decay factors are, and all the other complex stuff - a wallet could simply show something like "you need to wait five minutes to generate enough Mana to issue another block". Of course, that will depend on the wallet implementation, but we will strive to make it as seamless as possible.
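For illustration, a wallet could boil all of the Mana bookkeeping down to a single wait-time estimate, roughly like this sketch (the generation rate and block cost are placeholders, not protocol values):

```go
package main

import "fmt"

// Placeholder parameters, for illustration only.
const (
	manaPerTokenPerSlot = 0.001
	slotSeconds         = 10
)

// minutesUntilIssuable estimates how long until the account has generated
// enough Mana to cover one block at the current reference Mana cost.
func minutesUntilIssuable(balanceTokens, currentMana, blockCost float64) float64 {
	if currentMana >= blockCost {
		return 0
	}
	perSlot := balanceTokens * manaPerTokenPerSlot
	slotsNeeded := (blockCost - currentMana) / perSlot
	return slotsNeeded * slotSeconds / 60
}

func main() {
	wait := minutesUntilIssuable(1_000, 0, 30)
	fmt.Printf("You need to wait %.0f minutes to issue another block.\n", wait)
}
```

With these made-up numbers, the wallet would show exactly the five-minute message from the example above.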

20) Could other crypto projects (even blockchains) use the IOTA 2.0 consensus mechanism?

Nikita: Yes, other projects could potentially use ideas from our consensus protocol (e.g. our confirmation rule requires only three network trips to achieve confirmation, which is the bare minimum). One important design choice of our consensus protocol is dynamic availability, i.e. even under serious network disruptions with many validators offline, the network constantly updates an available ledger. To the best of our knowledge, this feature is not currently present in other DAG-based protocols (except the PoW-based Kaspa, which does not have absolute finality).

21) Is the work of the research team finished or are you working on something else? Are you planning to change the protocol again after IOTA 2.0?

Billy: The research work for IOTA 2.0 is done, but the protocol will most definitely change. For instance, we’re already planning to implement cryptographic sortition in the committee selection mechanism. There were several ideas that we wanted to include in IOTA 2.0, but we had to draw a line in the sand so we could deliver. In the future, we want to get into a rhythm of smaller protocol updates and continual maintenance, rather than massive projects like Coordicide.

Piotr: As you can see on the project board for ‘iota-core’, there are a lot of nice-to-have issues. However, these changes won’t come as a big release that turns the protocol on its head, but rather in smaller releases. One such update will be changing the committee rotation algorithm to something more robust than taking the top stakers.

Community Member: Now that IOTA 2.0 is decentralized, how are you going to push future changes to the protocol? Are you going to create forks, use the discussions on TIPs, etc.? You need to get acceptance of those changes from the community using the protocol.

Billy: We have an on-chain protocol update mechanism, like other chains.  For this initial version of the protocol, we went with a super simple mechanism: after a new protocol version is proposed, validators can signal their support. If enough validators indicate enough support for enough time, then everyone switches to the new version.
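A hedged sketch of that signal-then-switch rule; the support threshold and the number of epochs are invented for illustration:

```go
package main

import "fmt"

// Invented upgrade parameters, for illustration only.
const (
	supportThreshold = 0.67 // fraction of the committee that must signal
	epochsRequired   = 7    // consecutive epochs of sustained support
)

// shouldUpgrade returns true once a new protocol version has been
// signaled by enough validators for enough consecutive epochs.
func shouldUpgrade(signalsPerEpoch []int, committeeSize int) bool {
	consecutive := 0
	for _, signals := range signalsPerEpoch {
		if float64(signals)/float64(committeeSize) >= supportThreshold {
			consecutive++
			if consecutive >= epochsRequired {
				return true
			}
		} else {
			consecutive = 0 // support lapsed; start counting again
		}
	}
	return false
}

func main() {
	history := []int{20, 25, 25, 26, 25, 25, 25, 25} // signals out of 32
	fmt.Println(shouldUpgrade(history, 32))          // true
}
```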

22) With all the changes made to IOTA 2.0, how much have we deviated (if at all) from the original machine-to-machine economy vision and being the security layer for large-scale Internet of Things? Doesn't the prerequisite of needing Mana pose some adoption hurdles to ensure sensors and small devices get adequate throughput during periods of congestion?

Billy: We have expanded on this vision. The M2M economy is not here yet, so it makes sense to focus on some things in between. Also, a chain that can be an M2M economy chain is capable of so much more. In the Wiki, we outline our vision of Digital Autonomy for Everyone, a much broader concept that will not only enable the M2M economy but also capture more use cases.

Community Member: Understandable, the M2M vision is a longer-term play, and a network capable of doing that could also be so much more. My question was more to understand whether some of the constraints of small-footprint, light devices in the M2M/IoT scenario would be easy to handle with the new Mana system to guarantee throughput. Setting up tons of devices and preloading them with an amount of IOTA to generate Mana to guarantee throughput seems a bit of a stretch to me, and I'm sure there are better ways of doing that. I was wondering if you or anyone at the IOTA Foundation sees that as an adoption issue going forward, and whether a simpler solution exists. In other words, what would Mana or throughput management for small edge devices look like?

Billy: Mana is great for IoT devices since all they need is a wallet. Also, using our account system, you can have a central controller that holds the Mana and just lists the PKs that can spend it.  So it is actually a very flexible system.
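As a sketch of that pattern (types and field names are illustrative, not the actual iota-core structures), a controller account holds the Block Issuance Credits and lists the device keys allowed to issue against them:

```go
package main

import "fmt"

type PublicKey string

// Account is a controller that holds BIC on behalf of many devices.
type Account struct {
	BIC             int64       // Block Issuance Credits held centrally
	BlockIssuerKeys []PublicKey // device keys allowed to spend them
}

// CanIssue checks that the device key is listed and the account can
// cover the block's cost; the device itself holds no tokens or Mana.
func (a *Account) CanIssue(device PublicKey, blockCost int64) bool {
	if a.BIC < blockCost {
		return false
	}
	for _, k := range a.BlockIssuerKeys {
		if k == device {
			return true
		}
	}
	return false
}

func main() {
	controller := Account{
		BIC:             1_000,
		BlockIssuerKeys: []PublicKey{"sensor-1", "sensor-2"},
	}
	fmt.Println(controller.CanIssue("sensor-1", 30)) // true
	fmt.Println(controller.CanIssue("rogue", 30))    // false: key not listed
}
```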

23) How partition tolerant will IOTA 2.0 be in a scenario where parts of the network continue to function offline or on an intranet and then attempt to rejoin the main Tangle later?

Billy: A situation like this can happen, and there are different scenarios:

  • Nonvalidator nodes are cut off from the main part of the network. In this case, the nodes will stop accepting stuff and thus not issue and commit anything. After they're reconnected, they receive all the blocks and commitments they've missed and continue normally.
  • A minority of the committee is disconnected. In this case, a commitment fork takes place and acceptance continues in both parts, but finality can only continue on one or none of the forks, never both. In this situation, the minority partition will have to switch chains: discard all the blocks and commitments issued while the partition was in place, and accept the majority partition's version of the Tangle and the commitment chain.

Community Member: So would it be accurate to say that IOTA 2.0 is not really partition tolerant as originally envisioned? On a simpler note, would it be right to say that, as per the CAP theorem, partition tolerance has now been given lower priority than consistency and availability of data? This would also mean that "offline" or "isolated" sub-Tangles that orphan from the main DAG cannot function in a silo without reconnecting, correct?

Billy: No, IOTA 2.0 is definitely partition tolerant. We used some ideas that are common in the ecosystem to deal with the CAP theorem, as explained in this blog post.

24) Will Mana have any role to play in an account's (non-consensus) governance voting power?

Billy: This isn’t completely decided at this point - there are some problems with using it like that. Mana as a non-soul-bound resource has the same drawbacks as using a base token (IOTA, SMR) as voting power, because it can be bought and transferred, so it doesn't reflect the reputation of an account well. Having a soul-bound resource could allow us to use it as a reputation score, which then could be used as a voting power because it cannot be bought or transferred.

25) How often are Mana decay and generation applied? If the decay exceeds the generation, a user with a hardware wallet (which creates a potentially significant delay between the network state on which the wallet generates the payload and the time the signed payload gets issued by an issuer) might face issues if they want to take their Mana with them.

Piotr: Mana is generated every slot but decayed every epoch. In the implementation, this doesn’t mean that we iterate through all the accounts and outputs and update them, as that would be costly. Instead, whenever we need to read the Mana value of an account or output, we dynamically calculate the decay and generation. I'm not sure if I understand the other part of the question correctly, but it's completely OK to create a transaction and issue it some minutes or hours later (with some exceptions where the transaction requires some context). The transaction contains the slot index in which it was created, and that value is used to calculate the generation and decay of Mana, so if you issue that transaction the next day, the potential Mana is generated only up to the slot index contained in the transaction.

Darcy: As a complementary comment: Mana is generated linearly (in both time and tokens held) and decays proportionally to the Mana held (at a globally fixed rate). This means that a user’s total Mana is always increasing unless they move funds (which then no longer generate Mana for them) or burn Mana to issue blocks.
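Putting the two answers together, a lazy read-time calculation could look roughly like this sketch; the rates and the exact interleaving of generation and decay are simplified assumptions, not the protocol's formulas:

```go
package main

import (
	"fmt"
	"math"
)

// Simplified, assumed rates; the real parameters and the exact order in
// which generation and decay are applied differ in the protocol.
const (
	generationPerTokenPerSlot = 0.001
	decayPerEpoch             = 0.01 // 1% of held Mana per epoch
	slotsPerEpoch             = 8192
)

// manaAt computes an account's Mana on demand: no per-slot updates are
// stored; decay and generation are derived from the elapsed slots.
func manaAt(tokens, storedMana float64, lastUpdateSlot, readSlot int64) float64 {
	slots := float64(readSlot - lastUpdateSlot)
	epochs := slots / slotsPerEpoch
	decayed := storedMana * math.Pow(1-decayPerEpoch, epochs)
	generated := tokens * generationPerTokenPerSlot * slots // linear growth
	return decayed + generated
}

func main() {
	// Read the account's Mana one epoch (~a day) after its last update.
	fmt.Printf("%.2f\n", manaAt(10_000, 500, 0, 8192))
}
```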

Community Member: That was exactly the case I thought of when writing this question. I assumed a whale doing a large Mana sell to an account with very low token holdings. In this case, the decay would be higher than the minimal amount and the user wouldn't be able to send a transaction constructed in a past epoch, assuming it instantly applies. Backdating would essentially give a way to undo Mana burn (since the generation would be re-applied). This might also allow you to game the system in some ways.

Piotr: Not necessarily. Your Mana would decay less but also generate less. And the next transaction that you don't backdate will decay all the Mana that you skipped previously, so the decay will always get you.

26) How can Mana be converted into Block Issuance Credits (BIC)? Is there a different answer to the above question if Mana is stored in BIC form instead of on a UTXO?

Piotr: It can be converted within a transaction. If you spend an output that has some stored and/or potential Mana, you can allot it to an account within that transaction, which turns it into BIC. Regarding the second question: no, all Mana is decayed at the same rate.
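A rough sketch of that allotment step (types and field names are illustrative, not the actual transaction layout):

```go
package main

import "fmt"

// Allotment directs some of a transaction's Mana to an account as BIC.
type Allotment struct {
	AccountID string
	Mana      int64
}

// Transaction carries the Mana of its spent outputs plus any allotments.
type Transaction struct {
	InputMana  int64 // stored + potential Mana on the spent outputs
	Allotments []Allotment
}

// applyAllotments credits each allotted account's BIC balance, ensuring
// no more Mana is allotted than the inputs actually carry.
func applyAllotments(tx Transaction, bic map[string]int64) error {
	var total int64
	for _, a := range tx.Allotments {
		total += a.Mana
	}
	if total > tx.InputMana {
		return fmt.Errorf("allotting %d Mana but inputs carry only %d", total, tx.InputMana)
	}
	for _, a := range tx.Allotments {
		bic[a.AccountID] += a.Mana // Mana becomes Block Issuance Credits
	}
	return nil
}

func main() {
	bic := map[string]int64{}
	tx := Transaction{InputMana: 100, Allotments: []Allotment{{"account-1", 80}}}
	fmt.Println(applyAllotments(tx, bic), bic)
}
```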

27) Why would I stake/delegate with one of the smaller validators? Doesn't our staking model encourage us to delegate to the 50 biggest stakers? How do you encourage creating new smaller validators?

Andrew: Although our initial implementation of committee selection will be based on top stakers only, we plan to implement randomized committee selection in a future update. With randomized committee selection, all validators, no matter how small, will stand a chance to be selected. For the sake of simplicity at launch, we will begin with only top stakers though.

So as you have said, it would be advisable for delegators to distribute their funds across the top 50 stakers in this initial setup.

28) Do I, as a delegator, risk getting my tokens locked if the validator misbehaves?

Andrea: No, you don't. Delegated funds are also not locked: they are simply used as part of the validator selection process, as they are counted as part of the stake function for the candidate to be selected. However, if you decide to undelegate your funds before the epoch's end, you will not be eligible for rewards for that epoch. As you can see, you have an incentive to delegate to a validator that is doing its job, because if that validator does not, you will not reap any rewards!

Community Member: So if I delegate, do I automatically stake? Could you briefly explain the difference between staking and delegating?

Andrea: No, delegation is a very separate process from staking. A staker locks its funds and has to perform validation duties if selected as part of an epoch's committee.

On the other hand, a delegator simply delegates tokens to a staker and has no specific duty. The only role of the delegator and its funds is to incentivize the validator's duties: the more delegations a validator can attract, the bigger the rewards they and their delegators can obtain.

29) Which feature still needs to be implemented for IOTA 2.0? How is testing going? Is there still a lot to do on that side?

Piotr: As you can see on the project board on GitHub, all the features for the first release were recently implemented and we're just refactoring and testing code here and there. We don't plan to implement any new features before releasing the testnet.

As for the testing, some nil pointer exceptions and memory leaks have already come up, but that was to be expected. We currently don't see any roadblocks or big problems.

Community Member: What percentage of code coverage do you target? And how much is already done?

Piotr: It's not test coverage that we aim for, because a big part of the code is already covered by tests. The hardest problems arise when the node is running and suddenly things align in a certain way and it crashes. It's mostly these kinds of problems that we want to focus on, as they are close to impossible to find with basic unit tests.

Community Member: In that case, what is concretely missing for you to run the public testnet? By letting the community participate, you could find such edge cases faster, because more people would be using and testing the network.

Piotr: Because it's additional overhead for us. Currently, we want to fix the problems that we find ourselves, so that you don't experience them in the testnet. Some of those bugs could crash the whole network, and as you'll remember from the devnet we had last year, after every bugfix we had to do a release with a fresh snapshot, etc., which I'm sure was also annoying for you.

Community Member: And what are the biggest hurdles currently?

Piotr: We have found some bugs that are hard to reproduce and debug. Some of them might be caused by the chain manager, so we hope that the reactive chain manager will solve some of those, as this has fixed some hard-to-catch bugs in other places (we had a problem with marking blocks as solid, which happened once in a couple of weeks of testing; using the reactive approach fixed that).

30) Andrew mentioned in a subthread that the epoch length will be roughly one day. What is the reason for this rather lengthy time compared to other blockchains? Long epoch times make it easier to target a validator, either via network attacks or socially, for example by bribing them. Would this also give a corrupt committee (even if the malicious actors don't have 33% of the tokens, they might get lucky in a future random selection) more time to deal damage?

Billy: Currently, we are not robust against malicious committees. The reason is that (1) we currently take the top 32 stakers, and (2) we structured the incentives so that stake will be evenly balanced amongst these validators. As such, we will have all the stake securing every committee, and so a malicious committee means that one-third of the stake is malicious!

We plan to implement a more sophisticated committee selection mechanism via cryptographic sortition. The reason this is not in the current version is that we didn't want more delays caused by mucking about with cryptography.

Community Member: Would shortening the epoch time to e.g. one hour cause issues with the current system? Too much overhead?

Billy: Yes, there is a lot of precomputation at epoch boundaries, namely computing the rewards distribution, which makes it easier to do this less frequently. We could maybe do it once an hour, but we didn't see a reason to change it. Also, it's nice for the epoch to be a power of two slots. We chose 2^13 slots of 10 seconds each (81,920 seconds, or roughly 22.75 hours) since that is close to a day and thus somewhat intuitive.


Links in this article


IOTA 2.0 Introduction

Part 1: Digital Autonomy for Everyone: The Future of IOTA

Part 2: Five Principles: The Fundamentals That Every DLT Needs

Part 3: Data Flow Explained: How Nodes Process Blocks

Part 4: Data Structures Explained: The Building Blocks that Make the Tangle

Part 5: Accounts, Tokens, Mana and Staking

Part 6: A New Consensus Model: Nakamoto Consensus on a DAG

Part 7: Confirming Blocks: How Validators Operate

Part 8: Congestion Control: Regulating Access in a Permissionless System

Part 9: Finality Explained: How Nodes Sync the Ledger

Part 10: An Obvious Choice: Why DAGs Over Blockchains?

Part 11: What Makes IOTA 2.0 Secure?

Part 12: Dynamic Availability: Protocol-Based Assurances

Part 13: Fair Tokenomics for all Token Holders

Part 14: UTXO vs Accounts: Merging the Best of Both Worlds

Part 15: No Mempool, No MEV: Protecting Users Against Value Extraction

Part 16: Accessible Writing: Lowering the Barriers to Entry


Follow us on our official channels for the latest updates:
Discord | Twitter | LinkedIn | Instagram | YouTube | CoinMarketCap
