
Dosing The State, The Ethereum Scalability Challenge

source link: https://www.trustnodes.com/2020/06/27/dosing-the-state-the-ethereum-scalability-challenge

Despite an increase in ethereum’s gas limit, the blocksize has not changed much because there isn’t a direct relationship between the number of transactions and the amount of data.

As much was revealed some time ago when ethereum reached full capacity despite processing only about half its all-time high in transactions.

There is obviously some relationship between bytes and gas in ethereum, as can be seen above, and if this were a line chart you’d see a general increase. Yet it’s a bit of a messy relationship.

Gas in ethereum is an abstract unit of measurement of how many computing resources are required to perform an action.

As contracts are Turing complete, you could write endless while loops. Gas puts a limit on that, and the per-block limit is currently 12 million gas units.
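As a very rough illustration of what gas metering does, here is a minimal Python sketch, with made-up per-step costs rather than the real EVM fee schedule: an otherwise endless loop is forcibly halted once the gas budget runs out.

```python
# Toy illustration of gas metering: every step of execution consumes gas,
# and execution is aborted once the budget runs out.
# The per-step cost here is invented; the real EVM has a detailed fee schedule.

GAS_LIMIT = 12_000_000  # the per-block gas limit mentioned in the article

def run_endless_loop(gas_limit: int) -> int:
    gas_used = 0
    steps = 0
    while True:                      # a contract could loop forever...
        step_cost = 8                # pretend each iteration costs 8 gas
        if gas_used + step_cost > gas_limit:
            print(f"out of gas after {steps} iterations")
            break                    # ...but gas metering stops it here
        gas_used += step_cost
        steps += 1
    return gas_used

if __name__ == "__main__":
    used = run_endless_loop(GAS_LIMIT)
    print(f"gas used: {used} of {GAS_LIMIT}")
```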

[Chart: Ethereum transactions, June 2020]

Despite a 50% increase in the gas limit, ethereum transactions are still some way off their all-time high. That’s because much of the gas is taken up by token transactions or smart contract transactions.

A plain eth transfer costs 21,000 gas units, while a simple token transfer typically requires somewhere between 50,000 and 80,000. A dapp transaction depends on what it does, but often requires even more gas than a token transfer.
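That’s why the transaction count can lag even with a 50% higher gas limit. A quick back-of-the-envelope calculation, using the approximate per-transaction gas figures above (the token and dapp figures are rough assumptions, not measurements), shows how many of each type fit in a 12 million gas block:

```python
# Rough capacity arithmetic: how many transactions of each type fit in one
# 12 million gas block. All per-transaction costs are approximations.

GAS_LIMIT = 12_000_000

ETH_TRANSFER_GAS = 21_000      # fixed cost of a plain ether transfer
TOKEN_TRANSFER_GAS = 65_000    # a typical token transfer; can run higher
DAPP_CALL_GAS = 150_000        # many dapp interactions cost more still

for name, cost in [("eth transfer", ETH_TRANSFER_GAS),
                   ("token transfer", TOKEN_TRANSFER_GAS),
                   ("dapp call", DAPP_CALL_GAS)]:
    print(f"{name:15s} ~{GAS_LIMIT // cost} per block")
```

So a block full of plain transfers fits roughly 570 transactions, while a block full of dapp calls fits well under 100, which is why the transaction count and the gas used don’t move together.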

That makes matters quite a bit more complex than in bitcoin, where a typical transaction is just some 250 bytes, or, when payments are batched, that 250 bytes can be the base with each additional payment adding only a few dozen bytes.

In ethereum there is no such protocol level compression of transactions and that’s because it uses accounts.

For the network to manage these accounts, it updates what is called the state with every block. Griffin Ichiba Hotchkiss of the Ethereum Foundation says:

“The complete ‘state’ of Ethereum describes the current status of all accounts and balances, as well as the collective memories of all smart contracts deployed and running in the EVM.

Every finalized block in the chain has one and only one state, which is agreed upon by all participants in the network. That state is changed and updated with each new block that is added to the chain.”

The very simplified equivalent of state in bitcoin is the UTXO set. Both are difficult concepts to grasp, but a simplified description is that they act like memory (RAM), a snapshot of the network, so we all know who has how much and what code a contract contains.
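Very roughly, the two bookkeeping models can be sketched like this, in a deliberately naive Python sketch that ignores signatures, fees and everything else, just to show the shape of the data each network keeps in its snapshot:

```python
# Naive sketch of the two bookkeeping models.
# Ethereum-style accounts: a mapping from address to balance (plus code and
# storage), mutated in place every block.
# Bitcoin-style UTXO: a set of unspent outputs; a payment consumes some of
# them and creates new ones.

# --- account model (ethereum-like) ---
accounts = {"alice": 10, "bob": 0}

def account_transfer(sender: str, receiver: str, amount: int) -> None:
    assert accounts[sender] >= amount
    accounts[sender] -= amount          # the shared state is updated in place
    accounts[receiver] = accounts.get(receiver, 0) + amount

# --- UTXO model (bitcoin-like) ---
utxos = {("coinbase-tx", 0): ("alice", 10)}   # (txid, index) -> (owner, value)

def utxo_transfer(spend: tuple, new_txid: str, receiver: str, amount: int) -> None:
    owner, value = utxos.pop(spend)            # the old output is destroyed
    utxos[(new_txid, 0)] = (receiver, amount)  # ...and new outputs are created
    if value > amount:
        utxos[(new_txid, 1)] = (owner, value - amount)  # change back to sender

account_transfer("alice", "bob", 3)
utxo_transfer(("coinbase-tx", 0), "tx1", "bob", 3)
print(accounts)   # {'alice': 7, 'bob': 3}
print(utxos)      # bob's new output plus alice's change
```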

To sync ethereum from the genesis block you have to go through some 400 million ‘nodes,’ which, less confusingly, can be described as connection points that build up these snapshots, with the whole process taking about a week.

Every block, some 3,000 such connection points – which are small compressions of interactions or account changes – have to be updated by all the circa 10,000 nodes to keep in sync.

That means your computer’s ram is kept constantly busy, every block, every 15 seconds, and your disk too is reading and writing with every block these changes happening across the ethereum network.

So the more changes there are, the busier your ram and disk, until at some point you hit the limit of your resources and you fall out of sync.

You can’t keep up, and not while downloading past history, but in the present, while processing the live network.
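A toy picture of that per-block treadmill, with the flat dictionary and the numbers standing in for the real state structure and its database, might look like this:

```python
# Toy picture of why a node's memory and disk never rest: every ~15 seconds a
# new block arrives with a few thousand state changes that must be applied
# (and, in a real client, persisted) before the next block shows up.

import random
import time

CHANGES_PER_BLOCK = 3_000   # roughly the number of connection points touched per block
BLOCK_TIME = 15             # seconds between blocks

state = {f"account-{i}": i for i in range(100_000)}   # stand-in for the state

def apply_block(block_number: int) -> None:
    start = time.time()
    for _ in range(CHANGES_PER_BLOCK):
        key = f"account-{random.randrange(len(state))}"
        state[key] += 1                  # in-memory update (RAM)
        # a real client would also write the changed entries to disk here
    elapsed = time.time() - start
    # if applying a block ever takes longer than the block time,
    # the node falls behind and can no longer stay in sync
    assert elapsed < BLOCK_TIME, "node fell out of sync"

for n in range(3):
    apply_block(n)
print("applied 3 blocks of", CHANGES_PER_BLOCK, "changes each")
```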

Added to that is a paper from last year which reveals just how hard a task it is for coders, involving some fairly heavy maths, to link together things like gas and bytes, or in this case “the execution cost and the utilised resources, such as CPU and memory.”

“We discover a number of discrepancies in the metering model, such as significant inconsistencies in the pricing of the instructions,” they say, further adding:

“We design a genetic algorithm that generates contracts with a throughput on average 200 times slower than typical contracts.

We then show that all major Ethereum client implementations are vulnerable and, if running on commodity hardware, would be unable to stay in sync with the network when under attack.”
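The core of that finding can be caricatured in a few lines: if an operation’s gas price doesn’t reflect how long it actually takes to execute, an attacker can fill a contract with exactly those operations and buy a lot of a node’s wall-clock time very cheaply. All the numbers below are invented for illustration, not the paper’s actual measurements:

```python
# Caricature of the metering discrepancy: two operations with the same gas
# price but very different real execution times. A contract built out of the
# slow one consumes far more CPU per unit of gas paid.

# (gas_cost, cpu_microseconds) -- invented numbers, for illustration only
OPS = {
    "cheap_and_fast": (3, 0.1),
    "cheap_but_slow": (3, 20.0),   # mispriced: same gas, 200x the CPU time
}

def contract_cost(op_name: str, count: int):
    gas, cpu = OPS[op_name]
    return gas * count, cpu * count

normal_gas, normal_cpu = contract_cost("cheap_and_fast", 1_000_000)
attack_gas, attack_cpu = contract_cost("cheap_but_slow", 1_000_000)

print(f"normal contract: {normal_gas} gas, {normal_cpu / 1e6:.2f}s of CPU")
print(f"attack contract: {attack_gas} gas, {attack_cpu / 1e6:.2f}s of CPU")
print(f"same gas paid, {attack_cpu / normal_cpu:.0f}x the execution time")
```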

The equivalent in bitcoin would be crafting a block that is designed to be very hard to validate, so that the computers validating it potentially even crash.

In bitcoin however if you did that, then there’s a good chance you’d lose your 6.25 bitcoin reward and if you kept it up as a pool, there’s a good chance you’d lose all your miners.

In eth, by contrast, you’d lose only however much it costs to publish the contract, which according to the researchers is very disproportionate to the effect it can have.

All of which makes scaling the ethereum network in its current form a very complicated and time consuming task, if a node is still to be runnable by someone like Trustnodes for our own internal blockchain analysis should we want it.

There’s always Infura of course, the nodes-in-the-cloud provider that powers perhaps even the majority of ethereum network operations.

Its incubator, ConsenSys, has recently, through a project it backs, partnered with AMD to build “the next generation of decentralized compute, storage and bandwidth for the planet,” according to Joseph Lubin, the founder.

The real solution for the present, however, is what can be called contract level sharding of the blockchain.

[Chart: Ethereum scaling solutions, June 2020]

If we focus on the last three on the right, these are their own networks and even their own blockchains that ‘talk’ to ethereum, but for now they don’t talk to each other.

If history repeats, it might be the very first step to connect the stand alone world computer to many world computers that all are still on the same page.

Initially, as you might know, if you had a laptop you only had access to the data within that laptop. Then a breakthrough connected different laptops, initially through cable connections, and suddenly you had access to your neighbors’ data on their laptop.

Since these computers could talk, we can now allow everyone to access our data, like this very page, without everyone needing to store that data unless they want to.

We don’t need to store it, however, unless we are the ones sharing the data. We only need to connect to others so that they can see what we have and we can see what they have.

That is, we need OMG to talk to ZK. We can’t have bitcoin talking to eth because they are different protocols, but even there node connections are happening.

If we get these data clusters to talk to each other, then there shouldn’t be any limit to scalability provided they all run on the same protocol.

That is something that takes time, but if you view it this way then Nakamoto was obviously right, as was Gregory Maxwell, because they were speaking of different time horizons.

Nakamoto was probably thinking of the general evolution of the technology and in his statements he was clearly thinking it would develop in the same way as the internet which is a global distributed system.

Maxwell, meanwhile, was thinking of the present and pointed out, perhaps very rightly, that you can’t leapfrog to the future without doing the many, many things that are required to get there.

Meaning we’re scaling and maybe the hardest task has been done by now. At this stage it is more about sitting back and enjoying the leveling up which necessarily will take some time.

Copyright Trustnodes.com

