'Turbo Geth' Seeks to Scale Ethereum – And It's Already in Beta
Instead of tackling ethereum's transaction costs, developer Alexey Akhunov focused on the blockchain's state, and the software is ready.
There's software ready to help ethereum scale – right now.
Revealed exclusively to CoinDesk, the raw architecture of Turbo Geth has been completed – and is currently available to early adopters for testing. Alexey Akhunov, the independent software developer who built the software, told CoinDesk that unlike many other scaling solutions, Turbo Geth tackles ethereum's so-called state, rather than transaction congestion and costs.
The term "state" in this context describes the every-increasing history of all computations of the network. By rewriting Geth, the Ethereum Foundation's in-house software for interacting with the blockchain, Akhunov said, he's cut storage down to one-fifth its current size.
This approach allows ethereum nodes to run on cheaper hardware. What's more, it's something that many in the ethereum community are passionate about because less expensive hardware helps keep the network decentralized.
"We probably can go 10x just from optimizations," Akhunov said on a scalability panel during the ethereum conference Dappcon in Berlin this summer.
The statement, which alluded to code improvements that could streamline ethereum before it upgrades to the scaling technology known as sharding, was received with much applause.
It reflects the anticipation many in the industry feel for Akhunov's work, which has been heralded as one of ethereum's most promising scaling solutions (although one not part of the formal scaling roadmap).
And while there's still work to be done – Turbo Geth currently lacks many of the features users expect from a fully functional client – Akhunov believes the software will inspire others to take similarly experimental approaches to design.
"One of the contributions I have made is I've broadened the design space, and said 'well, what if we don't do it this way, but we do it the other way,'" Akhunov told CoinDesk, adding:
All about organization
Turbo Geth takes how traditional clients store information and turns the process completely on its head.
"The main difference is the way it organizes the database that stores the state and the history of state," Akhunov told CoinDesk.
In essence, Turbo Geth takes what has become the dominant way of storing data in ethereum clients, called the hash tree, and replaces that structure with a highly simplified index.
For example, while the hash tree requires many steps to retrieve information, Turbo Geth fuses a diverse range of data – such as histories of accounts, nodes, contracts and blocks – into compact strings of information that are lighter to store and faster to retrieve.
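To make the contrast concrete, here is a minimal, purely illustrative sketch in Go (the language Geth and Turbo Geth are written in): a hash-tree lookup descends through several intermediate nodes, while a flat index returns an account record in a single key-value read. The key names and data layout are hypothetical stand-ins, not Turbo Geth's actual schema.

```go
package main

import "fmt"

func main() {
	// Hash-tree style: each account lookup descends through several
	// intermediate nodes (simplified here to a chain of nested maps),
	// which translates to multiple database reads per lookup.
	type node map[string]interface{}
	trie := node{"a": node{"b": node{"c": "balance=10"}}}
	var cur interface{} = trie
	for _, step := range []string{"a", "b", "c"} {
		cur = cur.(node)[step]
	}
	fmt.Println("hash-tree lookup:", cur)

	// Flat-index style: the account address itself is the key, so one
	// key-value read returns the whole record.
	flat := map[string]string{"0xabc": "balance=10"}
	fmt.Println("flat lookup:", flat["0xabc"])
}
```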
The result is that for a full archive node – a type of ethereum node that stores the full history of the state – Turbo Geth creates substantial gains. Compared to the 1.2 terabytes of disk space required by Geth today, Turbo Geth users need only 252.11 gigabytes to run a full archive node – roughly a fifth of the current footprint, consistent with the reduction Akhunov describes.
On top of that, because Turbo Geth vastly minimizes how information is stored at a client level, "the database layout is much more straightforward to use when you want to just look up information from the past," Akhunov said.
The layout also makes it much faster to retrieve that information, he continued.
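As an illustration of why such a layout helps historical queries, here is a hedged sketch: if each account's changes are stored in block order, reading the state "as of" a given block becomes a simple scan or seek rather than a walk through an archived hash tree. The structures and names below are assumptions made for the example, not the client's real design.

```go
package main

import "fmt"

// change records an account value at the block where it last changed.
type change struct {
	block uint64
	value string
}

// history maps an account address to its changes in block order.
var history = map[string][]change{
	"0xabc": {
		{block: 100, value: "balance=1"},
		{block: 250, value: "balance=7"},
	},
}

// stateAt returns an account's value as it stood at the given block:
// the latest recorded change at or before that height.
func stateAt(addr string, block uint64) (string, bool) {
	var out string
	found := false
	for _, c := range history[addr] {
		if c.block <= block {
			out, found = c.value, true
		}
	}
	return out, found
}

func main() {
	v, _ := stateAt("0xabc", 200) // picks up the change made at block 100
	fmt.Println(v)                // prints "balance=1"
}
```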
Not yet public
While these gains are notable, work remains before Turbo Geth is a full-fledged client on par with Geth and Parity, ethereum's second most popular software client.
In addition to lacking a user-friendly interface, Turbo Geth would take about two weeks to sync with the blockchain.
"Obviously that's not acceptable for most people," Akhunov said.
As such, Akhunov said Turbo Geth will need to add support for a feature that cuts synchronization time by allowing clients to link up with snapshots provided by other archive nodes.
Within the Parity architecture, this is known as "warp sync," and Akhunov said there might be a way to bootstrap Turbo Geth from this Parity feature.
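The general idea behind such snapshot-based syncing can be sketched roughly as follows: fetch a recent, verified snapshot of the state from peers, then replay only the blocks produced after it, rather than re-executing the chain from genesis. Every function and type name here is a hypothetical placeholder; neither Parity's actual warp sync protocol nor any future Turbo Geth support is specified by this code.

```go
package main

import "fmt"

// snapshot is a stand-in for a verified copy of the state at some block.
type snapshot struct {
	block uint64            // height the snapshot was taken at
	state map[string]string // flattened account state
}

// fetchSnapshot stands in for downloading a snapshot from peer nodes.
func fetchSnapshot() snapshot {
	return snapshot{
		block: 6000000,
		state: map[string]string{"0xabc": "balance=7"},
	}
}

// replayBlocks stands in for applying only the blocks produced after
// the snapshot, which is far less work than syncing from genesis.
func replayBlocks(s *snapshot, head uint64) {
	for b := s.block + 1; b <= head; b++ {
		_ = b // apply transactions from block b to s.state (omitted)
	}
	s.block = head
}

func main() {
	s := fetchSnapshot()
	replayBlocks(&s, 6000100)
	fmt.Println("synced to block", s.block)
}
```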
Still, even as the client nears completion, Akhunov – who built the software entirely on his own – emphasized that he doesn't have the capacity to deal with requests from the public, meaning Turbo Geth is strictly in private beta for now.
To build the client, Akhunov received financial support from the Ethereum Foundation and Infura, the ConsenSys-led provider of software that allows decentralized applications to interface with ethereum in a lightweight way. Moving forward, however, the developer envisions handing the Turbo Geth project over to a committed team so that he can continue his research into ethereum scalability.
"I would try to give it into good hands," Akhunov told CoinDesk.
Deeper research
For Akhunov, Turbo Geth doesn't quite fulfill his vision for a fully scalable ethereum.
While the storage enhancements are substantial, he said: "When I started working on Turbo Geth I made an assumption that the bottleneck of the ethereum client is mostly its access to the state, which was true to a certain extent, but it's not 100 percent. I have changed my point of view slightly since then."
For example, while Turbo Geth makes it cheaper and easier for users to run nodes, it doesn't directly impact scalability in the sense of, say, increasing transaction speed.
Going forward, then, the developer wants to dig deeper into how clients function – not just at the level of individual software, such as Geth and Parity, but at how combinations of software intercommunicate.
"In order to solve the scaling bottleneck we have to look at how clients interoperate and maybe there's incompatibility between them," he told CoinDesk. "Often the slowest bit is dragging you down."
For example, Akhunov pointed to several unsolved mysteries on the ethereum blockchain, such as quirks at the mining level, where miners periodically produce long chains of blocks that are eventually abandoned.
As such, the developer said he'd like to dedicate his time to studying the ethereum network and observing client interoperability issues in order to better understand where the scalability bottleneck is occurring.