My Journey to Becoming a Validator on Ethereum 2.0, Part 2
Note: You can still become a validator on the Ethereum 2.0 network! There will be a wait time to be activated as a validator; as of December 4, 2020, the wait time is roughly 9.9 days. See the steps to staking in Part 1 of this series.
This article is the second in a four-part series on how to run your own Eth2 validator. If you are new to this series, be sure to check out Part 1: Getting Started, Part 3: Installing Metrics and Analyzing P&L and Part 4: Safely Migrating Your Eth2 Node.
- Disclaimer
- Introduction
- Validator Configuration Considerations
- Future Proofing Hardware
- To Run or Not To Run an Eth1 Client?
- Virtual vs. Local Hosting
- Eth2 Client Choice and Avoiding Penalties
- Setting up AWS instance
- Operating System
- Pricing
- Storage
- Ports
- SSH Keys and Instance Launch
- Installing Teku
- Requirements
- Install binary
- Create non-root user
- Create systemd service
- Launch
Disclaimer
This is a post I'm writing as an employee of ConsenSys and someone planning to stake on the beacon chain. The former means I prioritize ConsenSys products (they are typically best in class for Ethereum, and I also have access to engineering teams who can help me answer questions and troubleshoot). The latter means I'm optimizing for cost and ease of use: I don't have thousands of ETH to yield substantial rewards, so I am taking some shortcuts. These are decisions I've made to make staking on Ethereum 2.0 as straightforward and accessible for individuals as possible, but they come with trade-offs to decentralization and privacy. You can, however, follow the broad strokes of the tutorial below and make different choices. In fact, if you can do that, I would encourage you to!
Last, staking in Ethereum 2.0 is highly experimental and involves a long-term commitment (I'm allotting three years). Please do not participate if you are not comfortable with the high level of risk attendant on something still in development. If you're unsure about whether you should stake, please join the ConsenSys Discord and ask in the #teku-public channel.
Introduction
In the previous post, we discussed the reasons for Ethereum 2.0's deployment and walked through staking 32 ETH in the Ethereum 1.0 mainnet Deposit Contract. We touched on key generation and how the staking process on Launchpad links Ethereum 1.0 to 2.0.
On November 23rd, the minimum amount of staked ETH needed to launch Ethereum 2.0 (524,288 ETH) was met. People can continue to stake in the contract, and the number of validators has risen to over 33,000 as of December 4th.
While it was extremely exciting to make it into the Genesis block as a validator, seconds later I had a similar experience to my colleague Aaron Davis in our internal ConsenSys staking channel: For what crazy task had I signed up? Luckily, I have access to incredibly brilliant and technical people charitable enough to give this rube some tips, pointers and insight about running staking infrastructure. I hope to pass on a fraction of that valuable advice here to any other interested parties.
That's what the first part of this article will be: What are some things you should consider when picking hardware and software to run an Ethereum 2.0 validator client? The second part will walk through the specific hardware/software combination I have chosen for my validator client. The choices you make for your configuration will depend on your resources, personal inclination and technical capacity. I'll do my best to highlight how personal biases and circumstances informed my choices.
Last, before we jump into it, I want to reiterate that these posts are almost like journal entries for my experience staking 32 ETH (albeit journal entries with extensive technical asides). As such, I may change my approach a bit as the beacon chain progresses. For example, I went in thinking I would definitely be using AWS. As you'll read below, I'm now reconsidering that. I'm also going to be very clear about the financial element of staking. Staking is a way of supporting the Ethereum ecosystem, but for sustainable public use, it also needs to be accessible to folks who have the ETH to do so.
Future Proofing Hardware
The basic requirements for running a validator today are relatively easy to satisfy. Mara Schmiedt and Collin Meyers' Validator Guide on Bankless has a good rundown of minimum requirements. The most challenging aspect of choosing Ethereum 2.0 validator equipment is balancing the current needs of the Beacon Chain Phase 0 against future, currently unknown demands as development continues. (This is not a concern if you're comfortable keeping a close watch on your validator and able to quickly and easily address changes.)
Ben Edgington, Teku Project Manager and publisher of Eth2.news, provided me with some edge cases where the network might demand more of the validator client. Short-term, the biggest concern would be something like the Medalla time server incident, in which a bug and subsequent fix in the Prysm client halted finalization on the testnet (roughly speaking, the network couldn't "produce blocks"). Since there was no finality on the network (no "confirmed blocks"), validators had to hold many more transactions in their RAM than normal.
Machines with 8GB of RAM, which would have been more than enough under normal circumstances, began encountering "out of memory" issues, which may have led to slashing. A spike like this, though unusual, is not out of the question for the Phase 0 mainnet.
The remaining configuration unknowns for the network are the merge of Ethereum 1.0 and 2.0 and the introduction of sharding. We still don't know when those will happen or exactly what they will require. I'd like to have a strong CPU backbone going into Phase 0 (4 virtual cores, 16GB RAM, 100GB SSD) and then focus my attention for future adjustments on storage space (leaving wiggle room here). Please note this may turn out to be overkill and eat into staking rewards.
Those are the "known unknowns" of the future; now let's discuss the main configuration decision points for validators today.
To Run or Not to Run an Eth1 client?
It's a rite of passage I try to subject our bootcamp students to as often as possible: syncing an Ethereum 1.0 client. It's an open secret that actually hosting a "full" Ethereum 1.0 node is an act of love rather than a hardened, Prisoner's Dilemma solution. "Full" must be put in quotes because even Ethereum core developers have a tough time agreeing on a definition of what "full node" actually means.
One thing we can all agree on: It takes more time and storage to sync an Ethereum 1.0 client than you'd imagine. Our validators need a way of querying the Ethereum 1.0 network database (we'll get into why a bit later). If you'd like to save the space and headache of syncing locally, you can use an Infura endpoint, which is available for free with registration.
Even though this saves significant storage and logistical headache, it does sacrifice a certain amount of decentralization for the Eth1 and Eth2 networks simultaneously. If Infura were to go down, which is exceedingly rare but does occur, that would cause a ripple effect across all the Ethereum 2.0 validators using it as their Ethereum 1.0 endpoint. Something to consider!
(Just to be clear: the difficulty of syncing an Ethereum full node has to do with the nature of the Ethereum world state, not with the Ethereum 1.0 core developers, who have done an amazing job tackling an extremely challenging problem.)
Virtual vs Local Hosting
The next consideration for me was hosting a validator node locally (in my home) or virtually (on a virtual service provider like Amazon Web Services (AWS), Digital Ocean, etc.). As I mentioned in the previous piece, I don't trust myself to run a consistent validator node from home, even on an old laptop stored away somewhere. Clumsy and goofy, I would either kick it over or forget about it.
I'm opting to run my node on AWS because that's what I'm most familiar with. (After going through this whole process, however, I'm second-guessing that decision. I'll discuss this later.) The trade-off here is again decentralization: If AWS goes down or has any issues (like it did recently), I'm at their mercy. And if enough people run nodes on AWS, its outages can affect the larger Ethereum network.
People will probably self-select for this one. Local hosting is a special kind of project, and not everyone has the time, resources or commitment required. While virtual hosting is a centralizing force, I'm opting to go with it for its ease of use and (hopefully) guaranteed uptime.
If you would like to run a validator node locally, you can still follow the general direction of this tutorial, Somer Esat's excellent series of tutorials with different clients, or even sync a Raspberry Pi Model 4B with 8GB RAM using Ethereum on ARM. (Syncing on a Raspberry Pi is still very experimental, and folks should wait until the Ethereum on ARM team confirms its stability.)
Eth2 Client Choice and Avoiding Penalties
Another major choice is which Ethereum 2.0 client to run among the existing options: Lighthouse, Lodestar, Nimbus, Prysm and Teku. I am heavily biased towards Teku and not savvy enough to debate the finer points of the clients. This article is a decent overview of client performance on Medalla. (Keep in mind that Medalla demanded much more from processors than mainnet will.)
Proof of Stake incorporates explicit disincentives to encourage correct behavior on the network. These take the form of dinging validators for being offline and the more dramatic move of slashing actors for taking malicious action against the network, knowingly or otherwise.
The most common issue will be making sure your validator is available to the network, i.e., online. There's a DevOps approach to this issue (building a monitoring and alerting system to tell you if your validator goes offline) that I won't cover here.
But there is another way to be unavailable to the network, and that is if your client begins misbehaving for one reason or another. After the time server bug caused a network slowdown on Medalla, Eth2 core developers came together to develop "[a] standard format for transferring a key's signing history [that] allows validators to easily switch between clients without the risk of signing conflicting messages." Not all clients have this protection, but Teku does. Here's where you can read about using Teku's Slash Protection (it runs by default), including migrating between Teku nodes or from a non-Teku node to a Teku node.
If you do have trouble with your client and need to sync your node from scratch, you need to be aware of Weak Subjectivity. Proof of Work allows anyone to go back to the genesis block of the network and trustlessly build the network state from scratch. Proof of Stake, however, has a catch in that regard: If a third of the network's validators exit at a certain point yet continue to sign blocks and attestations, they can alter the network state and feed that false data to a syncing node, provided they catch it before it reaches the point in the chain where those validators withdrew their funds. You can read more about it here, but the short answer is you need either a "weak subjectivity checkpoint" or an encoded state file, essentially an archive of the network. Teku provides start-up flags for both.
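As a sketch of what those start-up flags look like (the flag names reflect Teku's documentation at the time of writing, and the checkpoint and state-file values below are placeholders you would replace with data from a source you trust):

```shell
# Option 1: supply a weak subjectivity checkpoint, given as <block_root>:<epoch>.
# The value below is a placeholder, not a real checkpoint.
teku --network=mainnet \
  --ws-checkpoint=0xBLOCK_ROOT_FROM_TRUSTED_SOURCE:EPOCH_NUMBER

# Option 2: start from a recent finalized state file (an "archive" of the
# network) obtained from a node operator you trust.
teku --network=mainnet \
  --initial-state=/path/to/recent-finalized-state.ssz
```

Either way, the point is the same: a freshly syncing node needs a recent, trusted reference point rather than relying on genesis alone.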
The last penalty concern is the most serious: Slashing. Slashing occurs when a staker works against the rules of the network. It's different from being penalized for being offline: it's actively working against the network by submitting conflicting validator information. There are some really great materials for learning more about slashing and how to prevent it:
The main takeaway is: don't run one validator key on multiple clients. Apparently this is what caused the first slashing event ever, which occurred on December 2nd. There have been a number of slashings since! If you're migrating from one instance to another, quadruple-check that you're not going to have two machines working from the same key.
AWS + Infura + Teku Validator Configuration Specs
My Genesis block setup:
Operating System: Ubuntu Server 20.04 LTS (HVM) (Amazon Web Service)
Processor: Intel Xeon Platinum 8000 series processor, 3.1 GHz. (Amazon Web Service)
Memory: 4 virtual cores, 16GB RAM (Amazon Web Service)
Storage: 100GB SSD, 300 / 3000 IOPS (Amazon Web Service)
Ethereum 1.0 client: Infura endpoint (free tier)
Ethereum 2.0 client: Teku
Looking through all the above considerations, I was unsure of the best approach to building a validator setup. For myself, I'd like to pick a machine and generally not worry about changing it for at least two years. This helps with overall validator cost (I can get a significant discount by locking in with a virtual provider for a few years), and I'm not particularly agile with spinning up servers. This future-proofing or "over-spec'ing" approach hopefully makes my life over the next two years a bit easier.
Initially, I was confident AWS was the best virtual platform, and it's the service I'll use for this post and the next. However, after going through the whole process, I realized AWS might be overkill for the individual developer. AWS's real strength seems to be its capacity to dynamically scale up to meet demand, which comes at a premium cost. This makes economic sense for a large-scale, enterprise-level project, but current Ethereum 2.0 client requirements for an individual do not demand such rigor.
I'm going to continue with AWS but am also entertaining the option of running an instance on Digital Ocean, which may be more appropriate for an individual developer. More on that at a later date.
Setting up an Infura Account
I'm choosing to use Infura as my Ethereum 1.0 endpoint. For now, the beacon chain is watching the Deposit Contract on Ethereum 1.0 to activate new stakers when they have properly deposited 32 ETH and submitted the appropriate BLS signatures.
In the future, the beacon chain will start testing further processing by starting to use state information from Ethereum 1.0 to create parallel checkpoints on the beacon chain.
As we mentioned above, there are two main ways to have visibility to the Ethereum 1.0 network. One is to sync and actively maintain an Ethereum 1.0 node, which creates a local Ethereum 1.0 state database. The other is to use a service like Infura, which provides an easy Ethereum 1.0 endpoint, accessible through HTTPS or WebSockets.
Sign up for an account here. Once you have an account, click the Ethereum logo on the left-hand side.
Click "Create New Project" in the upper right-hand corner.
Name your project (mine is "Eth 2 Validator"), go to "Settings," make sure your network endpoint is "Mainnet," and copy the HTTPS endpoint:
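To sanity-check the endpoint before wiring it into Teku, you can send it a JSON-RPC request (the project ID below is a placeholder, and the block number is just an example value):

```shell
# Placeholder endpoint; substitute the HTTPS URL copied from your Infura project.
ENDPOINT="https://mainnet.infura.io/v3/YOUR_PROJECT_ID"

# Ask for the latest Eth1 block number (uncomment to actually send the request):
# curl -s -X POST -H "Content-Type: application/json" \
#   --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
#   "$ENDPOINT"

# The "result" field comes back hex-encoded, e.g. "0xb47881".
# Convert it to a decimal block height:
printf '%d\n' 0xb47881
```

A response carrying a recent block number means the endpoint is live and your project ID is valid.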
We'll be using this later for our Teku client start-up command!
Setting up AWS instance
Setting up an EC2 instance on Amazon is straightforward. We'll make a few tweaks here and there to help our virtual instance play well with Teku.
Create an AWS account (different from an Amazon.com account) and log in to the AWS Console. Get to the EC2 page, which will look something like this:
Click the "Launch Instance" button. You'll then see the following screen:
Operating System
This is where we select which machine image (which I think of as an operating system) we would like to use on our virtual instance. I'm selecting Ubuntu Server 20.04, which is a Linux-based server environment. The Ubuntu Server environment has a few key optimization differences from the Ubuntu Desktop environment. The main difference for our purposes is that Ubuntu Server lacks a Graphical User Interface (GUI). Where we're going, there's only a command line! Brings me back to my Apple II days.
After selecting our operating system, we then choose our instance type:
I find this menu quite overwhelming, so I'm going to break it down a bit for you. Here we're selecting the computing cores / RAM / CPU for our machine. This is the "raw" or "active" memory of the machine, separate from the storage (hard drive) of a device. Think of it as the engine of our virtual instance, on which we will run our Ubuntu Server operating system. AWS groups these into instance families, denoted by the letter/number combination in the far-left column.
The instance families have different hardware arrangements, just as different car engines have different configurations of pistons, plugs and fuels to meet different demands. We will focus on two of the "general computation" instance families, the m5 and t3.
I'm selecting the m5.xlarge instance, which provides 4 virtual computing cores (vCPUs) and 16GB RAM. After running Ethereum 2.0 mainnet for a day or so, my machine has not used more than 4% of the available vCPU. As mentioned in the "Future Proofing" section above, the Ethereum 2.0 network's demands will only grow. But for the next few months, absent any prolonged major network spikes, I would most likely be fine with an m5.large instance (2 virtual cores / vCPUs, 8GB RAM).
Technical folks more savvy than myself have also recommended the t3.large instance as a reasonable option for Ethereum 2.0. t3.large has 2 vCPUs and 8GB memory, the same as m5.large, but the t3 family is built for more "burstable" activity (spikes in CPU), whereas the m5 family is built for consistent CPU loads.
Pricing
The last thing to mention before we move on to storage is price. AWS is expensive compared to other options like Digital Ocean. You pay for CPU (your instance family) and storage (your hard drive) separately, based on what you use. CPU is paid for by the hour. Since validators have to be online 24 hours a day, you can use the price table below (from December 2020) to make some rough calculations:
These are on-demand prices. AWS also offers Reserved Instance pricing: if you commit to a virtual instance for one to three years, you can get a 50-60% reduction on the prices in the table above. (Thanks to Jason Chroman for this tip!)
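To turn an hourly rate into a rough monthly figure, multiply by 24 hours and roughly 30 days. For example, assuming an on-demand rate of $0.192/hour (approximately the m5.xlarge rate in us-east-1 as of December 2020; check the current table for your instance type and region):

```shell
# Rough monthly on-demand cost at an assumed rate of $0.192/hour.
awk 'BEGIN { rate = 0.192; printf "$%.2f per month\n", rate * 24 * 30 }'
```

Over a multi-year commitment, that adds up quickly, which is why the Reserved Instance discount matters.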
From the EC2 homepage, click "Reserved Instances" in the left-hand menu, shown below:
Click on "Purchase Reserved Instance":
In the menu that pops up, enter the instance type details and the term length you'd like pricing for (I'm choosing m5.xlarge and a 36-month term):
Click "Search" and see the price options:
There's a significant price discount, over 50% I believe, but I'm locked in for three years. Once you purchase the Reserved Instance, AWS applies it to an existing virtual box, or will apply it once one is launched. Remember this does NOT include storage space (hard drive).
Note: I'm not doing this yet, as I'm not yet convinced AWS is the best option for an individual staking one to three Ethereum 2.0 validator nodes. I'm running an instance with on-demand pricing to see how it goes before committing.
Storage
Going back to our instance launch process, we're moving on to the "Add Storage" tab.
The brilliant technical people I consulted recommended 100GB of General Purpose SSD storage. Storage is typically not a bottleneck with Eth2 clients. However, this is without running a full Eth1 client. For Eth1 storage, a conservative guesstimate would be about 1TB. Be sure to account for this if you're not using Infura.
The IOPS column in the image above stands for input/output operations per second; it measures how fast the hard drive can communicate with the CPU. This is a classic bottleneck for full Eth1 node syncing. If you're looking to sync a full Eth1 client on this machine and you're having issues, this could be a place to look.
Ports
Skipping over "Add Tags," move on to "Configure Security Group." These are the different openings created for different kinds of incoming and outgoing communication with the instance.
AWS automatically opens the traditional SSH port, as that's the main way we'll interact with the instance. CoinCashew's and Somer Esat's excellent guides both recommend disabling password access for SSH, but we'll see when we launch the instance that password access is not the default for AWS anyway. However, it is good to randomize your SSH port to a number between 1024 and 65535. This helps prevent malicious actors from network-scanning the standard SSH port. See how to secure your SSH port generally here and specifically for AWS here.
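A quick sketch of the idea (the sshd_config edit is illustrative only; make sure the new port is also open in your security group before restarting sshd, or you will lock yourself out):

```shell
# Pick a random high port to use for SSH instead of the default 22.
SSH_PORT=$(shuf -i 1024-65535 -n 1)
echo "Chosen SSH port: $SSH_PORT"

# On the instance, you would then point sshd at it and restart the daemon:
# sudo sed -i "s/^#\?Port .*/Port $SSH_PORT/" /etc/ssh/sshd_config
# sudo systemctl restart sshd
```

Whatever port you choose, record it somewhere safe; your ssh commands will need `-p <port>` from then on.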
We have to add two security rules to accommodate the Teku client, and they have to do with peer-to-peer communication. Blockchain networks are decentralized in the sense that nodes talk directly to each other. Rather than consulting a central node, an individual node develops and maintains an understanding of the network state by "gossiping" with many nodes. This means when one client handshakes with another, they swap information about the network. Done enough times with different nodes, information propagates throughout the network. Currently, my Eth2 validator node has 74 peers with which it's chatting.
Teku communicates with other nodes on port 9000, so we'll open that up for both UDP and TCP, two different transport protocols.
Afterwards, it should look something like this:
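If you prefer the AWS CLI to the console, rules like these can also be added from the command line (a sketch; sg-0123456789abcdef0 is a placeholder for your instance's security group ID):

```shell
# Open port 9000 for TCP and UDP peer-to-peer traffic.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 9000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol udp --port 9000 --cidr 0.0.0.0/0
```

The end state is the same either way: two inbound rules on port 9000, one per protocol.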
SSH Keys and Instance Launch
Last, go to "Review and Launch" for an overview of the choices made. Once approved, there will be a pop-up menu about SSH keys. I'm not showing mine because it contains sensitive information: namely, the keypair used to authenticate and log in to the virtual instance via SSH (local command line). If you don't already have a pair, AWS will generate one for you. You must download this and treat it like an Ethereum private key! It's the only way to connect to your instance, and AWS will not save it for you.
Once everything is hunky-dory, this window will appear:
Okay! That's done; let's move on to accessing and securing our instance, then installing and running Teku!
Accessing Instance
The main way to access the AWS instance is through SSH, "a cryptographic protocol for operating network services securely over an unsecured network." As mentioned earlier, AWS by default disables password authentication for accessing the instance. You can only use the keypair generated before the instance launch. The keypair file should have a .pem extension.
AWS provides a clean way to get your SSH command. Clicking on the running instance from the main EC2 page, there's a button in the upper right-hand corner that says "Connect":
The next page shows an SSH command specific to your instance. It will be structured like this:
ssh -i "PATH_TO_AWS_KEYPAIR.pem" [email protected]_IDENTIFIER.compute-ZONE.amazonaws.com
Entering this command into a terminal will begin the SSH session. The first time, the local machine will ask if you'd like to trust the ECDSA fingerprint provided by AWS. This is to prevent a man-in-the-middle attack; if concerned, a user can get their instance's fingerprint by following these steps.
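As an aside, ssh-keygen can print the fingerprint of any public key, which is the same format you'd compare against the instance's console output. A harmless demonstration with a throwaway key:

```shell
# Generate a throwaway ECDSA key purely to demonstrate fingerprint checking.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ecdsa -f /tmp/demo_key -N '' -q

# Print its fingerprint; a host key's fingerprint is displayed the same way.
ssh-keygen -lf /tmp/demo_key.pub
```

If the fingerprint your terminal shows matches the one reported for the instance, you are talking to the right machine.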
In a terminal separate from the current SSH session, transfer the validator key files needed to run Teku. In the previous blog post, we walked through staking 32 ETH and obtaining validator keys for Ethereum 2.0. At the end, we were left with this file structure:
eth2deposit-cli/
└── validator_key_info/
    ├── KEYSTORE-M_123456_789_ABCD.json
    ├── KEYSTORE-M_123456_789_ABCD.txt
    └── DEPOSIT_DATA_YOUR_TIMESTAMP_HERE.json
We need to transfer the validator_key_info directory to our virtual instance. Secure Copy Protocol (scp) allows us to do this securely. Adapt the generic scp command below using the path to the directory above and the previous SSH command:
scp -r -i "PATH_TO_AWS_KEYPAIR.pem" /PATH_TO_KEYS/eth2deposit-cli/validator_key_info/ [email protected]_IDENTIFIER.compute-ZONE.amazonaws.com:~
(Note the ":~" at the end of the whole command.)
You should see a file transfer occur. If you navigate back to your SSH session and type in ls, you should see the transferred directory.
Installing Teku
Now that we have the validator files we need, we're going to install Teku. First, we have to update existing programs and install the required Java packages:
Install Java
sudo apt update && sudo apt install default-jre && sudo apt install default-jdk
Double-check that the Java installation was successful with:
java -version
Install Binary
Find the latest stable Teku release here. Copy the link address for the tar.gz file, then download it from within your SSH session. Here's what mine looked like; your version will most likely be different:
curl -JLO "https://bintray.com/consensys/pegasys-repo/download_file?file_path=teku-20.11.1.tar.gz"
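Optionally, verify the download before unpacking it. Releases typically publish a sha256 hash you can check against; the sketch below uses a stand-in file, since the real hash for your version has to come from the release page:

```shell
# Demonstrate sha256 verification with a stand-in file.
printf 'demo release artifact\n' > /tmp/demo-release.tar.gz
EXPECTED=$(sha256sum /tmp/demo-release.tar.gz | awk '{print $1}')

# In practice, paste the published hash in place of $EXPECTED and name the
# real tarball; sha256sum reports OK when they match.
echo "$EXPECTED  /tmp/demo-release.tar.gz" | sha256sum -c -
```

A mismatch means the download is corrupted or tampered with; delete it and fetch it again.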
Decompress the downloaded file with the following command. If you have a different version, substitute that file name for teku-20.11.1.tar.gz:
tar -zxvf teku-20.11.1.tar.gz
For cleanliness' sake, remove the tar.gz file:
rm teku-20.11.1.tar.gz
After all these steps, here's what your home directory should look like (Teku version number and contents may be different):
ubuntu/
├── teku-20.11.1/
│   ├── LICENSE
│   ├── bin/
│   ├── lib/
│   ├── license-dependency.html
│   └── teku.autocomplete.sh
└── validator_key_info/
    ├── KEYSTORE-M_123456_789_ABCD.json
    ├── KEYSTORE-M_123456_789_ABCD.txt
    └── DEPOSIT_DATA_YOUR_TIMESTAMP_HERE.json
Create a non-root user
This step is copied from Somer Esat's excellent Ubuntu / Teku tutorial.
We're going to create a non-root user called teku who can operate Teku. Type the following:
sudo useradd --no-create-home --shell /bin/false teku
We're also going to create a custom data directory for Teku, then give the teku user access to it:
sudo mkdir /var/lib/teku && sudo chown -R teku:teku /var/lib/teku
Create systemd service
This step is adapted from Somer Esat's excellent Ubuntu / Teku tutorial.
This step creates a service that runs Teku in the background. It also allows the machine to automatically restart the service if it stops for some reason. This is necessary to make sure our validator runs 24/7.
Create the service file by using the nano text editor:
sudo nano /etc/systemd/system/teku.service
In this file (which should be empty), we're going to put a series of directives for systemd to execute when we start the service. Here's the code below; you'll have to sub in the following items we've collected throughout this journey:
- Infura Eth1 HTTP Endpoint
- validator_key_info directory path with two valid key-related files
- Custom data path (/var/lib/teku)
Substitute those values into the code below, then copy it all into the nano text editor:
[Unit]
Description=Teku Beacon Node
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=teku
Group=teku
Restart=always
RestartSec=5
ExecStart=/home/ubuntu/teku-20.11.1/bin/teku --network=mainnet --eth1-endpoint=INFURA_ETH1_HTTP_ENDPOINT_GOES_HERE --validator-keys=/home/ubuntu/validator_key_info/KEYSTORE-M_123456_789_ABCD.json:/home/ubuntu/validator_key_info/KEYSTORE-M_123456_789_ABCD.txt --rest-api-enabled=true --rest-api-docs-enabled=true --metrics-enabled --validators-keystore-locking-enabled=false --data-base-path=/var/lib/teku
[Install]
WantedBy=multi-user.target
Type Ctrl-X, then "Y" to save your changes.
Reload systemd so it picks up the new service file:
sudo systemctl daemon-reload
Start the service:
sudo systemctl start teku
Check to make sure it's starting okay:
sudo systemctl status teku
If you see any errors, get more details by running:
sudo journalctl -f -u teku.service
You can stop the Teku service by running:
sudo systemctl stop teku
Check the Teku troubleshooting page for common errors or check out the Teku discord, which is monitored by the team.
Once you feel you have things ironed out, enable the service to restart if it shuts down by running:
sudo systemctl enable teku
There you have it! Things should be cooking along now. When inspecting the Teku service, you will see a series of logs noting a Sync Event; this is your validator syncing the beacon chain. Once it reaches the head, those logs will change to read Slot Event, and you will also see your attestation performance and block proposals.
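Because we enabled the REST API in the systemd unit, you can also ask the node directly whether it has caught up. Teku serves the standard beacon node API locally (on port 5051 by default, if I recall the default correctly; adjust if you've configured otherwise). Here's the live call commented out, plus a parse of a sample response:

```shell
# Live check, run on the instance itself:
# curl -s http://localhost:5051/eth/v1/node/syncing

# The response looks roughly like the sample below; a sync_distance of "0"
# with is_syncing=false means the node is at the head of the chain.
SAMPLE='{"data":{"head_slot":"76543","sync_distance":"0","is_syncing":false}}'
echo "$SAMPLE" | python3 -c '
import json, sys
d = json.load(sys.stdin)["data"]
print("synced" if not d["is_syncing"] else "syncing")'
```

Once this reports "synced", the Slot Event logs described above should be appearing as well.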
Launch
On December 1st at 12pm UTC, the Beacon Chain's first blocks were validated. The first block came from Validator 19026, with the enigmatic graffiti "Mr F was here." Twelve seconds later came the next block, its graffiti indicating the validator might be located in Zug, Switzerland. The Eth2 Beacon Chain grew steadily, block by block, every 12 seconds. Then came the next hurdle: would enough validators be online to finalize the first epoch? Yes! 82.27% of validators attested to the validity of Epoch 0, the proverbial ground floor of the Beacon Chain. You can read more about the Beacon Chain launch, and what's next, here.
We are now on Epoch 760, which means the Beacon Chain has been running smoothly for almost a week.
Here's a shot from my perspective of the genesis moment, using the setup described in this post:
In the next installment, we'll recap how things are going. I'm going to access the metrics from Teku, discuss the cost of running AWS, and briefly discuss the state of the network.
Stay tuned!
Resources and links
Thanks to James Beck, Meredith Baxter, Jason Chroman, Aaron Davis, Chaminda Divitotawela, Ben Edgington, The Dark Jester, Somer Esat, Joseph Lubin, Collin Meyers, Nick Nelson, Mara Schmiedt, Adrian Sutton, and Alex Tudorache for support and technical assistance.