Everything you need to set up a validator, from picking the right hardware to getting it up and running.
Before configuring Nginx, install GoAccess, a real-time web log analyzer.
Update your package lists:
Install GoAccess:
This chapter gives you an overview of what you need to run a node on Secret Network. Since Secret Network uses Intel SGX, nodes must meet special requirements.
32GB RAM (use 20GB+ swap)
512GB SSD
Ubuntu 22.04 LTS
CPU compliant with SGX (see Hardware Compliance)
Motherboard with support for SGX in the BIOS (see Hardware Compliance)
64GB RAM
1TB NVMe SSD
Ubuntu 22.04 LTS
CPU compliant with SGX (see Hardware Compliance)
Motherboard with support for SGX in the BIOS (see Hardware Compliance)
Website: psychz
Things to note before setting up on Psychz:
Only certain CPUs are available with this setup.
You need to manually open the BIOS and select the correct configuration.
Ensure your hardware meets the Hardware Compliance requirements.
Provisioning takes about a day. Confirm they have the servers in stock; if they do not, request only the same configuration, or SGX will not be enabled.
Make sure you request that they install the latest BIOS, or SGX will not work.
The dashboard portal is only enabled once you sign up. Go to your device section, then to IPMI (remote management), and create a session to log into the BIOS. (I used the Java client to connect to the BIOS over the port.) Make sure Hyper-Threading is disabled in the BIOS so that you get an OK platform message.
Login with your credentials and proceed with SGX installation.
In this section we quickly explain what the verification process for your hardware entails and how it works. Instructions for verification are included in the setup guides!
Attestation Certificate
This is a self-signed X.509 certificate that contains a report signed by Intel and the SGX enclave. The report attests that the enclave is genuine and includes a code hash and a signature of the enclave's creator.
Seed
This is a parameter shared between all enclaves on the network to guarantee deterministic computation. When a node authenticates successfully, the network encrypts the seed and shares it with the node. Protocol internals are described here.
This section explains node registration on the Secret Network. If you just care about installation, you can follow the setup guides and ignore this document. If, however, you want to learn what's going on behind the scenes, read on.
In order to verify that each node on the Secret Network is running a valid SGX node, we use a process that we call registration. Essentially, it is the process of authenticating with the network.
The process is unique and bound to the node CPU. It needs to be performed for each node, and you cannot migrate registration parameters between nodes. The process essentially creates a binding between the processor and the blockchain node, so that they can work together.
For this reason, the setup will be slightly more complex than what you might be familiar with from other blockchains in the Cosmos ecosystem.
The registration process is made up of three main steps:
Enclave verification with Intel Attestation Service - this step creates an attestation certificate that we will use to validate the node
On-chain network verification - broadcast of the attestation certificate to the network. The network verifies that the certificate is signed by Intel and that the enclave code is identical to what is currently running on the network. This means that running an enclave that differs by even 1 byte is impossible.
Querying the network for the encrypted seed and starting the node
At the end of this process (if it is successful) the network will output an encrypted seed (unique to this node), which is required for our node to start. After decryption inside the enclave, the result is a seed that is known to all enclaves on the network, and is the source of determinism between all network nodes.
For a deeper dive into the protocol see the protocol documentation
Registration instructions are included in the Mainnet and Testnet Setup guides!
Note: This documentation assumes you have followed the instructions for Running a Full Node for Testnet.
WARNING: This will erase your node database. If you are already running a validator, be sure you back up your config/priv_validator_key.json and config/node_key.json prior to running unsafe-reset-all.
The state-sync configuration in ~/.secretd/config/app.toml is as follows:
Set the SNAP_RPC variable to a snapshot RPC. Set the state-sync BLOCK_HEIGHT and fetch the TRUST_HASH from the snapshot RPC. The BLOCK_HEIGHT to sync is determined by finding the latest block that is a multiple of snapshot-interval.
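The height calculation can be sketched in shell, with a hypothetical snapshot RPC URL and snapshot-interval (substitute real values; the commented curl/jq lines show how the live values would be fetched):

```shell
#!/bin/bash
SNAP_RPC="https://rpc.example.com:443"   # assumption: replace with a real snapshot RPC
INTERVAL=2000                            # assumption: the RPC's snapshot-interval

# Round a height down to the nearest multiple of the snapshot interval
snap_height() { echo $(( ($1 / $2) * $2 )); }

# On a live node you would fetch the latest height and trust hash:
#   LATEST_HEIGHT=$(curl -s "$SNAP_RPC/block" | jq -r .result.block.header.height)
#   TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash)
LATEST_HEIGHT=1234567                    # example value for illustration
BLOCK_HEIGHT=$(snap_height "$LATEST_HEIGHT" "$INTERVAL")
echo "$BLOCK_HEIGHT"                     # 1234000
```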
WARNING: This will erase your node database. If you are already running a validator, be sure you back up your config/priv_validator_key.json and config/node_key.json prior to running unsafe-reset-all.
It is recommended to copy data/priv_validator_state.json to a backup and restore it after unsafe-reset-all to avoid potential double signing.
This generally takes several minutes to complete, but has been known to take up to 24 hours.
Snapshots are compressed copies of the database that let you reach the current block quickly.
You can either choose to use the #id-1-download-the-secret-network-package-installer-for-debian-ubuntu or do it with the #id-1-download-the-secret-network-package-installer-for-debian-ubuntu-1
WARNING: This will erase your node database. If you are already running a validator, be sure you backed up your priv_validator_key.json prior to running the command. The command does not wipe the file; however, you should already have a backup of it in a safe location.
All of the above steps can also be done manually if you wish.
Quicksync / snapshots are provided by Lavender.five Nodes.
Reset your node.
This will ensure you connect to peers quickly.
Never save your validator’s keys on the remote server. You should be using your local machine and saving your keys on there to broadcast to the remote server.
In order to use a local CLI, you must:
Install the daemon on your local machine by going through the normal installation process
Set the daemon's config to the remote server
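A sketch of pointing the local CLI at a remote node (the IP address, port, and chain-id below are placeholders; adjust them to your server):

```bash
# Point the local CLI at the remote full node (placeholder address)
secretcli config node tcp://203.0.113.10:26657
# Set the chain-id the CLI should use (placeholder value)
secretcli config chain-id secret-4
```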
You will need to create new users for running Prometheus securely. This can be done by doing:
Create the directories for storing the Prometheus binaries and its config files:
Set the ownership of these directories to the prometheus user, to make sure that Prometheus can access these folders:
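The user, directory, and ownership steps can be sketched as (standard Linux commands; the directory layout matches the paths used later in this guide):

```bash
# Create a system user with no home directory and no login shell
sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus
# Directories for config files and the time series database
sudo mkdir -p /etc/prometheus /var/lib/prometheus
# Give the prometheus user ownership of both
sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus
```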
Download and unpack the latest release of Prometheus:
The following two binaries are in the directory:
prometheus - the main Prometheus binary
promtool - a utility for validating Prometheus configuration files
The following two folders (which contain the web interface, configuration file examples, and the license) are in the directory:
consoles
console_libraries
Copy the binary files into the /usr/local/bin/ directory:
Set the ownership of these files to the prometheus user previously created:
Copy the consoles and console_libraries directories to /etc/prometheus:
Set the ownership of the two folders, and of all files they contain, to the prometheus user:
In your home folder, remove the source files that are no longer needed:
Install Grafana on our instance which queries our Prometheus server.
Enable the automatic start of Grafana by systemd:
Grafana is now running, and we can connect to it at http://your.server.ip:3000. The default user and password is admin / admin.
Now you have to create a Prometheus data source:
Click the Grafana logo to open the sidebar.
Click “Data Sources” in the sidebar.
Choose “Add New”.
Select “Prometheus” as the data source
Set the Prometheus server URL (in our case: http://localhost:9090/)
Click “Add” to test the connection and to save the new data source
Finally, we're going to install a basic dashboard for Cosmos SDKs. For further reference in these steps, see: https://github.com/zhangyelong/cosmos-dashboard
After restarting your node, you should be able to access the Tendermint metrics (default port is 26660): http://localhost:26660
Append a job under the scrape_configs section of your prometheus.yml
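A minimal sketch of such a job (the job name is arbitrary; 26660 is the default Tendermint metrics port mentioned above):

```yaml
scrape_configs:
  - job_name: secret-node              # any label you like
    static_configs:
      - targets: ['localhost:26660']   # Tendermint metrics endpoint
```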
Copy and paste the Grafana dashboard ID 11036 OR the contents of cosmos-dashboard.json, then click Load to complete the import.
Set chain-id to secret-3
You're done!
From here, you're going to want to set up alerts for if something happens with your node, which will be a follow-up document.
This is largely just a copy of scaleway's setup, but updated and customized for Secret Network.
Docker and Docker Compose will allow you to run the required monitoring applications with a few commands. These instructions will run the following:
Grafana on port 3000: an open-source interactive analytics dashboard.
Prometheus on port 9090: an open-source metric collector.
Node Exporter on port 9100: an open-source hardware metric exporter.
The dashboard for Cosmos SDK nodes is pre-installed, to use it:
Enable Tendermint metrics in your secret-node
After restarting your node, you should be able to access the Tendermint metrics (default port is 26660): http://localhost:26660
If you did not replace NODE_IP with the IP of your node in the Prometheus config, do so now. If your node is on the Docker host machine, use 172.17.0.1
Login to Grafana and open the Cosmos Dashboard from the Manage Dashboards page.
Set the chain-id to secret-3
The Docker images expose the following ports:
3000 - Grafana. Your main dashboard. Default login is admin/admin.
9090 - Prometheus. Access to this port should be restricted.
9100 - Node Exporter. Access to this port should be restricted.
Your secret node metrics on port 26660 should also be restricted.
If you followed the basic security guide, these ports are already restricted. You will need to allow the grafana port:
sudo ufw allow 3000
You can also allow access from a specific IP if desired:
sudo ufw allow from 123.123.123.123 to any port 3000
Nodes on Secret Network are required to be fully patched and compliant with network requirements. While this makes running and maintaining a node harder, it is a necessary tradeoff if the network is to remain open and permissionless.
Part of the registration process on the network validates the patch level of your platform (motherboard + CPU). This requires you to have the necessary updates that mitigate known vulnerabilities which might lead to compromise of data protected by SGX.
Let's start with the different components that need to be updated:
Processor microcode (ucode) - Microcode is a type of low-level computer programming that is used to control the operations of a microprocessor. It is typically stored in the microprocessor itself or in a read-only memory (ROM) chip that is connected to the microprocessor. Microcode is used to define the basic set of instructions that a microprocessor can execute, as well as the operations that it can perform on data. It is usually written in a specialized microcode programming language, and it forms the lowest level of a computer's instruction set architecture.
SGX Platform Software (PSW) - This software package provides a set of tools and libraries to make use of the Intel SGX instruction set
The PSW packages can be updated using your operating system's standard install methods. For example, on Linux:
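A hedged sketch for Ubuntu, assuming Intel's SGX apt repository is already configured; exact package names can vary by distribution and SGX release:

```bash
sudo apt update
# Common Intel PSW packages (names are an assumption; adjust to your install)
sudo apt install --only-upgrade libsgx-urts libsgx-epid sgx-aesm-service
```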
While there are a few ways to update the processor microcode, it is important to note that for SGX, the updated microcode must be loaded through the BIOS. That means that upgrading the microcode using early load or late load (installing through the operating system) will not affect the SGX patch level of the platform.
To find out whether the microcode needs to be updated and find the latest version, we must first get the family, model, and stepping of our processor.
To find the stepping, model, and family of your processor, you can use the lscpu command. This command displays detailed information about the CPU architecture.
1. Open a terminal window on your system and type the following command:
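For example, to print just the relevant fields:

```bash
# Show only the family, model, and stepping lines from lscpu
lscpu | grep -E '^(CPU family|Model|Stepping):'
```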
2. The output of this command will include the stepping, model, and family of your processor, as well as other information about the CPU architecture.
Here is an example of the output you might see:
In this example, the family, model and stepping of the processor are 6, 85, and 3, respectively.
Next, we take these values, convert them to hex, and structure them as <family>-<model>-<stepping>. In this example we get 06-55-03. This is the microcode file name for our processor.
Pro tip: these numbers also give us our CPUID, in the following order: |model 1st digit|family|model 2nd digit|stepping|. For example, 06-9e-0d -> 906ED
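The decimal-to-hex conversion and the CPUID construction can be sketched in shell (values 6/85/3 are the example from the text; bash substring syntax is used):

```shell
family=6; model=85; stepping=3                       # from lscpu, decimal

# Microcode file name: <family>-<model>-<stepping>, zero-padded hex
ucode_name=$(printf '%02x-%02x-%02x' "$family" "$model" "$stepping")
echo "$ucode_name"    # 06-55-03

# CPUID: <model 1st hex digit><family (2 digits)><model 2nd hex digit><stepping>
m=$(printf '%02X' "$model"); f=$(printf '%02X' "$family"); s=$(printf '%X' "$stepping")
cpuid="${m:0:1}${f}${m:1:1}${s}"
echo "$cpuid"         # 50653
```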
After we have our microcode file name, we use it to find the latest version of our microcode, which is available here: https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/blob/main/releasenote.md. Continuing the previous example, the latest version of microcode for 06-55-03 is 0x0100015e
Now that we know what our microcode should be, we can compare it to our current microcode. Get your current version with:
cat /proc/cpuinfo | grep microcode
or dmesg | grep microcode
Note - Azure machines will always return 0xFFFFFFFF as their microcode version, regardless of the actual patch level.
If your version does not match the latest one, you will need to update your BIOS. To do that, contact your motherboard vendor, or your cloud service provider and download or request the BIOS version that contains the latest microcode for your CPU.
(or SCRT_ENCLAVE_DIR=/usr/lib secretd init-enclave | grep -Po 'isvEnclaveQuoteStatus":".+?"')
An output like this should be generated:
(or isvEnclaveQuoteStatus":"SW_HARDENING_NEEDED")
The important fields are isvEnclaveQuoteStatus and advisoryIDs. These are the fields that mark the trust level of our platform. The acceptable values for the isvEnclaveQuoteStatus field are:
OK
SW_HARDENING_NEEDED
With the following value accepted for testnet only:
GROUP_OUT_OF_DATE
For the status CONFIGURATION_AND_SW_HARDENING_NEEDED we perform a deeper inspection of the exact vulnerabilities that remain. The acceptable values for mainnet are:
"INTEL-SA-00334"
"INTEL-SA-00219"
Consult with the Intel API for more on these values.
If you do not see such an output, look for a file called attestation_cert.der, which should have been created in your $HOME directory. You can then use the command secretd parse <path/to/attestation_cert.der> to check the result; a successful result should be a 64-byte hex string (e.g. 0x9efe0dc689447514d6514c05d1161cea15c461c62e6d72a2efabcc6b85ed953b).
Running secretd init-enclave should have created a file called attestation_cert.der. This file contains the attestation report from above.
Contact us on the proper channels on scrt.network/discord
The details we will need to investigate will include:
Hardware specs
SGX PSW/driver versions
BIOS versions
The file attestation_cert.der
Output is:
Make sure you have the environment variable SCRT_ENCLAVE_DIR=/usr/lib set before you run secretd.
Output is:
Make sure the directory ~/.sgx_secrets/ is created. If that still doesn't work, try creating /root/.sgx_secrets
Output is:
Make sure the aesmd service is running: systemctl status aesmd.service
Output is:
Please disable hyperthreading and overclocking/undervolting (Turboboost) in your BIOS.
I'm seeing CONFIGURATION_AND_SW_HARDENING_NEEDED in the isvEnclaveQuoteStatus field, but with more advisories than what is allowed
This could mean a number of different things related to the configuration of the machine. Most common are:
["INTEL-SA-00161", "INTEL-SA-00233"] - Hyper-threading must be disabled in the BIOS
["INTEL-SA-00289"] - Overclocking/undervolting must be disabled by the BIOS (sometimes known as Turboboost)
["INTEL-SA-00219"] - Integrated graphics should be disabled in the BIOS - we recommend performing this step if you can, though it isn't required
If you are still having trouble getting rid of INTEL-SA-00219 and INTEL-SA-00289, here are some possible settings to look for outside of the CPU settings:
Primary Display = 'PCI Express'
IGPU Multi-Monitor = Disabled
Onboard VGA = Disabled
I'm seeing SGX_ERROR_DEVICE_BUSY
Most likely you tried reinstalling the driver and rerunning the enclave; restarting should solve the problem.
In order to become a validator, your node must be fully synced with the network. You can check this with:
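A sketch of the check (the jq filter is one common way to read the flag; older secretd versions capitalize the field as SyncInfo, newer ones use sync_info):

```shell
# On a live node:
#   secretd status | jq .SyncInfo.catching_up
# Illustrated here against a hypothetical sample of the status output:
status='{"SyncInfo":{"catching_up":false}}'
echo "$status" | jq .SyncInfo.catching_up    # false
```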
When the value of catching_up is false, your node is fully synced with the network. You can speed up syncing by State Syncing to the current block.
This is the secret wallet which you used to create your full node, and which you will use to delegate funds to your own validator. You must delegate at least 1 SCRT (1000000uscrt) from this wallet to your validator.
If you get the following message, it means that you have no tokens, or your node is not yet synced:
Copy/paste the address to get some test-SCRT from the faucet. Continue when you have confirmed your account has some test-SCRT in it.
(remember 1 SCRT = 1,000,000 uSCRT, and so the command below stakes 100 SCRT).
You should see your moniker listed.
(remember 1 SCRT = 1,000,000 uSCRT)
In order to stake more tokens beyond those in the initial transaction, run:
Currently, deleting a validator is not possible. If you redelegate or unbond your self-delegation, your validator will go offline and all your delegators will start to unbond.
You are currently unable to modify the --commission-max-rate and --commission-max-change-rate parameters.
Modifying the commission-rate can be done with:
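A hedged sketch of the command (flag and subcommand names follow the Cosmos SDK staking module; values in angle brackets are placeholders):

```bash
secretcli tx staking edit-validator \
  --commission-rate="0.05" \
  --from=<your-key-alias> \
  --chain-id=<chain-id>
```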
Unjailing
To unjail your jailed validator
Signing Info
To retrieve a validator's signing info:
Query Parameters
You can get the current slashing parameters via:
`secretcli` is the Secret Network light client, a command-line interface tool for interacting with nodes running on the Secret Network. To install it, follow these instructions:
Get the latest release of secretcli for your OS HERE.
Mac/Windows: Rename it from secretcli-${VERSION}-${OS} to secretcli (or secretcli.exe) and put it in your path
Ubuntu/Debian: sudo dpkg -i secret*.deb
Linux and MacOS users:
You can find alternate node endpoints in the API registry, or run your own full node
See more details on how to use the CLI here
Make sure you back up your validator before you migrate it. Do not forget!
If you don't have the mnemonics saved, you can back up the key with:
This prints the private key to stderr; you can then paste it into the file mykey.backup.
To check on the new full node if it finished catching up:
Only continue if catching_up is false.
To prevent double signing, you should stop the validator node before stopping the new full node to ensure the new node is at a greater block height than the validator node.
Please read about the dangers in running a validator.
The validator should start missing blocks at this point. This is the desired behavior!
On the validator node, the file is ~/.secretd/config/priv_validator_key.json.
You can copy it manually, or copy the file to the new machine using ssh:
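For example, using scp (the user, host, and paths are placeholders; adjust to your setup):

```bash
# Copy the validator key from the old node to the new one
scp ~/.secretd/config/priv_validator_key.json \
    user@new-node-ip:~/.secretd/config/priv_validator_key.json
```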
After being copied, the key (priv_validator_key.json) should be removed from the old node's config directory to prevent double-signing if the node were to start back up.
The new node should start signing blocks once caught up.
Prometheus is a flexible monitoring solution in development since 2012. The software stores all its data in a time series database and offers a multi-dimensional data model and a powerful query language to generate reports of the monitored resources.
This tutorial makes no assumptions about previous knowledge, other than:
You are comfortable with a Linux operating system, specifically Ubuntu 20.04
You are comfortable being able to ssh into your node, as all operations will be done from the command line
Once you've submitted a delegation to a validator, you can see its information by using the following command:
Example:
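A hedged sketch of both queries (subcommand names follow the Cosmos SDK staking module; addresses are placeholders):

```bash
# One specific delegation
secretcli q staking delegation <delegator-addr> <validator-addr>
# All delegations from this delegator
secretcli q staking delegations <delegator-addr>
```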
Or if you want to check all your current delegations with distinct validators:
Once you begin an unbonding-delegation, you can see its information by using the following command:
Or if you want to check all your current unbonding-delegations with distinct validators:
Additionally, you can get all the unbonding-delegations from a particular validator:
A redelegation is a type of delegation that allows you to bond illiquid tokens from one validator to another:
Here you can also redelegate a specific shares-amount or a shares-fraction with the corresponding flags.
The redelegation will be automatically completed when the unbonding period has passed.
This is intended to guide you in selecting SGX-compliant VPS options for the Secret Network.
When renting a compliant bare metal machine from a VPS provider, ensure you do not accept any chassis or CPU substitutes they propose, unless those substitutes are on the Hardware Compliance list.
All cost estimates are based on the following recommendations:
Processor: E-series rather than E3 (due to age)
SSD: 512GB+
RAM: 64GB+
A cheaper VPS is not necessarily a better one.
Websites: Global or United States
Currently, exercise caution when considering OVHCloud servers, as there are concerns that their mainboards are not updated adequately.
Please contact node support if you have more questions:
The following are examples of Hardware Compliant servers:
Example in the US: https://us.ovhcloud.com/bare-metal/infra/infra-2-le/
Global example: https://www.ovhcloud.com/en/bare-metal/rise/rise-3/
OVHCloud servers can come with either an ASUS or Asrock motherboard. The Asus motherboard does NOT support Intel SPS. If you receive the Asus motherboard, you'll need to create a ticket to have the motherboard replaced with the Asrock motherboard: Asrock E3C246D4U2-2T
1. Navigate to the server's management page
2. Under General Information, ensure SGX is enabled
3. Navigate to the IPMI tab. This will be used to disable overclocking and other necessary settings.
4. Enable Remote KVM
5. Create a DEL hotkey
6. Reset the server, and continue executing the DEL hotkey until you enter the BIOS.
7. Disable Intel Speedstep Technology
8. Under Chipset Configuration:
9. Save and exit the BIOS
10. Reset the server again
11. Continue from Setting Up a Node
This is intended to guide you in selecting SGX compliant hardware for Secret Network.
CPU must support SGX via SPS. CPUs that only support SGX via Intel ME will not work.
The following are confirmed compliant Intel CPUs:
Only Intel processors support SGX. AMD processors are *NOT* supported.
The distinguishing factor of these motherboards is that they support Intel SGX.
This is not an exhaustive list of supported motherboards. These are simply motherboards proven supported by community members.
Ensure that Hyper-Threading and overclocking/undervolting are disabled in the BIOS.
Alternatively, Eddie from FreshSCRTs is helping users expedite the delivery of their VPS as well as giving some upgrades from phoenixnap. You can pursue that by doing the following.
Leaseweb has been tested and confirmed working by the Secret Network community.
Ensure that Hyper-Threading/Logical Processors and overclocking/undervolting are disabled in the BIOS.
Things to note before setting up on Nforce:
Only certain chassis and CPU configurations have SGX enabled.
You need to communicate with Nforce directly so they give you the right configuration, so that SGX works in the BIOS.
For the purpose of this guide I selected the HP DL20 G10 chassis with an Intel E-2174G CPU (3.8-4.7 GHz, 4C/8T), 32 GB RAM, Ubuntu 20.04, and a 512 GB SSD.
Provisioning takes about a day. Confirm they have the servers in stock; if they do not, request only the same configuration, or SGX will not be enabled.
The SSC portal is only enabled once you finish the payment. Go to your dedicated servers and select the image. Go to remote management and create a session to log into the BIOS via IPMI. You'll need to make sure Hyper-Threading is disabled.
| VPS Provider | Cost/month | Setup Instructions |
|---|---|---|
This is not a comprehensive list of compliant hardware, but rather a guide to what has been verified to work. Hardware is often shown as SGX compliant, but such listings do not distinguish whether SGX is supported via SPS or Intel ME. Only SGX via SPS is supported.
| Brand | Family | Model |
|---|---|---|
| Brand | Tag | Versions | Link |
|---|---|---|---|
Website:
Rent a with any of the hardware that shows as working on the
Continue with the node setup guide
Signup for phoenixnap using .
Message Eddie your order number .
Website:
Rent a with any of the hardware that shows as working on the
Continue with the node setup guide
Website:
Login with your credentials and proceed with .
(Most rows of this table were lost in extraction; recoverable cells include monthly costs of roughly $89–$210, X1 (Professional Line), and the note that Hetzner is *NOT SUPPORTED*.)
| Intel | XEON E-Series | *(models lost in extraction)* |
| Intel | XEON Gold-Series | *(models lost in extraction)* |
| Intel | XEON Platinum-Series | *(models lost in extraction)* |
| AMD | *NOT SUPPORTED* | |
Website: Microsoft Azure
Using Azure is no longer recommended because of higher pricing than bare metal and insufficient RAM (32GB is possible to use, but no longer recommended).
When renting a compliant bare metal machine from a VPS provider, ensure you do not accept any chassis or CPU substitutes they propose, unless those substitutes are on the Hardware Compliance list.
Microsoft Azure is tested and confirmed working by the Secret Network Community.
To setup a node on Microsoft Azure do the following.
Visit the Azure Confidential Compute page here and click "Get Started"
Click "Get it now" on the following page and signup for a Microsoft Azure Account.
While provisioning your VPS be sure to have at least 500GB of premium SSD storage available.
After your confidential compute VM is deployed, continue with the node setup guide starting here.
Once you begin a redelegation, you can see its information by using the following command:
Or if you want to check all your current redelegations with distinct validators:
Additionally, you can get all the outgoing redelegations from a particular validator:
GoAccess is a powerful tool for providing usage statistics for your endpoints.
This tutorial will guide you through configuring Nginx for logging, anonymizing logs, monitoring web traffic with GoAccess, and setting up log rotation for Nginx logs.
This guide is intended for intermediate users who are familiar with Linux, Nginx, and using the command-line interface.
SSH into Your Machine: Use an SSH client to log into your server.
First, ensure that all packages are up to date. This can prevent security vulnerabilities.
Install Glances: An advanced system monitor for Linux.
Install Nginx: A high-performance web server and a reverse proxy, often used for load balancing.
Configure UFW Firewall for Nginx: Make sure Nginx can receive HTTP and HTTPS traffic.
Remove Existing Nginx Configuration:
Edit New Configuration: Use a text editor like nano to create a new configuration file.
Replace the contents with your load balancer configuration. You can use the Example Nginx config as a starting point.
Modify RPC/LCD/gRPC server entries and domain names as required.
Save and exit the editor (CTRL + X, then Y to confirm, and Enter to save).
Test Nginx Configuration: this ensures your syntax is correct.
Enable and Start Nginx: This will make sure Nginx starts on boot and starts running immediately.
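The test, enable, and start steps can be sketched as:

```bash
# Check configuration syntax before (re)starting
sudo nginx -t
# Start on boot and start now
sudo systemctl enable nginx
sudo systemctl start nginx
```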
Install Certbot: This tool automates obtaining free SSL/TLS certificates from Let's Encrypt.
Obtain the SSL Certificate for your domain:
Finally, you can add a cronjob to crontab to enable auto-renewing of the certificates:
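A hedged sketch (the domain is a placeholder; the twice-daily schedule is one common choice, and certbot renew only renews certificates close to expiry):

```bash
# Obtain and install a certificate for your domain (placeholder)
sudo certbot --nginx -d rpc.example.com
# Renew twice daily via root's crontab
( sudo crontab -l 2>/dev/null; echo "0 0,12 * * * certbot renew --quiet" ) | sudo crontab -
```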
The Sentry Node Architecture is an infrastructure example for DDoS mitigation on Tendermint-based networks.
Secret Nodes (Validators) are responsible for ensuring that the network can sustain denial of service attacks.
One recommended way to mitigate these risks is for validators to carefully structure their network topology in a so-called sentry node architecture.
Validator nodes should only connect to full nodes they trust, because they operate them themselves or they are run by other validators they know socially. A validator node will typically run in a data center. Most data centers provide direct links to the networks of major cloud providers. The validator can use those links to connect to sentry nodes in the cloud. This shifts the burden of denial-of-service from the validator's node directly to its sentry nodes, and may require new sentry nodes be spun up or activated to mitigate attacks on existing ones.
Sentry nodes can be quickly spun up or change their IP addresses. Because the links to the sentry nodes are in private IP space, an internet-based attacker cannot disturb them directly. This ensures that validator block proposals and votes always make it to the rest of the network.
For those implementing sentries on validators that already have a public IP exposed: currently any peer, be it a validator or full node, is given 16 connection attempts with exponential backoff, which in total amounts to around 35 hours. If the node remains unreachable, it is automatically removed from the address book. An unreachable validator node is not gossiped across the network, i.e. all other nodes will each try to connect to the unreachable validator node before removing it from their address books.
Log into your sentry node(s), and validator, then run the following commands to get the peer information:
Get node id
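On Tendermint-based nodes such as secretd, the node ID can be printed with:

```bash
secretd tendermint show-node-id
```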
Save your peer information, be sure to remember which are for sentries and which is for your validator, you'll need it later:
To setup basic sentry node architecture you can follow the instructions below:
Sentry Nodes should edit their config.toml:
First follow the Full Node Guide
Edit the full nodes config file you want to use as a sentry node:
Proceed to add the peer id of your validator to the .secretd/config/config.toml:
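A sketch of the relevant [p2p] settings in the sentry's config.toml (node IDs and IPs are placeholders):

```toml
[p2p]
# Keep the validator as a persistent peer...
persistent_peers = "<validator-node-id>@<validator-ip>:26656"
# ...but never gossip its address to the rest of the network
private_peer_ids = "<validator-node-id>"
```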
Now proceed to restart your secret node with the following command.
You now have a sentry node running!
Validator nodes should add their sentry node peer information to their .secretd/config/config.toml:
Proceed to add the peer id of your sentry nodes to the persistent_peers list and set pex to false:
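A sketch of the corresponding [p2p] settings in the validator's config.toml (node IDs and IPs are placeholders):

```toml
[p2p]
# Peer only with your own sentries
persistent_peers = "<sentry-1-node-id>@<sentry-1-ip>:26656,<sentry-2-node-id>@<sentry-2-ip>:26656"
# Disable peer exchange so the validator talks only to its sentries
pex = false
```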
Now proceed to restart your secret node with the following command.
You're now running your validator behind a sentry node!
https://github.com/cosmos/gaia/blob/master/docs/validators/security.md
Creating archive nodes is not possible at this time. Please use the provided API archive nodes in API Endpoints Mainnet (Secret-4) if you need access to an archive.
An archive node keeps all the past blocks. An archive node makes it convenient to query the past state of the chain at any point in time. Finding out what an account's balance or stake size was at a certain block, or which extrinsics resulted in a certain state change, are fast operations when using an archive node. However, an archive node takes up a lot of disk space - nearly 2TB for secret-4 as of Feb 1, 2023.
More on hardware support here.
Note that syncing from scratch/following these instructions takes several weeks, since state-sync is not available for Archive Nodes.
To setup your archive node you can follow the instructions below:
secretd
To install secretd, please visit Install secretd.
Setup the node using the Running a Full Node guide. You should stop at the Set minimum-gas-price Parameter step.
Do NOT begin syncing yet!
secretd
Now that you have registered the node with the latest version, install v1.2.0-archive.
Note that the secret-node system file is created in a previous step.
If everything above worked correctly, the following command will show your node streaming blocks (this is for debugging purposes only, kill this command anytime with Ctrl-C). It might take a while for blocks to start streaming, so grab some 🍿 while you wait!
You now have an Archive node running!
Syncing a node from scratch means that from time to time you'll need to perform an upgrade (at the block height at which the upgrade originally took place on mainnet).
You will need to use the designated archive-node binaries when available. For the rest of the upgrades, use the binaries for the respective version from the releases page.
As of this writing, the upgrade heights are:
v1.3.0 - block height 3,343,000 (binaries)
v1.4.0 - block height 5,309,200 (binaries)
v1.5.0 - block height 5,941,700 (binaries)
v1.6.0 - block height 6,537,300 (binaries)
For more detailed upgrade instructions, you can refer to the v1.5.0 upgrade instructions.
Here is an example nginx.conf for load balancing on Nginx:
Hardware compliance table: motherboard models and BIOS versions known to work with SGX, from Supermicro, Dell, HP, ASUS, ASRock, and GIGABYTE (e.g. GIGABYTE with BIOS version F06). See the Hardware Compliance page for the full list.
A complete, ready-to-go command that should fit most needs can be found at #fast-state-sync-script. Be aware that this script can also fail or cause problems; in that case, please ask for help in the help channels.
Statesync is a module built into the Cosmos SDK that allows nodes to rapidly join the network by syncing from a snapshot-enabled RPC at a trusted block height.
This greatly reduces the time required for a node to sync with the network from days to minutes. The limitations of this are that there is not a full transaction history, just the most recent state that the state-sync RPC has stored. An advantage of state-sync is that the database is very small in comparison to a fully synced node, therefore using state-sync to re-sync your node to the network can help keep running costs lower by minimizing storage usage.
By syncing to the network with state-sync, a node can avoid having to go through all the upgrade procedures and can sync with the most recent binary only.
This documentation assumes you have followed the instructions for Running a Full Node.
First, adjust the configuration to be compatible with state-sync:
IAVL fast node must be disabled, otherwise the daemon will attempt to upgrade the database while state sync is occurring.
To ensure that state-sync works on your node, it has to look for the correct snapshots that the snapshot RPC provides.
SNAP_RPC is the RPC node endpoint used for state-syncing.
Set the state-sync BLOCK_HEIGHT and fetch the TRUST_HASH from the snapshot RPC. The BLOCK_HEIGHT to sync from is determined by finding the latest block height that is a multiple of snapshot-interval.
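As a sketch of that calculation (the endpoint is a placeholder, and a snapshot-interval of 2,000 blocks is assumed; check your snapshot RPC for the real value):

```shell
# Placeholder snapshot RPC endpoint; substitute a real one.
SNAP_RPC="http://snapshot-rpc.example.com:26657"

# Round the latest height down to the most recent snapshot-interval multiple.
snapshot_height() {
  local latest=$1 interval=${2:-2000}
  echo $(( latest - latest % interval ))
}

# Real usage (requires curl and jq):
#   LATEST_HEIGHT=$(curl -s "$SNAP_RPC/block" | jq -r .result.block.header.height)
#   BLOCK_HEIGHT=$(snapshot_height "$LATEST_HEIGHT")
#   TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash)
```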
The output should be similar to:
Need help using statesync properly?
You can find help in Telegram here
Visit the Secret Network Discord here and ask in #node-discussion or #node-support for help
This will erase your node's database. If you are already running a validator, be sure you have backed up your config/priv_validator_key.json prior to running unsafe-reset-all.
It is recommended to preserve the signing state of the node by copying data/priv_validator_state.json before running unsafe-reset-all, to avoid potential double signing.
The code below stops the node, resets the temporary directory and resets the node into a fresh state.
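A minimal sketch of that sequence (service and subcommand names assume the standard install; on older versions the reset subcommand may be `secretd unsafe-reset-all`):

```shell
sudo systemctl stop secret-node

# Back up the signing state first to avoid double-signing.
cp ~/.secretd/data/priv_validator_state.json ~/priv_validator_state.json.backup

# Wipe the database while keeping config and keys.
secretd tendermint unsafe-reset-all

# Restore the signing state, then restart.
cp ~/priv_validator_state.json.backup ~/.secretd/data/priv_validator_state.json
sudo systemctl start secret-node
```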
This generally takes several minutes to complete, but has been known to take up to 24 hours. To better help the process along, add seeds.
When state-sync fails, you can restart the process and try again using the condensed script below. This usually fixes some of the random problems with it:
To save time, you can use this script to quickly init everything you need for statesync. Please be aware that this might be dangerous if you are running a validator.
Go to their dedicated hosts / bare metal section and rent an Intel E-2286G processor (6 cores / 12 threads @ 4.0 GHz).
Using the boot connection, log into the BIOS and ensure that hyperthreading and overclocking/undervolting are disabled.
This section will take you through the process of taking a node from fresh machine to full validator on the public testnet pulsar-3.
This section will take you through the process of taking a node from fresh machine to full validator. The general steps are as follows:
Got problems with using SGX and DCAP attestation in your system? Please ask in the Telegram or Discord for help. For Validators, you can also ask in the SN Validators chat.
If you're running a local machine and not a cloud-based VM -
Update your BIOS to the latest available version
Go to your BIOS menu
Enable SGX (Set to "YES", it's not enough to set it to "software controlled")
Disable Secure Boot
Disable Hyperthreading
Please use Ubuntu 22.04 LTS if you install SGX on a fresh node, to ensure that DCAP will work correctly. Ubuntu 20.04 LTS is no longer supported by default.
Make sure the SGX driver is installed. The following devices should appear:
If your kernel version is 5.11 or higher, then you probably already have the SGX driver installed. Otherwise, please update the kernel to version 5.11 or higher to ensure that these two devices appear.
Also make sure that the user under which the node is supposed to run has privileges to access SGX:
The sgx_prv group should appear. If it does not, logging out and back in may be needed for the change to take effect.
First, you need to add the Intel repository to APT and install the necessary SGX libraries:
If your system has 5th Gen Intel® Xeon® Scalable Processor(s)
For the DCAP attestation to work, you'll need to register your platform with Intel. This is achieved by the following:
You can check the file /var/log/mpa_registration.log
, to see if the platform is registered successfully.
The Quote Provider library is needed to supply the data for DCAP attestation. Its configuration file can be found at:
/etc/sgx_default_qcnl.conf
Running a baremetal/physical machine
The simplest would be to use the PCCS run by SCRTLabs. Modify the following parameters in the file:
You can set those parameters by the following command:
Running on Cloud VPS providers
For cloud VPS providers, the cloud service providers may provide their own PCCS. Please see their documentation for more information.
Note: You'll need to restart the AESMD service each time the configuration is changed
Next, restart your aesmd service for the changes to take effect.
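On a standard systemd installation that is:

```shell
sudo systemctl restart aesmd
```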
Download and run the check-hw tool (included in the Release package). You should see the following:
That would mean all the above steps are ok, and you're good to go.
In case you see some error messages, but at the end the following:
That would mean there's a problem with DCAP attestation.
However, EPID attestation still works. Although you may technically run the node, it's strongly recommended to fix this: EPID will be phased out by Intel in April 2025.
To get more detailed error info, run check-hw --testnet
This document details how to join the Secret Network testnet as a full node. Once your full node is running, you can turn it into a validator in the optional last step.
Secret Network has strict Hardware Requirements. If your machine does not meet them, it will *NOT* work as a node.
Ubuntu/Debian host (with ZFS or LVM to be able to add more storage easily)
A public IP address
Open ports TCP 26656 & 26657
Note: If you're behind a router or firewall then you'll need to port forward on the network device.
secretd
Choose a moniker for yourself, and replace <MONIKER>
with your moniker below. This moniker will serve as your public nickname in the network.
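Assuming the pulsar-3 chain-id that this testnet guide targets, the init command looks like this (a sketch):

```shell
secretd init <MONIKER> --chain-id pulsar-3
```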
This will generate the following files in ~/.secretd/config/
genesis.json
node_key.json
priv_validator_key.json
genesis.json
The genesis file is how other nodes on the network know what network you should be on.
Initialize /opt/secret/.sgx_secrets:
You can choose between two methods, 3a (automatic) or 3b (manual):
WARNING: This method is experimental, and may not work. If it doesn't work, skip to step 3b.
The following commands will create the necessary environment variables and attempt to automatically register the node.
The attestation certificate should have been created by the previous step.
Verify the certificate is valid. A 64 character registration key will be printed if it was successful.
secretd
Configure secretd
. Initially you'll be using the bootstrap node, as you'll need to connect to a running node and your own node is not running yet.
If you already have a wallet funded with SCRT
, you can import the wallet by doing the following:
Otherwise, you will need to set up a key. Make sure you back up the mnemonic and the keyring password.
Register your node on-chain
2. Pull & check your node's encrypted seed from the network
3. Get additional network parameters
These are necessary to configure the node before it starts.
From here on, commands must be run on the full node.
In order to be able to handle NFT minting and other Secret Contract-heavy operations, it's recommended to update your SGX memory enclave cache:
Set the minimum-gas-price parameter. We recommend 0.0125uscrt per gas unit:
Your node will not accept transactions that specify --fees lower than the minimum-gas-price you set here.
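One way to set it (a sketch; the key lives in app.toml under the default home directory):

```shell
# Set minimum-gas-prices in app.toml (path assumes the default home directory).
sed -i 's/^minimum-gas-prices = .*/minimum-gas-prices = "0.0125uscrt"/' ~/.secretd/config/app.toml
```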
Note that the secret-node system file is created when installing SGX.
You are now ready to finally sync the full node. 🎉
secretd tendermint show-node-id
And publish yourself as a node with this ID:
Be sure to point your CLI to your running node instead of the bootstrap node
secretcli config node tcp://localhost:26657
If someone wants to add you as a peer, have them add the above address to their persistent_peers in their ~/.secretd/config/config.toml.
And if someone wants to use your node from their secretcli then have them run:
Got problems with using SGX and DCAP attestation in your system? Please ask in the Telegram or Discord for help. For Validators, you can also ask in the SN Validators chat.
If you're running a local machine and not a cloud-based VM -
Update your BIOS to the latest available version
Go to your BIOS menu
Enable SGX (Set to "YES", it's not enough to set it to "software controlled")
Disable Secure Boot
Disable Hyperthreading
Please use Ubuntu 22.04 LTS if you install SGX on a fresh node, to ensure that DCAP will work correctly. Ubuntu 20.04 LTS is no longer supported by default.
Make sure the SGX driver is installed. The following devices should appear:
If your kernel version is 5.11 or higher, then you probably already have the SGX driver installed. Otherwise, please update the kernel to version 5.11 or higher to ensure that these two devices appear.
Also make sure that the user under which the node is supposed to run has privileges to access SGX:
The sgx_prv group should appear. If it does not, logging out and back in may be needed for the change to take effect.
First, you need to add the Intel repository to APT and install the necessary SGX libraries:
If your system has 5th Gen Intel® Xeon® Scalable Processor(s)
For the DCAP attestation to work, you'll need to register your platform with Intel. This is achieved by the following:
You can check the file /var/log/mpa_registration.log
, to see if the platform is registered successfully.
The Quote Provider library is needed to supply the data for DCAP attestation. Its configuration file can be found at:
/etc/sgx_default_qcnl.conf
Running a baremetal/physical machine
The simplest would be to use the PCCS run by SCRTLabs. Modify the following parameters in the file:
You can set those parameters by the following command:
Running on Cloud VPS providers
For cloud VPS providers, the cloud service providers may provide their own PCCS. Please see their documentation for more information.
Note: You'll need to restart the AESMD service each time the configuration is changed
Next, restart your aesmd service for the changes to take effect.
That would mean all the above steps are ok, and you're good to go.
In case you see some error messages, but at the end the following:
That would mean there's a problem with DCAP attestation.
However, EPID attestation still works. Although you may technically run the node, it's strongly recommended to fix this: EPID will be phased out by Intel in April 2025.
To get more detailed error info, run check-hw --testnet
This document details how to join the Secret Network secret-4 mainnet as a full node. Once your full node is running and synced to the latest block, you can optionally turn it into a validator.
Ubuntu/Debian host, recommended is Ubuntu 20.04 LTS or 22.04 LTS.
A public IP address, so that other nodes can connect to you.
Open ports TCP 26656 & 26657
Note: If you're behind a router or firewall then you'll need to port forward on the network device.
secretd
This guide assumes you've already installed the latest version of secretd and SGX.
Choose a moniker for yourself, and replace <MONIKER>
with whatever name you like (could be a random string, or just how you like to name the node) below. This moniker is the public nickname of your node in the network.
This will generate the following files in ~/.secretd/config/
genesis.json
node_key.json
priv_validator_key.json
genesis.json
Initialize /opt/secret/.sgx_secrets:
The following commands will create the necessary environment variables and attempt to automatically register the node.
The attestation certificate should have been created by the previous step
Verify the certificate is valid. A 64-character registration key will be printed if it was successful.
secretd
Configure secretd
. Initially you'll be using the bootstrap node, as you'll need to connect to a running node and your own node is not running yet.
If you already have a wallet funded with SCRT
, you can import the wallet by doing the following:
Otherwise, you will need to set up a key. Make sure you back up the mnemonic and the keyring password.
This will output your address, a 45-character string starting with secret1...
Register your node on-chain
2. Pull & check your node's encrypted seed from the network
3. Get additional network parameters
These are necessary to configure the node before it starts.
From here on, commands must be run on the full node.
In order to be able to handle NFT minting and other Secret Contract-heavy operations, it's recommended to update your SGX memory enclave cache:
Set the minimum-gas-price parameter. We recommend 0.1uscrt per gas unit:
Your node will not accept transactions that specify --fees lower than the minimum-gas-price you set here.
IAVL fast node must be disabled, otherwise the daemon will attempt to upgrade the database while state sync is occurring.
Note that the secret-node system file is created when installing SGX.
You are now ready to finally sync the full node. 🎉
secretd tendermint show-node-id
And publish yourself as a node with this ID:
Be sure to point your CLI to your running node instead of the bootstrap node
secretcli config node tcp://localhost:26657
If someone wants to add you as a peer, have them add the above address to their persistent_peers in their ~/.secretd/config/config.toml.
And if someone wants to use your node from their secretcli then have them run:
This section will take you through the process of taking a node from fresh machine to full validator. The general steps are as follows:
Unlike other Tendermint/Cosmos based daemons, secretd
cannot be built from source due to the SGX requirement.
secretd
The most common method for installing secretd
is the Secret Network package installer for Debian/Ubuntu:
Website:
Continue with the node setup guide
Ensure your hardware is compliant (see Hardware Compliance).
Unlike other Tendermint/Cosmos based daemons, secretd cannot be built from source due to the SGX requirement. For builds other than .deb, see the releases page.
Reading
RPC address of an already active node. You can use http://bootstrap.pulsar3.scrtlabs.com:26657, or any other node that exposes RPC services. Alternate RPC nodes are available in the API registry.
This guide assumes you've already installed the latest version of secretd and SGX. To set up an archive node, you must follow the instructions.
For more information on SGX, see instructions for and . See if you'd like a more comprehensive overview on what's happening in these steps.
If this step was successful, you can skip straight to .
The following steps should be run with secretd on the full node itself. To run them with secretd on a local machine instead, see the instructions linked there.
This will output your address, a 45-character string starting with secret1.... Copy/paste it to get some test-SCRT from the faucet. Continue when you have confirmed your account has some test-SCRT in it.
Also check out the guide by Block Pane for fine-tuning your machine for better uptime.
Go to to continue.
You can skip syncing from scratch or downloading a snapshot by state-syncing to the current block.
To turn your full node into a validator, see .
Ensure your hardware is compliant (see Hardware Compliance).
Download and run the check-hw tool (included in the Release package). You should see the following:
Secret Network has strict Hardware Requirements, see Hardware Compliance. If your machine does not meet them, it will *NOT* work as a node.
Reading
RPC address of an already active node. You can use any node that exposes RPC services; please see the API registry.
For more information on how to install SGX, see instructions for .
If you need help with installing secretd, please take a look at .
You can choose between two methods, automatic or manual:
WARNING: This method is experimental, and may not work. If it doesn't work, skip to the manual method.
If this step was successful, you can skip straight to .
If registration was NOT successful, consider checking out the help resources or contacting a fellow validator on our community channels.
The following steps should be run with secretd on the full node itself. To run them with secretd on a local machine instead, see the instructions linked there.
Also check out the guide by Block Pane for fine-tuning your machine for better uptime.
Go to or to continue.
To sync to head quickly, please see .
You can skip syncing from scratch or downloading a snapshot by state-syncing to the current block.
To turn your full node into a validator, see .
or the node
If you wish to create an archive node, replace step 3 with .
For builds other than .deb, see the releases page.
Overview
A coordinated group of validators (maximum: 80) secures the Secret Network. Each validator runs Tendermint, a Byzantine fault-tolerant Delegated Proof-of-Stake (DPoS) consensus engine. Each validator stakes their own SCRT coins, plus coins from delegators, to earn rewards by successfully running the protocol, verifying transactions, and proposing blocks. If they fail to keep their node consistently online (downtime) and honest (double-signing), slashing penalties occur, resulting in coins being deducted from their account.
All SCRT holders can become a Secret Network validator or delegator and participate in staking and governance processes.
For information on running a node, delegating, staking, and voting, please see the validator guide below and visit our governance documentation.
Here is a list of Hardware Compliance for running a validator.
For detailed information on how to set up and run a validator, see the Becoming A Validator section.
Slashing for downtime is based on actual block times (NOT theoretical block times) and the SignedBlocksWindow and MinSignedPerWindow network parameters. Validators signing fewer than MinSignedPerWindow blocks out of every SignedBlocksWindow will experience a downtime slash.
Parameters: 11,250 blocks out of every 22,500 blocks
For a block time of 6.8 seconds, it roughly translates to being up for less than 21.25 hours out of every 42.5-hour window.
For a block time of 6.4 seconds, it roughly translates to being up for less than 20 hours out of every 40-hour window.
Slashing of 0.01% of the validator and its delegators' staked SCRT
Jailing of validators for 10 minutes. Validators do not earn block rewards for the jailing period and must manually unjail their node with:
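The unjail transaction is typically (a sketch; the --from value is your own key name):

```shell
secretcli tx slashing unjail --from <your-key-name>
```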
Slashing for double-signing is when the validator signs the same block twice. This happens most commonly during migration, and the original validator node is not shut off appropriately, and their priv_validator_key.json
is running on two machines at the same time. The best way to avoid double signing is by using a remote signer such as the Tendermint Key Management System (TMKMS) or Horcrux.
A validator signs the same block height twice
Slashing of 5% of the validator and its delegators' staked SCRT
Jailing forever (tombstoned) of the validator node
A tombstoned validator cannot earn block rewards anymore, and its delegators must re-delegate their SCRT to another validator
The Secret Network uses the same slashing penalties as Cosmos. Burning SCRT through slashing validators for downtime and/or double-signing discourages poor practices and dishonesty by protocol-recognized actors. If slashing occurs frequently, a validator may lose their ability to vote on future blocks for some time.
`secretcli` is the Secret Network light client, a command-line interface tool for interacting with nodes running on the Secret Network. To install it, follow these instructions:
Get the latest release of secretcli for your OS HERE.
Mac/Windows: Rename it from secretcli-${VERSION}-${OS} to secretcli or secretcli.exe and put it in your path.
Ubuntu/Debian: sudo dpkg -i secret*.deb
Linux and MacOS users:
You can find alternate node endpoints in the API registry, or run your own full node.
See more details on how to use the CLI here
In order to become an active validator, you must have more stake than the bottom validator. You may still execute the following steps, but you will not be active and therefore won't receive staking rewards.
In order to become a validator, your node must be fully synced with the network, using either the Quicksync / Snapshot or Statesync.
After you have completed these steps, you can check this by running:
When the value of catching_up is false, your node is fully synced with the network and ready to go.
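A sketch of that check, assuming jq is installed and the node's RPC is listening on the default local port:

```shell
secretd status | jq .SyncInfo.catching_up
# false means the node is fully synced
```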
This is the secret wallet which you used to create your full node, and which you will use to delegate funds to your own validator. You must delegate at least 1 SCRT (1000000uscrt) from this wallet to your validator.
If you get the following message, it means that you have no tokens, or your node is not yet synced:
Before creating your validator, backup your validator key.
WARNING: if you don't backup your key and your node goes down, you will lose your validator and have to start a new one.
Remember 1 SCRT = 1,000,000 uSCRT; the command below stakes 10 SCRT.
You should see your moniker listed.
(remember 1 SCRT = 1,000,000 uSCRT)
In order to stake more tokens beyond those in the initial transaction, run:
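The delegation command is typically shaped like this (a sketch; the validator operator address and key name are placeholders, and the amount here is 1 SCRT):

```shell
secretcli tx staking delegate <validator-operator-address> 1000000uscrt --from <your-key-name>
```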
Currently, deleting a validator is not possible. If you redelegate or unbond your self-delegation, your validator will go offline and all your delegators will start to unbond.
You are currently unable to modify the --commission-max-rate and --commission-max-change-rate parameters.
Modifying the commission-rate can be done using this:
To unjail your jailed validator
To retrieve a validator's signing info:
You can get the current slashing parameters via:
Uncomplicated Firewall (UFW) is a program for managing a netfilter firewall designed for easy use. It uses a command-line interface (CLI) with a small number of simple commands, and is configured with iptables. UFW is available by default in all Ubuntu installations after 18.04 LTS, and features tools for intrusion prevention which we will cover in this guide.
Start by checking the status of UFW.
Then proceed to configure your firewall with the following options, preferably in this order.
The order is important because UFW executes the instructions given to it in the order they are given, so putting the most important and specific rules first is a good security practice. You can insert UFW rules at any position you want to by using the following syntax (do not execute the following command when setting up your node security):
The example command above would be placed in the first position (instead of the last) of the UFW hierarchy and deny a specific IP address from accessing the server.
This sets the default to allow outgoing connections unless specified they should not be allowed.
This sets the default to deny incoming connections unless specified they should be allowed.
This allows SSH connections by the firewall.
This limits SSH login attempts on the machine. The default is to limit SSH connections from a specific IP address if it attempts 6 or more connections within 30 seconds.
Allow 26656 for a p2p networking port to connect with the Tendermint network; unless you manually specified a different port.
Allow 1317 if you are running a public LCD endpoint from this node. Otherwise, you can skip this.
Allow 26657 if you are running a public RPC endpoint from this node. Otherwise, you can skip this.
This enables the firewall you just configured.
At any point in time you can disable your UFW firewall by running the following command.
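The steps above, collected into one command sequence (a sketch; adjust ports if your node uses non-default ones, and only open 1317/26657 if you serve those endpoints publicly):

```shell
sudo ufw status
sudo ufw default allow outgoing
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw limit ssh/tcp
sudo ufw allow 26656   # Tendermint p2p
sudo ufw allow 1317    # public LCD endpoint (optional)
sudo ufw allow 26657   # public RPC endpoint (optional)
sudo ufw enable

# To disable the firewall at any time:
sudo ufw disable
```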
It is CRUCIAL to backup your validator's private key. It's the only way to restore your validator in an event of a disaster. The validator private key is a Tendermint Key - a unique key used to sign consensus votes.
To backup everything you need to restore your validator, simply do the following
If you are using the software sign (which is the default signing method of tendermint), your Tendermint Key is located in ~/.secretd/config/priv_validator_key.json
.
Backup ~/.secretd/config/priv_validator_key.json
.
Backup ~/.secretd/data/priv_validator_state.json
.
Backup the self-delegator wallet. See the wallet section.
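A sketch of those backups (the destination directory is illustrative; in practice, move the copies to secure offline storage):

```shell
# Destination is illustrative - use offline/secure storage in practice.
mkdir -p ~/validator-backup
cp ~/.secretd/config/priv_validator_key.json ~/validator-backup/
cp ~/.secretd/data/priv_validator_state.json ~/validator-backup/
```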
Or you can use hardware to manage your Tendermint Key much more safely, such as YubiHSM2.
Monitoring is immensely important to ensure the liveness and reliability of your infrastructure. If your validator is not signing blocks, it will eventually get slashed, losing you and your delegators some of their SCRT balance. The same goes for full nodes: it is important that they are able to serve queries, as downtime limits the performance of dApps and other applications.
Monitoring is best done by dedicated software that provides both analytics and alerts. Some of these options are laid out below to help you set them up. Consider relying on more than one monitoring solution, and leverage external RPCs to secure your setup even further.
Prometheus
Grafana
Docker
PagerDuty
GoAccess
To add basic security to your node, we've provided a guide covering 2 simple tools.
Uncomplicated Firewall (UFW)
Key-based SSH authentication.
Within the #Cosmos, conversations around node security tend to start with whether or not you use backup servers, sentries, and a remote-signing key management solution. This misses the forest for the trees. While those steps are certainly important, they are *final* security steps. We should instead be discussing the first steps you take when setting up a new Tendermint node; raise the floor of security, rather than the ceiling, if you will.
This is intended to be a very basic guide on Linux security practices. If you want more in-depth information, you can read about it here.
The following topics will be covered:
1. SSH Key Setup
2. Server Configuration
3. Setting up a Basic Firewall
4. Using Local CLI Machines
When you receive your server, you will be provided a root user login and a password. You'll be inclined to log in with them right away, but we have a few steps to take first! We first want to create our SSH key, as we'll be disabling password login shortly.
An SSH (Secure Shell) key is a way to identify yourself as a user without using a password. It has 2 parts: the public key and the private key. When you create the SSH key, you give your public key to a computer you wish to log into. You can then "show" the server your private key and it will admit you automatically. This makes it far more secure than a password, as only you have access to the server via your key.
This document assumes you're using a Mac. If you need instructions for Linux or Windows, see the GitHub instructions for generating an SSH key.
Open Terminal
Generate the SSH key:
3. When you’re prompted to “Enter a file in which to save the key,” press Enter. This accepts the default file location.
4. At the prompt, type a secure passphrase. For more information, see “Working with SSH key passphrases.”
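The key-generation command referenced in step 2 is typically (the email is a placeholder used as a key comment):

```shell
ssh-keygen -t ed25519 -C "your_email@example.com"
```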
Your SSH key is now created, but we have to add it to the agent for it to be usable.
Start the ssh-agent in the background
2. Open your SSH config file
3. Add the following text block to your file
4. Add your SSH key to the ssh-agent
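The four steps above, as they look in the standard macOS flow (a sketch; the config block enables keychain integration, and paths assume the default key name):

```shell
# 1. Start the ssh-agent in the background
eval "$(ssh-agent -s)"

# 2. Open your SSH config file (create it if it doesn't exist)
touch ~/.ssh/config && open ~/.ssh/config

# 3. Add this block to ~/.ssh/config:
#      Host *
#        AddKeysToAgent yes
#        UseKeychain yes
#        IdentityFile ~/.ssh/id_ed25519

# 4. Add your SSH key to the ssh-agent
#    (on older macOS versions, use: ssh-add -K ~/.ssh/id_ed25519)
ssh-add --apple-use-keychain ~/.ssh/id_ed25519
```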
Your SSH key is now set up! This only has to happen once, so you can skip this if you need to refer back to this document.
Uncomplicated Firewall (UFW) is a program for managing a netfilter firewall designed for easy use. It uses a command-line interface (CLI) with a small number of simple commands and is configured with iptables. UFW is available by default in all Ubuntu installations after 18.04 LTS, and features tools for intrusion prevention which we will cover in this guide.
Start by checking the status of UFW.
2. Enable SSH
3. Enable p2p
This is the default p2p port for Tendermint systems, but if you’ve changed the port, you’ll need to update the ufw setting.
4. Enable UFW
5. Confirm UFW is enabled
Note that at any time you can disable ufw by doing:
Download the latest version of Node Exporter:
Unpack the downloaded archive. This will create a directory node_exporter-1.2.2.linux-amd64
, containing the executable, a readme and license file:
Copy the binary file into the directory /usr/local/bin and set the ownership to the user you created in a previous step:
Remove the leftover files of Node Exporter, as they are not needed any longer:
To run Node Exporter automatically on each boot, a Systemd service file is required. Create the following file by opening it in Nano:
Copy the following information in the service file, save it and exit Nano:
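A sketch of such a unit file, assuming the binary at /usr/local/bin/node_exporter and a service user named node_exporter created earlier:

```ini
[Unit]
Description=Prometheus Node Exporter
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```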
Reload Systemd to use the newly defined service:
Run Node Exporter by typing the following command:
Verify that the software has been started successfully:
You will see an output like this, showing the status active (running) as well as the main PID of the application:
If everything is working, enable Node Exporter to be started on each boot of the server:
Restart the aesmd.service
4. Restart secret-node.service
If you aren't seeing any blocks being produced, that likely means you don't have any active peers. To solve this:
Add seed nodes
2. Restart secret-node
You'll be tempted to add persistent_peers as well, but unless you have control over the peers, DO NOT add them. Peers change frequently and interfere with the built-in network peering protocol.
SSH keys, similarly to cryptocurrency keys, consist of public and private keys. You should store the private key on a machine you trust. The corresponding public key is what you will add to your server to secure it.
Be sure to securely store a backup of your private SSH key.
From your local machine that you plan to SSH from, generate an SSH key. This is likely going to be your laptop or desktop computer. Use the following command if you are using OSX or Linux:
Decide on a name for your key and proceed through the prompts.
Copy the contents of your public key.
Your file name will differ from the command below based on how you named your key.
Give the ssh folder the correct permissions.
chmod 700 (chmod a+rwx,g-rwx,o-rwx) sets permissions so the user or owner can read, write, and execute, while the group and others cannot read, write, or execute.
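To see what mode 700 means in practice, you can apply it to a scratch directory (the path /tmp/ssh_perm_demo is just an example) and inspect the resulting octal bits:

```shell
# create a scratch directory and apply the same mode used for ~/.ssh
mkdir -p /tmp/ssh_perm_demo
chmod 700 /tmp/ssh_perm_demo
# print the octal permission bits
stat -c '%a' /tmp/ssh_perm_demo
```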
Copy the contents of your newly generated public key.
Now log into the server that you want to protect with your new SSH key and create a copy of the pubkey.
Create a file and paste in the public key information you copied from the previous step. Be sure to save the file.
Now add the pubkey to the authorized keys list.
Once you've confirmed that you can login via the new key, you can proceed to lock down the server to only allow access via the key.
Edit sshd_config to disable password-based authentication.
Change "PasswordAuthentication yes" to "PasswordAuthentication no" and then save.
Restart ssh process for settings to take effect.
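The whole key-based login flow above can be sketched end to end (assumes OpenSSH; mykey and user@server are placeholders):

```shell
# on your local machine: generate a key pair
ssh-keygen -t ed25519 -f ~/.ssh/mykey

# copy the public key into the server's authorized_keys
ssh-copy-id -i ~/.ssh/mykey.pub user@server

# on the server: disable password logins, then restart the SSH service
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
```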
For additional security node operators may choose to secure their SSH connections with FIDO U2F hardware security devices such as YubiKey, SoloKey, or a Nitrokey. A security key ensures that SSH connections will not be possible using the private and public SSH key-pair without the security key present and activated. Even if the private key is compromised, adversaries will not be able to use it to create SSH connections without its associated password and security key.
This tutorial will go over how to set up your SSH connection with FIDO U2F using a YubiKey, but the general process should work with other FIDO U2F security devices.
For SSH secured with FIDO U2F to work both the host and server must be running SSH version 8.2 or higher. Check what version of SSH is on your local machine, and your server by running:
It does not matter if there are mismatched versions between the host machine and server; as long as they are both using version 8.2 or higher you will be able to secure your ssh connection using FIDO U2F.
SSH key-pairs with FIDO U2F authentication use 'sk' in addition to the typical commands you would expect to generate SSH key-pairs with and support both ecdsa-sk and ed25519-sk.
YubiKeys require firmware version 5.2.3 or higher to support FIDO U2F using ed25519-sk to generate SSH key-pairs. To check the firmware version of a YubiKey, connect the YubiKey to your host machine and execute the following command:
To allow your host machine to communicate with a FIDO device through USB to verify attestation and assert signatures the libsk-fido2 library must be installed.
Generate an ed25519-sk key-pair with the following command with your YubiKey connected to your host machine (NOTE: you will be prompted to touch your YubiKey to authorize SSH key-pair generation):
You can now use your new ed25519-sk key-pair to secure SSH connections to your servers. Part of the key-pair lives on the YubiKey and is used to secure the SSH connection as part of a challenge-response with the device.
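The generation step described above is a single command, run with the YubiKey plugged in (the output file name is a placeholder; you will be prompted to touch the key):

```shell
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
```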
In this section we will cover:
Logging In
Creating a new user
Disable root login
Disable password login
When you provision a new server, you will be provided a username, password, and IP address. Generally that username will be root. Let’s log in with them now in the form of ssh username@ip.
Initiate login to server
2. Type Yes
3. Enter password
You are now logged into root. However, we do NOT want this as an option, so let’s fix it.
Since we no longer want to be able to log in as root, we’ll first need to create a new user to log into.
Create a new user
You’re going to want to choose a unique username here, as the more unique it is, the harder it’ll be for a bad actor to guess. We’re going to use mellamo.
You will then be prompted to create a password and fill in information. Don’t worry about the information, but make sure your password is complicated!
2. Give them sudo privileges
sudo is the name for “master” (superuser) privileges, so we need to modify the user to add them to that group.
3. Verify user has sudo access
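The three steps above, using the example username mellamo, look like this (a sketch; all commands require root on the server):

```shell
sudo adduser mellamo           # create the user and set a password
sudo usermod -aG sudo mellamo  # add the user to the sudo group
su - mellamo                   # switch to the new user
sudo whoami                    # prints "root" if sudo access works
```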
Disabling root login takes away an easy method for hackers to get in. The easiest way of accessing remote servers or VPSs is via SSH and to block root user login under it, you need to edit the /etc/ssh/sshd_config file.
From the remote server, open /etc/ssh/sshd_config
2. Save and exit sshd_config, then restart the service.
Return to your local machine.
2. Copy your ssh key to the server
3. Confirm you can login with just your SSH key
Done! You can now log in exclusively with your SSH key.
Now that you can log in with just your ssh key, you should now disable password login.
Return to your remote server, and open /etc/ssh/sshd_config again
2. Find ChallengeResponseAuthentication and set to no:
3. Next, find PasswordAuthentication set to no too:
4. Search for UsePAM and set to no, too:
5. Save and exit sshd_config, then restart the service.
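After steps 2 through 4, the relevant lines in /etc/ssh/sshd_config should read:

```
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
```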
Congratulations! You can only login with your ssh key now. Be sure to back it up in case something happens to your machine!
Generate SSH key
Open ~/.ssh/config
Check UFW status
Confirm UFW is enabled
As Prometheus is only capable of collecting metrics, we want to extend its capabilities by adding Node Exporter, a tool that collects information about the system and exposes it for scraping.
Collectors are used to gather information about the system. By default, a set of collectors is activated; you can see the details about the default set in the Node Exporter documentation. If you want to use a specific set of collectors, you can define them in the ExecStart section of the service. Collectors are enabled by providing a --collector.<name> flag; collectors that are enabled by default can be disabled by providing a --no-collector.<name> flag.
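For example, a hypothetical ExecStart line that enables the systemd collector and disables the default wifi collector would look like:

```ini
ExecStart=/usr/local/bin/node_exporter --collector.systemd --no-collector.wifi
```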
If you get this error, your server has generally restarted erroneously. In order to fix it and get secret-node running again, you must reinstall SGX and reload the aesm.service.
NOTE: More information on how to get started using a YubiKey can be found in Yubico's documentation. You should have a general understanding of how to use a YubiKey before attempting this SSH guide.
SSH into the server
Logged into root
Create user mellamo
Testing sudo privileges
Set PermitRootLogin to “no”
Log out of server
Copy keys
Log in with SSH key
Install Grafana on our instance which queries our Prometheus server.
Enable the automatic start of Grafana by systemd:
Grafana is running now, and we can connect to it at http://your.server.ip:3000. The default user and password is admin / admin.
Now you have to create a Prometheus data source:
1. Click the Grafana logo to open the sidebar.
2. Click “Data Sources” in the sidebar.
3. Choose “Add New”.
4. Select “Prometheus” as the data source.
5. Set the Prometheus server URL (in our case: http://localhost:9090/).
6. Click “Add” to test the connection and save the new data source.
Prior to using Prometheus, it needs basic configuration, so we need to create a configuration file named prometheus.yml.
The configuration file of Prometheus is written in YAML, which strictly forbids tabs. If your file is incorrectly formatted, Prometheus will not start, so be careful when you edit it.
Open the file prometheus.yml
in a text editor:
Prometheus’ configuration file is divided into three parts: global, rule_files, and scrape_configs.
In the global part we find the general configuration of Prometheus: scrape_interval defines how often Prometheus scrapes targets, and evaluation_interval controls how often the software evaluates rules. Rules are used to create new time series and to generate alerts.
The rule_files block contains the location of any rules we want the Prometheus server to load.
The last block of the configuration file is named scrape_configs and defines which resources Prometheus monitors.
Our file should look like this example:
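A minimal prometheus.yml matching this description would look like the following (a sketch; the commented-out rule file name is illustrative):

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# rule_files:
#   - "first.rules"

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
```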
The global scrape_interval is set to 15 seconds, which is enough for most use cases.
We do not have any rule_files yet, so those lines are commented out and start with a #.
In the scrape_configs part we have defined our first exporter: Prometheus monitoring itself. As we want more precise information about the state of our Prometheus server, we reduced the scrape_interval to 5 seconds for this job. The parameters static_configs and targets determine where the exporters are running. In our case it is the same server, so we use localhost and port 9090.
As Prometheus scrapes only exporters that are defined in the scrape_configs part of the configuration file, we have to add Node Exporter to the file, as we did for Prometheus itself.
We add the following part below the configuration for scraping Prometheus:
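A job for Node Exporter, appended under scrape_configs, would look like this (a sketch; the job name is arbitrary):

```yaml
  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
```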
We override the global scrape interval again and set it to 5 seconds. As we are scraping the data from the same server Prometheus is running on, we can use localhost with the default Node Exporter port, 9100.
If you want to scrape data from a remote host, you have to replace localhost with the IP address of the remote server.
Tip: For all information about the configuration of Prometheus, you may check the configuration documentation.
Set the ownership of the file to our prometheus user:
Our Prometheus server is ready to run for the first time.
Start Prometheus directly from the command line with the following command, which executes the binary file as our prometheus user:
The server starts displaying multiple status messages and the information that the server has started:
Open your browser and go to http://IP.OF.YOUR.SERVER:9090 to access the Prometheus interface. If everything is working, end the task by pressing CTRL + C on your keyboard.
If you get an error message when you start the server, double-check your configuration file for possible YAML syntax errors. The error message will tell you what to check.
The server is working now, but it cannot yet be launched automatically at boot. To achieve this, we have to create a new systemd configuration file that will tell your OS which services it should launch automatically during the boot process.
The service file tells systemd to run Prometheus as the prometheus user and specifies the path of the configuration files.
Copy the following information in the file and save it, then exit the editor:
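A service file along these lines works (the paths are assumptions; adjust them to where you installed Prometheus and its configuration):

```ini
[Unit]
Description=Prometheus
After=network.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/

[Install]
WantedBy=multi-user.target
```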
To use the new service, reload systemd:
We enable the service so that it will be loaded automatically during boot:
Start Prometheus:
Your Prometheus server is ready to be used.
We have now installed Prometheus to monitor your instance. Prometheus provides a basic web server running on http://your.server.ip:9090 that provides access to the data it has collected.
Grafana allows you to easily visualize your monitoring results and other analytics
You will need to install docker and docker-compose.
The following instructions assume Ubuntu 20.04 on an x86-64 CPU.
Update the apt package index and install packages to allow apt to use a repository over HTTPS:
Add Docker’s official GPG key:
Set up the docker stable repository:
Install docker:
Test the installation:
Download the current stable release of Docker Compose:
Apply executable permissions to the binary:
Test the installation:
Clone the node_tooling repo and descend into the monitoring folder:
In the Prometheus folder, modify cosmos.yaml, replacing NODE_IP with the IP of your node. (If your node is on the docker host machine, use 172.17.0.1.)
Replace the default Prometheus config with the modified cosmos.yaml
From the node_tooling/monitoring directory:
Start the containers deploying the monitoring stack (Grafana + Prometheus + Node Exporter):
Login to Grafana at http://your-ip:3000 with username admin and password admin.
The containers will restart automatically after rebooting unless they are stopped manually.
Configure Nginx to format logs and set up a server block.
Open the Nginx configuration file:
Add the following log format into your http group in nginx:
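GoAccess understands the standard combined log format, so a log_format directive like the following is sufficient (the name goaccess is arbitrary):

```nginx
log_format goaccess '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```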
Warning: This logs the user's IP address directly. It's not recommended to do it in this fashion; if possible, anonymize the address as seen below.
(optional) Instead anonymize IP addresses in logs:
Configure a server block:
Test the new configuration:
Reload Nginx to apply changes:
Log rotation in Nginx is a process for managing log files to prevent them from becoming excessively large and consuming too much disk space. As Nginx continuously logs web requests, these files can grow rapidly. Without rotation, they can lead to performance issues and make log analysis more difficult. The default log rotation setting is daily, which means the logs that GoAccess can use for its reporting also cover only a single day. To increase that timeframe, do the following:
Edit log rotation configuration:
Add the following configuration; change monthly to daily or weekly if you need daily or weekly rotation of the logs.
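A monthly rotation block for the Nginx logs might look like this (a sketch; swap monthly for daily or weekly as needed):

```
/var/log/nginx/*.log {
    monthly
    rotate 12
    missingok
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```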
Apply the new rotation configuration:
Generate an HTML report:
If you wish to automate this, use crontab to generate recurring reports:
Open crontab for editing (use sudo, otherwise crontab will not have access to the log file):
Add the line to automate hourly report generation:
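For example, this entry regenerates the report at the top of every hour (the log and output paths are placeholders):

```
0 * * * * goaccess /var/log/nginx/access.log -o /var/www/html/report.html --log-format=COMBINED
```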
On the Secret Network mainnet, we delegate uscrt, where 1scrt = 1000000uscrt. Here's how you can bond tokens to a validator (i.e. delegate):
Example:
<validator-operator-address> is the operator address of the validator to which you intend to delegate. If you are running a full node, you can find this with:
Where <key-alias> is the name of the key you specified when you initialized secretd.
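Putting the pieces together, a delegation of 1 SCRT would look like this (a sketch using the standard Cosmos SDK staking subcommands; the address and alias are placeholders):

```shell
secretd tx staking delegate <validator-operator-address> 1000000uscrt --from <key-alias>
```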
While tokens are bonded, they are pooled with all the other bonded tokens in the network. Validators and delegators obtain a percentage of shares that equals their stake in this pool.
General Overview
This section only covers some of the SecretCLI commands; for all commands, please reference the SecretCLI documentation.
There is currently a 21-day unbonding period in place, during which no rewards are handed out.
If for any reason the validator misbehaves, or you just want to unbond a certain amount of tokens, use the following command.
The unbonding will be automatically completed when the unbonding period has passed.
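The unbonding command follows the same pattern as delegation (a sketch; the amount, address, and alias are placeholders):

```shell
secretd tx staking unbond <validator-operator-address> 1000000uscrt --from <key-alias>
```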
To withdraw the delegator rewards:
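A sketch of the withdrawal, using the standard Cosmos SDK distribution subcommand (placeholders as before):

```shell
secretd tx distribution withdraw-rewards <validator-operator-address> --from <key-alias>
```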
To provide endpoints to users, you can use Nginx. It is a powerful tool for providing a load-balanced endpoint that distributes the load across multiple nodes. This section gives you a rough overview of how to run multiple nodes in a cluster that you can provide for your own dApp to use.
Parameters define high level settings for staking. You can get the current values by using:
With the above command you will get the values for:
Unbonding time
Maximum numbers of validators
Coin denomination for staking
Example:
All these values are subject to updates through a governance process via ParameterChange proposals.
You can find help in Telegram
Visit the Secret Network Discord and ask in #node-discussion or #node-support for help
You can also query all of the delegations to a particular validator:
Example:
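The query takes the validator's operator address (a sketch using the standard Cosmos SDK staking query; the address is a placeholder):

```shell
secretd query staking delegations-to <validator-operator-address>
```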
A staking Pool defines the dynamic parameters of the current state. You can query them with the following command:
With the pool command you will get the values for:
Not-bonded and bonded tokens
Token supply
Current annual inflation and the block in which the last inflation was processed
Last recorded bonded shares
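The pool query itself is a single command (standard Cosmos SDK staking query):

```shell
secretd query staking pool
```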
These pages go into detail for setting up infrastructure other than full nodes and validators.
Note: Mantlemint is currently in beta. This means some of these instructions may not work as expected, or could be subject to change
Mantlemint is a fast core optimized for serving massive user queries. A mantlemint node will perform 3-4x more queries than a standard Secret Node.
Native query performance over RPC is slow and not suitable for massive query handling, due to the inefficiencies introduced by the IAVL tree. Mantlemint runs in fauxMerkleTree mode, essentially removing the IAVL inefficiencies while using the same core to compute the same module outputs.
If you are looking to serve any kind of public node accepting varying degrees of end-user queries, it is recommended that you run a mantlemint instance alongside of your RPC. While mantlemint is indeed faster at resolving queries, due to the absence of IAVL tree and native tendermint, it cannot join p2p network by itself. Rather, you would have to relay finalized blocks to mantlemint, using RPC's websocket.
Mantlemint has been adapted for Secret Network from Terra.
Superior LCD performance
With the exception of Tendermint RPC/Transactions.
Super reliable and effective LCD response cache to prevent unnecessary computation for query resolving
Fully archival; historical states are available with the ?height query parameter.
Fully synced RPC Node with Websockets available
To start a Mantlemint node, you'll need access to at least 1 running RPC node. Since Mantlemint cannot join the p2p network by itself, it depends on RPC to receive recently proposed blocks. This RPC node should also have Websockets enabled. Websockets are how Mantlemint receives new blocks after it catches up to the current block.
1TB of storage (recommended SATA or NVMe SSD)
16 GB of RAM (recommended 32 GB)
2 available CPU cores (recommended 4 cores)
Mantlemint can only be run as a full archive node. For this reason it requires a large amount of storage (as of August 2022, roughly 450GB).
a. Clone the repository https://github.com/scrtlabs/mantlemint
b. If you haven't already, install golang
c. Run go build -mod=readonly -o build/mantlemint ./sync.go
If you don't want to compile it yourself, you can just download the mantlemint executable built for Ubuntu 20.04.
Install SGX the same way you would for a node, as described in the Node Setup section
secretd package
To allow Mantlemint to sync blocks and run queries that contain encrypted data, we will need to register it.
This directory should be separate from other mantlemint instances or secretd instances
Create a config/app.toml file in your mantlemint directory, and populate it with a similar app.toml file to what a secretd node uses.
Example app.toml file:
To start mantlemint, we highly recommend using a snapshot. The earliest one starts from block 3340000. This will save you from having to switch out mantlemint binaries to account for network upgrades.
To download and unpack a snapshot, use the following command
Make sure the files are unpacked in your chosen mantlemint directory
Pro Tip: Currently Mantlemint is fully archival. That means snapshots are really large. This command will download and unpack the snapshot without having to use twice the storage amount
Now we are ready to run Mantlemint. It's slightly awkward to run as you have to set multiple environment variables, but they're fairly straightforward. An example run command would be -
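As an illustration only, the upstream Terra mantlemint README configures the process through environment variables roughly like the following; the variable names here are taken from that project and may differ in the Secret fork, so check the scrtlabs/mantlemint README for the authoritative list:

```shell
MANTLEMINT_HOME=/path/to/mantlemint-home \
CHAIN_ID=secret-4 \
RPC_ENDPOINTS=http://<rpc-node>:26657 \
WS_ENDPOINTS=ws://<rpc-node>:26657/websocket \
MANTLEMINT_DB=mantlemint \
INDEXER_DB=indexer \
./build/mantlemint
```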
That's it!
A common problem for node runners is that stale nodes (nodes stuck at a certain block height that no longer keep up with the chain tip) provide a terrible experience to users. To mitigate this, we can add some extra monitoring tools that dynamically add or remove nodes from the cluster.
The provided software gives you an entry level solution for this. As everything is written in Python, you can adjust this to your API setup.
To install, please do the following:
Ensure you have Python installed. If not, download and install Python from python.org. You'll also need pip, Python's package manager, to install required libraries. If you're using a Linux-based system, ensure you have NGINX installed and properly configured.
Since we edit the nginx config directly, we need to give python3 sudo rights. Be aware of this when you follow this tutorial.
To clone the AutoHealBot repository, use the following steps:
Install Git: Ensure Git is installed on your system. If not, install it from git-scm.com.
Clone the Repository: Open a terminal and run:
Navigate to the Directory: After cloning, change to the repository directory:
This clones the entire repository to your local machine, allowing you to access all files and resources. To proceed with the tutorial, follow additional setup instructions provided in the repository's README or other documentation.
When installing, make sure to install this as root with sudo, otherwise the script will not find the libraries later on.
To configure your environment variables, copy over the .env.example file in the repository.
Replace the placeholders with actual values:
NGINX_CONFIG_PATH: The path to your NGINX configuration file.
BASE_RATE and NODE_MULTIPLIER: Adjust as needed.
RPC_PORT, GRPC_PORT, LCD_PORT: Set to your specific ports.
FILE_PATH: Path to the text file with node URLs.
TIME_BEFORE_FALLEN_BEHIND: Maximum allowed time before a node is considered unhealthy.
UPDATE_TIME: Time between health checks.
Create a text file with the node URLs. For example, create nodes.txt with one URL per line; make sure to include the RPC port for each node here as well:
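A hypothetical nodes.txt, assuming the default Tendermint RPC port 26657 (the addresses are placeholders):

```
http://10.0.0.11:26657
http://10.0.0.12:26657
http://10.0.0.13:26657
```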
In the AutoHealBot script, "upstream blocks" refer to sections in the NGINX configuration that specify which backend servers handle different types of traffic. This setup divides the backend nodes into separate streams: RPC, gRPC, and LCD. The script checks the health of these nodes and updates the corresponding upstream blocks to reflect the healthy nodes for each stream. It ensures that traffic is routed to servers that are online and functional.
As a reference, the upstream blocks are defined as:
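As a sketch, such upstream blocks might look like this in the NGINX config (the block names, addresses, and ports are illustrative; gRPC 9090 and LCD 1317 are common Cosmos defaults):

```nginx
upstream rpc_backend {
    server 10.0.0.11:26657;
    server 10.0.0.12:26657;
}

upstream grpc_backend {
    server 10.0.0.11:9090;
    server 10.0.0.12:9090;
}

upstream lcd_backend {
    server 10.0.0.11:1317;
    server 10.0.0.12:1317;
}
```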
Run the script to start the asynchronous health checks and NGINX updates:
Troubleshooting
Environment Variables Not Loaded: Ensure your .env file is in the same directory as the script, or specify its path explicitly with dotenv_path.
NGINX Not Reloading: Check if you have the necessary permissions to reload NGINX and ensure systemctl or other command-line utilities are in your PATH.
With this setup, the script will run asynchronously, periodically checking node health, updating the NGINX configuration, and reloading the NGINX service as needed.
It's possible to run multiple Secret Nodes on the same Secret-compatible server, and it is fairly easy to do so.
There are 2 important things that must be done for each node:
A unique system file is necessary for each node
A unique sgx_secrets path is necessary for each node
All Secret Nodes should have their own user to simplify administration
It's easiest to do this with auto-register
, but it's possible manual as well
Each node must be registered
This process assumes you already have a full node running. If you do not, proceed by Setting Up a Full Node, then returning.
This isn't necessary, but will help with keeping nodes organized. From here on, the assumption is that the username is secret, but it can be anything of your choosing.
This will make it so you don't need to install secretd multiple times, and can therefore upgrade all nodes at the same time.
On the new user, execute steps 1 and 2 of Setting Up a Full Node. You should now have a .secretd directory on the new user, and the correct genesis file.
The variables SCRT_ENCLAVE_DIR and SCRT_SGX_STORAGE need to be customized for each user/node. These variables are NOT the same as the ones in step 3 of setting up a full node.
In order for these nodes to work in tandem, they cannot use the same ports. I recommend this tool to help automate changing them.
Which will then create a command that looks like this:
Note that this service file has two environment variables set, as well as a --home directory. These will be unique to your user.
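For instance, the Service section of a second node's unit file might contain lines like these (all paths are illustrative, for a hypothetical user named secret2):

```ini
[Service]
User=secret2
Environment=SCRT_ENCLAVE_DIR=/usr/lib
Environment=SCRT_SGX_STORAGE=/home/secret2/.sgx_secrets
ExecStart=/usr/local/bin/secretd start --home /home/secret2/.secretd
```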
From here, you can return to step 9 of setting up a full node. Note that the service file name is different. The following is what the system file commands would look like.
At this point, all unique behavior for additional nodes is complete!