It is CRUCIAL to back up your validator's private key. It is the only way to restore your validator in the event of a disaster. The validator private key is a Tendermint key, a unique key used to sign consensus votes.
To back up everything you need to restore your validator, do the following:
If you are using the software signer (the default signing method of Tendermint), your Tendermint key is located in ~/.secretd/config/priv_validator_key.json.
Back up ~/.secretd/config/priv_validator_key.json.
Back up ~/.secretd/data/priv_validator_state.json.
Back up the self-delegator wallet. See the wallet section.
Alternatively, you can use hardware such as a YubiHSM2 to manage your Tendermint key much more safely.
Ensure you back up your validator before you migrate it. Do not forget!
If you don't have the mnemonic saved, you can back up the wallet key with:
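A sketch of that export command, assuming the key is named mykey (as the file name below suggests):

```bash
secretd keys export mykey
```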
This prints the private key to stderr; you can then paste it into the file mykey.backup.
To check whether the new full node has finished catching up:
Only continue if catching_up is false.
To prevent double signing, you should stop the validator node before stopping the new full node, ensuring the new node is at a greater block height than the validator node.
Please read about the dangers in running a validator.
The validator should start missing blocks at this point. This is the desired behavior!
On the validator node, the file is ~/.secretd/config/priv_validator_key.json.
You can copy it manually, or, for example, copy the file to the new machine using ssh:
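A hedged sketch using scp; the user and host are placeholders for your new machine:

```bash
scp ~/.secretd/config/priv_validator_key.json <user>@<new-node-ip>:~/.secretd/config/priv_validator_key.json
```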
After being copied, the key (priv_validator_key.json) should then be removed from the old node's config directory to prevent double-signing if the node were to start back up.
The new node should start signing blocks once caught up.
If you see this error, your server has generally restarted unexpectedly. To fix it and get secret-node running again, you must reinstall SGX and reload the aesm service (source: Discord conversation).
1. Re-install SGX
2. Restart the aesmd.service
3. Restart secret-node.service
If you aren't seeing any blocks being produced, that likely means you don't have any active peers. To solve this:
1. Add seed nodes
2. Restart secret-node
You'll be tempted to add persistent_peers as well, but unless you have control over the peers, DO NOT add them. Peers change frequently and interfere with the built-in network peering protocol.
SSH keys, similarly to cryptocurrency keys, consist of public and private keys. You should store the private key on a machine you trust. The corresponding public key is what you will add to your server to secure it.
Be sure to securely store a backup of your private SSH key.
From your local machine that you plan to SSH from, generate an SSH key. This is likely going to be your laptop or desktop computer. Use the following command if you are using OSX or Linux:
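A common way to do this; the key type and comment are illustrative, adjust to your preference:

```bash
ssh-keygen -t ed25519 -C "your_email@example.com"
```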
Decide on a name for your key and proceed through the prompts.
Copy the contents of your public key.
Your file name will differ from the command below based on how you named your key.
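For example, assuming the default ed25519 file name:

```bash
cat ~/.ssh/id_ed25519.pub
```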
Give the ssh folder the correct permissions.
Chmod 700 (chmod a+rwx,g-rwx,o-rwx) sets permissions so the user or owner can read, write, and execute, while the group and others cannot read, write, or execute.
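For example:

```bash
chmod 700 ~/.ssh
```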
Copy the contents of your newly generated public key.
Now log into the server that you want to protect with your new SSH key and create a copy of the pubkey.
Create a file and paste in the public key information you copied from the previous step. Be sure to save the file.
Now add the pubkey to the authorized keys list.
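A minimal sketch, assuming you saved the pubkey to a file named pubkey.txt in the previous step:

```bash
mkdir -p ~/.ssh
cat pubkey.txt >> ~/.ssh/authorized_keys
```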
Once you've confirmed that you can login via the new key, you can proceed to lock down the server to only allow access via the key.
Edit sshd_config to disable password-based authentication.
Change "PasswordAuthentication yes" to "PasswordAuthentication no", then save.
Restart ssh process for settings to take effect.
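For example (the service may be named ssh or sshd depending on your distribution):

```bash
sudo systemctl restart sshd
```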
For additional security node operators may choose to secure their SSH connections with FIDO U2F hardware security devices such as YubiKey, SoloKey, or a Nitrokey. A security key ensures that SSH connections will not be possible using the private and public SSH key-pair without the security key present and activated. Even if the private key is compromised, adversaries will not be able to use it to create SSH connections without its associated password and security key.
This tutorial will go over how to set up your SSH connection with FIDO U2F using a YubiKey, but the general process should work with other FIDO U2F security devices.
NOTE: More information on how to get started using a YubiKey can be found HERE. You should have a general understanding of how to use a YubiKey before attempting this ssh guide.
For SSH secured with FIDO U2F to work, both the host and server must be running SSH version 8.2 or higher. Check which version of SSH is on your local machine and your server by running:
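For example:

```bash
ssh -V
```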
It does not matter if there are mismatched versions between the host machine and server; as long as they are both using version 8.2 or higher you will be able to secure your ssh connection using FIDO U2F.
SSH key-pairs with FIDO U2F authentication add 'sk' to the usual key types you would generate SSH key-pairs with, supporting both ecdsa-sk and ed25519-sk.
YubiKeys require firmware version 5.2.3 or higher to support FIDO U2F using ed25519-sk to generate SSH key-pairs. To check the firmware version of a YubiKey, connect the YubiKey to your host machine and execute the following command:
To allow your host machine to communicate with a FIDO device over USB to verify attestation and assert signatures, the libsk-fido2 library must be installed.
Generate an ed25519-sk key-pair with the following command with your YubiKey connected to your host machine (NOTE: you will be prompted to touch your YubiKey to authorize SSH key-pair generation):
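A sketch of the command; the comment is illustrative:

```bash
ssh-keygen -t ed25519-sk -C "your_email@example.com"
```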
You can now use your new ed25519-sk key-pair to secure SSH connections with your servers. Part of the key-pair lives on the YubiKey and is used to secure the SSH connection as part of a challenge-response between the devices.
Never save your validator's keys on the remote server. You should use your local machine, keep your keys there, and broadcast to the remote server.
In order to use a local CLI, you must:
Install the daemon on your local machine by going through the normal installation process
Set the daemon's config to the remote server
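A hedged sketch of pointing the local CLI at a remote node, assuming the default RPC port 26657:

```bash
secretd config node tcp://<remote-server-ip>:26657
```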
Overview
A coordinated group of validators (maximum: 80) secures the Secret Network. Each validator uses Tendermint, a Byzantine fault-tolerant Delegated Proof-of-Stake (DPoS) consensus engine. Each validator stakes their own SCRT coins and coins from delegators to earn rewards by successfully running the protocol, verifying transactions, and proposing blocks. If they fail to maintain a consistently available (downtime) and honest (double-signing) node, slashing penalties occur, resulting in coins being deducted from their account.
All SCRT holders can become a Secret Network validator or delegator and participate in staking and governance processes.
For information on running a node, delegating, staking, and voting, please see the validator guide below and visit our governance documentation.
Here is a list of Hardware Compliance for running a validator.
For detailed information on how to set up and run a validator, see the Becoming A Validator section.
Slashing for downtime is based on actual block times, NOT theoretical block times, and is governed by the SignedBlocksWindow and MinSignedPerWindow network parameters. Validators signing less than MinSignedPerWindow blocks out of every SignedBlocksWindow will experience a downtime slash.
Parameters: 11,250 blocks out of every 22,500-block window
For a block time of 6.8 seconds, it roughly translates to being up for less than 21.25 hours out of every 42.5-hour window.
For a block time of 6.4 seconds, it roughly translates to being up for less than 20 hours out of every 40-hour window.
Slashing of 0.01% of the validator and its delegators' staked SCRT
Jailing of validators for 10 minutes. Validators do not earn block rewards for the jailing period and must manually unjail their node with:
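A sketch of the unjail command, assuming <key-alias> is your validator's key:

```bash
secretd tx slashing unjail --from <key-alias>
```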
Slashing for double-signing occurs when a validator signs the same block twice. This happens most commonly during migration, when the original validator node is not shut off properly and its priv_validator_key.json is running on two machines at the same time. The best way to avoid double signing is by using a remote signer such as the Tendermint Key Management System (TMKMS) or Horcrux.
A validator signs the same block height twice
Slashing of 5% of the validator and its delegators' staked SCRT
Jailing forever (tombstoned) of the validator node
A tombstoned validator cannot earn block rewards anymore, and its delegators must re-delegate their SCRT to another validator
The Secret Network uses the same slashing penalties as Cosmos. Burning SCRT through slashing validators for downtime and/or double-signing discourages poor practices and dishonesty by protocol-recognized actors. If slashing occurs frequently, a validator may lose their ability to vote on future blocks for some time.
Monitoring is immensely important to ensure the liveness and reliability of your infrastructure. If your validator is not signing blocks, it will eventually get slashed, losing you and your delegators some of their SCRT balance. The same goes for full nodes: it is important that they are able to serve queries, because if they are down, the performance of dApps and other applications will be limited.
Monitoring is best done by a dedicated piece of software that provides both analytics and alerts. Some of those options are laid out below to help you set them up. Consider relying on more than one monitoring solution and leveraging external RPCs to secure your setup even further.
Prometheus
Grafana
Docker
PagerDuty
Goaccess
To add basic security to your node, we've provided a guide covering 2 simple tools.
Uncomplicated Firewall (UFW)
Key-based SSH authentication.
Within the #Cosmos, conversations around node security tend to start with whether or not you use backup servers, sentries, and a remote-signing key management solution. This misses the forest for the trees. While those steps are certainly important, they are *final* security steps. We should instead be discussing the first steps you take when setting up a new Tendermint node; raise the floor of security, rather than the ceiling, if you will.
This is intended to be a very basic guide on Linux security practices. If you want more in-depth information, you can read about it here.
The following topics will be covered:
1. SSH Key Setup
2. Server Configuration
3. Setting up a Basic Firewall
4. Using Local CLI Machines
When you receive your server, you will be provided a root user login and a password. You'll be inclined to log in with that login and password, but we have steps to complete before we do that! We first want to create our SSH key, as we'll be disabling password login shortly.
An SSH (Secure Shell) key is a way to identify yourself as a user without using a password. It has two parts: the pubkey and the private key. When you create the SSH key, you give your pubkey to a computer you wish to log into. You can then "show" the server your private key and it will admit you automatically. This makes it far more secure than a password, as only you have access to the server via your key.
This document assumes you're using a Mac. If you need instructions for Linux or Windows, see the GitHub instructions for generating an SSH key.
1. Open Terminal
2. Generate the SSH key:
3. When you're prompted to "Enter a file in which to save the key," press Enter. This accepts the default file location.
4. At the prompt, type a secure passphrase. For more information, see "Working with SSH key passphrases."
Your SSH key is now created, but we have to add it to the agent for it to be usable.
1. Start the ssh-agent in the background
2. Open your SSH config file
3. Add the following text block to your file
4. Add your SSH key to the ssh-agent (these four steps are sketched below)
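A sketch of those four steps on macOS, assuming the default key location ~/.ssh/id_ed25519; older macOS versions use ssh-add -K instead of --apple-use-keychain:

```bash
# 1. Start the ssh-agent in the background
eval "$(ssh-agent -s)"

# 2. Open (or create) your SSH config file
touch ~/.ssh/config && open ~/.ssh/config

# 3. Text block to add to ~/.ssh/config:
#    Host *
#      AddKeysToAgent yes
#      UseKeychain yes
#      IdentityFile ~/.ssh/id_ed25519

# 4. Add your SSH key to the ssh-agent
ssh-add --apple-use-keychain ~/.ssh/id_ed25519
```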
Your SSH key is now set up! This only has to happen once, so you can skip this if you need to refer back to this document.
Uncomplicated Firewall (UFW) is a program for managing a netfilter firewall designed for easy use. It uses a command-line interface (CLI) with a small number of simple commands and is configured with iptables. UFW is available by default in all Ubuntu installations after 18.04 LTS, and features tools for intrusion prevention which we will cover in this guide.
1. Start by checking the status of UFW.
2. Enable SSH
3. Enable p2p
This is the default p2p port for Tendermint systems, but if you've changed the port, you'll need to update the ufw setting.
4. Enable UFW
5. Confirm UFW is enabled
Note that at any time you can disable ufw by doing:
In this section we will cover:
Logging In
Creating a new user
Disabling root login
Disabling password login
When you provision a new server, you will be provided a username, password, and IP address. Generally that username will be root. Let's log in with them now in the form of ssh username@ip.
Initiate login to server
1. SSH into the server
2. Type yes
3. Enter password
Logged into root
You are now logged into root. However, we do NOT want this as an option, so let’s fix it.
Since we no longer want to be able to log in as root, we’ll first need to create a new user to log into.
1. Create a new user
You're going to want to choose a unique username here, as the more unique it is, the harder it'll be for a bad actor to guess. We're going to use mellamo.
You will then be prompted to create a password and fill in information. Don’t worry about the information, but make sure your password is complicated!
2. Give them sudo privileges
sudo grants administrator ("superuser") privileges, so we need to modify the user to add them to that group.
3. Verify user has sudo access (the commands for these steps are sketched below)
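A sketch of these three steps, using the example username mellamo:

```bash
# 1. Create the new user
sudo adduser mellamo

# 2. Add the user to the sudo group
sudo usermod -aG sudo mellamo

# 3. Switch to the new user and verify sudo access (should print "root")
su - mellamo
sudo whoami
```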
Disabling root login takes away an easy method for hackers to get in. The easiest way of accessing remote servers or VPSs is via SSH, and to block root user login over it, you need to edit the /etc/ssh/sshd_config file.
1. From the remote server, open /etc/ssh/sshd_config and set PermitRootLogin to "no"
2. Save and exit sshd_config, then restart the service.
1. Return to your local machine.
2. Copy your ssh key to the server
3. Confirm you can log in with just your SSH key (see the sketch below)
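A sketch of steps 2 and 3, assuming the key file and username from earlier:

```bash
ssh-copy-id -i ~/.ssh/id_ed25519.pub mellamo@<server-ip>
ssh mellamo@<server-ip>
```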
Done! You can now log in exclusively with your SSH key.
Now that you can log in with just your SSH key, you should disable password login.
1. Return to your remote server and open /etc/ssh/sshd_config again
2. Find ChallengeResponseAuthentication and set it to no
3. Next, find PasswordAuthentication and set it to no too
4. Search for UsePAM and set it to no as well
5. Save and exit sshd_config, then restart the service (see the sketch below).
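A sketch of the resulting settings and the restart (the service may be named ssh on some distributions):

```bash
# In /etc/ssh/sshd_config:
#   ChallengeResponseAuthentication no
#   PasswordAuthentication no
#   UsePAM no

sudo systemctl restart sshd
```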
Congratulations! You can only log in with your SSH key now. Be sure to back it up in case something happens to your machine!
Uncomplicated Firewall (UFW) is a program for managing a netfilter firewall designed for easy use. It uses a command-line interface (CLI) with a small number of simple commands, and is configured with iptables. UFW is available by default in all Ubuntu installations after 18.04 LTS, and features tools for intrusion prevention which we will cover in this guide.
Start by checking the status of UFW.
Then proceed to configure your firewall with the following options, preferably in this order.
The order is important because UFW executes the instructions given to it in the order they are given, so putting the most important and specific rules first is a good security practice. You can insert UFW rules at any position you want to by using the following syntax (do not execute the following command when setting up your node security):
The example command above would be placed in the first position (instead of the last) of the UFW hierarchy and deny a specific IP address from accessing the server.
This sets the default to allow outgoing connections unless specified they should not be allowed.
This sets the default to deny incoming connections unless specified they should be allowed.
This allows SSH connections by the firewall.
This limits SSH login attempts on the machine. The default is to limit SSH connections from a specific IP address if it attempts 6 or more connections within 30 seconds.
Allow 26656 for a p2p networking port to connect with the Tendermint network; unless you manually specified a different port.
Allow 1317 if you are running a public LCD endpoint from this node. Otherwise, you can skip this.
Allow 26657 if you are running a public RPC endpoint from this node. Otherwise, you can skip this.
This enables the firewall you just configured.
At any point in time you can disable your UFW firewall by running the following command.
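A sketch of the full command sequence described above (the deny rule uses a placeholder address and, as noted, is only an example of rule insertion):

```bash
sudo ufw status

# example only — inserting a deny rule at the first position
sudo ufw insert 1 deny from <ip-address>

sudo ufw default allow outgoing
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw limit ssh/tcp
sudo ufw allow 26656       # Tendermint p2p
sudo ufw allow 1317        # only if running a public LCD endpoint
sudo ufw allow 26657       # only if running a public RPC endpoint
sudo ufw enable

# disable the firewall at any time
sudo ufw disable
```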
Prometheus is a flexible monitoring solution that has been in development since 2012. The software stores all its data in a time series database and offers a multi-dimensional data model and a powerful query language to generate reports of the monitored resources.
This tutorial makes no assumptions about previous knowledge, other than:
You are comfortable with a Linux operating system, specifically Ubuntu 20.04
You are comfortable being able to ssh into your node, as all operations will be done from the command line
As your Prometheus is only capable of collecting metrics, we want to extend its capabilities by adding Node Exporter, a tool that collects information about the system (such as CPU, memory, disk, and network usage) and exposes it for scraping.
Download the latest version of Node Exporter:
Unpack the downloaded archive. This will create a directory node_exporter-1.2.2.linux-amd64, containing the executable, a readme and license file:
Copy the binary file into the directory /usr/local/bin and set the ownership to the user you created previously:
Remove the leftover files of Node Exporter, as they are not needed any longer:
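A sketch of the download, unpack, install, and cleanup steps, assuming Node Exporter 1.2.2 and a node_exporter user created earlier:

```bash
wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
tar xvfz node_exporter-1.2.2.linux-amd64.tar.gz
sudo cp node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
rm -rf node_exporter-1.2.2.linux-amd64 node_exporter-1.2.2.linux-amd64.tar.gz
```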
To run Node Exporter automatically on each boot, a Systemd service file is required. Create the following file by opening it in Nano:
Copy the following information in the service file, save it and exit Nano:
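A minimal service file, assuming the node_exporter user and the binary path from above (saved, for example, as /etc/systemd/system/node_exporter.service):

```ini
[Unit]
Description=Node Exporter
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```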
Reload Systemd to use the newly defined service:
Run Node Exporter by typing the following command:
Verify that the software has been started successfully:
You will see an output like this, showing you the status active (running) as well as the main PID of the application:
If everything is working, enable Node Exporter to be started on each boot of the server:
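A sketch of those commands:

```bash
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl status node_exporter
sudo systemctl enable node_exporter
```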
You will need to create new users for running Prometheus securely. This can be done by doing:
Create the directories for storing the Prometheus binaries and its config files:
Set the ownership of these directories to our prometheus user, to make sure that Prometheus can access these folders:
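A sketch of the user and directory setup, assuming the usernames prometheus and node_exporter:

```bash
sudo useradd --no-create-home --shell /bin/false prometheus
sudo useradd --no-create-home --shell /bin/false node_exporter

sudo mkdir /etc/prometheus /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus /var/lib/prometheus
```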
Prior to using Prometheus, it needs basic configuration. Thus, we need to create a configuration file named prometheus.yml
The configuration file of Prometheus is written in YAML, which strictly forbids the use of tabs. If your file is incorrectly formatted, Prometheus will not start. Be careful when you edit it.
Open the file prometheus.yml in a text editor:
Prometheus' configuration file is divided into three parts: global, rule_files, and scrape_configs.
In the global part we can find the general configuration of Prometheus: scrape_interval defines how often Prometheus scrapes targets, evaluation_interval controls how often the software will evaluate rules. Rules are used to create new time series and for the generation of alerts.
The rule_files block contains information about the location of any rules we want the Prometheus server to load.
The last block of the configuration file is named scrape_configs and contains the information about which resources Prometheus monitors.
Our file should look like this example:
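A minimal prometheus.yml consistent with the description above:

```yaml
global:
  scrape_interval: 15s

# rule_files:
#   - "first.rules"
#   - "second.rules"

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
```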
The global scrape_interval is set to 15 seconds, which is enough for most use cases.
We do not have any rule_files yet, so the lines are commented out and start with a #.
In the scrape_configs part we have defined our first exporter. It is Prometheus that monitors itself. As we want to have more precise information about the state of our Prometheus server, we reduced the scrape_interval to 5 seconds for this job. The parameters static_configs and targets determine where the exporters are running. In our case it is the same server, so we use localhost and the port 9090.
As Prometheus scrapes only exporters that are defined in the scrape_configs part of the configuration file, we have to add Node Exporter to the file, as we did for Prometheus itself.
We add the following part below the configuration for scraping Prometheus:
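A sketch of the added job, matching the description that follows:

```yaml
  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
```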
Overwrite the global scrape interval again and set it to 5 seconds. As we are scraping the data from the same server Prometheus is running on, we can use localhost with the default port of Node Exporter: 9100.
If you want to scrape data from a remote host, you have to replace localhost with the IP address of the remote server.
Set the ownership of the file to our prometheus user:
Our Prometheus server is ready to run for the first time.
Start Prometheus directly from the command line with the following command, which executes the binary file as our prometheus user:
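A sketch of that command, assuming the paths used earlier in this guide:

```bash
sudo -u prometheus /usr/local/bin/prometheus \
  --config.file /etc/prometheus/prometheus.yml \
  --storage.tsdb.path /var/lib/prometheus/ \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries
```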
The server starts displaying multiple status messages and the information that the server has started:
Open your browser and go to http://IP.OF.YOUR.SERVER:9090 to access the Prometheus interface. If everything is working, end the task by pressing CTRL + C on your keyboard.
If you get an error message when you start the server, double-check your configuration file for possible YAML syntax errors. The error message will tell you what to check.
The server is working now, but it cannot yet be launched automatically at boot. To achieve this, we have to create a new systemd configuration file that will tell your OS which services it should launch automatically during the boot process.
The service file tells systemd to run Prometheus as the prometheus user and specifies the path of the configuration files.
Copy the following information in the file and save it, then exit the editor:
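A minimal unit file consistent with that description (saved, for example, as /etc/systemd/system/prometheus.service):

```ini
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file /etc/prometheus/prometheus.yml \
  --storage.tsdb.path /var/lib/prometheus/ \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
```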
To use the new service, reload systemd
:
We enable the service so that it will be loaded automatically during boot:
Start Prometheus:
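A sketch of those three commands:

```bash
sudo systemctl daemon-reload
sudo systemctl enable prometheus
sudo systemctl start prometheus
```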
Your Prometheus server is ready to be used.
We have now installed Prometheus to monitor your instance. Prometheus provides a basic web server running on http://your.server.ip:9090 that provides access to the data collected by the software.
Download and unpack the latest release of Prometheus:
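A sketch of the download and unpack, using 2.30.0 as an example version (substitute the latest release):

```bash
wget https://github.com/prometheus/prometheus/releases/download/v2.30.0/prometheus-2.30.0.linux-amd64.tar.gz
tar xvfz prometheus-2.30.0.linux-amd64.tar.gz
```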
The following two binaries are in the directory:
prometheus - the main Prometheus binary file
promtool
The following two folders (which contain the web interface, configuration file examples and the license) are in the directory:
consoles
console_libraries
Copy the binary files into the /usr/local/bin/ directory:
Set the ownership of these files to the prometheus user previously created:
Copy the consoles and console_libraries directories to /etc/prometheus:
Set the ownership of the two folders, as well as of all files that they contain, to our prometheus user:
In our home folder, remove the source files that are not needed anymore:
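A sketch of the copy, ownership, and cleanup steps described above, assuming the example version 2.30.0:

```bash
sudo cp prometheus-2.30.0.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-2.30.0.linux-amd64/promtool /usr/local/bin/
sudo chown prometheus:prometheus /usr/local/bin/prometheus /usr/local/bin/promtool

sudo cp -r prometheus-2.30.0.linux-amd64/consoles /etc/prometheus
sudo cp -r prometheus-2.30.0.linux-amd64/console_libraries /etc/prometheus
sudo chown -R prometheus:prometheus /etc/prometheus/consoles /etc/prometheus/console_libraries

rm -rf prometheus-2.30.0.linux-amd64 prometheus-2.30.0.linux-amd64.tar.gz
```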
Collectors are used to gather information about the system. By default a set of collectors is activated. You can see the details about the default set in the Node Exporter documentation. If you want to use a specific set of collectors, you can define them in the ExecStart section of the service. Collectors are enabled by providing a --collector.<name> flag. Collectors that are enabled by default can be disabled by providing a --no-collector.<name> flag.
Tip: For all information about the configuration of Prometheus, you may check the Prometheus documentation.
From here, you're going to want to set up alerts for if something happens with your node, which will be a follow-up document.
This is largely just a copy of Scaleway's setup, but updated and customized for Secret Network.
Clone the node_tooling repo and descend into the monitoring folder:
In the Prometheus folder, modify cosmos.yaml, replacing NODE_IP with the IP of your node. (If your node is on the docker host machine, use 172.17.0.1.)
Replace the default Prometheus config with the modified cosmos.yaml
You will need to install docker and docker-compose.
The following instructions assume Ubuntu 20.04 on an x86-64 CPU.
Update the apt package index and install packages to allow apt to use a repository over HTTPS:
Add Docker's official GPG key:
Setup the docker stable repository:
Install docker:
Test the installation:
Download the current stable release of Docker Compose:
Apply executable permissions to the binary:
Test the installation:
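A sketch of the full sequence, following the official Docker instructions for Ubuntu 20.04; the Docker Compose version (1.29.2) is only an example:

```bash
# Docker
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo docker run hello-world

# Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```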
Docker and Docker Compose will allow you to run the required monitoring applications with a few commands. These instructions will run the following:
Grafana on port 3000: An open source interactive analytics dashboard.
Prometheus on port 9090: An open source metric collector.
Node Exporter on port 9100: An open source hardware metric exporter.
The docker images expose the following ports:
3000 - Grafana. Your main dashboard. Default login is admin/admin.
9090 - Prometheus. Access to this port should be restricted.
9100 - Node Exporter. Access to this port should be restricted.
Your secret node metrics on port 26660 should also be restricted.
If you followed the basic security guide, these ports are already restricted. You will need to allow the grafana port:
sudo ufw allow 3000
You can also allow access from a specific IP if desired:
sudo ufw allow from 123.123.123.123 to any port 3000
Install Grafana on our instance which queries our Prometheus server.
Enable the automatic start of Grafana by systemd:
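A sketch of the install and systemd setup, using Grafana's OSS APT repository (the repository details may change over time):

```bash
sudo apt-get install -y apt-transport-https software-properties-common wget
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install grafana

sudo systemctl daemon-reload
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
```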
Grafana is running now, and we can connect to it at http://your.server.ip:3000. The default user and password is admin / admin.
Now you have to create a Prometheus data source:
Click the Grafana logo to open the sidebar.
Click “Data Sources” in the sidebar.
Choose “Add New”.
Select “Prometheus” as the data source
Set the Prometheus server URL (in our case: http://localhost:9090/)
Click “Add” to test the connection and to save the new data source
Finally, we're going to install a basic dashboard for Cosmos SDKs. For further reference in these steps, see: https://github.com/zhangyelong/cosmos-dashboard
Append a job under the scrape_configs section of your prometheus.yml:
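A sketch of such a job, assuming the job name secret-node and Tendermint metrics exposed on port 26660:

```yaml
  - job_name: 'secret-node'
    static_configs:
      - targets: ['localhost:26660']
```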
Set chain-id to secret-3
You're done!
From the node_tooling/monitoring directory:
GoAccess is a powerful tool when it comes to providing usage statistics for your endpoints.
This tutorial will guide you through configuring Nginx for logging, anonymizing logs, monitoring web traffic with GoAccess, and setting up log rotation for Nginx logs.
This guide is intended for intermediate users who are familiar with Linux, Nginx, and using the command-line interface.
The dashboard for Cosmos SDK nodes is pre-installed. To use it:
Enable Tendermint metrics in your secret-node:
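Tendermint metrics are enabled in ~/.secretd/config/config.toml; a minimal sketch of the relevant settings:

```toml
[instrumentation]
prometheus = true
prometheus_listen_addr = ":26660"
```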
After restarting your node, you should be able to access the Tendermint metrics (default port is 26660):
If you did not replace NODE_IP with the IP of your node in the Prometheus config, do so now. If your node is on the docker host machine, use 172.17.0.1.
Login to Grafana and open the Cosmos Dashboard from the Dashboards page.
Set the chain-id to secret-3
Start the containers deploying the monitoring stack (Grafana + Prometheus + Node Exporter):
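A sketch, assuming the stack is defined by the docker-compose file in the node_tooling/monitoring directory:

```bash
docker-compose up -d
```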
Login to Grafana at http://your-ip:3000 with username admin and password admin.
The containers will restart automatically after rebooting unless they are stopped manually.
After restarting your node, you should be able to access the Tendermint metrics (default port is 26660):
Copy and paste the dashboard ID 11036 OR the contents of the cosmos-dashboard JSON file, then click on Load to complete importing.
To withdraw the delegator rewards:
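A sketch of the command, assuming <key-alias> is the delegator's key:

```bash
secretd tx distribution withdraw-rewards <validator-operator-address> --from <key-alias>
```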
General Overview
This section only covers some of the SecretCLI commands; for all commands, please reference the SecretCLI documentation instead.
Configure Nginx to format logs and set up a server block.
Open the Nginx configuration file:
Add the following log format into your http group in nginx:
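A sketch of such a log format; the name apiusage is only an example:

```nginx
# inside the http { } block of /etc/nginx/nginx.conf
log_format apiusage '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```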
Warning: This logs the user's IP address directly. It's not recommended to do it in this fashion; if possible, anonymize the address as shown below.
(Optional) Instead, anonymize IP addresses in the logs:
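A common anonymization approach, as a sketch (the format name anonymized is an example) that zeroes the last IPv4 octet before logging:

```nginx
map $remote_addr $remote_addr_anon {
    ~(?P<ip>\d+\.\d+\.\d+)\.    $ip.0;
    ~(?P<ip>[^:]+:[^:]+):       $ip::;
    default                     0.0.0.0;
}

log_format anonymized '$remote_addr_anon - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent"';
```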
Configure a server block:
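A minimal sketch of a server block; the server name, upstream port, and log path are placeholders:

```nginx
server {
    listen 80;
    server_name rpc.example.com;
    access_log /var/log/nginx/access.log apiusage;

    location / {
        proxy_pass http://127.0.0.1:26657;
    }
}
```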
Test the new configuration:
Reload Nginx to apply changes:
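For example:

```bash
sudo nginx -t
sudo systemctl reload nginx
```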
Log rotation in Nginx is a process for managing log files to prevent them from becoming excessively large and consuming too much disk space. As Nginx continuously logs web requests, these files can grow rapidly. Without rotation, they can lead to performance issues and make log analysis more difficult. The default setting for log rotation is daily, which means the logs that GoAccess can use for its reporting also only cover a single day. To increase that timeframe, do the following:
Edit log rotation configuration:
Add the configuration below; change monthly to daily or weekly if you need daily or weekly rotation of the logs.
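A sketch based on the default Ubuntu nginx logrotate file (typically /etc/logrotate.d/nginx), with monthly rotation:

```
/var/log/nginx/*.log {
    monthly
    missingok
    rotate 12
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1 || true
    endscript
}
```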
Apply the new rotation configuration:
Generate an HTML report:
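A sketch of the command, assuming the default access log path and an output location served by your web server:

```bash
sudo goaccess /var/log/nginx/access.log --log-format=COMBINED -o /var/www/html/report.html
```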
If you wish to automate this, use crontab to generate recurring reports:
Open crontab for editing (use sudo, otherwise crontab will not have access to the log file):
Add the line to automate hourly report generation:
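For example, to regenerate the report at the top of every hour (paths as above):

```
0 * * * * goaccess /var/log/nginx/access.log --log-format=COMBINED -o /var/www/html/report.html
```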
Before configuring Nginx, install GoAccess, a real-time web log analyzer.
Update your package lists:
Install GoAccess:
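For example, on Ubuntu:

```bash
sudo apt update
sudo apt install goaccess
```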
Once you begin an unbonding-delegation, you can see its information by using the following command:
Or if you want to check all your current unbonding-delegations with distinct validators:
Additionally, you can get all the unbonding-delegations from a particular validator:
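A sketch of the three queries described above, using the standard staking subcommands:

```bash
# a single unbonding delegation
secretd query staking unbonding-delegation <delegator-address> <validator-operator-address>

# all of your unbonding delegations
secretd query staking unbonding-delegations <delegator-address>

# all unbonding delegations from a particular validator
secretd query staking unbonding-delegations-from <validator-operator-address>
```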
There is currently a 21-day unbonding period in place, during which no rewards are handed out.
If for any reason the validator misbehaves, or you just want to unbond a certain amount of tokens, use the following command.
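A sketch, using an example amount of 1000000uscrt:

```bash
secretd tx staking unbond <validator-operator-address> 1000000uscrt --from <key-alias>
```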
The unbonding will be automatically completed when the unbonding period has passed.
A redelegation is a type of delegation that allows you to bond illiquid tokens from one validator to another:
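A sketch, using an example amount of 1000000uscrt:

```bash
secretd tx staking redelegate <src-validator-operator-address> <dst-validator-operator-address> 1000000uscrt --from <key-alias>
```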
Here you can also redelegate a specific shares-amount or a shares-fraction with the corresponding flags.
The redelegation will be automatically completed when the unbonding period has passed.
Once you've submitted a delegation to a validator, you can see its information by using the following command:
Example:
Or if you want to check all your current delegations with distinct validators:
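A sketch of both queries:

```bash
# a single delegation
secretd query staking delegation <delegator-address> <validator-operator-address>

# all of your delegations
secretd query staking delegations <delegator-address>
```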
On the Secret Network mainnet, we delegate uscrt, where 1 scrt = 1000000 uscrt. Here's how you can bond tokens to a validator (i.e. delegate):
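A sketch, using an example amount of 1000000uscrt (1 SCRT):

```bash
secretd tx staking delegate <validator-operator-address> 1000000uscrt --from <key-alias>
```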
Example:
<validator-operator-address> is the operator address of the validator to which you intend to delegate. If you are running a full node, you can find this with:
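A sketch of that lookup:

```bash
secretd keys show <key-alias> --bech val
```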
Where <key-alias> is the name of the key you specified when you initialized secretd.
While tokens are bonded, they are pooled with all the other bonded tokens in the network. Validators and delegators obtain a percentage of shares that equal their stake in this pool.
Once you begin a redelegation, you can see its information by using the following command:
Or if you want to check all your current redelegations with distinct validators:
Additionally, you can get all the outgoing redelegations from a particular validator:
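A sketch of the three queries:

```bash
# a single redelegation
secretd query staking redelegation <delegator-address> <src-validator-operator-address> <dst-validator-operator-address>

# all of your redelegations
secretd query staking redelegations <delegator-address>

# all outgoing redelegations from a particular validator
secretd query staking redelegations-from <validator-operator-address>
```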
Parameters define high level settings for staking. You can get the current values by using:
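For example:

```bash
secretd query staking params
```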
With the above command you will get the values for:
Unbonding time
Maximum number of validators
Coin denomination for staking
Example:
All these values are subject to updates through a governance process via ParameterChange proposals.
You can also query all of the delegations to a particular validator:
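For example:

```bash
secretd query staking delegations-to <validator-operator-address>
```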
Example:
A staking Pool defines the dynamic parameters of the current state. You can query them with the following command:
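For example:

```bash
secretd query staking pool
```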
With the pool command you will get the values for:
Not-bonded and bonded tokens
Token supply
Current annual inflation and the block in which the last inflation was processed
Last recorded bonded shares