Now that you've learned where etcd is used and had some fun playing with an online etcd cluster simulator, let's move on and create our own etcd cluster in Amazon EC2 using CoreOS (a lightweight operating system designed for massive deployments of container-based solutions).
Go to this CoreOS web page and, in the stable channel, find the us-east-1 region (North Virginia) and select the HVM AMI (this kind of Amazon Machine Image is suited for many different EC2 instance families).

Select the instance type. As we are only having some fun, a t2.micro should be enough. (Remember that you will be charged for as long as your instances are running, so when you're done, terminate all instances to avoid any surprises on your credit card bill!)

In the instance configuration step, launch 3 instances and leave the default settings for the other fields.

In a terminal, issue the following command:
$ curl https://discovery.etcd.io/new?size=3
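The service replies with a fresh discovery URL; the token below is just an illustration, yours will be different:

https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de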
The size parameter refers to the initial size of the etcd cluster (3 nodes, in our case). Now copy the returned URL and head back to the EC2 wizard. Back on the instance configuration screen, in the Advanced Details section, paste the following configuration, replacing "your_token_here" with the discovery URL you copied:
#cloud-config
hostname: etcd-node
coreos:
  etcd2:
    discovery: your_token_here
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start

In the storage step, keep 8 GB and select General Purpose SSD (gp2).

In the tag step, set the Name tag to etcd-node.

In the security group step, allow TCP ports 22 (SSH), 2379 (etcd client traffic) and 2380 (etcd peer traffic) from anywhere. Since we are only playing around, this is fine for now (see Production Considerations below for more information).

Launch the instances and download your key pair so that you can SSH into the machines later. Click the View Instances button and wait for the instances to initialize.
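While you wait, here's a quick optional check: since port 2379 is open to anywhere, you can hit etcd's version endpoint directly from your own machine (the version numbers below are just illustrative):

$ curl http://paste_public_ip_here:2379/version
{"etcdserver":"2.3.7","etcdcluster":"2.3.0"}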
Nice! Now retrieve the URL with the token you entered before by right-clicking any instance > Instance Settings > View/Change User Data. Paste that URL into your browser and note that it returns a JSON document with 3 entries, one for each node we've just bootstrapped.
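The real response comes back as a single unformatted line; reformatted and abbreviated here, with a purely illustrative token, member names, IDs, and IPs, it looks roughly like this:

{"action":"get","node":{"key":"/_etcd/registry/3e86b59982e49066c5d813af1c2e2579cbf573de","dir":true,"nodes":[
  {"key":"/_etcd/registry/3e86b599.../4ba21b3b1d8b447c","value":"etcd1=http://10.0.1.10:2380"},
  {"key":"/_etcd/registry/3e86b599.../9d2f51c8a7e64b21","value":"etcd2=http://10.0.1.11:2380"},
  {"key":"/_etcd/registry/3e86b599.../c613a0b76dab4dd4","value":"etcd3=http://10.0.1.12:2380"}
]}}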
If the returned JSON doesn't reflect that, something probably went wrong. You'll have to repeat all the steps and generate a fresh token. Take care with the configuration in the Advanced Details step: it's a YAML document and, as you might know, YAML has a strict syntax that must be respected. To verify that your configuration is correct, use the following validator: https://coreos.com/validate/
OK, assuming the steps above succeeded, let's continue and SSH into one of the nodes so that we can check the cluster's health. Select an instance in EC2 and copy its Public IP. Now open a terminal session and issue the following commands:
$ chmod 400 key_you_have_downloaded.pem
$ ssh -i key_you_have_downloaded.pem core@paste_public_ip_here
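Once connected, you can optionally confirm that the etcd2 unit started cleanly before querying the cluster (output abbreviated):

$ systemctl status etcd2
● etcd2.service - etcd2
   Active: active (running)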
Next, check the cluster membership:
$ etcdctl member list
This should list the same members, with the same values, as the JSON we saw earlier.
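For reference, the output has this shape (member names, IDs and private IPs are illustrative and will differ in your cluster):

4ba21b3b1d8b447c: name=etcd1 peerURLs=http://10.0.1.10:2380 clientURLs=http://10.0.1.10:2379
9d2f51c8a7e64b21: name=etcd2 peerURLs=http://10.0.1.11:2380 clientURLs=http://10.0.1.11:2379
c613a0b76dab4dd4: name=etcd3 peerURLs=http://10.0.1.12:2380 clientURLs=http://10.0.1.12:2379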
$ etcdctl cluster-health
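Healthy output looks roughly like this (same illustrative IDs and IPs as above):

member 4ba21b3b1d8b447c is healthy: got healthy result from http://10.0.1.10:2379
member 9d2f51c8a7e64b21 is healthy: got healthy result from http://10.0.1.11:2379
member c613a0b76dab4dd4 is healthy: got healthy result from http://10.0.1.12:2379
cluster is healthy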
If you see output like that, your cluster is healthy and operating normally with three nodes. Perfect! In the next tutorial, we'll learn how to interact with the etcd REST API and etcdctl to manipulate data inside the cluster.
Now let's dive deeper into the details of what we've done so far and take some production considerations into account.
Production Considerations
We've bootstrapped a 3-node etcd cluster in Amazon EC2 that's operating perfectly. However, there are many considerations to take into account when setting up etcd in production.
To achieve high availability for your etcd cluster in Amazon EC2, you should launch each etcd node in a different availability zone. This prevents a cluster outage if a specific availability zone experiences problems (you can always check AWS service status at http://status.aws.amazon.com/).
In contrast with what we did before using the default VPC created automatically by AWS, it's recommended that you set up your own VPC and keep fine-grained control over subnets, route tables, etc. Your nodes should be launched into private subnets without public IPs and should only be accessible over a VPN connection from your office. That way you improve security, and your security group rules wouldn't be open to "Anywhere" but only to your VPC CIDR.
We've done a couple of manual steps here in order to have our etcd cluster up. In a production environment, it's highly recommended that these steps be automated with a provisioning tool like HashiCorp Terraform.
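Even before adopting a full tool like Terraform, a small script beats clicking through the console. Here's a minimal sketch using the AWS CLI (the AMI ID, key name and security group name are placeholders, and cloud-config.yaml is a file holding the same user data we wrote above; remember that each new cluster needs a fresh discovery token baked into it):

$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 3 \
    --instance-type t2.micro \
    --key-name key_you_have_downloaded \
    --security-groups etcd-sg \
    --user-data file://cloud-config.yaml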