
AWS CLI — Know its Applications and Benefits





Amazon Web Services (AWS) is the market leader and top innovator in the field of cloud computing. It helps companies with a wide variety of workloads such as game development, data processing, warehousing, archiving, and many more. But there is more to AWS than just the eye-catching browser console. It’s time you checked out Amazon’s Command Line Interface, the AWS CLI.


What is AWS CLI?



AWS Command Line Interface (AWS CLI) is a unified tool with which you can manage and monitor all your AWS services from a terminal session on your client.

Although most AWS services can be managed through the AWS Management Console or via their APIs, there is a third way that can be very useful: the Command Line Interface (AWS CLI). AWS has made it possible for Linux, macOS, and Windows users to manage the main AWS services from the command line of a local terminal session. So, with a single-step installation and minimal configuration, you can start using all of the functionality provided by the AWS Management Console from a terminal program, in any of the following ways:

  • Linux shells: You can use command shell programs such as bash, tcsh, and zsh to run commands on Linux, macOS, or Unix
  • Windows Command Line: On Windows, you can run commands in PowerShell or in the Windows command prompt
  • Remotely: You can run commands on Amazon EC2 instances through a remote terminal program such as PuTTY or an SSH client. You can even use AWS Systems Manager to automate operational tasks across your AWS resources

Apart from this, the AWS CLI also provides direct access to the public APIs of AWS services. In addition to these low-level, API-equivalent commands, it offers higher-level customizations for several services.
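For example, alongside the high-level aws s3 commands, the CLI exposes the underlying API operations one-to-one through aws s3api (the bucket name below is just a placeholder):

$ aws s3 ls s3://my-example-bucket
$ aws s3api list-objects-v2 --bucket my-example-bucket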

This article will tell you everything that you need to know to get started with the AWS Command Line Interface and to use it proficiently in your daily operations.

Uses of AWS CLI


Below are a few reasons compelling enough to get you started with the AWS Command Line Interface.

Easy Installation



Before the AWS CLI was introduced, installing toolkits such as the old AWS API tools involved many complex steps, and users had to set up multiple environment variables. Installation of the AWS Command Line Interface, by contrast, is quick, simple, and standardized.

Saves Time


Despite being user-friendly, the AWS Management Console can be quite a hassle. Suppose you are trying to find a large Amazon S3 folder. You have to log in to your account, search for the right S3 bucket, find the right folder, and look for the right file. With the AWS CLI, if you know the right command, the entire task takes just a few seconds.
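As a rough sketch (the bucket and folder names here are placeholders), a single command can list everything under a folder and report its total size:

$ aws s3 ls s3://my-bucket/my-folder/ --recursive --human-readable --summarize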

Automates Processes



AWS CLI gives you the ability to automate the entire process of controlling and managing AWS services through scripts. These scripts make it easy for users to fully automate cloud infrastructure.
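As a small illustration (the tag key and value here are hypothetical), a short shell script like the one below could stop every running EC2 instance carrying a given tag, a job that would otherwise mean clicking through instances one by one in the console:

#!/bin/bash
# Find all running instances tagged Environment=dev
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)
# Stop them in one call, if any were found
if [ -n "$ids" ]; then
  aws ec2 stop-instances --instance-ids $ids
fi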

Installing the AWS CLI Using pip

Step 1: Install pip (on Ubuntu OS)

$ sudo apt install python3-pip

Step 2: Install CLI

$ pip install awscli --upgrade --user

Step 3: Check installation

$ aws --version

Once you are sure the AWS CLI is installed successfully, you need to configure it before you can start accessing your AWS services.

Configure AWS CLI


Step 4: Use the following command to configure the AWS CLI

$ aws configure
AWS Access Key ID [None]: AKI************
AWS Secret Access Key [None]: wJalr********
Default region name [None]: us-west-2
Default output format [None]: json

As a result of the above command, the AWS CLI will prompt you for four pieces of information. The first two are required: your AWS Access Key ID and AWS Secret Access Key, which serve as your account credentials. The other two, region and output format, can be left at their defaults for the time being.

NOTE: You can generate new credentials within AWS Identity and Access Management (IAM) if you do not already have them.
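Once configured, a quick way to confirm that the CLI can actually reach AWS with your credentials is to ask which account and user you are authenticated as:

$ aws sts get-caller-identity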

All set! You are ready to start using the AWS CLI now. Let’s check out how powerful it can be with the help of a few basic examples.

How to use AWS CLI?


Suppose you have some services running on AWS and you set them up using the AWS Management Console. The exact same work can be done with a whole lot less effort using the AWS Command Line Interface.

Here’s a demonstration,

Let’s say you want to launch an Amazon Linux instance from EC2.

If you wish to launch an instance using the AWS Management Console, you’ll need to:

  • Load the EC2 Dashboard
  • Click Launch Instance
  • Select AMI and instance types of choice
  • Set network, life cycle behavior, IAM, and user data settings on the Configure Instance Details page
  • Select storage volumes on the Add Storage page
  • Add tags on the Add Tags page
  • Configure a security group on the Configure Security Group page
  • Finally, review and launch the instance

And don’t forget the pop-up where you’ll confirm your key pair, and the trip back to the EC2 Instances dashboard to get your instance data. This doesn’t sound that bad, but imagine doing it all over a slow internet connection, or having to launch multiple instances with different configurations several times. It would take a lot of time and effort, wouldn’t it?

Now, let’s see how to do the same task by using AWS CLI.

Step 1: Creating a new IAM user using AWS CLI

Let’s see how to create a new IAM group and a new IAM user, and then add the user to the group, using the AWS Command Line Interface.

  • First, use create-group to create a new IAM group
$ aws iam create-group --group-name mygroup
  • Use create-user to create a new user
$ aws iam create-user --user-name myuser
  • Then add the user to the group using the add-user-to-group command
$ aws iam add-user-to-group --user-name myuser --group-name mygroup
  • Finally, assign a policy (saved in a JSON file; a sample policy document is sketched after this list) to the user by using the put-user-policy command
$ aws iam put-user-policy --user-name myuser --policy-name mypoweruserrole --policy-document file://MyPolicyFile.json
  • If you want to create a set of access keys for an IAM user, use the command create-access-key
$ aws iam create-access-key --user-name myuser
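For completeness, the MyPolicyFile.json referenced above must contain an IAM policy document. A minimal sketch (this one grants full access, so treat it purely as an illustration) could be written from the shell like this:

$ cat > MyPolicyFile.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
EOF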

Step 2: Launching Amazon Linux instance using AWS CLI

Just like when you launch an EC2 instance using the AWS Management Console, you need to create a key pair and a security group before launching an instance.

  • Use the create-key-pair command to create a key pair, and use the --query option to pipe your key directly into a file
$ aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem
  • Then create a security group and add rules to the security group
$ aws ec2 create-security-group --group-name mysecurityg --description "My security group"
$ aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 3389 --cidr 203.0.113.0/24
  • Finally, launch an EC2 instance of your choice using the run-instances command
$ aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 --instance-type t2.micro --key-name mykeypair --security-group-ids sg-903004f8

That may seem like a lot of commands, but you can achieve the same result by combining them into a single script. That way, you can modify and rerun the script whenever necessary instead of starting again from the first step, as you would in the AWS Management Console. This can drop a five-minute process down to a couple of seconds.
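As a rough sketch of such a script (the names, AMI ID, and CIDR range are carried over from the examples above and would need to be adjusted for your own account), the whole launch could look like this:

#!/bin/bash
# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem

# Create a security group, capture its ID, then open an inbound port
sg_id=$(aws ec2 create-security-group --group-name mysecurityg \
  --description "My security group" --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
  --protocol tcp --port 3389 --cidr 203.0.113.0/24

# Launch the instance with the key pair and security group created above
aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 \
  --instance-type t2.micro --key-name mykeypair --security-group-ids "$sg_id"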

So, now you know how to use AWS CLI to create an IAM user and launch an EC2 instance of your choice. But AWS CLI can do much more.

