
AWS CLI— Know its Applications and Benefits






Amazon Web Services (AWS) is the market leader and top innovator in the field of cloud computing. It helps companies run a wide variety of workloads such as game development, data processing, data warehousing, archiving, application development, and many more. But there is more to AWS than the eye-catching browser console. It’s time you checked out Amazon’s command line interface, the AWS CLI.


What is AWS CLI?



AWS Command Line Interface (AWS CLI) is a unified tool with which you can manage and monitor all your AWS services from a terminal session on your client machine.

Although most AWS services can be managed through the AWS Management Console or via the APIs, there is a third way that can be very useful: the Command Line Interface (AWS CLI). AWS has made it possible for Linux, macOS, and Windows users to manage the main AWS services from the command line of a local terminal session. So, with a single-step installation and minimal configuration, you can use all of the functionality provided by the AWS Management Console from your terminal program. That means:

  • Linux shells: You can use command shell programs like bash, tcsh, and zsh to run commands in operating systems like Linux, macOS, or Unix
  • Windows Command Line: On Windows, you can run commands in PowerShell or in the Windows command prompt
  • Remotely: You can run commands on Amazon EC2 instances through a remote terminal such as PuTTY or SSH. You can even use AWS Systems Manager to automate operational tasks across your AWS resources

Apart from this, it also provides direct access to the public APIs of AWS services. In addition to the low-level, API-equivalent commands, the AWS CLI offers higher-level customizations for several services.
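
For example, listing the contents of an S3 bucket can be done either through the low-level s3api commands, which map directly onto API calls, or through the higher-level s3 customizations. The bucket name below is just a placeholder:

$ aws s3api list-objects-v2 --bucket my-example-bucket
$ aws s3 ls s3://my-example-bucket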

This article will tell you everything that you need to know to get started with the AWS Command Line Interface and to use it proficiently in your daily operations.

Uses of AWS CLI


Below are a few reasons compelling enough to get you started with the AWS Command Line Interface.

Easy Installation



Before the AWS CLI was introduced, installing toolkits like the older AWS API tools involved many complex steps, and users had to set up multiple environment variables. The installation of the AWS Command Line Interface, by contrast, is quick, simple, and standardized.

Saves Time


Despite being user-friendly, the AWS Management Console can be quite a hassle. Suppose you are trying to find a large Amazon S3 folder. You have to log in to your account, search for the right S3 bucket, find the right folder, and look for the right file. With the AWS CLI, if you know the right command, the entire task takes just a few seconds.
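
For instance, a single command along these lines (the bucket and folder names here are placeholders) lists a folder recursively and reports the total size and object count:

$ aws s3 ls s3://my-example-bucket/my-folder/ --recursive --human-readable --summarize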

Automates Processes



The AWS CLI gives you the ability to automate the entire process of controlling and managing AWS services through scripts. These scripts make it easy for users to fully automate their cloud infrastructure.
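
As a minimal sketch, a script like the following (the filter and the idea of running it nightly are assumptions you would adapt) could stop every running EC2 instance in an account, which is handy for a development environment:

#!/bin/bash
# Collect the IDs of all running instances
ids=$(aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text)

# Stop them in one call if any were found
if [ -n "$ids" ]; then
  aws ec2 stop-instances --instance-ids $ids
fi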

Installing the AWS CLI Using pip

Step 1: Install pip (on Ubuntu OS)

$ sudo apt install python3-pip

Step 2: Install CLI

$ pip3 install awscli --upgrade --user

Step 3: Check installation

$ aws --version
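
If the installation succeeded, this prints a version string along the following lines (the exact version numbers will differ on your machine):

aws-cli/1.27.100 Python/3.10.6 Linux/5.15.0-76-generic botocore/1.29.100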

Once you are sure that AWS CLI is successfully installed, you need to configure it to start accessing your AWS Services through AWS CLI.

Configure AWS CLI


Step 4: Use the below command to configure AWS CLI

$ aws configure
AWS Access Key ID [None]: AKI************
AWS Secret Access Key [None]: wJalr********
Default region name [None]: us-west-2
Default output format [None]: json

As a result of the above command, the AWS CLI will prompt you for four pieces of information. The first two are required: your AWS Access Key ID and AWS Secret Access Key, which serve as your account credentials. The other two are the default region and output format, which you can leave at their defaults for the time being.

NOTE: You can generate new credentials within AWS Identity and Access Management (IAM) if you do not already have them.
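
Behind the scenes, aws configure saves these values in two plain-text files in your home directory, which you can inspect or edit later:

~/.aws/credentials   (holds the access key ID and secret access key)
~/.aws/config        (holds the default region and output format)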

All set! You are ready to start using the AWS CLI now. Let’s check out how powerful the AWS CLI can be with the help of a few basic examples.
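
As a quick first check, you can ask AWS who you are; if your credentials are valid, this returns your account ID and user ARN:

$ aws sts get-caller-identity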

How to use AWS CLI?


Suppose you have some services running on AWS and you set them up using the AWS Management Console. The exact same work can be done with a whole lot less effort using the AWS Command Line Interface.

Here’s a demonstration.

Let’s say you want to launch an Amazon Linux instance from EC2.

If you wish to use the AWS Management Console to launch an instance, you’ll need to:

  • Load the EC2 Dashboard
  • Click Launch Instance
  • Select the AMI and instance type of your choice
  • Set network, life cycle behavior, IAM, and user data settings on the Configure Instance Details page
  • Select storage volumes on the Add Storage page
  • Add tags on the Add Tags page
  • Configure a security group on the Configure Security Group page
  • Finally, review and launch the instance

And don’t forget the pop-up where you’ll confirm your key pair, and then heading back to the EC2 Instances dashboard to get your instance data. This doesn’t sound that bad, but imagine doing it all over a slow internet connection, or having to launch multiple instances with different variations several times. It would take a lot of time and effort, wouldn’t it?

Now, let’s see how to do the same task by using AWS CLI.

Step 1: Creating a new IAM user using AWS CLI

Let’s see how to create a new IAM group and a new IAM user, and then add the user to the group, using the AWS Command Line Interface.

  • First, use create-group to create a new IAM group
$ aws iam create-group --group-name mygroup
  • Use create-user to create a new user
$ aws iam create-user --user-name myuser
  • Then add the user to the group using the add-user-to-group command
$ aws iam add-user-to-group --user-name myuser --group-name mygroup
  • Finally, assign a policy (saved in a file) to the user by using the put-user-policy command; a minimal sample policy document is sketched after this list
$ aws iam put-user-policy --user-name myuser --policy-name mypoweruserole --policy-document file://MyPolicyFile.json
  • If you want to create a set of access keys for an IAM user, use the command create-access-key
$ aws iam create-access-key --user-name myuser
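
As referenced in the list above, put-user-policy expects a JSON policy document. A minimal sketch of what MyPolicyFile.json could contain is shown below; the broad EC2 permissions here are only an illustration and should be scoped down for real use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}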

Step 2: Launching Amazon Linux instance using AWS CLI

Just like when launching an EC2 instance from the AWS Management Console, you need to create a key pair and a security group before launching the instance.

  • Use the create-key-pair command to create a key pair, and use the --query option to pipe your key directly into a file
$ aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem
  • Then create a security group and add rules to the security group
$ aws ec2 create-security-group --group-name mysecurityg --description "My security group"
$ aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 3389 --cidr 203.0.113.0/24
  • Finally, launch an EC2 instance of your choice using the run-instances command
$ aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 --instance-type t2.micro --key-name mykeypair --security-group-ids sg-903004f8
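
Once run-instances returns, you can also check on the new instance from the terminal instead of going back to the console. For example, the following sketch uses the --query option to list the IDs and public IP addresses of your running instances:

$ aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,PublicIpAddress]" --output table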

That may look like a lot of commands, but you can achieve the same result by combining them into a single script that you save and reuse. That way you can modify and run the code whenever necessary, instead of starting from the first step as you would with the AWS Management Console. This can drop a five-minute process down to a couple of seconds.
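
As a rough sketch, the commands from Step 2 could be combined into a script like the following. The names and AMI ID are carried over from the examples above, the security group ID is captured from the create call rather than hard-coded, and the ingress rule is opened on port 22 for SSH purely as an illustration:

#!/bin/bash
# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem

# Create a security group and capture its ID
sg_id=$(aws ec2 create-security-group --group-name mysecurityg --description "My security group" --query 'GroupId' --output text)

# Allow inbound SSH from a specific address range
aws ec2 authorize-security-group-ingress --group-id "$sg_id" --protocol tcp --port 22 --cidr 203.0.113.0/24

# Launch the instance
aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 --instance-type t2.micro --key-name mykeypair --security-group-ids "$sg_id"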

So, now you know how to use the AWS CLI to create an IAM user and launch an EC2 instance of your choice. But the AWS CLI can do much more.

