Install MongoDB on AWS EC2 Ubuntu

Set up the EC2 command line tools, generate an EC2 key pair, and create a security group. For step 1, download the tools: see Amazon EC2 Command Line Reference - Setting Up the CLI Tools (Linux and Mac OS X) - Setting Up the Amazon EC2 CLI Tools on RHEL, Ubuntu, or Mac OS X - Download and Install the CLI Tools.

Step 2: Install MongoDB on Ubuntu 18.04. Now that the repository and key have been added to Ubuntu, run the commands below to install the package:

$ sudo apt update
$ sudo apt install -y mongodb-org

Step 3: Manage MongoDB. After installing MongoDB, the commands below can be used to stop, start and enable MongoDB to start automatically when the system boots:

$ sudo systemctl stop mongod.service
$ sudo systemctl start mongod.service
$ sudo systemctl enable mongod.service

Question: Should MongoDB be installed on the Ubuntu instance in EC2 (where else would it be installed)? I will be using S3 for storage and linking the paths in the DB. I am creating a website and want to use MongoDB for the database; it's a MEAN stack, and Node.js will be installed on the same instance. Are there any other recommendations for MongoDB on EC2 or AWS?

This tutorial explains how to install MongoDB on EC2 (Amazon Linux 2).

MongoDB is an open-source NoSQL database that stores data in JSON-like documents, unlike SQL databases that store data in tables. MongoDB provides high availability with replica sets, scales horizontally using sharding, and can be used as a file system via GridFS.


Install MongoDB on EC2 using Yum

Step 1 – Update Amazon Linux 2

$ sudo yum update -y

Step 2 – Create Yum repository

$ sudo nano /etc/yum.repos.d/mongodb-org-4.2.repo

Put the following content inside the /etc/yum.repos.d/mongodb-org-4.2.repo file, then save and exit using CTRL + X.

name=MongoDB Repository
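The line above is only a fragment of the repo file. Based on MongoDB's published yum repository layout for 4.2 on Amazon Linux 2, the full file presumably looks like this (verify the baseurl against MongoDB's current installation docs):

```
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.com/static/pgp/server-4.2.asc
```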

Step 3 – Install MongoDB on EC2

$ sudo yum install mongodb-org -y

Step 4 – Check which init system your platform uses to start/stop/restart the MongoDB service.

$ ps --no-headers -o comm 1


i) If the output of the above command is systemd, run the following commands to start/stop/restart and check the status of the MongoDB service.

$ sudo systemctl status mongod

$ sudo systemctl start mongod

$ sudo systemctl stop mongod

$ sudo systemctl restart mongod

ii) If the output of the above command is init, run the following commands to start/stop/restart and check the status of the MongoDB service.

$ sudo service mongod start

$ sudo service mongod status

$ sudo service mongod stop

$ sudo service mongod restart

Step 5 – Check the MongoDB process and configuration file path using the following command.

$ ps -ef | grep mongod

Step 6 – Connect to MongoDB

Connect to the database by starting the mongo shell:

$ mongo


Step 7 – Check MongoDB Version

$ mongo --version

Some Basic Commands for MongoDB

1) To show all databases in MongoDB

> show dbs

2) create a database or switch to already created databases

> use my-db;

Note: This DB will not appear in the show dbs output until we insert some data into it.

3) Insert data into the newly created database.

> db.<collection name>.insert({"field name":"field value"})

> db.names.insert({"username_name":"chandan"})

4) show your collections

> show collections;

5) Delete your collections

> db.<collection name>.drop()

> db.names.drop()

6) Drop the database

Make sure to switch to the database you want to delete and run the following command.

> db.dropDatabase()


I hope you enjoyed this tutorial and learned how to install MongoDB on EC2. If you found it helpful, please share this post with others, and leave your feedback, comments, or questions in the comment box. I will be happy to resolve all your queries.

Thank You


This guide provides instructions on setting up production instances of MongoDB across Amazon's Web Services (AWS) EC2 infrastructure.

First, we'll step through deployment planning (instance specifications, deployment size, etc.) and then we'll set up a single production node. We'll use those setup steps to deploy a three node MongoDB replica set for production use. Finally, we'll briefly cover some advanced topics such as multi-region availability and data backups.

If you installed MongoDB via the AWS Marketplace, this guide can be used to get your instance up and running quickly. Start with the section on Configure Storage to set up a place for your data to be stored. After that, refer to the Starting MongoDB section to get your instance started and begin using MongoDB.

If you're interested in scaling your deployment, check out the sections on Deploy a Multi-node Replica Set and Deploy a Sharded Cluster below.


Generally, there are two ways to work with EC2 - via the command line tools or the AWS Management Console. This guide will use the EC2 command line tools to manage security and storage and launch instances. Use the following steps to set up the EC2 tools on your system:

  • Download the EC2 command line tools
  • Next, refer to Prerequisites and Setting Up the Tools from Amazon’s Getting Started with the Command Line Tools

Planning Your Deployment¶

Before starting up your EC2 instances, it's best to sketch out a few details of the planned deployment. Regardless of the configuration or the number of nodes in your deployment, we'll configure each one in roughly the same manner.

Instance Specifications¶

Amazon has several instance choices available, ranging from low to high throughput (based on CPU and memory). Each instance available serves a different purpose and plays a different role when planning your deployment. There are several roles to consider when deploying a MongoDB production cluster. Consider a situation where your deployment contains an even number of replicated data (mongod) instances; an arbiter participates in electing the primary but doesn't hold any data. Therefore a Small instance may be appropriate for the arbiter role, but for data nodes you'll want to use 64-bit (standard Large or higher) instances, which have greater CPU and memory scaling. For the purposes of this guide we'll be focused on deploying mongod instances that use the standard Large instance. The AMI (ID: ami-41814f28) is the 64-bit base Amazon Linux, upon which we'll install MongoDB.

Storage Configuration¶

For storage we recommend using multiple EBS volumes (as opposed to instance-based storage, which is ephemeral) in a RAID-based configuration. Specifically, for production deployments you should use RAID 10 across 4-8 EBS volumes for the best performance. When deploying RAID 10, you'll need enough volume storage to be twice that of the desired available storage for MongoDB. Therefore for 8 GiB of available storage you'll need to have 16 GiB of allocated storage space across multiple EBS volumes.
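The sizing rule above can be sanity-checked with a quick calculation: RAID 10 mirrors every block once, so usable capacity is half of the raw total.

```shell
# RAID 10: usable capacity is half of raw, since each block is mirrored once
volumes=4          # number of EBS volumes
size_gib=4         # size of each volume in GiB
raw=$((volumes * size_gib))
usable=$((raw / 2))
echo "${raw} GiB raw across ${volumes} volumes -> ${usable} GiB usable"
```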


For the purposes of this guide, our topology will be somewhat simple: one to three EC2 instances, each with multiple EBS volumes attached, all located in the same availability zone (and by extension, within the same region). If you are interested in creating a deployment that spans availability zones or regions, it's best to do that planning up front and take into account security group designations (they cannot span regions) and hostname/DNS access (AWS internal IP addresses can only be used within a zone).

An example 3 node replica set with RAID 10 storage, spanning multiple availability zones, would look similar to the following. Availability zones within EC2 are similar to different server racks, therefore it is recommended that you deploy your replica set across multiple zones.

For even greater redundancy and failover, you could also deploy your replica set across multiple regions (and go further with multiple zones in each region):

Refer to the AWS documentation on Using Regions and Availability Zones for more information.


The recommended approach to securing your instances is to use multiple security groups for your MongoDB deployment, one for each type of interaction. For example, you could use one group to manage communication amongst the nodes in your cluster, another group that allows your application to communicate with the database and, optionally, a group for tools and maintenance tasks.

An example setup with two security groups might look like this:

Before starting up instances we want to get the security groups created. As previously discussed, we recommend using multiple groups, one for each type of interaction. The following steps will show you how to create two groups (one for your app and another for your database) and provide the authorizations necessary for communication between them.

From the command line, create the database group and authorize SSH:

Authorize communication within the group of MongoDB instances by adding the group to itself. Note you'll need to provide the user account number (using the -u flag) when authorizing groups:


Optionally, for testing you could also enable the port for the MongoDBweb-based status interface (port 28017):

Now create a group that will hold application servers, which will communicate with the database cluster:

Finally, authorize communication from the application servers (group application) to the MongoDB instances (group database):
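The concrete commands for the security group steps above are missing from this copy of the guide. Using the legacy EC2 API tools this guide is written against, the sequence was presumably along these lines (the group names, ports, and the account number 123456789012 are placeholders; these tools have long been deprecated in favor of the aws CLI):

```shell
# Create the database group and allow SSH in (legacy EC2 API tools; deprecated)
ec2-create-group database -d "security group for the database cluster"
ec2-authorize database -p 22

# Let MongoDB instances talk to each other by authorizing the group to itself;
# -u takes your AWS account number (placeholder shown)
ec2-authorize database -o database -u 123456789012

# Optional, for testing: open the web-based status interface
ec2-authorize database -p 28017

# Create the application group, then let it reach mongod on 27017
ec2-create-group application -d "security group for application servers"
ec2-authorize database -P tcp -p 27017 -o application -u 123456789012
```

With the modern aws CLI, the equivalents are `aws ec2 create-security-group` and `aws ec2 authorize-security-group-ingress`.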

Refer to the AWS guide Using Security Groups for more information on creating and managing security groups.

The next step is to generate an SSH key pair that we'll use to connect to our running EC2 instances. Amazon's tools provide a mechanism to quickly generate a public-private key pair. Once generated, we'll need to save the private key so that we can use it to connect via SSH later (click here for more info on key pairs and AWS).

First, generate the key pair:
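The elided command, using the legacy EC2 API tools and a placeholder key name, is presumably:

```shell
# Generate a keypair; the private key material is printed to stdout
ec2-add-keypair cluster-keypair
```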

Save the contents of the key to a file (including the BEGIN and END lines) and make sure that file is only readable by you:
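As a concrete sketch (the key body below is a placeholder, not a real key), saving and locking down the file looks like this:

```shell
# Placeholder key material -- in practice, paste the output of ec2-add-keypair
cat > /tmp/cluster-keypair.pem <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
...placeholder, not a real key...
-----END RSA PRIVATE KEY-----
EOF

# Restrict the file so only the owner can read it; ssh refuses group/world-readable keys
chmod 600 /tmp/cluster-keypair.pem
ls -l /tmp/cluster-keypair.pem
```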

Optionally, you can also add the key to the SSH agent (ssh-add) to ease connecting to our instances later:

We're finished with the pre-deployment steps; we've covered the storage and security considerations necessary to set up and deploy our instances.

Deploy a Single Node¶

We'll start our deployment by setting up a single node, because later on we'll use the same steps to set up a larger cluster. The first step is to launch the instance and then set up the EBS-backed RAID 10 storage for the instance. Setting up the storage requires creating, attaching, configuring and formatting the volumes where our data will be stored.



If you created a MongoDB instance via the AWS Marketplace, skip aheadto Configure Storage below.

Launch Instance¶

From the command line we can launch the instance. We'll need to supply an ID for an Amazon Machine Image (AMI) that we'll build our node from. We recommend using a 64-bit Amazon Linux AMI as the base of your MongoDB nodes. In this example, we are using ami-e565ba8c with the number of nodes (1), security group (database), authentication keypair (cluster-keypair), type of instance (m1.large) and availability zone (us-east-1a). Depending on the region you deploy into, a different AMI ID may be needed:
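Using the parameters listed above with the legacy EC2 API tools, the elided launch command was presumably of this form:

```shell
# Launch 1 m1.large instance from the Amazon Linux AMI into the database group
# (legacy EC2 API tools; parameter values come from the text above)
ec2-run-instances ami-e565ba8c -n 1 -g database -k cluster-keypair \
    -t m1.large -z us-east-1a
```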

Next, let's add some tags to the instance so we can identify it later. Tags are just metadata key-value pairs:
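A sketch of the tagging command (the instance ID and tag value are placeholders):

```shell
# Tag the new instance so it is easy to identify later
ec2-create-tags i-11eee072 --tag Name=mongodb-node-1
```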

Now we can ascertain some status information about running instances at AWS (including EBS volumes as well):
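The elided status commands are presumably the describe operations from the legacy tools:

```shell
# Show running instances and any attached EBS volumes
ec2-describe-instances
ec2-describe-volumes
```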

Configure Storage¶

Now that the instance is running, let's create the EBS volumes we'll use for storing our data. In this guide we'll set up 4 volumes with 4 GiB of storage each (16 GiB as configured, but because we're using a RAID 10 configuration that will become 8 GiB of available storage).

First off, create the EBS volumes, supplying the size (4) and zone (us-east-1a), and save the results into a temporary file that we'll read from for the next command. Here's the command we'll use:
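A plausible form of the elided command, looping four times and appending each result to vols.txt:

```shell
# Create four 4 GiB EBS volumes in us-east-1a, collecting the output for later
for i in 1 2 3 4; do
    ec2-create-volume -s 4 -z us-east-1a >> vols.txt
done
```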

Here’s the output of that command:

Now, let's attach those newly created volumes to our previously launched running instance from above. From the command line we'll start with the temp file (vols.txt), the running instance ID (i-11eee072), and a prefix for each attached device (/dev/sdh):
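The elided loop presumably reads the volume IDs out of vols.txt and attaches them as /dev/sdh1 through /dev/sdh4, e.g.:

```shell
# Attach each volume ID from vols.txt to the instance as /dev/sdh1../dev/sdh4
i=0
for vol in $(awk '{print $2}' vols.txt); do
    i=$((i + 1))
    ec2-attach-volume "$vol" -i i-11eee072 -d /dev/sdh${i}
done
```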

Assuming the volumes attached successfully, you should see something like this:

Now we'll need to connect to the running instance via SSH and configure those attached volumes as a RAID array. If you added the private key to your running SSH agent, you should be able to connect with something like (substituting your instance's hostname):
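For example (the hostname is a placeholder; ec2-user is the default user on Amazon Linux):

```shell
# Connect to the instance; with the key in your SSH agent no -i flag is needed
ssh ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```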

And now create the RAID array using the built-in mdadm program. You'll need the level (10), number of volumes (4), name of the new device (/dev/md0) and the attached device prefix (/dev/sdh*):
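With those parameters, the mdadm invocation (run as root on the instance) is presumably:

```shell
# Build a 4-volume RAID 10 array from the attached EBS devices
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdh1 /dev/sdh2 /dev/sdh3 /dev/sdh4

# Persist the array definition so it is reassembled on reboot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```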

Once mdadm is done and we've persisted the storage configuration, we'll need to tune the EBS volumes to achieve desired performance levels. This tuning is done by setting the "read-ahead" on each device. For more information refer to the blockdev man page.
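The elided tuning command is presumably a blockdev call; the value 128 (in 512-byte sectors) is an assumption drawn from common MongoDB read-ahead advice of that era:

```shell
# Set read-ahead (in 512-byte sectors) on the array device
sudo blockdev --setra 128 /dev/md0
```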

With the RAID 10 created, we now turn to the Logical Volume Manager (lvm), which we'll use to create logical volumes for the data, log files and journal for MongoDB. The purpose of using lvm is to (1) safely partition different objects from each other and (2) provide a mechanism that we can use to grow our storage sizes later. First we start by zeroing out our RAID, creating a physical device designation and finally a volume group for that device.

Once the volume group has been created, we now create logical volumes for the data, logs and journal. Depending upon the amount of available storage you may want to designate specific sizes vs. volume group percentages (as shown below). We recommend approximately 10 GB for log storage and 10 GB for journal storage.
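The elided LVM commands presumably follow this pattern; the 90/5/5 percentage split is an assumption consistent with the log and journal sizing advice above, so adjust it to your storage size:

```shell
# Zero the start of the array, then register it with LVM
sudo dd if=/dev/zero of=/dev/md0 bs=512 count=1
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0

# Carve out logical volumes for data, logs and journal as volume-group percentages
sudo lvcreate -l 90%vg -n data vg0
sudo lvcreate -l 5%vg -n log vg0
sudo lvcreate -l 5%vg -n journal vg0
```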

At this point we have three volumes to configure (/dev/vg0/..). For each volume we'll create a filesystem, mount point and an entry in the filesystem table. In the example below we used the ext4 filesystem, however you could instead elect to use xfs; just be sure to edit the mke2fs and sed commands accordingly. The /etc/fstab entries require the partition (e.g. /dev/vg0/data), a mount point for the filesystem (/data), the filesystem type (ext4 or xfs) and the mount parameters (defaults,auto,noatime,noexec,nodiratime followed by the dump and pass fields 0 0; refer to the mount man page for more information on these parameters):

Now mount all of the storage devices. By adding the entry to /etc/fstab, we've shortened the call to mount because it will look in that file for the extended command parameters.
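Putting the description above into commands, a sketch for the data volume (repeat for log and journal; swap ext4 for xfs if preferred):

```shell
# Create the filesystem, mount point and fstab entry for the data volume
sudo mke2fs -t ext4 -F /dev/vg0/data
sudo mkdir -p /data
echo '/dev/vg0/data /data ext4 defaults,auto,noatime,noexec 0 0' | \
    sudo tee -a /etc/fstab

# With the fstab entry in place, mount needs only the mount point
sudo mount /data
```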

With the devices mounted we issue one last call to set the MongoDB journal files to be written to our new journal device, via a symbolic link to the new device:
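The elided command is presumably a single symlink from the data directory to the journal mount:

```shell
# mongod writes journal files under <dbpath>/journal; point that at our journal volume
sudo ln -s /journal /data/journal
```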

Install and Configure MongoDB¶


If you created a MongoDB instance via the AWS Marketplace, skip ahead to Starting MongoDB below.

Now that the storage has been configured, we need to install and configure MongoDB to use the storage we've set up, then set it to start up on boot automatically. First, add an entry to the local yum repository for MongoDB:

Next, install MongoDB and the sysstat diagnostic tools:
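A sketch of the elided install command; note that package names vary by MongoDB release (mongodb-org is the current naming), so check the repo you configured:

```shell
# Install the MongoDB packages and the sysstat diagnostic tools (iostat, sar)
sudo yum install -y mongodb-org sysstat
```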

Set the storage items (data, log, journal) to be owned by the user (mongod) and group (mongod) that MongoDB will be starting under:
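A sketch of the elided ownership change:

```shell
# Give the mongod user/group ownership of all three storage locations
sudo chown -R mongod:mongod /data /log /journal
```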

Now edit the MongoDB configuration file and update the following parameters:
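The parameters in question are presumably the data and log paths. In the ini-style configuration file used by MongoDB releases of this guide's era (newer releases use YAML keys such as storage.dbPath and systemLog.path), that would look like:

```
# /etc/mongod.conf (ini style; an assumption based on the storage layout above)
dbpath = /data
logpath = /log/mongod.log
logappend = true
```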

Starting MongoDB¶

Set the MongoDB service to start at boot and activate it:
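On the Amazon Linux of this guide's era, the elided commands were presumably the SysV-style pair:

```shell
# Enable mongod at boot, then start it now
sudo chkconfig mongod on
sudo service mongod start
```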

When starting for the first time, it will take a couple of minutes for MongoDB to start, set up its storage and become available. Once it is, you should be able to connect to it from within your instance:

Just to confirm the system is working correctly, try creating a test database and test collection, and save a document:
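The elided mongo-shell steps are presumably along these lines (the database and collection names are placeholders):

```
// Create/switch to a test database, insert a document, then read it back
> use testdb
> db.testcollection.insert({ "ok": 1 })
> db.testcollection.find()
```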

Now that we've got a single node up and running with EBS-backed RAID storage, let's move on and create a multi-node replica set.

Deploy a Multi-node Replica Set¶

Replica Set Background¶

Replica sets are a form of asynchronous master/slave replication, adding automatic failover and automatic recovery of member nodes. A replica set consists of two or more nodes that are copies of each other (i.e.: replicas). See Replica Set Fundamentals for more information.

Create and Configure Instances¶

For this guide, we'll set up a three node replica set. To set up each node, use the instructions from Deploy a Single Node above. Once that's completed, we'll update the configurations for each node and get the replica set started.


First we'll need to edit the MongoDB configuration and update the replSet parameter:
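In the ini-style configuration, with a placeholder set name, the change is presumably:

```
# /etc/mongod.conf -- the set name below is a placeholder; use the same
# value on every member of the replica set
replSet = exampleReplSet
```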

Save the configuration file and restart mongod:

Configure Replica Set¶

Once MongoDB has started and is running on each node, we'll need to connect to the desired primary node, initiate the replica set and add the other nodes. First connect to the desired primary via SSH and then start mongo to initiate the set:

Next, add the other nodes to the replica set:

The 3 node replica set is now configured. You can confirm the setup by checking the health of the replica set:
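The elided mongo-shell commands for initiating, growing and checking the set are presumably along these lines (hostnames are placeholders):

```
// From the mongo shell on the desired primary
> rs.initiate()

// Then add the remaining members (placeholder hostnames)
> rs.add("node2.example.net:27017")
> rs.add("node3.example.net:27017")

// And confirm the health of the set
> rs.status()
```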

What we've completed here is a simple replica set; there are additional configurations out there that may make more sense for your deployment; refer to the MongoDB documentation for more information. If you intend to use your replica set to help scale read capacity, you'll also need to update your application's code and add the appropriate slaveOk=true where necessary so that read results can be returned from additional nodes more quickly.

Deploy a Sharded Cluster¶

MongoDB scales horizontally via a partitioned data approach known as sharding. MongoDB provides the ability to automatically balance and distribute data across multiple partitions to support write scalability. For more information, refer to the sharding documentation.

Simple Sharding Architecture¶

To build our simple sharded configuration, we'll be building upon the replica set steps we just worked on. To get started you'll need to create two additional replica set configurations, just the same as above. When configuring each server instance we'll set the shardsvr parameter inside the mongod configuration file. Next we'll take one node from each replica set and set it to run as a config server as well. The config server maintains metadata about the sharded data storage cluster. Finally, we'll add instances for the mongos router, which handles routing requests from your app to the correct shard. The recommended approach is to run this component on your application servers. The following image shows a recommended topology for use with sharding:


Create three nodes for each replica set. Use the following instructions:

  • Create and Configure Instances. Before saving /etc/mongod.conf, add this parameter:

    Save the configuration file and restart mongod:
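The parameter referenced in the step above is presumably the shard-server role; in the ini-style configuration that is:

```
# /etc/mongod.conf -- mark this mongod as a shard server
shardsvr = true
```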

Once /etc/mongod.conf has been updated, initiate the replica set and add the members as described in Add Members to a Replica Set. Once that's complete, choose one instance from each replica set and start an additional mongod process on those instances, this time as the config server component:
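A sketch of starting the additional config-server process; 27019 is the conventional config server port, and the paths are placeholders:

```shell
# Start a separate mongod in the config-server role on the chosen instance
sudo mongod --configsvr --port 27019 --dbpath /data/configdb --fork \
    --logpath /log/mongod-configsvr.log
```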


Now that we've got N config servers running (where N is the number of running replica sets) we can set up the request router, mongos. This process typically runs on your application servers and handles routing database requests from your app to the correct database shard. Assuming you already have your application configured and deployed, use ssh to connect to each instance and install MongoDB using the steps from Install and Configure MongoDB.

Before we continue, it is important to consider the role DNS plays in a sharded cluster setup. Generally we recommend using DNS hostnames for configuring replica sets, which Amazon handles appropriately, as opposed to using specific IP addresses. Essentially, AWS knows the mapping between public and private addresses and hostnames and manages inter-region domain name resolution. Therefore, by using the public DNS name for our servers we can ensure that whether our servers are in a single region or across multiple regions, AWS will correctly route our requests. When it comes to setting up sharding, we recommend an additional step of using DNS aliases for the instances that will be acting as config servers. The routers must know the hostnames of the config servers, so by using DNS aliases we gain additional flexibility if config servers ever need to change. All it takes is pointing the DNS alias to another instance and no additional update to the router configuration is needed. For more information on this topic, refer to docs on changing config servers.


Once the DNS settings have taken effect, we can proceed with the mongos setup. Go back to the command line on each of the instances you'll be using for mongos, start the service, and point it to the instances running the config servers using their DNS aliases, along with the config server port 27019:
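The invocation is presumably of this form (the DNS aliases are placeholders; older mongos releases accepted a plain comma-separated list, while newer ones expect a replica-set-prefixed form):

```shell
# Point mongos at all config servers via their DNS aliases and port 27019
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019
```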


With the mongos routers running, we can now complete the setup for the sharding architecture. The last step is to add the previously created replica sets to the overall system. Choose one of the instances that is running mongos, start the mongo client using the hostname and port (27017) and connect to the admin database. You'll need to have the name of each replica set (ex: replicaSetName1) and the hostnames of each member of the set (ex: replicaSetHost1).

The addShard command will need to be repeated for each replica set that is part of the sharded setup:
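From the mongo shell connected to mongos, the command is presumably of this form, using the set name and a member hostname from above:

```
// Repeat once per replica set, using "<setName>/<memberHost>:<port>"
> sh.addShard("replicaSetName1/replicaSetHost1:27017")
```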

Once these steps have been completed, you'll have a simple sharded configuration. The architecture we used includes 3 database shards for write scalability and three replicas within each shard for read scalability and failover. This type of setup deployed across multiple regions (ex: one node from each replica set located in us-west-1) would also provide some degree of disaster recovery as well.


In order to utilize this newly created configuration, you'll need to specify which databases and which collections are to be sharded.
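Sharding is then switched on per database and per collection; a sketch with placeholder names and a shard key on _id (choose a shard key appropriate to your workload):

```
// Enable sharding for a database, then shard one of its collections
> sh.enableSharding("mydb")
> sh.shardCollection("mydb.people", { "_id": 1 })
```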

For more information, see Deploy a Sharded Cluster.

Backup and Restore¶

There are several ways to back up your data when using AWS; refer to the EC2 Backup and Restore guide for more information.