SHARING LIMITED STORAGE WITH A HADOOP CLUSTER

Rishabh Jain
4 min read · Oct 25, 2020


Hey, computer folks! Here is something interesting. I am going to launch a Hadoop cluster in which the data nodes do not contribute their whole storage; instead, each one shares only a limited amount of storage with the name node of the Hadoop cluster.

Before starting the setup, if you are new to the world of Big Data and to creating a Hadoop cluster, you should first go through this blog, so that you can follow this concept of storage sharing.

Now let us start by creating a partition on the hard disk and then walk through the process of sharing that limited storage with the name node of the Hadoop cluster.

1. Make the Partition

To list the hard disks attached to the system, use the command fdisk -l or lsblk.

Then open the disk on which you want to create the partition with the command fdisk /dev/xvdh.

Now create the partition with the following steps:

n => new partition

p => primary partition

Press Enter once to accept the default partition number.

Press Enter a second time to accept the default first sector.

Now enter the size of the partition you want, in GiB or MiB (for example, +10G).

Finally, type w and press Enter to write the partition table to disk; fdisk saves the changes and exits. (Type q instead if you want to quit without saving.)
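The interactive dialogue above can also be scripted. A minimal sketch, assuming /dev/xvdh is a spare disk as in this walkthrough and 10 GiB is just an example size:

```shell
# Feed fdisk its answers: n = new partition, p = primary, two empty lines
# accept the default partition number and first sector, +10G sets the size,
# and w writes the table and exits. Run as root, and only against a disk
# you can safely modify.
printf 'n\np\n\n\n+10G\nw\n' | fdisk /dev/xvdh
```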

You can now view the new partition with the lsblk command.

After creating the partition, make sure the kernel has registered the new device node so that it works fine. You can do this with the command udevadm settle, which waits until the pending device events for /dev/xvdh1 have been processed.

2. Format the Partition

Now format the partition with an ext4 filesystem using the command mkfs.ext4 /dev/xvdh1.
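If you want to rehearse the format step without touching a real disk, mkfs.ext4 can also run against an ordinary image file. This is only a sandbox sketch; the file name and size here are made up:

```shell
# Create a 100 MiB sparse file and put an ext4 filesystem on it.
# -F lets mkfs.ext4 operate on a regular file instead of a block device,
# and -q suppresses the usual progress output.
truncate -s 100M /tmp/hdfs-disk.img
mkfs.ext4 -q -F /tmp/hdfs-disk.img
file /tmp/hdfs-disk.img    # should report an ext4 filesystem
```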

3. Mount the Partition

Make a directory to use as the mount point, for example mkdir /zz.

Now mount the partition you created on this directory.

You can do it with the command mount /dev/xvdh1 /zz.

You can confirm it with the lsblk command, which will now show /zz in the MOUNTPOINT column for /dev/xvdh1.
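Collected into one sketch (assuming the partition /dev/xvdh1 from the previous step and the example mount point /zz used throughout this post; mounting requires root):

```shell
# Create the mount point and mount the freshly formatted partition on it.
mkdir -p /zz
mount /dev/xvdh1 /zz
df -h /zz    # the reported size should match the partition you created
```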

Once the mounting is done, only one thing remains: set the datanode's storage directory in the hdfs-site.xml file to this folder.
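The relevant hdfs-site.xml entry looks roughly like this (a sketch; /zz is the mount point from above, and dfs.datanode.data.dir is the property that tells the datanode where to store HDFS blocks in Hadoop 2.x and later — older Hadoop 1.x setups call it dfs.data.dir):

```xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/zz</value>
  </property>
</configuration>
```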

Now save the file and start the datanode (for example, with hadoop-daemon.sh start datanode).

You can check that the datanode is running with the jps command.

You will now see that only the size of the partition you made is shared with the Hadoop cluster. You can verify this on the name node's dashboard, where the datanode's configured capacity matches the partition size rather than the whole disk.

Hope you enjoyed it and learnt a lot!

Written by Rishabh Jain

I am a tech enthusiast, researcher and an integration seeker. I love to explore and learn about the right technology and right concepts from its foundation.
