1. What is HDFS, and what is its primary purpose?
 HDFS is the Hadoop Distributed File System, designed to store and
manage large datasets across a distributed cluster.
2. What are the key components of HDFS?
 Key components include NameNode, DataNode, Secondary NameNode,
and the Hadoop Distributed File System Shell.
3. Explain the role of the NameNode in HDFS.
 The NameNode manages the namespace and metadata of the file system,
such as the directory structure, file-to-block mapping, and permissions,
but not the data itself.
4. What is a DataNode in HDFS, and what is its role?
 DataNodes store the actual data blocks and are responsible for serving
read and write requests from clients.
5. How does HDFS ensure data reliability and fault tolerance?
 HDFS replicates data across multiple DataNodes, allowing recovery in case
of DataNode failures.
6. What is block size in HDFS, and why is it typically large (e.g., 128 MB or 256
MB)?
 Block size is the size of the data chunks a file is split into in HDFS. A large
block size reduces the NameNode overhead of tracking many blocks and
improves sequential read throughput.
7. Explain the architecture of the Secondary NameNode and its role.
 The Secondary NameNode periodically merges the edit log with the fsimage,
creating a new checkpoint for the NameNode. It does not serve as a
backup NameNode.
8. What is a checkpoint in HDFS?
 A checkpoint is a snapshot of the file system's metadata, created by
merging the edits in the edit log with the existing fsimage.
9. How do you configure replication in HDFS, and why is it important?
 Replication is configured through the dfs.replication setting. It's crucial for
data reliability and fault tolerance.
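 For example, the cluster-wide default is set with dfs.replication in
hdfs-site.xml, and an individual file can be written with a different factor
using the generic -D option (the file and path below are illustrative):
   hadoop fs -D dfs.replication=2 -put sales.csv /data/sales.csv   # write this file with 2 replicas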
10. What is the purpose of the Hadoop Distributed File System Shell?
 The HDFS Shell provides a command-line interface to interact with HDFS,
allowing users to perform various operations.
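 A few common shell operations (paths are illustrative):
   hadoop fs -ls /data                  # list a directory
   hadoop fs -mkdir -p /data/raw        # create directories
   hadoop fs -cat /data/raw/part-00000  # print a file's contents
   hadoop fs -rm -r /data/tmp           # delete a directory recursively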
11. What is the command to copy a file from the local file system to HDFS?
 You can use the hadoop fs -copyFromLocal command.
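 For example (local and HDFS paths are illustrative):
   hadoop fs -copyFromLocal ./sales.csv /data/sales.csv
   hadoop fs -put ./sales.csv /data/sales.csv   # -put is the more general equivalent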
12. How does HDFS handle data consistency and data locality?
 HDFS provides consistency through its write-once-read-many model and
checksummed replicas. Data locality is achieved by rack-aware replica
placement and by scheduling computation on the nodes that already hold
the data blocks.
13. What is the default replication factor in HDFS, and how can you change it
for a specific file or directory?
 The default replication factor is 3. You can change it by using the -setrep
command or specifying it when writing data.
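 For example, to change the replication factor of an existing path (the path is
illustrative):
   hdfs dfs -setrep -w 2 /data/sales.csv   # -w waits until re-replication completes
   hdfs dfs -setrep -R 2 /data             # -R applies recursively to a directory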
14. Explain the process of data replication and block placement in HDFS.
 When a file is written to HDFS, data is divided into blocks and replicated to
multiple DataNodes based on a replication factor. The blocks are placed
on different racks for fault tolerance.
15. What is Rack Awareness in HDFS, and why is it important?
 Rack Awareness is HDFS's knowledge of network topology, which helps it
place replicas on different racks for fault tolerance and data locality.
16. How does HDFS handle small files, and why are small files a challenge for
HDFS?
 HDFS is inefficient for many small files: each file and block consumes
NameNode memory for metadata, so large numbers of small files can
exhaust the NameNode and slow down processing.
17. What is the purpose of the fsimage and edits log files in HDFS?
 The fsimage file stores a snapshot of the filesystem namespace, while the
edits log contains all changes made since the last checkpoint.
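 Both files can be inspected offline with the image and edits viewers (the
input filenames below are illustrative):
   hdfs oiv -p XML -i fsimage_0000000000000042 -o fsimage.xml                  # offline image viewer
   hdfs oev -p xml -i edits_0000000000000001-0000000000000042 -o edits.xml     # offline edits viewer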
18. Explain the process of data recovery after a DataNode failure in HDFS.
 HDFS replicates data across multiple DataNodes. When a DataNode fails
(detected through missed heartbeats), the NameNode re-replicates its blocks
from the remaining replicas, and the system keeps serving data.
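 Replication health can be checked with (illustrative usage):
   hdfs dfsadmin -report   # live/dead DataNodes and capacity
   hdfs fsck / -blocks     # reports under-replicated or missing blocks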
19. What is the purpose of the Balancer tool in HDFS, and when would you use
it?
 The Balancer tool is used to balance the distribution of data across
DataNodes in an HDFS cluster. You would use it when the cluster's data
distribution becomes uneven.
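 A typical (illustrative) invocation, which moves blocks until each DataNode's
utilization is within 10% of the cluster average:
   hdfs balancer -threshold 10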
20. What is the default block size in HDFS, and how can you change it?
 The default block size is 128 MB. You can change it by modifying the
dfs.blocksize configuration.
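 The property is normally set in hdfs-site.xml; it can also be overridden for a
single write with the generic -D option (file and path are illustrative):
   hadoop fs -D dfs.blocksize=268435456 -put big.dat /data/big.dat   # 256 MB blocks for this file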
21. How does HDFS ensure data integrity?
 HDFS uses checksums for data integrity. Data is verified during reads and
is re-replicated if errors are detected.
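 The stored checksum of a file can be queried with (the path is illustrative):
   hadoop fs -checksum /data/sales.csv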
22. What is the purpose of the hadoop fsck command in HDFS?
 The hadoop fsck command is used to check the health and status of the
HDFS file system, reporting missing or corrupt blocks.
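 Typical (illustrative) usage:
   hdfs fsck /data -files -blocks -locations   # show files, their blocks, and block locations
   hdfs fsck / -list-corruptfileblocks         # list files with corrupt blocks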
23. Explain the process of data deletion in HDFS and how data is marked for
deletion.
 When a client deletes a file, it is moved to the user's .Trash directory if
trash is enabled; once the trash interval expires (or if trash is disabled),
the NameNode removes the file from the namespace and marks its blocks for
deletion, and the DataNodes physically remove them.
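 Illustrative commands (trash behaviour depends on fs.trash.interval being set
to a non-zero value):
   hadoop fs -rm /data/old.csv              # moves the file to the user's .Trash when trash is enabled
   hadoop fs -rm -skipTrash /data/old.csv   # bypasses trash and deletes immediately
   hadoop fs -expunge                       # empties the trash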
24. How does HDFS handle data storage across different storage devices, such
as SSDs and HDDs?
 HDFS supports heterogeneous storage: DataNodes can expose different
storage types (e.g., SSD, DISK, ARCHIVE), and storage policies control
which types a path's blocks are placed on.
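 With storage types configured on the DataNodes, storage policies can be
assigned per path (the path and policy choice are illustrative):
   hdfs storagepolicies -setStoragePolicy -path /data/hot -policy ALL_SSD
   hdfs storagepolicies -getStoragePolicy -path /data/hot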
25. What is the significance of the dfs.permissions.enabled configuration in HDFS?
 dfs.permissions.enabled controls whether HDFS enforces file permissions.
When enabled, HDFS checks and enforces permissions for file access.
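 With permissions enforced, ownership and mode bits are managed much like a
POSIX filesystem (paths, users, and groups are illustrative):
   hadoop fs -chown alice:analytics /data/reports
   hadoop fs -chmod 750 /data/reports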
26. What are the advantages and disadvantages of HDFS as compared to
traditional file systems?
 Advantages include scalability and fault tolerance. Disadvantages include
poor performance with many small files and high latency for interactive,
random access.
27. Explain how HDFS ensures high data availability in case of a NameNode
failure.
 Hadoop 2.x introduced HDFS HA (High Availability), which runs an active and
a standby NameNode sharing the edit log (via JournalNodes or shared storage),
so the standby can take over quickly if the active NameNode fails.
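 In an HA cluster, the state of each NameNode can be checked or switched with
hdfs haadmin (the NameNode IDs nn1/nn2 are illustrative):
   hdfs haadmin -getServiceState nn1   # active or standby
   hdfs haadmin -failover nn1 nn2      # initiate a failover from nn1 to nn2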
28. What is the HDFS federation concept, and how does it differ from HDFS
High Availability (HA)?
 HDFS federation allows multiple namespaces in a single cluster, while
HDFS HA provides fault tolerance for the entire namespace.
29. How does HDFS handle block replication when a new DataNode joins the
cluster?
 When a new DataNode joins, it registers with the NameNode and starts
receiving new blocks as data is written; existing blocks are not moved
automatically, so the Balancer is typically run to even out the distribution.
30. Explain the process of expanding an HDFS cluster by adding new
DataNodes.
 Expanding the cluster involves installing Hadoop on the new hosts, pointing
them at the NameNode, adding them to the workers file, and starting the
DataNode daemons; they register with the NameNode automatically, so no
manual data migration is needed (though the Balancer is often run afterwards).
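 A rough sketch of the steps on a Hadoop 3.x cluster (hostnames are
illustrative):
   # on the new node, with Hadoop installed and core-site.xml/hdfs-site.xml pointing at the NameNode:
   hdfs --daemon start datanode
   # on an existing node, confirm the DataNode registered and optionally rebalance:
   hdfs dfsadmin -report
   hdfs balancer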
