
IT 3501 FULL STACK WEB DEVELOPMENT

III YEAR / V SEMESTER (B.Tech. – INFORMATION TECHNOLOGY)

UNIT II
NODE JS

PREPARED BY,

Prof. M.KARTHIKEYAN, M.E., HoD / IT

VERIFIED BY

HOD PRINCIPAL CEO/CORRESPONDENT

DEPARTMENT OF INFORMATION TECHNOLOGY

SENGUNTHAR COLLEGE OF ENGINEERING, TIRUCHENGODE – 637 205.


UNIT II
NODE JS
• Basics of Node JS
• Installation
• Working with Node packages
• Using Node package manager
• Creating a simple Node.js application
• Using Events – Listeners
• Timers – Callbacks
• Handling Data I/O
• Implementing HTTP services in Node.js
LIST OF IMPORTANT QUESTIONS
UNIT II
NODE JS

PART - A
1. What is Node.js and what is its primary use?
2. What are the basic steps for installing Node.js on your system?
3. How can you check if Node.js is properly installed on your system?
4. How do you create a package.json file for a Node.js project?
5. What is npm and what is its role in Node.js development?
6. How can you install a specific version of a package using npm?
7. How do you update npm to the latest version?
8. How do you install a package globally using npm?
9. What is the purpose of the package-lock.json file in Node.js projects?
10. How do you handle events in Node.js using event listeners?
11. How do you use timers and callbacks in Node.js?
12. How can you handle data input/output in Node.js?
13. How do you implement HTTP services in Node.js?
14. What is the difference between npm install and npm install --save?
15. How can you uninstall a package using npm?
PART - B
1. Explain the concept of asynchronous programming in Node.js. Explain how
asynchronous operations are handled and why they are important in building scalable
and efficient applications.
2. Explain the role of streams in Node.js. Discuss the different types of streams and how
they can be used for efficient data processing.
3. Discuss the concept of middleware in Express.js. Explain how middleware functions are
used, the order of execution, and how they contribute to building robust web
applications.
4. Explain the concept of clustering in Node.js. Discuss how clustering can improve the
performance and scalability of Node.js applications, and provide an example of
implementing clustering.
5. Explain in detail about how to create a simple Node.js application.
6. Build a scalable and event-driven application architecture using events and listeners in
Node.js
7. Create an HTTP server that handles incoming requests and sends appropriate responses
in Node.js.
8. Explain the concept of timers in Node.js. Discuss the different types of timers and their
use cases.
9. What are callbacks in Node.js? Explain their role and importance in asynchronous
programming.

10. How do you handle data input/output in Node.js? Explain the concept of streams and
their advantages.

11. Explain the concept of buffers in Node.js and their role in handling binary data.

12. What are file streams in Node.js? Explain their advantages and how they can be used for
efficient file handling.
LIST OF IMPORTANT QUESTIONS
UNIT II
NODE JS

PART - A
1. What is Node.js and what is its primary use?
Node.js is an open-source, server-side JavaScript runtime environment that allows developers to
build scalable network applications. Its primary use is for creating web servers and handling
asynchronous I/O operations.
2. What are the basic steps for installing Node.js on your system?
Visit the official Node.js website and download the installer for your operating system.
Run the installer and follow the installation wizard.
Once installed, you can verify the installation by opening a terminal or command prompt and
typing node -v to check the Node.js version.
3. How can you check if Node.js is properly installed on your system?
Open a terminal or command prompt and type node -v. If Node.js is installed correctly, it will
display the installed version number.
4. How do you create a package.json file for a Node.js project?
Open the command prompt or terminal and navigate to the root directory of your project.
Run the command npm init and follow the prompts to generate the package.json file.
Alternatively, you can use npm init -y to generate the package.json file with default values.
5. What is npm and what is its role in Node.js development?
npm (Node Package Manager) is the default package manager for Node.js. It allows developers
to install, manage, and share reusable packages of code.
npm plays a crucial role in managing dependencies, executing scripts, and publishing Node.js
packages.
6. How can you install a specific version of a package using npm?
Use the command npm install <package>@<version> to install a specific version of a package.
Replace <package> with the package name and <version> with the desired version number.
7. How do you update npm to the latest version?
Run the command npm install -g npm to update npm to the latest version.
The -g flag installs npm globally, allowing you to use the latest version in all projects.
8. How do you install a package globally using npm?
Use the command npm install -g <package> to install a package globally.
Replace <package> with the name of the package you want to install.
9. What is the purpose of the package-lock.json file in Node.js projects?
The package-lock.json file is automatically generated by npm to lock the versions of installed
dependencies.
It ensures that all developers working on the project use the same versions of dependencies,
promoting consistency and reproducibility.
10. How do you handle events in Node.js using event listeners?
Create an instance of the EventEmitter class from the events module using require('events').
Register event listeners using the on method, specifying the event name and a callback function
to execute when the event is emitted.
Use the emit method to trigger the event and execute the associated callback functions.
11. How do you use timers and callbacks in Node.js?
Node.js provides the setTimeout and setInterval functions to schedule code execution at a
specified delay or interval.
These functions accept a callback function as an argument, which gets executed after the
specified delay or at each interval.
Callback functions allow asynchronous execution, enabling non-blocking operations in Node.js.
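For example, a minimal sketch of both timer functions (the delays are arbitrary choices):

// Run once after a one-second delay.
setTimeout(() => {
  console.log('One second has passed');
}, 1000);

// Run repeatedly every two seconds; stop after the first tick so the process can exit.
const interval = setInterval(() => {
  console.log('Tick');
  clearInterval(interval);
}, 2000);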
12. How can you handle data input/output in Node.js?
Node.js provides the fs module for handling file system operations.
You can use functions like fs.readFile to read data from files, and fs.writeFile to write data to
files.
The fs module also provides functions for working with directories, streams, and other file
system operations.
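For example, a small sketch using the fs module (the file name notes.txt is hypothetical):

const fs = require('fs');

// Write a file, then read it back asynchronously.
fs.writeFile('notes.txt', 'Hello from Node.js', (err) => {
  if (err) throw err;
  fs.readFile('notes.txt', 'utf8', (err, data) => {
    if (err) throw err;
    console.log(data); // prints "Hello from Node.js"
  });
});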
13. How do you implement HTTP services in Node.js?
Node.js has a built-in http module that allows you to create HTTP servers and make HTTP
requests.
To create an HTTP server, you can use the http.createServer method and define the
request/response handling logic.
You can also use third-party packages like Express.js to simplify the process of creating HTTP
services in Node.js.
14. What is the difference between npm install and npm install --save?
npm install <package> installs a package into the local node_modules directory.
npm install --save <package> additionally records the package as a dependency in the package.json file.
Note that since npm version 5, npm install saves installed packages to package.json by default, so the --save flag is only significant for older versions of npm.
15. How can you uninstall a package using npm?
Use the command npm uninstall <package> to remove a package from the current project.
Replace <package> with the name of the package you want to uninstall.
16. How do you handle errors in Node.js?
In Node.js, you can handle errors using try-catch blocks or by registering error event listeners.
For asynchronous operations, you can use error-first callbacks or handle errors using Promise
rejections.
Additionally, you can use middleware like errorHandler in Express.js to handle errors in web
applications.
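For example, a minimal sketch of both styles (the file name missing.txt is hypothetical):

const fs = require('fs');

// Error-first callback: the first argument carries the error, if any.
fs.readFile('missing.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('Callback error:', err.message);
    return;
  }
  console.log(data);
});

// try-catch with async/await, which turns Promise rejections into catchable errors.
async function readSafely() {
  try {
    const data = await fs.promises.readFile('missing.txt', 'utf8');
    console.log(data);
  } catch (err) {
    console.error('Caught rejection:', err.message);
  }
}
readSafely();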
17. What is the purpose of the process object in Node.js?
The process object is a global object in Node.js that provides information and control over the
current Node.js process.
It allows you to access command-line arguments, environment variables, and process-related
events.
The process object also provides methods for exiting the current process (process.exit), scheduling callbacks with process.nextTick, and handling uncaught exceptions.
18. How do you pass command-line arguments to a Node.js application?
Command-line arguments can be accessed in a Node.js application using the process.argv
array.
The first element (process.argv[0]) is the path to the Node.js executable, and the second
element (process.argv[1]) is the path to the script file.
Additional command-line arguments passed after the script file can be accessed using
process.argv[2], process.argv[3], and so on.
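For example, a small sketch (the script name args.js is an assumption):

// args.js: run with   node args.js hello world
// process.argv[0] is the Node.js executable and process.argv[1] is this script's path,
// so the user-supplied arguments start at index 2.
const args = process.argv.slice(2);
console.log('Arguments:', args); // prints Arguments: [ 'hello', 'world' ]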
19. How can you handle asynchronous code in Node.js?
Node.js provides several mechanisms for handling asynchronous code, such as callbacks,
Promises, and async/await.
Callbacks are the traditional way of handling asynchronous operations in Node.js.
Promises provide a more structured approach to handling asynchronous code and allow
chaining of operations.
Async/await is a syntax introduced in modern versions of Node.js that provides a more
synchronous-style code flow while still being asynchronous.
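The three styles can be compared with a short sketch (the file name data.txt is hypothetical):

const fs = require('fs');

// 1. Callback style
fs.readFile('data.txt', 'utf8', (err, data) => {
  if (!err) console.log('callback:', data.length);
});

// 2. Promise style
fs.promises.readFile('data.txt', 'utf8')
  .then((data) => console.log('promise:', data.length))
  .catch((err) => console.error(err));

// 3. async/await style
async function main() {
  const data = await fs.promises.readFile('data.txt', 'utf8');
  console.log('await:', data.length);
}
main().catch(console.error);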
20. What is the purpose of the .gitignore file in Node.js projects?
The .gitignore file is used to specify files and directories that should be ignored by the version
control system, Git.
It helps prevent sensitive or unnecessary files from being committed to the repository, reducing
the repository's size and maintaining better organization.

PART – B
1. Explain the concept of asynchronous programming in Node.js. Explain how
asynchronous operations are handled and why they are important in building scalable
and efficient applications.
Asynchronous programming in Node.js allows non-blocking I/O operations, which means that
the program can continue executing other tasks while waiting for I/O operations to complete.
Asynchronous operations are handled using callbacks, Promises, or async/await.
In Node.js, when an asynchronous operation is initiated, such as reading a file or making an
HTTP request, it doesn't block the execution of the program. Instead, it registers a callback
function to be executed when the operation completes. Meanwhile, the program can continue
executing other tasks or handle other requests, leading to efficient resource utilization.
When the asynchronous operation completes, the event loop in Node.js notifies the
corresponding callback function, which is then executed. This allows Node.js to handle multiple
concurrent operations efficiently, without blocking the execution thread and causing delays.
Asynchronous programming is crucial in building scalable and efficient applications for
several reasons:
Scalability: Asynchronous operations allow Node.js to handle a large number of concurrent
requests without consuming excessive resources. By leveraging non-blocking I/O, Node.js can
efficiently manage high concurrency, making it suitable for real-time applications, web servers,
and microservices architectures.
Efficiency: Asynchronous programming enables efficient resource utilization by allowing the
program to perform other tasks while waiting for I/O operations to complete. This leads to faster
response times and better overall performance.
Responsiveness: By avoiding blocking operations, Node.js applications remain responsive
even when dealing with time-consuming operations. This ensures that the application can
handle other requests or perform additional tasks while waiting for I/O operations to finish.
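A small sketch of this non-blocking behaviour (the file name large-file.txt is hypothetical):

const fs = require('fs');

console.log('Before the read');

// The read is started, a callback is registered, and execution continues immediately.
fs.readFile('large-file.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('Read finished, length:', data.length);
});

console.log('After the read call; the program keeps running while the file is read');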

2. Explain the role of streams in Node.js. Discuss the different types of streams and how
they can be used for efficient data processing.
Streams in Node.js are a fundamental concept for handling data in chunks, enabling efficient
processing of large volumes of data. Streams provide an abstraction over data sources or
destinations, allowing data to be read from or written to incrementally.
Node.js provides four types of streams:
Readable streams: Readable streams are used for reading data. They produce a sequence of
data chunks that can be consumed by a consumer. Examples of readable streams include
reading from files, network sockets, or HTTP requests.
Writable streams: Writable streams are used for writing data. They accept chunks of data and
write them to a specified destination. Examples of writable streams include writing to files,
network sockets, or HTTP responses.
Duplex streams: Duplex streams represent streams that are both readable and writable. They
can be used for bidirectional communication, where data can be both read and written.
Examples of duplex streams include network sockets or WebSocket connections.
Transform streams: Transform streams are a special type of duplex streams that allow
modification of data during processing. They can be used to transform data as it passes through
the stream. Transform streams are commonly used for tasks such as compression, encryption,
or data manipulation.
Streams provide several benefits for efficient data processing in Node.js:
Memory efficiency: Streams allow data to be processed in chunks, reducing memory
requirements compared to loading the entire dataset into memory. This is especially useful when
working with large files or network data.
Responsiveness: Streams process data incrementally, which means that processing can start
as soon as the first chunk is available. This leads to faster response times and improves the
overall responsiveness of the application.
Piping and chaining: Streams can be easily piped or chained together, allowing data to flow
seamlessly from one stream to another. This enables a modular and reusable approach to data
processing, enhancing code maintainability and reducing complexity.
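A minimal sketch of piping a readable stream through a transform stream into a writable stream (the file names are hypothetical):

const fs = require('fs');
const { Transform } = require('stream');

// Transform stream that upper-cases each chunk as it passes through.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

fs.createReadStream('input.txt')
  .pipe(upperCase)
  .pipe(fs.createWriteStream('output.txt'))
  .on('finish', () => console.log('Done writing output.txt'));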

3. Discuss the concept of middleware in Express.js. Explain how middleware functions are
used, the order of execution, and how they contribute to building robust web
applications.

Figure 1: Middleware

Middleware functions in Express.js are functions that have access to the request and
response objects, as well as the next middleware function in the application's request-response
cycle. Middleware functions can perform various tasks such as logging, authentication, data
validation, error handling, and more.
Middleware functions are added to the application's request-response cycle using the
app.use() or app.METHOD() functions, where METHOD is the HTTP method such as GET,
POST, etc. The order in which middleware functions are defined determines the order of
execution.
When a request is made to an Express.js application, the middleware functions are
executed in the order they are defined. Each middleware function has access to the request and
response objects and can perform operations on them or modify them. The middleware
functions can also invoke the next() function to pass control to the next middleware function in
the chain.
Middleware functions contribute to building robust web applications in several ways:
Separation of concerns: Middleware functions allow developers to separate concerns and
modularize application logic. Each middleware function can handle a specific aspect of the
request-response cycle, such as authentication or error handling, promoting code organization
and reusability.
Code reuse: Middleware functions can be reused across different routes or even different
applications. This reduces code duplication and enhances code maintainability.
Error handling: Middleware functions can handle errors that occur during the request-response
cycle. They can catch exceptions, log errors, and send appropriate error responses to the client.
This ensures that errors are properly managed and do not impact the overall application flow.
Order-based execution: Middleware functions are executed in a specific order, allowing
developers to control the flow of operations. This is particularly useful when certain operations
need to be executed before or after others.
Flexibility: Middleware functions offer flexibility by allowing developers to add or remove them as
per the application's requirements. This makes it easy to extend or modify application behavior
without modifying the core logic.
Overall, middleware functions in Express.js contribute to building robust web applications by
promoting modularization, code reuse, error handling, and providing control over the request-
response cycle.
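A minimal Express.js sketch illustrating the execution order (this assumes Express has been installed with npm install express):

const express = require('express');
const app = express();

// Logging middleware: runs for every request, then passes control on with next().
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// Route handler: runs after the logger above.
app.get('/', (req, res) => {
  res.send('Hello from Express');
});

// Error-handling middleware takes four arguments and is defined last.
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).send('Something went wrong');
});

app.listen(3000, () => console.log('Listening on port 3000'));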

4. Explain the concept of clustering in Node.js. Discuss how clustering can improve the
performance and scalability of Node.js applications, and provide an example of
implementing clustering.
Clustering in Node.js is the process of creating multiple worker processes that share the same
server port, enabling them to handle incoming requests concurrently. Clustering improves the
performance and scalability of Node.js applications by utilizing multiple CPU cores and
distributing the workload.
When a Node.js application is run in a clustered mode, a master process is created along with
several worker processes. The master process manages the worker processes and listens for
incoming connections. Incoming requests are distributed among the worker processes using a
round-robin algorithm or other load balancing strategies.
Clustering improves performance and scalability in several ways:
Utilizing multiple CPU cores: By creating multiple worker processes, clustering allows the
application to utilize all available CPU cores. Each worker process can handle requests
independently, resulting in efficient utilization of hardware resources.
Load distribution: Clustering distributes the incoming requests across multiple worker
processes, preventing a single process from becoming a bottleneck. This enables the
application to handle a higher volume of requests concurrently.
Fault tolerance: In case of a worker process failure, the master process can automatically
restart the failed worker or create a new one, ensuring high availability and fault tolerance.
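A minimal sketch of clustering with the built-in cluster module (the port number 3000 is an arbitrary choice; newer Node.js versions expose cluster.isPrimary as an alias for cluster.isMaster):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Master process: fork one worker per CPU core and replace workers that die.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died, starting a replacement`);
    cluster.fork();
  });
} else {
  // Worker processes: each one shares the same server port.
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}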

5. Explain in detail about how to create a simple Node.js application.


1. Set up the project structure:
   • Create a new directory for your project.
   • Navigate to the project directory using the command line.
   • Run the command npm init to initialize a new Node.js project and generate a package.json file. Follow the prompts to provide information about your project.
2. Install required packages:
   • Identify the packages you need for your application.
   • Run the command npm install <package> to install the required packages. Replace <package> with the name of the package you want to install. Repeat this step for each package.
3. Create the main application file:
   • Create a new JavaScript file in the project directory (e.g., app.js).
   • Open app.js in a text editor or an integrated development environment (IDE).
4. Build the basic server:
   • Import the required modules at the top of app.js. For example, const http = require('http');
   • Create an instance of the HTTP server using http.createServer().
   • Specify a callback function that will be executed for each incoming request. This function should accept two parameters: request and response.
   • In the callback function, set the response header with response.setHeader('Content-Type', 'text/plain').
   • Write the response using response.write('Hello, World!').
   • End the response using response.end().
   • Start the server with server.listen(), passing the port you want to use (a complete sketch follows these steps).
   • Save the changes to app.js.
5. Run the application:
   • Open the command line and navigate to the project directory.
   • Run the command node app.js to start the Node.js application.
   • The server will start listening on the port passed to server.listen() (3000 in this example).
   • Open a web browser and visit http://localhost:3000 to see the "Hello, World!" message.
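Putting the steps together, a complete app.js might look like this minimal sketch (port 3000 is an assumption):

const http = require('http');

// The callback runs once for every incoming request.
const server = http.createServer((request, response) => {
  response.setHeader('Content-Type', 'text/plain');
  response.write('Hello, World!');
  response.end();
});

// Start accepting connections; without this call the server never listens.
const port = 3000;
server.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);
});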
We have created a simple Node.js application that serves a "Hello, World!" message. From
here, you can continue building upon this foundation by adding more features, routes, or
external dependencies as per your project requirements.
Remember to implement proper error handling and to follow best practices for security and code organization as your application grows.
6. Build a scalable and event-driven application architecture using events and listeners
in Node.js

Figure 2: Event Handling


Import the required modules:
Include the events module by adding the following line at the top of your JavaScript file:

const EventEmitter = require('events');


Create an event emitter instance:
Instantiate an object from the EventEmitter class:
const myEmitter = new EventEmitter();
Define event listeners:
Register event listeners using the on or addListener methods. Each listener is associated with a
specific event name:
myEmitter.on('myEvent', () => {
  console.log('Event was triggered!');
});
Emit events:
Trigger an event using the emit method. Provide the event name as the first argument:
myEmitter.emit('myEvent');
Handling events with data:
You can pass data to event listeners by adding arguments when emitting the event:
myEmitter.on('myEventWithData', (data) => {
  console.log('Event data:', data);
});

myEmitter.emit('myEventWithData', 'Hello, World!');


Remove event listeners:
To remove a specific event listener, use the off or removeListener methods:
myEmitter.off('myEvent', listenerFunction);
To remove all event listeners for a specific event, use the removeAllListeners method:
myEmitter.removeAllListeners('myEvent');
Event emitters and listeners provide a powerful mechanism for handling and responding to
events in an asynchronous and decoupled manner. They are particularly useful in scenarios
where multiple components need to communicate or when implementing event-driven
architectures.
By leveraging events and listeners, you can build modular and scalable applications that
respond to various events and triggers effectively. Remember to properly manage event
listeners to prevent memory leaks and remove unnecessary listeners when they are no longer
needed.

7. Create an HTTP server that handles incoming requests and sends appropriate
responses in Node.js.
Implementing HTTP services in Node.js involves creating an HTTP server that handles
incoming requests and sends appropriate responses. Here's an example of implementing
HTTP services in Node.js:
Import the required modules:
Include the http module by adding the following line at the top of your JavaScript file:
const http = require('http');
Create an HTTP server:
Use the createServer method from the http module to create an instance of the HTTP server:

const server = http.createServer((request, response) => {
  // Request handling logic
});
Handle incoming requests:
Inside the callback function of the createServer method, implement the logic to handle incoming
requests.
Access request information such as URL, method, headers, and payload data.
Implement the desired functionality based on the request.
Set the appropriate response headers and content.
Send the response using the response.end() method.
const server = http.createServer((request, response) => {
  // Example: echoing back the request body in the response
  let body = '';
  request.on('data', (chunk) => {
    body += chunk;
  });
  request.on('end', () => {
    response.setHeader('Content-Type', 'text/plain');
    response.statusCode = 200;
    response.end(body);
  });
});

Start the server:
Start the server to listen on a specific port using the listen method:
const port = 3000;
server.listen(port, () => {
  console.log(`Server is listening on port ${port}`);
});
Test the HTTP service:
Open a web browser or use a tool like cURL or Postman to send requests to the server.
Access the server using the specified port (e.g., http://localhost:3000) and observe the
responses.
This example demonstrates a basic HTTP service that echoes back the request body in the
response. However, you can implement more complex logic, such as routing, authentication,
and database interactions, based on your application's requirements.
Remember to implement proper error handling and follow security best practices when building HTTP services in Node.js.

8. Explain the concept of timers in Node.js. Discuss the different types of timers and
their use cases.
Timers in Node.js allow you to schedule code execution at a specified delay or interval.
There are three types of timers:
setTimeout: This timer executes a callback function once after the specified delay. It is
commonly used for delaying the execution of code or performing one-time tasks.
setInterval: This timer executes a callback function repeatedly at a specified interval. It is useful
for implementing recurring tasks or periodic updates.
setImmediate: This timer executes a callback function in the check phase, once the current event loop iteration has finished processing its I/O callbacks. It allows you to schedule a callback to run asynchronously after pending I/O events but before timers scheduled for the next iteration.
Use cases for timers include implementing delays, scheduling periodic tasks, executing
code asynchronously, and coordinating actions within the event loop.
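A short sketch comparing the three timers (the delays are arbitrary):

// setTimeout: runs once after roughly 500 ms.
setTimeout(() => console.log('timeout fired'), 500);

// setInterval: runs every second until it is cleared.
const ticker = setInterval(() => console.log('interval tick'), 1000);
setTimeout(() => clearInterval(ticker), 3500); // stop after about three ticks

// setImmediate: runs once the current event loop iteration finishes its I/O processing.
setImmediate(() => console.log('immediate fired'));

console.log('synchronous code runs first');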

9. What are callbacks in Node.js? Explain their role and importance in asynchronous
programming.

Callbacks in Node.js are functions that are passed as arguments to other functions and are
called when a certain operation or task is complete. They play a crucial role in handling
asynchronous programming in Node.js.

The importance of callbacks lies in their ability to ensure that operations do not block the
execution of the program. Instead of waiting for an operation to complete, Node.js proceeds
to execute other tasks and invokes the callback when the operation finishes. This non-
blocking behavior allows for efficient handling of I/O operations and improves the overall
performance and responsiveness of Node.js applications.

Callbacks enable developers to write asynchronous code that follows a sequential flow,
making it easier to handle dependencies, perform error handling, and control the order of
execution. They are fundamental to many Node.js APIs and are extensively used in event
handling, file system operations, networking, and database interactions.
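A small sketch of the error-first callback convention that most Node.js APIs follow (the file name config.json and the helper loadConfig are hypothetical):

const fs = require('fs');

function loadConfig(path, callback) {
  fs.readFile(path, 'utf8', (err, data) => {
    if (err) {
      return callback(err); // propagate the error to the caller
    }
    let config;
    try {
      config = JSON.parse(data);
    } catch (parseErr) {
      return callback(parseErr); // invalid JSON is also reported as an error
    }
    callback(null, config); // a null first argument signals success
  });
}

loadConfig('config.json', (err, config) => {
  if (err) return console.error('Failed to load config:', err.message);
  console.log('Loaded config:', config);
});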

10. How do you handle data input/output in Node.js? Explain the concept of streams and
their advantages.

Figure 3: Advantages of Node.js Streams

In Node.js, data input/output is handled using streams. Streams provide an efficient way to
read from or write to data sources incrementally, without loading the entire dataset into
memory. This is particularly useful for processing large volumes of data or handling real-time
data transmission.

Streams can be categorized into four types: Readable, Writable, Duplex, and Transform
streams. Readable streams allow reading data, writable streams allow writing data, duplex
streams provide both reading and writing capabilities, and transform streams modify the data
during processing.

Advantages of using streams include:

Memory efficiency: Streams process data in chunks, reducing memory usage compared to
loading the entire dataset into memory. This is especially beneficial when working with large
files or network data.

Scalability: Streams handle data incrementally, allowing for the processing of data as it
arrives. This enables real-time data processing and reduces latency in applications.

Responsiveness: Streams enable faster response times as data processing can start as
soon as the first chunk of data is available. This improves the overall responsiveness of
applications, especially when dealing with time-consuming operations.

Piping and chaining: Streams can be easily piped or chained together, allowing data to flow
seamlessly from one stream to another. This simplifies the implementation of complex data
processing pipelines.
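For example, copying a large file with streams keeps memory usage low (the file names are hypothetical):

const fs = require('fs');
const { pipeline } = require('stream');

// pipeline connects the streams and forwards errors from any stage to one callback.
pipeline(
  fs.createReadStream('large-input.log'),
  fs.createWriteStream('copy.log'),
  (err) => {
    if (err) console.error('Copy failed:', err);
    else console.log('Copy complete');
  }
);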
11. Explain the concept of buffers in Node.js and their role in handling binary data.

Buffers in Node.js are objects used to represent sequences of binary data. They are
particularly useful when dealing with binary data in network protocols, file systems, or when
working with binary data formats.

Buffers can be created in several ways, such as by allocating a specific size or by converting
strings or other data types into binary format. Once created, buffers provide methods for
reading, writing, and manipulating binary data.

Buffers have a fixed size and can be considered as a fixed-length array of bytes. They allow
direct manipulation and efficient processing of binary data, making them suitable for
scenarios where direct access to binary data is required.

Buffers in Node.js are commonly used for tasks such as reading or writing files in binary
mode, encoding or decoding binary data formats, or sending binary data over network
sockets.
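A few small Buffer operations as a sketch:

// Create buffers from a string and by allocating a fixed number of bytes.
const fromString = Buffer.from('Hello, Node.js', 'utf8');
const allocated = Buffer.alloc(4); // four zero-filled bytes

// Inspect and manipulate the binary data directly.
console.log(fromString.length);          // number of bytes
console.log(fromString.toString('hex')); // hexadecimal representation
allocated.writeUInt32BE(2024, 0);        // write a 32-bit integer at offset 0
console.log(allocated);                  // <Buffer 00 00 07 e8>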

12. What are file streams in Node.js? Explain their advantages and how they can be used
for efficient file handling.

File streams in Node.js are a type of stream that provides a convenient way to handle file
input/output operations. They allow reading from or writing to files in chunks, rather than
loading the entire file into memory, providing efficient file handling capabilities.

Advantages of file streams include:

Memory efficiency: File streams process files in chunks, reducing memory consumption
compared to reading or writing the entire file at once. This is particularly advantageous when
working with large files or when memory resources are limited.

Performance: File streams enable efficient I/O operations by reading or writing data in
chunks, rather than blocking the execution while waiting for the entire file to be read or
written. This improves the performance and responsiveness of file operations.

Scalability: File streams can handle large files or multiple files concurrently, making them
suitable for scenarios that involve processing or transferring large amounts of data efficiently.

Convenience: File streams provide a consistent API for reading from or writing to files,
allowing for easy integration with other stream-based operations or data pipelines.
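A minimal sketch of file streams (the file names are hypothetical; gzip compression is just one illustration of chaining):

const fs = require('fs');
const zlib = require('zlib');

// Read the file in chunks, compress each chunk, and write the compressed result.
fs.createReadStream('report.csv')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('report.csv.gz'))
  .on('finish', () => console.log('File compressed'));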
