
Programme Name: B.Tech
Programme Code: 01-01
Course Code: TCS-702
Course Name: Advance Computer Architecture
Faculty Name: Ms. Sakshi Koli
Sem/Year: 7th/4th

Unit 1
1. What is Moore's Law and how has it influenced the field of computer architecture and
technology trends over time?
2. Explain the different classes of parallelism in computer architecture. How do they
relate to parallel architectures?
3. Discuss Instruction Set Architecture and why focusing on the ISA alone is often described as a
"myopic view" of computer architecture.
4. What are the significant trends in technology that have impacted computer
architecture? Include aspects like cost, processor speed, power consumption, and
fabrication yield in your answer.
5. Explain the concept of Performance Metrics in computer architecture. How are they
measured, and why are they essential in evaluating system performance?
6. Describe the Iron Law of Performance and its relevance in computer architecture.
How does it impact the design and evaluation of computing systems?
7. What is Amdahl's Law, and how does it address the limitations of parallel processing
in enhancing system performance? (A worked illustration of this formula, along with the
Iron Law from Question 6, follows this list.)
8. Discuss Lhadma's Law and its implications on computer architecture. How does it
affect the performance of computing devices, especially in the context of modern
technology trends?
9. How does Fabrication Yield impact the production of computer components? What
strategies are employed to improve yield rates in the manufacturing process?
10. Examine the role of Benchmark Standards in evaluating computer architecture
performance. Why are they crucial, and how do they help in comparing different
computing systems?
11. What are the recent trends in Computer Architecture regarding advancements in
Technology and Cost? Provide examples of innovations that have significantly
impacted the field.
12. Describe the relationship between Instruction Set Architecture and Parallel
Architectures. How do different instruction sets affect the efficiency of parallel
processing tasks?
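
For Questions 6 and 7 above, the underlying formulas can be stated compactly. The numbers in the example are illustrative values chosen here, not figures from the course material.

Iron Law of Performance:
\text{CPU time} = \text{Instruction Count} \times \text{CPI} \times \text{Clock Cycle Time}

Amdahl's Law, for a fraction f of execution enhanced by a factor s:
\text{Speedup}_{\text{overall}} = \dfrac{1}{(1 - f) + \dfrac{f}{s}}

Example: if f = 0.8 of a program can be parallelized with s = 4, the overall speedup is
1 / (0.2 + 0.8/4) = 1 / 0.4 = 2.5, well below the factor of 4 applied to the parallel part.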

Unit 2
13. Explain the basics of Memory Hierarchy in computer systems. Why is Memory
Hierarchy essential, and how does it improve overall system performance?
14. Discuss the concepts of Coherence and Locality Properties in Memory Hierarchy.
How do these properties impact the efficiency of memory access?
15. Describe different Cache Memory Organizations. What are the advantages and
disadvantages of direct-mapped, set-associative, and fully associative caches?
16. Examine Cache Performance metrics. What factors influence cache hit rates and miss
penalties? How do these metrics impact the overall system performance? (A worked
average-memory-access-time example follows this list.)
17. Explore Cache Optimization Techniques. How can cache performance be improved
through techniques such as prefetching, write policies, and cache line size
optimization?
18. Explain the concept of Virtual Memory. How does Virtual Memory differ from
Physical Memory, and what are the benefits of using virtual memory systems?
19. Discuss Techniques for Fast Address Translation in the context of Virtual Memory.
How are virtual addresses translated to physical addresses efficiently in modern
computer architectures?
20. Compare and contrast the advantages of different Cache Memory Organizations
(direct-mapped, set-associative, and fully associative) in terms of speed, complexity,
and efficiency.
21. How do Coherence and Locality Properties impact the design of cache memory
systems? Provide examples of how these properties influence cache behavior in
different computing scenarios.
22. Examine the challenges associated with Cache Optimization. What trade-offs do
designers face when optimizing cache performance, and how do these decisions
impact overall system efficiency?
23. Describe real-world applications where Virtual Memory is crucial. Discuss how
Virtual Memory systems enhance the performance and usability of these applications.
24. Investigate recent advancements in Memory Hierarchy Design. What cutting-edge
technologies or research efforts are being explored to improve memory access speed,
efficiency, and overall system performance?
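
For Question 16 above, cache performance is commonly summarized by the average memory access time (AMAT); the figures below are illustrative assumptions, not values tied to any particular processor.

\text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty}

Example: with a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty,
AMAT = 1 + 0.05 \times 100 = 6 cycles, so even a small change in miss rate has a large
effect on effective memory latency.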

Unit 3

1. Explain the concept of Pipelining in computer architecture. How does pipelining
improve the execution speed of instructions? (An ideal-speedup example follows this list.)
2. Discuss the basics of a RISC (Reduced Instruction Set Computer) ISA. What are
the fundamental characteristics that define a RISC instruction set?
3. Describe the classic five-stage pipeline for a RISC processor. What are the stages, and
how does each stage contribute to the overall instruction execution process?
4. Examine the performance issues in pipelining. What factors can cause performance
bottlenecks in a pipelined architecture, and how can these issues be mitigated?
5. Explain the concept of Pipeline Hazards. What are data hazards, control hazards, and
structural hazards? How do these hazards affect the smooth flow of instructions in a
pipeline?
6. Discuss strategies to resolve Data Hazards in a pipelined processor. What techniques,
such as forwarding and data speculation, are used to handle dependencies between
instructions?
7. Explore methods to address Control Hazards in a pipelined architecture. How are
branch instructions and other control flow changes managed to prevent pipeline stalls
and improve performance?
8. Describe Structural Hazards in the context of pipelining. How can these hazards
occur, and what techniques are employed to handle resource conflicts in a pipeline?
9. Compare the advantages and disadvantages of Pipelining in RISC and CISC
(Complex Instruction Set Computing) architectures. How does the RISC approach
enhance pipelining efficiency?
10. Examine the role of Instruction Reordering in optimizing pipelined performance. How
can reordering instructions improve pipeline utilization and overall system
throughput?
11. Discuss the concept of Speculative Execution in pipelining. How does speculative
execution work, and what benefits does it offer in terms of instruction throughput and
performance?
12. Investigate recent advancements in Pipelining technology. What innovations or
research efforts have been made to further improve the efficiency and effectiveness of
pipelined processors?
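
For Question 1 of this unit, the ideal benefit of pipelining can be quantified; the stage count and instruction count below are illustrative assumptions.

For n instructions on a k-stage pipeline with no stalls:
\text{Speedup} = \dfrac{n \times k}{k + n - 1}, which approaches k as n grows large.

Example: with k = 5 and n = 100, the speedup is 500 / 104 \approx 4.8, close to the ideal
factor of 5; hazards and stalls reduce this figure in practice.
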
Unit 4
13. Explain Branch Prediction in computer architecture. What role does it play in
improving the performance of modern processors?
14. Describe Direction Prediction. How do directional predictors anticipate the outcome
of branches, and what techniques are used to enhance their accuracy? (A minimal
two-bit counter sketch follows this list.)
15. Discuss Hierarchical Predictors. What are they, and how do they combine different
prediction methods to achieve more accurate branch predictions?
16. Explain If-Conversion as a technique for optimizing code. How is it related to branch
prediction, and what impact does it have on instruction execution efficiency?
17. Discuss Conditional Move instructions. How do these instructions reduce the need
for branches in certain situations? What advantages do they offer in terms of
performance and code efficiency?
18. Introduce Instruction Level Parallelism (ILP). What is ILP, and how does it enable the
execution of multiple instructions simultaneously to enhance processor performance?
19. Explain RAW (Read After Write) and WAW (Write After Write) dependencies in the
context of ILP. How do these dependencies affect instruction scheduling and
execution in a pipelined processor?
20. Discuss the concept of Duplicating Register Values in ILP. How is this technique used
to manage register dependencies and improve instruction parallelism in out-of-order
execution processors?
21. Explore the challenges in Branch Prediction. What factors make accurate branch
prediction difficult, and how do different prediction algorithms address these
challenges?
22. Describe the role of Compiler Optimizations in improving branch prediction accuracy
and instruction level parallelism. How can compilers optimize code to reduce branch
mispredictions and increase ILP?
23. Discuss the limitations of ILP. What factors can restrict the effective utilization of ILP,
and how do these limitations impact the overall performance of processors?
24. Investigate recent advancements in Branch Prediction and Instruction Level
Parallelism. What innovative techniques or algorithms have been developed to
address the evolving challenges in these areas?
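
A common building block behind the direction predictors in Question 14 is the 2-bit saturating counter. The sketch below is a minimal illustration in C of a pattern history table indexed by low-order PC bits; the table size, index function, and state encoding are simplifying assumptions for the example, not a description of any particular processor's predictor.

#include <stdint.h>
#include <stdbool.h>

/* Minimal 2-bit saturating counter predictor (illustrative sketch).
 * Counter states: 0 = strongly not-taken, 1 = weakly not-taken,
 *                 2 = weakly taken,       3 = strongly taken.      */
#define PHT_ENTRIES 1024                    /* assumed table size    */
static uint8_t pht[PHT_ENTRIES];            /* pattern history table */

static unsigned pht_index(uint64_t pc)
{
    /* Drop the byte offset within the instruction, mask to the table size. */
    return (unsigned)((pc >> 2) & (PHT_ENTRIES - 1));
}

/* Predict taken when the counter is in one of the two "taken" states. */
bool predict_taken(uint64_t pc)
{
    return pht[pht_index(pc)] >= 2;
}

/* After the branch resolves, nudge the counter toward the actual outcome,
 * saturating at 0 and 3 so a single anomaly does not flip the prediction. */
void predictor_update(uint64_t pc, bool taken)
{
    uint8_t *c = &pht[pht_index(pc)];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
}

The hysteresis of the two-bit counter is the point of the example: a loop branch that is taken many times and not taken once stays in a "taken" state, so the single exit misprediction does not cause a second misprediction the next time the loop is entered.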

Unit 5

1. Explain Centralized Shared-Memory Architecture. How does it function, and what are
the advantages and limitations of this architecture in parallel computing?
2. Discuss the Taxonomy of Parallel Architectures in Multiprocessor Systems. What are
the different categories of parallel architectures, and how do they differ in terms of
design and performance?
3. Describe Distributed Shared-Memory Architecture. How is shared memory
distributed across multiple processors, and what challenges are associated with
maintaining coherence in such systems?
4. Compare Message Passing and Shared Memory paradigms in parallel computing.
What are the fundamental differences between these approaches, and in what
scenarios is one approach preferred over the other? (A small shared-memory code
sketch follows this list.)
5. Explain the concept of Cache Coherence in the context of Multiprocessor Systems.
How is cache coherence maintained in shared-memory architectures, and what are the
techniques used to handle conflicts between cached data?
6. Discuss the advantages of Centralized Shared-Memory Architecture over other
parallel architectures. What specific applications benefit the most from this type of
architecture, and why?
7. Examine the challenges associated with Distributed Shared-Memory Architectures.
How do issues such as latency, bandwidth, and scalability impact the performance of
these systems?
8. Explore the different categories within the Taxonomy of Parallel Architectures.
Provide examples for each category and explain the unique characteristics that define
them.
9. Describe the role of Memory Consistency Models in Multiprocessor Systems. How do
these models ensure consistency and correctness when multiple processors access
shared data concurrently?
10. Compare the performance and scalability of Message Passing and Shared Memory
systems in the context of large-scale parallel computing. What factors influence their
efficiency in high-performance computing clusters?
11. Discuss the impact of Network Topology on Distributed Shared-Memory
Architectures. How does the choice of network topology affect communication
latency and overall system performance?
12. Examine the evolution of Multiprocessor Architectures over the years. How have
advancements in technology and research shaped the design and capabilities of
modern parallel computing systems?
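
As a concrete illustration of the shared-memory paradigm contrasted in Question 4, the sketch below has two POSIX threads increment one counter in a single address space: hardware cache coherence keeps each core's cached copy of the line consistent, while the mutex resolves the logical race between the read-modify-write sequences. The iteration count and the use of pthreads are arbitrary choices for the example.

#include <pthread.h>
#include <stdio.h>

/* Shared state: both threads see the same variable because they share one address space. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;                          /* unused */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* serialize the read-modify-write */
        counter++;                      /* coherence hardware keeps cached copies of this line consistent */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* prints 200000; no explicit messages were exchanged */
    return 0;
}

In a message-passing version of the same computation, each process would keep a private counter and the totals would be combined through explicit messages, which is the fundamental difference Question 4 asks about.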
