
Faculty of Computer Science and Information

Technology (FSKTM)

BIC10503
Computer Architecture

Assignment:
The Evolution Of Computer

Lecturer:
Dr. Mohd Hamdi Irwan Bin Hamzah

No.  Name                            Matrix No.
1    Muhammad Afiq Bin Rudy Azmir    CI220139
2    Habib Al Farih                  BI220041

Date of submission: 8 December 2023


Table of contents

1. Introduction
2. Evolution of graphic card technology
2.1 Early History
2.2 Architectural Advancements
2.3 Real-time Ray Tracing
2.4 AI Integration
2.5 Memory and Bandwidth Advancements
2.6 Power Efficiency and Thermal Management

3. Impact on Industries
3.1 Gaming Revolution
3.2 Design and Creativity
3.3 AI and Machine Learning
4. Future Trajectory and Challenges
5. Conclusion
6. References
Pioneering the Future: A Deep Dive into Cutting-Edge
Graphic Card Technology
Abstract: This extensive review delves into the most recent advancements
in graphic card technology, examining the evolution, architectural
breakthroughs, industry implications, and potential trajectories for future
development. Graphics card technology stands as a cornerstone in the
evolution of computing, pushing boundaries, fostering creativity, and
enabling extraordinary advances. Despite the challenges, the potential for
future innovation in graphics cards promises to redefine computing
capabilities, empowering industries and individuals in the digital age.

Keywords: Graphic cards, GPU architecture, ray tracing, AI integration, gaming, visual computing.

1. Introduction
This section introduces the importance of graphic card technology in modern
computing, its crucial position in several industries, and the scope of this
review. A discrete graphic card is configured with isolated video memory that
does not share the system's memory bandwidth, and on the PC its performance
advantage is evident. Discrete graphics cards are also rather common in gaming
laptops (Zhao, Yang, & Zheng, 2013). Many graphics processors are now
integrated into the computer's central processor. Manufacturers of graphics
cards are aiming for ever-greater performance; consider Nvidia's flagship
product, the GeForce RTX 4090.
GPU applications are designed to maximize throughput, allowing the execution of
as many tasks as feasible at the same time. This necessitates a large number of
cores that are as simple as possible, eliminating logic that improves the speed
of a single instruction stream in exchange for the flexibility to place more
cores on a chip. This development was made feasible by the introduction of
faster Graphics Processing Units (GPUs) together with built-in ray tracing
hardware accelerators (Lilja & Videfors, 2023).

2. Evolution of Graphic Card Technology
2.1 Early History
The early history of graphic card technology is an enthralling adventure that paved the
path for today's complex and powerful GPUs. Here's a more in-depth account of its
evolution:
1. The Evolution of Graphics Processing Units (GPUs): Dedicated graphics chips
were introduced in the 1980s, and the term GPU has been in use since at least
then. It was popularized by Nvidia in 1999, when it marketed the GeForce 256
as "the world's first GPU" (Peddie, 2023).
2. Display Standard Evolution: In 1984, IBM released the second-generation
Enhanced Graphics Adapter (EGA), which succeeded the CGA and outperformed its
capabilities. In 1987, the EGA standard was superseded by the VGA standard
(Wardyga, 2023).
3. Early 3D Rendering and 2D Acceleration: Graphics hardware first improved to
offer 2D acceleration. With the advent of 3D games and applications in the
mid-1990s, demand for 3D graphics skyrocketed. In 1994, 3dfx Interactive, a
groundbreaking 3D graphics startup, was founded; its name was originally
spelled "3Dfx" and was changed to "3dfx" (lowercase "d") in 1998-1999.
4. AGP and Advancements: The introduction of AGP in 1997 revolutionized
graphics card connectivity by providing a dedicated high-speed port for graphics
cards and improving data transfer rates between the GPU and system memory.
NVIDIA first appeared on the scene in the late 1990s, demonstrating significant
advances in 3D rendering performance.
5. Transform & Lighting and DirectX Integration: GPUs began incorporating
dedicated hardware for Transform & Lighting operations, offloading these tasks
from the CPU and significantly improving 3D rendering speeds and visual
quality. The DirectX API from Microsoft provides a common framework for
game developers to use GPU capabilities.
6. Evolution to Modern GPUs: In the mid-2000s, the evolution to unified shader
architectures, such as NVIDIA's GeForce 8 series and AMD's Radeon HD 2000
series, marked a fundamental shift in GPU design, allowing for more flexible and
efficient processing.

This early history paved the way for the explosive growth and advancements that
continue to define today's cutting-edge graphic card technology, tracing a path from
simple graphical displays to highly complex, versatile, and powerful GPUs.

2.2 Architectural Advancements


A GPU's hardware architecture differs from that of a CPU in several key ways,
with these differences inherited from the early application areas of GPUs
(real-time graphics), where the same instructions had to be executed over large
amounts of data (Soller, 2011). GPUs are designed to maximize throughput, i.e.,
the ability to perform as many tasks as possible at the same time. To
accomplish this, a large number of cores that are as simple as possible must be
used, removing logic that improves the performance of a single instruction
stream in exchange for the ability to put more cores on a chip.
A GPU device is made up of multiple Streaming Multiprocessors (SMs), which can
operate independently. A single SM contains tens of computing cores (for
example, 32) that operate in Single Instruction Multiple Thread (SIMT) mode,
meaning that all threads execute the same instructions in lockstep. Threads in
a warp combine their accesses to the GPGPU main memory: loads and stores are
organized into contiguous chunks of memory addresses, with the goal of feeding
all threads in a warp at once. To maximize performance and reduce the number of
transactions with main memory, accesses must be coalesced: the kth thread in a
warp should access the kth element of a memory chunk.
For programs that want to use GPU resources to accelerate general computing
applications, several programming languages are available, the most popular
being Compute Unified Device Architecture (CUDA) (Fitzek, 2020), which targets
Nvidia GPUs only. However, with the introduction of modern GPUs from competing
brands such as AMD and Intel, this restriction has encouraged the adoption of
vendor-neutral alternatives such as OpenMP 4, OpenCL, and OpenACC.
Graphic card manufacturing is an assembly-oriented and labor-intensive industry
that requires substantial manpower and many machines on the production line.
The graphic card manufacturing process is illustrated in Fig. 2.1 (Kuo &
Nursyahid, 2023).

Figure 2.1 Graphic card manufacturing process
2.3 Real-Time Ray Tracing
Although ray tracing, which traces the paths of light, is a simple and powerful
method of implementing reflections and shadows, its computational load has
traditionally been too high for these effects to run in real time on GPU
hardware. As a result, other methods of approximating reflections and shadows
have been developed.
Because of that computational load, Nvidia has improved ray tracing performance
by releasing graphics cards with dedicated ray tracing hardware (Frolov &
Voloboy, 2019). Furthermore, ray tracing extensions have been added to graphics
APIs such as Microsoft's DirectX and Khronos's Vulkan, bringing some support
for real-time ray tracing [14].
A ray-specialized Bounding Volume Hierarchy (BVH) structure enables efficient
ray tracing of complex scenes. DirectX, Microsoft's graphics API, provides an
interface suitable for real-time, GPU-accelerated ray tracing [14]. Hybrid ray
tracing is a technique that combines rasterization and ray tracing within a
single rendering pipeline; it is needed because the computational demands of
rendering everything with ray tracing can be extremely high [16].
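To give a sense of the work the dedicated hardware accelerates, the core per-ray operation — testing a ray against a sphere — can be sketched as follows. This is a standalone toy showing only the math, not the BVH-traversal or hardware path:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t to the nearest hit, or None if the ray misses.

    `direction` is assumed to be normalized (so the quadratic's a == 1).
    """
    # Vector from sphere center to ray origin.
    oc = tuple(origin[i] - center[i] for i in range(3))
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # hits behind the origin don't count

# A ray from the origin along +z toward a unit sphere centered at (0, 0, 5).
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A real scene runs millions of such tests per frame against BVH-organized geometry, which is exactly the workload RT cores offload from the shader cores.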

2.4 AI Integration
This section investigates the incorporation of AI and deep learning within
graphics cards, assessing the implications for AI-enhanced upscaling, noise
reduction, and real-time content creation. AI integration in graphics cards
represents a significant advancement in computational capability, allowing GPUs
to use artificial intelligence and deep learning techniques for tasks beyond
traditional graphics processing. GPU AI integration spans:
1. AI-specific hardware (NVIDIA Tensor Cores, AMD Matrix Cores, custom AI
accelerators),
2. AI-powered features (NVIDIA's DLSS (Deep Learning Super Sampling), AMD's
FidelityFX Super Resolution, noise reduction, content creation),
3. GPGPU and deep learning,
4. industry applications (gaming, content creation, AI and machine learning),
and
5. next steps (AI-based real-time interactivity).
AI integration in graphics cards represents a paradigm shift, leveraging artificial
intelligence's power to improve performance, efficiency, and capabilities across a
wide range of applications beyond traditional graphics processing.
GPUs have aided AI research, particularly deep learning. Accordingly,
JupyterLab's backend can be powered by GPUs to accelerate long-running AI
training programs by parallelizing matrix multiplication, with CUDA bridging
the gap between the GPU and AI algorithms. In addition, the notebook supports
the Open Neural Network Exchange (ONNX), a standard AI model format for
converting Scikit-learn and TensorFlow models into ONNX files. These model
files are easily shareable and can be used for inference. Galaxy also supports
the ONNX file format, which makes moving a model from a notebook to Galaxy and
vice versa easier, and Galaxy's remote training tools can also run on the GPU.
Scikit-learn and TensorFlow for developing AI algorithms, OpenCV and
Scikit-Image for image processing, and the NiBabel package for processing image
files have all been made available to researchers [17].
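The matrix-multiplication parallelism mentioned above can be illustrated with a toy sketch: every output cell of a matrix product is an independent dot product, so thousands of GPU cores can each compute one cell. Here a thread pool stands in, very loosely, for those cores:

```python
from concurrent.futures import ThreadPoolExecutor

def cell(A, B, i, j):
    """One output cell: the dot product of row i of A and column j of B."""
    return sum(A[i][k] * B[k][j] for k in range(len(B)))

def matmul(A, B):
    rows, cols = len(A), len(B[0])
    with ThreadPoolExecutor() as pool:
        # One independent task per output cell -- the GPU analogy: no task
        # depends on any other, so all of them can run concurrently.
        futures = {(i, j): pool.submit(cell, A, B, i, j)
                   for i in range(rows) for j in range(cols)}
    return [[futures[(i, j)].result() for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Real GPU libraries (cuBLAS, cuDNN) tile and batch this far more cleverly, but the independence of the output cells is the property that makes the hardware parallelism pay off.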

2.5 Memory and Bandwidth Advancements


Graphic card memory and bandwidth advancements are critical for meeting modern
computing's ever-increasing demands, enabling smoother performance, handling

larger datasets, and pushing the boundaries of visual quality and computational
capabilities.
Both NVM and DRAM are connected to the memory bus on hybrid
NVM/DRAM platforms. As a result, different applications compete for the limited
memory bandwidth, and different types of memory traffic interfere with each other,
lowering overall application performance on the hybrid NVM/DRAM platform.
Memory bandwidth regulation is a common approach for reducing interference with
memory bandwidth usage in order to alleviate the problem of noisy neighbors. With
the commercialization of NVM in cloud data centers, the demand for memory
bandwidth regulation on hybrid platforms is growing. However, several significant
challenges hinder memory bandwidth regulation on hybrid NVM/DRAM platforms
[18].
The first difficulty is asymmetry in memory bandwidth. Different memory
accesses (i.e., DRAM read, DRAM write, NVM read, and NVM write) result in
different maximum memory bandwidth on an NVM/DRAM hybrid platform. The
actual available memory bandwidth is highly dependent on the workload's proportion
of various types of memory accesses [19].
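The effect of this asymmetry can be sketched numerically: the effective bandwidth of a workload is the weighted harmonic mean of the per-type peaks, weighted by the traffic mix. The peak figures below are made-up illustrative values, not measurements of any real NVM/DRAM platform:

```python
# Illustrative peak bandwidths (GB/s) per access type -- assumed values only.
PEAK_GBPS = {"dram_read": 100.0, "dram_write": 80.0,
             "nvm_read": 30.0, "nvm_write": 10.0}

def effective_bandwidth(mix):
    """Weighted harmonic mean; `mix` maps access type -> fraction of traffic."""
    time_per_byte = sum(frac / PEAK_GBPS[kind] for kind, frac in mix.items())
    return 1.0 / time_per_byte

read_heavy = {"dram_read": 0.7, "nvm_read": 0.3}
write_heavy = {"dram_write": 0.3, "nvm_write": 0.7}

print(round(effective_bandwidth(read_heavy), 1))   # 58.8
print(round(effective_bandwidth(write_heavy), 1))  # 13.6
```

Even with identical total traffic, the write-heavy mix achieves a fraction of the read-heavy mix's bandwidth because slow NVM writes dominate its time per byte — which is why regulation schemes must account for the access mix, not just total volume.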

2.6 Power Efficiency and Thermal Management


This section discusses power efficiency and thermal management innovations,
including rendering-load-reducing technologies such as NVIDIA's DLSS and AMD's
FidelityFX Super Resolution. Power efficiency and thermal management are
critical components of modern graphic card design, ensuring optimal performance
while minimizing heat generation.
Thermal management of electrical devices is a multifaceted issue. Heat generated
at the device junction is typically dissipated through the packaging to an area where it
can be further dissipated, typically through convection (e.g., air and liquid cooling)
[21]. As a result, thermal processing of electrical devices must consider packaging
and cooling as well as internal structure and materials in a holistic manner [21].
Thermal management of power devices can be divided into two categories:
device-level and package-level design. Spreading heat evenly throughout the
device, reducing hot spots around critical junction areas, and using substrate
materials with high thermal conductivity are all examples of device-level
design [22]. The major collective push in the United States for recent thermal
management efforts began with DARPA's
thermal management technology program and continued with near-junction thermal
transport and intra-chip/inter-chip enhanced cooling programs [23].

3. Impact on Industries
3.1 Gaming Revolution
This section analyzes the profound impact of cutting-edge graphic card
technology on gaming experiences, including improved visual fidelity, faster
frame rates, and immersive environments. Graphic card technology advancements
have fueled a significant revolution in the gaming industry. Here's a quick
rundown:
1. Improved Visual Fidelity: The inclusion of real-time ray tracing technology in
graphics cards, such as NVIDIA's RTX series and AMD's RDNA 2, has
transformed gaming visuals. It allows for the accurate simulation of light
behavior, resulting in realistic reflections, shadows, and lighting effects that
improve overall visual immersion [24].
2. Higher Resolution and Frame Rates: Modern graphics cards can support higher
resolutions such as 4K and even 8K, giving gamers sharper and more detailed
visuals [25]. GPUs also enable higher refresh rates, yielding smoother gameplay
and reduced motion blur for a more responsive gaming experience [5].
3. AI-Powered Upscaling: Technologies such as NVIDIA's DLSS and AMD's
FidelityFX Super Resolution use AI algorithms to upscale lower resolutions to
higher resolutions while maintaining performance, resulting in improved image
quality and smoother gameplay [5].
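For contrast with those AI upscalers, here is the naive baseline they improve on: nearest-neighbor upscaling, which merely repeats pixels and recovers no detail. This toy sketch is not how DLSS or FSR works internally — it only shows why learned reconstruction is needed at all:

```python
def upscale_nearest(image, factor=2):
    """Nearest-neighbor upscale of a 2D list of pixel values.

    Each output pixel simply copies the closest source pixel, so the result
    is larger but contains no new detail -- the problem AI upscalers solve.
    """
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

low_res = [[1, 2],
           [3, 4]]
print(upscale_nearest(low_res))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

DLSS and FSR instead reconstruct the high-resolution frame from the low-resolution render plus motion vectors and (for DLSS) a trained neural network, so the GPU shades far fewer pixels while the output approaches native quality.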
Graphic card technology's impact on gaming has elevated the industry, allowing
developers to create more visually stunning, immersive, and responsive gaming
experiences.

3.2 Design and Creativity


This section explores how advances in graphic card technology empower
designers, artists, and content creators by enabling faster rendering, complex
simulations, and real-time editing capabilities.
Graphic card technology advancements have had a significant impact on design
and creativity in a variety of industries [26]. Here's an examination of their influence:

1. Complex Simulations and Visualization, 2. High-Fidelity Visuals and Realism,
3. Specialized Tools and Software Integration, 4. Augmented Reality (AR) and
Virtual Reality (VR), 5. Impact on Various Creative Industries, and 6. Future
Possibilities.
Graphic card technology continues to be a catalyst for innovation in design and
creativity, empowering professionals in various industries to achieve greater levels of
realism, efficiency, and interactivity in their work.

3.3 AI and Machine Learning


This section investigates the role of GPUs in accelerating AI and machine
learning tasks and in facilitating training and inference processes [18]. The
incorporation of AI and machine learning (ML) into graphic card technology has
enabled transformative capabilities in a variety of fields. Its significance
spans: 1. AI and machine learning task acceleration, 2. advanced AI-driven
features, 3. GPGPU AI computing, 4. data processing and analysis, 5. gaming AI
integration, 6. AI-driven content creation, 7. industry impact, and 8. future
developments.
The combination of AI and GPU technology enables accelerated computation,
allowing for faster and more efficient processing across a wide range of applications
beyond traditional graphics. This synergy continues to drive innovation and
advancements in a variety of industries, influencing the future of AI-powered
computing.

4. Future Trajectory and Challenges


4.1 Potential Future Innovations
This section speculates on potential future advancements in graphic card
technology, such as developments in quantum computing, deeper AI integration,
and enhanced energy efficiency [27].
Future advancements in graphic card technology have the potential to push the
boundaries of computational capabilities, enabling game-changing advances.
Promising directions include: 1. Quantum Computing Integration, 2. Energy
Efficiency and Sustainable Computing, 3. Interconnectivity and Scalability,
4. Hybrid Architectures and Specialized Processing Units, and 5. Personalized
and Adaptive Computing.

The potential future innovations in graphic card technology hold the promise of
revolutionizing computing capabilities across various industries, from gaming and
entertainment to scientific research and beyond.

4.2 Challenges and Limitations


While the potential for innovation in graphic card technology is enormous,
developers and engineers face several challenges and limitations in furthering
this technology:
1. Thermal Constraints (heat dissipation),
2. Power Consumption,
3. Manufacturing and Cost (manufacturing complexity, accessibility, and
affordability),
4. Memory Bandwidth Bottlenecks,
5. AI Integration Challenges (AI algorithm optimization, balancing AI and
graphics workloads),
6. Scaling and Interoperability (scaling limitations),
7. Ethical and Regulatory Concerns (AI ethics), and
8. Technological and Physical Limitations.
Navigating these challenges and limitations is critical for the long-term
development and advancement of graphic card technology, ensuring that future
innovations are not only cutting-edge but also address key issues such as
energy efficiency, accessibility, and ethics.

5. Conclusion
The evolution of graphic card technology has propelled computing capabilities to
unprecedented heights, revolutionizing industries, entertainment, and creative
endeavors.
1. Real-time ray tracing and AI integration have increased visual fidelity in
gaming, design, and content creation, bringing photorealistic rendering to the
forefront. AI and machine learning integration within GPUs has accelerated
tasks.
2. Graphic card advancements have resulted in immersive gaming experiences
with higher resolutions, realistic lighting, and smoother frame rates.

3. Future Potential and Challenges: Future innovations such as advanced ray
tracing, quantum computing integration, and sustainable computing hold great
promise in terms of fostering new dimensions in computing power and
efficiency. Addressing power consumption, thermal management, cost, and
ethical concerns will be critical in driving sustainable progress.

6. References
[1] Zhao, B., Yang, D., & Zheng, Z. The Implementation and Feasibility Study of
Supporting Dual Graphics Cards on Android Devices.
https://www.clausiuspress.com/assets/default/article/2023/05/11/article_1683796520.pdf
[4] Lilja, A., & Videfors, M. (2023). A Performance Comparison of Path Tracing
on FPGA and GPU.
https://www.diva-portal.org/smash/record.jsf?pid=diva2:1786171
[5] Peddie, J. (2023). The History of the GPU - Eras and Environment. Springer
Nature. https://doi.org/10.1007/978-3-031-13581-1
[6] Wardyga, B. J. (2023). The Video Games Textbook: History • Business •
Technology. CRC Press. ISBN: 9781000868227.
[9] Soller, S. (2011). GPGPU Origins and GPU Hardware Architecture.
https://hdms.bsz-bw.de/files/4500/gpgpu-origins-and-gpu-hardware-architecture.pdf
[10] NVIDIA & Fitzek, F. H. (2020). CUDA, Release 10.2.89.
https://developer.nvidia.com/cuda-toolkit
[13] Kuo, R. J., & Nursyahid, F. F. (2023). Foreign Objects Detection Using
Deep Learning Techniques for Graphic Card Assembly Line. Journal of
Intelligent Manufacturing, 34(7), 2989-3000.
https://link.springer.com/article/10.1007/s10845-022-01980-7
[15] Sanzharov, V. V., Gorbonosov, A. I., Frolov, V. A., & Voloboy, A. G.
(2019, December). Examination of the Nvidia RTX. In CEUR Workshop Proceedings
(Vol. 2485, pp. 7-12).
https://www.researchgate.net/profile/AlexeyVoloboy/publication/337392133_Examination_of_the_Nvidia_RTX/links/5de51a13299bf10bc3390dbd/Examination-of-the-Nvidia-RTX.pdf
[16] Haines, E., & Akenine-Möller, T. (Eds.). (2019). Ray Tracing Gems:
High-Quality and Real-Time Rendering with DXR and Other APIs. Apress. ISBN:
9781484244272.
[17] Kumar, A. (2023). A Docker-Based Interactive JupyterLab Powered by GPU
for Artificial Intelligence in Galaxy.
https://training.galaxyproject.org/training-material/topics/statistics/tutorials/gpu_jupyter_lab/tutorial.html
[18] Yi, J., Dong, B., Dong, M., Tong, R., & Chen, H. (2022). MT2: Memory
Bandwidth Regulation on Hybrid NVM/DRAM Platforms. In 20th USENIX Conference
on File and Storage Technologies (FAST 22) (pp. 199-216).
https://www.usenix.org/conference/fast22/presentation/yi-mt2
[19] Xiang, Y., Ye, C., Wang, X., Luo, Y., & Wang, Z. (2019, August). EMBA:
Efficient Memory Bandwidth Allocation to Improve Performance on Intel
Commodity Processor. In Proceedings of the 48th International Conference on
Parallel Processing (pp. 1-12).
https://dl.acm.org/doi/abs/10.1145/3337821.3337863
[20] Qin, Y., Albano, B., Spencer, J., Lundh, J. S., Wang, B., Buttay, C., ...
& Zhang, Y. (2023). Thermal Management and Packaging of Wide and Ultra-Wide
Bandgap Power Devices: A Review and Perspective. Journal of Physics D: Applied
Physics. https://iopscience.iop.org/article/10.1088/1361-6463/acb4ff/meta

