BY SYED AWAIS HYDER
M.Tech (VLSI), I Year I Sem
10D51D5711
INDEX
1. Abstract
2. Intel Pentium Pro Overview
3. Intel Pentium 2 Overview
4. Intel Pentium 3 Overview
5. Intel Pentium 4 Overview
6. Intel Core 2 Duo Overview
7. Conclusion
8. References
INTEL PROCESSORS

ABSTRACT

Today there are a number of processors in use in our systems. Here we are going to discuss one of the most popular and most widely used brands of processor. We will discuss a brief history of Intel processors, and then the new Intel Core 2 Duo processor and its features. The key innovations in the Intel Core 2 Duo processor are Intel Wide Dynamic Execution, Intel Advanced Digital Media Boost, Intel Advanced Smart Cache, Intel Smart Memory Access, and Intel Intelligent Power Capability; we are going to discuss each of these key innovations as well. In the end we compare the Intel Core 2 Duo and the AMD AM2 based on the results of SYSmark 2004 SE, which simulates real-life workloads for Internet Content Creation and Office Productivity. The comparison between these two processors concludes the discussion.
Intel Pentium Pro

The Pentium Pro introduced the P6 microarchitecture (sometimes referred to as i686) and was originally intended to replace the original Pentium in a full range of applications. While the Pentium and Pentium MMX had 3.1 and 4.5 million transistors, respectively, the Pentium Pro contained 5.5 million transistors. Likely the Pentium Pro's most noticeable addition was its on-package L2 cache, which ranged from 256 KB at introduction to 1 MB in 1997. At the time, manufacturing technology did not feasibly allow a large L2 cache to be integrated into the processor core; Intel instead placed the L2 die(s) separately in the package, which still allowed the cache to run at the same clock speed as the CPU core. Additionally, unlike most motherboard-based cache schemes that shared the main system bus with the CPU, the Pentium Pro's cache had its own back-side bus (called the dual independent bus by Intel). Because of this, the CPU could read main memory and cache concurrently, greatly reducing a traditional bottleneck. The cache was also "non-blocking", meaning that the processor could issue more than one cache request at a time (up to 4), reducing cache-miss penalties. These properties combined to produce an L2 cache that was immensely faster than the motherboard-based caches of older processors. This cache alone gave the CPU an advantage in input/output performance over older x86 CPUs. In multiprocessor configurations, the Pentium Pro's integrated cache skyrocketed performance in comparison to architectures in which each CPU shared a central cache.

However, this far faster L2 cache did come with some complications. The Pentium Pro's "on-package cache" arrangement was unique: the processor and the cache were on separate dies in the same package, connected closely by a full-speed bus. The two or three dies had to be bonded together early in the production process, before testing was possible. This meant that a single tiny flaw in either die made it necessary to discard the entire assembly, which was one of the reasons for the Pentium Pro's relatively low production yield and high cost. All versions of the chip were expensive.

P6 microarchitecture

Microarchitecture (sometimes abbreviated to µarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures; implementations might vary due to different goals of a given design or due to shifts in technology. Computer architecture is the combination of microarchitecture and instruction set design. The P6 microarchitecture was widely known for low power consumption, excellent integer performance, and relatively high instructions per cycle (IPC), and it lasted three generations, from the Pentium Pro to the Pentium III. Some techniques first used in the x86 space in the P6 core include:

Speculative execution: Speculative execution means doing work ahead of a branch, the result of which may not be needed. It is a performance optimization used in most high-performance microprocessors to make use of instruction cycles that would otherwise be wasted by certain types of costly delay. The main idea is to do work that may not be needed: in order to improve performance and utilization of computer resources, some instructions are scheduled ahead of time, before it is determined whether they will have to be executed at all. Modern pipelined microprocessors use speculative execution to reduce the cost of conditional branch instructions, using schemes that predict the execution path of a program based on the history of branch executions.

Out-of-order execution: Out-of-order (OoO) completion (called "dynamic execution" by Intel) means that instructions are executed based on data flow rather than program order. In this paradigm, a processor executes instructions in an order governed by the availability of input data rather than by their original order in the program. The way the instructions are ordered in the original computer code is known as program order; in the processor, they are handled in data order, the order in which the data (operands) become available in the processor's registers. This performance optimization technique is used in pipelined processors and other systems. The key concept of OoO processing is to allow the processor to avoid a class of stalls that occur when the data needed to perform an operation is unavailable: instead of stalling, the processor executes the next instructions that are able to run immediately. In this way, the processor avoids being idle while data is retrieved for the next instruction in the program.
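The data-flow idea described above can be sketched in a few lines of Python. This is an illustrative toy model of my own, not Intel's actual hardware: instructions issue as soon as their source operands are available, so an independent instruction can overtake an earlier one that is stalled on a slow load.

```python
INF = float("inf")

def issue_order(program, ready_at):
    """Issue instructions data-flow style.

    program:  list of (name, sources, dest) tuples in program order.
    ready_at: register -> cycle at which its value arrives (e.g. a load);
              registers not listed and not written by the program are
              treated as available from cycle 0.
    """
    ready_at = dict(ready_at)
    for _name, _srcs, dest in program:
        ready_at.setdefault(dest, INF)       # produced later, not ready yet
    order, pending, cycle = [], list(program), 0
    while pending:
        for instr in pending:                # scan in program order...
            name, srcs, dest = instr
            if all(ready_at.get(s, 0) <= cycle for s in srcs):
                order.append(name)           # ...but issue by data readiness
                ready_at[dest] = cycle + 1   # result available next cycle
                pending.remove(instr)
                break
        else:
            cycle += 1                       # nothing ready: stall a cycle
            continue
        cycle += 1
    return order

program = [
    ("ADD1", ("r1", "r2"), "r5"),   # waits on r1 (a slow load, say)
    ("ADD2", ("r3", "r4"), "r6"),   # independent: can overtake ADD1
    ("ADD3", ("r5", "r6"), "r7"),   # needs both earlier results
]
# r1 arrives from memory only at cycle 5, so ADD2 issues first.
print(issue_order(program, {"r1": 5}))   # -> ['ADD2', 'ADD1', 'ADD3']
```

With no delayed registers, the same routine degenerates to plain program order, which is exactly the behavior of an in-order machine.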
Super-pipelining: A pipeline is a set of data-processing elements connected in series, so that the output of one element is the input of the next. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements. Pipelining means breaking the work into smaller pieces, each stage doing less work, which increases the number of instructions completed per unit time. Super-pipelining simply refers to pipelining that uses a longer pipeline (with more stages) than "regular" pipelining; splitting the pipeline into shorter stages can multiply the throughput by up to 2x. In theory, a design with more stages, each doing less work, can be scaled to a higher clock frequency. However, this depends a lot on other design characteristics, and it isn't true by default that a processor claiming super-pipelining is "better" than one that does not.

Register renaming: Register renaming refers to a technique used to avoid unnecessary serialization of program operations imposed by the reuse of registers by those operations; i.e., it removes false dependencies and the architectural limit on the number of registers. CPUs have more physical registers than may be named directly in the instruction set, so they rename registers in hardware to achieve additional parallelism.

Physical Address Extension (PAE): PAE is a feature that allows x86 processors to access a physical address space (including random-access memory and memory-mapped devices) larger than 4 gigabytes. The x86 processor hardware is augmented with additional address lines used to select the additional memory, so the physical address size increases from 32 bits to 36 bits, raising the maximum physical memory size from 4 GB to 64 GB. The 32-bit size of the virtual address is not changed, so regular application software continues to use instructions with 32-bit addresses and (in a flat memory model) is limited to 4 gigabytes of virtual address space. The operating system uses page tables to map this 4-GB address space into the 64 GB of physical memory, and the mapping is typically applied differently for each process. In this way, the extra memory is useful even though no single regular application can access it all simultaneously. To use PAE, operating system support is required. Microsoft Windows implements PAE if booted with the appropriate option, but current 32-bit desktop editions enforce a physical address space within 4 GB even in PAE mode.

Intel Pentium 2

Containing 7.5 million transistors, the Pentium II featured an improved version of the first P6-generation core of the Pentium Pro, which contained 5.5 million transistors. Intel notably improved 16-bit code execution performance on the Pentium II, an area in which the Pentium Pro was at a notable handicap; most consumer software of the day was still using at least some 16-bit code. The Pentium II was also the first P6-based CPU to implement the Intel MMX integer SIMD instruction set, which had already been introduced on the Pentium MMX.

The Pentium II CPU was packaged in a slot-based module rather than a CPU socket. The processor and associated components were carried on a daughterboard, similar to a typical expansion board, within a plastic cartridge. A fixed or removable heat sink was carried on one side, sometimes using its own fan. This larger package was a compromise allowing Intel to separate the secondary cache from the processor while still keeping it on a closely coupled back-side bus. Off-package cache solved the Pentium Pro's low yields, allowing Intel to introduce the Pentium II at a mainstream price level. However, because of a variety of factors, its L2 cache subsystem was a downgrade compared to the Pentium Pro's, whose off-die L2 cache ran at the same frequency as the processor. The Pentium II's L2 cache ran at half the processor's clock frequency, but it was cheaper to manufacture because of the separate, slower L2 cache memory. This arrangement also allowed Intel to easily vary the amount of L2 cache, making it possible to target different market segments with cheaper or more expensive processors and accompanying performance levels. The Pentium II went to 32 KB of L1 cache, double that of the Pentium Pro, and the smallest L2 cache size was increased to 512 KB from the 256 KB of the Pentium Pro. The Pentium II was basically a more consumer-oriented version of the Pentium Pro; the improved 16-bit performance and MMX support made it a better choice for consumer-level operating systems. Combined with the larger L1 cache and improved 16-bit performance, the slower and cheaper L2 cache's performance impact was reduced: general processor performance was increased while costs were cut.
Intel Pentium 3

The Pentium III brand refers to Intel's 32-bit x86 desktop and mobile microprocessors based on the sixth-generation P6 microarchitecture, introduced on February 26, 1999. The brand's initial processors were very similar to the earlier Pentium II-branded microprocessors. The most notable differences were the addition of the SSE instruction set (to accelerate floating-point and parallel calculations) and the introduction of a controversial serial number embedded in the chip during the manufacturing process.

Single instruction, multiple data (SIMD): SIMD is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data elements simultaneously; such machines exploit data-level parallelism.

MMX: MMX is a single instruction, multiple data (SIMD) instruction set designed by Intel. MMX defined eight registers, known as MM0 through MM7 (henceforth referred to as MMn). To avoid compatibility problems with the context-switch mechanisms in existing operating systems, these registers were aliases for the existing x87 FPU stack registers (so no new registers needed to be saved or restored). MMn registers are directly addressable (random access), and MMX provides only integer operations.

Streaming SIMD Extensions (SSE): SSE is a SIMD instruction-set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series processors as a reply to AMD's 3DNow! (which had debuted a year earlier). SSE contains 70 new instructions. Until SSE2, SSE integer instructions introduced with later SSE extensions could still operate on 64-bit MMX registers, because the new XMM registers require operating system support. SSE2, introduced with the Pentium 4, is a major enhancement to SSE: it adds new math instructions for double-precision (64-bit) floating point and also extends MMX instructions to operate on 128-bit XMM registers.
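The SIMD idea itself is easy to sketch. The snippet below is a plain-Python illustration (no real vector hardware involved): one "instruction" is applied to every lane of two packed "registers" at once, instead of looping over the elements one at a time.

```python
def simd_add(a, b):
    """Apply the same operation (addition) to all lanes of two packed
    'registers' at once, the way one SSE instruction acts on four floats."""
    assert len(a) == len(b), "packed operands must have the same lane count"
    return [x + y for x, y in zip(a, b)]

# Four single-precision-style lanes packed into one 128-bit-style register,
# as in an SSE XMM register holding four 32-bit floats.
xmm0 = [1.0, 2.0, 3.0, 4.0]
xmm1 = [10.0, 20.0, 30.0, 40.0]
print(simd_add(xmm0, xmm1))   # one operation, four results
```

A scalar (SISD) machine would need four separate additions for the same result; the SIMD formulation expresses all four as a single operation, which is where the speed-up for media and scientific code comes from.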
SSE2 enables the programmer to perform SIMD math on any data type (from 8-bit integer to 64-bit float) entirely with the XMM vector-register file, without the need to use the legacy MMX or FPU registers. Many programmers consider SSE2 to be "everything SSE should have been", as SSE2 offers an orthogonal set of instructions for dealing with common data types. SSE3, also called Prescott New Instructions (PNI), is an incremental upgrade to SSE2, adding a handful of DSP-oriented mathematics instructions and some process (thread) management instructions. SSSE3 is in turn an incremental upgrade to SSE3, adding 16 new instructions which include permuting the bytes in a word, multiplying 16-bit fixed-point numbers with correct rounding, and within-word accumulate instructions. SSSE3 is often mistaken for SSE4, as this term was used during the development of the Core microarchitecture.

Intel Pentium 4

The Pentium 4 brand refers to Intel's line of single-core desktop and laptop central processing units (CPUs). They had the 7th-generation x86 microarchitecture, called NetBurst, which was the company's first all-new design since the introduction of the P6 microarchitecture with the Pentium Pro CPUs in 1995. NetBurst differed from the preceding P6 (Pentium III, II, etc.) by featuring a very deep instruction pipeline to achieve very high clock speeds (up to 3.8 GHz). Pentium 4 CPUs introduced the SSE2 and, in the Prescott-based Pentium 4s, SSE3 instruction sets to accelerate calculations, transactions, media processing, 3D graphics, and games. Later versions featured Hyper-Threading Technology (HTT), a feature that makes one physical CPU appear to the operating system as two logical CPUs. The initial 32-bit x86 instruction set of the Pentium 4 microprocessors was extended by the 64-bit x86-64 set.

NetBurst microarchitecture:
The NetBurst microarchitecture, called P68 inside Intel, was the successor to the P6 microarchitecture in the x86 family of CPUs made by Intel. NetBurst is sometimes referred to as the Intel P7, Intel 80786, or i786 microarchitecture when comparing it to previous generations. The NetBurst microarchitecture includes features such as Hyper Pipelined Technology and the Rapid Execution Engine, which are firsts in this particular microarchitecture.

Hyper Pipelined Technology: Intel chose this name for the 20-stage pipeline. The drawback of having more stages in a pipeline is an increase in the number of stages that need to be traced back in the event that the branch predictor makes a mistake, increasing the penalty paid for a misprediction.

Execution Trace Cache: To address this issue, Intel incorporated its Execution Trace Cache within the L1 cache of the CPU. It stores decoded micro-operations, so that when executing a new instruction, instead of fetching and decoding the instruction again, the CPU directly accesses the decoded micro-ops from the trace cache, thereby saving considerable time. Moreover, the micro-ops are cached in their predicted path of execution, which means that when instructions are fetched by the CPU from the cache, they are already present in the correct order of execution.

Rapid Execution Engine: With this technology, the two ALUs in the core of the CPU are double-pumped, meaning that they actually operate at twice the core clock frequency. For example, in a 3.8 GHz processor, the ALUs will effectively be operating at 7.6 GHz. The reason behind this is generally to make up for the low IPC count; additionally, it considerably enhances the integer performance of the CPU. The downside is that certain instructions are now much slower (relatively and absolutely) than before, making optimization for multiple target CPUs difficult.

Hyper-Threading Technology (HTT): Hyper-threading is an Intel-proprietary technology used to improve parallelization of computations (doing multiple tasks at once) performed on PC microprocessors. For each processor core that is physically present, the operating system addresses two virtual processors and shares the workload between them when possible. Hyper-threading requires not only that the operating system support multiple processors, but also that it be specifically optimized for HTT, and Intel recommends disabling HTT when using operating systems that have not been optimized for this chip feature.
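The trade-off behind the 20-stage pipeline can be made concrete with a toy model. All the numbers below are invented for illustration (they are not Intel measurements): splitting the work into more stages shortens the cycle time, but each stage adds fixed latch overhead, and a branch misprediction flushes the whole pipeline, so average time per instruction has a sweet spot rather than improving forever.

```python
def time_per_instr(stages, total_work=20.0, latch_overhead=1.0,
                   mispredict_rate=0.05):
    """Toy model (illustrative numbers only) of the pipeline-depth trade-off.

    cycle time = useful work per stage + fixed latch overhead per stage;
    a mispredicted branch flushes the pipeline, wasting `stages` cycles.
    """
    cycle = total_work / stages + latch_overhead
    return cycle * (1 + mispredict_rate * stages)

for depth in (5, 10, 20, 40, 80):
    print(depth, round(time_per_instr(depth), 2))
```

In this model, 20 stages beats both a much shallower and a much deeper design; with a worse branch predictor (higher `mispredict_rate`) the optimum shifts back toward shallower pipelines, which is one way to read the later retreat from NetBurst's very deep pipeline.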
Intel Core 2 Duo

Intel Core Duo consists of two cores on one die, a 2 MB L2 cache shared by both cores, and an arbiter bus that controls both L2 cache and FSB (front-side bus) access. An upcoming stepping of Core Duo processors will also include the ability to disable one core to conserve power.

Core 2 is a brand encompassing a range of Intel's consumer 64-bit x86-64 single-, dual-, and quad-core microprocessors based on the Core microarchitecture. The single- and dual-core models are single-die, whereas the quad-core models comprise two dies, each containing two cores, packaged in a multi-chip module. The introduction of Core 2 relegated the Pentium brand to the mid-range market and reunified the laptop and desktop CPU lines, which previously had been divided into the Pentium 4, Pentium D, and Pentium M brands. The Core microarchitecture returned to lower clock rates and improved the usage of both available clock cycles and power when compared with the preceding NetBurst microarchitecture of the Pentium 4/D-branded CPUs, reducing the power consumption of Core 2-branded CPUs while increasing their processing capacity. Core-based processors do not have the Hyper-Threading Technology found in Pentium 4 processors; this is because the Core microarchitecture is a descendant of the P6 microarchitecture used by the Pentium Pro, Pentium II, Pentium III, and Pentium M.

Core microarchitecture: The Core microarchitecture (previously known as the Next-Generation Micro-Architecture, or NGMA) is a multi-core processor microarchitecture unveiled by Intel in Q1 2006. Intel's CPUs have varied widely in power consumption according to clock rate, architecture, and semiconductor process. The Core microarchitecture provides more efficient decoding stages, execution units, caches, and buses.
The high power consumption and heat intensity of NetBurst-based processors, the resulting inability to effectively increase clock speed, and other bottlenecks, such as the inefficient pipeline, were the primary reasons Intel abandoned the NetBurst microarchitecture. The Core microarchitecture was designed by the Intel Israel (IDC) team that previously designed the Pentium M mobile processor. The architecture features lower power usage than before and is competitive with AMD in heat production. This allows the chip to produce less heat and consume as little power as possible: all components run at minimum speed, ramping up dynamically as needed. A Core 2 processor has multiple cores and hardware virtualization support (marketed as Intel VT-x), as well as Intel 64 (Intel's implementation of x86-64) and SSSE3. Other new technologies include 1-cycle throughput (2 cycles previously) for all 128-bit SSE instructions, a new power-saving design, and macro-ops fusion, which combines two x86 instructions into a single micro-operation; for example, a common code sequence like a compare followed by a conditional jump becomes a single micro-op.

Intel Wide Dynamic Execution: It is a combination of techniques, such as data-flow analysis, speculative execution, out-of-order execution, and superscalar execution, that Intel first implemented in the P6 microarchitecture; the Intel Core microarchitecture enhances this capability with Intel Wide Dynamic Execution. It enables delivery of more instructions per clock cycle to improve execution time and energy efficiency. Every execution core is wider, allowing each core to fetch, dispatch, execute, and return up to four full instructions simultaneously; Intel's mobile and NetBurst microarchitectures could handle three instructions at a time. Further efficiencies include more accurate branch prediction, deeper instruction buffers for greater execution flexibility, and additional features to reduce execution time. One such feature is macro fusion. In previous-generation processors, each incoming instruction was individually decoded and executed. Macro fusion enables common instruction pairs to be combined into a single internal instruction (micro-op) during decoding; two program instructions can then be executed as one micro-op, reducing the overall work of the processor. This increases the overall number of instructions that can run in a given amount of time, and its single-cycle execution of combined instruction pairs results in increased performance for less power. The Intel Core microarchitecture also includes an enhanced arithmetic logic unit (ALU) to further facilitate macro fusion. Macro fusion thus improves overall performance and energy efficiency by increasing the number of instructions handled per cycle.

Supplemental Streaming SIMD Extensions 3 (SSSE3): SSSE3 is a SIMD instruction set created by Intel and is the fourth iteration of the SSE technology. SSSE3 contains 16 new discrete instructions; each can act on 64-bit MMX or 128-bit XMM registers, so Intel's materials refer to 32 new instructions. SSE3 is used in newer versions of the Pentium 4, in all versions of the Pentium D, the Pentium Extreme Edition, and the Celeron D; SSSE3 was first introduced with Intel processors based on the Core microarchitecture and is used in Xeon and Pentium Dual-Core processors, in Core 2, Core i3, Core i5, and Core i7 processors, and in the Atom D510, N450, and N550.

Intel Advanced Digital Media Boost: It is a feature that significantly improves performance when executing Streaming SIMD Extensions (SSE) instructions. SSE instructions enhance the Intel architecture by enabling programmers to develop algorithms that can mix packed single-precision and double-precision floating-point values and integers, using both SSE and MMX instructions respectively. The 128-bit SIMD integer arithmetic and 128-bit SIMD double-precision floating-point operations reduce the overall number of instructions required to execute a particular program task, and as a result can contribute to an overall performance increase. They accelerate a broad range of applications, including video, speech and image processing, photo processing, encryption, and financial, engineering, and scientific applications. On many previous-generation processors, 128-bit SSE, SSE2, and SSE3 instructions were executed at a sustained rate of one complete instruction every two clock cycles: the lower 64 bits in one cycle and the upper 64 bits in the next. The Intel Advanced Digital Media Boost feature enables these 128-bit instructions to be completely executed at a throughput rate of one per clock cycle, effectively doubling the speed of execution for these instructions. Intel Advanced Digital Media Boost is particularly useful when running many important multimedia operations involving graphics, video, and audio, and when processing other rich data sets that use SSE, SSE2, and SSE3 instructions.
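The macro-fusion step described above can be sketched as a toy decoder. This is an illustration only, with simplified stand-in opcode names rather than a real x86 decoder: a compare immediately followed by a conditional jump is emitted as one internal micro-op.

```python
FUSIBLE_JUMPS = {"je", "jne", "jl", "jg"}   # simplified stand-ins

def decode_with_fusion(instrs):
    """instrs: list of (opcode, operands) pairs in program order.
    Returns the decoded micro-op stream with cmp+jcc pairs fused."""
    micro_ops = []
    i = 0
    while i < len(instrs):
        op, args = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else None
        if op == "cmp" and nxt and nxt[0] in FUSIBLE_JUMPS:
            # compare + conditional jump become a single micro-op
            micro_ops.append(("cmp+" + nxt[0], args + nxt[1]))
            i += 2
        else:
            micro_ops.append((op, args))
            i += 1
    return micro_ops

stream = [
    ("mov", ("eax", "[mem]")),
    ("cmp", ("eax", "0")),
    ("je",  ("done",)),
    ("add", ("ebx", "eax")),
]
print(decode_with_fusion(stream))   # 4 instructions decode to 3 micro-ops
```

The four-instruction stream decodes to three micro-ops, which is exactly the saving the text describes: the fused pair occupies one slot in the pipeline instead of two.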
Intel Advanced Smart Cache: It is a multi-core-optimized cache that improves performance and efficiency by increasing the probability that each execution core of a dual-core processor can access data from a higher-performance, more-efficient cache subsystem. To accomplish this, Intel shares the L2 cache between cores. To understand the advantage of this design, consider that most current multi-core implementations don't share L2 cache among execution cores. This means that when two execution cores need the same data, they each have to store it in their own L2 cache. With Intel's shared L2 cache, the data only has to be stored in one place, which each core can access. By sharing the L2 cache among the cores, data is located as close as possible to where it's needed, minimizing latency and thus improving efficiency and speed. This also better optimizes cache resources: the Intel Advanced Smart Cache allows each core to dynamically utilize up to 100 percent of the available L2 cache, so when one core has minimal cache requirements, other cores can increase their percentage of L2 cache, reducing cache misses and increasing performance. The multi-core-optimized cache also enables data to be obtained from the cache at higher throughput rates.

Intel Smart Memory Access: It improves system performance by optimizing the use of the available data bandwidth from the memory subsystem and hiding the latency of memory accesses. Intel Smart Memory Access includes an important new capability called memory disambiguation, which increases the efficiency of out-of-order processing by providing the execution cores with the built-in intelligence to speculatively load data for instructions. Intel's memory disambiguation uses special intelligent algorithms to evaluate whether or not a load can be executed ahead of a preceding store. If it intelligently speculates that it can, then the load instructions can be scheduled before the store instructions, enabling the highest possible instruction-level parallelism. If the speculative load ends up being valid, the processor spends less time waiting and more time processing, resulting in faster execution and more efficient use of processor resources. In the rare event that the load is invalid, Intel's memory disambiguation has built-in intelligence to detect the conflict, reload the correct data, and re-execute the instruction.

In addition to memory disambiguation, Intel Smart Memory Access includes advanced prefetchers. Prefetchers do just that: they "prefetch" memory contents before they are requested, so the contents can be placed in cache and then readily accessed when needed. To ensure data is where each execution core needs it, the Intel Core microarchitecture uses two prefetchers per L1 cache and two prefetchers per L2 cache. These prefetchers detect multiple streaming and strided access patterns simultaneously, which enables them to ready data in the L1 cache for "just-in-time" execution. The prefetchers for the L2 cache analyze accesses from the cores to ensure that the L2 cache holds the data the cores may need in the future. Combined, the advanced prefetchers and memory disambiguation result in improved execution throughput by maximizing the available system-bus bandwidth and hiding latency to the memory subsystem.

Intel Intelligent Power Capability: It is a set of capabilities designed to reduce power consumption and design requirements. This feature manages the runtime power consumption of all the processor's execution cores. It includes an advanced power-gating capability that allows for ultra-fine-grained logic control, turning on individual processor logic subsystems only if and when they are needed. Additionally, many buses and arrays are split so that data required in some modes of operation can be put in a low-power state when not needed. In the past, implementing power gating has been challenging because of the power consumed in powering down and ramping back up, as well as the need to maintain system responsiveness when returning to full power. Through Intel Intelligent Power Capability, these concerns have been satisfied, ensuring significant power savings without sacrificing responsiveness.
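A strided-access detector of the kind mentioned above can be sketched as follows. This is a generic textbook-style stride prefetcher written for illustration, not Intel's actual design: after seeing the same delta between consecutive addresses enough times, it predicts and fetches the next addresses ahead of the demand access.

```python
class StridePrefetcher:
    """Toy stride prefetcher: detect a repeating address delta and fetch ahead."""

    def __init__(self, confidence_needed=2, degree=2):
        self.last_addr = None
        self.stride = None
        self.confidence = 0
        self.confidence_needed = confidence_needed
        self.degree = degree            # how many lines to fetch ahead

    def access(self, addr):
        """Record a demand access; return the addresses prefetched, if any."""
        prefetches = []
        if self.last_addr is not None:
            delta = addr - self.last_addr
            if delta == self.stride:
                self.confidence += 1    # same stride seen again
            else:
                self.stride = delta     # new candidate stride
                self.confidence = 1
            if self.confidence >= self.confidence_needed:
                prefetches = [addr + self.stride * i
                              for i in range(1, self.degree + 1)]
        self.last_addr = addr
        return prefetches

pf = StridePrefetcher()
for a in (100, 164, 228):        # stride of 64 bytes (one cache line)
    issued = pf.access(a)
print(issued)                     # the next two lines, fetched ahead of demand
```

After two consecutive accesses 64 bytes apart, the third access triggers prefetches of the next two lines (addresses 292 and 356), so a sequential scan finds its data already in cache.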
Conclusion

The applications tested by SYSmark 2004 cover the vast majority of the modern computing spectrum: everything from multimedia to office to multitasking performance is included.

Internet Content Creation: In this scenario, the content creator creates a product-related website targeting a broadband and narrowband audience. The user first renders a 3D model to a bitmap. While waiting on this operation, the user imports the rendered image into an image-processing package, edits it, and creates special effects using one of the modified images as input. Back in the 3D modeling software, the user modifies the 3D model and exports it to a vector-graphics format. He then uses an animation creation tool to open the exported 3D vector-graphics file, modifies it by including other pictures, and optimizes it for faster animation. The user opens a video editing package and creates a movie from several raw input movie cuts and sound cuts, and starts exporting it. Once the movie is assembled, the user edits it, and the final movie with the special effects is then compressed in a format that can be broadcast over broadband Internet, while the user prepares web pages using a website publishing tool. The website is given the final touches, and the system is scanned for viruses.

Office Productivity: In this scenario, the office productivity user creates a marketing presentation and supporting documents for a new product. The user receives email containing a collection of documents in a compressed file. The user reviews his email and updates his calendar while virus-checking software scans the system, then extracts content from the archive. The corporate website is viewed, and the user begins creating the collateral documents. The user also accesses a database and runs some queries; the queries' results are imported into a spreadsheet and used to generate graphical charts. The user then transcribes a document, and a collection of documents is compressed. The user edits and adds elements to a slide-show template. Finally, the user looks at the results of his work (both the slide show and the portable document) in an Internet browser.

Drilling down into the individual benchmark results for SYSmark 2004, Intel's Core 2 performance domination continues in the Office Productivity portion, with the Core 2 Extreme X6800 maintaining a 42.5% performance advantage over the FX-62. Intel chips have always done a little better in the Office Productivity tests, but this time the Core 2 is far superior to the X2 4600+, and even the E6300 manages to remain competitive with the FX-62. There has been speculation that one of the reasons Core 2 Duo chips perform so well is that they have so much L2 cache. Dropping from 4 MB to 2 MB of cache does hurt performance a bit, and the impact of the reduced cache is more apparent in Office Productivity applications than in the Internet Content Creation results. Perhaps a Core 2 with 1 MB or less of L2 wouldn't perform all that well, but the 2 MB Core 2 chips perform respectably regardless of the application being tested.

We can deduce from the tests we've performed that the Core 2 Duo not only allows Intel to catch up to AMD in desktop applications, but actually performs a lot better in many cases. Not every application will show a substantial performance increase with a faster processor, yet the overall performance spread among the tested CPUs is almost 75%, and with a little bit of overclocking both of our budget Core 2 Duo chips perform very well. This is Intel's new $183 part offering performance equal to that of AMD's $1,000 flagship FX processor. Intel probably could have launched Core 2 Duo at a higher price point, and AMD would not have had to take such desperate measures to keep up with them, cutting prices of their processors in half in some cases. So while it wasn't a case of Intel destroying AMD across the board, it's going to take a lot for AMD to recover from this deficit. This time around, we get something more important: competition in performance AND price.
REFERENCES
1. Intel.com
2. Wikipedia.org
3. Microsoft.com
4. PCWorld.com
5. ExtremeTech.com