
GOOGLE CHROME OS

What exactly is Chrome OS?


Google Chrome OS is the company's first attempt at designing an operating system for
more powerful computers. The Google-backed Android has done well on mobile
platforms, and Google now wants to take the work it has done there, tie it up with the
work it is doing on its still-fresh Chrome browser, and make the first 'OS for the cloud' –
with most of the work being done on the net rather than on the computer.
"Speed, simplicity and security are the key aspects of Google Chrome OS,"
said Google's statement. "We're designing the OS to be fast and lightweight, to start up
and get you onto the web in a few seconds."

Google's operating system, named Google Chrome OS, has recently been launched. The
main focus of this OS is on the following areas:

 The Chromium-based browser and the window manager
 System-level software and user-land services: the kernel, drivers, connection manager,
and so on
 Firmware

The V8 JavaScript engine has been designed for scalability. What does scalability mean
in the context of JavaScript and why is it important for modern web applications?

Web applications are becoming more complex. With the increased complexity comes
more JavaScript code and more objects. An increased number of objects puts additional
stress on the memory management system of the JavaScript engine, which has to scale
to deal efficiently with object allocation and reclamation. If engines do not scale to
handle large object heaps, performance will suffer when running large web applications.

In browsers without a multi-process architecture, a simple way to see the effect of an
increased working set on JavaScript performance is to log in to GMail in one tab and run
JavaScript benchmarks in another. The objects from the two tabs are allocated in the
same object heap, and therefore the benchmarks are run with a working set that
includes the GMail objects.
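
You can approximate this experiment in code. The sketch below is illustrative (the object shapes and counts are made up, not taken from V8 or GMail): it times an allocation-heavy loop, inflates the heap with long-lived objects, and times the loop again. In an engine whose collector has to examine the whole heap on every collection, the second run tends to be slower.

// Illustrative sketch: measure allocation throughput before and after
// growing the live working set. Sizes are arbitrary, chosen only to make
// the effect visible; run with Node.js or in a browser console.
function timeAllocations(iterations) {
  const start = Date.now();
  let checksum = 0;
  for (let i = 0; i < iterations; i++) {
    // Short-lived garbage, like the objects a benchmark churns through.
    const obj = { index: i, payload: "x".repeat(32) };
    checksum += obj.index; // keep the object briefly live
  }
  return Date.now() - start;
}

const retained = []; // stands in for the GMail tab's long-lived objects

console.log("small working set:", timeAllocations(1000000), "ms");

// Grow the working set, roughly as logging in to a large web app would.
for (let i = 0; i < 500000; i++) {
  retained.push({ id: i, data: new Array(10).fill(i) });
}

console.log("large working set:", timeAllocations(1000000), "ms");
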
V8's approach to scalability is to use generational garbage collection. The main
observation behind generational garbage collection is that most objects either die very
young or are long-lived. There is no need to examine long-lived objects on every
garbage collection because they are likely to still be alive. Introducing generations to the
garbage collector allows it to only consider newly allocated objects on most garbage
collections.
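
As a toy illustration of the idea (a model, not V8's actual implementation), the sketch below keeps two spaces: objects are allocated into a small new space, frequent "scavenges" examine only that space, and the few survivors are promoted to an old space that a full collection would visit far more rarely.

// Toy two-generation heap. All names and sizes are invented for
// illustration; real collectors manage memory pages, not JS arrays.
class ToyGenerationalHeap {
  constructor(newSpaceLimit) {
    this.newSpace = [];       // young objects, collected often
    this.oldSpace = [];       // promoted survivors, collected rarely
    this.newSpaceLimit = newSpaceLimit;
    this.scavenges = 0;
  }

  allocate(obj) {
    if (this.newSpace.length >= this.newSpaceLimit) this.scavenge();
    this.newSpace.push(obj);
  }

  // Minor GC: only the new space is scanned. Survivors move to old
  // space; everything else is reclaimed without being visited again.
  scavenge() {
    this.scavenges++;
    this.oldSpace.push(...this.newSpace.filter(o => o.live));
    this.newSpace = [];
  }
}

// Most objects "die young": here only 1 in 100 stays reachable.
const heap = new ToyGenerationalHeap(1000);
for (let i = 0; i < 100000; i++) {
  heap.allocate({ id: i, live: i % 100 === 0 });
}
console.log(heap.scavenges, "scavenges; old space holds", heap.oldSpace.length, "objects");
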

Splay: A Scalability Benchmark

To keep track of how well V8 scales to large object heaps, we have added a new
benchmark, Splay, to version 4 of the V8 benchmark suite. The Splay benchmark builds a
large splay tree and modifies it by creating new nodes, adding them to the tree, and
removing old ones. The benchmark is based on a JavaScript log processing module used
by the V8 profiler and it effectively measures how fast the JavaScript engine can allocate
nodes and reclaim unused memory. Because of the way splay trees work, the engine
also has to deal with a lot of changes to the large tree.
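
The real benchmark ships with the V8 benchmark suite; the sketch below only mimics its workload pattern, substituting a plain Map for the splay tree and inventing the node payload. The point is the churn: a large structure stays live while new nodes are constantly allocated and old ones become garbage.

// Simplified Splay-like workload (not the actual benchmark code).
const kTreeSize = 8000;     // live nodes kept at any time (illustrative)
const kIterations = 80000;  // insert/remove cycles (illustrative)

function makePayload(depth, tag) {
  // Each node carries a small object graph so allocation is non-trivial.
  if (depth === 0) return { array: [0, 1, 2, 3, 4], string: "node-" + tag };
  return { left: makePayload(depth - 1, tag), right: makePayload(depth - 1, tag) };
}

const tree = new Map(); // stands in for the splay tree
const keys = [];

function insertNode() {
  const key = Math.random();
  tree.set(key, makePayload(3, key));
  keys.push(key);
}

// Build the initial large structure.
for (let i = 0; i < kTreeSize; i++) insertNode();

// Churn: every new node displaces the oldest one, which becomes garbage
// the collector must reclaim while the big structure stays live.
let oldest = 0;
for (let i = 0; i < kIterations; i++) {
  insertNode();
  tree.delete(keys[oldest++]);
}
console.log("live nodes kept:", tree.size);
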

We have measured the impact of running the Splay benchmark with different splay tree
sizes to test how well V8 performs when the working set is increased:

The graph shows that V8 scales well to large object heaps, and that increasing the
working set by more than a factor of 7 leads to a performance drop of less than 17%.
Even though 35 MB is more memory than most web applications use today, it is
necessary to support such working sets to enable tomorrow's web applications.

Google also say they’re using a “multi-process design” which they say means “a bit
more memory up front” but over time also “less memory bloat.” When web pages or
plug-ins do use a lot of memory, you can spot them in Chrome’s task manager, “placing
blame where blame belongs.”

If this is true and there's a process manager that allows you to see how many
resources are being consumed by a particular browser tab (including plugins!), this will
be a 100% killer browser feature.

It simply isn't possible to implement with current browser architectures, which brings up
two points: 1) browsers haven't tackled it because of the extreme amount of code
rewriting it would require, and 2) there's a general consensus that this architecture will
actually consume more resources than the current architectures.

This is important. Since there's no sharing going on between the tabs of the browser, it's
not possible to easily reduce the amount of duplicate resources. For example, within the
Mozilla Gecko engine there's a lot of code reuse occurring, which allows for significantly
reduced memory consumption (and optimized memory collection and defragmentation).

But here's the rub.

The blame of bad performance or memory consumption no longer lies with the
browser but with the site.

By implementing this feature a browser is completely deflecting all memory or
performance criticism off to individual site owners ("Yikes, my browser is using 300MB
of memory! Actually it's just youtube.com consuming 290MB of it, they should fix their
web site!"). This is going to be a monumental shift in the responsibilities of web
developers - and one that will serve the web better, as a whole.

Of course there will still be overhead associated with the core browser - but,
presumably, this will be marginal.

This is an incredibly devious (in the best sense of the word) tactic and it's one that
browser vendors will be forced to respond to. How the response will occur is another
matter entirely.

Once the response occurs, though, two things will happen: Browsers will begin to
compete on reducing specific memory/performance numbers for specific sites (it
happens now - but with the numbers made obvious users will beg for it) and browsers
will be enticed to lie.

Since the browser is the new harbinger of the de-facto "accurate performance metrics"
(it's no longer the Windows process manager, for example) they'll have to take every
opportunity to exaggerate those numbers to their benefit.

On so many levels this new feature will change the way browsers are constructed and
how they communicate with the user. Even if the Google Chrome launch does nothing but
fall off the end of the runway in a fiery explosion, users will be intrigued, and the seed
will be planted: browsers must find a way to respond.

Update: A screenshot has been posted showing the task manager:

It's quite small (and, seemingly, quite spartan) but it appears to detail three properties of
every tab: CPU usage, memory usage, and network usage.

It's going to be fascinating to see what type of user-centric UIs come out of this. Tabs
that use a lot of CPU turn red? If they consume a lot of memory, they grow larger? It
seems like there are a bunch of ways in which the quality of the tabs could be
appropriately communicated.

WINDOWS 7

How to measure Windows 7 memory usage.


 

Here are the three tools I used: 

Task Manager You can open Task Manager by pressing Ctrl+Shift+Esc (or press Ctrl+Alt+Delete,
then click Start Task Manager). For someone who learned how to read memory usage in Windows XP,
the Performance tab will be familiar, but the data is presented very differently. The most important
values to look at are under the Physical Memory heading, where Total tells you how much physical
memory is installed (minus any memory in use by the BIOS or devices) and Available tells you how
much memory you can immediately use for a new process.
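
For a rough programmatic analogue of those two figures (a sketch, not a replacement for Task Manager), Node.js's built-in os module exposes the totals; note that os.freemem() is close to, but not exactly, the Available figure, which also counts reusable standby pages.

// Print physical memory totals via Node.js's built-in os module.
const os = require("os");

const toMB = bytes => (bytes / 1024 / 1024).toFixed(0) + " MB";
console.log("Total:    ", toMB(os.totalmem())); // installed physical memory
console.log("Free:     ", toMB(os.freemem()));  // approximates Available
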

 
Performance Monitor This is the old-school Windows geek’s favorite tool. (One big advantage it has
over the others is that you can save the data you collect for later review.) To run it, click Start, type
perfmon, and press Enter. To use it,
you must create a custom layout by adding “counters” that track resource usage over time. The
number of available counters, broken into more than 100 separate categories, is enormous; in
Windows 7 you can choose from more than 35 counters under the Memory heading alone, measuring
things like Transition Pages RePurposed/sec. For this exercise, I configured Perfmon to show
Committed Bytes and Available Bytes. The latter is the same as the Available figure in Task Manager.
I’ll discuss Committed Bytes in more detail later.
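
If you would rather sample those counters from a script than click through the Perfmon UI, Windows' built-in typeperf utility accepts the same counter paths. A minimal sketch, shelling out from Node.js (five one-second samples; adjust to taste):

// Sample the two Memory counters discussed above via typeperf.
const { execFile } = require("child_process");

execFile(
  "typeperf",
  ["\\Memory\\Available Bytes", "\\Memory\\Committed Bytes", "-sc", "5"],
  (err, stdout) => {
    if (err) throw err;
    console.log(stdout); // CSV output: one row per sample of both counters
  }
);
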
Resource Monitor The easy way to open this tool is by clicking the button at the bottom of the
Performance tab in Task Manager. Resource Monitor was introduced in Windows Vista, but it has
been completely overhauled for Windows 7 and displays an impressive amount of data, drawn from
the exact same counters as Perfmon without requiring you to customize anything. The Memory tab
shows how your memory is being used, with detailed information for each process and a colorful
Physical Memory bar graph to show exactly what’s happening with your memory. I believe this is by
far the best tool for understanding at a glance where your memory is being used.

You can go through the entire gallery to see exactly how each tool works. I ran these tests on a local
virtual machine, using 1 GB of RAM as a worst-case scenario. If you have more RAM than that, the
basic principles will be the same, but you’ll probably see more Available memory under normal usage
scenarios. As you’ll see in the gallery, I went from an idle system to one running a dozen or so
processes, then added in some intensive file operations, a software installation, and some brand-new
processes before shutting everything down and going back to an idle system.

Even on a system with only 1 GB of RAM, I found it difficult to exhaust all physical memory. At one
point I had 13 browser tabs open (including one playing a long Flash video clip); at the same time I
had opened a 1000-page PDF file in Acrobat Reader and a 30-page graphically intense document in
Word 2010, plus Outlook 2010 downloading mail from my Exchange account, a few open Explorer
windows, and a handful of background utilities running. And, of course, three memory monitoring
tools. Even with that workload, I still had roughly 10% of physical RAM available.

So why do people get confused over memory usage? One of the biggest sources of confusion, in my
experience, is the whole concept of virtual memory compared to physical memory. Windows organizes
memory, physical and virtual, into pages. Each page is a fixed size (typically 4 KB on a Windows
system). To make things more confusing, there’s also a page file (sometimes referred to as a paging
file). Many Windows users still think of this as a swap file, a bit of disk storage that is only called into
play when you absolutely run out of physical RAM. In modern versions of Windows, that is no longer
the case. The most important thing to realize is that physical memory and the page file added
together equal the commit limit, which is the total amount of virtual memory that all processes can
reserve and commit. You can learn more about virtual memory and page files by reading Mark
Russinovich’s excellent article Pushing the Limits of Windows: Virtual Memory.
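
A quick worked example of that arithmetic, using illustrative numbers rather than anything measured:

// Commit limit = physical memory + page file (illustrative values).
const GB = 1024 ** 3;
const physicalMemory = 1 * GB;   // installed RAM
const pageFile = 1.5 * GB;       // configured page file size

const commitLimit = physicalMemory + pageFile;
console.log("commit limit:", commitLimit / GB, "GB"); // 2.5 GB

// Every committed page counts against this limit, whether or not it is
// currently backed by a physical page.
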
As I was researching this post, I found a number of articles at Microsoft.com written around the time
Windows 2000 and Windows XP were released. Many of them talk about using the Committed Bytes
counter in Perfmon to keep an eye on memory usage. (In Windows 7, you can still do that, as I’ve
done in the gallery here.) The trouble is, Committed Bytes has only the most casual relationship to
actual usage of the physical memory in your PC. As Microsoft developer Brandon Paddock noted in his
blog recently, the Committed Bytes counter represents:
The total amount of virtual memory which Windows has promised could be backed by either physical
memory or the page file.
An important word there is “could.” Windows establishes a “commit limit” based on your available
physical memory and page file size(s).  When a section of virtual memory is marked as “commit” –
Windows counts it against that commit limit regardless of whether it’s actually being used. 

On a typical Windows 7 system, the amount of memory represented by the Committed Bytes counter
is often well in excess of the actual installed RAM, but that shouldn’t have an effect on performance.
In the scenarios I demonstrate here, with roughly 1 GB of physical RAM available, the Committed
Bytes counter never dropped below about 650 MB, even though physical RAM in use was as low as
283 MB at one point. And ironically, on the one occasion when Windows legitimately used almost all
available physical RAM, using a little more than 950 MB of the 1023 MB available, the Committed
Bytes counter remained at only 832 MB.

So why is watching Committed Bytes important? You want to make sure that the amount of
committed bytes never exceeds the commit limit. If that happens regularly, you need either a bigger
page file, more physical memory, or both.
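
One convenient way to automate that check is the "% Committed Bytes In Use" counter, which expresses Committed Bytes as a percentage of the commit limit. A minimal sketch, again via typeperf:

// Values approaching 100% mean you are close to the commit limit.
const { execFile } = require("child_process");

execFile(
  "typeperf",
  ["\\Memory\\% Committed Bytes In Use", "-sc", "1"],
  (err, stdout) => {
    if (err) throw err;
    console.log(stdout); // near 100 => grow the page file or add RAM
  }
);
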

Watching the color-coded Physical Memory bar graph on the Memory tab of Resource Monitor is by far
the best way to see exactly what Windows 7 is up to at any given time. Here, from left to right, is
what you’ll see:
Hardware Reserved (gray) This is physical memory that is set aside by the BIOS and other
hardware drivers (especially graphics adapters). This memory cannot be used for processes or system
functions.
In Use (green) The memory shown here is in active use by the Windows kernel, by running
processes, or by device drivers. This is the number that matters above all others. If you consistently
find this green bar filling the entire length of the graph, you’re trying to push your physical RAM
beyond its capacity.
Modified (orange) This represents pages of memory that can be used by other programs but would
have to be written to the page file before they can be reused.
Standby (blue) Windows 7 tries as hard as it can to keep this cache of memory as full as possible. In
XP and earlier, the Standby list was basically a dumb first-in, first-out cache. Beginning with Windows
Vista and continuing with Windows 7, the memory manager is much smarter about the Standby list,
prioritizing every page on a scale of 0 to 7 and reusing low-priority pages ahead of high-priority ones.
(Another Russinovich article, Inside the Windows Vista Kernel: Part 2, explains this well. Look for the
“Memory Priorities” section.) If you start a new process that needs memory, the lowest-priority pages
on this list are discarded and made available to the new process.
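
As a toy model of that behavior (illustrative only; the real memory manager works on physical pages, not JavaScript objects), the Standby list can be pictured as a pool of cached pages tagged with priorities, where a new demand for memory repurposes the lowest-priority page first:

// Prioritized standby list, modeled loosely. Page names are made up.
const standby = []; // entries of the form { page, priority }

function addToStandby(page, priority) {
  standby.push({ page, priority }); // priority runs 0 (low) to 7 (high)
}

function takePageForNewProcess() {
  if (standby.length === 0) return null; // would fall back to the free list
  // Repurpose the lowest-priority page first, as Vista and 7 do.
  let lowest = 0;
  for (let i = 1; i < standby.length; i++) {
    if (standby[i].priority < standby[lowest].priority) lowest = i;
  }
  return standby.splice(lowest, 1)[0].page;
}

addToStandby("prefetched-app-code", 5);
addToStandby("old-file-cache", 1);
console.log(takePageForNewProcess()); // "old-file-cache" is discarded first
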
