
Free and open source software

Free and open-source software (F/OSS, FOSS) or free/libre/open-source software (FLOSS) is software that is liberally licensed to grant users the right to use, study, change, and improve its design through the availability of its source code. This approach has gained both momentum and acceptance as the potential benefits have been increasingly recognized by both individuals and corporations.

In the context of free and open-source software, free refers to the freedom to copy and re-use the software, rather than to the price of the software. The Free Software Foundation, an organization that advocates the free software model, suggests that, to understand the concept, one should "think of free as in free speech, not as in free beer."

FOSS is an inclusive term that covers both free software and open source software, which, despite describing similar development models, have differing cultures and philosophies. Free software focuses on the philosophical freedoms it gives to users, whereas open source software focuses on the perceived strengths of its peer-to-peer development model. FOSS is a term that can be used without particular bias towards either political approach. Software which is both gratis and free software may be called gratis/libre/open-source software (GLOSS).

Free software licences and open source licenses are used by many software packages. While the licenses themselves are in most cases the same, the two terms grew out of different philosophies and are often used to signify different distribution methodologies.[6]

Contents
1 History
  1.1 Naming
    1.1.1 Free software
    1.1.2 Open source
    1.1.3 FOSS
    1.1.4 FLOSS
2 Dualism of FOSS
  2.1 Beyond copyright
  2.2 Future economics of FOSS
3 Adoption by governments
4 See also
5 Notes
6 References
7 External links

History
In the 1950s, 1960s, and 1970s, it was normal for computer users to have the freedoms that are provided by free software. Software was commonly shared by individuals who used computers, and most companies were so concerned with selling their hardware devices that they provided the software for free.[7] Organizations of users and suppliers were formed to facilitate the exchange of software; see, for example, SHARE and DECUS.

By the late 1960s change was inevitable: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturer's bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers able to better meet their own needs did not want the costs of "free" software bundled with hardware product costs. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anticompetitive.[8] While some software might always be free, there would be a growing amount of software that was for sale only. In the 1970s and early 1980s, the software industry began using technical measures (such as only distributing binary copies of computer programs) to prevent computer users from being able to study and customize software they had bought using reverse engineering techniques. In 1980 the copyright law (Pub. L. No. 96-517, 94 Stat. 3015, 3028) was extended to computer programs in the United States.[9]

In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users.[10] Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. An article outlining the project and its goals, titled the GNU Manifesto, was published in March 1985; the manifesto also focused heavily on the philosophy of free software. Stallman developed the Free Software Definition and the concept of "copyleft", designed to ensure software freedom for all.

The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The initial licence was not exactly a free software licence, but with version 0.12 in February 1992, Torvalds relicensed the project under the GNU General Public License.[11] Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers.

In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free software principles. The paper received significant attention in early 1998, and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software. This code is today better known as the basis of Mozilla Firefox and Thunderbird. Netscape's act prompted Raymond and others to look into how to bring free software principles and benefits to the commercial software industry. They concluded that FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the free software movement to emphasize the business potential of sharing source code. The new name they chose was "open source", and quickly Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others signed on to the rebranding. The Open Source Initiative was founded in February 1998 to encourage use of the new term and evangelize open source principles.[12]

While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, corporations found themselves increasingly threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business."[13] This view summarizes the initial response to FOSS by much of big business. However, while FOSS has historically played a role outside of the mainstream of private software development, companies as large as Microsoft have begun to develop official open source presences on the Internet. Corporations such as IBM, Oracle, Google and State Farm are just a few of the big names with a serious public stake in today's competitive open source market, signalling a shift in the corporate philosophy concerning the development of free-to-access software.[14]

Free software

The Free Software Definition, written by Richard Stallman and published by the Free Software Foundation (FSF), defines free software as a matter of liberty, not price.[15] The earliest known publication of the definition was in the February 1986 edition[16] of the now-discontinued GNU's Bulletin publication of the FSF. The canonical source for the document is in the philosophy section of the GNU Project website. As of April 2008, it is published there in 39 languages.[17]

Open source

The Open Source Definition is used by the Open Source Initiative to determine whether a software license can be considered open source. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens.[18][19] Perens did not base his writing on the four freedoms of free software from the Free Software Foundation, which were only widely available later.[20]

FOSS

The first known use of the phrase free open source software on Usenet was in a posting on 18 March 1998, just a month after the term open source itself was coined. In February 2002, FOSS appeared on a Usenet newsgroup dedicated to Amiga computer games. In early 2002, MITRE used the term FOSS in what would later be their 2003 report Use of Free and Open Source Software (FOSS) in the U.S. Department of Defense.

FLOSS

The acronym FLOSS was coined in 2001 by Rishab Aiyer Ghosh for free/libre/open source software. Later that year, the European Commission (EC) used the phrase when it funded a study on the topic.[23]

Unlike libre software, which aimed to solve the ambiguity problem, FLOSS aimed to avoid taking sides in the debate over whether it was better to say "free software" or "open source software". Proponents of the term point out that parts of the FLOSS acronym can be translated into other languages, with, for example, the F representing free (English) or frei (German), and the L representing libre (Spanish or French), livre (Portuguese), libero (Italian), liber (Romanian), and so on. However, this term is not often used in official, non-English documents, since the words in these languages for free as in freedom do not have the ambiguity problem of free in English. By the end of 2004, the FLOSS acronym had been used in official English documents issued by South Africa, Spain, and Brazil.

The terms "FLOSS" and "FOSS" have come under some criticism for being counterproductive and sounding silly. For instance, Eric Raymond, co-founder of the Open Source Initiative, has stated: "Near as I can figure ... people think they'd be making an ideological commitment ... if they pick 'open source' or 'free software'. Well, speaking as the guy who promulgated 'open source' to abolish the colossal marketing blunders that were associated with the term 'free software', I think 'free software' is less bad than 'FLOSS'. Somebody, please, shoot this pitiful acronym through the head and put it out of our misery." Raymond quotes programmer Rick Moen as stating "I continue to find it difficult to take seriously anyone who adopts an excruciatingly bad, haplessly obscure acronym associated with dental hygiene aids" and "neither term can be understood without first understanding both free software and open source, as prerequisite study."

Dualism of FOSS
While the Open Source Initiative includes free software licenses as part of its broader category of approved open source licenses, the Free Software Foundation sees free software as distinct from open source. The key differences between the two are their approaches to copyright and attribution in the context of usage. The primary obligation of users of traditional open source licenses such as BSD is limited to attribution that clearly identifies the copyright owner of the software. Such a license is focused on giving developers who wish to redistribute the software the greatest level of flexibility. Users who do not wish to redistribute the software in any form are under no obligation. Developers can modify the software and redistribute it either as source or as part of a larger, possibly proprietary, derived work, provided the original attribution is intact. These attributions throughout the distribution chain ensure the owners' copyrights are maintained.

The primary obligation of users of free software licenses such as the GPL is to preserve the rights of other users under the terms of the license. Such a license is focused on ensuring that users' rights to access and modify the software cannot be denied by developers who redistribute the software. The only way to accomplish this is by restricting the rights of developers to include free software in larger, derived works unless those works share the same free software license. Free software uses copyright to enforce compliance with the software license. To strengthen its legal position, the Free Software Foundation asks developers to assign copyright to the Foundation when using the GPL license. From a user's (non-distributor's) perspective, both free software and open source can be treated as effectively the same thing and referred to with the inclusive term FOSS. From a developer's (distributor's) perspective, free and open source software are distinct concepts with much different legal implications.

Beyond copyright:
While copyright is the primary legal mechanism that FOSS authors use to control usage and distribution of their software, other mechanisms such as legislation, patents, and trademarks have implications as well. In response to legal issues with patents and the DMCA, the Free Software Foundation released version 3 of its GNU General Public License in 2007, which explicitly addressed the DMCA and patent rights. As author of the GCC compiler software, the FSF also exercised its copyright and changed the GCC license to GPLv3. Apple, Inc., a user of GCC and a heavy user of both DRM and patents, is speculated to have switched the compiler in its Xcode IDE from GCC to the open source Clang compiler because of this change. The Samba project also switched to the GPLv3 in a recent version of its free Windows-compatible network software; in this case, Apple replaced Samba with closed-source, proprietary software, a net loss for the FOSS movement as a whole.

Some of the most popular FOSS projects are owned by corporations that, unlike the FSF, use both patents and trademarks to enforce their rights. In August 2010, Oracle sued Google, claiming that its use of open source Java infringed on Oracle's patents. Oracle acquired those patents with its acquisition of Sun Microsystems in January 2010. Sun had, itself, acquired MySQL in 2008. This made Oracle the owner of both the most popular proprietary database and the most popular open source database. Oracle's attempts to commercialize the open source MySQL database have raised concerns in the FOSS community. In response to uncertainty about the future of MySQL, the FOSS community used MySQL's GPL license to fork the project into a new database outside of Oracle's control. This new database, however, can never be called MySQL, because Oracle owns the trademark for that name.

Definition of FOSS: Acronym for Free or Open Source Software. FOSS programs are those that have licenses that allow users to freely run the program for any purpose, modify the program as they want, and also to freely distribute copies of either the original version or their own modified version.

One major reason for the growth and use of FOSS technology (including LAMP) is that users have access to the source, so it is much easier to fix faults and improve the applications. In combination with the open license, this simplifies the development process for many enterprises and gives them flexibility that simply isn't available within the confines of a proprietary or commercial product.
Definition for GNU:

GNU is short for "GNU's Not Unix", a recursive acronym. It is a free operating system whose development was begun in 1984 by Richard Stallman, an activist in free software. GNU is sponsored by the Free Software Foundation, a non-profit organization based in Boston, MA, USA. The GCC compiler for C and C++ is part of GNU. There is also dotGNU, an open source replacement for Microsoft's .NET.

History of GNU/Linux:
The plan for the GNU operating system was publicly announced on September 27, 1983, on the net.unix-wizards and net.usoft newsgroups by Richard Stallman. Software development began on January 5, 1984, when Stallman quit his job at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory so that MIT could not claim ownership or interfere with distributing GNU as free software. Richard Stallman chose the name by using various plays on words, including the song "The Gnu". The goal was to bring a wholly free software operating system into existence. Stallman wanted computer users to be "free", as most were in the 1960s and 1970s: free to study the source code of the software they use, free to share the software with other people, free to modify the behaviour of the software, and free to publish their modified versions of the software. This philosophy was later published as the GNU Manifesto in March 1985.

Richard Stallman's experience with the Incompatible Timesharing System (ITS), an early operating system written in assembly language that became obsolete with the discontinuation of the PDP-10, the computer architecture for which ITS was written, led to a decision that a portable system was necessary. It was thus decided that GNU would be mostly compatible with Unix. At the time, Unix was already a popular proprietary operating system. The design of Unix had proven to be solid, and it was modular, so it could be re-implemented piece by piece.

Much of the needed software had to be written from scratch, but existing compatible third-party free software components were also used, such as the TeX typesetting system, the X Window System, and the Mach microkernel that forms the basis of the GNU Mach core of GNU Hurd (the official kernel of GNU). With the exception of these third-party components, most of GNU has been written by volunteers of the GNU Project; some in their spare time, some paid by companies, educational institutions, and other non-profit organizations. In October 1985, Stallman set up the Free Software Foundation (FSF). In the late 1980s and 1990s, the FSF hired software developers to write the software needed for GNU.

As GNU gained prominence, interested businesses began contributing to development or selling GNU software and technical support. The most prominent and successful of these was Cygnus Solutions, now part of Red Hat.

History of the free software movement:


In the 1950s, 1960s, and 1970s, it was normal for computer users to have the software freedoms associated with free software. Software was commonly shared by individuals who used computers and by hardware manufacturers who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, for example SHARE, were formed to facilitate the exchange of software.

By the late 1960s, the picture changed: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturer's bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers able to better meet their own needs did not want the costs of "free" software bundled with hardware product costs. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anticompetitive. While some software might always be free, there would be a growing amount of software that was for sale only. In the 1970s and early 1980s, the software industry began using technical measures (such as only distributing binary copies of computer programs) to prevent computer users from being able to study and modify software. In 1980 copyright law was extended to computer programs.

In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. Stallman developed a free software definition and the concept of "copyleft", designed to ensure software freedom for all.

The economic viability of free software has been recognized by large corporations such as IBM, Red Hat, and Sun Microsystems.[10][11][12][13][14] Many companies whose core business is not in the IT sector choose free software for their Internet information and sales sites, due to the lower initial capital investment and the ability to freely customize the application packages. Also, some non-software industries are beginning to use techniques similar to those used in free software development for their research and development processes; scientists, for example, are looking towards more open development processes, and hardware such as microchips is beginning to be developed with specifications released under copyleft licenses (see the OpenCores project, for instance). Creative Commons and the free culture movement have also been largely influenced by the free software movement.

Advantages of free software:


People outside the free software movement frequently ask about the practical advantages of free software. It is a curious question. Nonfree software is bad because it denies your freedom. Thus, asking about the practical advantages of free software is like asking about the practical advantages of not being handcuffed.

Advantages of GNU/Linux: We have already covered security, stability, and other points, and we've mentioned the increased freedom you get with Linux. You get freedom from insecurity, freedom from instability, freedom from software audits and piracy charges, and freedom to customize your software. You are also free to change vendors -- what a concept in today's computing world! For example, if you opt to go with Red Hat Linux, but you later decide to switch to Ubuntu Linux, you can easily do it. You run the same free, stable Linux system, but with a different vendor producing and backing the software. Here are some other reasons to consider using GNU/Linux systems for your organization:

• An easy install. Modern GNU/Linux distributions are just as easy (or easier!) to install as any other modern operating system.
• Interoperability. Linux vendors do not try to force you to use only their software. GNU/Linux systems work seamlessly with other operating systems such as MacOS, Windows, and Unix.
• Linux is standards-based. Every part of a GNU/Linux system is based on open computing standards. No vendor is trying to invent its own secret, proprietary "standards" in an attempt to lock you into a specific platform.
• Your choice of vendors. As mentioned above, you can choose between many different GNU/Linux vendors. This competition forces vendors to earn your business.
• Your choice of graphical user interfaces (GUIs). Linux distributions give you a choice of desktop GUIs. Two popular ones, GNOME and KDE, are just as easy to use as Windows or other GUI operating systems. In fact, GNOME or KDE can be configured to look and work almost identically to Windows, but GNU/Linux also gives you more choices. For example, the Burlington Coat Factory chain of stores is using one Linux GUI to create an easy-to-use gift registry for its customers. GNU/Linux doesn't force you into one way of doing things.
• Remote desktop support. GNU/Linux adopted the X windowing system, which was developed at the Massachusetts Institute of Technology (MIT). This GUI allows you to easily have a remote desktop on a machine located half a world away -- all without buying expensive third-party software to do this!
• Graphical terminal support. Using the X windowing system mentioned above, with GNU/Linux you can use inexpensive graphical terminals instead of buying a full-blown PC for every user in your office. This will not only save in terms of hardware costs, but also allow you to centrally administer all desktops from one server.
• Linux runs on different hardware. GNU/Linux systems run on many types of computers: not only traditional Intel and AMD personal computers, but also Sun SPARC systems, PowerPCs, Apple computers, and a huge range of IBM computers -- right on up to IBM's largest mainframe computers. While you may never want to explore these options, isn't it nice to know that you have multiple options?
• Linux runs efficiently. Because of the efficiency of the free software development process, GNU/Linux can be happily run on older computers. Your old Pentium computer can be brought back into service as a useful machine. Even ancient 386 or 486 computers can be used, either as a lightweight GNU/Linux system or perhaps as a graphical terminal (see above).
• Linux is multi-user. Unlike operating systems (OSs) built from single-user systems (e.g. Windows, MacOS) which have had band-aids applied to make them appear to be multi-user systems, Linux has the multi-user Unix system as its model and seamlessly handles multiple people using a single machine.
• Linux evolved on the Internet. Linux has been modeled after the Unix operating system, a system which evolved on college campuses and in the hostile computing environment of academia. Linux itself was also born in and evolved in a hostile environment -- the Internet. Linux's rough-and-tumble neighborhood means that you benefit from increased security and stability.
• Linux is easy to network. Because of Linux's birth and evolution on the Internet, GNU/Linux systems network easily, "naturally" and seamlessly.

Definition of Open-source software (OSS):


It is computer software that is available in source code form: the source code and certain other rights normally reserved for copyright holders are provided under a software license that permits users to study, change, improve and at times also to distribute the software. Open source software is very often developed in a public, collaborative manner. Open-source software is the most prominent example of open-source development and is often compared to (technically defined) user-generated content or (legally defined) open content movements. A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers.

Linux distribution:
It is a member of the family of Unix-like operating systems built on top of the Linux kernel. Such distributions (often called distros for short) are operating systems including a large collection of software applications such as word processors, spreadsheets, media players, and database applications. These operating systems consist of the Linux kernel and, usually, a set of libraries and utilities from the GNU Project, with graphics support from the X Window System. Distributions optimized for size may not contain X and tend to use more compact alternatives to the GNU utilities, such as BusyBox, uClibc, or dietlibc. There are currently over six hundred Linux distributions. Over three hundred of those are in active development, constantly being revised and improved. Because most of the kernel and supporting packages are free and open source software, Linux distributions have taken a wide variety of forms, from fully featured desktop, server, laptop, netbook, mobile phone, and tablet operating systems to minimal environments (typically for use in embedded systems or for booting from a floppy disk). Aside from certain custom software (such as installers and configuration tools), a distribution is most simply described as a particular assortment of applications installed on top of a set of libraries married with a version of the kernel, such that its "out-of-the-box" capabilities meet most of the needs of its particular end-user base. One can distinguish between commercially backed distributions, such as Fedora (Red Hat), openSUSE (Novell), Ubuntu (Canonical Ltd.), and Mandriva Linux (Mandriva), and entirely community-driven distributions, such as Debian and Gentoo.

Logging in: Logging in is usually used to enter a specific page that unauthorized visitors cannot see. Once the user is logged in, the login token may be used to track what actions the user has taken while connected to the site. Logging out may be performed explicitly by the user taking some action, such as entering the appropriate command or clicking a website link labeled as such. It can also be done implicitly, such as by the user powering off his or her workstation, closing a web browser window, leaving a website, or not refreshing a webpage within a defined period. In the case of web sites that use cookies to track sessions, when the user logs out, session-only cookies from that site will usually be deleted from the user's computer. In addition, the server invalidates any associations with the session, making any session-handle in the user's cookie store useless. This feature comes in handy if the user is using a public computer or a computer that is using a public wireless connection. As a security precaution, one should not rely on implicit means of logging out of a system, especially not on a public computer; instead one should explicitly log out and wait for confirmation that this request has taken place. Logging out of a computer when leaving it is a common security practice, preventing unauthorized users from tampering with it. There are also people who choose to have a password-protected screensaver set to activate after some period of inactivity, requiring the user to re-enter their login credentials to unlock the screensaver and gain access to the system. Logging in can be done by different methods, such as an image, fingerprints, an eye scan, or a password (oral or textual input).

Listing files:
This is a list of file formats organized by type, as can be found on computers. Filename extensions are usually noted in parentheses if they differ from the format name or abbreviation. In theory, using the basic Latin alphabet (A–Z) and a three-character extension, the number of combinations amounts to 17,576 (26³). If other acceptable characters are included, the maximum number of combinations is 195,112. Unix-like systems do not restrict extensions, and Microsoft Windows NT, 95, 98, and Me don't have a three-character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95/Windows NT 3.5 versions of the FAT file system, so some file formats are given extensions longer than three characters.
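
On a GNU/Linux system, listing files and checking their actual formats is done from the shell; a minimal sketch (the file names here are only examples):

$ ls -l                      # long listing: permissions, owner, size, date and name
$ ls -a                      # also show hidden files (names beginning with a dot)
$ file report.pdf notes.txt  # guess each file's format from its contents, not its extension

The file command is useful precisely because, as noted above, Unix-like systems do not rely on the extension to decide what a file is.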

Editing files:
There are times when you will need to edit the WordPress files, especially if you want to make changes in your WordPress Theme. WordPress features a built-in editor for editing files from within your browser
whilst online: The Theme Editor. You can also edit files copied or stored on your computer and then upload them to your site using an FTP client.

Examples:

• Microsoft Word
• WordPerfect
• OpenOffice
• Apple iWork Pages
• Microsoft Publisher
• Microsoft Works
• Microsoft Excel
• Adobe Photoshop
• Adobe Illustrator
• Adobe Dreamweaver

Any do-it-yourself instant web page software.

Copying:
It is the duplication of information or an artifact based only on an instance of that information or artifact, and not using the process that originally generated it. With analog forms of information, copying is only possible to a limited degree of accuracy, which depends on the quality of the equipment used and the skill of the operator. There is some inevitable deterioration and accumulation of "noise" (random small changes, not sound) from original to copy; when successive generations of copies are made, this deterioration accumulates with each generation. With digital forms of information, copying is perfect. Copy and paste is frequently used to duplicate information that a computer user has selected and place it wherever he or she wishes.


Most high-accuracy copying techniques use the principle that there will be only one type of possible interpretation for each reading of data, and only one possible way to write an interpretation of data.
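
On a GNU/Linux system, file copying of the digital, loss-free kind described above is typically done with the cp command; a minimal sketch (the file and directory names are only examples):

$ cp notes.txt notes-backup.txt     # copy a single file
$ cp -r projects/ projects-backup/  # copy a whole directory tree recursively
$ cp -p notes.txt /mnt/backup/      # preserve mode, ownership and timestamps while copying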

Moving files:
Many files have been uploaded to Wikipedia. A long-term project under way is to move free content files, including images and audio, to the Wikimedia Commons. The Commons provides a central location for files for use on all Wikimedia Foundation projects. Below are issues to consider when carrying out moves to the Commons. Commons employs more restrictive policies on copyright issues than the English Wikipedia does; for instance, fair use images such as most images of album cover art cannot be hosted on Commons. Commons accepts only free content. Note that unlike Wikipedia, which only requires files to be free use in the United States, Commons requires files to be free use in both the United States and the country where the file originated (so a French painting must be free use in both France and the United States).

• Do not transfer files without a clear and verifiable source.
• "Own work" is an acceptable source; however, you should be aware that a great deal of files labeled as own work, possibly as high as a quarter of those identified as such, are not actually the work of the uploader. Several things contribute to this, including a lack of familiarity with copyright and the fact that "Own work" is an easily reachable default in the upload wizard.
• The most common mistake involving claims of own work involves people photographing the artistic work of other people. Users think "I took the photo of that painting, the photo is my own work", not knowing that the painting itself is also under copyright, and that said copyright also comes into play when they upload. If you come across such a situation, check to see if the piece of art's copyright has expired. If so, you can add a second template, usually PD-old-100, and mark which template applies to the painting and which applies to the photograph of the painting. The file description page of a Featured Sound illustrates how to do this.
• If the source is a website, check to see if the link is still active. If it isn't, it might be simple to re-establish a link, but if no link can be found, it might not be a good idea to transfer the file over.
• If there are multiple authors, each must be cited. If the file is a derivative of another file on a WMF project, that file must be sourced, and the new file's license must follow the guidelines set by the old file's license (i.e. if a file is released under a Creative Commons Share Alike license, a derivative of that file must also be licensed with at least a Creative Commons Share Alike license).
• Other WMF projects cannot be used as sources. Track down the source from the upload on the other project, and use that instead.

Sometimes you'll be able to look at a claim that something's someone's own work and know that it is false, or at least questionable. If your gut feeling says that something isn't right, there's a good chance that something isn't right.

Do not transfer files with unclear or inaccurate copyright templates.

• If there's no copyright status template on the page, don't move it over, period. Much of the time, you'll be able to figure out what template should be used by reading the text on the file description page, in which case you can put in the proper template and then transfer it over. That's perfectly okay.

Viewing file contents

Some files are compiled or written so that they can only be opened with a specific program. The examples below are for opening a plain text file. If you are unable to read the file, or it appears to be garbage, gibberish, or encrypted when opened, it must be opened with the appropriate application. Make sure the program you need to open the file with is installed on the computer and that it is associated with the file. If you're not sure what program is used to open the file, determine the file extension and review our file extension page for a listing of associated programs. Below is a listing of how to view the contents of a standard file for each of the major PC operating systems.

Microsoft Windows users:




Double-click the file that you wish to open. If the file is an un-associated file type, you will receive an "Open With" window. If you are unfamiliar with what program to use to open this file, try using WordPad or Notepad to view the file.
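
On GNU/Linux and other Unix-like systems, the same task is done from the shell; a minimal sketch (the file names are only examples):

$ file mystery.dat      # report what kind of file this appears to be
$ cat notes.txt         # print a short text file to the terminal
$ less notes.txt        # page through a longer file; press q to quit
$ head -n 20 notes.txt  # show only the first 20 lines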

Changing file modes and permissions:


The chmod utility modifies the file mode bits of a file as specified by the mode operand. To change the mode of a file, you must have one of the following authorities:

• The current user has *ALLOBJ special authority.
• The current user is the owner of the file.

By default, chmod follows symbolic links and changes the mode on the file pointed to by the symbolic link. Symbolic links do not have modes, so using chmod on a symbolic link always succeeds and has no effect.

The -H, -L and -P options are ignored unless the -R option is specified. In addition, these options override each other and the command's actions are determined by the last one specified. Note that chmod changes the OS/400 data authorities for an object. Use the CHGAUT CL command to change the OS/400 object authorities for an object.

Options:

-H  If the -R option is specified, symbolic links on the command line are followed. Symbolic links encountered in the tree traversal are not followed. Since symbolic links do not have modes, chmod has no effect on the symbolic links.
-L  If the -R option is specified, both symbolic links on the command line and symbolic links encountered in the tree traversal are followed.
-P  If the -R option is specified, no symbolic links are followed. Since symbolic links do not have modes, chmod has no effect on the symbolic links.
-R  If file designates a directory, chmod changes the mode of each file in the entire subtree connected at that point.
-h  Do not follow symbolic links. Since symbolic links do not have modes, chmod has no effect on the symbolic links.
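
A few illustrative invocations using the options described above (a minimal sketch; the file and directory names are only examples, and the *ALLOBJ authority and CHGAUT command mentioned above are specific to the OS/400 environment this description was written for):

$ chmod u+x build.sh           # give the owner execute permission
$ chmod go-w report.txt        # remove write permission from group and others
$ chmod 644 report.txt         # the same idea in octal: rw- for the owner, r-- for group and others
$ chmod -R 755 public_html     # apply rwxr-xr-x to a directory and its entire subtree
$ chmod -R -L 755 public_html  # as above, but also follow symbolic links met during the traversal
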
Process management: It is the ensemble of activities of planning and monitoring the performance of a process. The term usually refers to the management of business processes and manufacturing processes. Business process management (BPM) and business process reengineering are interrelated, but not identical.
Process management is the application of knowledge, skills, tools, techniques and systems to define, visualize, measure, control, report and improve processes with the goal of meeting customer requirements profitably. It can be differentiated from program management in that program management is concerned with managing a group of inter-dependent projects. But from another viewpoint, process management includes program management. In project management, process management is the use of a repeatable process to improve the outcome of the project.

Managing Groups:
Groups serve to simplify the assignment of rights. Ordinary privileges must be granted to a single user, one at a time. This can be tedious if several users need to be assigned the same access to a variety of database objects. Groups are created to avoid this problem. A group simply requires a name, and can be created empty (without users). Once created, users who are intended to share common access privileges are added into the group together, and are henceforth associated by their membership in that group. Rights on database objects are then granted to the group, rather than to each member of the group. For a system with many users and databases, groups make managing rights less of an administrative chore. Note: Users may belong to any number of groups, or no groups at all.
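
For example, once a group such as the sales group created in the next section exists, a right granted to the group covers every member in a single statement; a minimal sketch (the books table is hypothetical here, and the exact message the server echoes back varies between PostgreSQL versions):

booktown=# GRANT SELECT, UPDATE ON books TO GROUP sales;
booktown=# REVOKE UPDATE ON books FROM GROUP sales;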

Creating and Removing Groups


Before you get started managing groups, you should first understand how to create and remove them from the system. Each of these procedures requires superuser privileges. See the section called "Managing Users" earlier in this chapter for more about superusers.

Creating a group

Any superuser may create a new group in PostgreSQL with the CREATE GROUP command. Here is the syntax for CREATE GROUP:
CREATE GROUP groupname [ WITH [ SYSID groupid ] [ USER username [, ...] ] ]

In this syntax, groupname is the name of the group that you wish to create. A group's name must start with an alphabetical character, and may not exceed 31 characters in length. Providing the WITH keyword allows either of the optional attributes to be specified. If you wish to specify the system ID to use for the new group, use the SYSID keyword to specify the groupid value. Use the USER keyword to include one or more users in the group at creation time. Separate usernames with commas. Additionally, the PostgreSQL user and group tables operate separately from each other. This separation does allow a user's usesysid and a group's grosysid to be identical within the PostgreSQL system.

The following example creates the sales group, and adds two users to it upon its creation. These users are allen and vincent (presumably, members of Book Town's sales department).

Example: Creating a group
booktown=# CREATE GROUP sales
booktown-#     WITH USER allen, vincent;
CREATE GROUP

The CREATE GROUP server message indicates that the group was created successfully. You may verify the creation of a group, as well as view all existing groups, with a query on the pg_group system table.

Example: Verifying a group
booktown=# SELECT * FROM pg_group;
  groname   | grosysid |   grolist
------------+----------+-------------
 sales      |        1 | {7017,7016}
 accounting |        2 |
 marketing  |        3 |
(3 rows)

Notice that the grolist column is an array, containing the PostgreSQL user ID of each user in the group. These are the same user IDs which can be seen in the pg_user view. For example:
booktown=# SELECT usename FROM pg_user
booktown-#     WHERE usesysid = 7017 OR usesysid = 7016;
 usename
---------
 allen
 vincent
(2 rows)

Removing a group

Any superuser may also remove a group with the DROP GROUP SQL command. You should exercise caution with this command, as it is irreversible, and you will not be prompted to verify the removal of the group (even if there are users still in the group). Unlike DROP DATABASE, DROP GROUP may be performed within a transaction block. Here is the syntax for DROP GROUP:

DROP GROUP groupname

The groupname is the name of the group to be permanently removed.

Example: Removing a group
booktown=# DROP GROUP marketing;
DROP GROUP

The DROP GROUP server message indicates that the group was successfully destroyed. Note that removing a group does not remove permissions placed on it, but rather "disembodies" them. Any permissions placed on a database object which have rights assigned to a dropped group will appear to be assigned to a group system ID, rather than to a group. Note: Inadvertently dropped groups can be restored to their previous functionality by creating a new group with the same system ID as the dropped group. This involves the SYSID keyword, as documented in the section called "Creating a group". If you assign group permissions to a table and then drop the group, the group permissions on the table will be retained. However, you will need to add the appropriate users to the newly recreated group for the table permissions to be effective for members of that group.
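
As a sketch of that recovery procedure, and assuming the marketing group dropped above originally had system ID 3 (as shown in the earlier pg_group listing), the group can be recreated with its old ID using the SYSID keyword:

booktown=# CREATE GROUP marketing WITH SYSID 3;
CREATE GROUP

Any users who should be members must then be re-added with ALTER GROUP, as described in the next section, before the retained table permissions apply to them again.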

Associating Users with Groups


Users are added to and removed from groups in PostgreSQL through the ALTER GROUP SQL command. Here is the syntax for the ALTER GROUP command:
ALTER GROUP groupname { ADD | DROP } USER username [, ... ]

The groupname is the name of the group to be modified, while the username is the name of the user to be added or removed, depending on whether the ADD or DROP keyword is specified.

Adding a user to a group

Suppose that Book Town hires two new sales associates, David and Ben, and gives them usernames david and ben, respectively. The following example uses the ALTER GROUP command to add these new users to the sales group.

Example: Adding a user to a group
booktown=# ALTER GROUP sales ADD USER david, ben;
ALTER GROUP

The ALTER GROUP server message indicates that the users david and ben were successfully added to the sales group. The following query to the pg_group table verifies the addition of those new users to the group. Note that there are now four system IDs in the grolist column for the sales group.

Example: Verifying user addition
booktown=# SELECT * FROM pg_group WHERE groname = 'sales';
 groname | grosysid |        grolist
---------+----------+-----------------------
 sales   |        1 | {7019,7018,7017,7016}
(1 row)

Removing a user from a group

Suppose that some time later David is transferred from sales to accounting. In order to maintain the correct group association, and to make sure that David does not have any rights granted exclusively to the sales group, his user (david) should be removed from that group.

Example: Removing a user from a group
booktown=# ALTER GROUP sales DROP USER david;
ALTER GROUP

The ALTER GROUP message indicates that the david user was successfully removed from the sales group. To complete his transition to the accounting department, David must then have his user added to the accounting group. The following statements use syntax similar to the preceding ones; the net effect is that the david user is added into the accounting group. This means that any special rights granted to this group will be implicitly granted to david for as long as he is a member of the group.
booktown=# ALTER GROUP accounting ADD USER david;
ALTER GROUP
booktown=# SELECT * FROM pg_group;
  groname   | grosysid |     grolist
------------+----------+------------------
 sales      |        1 | {7016,7017,7019}
 accounting |        2 | {7018}
(2 rows)

File ownerships and Permissions:


There are three specific permissions on Unix-like systems that apply to each class:

• The read permission, which grants the ability to read a file. When set for a directory, this permission grants the ability to read the names of files in the directory (but not to find out any further information about them such as contents, file type, size, ownership, permissions, etc.).
• The write permission, which grants the ability to modify a file. When set for a directory, this permission grants the ability to modify entries in the directory. This includes creating files, deleting files, and renaming files.
• The execute permission, which grants the ability to execute a file. This permission must be set for executable binaries (for example, a compiled C++ program) or shell scripts (for example, a Perl program) in order to allow the operating system to run them. When set for a directory, this permission grants the ability to traverse its tree in order to access files or subdirectories, but not see the content of files inside the directory (unless read is set).

The effect of setting the permissions on a directory (rather than a file) is "one of the most frequently misunderstood file permission issues"[7]. When a permission is not set, the rights it would grant are denied. Unlike ACL-based systems, permissions on a Unix-like system are not inherited. Files created within a directory will not necessarily have the same permissions as that directory.
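
A minimal sketch of how these permission classes appear in a long directory listing (the user, group, and file names are only examples):

$ ls -l deploy.sh docs README
-rwxr-x---  1 alice staff 4096 Jan  5 10:12 deploy.sh
drwxr-xr-x  2 alice staff 4096 Jan  5 10:15 docs
-rw-r--r--  1 alice staff  120 Jan  5 10:20 README

After the leading file-type character (d for a directory, - for a regular file), the next nine characters are the read/write/execute bits for the owner, the group, and everyone else, in that order: deploy.sh can be read, written and executed by alice, read and executed by members of staff, and not accessed at all by other users.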

Pluggable authentication modules (PAM): It is a mechanism to integrate multiple low-level authentication schemes into a high-level application programming interface (API). It allows programs that rely on authentication to be written independently of the underlying authentication scheme. PAM was first proposed by Sun Microsystems in an Open Software Foundation Request for Comments (RFC) dated October 1995. It was adopted as the authentication framework of the Common Desktop Environment. As a stand-alone infrastructure, PAM first appeared from an open-source development, Linux-PAM, in Red Hat Linux 3.0.4 in August 1996. PAM is currently supported in the AIX operating system, DragonFly BSD, FreeBSD, HP-UX, Linux, Mac OS X, NetBSD and Solaris. PAM was later standardized as part of the X/Open UNIX standardization process, resulting in the X/Open Single Sign-on (XSSO) standard. The XSSO standard differs from both the original RFC and from the Linux and Sun APIs, as well as from most other implementations, and the implementations themselves are not identical. For these and other reasons, OpenBSD has chosen to adopt BSD Authentication, an alternative authentication framework originally from BSD/OS.

Common Log File System (CLFS): It is a general-purpose logging subsystem that is accessible to both kernel-mode as well as user-mode applications for building high-performance transaction logs. It was introduced with Windows Server 2003 R2 and included in later Windows OSs. CLFS can be used for both data logging as well as for event logging. CLFS is used by TxF and TxR to store transactional state changes before they commit a transaction. The job of CLFS, like any other transactional logging system, is to record a series of steps required for some action so that they can be either played back accurately in the future to commit the transaction to secondary storage or undone if required. CLFS first marshals log records to in-memory buffers and then writes them to log files on secondary storage (stable media in CLFS terminology) for permanent persistence. When the data will be flushed to stable media is controlled by built-in policies, but a CLFS client application can override that and force a flush.

CLFS allows for customizable log formats, expansion and truncation of logs according to defined policies, as well as simultaneous use by multiple client applications. CLFS is able to store log files anywhere on the file system.[1] CLFS defines a device driver interface (DDI), via which drivers specific to the physical storage system plug in to the CLFS API. The CLFS driver implements the ARIES recovery algorithm; other algorithms can be supported by using custom drivers. CLFS supports both dedicated logs and multiplexed logs. A dedicated log contains a single stream of log records, whereas a multiplexed log contains multiple streams, each stream for a different application. Even though a multiplexed log has multiple streams, logs are flushed to the streams sequentially, in a single batch. CLFS can allocate space for a set of log records ahead of time (before the logs are actually generated) to make sure the operation does not fail due to lack of storage space. A log record in a CLFS stream is first placed into a Log I/O Block in a buffer in system memory. Periodically, blocks are flushed to stable storage devices. On the storage device, a log contains a set of Containers, which are allocated contiguously, each containing multiple Log I/O Blocks. New log records are appended to the present set. Each record is identified by a Log Sequence Number (LSN), an increasing sequence number. The LSN and other metadata are stored in the record header. The LSN encodes the identifier of the container, the offset to the record, and the identifier of the record; this information is used to access the log record subsequently. However, the container identifiers are logical identifiers; they must be mapped to physical containers. The mapping is done by CLFS itself.

Configuring Networking
The procedures in this section describe how to configure networking resources that are available in the Fabric workspace of System Center 2012 Virtual Machine Manager (VMM). Networking in System Center 2012 Virtual Machine Manager includes several enhancements that enable administrators to efficiently provision network resources for a virtualized environment. The networking enhancements include the following:
• The ability to create and define logical networks
• Static IP address and static MAC address assignment
• Load balancer integration

Logical Networks:
A logical network together with one or more associated network sites is a user-defined named grouping of IP subnets, VLANs, or IP subnet/VLAN pairs that is used to organize and simplify network
assignments. Some possible examples include BACKEND, FRONTEND, LAB, MANAGEMENT and BACKUP. Logical networks represent an abstraction of the underlying physical network infrastructure which enables you to model the network based on business needs and connectivity properties. After a logical network is created, it can be used to specify the network on which a host or a virtual machine (stand-alone or part of a service) is deployed. Users can assign logical networks as part of virtual machine and service creation without having to understand the network details. You can use logical networks to describe networks with different purposes, for traffic isolation and to provision networks for different types of service-level agreements (SLAs). For example, for a tiered application, you may group IP subnets and VLANs that are used for the front-end Web tier as the FRONTEND logical network. For the IP subnets and VLANs that are used for backend servers such as the application and database servers, you may group them as BACKEND. When a self-service user models the application as a service, they can easily pick the logical network for virtual machines in each tier of the service to connect to. At least one logical network must exist for you to deploy virtual machines and services. By default, when you add a Hyper-V host to VMM management, VMM automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.

TCP/IP model: TCP/IP (Transmission Control Protocol/Internet Protocol) is a descriptive framework for the Internet Protocol Suite of computer network protocols created in the 1970s by DARPA, an agency of the United States Department of Defense. It evolved from ARPANET, which was an early wide area network and a predecessor of the Internet. The TCP/IP model is sometimes called the Internet Model or, less often, the DoD Model. The TCP/IP model describes a set of general design guidelines and implementations of specific networking protocols to enable computers to communicate over a network. TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination. Protocols exist for a variety of different types of communication services between computers. TCP/IP has four abstraction layers as defined in RFC 1122. This layer architecture is often compared with the seven-layer OSI Reference Model; the use of terms such as Internet reference model for it is incorrect, however, because the TCP/IP model is descriptive while the OSI Reference Model was intended to be prescriptive, hence being a reference model. The TCP/IP model and related protocols are maintained by the Internet Engineering Task Force (IETF).

A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information.[1] Where at least one process in one device is able to send/receive data to/from at least one process residing in a remote device, the two devices are said to be in a network.

Networking: Networks may be classified according to a wide variety of characteristics, such as the medium used to transport the data, the communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols are Ethernet, a hardware and Link Layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.

Routing: Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology. In packet-switching networks, routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths.

Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks, and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments.
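
On a GNU/Linux host, the routing table the paragraph refers to can be inspected directly; a minimal sketch (the addresses and interface name are only examples, and real output usually carries extra fields such as the protocol and metric):

$ ip route show
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 scope link

The second entry says that hosts on the local 192.168.1.0/24 subnet are reached directly on interface eth0, while the first (default) entry sends packets for every other destination to the gateway at 192.168.1.1, which forwards them hop by hop toward their destination.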

Dial-up Internet access:


Dial-up Internet access is a form of Internet access that uses the facilities of the public switched telephone network (PSTN) to establish a dialed connection to an Internet service provider (ISP) via telephone lines. The user's computer or router uses an attached modem to encode and decode Internet Protocol packets and control information into and from analogue audio frequency signals, respectively.

Availability:
Dial-up connections to the Internet require no infrastructure other than the telephone network. Where telephone access is widely available, dial-up remains useful, and it is often the only choice available for rural or remote areas, where broadband installations are not prevalent due to low population density and high infrastructure cost. Dial-up access may also be an alternative for users on limited budgets, as it is offered free by some ISPs, though broadband is increasingly available at lower prices in many countries due to market competition.

Dial-up requires time to establish a telephone connection (up to several seconds, depending on the location) and perform handshaking for protocol synchronization before data transfers can take place. In locales with telephone connection charges, each connection incurs an incremental cost. If calls are time-metered, the duration of the connection incurs costs. Dial-up access is a transient connection, because either the user, the ISP or the phone company terminates the connection. Internet service providers will often set a limit on connection durations to allow sharing of resources, and will disconnect the user, requiring reconnection and the costs and delays associated with it. Technically inclined users often find a way to disable the auto-disconnect program so that they can remain connected for days.

A 2008 Pew Internet and American Life Project study states that only 10 percent of US adults still used dial-up Internet access. Reasons for retaining dial-up access include lack of infrastructure and high broadband prices. This has allowed dial-up providers such as NetZero to continue spending marketing dollars to obtain customers and to commit to having U.S.-based customer support.

Connecting through DSL:

When you connect to the Internet, you might connect through a regular modem, through a local-area network connection in your office, through a cable modem or through a digital subscriber line (DSL) connection. DSL is a very high-speed connection that uses the same wires as a regular telephone line.

Advantages:
• You can leave your Internet connection open and still use the phone line for voice calls.
• The speed is much higher than that of a regular modem.
• DSL doesn't necessarily require new wiring; it can use the phone line you already have.

The company that offers DSL will usually provide the modem as part of the installation.

Disadvantages:
• A DSL connection works better when you are closer to the provider's central office. The farther away you get from the central office, the weaker the signal becomes.
• The connection is faster for receiving data than it is for sending data over the Internet.
• The service is not available everywhere.

In this article, we explain how a DSL connection manages to squeeze more information through a standard phone line -- and lets you make regular telephone calls even when you're online.

Telephone Lines

If you have read How Telephones Work, then you know that a standard telephone installation in the United States consists of a pair of copper wires that the phone company installs in your home. The copper wires have lots of room for carrying more than your phone conversations -- they are capable of handling a much greater bandwidth, or range of frequencies, than that demanded for voice. DSL exploits this "extra capacity" to carry information on the wire without disturbing the line's ability to carry conversations. The entire plan is based on matching particular frequencies to specific tasks.

To understand DSL, you first need to know a couple of things about a normal telephone line -- the kind that telephone professionals call POTS, for Plain Old Telephone Service. One of the ways that POTS makes the most of the telephone company's wires and equipment is by limiting the frequencies that the switches, telephones and other equipment will carry. Human voices, speaking in normal conversational tones, can be carried in a frequency range of 0 to 3,400 Hertz (cycles per second -- see How Telephones Work for a great demonstration of this). This range of frequencies is tiny. For example, compare this to the range of most stereo speakers, which cover from roughly 20 Hertz to 20,000 Hertz. The wires themselves have the potential to handle frequencies up to several million Hertz in most cases. The use of such a small portion of the wire's total bandwidth is historical -- remember that the telephone system has been in place, using a pair of copper wires to each home, for about a century. By limiting the frequencies carried over the lines, the telephone system can pack lots of wires into a very small space without worrying about interference between lines. Modern equipment that sends digital rather than analog data can safely use much more of the telephone line's capacity, and DSL does just that. A DSL internet connection is one of many effective communication tools for keeping employees in touch with the office.

Connecting through a leased line:

A leased line is a service contract between a provider and a customer, whereby the provider agrees to deliver a symmetric telecommunications line connecting two or more locations in exchange for a monthly rent (hence the term lease). It is sometimes known as a "private circuit" or "data line" in the UK, or as CDN (Circuito Diretto Numerico) in Italy. Unlike traditional PSTN lines, it does not have a telephone number, each side of the line being permanently connected to the other. Leased lines can be used for telephone, data or Internet services. Some are ringdown services, and some connect two PBXes. Typically, leased lines are used by businesses to connect geographically distant offices.

Unlike dial-up connections, a leased line is always active. The fee for the connection is a fixed monthly rate. The primary factors affecting the monthly fee are the distance between end points and the speed of the circuit. Because the connection doesn't carry anybody else's communications, the carrier can assure a given level of quality. An Internet leased line is a premium Internet connectivity product, normally delivered over fiber, which is dedicated and provides uncontended, symmetrical, full-duplex speeds. It is also known as an Ethernet leased line, DIA line, data circuit or private circuit. For example, a channel can be leased that provides a maximum transmission speed of 1.544 Mbit/s. The user can divide the connection into different lines for multiplexing data and voice communication, or use the channel for one high-speed data circuit. Increasingly, leased lines are being used by companies, and even individuals, for Internet access because they afford faster data transfer rates and are cost-effective for heavy users of the Internet.

End of unit 1.
