19 articles by Brien M. Posey
In this article series, I will start with the absolute basics, and work toward building a functional network. In this article I will begin by discussing some of the various networking components and what they do.

In the past, all of the articles that I have written for this Web site have been intended for use by administrators with at least some level of experience. Recently though, there have been requests for articles targeted toward those who are just getting started with networking and who have absolutely no experience at all. This article is the first in a series targeted toward novices.
The first piece of hardware that I want to discuss is the network adapter. Network adapters go by many different names, including network card, Network Interface Card, and NIC, but these are all generic terms for the same piece of hardware. A network card's job is to physically attach a computer to a network so that the computer can participate in network communications.

The first thing that you need to know about network cards is that the network card has to match the network medium. The network medium refers to the type of cabling that is being used on the network. (Wireless networks are a science all their own, and I will talk about them in a separate article.)

At one time, making sure that a network card matched the network medium was a really big deal, because there were a large number of competing standards in existence. For example, before you built a network and started buying network cards and cabling, you had to decide whether you were going to use Ethernet, coaxial Ethernet, Token Ring, Arcnet, or one of the other networking standards of the time. Each networking technology had its strengths and weaknesses, and it was important to figure out which one was the most appropriate for your organization. Today, most of the networking technologies that I mentioned above are quickly becoming extinct. Pretty much the only type of wired network used by small and medium sized businesses is Ethernet. You can see an example of an Ethernet network card in Figure A.
Figure A: This is what an Ethernet card looks like

Modern Ethernet networks use twisted pair cabling containing eight wires. These wires are arranged in a special order, and an RJ-45 connector is crimped onto the end of the cable. An RJ-45 connector looks like the connector on the end of a phone cord, only bigger. Phone cords use RJ-11 connectors, as opposed to the RJ-45 connectors used by Ethernet cable. You can see an example of an Ethernet cable with an RJ-45 connector in Figure B.
Figure B: This is an Ethernet cable with an RJ-45 connector installed
Hubs and Switches
As you can see, computers use network cards to send and receive data. The data is transmitted over Ethernet cables. However, you normally can't just run an Ethernet cable between two PCs and call it a network.

In this day and age of high speed Internet access being almost universally available, you tend to hear the term broadband thrown around a lot. Broadband communications allow multiple signals to share the same medium. In contrast, Ethernet uses baseband communications, in which a single signal has the medium to itself. On twisted pair Ethernet, separate wires are used for sending and receiving data. What this means is that if one PC is sending data across a particular wire within the Ethernet cable, then the PC that is receiving the data needs to have that wire connected to its receiving port.

You can actually network two PCs together in this way by creating what is known as a crossover cable. A crossover cable is simply a network cable that has the sending and receiving wires reversed at one end, so that two PCs can be linked directly together. The problem with using a crossover cable to build a network is that the network will be limited to exactly two PCs. Rather than using a crossover cable, most networks use normal Ethernet cables that do not have the sending and receiving wires reversed at one end. Of course, the sending and receiving wires have to be reversed at some point in order for communications to succeed. This is the job of a hub or a switch.

Hubs are starting to become extinct, but I want to talk about them anyway because it will make it easier to explain switches later on. There are different types of hubs, but generally speaking, a hub is nothing more than a box with a bunch of RJ-45 ports. Each computer on a network would be connected to a hub via an Ethernet cable. You can see a picture of a hub in Figure C.
Figure C: A hub is a device that acts as a central connection point for computers on a network
A hub has two different jobs. Its first job is to provide a central point of connection for all of the computers on the network. Every computer plugs into the hub (multiple hubs can be daisy chained together if necessary in order to accommodate more computers). The hub's other job is to arrange the ports in such a way that if a PC transmits data, the data is sent over the other computers' receive wires.

Right now you might be wondering how data gets to the correct destination if more than two PCs are connected to a hub. The secret lies in the network card. Each Ethernet card is programmed at the factory with a unique Media Access Control (MAC) address. When a computer on a hub-based Ethernet network transmits data, the data is actually sent to every computer on the network. As each computer receives the data, it compares the destination address to its own MAC address. If the addresses match, then the computer knows that it is the intended recipient; otherwise, it ignores the data.

As you can see, when computers are connected via a hub, every packet gets sent to every computer on the network. The problem is that any computer can send a transmission at any given time. Have you ever been on a conference call and accidentally started to talk at the same time as someone else? The same thing happens on this type of network. When a PC needs to transmit data, it checks to make sure that no other computers are sending data at the moment. If the line is clear, it transmits the necessary data. If another computer tries to communicate at the same time though, then the packets of data that are traveling across the wire collide and are destroyed (this is why this type of network is sometimes referred to as a collision domain). Both PCs then have to wait for a random amount of time and attempt to retransmit the destroyed packet.
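The flooding behavior described above can be sketched in a few lines of Python. This is a toy simulation, not real networking code; the MAC addresses are invented for illustration, and a real NIC performs the address comparison in hardware.

```python
# A toy simulation of how a hub floods frames to every port, and how
# each network card filters on the destination MAC address.

class Nic:
    def __init__(self, mac):
        self.mac = mac
        self.received = []

    def on_frame(self, dest_mac, payload):
        # A NIC keeps the frame only if it is the intended recipient.
        if dest_mac == self.mac:
            self.received.append(payload)

class Hub:
    """A hub repeats every incoming frame out of every other port."""
    def __init__(self):
        self.ports = []

    def connect(self, nic):
        self.ports.append(nic)

    def transmit(self, sender, dest_mac, payload):
        for nic in self.ports:
            if nic is not sender:      # every other computer sees the frame
                nic.on_frame(dest_mac, payload)

hub = Hub()
a = Nic("00:11:22:33:44:01")
b = Nic("00:11:22:33:44:02")
c = Nic("00:11:22:33:44:03")
for nic in (a, b, c):
    hub.connect(nic)

hub.transmit(a, "00:11:22:33:44:03", "hello C")
print(c.received)   # C was the intended recipient, so it kept the frame
print(b.received)   # B saw the frame on the wire, but discarded it
```

Notice that the hub itself never looks at the MAC address; it blindly repeats the frame, and the filtering happens at each network card.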
As the number of PCs on a collision domain increases, so does the number of collisions. As the number of collisions increases, network efficiency decreases. This is why switches have almost completely replaced hubs. A switch, such as the one shown in Figure D, performs all of the same basic tasks as a hub. The difference is that when a PC on the network needs to communicate with another PC, the switch uses a set of internal logic circuits to establish a dedicated, logical path between the two PCs. What this means is that the two PCs are free to communicate with each other without having to worry about collisions.
Figure D: A switch looks a lot like a hub, but performs very differently
Switches greatly improve a network’s efficiency. Yes, they eliminate collisions, but there is more to it than that. Because of the way that switches work, they can establish parallel communications paths. For example, just because computer A is communicating with computer B, there is no reason why computer C can’t simultaneously communicate with computer D. In a collision domain, these types of parallel communications would be impossible because they would result in collisions.
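The key difference from a hub can be sketched as well. A switch keeps a table mapping MAC addresses to ports and forwards each frame only to the port where the destination lives. This is a hypothetical illustration: real switches populate their tables automatically by watching the source addresses of incoming frames, and the addresses and port numbers here are made up.

```python
# A toy model of a switch's MAC address table. Unlike a hub, a switch
# forwards a frame only to the one port where the destination is known
# to be attached, which is what makes parallel conversations possible.

class Switch:
    def __init__(self):
        self.mac_table = {}          # MAC address -> port number

    def learn(self, mac, port):
        # Real switches learn this mapping automatically from traffic.
        self.mac_table[mac] = port

    def forward(self, dest_mac):
        # A real switch floods frames for unknown destinations out of
        # every port, much like a hub; this sketch just reports that.
        return self.mac_table.get(dest_mac, "flood to all ports")

sw = Switch()
sw.learn("00:11:22:33:44:01", 1)
sw.learn("00:11:22:33:44:02", 2)

print(sw.forward("00:11:22:33:44:02"))   # frame goes only to port 2
print(sw.forward("aa:bb:cc:dd:ee:ff"))   # unknown address: flood
```

Because each frame occupies only the sender's and receiver's ports, computer A can talk to computer B at the same time that computer C talks to computer D, with no collisions.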
In this article, I have discussed some of the basic components that make up a simple network. In Part 2, I will continue the discussion of basic networking hardware.
This article continues the discussion of networking hardware by talking about one of the most important networking components: routers.

In the first part of this article series, I talked about some basic networking hardware, such as hubs and switches. Even if you are new to networking, you have probably heard of routers. Broadband Internet connections, such as those utilizing a cable modem or a DSL modem, almost always require a router. A router's job isn't to provide Internet connectivity, though. A router's job is to move packets of data from one network to another.

There are actually many different types of routers, ranging from the simple, inexpensive routers used for home Internet connectivity to the insanely expensive routers used by giant corporations. Regardless of a router's cost or complexity, all routers work on the same basic principles. That being the case, I'm going to focus my discussion on the simple, low budget routers that are typically used to connect a PC to a broadband Internet connection. My reason for doing so is that this article series is intended for beginners. In my opinion, it will be a lot easier to teach you the basics if I am referencing something that is at least somewhat familiar to most people, and that is not as complicated as many of the routers used within huge corporations. Besides, the routers used in corporations work on the same basic principles as the routers that I will be discussing in this article. If you want a greater level of knowledge, don't worry: I will talk about the science of routing in a whole lot more detail later in this article series.

As I explained earlier, a router's job is to move packets of data from one network to another. This definition might seem strange in the context of a PC that's connected to a broadband Internet connection.
If you stop and think about it, the Internet is a network (actually it's a collection of networks, but that's beside the point). So if a router's job is to move traffic between two networks, and the Internet is one of those networks, where is the other one? In this particular case, the PC that is connected to the router is actually configured as a very simple network. To get a better idea of what I am talking about, take a look at the pictures shown in Figures A and B. Figure A shows the front of a 3COM broadband router, while Figure B shows the back view of the same router.
Figure A: This is the front view of a 3COM broadband router
Figure B: A broadband Internet router contains a set of RJ-45 ports just like a hub or switch
As you can see in the figures, there is nothing especially remarkable about the front view of the router. I wanted to include this view anyway though, so that those of you
who are unfamiliar with routers can see what a router looks like. Figure B is much more interesting. If you look at Figure B, you'll see that there are three sets of ports on the back of the router. The port on the far left is where the power supply connects to the router. The middle port is an RJ-45 port used to connect to the remote network. In this particular case, the router is intended to provide Internet connectivity, so this middle port would typically be used to connect the router to a cable modem or to a DSL modem. The modem in turn would provide the actual connectivity to the Internet.

If you look at the set of ports on the far right, you'll see that there are four RJ-45 ports. If you think back to the first part of this article series, you'll recall that hubs and switches also contained large groups of RJ-45 ports. In the case of a hub or switch, the RJ-45 ports are used to provide connectivity to the computers on the network. These ports work the exact same way on this router; this particular router has a four port switch built in.

Remember earlier when I said that a router's job was to move packets between one network and another? I explained that in the case of a broadband router, the Internet represents one network, and the PC represents the second network. The reason why a single computer can represent an entire network is that the router does not treat the PC as a standalone device. Routers treat the PC as a node on a network. As you can see from the photo in Figure B, this particular router could actually accommodate a network of four PCs. It's just that most home users who use this type of configuration only plug one PC into the router. Therefore, a more precise explanation would be that this type of router moves packets of data between a small network (even if that network consists of only a single computer) and the Internet (which it treats as a second network).
The Routing Process
Now that I've talked a little bit about what a router is and what it does, I want to talk about the routing process. In order to understand how routing works, you have to understand a little bit about how the TCP/IP protocol works. Every device connected to a TCP/IP network has a unique IP address bound to its network interface. The IP address consists of a series of four numbers separated by periods. For example, a typical IP address looks something like this: 192.168.0.1

The best analogy I can think of to describe an IP address is to compare it to a street address. A street address consists of a number and a street name. The street name identifies the street itself, and the number identifies the specific building on that street. An IP address works much the same way. The address is broken into a network number and a device number. If you were to compare an IP address to a street address, then think of the network number as being like the street name, and the device number as being like the house number. The network number identifies which network the device is on, and the device number gives the device an identity on that network.

So how do you know where the network number ends and the device number begins? This is the job of the subnet mask. A subnet mask tells the computer where the network number portion of an IP address stops and where the device number starts. Subnetting can be complicated, and I will cover it in detail in a separate article. For now, let's keep it simple and look at a very basic subnet mask. A subnet mask looks a lot like an IP address in that it follows the format of having four numbers separated by periods. A typical subnet mask looks like this: 255.255.255.0
In this particular example, the first three numbers (called octets) are each 255, and the last number is 0. The number 255 indicates that all of the bits in the corresponding position in the IP address are a part of the network number. The number zero indicates that none of the bits in the corresponding position in the IP address are a part of the network number, and therefore they all belong to the device number. I know this probably sounds a little bit confusing, so consider this example. Imagine that you had a PC with an IP address of 192.168.1.1 and a subnet mask of 255.255.255.0. In this particular case, the first three octets of the subnet mask are all 255. This means that the first three octets of the IP address all belong to the network number. Therefore, the network number portion of this IP address is 192.168.1.x.

The reason why this is important to know is that a router's job is to move packets of data from one network to another. All of the devices on a network (or on a network segment, to be more precise) share a common network number. For example, if 192.168.1.x was the network number associated with the computers attached to the router shown in Figure B, then the IP addresses for four individual computers might be:

• 192.168.1.1
• 192.168.1.2
• 192.168.1.3
• 192.168.1.4

As you can see, each computer on the local network shares the same network number, but has a different device number. As you may know, whenever a computer needs to communicate with another computer on a network, it does so by referring to the other computer's IP address. For example, in this particular case the computer with the address of 192.168.1.1 could easily send a packet of data to the computer with the address of 192.168.1.3, because both computers are a part of the same physical network. Things work a bit differently if a computer needs to access a computer on another network.
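The arithmetic behind a subnet mask is simply a bitwise AND of the address and the mask, octet by octet. The sketch below illustrates the idea using the article's example addresses; the helper function name is my own, and Python's standard ipaddress module performs the same computation.

```python
# Applying a subnet mask to an IP address to recover the network
# number, exactly as described in the text: wherever the mask bits
# are 1 (255), the address bits belong to the network number.
import ipaddress

def network_number(ip, mask):
    ip_octets = [int(o) for o in ip.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    # Bitwise AND each octet of the address with the matching mask octet.
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_number("192.168.1.1", "255.255.255.0"))   # 192.168.1.0
print(network_number("192.168.1.4", "255.255.255.0"))   # 192.168.1.0

# The standard library reaches the same answer:
iface = ipaddress.ip_interface("192.168.1.1/255.255.255.0")
print(iface.network)   # 192.168.1.0/24
```

Note that all four of the example computers above produce the same network number, which is exactly what makes them part of the same network.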
Since I am focusing this particular discussion on small broadband routers that are designed to provide Internet connectivity, let's pretend that one of the users on the local network wanted to visit the www.brienposey.com Web site. A Web site is hosted by a server. Like any other computer, a Web server has a unique IP address. The IP address for this particular Web site is 220.127.116.11. You can easily look at this IP address and tell that it does not belong to the 192.168.1.x network. That being the case, the computer that's trying to reach the Web site can't just send the packet out along the local network, because the Web server isn't a part of the local network. Instead, the computer that needs to send the packet looks at its default gateway address.

The default gateway is a part of a computer's TCP/IP configuration. It is basically a way of telling a computer that if it does not know where to send a packet, it should send the packet to the specified default gateway address. The default gateway's address is the router's IP address. In this case, the router's IP address might be 192.168.1.254 (an address such as 192.168.1.0 can't be used, because it refers to the network itself rather than to a device on the network). Notice that the router's IP address shares the same network number as the other computers on the local network. It has to, so that it can be accessible to those computers.

Actually, a router has at least two IP addresses. One of those addresses uses the same network number as your local network. The router's other IP address is assigned by your ISP. This IP address uses the same network number as the ISP's network. The router's job is therefore to move packets from your local network onto the ISP's
network. Your ISP has routers of its own that work in exactly the same way, but that route packets to other parts of the Internet.
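The decision that a computer makes before sending each packet, deliver locally or hand off to the default gateway, can be sketched in a few lines. The local network here is the article's example network; the gateway address 192.168.1.254 is purely illustrative.

```python
# A sketch of the send-or-forward decision: if the destination shares
# my network number, deliver the packet directly; otherwise hand it
# to the default gateway (the router).
import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")
DEFAULT_GATEWAY = "192.168.1.254"   # the router's local address (illustrative)

def next_hop(dest_ip):
    if ipaddress.ip_address(dest_ip) in LOCAL_NET:
        return dest_ip               # same network number: send directly
    return DEFAULT_GATEWAY           # different network: send to the router

print(next_hop("192.168.1.3"))      # delivered on the local network
print(next_hop("220.127.116.11"))   # not local, so it goes to the router
```

Every router along the path to the destination repeats essentially this same comparison, which is how a packet hops from your network, to your ISP's network, and onward across the Internet.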
As you can see, a router is a vital network component. Without routers, connectivity between networks (such as the Internet) would be impossible. In Part 3 of this article series, I will discuss the TCP/IP protocol in more detail.
This article continues the Networking for Beginners series by talking about how DNS servers work.

In the last part of this article series, I talked about how all of the computers on a network segment share a common IP address range. I also explained that when a computer needs to access information from a computer on another network or network segment, it's a router's job to move the necessary packets of data from the local network to another network (such as the Internet). If you read that article, you probably noticed that in one of my examples, I made a reference to the IP address that's associated with my Web site. To be able to access a Web site, your Web browser has to know the Web site's IP address. Only then can it give that address to the router, which in turn routes the outbound request packets to the appropriate destination. Even though every Web site has an IP address, you probably visit Web sites every day without ever having to know an IP address. In this article, I will show you why this is possible.

I have already explained that IP addresses are similar to street addresses. The network portion of the address defines which network segment the computer exists on, and the computer portion of the address designates a specific computer on that network. Knowing an IP address is a requirement for TCP/IP based communications between two computers. Yet when you open a Web browser and enter the name of a Web site (known as the site's domain name, which forms part of its Uniform Resource Locator, or URL), the Web browser goes straight to the Web site without you ever having to enter an IP address.

With that in mind, consider my comparison of IP addresses to postal addresses. You can't just write someone's name on an envelope, drop the envelope in the mail, and expect it to be delivered. The post office can't deliver the letter unless it has an address. The same basic concept applies to visiting Web sites.
Your computer cannot communicate with a Web site unless it knows the site's IP address. So if your computer needs to know a Web site's IP address before it can access the site, and you aren't entering the IP address, where does the IP address come from? Translating domain names into IP addresses is the job of a DNS server. In the two articles leading up to this one, I talked about several aspects of a computer's TCP/IP configuration, such as the IP address, subnet mask, and default gateway. If you look at Figure A, you will notice that there is one more configuration option that has been filled in: the Preferred DNS server.
Figure A: The Preferred DNS Server is defined as a part of a computer’s TCP/IP configuration
As you can see in the figure, the preferred DNS server is defined as a part of a computer's TCP/IP configuration. What this means is that the computer will always know the IP address of a DNS server. This is important, because a computer cannot communicate with another computer using the TCP/IP protocol unless an IP address is known.

With that in mind, let's take a look at what happens when you attempt to visit a Web site. The process begins when you open a Web browser and enter a URL. When you do, the Web browser knows that it cannot locate the Web site based on the URL alone. It therefore retrieves the DNS server's IP address from the computer's TCP/IP configuration and passes the site's domain name on to the DNS server. The DNS server then looks the name up in a table that lists the site's IP address. The DNS server returns the IP address to the Web browser, and the browser is then able to communicate with the requested Web site.

Actually, that explanation is a little bit oversimplified. DNS name resolution can only work in the way that I just described if the DNS server contains a record that corresponds to the site that's being requested. If you were to visit a random Web site, there is a really good chance that your DNS server does not contain a record for the site. The reason for this is that the Internet is so big. There are millions of Web sites, and new sites are created every day. There is no way that a single DNS server could possibly keep up with all of those sites and service requests from everyone who is connected to the Internet.
Let's pretend for a moment that it was possible for a single DNS server to store records for every Web site in existence. Even if the server's capacity were not an issue, the server would be overwhelmed by the sheer volume of name resolution requests that it would receive from people using the Internet. A centralized DNS server would also be a very popular target for attacks. Instead, DNS servers are distributed so that a single DNS server does not have to provide name resolutions for the entire Internet.

There is an organization named the Internet Corporation for Assigned Names and Numbers, or ICANN for short, that is responsible for all of the registered domain names on the Internet. Because managing all of those domain names is such a huge job, ICANN delegates portions of the domain naming responsibility to various other firms. For example, Network Solutions is responsible for all of the .com domain names. Even so, Network Solutions does not maintain a list of the IP addresses associated with all of the .com domains. In most cases, Network Solutions' DNS servers contain records that point to the DNS server that is considered to be authoritative for each domain.

To see how all this works, imagine that you wanted to visit the www.brienposey.com website. When you enter the request into your Web browser, your Web browser forwards the name to the DNS server specified by your computer's TCP/IP configuration. More than likely, your DNS server is not going to know the IP address of this website. Therefore, it will send the request to the ICANN DNS server. The ICANN DNS server wouldn't know the IP address for the website that you are trying to visit either. It would, however, know the IP address of the DNS server that is responsible for domain names ending in .COM. It would return this address to your Web browser, which in turn would submit the request to the specified DNS server.
The top level DNS server for domains ending in .COM would not know the IP address of the requested Web site either, but it would know the IP address of a DNS server that is authoritative for the brienposey.com domain. It would send this address back to the machine that made the request. The Web browser would then send the DNS query to the DNS server that is authoritative for the requested domain. That DNS server would return the website's IP address, thus allowing the machine to communicate with the requested website.

As you can see, there are a lot of steps that must be completed in order for a computer to find the IP address of a website. To help reduce the number of DNS queries that must be made, the results of DNS queries are usually cached for either a few hours or a few days, depending on how the machine is configured. Caching IP addresses greatly improves performance and minimizes the amount of bandwidth consumed by DNS queries. Imagine how inefficient Web browsing would be if your computer had to do a full set of DNS queries every time you visited a new page.
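The chain of referrals described above can be modeled with a few dictionaries standing in for the real servers. Everything here is invented for illustration: the server names are made up, and 203.0.113.10 is an address from the range reserved for documentation, not the site's real address.

```python
# A toy model of iterative DNS resolution: root server -> .com top
# level server -> authoritative server, plus a simple cache so that
# repeat lookups skip the whole chain.

ROOT = {".com": "com-tld-server"}                       # root refers to TLD servers
TLD = {"brienposey.com": "brienposey-dns-server"}       # .com refers to the authoritative server
AUTHORITATIVE = {"www.brienposey.com": "203.0.113.10"}  # authoritative server has the answer

cache = {}

def resolve(name):
    if name in cache:                    # cached answers skip every step below
        return cache[name]
    # Step 1: the root server refers us to the server for the .com domains.
    tld_server = ROOT["." + name.rsplit(".", 1)[-1]]
    # Step 2: the .com server refers us to the authoritative server.
    domain = ".".join(name.split(".")[-2:])
    auth_server = TLD[domain]
    # Step 3: the authoritative server returns the actual IP address.
    ip = AUTHORITATIVE[name]
    cache[name] = ip                     # remember the answer for next time
    return ip

print(resolve("www.brienposey.com"))   # walks all three steps
print(resolve("www.brienposey.com"))   # answered straight from the cache
```

Real resolvers also honor a time-to-live value on each record, so cached answers eventually expire; this sketch caches forever for simplicity.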
In this article, I explained how DNS servers are used to resolve domain names to IP addresses. Although the process that I’ve described sounds fairly simple, it is important to remember that ICANN and top level DNS registrars, such as Network Solutions, use a load balancing technique to distribute requests across many different DNS servers. This prevents any one server from becoming overwhelmed, and eliminates the chances of having a single point of failure.
Workstations and Servers
This article continues the Networking for Beginners series by talking about the differences between workstations and servers.

So far in this article series, I have talked a lot about networking hardware and about the TCP/IP protocol. The networking hardware is used to establish a physical connection between devices, while the TCP/IP protocol is essentially the language that the various devices use to communicate with each other. In this article, I will continue the discussion by talking a little bit about the computers that are connected to a network.

Even if you are new to networking, you have no doubt heard terms such as server and workstation. These terms generally refer to a computer's role on the network rather than to the computer's hardware. For example, just because a computer is acting as a server, it doesn't necessarily mean that it has to be running server hardware. It is possible to install a server operating system onto a PC, and have that PC act as a network server. Of course, in most real life networks, servers run specialized hardware to help them handle the heavy workload that servers are typically subjected to. What might make the concept of network servers a little bit more confusing is that, technically speaking, a server is any computer that hosts resources over a network. This means that even a computer that's running Windows XP could be considered a server if it is configured to share some kind of resource, such as files or a printer.

Computers on a network typically fall into one of three roles. Usually a computer is considered to be either a workstation (sometimes referred to as a client), a server, or a peer. Workstations are computers that use network resources, but that do not host resources of their own. For example, a computer that is running Windows XP would be considered a workstation so long as it is connected to a network and is not sharing files or printers.
Servers are computers that are dedicated to the task of hosting network resources. Typically, nobody is going to be sitting down at a server to do their work. Windows servers (that is, computers running Windows Server 2003, Windows 2000 Server, or Windows NT Server) have a user interface that is very similar to what you would find on a Windows workstation. It is possible that someone with an appropriate set of permissions could sit down at the server and run Microsoft Office or some other application. Even so, such behavior is strongly discouraged because it undermines the server’s security, decreases the server’s performance, and has the potential to affect the server’s stability. The last type of computer that is commonly found on a network is a peer. A peer machine is a computer that acts as both a workstation and a server. Such machines typically run workstation operating systems (such as Windows XP), but are used to both access and host network resources. In the past, peers were found primarily on very small networks. The idea was that if a small company lacks the resources to purchase true servers, then the workstations could be configured to perform double duty. For example, each user could make their own files accessible to every other user on the network. If a user happens to have a printer
attached to their PC, they can also share the printer so that others on the network can print to it. Peer networks have traditionally been discouraged in larger companies because of their inherent lack of security, and because they cannot be centrally managed. That's why peer networks are primarily found in extremely small companies or in homes with multiple PCs. Windows Vista (the successor to Windows XP) is attempting to change that. Windows Vista will allow users on traditional client/server networks to form peer groups, whose members can share resources amongst themselves in a secure manner without breaking their connections to network servers. This new feature is being marketed as a collaboration tool.

Earlier I mentioned that peer networks are discouraged in favor of client/server networks because they lack security and centralized manageability. However, just because a network is made up of workstations and servers, it doesn't necessarily guarantee security and centralized management. Remember, a server is only a machine that is dedicated to the task of hosting resources over a network. Having said that, there are countless varieties of servers, and some types of servers are dedicated to providing security and manageability.

For example, Windows servers fall into two primary categories: member servers and domain controllers. There is really nothing special about a member server. A member server is simply a computer that is connected to a network and is running a Windows Server operating system. A member server might be used as a file repository (known as a file server), or to host one or more network printers (known as a print server). Member servers are also frequently used to host network applications. For example, Microsoft offers a product called Exchange Server 2003 which, when installed on a member server, allows that member server to function as a mail server.
The point is that a member server can be used for just about anything. Domain controllers are much more specialized. A domain controller's job is to provide security and manageability to the network. I am assuming that you're probably familiar with the idea of logging on to a network by entering a username and password. On a Windows network, it is the domain controller that is responsible for keeping track of usernames and passwords.

The person who is responsible for managing the network is known as the network administrator. Whenever a user needs to gain access to resources on a Windows network, the administrator uses a utility provided by a domain controller to create a user account and password for the new user. When the new user (or any user, for that matter) attempts to log onto the network, the user's credentials (their username and password) are transmitted to the domain controller. The domain controller validates the user's credentials by comparing them against the copy stored in the domain controller's database. Assuming that the password that the user entered matches the password that the domain controller has on file, the user is granted access to the network. This process is called authentication.

On a Windows network, only the domain controllers perform authentication services. Of course, users will probably need to access resources stored on member servers. This is not a problem, because resources on member servers are protected by a set of permissions that are related to the security information stored on domain controllers. For example, suppose that my user name was Brien. I enter my username and password, which is sent to a domain controller for authentication. When the domain controller authenticates me, it has not actually given me access to any resources. Instead, it
validates that I am who I claim to be. When I go to access resources on a member server, my computer presents a special access token to the member server that basically says that I have been authenticated by a domain controller. The member server does not trust me, but it does trust the domain controller. Therefore, since the domain controller has validated my identity, the member server accepts that I am who I claim to be and gives me access to any resources that I have permission to access.
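The authentication and access-token exchange described above can be pictured with a short sketch. This is a toy model only: real Windows networks use Kerberos (or NTLM) rather than anything this simple, domain controllers store password hashes using their own formats, and every name and secret below is invented for illustration.

```python
import hashlib
import hmac

# Hypothetical account database held by the domain controller.
DC_ACCOUNTS = {"Brien": hashlib.sha256(b"P@ssw0rd").hexdigest()}
DC_SIGNING_KEY = b"made-up-domain-controller-secret"

def dc_authenticate(username, password):
    """The domain controller compares the submitted credentials against
    its database and, on success, issues a signed access token."""
    stored = DC_ACCOUNTS.get(username)
    if stored != hashlib.sha256(password.encode()).hexdigest():
        return None
    return hmac.new(DC_SIGNING_KEY, username.encode(), hashlib.sha256).digest()

def member_server_accepts(username, token):
    """The member server does not trust the user, but it does trust the
    domain controller, so it only checks the token's signature rather
    than re-checking the user's password itself."""
    expected = hmac.new(DC_SIGNING_KEY, username.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = dc_authenticate("Brien", "P@ssw0rd")
print(member_server_accepts("Brien", token))  # True
```

The key point the sketch captures is the division of labor: only the domain controller ever sees the password, while the member server merely verifies that a controller it trusts has vouched for the user.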
As you've probably guessed, the process of being authenticated by a domain controller and gaining access to network resources is a little more complicated than what I have discussed here. I will be discussing authentication and resource access in much greater detail later in the series. For right now, I wanted to keep things simple so that I could gradually introduce you to these concepts. In the next part of this article series, I will be discussing domain controllers in much more detail. As I do, I will also discuss the role that domain controllers play within the Active Directory.
What domain controllers are and how they fit into your network infrastructure. In the previous article in this series, I talked about the roles of various computers on a network. As you may recall, one of the roles that I talked a little bit about was that of a domain controller. In this article, I will talk more about what domain controllers are and how they fit into your network infrastructure. One of the most important concepts in Windows networking is that of a domain. A domain is basically a collection of user accounts and computer accounts that are grouped together so that they can be centrally managed. It is the job of the domain controller to facilitate this central management of domain resources. To see why this is important, consider that any workstation that's running Windows XP contains a handful of built-in user accounts. Windows XP even allows you to create additional user accounts on the workstation. Unless the workstation is functioning as a standalone system or is part of a peer network, these workstation level user accounts (called local user accounts) are not used for controlling access to network resources. Instead, local user accounts are used to regulate access to the local computer. They act primarily as a mechanism which ensures that administrators can perform workstation maintenance without the end users having the ability to tamper with workstation settings. The reason why local user accounts are not used to control access to resources outside of the workstation on which they reside is that doing so would create an extreme management burden. Think about it for a minute. Local user accounts reside on each individual workstation. This means that if local user accounts were a network's primary security mechanism, then an administrator would have to physically travel to the computer containing an account any time a change needed to be made to the account's permissions. 
This might not be a big deal on smaller networks, but making security changes would be extremely cumbersome on larger networks or in situations in which a change needs to be applied globally to all accounts. Another reason why local user accounts are not used to control access to network resources is that they don't travel with the user from one computer to another. For instance, if a user's computer crashed, the user couldn't just log on to another computer and work while their computer was being fixed, because the user's account is specific to the computer that crashed. In order for the user to be able to do any work, a new account would have to be created on the computer that the user is now working with. These are just a few of the reasons why using local user accounts to secure access to network resources is impractical. Even if you wanted to implement this type of security, Windows does not allow it. Local user accounts can only be used to secure local resources. A domain solves these and other problems by centralizing user accounts (and other configuration and security related objects that I will talk about later in the series). This allows for easier administration, and allows users to log onto the network from any PC on the network (unless you restrict which machines a user can log in from). With the information that I have given you so far regarding domains, it may seem that the philosophy behind domains is that, since the resources which users need access to
reside on a server, you should use server level user accounts to control access to those resources. In a way this idea is true, but there is a little more to it than that. Back in the early 1990s I was working for a large insurance company that was running a network with servers running Novell NetWare. Windows networking hadn’t been invented yet, and Novell NetWare was the server operating system of choice at the time. At the time when I was hired, the company only had one network server, which contained all of the user accounts and all of the resources that the users needed access to. A few months later, someone decided that the users at the company needed to run a brand new application. Because of the size of the application and the volume of data that the application produced, the application was placed onto a dedicated server. The version of Novell NetWare that the company was running at the time used the idea that I presented earlier in which resources residing on a server were protected by user accounts which also resided on that server. The problem with this architecture was that each server had its own, completely independent set of user accounts. When the new server was added to the network, users logged in using the normal method, but they had to enter another username and password to access resources on the new server. At first things ran smoothly, but about a month after the new server was installed things started to get ugly. It became time for users to change their password. Users didn’t realize that they now had to change their password in two different places. This meant that passwords fell out of sync, and the help desk was flooded with calls related to password resets. As the company continued to grow and added more servers, the problem was further compounded. Eventually, Novell released version 4.0 of NetWare. NetWare version 4 introduced a technology called the Directory Service. 
The idea was that users should not have a separate account for each server. Instead, a single user account could be used to authenticate users regardless of how many servers there were on the network. The interesting thing about this little history lesson is that although domains are unique to Microsoft networks (Novell networks do not use domains), domains work on the same basic principle. In fact, when Windows 2000 was released, Microsoft included a feature which is still in use today called the Active Directory. The Active Directory is very similar to the directory service that Novell networks use. So what does all of this have to do with domains? Well, on Windows servers running Windows 2000 Server, Windows Server 2003, or the forthcoming Longhorn Server, it is the domain controller’s job to run the Active Directory service. The Active Directory acts as a repository for directory objects. Among these objects are user accounts. As such, one of a domain controller’s primary jobs is to provide authentication services. One very important concept to keep in mind is that domain controllers provide authentication, not authorization. What this means is that when a user logs on to a network, a domain controller validates the user’s username and password and essentially confirms that the user is who they claim to be. The domain controller does not however tell the user what resources they have rights to. Resources on Windows networks are secured by access control lists (ACLs). An ACL is basically just a list that tells who has rights to what. When a user attempts to access a resource, they present their identity to the server containing the resource. That server makes sure that the user’s identity has been authenticated and then cross references the user’s identity with an ACL to see what it is that the user has rights to.
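The ACL cross-referencing described above can be sketched in a few lines. The resource path, user names, and rights below are invented for illustration; real Windows ACLs are lists of access control entries keyed by security identifiers (SIDs) rather than by names.

```python
# A toy access control list: resource -> {user: set of rights}.
acl = {
    r"\\fileserver\reports": {"Brien": {"read", "write"}, "Guest": {"read"}},
}

def has_right(user, resource, right):
    """Cross-reference an authenticated identity against the ACL to
    decide whether the requested right has been granted."""
    return right in acl.get(resource, {}).get(user, set())

print(has_right("Brien", r"\\fileserver\reports", "write"))  # True
print(has_right("Guest", r"\\fileserver\reports", "write"))  # False
```

Notice that the function answers authorization questions only; it assumes the identity it is handed has already been authenticated by a domain controller, exactly as described in the article.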
As you can see, a domain controller performs a very important role within a Windows network. In the next part of this article series, I will talk more about domain controllers and about the Active Directory.
Discusses the anatomy of a Windows domain. In the previous article in this series, I introduced you to the concept of domains and domain controllers. In this article, I want to continue the discussion by talking about the anatomy of a Windows domain. As I explained in Part 5 of this article series, domains are not something new. Microsoft originally introduced them in Windows NT Server. Originally, domains were completely self-contained. A single domain often housed all of the user accounts for an entire company, and the domain's administrator had complete control over the domain and anything in it. Occasionally though, having a single domain just wasn't practical. For example, if a company had offices in several different cities, then each office might have its own domain. Another common scenario is when one company buys another company. In such situations, it is not at all uncommon for both companies to already have domains. In situations like these, it is sometimes necessary for users from one domain to access resources located in another domain. Microsoft created trusts as a way of facilitating such access. The best way that I can think of to describe trusts is to compare them to the way that security works at an airport. In the United States, passengers are required to show their driver's license to airport security staff before boarding a domestic flight. Suppose for a moment that I were going to fly somewhere. The security staff at the airport does not know who I am, and they certainly don't trust me. They do however trust the state of South Carolina. They assume that the state of South Carolina has exercised due diligence in verifying my identity before issuing me a driver's license. Therefore, I can show them a South Carolina driver's license and they will let me on the plane, even though they don't necessarily trust me as an individual. Domain trusts work the same way. 
Suppose that I am a domain administrator and my domain contains resources that users in another domain need to access. If I am not an administrator in the foreign domain then I have no control over who is given user accounts in that domain. If I trust the administrator of that domain not to do anything stupid, then I can establish a trust so that my domain trusts members of the other domain. In a situation like this, my domain would be referred to as the trusting domain, and the foreign domain would be known as the trusted domain. In the previous article, I mentioned that domain controllers provide authentication, not authorization. This holds true even when trust relationships are involved. Simply choosing to trust a foreign domain does not give the users in that domain rights to access any of the resources in your domain. You must still assign permissions just as you would for users in your own domain. At the beginning of this article, I mentioned that in Windows NT a domain was a completely self contained environment, and that trusts were created as a way of allowing users in one domain to access resources in another domain. These concepts still hold partially true today, but the domain model changed dramatically when Microsoft created the Active Directory. As you may recall, the Active Directory was
first introduced in Windows 2000, but is still in use today in Windows Server 2003 and the soon to be released Longhorn Server. One of the primary differences between Windows NT style domains and Active Directory domains is that domains are no longer completely isolated from each other. In Windows NT, there was really no organizational structure for domains. Each domain was completely independent of any other domain. In an Active Directory environment, the primary organizational structure is known as a forest. A forest can contain multiple domain trees. The best way that I can think of to describe a domain tree is to compare it to a family tree. A family tree consists of great grandparents, grandparents, parents, children, etc. Each member of a family tree has some relation to the members above and below them. A domain tree works in a similar manner, and you can tell a domain's position within a tree just by looking at its name. Active Directory domains use DNS style names, similar to the names used by Web sites. In Part 3 of this article series, I explained how DNS servers resolve URLs for Web browsers. The same technique is used internally in an Active Directory environment. Think about it for a moment. DNS stands for Domain Name System. In fact, a DNS server is a required component for any Active Directory deployment. To see how domain naming works, let's take a look at how my own network is set up. My network's primary domain is named production.com. I don't actually own the production.com Internet domain name, but it doesn't matter because this domain is private and is only accessible from inside my network. The production.com domain is considered to be a top level domain. If this were an Internet domain, it would not be a top level domain, because .com would be a top level domain and production.com would be a child domain of the .com domain. In spite of this minor difference, the same basic principle holds true. 
I could easily create a child domain by creating another domain whose name builds on production.com. For example, sales.production.com would be considered to be a child domain of the production.com domain. You can even create grandchild domains. An example of a grandchild domain of production.com would be widgets.sales.production.com. As you can see, you can easily tell a domain's position within a domain tree just by looking at the number of periods in the domain's name. Earlier I mentioned that an Active Directory forest can contain domain trees. You are not limited to creating a single domain tree. In fact, my own network uses two domain trees; production.com and test.com. The test.com domain contains all of the servers that I monkey around with while experimenting with the various techniques that I write articles about. The production.com domain contains the servers that I actually use to run my business. This domain contains my mail server and some file servers. The point is that having the ability to create multiple domain trees allows you to segregate your network in a way that makes the most sense from a management perspective. For example, suppose that a company has offices in five different cities. The company could easily create an Active Directory forest that contains five different domain trees; one for each city. There would most likely be a different administrator in each city, and that administrator would be free to create child domains off of their domain tree on an as needed basis. The beauty of this type of structure is that all of these domains fall within a common forest. This means that while administrative control over individual domains or domain
trees might be delegated to an administrator in another city, the forest administrator ultimately maintains control over all of the domains in the forest. Furthermore, trust relationships are greatly simplified because every domain in the forest automatically trusts every other domain in the forest. It is still possible to establish trusts with external forests or domains.
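The naming convention described earlier in this section is mechanical enough to express in code. The sketch below assumes the simplified convention used in this article, where a tree root such as production.com consists of exactly two labels:

```python
def parent_domain(domain):
    """Return the parent of a DNS-style domain name, or None if the
    domain is a tree root such as production.com (two labels)."""
    labels = domain.split(".")
    return ".".join(labels[1:]) if len(labels) > 2 else None

def tree_depth(domain):
    """0 for a tree root, 1 for a child, 2 for a grandchild, and so on."""
    return len(domain.split(".")) - 2

print(parent_domain("widgets.sales.production.com"))  # sales.production.com
print(tree_depth("widgets.sales.production.com"))     # 2
```

As the article notes, on the real Internet .com itself would be the top level domain, so the "two labels equals a root" assumption only holds for private Active Directory trees like the one described here.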
In this article, I have talked about the organizational structure used in creating Active Directory domains. In the next part of this article series, I will talk about how network communications work in an Active Directory environment.
Introduction to FSMO Roles
The necessity of FSMO roles. So far in this article series, I have explained that the Active Directory consists of a forest filled with domain trees, and that the names of each domain indicate its position within the forest. Given the hierarchical nature of the Active Directory, it might be easy to assume that domains near the top of the hierarchy (or rather the domain controllers within those domains) are the most important. This isn't necessarily the case though. In this article, I will discuss the roles that individual domain controllers play within the Active Directory forest. Earlier in this series, I talked about how domains in Windows NT were all encompassing. Like Active Directory domains, Windows NT domains supported the use of multiple domain controllers. Remember that domain controllers are responsible for authenticating user logons. Therefore, if a domain controller is not available then no one will be able to log on to the network. Microsoft realized this early on and designed Windows to allow multiple domain controllers so that if a domain controller failed, another domain controller would be available to authenticate logons. Having multiple domain controllers also allows the domain related work load to be shared by multiple computers rather than the full burden falling on a single server. Although Windows NT supported multiple domain controllers within a domain, one of these domain controllers was considered to be more important than the others. This domain controller was known as the Primary Domain Controller, or PDC. As you may recall, a domain controller contains a database of all of the user accounts within the domain (among other things). This database was called the Security Accounts Manager, or SAM database. In Windows NT, the PDC stored the master copy of the database. Other domain controllers within a Windows NT domain were known as Backup Domain Controllers or BDCs. 
Any time that a change needed to be made to the domain controller's database, the change would be written to the PDC. The PDC would then replicate the change out to all of the BDCs in the domain. Under normal circumstances, the PDC was the only domain controller in a Windows NT domain to which domain related updates could be applied. If the PDC were to fail, there was a way to promote a BDC to PDC, thus enabling that domain controller to act as the domain's one and only PDC. Active Directory domains do things a little bit differently. The Active Directory uses a multi-master replication model. What this means is that every domain controller within a domain is writable. There is no longer the concept of PDCs and BDCs. If an administrator needs to make a change to the Active Directory database, the change can be applied to any domain controller in the domain, and then replicated to the remaining domain controllers. Although the multi-master replication model probably sounds like a good idea, it opens the door for contradictory changes. For example, what happens if two different administrators apply contradictory changes to two different domain controllers at the same time? In most cases, the Active Directory assumes that the most recent change takes precedence. In some situations, the consequences of a conflict are too serious to rely on this type of conflict resolution. In these cases, Microsoft takes the standpoint that it is
better to prevent a conflict from occurring in the first place than to try to resolve the conflict after it happens. To handle these types of situations, Windows is designed to designate certain domain controllers to perform Flexible Single Master Operation (FSMO) roles. Essentially this means that Active Directory domains fully support multi-master replication except in certain circumstances in which the domain reverts to using a single master replication model. There are three different FSMO roles that are assigned at the domain level, and two additional roles that are assigned at the forest level.
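The "most recent change wins" rule mentioned above can be pictured with a tiny sketch. This is a simplification: real Active Directory replication compares a version number, a timestamp, and the originating domain controller's identity when resolving conflicts, not just a clock.

```python
def resolve_conflict(change_a, change_b):
    """Each change is a (timestamp, value) pair; the later write wins.
    Ties are broken arbitrarily here, whereas real Active Directory
    uses extra replication metadata as tie-breakers."""
    return change_a if change_a[0] >= change_b[0] else change_b

# Two administrators update the same attribute on different DCs:
print(resolve_conflict((100, "Sales"), (105, "Marketing")))  # (105, 'Marketing')
```

The FSMO roles exist precisely for the cases where this kind of after-the-fact resolution is not good enough, so the change is funneled through a single domain controller instead.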
Where are the FSMO Roles Located?
For the most part, the FSMO roles pretty much take care of themselves. It is important however for you to know which domain controllers host these roles. By default, the first domain controller in the forest hosts all five roles. As additional domains are created, the first domain controller brought online in each domain holds all three of the domain level FSMO roles. The reason why it is so important to know which domain controllers hold these roles is because hardware eventually gets old and is decommissioned. I once saw a situation in which a network administrator was preparing to deploy an Active Directory network for his company. While waiting for the newly ordered servers to arrive, the administrator installed Windows onto a junk PC so that he could begin playing around with the various Active Directory management tools. When the new servers finally arrived, the administrator configured them as domain controllers in the already created domain rather than creating a new forest. Of course this meant that the junk PC was holding the FSMO roles for both the domain and the forest. Everything worked fine until the administrator decided to remove the "junk" PC from the network. Had he properly decommissioned this server, there would not have been a problem. Being inexperienced though, he simply reformatted the machine's hard drive. All of a sudden the Active Directory began to experience numerous problems. If this administrator had realized that the machine that he had removed from the domain was hosting the domain and forest's FSMO roles, the problems could have been avoided. Incidentally, in a situation like this there is a way of seizing the FSMO roles from the deceased server so that your network can resume normal operations.
What are the FSMO Roles?
I will talk more about the specific functions of the FSMO roles in the next article in this series. I do however want to quickly mention what these roles are. As you may recall, I mentioned that there are three domain specific roles, and two forest specific roles. The domain specific roles include the Relative Identifier Master, the Primary Domain Controller Emulator, and the Infrastructure Master. Forest level roles include the Schema Master and the Domain Naming Master. Below is a brief description of what these roles do:
Schema Master: maintains the authoritative copy of the Active Directory database schema.
Domain Naming Master: maintains the list of domains within the forest.
Relative Identifier Master: responsible for ensuring that every Active Directory object in a domain receives a unique security identifier.
Primary Domain Controller Emulator: acts as the Primary Domain Controller in domains containing domain controllers running Windows NT.
Infrastructure Master: responsible for updating an object's security identifier and distinguished name in a cross domain object reference.
Hopefully by now, you understand the importance of the FSMO roles even if you don't understand what the roles themselves actually do. In the next article in this series, I will discuss the FSMO roles in much greater detail and help you to understand what it is that they actually do. I will also show you how to definitively determine which server is hosting the various roles.
FSMO Roles continued
Continuation of the discussion of FSMO roles.
Introduction
This article will continue the discussion of FSMO roles by discussing what the various roles do, the consequences of FSMO failures, and how to determine which server is hosting the FSMO roles.
The Importance of FSMO Roles
In the previous part of this article series, I explained that Active Directory domains use multi-master replication except in certain situations in which it is critically important to avoid a conflict. In those situations, Windows reverts to a single master replication model in which a single domain controller acts as the sole authority for the change in question. These domain controllers are said to hold Flexible Single Master Operations (FSMO) roles. As I explained in Part 7 of this article series, there are five different FSMO roles. Two of these roles exist at the forest level, and three of the roles exist at the domain level. The forest level roles include the Schema Master and the Domain Naming Master, while the domain level FSMO roles include the Relative Identifier Master, Primary Domain Controller (PDC) Emulator, and Infrastructure Master. I actually debated as to whether or not to discuss FSMO roles so early in this article series. Ultimately I decided to go ahead because FSMO roles are so important to supporting Active Directory functionality. As I'm sure you probably know, in order to be able to function, the Active Directory requires that DNS services be accessible and that each domain have at least one domain controller. When an Active Directory based network is initially created, the first domain controller to be brought online is almost always configured to act as the network's DNS server. This same domain controller is also assigned all five of the FSMO roles. If other domains are created within the forest, then the first domain controller within each domain will host the FSMO roles for that domain. The forest level FSMO roles are only hosted on a single domain controller regardless of the number of domains in the forest. I tell you this because I want to talk about what will happen if a domain controller that is hosting the FSMO roles fails. 
If the domain controller that contains the forest level FSMO roles fails, you are definitely going to notice the problem. It isn’t that the FSMO roles themselves are all that critical to the network’s operation, but rather that the domain controller that hosts the forest level FSMO roles is usually also hosting the DNS services, which are considered critical to Active Directory. If the DNS services were hosted on a separate server and the domains within the forest each had more than one domain controller, you probably wouldn’t even notice the failure for a while (unless you had monitoring software to alert you to the failure). Usually, there are no immediate consequences to an FSMO role failure, but some rather strange symptoms will develop later on if the problem is not corrected. That being the case, it is important to know the signs of an FSMO role failure. It is also important for you to know how to determine which server is hosting each FSMO role. That way, if
symptoms matching that of an FSMO failure occur, you can check to see which server is hosting the role that may have failed, and can then begin the troubleshooting process on that server.
The Schema Master
The Active Directory is really nothing more than a database, and like any other database, the Active Directory contains a schema. Unlike many other databases, the Active Directory’s schema is not static. There are any number of operations that require extending the schema. For example, installing Exchange Server requires the Active Directory schema to be extended. Any time that changes are made to the Active Directory schema, those changes are applied to the Schema Master. The Schema Master is by far the most critical of the FSMO roles, so Microsoft hides it from view. If you need to find out which server is hosting the Schema Master role, then insert your Windows Server 2003 installation CD, and double click on the ADMINPAK.MSI file that’s found in the CD’s I386 directory. When you do, Windows will launch the Administration Tools Pack Setup Wizard. Follow the wizard’s prompts to install the Administration Tools pack. When the installation process completes, close the Setup wizard and open the Microsoft Management Console by entering the MMC command at the Run prompt. When the console opens, select the Add / Remove Snap-In command from the File menu. When you do, Windows will display the Add / Remove Snap-in properties sheet. Click the Add button found on the properties sheet’s Standalone tab to reveal a list of available snap-ins. Select the Active Directory Schema snap-in from the list and click the Add button, followed by the Close and OK buttons. Now that the snap-in has been loaded, right click on the Active Directory Schema container and select the Operations Master command from the resulting shortcut menu. You will now see a dialog box that tells you which server is acting as the forest’s Schema Master.
The Domain Naming Master
As I have already explained, an Active Directory forest can contain multiple domains. It's the Domain Naming Master's job to keep track of these domains. If the Domain Naming Master were to fail, then it would be impossible to create or remove domains until the Domain Naming Master comes back online. To determine which server is acting as the Domain Naming Master for the forest, open the Active Directory Domains and Trusts console. When the console opens, right click on the Active Directory Domains and Trusts container and select the Operations Masters command from the resulting shortcut menu. When you do, Windows will display a dialog box that identifies the Domain Naming Master.
The Relative Identifier Master
As you know, the Active Directory allows administrators to create Active Directory objects on any domain controller. The catch is that each object must have a unique relative identifier number. To prevent relative identifier numbers from being duplicated, the Relative Identifier Master allocates a pool of relative identifiers to each domain controller. When a new object is created within a domain, the domain controller that the object is being created on takes one of its relative identifiers out of its pool and assigns it to the object. When the pool is exhausted, the domain controller must contact the
Relative Identifier Master for additional relative identifiers. As such, the eventual symptom of a Relative Identifier Master failure is the inability to create objects in the Active Directory. To determine which server is acting as the Relative Identifier Master for a domain, open the Active Directory Users and Computers console. When the console opens, right click on the listing for the current domain and select the Operations Masters command from the resulting shortcut menu. When you do, Windows will display the Operations Masters properties sheet. You can determine which domain controller is acting as the Relative Identifier Master by looking at the properties sheet's RID tab.
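The pool mechanism just described is easy to simulate. In the sketch below the pool size and starting RID are made up (by default, real domain controllers request blocks of 500 RIDs), but the logic mirrors the description: each domain controller draws RIDs from its own block and contacts the RID Master only when the block runs dry.

```python
class RidMaster:
    """Hands out non-overlapping blocks of relative identifiers."""
    def __init__(self, pool_size=5):
        self.next_rid = 1000       # arbitrary starting point for the demo
        self.pool_size = pool_size

    def allocate_pool(self):
        block = list(range(self.next_rid, self.next_rid + self.pool_size))
        self.next_rid += self.pool_size
        return block

class DomainController:
    def __init__(self, rid_master):
        self.rid_master = rid_master
        self.pool = []

    def create_object(self):
        if not self.pool:  # pool exhausted: contact the RID Master
            self.pool = self.rid_master.allocate_pool()
        return self.pool.pop(0)

master = RidMaster()
dc1, dc2 = DomainController(master), DomainController(master)
rids = [dc1.create_object() for _ in range(7)] + [dc2.create_object() for _ in range(7)]
print(len(set(rids)) == len(rids))  # True -- no duplicate RIDs across DCs
```

Because only the RID Master hands out blocks, two domain controllers can create objects simultaneously without ever issuing the same identifier, which is exactly why its failure eventually blocks object creation.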
The Primary Domain Controller Emulator
Throughout this article series, I have talked about the role that the Primary Domain Controller (PDC) plays in Windows NT environments. The PDC Emulator role was created to allow Active Directory domain controllers to co-exist with Windows NT domain controllers. The basic idea was that when an organization is being upgraded from Windows NT to Windows 2000 or to Windows Server 2003, the PDC is the first domain controller to be upgraded. At that point, the newly upgraded domain controller functions both as an Active Directory domain controller and as a PDC to the domain controllers that are still running Windows NT. Today the PDC Emulator's original purpose is less relevant because very few organizations still use Windows NT Server, although the role continues to perform other duties, such as acting as the domain's authoritative time source and receiving preferential replication of password changes. If you need to determine which server in your domain is hosting the PDC Emulator role, you can do so by opening the Active Directory Users and Computers console. When the console opens, right click on the listing for the current domain and select the Operations Masters command from the resulting shortcut menu. When you do, Windows will display the Operations Masters properties sheet. You can determine which domain controller is acting as the PDC Emulator by looking at the properties sheet's PDC tab.
The Infrastructure Master
In an Active Directory environment, a forest can contain multiple domains. Of course the implication of this is that Active Directory domains are not completely independent entities. They must occasionally communicate with the rest of the forest. This is where the Infrastructure Master comes into play. When you create, modify, or delete an object within a domain, the change will naturally be propagated throughout the domain. The problem is that the rest of the forest is not aware of the change. It’s the Infrastructure Master’s job to make the rest of the forest aware of the change. If an Infrastructure Master server fails then changes to objects will not be visible across domain boundaries. For example, if you were to rename a user account, the user account would still appear to have its old name when viewed from other domains in the forest. To determine which server is acting as the Infrastructure Master for a domain, open the Active Directory Users and Computers console. When the console opens, right click on the listing for the current domain and select the Operations Masters command from the resulting shortcut menu. When you do, Windows will display the Operations Masters properties sheet. You can determine which domain controller is acting as the Infrastructure Master by looking at the properties sheet’s Infrastructure tab.
As you can see, the FSMO roles play a critical role in the functionality of the Active Directory. In the next part of this article series, I will continue the discussion by talking about the structure of the Active Directory and the naming scheme used by Active Directory objects.
Active Directory Information
How objects are stored in the Active Directory

In the last few parts of this article series, I talked a lot about what the Active Directory is, and how it works with regard to your network's domain controllers. You already know from the previous articles in this series that the Active Directory is essentially a database containing various objects such as user accounts and computer accounts. In this article, I want to continue the discussion by showing you how the Active Directory is structured. If you have ever used Microsoft Access or SQL Server, then you are probably used to being able to open the database and view it in its entirety. However, none of the primary administrative tools used for managing the Active Directory will allow you to see the entire Active Directory database. Instead, Microsoft provides you with a variety of management tools that each focus on a specific area of the database. As a new administrator, the administrative tool that you will probably use most often is the Active Directory Users and Computers console. You can access the Active Directory Users and Computers console from any Windows Server 2003 domain controller by selecting the Active Directory Users and Computers command from the server's Start / All Programs / Administrative Tools menu. The console itself looks something like what you see in Figure A.
Figure A: The Active Directory Users and Computers console is the primary administrative tool for managing Active Directory objects.
I will discuss the process of creating and editing Active Directory objects later on, but in the meantime I wanted to go ahead and show you this console because it reveals a little bit about the structure of the Active Directory. If you look at Figure A, you will notice that there are a number of containers, each of which corresponds to a specific object type. Every object in the entire Active Directory is assigned an object type (known as an object class). Each object also has a number of attributes associated with it. The specific attributes vary depending on the object type. For example, the Users container is filled with user accounts, which are all classified as user objects, as shown in Figure B. If you were to right click on one of these user objects and choose the Properties command from the resulting shortcut menu, you would see the user object's properties sheet, as shown in Figure C.
Figure B: The Users container is filled with user accounts, which are all classified as user objects.
Figure C: When you right click on a user object and select the Properties command from the resulting shortcut menu, you will see the user’s properties sheet.
If you look at Figure C, you will see that there are fields for various pieces of information such as first name, last name, telephone number, etc. Each of these fields corresponds to a specific attribute of the individual object. Although the majority of the fields shown in the figure are not populated, in a real life situation these fields could be used to create a corporate directory. In fact, many applications are designed to extract information directly from the Active Directory. For example, Microsoft Exchange Server (Microsoft's e-mail server product) creates a global address list that is based on the contents of the Active Directory. This global address list is used when sending email messages to other users in the company. If you look at Figure D, you can see a screen in which I performed a search on the name Hershey (my cat's name, in case you are wondering), and Outlook returned all of the Global Address List entries that contain the name Hershey. Not surprisingly, there is only one result. If you look at the results portion of the window, though, you can see where Outlook would display the user's title, business phone number, and location had these fields been populated. All of this information was extracted from the Active Directory.
If you wanted to see even more information about the user, you could right click on the user’s name and choose the Properties command from the resulting menu. Doing so would display the screen shown in Figure E. Keep in mind that this is not an administrative screen. This is a screen that any user in the company can access directly through Outlook 2007 in order to find information about other employees.
Figure E: You can view Active Directory information directly through Microsoft Outlook.

It is easy to dismiss the significance of what I just showed you. After all, Outlook is a Microsoft product, so it only makes sense that Outlook would be able to extract information from the Active Directory, which is a part of another Microsoft product. What a lot of people do not realize, though, is that it is fairly easy for anyone with the appropriate permissions to extract information from the Active Directory. In fact, there are countless third party products that are designed to interact with the Active Directory. Some are even capable of storing data in dedicated Active Directory partitions. The reason why it is possible for you or for third party software vendors to interact with the Active Directory is that the Active Directory is based on a well known standard called X.500. The X.500 standard is basically just a common way of implementing a directory service. Microsoft is not the only company to have created a directory service based on this standard; Novell, for example, originally built NetWare Directory Services around it. There is also a standard way of accessing directory service information. In an Active Directory environment, accessing directory information involves using the Lightweight Directory Access Protocol, otherwise known as LDAP. The LDAP protocol runs on top of the TCP/IP protocol. The first thing that you need to know about LDAP is that its name is a bit misleading, because there is nothing especially lightweight about it (although it is lighter than the original Directory Access Protocol, which was not designed to take advantage of the TCP/IP protocol stack). Entire books have been written on LDAP, and an in depth discussion is not really appropriate at this point in the article series.
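Although a full LDAP tutorial is out of scope here, a tiny sketch can give you the flavor of an LDAP query. The filter below, built in Python, is the kind of search expression a directory-aware application might send when looking up a name such as Hershey. The objectClass and cn attribute names are standard LDAP, but the exact filter any given application sends is an assumption on my part, and the escaping follows the search-filter rules in RFC 4515:

```python
def ldap_name_filter(name: str) -> str:
    """Build an LDAP search filter (RFC 4515 syntax) matching user
    objects whose common name starts with the given text. Characters
    with special meaning in filters must be escaped as \\XX codes."""
    escaped = (name.replace("\\", r"\5c")   # escape backslash first
                   .replace("*", r"\2a")
                   .replace("(", r"\28")
                   .replace(")", r"\29"))
    # (&...) combines the two conditions with a logical AND
    return f"(&(objectClass=user)(cn={escaped}*))"

print(ldap_name_filter("Hershey"))
# (&(objectClass=user)(cn=Hershey*))
```

An LDAP client library would send a filter like this to the directory along with a search base (a distinguished name) and a list of attributes to return.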
What I will tell you is that every object in the Active Directory is referred to by a distinguished name (often abbreviated as DN). The distinguished name is based on the object's position within the directory hierarchy. There are many different components that can go into a distinguished name, but two of the more common ones are a common name (abbreviated as CN) and a domain component (abbreviated as DC). For example, suppose that the Contoso.com domain contained an account named User1, and the account was located in the Users container. In such a situation, the distinguished name for the user account would be: CN=User1, CN=Users, DC=Contoso, DC=com
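To make the relationship between these pieces concrete, here is a minimal Python sketch that assembles a distinguished name from a user name, a container name, and a DNS-style domain name. It follows the comma-and-space formatting used in this article; real directories are flexible about whitespace after the commas:

```python
def build_dn(user: str, container: str, domain: str) -> str:
    """Compose a distinguished name from a user name, a container
    name, and a DNS-style domain name such as 'Contoso.com'. Each
    dot-separated part of the domain becomes one DC component."""
    dc_parts = ", ".join(f"DC={part}" for part in domain.split("."))
    return f"CN={user}, CN={container}, {dc_parts}"

print(build_dn("User1", "Users", "Contoso.com"))
# CN=User1, CN=Users, DC=Contoso, DC=com
```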
In this article, I have explained that information stored in the Active Directory can be used by external applications through the use of the LDAP protocol. In the next article in this series, I will continue the discussion of distinguished names as they relate to the Active Directory.
The basics of naming objects within a directory

In the previous part of this article series, I explained that the LDAP protocol references objects in the Active Directory by their distinguished name, and that every object in the directory has its own unique distinguished name. In this article, I want to continue the discussion by explaining how distinguished names work.
Before I Begin
Before I get started, I just want to remind you that distinguished names are not unique to the Active Directory. Microsoft built the Active Directory to take advantage of industry standards which are used by other companies such as Novell and IBM. By learning how distinguished names work, you will not only be better prepared to manage an Active Directory environment, you will also have some degree of familiarity if you are ever asked to work with a non Microsoft network operating system.
Basic Naming Rules
Distinguished names are made up of attributes, which are assigned values. A single distinguished name almost always contains multiple attribute value pairs. To see what I am talking about, let’s look at a simple distinguished name:
CN=User1, CN=Users, DC=Contoso, DC=com
In this particular example, the distinguished name is made up of four different attribute / value pairs, each of which is separated by a comma. The first attribute / value pair is CN=User1. In this attribute / value pair, CN (which stands for Common Name) is the attribute and User1 is the value. Attributes and values are always separated by the equals sign, and attribute / value pairs are always separated from each other by commas.
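Because attribute / value pairs follow such a regular pattern, a distinguished name can be pulled apart with a few lines of Python. This naive sketch assumes that no escaped commas or equals signs appear inside the values (escaping is covered later in this article):

```python
def parse_dn(dn: str):
    """Split a simple distinguished name into (attribute, value)
    pairs. Naive: assumes no escaped commas or equals signs occur
    inside the values themselves."""
    pairs = []
    for part in dn.split(","):
        # Each pair looks like 'CN=User1'; split on the first '='
        attribute, value = part.strip().split("=", 1)
        pairs.append((attribute, value))
    return pairs

print(parse_dn("CN=User1, CN=Users, DC=Contoso, DC=com"))
# [('CN', 'User1'), ('CN', 'Users'), ('DC', 'Contoso'), ('DC', 'com')]
```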
Relative Distinguished Names
When you look at a distinguished name such as CN=User1, CN=Users, DC=Contoso, DC=com, one thing probably becomes immediately apparent: distinguished names can be really long. If you take a closer look at this distinguished name, you will notice that it is hierarchical. In this particular case, DC=com represents the highest level of the hierarchy. DC=Contoso represents the second level of the hierarchy. You can tell that COM and Contoso are both domains because both use the DC attribute. The domain hierarchy mimics the domain hierarchy used by DNS servers (you learned about the DNS hierarchy earlier in this series). It is important to understand how the distinguished name hierarchy works for two reasons. First, by understanding the naming hierarchy, it becomes possible to know exactly where a particular object is located within the directory. The other reason why it is important to understand the nature of the directory hierarchy is because sometimes shortcuts are used in lieu of a full distinguished name. To see what I am talking about, let's take another look at our example distinguished name: CN=User1, CN=Users, DC=Contoso, DC=com. This distinguished name simply refers to a user account (more precisely known as a user object) named User1. The rest
of the information in the distinguished name simply tells us the object's position within the directory hierarchy. If you were trying to tell another person about this object, you would probably casually refer to it as User1. Sometimes LDAP does the same thing. This is possible because it isn't necessary to provide information about an object's location in the hierarchy if the location is already known. For example, if we are performing some operation on user objects located in the Users container in the Contoso.com domain, is it really necessary to explicitly state that every single object is located in the Contoso.com domain's Users container? In situations like this, the distinguished name is often replaced by a relative distinguished name (abbreviated RDN). In the case of CN=User1, CN=Users, DC=Contoso, DC=com, the RDN is CN=User1. The RDN is always made up of the most specific identifier, which will be the leftmost attribute / value pair in the distinguished name. The remaining portion of the distinguished name is known as the parent distinguished name. In this particular case, the parent distinguished name would be CN=Users, DC=Contoso, DC=com. Before I move on, I want to mention that Microsoft tends to use a slightly different distinguished name format than some other network operating system manufacturers. As you have already seen, Microsoft's distinguished names tend to be based on containers and domains. There is certainly nothing wrong with this format, because it does comply with RFC 2253, which sets the rules for distinguished names. Some of the other network operating systems tend to base their distinguished name hierarchies on companies and countries rather than containers and domains. In these types of distinguished names, the attribute O is used to designate an organization (company) name, and the attribute C is used to designate a country name.
Using this naming convention, the distinguished name CN=User1, CN=Users, DC=Contoso, DC=com would look something like this:
CN=User1, O=Contoso, C=US
Keep in mind that the two formats both comply with RFC 2253, but they cannot be used interchangeably. Remember that a distinguished name’s job is to describe an object and its position within the directory. The reason for the two different distinguished name formats is that Microsoft structures their directory differently than some of their competitors.
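The split between an RDN and its parent distinguished name can also be expressed as a short Python sketch. As before, this naive version assumes no escaped commas appear inside the values:

```python
def split_rdn(dn: str):
    """Return (rdn, parent_dn) for a distinguished name. The RDN is
    the leftmost attribute / value pair; everything to its right is
    the parent distinguished name."""
    rdn, _, parent = dn.partition(",")
    return rdn.strip(), parent.strip()

rdn, parent = split_rdn("CN=User1, CN=Users, DC=Contoso, DC=com")
print(rdn)     # CN=User1
print(parent)  # CN=Users, DC=Contoso, DC=com
```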
Special Characters in Distinguished Names
So far you have seen that commas and equal signs have special meaning in the context of a distinguished name. There are several other characters that also have special meanings. These characters include the plus sign, the greater than and less than signs, the number sign, the back slash, and the quotation mark. I’m not going to bother covering most of these because you will rarely, if ever, have to use them in real life. I do however want to talk about the back slash. The back slash allows you to tell an LDAP statement to ignore the following character. This allows you to store otherwise forbidden characters in your directory. To see how this is of use, consider that full names are often expressed as last name comma first name. Even so, LDAP does not allow you to use the statement CN=Smith,
John because the comma is used by LDAP to separate attribute / value pairs. If you wanted to store the value Smith, John in the directory, you could do so by making use of the back slash, as shown below:

CN=Smith\, John
In the statement above, the back slash tells LDAP to treat the comma as data rather than as a part of the command syntax. Another way to accomplish this is to surround the entire attribute value by quotation marks. Everything within the quotation marks is treated as data rather than as a part of the syntax. There is a special rule regarding the use of the back slash within quotation marks. The back slash can only be used to force LDAP to ignore another back slash. To put it simply, if you needed to include a back slash as a part of the data, you would simply use two back slashes instead of one. Any other use of the back slash between quotation marks is considered to be illegal.
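As a rough illustration, the escaping rule can be sketched in Python. The character list below is the set of specially treated characters named in RFC 2253; this sketch handles only in-value escaping, not the leading-space and leading-number-sign cases that the RFC also covers:

```python
def escape_rdn_value(value: str) -> str:
    """Escape the characters that RFC 2253 treats as special inside
    a distinguished name value, so that text such as 'Smith, John'
    can be stored literally."""
    special = ',+"\\<>;'   # characters that need a leading back slash
    out = []
    for ch in value:
        if ch in special:
            out.append("\\")
        out.append(ch)
    return "".join(out)

print("CN=" + escape_rdn_value("Smith, John"))   # CN=Smith\, John
```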
As you can see, the rules for creating a distinguished name can be a bit tricky. Even so, having a basic understanding of distinguished names is key to effectively managing an Active Directory environment. In Part 11, I will continue the discussion by demonstrating some of the Active Directory management tools.
The Active Directory Users and Computers Console
The Active Directory Users and Computers console and how to use this console to manage remote domains

Over the last several parts of this article series, I have talked a lot about the inner workings of the Active Directory. In this article, I want to switch gears and show you what all of this information has to do with running a network. Windows Server 2003 comes with several different tools used for managing the Active Directory. The Active Directory management tool that you will use most often for day-to-day management tasks is the Active Directory Users and Computers console. As the name implies, this console is used to create, manage, and delete user and computer accounts. You can access this console by clicking your server's Start button and navigating through the Start menu to All Programs / Administrative Tools. The Active Directory Users and Computers option should be near the top of the Administrative Tools menu. Keep in mind that only domain controllers contain this option, so if you do not see the Active Directory Users and Computers command, make sure that you are logged into a domain controller. Another thing that you might notice is that the Administrative Tools menu contains a couple of other Active Directory tools: Active Directory Domains and Trusts and Active Directory Sites and Services. I will be discussing these utilities in future articles. When you open the Active Directory Users and Computers console, you will see a screen similar to the one that is shown in Figure A. As you might recall from previous articles in the series, the Active Directory is based on a forest, which contains one or more domains. Although the forest represents the entire Active Directory, the Active Directory Users and Computers console does not allow you to work with the Active Directory at the forest level. The Active Directory Users and Computers console is strictly a domain level tool.
In fact, if you look at Figure A, you will notice that production.com is highlighted. Production.com is a domain on my network. All of the containers listed beneath the domain contain Active Directory objects that are specific to the domain.
Figure A: The Active Directory Users and Computers console allows you to manage individual domains
You might have noticed that I said that production.com was one of the domains on my network, and yet none of my other domains are listed in Figure A. The Active Directory Users and Computers console only lists one domain at a time for the sake of keeping the console uncluttered. Remember when I said that the Active Directory Users and Computers console is only accessible from the Administrative Tools menu if you are logged into a domain controller? Well, the domain that is listed in the console corresponds to the domain controller that you are logged into. For example, in writing this article I logged in to one of the domain controllers for the production.com domain, so the Active Directory Users and Computers console connects to the production.com domain. The problem with this is that domains are often geographically dispersed. For example, it is fairly common for large companies to have a different domain for each corporate office. If, for instance, you were in Miami, Florida, and the company's other domain represented an office in Las Vegas, Nevada, it would not be practical to have to travel across the country every time you needed to manage the Las Vegas domain. Fortunately, you do not have to. Although the Active Directory Users and Computers console defaults to displaying the domain that is associated with the domain controller that you are logged in to, you can use the console to display any domain that you have rights to. All you have to do is right click on the domain that is being displayed and then select the Connect to Domain command from the resulting shortcut menu. Doing so displays a screen that allows you to either type in the name of the domain that you want to connect to, or to click a Browse button and browse for the domain.
Just as a domain might be located far away, you might also find it impractical to log directly in to a domain controller. For example, I have worked in several offices in which the domain controllers were located in a separate building, or far enough away within the facility, that logging in to a domain controller for day to day maintenance was impractical. The good news is that you do not have to be logged in to a domain controller to access the Active Directory Users and Computers console. You only have to be logged in to a domain controller to access the Active Directory Users and Computers console from the Administrative Tools menu. You can access the Active Directory Users and Computers console from a member server by manually loading it into the Microsoft Management Console. To do so, enter the MMC command at the server's Run prompt. When you do that, the server will open an empty Microsoft Management Console. Next, select the Add / Remove Snap-In command from the console's File menu. Windows will now open the Add / Remove Snap-In properties sheet. Click the Add button found on the properties sheet's Standalone tab and you will see a list of all of the available snap-ins. Select the Active Directory Users and Computers option from the list of snap-ins and click the Add button, followed by the Close and OK buttons. The console will now be loaded. In some situations loading the console in this way may produce an error. If you receive an error and the console does not allow you to manage the domain, then right click on the Active Directory Users and Computers container and select the Connect to Domain Controller command from the resulting shortcut menu. This will give you the chance to connect the console to a specific domain controller without actually having to log in to that domain controller. Doing so will allow you to manage the domain as if you were sitting at the domain controller's console.
That technique works great if you have a server at your disposal, but what happens if your workstation is running Windows Vista, and all of the servers are on the other side of the building? One of the easiest solutions to this problem is to establish an RDP session with one of your servers. RDP is the Remote Desktop Protocol. It allows you to remotely control servers in your organization. In a Windows Server 2003 environment, you can enable a remote session by right clicking on My Computer and selecting the Properties command from the resulting shortcut menu. Upon doing so, you will see the System Properties sheet. Now, go to the Remote tab and select the Enable Remote Desktop on this Computer check box, as shown in Figure B.
Figure B: You can configure a server to support Remote Desktop connections
To connect to the server from Windows Vista, select the Remote Desktop Connection command from the All Programs / Accessories menu. When you do, you will see a screen similar to the one that is shown in Figure C. Now, just enter the name of your server and click the Connect button to establish a remote control session.
Figure C: Windows Vista makes it easy to connect to a remote server
In this article, I have begun demonstrating the Active Directory Users and Computers console. I have also explained how you can use this console to manage remote domains.
In Part 12 I will continue the discussion by showing you more of the Active Directory Users and Computers console's capabilities.
User Account Management
How to create a user account and some basic user account management techniques

In the previous part of this article series, I began discussing the Active Directory Users and Computers console. Although that article explained how to connect to the domain of choice using the console, it never actually explained how to use the console for day-to-day management tasks. In this article, I will show you some basic techniques for user account maintenance.
Creating a User Account
One of the most common uses for the Active Directory Users and Computers console is to create new user accounts. To do so, expand the container corresponding to the domain that you are attached to, and select the Users container. When you do, the console's details pane will display all of the user accounts that currently exist in the domain, as shown in Figure A.
Figure A: Selecting the Users container causes the console to display all of the user accounts in the domain.
Now, right click on the Users container and select the New command from the resulting shortcut menu. When you do, you will see a submenu that gives you the choice of many different types of objects that you can create. Technically, the Users container is just a container and you can put pretty much any type of object in it. It is generally considered bad practice though to store objects other than user objects in the Users container. That being the case, select the User command from the submenu. When you do, you will see the dialog box shown in Figure B.
Figure B: The New Object – User dialog box allows you to create a new user account.
As you can see in the figure, Windows initially only requires you to enter some very basic information about the user. Although this screen asks for things like first name and last name, these are not technically required. The only piece of information that is absolutely required is the User Logon Name. Although the other fields are optional, I recommend filling them in anyway. The reason why I recommend filling in as many fields as you can is because a user account is nothing more than an object that will reside within the Active Directory. Things like first name and last name are attributes of the user object that you are creating. The more attribute information that you fill in, the more useful the information stored in the Active Directory will be. After all, the Active Directory is a database that you can query for information. In fact, many applications work by extracting the various attributes from the Active Directory. When you have filled in the various fields, click the Next button, and you will be taken to the screen shown in Figure C.
Figure C: You will be prompted to assign a password to the new user account.
As you can see in the figure, assigning a password is fairly simple. All you really have to do is type and retype the password. By default, the user is required to change the password at the next logon. You can prevent this behavior by clearing the User Must Change Password at Next Logon check box. There is another check box allowing you to prevent the user from changing their password at all. You also have the option of setting the password to never expire, or of disabling the account completely. Although there is nothing overly complex about the password screen, there is one important thing to keep in mind. When you assign a password to a new user account, the password must comply with your corporate security policy. If the password that you use does not meet the requirements dictated by the applicable group policies, then the user account will not be created. Click Next and you will see a screen displaying a summary of the options that you have chosen. Assuming that everything looks good, click Finish and the new user account will be created.
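To give a feel for what "must comply with your corporate security policy" means in practice, here is a rough Python sketch of a Windows-style complexity check. The minimum length of seven characters and the three-of-four character category rule mirror common defaults, but they are assumptions on my part; the actual rules on your network are whatever your group policies dictate:

```python
import string

def meets_complexity(password: str, min_length: int = 7) -> bool:
    """Rough sketch of a Windows-style password complexity check:
    the password must meet a minimum length and draw from at least
    three of four character categories (upper, lower, digit,
    punctuation). The exact rules come from group policy; these
    defaults are assumed for illustration."""
    if len(password) < min_length:
        return False
    categories = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(categories) >= 3

print(meets_complexity("P@ssw0rd"))   # True
print(meets_complexity("password"))   # False
```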
Editing User Account Attributes
Earlier, I discussed the importance of filling in the various attributes as you create a new user account. You might have noticed that the screens involved in creating a new user account did not really have many attributes that you were able to fill in. However, the Active Directory contains dozens of built-in attributes related to user accounts. I am not saying that you have to go through the console and populate dozens of attributes for every single user account, but some attributes do come in handy. I recommend populating attributes that are related to basic contact information. In fact, some corporations create corporate directories that are based solely on information stored in these Active Directory attributes. Even if you are not interested in building applications that extract information from your Active Directory, it is still a
good idea to populate the Active Directory with user contact information. For example, suppose that you need to reboot a server, and a user is still logged into an application that resides on the server. If you have the user's contact information stored in the Active Directory, then you can simply look up the user's phone number, and call the user to ask them to log out. Before I show you how to populate the various Active Directory attributes, I want to mention that the same technique can also be used for modifying existing attributes. For example, if a female employee were to get married, she might change her last name. You could use the techniques that I am about to show you to modify the contents of the Last Name attribute. To access the various user account attributes, simply right click on the user account of choice and select the Properties command from the resulting shortcut menu. Upon doing so, Windows will display the screen shown in Figure D.
Figure D: The user's properties sheet is used to store attribute and configuration information for the user account.
As you can see in the figure, the properties sheet's General tab allows you to modify the user’s first name, last name, or display name. You can also fill in (or modify) a few other fields such as Description, Office, Telephone Number, E-mail, or Web Page. If you are interested in storing more detailed information about the user, then check out the Address, Telephones, and Organization tabs. These tabs all contain fields for storing much more detailed information about the user.
Resetting a User’s Password
You probably noticed in Figure D that there are a lot of different tabs on the user’s properties sheet. Most of these tabs are related to the security and configuration of the user account. One thing that most new administrators seem to notice right away when exploring these tabs is that there is no option on any of the tabs to reset the user’s password. If you need to reset a user’s password, then close the user’s properties sheet. After doing so, right click on the user account and select the Reset Password command found on the resulting shortcut menu.
In this article, I have walked you through the processes of creating a user account, populating the various Active Directory attributes related to that account, and resetting the account password. In the next article in the series, I will continue the discussion by demonstrating more of the Active Directory Users and Computers console’s capabilities.
This article continues the Networking for Beginners series by introducing the concept of security groups. In the previous article in this series, I showed you how to use the Active Directory Users and Computers console to create and manage user accounts. In this article, I want to continue the discussion by teaching you about groups. In a domain environment, user accounts are essential. A user account gives a user a unique identity on the network. This means that it is possible to track the user’s online activity. It is also possible to give a user account a unique set of permissions, assign the user a unique e-mail address, and meet all of the user’s other individual needs. Although custom tailoring a user account to meet a user’s individual needs sounds like a good idea, it isn’t really practical in a lot of cases. Setting up and managing user accounts is a time consuming task. It isn’t a big deal if you’ve only got a couple dozen users in your organization, but if your organization has thousands of users, then account management can quickly become an overwhelming burden. My advice is that even if you manage a very small network, you should treat the small network as if it were a big network. The reason for this is that you never know when the network will grow. Using good management techniques from the very beginning will help you to avoid a logistical nightmare later on. I have actually seen the consequences of unexpected, rapid growth in the real world. About fifteen years ago, I was hired as a network administrator for an insurance company. At the time, the network was very small. There were only a couple dozen workstations attached to the network. The woman who was in charge of the network had no prior IT experience and was thrown to the wolves, so to speak. Not having an IT background, and not knowing any better, she had configured the network so that all of the configuration settings existed on a per user basis. At the time, this was no big deal. 
There weren't many users, and it was easy to manage the various accounts and permissions. Within a year there were over two hundred PCs on the network. By the time I left the company a couple of years later, there were well over a thousand people using a network that was initially designed to handle only a few dozen. As you can imagine, the network experienced some severe growing pains. Some of these growing pains were related to hardware performance, but most were related to the inability to effectively manage that many user accounts. Eventually, the network became such a mess that all of the user accounts had to be deleted and recreated from scratch.

Obviously, rapid unexpected growth can cause problems, but you are probably wondering why in the world things became so unmanageable that all of the accounts had to be deleted so that we could "just start over". As I mentioned before, all of the configuration and security settings were user based. This meant that if a department manager came to me and asked me to tell him who had access to a particular network resource, I would have to look at every account individually to see whether or not the user had access to the resource. When you only have a couple dozen users, checking every account to see which users have access to something is tedious and disruptive (at the time, checking took about 20 minutes). When you've got a couple hundred users, checking every user account can take most of the day.

Granted, the events that I just described happened well over a decade ago. As the IT industry goes, these events might as well have occurred in prehistoric times. After all, the network operating systems that were in use at the time are now extinct. Even so, the lessons learned back then are as relevant today as they were then. All of the problems that I just described could have been prevented if groups had been used.

The basic idea behind groups is that a group can contain multiple user accounts. Since security settings can be assigned at the group level, you should never manually assign permissions directly to a user account. Instead, you would assign permissions to a group, and then make the user a member of the group. I realize that this might sound a little confusing, so I will demonstrate the technique for you.

Suppose that one of your file servers contains a folder named Data, and that you need to grant a user read access to the Data folder. Rather than assigning the permission directly to the user, let's create a group. To do so, open the Active Directory Users and Computers console. When the console opens, right click on the Users container, and select the New | Group commands from the resulting shortcut menu. Upon doing so, you will see a screen similar to the one that is shown in Figure A. At a minimum, you must assign a name to the group. For ease of management, let's just call the group Data, since the group is going to be used to secure the Data folder. For right now, don't worry about the group scope or the group type settings. I will teach you about these settings in the next part of this series.
Figure A: Enter a name for the group that you are creating
Click OK, and the Data group will be added to the list of users, as shown in Figure B. Notice that the group’s icon uses two heads, indicating that it is a group, as opposed to the single headed icon used for user accounts.
Figure B: The Data group is added to the list of users
Now, double click on the Data group, and you will see the group’s properties sheet. Select the properties sheet’s Members tab, and click the Add button. You are now free to add user accounts to the group. The accounts that you add are said to be group members. You can see what the Members tab looks like in Figure C.
Figure C: The Members tab lists all of the group’s members
Now it's time to put the group to work. To do so, right click on the Data folder, and select the Properties command from the resulting shortcut menu. When you do, you will see the folder's properties sheet. Go to the properties sheet's Security tab, and click the Add button. When prompted, enter the name of the group that you just created (Data) and click OK. You are now free to establish a set of permissions for the group. Whatever permissions you apply to the group also apply to the group's members. As you can see in Figure D, there are some other rights that are applied to the folder by default. It is best to remove the Users group from the access control list to prevent any accidental permission conflicts.
Figure D: The Data group is added to the folder’s access control list
Remember earlier when I mentioned how much work it was to try to figure out which users had access to a particular resource? Well, when groups are in use, the process becomes simple. If you need to know which users have access to the folder, just look to see which groups have access to the folder, as shown in Figure D. Once you know which groups can access the folder, determining who has rights to the folder is as simple as checking the group's membership list (shown in Figure C). Any time additional users need access to the folder, just add their names to the list of group members. Likewise, you can revoke a user's access to the folder by removing the user's name from the list of group members.
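To make the payoff concrete, here is a minimal Python sketch of why group-based permissions turn "who has access?" into a quick lookup. The group, user, and server names are all invented for illustration; a real query would go against Active Directory itself, not a pair of dictionaries.

```python
# Hypothetical sketch: permissions are assigned to groups, never directly
# to users, so "who can reach this resource?" is a simple lookup.

# Each group maps to its member user accounts (names are made up).
group_members = {
    "Data": {"asmith", "bjones"},
    "Managers": {"bjones", "cdavis"},
}

# Each resource maps to the groups granted access on its ACL.
resource_acl = {
    "\\\\FileServer\\Data": {"Data"},
}

def users_with_access(resource):
    """Return every user who can reach the resource via group membership."""
    users = set()
    for group in resource_acl.get(resource, set()):
        users |= group_members.get(group, set())
    return users

print(sorted(users_with_access("\\\\FileServer\\Data")))  # ['asmith', 'bjones']
```

With per-user permissions, answering the same question means scanning every account individually; with groups, it is a single check of the resource's access control list followed by a glance at each group's membership list.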
In this article, I have shown you how to create security groups in a Windows Server 2003 environment. In the next article in the series, I will continue the discussion by showing you the impact of selecting a different group type.
The various types of security groups that Windows allows you to create. In the previous article, I showed you how to create security groups in Windows Server 2003. When I walked you through the process though, you might have noticed that Windows allows you to create a few different types of groups, as shown in Figure A. As you might have guessed, each of these group types has a specific purpose. In this article, I will explain what each type of group is used for.
Figure A: Windows allows you to create a few different types of groups
If you look at the dialog box shown above, you will notice that the Group Scope area provides you with the option of creating a domain local, global, or universal group. There is also a fourth type of group that is not shown here; it is simply called a local group.
Local Groups
Local groups are groups that are specific to an individual computer. As you know by now, local computers can contain user accounts that are completely separate from those accounts that belong to the domain that the computer is connected to. These are known as local user accounts, and they are only accessible from the computer on which they reside. Furthermore, local user accounts can only exist on workstations and on member servers. Domain controllers do not allow for the existence of local user accounts. With this in mind, it should come as no surprise that local groups are simply groups that are specific to a particular member server or workstation. A local group is often used to manage local user accounts. For example, the local Administrators group allows you to designate which users are administrators over the local machine.
Although a local group can only be used to secure resources residing on the local machine, that doesn't mean that the group's membership must be limited to local users. While a local group can, and usually does, contain local users, it can also contain domain users. Furthermore, local groups can also contain other groups that reside at the domain level. For example, you could make a universal group a member of a local group, and the universal group's members will basically become members of the local group. In fact, a local group can contain local users, domain users, domain local groups, global groups, and universal groups. There are two caveats that you need to be aware of, though. First, as you might have noticed, a local group cannot contain another local group. It would seem that you should be able to drop one group into another, but you can't. Someone at Microsoft once told me that the reason for this is to prevent a situation in which two local groups become members of each other. The other caveat that you need to be aware of is that local groups can only contain domain users and domain level groups if the machine containing the local group is a member of the domain. Otherwise, local groups can only contain local users.
Domain Local Groups
Given what you've just learned about local groups, the idea of a domain local group probably sounds contradictory. The reason why domain local groups exist though, is because domain controllers do not contain a local account database. This means that there are no such things as local users or local groups on a domain controller. Even so, domain controllers have local resources that need to be managed. This is where domain local groups come into play. When you install Windows Server 2003 onto a computer, the machine typically begins life as either a standalone server or as a member server. In either case, local user accounts and local groups are created during the installation process. Now suppose that you wanted to convert the machine into a domain controller. When you run DCPROMO, the local groups and local user accounts are converted into domain local groups and domain user accounts. It is important to keep in mind that all of the domain controllers within a domain share a common user account database. This means that if you add a user to a domain local group on one domain controller, the user will be a member of that domain local group on every domain controller in the entire domain. The most important thing to keep in mind about domain local groups is that there are two different types. As I mentioned, when DCPROMO is run, the local groups are converted to domain local groups. Any domain local groups that are created by running DCPROMO are placed into the Builtin folder in the Active Directory Users and Computers console, as shown in Figure B.
Figure B: Domain local groups created by DCPROMO reside in the Builtin container
The reason why this is important to know is that there are some restrictions imposed on these particular domain local groups. These groups cannot be moved or deleted. Likewise, you cannot make these groups members of other domain local groups. These restrictions do not apply to domain local groups that you create, though. Domain local groups that you create typically begin life in the Users container. From there, you are free to move or delete them to your heart's content. I have to be perfectly frank and tell you though that in all the years I have been working with Windows Server, I have yet to find a good argument for creating domain local groups. In fact, domain local groups are basically identical to global groups, except that they are restricted to an individual domain.
Global Groups
Global groups are by far the most commonly used type of group. In most cases, a global group simply acts as a collection of Active Directory user accounts. The interesting thing about global groups is that they can be placed inside of each other. You can make one global group a member of another global group, so long as both global groups exist within the same domain. Keep in mind that global groups can only contain Active Directory objects. You cannot place a local user account or a local group into a global group. You can, however, add a global group to a local group. In fact, doing so is the most common way of granting domain users permissions to resources stored on a local computer. For example, suppose that you wanted to give the managers in your company administrative rights to their workstations (not that I recommend doing that; this is just an example). To do so, you could create a global group called Managers, and place each manager's domain user account into it. You could then add the Managers group to the workstation's local Administrators group, thus making the managers administrators on those workstations.
In this article, I've explained that Windows supports the use of four different types of security groups. So far, I have covered the differences between local, domain local, and global groups. In the next part of this article series, I will continue by discussing universal groups. I will then go on to discuss the concept of group nesting.
Universal Groups & Group Nesting
This article continues the discussion of universal groups and the concept of group nesting. In the previous article in this series, I introduced you to the concept of using groups to manage network access control, rather than granting permissions directly to users. I then went on to explain that Windows Server 2003 supports a few different types of groups, and that each of these types of groups has its own strengths and limitations. In that article, I talked a lot about local groups, domain local groups, and global groups. You could easily manage your entire network using only these types of groups. Even so, there is one more type of group that Windows Server 2003 supports: universal groups.

For those of you who found local groups, domain local groups, and global groups to be confusing or overly restrictive, universal groups will initially seem like an answer to prayers. Universal groups are essentially groups that are not subject to the restrictions that apply to the other types of groups. For example, in the previous article, I mentioned that you can't place a local group or a domain local group into another local group. You can, however, put a universal group into a local group. The rules that apply to other types of groups simply don't apply to universal groups. Of course, this raises the question of why you would ever use any of the other types of groups if they have limitations that universal groups can overcome.

One of the reasons why there are so many different types of groups is that Windows Server is an evolutionary product. Universal groups were introduced in Windows 2000 Server, along with Active Directory. Previous versions of Windows Server (namely Windows NT Server) supported the use of groups, but universal groups had not been invented yet when these versions were current. When Microsoft released Windows 2000 Server, they chose to continue to support the other types of groups as a way of maintaining backward compatibility with Windows NT.
Likewise, Windows Server 2003 also supports the use of legacy group types for backward compatibility reasons. Because universal groups didn't exist in the days of Windows NT Server, Windows NT doesn't support them. This presents a bit of a problem if you happen to have any Windows NT servers in your forest. Windows 2000 Server was such a dramatic change from Windows NT Server that a number of the new features would only work on networks with no Windows NT Server domain controllers. To get around this problem, Microsoft created the concept of native mode. I will talk a lot more about native mode in Part 17, but the basic idea is that when Windows 2000 Server is initially installed, it is operating in something called mixed mode. Mixed mode is fully backward compatible with Windows NT, but many of Windows 2000's features can't be used until you get rid of the Windows NT domain controllers and switch to native mode. Although the terminology is a bit different, the same basic concept also applies to Windows Server 2003.

Universal groups are one of those features that are only available if your domain controllers are operating in Windows 2000 Server native mode or higher. That's one reason why you can't use universal groups in every situation. Even if all of your servers are running Windows Server 2003, and your forest is fully native, it is still a bad idea in most cases to use universal groups exclusively.
Earlier in this series, I introduced you to the concept of global catalog servers. As you may recall, global catalog servers are domain controllers that have been assigned the task of keeping track of every object in the forest. Typically, each Active Directory site contains its own copy of the global catalog, which means that any time a global catalog is updated, the updated information must be replicated to the other global catalog servers. When you create a universal group, both the group name and the group’s membership list are written to the global catalog. This means that as you create more and more universal groups, the global catalog becomes more bloated. As the global catalog becomes larger, the amount of time that it takes to replicate the global catalog from one global catalog server to another also increases. If left unchecked, this can lead to network performance problems. In case you are wondering, other types of groups don’t place nearly as much of a load on the global catalog. For example, global groups are listed in the global catalog, but their membership list isn’t. Therefore, Microsoft’s basic rule of thumb is that it is OK to create universal groups, but you should use them sparingly.
One last group related concept that I want to discuss is that of nesting. The easiest way that I can think of to explain nesting is to compare it to Russian matryoshka dolls, like the ones shown in Figure A. These types of dolls are designed so that they can all be placed inside of one another. The smallest goes into the second smallest, the second smallest goes into the third smallest, and so on. This idea of placing an object inside of a similar object is called nesting.
Figure A: Russian matryoshka dolls illustrate the concept of nesting.
There are many different reasons for nesting groups. One of the most common reasons involves matching up resources with departments. For example, a company might start by creating a group for each department. They might create a Finance group, a Marketing group, an IT group, and so on. Next, they would place users into the group that corresponds to the department that each user works in. The next step in the process would be to create groups that correspond to the various resources that you need to grant access to. For example, if you knew that everyone in the finance department was going to need access to an accounting application, you could create a group that grants access to the application, and then place the Finance group into that group.

You don't have to nest groups, but doing so sometimes allows you to keep things a little bit better organized, while saving a little bit of work in the process. For instance, in the previous example, you didn't have to manually place individual user accounts into the group for the accounting application. Instead, you just reused a group that already existed. Keep in mind that not every type of group can be nested into every other type of group. The table below shows which types of groups can be nested into other groups.
Group Type   | Can Be Nested into Local | Can Be Nested into Domain Local | Can Be Nested into Global  | Can Be Nested into Universal
Local        | No                       | No                              | No                         | No
Domain Local | Yes                      | Yes, if in the same domain      | No                         | No
Global       | Yes                      | Yes                             | Yes, if in the same domain | Yes
Universal    | Yes                      | Yes                             | No                         | Yes
Table 1
If Windows is operating in Windows 2000 mixed mode, the following limitations apply:
• Universal groups cannot be created
• Domain local groups can only contain global groups
• Global groups cannot contain other groups
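For readers who like to see rules expressed as data, here is a small Python sketch that encodes Table 1 as a lookup table, with a helper that answers whether one group type may be nested inside another. This is purely illustrative; Windows enforces these rules itself, and the type names used here are my own shorthand.

```python
# Table 1 as a lookup: (member group type, container group type) -> rule.
# "same_domain" marks the combinations that are only legal within a
# single domain.

CAN_NEST = {
    ("local",        "local"):        False,
    ("local",        "domain_local"): False,
    ("local",        "global"):       False,
    ("local",        "universal"):    False,
    ("domain_local", "local"):        True,
    ("domain_local", "domain_local"): "same_domain",
    ("domain_local", "global"):       False,
    ("domain_local", "universal"):    False,
    ("global",       "local"):        True,
    ("global",       "domain_local"): True,
    ("global",       "global"):       "same_domain",
    ("global",       "universal"):    True,
    ("universal",    "local"):        True,
    ("universal",    "domain_local"): True,
    ("universal",    "global"):       False,
    ("universal",    "universal"):    True,
}

def can_nest(member, container, same_domain=True):
    """Return True if a 'member' group may be placed inside 'container'."""
    rule = CAN_NEST[(member, container)]
    if rule == "same_domain":
        return same_domain
    return rule

print(can_nest("global", "global", same_domain=False))  # False
print(can_nest("universal", "local"))                   # True
```

Sanity-checking a proposed nesting this way mirrors what the Active Directory console does when it filters the list of groups you are allowed to add as members.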
In this article, I have explained that it is sometimes advantageous to nest one group within another group. I then went on to discuss under which situations it is possible to do this. In the next part of this article series, I am going to take a step back and talk about the role that the Windows operating system plays in networking.
The Windows Operating System's Role in Networking
The role that the Windows Operating System plays in networking. Last month I received an e-mail message from a reader who wanted to know why most of the articles in this series have focused on Windows. It was not so much that the person who sent me the message hated Microsoft, or preferred Linux, or anything like that, but wondered why Windows was necessary. As he correctly pointed out, networking has been around since long before Windows. To make a long story short, I thought that the message made a good point, so I wanted to take the opportunity to talk about the role that Windows plays in networking.
Before I Begin
Before I get started, there are a couple of things that I need to say up front. First, I am going to be spending some time talking about the early days of Windows. There are a lot of rumors alleging that Microsoft "borrowed" parts of the Windows operating system from companies like IBM and Apple. Personally, I do not know if these rumors are true or not, and to be perfectly frank, I do not really care. I just wanted to acknowledge the point up front in an effort to reduce the number of e-mail messages that I receive in response to this article.

The other thing that I want to clarify up front is that today, every operating system implements networking in roughly the same way. Although one operating system might be more efficient than another, the end result is basically the same. After all, it is no coincidence that Windows, Macintosh, Linux, and UNIX can all communicate across the same Internet, using the same protocols. By writing about Windows, I am not trying to start an operating system war, as I seem to have inadvertently done so many times in the past. I chose to write about Windows because it is the most commonly used operating system, because articles about Windows should therefore benefit the largest number of people, and because this is primarily a Windows focused website.
What Windows Did for the World
Now that I have hopefully appeased most of the haters, let us get down to business. The reason why Windows became such a dominant operating system is that it solved two major problems that plagued the IT industry. The first of these problems is that prior to the creation of Windows, PCs were relatively difficult to use (at least for the lay person, anyway). Prior to Windows 3.x, most PCs ran a Microsoft operating system known as MS-DOS. DOS was an acronym that stood for Disk Operating System. The DOS operating system actually worked pretty well, but it did have some serious shortcomings. For starters, the operating system was text based. This meant that if you wanted to launch an application, you could not just point and click on an icon; you had to know the command or commands needed to launch that application. If you wanted to know how much free disk space you had, you could not just right click on a disk icon; you had to use the CHKDSK or DIR command.
The average person was intimidated by DOS. After all, using DOS even for the basics required learning quite a few commands. Many of those commands could do significant damage to your data if you accidentally used them incorrectly, which added to the problem. There is no denying that PC use was already becoming widespread before Microsoft introduced the graphical operating system, but Windows helped to make PCs much easier to use.

The second thing that Windows accomplished was far more important. Windows provided a level of abstraction that allowed device drivers to be separated from applications. In the days of DOS, it was an application developer's responsibility to include device drivers as a part of an application. For example, when I was in high school, the best word processor on the market was a now defunct product known as PFS Write. One of the things that made PFS Write such a good product was that it supported numerous printers. Even so, I recall purchasing a copy and installing it onto my computer, only to find out that it did not include a driver for my printer. As a result, I had to buy a new printer, just to be able to use a word processor.

Keep in mind that my previous printer was not junk. The problem was that most applications at the time shipped on floppy disks, which had an extremely limited capacity. As a result, application developers would typically only include drivers for the most commonly available hardware. At the time, it was not at all uncommon to find that some applications (especially video games) did not support particular video cards, sound cards, etc.

The way that drivers were tied to applications was bad for both application developers and consumers. It was bad for application developers, because they had to spend time writing a zillion device drivers, which increased development costs and increased the amount of time that it took to get their product to market.
Because an application could only support a limited set of hardware, the developer inevitably alienated some would-be customers by not supporting their hardware. Having device drivers tied to applications was bad for consumers as well. Typically, older hardware was not supported, often forcing consumers to purchase new hardware along with their new application. At the same time though, cutting edge hardware was not usually supported either. Application developers needed to create drivers that would work for the largest number of people possible, so it was rare for an application to contain drivers for the latest hardware. Often the new hardware was backward compatible with device drivers for older hardware, but it might take years for the cutting edge hardware's full potential to be widely utilized by applications.

When Microsoft created Windows, they created an environment in which any application can interact with any hardware. Sure, applications still have minimum hardware requirements, but hardware brands and models do not really matter anymore. For example, if I wanted to print this document, it would not really matter what kind of printer I have, as long as I have a printer driver installed.

Windows is built in layers. Every Windows application generates print jobs in exactly the same way, regardless of what the application is, or what type of printer the job is being sent to. The Windows operating system then uses the specified print driver to translate the job into something that the printer can understand. The actual process is a little bit more complicated than this, but I wanted to convey the basic idea rather than going into a lot of boring architectural details. The point is that abstracting applications from device drivers helps everyone. Application developers no longer suffer the burden of writing device drivers, and consumers are now free to use any hardware they want (so long as it meets minimum standards) without having to worry about whether or not it will work with a particular application.
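The printing example lends itself to a small illustration. The sketch below, written in Python rather than anything Windows actually uses, shows the general shape of the idea: the application hands its job to a common driver interface, and each vendor's driver does the device-specific translation. All of the class and method names here are invented for illustration.

```python
# Toy model of driver abstraction: the "application" calls one standard
# interface, and the installed driver handles the device-specific part.

class PrinterDriver:
    """Interface that every vendor's driver implements."""
    def render(self, document: str) -> str:
        raise NotImplementedError

class LaserDriver(PrinterDriver):
    def render(self, document: str) -> str:
        return f"[PCL] {document}"        # pretend laser-printer format

class InkjetDriver(PrinterDriver):
    def render(self, document: str) -> str:
        return f"[ESC/P] {document}"      # pretend inkjet format

def print_document(document: str, driver: PrinterDriver) -> str:
    # The application never knows which printer is attached;
    # it just hands the job to whatever driver is installed.
    return driver.render(document)

print(print_document("Hello", LaserDriver()))   # [PCL] Hello
print(print_document("Hello", InkjetDriver()))  # [ESC/P] Hello
```

The application code (`print_document`) is identical no matter which driver is installed, which is exactly the separation that freed DOS-era developers from shipping their own drivers.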
As you can see, Microsoft was able to design Windows in a way that allowed applications to be abstracted from device drivers. In the next part of this article series, I will continue the discussion by showing you how this architecture assists with networking.
The OSI Model
How the OSI model is used to help applications communicate across a network. In last month's article, I talked about the way that Windows (and other network operating systems) use a process called abstraction to allow applications to be developed without the vendor having to worry about creating drivers for specific hardware components. Although this concept is widely used throughout the Windows operating system, it is especially important when it comes to networking.

To see why this is the case, think about what I talked about in the previous article in regard to hardware abstraction. Suppose that an application needs to be able to communicate across the network. The application developer does not build network drivers into the application; they merely write the application in a way that allows it to make certain calls to the Windows operating system. The manufacturer of the machine's network adapter provides a driver that also links to Windows, and Windows performs the necessary matchups that allow the application to communicate with the network adapter.

Of course, that is just the quick and dirty version. Things are actually quite a bit more complex than that. After all, the network adapter is just a device that is designed to send and receive packets of data. The card itself knows nothing of Windows, the application, or even of the protocols that are being used. The example that I provided a moment ago implies that there are three layers at work: the application, the operating system, and the physical hardware. While these layers do exist (but not necessarily by those names), they can be subdivided into several more layers. Before I explain what these layers are and what they do, I want to point out that the concepts that I am about to teach you are not abstract.
In fact, if you open the Local Area Connection properties sheet, shown in Figure A, you can see that a network connection is made up of several different components, such as the network client, the network adapter driver, and the protocol. Each of these components corresponds to one or more individual layers.
Figure A: The Local Area Connection properties sheet offers a glimpse at the way that the various network layers are implemented in Windows
The network model that Windows, and most other network operating systems, use is called the OSI Model. The term OSI Model is short for Open Systems Interconnection Basic Reference Model. The OSI Model consists of seven different layers. Each layer of the model is designed so that it can perform a specific task, and facilitate communications between the layer above it and the layer below it. You can see what the OSI Model looks like in Figure B.
Figure B: The OSI Model
The Application Layer
The top layer of the OSI model is the Application layer. The first thing that you need to understand about the application layer is that it does not refer to the actual applications that users run. Instead, it provides the framework that the actual applications run on top of. To understand what the application layer does, suppose for a moment that a user wanted to use Internet Explorer to open an FTP session and transfer a file. In this particular case, the application layer would define the file transfer protocol. This protocol is not directly accessible to the end user. The end user must still use an application that is designed to interact with the file transfer protocol. In this case, Internet Explorer would be that application.
The Presentation Layer
The presentation layer does some rather complex things, but everything that the presentation layer does can be summed up in one sentence: the presentation layer takes the data that is provided by the application layer, and converts it into a standard format that the other layers can understand. Likewise, this layer converts the inbound data that is received from the session layer into something that the application layer can understand. The reason why this layer is necessary is that applications handle data differently from one another. In order for network communications to function properly, the data needs to be structured in a standard way.
The Session Layer
Once the data has been put into the correct format, the sending host must establish a session with the receiving host. This is where the session layer comes into play. It is responsible for establishing, maintaining, and eventually terminating the session with the remote host. The interesting thing about the session layer is that it is more closely related to the application layer than it is to the physical layer. It is easy to think of establishing a network session as a hardware function, but in actuality, sessions are usually established between applications. If a user is running multiple applications, several of those applications may have established sessions with remote resources at any given time.
The Transport Layer
The Transport layer is responsible for maintaining flow control. As you are no doubt aware, the Windows operating system allows users to run multiple applications simultaneously. It is therefore possible that multiple applications, and the operating system itself, may need to communicate over the network simultaneously. The Transport Layer takes the data from each application, and integrates it all into a single stream. This layer is also responsible for providing error checking and performing data recovery when necessary. In essence, the Transport Layer is responsible for ensuring that all of the data makes it from the sending host to the receiving host.
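As a rough illustration of the multiplexing idea (not how any real transport protocol is implemented), the Python sketch below tags each application's data, merges everything into one stream, and sorts it back out on the receiving side, much as TCP and UDP use port numbers to keep different applications' data separate. The application names and chunks are invented.

```python
# Toy model of transport-layer multiplexing: several applications share
# one stream, and tags let the receiver separate them again.

from collections import defaultdict

def multiplex(app_data):
    """Merge chunks from several applications into one tagged stream."""
    stream = []
    for app_id, chunks in app_data.items():
        for chunk in chunks:
            stream.append((app_id, chunk))  # tag each chunk with its source
    return stream

def demultiplex(stream):
    """Split the combined stream back into per-application data."""
    per_app = defaultdict(list)
    for app_id, chunk in stream:
        per_app[app_id].append(chunk)
    return dict(per_app)

apps = {"browser": ["GET /", "GET /img"], "email": ["HELO"]}
assert demultiplex(multiplex(apps)) == apps  # round trip preserves the data
```

The round-trip assertion captures the transport layer's core promise: however the streams are interleaved on the wire, every application gets exactly its own data back.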
The Network Layer
The Network Layer is responsible for determining how the data will reach the recipient. This layer handles things like addressing, routing, and logical protocols. Since this series is geared toward beginners, I do not want to get too technical, but I will tell you that the Network Layer creates logical paths, known as virtual circuits, between the source and destination hosts. This circuit provides the individual packets with a way to reach their destination. The Network Layer is also responsible for its own error handling, and for packet sequencing and congestion control. Packet sequencing is necessary because each protocol limits the maximum size of a packet. The amount of data that must be transmitted often exceeds the maximum packet size. Therefore, the data is fragmented into multiple packets. When this happens, the Network Layer assigns each packet a sequence number. When the data is received by the remote host, that device’s Network layer examines the sequence numbers of the inbound packets, and uses the sequence number to reassemble the data and to figure out if any packets are missing. If you are having trouble understanding this concept, then imagine that you need to mail a large document to a friend, but do not have a big enough envelope. You could put a few pages into several small envelopes, and then label the envelopes so that your friend knows what order the pages go in. This is exactly the same thing that the Network Layer does.
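The envelope analogy above can be sketched in a few lines of Python. This is only a toy model; real network-layer fragmentation involves headers, checksums, and far larger packet sizes, and the 4-byte limit here is arbitrary.

```python
# Toy model of fragmentation and sequence numbers: data larger than the
# maximum packet size is split into numbered fragments, and the receiver
# uses the sequence numbers to reassemble them, even out of order.

MAX_PACKET_SIZE = 4  # bytes of payload per packet, for illustration only

def fragment(data: bytes):
    """Split data into (sequence_number, chunk) packets."""
    return [
        (seq, data[i:i + MAX_PACKET_SIZE])
        for seq, i in enumerate(range(0, len(data), MAX_PACKET_SIZE))
    ]

def reassemble(packets):
    """Rebuild the original data by sorting on the sequence numbers."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = fragment(b"hello, network layer")
packets.reverse()  # simulate packets arriving out of order
assert reassemble(packets) == b"hello, network layer"
```

Sorting on the sequence numbers is the code equivalent of your friend putting the numbered envelopes back in order; a gap in the sequence would likewise reveal a missing packet.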
The Data Link Layer
The Data Link layer can be subdivided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC layer basically establishes the computer's identity on the network, via its MAC address. A MAC address is the address that is assigned to a network adapter at the hardware level. This is the address that is ultimately used when sending and receiving packets. The LLC layer controls frame synchronization and provides a degree of error checking.
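If you are curious what a MAC address actually looks like, Python's standard library can show you. The `uuid.getnode()` function returns the local machine's 48-bit MAC address as an integer (or a random stand-in if no hardware address can be read), which the sketch below formats in the familiar colon-separated notation.

```python
# Every network adapter has a 48-bit MAC address assigned at the
# hardware level. uuid.getnode() returns it as a 48-bit integer.
import uuid

node = uuid.getnode()
# Format the six bytes, most significant first, as hex pairs.
mac = ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
print(mac)  # something along the lines of "00:1a:2b:3c:4d:5e"
```

The first three bytes of a real MAC address identify the adapter's manufacturer, and the last three are a serial number the manufacturer assigns, which is what makes each address unique.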
The Physical Layer
The physical layer of the OSI model refers to the actual hardware specifications. The Physical Layer defines characteristics such as timing and voltage. The physical layer defines the hardware specifications used by network adapters and by the network cables (assuming that the connection is not wireless). To put it simply, the physical layer defines what it means to transmit and to receive data.
It Works Both Ways
So far I have discussed the OSI Model in terms of an application that needs to transmit data across the network. The OSI Model is also used when a machine receives data. When data is received, that data comes in through the Physical Layer. The remaining layers work to strip away the encapsulation, and put the data into a format that the application layer can use.
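This two-way process can be sketched as a toy model in Python. The layer names and string "headers" below are a deliberate simplification (real headers are binary structures, and real stacks involve more bookkeeping), but the sketch shows the key idea: each layer on the sender wraps the data in its own header, and the matching layer on the receiver strips the headers off in reverse order.

```python
# Toy model of encapsulation: headers are added walking down the
# stack on the sender, and removed walking back up on the receiver.
LAYERS = ["transport", "network", "datalink"]

def send(data: str) -> str:
    for layer in LAYERS:              # walk down the stack
        data = f"[{layer}]{data}"
    return data

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):    # walk back up the stack
        header = f"[{layer}]"
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

frame = send("hello")
assert frame == "[datalink][network][transport]hello"
assert receive(frame) == "hello"
```

Notice that the outermost header belongs to the lowest layer, which is why the receiving machine's Physical and Data Link layers are the first to process an inbound frame.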
In this article, I have explained how Windows uses the OSI model to implement networking. It is important to understand that the OSI model is only a guide as to how networking should be implemented. In the real world, protocol stacks sometimes combine multiple layers into a single component. I will show you how protocol stacks fit into the model in the next article in the series.
This article continues the Networking for Beginners series by explaining how to make resources available on a network.

In the previous article, I talked about the OSI model and how it serves as a model for implementing abstraction between the hardware and the software. In this article, I had originally intended to talk about how protocol stacks are related to the OSI model. After giving it some thought, I decided that the topic was relatively confusing, and that it didn't have a whole lot of real world value for new network administrators. That being the case, I want to talk about making resources available on a network instead. If you want to read about protocol stacks and how they relate to the OSI model, there are several good articles available on the Internet. Here are links to a few:

• TCP/IP-Ethernet Tower of Babel Breeds Confusion
• The TCP/IP Model
• How OS Works

With that said, I want to turn my attention to making resources available over a network. If you really stop and think about it, the whole reason for building a network in the first place is so that resources can be shared among multiple computers. Resources come in a lot of different forms. Often, sharing resources means sharing files or folders, but not always. At the time that I first got started in networking, printers were very expensive, and it was not uncommon to see companies build networks so that a single printer could be shared by multiple employees. This saved the company from having to purchase and maintain a printer for every single employee.

Even small, home networks are all about sharing resources. The most common type of home network involves a wireless access point that also serves as an Internet router. On these types of networks, the Internet connection is the resource that is being shared. There is simply no reason to have a separate Internet connection for every computer, when the Internet connection can easily be shared.
As you can see, there are many different types of resources that can be shared on a network. The actual process for sharing the resource varies depending on the type of resource that is being shared and on the network operating systems that are being used. That being the case, I will begin my discussion by talking about how you can share files and folders on a network.
Before I Begin
Before I get started, I want to quickly mention that the information that I'm about to give you is based on Windows Server 2003. Windows Server 2003, Windows XP, and every previous version of Windows handle file and folder sharing in basically the same way. The actual steps that you use in the sharing process vary slightly from one Windows operating system to another, but the basic underlying concepts are the same. Windows Vista takes a different approach to sharing files than its predecessors do. That being the case, I will talk about file sharing in Windows Vista later on in this series. For right now though, just keep in mind that most of what I'm about to show you doesn't apply to Vista.
Creating A File Share
If you want to share the files that are stored on a server, you'll have to first create a file share. A file share is essentially a designated entry point through which users can access the files. The reason why a file share is necessary is that it would be a huge security risk to share the full contents of the server.

Creating a file share is simple. To do so, begin the process by creating a folder in the location where you want the shared data to reside. For example, many file servers have a designated storage array or a data drive whose sole purpose is to store data (as opposed to program files and operating system components).

In most cases, you'll probably have quite a few folders' worth of data that you need to share. It is also common for each of these folders to have its own unique security requirements. You can create a separate share for each folder, but doing so is usually considered to be a bad idea unless each share resides on a different volume. There are exceptions to every rule, but in most cases you will only want to create one file share per volume. You can place all of your folders within this single file share, and then assign the necessary permissions on a per folder basis. As this discussion progresses, you'll begin to understand why creating multiple file shares is such a bad idea.

If you've already got a bunch of folders in place, don't worry about it. You can easily create a new folder and then move your existing folders into the new folder. Another option is to create a file share at the volume level, in which case you would not have to move the existing folders. For the purposes of this article, I'm going to assume that you've created a folder that will contain subfolders beneath it, and that you will be sharing this top level folder.

Once you have created your folder, right-click on it and choose the Sharing and Security command from the resulting shortcut menu.
When you do, you will see the folder's properties sheet, as shown in Figure A.
Figure A: The Sharing tab gives you the option of sharing the folder
As you can see in the figure, the Sharing tab allows you to control whether or not the folder is shared. When you select the Share this Folder option, you will be prompted to enter a share name. The name that you choose is very important. Windows isn't nearly as picky as it used to be about share names, but even so, I would recommend that you keep the share name under 16 characters and avoid using spaces or symbols for backward compatibility purposes. I should also mention that if you make the last character of the share name a dollar sign, then the share that you are creating becomes invisible. This is known as a hidden share. Windows offers several different hidden shares by default, and I will talk more about hidden shares later in the series.

The Comment field allows you to enter a comment about what the share is used for. This is purely for administrative purposes. Comments are optional, but documenting shares is never a bad idea.

Now take a look at the User Limit section. You will notice in the figure that the user limit is set by default to Maximum Allowed. Any time that you deploy a Windows server, you must have the necessary client access licenses in place. You have the option of either purchasing licenses for each individual client, or licensing the server to support a specific number of connections. Assuming that you have multiple servers, it is usually less expensive to license clients rather than individual servers. At any rate, when the user limit is set to Maximum Allowed, it means that an unlimited number of clients can connect to the share until the number of connections meets the number of licenses that you have purchased. If you're using a per client licensing model, then access to the share is technically unlimited, but it's still up to you to make sure that you have a license for every client.

Your other option is to allow a specific number of users to connect to the share. This option has a lot less to do with licensing than it does with performance. Lower end hardware may not be able to support a large number of client connections. Therefore, Microsoft gives you the option of limiting the number of simultaneous connections to the share, so as not to overwhelm your hardware.
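The naming guidelines above can be expressed as a quick Python sanity check. The function name and the returned fields are my own invention for illustration; only the rules themselves (keep the name under 16 characters, stick to letters and numbers, and a trailing dollar sign hides the share) come from the discussion above.

```python
# Encode the share-naming guidelines as a simple check:
# short, alphanumeric names for backward compatibility, and a
# trailing dollar sign marks the share as hidden.
def check_share_name(name: str) -> dict:
    hidden = name.endswith("$")
    base = name[:-1] if hidden else name
    return {
        "hidden": hidden,
        "short_enough": len(name) < 16,
        "safe_characters": base.isalnum(),  # no spaces or symbols
    }

print(check_share_name("Data"))    # a visible, compatible share name
print(check_share_name("Admin$"))  # the trailing $ makes the share hidden
```

A name like "My Shared Files" would fail the character check because of the spaces; older clients in particular can have trouble with names like that.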
In this article, I have begun talking about the ways in which resources are shared on a network. In the next article in this series, I will show you how to set permissions on the share that you're creating.
Share Level Permissions
This article continues the Networking for Beginners series by talking about the difference between file level and share level permissions. In the previous part of this article series, I began showing you how to create a network share that you can use to share resources located on a server. So far, we have created a share, but we have yet to give anyone access to it. In this article, I will continue by discussing the differences between file level and share level permissions.
Securing a Share
Although the entire point of creating a share is to allow users on your network to access the resources contained within the share, you still have to be careful about what level of access the users are given to those resources. For example, suppose that your human resources department has created a spreadsheet that lists the salary information for every employee in your company. Now suppose that everybody in human resources needs to be able to access the spreadsheet, and to make updates to it. Since the finance department is responsible for printing paychecks, they need to have access to the spreadsheet too, but you probably do not want them to be making any changes to it. Given the sensitive nature of the information in the spreadsheet, you probably would not want anyone else in the company to have access to it. With that in mind, let us take a look at how this type of security could be implemented.

The first thing that you need to understand about the share that you have created is that there are two different types of security that you can apply to it. You have a choice of using share level security, file level security, or both. Share level security applies directly to the share point that you have created. When users connect to the share point to access the files, the share level permissions that you have set are applied. In contrast, file level permissions are applied directly to files and folders rather than to the share.

The reason why there are two different types of permissions has a little bit to do with the evolution of Windows. The Windows operating system supports two different hard drive formats: FAT and NTFS. FAT is a legacy file system that has been around since the early 1980s. Because of its age, FAT is a no-frills file system, and does not support file level security. NTFS, on the other hand, was designed with security in mind. You can apply file level security directly to files and folders residing on an NTFS volume.
Since the FAT file system does not support file level security, Microsoft allows you to use share level security as a way of getting around the file system's shortcomings. Today the NTFS file system is used almost exclusively, and the FAT file system is all but extinct. You can still use share level permissions if you want to, but it is usually considered to be better practice to use file level permissions instead.

So what makes file level permissions so much better than share level permissions? For starters, share level permissions only apply if a user is accessing the files through the share. This can be a problem because Windows allows you to create multiple share points on a single volume. If the share points are created carelessly, they can overlap with each other. This can lead to users having unexpected levels of permissions to files and folders.
Another reason why file level permissions are preferable to share level permissions is that share level permissions do not provide any protection unless the user is accessing the files through the share point. If a user were to log on to a server console locally, then they could browse the local hard drive without having to pass through the share point. If share permissions were the only types of permissions being used, then the user could theoretically have full access to the files within the share. File level permissions also protect data if the server is booted to an alternate operating system, or if the hard drive is removed from the server and placed into a different machine. Share level permissions simply do not provide this kind of protection.

Since file level permissions are far superior to share level permissions, you may be wondering why you would want to create a share at all. You need to create shares because shares act as an entry point for accessing the file system over the network. If you need to give users access to files on a file server, there really is not any getting around creating shares. However, you can secure the share using file level permissions rather than depending on share level permissions.

As you may recall, we created a folder named Data in the previous article, and then shared that folder. To set the permissions on this folder, right-click on it, and choose the Properties command from the resulting shortcut menu. When you do, you will see the folder's properties sheet. Now take a look at the properties sheet's Sharing tab, as shown in Figure A. As you can see in the figure, this tab contains a Permissions button. You can click this button to set share level permissions for the share.
Figure A: The Permissions button is used to set share level permissions for the share
Now take a look at the Security tab. This tab is used to set file level permissions, starting at the folder to which the share point has been bound. The first thing that you need to know about file level permissions is that under normal circumstances they make use of a concept known as inheritance. Inheritance simply means that when you set a permission, that permission applies not only to the folder, but to everything in it. This includes any subfolders that may exist and any files or folders within those subfolders.

Another thing that you need to know about file level permissions is that because of inheritance, some permissions will apply automatically. If you take a look at Figure B, you can see the Security tab for the properties sheet that we have been looking at. As you can see in the figure, several different sets of permissions have already been applied. I do not expect you to understand what all of these settings mean just yet, but I will be talking about them in detail later on in this series. For now, just be aware of the fact that some permissions are applied automatically.
Figure B: The Security tab can be used to set file level security for the folder to which the share point is bound
If you look at the Security tab, you will notice that the top half of the tab contains a list of users and groups. The lower half of the tab contains a list of permissions. If you want to apply a set of permissions to a user or group, you simply select the user or group that you want to work with from the top half of the tab, and then set the permissions on the lower half of the tab. Of course, before you can set the permissions you need to understand what permissions actually mean. I will discuss the permissions in detail in the next part of this series.
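The inheritance behavior described above can be sketched as a toy model in Python. The folder names, group names, and permission labels below are made up for illustration, and real NTFS permissions are considerably richer, but the walk-up-the-tree logic captures the essential idea: a permission set on a folder flows down to everything beneath it unless a more specific entry overrides it.

```python
# Toy model of permission inheritance: a permission set on a folder
# applies to every file and subfolder beneath it, unless an explicit
# entry further down the tree overrides it.
explicit_permissions = {
    "Data": {"HR": "modify", "Finance": "read"},
    "Data/Payroll": {"Finance": "none"},  # override deeper in the tree
}

def effective_permission(path: str, group: str) -> str:
    # Walk from the path up toward the root, using the nearest
    # explicit entry found along the way.
    parts = path.split("/")
    for depth in range(len(parts), 0, -1):
        ancestor = "/".join(parts[:depth])
        if group in explicit_permissions.get(ancestor, {}):
            return explicit_permissions[ancestor][group]
    return "none"  # no entry anywhere up the tree

assert effective_permission("Data/Salaries.xls", "HR") == "modify"      # inherited
assert effective_permission("Data/Payroll/Q1.xls", "Finance") == "none" # overridden
```

This is why, in Figure B, several permission entries already appear on a brand new folder: they were inherited from the folder's parent rather than set explicitly.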
In this article, I have explained that you can secure a share using either file level or share level permissions, or both. In the next article in this series, I will explain how the permissions themselves work, and how to apply permissions to files and folders.