Definition: network management refers to the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems.
Operation: keeping services up and running, and monitoring the network to spot problems.
Administration: tracking resources, that is, keeping the network under control.
Maintenance: repairs, upgrades, device configuration, and preventive and corrective measures aimed at keeping the network at its best.
Provisioning: configuring resources to deliver a desired service, e.g. setting up a Voice over Internet Protocol (VoIP) service.
NETWORK MANAGEMENT: This is the broad subject of managing computer networks. A wide variety of software and hardware products exist to help network administrators manage a network. Network management therefore covers a wide area, including:
Security: ensuring that the network is protected from unauthorized users.
Performance: eliminating bottlenecks in the network.
Reliability: making sure the network is available to users, and responding to hardware and software malfunctions.
Why Is Network Management Important? Today, networks have become a critical piece of the business process: downtime costs money, while efficient networks can mean additional revenue. Network management is a proactive tool that offers many advantages:
- Provides visibility into current network operations
- Reduces the network's cost of ownership
- Leverages IT personnel
Network management has key functions such as:
1. Fault management
2. Configuration management
3. Accounting management
4. Performance management
5. Security management
Others include bandwidth management and network acquisition management. All of these functions require planning, controlling, allocating, deploying, coordinating, and monitoring of resources.
Fault management: includes detecting, isolating, and correcting network malfunctions. This is done with the help of information such as error logs, error-detection notifications, and diagnostic tests. For example, the Simple Network Management Protocol (SNMP) sends error notifications and debug notices to operators. This information helps network administrators monitor events; the software can sometimes take corrective action automatically or prompt for human intervention. Fault management can be classified as:
Passive fault management: relies on intelligent devices to report their own faults.
Active fault management: probes devices (e.g. with ping) and raises alarms even when a device is not responding. Ping uses the Internet Control Message Protocol (ICMP) over IP to test whether a device is reachable.
Configuration management: focuses on establishing and maintaining consistency, including security features and assurance after hardware, software, firmware, or documentation changes. Software configuration management (SCM) includes:
Configuration identification: identifying configuration items for end-user purposes.
Configuration control: the set of processes required to make changes.
Status accounting: records and reports.
Audits: functional and performance attributes.
Hardware configuration management deals with preventive and predictive maintenance.
Accounting management: deals with control of, and reporting on, the financial viability of the network, e.g. billing for network services.
Performance management: involves meeting goals effectively and efficiently; it reconciles personal goals with the goals of the employing organization.
It aims at attaining benefits such as financial gains (sales growth, reduced costs, fewer project overruns, reduced goal-accomplishment time), i.e. aligning the organization to its goals. Motivation: teamwork, transparency, confidence, and good communication are improved. Management control: flexibility of the working environment, compliance with standards, documentation, etc.
Security management: asset management; physical security, including that of human beings; documentation; and security policies, procedures, and guidelines on risk assessment (threats and vulnerabilities) and risk analysis.
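The active fault-management check described earlier, pinging a device and raising an alarm when it does not respond, can be sketched as follows. This is an illustrative sketch only: it shells out to the system ping utility (whose flag names differ between Windows and Unix-like systems), and the device list is hypothetical.

```python
import platform
import subprocess

def is_reachable(host: str, timeout_s: int = 2) -> bool:
    """Active fault check: ask the system ping utility (which sends ICMP
    echo requests) whether a host answers. A sketch only; real network
    management platforms use SNMP polling and traps alongside ping."""
    windows = platform.system() == "Windows"
    # Windows ping: -n <count> -w <timeout in ms>; Unix ping: -c <count> -W <timeout in s>
    cmd = (["ping", "-n", "1", "-w", str(timeout_s * 1000), host] if windows
           else ["ping", "-c", "1", "-W", str(timeout_s), host])
    try:
        result = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
    except FileNotFoundError:  # ping utility not installed
        return False
    return result.returncode == 0

# Poll a (hypothetical) list of managed devices and report faults.
for device in ["127.0.0.1"]:
    status = "up" if is_reachable(device) else "DOWN - raise alarm"
    print(f"{device}: {status}")
```

A real monitoring loop would run on a schedule and feed results into the error log and notification machinery discussed above.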
NETWORK DESIGN
The design should aim at addressing the following goals:
Functionality: The network must allow users to meet their job requirements; it must provide user-to-user and user-to-application connectivity with reasonable speed and reliability.
Scalability: The network must be able to grow; that is, the initial design should accommodate growth without any major changes to the overall design.
Adaptability: The network must be designed with an eye toward future technologies and should include no elements that would limit the implementation of new technologies as they become available.
Manageability: The network should be designed to facilitate network monitoring and management, to ensure the ongoing stability of its operation.
For a network to be effective and serve the needs of its users, it should be implemented according to a systematic series of planned steps.
Good network design uses a structured methodology known as PDIOOR. PDIOOR is an acronym that describes the major elements of a network design process, namely:
• Planning (scope, requirements, current network)
• Design (topology: physical layout, peer-to-peer or client/server)
• Implementation (installation, prototyping, configuration, testing, documentation, etc.)
• Operation
• Optimization
• Retiring the network
The specific steps involved in any network design project include:
1. Identifying customer requirements
2. Identifying and analyzing the current network
3. Designing network topologies and services
4. Planning the network implementation
5. Proof of concept (building pilots or prototypes)
6. Documenting the network design
7. Implementing and verifying the network design
8. Monitoring and revising the network design
Network design has TWO major components:
1. The network's logical design, and
2. The physical design.
The physical aspects of your LAN will depend on the underlying physical transport technology: Ethernet or Token-Ring, for example, or possibly ATM, which is supported as a LAN protocol in products such as Windows 2000/XP, Windows Server 2003, and Linux. Depending on which technology you use, there will be one or more LAN topologies from which to choose. Before you can begin to design a physical network, however, you first must determine your needs.
• What services must you provide to your user community?
• What resources will you need? Equipment, cabling, and human skills.
• Network protocols: TCP/IP, plus others such as ARCnet, NetBEUI, and Novell's IPX/SPX; the latter are basically legacy products that are no longer being deployed in newer networks.
• Applications: for example, FTP allows users to send or receive files from remote systems. Can you trust each employee to use such an application without abusing it? New applications need to be tested before installation for security reasons.
• Network speed, and how much network downtime users can tolerate.
• Most important, network security: can Network Attached Storage and Storage Area Networks, methods that can mirror data at geographically distant locations, be used? Redundant topologies can be used to prevent a single point of failure from making the network (and its resources) unavailable.
• Another important factor is cost: you can't forget the budget.
These factors make up the logical design for your network. You first should decide what you need and what it will take to provide for those needs. If you are creating a new network and purchasing new applications, you will probably spend most of your time considering the licensing and other financial costs (training users, network personnel, and so on). If you are upgrading older applications, several other factors come into consideration. Many applications that were coded using COBOL, BASIC, FORTRAN, and other languages that helped jumpstart the computer age may have built-in network functionality based on older proprietary network protocols.
Planning a Logical Network Design
When you plan a logical network design, you can start from one of two places: you can design and install a new network from scratch, or you can upgrade an existing network. Either way, you should gather information about several important factors before you begin the logical design. For example, depending on the services that will be provided to clients, you might need to analyze the traffic patterns that might result from your plan.
Locate potential bottlenecks and, where possible, alleviate them by providing multiple paths to resources or by putting up servers that provide replicas of important
data so that load balancing can be provided. The following are other factors to consider:
• Who are the clients? What are their actual needs? How have you determined these needs: from user complaints or from help-desk statistics? Is this data reliable?
• What kinds of services will you provide on the network? Are they limited in scope? Will any involve configuring a firewall between LANs? Even so, that still doesn't account for configuring a firewall to enable access to the Internet. Will you need to allow an Internet connection for just your internal network's users, or will you need to allow outside vendors access to your network? What will it cost to evaluate what kinds of services user groups need to access from the Internet? Will you need to allow all users to use email, both within the internal network and through the firewall on the Internet? If so, you should be sure to send this traffic through a content filter or firewall, and use virus-protection software to detect and prevent malicious code or virus-infected attachments.
• The same goes for which sites users will be allowed to access using a web browser and other network applications. Will you have users who work from home and require dial-in or VPN access through the Internet?
Who Are Your Clients?
You need to assess work patterns for various departments so that you can place servers, high-bandwidth links, and other such things in the appropriate physical locations on the network. If most of the network traffic you expect will come from the engineering department, you'll need to provide that department with a large data pipe.
What Kinds of Services or Applications Will the Network Offer?
You need to make a list of the kinds of applications currently in use, as well as a list of those requested by users. Each application should have a written risk-assessment document that points out potential security problems, if any.
Typical network applications today include FTP, telnet, and, of course, browsing the Web. There are "secure"
versions of these applications, and there are versions that leave a door wide open into your network. Whatever list of applications you choose to support over the network, keep in mind two things:
• Is the application safe? Most applications today come in secure versions or can be used with a proxy server to help minimize the possibility of abuse. Yet, as we all have seen, even the largest corporations are targets at times, and those companies have staff that should be able to prevent these things from happening.
• Does one application overlap another? Every user has his or her favorite application. Some people like one word processor, whereas others prefer a different one. But when dealing with applications or application suites (such as Microsoft Office), you'll find it better to make a decision and stick with a single product if it can satisfy the needs of your users. Users might not like it, and training might be necessary, but supporting multiple applications that do the same thing wastes money and leads to confusion.
A commonly overlooked method for getting data files out of a network and onto the Internet is simply to send the files as attachments to an email. So if you think you've blocked file transfers by disabling FTP access through the firewall, this example should show that you really do need to do a thorough evaluation of any new application or service you will allow on the network. And don't forget to test new applications to ensure that they perform as expected. The same goes for older applications: will they work on the new or upgraded network?
Lastly, do you monitor network usage? Do you want to permit users to spend their days browsing the Net or checking personal email while at work? Many companies have policies that apply to using the telephone for personal business. Do you overlook this situation when giving users email capability? Are you preventing access to sites that are obviously not business-related?
What Degree of Reliability Do I Require for Each Network Link? Just how much downtime is acceptable? For most users, the answer would be zero. Important components of your network, such as file servers, should have fault tolerance built in from the bottom up. In
large servers, you'll find dual-redundant power supplies (each connected to a separate UPS) and disk arrays set up using RAID techniques to provide for data integrity in the event that a disk goes south. If a link between two offices needs to be up 100% of the time, you should plan for multiple links between the two sites to provide backup capability. In this case, you can also justify the cost of the extra link by using load balancing, so that network response time is improved. And if you are using multiple links to remote sites, it's always a good idea to have more than a single path to each site. Redundant power lines bringing in electricity should likewise follow different paths: if they run side by side and a tree falls, it can bring both of those power lines down.
Note
In addition to dedicated links between sites, Virtual Private Networking (VPN) is becoming a popular method for connecting to remote sites. The advantages of using a VPN are that you can send data across the Internet, which is less expensive than using a dedicated link, and mobile users can also use a VPN connection to connect to your network as they move from place to place. The one caveat with this approach is that a remote site, as well as your main site, should use two ISPs to ensure that if one goes down, you still have a connection to the Internet.
Another technology that can provide an extra layer of redundancy, as well as high-speed access to storage devices, is the Storage Area Network (SAN). A SAN is a network that is separate from the LAN and contains only storage devices and the servers that need to access those devices. Because its network bandwidth is not shared with LAN users, multiple servers can access the same storage. If one server fails, other servers can be configured to provide redundant access to the data.
Also, the same RAID and other redundancy techniques used for storage devices that are directly attached to a server (such as the SCSI hardware and protocols) can be used on a SAN.
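The idea of multiple links with load balancing and failover can be illustrated with a small sketch. The link names and the failure-marking interface here are invented for illustration; real equipment probes link health itself and switches paths automatically.

```python
import itertools

class RedundantPath:
    """Round-robin load balancing across redundant links, with failover.

    A teaching sketch: link names are arbitrary strings, and failures
    are reported by the caller rather than detected by probing."""

    def __init__(self, links):
        self.links = list(links)
        self._cycle = itertools.cycle(self.links)
        self.down = set()

    def mark_down(self, link):
        self.down.add(link)

    def mark_up(self, link):
        self.down.discard(link)

    def next_link(self):
        # Skip failed links; give up after one full cycle through them all.
        for _ in range(len(self.links)):
            link = next(self._cycle)
            if link not in self.down:
                return link
        raise RuntimeError("no healthy link available - total outage")

paths = RedundantPath(["primary-fiber", "backup-microwave"])
print(paths.next_link())   # alternates between healthy links
paths.mark_down("primary-fiber")
print(paths.next_link())   # traffic fails over to the backup link
```

The round-robin choice spreads load across links while both are up, which is exactly the "justify the extra link with load balancing" argument made above.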
The terms RAID (Redundant Array of Independent Disks) and UPS (uninterruptible power supply) are important in today's networks, as are the concepts of load balancing and dual-redundant power supplies in large networks. The old saying "If it ain't broke, don't fix it" doesn't apply to networks. You should always be proactively looking for potential single points of failure and doing something to fix them. By building redundancy into the network design at the start, you'll save yourself a lot of grief in the future.
Choosing a LAN Protocol
Today the de facto protocol of choice has to be TCP/IP. Some services require it: iPrint (printing over the Internet), for example, uses the Internet Printing Protocol, which requires TCP/IP. To simplify administering configuration information for a large number of computers, you might want to use the Dynamic Host Configuration Protocol (DHCP). If you want to provide a central name-resolution service, you might choose the Domain Name Service (DNS).
Planning and Design Components
• Documentation: What kinds of documents will be required to implement the plan? These can take the form of checklists for both simple and complex upgrades, sign-off sheets, informational documents provided to end users, and so on. Don't forget the training documentation that will be needed in a major upgrade. Training documentation should be prepared both for administrators and for the highly skilled end users (power users) of the new technology. Of course, you should have a document that shows the physical and logical layout of the network that is being implemented or upgraded. This sort of document can be very useful when something goes wrong and you are trying to troubleshoot a problem. You might find, for example, that the physical network you've designed cannot handle the load that users and applications will place on the network at certain points.
• Overall project plan: Creating a project plan with a liberal timeline can be very helpful for keeping the project on track by setting milestones to be met.
By making the schedule a liberal one, you automatically build in extra time to be used when things don't go quite as you expected; if everything works perfectly, you simply finish ahead of schedule.
• Policies and procedures: As with any technology, you should plan to develop documents that detail the policies and procedures to follow when the new network begins operating. Policies dictate how the network is to be used. For example, you might not allow employees to use email for personal purposes, or the web browser to view pages not related to your business. Procedures are detailed instructions on how to perform certain actions. With new technology, both policies and procedures should be considered important factors.
Document Everything
Documentation is everything. People have very short memories of things that appear to have only a limited lifetime, such as work projects.
An executive overview: You must have some overall plan to present to upper management that explains, without too many technical details, the reasons the upgrade (or new network) is needed, and what benefits the business will obtain from it.
A technical project plan: This is a difficult document to create. After you've identified the parts of the network to upgrade, you need to create lists of steps detailing the replacement of old equipment with the new, with little disruption to the user community. If you are building a network from scratch, or planning a major upgrade in which most of the existing equipment will be replaced, this kind of document works best when done in sections.
Note
Goals! If you include goals in your project plan (and you should), you can identify certain accomplishments to be attained during the implementation of the project. These goals can be used to measure the performance of the plan, and to adjust the schedule you have set for it. Feedback from each project team can be used to modify the goals that you have set. It is rare that a project plan succeeds without some modifications.
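The central name-resolution service (DNS) mentioned in the protocol discussion can be exercised from any host with a few lines of Python, using only the resolver the operating system is already configured with:

```python
import socket

def resolve(hostname: str) -> list:
    """Resolve a hostname to its IPv4 addresses via the host's
    configured resolver (DNS, hosts file, etc.)."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves on any machine without needing an external DNS server.
print(resolve("localhost"))
```

On a network with a central DNS server, the same call would return the addresses that server hands out, which is why getting DNS (and DHCP) planning right matters for every client on the LAN.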
Detailed checklists— For each task that must be performed, a detailed checklist can help ensure that an important
step is not left out. Risk matrix— Identify potential risks early in the project, as well as mitigation steps that may help avoid these risks, and a description of the impact to the project that will result if the issue does occur. Impact can include items such as timeline slip, features or benefits that will need to be dropped, additional equipment or budget needed to overcome an obstacle, or even complete project failure. Test, Test, and Then Test Some More After you have developed a plan and the requisite documentation, don't assume that all of your assumptions and calculations are accurate. The only way to determine that the products or applications you will use in the upgraded network will function as expected is to perform extensive testing. Microsoft resource kits always point out that for larger networks you should create a testing laboratory and try to test different combinations of applications and operating-system configurations and determine whether the results match the expectations of your plan. TIP Remember that wireless clients on a secure network must be configured manually, even if you plan to use DHCP for IP addresses. Each client must be configured with the correct SSID and encryption standard for your SOHO or corporate wireless network, and you must also assign a unique computer name and a common network or domain name for computers in the same workgroup or domain. Make sure your checklist also includes wireless-specific information. Providing Training for Technical Personnel Technical users who will be responsible for helping manage the network should be trained in the procedures for which they will be responsible. Again, this means you should provide training for those who will help you set up the network as well as those who will manage it after it is functioning. Planning Resources Finally, keep in mind that technology changes rapidly in the computer and networking fields.
If you are about to set yourself on a course of designing a network, become familiar with all the latest technologies, and don't depend solely on past experience. The best way to keep up with new technologies is to read about them. You can use books, such as this one, and resources on the World Wide Web, and you can also talk to knowledgeable consultants who are experts in their field.
What is an Operating System?
An operating system is "the software that controls the hardware," or the programs that make the hardware usable. In brief, an operating system is the set of programs that controls a computer. Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows NT, Chicago, OS/2, MacOS, VMS, MVS, and VM. Controlling the computer involves software at several levels: kernel services, library services, and application-level services, all of which are part of the operating system. Applications run as processes and are linked with libraries that perform standard services. The kernel supports the processes by providing a path to the peripheral devices; it responds to service calls from the processes and to interrupts from the devices. The core of the operating system is the kernel, a control program that functions in privileged state (an execution context that allows all hardware instructions to be executed), reacting to interrupts from external devices and to service requests and traps from processes. Generally, the kernel is a permanent resident of the computer. It creates and terminates processes and responds to their requests for service.
Operating systems are resource managers. The main resource is computer hardware in the form of processors, storage, input/output devices, communication devices, and data. Some operating system functions are: implementing the user interface, sharing hardware among users, allowing users to share data among themselves, preventing users from interfering with one another, scheduling resources among users, facilitating input/output, recovering from errors, accounting for resource usage, facilitating parallel operations, organizing data for secure and rapid access, and handling network communications.
Objectives of Operating Systems
Modern operating systems generally have the following three major goals, which are accomplished by running processes in a low-privilege state and providing service calls that invoke the operating-system kernel in a high-privilege state.
To hide details of hardware by creating abstractions
An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction. First, the code needed to control peripheral devices is not standardized; operating systems provide subroutines, called device drivers, that perform operations (for example, input/output operations) on behalf of programs. Second, the operating system introduces new functions as it abstracts the hardware: for instance, it introduces the file abstraction so that programs do not have to deal with disks. Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program. Each program that is running is called a process, and each process views the hardware through the lens of abstraction. Fourth, the operating system can enforce security through abstraction.
To allocate resources to processes (manage resources)
An operating system controls how processes (the active agents) may access resources (passive entities).
To provide a pleasant and effective user interface
The user interacts with the operating system through the user interface and is usually interested in the "look and feel" of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
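The file abstraction just described can be seen in miniature from any user program: the same open/read/write interface works regardless of the underlying disk hardware, because the kernel and its drivers handle the device details. A small Python sketch (the file name is chosen arbitrarily):

```python
import os
import tempfile

# The OS file abstraction: a program reads and writes a named file
# without knowing anything about sectors, heads, or block allocation.
path = os.path.join(tempfile.gettempdir(), "abstraction_demo.txt")

with open(path, "w") as f:   # the kernel maps this onto disk blocks
    f.write("hello from the file abstraction\n")

with open(path) as f:        # same interface whatever the storage device
    print(f.read(), end="")

os.remove(path)              # deletion is also a service call to the kernel
```

Every call above ultimately becomes a trap into the kernel, which is exactly the high-privilege service-call mechanism described in the objectives section.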
One can view operating systems from two points of view: resource manager and extended machine. From the resource-manager point of view, operating systems manage the different parts of the system efficiently; from the extended-machine point of view, operating systems provide users with a virtual machine that is more convenient to use. Structurally, operating systems can be designed as a monolithic system, a hierarchy of layers, a virtual machine system, an exokernel, or using the client-server model. The basic concepts of operating systems are processes, memory management, I/O management, file systems, and security.
System Components
Even though not all systems have the same structure, many modern operating systems share the goal of supporting the following types of system components.
Process Management
The operating system manages many kinds of activities, ranging from user programs to system programs such as printer spoolers, name servers, file servers, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, program counter, registers, OS resources in use, etc.). It is important to note that a process is not a program: a process is only ONE instance of a program in execution, and many processes can be running the same program. The five major activities of an operating system in regard to process management are:
• Creation and deletion of user and system processes.
• Suspension and resumption of processes.
• A mechanism for process synchronization.
• A mechanism for process communication.
• A mechanism for deadlock handling.
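These activities can be illustrated from user code: Python's multiprocessing module asks the operating system to create real processes, synchronize them with a lock, and suspend the parent until the children finish. A minimal sketch:

```python
import multiprocessing as mp

def worker(lock, counter):
    # Process synchronization: the lock serializes access to shared state.
    for _ in range(1000):
        with lock:
            counter.value += 1

if __name__ == "__main__":
    lock = mp.Lock()
    counter = mp.Value("i", 0)  # shared memory arranged by the OS
    procs = [mp.Process(target=worker, args=(lock, counter))
             for _ in range(4)]
    for p in procs:             # creation of processes
        p.start()
    for p in procs:             # parent suspends until each child exits
        p.join()
    print(counter.value)        # 4000: the lock prevented lost updates
```

Without the lock, concurrent increments could interleave and lose updates, which is precisely why the operating system must provide synchronization mechanisms.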
Memory Management
Primary memory (main memory) is a large array of words or bytes, each with its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory. The major activities of an operating system in regard to memory management are:
• Keep track of which parts of memory are currently being used, and by whom.
• Decide which processes are loaded into memory when memory space becomes available.
• Allocate and deallocate memory space as needed.
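As a rough illustration of these activities, the following toy first-fit allocator keeps track of used and free memory and allocates and deallocates space on request. It is a teaching sketch only, not how a real kernel allocator works (for one thing, it does not coalesce adjacent free holes):

```python
class FirstFitAllocator:
    """Toy first-fit allocator: tracks which parts of memory are used,
    and allocates/deallocates space as needed (sizes in arbitrary units)."""

    def __init__(self, size):
        self.size = size
        self.free = [(0, size)]   # list of (start, length) holes
        self.used = {}            # start address -> allocated length

    def allocate(self, length):
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:    # first hole big enough wins
                if hole == length:
                    self.free.pop(i)
                else:
                    self.free[i] = (start + length, hole - length)
                self.used[start] = length
                return start
        return None               # no hole large enough

    def deallocate(self, start):
        length = self.used.pop(start)
        self.free.append((start, length))
        self.free.sort()          # keep holes in address order (no coalescing)

mem = FirstFitAllocator(100)
a = mem.allocate(30)   # placed at address 0
b = mem.allocate(50)   # placed at address 30
mem.deallocate(a)
c = mem.allocate(20)   # first fit reuses the freed hole at address 0
print(a, b, c)         # 0 30 0
```

Real memory managers add paging, protection, and coalescing of free blocks, but the bookkeeping pattern (track, decide, allocate, deallocate) is the same.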
File Management
A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, such as speed, capacity, data-transfer rate, and access method. A file system is normally organized into directories to ease its use; these directories may contain files and other directories. The five major activities of an operating system in regard to file management are:
1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media.
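These file-management activities are exposed to user programs as simple primitives. A Python sketch using a temporary directory (the directory and file names are arbitrary):

```python
import os
import shutil
import tempfile

# File-management primitives: create and delete files and directories.
# The file system maps all of this onto secondary storage for us.
root = tempfile.mkdtemp()

sub = os.path.join(root, "reports")
os.mkdir(sub)                          # creation of a directory

path = os.path.join(sub, "notes.txt")
with open(path, "w") as f:             # creation of a file
    f.write("quarterly numbers\n")

print(os.listdir(sub))                 # the directory lists its files

os.remove(path)                        # deletion of a file
shutil.rmtree(root)                    # deletion of the directory tree
```

Each of these calls corresponds to one of the numbered activities above; backup (activity 5) is typically handled by separate utilities built on the same primitives.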
I/O System Management
The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.
Secondary-Storage Management
Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program. Because main memory is too small to accommodate all data and programs, and because its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed from primary storage. Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space. The three major activities of an operating system in regard to secondary-storage management are:
1. Managing the free space available on the secondary-storage device.
2. Allocation of storage space when new files have to be written.
3. Scheduling the requests for memory access.
Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines called a network. The communication-network design must consider routing and connection strategies, and the problems of contention and security.
Protection System
If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection refers to the mechanisms for controlling the access of programs, processes, or users to the resources defined by a computer system.
Command Interpreter System
A command interpreter is an interface between the operating system and the user. The user gives commands, which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages to separating the command interpreter from the kernel:
1. If we want to change the way the command interpreter looks, i.e., change its interface, we are able to do so when the command interpreter is separate from the kernel, since kernel code cannot simply be modified.
2. If the command interpreter were part of the kernel, it would be possible for a malicious process to gain access to parts of the kernel that it should not reach. To avoid this scenario, it is advantageous to keep the command interpreter separate from the kernel.
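A toy command interpreter running entirely in user mode, as argued above, can be sketched in a few lines. It reads commands from a list rather than a terminal, and hands them to the operating system via subprocess (which wraps the underlying process-creation system calls); the command list is illustrative.

```python
import shlex
import subprocess

def mini_shell(commands):
    """A toy command interpreter: parse each command line, turn it into
    a request to the OS (here via subprocess), and record the result.
    It runs entirely outside the kernel, in user mode."""
    results = []
    for line in commands:
        argv = shlex.split(line)
        if not argv:
            continue
        if argv[0] == "exit":          # a built-in command, handled locally
            break
        completed = subprocess.run(argv, capture_output=True, text=True)
        results.append((line, completed.returncode))
    return results

# Feed the interpreter a couple of commands instead of reading interactively.
for cmd, rc in mini_shell(["echo hello", "exit", "never runs"]):
    print(f"{cmd!r} exited with status {rc}")
```

Because the interpreter is an ordinary process, replacing it (point 1 above) means nothing more than running a different program, and a bug in it cannot corrupt the kernel (point 2).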
Operating System Services
The following are five services provided by an operating system for the convenience of users.
Program Execution
The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run them. The user does not have to worry about memory allocation, multitasking, or the like; these things are taken care of by the operating system. Running a program involves allocating and deallocating memory and, in the multi-process case, CPU scheduling. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently, without help from the operating system.
I/O Operations
Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed, without any details. So by providing I/O, the operating system makes it convenient for users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation
The output of a program may need to be written to new files, or its input taken from existing files. The operating system provides this service; the user does not have to worry about secondary-storage management. The user gives a command for reading from or writing to a file and sees the task accomplished. Thus operating systems make it easier for user programs to accomplish their tasks. This service involves secondary-storage management. The speed of I/O, which depends on secondary-storage management, is critical to the speed of many programs, and hence it is best managed by the operating system rather than handing individual users control of it. It is not difficult for user-level programs to provide these services, but for the reasons mentioned above, this service is best left with the operating system.
Communications
There are instances where processes need to communicate with each other to exchange information, whether they run on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, this can be done by user programs.
The user program may be customized to the specifics of the hardware through which the message transits and provides the service interface to the operating system. Error Detection An error is one part of the system may cause malfunctioning of the complete system. To avoid such a situation the operating system constantly monitors the system for detecting the errors. This relieves the user of the worry of errors propagating to various part of the system and causing malfunctioning. This service cannot be handled by user programs because it involves monitoring and in cases altering area of memory or de-allocation of memory for a faulty process. Or relinquishing a process that goes into an infinite loop. These tasks are too critical to be handed over to the user programs. A user program if given these privileges can interfere with the correct (normal) operation of the operating systems.
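As an illustration of the communications service, the following Python sketch uses an operating-system pipe to pass a message between a parent process and a child process. This is a minimal POSIX-only example; the message text is arbitrary.

```python
import os

# The OS provides the pipe and process-creation primitives; the
# programs themselves never touch the underlying hardware.
r, w = os.pipe()          # kernel-managed channel: read end, write end
pid = os.fork()           # create a second process (POSIX systems only)

if pid == 0:              # child process: send a message and exit
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:                     # parent process: receive the message
    os.close(w)
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)    # reap the child
    print(msg.decode())   # prints: hello from child
```

Neither process needed to know how the kernel buffers or delivers the bytes; that is exactly the detail the operating system hides.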
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Following are some scheduling algorithms we will study:

• FCFS (First-Come, First-Served) Scheduling
• Round Robin Scheduling
• SJF (Shortest Job First) Scheduling
• SRT (Shortest Running Time) Scheduling
• Priority Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
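The first of these, FCFS, is simple enough to sketch directly: each process waits for every process ahead of it in the queue to finish. The burst times below are invented for the example.

```python
def fcfs_waiting_times(burst_times):
    """FCFS: each process waits until all earlier arrivals have run."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # time spent waiting for the CPU to free up
        elapsed += burst        # the process then runs for its full burst
    return waits

# Three processes arriving in order with CPU bursts of 24, 3 and 3 time units
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

Note how the long first burst inflates the average; submitting the short jobs first (as SJF would) gives waits of [0, 3, 6] and an average of 3.0, which is why FCFS is rarely optimal.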
What is Samba?
"Samba is an Open Source/Free Software suite that provides seamless file and print services to SMB/CIFS clients." Samba is freely available, unlike other SMB/CIFS implementations, and allows for interoperability between Linux/Unix servers and Windows-based clients. Samba is software that can be run on platforms other than Microsoft Windows, for example UNIX, Linux, IBM System 390, OpenVMS, and other operating systems. Samba uses the TCP/IP protocol stack installed on the host server. When correctly configured, it allows that host to interact with a Microsoft Windows client or server as if it were a Windows file and print server. Samba is a software package that gives network administrators flexibility and freedom in terms of setup, configuration, and choice of systems and equipment. Because of all that it offers, Samba has grown in popularity, and continues to do so, every year since its release in 1992.

Features
Samba allows file and print sharing between computers running Windows and computers running Unix. It is an implementation of dozens of services and a dozen protocols, including NetBIOS over TCP/IP, SMB, and CIFS (an enhanced version of SMB). Samba sets up network shares for chosen Unix directories (including all contained subdirectories); these appear to Microsoft Windows users as normal Windows folders accessible via the network. Samba services are implemented as two daemons:
smbd, which provides the file and printer sharing services, and nmbd, which provides the NetBIOS-to-IP-address name service. NetBIOS over TCP/IP requires some method for mapping NetBIOS computer names to the IP addresses of a TCP/IP network.
Samba configuration is achieved by editing a single file (typically installed as /etc/smb.conf or /etc/samba/smb.conf). Samba can also provide user logon scripts and group policy implementation through poledit.
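As an illustration, a share is defined by adding a section to that single file. The following minimal sketch uses standard smb.conf directives, though the share name and path are invented for the example:

```ini
[global]
   workgroup = WORKGROUP
   server string = Example Samba Server   ; string shown to Windows clients
   security = user                        ; require a valid username/password

; A hypothetical share exposing one Unix directory to Windows clients
[shared]
   path = /srv/samba/shared
   read only = no
   browseable = yes
```

After editing, the `testparm` utility shipped with Samba can be used to check the file for syntax errors before restarting the daemons.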
Samba is included in most Linux distributions and is started during the boot process. On Red Hat, for instance, the /etc/rc.d/init.d/smb script runs at boot time, and starts both daemons. Samba isn't included in Solaris 8, but a Solaris 8-compatible version is available from the Samba website. Samba includes a web administration tool called Samba Web Administration Tool (SWAT).
System Administration Tips
As a network administrator you should know basic network technologies such as TCP/IP, Ethernet, cabling systems, basic network hardware and software, directory services, and some basic network troubleshooting techniques. Always document your network resources and configurations, such as network hardware and software inventory, computers, basic network configurations, and logon scripts. Also take a complete backup of your server's data, and manage the resources of the network server in such a way that simultaneous access does not affect the server's performance. Keep your network software inventory up to date and ensure that you have utilities such as data recovery tools, backup utilities, troubleshooting tools, TCP/IP port and performance monitoring tools, and up-to-date antivirus software. Make your network administrator's password complex, and also train an assistant network administrator. Keep an up-to-date network security tool, such as GFI LanGuard, which can protect your network from internal or external security attacks. Strictly restrict access to the server room: only authorized persons should be allowed to enter. Regularly back up your critical data to other hard disks and write backup data to CDs. After installing and configuring your server, take an image of the whole hard disk with a good utility such as Norton Ghost. Always keep updated disaster recovery and fault tolerance plans ready. Do not leave the server room unattended; a responsible person must be present at all times. Following are the basic tips for administering a computer network.
1. If you are preparing a new system, make sure that you are using the best installation source, such as an installation CD.
After installing the operating system, also install an up-to-date antivirus program such as Norton Antivirus, Trend Micro, McAfee, Panda Antivirus, or any other antivirus program of your choice.
2. After completing the installation of all the networked computers, test your network for performance, speed, and security.
3. Keep your network design simple and segment your network into sections so that it is easy to isolate a faulty component.
4. Regularly update your network computers with the latest antivirus definitions, software patches, and other security measures.
5. Purchase planned hardware for your network and document your hardware and software inventory.
6. Install only the required software applications on your server and networked computers. Any kind of unauthorized software or web-based application can become a source of viruses and other malicious code from the internet.
7. Define network IP addresses, create user accounts, and keep resources close to the users.
8. Train your IT staff as well as all users and warn them about unauthorized activities.
9. Administer access to the internet, put tight restrictions on unauthorized internet usage, and block all unwanted ports and web-based applications.
10. Secure your gateway computer or proxy server as much as possible, because the gateway is directly exposed to the internet.
11. Mirror your hard disks.
12. Structure your network.
13. As a network administrator, always research and test new network technologies and trends.

As a network administrator or IT manager, you have to ensure the security of your computer network, i.e. that it is protected from unauthorized users and from hackers. For this purpose you have to restrict user rights, implement security policies, use firewalls, and monitor network traffic and performance with different tools. Some high-end software products produce real-time graphical views for monitoring network activity.

Simple Network Management Protocol (SNMP)

What is SNMP?
Simple Network Management Protocol (SNMP) is a widely used protocol designed to facilitate the management of networked devices from a central location. Designed originally for the management of devices such as routers and switches, its usage has grown rapidly to encompass the monitoring of nearly any electronic device one can think of. SNMP is now used to monitor and manage television broadcast studios, automated fare collection systems, airborne military platforms, energy distribution systems, emergency radio networks, and much more.
The SNMP architecture is composed of three major elements:
• Managers (software) are responsible for communicating with (and managing) network devices that implement SNMP Agents (also software).
• Agents reside in devices such as workstations, switches, routers, microwave radios, and printers, and provide information to Managers.
• MIBs (Management Information Bases) describe the data objects to be managed by an Agent within a device. MIBs are actually just text files, and the values of MIB data objects are the topic of conversation between Managers and Agents.
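The Manager/Agent/MIB relationship can be sketched as a toy model. The Python classes below are illustrative only, not a real SNMP stack: the OID used is the standard sysDescr object identifier, but the device and its value are invented for the example.

```python
# Toy model of the SNMP manager/agent/MIB relationship.
# Not a real SNMP implementation; device name and value are made up.
class Agent:
    """Runs on a managed device; answers queries from its MIB."""
    def __init__(self, mib):
        self.mib = mib              # MIB modeled as a mapping: OID -> value

    def get(self, oid):
        return self.mib.get(oid)    # respond to a manager's GET request

class Manager:
    """Central management station; queries agents for MIB values."""
    def poll(self, agent, oid):
        return agent.get(oid)

# sysDescr (1.3.6.1.2.1.1.1.0) is a standard MIB-II object;
# the router and its description string are hypothetical.
router = Agent({"1.3.6.1.2.1.1.1.0": "example router, uptime 42 days"})
mgr = Manager()
print(mgr.poll(router, "1.3.6.1.2.1.1.1.0"))  # example router, uptime 42 days
```

In a real deployment the Manager and Agent exchange these values over UDP using the SNMP message formats, but the division of roles is the same: the MIB names the data, the Agent serves it, and the Manager asks for it.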
Here are some typical uses of SNMP:
• Monitoring device performance
• Detecting device faults, or recovery from faults
• Collecting long-term performance data
• Remote configuration of devices
• Remote device control
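The first two uses above amount to polling a value and comparing it against a limit. The following illustrative Python sketch flags the polling intervals where a monitored value crosses a threshold; the sample values and the 90% threshold are invented for the example.

```python
# Sketch of threshold-based fault detection over polled samples.
# In practice each sample would come from an SNMP GET against a device.
def detect_faults(samples, threshold):
    """Return the indices of polls whose value exceeds the threshold."""
    return [i for i, value in enumerate(samples) if value > threshold]

cpu_load = [12, 18, 95, 22, 97]       # simulated per-poll CPU load (%)
print(detect_faults(cpu_load, 90))    # [2, 4] -> polls 2 and 4 are faults
```

A real monitoring system would additionally record the samples over time (long-term performance data) and raise alerts or traps when faults are detected.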