MASTER OF BUSINESS ADMINISTRATION MBA SEM III MI0033 Software Engineering Assignment Set- 1
Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.
Answer: Quality Concepts

It has been said that no two snowflakes are alike. Certainly when we watch snow falling it is hard to imagine that snowflakes differ at all, let alone that each flake possesses a unique structure. In order to observe differences between snowflakes, we must examine the specimens closely, perhaps using a magnifying glass. In fact, the closer we look, the more differences we are able to observe.

This phenomenon, variation between samples, applies to all products of human as well as natural creation. For example, if two identical circuit boards are examined closely enough, we may observe that the copper pathways on the boards differ slightly in geometry, placement, and thickness. In addition, the location and diameter of the holes drilled in the boards vary as well. All engineered and manufactured parts exhibit variation. The variation between samples may not be obvious without the aid of precise equipment to measure the geometry, electrical characteristics, or other attributes of the parts. However, with sufficiently sensitive instruments, we will likely come to the conclusion that no two samples of any item are exactly alike.

Quality

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to specifications.

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.

In software development, quality of design encompasses requirements, specifications, and the design of the system. Quality of conformance is an issue focused primarily on implementation.
If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.
(ii) Software Safety

When software is used as part of the control system, complexity can increase by an order of magnitude or more. Subtle design faults induced by human error, something that can be uncovered and eliminated in hardware-based conventional control, become much more difficult to uncover when software is used. Software safety is a software quality assurance activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control them. A modeling and analysis process is conducted as part of software safety. Initially, hazards are identified and categorized by criticality and risk. For example, one of the hazards associated with a computer-based cruise control for an automobile might be that it causes uncontrolled acceleration that cannot be stopped.
Q2.

Answer: What is Software Testing?

Test is a formal activity. It involves a strategy and a systematic approach. The different stages of tests supplement each other. Tests are always specified and recorded. Test is a planned activity: the workflow and the expected results are specified, therefore the duration of the activities can be estimated, and the point in time at which tests are executed is defined. Test is the formal proof of software quality.

Overview of Test Methods

Static tests: The software is not executed but analyzed offline. In this category are code inspections (e.g. Fagan inspections), Lint checks, cross-reference checks, etc.

Dynamic tests: These require the execution of the software or parts of the software (using stubs). They can be executed in the target system, an emulator or a simulator. Within the dynamic tests, the state of the art distinguishes between structural and functional tests.

Structural tests: These are the so-called "white-box tests" because they are performed with knowledge of the source code details. Input interfaces are stimulated with the aim of running through certain predefined branches or paths in the software. The software is stressed with critical values at the boundaries of the input values or even with illegal input values. The behavior of the output interface is recorded and compared with the expected (predefined) values.

Functional tests: These are the so-called "black-box tests". The software is regarded as a unit with unknown content. Inputs are stimulated and the values at the output are recorded and compared to the expected and specified values.

Test by Progressive Stages

The various tests are able to find different kinds of errors. Therefore it is not enough to rely on one kind of test and completely neglect the others. For example, white-box tests will be able to find coding errors; to detect the same coding error in the system test is very difficult.
The system malfunction which may result from the coding error will not necessarily allow conclusions about the location of the coding error. Tests therefore should be progressive and supplement each other in stages in order to find each kind of error with the appropriate method.

Module test: A module is the smallest compilable unit of source code. Often it is too small to allow functional (black-box) tests; however, it is the ideal candidate for white-box tests. These should be, first of all, static tests (e.g. Lint and inspections), followed by dynamic tests to check boundaries, branches and paths. This will usually require the employment of stubs and special test tools.

Component test: This is the black-box test of modules or groups of modules which represent certain functionality. There are no rules about what can be called a component; it is simply what the tester defines to be a component. However, it should make sense and be a testable unit. Components can be integrated step by step into bigger components and tested as such.

Integration test: The software is completed step by step and tested with tests covering the collaboration of modules or classes. The integration steps depend on the kind of system; for example, the steps could be to run the operating system first.
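The module test described above (static checks first, then dynamic white-box tests against a stub) can be sketched in a few lines of Python. The module, its stub, and the tax rate below are all invented for illustration:

```python
# Module under test: computes a price including tax. The tax component it
# depends on is not yet integrated, so it is passed in and replaced by a stub.
def price_with_tax(net_price, tax_service):
    if net_price < 0:
        raise ValueError("net price must be non-negative")
    return net_price + tax_service(net_price)

# Stub standing in for the real (not yet available) tax component.
def tax_stub(net_price):
    return net_price * 0.10  # fixed 10% rate, chosen only for testing

# White-box style module tests: exercise the boundary (0), a normal value,
# and the error branch for illegal input.
assert price_with_tax(0, tax_stub) == 0
assert price_with_tax(100, tax_stub) == 110.0
try:
    price_with_tax(-1, tax_stub)
    raise AssertionError("negative input should be rejected")
except ValueError:
    pass
```

The stub keeps the test focused on the module itself; the real tax component would be exercised later, in the component and integration tests.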
Which Test finds which Error?

Possible error: Syntax errors. Best found by: compiler, Lint. Examples: missing semicolons; values defined but not initialized or used; order of evaluation disregarded.

Possible error: Data errors. Best found by: software inspection, module tests. Examples: overflow of variables at calculation; usage of inappropriate data types; values not initialized; values loaded with wrong data or loaded at a wrong point in time; lifetime of pointers.

Possible error: Algorithm and logic errors. Best found by: software inspection, module tests. Examples: wrong program flow; use of wrong formulas and calculations.

Possible error: Interface errors. Best found by: software inspection, module tests, component tests. Examples: overlapping ranges; range violation (min. and max. values not observed or limited); unexpected inputs; wrong sequence of input parameters.

Possible error: Operating system, architecture and design errors. Best found by: design inspection, integration tests. Examples: disturbances by OS interruptions or hardware interrupts; timing problems; lifetime and duration problems.

Possible error: Integration errors. Best found by: integration tests, system tests. Examples: resource problems (runtime, stack, registers, memory, etc.).

Possible error: System errors. Best found by: system tests. Examples: wrong system behaviour; specification errors.
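The interface-error row above (range violations at min. and max. values) is exactly what boundary-value tests target. A minimal Python sketch, with an invented contract that valid inputs lie in the range 1..100:

```python
# Hypothetical module contract (invented for illustration): the input
# value must lie in the range 1..100 inclusive.
def set_speed(value):
    if not 1 <= value <= 100:
        raise ValueError("speed out of range")
    return value

# Boundary-value tests: both limits, and one step outside each limit.
assert set_speed(1) == 1
assert set_speed(100) == 100
for bad in (0, 101):
    try:
        set_speed(bad)
        raise AssertionError("range violation not detected")
    except ValueError:
        pass
```

Testing exactly at and just beyond each limit is what catches the "min. and max. values not observed or limited" class of error listed in the table.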
Q3.
Answer: Levels of the CMM

Level 1 - Initial: Processes are usually ad hoc and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Such organizations are characterized by a tendency to overcommit, to abandon processes in times of crisis, and to be unable to repeat their past successes. Software project success depends on having quality people.

Level 2 - Repeatable: Software development successes are repeatable, although the processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

Level 3 - Defined: The organization's set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes from the organization's set of standard processes according to tailoring guidelines.
The organization's management establishes process objectives based on the organization's set of standard processes and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit.

Level 4 - Managed: Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular
projects without measurable losses of quality or deviations from specifications. At this level the organization sets quantitative quality goals for both the software process and software maintenance. Subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing: This level focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization's set of standard processes are targets of measurable improvement activities. Process improvements to address common causes of process variation and measurably improve the organization's processes are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization's ability to respond rapidly to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed.

At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance, while maintaining statistical predictability, in order to achieve the established quantitative process-improvement objectives.
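The quantitative control described for levels 4 and 5 is commonly realized with control charts. A minimal Python sketch using invented per-project defect-density figures: points outside the 3-sigma limits would signal the special causes addressed at level 4, while deliberately shifting the mean is the level 5 activity.

```python
import statistics

# Defect densities (defects per KLOC) from past projects - invented data.
densities = [4.2, 3.8, 5.1, 4.6, 4.0, 4.4, 3.9, 4.8]

mean = statistics.mean(densities)
sigma = statistics.stdev(densities)

# 3-sigma control limits: a point outside them signals a special cause
# (the level 4 concern); reducing the mean itself addresses common causes
# (the level 5 concern).
ucl = mean + 3 * sigma
lcl = max(0.0, mean - 3 * sigma)

out_of_control = [d for d in densities if not lcl <= d <= ucl]
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, outliers={out_of_control}")
```

With this invented data the process is stable (no outliers), so a level 5 organization would work on lowering the mean rather than chasing individual points.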
Q4.
Answer: Waterfall model: The simplest software development life cycle model is the waterfall model, which states that the phases are organized in a linear order. A project begins with feasibility analysis. On successful demonstration of feasibility, requirements analysis and project planning begin. Design starts after the requirements analysis is done, and coding begins after the design is done. Once the programming is completed, the code is integrated and testing is done. On successful completion of testing, the system is installed. After this, regular operation and maintenance of the system take place. The following figure demonstrates the steps involved in the waterfall life cycle model.
The Waterfall Software Life Cycle Model

With the waterfall model, the activities performed in a software development project are requirements analysis, project planning, system design, detailed design, coding and unit testing, and system integration and testing. The linear ordering of activities has some important consequences. First, to clearly identify the end of one phase and the beginning of the next, some certification mechanism has to be employed at the end of each phase. This is usually done by some form of verification and validation: checking that the output of a phase is consistent with its input (which is the output of the previous phase) and that the output of the phase is consistent with the overall requirements of the system. The consequence of this need for certification is that each phase must have some defined output that can be evaluated and certified. Therefore, when the activities of a phase are completed, there should be an output product of that phase, and the goal of the phase is to produce this product. The outputs of the earlier phases are often called intermediate products or design documents. For the coding phase, the output is the code. From this point of view, the output of a software project is not just the final program but also its documentation: the requirements document, design document, project plan, test plan and test results. Another implication of the linear ordering of phases is that after each phase is completed and its outputs are certified, these outputs become the inputs to the next phase and should not be changed or modified. However, changing requirements cannot be avoided and must be faced. Since changes performed in the output of one phase affect later phases that might already have been performed, these changes have to be made in a controlled manner after evaluating the effect of each change on the project. This brings us to the need for configuration control or configuration management.
The certified output of a phase that is released for the next phase is called a baseline. Configuration management ensures that any change to a baseline is made only after careful review, keeping in mind the interests of all parties that are affected by it. There are two basic assumptions justifying the linear ordering of phases in the manner proposed by the waterfall model: for a successful project resulting in a successful product, all phases listed in the waterfall model must be performed anyway; and any different ordering of the phases would result in a less successful software product.
Q5.
Explain the Advantages of Prototype Model, & Spiral Model in Contrast to Water Fall model.
Answer: Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these and many other situations, a prototyping paradigm may offer the best approach.

The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.

The prototype can serve as "the first system", the one that Brooks recommends we throw away. But this may be an idealized view. It is true that both customers and developers like the prototyping paradigm: users get a feel for the actual system, and developers get to build something immediately. Yet prototyping can also be problematic, for the following reasons:

1. The customer sees what appears to be a working version of the software, unaware that the prototype is held together with chewing gum and baling wire, unaware that, in the rush to get it working, no one has considered overall software quality or long-term maintainability.
When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that "a few fixes" be applied to make the prototype a working product. Too often, software development management relents.

2. The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability. After a time, the developer may become familiar with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.
The Spiral Model

The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. It provides the potential for rapid development of incremental versions of the software. Using the spiral model, software is developed in a series of incremental releases. The model is divided into a number of framework activities, also called task regions:
Customer communication: tasks required to establish effective communication between developer and customer.
Planning: tasks required to define resources, timelines, and other project-related information.
Risk analysis: tasks required to assess both technical and management risks.
Engineering: tasks required to build one or more representations of the application.
Construction and release: tasks required to construct, test, install, and provide user support (e.g., documentation and training).
Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.
Each of the regions is populated by a set of work tasks, called a task set, that are adapted to the characteristics of the project to be undertaken. For small projects, the number of work tasks and their formality is low. For larger, more critical projects, each task region contains more work tasks that are defined to achieve a higher level of formality. In all cases, the umbrella activities (e.g., software configuration management and software quality assurance) noted in Section 2.2 are applied. As this evolutionary process begins, the software engineering team moves around the spiral in a clockwise direction, beginning at the center. The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype, and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on feedback derived from customer evaluation. In addition, the project manager adjusts the planned number of iterations required to complete the software. Unlike classical process models that end when software is delivered, the spiral model can be adapted to apply throughout the life of the computer software. An alternative view of the spiral model can be considered by examining the project entry point axis, the starting point for different types of projects. A concept development project starts at the core of the spiral and will continue (multiple iterations occur along the spiral path that bounds the central shaded region) until concept development is complete. If the concept is to be developed into an actual product, the process proceeds through the next cube (new product development project entry point) and a new development project is initiated. 
The new product will evolve through a number of iterations around the spiral, following the path that bounds the region that has somewhat lighter shading than the core. In essence, the spiral, when characterized in this way, remains operative until the software is retired. There are times when the process is dormant, but whenever a change is initiated, the process starts at the appropriate entry point (e.g., product enhancement). The spiral model is a realistic approach to the development of large-scale systems and software. Because software evolves as the process progresses, the developer and customer better understand and can react to risks at each evolutionary level. The spiral model uses prototyping as a risk reduction mechanism but, more importantly, enables the developer to apply the prototyping approach at any stage in the evolution of the product. It maintains the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative framework.
Q6.
Answer: COCOMO Model: COCOMO stands for COnstructive COst MOdel. It is used for software cost estimation and uses a regression formula with parameters based on historical data. COCOMO has a hierarchy of three increasingly detailed and accurate forms: Basic, Intermediate and Detailed. The Basic level is good for a quick, early, overall cost estimate for the project but is not accurate enough. The Intermediate level considers some of the other project factors that influence project cost, and the Detailed level additionally accounts for the various project phases that affect the cost of the project.

Advantages of the COCOMO estimating model: COCOMO is factual and easy to interpret; one can clearly understand how it works. It accounts for various factors that affect the cost of the project. It works on historical data and hence is more predictable and accurate.
Disadvantages of COCOMO estimating model COCOMO model ignores requirements and all documentation. It ignores customer skills, cooperation, knowledge and other parameters. It oversimplifies the impact of safety/security aspects. It ignores hardware issues. It ignores personnel turnover levels. It is dependent on the amount of time spent in each phase.
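The Basic form mentioned above can be sketched directly. The coefficients below are the published Basic COCOMO constants for an organic-mode project; the 32 KLOC project size is an invented example.

```python
# Basic COCOMO, organic mode: effort E = a * (KLOC)^b person-months,
# development time T = c * E^d months. The constants a, b, c, d are the
# standard Basic COCOMO values for organic-mode projects.
A, B, C, D = 2.4, 1.05, 2.5, 0.38

def basic_cocomo(kloc):
    effort = A * kloc ** B    # estimated effort in person-months
    time = C * effort ** D    # estimated development time in months
    staff = effort / time     # implied average staffing level
    return effort, time, staff

# Hypothetical 32 KLOC project.
effort, time, staff = basic_cocomo(32)
print(f"effort={effort:.1f} PM, schedule={time:.1f} months, avg staff={staff:.1f}")
```

This illustrates why Basic COCOMO is only a quick first estimate: size (KLOC) is the sole input, which is exactly the limitation the Intermediate level addresses with its cost drivers.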
Software Estimation Technique: Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing Software developers today.
Estimating Software Size An accurate estimate of software size is an essential element in the calculation of estimated project costs and schedules. The fact that these estimates are required very early on in the project (often while a contract bid is being prepared) makes size estimation a formidable task. Initial size estimates are typically based on the
Master of Business Administration IS Semester 3 MI0034 Database Management System Assignment Set- 1
Q1.

Answer: Traditional File System vs. DBMS:

1. The traditional file system keeps redundant [duplicate] information in many locations, which might result in the loss of data consistency. For example, employee names might exist in separate files such as the Payroll Master File and the Employee Benefit Master File; if an employee changes his or her last name, the name might be changed in the Payroll Master File but not in the Employee Benefit Master File. In a DBMS, redundancy is eliminated to the maximum extent if the database is properly defined.

2. In a file system, data is scattered in various files, and each of these files may be in a different format, making it difficult to write new application programs to retrieve the appropriate data. In a DBMS, this problem is completely solved.

3. In a file system, security features have to be coded in the application program itself. In a DBMS, coding for most security requirements is not required, as they are taken care of by the DBMS.
Hence, a database management system is the software that manages a database, and is responsible for its storage, security, integrity, concurrency, recovery and access.
Q2. What is the disadvantage of sequential file organization? How do you overcome it? What are the advantages & disadvantages of Dynamic Hashing? Answer:
A dynamic hash index is organized as a binary tree with two kinds of nodes:

1. Internal nodes: These guide the search. Each has a left pointer corresponding to a 0 bit and a right pointer corresponding to a 1 bit.
2. Leaf nodes: Each holds a bucket address (a pointer to a bucket).
Each leaf node holds a bucket address. If a bucket overflows (for example, a new record inserted into the bucket for records whose hash values start with 10 causes overflow), then all records whose hash value starts with 100 are placed in the first split bucket, and the second bucket contains those whose hash value starts with 101. The levels of the binary tree can be expanded dynamically.

Advantages of dynamic hashing:
1. The main advantage is that splitting causes minor reorganization, since only the records in one bucket are redistributed to the two new buckets.
2. The space overhead of the directory table is negligible.
Disadvantages:
1. The index tables grow rapidly and may become too large to fit in main memory. When part of the index table is stored on secondary storage, extra accesses are required.
2. The directory must be searched before accessing the bucket, resulting in two block accesses instead of one as in static hashing.
3. A disadvantage of extendable hashing is that it involves an additional level of indirection.
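The split-one-bucket behaviour and the directory indirection described above can be sketched in Python. This is a simplified illustration (a flat directory indexed by the low-order bits of the hash, bucket capacity of two), not a production structure:

```python
# Minimal extendable-hashing sketch: the directory is indexed by the
# low-order `global_depth` bits of the hash; an overflowing bucket is
# split and only its own records are redistributed.
BUCKET_SIZE = 2  # artificially small so splits actually happen

class Bucket:
    def __init__(self, depth):
        self.depth = depth   # local depth: how many hash bits this bucket uses
        self.items = {}

class ExtendableHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def _index(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.directory[self._index(key)].items.get(key)

    def insert(self, key, value):
        while True:
            bucket = self.directory[self._index(key)]
            if key in bucket.items or len(bucket.items) < BUCKET_SIZE:
                bucket.items[key] = value
                return
            self._split(bucket)

    def _split(self, bucket):
        if bucket.depth == self.global_depth:
            # double the directory: cheap, since only pointers are copied
            self.directory = self.directory + self.directory
            self.global_depth += 1
        bucket.depth += 1
        new = Bucket(bucket.depth)
        mask = 1 << (bucket.depth - 1)
        # redistribute only this bucket's records (the "minor reorganization")
        for k in [k for k in bucket.items if hash(k) & mask]:
            new.items[k] = bucket.items.pop(k)
        for i in range(len(self.directory)):
            if self.directory[i] is bucket and i & mask:
                self.directory[i] = new

h = ExtendableHash()
for k in range(16):          # integer keys: hash(k) == k in CPython
    h.insert(k, str(k))
print(h.global_depth)        # the directory grew as buckets overflowed
```

Both listed disadvantages are visible here: every lookup goes through the directory (the extra indirection), and the directory doubles in size with each increase in global depth.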
Q3. What is a relationship type? Explain the difference among a relationship instance, a relationship type and a relationship set. Answer: Relationships: In the real world, items have relationships to one another; e.g., a book is published by a particular publisher. The association or relationship that exists between the entities relates data items to each other in a meaningful way. A relationship is an association between entities. A collection of relationships of the same type is called a relationship set. A relationship type R is a set of associations among entity types E1, E2, ..., En; mathematically, R is a set of relationship instances ri. E.g.: Consider a relationship type WORKS_FOR between two entity types, employee and department, which associates each employee with the department the employee works for. Each relationship instance ri in WORKS_FOR associates one employee entity and one department entity that participate in ri. Employees e1, e3 and e6 work for department d1; e2 and e4 work for d2; and e5 and e7 work for d3. The relationship type R is the set of all relationship instances.
Some instances of the WORKS_FOR relationship

Degree of a relationship type: the number of entity sets that participate in a relationship set. A unary relationship exists when an association is maintained within a single entity:
Constraints on Relationship Types: Relationship types usually have certain constraints that limit the possible combinations of entities that may participate in relationship instances. E.g., the company may have a rule that each employee must work for exactly one department. The two main types of constraints are cardinality ratios and participation constraints. The cardinality ratio specifies the number of entities to which another entity can be associated through a relationship set. Mapping cardinalities should be one of the following. One-to-one: An entity in A is associated with at most one entity in B, and vice versa.
An employee can manage only one department, and a department has only one manager. One-to-many: An entity in A is associated with any number of entities in B; an entity in B, however, can be associated with at most one entity in A. Each department can be related to numerous employees, but an employee can be related to only one department. Many-to-one: An entity in A is associated with at most one entity in B; an entity in B, however, can be associated with any number of entities in A. Many depositors deposit into a single account. Many-to-many: An entity in A is associated with any number of entities in B, and an entity in B is associated with any number of entities in A.
An employee can work on several projects and several employees can work on a project.
Participation: There are two ways an entity can participate in a relationship, i.e. two types of participation. 1. Total: The participation of an entity set E in a relationship set R is said to be total if every entity in E participates in at least one relationship in R. Every employee must work for a department, so the participation of employee in WORKS_FOR is total.
Some instances of the WORKS_FOR relationship

Total participation is sometimes called existence dependency. 2. Partial: If only some entities in E participate in relationships in R, the participation of entity set E in relationship set R is said to be partial.
Some instances of the WORKS_FOR relationship

We do not expect every employee to manage a department, so the participation of employee in the MANAGES relationship type is partial.
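The WORKS_FOR constraints above (many-to-one cardinality, total participation of employee) map directly onto a NOT NULL foreign key. A sketch using Python's sqlite3, with table and column names invented to mirror the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.executescript("""
CREATE TABLE department (dno INTEGER PRIMARY KEY, dname TEXT);
CREATE TABLE employee (
    ssn  INTEGER PRIMARY KEY,
    name TEXT,
    dno  INTEGER NOT NULL REFERENCES department(dno)  -- total, many-to-one
);
INSERT INTO department VALUES (1, 'd1'), (2, 'd2'), (3, 'd3');
INSERT INTO employee VALUES
  (1,'e1',1),(2,'e2',2),(3,'e3',1),(4,'e4',2),(5,'e5',3),(6,'e6',1),(7,'e7',3);
""")

# The cardinality ratio in action: one department, several employees.
rows = con.execute(
    "SELECT dno, COUNT(*) FROM employee GROUP BY dno ORDER BY dno").fetchall()
print(rows)

# Total participation of WORKS_FOR is enforced: an employee with no
# department (NULL dno) is rejected. A partial relationship like MANAGES
# would instead live in its own table, with no such requirement.
try:
    con.execute("INSERT INTO employee VALUES (8, 'e8', NULL)")
except sqlite3.IntegrityError:
    print("total participation enforced")
```

The GROUP BY result shows d1 related to e1, e3, e6 and the other departments to two employees each, matching the instances in the text.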
Q4. What is SQL? Discuss. Answer: SQL (Structured Query Language) is the standard language for defining, querying and manipulating data in relational database systems. SQL statements can be grouped into the following categories:

1. DDL (Data Definition Language)
2. DML (Data Manipulation Language)
3. DCL (Data Control Language)
4. TCL (Transaction Control Language)
DDL (Data Definition Language): The DDL statements provide commands for defining relation schemas, i.e. for creating tables, indexes, sequences etc., and commands for dropping, altering and renaming objects.

DML (Data Manipulation Language): The DML statements are used to alter the database tables in some way. The UPDATE, INSERT and DELETE statements alter existing rows in a database table, insert new records into a database table, or remove one or more records from a database table.
DCL (Data Control Language): The DCL statements are used to grant permissions to a user, revoke permissions from a user, and lock certain permissions for a user.

SQL
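The DDL/DML distinction can be demonstrated with SQLite, driven from Python so the statements are executable; the employee rows are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# DDL: define the relation schema.
cur.execute("CREATE TABLE employee (ssn INTEGER, name TEXT, salary INTEGER)")

# DML: INSERT new rows, UPDATE an existing one, DELETE another.
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 'Prasad', 32000), (2, 'Reena', 8000), (3, 'Deepak', 22000)])
cur.execute("UPDATE employee SET salary = salary + 1000 WHERE name = 'Reena'")
cur.execute("DELETE FROM employee WHERE name = 'Deepak'")

print(cur.execute("SELECT name, salary FROM employee ORDER BY ssn").fetchall())

# DCL (GRANT/REVOKE) is omitted: SQLite has no user accounts, so those
# statements only apply in client/server systems such as Oracle or MySQL.
```

Note how the DDL statement changes the schema while the DML statements change only the rows stored under it.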
SELECTING SPECIFIC COLUMNS: We wish to retrieve only the name and salary of the employees.

Example 3:
SELECT name, salary FROM employee;

OUTPUT:
NAME SALARY
USING ALIASES (alternate names given to columns):
SELECT Name, Salary, Salary * 12 "YRLY SALARY" FROM Employee;

OUTPUT:
NAME SALARY YRLY SALARY
Prasad 32000 384000
Reena 8000 96000
Deepak 22000 264000
Yadav 30000 360000
Venkat 18000 216000
Eliminating duplicate rows: To eliminate duplicate rows, simply use the keyword DISTINCT.
SELECT DISTINCT Mgrssn FROM Employee;

OUTPUT:
MGRSSN
2222
4444
DISPLAYING TABLE STRUCTURE: To display the schema of a table, use the command DESCRIBE (or DESC).
DESC Employee;
The output lists each column of the table with its data type.
SELECT statement with WHERE clause: Conditions are specified in the WHERE clause. It instructs SQL to search the data in a table and return only those rows that meet the search criteria.
SELECT * FROM Emp WHERE name = 'yadav';
OUTPUT:
SSN   NAME   BDATE      SALARY  MGRSSN  DNO
2222  yadav  10-dec-60  30000   4444    3
SELECT Name, Salary FROM Employee WHERE Salary > 20000;
OUTPUT:
NAME    SALARY
Prasad  32000
Deepak  22000
Yadav   30000
BETWEEN ... AND OPERATOR: SQL supports range searches. For example, to see all the employees with salary between 22000 and 32000:
SELECT * FROM Employee WHERE Salary BETWEEN 22000 AND 32000;
OUTPUT:
NAME    SALARY
Prasad  32000
Yadav   30000
Deepak  22000
IS NULL / IS NOT NULL: The NULL test checks for null values in the table. Example:
SELECT Name FROM Employee WHERE Mgrssn IS NULL;
OUTPUT:
NAME
Prasad
SORTING (ORDER BY CLAUSE): The ORDER BY clause returns the result in a particular order, e.g. all employees sorted by name or salary.
SELECT * FROM Emp ORDER BY basic;
SELECT job, ename FROM Emp ORDER BY joindate DESC;
DESC at the end of the ORDER BY clause orders the list in descending order instead of the default (ascending) order. Example:
SELECT * FROM Employee ORDER BY name DESC;
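The BETWEEN, IS NULL and ORDER BY clauses described above can be tried end to end with sqlite3 (a minimal sketch; the sample rows loosely follow the Employee data used in the examples and are not authoritative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (name TEXT, salary INTEGER, mgrssn INTEGER)")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)", [
    ("Prasad", 32000, None),   # NULL mgrssn: Prasad has no manager
    ("Reena", 8000, 2222),
    ("Deepak", 22000, 4444),
    ("Yadav", 30000, 4444),
])

# Range search: BETWEEN bounds are inclusive, so 22000 and 32000 both qualify
between = cur.execute(
    "SELECT name FROM employee WHERE salary BETWEEN 22000 AND 32000 "
    "ORDER BY salary DESC"
).fetchall()
print(between)  # [('Prasad',), ('Yadav',), ('Deepak',)]

# NULL test: '= NULL' never matches; IS NULL is required
no_mgr = cur.execute("SELECT name FROM employee WHERE mgrssn IS NULL").fetchall()
print(no_mgr)  # [('Prasad',)]
```

Reena (8000) falls outside the range, and only the row with a NULL manager passes the IS NULL test.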
OUTPUT:
NAME
Reena
Pooja
Deepak
Aruna
LIKE CONDITION: A WHERE clause with a pattern searches for substring matches. SQL supports pattern searching through the LIKE operator. Patterns are described using two special characters:
Percent (%): the % character matches any substring.
Underscore (_): the underscore matches any single character.
Example:
SELECT emp_name FROM Emp WHERE emp_name LIKE 'Ra%';
OUTPUT:
NAME
Raj
Rama
Ramana
To select the employees whose name starts with 'R' and has 'j' as the third character:
SELECT * FROM Emp WHERE emp_name LIKE 'R_j%';
WHERE CLAUSE USING THE IN OPERATOR: SQL supports searching for items within a list; it compares a value against the values within parentheses.
1. Select the employees who work for departments 10 and 20:
SELECT * FROM Emp WHERE deptno IN (10, 20);
2. Select the employees who work on the same project that Raja (empno E20) works on:
SELECT eno FROM works_on WHERE pno IN (SELECT pno FROM works_on WHERE empno = 'E20');
AGGREGATE FUNCTIONS AND GROUPING: The GROUP BY clause is used to group rows based on a common criterion; for example, we can group the rows of an employee table by department: the employees working for department 1 form one group, and all the employees working for department 2 form another group. GROUP BY is usually used in conjunction with aggregate functions such as SUM, MAX, MIN, etc.; the aggregate function named in the SELECT statement is run once per group, giving summary information.
HAVING CLAUSE: The HAVING clause filters the groups returned by the GROUP BY clause. Examples:
1. SELECT job, COUNT(*) FROM Emp GROUP BY job HAVING COUNT(*) > 20;
2. SELECT deptno, MAX(basic), MIN(basic) FROM Emp GROUP BY deptno HAVING MAX(basic) > 30000;
To find the average salary of department 1 only:
SELECT Dno, AVG(Salary) FROM Employee WHERE Dno = 1 GROUP BY Dno;
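The LIKE, GROUP BY and HAVING behaviour described above can be demonstrated with sqlite3 (a sketch with made-up rows; the names and departments are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (ename TEXT, dno INTEGER, salary INTEGER)")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("Raj", 1, 20000), ("Rama", 1, 30000),
    ("Ramana", 2, 40000), ("Deepak", 2, 25000),
])

# LIKE: % matches any substring, so 'Ra%' matches Raj, Rama, Ramana
likes = cur.execute("SELECT ename FROM emp WHERE ename LIKE 'Ra%'").fetchall()
print(likes)

# GROUP BY with an aggregate: one AVG per department group
avgs = cur.execute(
    "SELECT dno, AVG(salary) FROM emp GROUP BY dno ORDER BY dno"
).fetchall()
print(avgs)  # [(1, 25000.0), (2, 32500.0)]

# HAVING filters whole groups, after aggregation, not individual rows
big = cur.execute(
    "SELECT dno, MAX(salary) FROM emp GROUP BY dno HAVING MAX(salary) > 30000"
).fetchall()
print(big)  # [(2, 40000)]
```

Note that the HAVING condition is evaluated on each group's aggregate: department 1's maximum (30000) is not strictly greater than 30000, so only department 2 survives.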
Q5. What is Normalization? Discuss various types of Normal Forms. Answer: Normalization is the process of designing database structures to store data, because any application ultimately depends on its data structures: if the data structures are poorly designed, the application starts from a poor foundation and will require a lot more work to become useful and efficient. Normalization is the formal process for deciding which attributes should be grouped together in a relation. It serves as a tool for validating and improving the logical design, so that the logical design avoids unnecessary duplication of data, i.e. it eliminates redundancy and promotes integrity. In the normalization process we analyze and decompose complex relations into smaller, simpler and well-structured relations. Normal forms Based on Primary Keys: A relation schema R is in first normal form if every attribute of R takes only single, atomic values. Equivalently, the intersection of each row and column contains one and only one value. To transform an unnormalized table (a table that contains one or more repeating groups) to first normal form, we identify and remove the repeating groups within the table. E.g. Dept:
D.Name  D.No  D.Location
R&D     5     {England, London, Delhi}
HRD     4     Bangalore
Figure A. Consider the figure: each department can have a number of locations. This relation is not in first normal form because D.Location is not an atomic attribute; the domain of D.Location contains multiple values. The technique to achieve first normal form is to remove the attribute D.Location that violates 1NF and place it into a separate relation Dept_location.
Functional dependency: The concept of functional dependency was introduced by Prof. Codd in 1970 during the emergence of the definitions of the three normal forms. A functional dependency is a constraint between two sets of attributes of a relation in a database. Given a relation R, a set of attributes X in R is said to functionally determine another attribute Y in R (written X -> Y) if and only if each value of X is associated with exactly one value of Y. X is called the determinant set and Y the dependent attribute. Second Normal Form (2NF): Second normal form is based on the concept of full functional dependency. A relation R is in second normal form if every non-prime attribute A in R is fully functionally dependent on the primary key of R. A partial functional dependency is a functional dependency in which one or more non-key attributes are functionally dependent on part of the primary key. It creates redundancy in the relation, which results in anomalies when the table is updated. Third Normal Form (3NF): This is based on the concept of transitive dependency. We should design relational schemas so that there are no transitive dependencies, because they lead to update anomalies. A functional dependency X -> Y in a relation schema R is a transitive dependency if there is a set of attributes Z, neither a key nor a subset (part) of any key of R, such that X -> Z and Z -> Y. For example, the dependency SSN -> Dmgr is transitive through Dnum in the Emp_dept relation, because SSN -> Dnum and Dnum -> Dmgr hold and Dnum is neither a key nor a subset of a key. According to Codd's definition, a relation schema R is in 3NF if it satisfies 2NF and no non-prime attribute is transitively dependent on the primary key. The Emp_dept relation is not in 3NF; we can normalize it by decomposing it into E1 and E2.
Transitivity is the mathematical property which states that if a relation holds between the first value and the second value, and between the second value and the third value, then it also holds between the first and the third value.
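The 3NF decomposition of Emp_dept into E1 and E2 can be sketched in plain Python (the attribute names follow the SSN -> Dnum -> Dmgr example above; the specific values are invented for illustration):

```python
# Emp_dept with a transitive dependency: SSN -> Dnum and Dnum -> Dmgr,
# so Dmgr is stored redundantly for every employee of a department.
emp_dept = [
    {"ssn": 1111, "dnum": 5, "dmgr": 9999},
    {"ssn": 2222, "dnum": 5, "dmgr": 9999},  # 9999 repeated
    {"ssn": 3333, "dnum": 4, "dmgr": 8888},
]

# Decompose into E1(ssn, dnum) and E2(dnum, dmgr) to reach 3NF
e1 = [{"ssn": r["ssn"], "dnum": r["dnum"]} for r in emp_dept]
e2 = {r["dnum"]: r["dmgr"] for r in emp_dept}  # one row per department

print(e2)  # each manager is now stored once: {5: 9999, 4: 8888}

# A natural join of E1 and E2 on dnum reconstructs the original relation,
# so the decomposition is lossless.
rejoined = [{**r, "dmgr": e2[r["dnum"]]} for r in e1]
assert rejoined == emp_dept
```

After decomposition, changing a department's manager touches a single E2 row instead of every employee row, which is exactly the update anomaly that 3NF removes.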
Q6. What do you mean by Shared Lock & Exclusive lock? Describe briefly the two-phase locking protocol. Answer: Shared Locks: A shared lock is used for read-only operations, i.e. for operations that do not change or update the data, e.g. a SELECT statement. Shared locks allow concurrent transactions to read (SELECT) a data item; no other transaction can modify the data while shared locks exist. Shared locks are released as soon as the data has been read. Exclusive Locks: Exclusive locks are used for data-modification operations such as UPDATE, DELETE and INSERT. An exclusive lock ensures that multiple updates cannot be made to the same resource simultaneously: no other transaction can read or modify data locked by an exclusive lock. Exclusive locks are held until the transaction commits or rolls back, since they are used for write operations. Two-phase locking (2PL) protocol: Under 2PL a transaction acquires and releases its locks in two phases. In the growing phase it may acquire locks but not release any; in the shrinking phase it may release locks but not acquire new ones. Following 2PL guarantees that the resulting schedules are serializable.
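The shared/exclusive compatibility rule described above can be sketched as a toy lock manager for a single data item (a minimal illustration using Python's threading primitives, not a real DBMS lock table, and it does not itself enforce the two 2PL phases):

```python
import threading

class LockManager:
    """Toy lock for one item: many readers (shared) or one writer (exclusive)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0      # current holders of the shared lock
        self._writer = False   # whether the exclusive lock is held

    def acquire_shared(self):
        with self._cond:
            while self._writer:           # readers wait while a writer holds the item
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:  # a writer needs sole access
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lm = LockManager()
lm.acquire_shared()
lm.acquire_shared()     # a second concurrent reader is allowed
lm.release_shared()
lm.release_shared()
lm.acquire_exclusive()  # succeeds only once no readers remain
lm.release_exclusive()
print("ok")
```

The two `while` loops encode the compatibility matrix: shared locks coexist with other shared locks but block on an exclusive lock, while an exclusive lock waits for both readers and any other writer.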
Q.2 Discuss the OSI Reference Model. Answer: The OSI Reference Model The OSI model is based on a proposal developed by the International Organization for Standardization (ISO) as a first step towards international standardization of the protocols used in the various layers. The model is called the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems, that is, systems that follow the standard and are open for communication with other systems, irrespective of manufacturer. Its main objectives were to: allow manufacturers of different systems to interconnect equipment through standard interfaces; and allow software and hardware to integrate well and be portable across different systems.
The Physical Layer The physical layer coordinates the functions required to carry a bit stream (0s and 1s) over a physical medium. It defines the electrical and mechanical specifications of the cables, connectors and signaling options that physically link two nodes on a network.
The Application Layer The application layer supports functions that control and supervise OSI application processes, such as starting/maintaining/stopping applications, allocating/de-allocating OSI resources, accounting, and checkpointing and recovery. It also supports remote job execution, file transfer, message transfer and virtual terminals.
Q.3 Describe different types of data transmission modes. Answer: Data Transmission Modes The transmission of binary data across a link can be accomplished in either parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick. In serial mode, 1 bit is sent with each clock tick. While there is only one way to send parallel data, there are three subclasses of serial transmission: asynchronous, synchronous, and isochronous.
Data transmission modes: Serial and Parallel Serial Transmission In serial transmission one bit follows another, so we need only one communication channel, rather than n, to transmit data between two communicating devices. The advantage of serial over parallel transmission is that, with only one communication channel, serial transmission reduces the cost of transmission over parallel by roughly a factor of n. Since communication within devices is parallel, conversion devices are required at the interface between the sender and the line (parallel-to-serial) and between the line and the receiver (serial-to-parallel).
Q.4 Define Switching. What is the difference between circuit switching and packet switching? Answer: Switching A network is a set of connected devices. Whenever we have multiple devices, we have the problem of how to connect them to make one-to-one communication possible. One of the better solutions is switching. A switched network consists of a series of interlinked nodes, called switches. Switches are devices capable of creating temporary connections between two or more devices linked to the switch. In a switched network, some of these nodes are connected to the end systems (computers or telephones); others are used only for routing. Switched networks are divided into categories, as shown in the figure.
Q.5 Classify guided (wired) media. Compare fiber optics and copper wire. Answer: Guided Transmission Media Guided media, which are those that provide a conduit from one device to another, include twisted-pair cable, coaxial cable, and fiber-optic cable. A signal traveling along any of these media is directed and contained by the physical limits of the medium. Twisted-pair and coaxial cable use metallic (copper) conductors that accept and transport signals in the form of electric current. Optical fiber is a cable that accepts and transports signals in the form of light. Comparison of fiber optics and copper wire Fiber has many advantages over copper wire as a transmission medium: It can handle much higher bandwidths than copper. Due to the low attenuation, repeaters are needed only about every 30 km on long lines, versus about every 5 km for copper. Fiber is not affected by power surges, electromagnetic interference, or power failures; nor is it affected by corrosive chemicals in the air, making it ideal for harsh factory environments. Fiber is lighter than copper: one thousand twisted-pair copper cables 1 km long weigh 8000 kg, while two fibers have more capacity and weigh only 100 kg, which greatly reduces the need for expensive mechanical support systems that must be maintained. Fibers do not leak light and are quite difficult to tap, which gives them excellent security against potential wiretappers. When new routes are designed, fiber is the first choice because of its lower installation cost.
Q.6 What are the different types of satellites? Answer: Classification of Satellites Four different types of satellite orbits can be identified, depending on the shape and diameter of the orbit:
GEO (Geostationary Orbit)
LEO (Low Earth Orbit)
MEO (Medium Earth Orbit) or ICO (Intermediate Circular Orbit)
HEO (Highly Elliptical Orbit): elliptical orbits
Geostationary orbit Altitude: ca. 36,000 km above the earth's surface. Coverage: Ideally suited for continuous, regional coverage using a single satellite; can also be used equally effectively for global coverage using a minimum of three satellites. Visibility: Mobile-to-satellite visibility decreases with increased latitude of the user; poor visibility in built-up, urban regions.
Low Earth orbit Altitude: ca. 500-1500 km. Coverage: Multi-satellite constellations of upwards of 30-50 satellites are required for global, continuous coverage. Single satellites can be used in store-and-forward mode for localized coverage, but only appear for short periods of time. Visibility: The use of satellite diversity, by which more than one satellite is visible at any given time, can be used to optimize the link. This can be achieved by either selecting the optimum link or combining the reception of two or more links. The higher the guaranteed minimum elevation angle to the user, the more satellites are needed in the constellation.
b.) There are two dimensions of reliability: stability and equivalence (or non-variability). Stability refers to consistency of results with repeated measurements of the same object, as in the weighing-machine example. Non-variability refers to consistency at a given point of time among different investigators and samples of items. The problem of reliability is more likely to arise with measurements in the social sciences than with measurements in the physical sciences, due to factors such as poor memory or recall of respondents, lack of clear instructions given to respondents, and irrelevant contents of the measuring instrument. Reliability can be improved in three ways: 1) By reducing the external sources of variation. This in turn can be achieved by standardizing the conditions under which measurement is carried out, by employing trained investigators, and by providing standard instructions. 2) By making the measuring instrument more consistent internally, through an analysis of the different items. 3) By adding more items to the measuring instrument, in order to increase the probability of more accurate measurement. The desired level of reliability depends on the research objectives, as well as the homogeneity of the population under study. The more precise the estimates required, the higher the desired level of reliability. In the case of a homogeneous population, a lower level of reliability may be sufficient, since there is not much variation in the data. Reliability and validity are closely interlinked. A measuring instrument that is valid is always reliable, but the reverse is not true: an instrument that is reliable is not always valid. An instrument that is not valid may or may not be reliable, and an instrument that is not reliable is never valid.
Q2. a) What are the sources from which one may be able to identify research problems? b) Why is a literature survey important in research? Answer: a.) The selection of a problem is the first step in research. The term 'problem' means a question or issue to be examined. The selection of a problem for research is not an easy task; it is itself a problem. It is least amenable to formal methodological treatment. Vision, an imaginative insight, plays an important role in this process. One with a critical, curious and imaginative mind, who is sensitive to practical problems, can readily identify problems for study.
The sources from which one may be able to identify research problems or develop problems awareness are:
Review of literature
Academic experience
Daily experience
Exposure to field situations
Consultations
Brain storming
Research intuition

b.)
Frequently, an exploratory study is concerned with an area of subject matter in which explicit hypotheses have not yet been formulated. The researcher's task then is to review the available material with an eye on the possibilities of developing hypotheses from it. In some areas of the subject matter, hypotheses may have been stated by previous research workers. The researcher has to take stock of these various hypotheses with a view to evaluating their usefulness for further research and to consider whether they suggest any new hypotheses. Sociological journals, economic reviews, the bulletin of abstracts of current social sciences research, directories of doctoral dissertations accepted by universities, etc. afford a rich store of valuable clues. In addition to these general sources, some governmental agencies and voluntary organizations publish listings or summaries of research in their special fields of service. Professional organizations, research groups and voluntary organizations are a constant source of information about unpublished works in their special fields.
Q3. a) What are the characteristics of a good research design? b) What are the components of a research design? Answer: a.) Characteristics of a Good Research Design 1. It is a series of guide posts to keep one going in the right direction. 2. It reduces wastage of time and cost. 3. It encourages co-ordination and effective organization. 4. It is a tentative plan which undergoes modifications as circumstances demand: as the study progresses, new aspects, new conditions and new relationships come to light, and insight into the study deepens. 5. It has to be geared to the availability of data and the cooperation of the informants. 6. It also has to be kept within manageable limits.
b.)
Research hypothesis: When a prediction or a hypothesized relationship is tested by adopting scientific methods, it is known as a research hypothesis. The research hypothesis is a predictive statement which relates a dependent variable and an independent variable. Generally, a research hypothesis must consist of at least one dependent variable and one independent variable. Relationships that are assumed but not tested, i.e. predictive statements that are not to be objectively verified, are not classified as research hypotheses. Experimental and control groups: When a group is exposed to the usual conditions in experimental hypothesis-testing research, it is known as a control group. On the other hand, when a group is exposed to certain new or special conditions, it is known as an experimental group. In the aforementioned example, Group A can be called a control group and Group B an experimental one. If both groups A and B are exposed to some special feature, then both groups may be called experimental groups. A research design may include only an experimental group, or both experimental and control groups together. Treatments: Treatments refer to the different conditions to which the experimental and control groups are subjected. In the example considered, the two treatments are the parents with regular earnings and those with no regular earnings. Likewise, if a research study attempts to examine through
Experiment: An experiment refers to the process of verifying the truth of a statistical hypothesis relating to a given research problem. For instance, an experiment may be conducted to examine the yield of a certain new variety of rice crop. Experiments may be categorized into two types, namely absolute experiments and comparative experiments. If a researcher wishes to determine the impact of a chemical fertilizer on the yield of a particular variety of rice crop, it is known as an absolute experiment. If the researcher wishes to determine the impact of a chemical fertilizer as compared with the impact of a bio-fertilizer, the experiment is known as a comparative experiment. Experimental unit: Experimental units refer to the predetermined plots, characteristics or blocks to which the different treatments are applied. It is worth mentioning here that such experimental units must be selected with great caution.
Q4. a) Distinguish between double sampling and multiphase sampling. b) What is replicated or interpenetrating sampling? a.) Double Sampling and Multiphase Sampling Double sampling refers to the sub-selection of the final sample from a pre-selected larger sample that provided information for improving the final selection. When the procedure is extended to more than two phases of selection, it is called multiphase sampling. This is also known as sequential sampling, as sub-sampling is done from a main sample in phases. Double sampling or multiphase sampling is a compromise solution for the dilemma posed by undesirable extremes: the statistics based on the sample of n can be improved by using ancillary information from a wider base, but this is too costly to obtain for the entire population of N elements. Instead, the information is obtained from a larger preliminary sample nL, which includes the final sample n. Replicated or Interpenetrating Sampling It involves selection of a certain number of sub-samples, rather than one full sample, from a population. All the sub-samples should be drawn using the same sampling technique, and each is a self-contained and adequate sample of the population. Replicated sampling can be used with any basic sampling technique: simple or stratified, single-stage or multi-stage, single-phase or multiphase sampling. It provides a simple means of calculating the sampling error, and it is practical. The replicated samples can also throw light on variable non-sampling errors. Its disadvantage is that it limits the amount of stratification that can be employed.
b.) When a researcher wants to use secondary data for his research, he should evaluate them before deciding to use them. 1. Data Pertinence: The first consideration in evaluation is to examine the pertinence of the available secondary data to the research problem under study. The following questions should be considered: What are the definitions and classifications employed? Are they consistent? What are the measurements of variables used? To what degree do they conform to the requirements of our research? What is the coverage of the secondary data in terms of topic and time? Does this coverage fit the needs of our research? On the basis of the above considerations, the pertinence of the secondary data to the research on hand should be determined. A researcher who is imaginative and flexible may be able to redefine his research problem so as to make use of otherwise unusable available data. 2. Data Quality: If the researcher is convinced of the pertinence of the available secondary data to his needs, the next step is to examine the quality of the data. The quality of data refers to their accuracy, reliability and completeness. The assurance and reliability of the available secondary data depend on the organization which collected them and the purpose for which they were collected. What is the authority and prestige of the organization? Is it well recognized? Is it noted for reliability? Is it capable of collecting reliable data? Does it use trained and well-qualified investigators? The answers to these questions determine the degree of confidence we can have in the data and their accuracy. It is important to go to the original source of the secondary data rather than to use an intermediate source which has quoted from the original. Only then can the researcher review the cautionary and other comments that were made in the original source.
Q6.
What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.
Answer: Observation has certain advantages: 1. The main virtue of observation is its directness: it makes it possible to study behaviour as it occurs. The researcher need not ask people about their behaviour and interactions; he can simply watch what they do and say. 2. Data collected by observation may describe the observed phenomena as they occur in their natural settings. Other methods introduce elements of artificiality into the researched situation; for instance, in an interview the respondent may not behave in a natural way. There is no such artificiality in observational studies, especially when the observed persons are not aware of being observed. 3. Observation is more suitable for studying subjects who are unable to articulate meaningfully, e.g. studies of children, tribals, animals, birds, etc. 4. Observation improves the opportunities for analyzing the contextual background of behaviour. Furthermore, verbal reports can be validated and compared with behaviour through observation. The validity of what men of position and authority say can be verified by observing what they actually do. 5. Observation makes it possible to capture the whole event as it occurs. For example, only observation can provide an insight into all the aspects of the process of negotiation between union and management representatives. 6. Observation is less demanding of the subjects and has less biasing effect on their conduct than questioning. 7. It is easier to conduct disguised observation studies than disguised questioning. 8. Mechanical devices may be used for recording data, in order to secure more accurate data and to make continuous observations over longer periods. Observation cannot be used indiscriminately for all purposes. It has its own limitations: 1. Observation is of no use for studying past events or activities. One has to depend upon documents or the narrations of people for studying such things. 2. Observation is not suitable for studying attitudes.
However, an observation of related behaviour affords a good clue to attitudes; e.g. an observation of the seating pattern of high-caste and high-class persons in a general meeting in a village may be useful for forming an index of attitude.
Interviewing is not free from limitations. Its greatest drawback is that it is costly, both in money and in time.
Second, the interview results are often adversely affected by the interviewer's mode of asking questions and interacting, by incorrect recording, and also by the respondent's faulty perception, faulty memory, inability to articulate, etc. Third, certain types of personal and financial information may be refused in face-to-face interviews. Such information might be supplied more willingly on mail questionnaires, especially if they are to be unsigned. Fourth, interviewing poses the problem of recording the information obtained from the respondents; no foolproof system is available. Note-taking is invariably distracting to both the respondent and the interviewer and affects the thread of the conversation. Last, interviewing calls for highly skilled interviewers. The availability of such persons is limited, and the training of interviewers is often a long and costly process.
Master of Business Administration - MBA Semester III MB0051 Legal Aspects of Business Assignment Set- 1
Q1. What is the difference between fraud and misrepresentation? What do you understand by mistake?
Answer: Fraud is generally defined in the law as an intentional misrepresentation of a material existing fact made by one person to another with knowledge of its falsity, for the purpose of inducing the other person to act, and upon which the other person relies with resulting injury or damage. Fraud may also be committed through an omission or purposeful failure to state material facts, where the nondisclosure makes other statements misleading. Misrepresentation means a false statement of fact made by one party to another party which has the effect of inducing that party into the contract. For example, under certain circumstances, false statements or promises made by a seller of goods regarding the quality or nature of the product may constitute misrepresentation. A finding of misrepresentation allows for a remedy of rescission and, sometimes, damages, depending on the type of misrepresentation. Differences between fraud and misrepresentation: 1. In misrepresentation the person making the false statement believes it to be true. In fraud the false statement is made by a person who knows that it is false, or who does not care whether it is true or false. 2. There is no intention to deceive the other party when there is misrepresentation of fact. The very purpose of fraud is to deceive the other party to the contract. 3. Misrepresentation renders the contract voidable at the option of the party whose consent was obtained by misrepresentation. In the case of fraud the contract is likewise voidable, and fraud also gives rise to an independent action in tort for damages. 4. Misrepresentation is not an offence under the Indian Penal Code and hence not punishable. Fraud, in certain cases, is a punishable offence under the Indian Penal Code. 5. Generally, silence is not fraud, except where there is a duty to speak or the relation between the parties is fiduciary. Under no circumstances can silence be considered misrepresentation. 6.
The party complaining of misrepresentation cannot avoid the contract if he had the means to discover the truth with ordinary diligence. But in the case of fraud, the party making the false statement cannot argue that the other party had the means to discover the truth with ordinary diligence. Mistake: A mistake is an erroneous belief, at the time of contracting, that certain facts are true. It may be used as grounds to invalidate the agreement. Common law has identified two different types of mistake in contract: "unilateral mistake" and "mutual mistake", sometimes called "common mistake".
Answer: Many states utilize a mix of statutory and common law to provide remedies for breach of contract. Depending on the contract and the circumstances of the breach, you may have several basic choices of remedies. There are two general categories of relief for breach of contract: damages and performance. Damages involve seeking monetary compensation for a breach of contract. Performance involves forcing the other side to do what they originally promised in the contract agreement. An attorney who specializes in contract law can help you decide which direction is best for your breach-of-contract dispute. Monetary Damages for Breach of Contract Before you file a breach-of-contract lawsuit, you should know which type of remedy you are seeking. Many people simply want monetary compensation for the grief caused by the other party's breach of contract. Types of damages for breach of contract include:
Compensatory Damages - money to reimburse you for costs and to compensate for your loss. Consequential and Incidental Damages - money for losses caused by the breach that were foreseeable (foreseeable damages arise when each side reasonably knew, at the time of the contract, that there would be potential losses if there was a breach). Attorney Fees and Costs - only recoverable if expressly provided for in the contract. Liquidated Damages - damages specified in the contract that would be payable if there is a breach. Punitive Damages - money awarded to punish a person who acted in an offensive and egregious manner, in an effort to deter that person and others from continuing to act in this way. You generally cannot collect punitive damages in contract cases.
The controlling law, the conduct of the violating party, and the extent of harm you suffered can influence which of these damages for breach of contract will be awarded in your situation. The more egregious and intentional the behavior, the greater the chance you have of being awarded larger, punitive damages. If the breach was unintended and arose from negligent behavior, you will probably receive compensatory or consequential damages. Requesting Performance of the Contract Sometimes money just cannot fix the problem. Instead of asking for damages, you can also seek actual performance or modification of performance of the original contract. Performance remedies for breach of contract include:
Specific Performance - a court order requiring performance exactly as specified in the contract; this remedy is rare, except in real estate transactions and sales of other unique property, as the courts do not want to get involved in monitoring performance. Rescission - the contract is cancelled, both sides are excused from further performance, and any money advanced is returned. Reformation - the terms of the contract are changed to reflect what the parties actually intended.
Q3.
Answer: The difference between a guarantee and an indemnity. Introduction. Guarantees and indemnities are both long-established forms of what the law terms suretyship. There are important legal distinctions between them. What is what? A guarantee is a promise to someone that a third party will meet its obligations to them. For example: "if they don't pay you, I will." An indemnity is a promise to be responsible for another's loss, and to compensate them for that loss on an agreed basis. For example: "if it costs you more than £250 to fix that, I will reimburse you the difference."
Section 4 of the venerable Statute of Frauds 1677 requires guarantees to be in writing if they are to be enforceable. There is no such requirement in the case of an indemnity, although of course written agreement is always best as a matter of practice and proof.
Further, a guarantee provides for a liability co-extensive with that of the principal. In other words, the guarantor cannot be liable for anything more than the client. The document will be construed as a guarantee if, on its true construction, the obligations of the surety are to stand behind the principal and only come to the fore once an obligation has been breached as between the principal and the financier. The obligation is a secondary one, reflexive in character. An indemnity, however, provides for concurrent liability with the principal throughout, and there is no need to look first to the principal. In essence, it is an agreement that the surety will hold the financier harmless against all losses arising from the contract between the principal and the financier. It is not always obvious whether a clause or agreement is a guarantee or an indemnity. And an example... Some of the differences were highlighted by the Court of Appeal in the 2007 case Pitts and Ors v Jones. The appellants bringing the claim were minority shareholders in a company of which the other party was managing director and majority shareholder. The majority shareholder had negotiated the sale of the company to a purchaser who had agreed to buy the shares of the minority at the same price. The appellants were summoned to the sale completion meeting and were told that, as part of the terms agreed, their shares would be purchased after a delay of six months. On being made aware of the risk of the purchaser becoming insolvent within this period, they declined to sign the documents, but relented when the majority shareholder undertook verbally to pay if the purchaser failed to do so. The purchaser did subsequently become insolvent and could not pay for the minority shareholders' shares, so they sued the majority shareholder on his undertaking to pay them.
The Court of Appeal found that, while all the other necessary elements of a legally binding contract were present (offer, acceptance, consideration and the intention to create legal relations), the undertaking given to the minority shareholders was unenforceable since it was a guarantee and was not in writing. The minority shareholders lost the value of their shares and were left with no recourse.
Whether a document is a guarantee or an indemnity depends on its true construction. Relevant factors include: the words used (the fact that one label or the other is used is not determinative, but it may demonstrate what the parties were attempting to achieve); whether the document purports to make the surety liable for a greater sum than could be demanded under the principal contract, in which case the inference is that he is undertaking an obligation to indemnify; and whether a demand upon the principal debtor is defined as a condition precedent to proceeding against the surety, in which case the document may well best be read as a guarantee. Finally, an indemnity comprises only two parties, the indemnifier and the indemnity holder, whereas a guarantee is a contract between three parties, namely the surety, the principal debtor and the creditor.
Q4. What is the distinction between a cheque and a bill of exchange? Answer:
Cheque:
It is drawn on a banker.
It has three parties - the drawer, the drawee, and the payee.
It is seldom drawn in sets.
It does not require acceptance by the drawee.
Days of grace are not allowed to a banker.
No stamp duty is payable on cheques.
It is usually drawn on a printed form.

Bill of Exchange:
It may be drawn on any party or individual.
It also has three parties - the drawer, the drawee, and the payee.
Foreign bills are drawn in sets.
It must be accepted by the drawee before he can be made liable to pay the bill.
Three days of grace are always allowed to the drawee.
Stamp duty has to be paid on a bill of exchange.
It may be drawn on any paper and need not necessarily be printed.
Q5.
Answer: Companies Limited by Guarantee and by Shares. A company limited by guarantee is a lesser-known type of business entity which is generally formed for non-profit purposes and has members instead of shareholders. There are both similarities and differences between the two groups. Members and shareholders both enjoy limited liability; however, where a share-based company is liquidated, shareholders may be required to pay any amounts unpaid on the shares they hold. For example, if an individual shareholder holds 100 shares of £1 each, all of which remain unpaid at the time of dissolution, then they would be required to pay £100 to the company. Most companies limited by guarantee have a constitution which states that each member is only required to pay £1 should it be dissolved. Assuming that an average shareholder holds more than one share in a company, members in a business limited by guarantee do appear to have less risk attached to their positions. Profit Making Status. Perhaps the most fundamental difference between the two types of limited companies is that those with shares generally exist for profit-making purposes. Companies limited by guarantee, however, are non-profit-making organisations and are usually
registered to provide a specified service to the public or a particular segment of the population. The memorandum and articles of association of each would also differ, as companies limited by shares usually have very general objects clauses which allow them to pursue any legal trade or activity. Objects of companies limited by guarantee. Companies limited by guarantee, however, often have very specific objects and detailed rules pertaining to which areas they can engage in. Charities, which are often of this type, might have restrictions imposed on them by their major donors, who wish to ensure that their donations will be spent according to their wishes and not in a manner of which they would disapprove. By having a defined set of objects, companies limited by guarantee which are seeking to raise funds might find it easier to do so, because they would be able to demonstrate that sufficient restrictions exist to protect the donors' intentions. Removing the Word Limited. Companies limited by guarantee can have the word "limited" removed from their name under section 30 of the Companies Act. Company directors, secretary and declarant. Both types of companies are bound by the same requirements to have at least one director, a secretary and a declarant at the time of incorporation and throughout any period of their existence. When forming a company limited by guarantee, members are listed in the same manner in which shareholders would be, even though no allotments are made to them.
Q6.
Answer: There is an old and accurate saying that a coin has two sides. Like a coin, almost every aspect of life has two sides. The most common example is the advent of technology and the crime associated with it. With the passage of time and the advance of technology, computers have become an integral part of working society. Computers have brought greater work and time efficiency to society as a whole. But there comes the twist: along with all the benefits that computers and technology have brought, there also comes the rising and alarming threat of cyber crime. Cyber crime has recently received a great deal of attention from the general public, thanks to the seemingly impossible crimes committed by hackers. The danger that cyber crime poses to computers and to the information they hold has been acknowledged by almost all countries. Serious concern and alarm have been raised against the growing threat of cyber crime and its potential threat to the information held on computers.
The other reason cyber crime poses such a serious threat is that it is usually very hard to track the hackers: they operate together but from distant and different places, sometimes even different countries, making it almost impossible for law enforcers to find them. Thus national bodies and governments should work together to build a legal framework and structure through which no hacker can slip after committing cyber crime.
MBA IS- 3rd SEM MI0036 BUSINESS INTELLIGENCE TOOLS Assignment Set- 1
Q1. Define the term business intelligence tools? Briefly explain how the data from one end gets transformed into information at the other end?
Answer: Business intelligence (BI) is a wide category of applications and technologies which gather, store, analyse, and provide access to data. It helps enterprise users to make better business decisions. BI applications involve the activities of decision support systems, query and reporting, online analytical processing (OLAP), statistical analysis, forecasting, and data mining. Business intelligence applications may be: mission-critical and important to an enterprise's functioning, or occasional, to meet a unique requirement; enterprise-wide, or restricted to one division, subdivision, or project; centrally initiated, or driven by user demand.
Business Intelligence is a term used to refer a number of activities a company may undertake to gather information about their market or their competitors. Some of the activities are: Competition analysis. Market analysis. Industry analysis.
Business Intelligence (BI) provides in depth knowledge about performance indicators, such as: Customers Competitors Business counterparts Economic environment
Internal operations. The improvement in business pace allows a company to grow constantly in diverse market conditions. According to Microsoft's vision of BI using SQL Server 2005, BI is a method of storing and presenting key enterprise data so that anyone in your company can quickly and easily ask questions of accurate and timely data. Effective BI allows you to use information to understand why your business got the particular results it did, to decide on courses of action based on past facts, and to correctly project potential results. BI data is displayed in a way that is suitable for each type of user: specialists are able to drill into detailed data, executives concentrate on timely summaries, and middle managers look into data presented in enough detail to help them make good business decisions. Example: Microsoft's BI uses cubes, rather than tables, to store information, and presents information via reports. The information can be made accessible to end users in a range of formats: Windows applications, Web applications, and Microsoft BI client tools such as Excel or SQL Reporting Services. It is important to explain the term "insight". Insights are the final goals for authors, vendors and IT consultants when commencing BI projects. They are sometimes called moments of clarity that drive the enterprise forward. When a business overview is presented by BI, it is possible to see a previously unseen fact or aspect of the organisation. The main challenge of business intelligence is to gather and serve planned information regarding all applicable factors which make the business profitable, to allow you to access that knowledge easily and efficiently, and as a result to maximise the success of an organisation. Example: The Accountant's Armoury: Work Inc is a group finance company, and Gavin Leverett has been its planning director since January 2007.
One of his first priorities was to get a strong set of reporting and inquiry tools in place as quickly as possible. According to him, consolidation and reporting tools are key weapons in the accountant's armoury. A priority for Leverett was to put business intelligence tools in place that would automate the work involved in collecting the monthly profit and loss data from each business unit, consolidating it, and comparing it against the corporate budget. During 2008, the company worked with its systems integration partner Datel to give finance staff the ability to report against data held in the company's customer relationship management system, also from Sage, to get visibility into Work Inc's sales pipeline. Tile manufacturer CP Group used business intelligence tools to analyse profitability right down to the level of the individual product. This involved the accumulation of
BI provides an integrated business intelligence and data warehousing solution that meets an organisation's business needs. These solutions give an organisation the ability to plan performance against business strategy, goals and objectives. By connecting performance to corporate goals, leadership can use the day-to-day data generated throughout the organisation to monitor key indicators and take decisions that make a difference. With a performance management solution that uses accessible information to assess performance, an organisation can quickly determine where it stands. The performance management solution generally includes the definition of metrics and the way to track them, and the development of dashboards to communicate the performance results effectively. Enterprise reporting provides the organisation with the information needed to check and control behaviour and the associated performance, and data mining is used to identify trends, to understand customers, and to see the interrelationships of organisational performance. It uses web-based tools and processes best suited to meet the needs of each organisation based on its size and needs.
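The question asks how data at one end becomes information at the other. A minimal sketch of that extract-transform-load (ETL) pipeline is below; all names, records, and budget figures are illustrative, not taken from any real system such as Work Inc's.

```python
# Minimal ETL sketch: raw operational records at one end become
# consolidated management information at the other.
from collections import defaultdict

def extract():
    # In practice this would pull rows from operational systems.
    return [
        {"unit": "North", "month": "2008-01", "revenue": 1200, "cost": 900},
        {"unit": "North", "month": "2008-02", "revenue": 1500, "cost": 950},
        {"unit": "South", "month": "2008-01", "revenue": 800,  "cost": 700},
    ]

def transform(rows):
    # Derive profit and consolidate per business unit.
    totals = defaultdict(lambda: {"revenue": 0, "cost": 0, "profit": 0})
    for r in rows:
        t = totals[r["unit"]]
        t["revenue"] += r["revenue"]
        t["cost"] += r["cost"]
        t["profit"] += r["revenue"] - r["cost"]
    return dict(totals)

def load(summary, budget):
    # Compare consolidated profit against a (hypothetical) corporate budget.
    return {unit: {"profit": s["profit"], "vs_budget": s["profit"] - budget[unit]}
            for unit, s in summary.items()}

report = load(transform(extract()), budget={"North": 700, "South": 150})
```

The point of the sketch is the shape of the pipeline, not the arithmetic: raw rows go in at one end, and a decision-ready comparison against budget comes out at the other.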
Q2. What do you mean by a data warehouse? What are the major concepts and terminology used in the study of data warehouses? Answer: A data warehouse is part of the data warehousing system. It provides a consolidated, accessible and flexible collection of data for end-user analysis and reporting. Inmon defines a data warehouse as a subject-oriented, integrated, non-volatile, time-variant collection of data that supports the decision-making process of an organisation. He explains the terms in this definition as follows:
Subject-Oriented: A data warehouse is subject-oriented, as the data gives information about a particular subject instead of about a company's ongoing operations. Integrated: A data warehouse is integrated, as the data is gathered from a variety of sources and merged into a coherent whole. Time Variant: A data warehouse is time-variant, as all the data in it is identified with a particular time period.
Non-Volatile: Data in a data warehouse is stable. More data is added, but data is never removed. Thus, management can gain a consistent picture of the business. Hence the data warehouse is non-volatile (long-term storage). Data warehousing is different from a data warehouse, and it is necessary to distinguish between the two. Data warehousing comprises a complete architecture, whereas the data warehouse is the data itself, stored in the form of fact tables, aggregated fact tables, and lookup tables. Data warehousing provides tools for end users to access the collection of stored data, whereas a data warehouse is a collection of data that supports the decision-making process of management.
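The non-volatile and time-variant properties can be sketched in a few lines of Python. The class and field names below are illustrative only: the table is append-only (rows are loaded, never updated or deleted), and every row carries the period it describes, so the warehouse can answer questions as of any point in time.

```python
# Sketch of a non-volatile, time-variant warehouse table (illustrative).
class WarehouseTable:
    def __init__(self):
        self._rows = []

    def load(self, period, facts):
        # Non-volatile: new periods are appended; history is never
        # updated or removed.
        self._rows.append({"period": period, **facts})

    def as_of(self, period):
        # Time-variant: every query is answered for a stated period.
        return [r for r in self._rows if r["period"] <= period]

sales = WarehouseTable()
sales.load("2008-Q1", {"units_sold": 400})
sales.load("2008-Q2", {"units_sold": 550})
```

Because nothing is ever overwritten, a query `sales.as_of("2008-Q1")` returns the same answer today as it did when Q1 closed, which is exactly the "consistent picture of the business" the text describes.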
Q3. What are the data modelling techniques used in a data warehousing environment?
Answer: OLAP Data Modelling. In the OLAP data model, data is viewed as a data cube that consists of measures and dimensions, as mentioned earlier. Each dimension can be arranged as a hierarchy with levels of detail. For example, the time dimension can have levels such as days, months and years. Each level in the hierarchy will have specific values at a particular given instance. While viewing the data, a user can move up or down between levels to see less or more detailed information. Aggregation and Storage Models. The fundamentals of multidimensional navigation in OLAP are cubes, dimensions, hierarchies, and measures. When data is presented in this manner, users can navigate a complex set of data naturally. The goal of OLAP is to provide consistent response times for all operations the users request. Summary information is worked out beforehand, as the data is always collected at the detail level only. These pre-computed values are the basis of the performance gains in OLAP.
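The pre-computation idea can be shown with a tiny sketch: detail-level facts are rolled up in advance along the time hierarchy (day, month, year), so a summary query becomes a lookup rather than a scan. The dates and measures below are invented for illustration.

```python
# Sketch: pre-computing OLAP aggregates along a day -> month -> year
# hierarchy. Detail data is illustrative.
from collections import Counter

detail = {  # measure: units sold, keyed by day ("YYYY-MM-DD")
    "2008-01-05": 10,
    "2008-01-20": 15,
    "2008-02-03": 7,
}

# Compute the rollups once, at load time.
by_month = Counter()
by_year = Counter()
for day, units in detail.items():
    by_month[day[:7]] += units   # "YYYY-MM"
    by_year[day[:4]] += units    # "YYYY"

# Drilling up or down is then a dictionary lookup, not a recomputation,
# which is why response times stay consistent.
```

This is the essence of the performance gain the text mentions: the cost of aggregation is paid once when the cube is built, not on every user query.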
Q4. Discuss the categories into which data is divided before structuring it into a data warehouse.
Answer: A data mart is a subset of an organisation's collection of data that is applicable to a specific set of users. It is generally oriented towards a specific purpose or a main data subject, and may be shared to support business requirements. For example, the sales division's data mart may confine its subject to items sold, number of items sold, and profit and loss. Data marts are represented as a dimensional model, or star schema, with a fact table and dimension tables. The concept of a data mart can be applied to any kind of data; it is basically designed to meet immediate, well-defined requirements. Data marts can be classified as independent or dependent, depending on the source of the data. These can be defined as follows:
An independent data mart originates from data gathered from one or more operational systems or external information providers, or data gathered locally within a company or a particular area. A dependent data mart originates from an enterprise data warehouse.
Aspects of Data Mart. We have been introduced to the data mart and its concepts. Now let us look into the aspects of a data mart, that is, the points to remember while using one.
External Data: While using external data in a data mart, one may come across two issues. 1. If more than one data mart requires the same external data, then the external data should be placed in the data warehouse first and moved from there to each specific data mart. This reduces data redundancy and spares the other data marts from having to acquire the data separately. 2. While collecting external data, the lineage of the data should be saved as well. This includes the source, size and means of acquisition of the external data, the filtering and editing criteria applied to it, and so on.
Reference Data: Reference tables enable users to access data faster. A reference table relates data in the data mart to its expanded version. These tables are stored in addition to the basic data in the data mart. The reference tables are copied over from the data warehouse directly. On rare occasions the data mart itself
Performance Issues: The performance factors differ between the Decision Support System (DSS) and OLAP environments (to be discussed in the next section). The response-time issue is entirely different: in a DSS environment, real-time or online response is not as important as it is in OLTP systems. The response time can be relaxed and can vary from 1 minute to 24 hours. This applies especially to data warehousing, where there is an abundance of data to be dealt with. It is not the same in the data mart environment. In a data mart, the data is specific and the requirements are clear; therefore, certain performance expectations can be set and achieved. For example, performance can be achieved by using star joins. In the multidimensional database (MDDB) environment, the performance issues differ again: better performance can be achieved if the MDDB is not overloaded. In short, good performance in a data mart environment can be achieved by limiting the volume of data, using star joins and indexes, creating aggregated data, and pre-joining tables and arrays of data.
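A star join, mentioned above as a performance technique, can be sketched as a central fact table holding foreign keys and measures, with small dimension (lookup) tables expanding those keys. The tables and values below are illustrative, loosely echoing the tile-manufacturer example from earlier in the document.

```python
# Sketch of a star join: fact table + dimension lookup tables (illustrative).
product_dim = {1: "Floor tile", 2: "Wall tile"}
region_dim = {10: "North", 20: "South"}

fact_sales = [
    {"product_id": 1, "region_id": 10, "profit": 500},
    {"product_id": 2, "region_id": 10, "profit": 300},
    {"product_id": 1, "region_id": 20, "profit": 200},
]

def star_join(facts):
    # Each fact row is expanded by direct key lookup into the small
    # dimension tables -- the cheap, predictable join shape that makes
    # star schemas fast.
    return [
        {"product": product_dim[f["product_id"]],
         "region": region_dim[f["region_id"]],
         "profit": f["profit"]}
        for f in facts
    ]

rows = star_join(fact_sales)
floor_tile_profit = sum(r["profit"] for r in rows if r["product"] == "Floor tile")
```

Because the dimension tables are tiny relative to the fact table, each join is a constant-time lookup, which is why the text recommends star joins for predictable data mart performance.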
Requirements Monitoring for a Data Mart: Monitoring the data mart's behaviour at specific time intervals is necessary, and is indispensable when the data mart is growing significantly. In this context, monitoring means tracking data usage and data content. Where data usage is concerned, the questions to be answered are: What data is being accessed? Which users are active? What is the quantity of data access? What are the usage timings? What is the best way to access the data? Data content tracking answers the following: What are the contents of the data mart? Is there any bad data in the data mart? How much is the data mart growing, and how fast?
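The usage-monitoring questions above can be answered from a simple query log. The sketch below uses invented field names and records: each query against the mart is logged, and counters over the log answer "who is active, what is accessed, and when".

```python
# Sketch of data-usage monitoring for a data mart (illustrative log).
from collections import Counter

query_log = [
    {"user": "alice", "table": "sales", "hour": 9},
    {"user": "bob",   "table": "sales", "hour": 14},
    {"user": "alice", "table": "stock", "hour": 9},
]

active_users = Counter(q["user"] for q in query_log)   # which users are active
tables_hit = Counter(q["table"] for q in query_log)    # what data is accessed
busy_hours = Counter(q["hour"] for q in query_log)     # usage timings
```

Content tracking (size, growth rate, bad data) would be gathered the same way, from periodic snapshots of the mart rather than from the query log.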
Security in the Data Mart: When there is confidential information in the data mart, care must be taken to protect it well. Confidential information includes financial information, medical records and human resources information. Such information can be protected using encryption and decryption, log-on/off security, application security, DBMS security, and firewalls.
Q5. Discuss the purpose of an executive information system in an organisation. Answer: The basic focus of BI projects is often technical rather than business-oriented. The reason for this is that most BI projects are run by IT project managers with minimal business knowledge. These managers do not tend to involve the business communities. Therefore, it is not surprising that most of the projects fail to deliver the expected business benefits. It is vital to note that around 20% of businessmen use BI applications 80% of the time. Therefore, it is important to recognise the main business and technical representatives at the
Business Executives: They are the visionaries who are aware of the organisation's strategies. They have to help make the main project decisions and have to be consulted when deciding the project's direction at different stages. Finance Department: This department is responsible for accounting and can give insight into the organisation's efficiencies and development areas. Marketing Personnel: They have to be involved during all phases of the project because they are key users of the BI applications. Sales and Customer Support Representatives: They have direct customer contact and need to be able to present the customer's view during a BI project. The customers can help identify the final aims of the BI system; acceptance of the product or service strategies matters the most. The main business partners have to give a different view of the customer, and their information should be solicited at the start and on an ongoing basis. IT Personnel: They support the operational systems and give awareness of the accumulation of BI requests from the various groups. In addition to giving technical expertise, the IT staff in the BI project team should analyse and prioritise BI-related requests. Operations Managers and Staff: They make the planned business decisions which link the strategic and the operational data, which makes them important during some key phases of the BI project.
Q6. Discuss the challenges involved in the data integration and coordination process. Answer: Obtaining the required data quickly and accurately to make decisions is the biggest challenge, and the solution to this is business intelligence. There are many challenges that organisations face; ten critical ones should be understood and addressed for BI to succeed. BI projects fail because of the following: failure to identify BI projects as cross-organisational business initiatives, and to recognise that they differ from typical standalone solutions; unengaged business sponsors, or sponsors who have little or no power in the enterprise; unavailable or unwilling business representation; lack of skilled and readily available staff, or sub-optimal staff deployment; no software release concept, that is, no iterative development method; no work breakdown structure, that is, no methodology; no business analysis or standardisation activities; no regard for the impact of dirty data on business gain; no understanding of the requirement for and use of metadata; and too much reliance on disparate methods and tools (the "silver bullet" syndrome).