Naser Shoukat Firfire
Master of Business Administration (MBA) Semester II
MB0047 – Management Information Systems – 4 Credits (Book ID: B1136)
Assignment Set 1 (60 Marks)
Attempt each question. Each question carries 10 marks.

1. What is MIS? Define the characteristics of MIS. What are the basic functions of MIS? Give some disadvantages of MIS.

An organization consists of many functions, both technical and managerial, and many people working in different departments. The Information System of an organization therefore does not belong to any one department or function; it is a combination of management, technology and organization.

MIS characteristics
· It supports transaction handling and record keeping.
· It is also called an integrated database management system, as it supports all the major functional areas.
· It provides operational, tactical and strategic level managers with easy access to timely and, for the most part, structured information.
· It supports the decision-making function, which is a vital role of MIS.
· It is flexible, which is needed to adapt to the changing needs of the organization.
· It promotes security by providing access only to authorized users.
· MIS not only provides statistical and data analysis but also works on the basis of MBO (management by objectives). MIS is successfully used for measuring performance and making necessary changes in organizational plans and procedures. It helps to build relevant and measurable objectives, monitor results, and send alerts.
· Coordination: MIS provides integrated information so that all departments are aware of the problems and requirements of the other departments. This helps in even interaction among the different centers and connects the decision centers of the organization.
· Duplication of data is reduced, since data is stored centrally and the same data can be used by all the related departments.
· MIS eliminates redundant data.
· It helps in maintaining consistency of data.
· It is divided into subsystems. Handling small systems is much easier than handling an entire system at once. This gives easy access to data, accuracy and better information production.
· MIS assembles, processes, stores, retrieves, evaluates and disseminates information.

Functions of MIS
The main functions of MIS are:
· Data processing: gathering, storing, transmitting and processing data and producing output. Turning data into information is a major task.
· Prediction: prediction is based on historical data, applying prior knowledge through modern mathematics, statistics or simulation. The prior knowledge used varies with the application and with the department.
· Planning: planning reports are produced based on enterprise constraints and help each functional department to plan its work reasonably.
· Control: MIS helps in monitoring operations and inspecting plans. It reports the differences between operations and plans with respect to data belonging to the different functional departments, controls the timely execution of plans, and analyzes the reasons for the differences between operations and plans, thereby helping managers to accomplish their decision-making tasks successfully.
· Assistance: MIS stores related problems and frequently used information so they can be applied for relative economic benefit. Through this it can derive instant answers to related problems.
· Database: this is the most important function of MIS. All information needs a storage space that can be accessed without causing any anomalies in the data. An integrated database avoids duplication of data, thereby reducing redundancy and increasing consistency.
· The major function of MIS lies in applying the above functions to support the managers and executives of the organization in decision-making.
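The prediction function above can be sketched with a minimal example: forecasting the next period from historical data using a simple moving average. This is only an illustration of "prediction from historical data using statistics"; the function name and figures are invented.

```python
# A minimal "prediction" sketch: forecast next period's value as the mean
# of the most recent observations. All figures are illustrative.
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_sales = [100, 110, 105, 120, 125, 130]
forecast = moving_average_forecast(monthly_sales)  # (120 + 125 + 130) / 3 = 125.0
```

In practice an MIS would apply richer statistical or simulation models, but the principle is the same: historical data in, a forward estimate out.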
Disadvantages of MIS The following are some of the disadvantages of MIS:
· MIS is highly sensitive: MIS is very helpful in maintaining logs of authorized users' activity, but this needs to be monitored constantly.
· The quality of outputs is governed by the quality of inputs.
· MIS budgeting: there is difficulty in allocating indirect costs and overheads. Capturing actual costs requires an accrual system carrying the true costs of outputs, which is extremely difficult; it has been difficult to establish definite findings.
· MIS is not flexible enough to update itself for changes.
· Changes in the decisions of top-level management decrease its effectiveness.
· Information accountability rests on qualitative factors, and factors like morality, confidence or attitude have no measurable base.
2. Explain knowledge-based systems. Explain DSS and OLAP with examples.

Knowledge-based systems are artificial intelligence tools working in a narrow domain to provide intelligent decisions with justification. Knowledge is acquired and represented using various knowledge representation techniques such as rules, frames and scripts. The basic advantages offered by such systems are documentation of knowledge, intelligent decision support, self-learning, reasoning and explanation.

Knowledge-based systems are systems based on the methods and techniques of Artificial Intelligence. Their core components are:
· the knowledge base
· acquisition mechanisms
· inference mechanisms

While for some authors expert systems, case-based reasoning systems and neural networks are all particular types of knowledge-based systems, others consider that neural networks are different and exclude them from this category. KBS is a frequently used abbreviation for knowledge-based system.
A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance.
DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, or business models to identify and solve problems and make decisions. Typical information that a decision support application might gather and present are:
· inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts),
· comparative sales figures between one period and the next,
· projected revenue figures based on product sales assumptions.
An OLAP (online analytical processing) cube is a data structure that allows fast analysis of data. It can also be defined as the capability of manipulating and analyzing data from multiple perspectives. The arrangement of data into cubes overcomes some limitations of relational databases. OLAP cubes can be thought of as extensions to the two-dimensional array of a spreadsheet. For example, a company might wish to analyze some financial data by product, by time period, by city, by type of revenue and cost, and by comparing actual data with a budget. These additional methods of analyzing the data are known as dimensions.

Functionality
The OLAP cube consists of numeric facts called measures, which are categorized by dimensions. The cube metadata (structure) may be created from a star schema or snowflake schema of tables in a relational database. Measures are derived from the records in the fact table, and dimensions are derived from the dimension tables.

Pivot
A financial analyst might want to view or "pivot" the data in various ways, such as displaying all the cities down the page and all the products across the page. This could be for a specified period, version and type of expenditure. Having seen the data in this particular way, the analyst might then immediately wish to view it in another way. The cube could effectively be re-oriented so that the data displayed now has periods across the page and types of cost down the page. Because this re-orientation involves re-summarizing very large amounts of data, the new view has to be generated efficiently to avoid wasting the analyst's time, i.e. within seconds, rather than the hours a relational database and conventional report-writer might have taken.
Because there can be more than three dimensions in an OLAP system, the term hypercube is sometimes used.
Hierarchy
Each of the elements of a dimension could be summarized using a hierarchy.
The hierarchy is a series of parent-child relationships, typically where a parent member represents the consolidation of the members which are its children. Parent members can be further aggregated as the children of another parent.
For example, May 2005 could be summarized into Second Quarter 2005, which in turn would be summarized into the Year 2005. Similarly, cities could be summarized into regions, countries and then global regions; products could be summarized into larger categories; and cost headings could be grouped into types of expenditure. Conversely, the analyst could start at a highly summarized level, such as the total difference between the actual results and the budget, and drill down into the cube to discover which locations, products and periods had produced this difference.

OLAP operations
The analyst can understand the meaning contained in the databases using multi-dimensional analysis. By aligning the data content with the analyst's mental model, the chances of confusion and erroneous interpretation are reduced. The analyst can navigate through the database and screen for a particular subset of the data, changing the data's orientation and defining analytical calculations.
This user-initiated process of navigating by calling for page displays interactively, through the specification of slices via rotations and drill down/up, is sometimes called "slice and dice". Common operations include slice and dice, drill down, roll up, and pivot.
Slice: a slice is a subset of a multi-dimensional array corresponding to a single value for one or more members of the dimensions not in the subset.
Dice: The dice operation is a slice on more than two dimensions of a data cube (or more than two consecutive slices).
Drill Down/Up: Drilling down or up is a specific analytical technique whereby the user navigates among levels of data ranging from the most summarized (up) to the most detailed (down).
Roll-up: A roll-up involves computing all of the data relationships for one or more dimensions. To do this, a computational relationship or formula might be defined.
Pivot: this operation is also called rotation. It rotates the data in order to provide an alternative presentation of the data: the report or page display takes a different dimensional orientation.
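The slice and roll-up operations above can be sketched on a toy cube held as a plain dictionary. This is only an illustration of the concepts, not a real OLAP engine; the city, product and sales figures are invented.

```python
# A toy OLAP-style cube: keys are (city, product, period), the value is the
# "sales" measure. All names and figures are made up for illustration.
cube = {
    ("Paris",  "Widget", "2005-Q1"): 100,
    ("Paris",  "Gadget", "2005-Q1"): 150,
    ("London", "Widget", "2005-Q1"): 120,
    ("London", "Widget", "2005-Q2"): 130,
}

def slice_cube(cube, period):
    """Slice: fix the period dimension to a single value, keeping the rest."""
    return {(c, p): v for (c, p, t), v in cube.items() if t == period}

def roll_up(cube, dim_index):
    """Roll up: aggregate the measure over one retained dimension."""
    totals = {}
    for key, value in cube.items():
        totals[key[dim_index]] = totals.get(key[dim_index], 0) + value
    return totals

q1_slice = slice_cube(cube, "2005-Q1")   # only Q1 cells remain
sales_by_city = roll_up(cube, 0)         # {"Paris": 250, "London": 250}
```

A pivot would simply present the same cells with a different dimension across the page; a dice would apply `slice_cube`-style filters on more than one dimension at once.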
3. What is value chain analysis, and what is its significance in MIS? What is meant by BPR, and what is its significance? How are data warehousing and data mining useful in terms of MIS?
The value chain, also known as value chain analysis, is a concept from business management that was first described and popularized by Michael Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance.

Firm level
A value chain is a chain of activities for a firm operating in a specific industry. The business unit is the appropriate level for construction of a value chain, not the divisional or corporate level. Products pass through all activities of the chain in order, and at each activity the product gains some value. The chain of activities gives the products more added value than the sum of the independent activities' values. It is important not to confuse the concept of the value chain with the costs occurring throughout the activities. Diamond cutting, as a profession, can be used to illustrate the difference between cost and value: the cutting activity may have a low cost, but it adds much of the value to the end product, since a rough diamond is significantly less valuable than a cut diamond. Typically, the described value chain, the documentation of processes, and the assessment and auditing of adherence to the process routines are at the core of the quality certification of the business, e.g. ISO 9001.

Activities
The value chain categorizes the generic value-adding activities of an organization. The "primary activities" include: inbound logistics, operations (production), outbound logistics, marketing and sales (demand), and services (maintenance). The "support activities" include: administrative infrastructure management, human resource management, technology (R&D), and procurement. The costs and value drivers are identified for each value activity.

Industry level
An industry value chain is a physical representation of the various processes involved in producing goods (and services), starting with raw materials and ending with the delivered product (also known as the supply chain).
It is based on the notion of value-added at the link (read: stage of production) level. The sum total of link-level value-added yields total value. The French Physiocrats' Tableau économique is one of the earliest examples of a value chain. Wassily Leontief's input-output tables, published in the 1950s, provide estimates of the relative importance of each individual link in industry-level value chains for the U.S. economy.

Significance
The value chain framework quickly made its way to the forefront of management thought as a powerful analysis tool for strategic planning. The simpler concept of value streams, a cross-functional process developed over the following decade, had some success in the early 1990s.
The value-chain concept has been extended beyond individual firms. It can apply to whole supply chains and distribution networks. The delivery of a mix of products and services to the end customer will mobilize different economic factors, each managing its own value chain. The industry-wide synchronized interactions of those local value chains create an extended value chain, sometimes global in extent. Porter terms this larger interconnected system of value chains the "value system". A value system includes the value chains of a firm's suppliers (and their suppliers, all the way back), the firm itself, the firm's distribution channels, and the firm's buyers (and presumably extends to the buyers of their products, and so on). Capturing the value generated along the chain is the new approach taken by many management strategists. For example, a manufacturer might require its parts suppliers to be located near its assembly plant to minimize the cost of transportation. By exploiting the upstream and downstream information flowing along the value chain, firms may try to bypass intermediaries, creating new business models, or in other ways create improvements in the value system. Value chain analysis has also been successfully used in large petrochemical plant maintenance organizations to show how work selection, work planning, work scheduling and finally work execution can, when considered as elements of chains, help drive lean approaches to maintenance. The maintenance value chain approach is particularly successful when used as a tool for helping change management, as it is seen as more user-friendly than other business process tools. Value chain analysis has also been employed in the development sector as a means of identifying poverty reduction strategies by upgrading along the value chain.
Although commonly associated with export-oriented trade, development practitioners have begun to highlight the importance of developing national and intra-regional chains in addition to international ones.

SCOR
The Supply-Chain Council, a global trade consortium with over 700 member companies, governmental, academic, and consulting groups participating over the last 10 years, manages the Supply-Chain Operations Reference (SCOR), the de facto universal reference model for supply chains, covering Planning, Procurement, Manufacturing, Order Management, Logistics, Returns, and Retail; Product and Service Design, including Design Planning, Research, Prototyping, Integration, Launch and Revision; and Sales, including CRM, Service Support, Sales, and Contract Management, all of which are congruent with the Porter framework. The SCOR framework has been adopted by hundreds of companies, as well as national entities, as a standard for business excellence, and the US DoD has adopted the newly launched Design-Chain Operations Reference (DCOR) framework for product design as a standard for managing its development processes. In addition to process elements, these reference frameworks also maintain a vast database of standard process metrics aligned to the Porter model, as well as a large and constantly researched database of prescriptive universal best practices for process execution.

Value Reference Model
[Figure: VRM Quick Reference Guide V3R0]
A Value Reference Model (VRM) developed by the trade consortium Value Chain Group offers an open source semantic dictionary for value chain management, encompassing one unified reference framework representing the process domains of product development, customer relations and supply networks. The integrated process framework guides the modeling, design, and measurement of business performance by uniquely encompassing the plan, govern and execute requirements for the design, product, and customer aspects of business. The Value Chain Group claims VRM to be next-generation Business Process Management that enables value reference modeling of all business processes and provides product excellence, operations excellence, and customer excellence. Six business functions of the value chain:
· Research and Development
· Design of Products, Services, or Processes
· Production
· Marketing & Sales
· Distribution
· Customer Service
This guide provides the level 1-3 basic building blocks for value chain configurations. All level 3 processes in VRM have input/output dependencies, metrics and practices. The VRM can be extended to levels 4-6 via the Extensible Reference Model schema.

Data mining (also known as Knowledge Discovery in Data, or KDD), a field of computer science, is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management.
With recent tremendous technical advances in processing power, storage capacity, and the inter-connectivity of computer technology, data mining is seen as an increasingly important tool by modern business for transforming unprecedented quantities of digital data into business intelligence, giving an informational advantage in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery. The growing consensus that data mining can bring real value has led to an explosion in demand for novel data mining technologies.
The related terms data dredging, data fishing and data snooping refer to the use of data mining techniques to sample portions of the larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These techniques can, however, be used in the creation of new hypotheses to test against the larger data population.
A data warehouse (DW) is a database used for reporting. The data is offloaded from the operational systems for reporting, and may pass through an operational data store for additional operations before it is used in the DW. A data warehouse maintains its functions in three layers: staging, integration, and access. Staging is used to store raw data for use by developers (analysis and support). The integration layer is used to integrate data and to provide a level of abstraction from users. The access layer is for getting data out for users. This definition of the data warehouse focuses on data storage. The main source data is cleaned, transformed, catalogued and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support (Marakas & O'Brien 2009). However, the means to retrieve and analyze data, to extract, transform and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform and load data into the repository, and tools to manage and retrieve metadata.

Architecture
Operational database layer
The source data for the data warehouse: an organization's Enterprise Resource Planning systems fall into this layer.
Data access layer
The interface between the operational and informational access layers: tools to extract, transform and load data into the warehouse fall into this layer.
Metadata layer
The data dictionary: this is usually more detailed than an operational system data dictionary. There are dictionaries for the entire warehouse and sometimes dictionaries for the data that can be accessed by a particular reporting and analysis tool.
Informational access layer
The data accessed for reporting and analysis, and the tools for reporting and analyzing data. This layer is also called the data mart; business intelligence tools fall into this layer. The Inmon-Kimball differences about design methodology, discussed later, have to do with this layer.

Conforming information
Another important consideration in designing a data warehouse is which data to conform and how to conform it. For example, one operational system feeding data into the data warehouse may use "M" and "F" to denote the sex of an employee, while another operational system may use "Male" and "Female". Though this is a simple example, much of the work in implementing a data warehouse is devoted to making data with similar meanings consistent when they are stored in the data warehouse. Typically, extract, transform, load (ETL) tools are used in this work.
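The "M"/"F" versus "Male"/"Female" example above can be sketched as a tiny conforming transform. The mapping table, field names and records are all hypothetical; real ETL tools do this at scale with many such rules.

```python
# A sketch of the "conforming" step of ETL: two feeder systems encode the
# sex of an employee differently; the transform maps both encodings onto
# one warehouse convention before loading. All names/records are invented.
CONFORM_SEX = {"M": "Male", "F": "Female", "Male": "Male", "Female": "Female"}

def conform(record):
    """Return a copy of the record with the sex field in warehouse form."""
    out = dict(record)
    out["sex"] = CONFORM_SEX[record["sex"]]
    return out

system_a = [{"emp_id": 1, "sex": "M"}]        # system A uses "M"/"F"
system_b = [{"emp_id": 2, "sex": "Female"}]   # system B spells values out
warehouse = [conform(r) for r in system_a + system_b]
# warehouse: [{"emp_id": 1, "sex": "Male"}, {"emp_id": 2, "sex": "Female"}]
```

The point is that the warehouse stores one consistent representation regardless of which operational system supplied the row.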
Master data management has the aim of conforming data that could be considered "dimensions".

Normalized versus dimensional approach for storage of data
There are two leading approaches to storing data in a data warehouse: the dimensional approach and the normalized approach. Supporters of the dimensional approach, referred to as "Kimballites", believe in Ralph Kimball's approach, in which it is stated that the data warehouse should be modeled using a dimensional model/star schema. Supporters of the normalized approach, also called the 3NF model, are referred to as "Inmonites"; they believe in Bill Inmon's approach, in which it is stated that the data warehouse should be modeled using an E-R model/normalized model.
In the dimensional approach, transaction data are partitioned into either "facts", which are generally numeric transaction data, or "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and the salesperson responsible for receiving the order. A key advantage of the dimensional approach is that the data warehouse is easier for the user to understand and to use. Also, the retrieval of data from the data warehouse tends to operate very quickly. Dimensional structures are easy for business users to understand, because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, whereas the dimensions surrounding them contain context about the measurement (Kimball, Ralph 2008).
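The fact/dimension split described above can be sketched on the sales-transaction example. The field names and values here are assumptions made up for illustration, not a prescribed schema.

```python
# Splitting one sales transaction into a numeric "fact" and its contextual
# "dimensions", as in a star schema. Field names/values are hypothetical.
transaction = {
    "order_date": "2005-05-14", "customer": "Acme Ltd", "product_no": "P-42",
    "salesperson": "J. Smith", "quantity": 10, "price_paid": 500.0,
}

FACT_FIELDS = {"quantity", "price_paid"}  # the numeric measures
fact = {k: v for k, v in transaction.items() if k in FACT_FIELDS}
dimensions = {k: v for k, v in transaction.items() if k not in FACT_FIELDS}
# fact       -> the row for the fact table
# dimensions -> keys into the dimension tables (date, customer, product, ...)
```

In a real star schema each dimension would be its own table with a surrogate key, and the fact row would store those keys alongside the measures.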
The main disadvantages of the dimensional approach are:
· in order to maintain the integrity of facts and dimensions, loading the data warehouse with data from different operational systems is complicated, and
· it is difficult to modify the data warehouse structure if the organization adopting the dimensional approach changes the way in which it does business.
In the normalized approach, the data in the data warehouse are stored following, to a degree, database normalization rules. Tables are grouped together by subject areas that reflect general data categories (e.g., data on customers, products, finance, etc.). The normalized structure divides data into entities, which creates several tables in a relational database. When applied in large enterprises, the result is dozens of tables linked together by a web of joins. Furthermore, each of the created entities is converted into a separate physical table when the database is implemented (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information into the database. A disadvantage is that, because of the number of tables involved, it can be difficult for users both to join data from different sources into meaningful information and to access the information without a precise understanding of the sources of data and of the data structure of the data warehouse. Both normalized and dimensional models can be represented in entity-relationship diagrams, as both contain joined relational tables; the difference between the two models is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).

Top-down versus bottom-up design methodologies
Bottom-up design
Ralph Kimball, a well-known author on data warehousing, is a proponent of an approach to data warehouse design which he describes as bottom-up.
In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. It is important to note that in the Kimball methodology, the bottom-up process is the result of an initial business-oriented top-down analysis of the relevant business processes to be modeled. Data marts contain, primarily, dimensions and facts. Facts can contain atomic data and, if necessary, summarized data. A single data mart often models a specific business area such as "Sales" or "Production". These data marts can eventually be integrated to create a comprehensive data warehouse. The integration of data marts is managed through the implementation of what Kimball calls "a data warehouse bus architecture". The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions: dimensions that are shared (in a specific way) between facts in two or more data marts. The integration of the data marts in the data warehouse is centered on the conformed dimensions (residing in "the bus") that define the possible integration "points" between data marts. The actual integration of two or more data marts is then done by a process known as "drill across". A drill-across works by grouping (summarizing) the data along the keys of the (shared) conformed dimensions of each fact participating in the drill-across, followed by a join on the keys of these grouped (summarized) facts. Maintaining tight management over the data warehouse bus architecture is fundamental to maintaining the integrity of the data warehouse. The most important management task is making sure the dimensions among data marts are consistent; in Kimball's words, this means that the dimensions "conform".
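The drill-across mechanism described above can be sketched in a few lines: two fact tables sharing a conformed "product" dimension are each summarized on that key and then joined. The table contents are invented for illustration.

```python
# A sketch of Kimball's "drill across": summarize each fact table along the
# shared (conformed) dimension key, then join the summaries. Data invented.
sales = [{"product": "P-42", "units_sold": 10},
         {"product": "P-42", "units_sold": 5},
         {"product": "P-99", "units_sold": 7}]
production = [{"product": "P-42", "units_made": 20},
              {"product": "P-99", "units_made": 9}]

def summarize(rows, key, measure):
    """Group rows by the conformed dimension key and total the measure."""
    totals = {}
    for row in rows:
        totals[row[key]] = totals.get(row[key], 0) + row[measure]
    return totals

sold = summarize(sales, "product", "units_sold")       # {"P-42": 15, "P-99": 7}
made = summarize(production, "product", "units_made")  # {"P-42": 20, "P-99": 9}

# Join the two summaries on the shared dimension key.
combined = {p: {"units_sold": sold.get(p, 0), "units_made": made.get(p, 0)}
            for p in set(sold) | set(made)}
```

Because both marts use the same conformed product dimension, the join is well-defined; without conformance, integrated Sales-Production reporting like this would not be possible.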
Some consider it an advantage of the Kimball method that the data warehouse ends up being "segmented" into a number of logically self-contained (up to and including the bus) and consistent data marts, rather than one big and often complex centralized model. Business value can be returned as quickly as the first data marts can be created, and the method lends itself well to an exploratory and iterative approach to building data warehouses. For example, the data warehousing effort might start in the "Sales" department by building a Sales data mart. Upon completion of the Sales data mart, the business might then decide to expand the warehousing activities into, say, the "Production" department, resulting in a Production data mart. The requirement for the Sales data mart and the Production data mart to be integrable is that they share the same bus: that is, that the data warehousing team has made the effort to identify and implement the conformed dimensions in the bus, and that the individual data marts link to that information from the bus. Note that this does not require 100% awareness from the outset of the data warehousing effort; no master plan is required upfront. The Sales data mart is good as it is (assuming that the bus is complete), and the Production data mart can be constructed virtually independently of the Sales data mart (but not independently of the bus). If integration via the bus is achieved, the data warehouse, through its two data marts, will not only be able to deliver the specific information that the individual data marts are designed to deliver, in this example either "Sales" or "Production" information, but can also deliver integrated Sales-Production information, which is often of critical business value. The integration is (possibly) achieved in a flexible and iterative fashion.

Top-down design
Bill Inmon, one of the first authors on the subject of data warehousing, has defined a data warehouse as a centralized repository for the entire enterprise.
Inmon is one of the leading proponents of the top-down approach to data warehouse design, in which the data warehouse is designed using a normalized enterprise data model. "Atomic" data, that is, data at the lowest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse. In the Inmon vision, the data warehouse is at the center of the "Corporate Information Factory" (CIF), which provides a logical framework for delivering business intelligence (BI) and business management capabilities. Inmon states that the data warehouse is:
Subject-oriented
The data in the data warehouse is organized so that all the data elements relating to the same real-world event or object are linked together.
Non-volatile
Data in the data warehouse are never over-written or deleted: once committed, the data are static, read-only, and retained for future reporting.
Integrated
The data warehouse contains data from most or all of an organization's operational systems, and these data are made consistent.
Time-variant
Changes to the data in the data warehouse are tracked and recorded, so that reports can show changes over time.

The top-down design methodology generates highly consistent dimensional views of data across data marts, since all data marts are loaded from the centralized repository. Top-down design has also proven to be robust against business changes. Generating new dimensional data marts against the data stored in the data warehouse is a relatively simple task. The main disadvantage of the top-down methodology is that it represents a very large project with a very broad scope. The up-front cost of implementing a data warehouse using the top-down methodology is significant, and the duration of time from the start of the project to the point that end users experience initial benefits can be substantial. In addition, the top-down methodology can be inflexible and unresponsive to changing departmental needs during the implementation phases.

4. Explain DFD and the data dictionary. Explain in detail how the information requirements are determined for an organization.

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design). On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process. A DFD provides no information about the timing of processes, or about whether processes will operate in sequence or in parallel.
It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD).
It is common practice to draw the context-level data flow diagram first, which shows the interaction between the system and external agents which act as data sources and data sinks. On the context diagram (also known as the 'Level 0 DFD') the system's interactions with the outside world are modelled purely in terms of data flows across the system boundary. The context diagram shows the entire system as a single process, and gives no clues as to its internal organization. This context-level DFD is next "exploded", to produce a Level 1 DFD that shows some of the detail of the system being modeled. The Level 1 DFD shows how the system is divided into sub-systems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole. It also identifies internal data stores that must be present in order for the system to do its job, and shows the flow of data between the various parts of the system. Data flow diagrams were proposed by Larry Constantine, the original developer of structured design,
based on Martin and Estrin's "data flow graph" model of computation.
Data flow diagrams (DFDs) are one of the three essential perspectives of the structured-systems analysis and design method SSADM. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a data flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's data flow diagrams can be drawn up and compared with the new system's data flow diagrams in order to implement a more efficient system. Data flow diagrams can also give the end user a physical idea of where the data they input ultimately has an effect on the structure of the whole system, from order to dispatch to report. How any system is developed can be determined through a data flow diagram.
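To make the levelling and balancing idea concrete, the sketch below records each diagram as a list of (source, data item, destination) flows and checks that exploding the context-level process leaves the data crossing the system boundary unchanged. All process, store, and flow names are invented for illustration.

```python
# Each flow is (source, data item, destination). "Customer" is an external
# entity; "D1 Orders" is an internal data store; numbered names are processes.
# All names are hypothetical.

context_level = [  # Level 0: the whole system as one process
    ("Customer", "order details", "Order Processing System"),
    ("Order Processing System", "invoice", "Customer"),
]

level_1 = [  # the single process exploded into sub-processes and a store
    ("Customer", "order details", "1.0 Validate Order"),
    ("1.0 Validate Order", "valid order", "D1 Orders"),
    ("D1 Orders", "order record", "2.0 Generate Invoice"),
    ("2.0 Generate Invoice", "invoice", "Customer"),
]

def boundary_data(flows, externals):
    """Data items flowing in from / out to external entities."""
    inputs = {d for (src, d, dst) in flows if src in externals}
    outputs = {d for (src, d, dst) in flows if dst in externals}
    return inputs, outputs

# Balancing check: exploding a process must not change what crosses
# the system boundary.
assert boundary_data(context_level, {"Customer"}) == \
       boundary_data(level_1, {"Customer"})
print("Level 1 balances with the context diagram")
```

The internal store and sub-processes appear only at Level 1, exactly as the text describes: the context diagram shows nothing of the internal organization.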
In the course of developing a set of levelled data flow diagrams the analyst/designer is forced to address how the system may be decomposed into component sub-systems, and to identify the transaction data in the data model. There are different notations for drawing data flow diagrams (Yourdon & Coad and Gane & Sarson), defining different visual representations for processes, data stores, data flows, and external entities.
A data dictionary, or metadata repository, as defined in the IBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format." The term may have one of several closely related meanings pertaining to databases and database management systems (DBMS):
• a document describing a database or collection of databases
• an integral component of a DBMS that is required to determine its structure
• a piece of middleware that extends or supplants the native data dictionary of a DBMS
Middleware
In the construction of database applications, it can be useful to introduce an additional layer of data dictionary software, i.e. middleware, which communicates with the underlying DBMS data dictionary. Such a "high-level" data dictionary may offer additional features and a degree of flexibility that goes beyond the limitations of the native "low-level" data dictionary, whose primary purpose is to support the basic functions of the DBMS, not the requirements of a typical application. For example, a high-level data dictionary can provide alternative entity-relationship models tailored to suit different applications that share a common database.
Extensions to the data dictionary can also assist in query optimization against distributed databases.
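As a sketch of the middleware idea, a high-level data dictionary can be held as ordinary data from which lower-level artifacts such as DDL are generated. The table, columns, and metadata fields below are all invented for illustration; real products keep far richer metadata.

```python
# Hypothetical high-level data dictionary: table/column metadata (meaning,
# type/format, origin) kept as plain data, from which DDL is generated.

DATA_DICT = {
    "customer": {
        "columns": {
            "customer_id": {"type": "INTEGER", "meaning": "surrogate key",
                            "origin": "generated"},
            "name":        {"type": "VARCHAR(80)", "meaning": "legal name",
                            "origin": "sales entry form"},
        },
        "primary_key": "customer_id",
    },
}

def create_table_sql(table):
    """Generate a CREATE TABLE statement from the dictionary entry."""
    entry = DATA_DICT[table]
    cols = ",\n  ".join(f"{c} {m['type']}" for c, m in entry["columns"].items())
    return (f"CREATE TABLE {table} (\n  {cols},\n"
            f"  PRIMARY KEY ({entry['primary_key']})\n)")

print(create_table_sql("customer"))
```

Because the metadata lives above the DBMS, the same dictionary entry could also drive form generation or data validation, which is how the frameworks discussed next use it.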
Software frameworks aimed at rapid application development sometimes include high-level data dictionary facilities, which can substantially reduce the amount of programming required to build menus, forms, reports, and other components of a database application, including the database itself. For example, PHPLens includes a PHP class library to automate the creation of tables, indexes, and foreign key constraints portably for multiple databases. Another PHP-based data dictionary, part of the RADICORE toolkit, automatically generates program objects, scripts, and SQL code for menus and forms with data validation and complex JOINs. For the ASP.NET environment, Base One's data dictionary provides cross-DBMS facilities for automated database creation, data validation, performance enhancement (caching and index utilization), application security, and extended data types.
5. What is ERP? Explain its existence before and its future after? What are the advantages & Disadvantages of ERP? What is Artificial Intelligence? How is it different from Neural Networks?

Enterprise resource planning (ERP) integrates internal and external management information across an entire organization, embracing finance/accounting, manufacturing, sales and service, CRM, etc. ERP systems automate this activity with an integrated software application. Their purpose is to facilitate the flow of information between all business functions inside the boundaries of the organization and to manage the connections to outside stakeholders.
ERP systems can run on a variety of hardware and network configurations, typically employing a database to store data.
ERP systems typically include the following characteristics:
• An integrated system that operates in real time (or near real time), without relying on periodic updates.
• A common database, which supports all applications.
• A consistent look and feel throughout each module.
• Installation of the system without elaborate application/data integration by the Information Technology (IT) department.
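The "common database" characteristic can be illustrated with a minimal sketch: two hypothetical modules operate on the same record store, so data entered once by one function is immediately visible to another without re-keying. Module, table, and field names are invented.

```python
# Sketch: two hypothetical ERP "modules" sharing one database, so an order
# entered by Sales is immediately visible to Finance without duplication.

common_db = {"orders": {}}   # single shared store

class SalesModule:
    def enter_order(self, order_id, amount):
        common_db["orders"][order_id] = {"amount": amount, "invoiced": False}

class FinanceModule:
    def invoice(self, order_id):
        order = common_db["orders"][order_id]   # same record, no re-entry
        order["invoiced"] = True
        return order["amount"]

sales, finance = SalesModule(), FinanceModule()
sales.enter_order("SO-1001", 2500.0)
print(finance.invoice("SO-1001"))   # 2500.0: both modules see one record
```

Contrast this with the pre-ERP situation the text describes, where each department maintained its own software and data had to be copied between systems.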
Finance/Accounting: general ledger, payables, cash management, fixed assets, receivables, budgeting, consolidation
Human resources: payroll, training, benefits, 401K, recruiting, diversity management
Manufacturing: engineering, bill of materials, work orders, scheduling, capacity, workflow management, quality control, cost management, manufacturing process, manufacturing projects, manufacturing flow, activity-based costing, product lifecycle management
Supply chain management: order to cash, inventory, order entry, purchasing, product configurator, supply chain planning, supplier scheduling, inspection of goods, claim processing, commissions
Project management: costing, billing, time and expense, performance units, activity management
Customer relationship management: sales and marketing, commissions, service, customer contact, call center support
Advantages and disadvantages of ERP
The advantages and disadvantages of ERP make an interesting study. The foremost advantage of an ERP system is bringing down costs and saving the valuable time which would otherwise have been wasted in procedural manoeuvres and unwanted delays. The different software programs maintained in individual departments were proving to be a great hurdle; since ERP is a uniform platform, it ensures that there is no discrepancy in the information that is processed.
Industry-wise advantages:
• Manufacturing sector: speeding up the whole process.
• Distribution and retail stores: accessing the status of goods.
• Transport sector: transmitting commodities through online transactions.
• Project service industry: faster compilation of reports.
The advantages and disadvantages of ERP are best understood by studying them under different categories, because the advantages of ERP systems in a company differ when compared industry-wise. Hence the next paragraph presents information on corporates as a whole.
Advantages in a corporate entity
• The accounts department personnel can act independently; they do not have to chase the technical staff every time to record a financial transaction.
• Quicker processing of information and a reduced burden of paperwork.
• Serving customers efficiently by way of prompt response and follow-up.
• Disposing of queries immediately and facilitating payments from customers with ease, well ahead of the stipulated deadline.
• An edge over your competitors, and the ability to adapt to the whims and fancies of the market and business fluctuations.
• The swift movement of goods to rural areas and lesser-known places has now become a reality with the use of ERP.
• The database not only becomes user-friendly but also helps to do away with unwanted ambiguity.
ERP is suitable for global operations as it encompasses all the domestic jargons, currency conversions, diverse accounting standards, and multilingual facilities. In short, it is the perfect commercial and scientific epitome of the verse "Think Local. Act Global". ERP helps to control data and facilitates the necessary contacts to acquire it.
Disadvantages
In spite of rendering marvellous services, ERP is not free from limitations. ERP calls for a voluminous and expensive investment of time and money. The outlay can loom over management, given that it is not a guarantee of the promised benefits but is subject to proper implementation, training and use. In the ever-expanding era of information theft, ERP is no exception. The time taken to implement the system in an organization is also alarming: large numbers of workers have to set aside their regular work and undertake training. This not only disturbs the regular functioning of the organization but also exposes it to the risk of losing potential business in that period. And although the system renders great benefits, if its information reaches undeserving persons who could misuse it, there is no way of ensuring secrecy, and large risks remain as long as the data is exposed.
Conclusion
ERP is recommended in an organization not only because its advantages outnumber its disadvantages but also by keeping in mind the ways to overcome the disadvantages. An organization has to weigh the advantages and disadvantages of ERP correctly before going for it.

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, and deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, long-standing differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.
The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.
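The explosion is easy to see numerically. Assuming a branching factor of 10 choices per step (an arbitrary figure for illustration), the number of distinct sequences grows as 10^d with depth d:

```python
# Counting the search space of a naive exhaustive search:
# with branching factor b, there are b**d sequences of depth d.
b = 10                                   # assumed choices per position
for d in (5, 10, 15):
    print(f"depth {d}: {b ** d:,} sequences")
# Even at one billion sequences per second, depth 15 (10**15 sequences)
# would take about 10**6 seconds, i.e. over 11 days, to enumerate.
```

This is why pruning, heuristics, and the sub-symbolic shortcuts discussed next matter so much in practice.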
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
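As a toy illustration of representing categories and properties, the fragment below stores a miniature ontology (all categories and properties invented) as plain Python data: each category has a parent and default properties, and an object's properties are collected by walking the chain from the most general category down, so that specific categories override general ones.

```python
# Tiny hand-rolled ontology: categories with parent links and default
# properties; a category inherits properties along its parent chain.
# All names here are invented for illustration.

ONTOLOGY = {
    "thing":   {"parent": None,     "props": {}},
    "animal":  {"parent": "thing",  "props": {"alive": True}},
    "bird":    {"parent": "animal", "props": {"can_fly": True}},
    "penguin": {"parent": "bird",   "props": {"can_fly": False}},  # exception
}

def properties(category):
    """Collect properties along the parent chain; nearer categories win."""
    chain = []
    while category is not None:
        chain.append(category)
        category = ONTOLOGY[category]["parent"]
    props = {}
    for cat in reversed(chain):               # most general first
        props.update(ONTOLOGY[cat]["props"])  # specific overrides general
    return props

print(properties("bird"))      # {'alive': True, 'can_fly': True}
print(properties("penguin"))   # {'alive': True, 'can_fly': False}
```

The penguin entry already hints at the difficulty discussed next: commonsense defaults like "birds fly" only hold until an exception overrides them.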
Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.

The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.

The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed", or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.

Planning
Intelligent agents must be able to set goals and achieve them.
They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices. In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions will be. If this is not true, it must periodically check whether the world matches its predictions, and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
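The choice rule described above, picking the action that maximizes expected utility, can be sketched in a few lines. The actions, probabilities, and utilities here are invented purely for illustration.

```python
# Choosing the action with the highest expected utility under uncertainty.
# Each action maps to a list of (probability, utility) outcomes.

actions = {
    "ship_by_air": [(0.9, 100), (0.1, -50)],
    "ship_by_sea": [(0.6,  80), (0.4,  20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # ship_by_air 85.0
```

Note that "ship_by_air" wins (0.9·100 + 0.1·(−50) = 85 versus 56) even though it carries the worse worst case; expected utility trades risk against reward by construction.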
Learning
Machine learning has been central to AI research from the beginning. In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine". Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
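The two supervised tasks just described can each be illustrated in a few lines of plain Python with made-up data: classification by assigning the label of the nearest known example, and regression by fitting a straight line with least squares.

```python
# Classification: assign the label of the closest labelled example.
examples = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

def classify(x):
    return min(examples, key=lambda e: abs(e[0] - x))[1]

# Regression: fit y = a*x + b to numeric input/output pairs (least squares).
xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]          # data generated by y = 2x + 1
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

print(classify(1.5))              # "small"
print(round(a, 6), round(b, 6))   # 2.0 1.0
```

Both programs generalize from examples to unseen inputs, which is the defining feature that separates learning from the hand-built knowledge bases discussed earlier.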
Natural language processing
Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
Motion and manipulation
The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with the sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.
Social intelligence
Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

Creativity
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are artificial intuition and artificial imagination.
General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
The term neural network was traditionally used to refer to a network or circuit of biological neurons.
The modern usage of the term often refers to artificial neural networks, which are
composed of artificial neurons or nodes. Thus the term has two distinct usages: 1. Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis. 2. Artificial neural networks are composed of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex: artificial neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter most from an information processing point of view. Good performance (e.g. as measured by good predictive ability, low generalization error), or performance mimicking animal or human error patterns, can then be used as one source of evidence towards supporting the hypothesis that the abstraction really captured something important from the point of view of information processing in the brain. Another incentive for these abstractions is to reduce the amount of computation required to simulate artificial neural networks, so as to allow one to experiment with larger networks and train them on larger data sets.
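A minimal concrete instance of an artificial neuron is the classic perceptron. The sketch below trains a single perceptron on the AND function using the standard perceptron learning rule; the learning rate and epoch count are chosen arbitrarily.

```python
# A single artificial neuron (perceptron): weighted sum + threshold,
# trained on AND with the perceptron learning rule.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # AND is linearly separable, so the
    for x, target in data:          # perceptron rule converges quickly
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b    += lr * err

print([predict(x) for x, _ in data])   # [0, 0, 0, 1]
```

A biological neuron is vastly more complex than this weighted sum, which is exactly the abstraction trade-off the paragraph above describes.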
The discussion here focuses on the relationship between the two concepts; each is treated in detail in its own literature.
6. Distinguish between closed decision making system & open decision making system? What is 'What-if' analysis? Why is more time spent in problem analysis & problem definition as compared to the time spent on decision analysis?

Closed decision making system & open decision making system
Decision making can be regarded as the mental (cognitive) process resulting in the selection of a course of action among several alternative scenarios. Every decision making process produces a final choice. The output can be an action or an opinion of choice. Human performance in decision terms has been the subject of active research from several perspectives. From a psychological perspective, it is necessary to examine individual decisions in the context of the set of needs and preferences an individual has and the values they seek. From a cognitive perspective, the decision making process must be regarded as a continuous process integrated in the interaction with the environment. From a normative perspective, the analysis of individual decisions is concerned with the logic of decision making and rationality and the invariant choice it leads to.
Yet, at another level, decision making might be regarded as a problem solving activity which is terminated when a satisfactory solution is found. Therefore, decision making is a reasoning or emotional process which can be rational or irrational, and can be based on explicit or tacit assumptions. Logical decision making is an important part of all science-based professions, where specialists apply their knowledge in a given area to making informed decisions. For example, medical decision making often involves making a diagnosis and selecting an appropriate treatment. Some research using naturalistic methods shows, however, that in situations with higher time pressure, higher stakes, or increased ambiguities, experts use intuitive decision making rather than structured approaches, following a recognition-primed decision approach to fit a set of indicators into the expert's experience and immediately arrive at a satisfactory course of action without weighing alternatives. Recent robust decision efforts have formally integrated uncertainty into the decision making process; decision analysis, however, has recognized and included uncertainties with a structured and rationally justifiable method of decision making since its conception in 1964.
A major part of decision making involves the analysis of a finite set of alternatives described in terms of some evaluative criteria. These criteria may be benefit or cost in nature. Then the problem might be to rank these alternatives in terms of how attractive they are to the decision maker(s) when all the criteria are
considered simultaneously. Another goal might be to just find the best alternative or to determine the relative total priority of each alternative (for instance, if alternatives represent projects competing for funds) when all the criteria are considered simultaneously. Solving such problems is the focus of multi-criteria decision analysis (MCDA), also known as multi-criteria decision making (MCDM). This area of decision making, although very old and having attracted the interest of many researchers and practitioners, is still highly debated, as there are many MCDA/MCDM methods which may yield very different results when they are applied to exactly the same data. This leads to the formulation of a decision making paradox.

Problem Analysis vs Decision Making
It is important to differentiate between problem analysis and decision making. The concepts are completely separate from one another. Problem analysis must be done first; the information gathered in that process may then be used towards decision making.
Problem Analysis
• Analyze performance: what the results should be against what they actually are
• Problems are merely deviations from performance standards
• The problem must be precisely identified and described
• Problems are caused by some change from a distinctive feature
• Something can always be used to distinguish between what has and hasn't been affected by a cause
• Causes of problems can be deduced from relevant changes found in analyzing the problem
• The most likely cause of a problem is the one that exactly explains all the facts
Decision Making
• Objectives must first be established
• Objectives must be classified and placed in order of importance
• Alternative actions must be developed
• The alternatives must be evaluated against all the objectives
• The alternative that is able to achieve all the objectives is the tentative decision
• The tentative decision is evaluated for more possible consequences
• The decisive actions are taken, and additional actions are taken to prevent any adverse consequences from becoming problems and starting both systems (problem analysis and decision making) all over again
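A weighted sum is one of the simplest MCDA methods referred to above: each alternative is scored against every criterion, and the scores are combined with weights reflecting each criterion's importance. The suppliers, criteria, weights, and scores below are all invented for illustration.

```python
# Minimal weighted-sum MCDA: rank alternatives against weighted criteria.

weights = {"cost": 0.5, "quality": 0.3, "delivery": 0.2}   # importance
alternatives = {
    "Supplier A": {"cost": 7, "quality": 9, "delivery": 6},
    "Supplier B": {"cost": 9, "quality": 6, "delivery": 8},
}

def score(scores):
    """Weighted sum of criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(alternatives, key=lambda a: score(alternatives[a]),
                reverse=True)
for a in ranked:
    print(a, round(score(alternatives[a]), 2))
```

Notice that changing the weights can reverse the ranking with the very same data, a small-scale instance of the sensitivity behind the decision making paradox mentioned above.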
Everyday techniques
Some of the decision making techniques people use in everyday life include:
• Pros and cons: listing the advantages and disadvantages of each option, popularized by Plato and Benjamin Franklin
• Simple prioritization: choosing the alternative with the highest probability-weighted utility (see decision analysis)
• Possibilianism: acting on choices so as not to preclude alternative understandings of equal probability, including active exploration of novel possibilities and emphasis on the necessity of holding multiple positions at once if there is no available data to privilege one over the others
• Satisficing: accepting the first option that seems like it might achieve the desired result
• Acquiescing to a person in authority or an "expert": just following orders
• Flipism: flipping a coin, cutting a deck of playing cards, and other random or coincidence methods
• Prayer, tarot cards, astrology, augurs, revelation, or other forms of divination

Decision-Making Stages
Developed by B. Aubrey Fisher, there are four stages that should be involved in all group decision making. These stages, sometimes called phases, are important for the decision-making process:
Orientation stage: members meet for the first time and start to get to know each other.
Conflict stage: once group members become familiar with each other, disputes, little fights and arguments occur. Group members eventually work them out.
Emergence stage: the group begins to clear up vague opinions by talking about them.
Reinforcement stage: members finally make a decision, while justifying to themselves that it was the right decision.

Decision-Making Steps
When in an organization and faced with a difficult decision, there are several steps one can take to ensure the best possible solution will be decided. These steps are put into seven effective ways to go about the decision making process (McMahon 2007).
The first step: outline your goal and outcome. This will enable decision makers to see exactly what they are trying to accomplish and keep them on a specific path.
The second step: gather data. This will help decision makers have actual evidence to help them come up with a solution.
The third step: brainstorm to develop alternatives. Coming up with more than one solution enables you to see which one can actually work.
The fourth step: list the pros and cons of each alternative. With the list of pros and cons, you can eliminate the solutions that have more cons than pros, making your decision easier.
The fifth step: make the decision. Once you analyze each solution, you should pick the one that has the most pros (or the pros that are most significant), and is a solution that everyone can agree with.
The sixth step: immediately take action. Once the decision is picked, you should implement it right away.
The seventh step: learn from, and reflect on, the decision making. This step allows you to see what you did right and wrong when coming up with the decision and putting it to use.

Cognitive and personal biases
Biases can creep into our decision making processes. Many different people have made a decision about the same kind of question (e.g. "Should I have a doctor look at this troubling breast cancer symptom I've discovered?", "Why did I ignore the evidence that the project was going over budget?"), and researchers study such decisions to craft potential cognitive interventions aimed at improving decision making outcomes. Below is a list of some of the more commonly debated cognitive biases.
· Selective search for evidence (a.k.a. confirmation bias in psychology) (Scott Plous, 1993) – We tend to be willing to gather facts that support certain conclusions but disregard other facts that support different conclusions. Individuals who are highly defensive in this manner show significantly greater left prefrontal cortex activity, as measured by EEG, than do less defensive individuals.
· Premature termination of search for evidence – We tend to accept the first alternative that looks like it might work.
· Inertia – Unwillingness to change thought patterns that we have used in the past in the face of new circumstances.
· Selective perception – We actively screen out information that we do not think is important. (See prejudice.) In one demonstration of this effect, discounting of arguments with which one disagrees (by judging them untrue or irrelevant) was decreased by selective activation of the right prefrontal cortex.
· Wishful thinking or optimism bias – We tend to want to see things in a positive light, and this can distort our perception and thinking.
· Choice-supportive bias – We distort our memories of chosen and rejected options to make the chosen options seem more attractive.
· Recency – We tend to place more attention on more recent information and either ignore or forget more distant information. (See semantic priming.) The opposite effect, giving more weight to the first set of data or other information, is termed the primacy effect (Plous, 1993).
· Repetition bias – A willingness to believe what we have been told most often and by the greatest number of different sources.
· Anchoring and adjustment – Decisions are unduly influenced by initial information that shapes our view of subsequent information.
· Groupthink – Peer pressure to conform to the opinions held by the group.
· Source credibility bias – We reject something if we have a bias against the person, organization, or group to which the person belongs, and we are inclined to accept a statement by someone we like. (See prejudice.)
· Incremental decision making and escalating commitment – We look at a decision as a small step in a process, and this tends to perpetuate a series of similar decisions. This can be contrasted with zero-based decision making. (See slippery slope.)
· Attribution asymmetry – We tend to attribute our success to our abilities and talents but our failures to bad luck and external factors. We attribute others' success to good luck, and their failures to their mistakes.
· Role fulfillment (self-fulfilling prophecy) – We conform to the decision-making expectations that others have of someone in our position.
· Underestimating uncertainty and the illusion of control – We tend to underestimate future uncertainty because we believe we have more control over events than we really do, and that this control lets us minimize potential problems in our decisions.
Reference class forecasting was developed to eliminate or reduce cognitive biases in decision making.

Post-decision analysis
Evaluation and analysis of past decisions is complementary to decision making; see also mental accounting.

Cognitive styles
Influence of Briggs Myers type
According to behavioralist Isabel Briggs Myers, a person's decision-making process depends to a significant degree on their cognitive style. Myers developed a set of four bi-polar dimensions, called the Myers-Briggs Type Indicator (MBTI). The terminal points on these dimensions are: thinking and feeling; extroversion and introversion; judgment and perception; and sensing and intuition. She claimed that a person's decision-making style correlates well with how they score on these four dimensions. For example, someone who scored near the thinking, extroversion, sensing, and judgment ends of the dimensions would tend to have a logical, analytical, objective, critical, and empirical decision-making style. However, some psychologists say that the MBTI lacks reliability and validity and is poorly constructed.
Other studies suggest that these national or cross-cultural differences in decision making exist across entire societies. For example, Maris Martinsons has found that American, Japanese and Chinese business leaders each exhibit a distinctive national style of decision making.

Optimizing vs. satisficing
Herbert Simon coined the phrase "bounded rationality" to express the idea that human decision making is limited by available information, available time, and the information-processing ability of the mind. Simon also defined two cognitive styles: maximizers try to make an optimal decision, whereas satisficers simply try to find a solution that is "good enough". Maximizers tend to take longer making decisions, due to the need to maximize performance across all variables and make tradeoffs carefully; they also tend to regret their decisions more often.

Combinational vs. positional
Styles and methods of decision making were elaborated by the founder of Predispositioning Theory, Aron Katsenelinboigen. In his analysis of styles and methods, Katsenelinboigen referred to the game of chess, saying that "chess does disclose various methods of operation, notably the creation of predisposition – methods which may be applicable to other, more complex systems."
In his book Katsenelinboigen states that apart from the methods (reactive and selective) and sub-methods (randomization, predispositioning, programming), there are two major styles – positional and combinational. Both styles are utilized in the game of chess. According to Katsenelinboigen, the two styles reflect two basic approaches to uncertainty: deterministic (combinational style) and indeterministic (positional style). Katsenelinboigen's definitions of the two styles are the following. The combinational style is characterized by
a very narrow, clearly defined, primarily material goal, and a program that links the initial position with the final outcome.
In defining the combinational style in chess, Katsenelinboigen writes: The combinational style features a clearly formulated, limited objective, namely the capture of material (the main constituent element of a chess position). The objective is implemented via a well-defined, and in some cases unique, sequence of moves aimed at reaching the set goal. As a rule, this sequence leaves no options for the opponent. Finding a combinational objective allows the player to focus all his energies on efficient execution; that is, the player's analysis may be limited to the pieces directly partaking in the combination. This approach is the crux of the combination and the combinational style of play.
The positional style is distinguished by
a positional goal and
a formation of semi-complete linkages between the initial step and final outcome.
"Unlike the combinational player, the positional player is occupied, first and foremost, with the elaboration of the position that will allow him to develop in the unknown future. In playing the positional style, the player must evaluate relational and material parameters as independent variables. (…) The positional style gives the player the opportunity to develop a position until it becomes pregnant with a combination. However, the combination is not the final goal of the positional player – it helps him to achieve the desirable, keeping in mind a predisposition for the future development. The Pyrrhic victory is the best example of one's inability to think positionally."
The positional style serves to a) create a predisposition to the future development of the position; b) induce the environment in a certain way; c) absorb an unexpected outcome in one's favor; and d) avoid the negative aspects of unexpected outcomes. Katsenelinboigen writes: "As the game progressed and defense became more sophisticated, the combinational style of play declined. . . . The positional style of chess does not eliminate the combinational one, with its attempt to see the entire program of action in advance. The positional style merely prepares the transformation to a combination when the latter becomes feasible."
Neuroscience perspective
The anterior cingulate cortex (ACC), orbitofrontal cortex, and the overlapping ventromedial prefrontal cortex are brain regions involved in decision-making processes. A recent neuroimaging study found distinctive patterns of neural activation in these regions depending on whether decisions were made on the basis of personal volition or following directions from someone else. Patients with damage to the ventromedial prefrontal cortex have difficulty making advantageous decisions. A recent study involving rhesus monkeys found that neurons in the parietal cortex not only represent the formation of a decision but also signal the degree of certainty (or "confidence") associated with the decision. Another recent study found that lesions to the ACC in the macaque resulted in impaired decision making in the long run of reinforcement-guided tasks, suggesting that the ACC may be involved in evaluating past reinforcement information and guiding future action. Emotion appears to aid the decision-making process: decision making often occurs in the face of uncertainty about whether one's choices will lead to benefit or harm (see also risk). The somatic marker hypothesis is a neurobiological theory of how decisions are made in the face of uncertain
outcome. This theory holds that such decisions are aided by emotions, in the form of bodily states, that are elicited during the deliberation of future consequences and that mark different options for behavior as being advantageous or disadvantageous. This process involves an interplay between neural systems that elicit emotional/bodily states and neural systems that map these emotional/bodily states.
Although it is unclear whether the studies generalize to all processing, there is evidence that volitional movements are initiated, not by the conscious decision making self, but by the subconscious. See the Neuroscience of free will.
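The seven decision-making steps outlined earlier, particularly steps three to five (brainstorm alternatives, list pros and cons, decide), can be sketched as a simple scoring procedure. This is only an illustrative sketch: the function name, the example options, and the equal weighting of pros and cons are assumptions, not part of McMahon's method.

```python
def choose_alternative(alternatives):
    """Pick the alternative whose pros most outweigh its cons.

    `alternatives` maps each option name to a (pros, cons) pair of lists.
    Options with more cons than pros are eliminated (step four), and the
    best-scoring survivor is selected (step five).
    """
    # Step four: eliminate options with more cons than pros.
    viable = {name: (pros, cons)
              for name, (pros, cons) in alternatives.items()
              if len(pros) >= len(cons)}
    if not viable:
        return None  # no acceptable option: revisit steps one to three
    # Step five: choose the option with the largest pros-minus-cons margin.
    return max(viable, key=lambda n: len(viable[n][0]) - len(viable[n][1]))

# Hypothetical alternatives for a build-vs-buy decision
options = {
    "build in-house": (["full control", "custom fit"],
                       ["slow", "costly", "risky"]),
    "buy off-the-shelf": (["fast", "proven", "cheaper"],
                          ["less flexible"]),
}
```

With these example inputs, `choose_alternative(options)` returns "buy off-the-shelf", since it is the only option whose pros outnumber its cons.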
Master of Business Administration – MBA Semester II
MB0047 – Management Information Systems – 4 Credits (Book ID: B1136)
Assignment Set-2 (60 Marks)
Attempt each question. Each question carries 10 marks:
1. How do hardware and software support the various MIS activities of the organization? Explain the transition stages from manual systems to automated systems.

Hardware support for MIS
Generally, hardware in the form of personal computers and peripherals like printers, fax machines, copiers, scanners etc. is used in an organization to support its various MIS activities.
Advantages of a PC: The advantages a personal computer offers are –
a) Speed – A PC can process data at very high speed. It can process millions of instructions within fractions of a second.
b) Storage – A PC can store a large quantity of data in a small space. It eliminates the need for storing conventional office flat files and box files, which require a lot of space. The storage system in a PC is such that information can be transferred from one place to another in electronic form.
c) Communication – A PC on a network can offer great support as a communicator, communicating information in the form of text and images. Today a PC with internet access is used as a powerful tool of communication for every business activity.
d) Accuracy – A PC is highly reliable in the sense that it can be used to perform calculations continuously for hours with a great degree of accuracy. It is possible to obtain mathematical results correct to a great degree of accuracy.
e) Conferencing – A PC with internet access offers the facility of worldwide video conferencing. Business people across the globe travel a lot to meet their business partners, colleagues, customers etc. to discuss business activities; with video conferencing the inconvenience of traveling can be avoided.
A block diagram of a computer may be represented as: input unit, processor, output unit. The input unit is used to give input to the processor. Examples of input units – keyboard, scanner, mouse, bar code reader etc. A processor refers to the unit which processes the input received the way it has been instructed.
In a computer the processor is the CPU – Central Processing Unit. It performs all mathematical calculations and logical tasks and stores details in memory. The output unit is used to give outputs from the computer. Examples of output units – monitor, printer, speakers etc.

Organization of Business in an E-enterprise – Software Applications in MIS
Internet technology is creating a universal platform for buying and selling goods, commodities and services. Essentially, the Internet and networks enable integration of information, facilitate communication, and provide access to everybody from anywhere. Software solutions make businesses faster and more self-reliant, as they can analyze data and
information, interpret and use rules and guidelines for decision-making. These enabling capabilities of technology have given rise to four business models that together work in an E-enterprise organization. They are:
· E-business
· E-communication
· E-commerce
· E-collaboration
These models work successfully because Internet technology provides the infrastructure for running the entire business process of any length. It also provides email and other communication capabilities to plan, track, monitor and control business operations through workers located anywhere. It is capable of linking to disparate systems such as logistics, data acquisition, radio-frequency systems and so on. Low-cost connectivity (physical, virtual and universal) and the standards of Internet technology make it a driving force in changing the conventional business model to the E-business enterprise model. The Internet has enabled organizations to change their business processes and practices. It has dramatically reduced the cost of processing, sending and storing data and information. Information and information products are available in electronic media, resident on the network. Once everyone is connected electronically, information can flow seamlessly from any location to any other location. For example, product information is available on an organization's website, which also has a feature for order placement. An order placed is processed at the backend, and the status of acceptance or rejection is communicated instantaneously to the customer. Such an order is then placed directly on the order board for scheduling and execution. These basic capabilities of the Internet have given rise to a number of business models. Some of them are given in Table 2.
The Internet and networks provide a platform and various capabilities whereby communication, collaboration, and conversion have become significantly faster, more transparent and cheaper. These technologies help to save time and resources and enable faster decision making. The technology adds speed and intelligence to the business process, improving the quality of service to the customer. The business process of serving the customer to offer goods, products or services is made up of the following components:
· Enquiry processing
· Order preparation
· Order placement
· Order confirmation
· Order planning
· Order scheduling
· Order manufacturing
· Order status monitoring
· Order dispatching
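The components above form a strictly ordered pipeline, so an order's progress can be sketched as movement through a list of statuses. The stage names come from the list above; the `Order` class itself is an illustrative assumption, not part of any particular E-business system.

```python
ORDER_STAGES = [
    "enquiry processing", "order preparation", "order placement",
    "order confirmation", "order planning", "order scheduling",
    "order manufacturing", "order status monitoring", "order dispatching",
]

class Order:
    """Tracks a customer order as it advances through the stages in sequence."""

    def __init__(self):
        self._index = 0  # every order starts at enquiry processing

    @property
    def stage(self):
        return ORDER_STAGES[self._index]

    def advance(self):
        """Move to the next stage; raise once the order has been dispatched."""
        if self._index == len(ORDER_STAGES) - 1:
            raise ValueError("order already dispatched")
        self._index += 1
```

In such a pipeline, the instantaneous customer status updates described earlier are simply reads of `order.stage` after each `advance()`.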
2. Explain the various behavioral factors of management organization. As per Porter, how can the performance of individual corporations be determined?

Behavioral factors
Management, in all business and organizational activities, is the act of getting people together to accomplish desired goals and objectives using available resources efficiently and effectively. Management comprises planning, organizing, staffing, leading or directing, and controlling an organization (a group of one or more people or entities) or effort for the purpose of accomplishing a goal. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources, and natural resources. Because organizations can be viewed as systems, management can also be defined as human action, including design, to facilitate the production of useful outcomes from a system. This view opens the opportunity to 'manage' oneself, a prerequisite to attempting to manage others.

Scope
At the beginning, one thinks of management functionally, as the action of measuring a quantity on a regular basis and of adjusting some initial plan, or as the actions taken to reach one's intended goal. This applies even in situations where planning does not take place. From this perspective, Frenchman Henri Fayol (1841–1925) considers management to consist of six functions: forecasting, planning, organizing, commanding, coordinating, and controlling. He was one of the most influential contributors to modern concepts of management. In another way of thinking, Mary Parker Follett (1868–1933), who wrote on the topic in the early twentieth century, defined management as "the art of getting things done through people". She described management as philosophy.
Some people, however, find this definition, while useful, far too narrow. The phrase "management is what managers do" occurs widely, suggesting the difficulty of defining management, the shifting nature of definitions, and the connection of managerial practices with the existence of a managerial cadre or class. One habit of thought regards management as equivalent to "business administration" and thus excludes management in places outside commerce, as for example in charities and in the public sector. More realistically, however, every organization must manage its work, people, processes, technology, etc. in order to maximize its effectiveness. Nonetheless, many people refer to university departments which teach management as "business schools." Some institutions (such as the Harvard Business School) use that name, while others (such as the Yale School of Management) employ the more inclusive term "management."
English speakers may also use the term "management" or "the management" as a collective word describing the managers of an organization, for example of a corporation. Historically this use of the term was often contrasted with the term "labor", referring to those being managed.

Nature of managerial work
In for-profit work, management has as its primary function the satisfaction of a range of stakeholders. This typically involves making a profit (for the shareholders), creating valued products at a reasonable cost (for customers), and providing rewarding employment opportunities (for employees). Nonprofit management adds the importance of keeping the faith of donors. In most models of management and governance, shareholders vote for the board of directors, and the board then hires senior management. Some organizations have experimented with other methods (such as employee-voting models) of selecting or reviewing managers, but this occurs only very rarely. In the public sector of countries constituted as representative democracies, voters elect politicians to public office. Such politicians hire many managers and administrators, and in some countries like the United States political appointees lose their jobs on the election of a new president, governor, or mayor.

Historical development
Difficulties arise in tracing the history of management. Some see it (by definition) as a late modern (in the sense of late modernity) conceptualization. On those terms it cannot have a pre-modern history, only harbingers (such as stewards). Others, however, detect management-like thought back to Sumerian traders and to the builders of the pyramids of ancient Egypt. Slave-owners through the centuries faced the problems of exploiting and motivating a dependent but sometimes unenthusiastic or recalcitrant workforce, but many pre-industrial enterprises, given their small scale, did not feel compelled to face the issues of management systematically.
However, innovations such as the spread of Arabic numerals (5th to 15th centuries) and the codification of double-entry book-keeping (1494) provided tools for management assessment, planning and control. Given the scale of most commercial operations and the lack of mechanized record-keeping before the industrial revolution, it made sense for most owners of enterprises in those times to carry out management functions by and for themselves. But with the growing size and complexity of organizations, the split between owners (individuals, industrial dynasties or groups of shareholders) and day-to-day managers (independent specialists in planning and control) gradually became more common.

Early writing
While management has been present for millennia, several writers have created a background of works that assisted in modern management theories.
Sun Tzu's The Art of War
Written by Chinese general Sun Tzu in the 6th century BC, The Art of War is a military strategy book that, for managerial purposes, recommends being aware of and acting on the strengths and weaknesses of both a manager's organization and a foe's.

Chanakya's Arthashastra
Chanakya wrote the Arthashastra around 300 BC, in which various strategies, techniques and management theories give an account of the management of empires, economy and family. The work is often compared to the later works of Machiavelli.

Niccolò Machiavelli's The Prince
Believing that people were motivated by self-interest, Niccolò Machiavelli wrote The Prince in 1513 as advice for the city of Florence, Italy. Machiavelli recommended that leaders use fear, but not hatred, to maintain control.

Adam Smith's The Wealth of Nations
Written in 1776 by Adam Smith, a Scottish moral philosopher, The Wealth of Nations aims for efficient organization of work through specialization of labor. Smith described how changes in processes could boost productivity in the manufacture of pins. While individuals could produce 200 pins per day, Smith analyzed the steps involved in manufacture and, with 10 specialists, enabled production of 48,000 pins per day.
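Smith's figures imply roughly a twenty-four-fold gain in output per worker, as a quick calculation shows:

```python
# Ten unspecialized workers, each producing 200 pins per day
unspecialized_output = 10 * 200   # 2,000 pins per day in total

# The same ten workers after division of labor, per Smith's account
specialized_output = 48_000       # pins per day

gain_per_worker = specialized_output / unspecialized_output
print(gain_per_worker)  # 24.0 -> each worker is 24 times more productive
```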
19th century
Classical economists such as Adam Smith (1723–1790) and John Stuart Mill (1806–1873) provided a theoretical background to resource-allocation, production, and pricing issues. About the same time, innovators like Eli Whitney (1765–1825), James Watt (1736–1819), and Matthew Boulton (1728–1809) developed elements of technical production such as standardization, quality-control procedures, cost accounting, interchangeability of parts, and work-planning. Many of these aspects of management existed in the pre-1861 slave-based sector of the US economy. That environment saw 4 million people, as the contemporary usages had it, "managed" in profitable quasi-mass production. By the late 19th century, marginal economists Alfred Marshall (1842–1924), Léon Walras (1834–1910), and others introduced a new layer of complexity to the theoretical underpinnings of management. Joseph Wharton offered the first tertiary-level course in management in 1881.

20th century
By about 1900 one finds managers trying to place their theories on what they regarded as a thoroughly scientific basis (see scientism for perceived limitations of this belief). Examples include Henry R. Towne's Science of Management in the 1890s, Frederick Winslow Taylor's The Principles of Scientific
Management (1911), Frank and Lillian Gilbreth's Applied Motion Study (1917), and Henry L. Gantt's charts (1910s). J. Duncan wrote the first college management textbook in 1911. In 1912 Yoichi Ueno introduced Taylorism to Japan and became the first management consultant of the "Japanese management style". His son Ichiro Ueno pioneered Japanese quality assurance. The first comprehensive theories of management appeared around 1920. The Harvard Business School invented the Master of Business Administration degree (MBA) in 1921. People like Henri Fayol (1841–1925) and Alexander Church described the various branches of management and their interrelationships. In the early 20th century, people like Ordway Tead (1891–1973), Walter Scott and J. Mooney applied the principles of psychology to management, while other writers, such as Elton Mayo (1880–1949), Mary Parker Follett (1868–1933), Chester Barnard (1886–1961), Max Weber (1864–1920), Rensis Likert (1903–1981), and Chris Argyris (1923– ) approached the phenomenon of management from a sociological perspective. Peter Drucker (1909–2005) wrote one of the earliest books on applied management: Concept of the Corporation (published in 1946). It resulted from Alfred Sloan (chairman of General Motors until 1956) commissioning a study of the organisation. Drucker went on to write 39 books, many in the same vein. H. Dodge, Ronald Fisher (1890–1962), and Thornton C. Fry introduced statistical techniques into management studies. In the 1940s, Patrick Blackett combined these statistical theories with microeconomic theory and gave birth to the science of operations research. Operations research, sometimes known as "management science" (but distinct from Taylor's scientific management), attempts to take a scientific approach to solving management problems, particularly in the areas of logistics and operations.
Some of the more recent developments include the Theory of Constraints, management by objectives, reengineering, Six Sigma and various information-technology-driven theories such as agile software development, as well as group management theories such as Cog's Ladder. As the general recognition of managers as a class solidified during the 20th century and gave perceived practitioners of the art/science of management a certain amount of prestige, so the way opened for popularised systems of management ideas to peddle their wares. In this context many management fads may have had more to do with pop psychology than with scientific theories of management. Towards the end of the 20th century, business management came to consist of six separate branches, namely:
· Human resource management
· Operations management or production management
· Strategic management
· Marketing management
· Financial management
· Information technology management (responsible for management information systems)

21st century
In the 21st century observers find it increasingly difficult to subdivide management into functional categories in this way. More and more processes simultaneously involve several categories. Instead, one tends to think in terms of the various processes, tasks, and objects subject to management. Branches of management theory also exist relating to nonprofits and to government, such as public administration, public management, and educational management. Further, management programs related to civil-society organizations have also spawned programs in nonprofit management and social entrepreneurship. Note that many of the assumptions made by management have come under attack from business-ethics viewpoints, critical management studies, and anti-corporate activism. As one consequence, workplace democracy has become both more common and more advocated, in some places distributing all management functions among the workers, each of whom takes on a portion of the work. However, these models predate any current political issue, and may occur more naturally than does a command hierarchy. All management to some degree embraces democratic principles, in that in the long term workers must give majority support to management; otherwise they leave to find other work, or go on strike. Despite the move toward workplace democracy, command-and-control organization structures remain commonplace and the de facto organization structure. Indeed, the entrenched nature of command-and-control can be seen in the way that recent layoffs have been conducted, with management ranks affected far less than employees at the lower levels of organizations. In some cases, management has even rewarded itself with bonuses after lower-level employees have been laid off.
3. Compare the various types of development aspects of Information Systems. Explain the various stages of the SDLC.

Information Systems (IS) is an academic/professional discipline bridging the business field and the well-defined computer science field, and it is evolving toward a new scientific area of study. The information systems discipline is therefore supported by the theoretical foundations of information and computation, such that scholars have unique opportunities to explore the academics of various business models as well as related algorithmic processes within a computer science discipline.
Typically, information systems (or the more common legacy information systems) include people, procedures, data, software, and hardware (by degree) that are used to gather and analyze digital information. Specifically, computer-based information systems are complementary networks of hardware and software that people and organizations use to collect, filter, process, create, and distribute data. Computer Information Systems (CIS) is often a track within the computer science field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society. Overall, an IS discipline emphasizes functionality over design. As illustrated by the Venn diagram, the history of information systems coincides with the history of computer science, which began long before the modern discipline of computer science emerged in the twentieth century. Regarding the circulation of information and ideas, numerous legacy information systems still exist today that are continuously updated to promote ethnographic approaches, to ensure data integrity, and to improve the social effectiveness and efficiency of the whole process. In general, information systems are focused upon processing information within organizations, especially within business enterprises, and sharing the benefits with modern society.

Information systems are implemented within an organization for the purpose of improving the effectiveness and efficiency of that organization. The capabilities of the information system and the characteristics of the organization, its work systems, its people, and its development and implementation methodologies together determine the extent to which that purpose is achieved.

The Discipline of Information Systems
Several IS scholars have debated the nature and foundations of Information Systems, which has its roots in other reference disciplines such as Computer Science, Engineering, Mathematics, Management Science, Cybernetics, and others.
The Impact on Economic Models
· Microeconomic theory model
· Transaction cost theory
· Agency theory

Differentiating IS from Related Disciplines
Similar to computer science, other disciplines can be seen as both related disciplines and foundation disciplines of IS. But while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose and orientation of their activities.
In a broad scope, the term Information Systems (IS) is a scientific field of study that addresses the range of strategic, managerial and operational activities involved in the gathering, processing, storing, distributing and use of information, and its associated technologies, in society and organizations. The term information systems is also used to describe an organizational function that applies IS knowledge in industry, government agencies and not-for-profit organizations. Information Systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is not only the technology an organization uses, but also the way in which the organization interacts with the technology and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the process components.

Types of information systems
The 'classic' view of Information Systems found in the textbooks of the 1980s was of a four-level pyramid of systems, reflecting the hierarchy of the organization: Transaction processing systems at the bottom of the pyramid, followed by Management information systems, Decision support systems, and ending with Executive information systems at the top. Although the pyramid model remains useful, since it was first formulated a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model. Some examples of such systems are:
• Data warehouses
• Enterprise resource planning
• Enterprise systems
• Expert systems
• Geographic information system
• Global information system
• Office automation

Information systems career pathways

Information Systems work covers a number of different areas:

• Information systems strategy
• Information systems management
• Information systems development
• Information systems security
• Information systems iteration
• Information system organization
There are a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue."

Information systems development

Information technology departments in larger organizations tend to strongly influence information technology development, use, and application in the organization, which may be a business or corporation. A series of methodologies and processes can be used to develop and use an information system. Many developers have turned to a more engineering-oriented approach such as the System Development Life Cycle (SDLC), a systematic procedure of developing an information system through stages that occur in sequence. An information system can be developed in house (within the organization) or outsourced; outsourcing may cover certain components or the entire system.
A specific case is the geographical distribution of the development team (offshoring, global information systems). A computer-based information system, following a definition of Langefors, is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions. Such a system can also be formulated as a generalized information systems design mathematical program.
Geographic Information Systems, Land Information Systems and Disaster Information Systems are also some of the emerging information systems, but they can be broadly considered as Spatial Information Systems. System development is done in stages which include:

• Problem recognition and specification
• Information gathering
• Requirements specification for the new system
• System design
• System construction
• System implementation
• Review and maintenance

Information systems research
Information systems research is generally interdisciplinary, concerned with the study of the effects of information systems on the behavior of individuals, groups, and organizations. Hevner et al. (2004) categorized research in IS into two scientific paradigms: behavioral science, which seeks to develop and verify theories that explain or predict human or organizational behavior, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts. Salvatore March and Gerald Smith proposed a framework for researching different aspects of Information Technology, including outputs of the research (research outputs) and activities to carry out this research (research activities). They identified research outputs as follows:

1. Constructs, which are concepts that form the vocabulary of a domain. They constitute a conceptualization used to describe problems within the domain and to specify their solutions.
2. A model, which is a set of propositions or statements expressing relationships among constructs.
3. A method, which is a set of steps (an algorithm or guideline) used to perform a task. Methods are based on a set of underlying constructs and a representation (model) of the solution space.
4. An instantiation, which is the realization of an artifact in its environment.

Research activities include:

1. Build an artifact to perform a specific task.
2. Evaluate the artifact to determine whether any progress has been achieved.
3. Given an artifact whose performance has been evaluated, it is important to determine why and how the artifact worked or did not work within its environment, and therefore to theorize about and justify theories of IT artifacts.

Although Information Systems as a discipline has been evolving for over 30 years, the core focus or identity of IS research is still subject to debate among scholars. There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT embedded in a dynamic, evolving context. A third view calls on IS scholars to give balanced attention to both the IT artifact and its context. Since information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. However, that is not always the case. Often information systems researchers explore behavioral issues in much more depth than practitioners would expect them to. This may render information systems research results difficult to understand, and has led to criticism.
To study an information system itself, rather than its effects, information systems models such as EATPUT are used. The international body of information systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (23 April 2007), proposed a 'basket' of journals that the AIS deems 'excellent', and nominated: Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), Journal of the Association for Information Systems (JAIS), Journal of Management Information Systems (JMIS), European Journal of Information Systems (EJIS), and Information Systems Journal (ISJ).
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle, in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems. In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process.
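The sequential SDLC stages listed earlier can be sketched as a minimal ordered pipeline. This is only an illustration of the idea of stages occurring in sequence; the function and record structure are assumptions for the sketch, not part of any standard.

```python
# The SDLC stages from the text, in the order they occur.
SDLC_STAGES = [
    "Problem recognition and specification",
    "Information gathering",
    "Requirements specification for the new system",
    "System design",
    "System construction",
    "System implementation",
    "Review and maintenance",
]

def run_sdlc(project, stages=SDLC_STAGES):
    """Walk a project through each stage in sequence, recording progress.

    In a real methodology each stage would produce deliverables (specs,
    designs, code, test reports) before the next stage begins; here each
    stage just appends a progress record.
    """
    history = []
    for stage in stages:
        history.append((stage, f"{project}: {stage} complete"))
    return history

log = run_sdlc("Restaurant ordering system")
print(len(log))     # -> 7, one record per stage, in order
print(log[0][0])    # -> Problem recognition and specification
```

Each stage depends on the output of the previous one, which is why the model is sequential rather than iterative.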
4. Compare and contrast the e-enterprise business model with the traditional business organization model. Explain how the manager's role and responsibilities change in an e-enterprise. Explain how the manager is a knowledge worker in an e-enterprise.

E-business may be defined as the conduct of industry, trade and commerce using computer networks. The network you are most familiar with as a student or consumer is the internet. Whereas the internet is a public thoroughfare, firms use more private, and hence more secure, networks for more effective and efficient management of their internal functions. Traditional business, on the other hand, is the simplest and best-known way of doing business: it involves no electronic media, and dealings include manual work.

Ease of formation and lower investment requirements: Unlike the host of procedural requirements for setting up an industry, e-business is relatively easy to start. The benefits of internet technology accrue to big and small businesses alike. In fact, the internet is responsible for the popularity of the phrase 'networked individuals and firms are more efficient than networthed individuals.' This means that even if you do not have much investment (net worth) but have contacts (a network), you can do fabulous business. Imagine a restaurant that has no requirement for physical space. You may have an online 'menu' representing the best cuisines from the best restaurants the world over that you have networked with. The customer visits your website, decides the menu, and places the order, which in turn is routed to the restaurant located closest to his location. The food is delivered and the payment collected by the restaurant staff, and the amount due to you as the intermediary is credited to your account through an electronic clearing system.

Convenience: The internet offers the convenience of '24 hours × 7 days a week × 365 days a year' business, allowing customers to go shopping well after midnight.
Such flexibility is available even to organizational personnel, who can work from wherever they are, whenever they want.

Speed: As already noted, much of buying or selling involves an exchange of information that the internet allows at the click of a mouse. This benefit becomes all the more attractive in the case of information-intensive products such as software, movies, music, e-books and journals, which can even be delivered online. Cycle time, i.e., the time taken to complete a cycle from the origin of demand to its fulfillment, is substantially reduced due to transformation of the business processes from being sequential to becoming
parallel or simultaneous. You know that in the digital era, money is defined as electronic pulses moving at the speed of light, thanks to the electronic funds transfer technology of e-commerce.

Global reach/access: The internet is truly without boundaries. On the one hand, it allows the seller access to the global market; on the other hand, it affords the buyer the freedom to choose products from almost any part of the world. It would not be an exaggeration to say that in the absence of the internet, globalisation would have been considerably restricted in scope and speed.

Movement towards a paperless society: Use of the internet has considerably reduced dependence on paperwork and the attendant 'red tape.' Firms these days do the bulk of their sourcing of supplies of materials and components in a paperless fashion. Even government departments and regulatory authorities are increasingly moving in this direction, allowing electronic filing of returns and reports. In fact, e-commerce tools are effecting administrative reforms aimed at speeding up the process of granting permissions, approvals and licences. In this respect, the provisions of the Information Technology Act 2000 are quite noteworthy.

It is widely acknowledged today that new technologies, in particular access to the Internet, tend to modify communication between the different players in the professional world, notably:
• relationships between the enterprise and its clients,
• the internal functioning of the enterprise, including enterprise-employee relationships,
• the relationship of the enterprise with its different partners and suppliers.

The term "e-Business" therefore refers to the integration, within the company, of tools based on information and communication technologies (generally referred to as business software) to improve its functioning in order to create value for the enterprise, its clients, and its partners. E-Business no longer only applies to virtual companies, all of whose activities are based on the Net, but also to traditional companies (called brick and mortar, or click and mortar once they move part of their activity online). The term e-Commerce (also called electronic commerce), which is frequently confused with the term e-Business, in fact covers only one aspect of e-Business: the use of an electronic support for the commercial relationship between a company and individuals. The purpose of this document is to present the different underlying "technologies" (in reality, organizational modes based on information and communication technologies) and their associated acronyms.

Creation of value

The goal of any e-Business project is to create value. Value can be created in different manners:
• As a result of an increase in margins, i.e. a reduction in production costs or an increase in profits. E-Business makes it possible to achieve this in a number of different ways:
  - Positioning on new markets
  - Increasing the quality of products or services
  - Prospecting new clients
  - Increasing customer loyalty
  - Increasing the efficiency of internal functioning
• As a result of increased staff motivation. The transition from a traditional activity to an e-Business activity ideally makes it possible to motivate associates to the extent that:
  - The overall strategy is more visible to the employees and favors a common culture
  - The mode of functioning implies that the players assume responsibilities
  - Teamwork favors improvement of competences
• As a result of customer satisfaction. E-Business favors:
  - a drop in prices in connection with an increase in productivity
  - improved listening to clients
  - products and services that are suitable for the clients' needs
  - a mode of functioning that is transparent for the user
• As a result of privileged relationships with partners. The creation of communication channels with the suppliers permits:
  - increased familiarity with each other
  - increased responsiveness
  - improved anticipation capacities
  - sharing of resources that is beneficial for both parties

An e-Business project can therefore only work if it adds value not just to the company, but also to its staff, its clients, and its partners.
Time To Market

"Time To Market" is the time necessary to bring a product to market from the time an idea is put forward. Worldwide, new technologies provide an incredible source of inspiration for formalizing ideas, while making time to market even more critical because of the rapid flow of information and speedy competition.

Reduction of costs and ROI

The use of new technologies in the functioning of an enterprise makes it possible, over time, to reduce costs at the different levels of its organization. Nonetheless, implementation of such a project is generally very costly and necessarily leads to organizational changes, which may cause upheaval in employees' practices. It is therefore essential to determine the return on investment (ROI) of such a project, i.e. the difference between the expected profits and the required overall investment, taking into account the cost of the human resources mobilized.

Characterization of the e-Business

A company can be viewed as an entity providing products or services to clients, with the support of products or services of partners, in a constantly changing environment. The functioning of an enterprise can be roughly modeled as a set of interacting functions, which are commonly classified in three categories:
• Performance functions, which represent the core of its activity (core business), i.e. the production of goods or services. They pertain to activities of production, stock management, and purchasing (the purchasing function);
• The management functions, which cover all strategic functions of management of the company: general management of the company, the human resources (HR) management functions, and the financial and accounting management functions;
• The support functions, which support the performance functions to ensure proper functioning of the enterprise. Support functions cover all activities related to sales (in certain cases, these are part of the core business) as well as all activities that are transversal to the organization, such as management of technological infrastructures (the IT, or Information Technology, function).
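The ROI comparison described under "Reduction of costs and ROI" is simple arithmetic: expected profits minus the overall investment, including the cost of mobilized human resources. A hedged sketch, with all figures purely illustrative:

```python
def project_roi(expected_profits, infrastructure_cost, human_resources_cost):
    """Return on investment as defined in the text: the difference between
    the expected profits and the required overall investment, taking the
    cost of mobilized human resources into account."""
    total_investment = infrastructure_cost + human_resources_cost
    return expected_profits - total_investment

# Illustrative figures only (assumptions, not taken from the text):
roi = project_roi(expected_profits=500_000,
                  infrastructure_cost=300_000,
                  human_resources_cost=120_000)
print(roi)  # -> 80000: the project adds value only when this is positive
```

A negative result would indicate that the expected profits do not cover the investment, and the e-Business project as scoped would destroy rather than create value.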
Enterprises are generally characterized by the type of commercial relationships they maintain. Dedicated terms exist to qualify each type of relationship:

• B To B (Business To Business, sometimes written B2B) means a business-to-business commercial relationship based on the use of a digital support for the exchange of information.
• B To C (Business To Consumer, sometimes written B2C) means a relationship between a company and the public at large (individuals). This is called electronic commerce, whose definition is not limited to sales, but rather covers all possible exchanges between a company and its clients, from the request for an estimate to after-sales service.
• B To A (Business To Administration, sometimes written B2A) means a relationship between a company and the public sector (tax administration, etc.) based on digital exchange mechanisms (teleprocedures, electronic forms, etc.).

As an extension of these concepts, the term B To E (Business To Employees, sometimes written B2E) has also emerged to refer to the relationship between a company and its employees, in particular through the provision of forms directed at them for managing their career, vacation, or their relationship with the company committee.
Front Office/Back Office
The terms Front Office and Back Office are generally used to describe the parts of the company (or of its information system) that are dedicated, respectively, to the direct relationship with the client and to the proper management of the company. The Front Office (sometimes also called the front line) refers to the front part of the enterprise, visible to clients. In turn, the Back Office refers to all parts of the information system to which the final user does not have access; the term therefore covers all internal processes within the enterprise (production, logistics, warehousing, sales, accounting, human resources management, etc.).

5. What do you understand by service level agreements (SLAs)? Why are they needed? What is the role of the CIO in drafting these? Explain the various security hazards faced by an IS.

A service level agreement (frequently abbreviated as SLA) is a part of a service contract in which the level of service is formally defined. In practice, the term SLA is sometimes used to refer to the contracted delivery time (of the service) or performance. As an example, internet service providers will commonly include service level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain-language terms. In this case the SLA will typically have a technical definition in terms of Mean Time Between Failures (MTBF), Mean Time To Repair or Mean Time To Recovery (MTTR), various data rates, throughput, jitter, or similar measurable details.

Service level agreements at different levels

SLAs are defined at different levels:
Customer-based SLA: An agreement with an individual customer group, covering all the services they use. For example, an SLA between a supplier (IT service provider) and the finance department of a large organization for services such as the finance system, payroll system, billing system, and procurement/purchase system.
Service-Based SLA: An agreement for all customers using the services being delivered by the service provider. For example:
• A car service station offers a routine service to all customers and certain maintenance as part of the offer, with universal charging.
• A mobile service provider offers a routine service to all customers and certain maintenance as part of the offer, with universal charging.
• An email system for the entire organization.

Difficulties may arise in this type of SLA because the level of service being offered may vary for different customers (for example, head office staff may use high-speed LAN connections while local offices may have to use a lower-speed leased line).
Multilevel SLA: The SLA is split into different levels, each addressing a different set of customers for the same services, in the same SLA:

• Corporate-level SLA: covering all the generic service level management (often abbreviated as SLM) issues appropriate to every customer throughout the organization. These issues are likely to be less volatile, so updates (SLA reviews) are less frequently required.
• Customer-level SLA: covering all SLM issues relevant to the particular customer group, regardless of the services being used.
• Service-level SLA: covering all SLM issues relevant to the specific services, in relation to this
specific customer group.

Common metrics

Service level agreements can contain numerous service performance metrics with corresponding service level objectives. A common case in IT service management is a call center or service desk. Metrics commonly agreed in these cases include:
• ABA (Abandonment Rate): percentage of calls abandoned while waiting to be answered.
• ASA (Average Speed to Answer): average time (usually in seconds) it takes for a call to be answered by the service desk.
• TSF (Time Service Factor): percentage of calls answered within a definite timeframe, e.g., 80% in 20 seconds.
• FCR (First-Call Resolution): percentage of incoming calls that can be resolved without the use of a callback or without having the caller call back the helpdesk to finish resolving the case.
• TAT (Turn-Around Time): time taken to complete a certain task.
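These call-center metrics are straightforward to compute from raw call records. The sketch below assumes a simple record schema ('answered', 'wait_seconds', 'resolved_first_call') invented for illustration; real service desks define these fields, and the exact denominators, in the SLA itself.

```python
def call_centre_metrics(calls, tsf_threshold=20.0):
    """Compute ABA, ASA, TSF and FCR from a list of call records.

    Assumptions made for this sketch: ABA and TSF are computed over all
    offered calls, while ASA and FCR are computed over answered calls only.
    """
    total = len(calls)
    answered = [c for c in calls if c["answered"]]
    abandoned = total - len(answered)
    aba = 100.0 * abandoned / total                    # Abandonment Rate
    asa = sum(c["wait_seconds"] for c in answered) / len(answered)  # Avg Speed to Answer
    within = sum(1 for c in answered if c["wait_seconds"] <= tsf_threshold)
    tsf = 100.0 * within / total                       # Time Service Factor
    fcr = 100.0 * sum(1 for c in answered
                      if c["resolved_first_call"]) / len(answered)  # First-Call Resolution
    return {"ABA": aba, "ASA": asa, "TSF": tsf, "FCR": fcr}

# Four illustrative calls: three answered, one abandoned.
calls = [
    {"answered": True,  "wait_seconds": 12, "resolved_first_call": True},
    {"answered": True,  "wait_seconds": 35, "resolved_first_call": False},
    {"answered": True,  "wait_seconds": 18, "resolved_first_call": True},
    {"answered": False, "wait_seconds": 60, "resolved_first_call": False},
]
m = call_centre_metrics(calls)
print(m["ABA"])   # -> 25.0 (one of four calls abandoned)
print(m["TSF"])   # -> 50.0 (two of four calls answered within 20 seconds)
```

In practice the SLA states the target for each metric (e.g. TSF of 80% in 20 seconds), and figures like these are compared against that target in periodic SLA reviews.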
Uptime is also a common metric, often used for data services such as shared hosting, virtual private servers and dedicated servers. Common agreements include percentage of network uptime, power uptime, number of scheduled maintenance windows, etc. Many SLAs track to the Information Technology Infrastructure Library (ITIL) specifications when applied to IT services.

Specific examples

Backbone Internet providers

It is not uncommon for an Internet backbone service provider (or network service provider) to explicitly state its own service level agreement on its Web site.
The Telecommunications Act of 1996 does not expressly mandate that companies have SLAs, but it does provide a framework for firms to do so in Sections 251 and 252. Section 252(c)(1), for example ("Duty to Negotiate"), requires that ILECs negotiate in good faith regarding matters such as resale and access to rights-of-way.

WSLA

A web service level agreement (WSLA) is a standard for service level agreement compliance monitoring of web services. It allows authors to specify the performance metrics associated with a web service application, desired performance targets, and actions that should be performed when performance is not met. WSLA Language Specification, version 1.0 was published by IBM on January 28, 2001.

Cloud computing

Cloud computing (alternatively, grid computing or service-oriented architecture) uses the concept of service level agreements to control the use and receipt of computing resources from, and by, third parties. Any SLA management strategy considers two well-differentiated phases: the negotiation of the contract and the monitoring of its fulfilment in real time. Thus, SLA management encompasses the SLA contract definition: a basic schema with the QoS (quality of service) parameters; SLA negotiation; SLA monitoring; and SLA enforcement, according to defined policies. The main point is to build a new layer upon the grid, cloud, or SOA middleware able to create a negotiation mechanism between providers and consumers of services.
An example is the European Union–funded Framework 7 research project, SLA@SOI, which is researching aspects of multi-level, multi-provider SLAs within service-oriented infrastructure and cloud computing. The underlying benefit of cloud computing is shared resources, which is supported by the underlying nature of a shared infrastructure environment. Thus, service level agreements span across the cloud and are offered by service providers as a service-based agreement rather than a customer-based agreement. Measuring, monitoring and reporting on cloud performance is based upon the end-user experience or the end user's ability to consume resources. The downside of cloud computing, relative to SLAs, is the difficulty of determining the root cause of service interruptions due to the complex nature of the environment.

Outsourcing

Outsourcing involves the transfer of responsibility from an organization to a supplier. The management of this new arrangement is through a contract that may include a service level agreement. The contract may involve financial penalties and the right to terminate if SLA metrics are consistently missed. Setting, tracking, and managing SLAs is an important part of the Outsourcing Relationship Management (ORM) discipline. It is typical that specific SLAs are negotiated up front as part of the outsourcing contract, and they are utilized as one of the primary tools of outsourcing governance.
6. Case Study: Information system in a restaurant.

A waiter takes an order at a table, and then enters it online via one of the six terminals located in the restaurant dining room. The order is routed to a printer in the appropriate preparation area: the cold-item printer if it is a salad, the hot-item printer if it is a hot sandwich, or the bar printer if it is a drink. A customer's meal check listing (bill) the items ordered and the respective prices is automatically generated. This ordering system eliminates the old three-carbon-copy guest check system as well as any problems caused by a waiter's handwriting. When the kitchen runs out of a food item, the cooks send out an 'out of stock' message, which will be displayed on the dining room terminals when waiters try to order that item. This gives the waiters faster feedback, enabling them to give better service to the customers.

Other system features aid management in the planning and control of their restaurant business. The system provides up-to-the-minute information on the food items ordered and breaks out percentages showing sales of each item versus total sales. This helps management plan menus according to customers' tastes. The system also compares the weekly sales totals versus food costs, allowing planning for tighter cost controls. In addition, whenever an order is voided, the reasons for the void are keyed in. This may help later in management decisions, especially if the voids are consistently related to food or service. Acceptance of the system by the users is exceptionally high since the waiters and waitresses were involved in the selection and design process. All potential users were asked to give their impressions and ideas about the various systems available before one was chosen.

Questions:

1. In the light of the system, describe the decisions to be made in the area of strategic planning, managerial control and operational control.
What information would you require to make such decisions?
2. What would make the system a more complete MIS rather than just doing transaction processing?
3. Explain the probable effects that making the system more formal would have on the customers and the management.

Solution:

1. A management information system (MIS) is an organized combination of people, hardware, communication networks and data sources that collects, transforms and distributes information in an organization. An MIS helps decision making by providing timely, relevant and accurate information to managers. The physical components of an MIS include hardware, software, database, personnel and procedures. Management information is an important input for efficient performance of various managerial functions at different organization levels. The information system facilitates decision making. Management functions include planning, controlling and decision making. Decision making is the core of management and aims at selecting the best alternative to achieve an objective. The decisions may be strategic, tactical or technical. Strategic decisions are characterized by uncertainty. They are future-oriented and relate directly to planning activity. Tactical decisions cover both planning and controlling. Technical decisions pertain to implementation of specific tasks through appropriate technology. Sales region analysis, cost analysis, annual budgeting, and relocation analysis are examples of decision-support systems and management information systems. There are three areas of control in the organization: strategic, managerial and operational.
Strategic decisions are characterized by uncertainty. The decisions to be made in the area of strategic planning are future-oriented and relate directly to planning activity. Here, planning for the future (budgets, target markets, policies, objectives, etc.) is done. This is the top level, where up-to-the-minute information on the food items ordered and breakout percentages showing sales of each item versus total sales are provided. The top level, where strategic planning is done, compares the weekly sales totals versus food costs, allowing planning for tighter cost controls. Executive support systems function at the strategic level, support unstructured decision making, and use advanced graphics and communications. Examples of executive support systems include sales trend forecasting, budget forecasting, operating plan development, profit planning, and manpower planning.

The decisions to be made in the area of managerial control are largely dependent upon the information available to the decision makers. This is the middle level, where planning of menus is done and, whenever an order is voided, the reasons for the void are keyed in, which later helps in management decisions, especially if the voids are related to food or service. The middle level also gets customer feedback and is responsible for customer satisfaction.

The decisions to be made in the area of operational control pertain to implementation of specific tasks through appropriate technology. This is the lower level, where the waiter takes the order and enters it online via one of the six terminals located in the restaurant dining room, and the order is routed to a printer in the appropriate preparation area. The list of items ordered and the respective prices are automatically generated.
The cooks send an 'out of stock' message when the kitchen runs out of a food item, which is displayed on the dining room terminals when a waiter tries to order that item. This gives the waiters faster feedback, enabling them to give better service to the customers. Transaction processing systems function at the operational level of the organization. Examples of transaction processing systems include order tracking, order processing, machine control, plant scheduling, compensation, and securities trading.

The information required to make such decisions must highlight the trouble spots and show the interconnections with the other functions. It must summarize all information relating to the span of control of the manager. The information required to make these decisions can be strategic, tactical or operational.

Advantages of an online computer system:
• Eliminates carbon copies
• Eliminates waiters' handwriting issues
• Out-of-stock messages
• Faster feedback, helping waiters to serve the customers

Advantages to management:

• Sales figures and percentages item-wise
• Helps in planning the menu
• Cost accounting details

2. If the management provides sufficient incentive for efficiency and results to their customers, it would make the system a more complete MIS, and the MIS should support this culture by providing information that aids the promotion of efficiency in the management services and operational system. It is also necessary to study the keys to successful Executive Information System (EIS) development and operation. Decision support systems would also make the system a complete MIS as it
constitutes a class of computer-based information systems, including knowledge-based systems, that support decision-making activities. DSSs serve the management level of the organization and help managers take decisions that may be rapidly changing and not easily specified in advance. Improving personal efficiency, expediting problem solving (speeding up the progress of problem solving in an organization), facilitating interpersonal communication, promoting learning and training, increasing organizational control, generating new evidence in support of a decision, creating a competitive advantage over competitors, encouraging exploration and discovery on the part of the decision maker, revealing new approaches to thinking about the problem space, and helping automate managerial processes would make the system a complete MIS rather than just a transaction processor.

3. The management system should be an open system, and the MIS should be designed so that it highlights critical business, operational, technological and environmental changes to the concerned level of management, so that action can be taken to correct the situation. To make the system a success, knowledge will have to be formalized so that machines worldwide have a shared and common understanding of the information provided. The systems developed will have to handle enormous amounts of information very fast. An organization operates in an ever more competitive, global environment. Operating in a global environment requires an organization to focus on the efficient execution of its processes, customer service, and speed to market. To accomplish these goals, the organization must exchange valuable information across different functions, levels, and business units. By making the system more formal, the organization can exchange information more efficiently among its functional areas, business units, suppliers, and customers.
As transactions take place every day, the system stores all the data, which can be used later when the hotel needs financial help from financial institutions or banks. Since the inventory is always entered into the system, frauds can be easily controlled: if anything goes missing, it can be detected through the system.
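The inventory check mentioned above amounts to comparing the quantities recorded in the system against a physical count and flagging any shortfall. The following is a hedged sketch of that reconciliation; the item names and figures are illustrative assumptions, not data from the source.

```python
def find_discrepancies(recorded, counted):
    """Return items whose physical count differs from the system's record,
    mapped to the size of the shortfall (positive means stock is missing)."""
    return {
        item: recorded[item] - counted.get(item, 0)
        for item in recorded
        if recorded[item] != counted.get(item, 0)
    }

# Illustrative stock records: system figures versus a physical count.
recorded = {"rice_kg": 40, "oil_l": 12, "chicken_kg": 25}
counted = {"rice_kg": 40, "oil_l": 10, "chicken_kg": 25}

print(find_discrepancies(recorded, counted))  # {'oil_l': 2}
```

A real system would run this comparison against the transaction log rather than a hand-entered count, but the detection logic is the same: anything the records say should exist and the count does not show is flagged for investigation.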