
ABSTRACT

"AI can have two purposes.

One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think, in a humanoid way. If you test your programs not merely by what they can accomplish, but by how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind." (Herb Simon)

Recent studies of GIS show that it is the fastest-growing segment (both hardware and software) of the graphical computer market. 70% of private organisations expect to use GIS as a strategic tool within their company. Like a product, GIS in an organisation has a life cycle. According to Nolan's model, this life cycle starts with awareness and ends when full integration with other information systems is achieved. Until recently, project management for GIS projects mainly concerned projects that were considered experimental. The requirements for such projects differ from those for projects that are strategic for a company. Strategic GIS projects require a project manager with a thorough understanding of issues such as planning, the objectives of the project, the project environment, and politics. There is little experience with such GIS projects. However, the question of how to manage a GIS project effectively has to be answered if strategically positioned GIS projects are to succeed. It is important for project managers to understand the position of GIS in an organisation (Nolan's model) in relation to the importance of GIS for the organisation (McFarlan). How a GIS project should be handled depends, to a large extent, on these two positionings.
A combination of IT methodologies such as Structured Analysis and Design, project management methodologies such as PRINCE, and Hewlett-Packard's Customer Project Life Cycle, combined with best practices, is proposed to provide a framework for project managers handling GIS projects that are considered strategic for the organisation. This framework, based on prior experience and on the evaluation of a complex GIS project, has been shown, in some respects, to work. Some uncertainty remains: there is little experience in the market with strategic GIS projects, so there are few best practices from which to learn and against which to further evaluate the proposed approach.

Intelligence: Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

Other definitions include:
- the capacity to learn and solve problems (Webster's dictionary), in particular the ability to solve novel problems
- the ability to act rationally
- the ability to act like humans

Artificial Intelligence: It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
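The "act rationally" view above can be made concrete with a minimal sketch: an agent that perceives a state and picks whichever action scores best under some measure of success. Everything here (the thermostat scenario, the utility function, the names) is invented for illustration; it is not a standard library or algorithm.

```python
# Minimal "rational agent" sketch: perceive the world, then choose the
# action with the highest utility. Names and numbers are illustrative.

def agent(perceive, actions, utility):
    """Return the action that maximizes utility for the current percept."""
    state = perceive()
    return max(actions, key=lambda a: utility(state, a))

# Toy environment: a thermostat that should steer the temperature
# toward a 20-degree target.
def comfort(temp, action):
    new_temp = temp + {"heat": 1, "cool": -1, "idle": 0}[action]
    return -abs(20 - new_temp)  # closer to the target is better

print(agent(lambda: 17, ["heat", "cool", "idle"], comfort))  # heat
```

Even this toy matches the definition used below: a system that perceives its environment and takes actions that maximize its chances of success.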

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines."[4] AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. There are subfields focused on the solution of specific problems, on one of several possible approaches, on the use of widely differing tools, or on the accomplishment of particular applications. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long-term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a

machine.[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of optimism,[10] but has also suffered setbacks[11] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.[12]

History
Main articles: History of artificial intelligence and Timeline of artificial intelligence
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshipped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had been created by Jābir ibn Ḥayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence. Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others.
Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[20] The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most people, simply astonishing:[23] computers were solving word problems in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[27] They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called the "AI winter".[29]

In the early 1980s, AI research was revived by the commercial success of expert systems,[30] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[31] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.[32] In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[33] On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[34] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[35] Two years later, a team from CMU won the DARPA Urban Challenge when their vehicle autonomously navigated 55 miles in an urban environment while avoiding traffic hazards and obeying all traffic laws.[36] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[37] The leading-edge definition of artificial intelligence research is changing over time.
One pragmatic definition is: "AI research is that which computing scientists do not know how to do cost-effectively today." For example, in 1956 optical character recognition (OCR) was considered AI, but today sophisticated OCR software with a context-sensitive spell checker and grammar checker comes free with most image scanners. No one would any longer consider already-solved computing-science problems like OCR to be "artificial intelligence". Low-cost entertaining chess-playing software is commonly available for tablet computers. DARPA no longer provides significant funding for chess-playing computing system development. The Kinect, which provides a 3D body-motion interface for the Xbox 360, uses algorithms that emerged from lengthy AI research,[38] but few consumers realize the technology source. AI applications are no longer the exclusive domain of U.S. Department of Defense R&D, but are now commonplace consumer items and inexpensive intelligent toys. In common usage, the term "AI" no longer seems to apply to off-the-shelf solved computing-science problems, which may have originally emerged out of years of AI research.

Problems
The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.[6]

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[39] By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[40] For difficult problems, most of these algorithms can require enormous computational resources; most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.[41] Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model.[42] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Main articles: Knowledge representation and Commonsense knowledge
Knowledge representation[43] and knowledge engineering[44] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47] knowledge about knowledge (what we know about what other people know);[48] and many other, less well researched domains. A representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.[49] Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[50] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[51]
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; they must be built, by hand, one complicated concept at a time.[52] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.[citation needed]
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[53] or an art critic can take one look at a statue and instantly realize that it is a fake.[54] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[55] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge.
As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.[55]

Planning
Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[57] In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be.[58] However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and change its plan as this becomes necessary, which requires the agent to reason under uncertainty.[59] Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Learning
Main article: Machine learning
Machine learning[61] has been central to AI research from the beginning.[62] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine

learning: "An Inductive Inference Machine".[63] Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]

Natural language processing

A parse tree represents the syntactic structure of a sentence according to some formal grammar.
Main article: Natural language processing
Natural language processing[66] gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as Internet texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[67] A common method of processing and extracting meaning from natural language is semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.

Motion and manipulation
Main article: Robotics
The field of robotics[68] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[69] and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of

the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion, where the robot moves while maintaining physical contact with an object).[70][71]

Perception
Main articles: Machine perception, Computer vision, and Speech recognition
Machine perception[72] is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision[73] is the ability to analyze visual input. A few selected subproblems are speech recognition,[74] facial recognition and object recognition.[75]

Social intelligence
Main article: Affective computing

Kismet, a robot with rudimentary social skills[76]
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects.[77][78] It is an interdisciplinary field spanning computer science, psychology, and cognitive science.[79] While the origins of the field may be traced as far back as early philosophical enquiries into emotion,[80] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[81] on affective computing.[82][83] A motivation for the research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response to those emotions. Emotion and social skills[84] play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions, even if it does not actually experience them itself, in order to appear sensitive to the emotional dynamics of human interaction.

Creativity

Main article: Computational creativity
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are artificial intuition and artificial imagination.[citation needed]

General intelligence
Main articles: Strong AI and AI-complete
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[85][86] Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[87]

Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[88] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[89] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)?
Or does it necessarily require solving a large number of completely unrelated problems?[90] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[91] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[92] a term which has since been adopted by some non-GOFAI researchers.[93][94]

Cybernetics and brain simulation
Main articles: Cybernetics and Computational neuroscience
In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[20] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

Main article: GOFAI
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old-fashioned AI" or "GOFAI".[95] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[96] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.
This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[97][98]
Logic-based
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[89] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[99] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.[100]
"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert)[101] found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[90] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[102]
Knowledge-based
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[103] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[30] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.
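Expert systems of the kind described above were, at their core, collections of if-then rules applied repeatedly to a set of known facts. A minimal forward-chaining sketch (with invented rules; real systems held thousands) might look like:

```python
# Toy forward-chaining rule engine in the spirit of 1970s expert systems.
# Each rule maps a set of premises to a conclusion; rules fire until no
# new fact can be derived. The medical rules are invented examples.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]
print(forward_chain({"has_fever", "has_rash"}, rules))
```

The "knowledge revolution" observation holds even at this scale: the engine itself is trivial, and all of the system's competence lives in the hand-built rules.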

Sub-symbolic
By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[91]
Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[104] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
Computational intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s.[105] These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.[106]
Statistical
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).
Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."[33] Critics argue that these techniques are too focused on particular problems and have failed to address the long-term goal of general intelligence.[107]

Integrating the approaches
Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic neural networks, and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields, such as decision theory and economics, that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[2]

Agent architectures and cognitive architectures

Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[108] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[109] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.[110]

Tools
In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization
Main articles: Search algorithm, Mathematical optimization, and Evolutionary computation
Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[111] reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[112] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[113] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[69] Many learning algorithms use search algorithms based on optimization. Simple exhaustive searches[114] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes.
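The combinatorial explosion is easy to demonstrate on a toy routing problem: with the start city fixed, a brute-force search over n cities must examine (n-1)! orderings, while a simple "rule of thumb" (always hop to the nearest unvisited city) examines only O(n^2) distances, at the cost of possibly missing the optimum. The cities and coordinates below are invented for illustration.

```python
import math
from itertools import permutations

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6)}

def dist(p, q):
    return math.dist(cities[p], cities[q])

def tour_length(order):
    return sum(dist(a, b) for a, b in zip(order, order[1:]))

# Exhaustive search: examine every ordering that starts at "A".
best = min((p for p in permutations(cities) if p[0] == "A"),
           key=tour_length)

# Heuristic search: greedily hop to the nearest unvisited city.
def greedy(start):
    order, left = [start], set(cities) - {start}
    while left:
        nxt = min(left, key=lambda c: dist(order[-1], c))
        order.append(nxt)
        left.remove(nxt)
    return tuple(order)

print(best, greedy("A"))  # on this tiny instance both find the same tour
```

With four cities the exhaustive search is instant; at thirty cities it would face 29! orderings, which is why practical systems fall back on heuristics and approximations.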
The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[115] A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[116] Evolutionary computation uses a form of optimization search. For example, an evolutionary search may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[117] and evolutionary algorithms (such as genetic algorithms and genetic programming).[118] Logic

Main articles: Logic programming and Automated reasoning Logic[119] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[120] and inductive logic programming is a method for learning.[121] Several different forms of logic are used in AI research. Propositional or sentential logic[122] is the logic of statements which can be true or false. First-order logic[123] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[124] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[125] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence. Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[45] situation calculus, event calculus and fluent calculus (for representing events and time);[46] causal calculus;[47] belief calculus; and modal logics.[48] Probabilistic methods for uncertain reasoning Main articles: Bayesian network, Hidden Markov model, Kalman filter, Decision theory, and Utility theory Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information.
AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[126] Bayesian networks[127] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[128] learning (using the expectation-maximization algorithm),[129] planning (using decision networks)[130] and perception (using dynamic Bayesian networks).[131] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[131] A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[132] and information value theory.[57] These tools include models such as Markov decision processes,[133] dynamic decision networks,[131] game theory and mechanism design.[134] Classifiers and statistical learning methods

Main articles: Classifier (mathematics), Statistical classification, and Machine learning The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[135] A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[136] kernel methods such as the support vector machine,[137] k-nearest neighbor algorithm,[138] Gaussian mixture model,[139] naive Bayes classifier,[140] and decision tree.[141] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.[142] Neural networks Main articles: Neural network and Connectionism

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain. The study of artificial neural networks[136] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.[143] The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular

feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[144] Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982.[145] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning.[146] Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[147] Control theory Main article: Intelligent control Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[148] Languages Main article: List of programming languages for artificial intelligence AI researchers have developed several specialized languages for AI research, including Lisp[149] and Prolog.[150] Evaluating progress Main article: Progress in artificial intelligence In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[151] Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[152] One classification for outcomes of an AI test is:[153] Optimal: it is not possible to perform better. Strong super-human: performs better than all humans. Super-human: performs better than most humans. Sub-human: performs worse than most humans.
For example, performance at draughts is optimal,[154] performance at chess is super-human and nearing strong super-human (see computer chess: computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.
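Returning to the neural networks above: Rosenblatt's perceptron, the simplest feedforward network, can be sketched in a few lines. This is a hedged illustration rather than a production implementation; the training task (logical AND) and the learning rate are chosen only for the example.

```python
def perceptron_train(data, epochs=20, lr=0.1):
    """Rosenblatt's perceptron learning rule for linearly separable data.
    data: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in data:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y                      # 0 when the prediction is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a classic linearly separable task
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The perceptron convergence theorem guarantees that this rule finds a separating weight vector whenever one exists; for non-separable data (such as XOR) a single perceptron cannot succeed, which is what motivated multi-layer networks and backpropagation.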

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late nineties, with intelligence tests devised using notions from Kolmogorov complexity and data compression.[155] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
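The intuition behind compression-based tests is that regular, predictable data compresses well while random data does not, so compressibility serves as a proxy for discoverable structure. A toy illustration of that intuition using Python's standard zlib module (the data sequences are invented for the example):

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Ratio of compressed size to original size; lower means more regularity."""
    return len(zlib.compress(data)) / len(data)

structured = b"ab" * 5000                                 # a highly regular sequence
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(10000))   # essentially incompressible

r_structured = compressibility(structured)   # far below 1: pattern found
r_noisy = compressibility(noisy)             # near (or above) 1: no pattern
```

An agent that can model a stream well can compress it; tests in this family score an agent by how much regularity it can exploit, without needing a human judge.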

Branches of AI Q. What are the branches of AI? A. Here's a list, but some branches are surely missing, because no one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches. logical AI What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text. search AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains. pattern recognition When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most. representation Facts about the world have to be represented in some way. Usually languages of mathematical logic are used. inference

From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning in which a conclusion is to be inferred by default, but the conclusion can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning. common sense knowledge and reasoning This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts. learning from experience Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information. planning Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal.
In the most common cases, the strategy is just a sequence of actions. epistemology This is the study of the kinds of knowledge that are required for solving problems in the world. ontology Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s. heuristics A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
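The heuristic-function idea can be made concrete with a small sketch: greedy best-first search on a grid, which always expands whichever frontier node a Manhattan-distance heuristic judges closest to the goal. The grid, coordinates and passability test are a made-up example.

```python
import heapq

def manhattan(node, goal):
    """Admissible distance estimate for a 4-connected grid."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def greedy_best_first(start, goal, passable):
    """Expand the node whose heuristic value looks closest to the goal first."""
    frontier = [(manhattan(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt not in seen and passable(nxt):
                seen.add(nxt)
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt, path + [nxt]))
    return None  # goal unreachable

# Hypothetical 5x5 open grid: every in-bounds cell is passable
inside = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5
path = greedy_best_first((0, 0), (4, 4), inside)
```

On an open grid the heuristic leads the search straight to the goal; with obstacles, greedy best-first can be misled, which is why algorithms such as A* combine the heuristic with the cost already paid.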

genetic programming Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It is being developed by John Koza's group. Applications of AI Q. What are the applications of AI? A. Here are some. game playing You can buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation--looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second. speech recognition In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient. understanding natural language Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains. computer vision The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
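The brute-force game search mentioned above rests on the minimax principle: assume the opponent also plays optimally, and back leaf values up the game tree. A minimal sketch follows; the two-ply toy game tree and its payoffs are invented purely for illustration.

```python
def minimax(state, maximizing, children, score):
    """Value of `state` with both sides playing optimally.
    `children(state)` yields successor states; `score` evaluates leaves."""
    kids = children(state)
    if not kids:
        return score(state)                 # leaf: use the static evaluation
    values = [minimax(k, not maximizing, children, score) for k in kids]
    return max(values) if maximizing else min(values)

# Toy two-ply game: leaves carry payoffs for the maximizing player
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
value = minimax("root", True, lambda s: tree.get(s, []), leaf_scores.get)
```

Here the maximizer prefers move "a": although "b" leads to the biggest leaf (9), the minimizing opponent would steer play to 2, whereas "a" guarantees 3. Real chess programs add depth cutoffs, evaluation heuristics and alpha-beta pruning to make this tractable.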
expert systems A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or

practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense. heuristic classification One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment). What is so special about GIS? There are several descriptions of GIS: A GIS is a powerful set of tools for collecting, storing, retrieving at will, transforming and displaying spatial data from the real world (Burrough, 1986). A system for capturing, checking, manipulating, analyzing and displaying data which are spatially referenced to the Earth (Department of the Environment, 1987). A Geographic Information System is a decision support system that integrates spatially referenced data in a problem-solving environment, i.e. application (Grupe, 1990). The total of actions and tools that, in relation to spatial questions, lead to the supply of relevant information for performing tasks and taking decisions (translated from Scholten, 1991).
The differences between the above descriptions are considerable and show that the field of GIS is broad and complex. GEOGRAPHICAL INFORMATION SYSTEMS (GIS) In the past twenty-five years, a host of professions have been in the process of developing automated tools for efficient storage, analysis and presentation of geographic data. These efforts have apparently been the result of increasing demands by users for data and information of a spatial nature. This rapidly evolving technology has come to be known as Geographic Information Systems (GIS). A geographic information system goes beyond description; it also includes analysis, modeling, and prediction. According to the Environmental Systems Research Institute (ESRI), a GIS is defined as an organized collection of computer hardware, application software, geographic data, and personnel designed to

efficiently capture, store, update, manipulate, analyze, and display all forms of geographically referenced information. Kang Tsung Chang describes GIS as a computer system for capturing, storing, querying, analyzing and displaying geographically referenced data. GIS is essentially a marriage between computerized mapping and database management systems. Thus, a GIS is both a database system with specific capabilities for spatially referenced data and a set of operations for working with the data. Geographically referenced data separates GIS from other information systems. Let us take the example of a road. To describe a road, we refer to its location (i.e. where it is) and its characteristics (length, name, speed limit etc.). The location, also called geometry or shape, represents spatial data, whereas the characteristics are attribute data. Thus, geographically referenced data has two components: spatial data and attribute data. Spatial Data: Describes the location of spatial features, which may be discrete or continuous. Discrete features are individually distinguishable features that don't exist between observations. Discrete features include points (wells), lines (roads) and areas (land-use types). Continuous features are features that exist spatially between observations (elevation and precipitation). A GIS represents these spatial features on a plane surface. This transformation involves two main issues: the spatial reference system and the data model. Attribute Data: Describes the characteristics of spatial features. For raster data, each cell value corresponds to the attribute of the spatial feature at that location; a cell is tightly bound to a cell value. For vector data, the amount of attribute data associated with a spatial feature can vary significantly. The coordinate location of a land parcel would be spatial data, while its characteristics, e.g. area, owner name, vacant/built-up, land use etc., would be attribute data.
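The two components of geographically referenced data can be illustrated with a minimal vector feature record, loosely modeled on the GeoJSON convention. The road name, coordinates and attribute values are hypothetical.

```python
# A vector feature pairs spatial data (geometry) with attribute data
# (properties), loosely following the GeoJSON convention.
road = {
    "type": "Feature",
    "geometry": {                 # spatial data: where the road is
        "type": "LineString",
        "coordinates": [(73.0479, 33.6844), (73.0550, 33.6900)],
    },
    "properties": {               # attribute data: what the road is like
        "name": "Jinnah Avenue",
        "length_km": 1.2,
        "speed_limit_kmh": 60,
    },
}
```

Queries on geometry ("which roads cross this district?") and queries on attributes ("which roads have a speed limit above 50 km/h?") can then be combined, which is precisely the capability that distinguishes a GIS from an ordinary database.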
GIS APPLICATIONS IN CONSTRUCTION INDUSTRY GIS applications have proliferated in the construction industry in recent years. This fact is illustrated by the growing number of articles finding their way into civil engineering and construction journals and conference proceedings, in addition to the handful of special publications devoted to GIS (Oloufa et al. 1994). GIS can be used for:
Progress monitoring system in construction
Networking solutions
3-D data analysis
Site location and client distance
Comparison of data
Construction scheduling and progress control with 3-D visualization
Government regulations

The Pakistan Bureau of Statistics (PBS) Monday introduced a Geographical Information System (GIS) Laboratory here at its newly established Statistics House, with the objective of bringing credibility and transparency to the overall census data collection system and making it on par with international standards. The PBS has already established four such GIS laboratories in the provincial capitals Karachi, Quetta, Lahore and Peshawar, and all these labs are already functional. These labs have been established in collaboration with UNFPA, the United Nations Population Fund, and UN-HABITAT, the UN agency promoting sustainable urbanization and clean water access. The lab in the federal capital was formally inaugurated here on Monday by the Secretary of the Statistics Division, Suhail Ahmed, who was accompanied by representatives of UNFPA and UN-HABITAT, the Chief Census Commissioner and other officials of the PBS. "The GIS Laboratory would certainly bring international-standard credibility and transparency in census data," Suhail Ahmed said while addressing the inaugural ceremony. He said that the GIS would not only help the country adopt the latest technology but would also enable it to use it to optimum benefit. He said that the GIS project team has worked vigorously to make it a success and that there was a need to sustain it for the benefit of the country. It is pertinent to mention here that GIS captures, stores, analyzes, manages and presents data that is linked to location. Technically, GIS stands for geographic information systems, which include mapping software and its application with remote sensing, land surveying, aerial photography, mathematics, photogrammetry, geography and tools that can be implemented with GIS software. GIS is the merging of graphic map entities and databases, and it allows users to view, understand, question, interpret and visualize data in many ways that reveal relationships, patterns and trends in the form of maps, globes, reports and charts.
The system also helps answer questions and solve problems by looking at your data in a way that is quickly understood and easily shared. GIS technology can be integrated into any enterprise information system framework, and consumer users would likely be familiar with applications for finding their required information. The tool can be used for census purposes by preparing census maps with geo-reference and scale and delimiting census areas by remote sensing; GPS-collected field data can be used with GIS for more cutting-edge analysis. In addition, the system can be helpful for municipalities and local self-government, public health engineering, town planning, government administration, land revenue and land records, education, telecommunication, forestry, crime control and law, disaster management and agriculture. The UNFPA representative also spoke on the occasion, saying the system would help ensure transparency and accuracy in data collection.

Speaking on the occasion, the UN-HABITAT representative said that 5 GIS labs have been established in the country and that the agency would also cooperate in establishing GIS in Gilgit Baltistan, Azad Kashmir, Sukkur and Multan as and when funds are available. He said that the agency would also provide capacity-building training to PBS and Population officials to enable them to use the latest technology and acquire the desired results for the country's benefit. What is GIS? A geographic information system (GIS) integrates hardware, software, and data for capturing, managing, analyzing, and displaying all forms of geographically referenced information. GIS allows us to view, understand, question, interpret, and visualize data in many ways that reveal relationships, patterns, and trends in the form of maps, globes, reports, and charts. A GIS helps you answer questions and solve problems by looking at your data in a way that is quickly understood and easily shared. GIS technology can be integrated into any enterprise information system framework. Top Five Benefits of GIS GIS benefits organizations of all sizes and in almost every industry. There is a growing awareness of the economic and strategic value of GIS. The benefits of GIS generally fall into five basic categories:
Cost Savings and Increased Efficiency
Better Decision Making
Improved Communication
Better Recordkeeping
Managing Geographically
The Geographic Approach Geography is the science of our world. Coupled with GIS, geography is helping us to better understand the earth and apply geographic knowledge to a host of human activities. The outcome is the emergence of The Geographic Approach, a new way of thinking and problem solving that integrates geographic information into how we understand and manage our planet. This approach allows us to create geographic knowledge by measuring the earth, organizing this data, and analyzing and modeling various processes and their relationships.
The Geographic Approach also allows us to apply this knowledge to the way we design, plan, and change our world. Step 1: Ask Step 2: Acquire

Step 3: Examine Step 4: Analyze Step 5: Act

Ask: Frame the Question Approaching a problem geographically involves framing the question from a location-based perspective. What is the problem you are trying to solve or analyze, and where is it located? Being as specific as possible about the question you're trying to answer will help you with the later stages of The Geographic Approach, when you're faced with deciding how to structure the analysis, which analytic methods to use, and how to present the results to the target audience.

Acquire: Find Data After clearly defining the problem, it is necessary to determine the data needed to complete your analysis and ascertain where that data can be found or generated. The type of data and the geographic scope of your project will help direct your methods of collecting data and conducting the analysis. If the method of analysis requires detailed and/or high-level information, it may be necessary to create or calculate new data. Creating new data may simply mean calculating new values in the data table or obtaining new map layers or attributes, but may also require geoprocessing.

Examine the Data You will not know for certain whether the data you have acquired is appropriate for your study until you thoroughly examine it. This includes visual inspection, as well as investigating how the data is organized (its schema), how well the data corresponds to other datasets and the rules of the physical world (its topology), and the story of where the data came from (its metadata).

Analyze the Data The data is processed and analyzed based on the method of examination or analysis you choose, which is dependent on the results you hope to achieve. Do not underestimate the power of "eyeballing" the data. Looking at the results can help you decide whether the information is valid or useful, or whether you should rerun the analysis using different parameters or even a different method. GIS modeling tools make it relatively easy to make these changes and create new output.
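One common analysis of this kind is a proximity query: finding all features within a set distance of a location. A minimal sketch in Python follows; the coordinates, field names and 1,000-unit threshold are hypothetical, and planar (projected) coordinates are assumed so that straight-line distance is meaningful.

```python
import math

def within_distance(point, features, max_dist):
    """Return the features whose location lies within max_dist of point.
    Assumes planar coordinates (e.g., feet in a projected system)."""
    return [f for f in features if math.dist(f["xy"], point) <= max_dist]

# Hypothetical site and nearby features, coordinates in feet
site = (0.0, 0.0)
features = [{"id": 1, "xy": (300.0, 400.0)},   # 500 ft away
            {"id": 2, "xy": (900.0, 800.0)}]   # about 1204 ft away
near = within_distance(site, features, 1000)
```

Real GIS packages perform the same operation with spatial indexes and buffer geometries so it scales to millions of features, but the underlying question, "what falls within this distance?", is exactly this test.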

Act: Share Your Results The results and presentation of the analysis are important parts of The Geographic Approach. The results can be shared through reports, maps, tables, and charts and delivered in printed form or digitally over a network or on the Web. You need to decide on the best means for presenting your analysis. You can compare the results from different analyses and see which method presents the information most accurately. And you can tailor the results for different audiences. For example, one audience might require a conventional report that summarizes the analyses and conveys recommendations or comparable alternatives. Another audience may need an interactive format that allows them to ask what-if questions or pursue additional analysis.

What Can You Do with GIS? GIS gives us a new way to look at the world around us. With GIS you can: Map Where Things Are

Map Quantities Map Densities Find What's Inside Find What's Nearby Map Change

Map Where Things Are Mapping where things are lets you find places that have the features you're looking for and see patterns. Map Quantities People map quantities to find places that meet their criteria and take action. A children's clothing company might want to find ZIP Codes with many young families with relatively high income. Public health officials might want to map the numbers of physicians per 1,000 people in each census tract to identify which areas are adequately served, and which are not. Map Densities A density map lets you measure the number of features using a uniform areal unit so you can clearly see the distribution. This is especially useful when mapping areas, such as census tracts or counties, which vary greatly in size. On maps showing the number of people per census tract, the larger tracts might have more people than smaller ones. But some smaller tracts might have more people per square mile, a higher density. Find What's Inside Use GIS to monitor what's happening and to take specific action by mapping what's inside a specific area. For example, a district attorney would monitor drug-related arrests to find out if an arrest is within 1,000 feet of a school; if so, stiffer penalties apply. Find What's Nearby GIS can help you find out what's occurring within a set distance of a feature by mapping what's nearby. Map Change Map the change in an area to anticipate future conditions, decide on a course of action, or evaluate the results of an action or policy. By mapping where and how things move over a period of time, you can gain insight into how they behave. For example, a meteorologist might study the paths of hurricanes to predict where and when they might occur in the future. DaleelTeq (Pvt) Ltd Pakistan was established in 2006

to carry forward advancement in the fields of GIS (Geographical Information System), DMS (Document Management System) and ERP solutions. DaleelTeq has offices in Saudi Arabia, Pakistan, Tunis, Sudan and Mali. DaleelTeq has vast experience in GIS-based field survey to create digital and navigable maps. We have the most enriched map of Pakistan, covering more than 50 cities. DaleelTeq specializes in developing and providing Geographical Information System (GIS) knowledge. Since the demand for geographically based decisions has grown considerably, DaleelTeq offers a range of visual geographical information services to its customers. The company employs highly professional experts with extensive experience in various fields such as GIS survey and mapping, GIS programming, application development and planning. DaleelTeq has the experience and knowledge to develop various solutions for a diversity of environments.

Our Vision
DaleelTeq's vision is:
To become the most enriched map data provider of Pakistan.
To become a global leader providing excellent GIS solutions and services.
To facilitate our valued clients through superior solutions and services.

DaleelTeq envisions growth ensured by customer satisfaction and by cost-effective, on-time delivery by qualified and experienced personnel with varied project experience, coupled with state-of-the-art infrastructure. We aim to consistently achieve customer delight by focusing on value-adding activities throughout our value chain; to build effective and responsive systems and processes that underpin our business decisions and manage risks; to become an exciting organization that attracts and retains the best talent worldwide for global competitiveness; to build a strong global supply base for world-class goods and services; and to become a proactive, integral and responsible member of our environment and communities.

Mission Statement
With a professional approach to work and a team of highly qualified technical and managerial experts, we persistently strive to achieve the highest levels of client satisfaction and remain focused on contributing to growth and development worldwide. Our mission is to be a quality-conscious global player in the spatial technology industry, and to provide quality mapping services and solutions that meet the needs of each client.

Our Services

Survey & Digitizing Services
Surveys for the creation of digital maps and the collection of landmarks using GPS devices. Creation of digital and navigational maps for vehicle navigation systems. GIS services such as digitization, tracing, mosaicking, georeferencing and remote sensing, using AutoCAD, ArcGIS 9, Manifold, MapInfo, ERDAS, etc. Conversion of raster images into vectors.

OmniSTAR Subscriptions
DaleelTeq is a dealer of OmniSTAR in Pakistan and provides subscriptions to OmniSTAR services.

Vehicle Tracking Services
State-of-the-art vehicle tracking and fleet management services, with options for web-based tracking as well as command-center-based tracking. Updated high-scale, high-resolution maps covering all of Pakistan are available, including all motorways, highways and city streets, and POIs such as banks, mosques, education points, emergency points and fuel stations.

Triple Play Services
The Fiberline Triple Play Service is an end-to-end solution that integrates high-speed broadband access, IP telephony and crystal-clear digital cable channels at your home or business premises. It addresses the mass-market requirements for triple-play service delivery, providing high subscriber scale, high bandwidth throughput per subscriber and high concurrency. Fiberline is committed to providing Quality of Service (QoS) to every valued customer. The Triple Play Service provides three types of service: Internet, telephone and cable.

Wireless Internet
We provide wireless network services in Rawalpindi and Islamabad. Telephone and Internet services are available over the wireless network.

Free Internet
We provide a free dial-up Internet service.

Other Services
Data-related services such as video conferencing, vehicle tracking, DSL, hardware supply and installation, LAN/WAN networking, fiber optics installation, and maintenance and support.

Products

Geomectis
Not long ago, the idea of a Global Village was considered inconceivable. The concept gained popularity with the evolution of the computer industry, and with the introduction of software customization and the availability of precise satellite maps, professionals began turning this impossibility into a real-world application. GIS Solution is another profound web-based effort in this regard, in which computer systems are used not only for finding destinations but also for covering information on almost all popular places. Users log on to our application to access a true geographic information system of the entire world. Your employees, customers, and partners can remove geographical boundaries, strengthen business relationships and reduce expenses by using these Internet-based solutions to collaborate and access online services.

GIS Products
Trimble Mapping & GIS products
Daleel Track & Trace (vehicle tracking system)
GIS-based decision support system
Daleel MAP/GIS Framework - our own SDK for map management system development
Daleel City Explorers - vector-based map viewing software
Asset tracking solution (AVL)
Hajj Pilgrims Monitoring System
Entreprise Municipal Suite
Bravo ICT Management
Route optimization solutions
Navigation solutions
Customized GIS applications
GIS-based software for waste management systems

Office Automation Products
Banking solutions (with online and eBanking)
Correspondence Management System
RxDoX - Medical Imaging
Litigation Document Management System

Other Products
Document Management & Archiving Solution - eDoX
Enterprise Contents Management System - eOffice
Enterprise BPM & Workflow Management System - eOffice
Digital Library Management Solution - eDoX
Correspondence Management System - eDoX
Electronic Fax Management - eFaX
Ultimate Learning Solution - eCollege
Front-end applications to the archiving solutions

Software Development

Our main strength lies in customized software solutions and software conversion projects. We specialize in RDBMS, client/server technology, object-oriented technology, and Internet, intranet and extranet applications. For us these technologies represent general, logical models which can be applied to a wide range of applications. RDBMS work takes up a large portion of our client/server technology efforts, mostly as part of Internet, intranet and extranet based applications. Currently we are working on DMS, GIS and ERP based software solutions. It is not always feasible to buy off-the-shelf packages and put your business at risk: there can be hidden shortcuts and glitches, and the longer you use non-customized software packages without proper training and technical support, the more constraints you will feel. DaleelTeq offers a unique program to customize in-house developed modules to suit your business requirements completely. These modules are built not only for your current size or business setup, but will keep growing with your environment. We have been designing and developing business modules including Accounts Payable, Accounts Receivable, Inventory, Payroll/Personnel, Costing and Project Management. With our solid company resources and commitment to designing solutions that work for any size of business, you can always be confident that we are the business problem solvers. Our software division has experience in a variety of application areas such as:

Banking & Finance, Telecommunications, Education Institutions, Manufacturing & Construction, Municipal & Rural Affairs, Petrochemical, Healthcare, Transportation, Food Sector, Tourism

Tools & Technologies
Operating Systems: MS Windows XP/Vista, Linux, Fedora
Databases: MS SQL Server, Oracle, IBM DB/2, MySQL
Programming Languages: .Net (C#, VB.Net), Visual C++, Visual Basic, Java, ASP, Cold Fusion

Business Solutions

Whatever your business needs, we provide integrated e-business solutions to help you and the people who work for you, maximizing overall efficiency. Our products and services make it easier for your team to do business with your customers, vendors, and partners in a persistently evolving environment. DaleelTeq applications address the following business needs:

DMS Solution - The Archiving Power
Our Document Management System for complete paperless office automation is available in online and desktop versions, comprising a customizable workflow and archival system. It automates the essential functions of organizations from small scale to enterprise level and maximizes your staff's time, letting your team archive data in an organized manner and retrieve it efficiently, thereby increasing customer satisfaction and helping turn your service operations into a profit center. A broad range of flexible, customizable archiving, searching and reporting options, from advanced consolidation analysis to simple reporting requests, helps decision-makers and workers across your company transform data into valuable information.

Vehicle Tracking System

Overview
The GPS AVL Unit is a general-purpose tracking device that adds security and tracking capability to assets such as cars, trucks, trailers, containers, trains and buses, serving police departments, delivery services and even concerned parents of teenage children. Users can track their vehicles in real time over the Internet or an intranet from home or office. This integrated system, with both a web-based and a local command centre, is very useful for organizations maintaining a number of vehicles or a fleet, especially for field duties. The GSM-GPRS/GPS vehicle tracker system is self-reliant, compact, reliable, affordable and easy to use. Its small size makes it easy to install and access.
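The brochure does not specify the tracker's wire protocol; most GPS receivers, however, emit standard NMEA 0183 sentences, so as an illustrative sketch (not DaleelTeq's actual implementation), here is a minimal parser for the common $GPRMC sentence, which carries position, fix validity and speed:

```python
def nmea_to_decimal(value, hemisphere):
    """Convert NMEA ddmm.mmmm / dddmm.mmmm notation to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[:dot - 2])   # everything before the minutes field
    minutes = float(value[dot - 2:])   # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gprmc(sentence):
    """Return (lat, lon, speed_knots) from a $GPRMC sentence, or None if no fix."""
    fields = sentence.split("*")[0].split(",")  # drop the checksum suffix
    if fields[2] != "A":                        # "A" = valid fix, "V" = void
        return None
    lat = nmea_to_decimal(fields[3], fields[4])
    lon = nmea_to_decimal(fields[5], fields[6])
    return lat, lon, float(fields[7])
```

For example, the textbook sentence `$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A` decodes to roughly 48.1173° N, 11.5167° E at 22.4 knots; a production parser would also verify the checksum.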

General Features
Optimized for vehicle security systems. Easy installation, similar to existing car alarm systems. Geo-fencing status. GSM-SMS and GPRS connectivity. Multiple I/Os for sensor integration, such as fuel level and distance travelled indications, used for report generation and vehicle maintenance. Battery backup.

Command Center Software Features
Web-based monitoring. Dynamic map loading options. Layers control. Map display features such as zoom, pan and measure distance. Locate a vehicle. Vehicle group management. Geo-fencing.
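The geo-fencing feature mentioned above reduces to a point-in-polygon test: is the vehicle's last reported position inside the fence boundary? A minimal ray-casting sketch, with hypothetical depot coordinates (real systems typically add buffers and projected coordinates):

```python
def inside_geofence(point, fence):
    """Ray-casting point-in-polygon test. `fence` is a list of (lat, lon) vertices."""
    lat, lon = point
    inside = False
    j = len(fence) - 1
    for i in range(len(fence)):
        lat_i, lon_i = fence[i]
        lat_j, lon_j = fence[j]
        # Does edge (i, j) straddle the point's longitude, and does a
        # ray cast from the point cross it?
        if (lon_i > lon) != (lon_j > lon):
            lat_cross = (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i
            if lat < lat_cross:
                inside = not inside
        j = i
    return inside

# Hypothetical rectangular fence around a vehicle depot.
depot = [(33.60, 73.00), (33.60, 73.10), (33.70, 73.10), (33.70, 73.00)]
```

An alert engine would run this check on every incoming position report and raise an event when the result changes (fence entry or exit).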

Automatic Vehicle Locator

Architecture
Web-based interface for office users and field staff of NHA.


Company Profile

DaleelTeq Private Limited is an organization established in Pakistan that is striving to carry on advancements in the fields of DMS (Document Management System), GIS (Geographical Information System), ERP (Enterprise Resource Planning) and network-based solutions. The company is also focusing on the development of a VTS (Vehicle Tracking System) for Pakistan, which requires profound field surveys of the entire country. The application will be based on digital and navigational maps and will certainly be helpful for all concerned users.

Latest News

DaleelTeq appointed as Trimble's MGIS Dealer for Pakistan.

Data clause license for AJK Regions awarded to DaleelTeq.

DaleelTeq prepared navigational maps of all highways and motorways of Pakistan.

DaleelTeq launched its fleet management solution "Track & Trace" for managing company vehicle fleets. Fleet management covers motor vehicles such as cars, vans and trucks, and can include a range of functions such as vehicle tracking and monitoring, security, mileage and maintenance.

Trimble Mapping & GIS - Products & Services

Handheld Computers with GNSS
GNSS Receivers
Handheld Computers
Software
Reference Stations
Trimble VRS
Trimble Product Comparison Table

Handheld Computers with GNSS

GeoExplorer 6000 series
Introducing the new GeoXH handheld. The Trimble GeoExplorer 6000 series takes GNSS productivity to a whole new level. Bringing together the essential functionality for high-accuracy field work in one device, the GeoXH handheld delivers real-time decimeter (10 cm / 4 inch) accuracy positioning, high-quality photo capture, and integrated Internet connectivity options. Together with the latest field software enhancements and GNSS innovations, including Trimble Floodlight satellite shadow reduction technology, the GeoXH handheld establishes a new standard for GNSS system performance and handheld data capture.


Features & Benefits

Technical Specs
Designed for work. For utility companies, municipalities, environmental management agencies, and many other organisations, timely, accurate information is paramount to good decision making. The GeoXH handheld is the best high-accuracy hardware platform for any organization needing to map information at decimeter (10 cm / 4 inch) accuracy.

In conjunction with a Trimble Mapping & GIS field software application or a custom application developed by a Trimble Mapping & GIS Business Partner, the GeoXH handheld is the ideal platform for: Utilities: Underground asset mapping and inspection, water network modelling, as-built mapping of lines/cable installations, and incident/outage reporting. Local government: High-density urban asset mapping, underground asset relocation, municipal asset inventory and inspection. Environmental management: Weed management, water debris management, pollution mapping, environmental incident mapping, sample gathering, agricultural subsidy determination. The GeoXH handheld is an ideal solution in any industry requiring a mobile decimeter accuracy mapping data collection and maintenance solution. To discuss the opportunities for a customised high accuracy data collection or maintenance solution for your industry using the Trimble GeoExplorer 6000 series GeoXH handheld, contact your local Trimble reseller.

Introducing the new GeoXT handheld. The Trimble GeoExplorer 6000 series takes GNSS productivity to a whole new level. Combining submeter accuracy GNSS, high-quality photo capture, wireless Internet, and connectivity options in a single product, the GeoXT handheld is the ideal field device for organizations mapping critical assets and infrastructure, or for anyone needing dependable submeter accuracy GNSS data, simple operation, and repeatable results. Together with the latest field software enhancements and GNSS innovations, including Trimble Floodlight satellite shadow reduction technology, the GeoXT handheld is the ideal submeter field solution for any industry, including utility companies, local government organizations, and federal agencies.

Designed for work

For utility companies, municipalities, environmental management agencies, and many other organisations, timely, accurate information is paramount to good decision making. The GeoXT handheld is the ideal field device for organizations mapping critical assets and infrastructure, or for anyone needing dependable submeter accuracy GNSS data, simple operation, and repeatable results. In conjunction with a Trimble Mapping & GIS field software application or a custom application developed by a Trimble Mapping & GIS Business Partner, the GeoXT handheld is the ideal platform for: Utilities: asset surveys, meter inspections, incident and outage reporting. Local government: high-density urban asset mapping, municipal asset inventory and inspection. Environmental management: weed management, wildlife monitoring, pollution mapping, environmental incident mapping, sample gathering, urban forest management.

Features & Benefits

Technical Specs
220-channel GNSS receiver; submeter real-time and 50 cm postprocessed accuracy. Integrating the latest in Trimble GNSS receiver technology, with the optional ability to track both GPS and GLONASS satellites, the GeoXT handheld delivers consistent submeter accuracy in real time and 50 cm accuracy after postprocessing. Even higher levels of postprocessed accuracy are possible if GNSS carrier data is logged for extended periods.

Trimble Floodlight satellite shadow reduction
More positions and increased accuracy in tough environments. With the optional Trimble Floodlight satellite shadow reduction technology installed, the GeoXT receiver can compute positions even with very weak satellite signals. Floodlight technology increases the

number of positions that are gathered in difficult locations, and boosts accuracy in those places where normally only low-accuracy data is available. With the GeoXT handheld, field crews can now work with fewer disruptions, which means better data, faster, at less cost.

Smart Settings
Set-and-forget GNSS configuration for better results in all conditions. The GeoXT handheld is fully supported by the latest versions of Trimble field software, which include seamless out-of-the-box receiver configuration and Smart Settings. Using Smart Settings, the GNSS receiver can calculate the optimal accuracy each second without imposing strict masks, and without sacrificing productivity. There is no need to adjust settings from one location to the next; simply activate Smart Settings and let the receiver do the rest.

4.2" polarized display
Crystal-clear text, photos, and maps, even in direct sunlight. The GeoXT handheld includes a sunlight-optimized display designed specifically for outdoor operation. It maintains exceptional clarity in all outdoor conditions, including direct sunlight. Text is crisp and easy to read. Background maps and photos are rich and vibrant. At 4.2" (10.7 cm), the display is also big, so the touch panel is spacious and easier to control.

Integrated 5 megapixel autofocus camera Capture high quality photos and link directly to features A photo is often the best way to capture information about an asset, event, or site. The GeoXT handheld includes a 5 megapixel autofocus camera with geotagging capability. The camera can be controlled by the TerraSync software and other third-party applications, so photo capture and linking of images to GIS features is seamless and simple to integrate with existing data capture workflows.

Integrated Wi-Fi, Bluetooth, and optional 3.5G cellular modem

Work online, anywhere, cable-free With the GeoXT handheld, wireless connectivity options including cellular, Wi-Fi, and Bluetooth technology ensure that field workers can remain in contact with the office and each other, even from remote locations. An optional integrated 3.5G cellular modem allows continuous network and Internet access to real-time map data, web-based services, VRS corrections, and live update of field information. Bluetooth technology also enables wireless connection to other external devices such as Bluetooth-enabled laser range finders, barcode scanners, or underground pipe locators.

Lightning-fast processing power and expandable storage Work with large, complex datasets without compromising performance The GeoXT handheld is powered by a super-fast OMAP-3503 series processor and 256 MB RAM. With 2 GB of internal storage and the capacity to add an additional 32 GB via SDHC card, the GeoXT handheld has the capacity and power needed to work with high resolution maps and the most complex datasets.

Field swappable battery
More than 11 hours of operation on a single charge and swap-and-go battery replacement in the field. The Lithium-Ion battery can provide more than 11 hours of GNSS operation on a single charge and can be swapped on the go without shutting down the device, enabling near-continuous operation and minimizing field worker downtime.

Windows Mobile 6.5
A proven mobile platform with the latest features and functionality. Powered by Windows Mobile 6.5 Professional edition, the GeoXT handheld is fully compatible with the latest releases of Trimble Mapping & GIS field software as well as a large variety of third-party data collection and maintenance applications.

Designed for work
Reliable rugged design. The fully ruggedized IP65 construction is designed to withstand the harshest environments. Wherever field workers go, they can take the GeoXT handheld with confidence that the equipment can handle the toughest conditions.

The GeoXT handheld is an ideal solution in any industry requiring a mobile submeter accuracy mapping data collection and maintenance solution. To discuss the opportunities for a customised high accuracy data collection or maintenance solution for your industry using the Trimble GeoExplorer 6000 series GeoXT handheld, contact your local Trimble reseller.

GeoExplorer 3000 series

GeoExplorer 3000 Series GeoXH Handheld
For high-accuracy GIS data collection and asset relocation, the Trimble GeoXH handheld from the GeoExplorer 3000 series is the perfect integrated solution. Engineered with Trimble H-Star technology, the GeoXH handheld delivers the accuracy you need when you need it. It is ideal for electric and gas utilities, water and wastewater services, land reform projects, and other applications where on-the-spot positioning is crucial. The GeoXH handheld provides real-time subfoot (<30 cm) accuracy with the internal antenna, or decimeter (10 cm / 4 inch) accuracy after postprocessing. Decimeter accuracy can be achieved in real time with the optional Tornado external antenna. Because high accuracy positions are available in real time, you can track down buried and hidden assets with ease, and excavate cables and pipes without

wasted effort or risk of damage to nearby assets. Back-office data processing is eliminated, streamlining asset inventories and as-built mapping jobs. When you postprocess with Trimble office software you can be confident of achieving decimeter level accuracy with greater consistency at longer baselines, in tougher environments, and with shorter occupations. With a powerful 520 MHz processor, 128 MB RAM, and 1 GB of onboard storage, the GeoXH handheld is a high performance device designed to work as hard as you do. The handheld gives you all the power you need to work with maps and large data sets in the field, and its high resolution VGA display allows for crisp and clear viewing of your data.

GeoExplorer 3000 Series GeoXT Handheld
The Trimble GeoXT handheld from the GeoExplorer 3000 series is the essential tool for maintaining your GIS. A high performance GPS receiver combined with a rugged handheld computer, the GeoXT handheld is optimized to provide reliable location data, when and where you need it. It's ideal for use by utility companies, local government organizations, federal agencies, or anyone managing assets or mapping critical infrastructure who needs accurate data to do the job right the first time. With EVEREST multipath rejection technology onboard, the GeoXT handheld records quality GPS positions even under canopy, in urban canyons, and in all the everyday environments you work in, so you know your GIS has the information that others can depend on.

GeoExplorer 3000 Series GeoXM Handheld The Trimble GeoXM handheld from the GeoExplorer 3000 series is the affordable, all-in-one solution for mobile workers who need to take your GIS to the field. With a GeoXM handheld, your crews will collect reliable 1 to 3 meter GPS data for your GIS, relocating assets with confidence and fulfilling work orders efficiently.

Because the GPS receiver and antenna are built into the handheld computer, it's never been easier to use GPS in your application. Use the integrated SBAS receiver to get WAAS, EGNOS, or MSAS corrections, or use the integrated Bluetooth wireless technology to connect to a Trimble GeoBeacon receiver, to reliably navigate back to assets or to record new data to keep your GIS up to the minute.

Juno Series
Juno SA Handheld
The Juno SA handheld is a durable, compact field computer with an integrated high-yield GPS receiver, ideal for asset management and inspection applications. It is the affordable way to arm an entire data collection workforce with a reliable and accurate professional GPS handheld incorporating the industry-standard Windows Mobile 6.1 platform. The Juno SA handheld is an economical solution, ideally suited to organizations looking to equip their entire field workforce while managing strict budgets, by combining the Juno SA handheld with the required field application software at a cost-effective price point. With a 533 MHz processor, 128 MB RAM, a 3.5 inch display, and support for 10 languages, the Juno SA handheld is a powerful and versatile field tool. Key features: cost-effective solution, ideal for large deployments; industry-standard Windows Mobile 6.1 platform; high-sensitivity GPS receiver; long-life battery for all-day use; lightweight and compact.

Juno SB Handheld
Arm your crew with a durable, compact field computer that integrates a rich array of functionality, including photo capture and a high-yield GPS receiver with 2 to 5 meter positioning accuracy in real time, or 1 to 3 meter postprocessed. The Juno SB handheld is the affordable way to maximize the productivity of your entire workforce: while minimizing expenditure, you won't have to compromise on features or functionality. The Juno SB handheld includes a 533 MHz processor, 3.5 inch display, and a 3 megapixel camera, so each member of your workforce can augment their GPS information with photographs while performing GIS data collection, maintenance, and inspection activities.

Juno SC Handheld
The Juno SC handheld is a durable, lightweight field computer that integrates an array of powerful features. Providing photo capture, cellular data transmission, and a high-yield GPS receiver with 2 to 5 meter positioning accuracy in real time or 1 to 3 meter postprocessed, the Juno SC is an affordable solution that will increase the productivity of your entire mobile workforce. The integrated 3.5G HSDPA cellular modem provides high-speed Internet connectivity worldwide, so your entire field workforce will be able to quickly and reliably access the data they need in the field: work orders, map data, reference files, emails, and even the Internet. The Juno SC handheld also enables connections to networks and other devices with its integrated Bluetooth and wireless LAN capabilities.

Juno SD Handheld The Juno SD handheld is a durable, lightweight handheld that integrates an array of powerful features. Providing integrated cellular data and voice call capability, photo capture, and high yield GPS positioning, the Juno SD handheld will empower and increase the efficiency of your entire mobile workforce.

Whether you are managing critical assets, responding to emergencies, or updating your enterprise GIS, the Juno SD is the ultimate solution. The integrated 3.5G HSDPA cellular capability keeps your entire field workforce in contact with the office and the data they need. The Juno SD provides a high-speed Internet connection, enabling your team to access crucial information in the field: work orders, map data, reference files, emails, and even the Internet. Field workers can stay in touch with cellular voice capability, enabling them to relay live updates back to the office, call the office for their next job, or request assistance, a must for worker safety. The integrated camera enables field workers to store a visual record of jobs, and with the microSD card slot in the Juno SD handheld, they need never worry about running out of memory in the field.

Yuma Series
Yuma Rugged Tablet Computer
The Trimble Yuma rugged tablet computer is built to withstand even the most challenging work environment. Safeguard your software and data in the face of dust, sand, mud, humidity, and extreme temperature. Conduct inspections, collect information, capture photos, and communicate with headquarters, all with the assurance that your data is protected. Overcoming the elements presents an initial challenge, since water, dust, and dirt easily threaten the internal components of all but the most rugged of outdoor computers. The Yuma tablet features an ingress

protection rating of 67 (IP67), meaning it is sealed against dust and has been water immersion tested for 30 minutes at a depth of one meter (3.28 ft). Water and dust won't sideline the Yuma tablet. Shock, vibration, and extreme temperature fluctuation present a second level of challenges to outdoor computing. The rugged design of the Yuma tablet incorporates a solid state hard drive, eliminating internal moving parts and providing protection against stress from impact and vibration. In addition, MIL-STD-810F specifications ensure that the Yuma tablet survives bitter cold, blistering desert heat, and everything in between, even accidentally launching the Yuma tablet off the tailgate of your truck.
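The IP65 and IP67 ratings quoted for these devices follow the two-digit IEC 60529 ingress-protection code: the first digit grades protection against solids, the second against liquids. An abridged lookup sketch covering only the grades mentioned in this document:

```python
# Abridged IEC 60529 summary: first digit = solids, second digit = liquids.
SOLIDS = {"5": "dust protected", "6": "dust tight"}
LIQUIDS = {
    "5": "water jets",
    "6": "powerful water jets",
    "7": "immersion up to 1 m for 30 minutes",
}

def describe_ip(code):
    """Describe an 'IPxy' rating string such as 'IP67' or 'IP65'."""
    solids = SOLIDS.get(code[2], "unknown")
    liquids = LIQUIDS.get(code[3], "unknown")
    return f"{code}: {solids}; {liquids}"
```

So IP67 (the Yuma and Recon) decodes to "dust tight; immersion up to 1 m for 30 minutes", while IP65 (the GeoXT) decodes to "dust tight; water jets".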

Nomad G series

Nomad G series handhelds

The Trimble Nomad 900G series of integrated GPS handhelds offers all-in-one convenience in a device engineered for superior performance in harsh environments. They offer full compatibility with Trimble Mapping & GIS software and a choice of configurations to match your existing workflow. The Nomad 900G series handhelds feature a huge 6 GB of Flash storage, 128 MB of RAM, a powerful 806 MHz processor, Wi-Fi and Bluetooth wireless connectivity, a SecureDigital (SD) slot for removable cards, and a 3.5 inch (8.9 cm) VGA display. With a variety of configuration options, including a cellular modem, a 5 megapixel digital camera with integrated flash, a laser bar code scanner, and CompactFlash (CF) and USB expansion options, the Nomad 900G series provides a range of all-in-one solutions for field data collection and asset management activities.

GNSS Receivers

GPS Pathfinder Series GPS Pathfinder ProXRT Receiver Whether you need to relocate buried pipes and cables, or accurately map underground assets and critical infrastructure, the Trimble GPS Pathfinder ProXRT receiver has it all. This real-time receiver can achieve decimeter (10 cm / 4 inch) accuracy, giving you the confidence to know the job was done right while you're still on site. Combining H-Star technology, OmniSTAR support, and with the option of GLONASS

support on top of dual frequency GPS, the GPS Pathfinder ProXRT receiver is a truly versatile solution offering you the accuracy you need, worldwide. The GPS Pathfinder ProXRT receiver brings Trimble H-Star technology to the field in real time; just connect to a VRS network or a local base station correction source and you can collect decimeter to subfoot (<30 cm) positions in the field. Alternatively, OmniSTAR HP can be used to achieve real-time decimeter accuracy. The OmniSTAR antenna is integrated, so there's no need to carry any extra equipment; just purchase a subscription and wait for the over-the-air corrections. The GPS Pathfinder ProXRT receiver is also capable of using the OmniSTAR XP (20 cm accuracy) and VBS (instantaneous submeter accuracy) services.

GPS Pathfinder ProXH Receiver
The GPS Pathfinder ProXH receiver introduces a new era in GPS for GIS data collection. A GPS receiver, antenna, and all-day battery in one, the ProXH receiver delivers subfoot (<30 cm) accuracy with the revolutionary Trimble H-Star technology. And when high accuracy is critical to your application, add a Tornado antenna to your ProXH receiver for decimeter (10 cm / 4 inch) postprocessed accuracy. Bringing together advanced GPS receiver design and a powerful new postprocessing engine, H-Star technology is in a class of its own. Working together with the Trimble TerraSync software and Trimble GPScorrect extension for ESRI ArcPad software, the ProXH receiver quickly and efficiently logs the data you need to achieve subfoot accuracy. Back in the office, the GPS Pathfinder Office software or the Trimble GPS Analyst extension for ESRI ArcGIS Desktop software guides you through the H-Star correction process and displays the accuracy you've achieved.

GPS Pathfinder ProXT Receiver
Purpose-built for GIS data collection, the GPS Pathfinder ProXT receiver sets new standards for ease of use. A precision GPS receiver, antenna, and all-day battery in one, the ProXT receiver is totally cable-free, making data collection more straightforward than ever before. With an advanced design and features like EVEREST multipath rejection technology, the ProXT receiver delivers consistent, reliable accuracy, so you can work under canopy, in urban environments, or wherever accuracy is crucial. If you need to be sure of your accuracy in the field, the integrated SBAS receiver or optional GeoBeacon receiver provides submeter accuracy in real time. For the very best results, postprocessing is easy with Trimble GPS Pathfinder Office software or the Trimble GPS Analyst extension for ESRI ArcGIS Desktop software. These office processing suites use Trimble DeltaPhase technology to achieve 50 cm accuracy for GPS code measurements after postprocessing, and even higher levels of postprocessed accuracy are possible when you log GPS carrier data for extended periods.

Handheld Computers

Recon Handheld
The new-generation Trimble Recon handheld is as tough as ever. With an IP67 rating, it's impervious to water and dust, and inside the rugged casing it's packed with new connectivity options. As well as increased memory and an industry-standard, open operating system, you now have the option of built-in Bluetooth and wireless LAN. The Recon handheld features a high-performance 400 MHz processor, built-in Bluetooth wireless technology, and built-in wireless LAN. The system provides two CompactFlash (CF) slots, letting you add peripherals such as GPS cards, barcode scanners, or memory cards. You can use Bluetooth to connect wirelessly to other devices such as a laser rangefinder, a mobile phone for connection to the Internet, or Trimble's GPS Pathfinder receivers. And if you are within range of a WiFi network, the built-in wireless LAN in your Recon handheld makes it very easy to send and receive data. As soon as you arrive at a WiFi hotspot, such as your work depot, you can quickly and securely transfer large amounts of data into the network. Cellular connectivity can be added to the Recon handheld via the TDL 3G cellular modem. Connecting via wireless LAN or Bluetooth, the TDL 3G provides continuous network/internet access to real-time map data, web-based services, and live updates of field information.

Software

TerraSync Software
The Trimble TerraSync software is designed for fast and efficient field GIS data collection and maintenance. Paired with a supported Trimble GNSS receiver and field computer, it's a powerful system for the collection of high-quality feature and position data for GIS update and maintenance. The TerraSync software makes the field data collection workflow seamless by putting intelligent features such as map-centric operation, graphical status display, and the ability to record a position offset at the field worker's fingertips. The TerraSync software also makes it easy to incorporate photo capture into the data collection workflow using a Trimble handheld with an integrated camera or the Trimble TrimPix Pro system. The software also includes the ability to use a data dictionary previously created in the Trimble GPS Pathfinder Office software, based on the enterprise GIS, to preserve data integrity.

Trimble GPScorrect Extension for Esri ArcPad Software
The Trimble GPScorrect extension for Esri ArcPad software lets you take full control of your Trimble GNSS receiver and adds the power of differential correction to ArcPad. With the GPScorrect extension and ArcPad software, it's easier than ever to bring GNSS and GIS data together. The GPScorrect extension ensures that you have the most reliable and accurate data for your GIS. With postprocessed differential correction, you can improve the accuracy of your GNSS positions from 10 meters to submeter or even decimeter (10 cm / 4 inch) accuracy, depending on the environment and your GNSS receiver. For differential correction of your field data you have a choice of postprocessing software: use the Trimble GPS Analyst extension for Esri ArcGIS Desktop software for a streamlined workflow between the field and the office, or use the popular GPS Pathfinder Office software to effortlessly correct the data you collected in the field for extra precision.
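To illustrate the idea behind differential correction described above: a base station at a precisely known position measures its apparent GPS position, and the difference is an error estimate that can be removed from a rover's fix recorded at the same time. This is only a minimal sketch of the principle, not Trimble's actual algorithm; the function name and coordinates are hypothetical.

```python
# Sketch of the differential-correction principle. Coordinates are
# (x, y) tuples in a local metric grid; all values are made up.

def differential_correct(rover_fix, base_fix, base_known):
    """Subtract the base station's measured error from a rover fix."""
    error = (base_fix[0] - base_known[0], base_fix[1] - base_known[1])
    return (rover_fix[0] - error[0], rover_fix[1] - error[1])

# Example: the base reads 2.0 m east / 1.5 m north of its true
# position, so the same offset is removed from the rover's fix.
corrected = differential_correct(
    rover_fix=(500012.0, 4200008.5),
    base_fix=(500002.0, 4200001.5),
    base_known=(500000.0, 4200000.0),
)
print(corrected)  # (500010.0, 4200007.0)
```

In practice the correction is applied per satellite to the raw range measurements rather than to final coordinates, which is why receiver-specific software such as GPS Pathfinder Office is needed.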

GPS Pathfinder Office Software
The Trimble GPS Pathfinder Office software is a powerful and easy-to-use package of GNSS postprocessing tools incorporating Trimble DeltaPhase differential correction technology, designed to develop GIS information that is consistent, reliable, and accurate from GNSS data collected in the field. Postprocessing with the GPS Pathfinder Office software significantly improves the autonomous accuracy of data collected in the field, all the way down to decimeter (10 cm / 4 inch) level, depending on the environment and the GNSS receiver. Decimeter accuracy can be achieved with the GPS Pathfinder ProXH and ProXRT receivers or the GeoXH handheld, which incorporate Trimble H-Star technology. Alternatively, optimal GNSS code processing accuracy is possible with the Trimble DeltaPhase technology using a GeoXM, GeoXT, or Juno series handheld, or a ProXT receiver. Data can be imported to the GPS Pathfinder Office software from a number of GIS and database formats, allowing previously collected GIS data to be taken back to the field for verification and update. The software's Data Dictionary Editor creates custom lists of features and attributes for field data collection, and supports the development of conditional attribute data capture forms in Trimble TerraSync software that dynamically adapt to previously entered attribute values for maximum data collection efficiency.

Reference Stations

NetR9 Reference Station
The Trimble NetR9 is a highly versatile, ground-breaking GNSS reference receiver for infrastructure and network applications. A full-feature, top-of-the-line receiver with an industry-leading 440 channels for unrivaled GNSS multiple-constellation tracking performance, the Trimble NetR9 was designed to provide the network operator with maximum features and functionality from a single receiver. In addition, it can be used as a campaign receiver for postprocessing, as a Continuously Operating Reference Station (CORS) receiver or portable base station for Real-Time Kinematic (RTK) applications, and as a scientific reference station.

Virtual Reference Stations

Trimble VRS Now H-Star Service
The Trimble VRS Now H-Star service gives field workers the ability to produce real-time, decimeter-accurate positions consistently and directly from the job site. With instant access to H-Star corrections in the field and on demand, worker productivity is increased and high-accuracy mapping projects can be up and running in minutes. With only a Trimble H-Star compatible receiver (either the GeoXH 6000 series handheld or the GPS Pathfinder ProXRT receiver), a cellular connection, and a subscription to this breakthrough Trimble service, you can increase efficiencies across your organization and achieve immediate ROI.

Trimble Product Comparison Table

Copyright © 2011 DALEELTEQ :: Digital Maps of Pakistan

Overview
"In Future, There is no Decision without GIS"
DaleelTeq Pvt Ltd is an ISO-certified multinational company working in the field of GIS, offering computer-based solutions for improving situational awareness and enabling better decision making. The company's products and solutions are today used extensively in strategic and tactical applications by defense, government, and research organizations/institutions.

Maps Availability
DaleelTeq has the following up-to-date maps:

Navigational Maps for Cities of Pakistan
DaleelTeq is a pioneer in producing navigational maps. In our inventory we have highly accurate, up-to-date navigational maps for the major cities of Pakistan, and in future our target is the whole of Pakistan. These navigational maps are built on high-resolution satellite images, with the help of GPS surveys and updated resources. Within cities we have collected all the information needed for a navigation system, and our survey teams are always on the road to update these maps. Coverage includes:
Islamabad City
Rawalpindi City
Lahore City
Peshawar City
Faisalabad City
Karachi City
AJK and other cities

All Highways and Motorways of Pakistan
We have navigational maps for the motorways and highways of Pakistan. These maps include all the required navigational information and a wide range of landmarks/POIs needed for automatic navigation on the roads. Our navigational maps are built using highly accurate GPS devices and road surveys.

Daleel Digital Topographic Maps
We have developed topographic maps of Pakistan at different scales. International standard parameters are used to build this data. Digital maps are available in different layers, such as administrative boundaries, road network, rivers and water features, contours, etc.

Vehicle Tracking System

Overview
The GPS AVL unit is a general-purpose tracking device that adds security and tracking capability to assets such as cars, trucks, trailers, containers, trains, and buses, and serves police departments, delivery services, and concerned parents of teenage children. Users can track their vehicle in real time over the internet/intranet from their home or office. This integrated system, with a web-based and local command centre, is very useful for organizations that maintain a number of vehicles or a fleet, especially for field duties. The GSM-GPRS/GPS vehicle tracker is self-reliant, compact, reliable, affordable, and easy to use. Its small size makes it easy to install and access.
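A tracker like the one above typically reports standard NMEA 0183 sentences over GPRS. As a rough sketch of what the receiving server does with such a report, the following parses the latitude and longitude out of a GGA sentence; the field layout follows the NMEA GGA format, but the helper name and sample values are our own.

```python
# Minimal sketch of parsing an NMEA $GPGGA sentence into decimal
# degrees. Fields 2/4 hold ddmm.mmmm coordinates; fields 3/5 hold
# the hemisphere letters (N/S, E/W).

def parse_gga(sentence):
    """Return (lat, lon) in decimal degrees from a $GPGGA sentence."""
    fields = sentence.split(",")

    def to_deg(value, hemi):
        dot = value.index(".")
        degrees = int(value[:dot - 2])         # everything before the minutes
        minutes = float(value[dot - 2:])       # mm.mmmm part
        deg = degrees + minutes / 60.0
        return -deg if hemi in ("S", "W") else deg

    return to_deg(fields[2], fields[3]), to_deg(fields[4], fields[5])

# Hypothetical fix near Islamabad: 33° 39.000' N, 73° 03.000' E.
lat, lon = parse_gga(
    "$GPGGA,123519,3339.000,N,07303.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
)
print(lat, lon)  # 33.65 73.05
```

A production server would also validate the checksum and read the fix-quality and satellite-count fields before trusting the position.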

General Features
Optimized for vehicle security systems
Easy installation, similar to existing car alarm systems
Geo-fencing status
GSM-SMS and GPRS connectivity
Multiple I/Os for sensor integration (e.g., fuel level indication and distance travelled), used for report generation and vehicle maintenance
Battery backup

Command Centre Software Features
Web-based monitoring
Dynamic map loading options
Layer control
Map display features such as zoom, pan, and measure distance
Locate a vehicle
Vehicle group management
Geo-fencing
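The geo-fencing feature mentioned above can be sketched in a few lines: the command centre compares each incoming vehicle position against a fence and raises an alert when the vehicle leaves it. This is a minimal illustration assuming a circular fence (centre plus radius); real systems commonly support polygonal fences, and all names and coordinates here are hypothetical.

```python
# Sketch of a server-side geo-fence check using the haversine
# great-circle distance on WGS84 latitude/longitude.
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_fence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if the vehicle is within radius_m of the fence centre."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# A vehicle about 1.1 km north of a 500 m fence around a depot has
# left the fence, so the command centre would raise an alert.
print(inside_fence(33.700, 73.060, 33.690, 73.060, 500))  # False
```

Checking each position report against the fence on arrival is what turns the raw tracking feed into the "Geo Fencing status" alerts listed among the unit's features.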

Automatic Vehicle Locator

Architecture
Web-based interface for office users and the field staff of NHA.