
BIG DATA AND SP THEORY OF INTELLIGENCE

Presented By:
ATHIRA M RAJ
Roll No: 22
Semester/Branch: S7/CSE

CONTENTS
• Introduction
• SP Theory of Intelligence
• Problems of Big Data
  • Volume
  • Efficiency
  • Transmission
  • Variety
  • Veracity
  • Visualization
• A Road Map
• Conclusion
• References

INTRODUCTION
• The SP theory of intelligence can be applied to the management and analysis of big data.
• Overcomes the problem of variety in big data.
• Veracity in big data.
• Economies in the transmission of data.
• Analysis of streaming data (velocity).
• Visualization of knowledge structures and inferential processes.

SP THEORY OF INTELLIGENCE
• The SP theory is conceived as a brain-like system that receives New information and compresses it to create Old information.

• Designed to simplify and integrate concepts across artificial intelligence, mainstream computing, and human perception and cognition.
• Product of an extensive program of development and testing via the SP computer model.
• Knowledge is represented with arrays of atomic symbols in one or two dimensions, called “patterns”.
• Processing is done by compressing information:
  • via the matching and unification of patterns
  • via the building of multiple alignments

BENEFITS OF SP THEORY
• Conceptual simplicity combined with descriptive and explanatory power across several aspects of intelligence.
• Simplification of computing systems, including software.
• Deeper insights and better solutions in several areas of application.
• Seamless integration of structures and functions within and between different areas of application.

Simplification of Computing Systems

MULTIPLE ALIGNMENT
• The system aims to find multiple alignments that enable a New pattern to be encoded economically in terms of one or more Old patterns.
• Multiple alignment provides the key to:
  • Versatility in representing different kinds of knowledge.
  • Versatility in different kinds of processing in AI and mainstream computing.
(A toy sketch of pattern matching follows.)
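The deck itself contains no code, so the following is only a toy Python sketch of the underlying idea of matching a New pattern against a stored Old pattern. The function and the example patterns are invented for illustration; the real SP computer model searches among many candidate multiple alignments and scores them by information compression.

# Toy illustration only: align a New pattern with an Old pattern by
# matching symbols in order (a greedy ordered-subsequence match).
# This is NOT the SP computer model; pattern names are made up.

def align(new, old):
    """Return pairs (i, j) where new[i] is matched with old[j], in order."""
    pairs = []
    j = 0
    for i, symbol in enumerate(new):
        # scan forward in the Old pattern for the next matching symbol
        while j < len(old) and old[j] != symbol:
            j += 1
        if j == len(old):
            break
        pairs.append((i, j))
        j += 1
    return pairs

new_pattern = ["t", "h", "e", "c", "a", "t"]
old_pattern = ["D", "t", "h", "e", "#D"]   # hypothetical stored pattern

print(align(new_pattern, old_pattern))     # [(0, 1), (1, 2), (2, 3)]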

AN SP MULTIPLE ALIGNMENT

• Compression difference: CD = BN - BE
• Compression ratio: CR = BN / BE
where:
• BN is the total number of bits in those symbols in the New pattern that are aligned with Old symbols in the alignment.
• BE is the total number of bits in the symbols in the code pattern.
(A worked example follows.)
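A short worked example of these two measures, with made-up bit counts; in the SP model, BN and BE would be derived from the symbols of an actual alignment.

# Worked example of the two scores above, with assumed bit counts.
BN = 120   # bits in the New symbols aligned with Old symbols (assumed)
BE = 30    # bits in the symbols of the resulting code pattern (assumed)

CD = BN - BE        # compression difference: 90 bits saved
CR = BN / BE        # compression ratio: 4.0

print(f"CD = {CD} bits, CR = {CR:.1f}")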

BIG DATA

PROBLEMS OF BIG DATA AND SOLUTIONS
• Volume: big data is … BIG!
• Efficiency in computation and the use of energy.
• Transmission of information and the use of energy.
• Variety: in kinds of data, formats, and modes of processing.
• Unsupervised learning: discovering ‘natural’ structures in data.
• Interpretation of data: pattern recognition, reasoning.
• Velocity: analysis of streaming data.
• Veracity: errors and uncertainties in data.
• Visualization: representing structures and processes.

Volume: Making Big Data Smaller
• Very-large-scale data sets introduce many data management challenges.
• Information compression:
  • Direct benefits in storage, management, and transmission.
  • Indirect benefits: efficiency in computation and the use of energy; unsupervised learning; additional economies in transmission and the use of energy; assistance in the management of errors and uncertainties in data; processes of interpretation.
(A generic compression illustration follows.)
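As a generic illustration of the direct benefits (not the SP system's own method, which compresses via multiple alignment), a standard compressor such as Python's zlib shows how removing redundancy reduces the volume of data to be stored, managed, and transmitted. The sample data below is invented for the example.

# Generic illustration (not the SP model): removing redundancy shrinks
# the volume of data to be stored, managed, and transmitted.
import zlib

# Highly redundant sample "big data"; real data would come from files or streams.
data = b"temperature=21;humidity=40;" * 10_000

compressed = zlib.compress(data, level=9)

print(len(data), "bytes raw")
print(len(compressed), "bytes compressed")
print(f"compression ratio: {len(data) / len(compressed):.1f}x")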

Efficiency Via Reduction in volume Reducing the size of big data and size of search terms Via Probabilities Get out unnecessary searching Via a synergy with data-centric computing Close integration of data and processing 14 .

Transmission of Information
• Since so much of the energy in computing is required to move data around, we have to discover ways to move the data as little as possible.
• The SP system can increase the efficiency of transmission:
  • By making big data smaller (“Volume”).
  • By separating grammar (G) from encoding (E), as in some dictionary techniques and analysis/synthesis schemes.
• Efficiency in transmission can mean cuts in the use of energy.
(A dictionary-based analogy follows.)
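A loose analogy for separating grammar (G) from encoding (E), using zlib's preset-dictionary feature: the shared dictionary stays at both ends, and only the comparatively small encoding travels. This is only an analogy with dictionary techniques, not the SP mechanism, and the dictionary and message below are invented for the example.

# Loose analogy only (not the SP mechanism): with a shared preset dictionary,
# the "grammar" stays at both ends and only a small encoding is transmitted.
import zlib

# Shared once between sender and receiver (analogous to the grammar G).
shared_dictionary = b"temperature=;humidity=;pressure=;status=OK;"

message = b"temperature=21;humidity=40;pressure=1013;status=OK;"

# Sender: encode against the shared dictionary and transmit only the result (E).
compressor = zlib.compressobj(zdict=shared_dictionary)
encoding = compressor.compress(message) + compressor.flush()

# Receiver: rebuild the original message from E plus the shared dictionary.
decompressor = zlib.decompressobj(zdict=shared_dictionary)
restored = decompressor.decompress(encoding)

assert restored == message
print(len(message), "bytes in the original message")
print(len(encoding), "bytes transmitted as the encoding E")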

Variety of Big Data
• There are diverse kinds of data, and often several different computer formats for each kind of data.
• Adding to the complexity, each kind of data and each format normally requires its own special mode of processing.
• Although some kinds of diversity are useful, there is a case for developing a universal framework for the representation and processing of diverse kinds of knowledge (UFK).
• The SP system is a good candidate for the role of UFK because of its versatility in the representation and processing of diverse kinds of knowledge.

Veracity
• For any body of data, I, principles of minimum-length encoding provide the key.
• Aim to minimize the overall size of G and E.
• G is a distillation or ‘essence’ of I that excludes most ‘errors’ and generalizes beyond I.
• E + G is a lossless compression of I, including typos etc., but without generalizations.
• Systematic distortions remain a problem.
(A minimal sketch follows.)
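A minimal sketch of the minimum-length-encoding idea: among candidate "grammars", prefer the one that minimizes the combined size of G and E. The candidate grammars and the character-count cost measure below are assumptions made for the example; the SP system measures sizes in bits and builds G by unsupervised learning.

# Minimal sketch of minimum-length encoding (not the SP model):
# among candidate "grammars", prefer the one minimizing |G| + |E|.

def encoding_size(data, grammar):
    """Size of E: the data with each grammar entry replaced by a 1-char code."""
    encoded = data
    for index, entry in enumerate(grammar):
        encoded = encoded.replace(entry, chr(0x100 + index))  # 1-char stand-in code
    return len(encoded)

def total_cost(data, grammar):
    """|G| + |E|, counted in characters for simplicity."""
    grammar_size = sum(len(entry) for entry in grammar)
    return grammar_size + encoding_size(data, grammar)

I = "the cat sat on the mat the cat sat on the mat the cat"

candidates = [
    [],                               # no grammar: E is just I
    ["the "],                         # one small, frequent chunk
    ["the cat sat on the mat "],      # one large, repeated chunk
    ["xyz"],                          # a chunk that never occurs: pure overhead
]

for g in candidates:
    print(g, "->", total_cost(I, g))

best = min(candidates, key=lambda g: total_cost(I, g))
print("best grammar:", best)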

Interpretation of Data
• Processing I in conjunction with a pre-established grammar (G) creates a relatively compact encoding (E) of I.
• Depending on the nature of I and G, the process of interpretation may be seen to achieve:
  • Pattern recognition
  • Information retrieval
  • Parsing and production of natural language
  • Translation from one representation to another
  • Planning
  • Problem solving

Velocity: Analysis of Streaming Data
• In the context of big data, “velocity” means the analysis of streaming data as it is received.
• “This is the way humans process information.”
• This style of analysis, via unsupervised learning, is at the heart of how the SP system has been designed.

Visualizations
• The SP system is well suited to visualization for these reasons:
  • Transparency in the representation of knowledge.
  • Transparency in processing.
  • The system is designed to discover ‘natural’ structures in data.
• There is clear potential to integrate visualization with the statistical techniques that lie at the heart of how the SP system works.

A ROAD MAP
• Develop a high-parallel, open-source version of the SP machine.
• This facility would be a means for researchers everywhere to explore what can be done with the system and to create new versions of it.

CONCLUSION
• Designed to simplify and integrate concepts across artificial intelligence, mainstream computing, and human perception and cognition, the SP system has potential in the management and analysis of big data.
• The SP system has potential as a universal framework for the representation and processing of diverse kinds of knowledge (UFK), helping to reduce the problem of variety in big data: the great diversity of formalisms and formats for knowledge, and how they are processed.

REFERENCES
1. J. G. Wolff, “Big data and the SP theory of intelligence”, IEEE Access, vol. 2, pp. 301-315, 2014.
2. International Journal of Computer Engineering and Technology (IJCET), Volume 5, Issue 12, December (2014), pp. 207-213, ISSN 0976-6367, © IAEME.
3. www.cognitionresearch.org/sp.htm
