Professional Documents
Culture Documents
use encryption for all data transfers to and from storage and between services
management and positioning of intelligence
flow of data around the network joined to the utility or owner
reporting of errors to the provider
control of the network with instructions
change of ownership of an item
5.13.3.2 Reference
5.15.2 References
some physical object in the possession of the user, such as a USB stick with a secret
token, a bank card, a key, etc.
some secret known to the user, such as a password, PIN, TAN, etc.
some physical characteristic of the user (biometrics), such as a fingerprint, eye iris,
voice, typing speed, pattern in key press intervals, etc.[3]
Knowledge factors[4]
6.1.4.1.3.2 Guidance
NIST Special Publication 800-63-2 discusses various forms of two-factor authentication
and provides guidance on using them in business processes requiring different levels of
assurance.[18]
In 2005, the United States' Federal Financial Institutions Examination Council (FFIEC) issued guidance recommending that financial institutions conduct risk-based assessments, evaluate customer awareness programs, and develop security measures to reliably authenticate customers remotely accessing online financial services, officially recommending the use of authentication methods that depend on more than one factor (specifically, what a user knows, has, and is) to determine the user's identity.[19] In response to the publication, numerous authentication vendors began improperly promoting challenge questions, secret images, and other knowledge-based methods as "multi-factor" authentication. Due to the resulting confusion and widespread adoption of such methods, on August 15, 2006, the FFIEC published supplemental guidelines, which state that by definition a "true" multi-factor authentication system must use distinct instances of the three factors of authentication it had defined, and not just multiple instances of a single factor.[20]
6.1.4.1.3.3 Security
According to proponents, multi-factor authentication could drastically reduce the
incidence of online identity theft and other online fraud, because the victim's password
would no longer be enough to give a thief permanent access to their information.
However, many multi-factor authentication approaches remain vulnerable to phishing,
[21] man-in-the-browser, and man-in-the-middle attacks.[22]
Multi-factor authentication may be ineffective against modern threats, like ATM
skimming, phishing, and malware.[23]
6.1.4.1.3.4 Industry regulation
Payment Card Industry Data Security Standard (PCI-DSS)
The Payment Card Industry (PCI) Data Security Standard, requirement 8.3, requires the
use of MFA for all remote network access that originates from outside the network to a
Card Data Environment (CDE).[24] Beginning with PCI-DSS version 3.2, the use of MFA
is required for all administrative access to the CDE, even if the user is within a trusted
network.[25]
6.1.4.1.3.5 Implementation considerations
Many multi-factor authentication products require users to deploy client software to
make multi-factor authentication systems work. Some vendors have created separate
installation packages
for network login, Web access credentials and VPN connection credentials. For such
products, there may be four or five different software packages to push down to
the client PC in order to make use of the token or smart card. This translates to four or
five packages on which version control has to be performed, and four or five packages
to check for conflicts with business applications. If access can be provided through web pages, it is possible to limit the overheads outlined above to a single application. With other multi-factor authentication solutions, such as "virtual" tokens and some hardware token products, no software needs to be installed by end users.
There are drawbacks to multi-factor authentication that are keeping many approaches
from becoming widespread. Some consumers have difficulty keeping track of a
hardware token or USB plug. Many consumers do not have the technical skills needed
to install a client-side software certificate by themselves. Generally, multi-factor
solutions require additional investment for implementation and costs for maintenance.
Most hardware token-based systems are proprietary and some vendors charge an
annual fee per user. Deployment of hardware tokens is logistically challenging.
Hardware tokens may get damaged or lost and issuance of tokens in large industries
such as banking or even within large enterprises needs to be managed. In addition to
deployment costs, multi-factor authentication often carries significant additional
support costs. A 2008 survey of over 120 U.S. credit unions by the Credit Union
Journal reported on the support costs associated with two-factor authentication. In
their report, software certificates and software toolbar approaches were reported to
have the highest support costs.
6.1.4.1.3.6 Examples
Several popular web services employ multi-factor authentication, usually as an optional
feature that is deactivated by default.[26]
Two-factor authentication
Many Internet services (among them Google and Amazon AWS) use the open Time-based One-Time Password (TOTP) algorithm to support multi-factor or two-factor authentication.
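As a rough illustration, the following is a minimal sketch of the TOTP computation (RFC 6238 built on the HMAC-based one-time password of RFC 4226, using HMAC-SHA1 and 30-second time steps). The base32 secret shown is a hypothetical example; real services wrap this core in enrolment, clock-drift tolerance and rate limiting.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # number of 30-second steps so far
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    # Hypothetical shared secret, for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```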
6.1.4.2 References
1. "Two-factor authentication: What you need to know (FAQ) - CNET". CNET.
Retrieved 2015-10-31.
2. "How to extract data from an iCloud account with two-factor authentication
activated". iphonebackupextractor.com. Retrieved 2016-06-08.
3. "What is 2FA?". Retrieved 19 February 2015.
4. "Securenvoy - what is 2 factor authentication?". Retrieved April 3, 2015.
5. de Borde, Duncan. "Two-factor authentication" (PDF). Archived from the
original(PDF) on January 12, 2012.
6. van Tilborg, Henk C.A.; Jajodia, Sushil, eds. (2011). Encyclopedia of
Cryptography and Security, Volume 1. Springer Science & Business Media.
p. 1305. ISBN 9781441959058.
7. Biometrics for Identification and Authentication - Advice on Product Selection
8. http://eprint.iacr.org/2014/135.pdf
9. "Mobile Two Factor Authentication" (PDF). securenvoy.com. Retrieved August
30, 2016. 2012 copyright
10. "How Russia Works on Intercepting Messaging Apps - bellingcat". bellingcat.
2016-04-30. Retrieved 2016-04-30.
11. SSMS – A Secure SMS Messaging Protocol for the M-Payment Systems,
Proceedings of the 13th IEEE Symposium on Computers and Communications (ISCC'08),
pp. 700–705, July 2008 arXiv:1002.3171
12. Rosenblatt, Seth; Cipriani, Jason (June 15, 2015). "Two-factor authentication:
What you need to know (FAQ)". CNET. Retrieved 2016-03-17.
13. "Sound-Proof: Usable Two-Factor Authentication Based on Ambient Sound |
USENIX". www.usenix.org. Retrieved 2016-02-24.
14. US Security Directive as issued on August 12, 2007 Archived September 16,
2012, at the Wayback Machine.
15. "Frequently Asked Questions on FFIEC Guidance on Authentication in an Internet
Banking Environment", August 15, 2006[dead link]
16. "SANS Institute, Critical Control 10: Secure Configurations for Network Devices
such as Firewalls, Routers, and Switches".
17. "SANS Institute, Critical Control 12: Controlled Use of Administrative Privileges".
18. "Electronic Authentication Guide" (PDF). Special Publication 800-63-2. NIST.
2013. Retrieved 2014-11-06.
19. "FFIEC Press Release". 2005-10-12. Retrieved 2011-05-13.
20. FFIEC (2006-08-15). "Frequently Asked Questions on FFIEC Guidance on
Authentication in an Internet Banking Environment" (PDF). Retrieved 2012-01-14.
21. Brian Krebs (July 10, 2006). "Security Fix - Citibank Phish Spoofs 2-Factor
Authentication". Washington Post. Retrieved 20 September 2016.
22. Bruce Schneier (March 2005). "The Failure of Two-Factor
Authentication". Schneier on Security. Retrieved 20 September 2016.
23. "The Failure of Two-Factor Authentication - Schneier on Security". schneier.com.
Retrieved 23 October 2015.
24. "Official PCI Security Standards Council Site - Verify PCI Compliance, Download
Data Security and Credit Card Security Standards". www.pcisecuritystandards.org.
Retrieved 2016-07-25.
25. "For PCI MFA Is Now Required For Everyone | Centrify Blog". blog.centrify.com.
Retrieved 2016-07-25.
26. GORDON, WHITSON (3 September 2012). "Two-Factor Authentication: The Big
List Of Everywhere You Should Enable It Right Now". LifeHacker. Australia. Retrieved 1
6.1.5 Best Practices
6.1.5.1 Commentary
IoT security, like all information security, cannot be absolute or guaranteed. Threats are constantly being discovered, requiring monitoring, maintenance and review of policy and practice on a regular basis.
The IoT Security Foundation (IoTSF) publishes, reviews and maintains guidance and framework information on a regular basis, or in exceptional circumstances when this is required. The material is built on feedback from users and experts to provide continuous improvement; it is publicised when needed and an audit trail is kept on the IoTSF website. The IoT Security Compliance Framework is a structured checklist process that directs organisations through the IoT security assurance process, in which the information collected demonstrates conformance with best practice.
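As an illustration of the checklist idea, the sketch below records requirements together with the evidence collected for each and reports the fraction of applicable requirements that are met. The requirement names and fields are invented for illustration and are not taken from the IoTSF framework itself.

```python
# Illustrative checklist-style conformance record; requirement names and
# fields are hypothetical, not taken from the IoTSF compliance framework.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    requirement: str   # statement of the best-practice requirement
    applicable: bool   # whether the requirement applies to this product
    evidence: str      # reference to evidence collected during assessment


def conformance(items: list) -> float:
    """Fraction of applicable requirements backed by recorded evidence."""
    applicable = [i for i in items if i.applicable]
    met = [i for i in applicable if i.evidence]
    return len(met) / len(applicable) if applicable else 1.0


audit = [
    ChecklistItem("Data in transit is encrypted", True, "TLS configuration review"),
    ChecklistItem("Default passwords are changed at setup", True, ""),
    ChecklistItem("Device supports remote wipe", False, ""),
]
print(f"Conformance: {conformance(audit):.0%}")   # Conformance: 50%
```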
6.1.5.2 References
6.1.7.2 References
6.1.8 Firewall
6.1.8.1 Commentary
A firewall is a network security system that monitors and controls network traffic based on predetermined security rules.[1] It forms a controlled point of transfer between a trusted internal network and the outside network.[2] Firewalls can be classified as network-based or host-based. Network firewalls filter traffic between networks and run as software on general-purpose computers or on specialist hardware. Host-based firewalls are software on a single host that controls the traffic of that host.[3][4] Firewalls can also provide extra functionality for the network, such as acting as a DHCP[5][6] or VPN[7][8][9][10] server.[11][12][28][29][30]
Firewalls emerged with the Internet and its global use and connectivity[14] and succeeded the routers used for this purpose in the late 1980s.[15][16][17] The first firewalls filtered packets on their network addresses and ports to decide whether to allow or block them.[18][19] Where services use well-known ports, this filtering can be refined.[20][21][22] Second-generation firewalls were circuit-level gateways[23] working at the transport layer.[24][25][26] Third-generation firewalls work at the application layer (e.g. File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP)), beginning with the Firewall Toolkit and extending to the next-generation firewall (NGFW), intrusion prevention systems (IPS), user identity management integration, and the web application firewall (WAF).[27] A proxy server, which can act as a firewall by responding to input packets in the manner of an application, is a gateway from one network to another for a specific application[2] and makes it harder to tamper with a system. Network address translation is often applied with firewalls to hide local addressing.[31]
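As a rough sketch of first-generation packet filtering, the example below checks each packet's source address and destination port against an ordered rule list and falls back to a default-deny decision; the rule set, field names and prefix matching are simplified assumptions, not the behaviour of any particular firewall product.

```python
# Simplified stateless packet filter: rules match on source prefix and
# destination port; the first matching rule decides, otherwise default deny.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int


@dataclass
class Rule:
    action: str              # "allow" or "block"
    src_prefix: str          # e.g. "10.0." loosely stands in for 10.0.0.0/16
    dst_port: Optional[int]  # None matches any destination port


RULES = [
    Rule("allow", "10.0.", 443),   # internal hosts may reach HTTPS
    Rule("block", "", 23),         # block Telnet from anywhere
]


def decide(packet: Packet, rules: list) -> str:
    for rule in rules:
        if packet.src_ip.startswith(rule.src_prefix) and \
           rule.dst_port in (None, packet.dst_port):
            return rule.action
    return "block"   # default deny when no rule matches


print(decide(Packet("10.0.3.7", "93.184.216.34", 443), RULES))   # allow
print(decide(Packet("10.0.3.7", "93.184.216.34", 23), RULES))    # block
```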
6.1.8.2 References
1. Boudriga, Noureddine (2010). Security of mobile communications. Boca Raton:
CRC Press. pp. 32–33. ISBN 0849379423.
2. Oppliger, Rolf (May 1997). "Internet Security: FIREWALLS and
BEYOND". Communications of the ACM. 40 (5): 94. doi:10.1145/253769.253802.
3. Vacca, John R. (2009). Computer and information security handbook.
Amsterdam: Elsevier. p. 355. ISBN 9780080921945.
4. "What is Firewall?". Retrieved 2015-02-12.
5. "Firewall as a DHCP Server and Client". Palo Alto Networks. Retrieved 2016-02-
08.
6. "DHCP". www.shorewall.net. Retrieved 2016-02-08.
7. "What is a VPN Firewall? - Definition from Techopedia". Techopedia.com.
Retrieved 2016-02-08.
8. "VPNs and Firewalls". technet.microsoft.com. Retrieved 2016-02-08.
9. "VPN and Firewalls (Windows Server)". Resources and Tools for IT Professionals
| TechNet.
10. "Configuring VPN connections with firewalls".
11. Andrés, Steven; Kenyon, Brian; Cohen, Jody Marc; Johnson, Nate; Dolly, Justin
(2004). Birkholz, Erik Pack, ed. Security Sage's Guide to Hardening the Network
Infrastructure. Rockland, MA: Syngress. pp. 94–95. ISBN 9780080480831.
12. Naveen, Sharanya. "Firewall". Retrieved 7 June 2016.
13. Canavan, John E. (2001). Fundamentals of Network Security (1st ed.). Boston,
MA: Artech House. p. 212. ISBN 9781580531764.
14. Liska, Allan (Dec 10, 2014). Building an Intelligence-Led Security Program.
Syngress. p. 3. ISBN 0128023708.
15. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). Retrieved 2011-11-25.
16. Firewalls by Dr. Talal Alkharobi
17. RFC 1135 The Helminthiasis of the Internet
18. Peltier, Justin; Peltier, Thomas R. (2007). Complete Guide to CISM Certification.
Hoboken: CRC Press. p. 210. ISBN 9781420013252.
19. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). p. 4. Retrieved 2011-11-25.
20. TCP vs. UDP By Erik Rodriguez
21. William R. Cheswick, Steven M. Bellovin, Aviel D. Rubin (2003). "Google Books
Link". Firewalls and Internet Security: repelling the wily hacker
22. Aug 29, 2003 Virus may elude computer defenses by Charles Duhigg, Washington
Post
23. Proceedings of National Conference on Recent Developments in Computing and
Its Applications, August 12–13, 2009. I.K. International Pvt. Ltd. 2009-01-01.
Retrieved 2014-04-22.
24. Conway, Richard (2004). Code Hacking: A Developer's Guide to Network Security.
Hingham, Massachusetts: Charles River Media. p. 281. ISBN 1-58450-314-9.
25. Andress, Jason (May 20, 2014). The Basics of Information Security:
Understanding the Fundamentals of InfoSec in Theory and Practice (2nd ed.). Elsevier
Science. ISBN 9780128008126.
26. Chang, Rocky (October 2002). "Defending Against Flooding-Based Distributed
Denial-of-Service Attacks: A Tutorial". IEEE Communications Magazine. 40 (10): 42–
43. doi:10.1109/mcom.2002.1039856.
27. "WAFFle: Fingerprinting Filter Rules of Web Application Firewalls". 2012.
28. "Firewalls". MemeBridge. Retrieved 13 June 2014.
29. "Software Firewalls: Made of Straw? Part 1 of 2". Symantec Connect Community.
2010-06-29. Retrieved 2014-03-28.
30. "Auto Sandboxing". Comodo Inc. Retrieved 2014-08-28.
31. "Advanced Security: Firewall". Microsoft. Retrieved 2014-08-28.
32. Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus
Ranum and Paul Robertson.
33. Firewalls Aren’t Just About Security - Cyberoam Whitepaper focusing on Cloud
Applications Forcing Firewalls to Enable Productivity.
34. Evolution of the Firewall Industry - Discusses different architectures and their
differences, how packets are processed, and provides a timeline of the evolution.
35. A History and Survey of Network Firewalls - provides an overview of firewalls at
the various ISO levels, with references to the original papers where first firewall work
was reported.
36. Software Firewalls: Made of Straw? Part 1 and Software Firewalls: Made of
Straw? Part 2 - a technical view on software firewall design and potential weaknesses
6.1.9 Intrusion detection system
6.1.9.1 Commentary
An intrusion detection system (IDS) monitors a network or systems for malicious
activity or policy violations. Any detected activity or violation is typically reported to an
administrator or collected centrally by a security information and event management
(SIEM) system, which combines outputs from multiple sources and uses alarm-filtering
techniques to separate malicious activity from false alarms. An IDS monitors internal
activity as well as traffic crossing the perimeter, whereas a firewall chiefly screens
external traffic.
IDS can be classified by where detection takes place (network or host) and by the
detection method employed. Network intrusion detection systems (NIDS) monitor
traffic to and from all devices on a subnet, analyse the passing traffic and match it
against profiles of known attacks; if a match is found, the administrator is notified.[1]
Host intrusion detection systems (HIDS) monitor a single device, typically by taking a
snapshot of existing system files, comparing it with the previous snapshot and alerting
the administrator if suspicious changes are detected. Intrusion detection systems can
also be system-specific, using custom tools and honeypots.
Intrusion prevention systems use signature-based, statistical anomaly-based and
stateful protocol analysis detection methods.[8][12] Signature-based detection looks
for specific patterns, e.g. known malicious instruction sequences (signatures) used by
malware; it readily detects known attacks but cannot detect new attacks for which no
signature exists. Anomaly-based detection aims to find unknown attacks, typically by
using machine learning to build a model of trusted activity and flagging behaviour that
deviates from it.[2][3][4][13] Stateful protocol analysis identifies deviations of protocol
state by comparing observed events with predetermined profiles of generally accepted
definitions of benign activity.[8]
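As an illustration of the two classic approaches, the following Python sketch contrasts
them; the signature list, baseline figures and threshold are hypothetical assumptions,
not taken from any particular IDS product.

    # Minimal sketch: signature-based vs. anomaly-based detection (illustrative values).
    from statistics import mean, stdev

    SIGNATURES = [b"/etc/passwd", b"' OR '1'='1", b"\x90\x90\x90\x90"]  # known bad patterns

    def signature_match(payload: bytes) -> bool:
        """Return True if the payload contains any known malicious pattern."""
        return any(sig in payload for sig in SIGNATURES)

    def anomaly_score(observed: float, baseline: list) -> float:
        """Number of standard deviations the observation lies from the learned baseline."""
        mu, sigma = mean(baseline), stdev(baseline)
        return abs(observed - mu) / sigma if sigma else 0.0

    baseline_requests_per_min = [40, 42, 38, 41, 39, 43]          # modelled trusted activity
    print(signature_match(b"GET /index.php?id=' OR '1'='1"))      # True: matches a signature
    print(anomaly_score(250, baseline_requests_per_min) > 3.0)    # True: flagged as anomalous

Signature matching only recognises what is already in the list, while the anomaly score
can flag novel behaviour at the cost of possible false alarms, mirroring the trade-off
described above.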
Intrusion detection and prevention systems (IDPS) identify possible incidents, log
information about them, report attempts, and help identify problems with security
policies, document existing threats and deter individuals from violating
policy.[5][6][7][8][9][10]
Intrusion prevention systems are categorised as:[6][11]
Network-based intrusion prevention system monitoring the entire network for
suspicious traffic by analyzing protocol activity.
Wireless intrusion prevention systems as above but for a wireless network.
Network behaviour analysis examining network traffic to identify threats that generate
unusual traffic flows.
Host-based intrusion prevention system monitoring a single host for suspicious activity
by analyzing events occurring within that host.
Intrusion detection systems are limited by noise such as device errors and spoofed
addresses, by out-of-date software and signatures,[14] delayed updates,[13] encrypted
traffic, erroneous packet addresses and weaknesses in the underlying network or
host.[15]
Attackers use simple techniques to evade detection, e.g. fragmenting packets,
changing communication ports, coordinating attacks, spoofing or proxying addresses,
and changing the pattern of the attack data.
Tools help administrators review audit trails.[16] They can require ever-increasing
resources.[17] Most follow a basic model[18] that combines statistical anomaly
detection over profiles of users, hosts and target systems[19] with a rule-based expert
system to detect known
intrusions.[20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38]
6.1.9.2 References
1 Abdullah A. Mohamed, "Design Intrusion Detection System Based On Image
Block Matching", International Journal of Computer and Communication Engineering,
IACSIT Press, Vol. 2, No. 5, September 2013.
2 "Gartner report: Market Guide for User and Entity Behavior Analytics".
September 2015.
3 "Gartner: Hype Cycle for Infrastructure Protection, 2016".
4 "Gartner: Defining Intrusion Detection and Prevention Systems".
Retrieved September 20, 2016.
5 Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800–94). Retrieved 1 January 2010.
6 "NIST – Guide to Intrusion Detection and Prevention Systems (IDPS)" (PDF).
February 2007. Retrieved 2010-06-25.
7 Robert C. Newman (19 February 2009). Computer Security: Protecting Digital
Resources. Jones & Bartlett Learning. ISBN 978-0-7637-5994-0. Retrieved 25
June 2010.
8 Michael E. Whitman; Herbert J. Mattord (2009). Principles of Information
Security. Cengage Learning EMEA. ISBN 978-1-4239-0177-8. Retrieved 25 June 2010.
9 Tim Boyles (2010). CCNA Security Study Guide: Exam 640-553. John Wiley and
Sons. p. 249. ISBN 978-0-470-52767-2. Retrieved 29 June 2010.
10 Harold F. Tipton; Micki Krause (2007). Information Security Management
Handbook. CRC Press. p. 1000. ISBN 978-1-4200-1358-0. Retrieved 29 June 2010.
11 John R. Vacca (2010). Managing Information Security. Syngress.
p. 137. ISBN 978-1-59749-533-2. Retrieved 29 June 2010.
12 Engin Kirda; Somesh Jha; Davide Balzarotti (2009). Recent Advances in Intrusion
Detection: 12th International Symposium, RAID 2009, Saint-Malo, France, September
23–25, 2009, Proceedings. Springer. p. 162. ISBN 978-3-642-04341-3. Retrieved 29
June 2010.
13 nitin.; Mattord, verma (2008). Principles of Information Security. Course
Technology. pp. 290–301. ISBN 978-1-4239-0177-8.
14 c Anderson, Ross (2001). Security Engineering: A Guide to Building Dependable
Distributed Systems. New York: John Wiley & Sons. pp. 387–388. ISBN 978-0-471-38922-
4.
15 http://www.giac.org/paper/gsec/235/limitations-network-intrusion-
detection/100739
16 Anderson, James P., "Computer Security Threat Monitoring and Surveillance,"
Washing, PA, James P. Anderson Co., 1980.
17 David M. Chess; Steve R. White (2000). "An Undetectable Computer
Virus". Proceedings of Virus Bulletin Conference.
18 Denning, Dorothy E., "An Intrusion Detection Model," Proceedings of the Seventh
IEEE Symposium on Security and Privacy, May 1986, pages 119–131
19 Lunt, Teresa F., "IDES: An Intelligent System for Detecting Intruders,"
Proceedings of the Symposium on Computer Security; Threats, and Countermeasures;
Rome, Italy, November 22–23, 1990, pages 110–121.
20 Lunt, Teresa F., "Detecting Intruders in Computer Systems," 1993 Conference on
Auditing and Computer Technology, SRI International
21 Sebring, Michael M., and Whitehurst, R. Alan., "Expert Systems in Intrusion
Detection: A Case Study," The 11th National Computer Security Conference, October,
1988
22 Smaha, Stephen E., "Haystack: An Intrusion Detection System," The Fourth
Aerospace Computer Security Applications Conference, Orlando, FL, December, 1988
23 Vaccaro, H.S., and Liepins, G.E., "Detection of Anomalous Computer Session
Activity," The 1989 IEEE Symposium on Security and Privacy, May, 1989
24 Teng, Henry S., Chen, Kaihu, and Lu, Stephen C-Y, "Adaptive Real-time Anomaly
Detection Using Inductively Generated Sequential Patterns," 1990 IEEE Symposium on
Security and Privacy
25 Heberlein, L. Todd, Dias, Gihan V., Levitt, Karl N., Mukherjee, Biswanath, Wood,
Jeff, and Wolber, David, "A Network Security Monitor," 1990 Symposium on Research in
Security and Privacy, Oakland, CA, pages 296–304
26 Winkeler, J.R., "A UNIX Prototype for Intrusion and Anomaly Detection in Secure
Networks," The Thirteenth National Computer Security Conference, Washington, DC.,
pages 115–124, 1990
27 Dowell, Cheri, and Ramstedt, Paul, "The ComputerWatch Data Reduction Tool,"
Proceedings of the 13th National Computer Security Conference, Washington, D.C.,
1990
28 Snapp, Steven R, Brentano, James, Dias, Gihan V., Goan, Terrance L., Heberlein,
L. Todd, Ho, Che-Lin, Levitt, Karl N., Mukherjee, Biswanath, Smaha, Stephen E., Grance,
Tim, Teal, Daniel M. and Mansur, Doug, "DIDS (Distributed Intrusion Detection System)
-- Motivation, Architecture, and An Early Prototype," The 14th National Computer
Security Conference, October, 1991, pages 167–176.
29 Jackson, Kathleen, DuBois, David H., and Stallings, Cathy A., "A Phased
Approach to Network Intrusion Detection," 14th National Computing Security
Conference, 1991
30 Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time,"
Proceedings of The 7th USENIX Security Symposium, San Antonio, TX, 1998
31 Amoroso, Edward, "Intrusion Detection: An Introduction to Internet Surveillance,
Correlation, Trace Back, Traps, and Response," Intrusion.Net Books, Sparta, New
Jersey, 1999, ISBN 0-9666700-7-8
32 Kohlenberg, Toby (Ed.), Alder, Raven, Carter, Dr. Everett F. (Skip), Jr., Esler,
Joel., Foster, James C., Jonkman Marty, Raffael, and Poor, Mike, "Snort IDS and IPS
Toolkit," Syngress, 2007, ISBN 978-1-59749-099-3
33 Barbara, Daniel, Couto, Julia, Jajodia, Sushil, Popyack, Leonard, and Wu,
Ningning, "ADAM: Detecting Intrusions by Data Mining," Proceedings of the IEEE
Workshop on Information Assurance and Security, West Point, NY, June 5–6, 2001
34 Intrusion Detection Techniques for Mobile Wireless Networks, ACM WINET 2003
<http://www.cc.gatech.edu/~wenke/papers/winet03.pdf>
35 Viegas, E.; Santin, A. O.; França, A.; Jasinski, R.; Pedroni, V. A.; Oliveira, L. S.
(2017-01-01). "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine
for Embedded Systems". IEEE Transactions on Computers. 66 (1): 163–
177. doi:10.1109/TC.2016.2560839. ISSN 0018-9340.
36 França, A. L.; Jasinski, R.; Cemin, P.; Pedroni, V. A.; Santin, A. O. (2015-05-
01). "The energy cost of network security: A hardware vs. software comparison". 2015
IEEE International Symposium on Circuits and Systems (ISCAS): 81–
84. doi:10.1109/ISCAS.2015.7168575.
37 França, A. L. P. d; Jasinski, R. P.; Pedroni, V. A.; Santin, A. O. (2014-07-
01). "Moving Network Protection from Software to Hardware: An Energy Efficiency
Analysis". 2014 IEEE Computer Society Annual Symposium on VLSI: 456–
461. doi:10.1109/ISVLSI.2014.89.
38 "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine for
Embedded Systems" (PDF). SecPLab. This article incorporates public domain
material from the National Institute of Standards and Technology document "Guide to
Intrusion Detection and Prevention Systems, SP800-94" by Karen Scarfone, Peter Mell
(retrieved on 1 January 2010).
39 Hansen, James V.; Benjamin Lowry, Paul; Meservy, Rayman; McDonald, Dan
(2007). "Genetic programming for prevention of cyberterrorism through dynamic and
evolving intrusion detection". Decision Support Systems (DSS). 43 (4): 1362–
1374. doi:10.1016/j.dss.2006.04.004. SSRN 877981.
40 Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800-94). Retrieved 1 January 2010.
41 Saranya, J.; Padmavathi, G. (2015). "A Brief Study on Different Intrusions and
Machine Learning-based Anomaly Detection Methods in Wireless Sensor
Networks" (PDF). Avinashilingam Institute for Home Science and Higher Education for
Women (6(4)). Retrieved 4 April 2015.
42 Singh, Abhishek. "Evasions In Intrusion Prevention Detection Systems". Virus
Bulletin. Retrieved April 2010.
43 Bezroukov, Nikolai (11 December 2008). "Architectural Issues of Intrusion
Detection Infrastructure in Large Enterprises (Revision 0.82)". Softpanorama.
Retrieved 30 July 2010.
44 P.M. Mafra and J.S. Fraga and A.O. Santin (2014). "Algorithms for a distributed
IDS in MANETs". Journal of Computer and System Sciences. 80 (3): 554–
570. doi:10.1016/j.jcss.2013.06.011.
45 Common vulnerabilities and exposures (CVE) by product
46 NIST SP 800-83, Guide to Malware Incident Prevention and Handling
47 NIST SP 800-94, Guide to Intrusion Detection and Prevention Systems (IDPS)
48 Study by Gartner "Magic Quadrant for Network Intrusion Prevention System
Appliances"
6.1.10 Computational Linguistics methods
Computational linguistics methods study the fundamental principles of encoding,
transmitting and comprehending information, drawing on logic, philosophy, linguistics,
musicology, mathematics, computer science, artificial intelligence and cognitive
science. They combine top-down and bottom-up approaches, such as model-based and
ontology-based approaches, with distributional semantics and vector-space models.
We consider knowledge representation, computational linguistics and visualisation and
geometrical algorithmics, probabilistic parsing and grammar induction, computational
pragmatics and dialogue modelling, information retrieval, statistical morpho-syntactic
parsing and machine translation, optimality theory and discourse, and computational
cognitive science, mathematics, focusing on consequence and plurality.
6.1.11 Deep Learning and Machine Intelligence
• Deep learning and neural networks for computer vision, and natural language
processing
• Topic modeling, probabilistic graphical models for sequential data and dynamic
systems,
• Meta-heuristic algorithms, reinforcement learning, game algorithms, knowledge
engineering
• Big data processing framework and platform: Hadoop MapReduce, Spark, Hadoop
cluster
• Data mining algorithms, Data visualization, Practical big data analysis for scientific
and industrial data
• Pattern mining and intelligent services for IoT(Internet of Things) and Stream data
• Big data modeling and big data quality management
6.2.8 Cryptanalysis
6.2.8.1 Commentary
Cryptanalysis analyses information systems in order to study their hidden aspects[1]
and is used to breach cryptographic security systems and gain access to the contents
of encrypted messages, even when the key is unknown. In addition to mathematical
analysis of cryptographic algorithms, it includes side-channel attacks, which exploit
weaknesses in an implementation rather than in the algorithm itself; mathematical
techniques such as integer factorization also play a part.
The problem can be stated as follows: given some ciphertext, obtain as much
information as possible about the original plaintext, ideally by discerning the
encipherment process and then finding the key used to encrypt the messages.
Attacks are classified according to the type of information available to the attacker. It
is normally assumed that the attacker knows the general algorithm in use (Shannon's
maxim[2], Kerckhoffs' principle[3]),[4] viz.
• Ciphertext-only
• Known-plaintext
• Chosen-plaintext (chosen-ciphertext)
• Adaptive chosen-plaintext
• Adaptive chosen ciphertext attack.
• Related-key attack
Attacks can also be defined by the computational resources required e.g.
• Time
• Memory
• Data
Such estimates are often impractical to realise, especially when an attack has not
been tested in practice,[5] but in academic cryptanalysis it is sufficient to demonstrate
any weakness.[6] The results of cryptanalysis also vary in usefulness, e.g.
• Total break
• Global deduction
• Instance (local) deduction
• Information deduction
• Distinguishing algorithm
Academic attacks are often mounted against weakened versions of a cryptosystem,
such as reduced-round variants, because the difficulty of an attack normally grows
exponentially with the number of rounds,[7] so the full cryptosystem can remain strong
even though reduced versions are weak. Nevertheless, weaknesses found in this way
have eventually proved fatal for some primitives, e.g. DES, MD5, and SHA-1.[6]
The history of cryptanalysis is described in
[8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24].
Sending two or more messages with the same key is an insecure process; such
messages are said to be "in depth."[25] Depth may be revealed by messages carrying
the same indicator by which the sender informs the receiver of the key generator's
initial settings.[26]
Cryptanalysts benefit from lining up identical enciphering operations among a set of
messages. The individual plaintexts can then be worked out linguistically by
trying probable words (or phrases), known as "cribs," at various locations; a correct
guess, when combined with the merged plaintext stream, produces intelligible text
from the other plaintext component. A recovered fragment of plaintext can often be
extended in one or both directions, and the extra characters can be combined with the
merged plaintext stream to extend the other plaintext; working back and forth between
the two plaintexts, using the intelligibility criterion to check guesses, the analyst may
recover much or all of the original plaintexts. When a recovered plaintext is combined
with its ciphertext, the key is revealed. Knowledge of a key then allows the analyst to
read other messages encrypted with the same key, and knowledge of a set of related
keys may allow the system used for constructing them to be diagnosed.[24][27][28]
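A minimal sketch of why depth matters for an additive (XOR) stream cipher: XORing the
two ciphertexts cancels the shared keystream and leaves the XOR of the plaintexts,
against which a guessed crib can be dragged. The messages and crib below are
illustrative assumptions.

    # Two messages encrypted with the same keystream are "in depth".
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = os.urandom(32)                       # the same key material reused (the mistake)
    p1 = b"ATTACK AT DAWN ON THE EAST HILL"
    p2 = b"RETREAT TO RIVER CROSSING NOW!!"
    c1, c2 = xor(p1, keystream), xor(p2, keystream)

    merged = xor(c1, c2)                             # keystream cancels: equals p1 XOR p2
    assert merged == xor(p1, p2)

    # Crib dragging: try a probable word at each offset; if the implied fragment of the
    # other plaintext is printable, the guess is plausible and can be extended.
    crib = b"ATTACK"
    for i in range(len(merged) - len(crib) + 1):
        fragment = xor(merged[i:i + len(crib)], crib)
        if all(32 <= ch < 127 for ch in fragment):
            print(i, fragment)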
Symmetric ciphers use differential cryptanalysis, impossible differential cryptanalysis,
improbable differential cryptanalysis, integral cryptanalysis, linear cryptanalysis and
mod-n cryptanalysis. Attacks come from boomerang attack, brute-force attack, Davies'
attack, meet-in-the-middle attack, related-key attack, sandwich attack, slide attack
and XSL attack.
Asymmetric cryptography relies on two mathematically related keys, one private and
one public, and bases its security on hard mathematical problems, so attacking such a
scheme amounts to making progress on the underlying problem. Two-key cryptography
therefore draws on different mathematical questions from single-key cryptography and
on a wider body of mathematical research. Advances in computing technology aid
cryptanalysis through sheer speed, and factoring techniques have also improved
through mathematical insight and creativity. In addition, asymmetric schemes must
remain secure against an attacker who already holds the public key.[29]
Attacks on cryptographic hash functions include the birthday attack and are analysed
with the help of hash function security summaries and rainbow tables.
Side-channel and related attacks include black-bag cryptanalysis, power analysis,
rubber-hose cryptanalysis and timing analysis, together with protocol-level attacks
such as the man-in-the-middle attack and the replay attack.
Quantum computers have potential use in cryptanalysis because they can reduce the
computational complexity of certain underlying problems.[30][31]
6.2.8.2 References
1. "Cryptanalysis/Signals Analysis". Nsa.gov. 2009-01-15. Retrieved 2013-04-15.
2. Shannon, Claude (4 October 1949). "Communication Theory of Secrecy
Systems". Bell System Technical Journal. 28: 662. Retrieved 20 June 2014.
3. Kahn, David (1996), The Codebreakers: the story of secret writing (second ed.),
Scribners, p. 235
4. Schmeh, Klaus (2003). Cryptography and public key infrastructure on the
Internet. John Wiley & Sons. p. 45. ISBN 978-0-470-84745-9.
5. McDonald, Cameron; Hawkes, Philip; Pieprzyk, Josef, SHA-1 collisions now
252 (PDF), retrieved 4 April 2012
6. Schneier 2000
7. For an example of an attack that cannot be prevented by additional rounds,
see slide attack.
8. Smith 2000, p. 4
9. "Breaking codes: An impossible task?". BBC News. June 21, 2004.
10. History of Islamic philosophy: With View of Greek Philosophy and Early history of
Islam P.199
11. The Biographical Encyclopedia of Islamic Philosophy P.279
12. Crypto History Archived August 28, 2008, at the Wayback Machine.
13. Singh 1999, p. 17
14. Singh 1999, pp. 45–51
15. Singh 1999, pp. 63–78
16. Singh 1999, p. 116
17. Winterbotham 2000, p. 229.
18. Hinsley 1993.
19. Copeland 2006, p. 1
20. Singh 1999, p. 244
21. Churchhouse 2002, pp. 33, 34
22. Budiansky 2000, pp. 97–99
23. Calvocoressi 2001, p. 66
24. Tutte 1998
25. Churchhouse 2002, p. 34
26. Churchhouse 2002, pp. 33, 86
27. David Kahn Remarks on the 50th Anniversary of the National Security Agency,
November 1, 2002.
28. Tim Greene, Network World, Former NSA tech chief: I don't trust the
cloud Archived 2010-03-08 at the Wayback Machine.. Retrieved March 14, 2010.
29. Stallings, William (2010). Cryptography and Network Security: Principles and
Practice. Prentice Hall. ISBN 0136097049.
30. "Shor's Algorithm – Breaking RSA Encryption". AMS Grad Blog. 2014-04-30.
Retrieved 2017-01-17.
31. Daniel J. Bernstein (2010-03-03). "Grover vs. McEliece" (PDF).
32. Ibrahim A. Al-Kadi,"The origins of cryptology: The Arab
contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
33. Friedrich L. Bauer: "Decrypted Secrets". Springer 2002. ISBN 3-540-42674-4
34. Budiansky, Stephen (10 October 2000), Battle of wits: The Complete Story of
Codebreaking in World War II, Free Press, ISBN 978-0-684-85932-3
35. Burke, Colin B. (2002). "It Wasn't All Magic: The Early Struggle to Automate
Cryptanalysis, 1930s-1960s". Fort Meade: Center for Cryptologic History, National
Security Agency.
36. Calvocoressi, Peter (2001) [1980], Top Secret Ultra, Cleobury Mortimer,
Shropshire: M & M Baldwin, ISBN 0-947712-41-0
37. Churchhouse, Robert (2002), Codes and Ciphers: Julius Caesar, the Enigma and
the Internet, Cambridge: Cambridge University Press, ISBN 978-0-521-00890-7
38. Copeland, B. Jack, ed. (2006), Colossus: The Secrets of Bletchley Park's
Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4
39. Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. ISBN 0-486-20097-3
40. David Kahn, "The Codebreakers - The Story of Secret Writing", 1967. ISBN 0-684-
83130-9
41. Lars R. Knudsen: Contemporary Block Ciphers. Lectures on Data Security 1998:
105-126
42. Schneier, Bruce (January 2000). "A Self-Study Course in Block-Cipher
Cryptanalysis". Cryptologia. 24 (1): 18–34. doi:10.1080/0161-110091888754
43. Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach,
Mathematical Association of America, 1966. ISBN 0-88385-622-0
44. Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code
Breaking, ISBN 978-0-470-13593-8
45. Friedman, William F., Military Cryptanalysis, Part I, ISBN 0-89412-044-1
46. Friedman, William F., Military Cryptanalysis, Part II, ISBN 0-89412-064-6
47. Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of
Aperiodic Substitution Systems, ISBN 0-89412-196-0
48. Friedman, William F., Military Cryptanalysis, Part IV, Transposition and
Fractionating Systems, ISBN 0-89412-198-7
49. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I,
Volume 1, ISBN 0-89412-073-5
50. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I,
Volume 2, ISBN 0-89412-074-3
51. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II,
Volume 1, ISBN 0-89412-075-1
52. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II,
Volume 2, ISBN 0-89412-076-X
53. Hinsley, F.H. (1993), Introduction: The influence of Ultra in the Second World
War in Hinsley & Stripp 1993, pp. 1–13
54. Singh, Simon (1999), The Code Book: The Science of Secrecy from Ancient Egypt
to Quantum Cryptography, London: Fourth Estate, pp. 143–189, ISBN 1-85702-879-1
55. Smith, Michael (2000), The Emperor's Codes: Bletchley Park and the breaking of
Japan's secret ciphers, London: Random House, ISBN 0-593-04641-2
56. Tutte, W. T. (19 June 1998), Fish and I (PDF), archived from the original (PDF) on
10 July 2007, retrieved 7 October 2010. Transcript of a lecture given by Prof. Tutte at
the University of Waterloo.
57. Winterbotham, F.W. (2000) [1974], The Ultra secret: the inside story of Operation
Ultra, Bletchley Park and Enigma, London: Orion Books Ltd, ISBN 978-0-7528-3751-
2, OCLC 222735270
58. Bard, Gregory V. (2009). Algebraic Cryptanalysis. Springer. ISBN 978-1-4419-
1019-6.
59. Hinek, M. Jason (2009). Cryptanalysis of RSA and Its Variants. CRC
Press. ISBN 978-1-4200-7518-2.
60. Joux, Antoine (2009). Algorithmic Cryptanalysis. CRC Press. ISBN 978-1-4200-
7002-6.
61. Junod, Pascal; Canteaut, Anne (2011). Advanced Linear Cryptanalysis of Block
and Stream Ciphers. IOS Press. ISBN 978-1-60750-844-1.
62. Stamp, Mark & Low, Richard (2007). Applied Cryptanalysis: Breaking Ciphers in
the Real World. John Wiley & Sons. ISBN 978-0-470-11486-5.
63. Sweigart, Al (2013). Hacking Secret Ciphers with Python. Al Sweigart. ISBN 978-
1482614374.
64. Swenson, Christopher (2008). Modern cryptanalysis: techniques for advanced
code breaking. John Wiley & Sons. ISBN 978-0-470-13593-8.
65. Wagstaff, Samuel S. (2003). Cryptanalysis of number-theoretic ciphers. CRC
Press. ISBN 978-1-58488-153-7.
66. Basic Cryptanalysis (files contain 5 line header, that has to be removed first)
67. Distributed Computing Projects
68. List of tools for cryptanalysis on modern cryptography
69. Simon Singh's crypto corner
70. The National Museum of Computing
71. UltraAnvil tool for attacking simple substitution ciphers
72. How Alan Turing Cracked The Enigma Code Imperial War Museums
6.2.8.3 Examples
6.2.8.3.1 Mod n cryptanalysis
In cryptography, mod n cryptanalysis is an attack applicable to block and stream
ciphers. It is a form of partitioning cryptanalysis that exploits unevenness in how
the cipher operates over equivalence classes (congruence classes) modulo n. The
method was first suggested in 1999 by John Kelsey, Bruce Schneier, and David
Wagner and applied to RC5P (a variant of RC5) and M6 (a family of block ciphers used in
the FireWire standard). These attacks used the properties of binary addition and bit
rotation modulo a Fermat prime.
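The general idea can be illustrated in a few lines of Python: apply a toy rotate-and-add
round to random words and tally the outputs by congruence class modulo n, the kind of
distribution a mod n attack inspects for bias. The round function below is an illustrative
stand-in rather than RC5P or M6, and whether a measurable bias appears depends on
the actual cipher.

    # Tally outputs of a toy rotate-and-add round by residue modulo n.
    import random

    MASK = 0xFFFFFFFF                  # 32-bit words

    def rotl(x: int, r: int) -> int:
        return ((x << r) | (x >> (32 - r))) & MASK

    def toy_round(x: int, k: int) -> int:
        # Illustrative round: rotation followed by addition modulo 2^32.
        return (rotl(x, 3) + k) & MASK

    def residue_counts(n: int, samples: int = 100_000, key: int = 0x9E3779B9) -> list:
        counts = [0] * n
        for _ in range(samples):
            x = random.getrandbits(32)
            counts[toy_round(x, key) % n] += 1
        return counts

    print(residue_counts(3))           # compare against the uniform expectation of ~33,333 each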
6.2.8.3.2 References
1. John Kelsey, Bruce Schneier, David Wagner (March 1999). Mod n Cryptanalysis,
with Applications Against RC5P and M6(PDF/PostScript). Fast Software Encryption,
Sixth International Workshop Proceedings. Rome: Springer-Verlag. pp. 139–155.
Retrieved 2007-02-12.
2. Vincent Rijmen (2003-12-01). ""mod n" Cryptanalysis of Rabbit" (PDF). White
paper, Version 1.0. Cryptico. Retrieved 2007-02-12.
3. Toshio Tokita; Tsutomu Matsumoto. "M8". Ipsj Journal. 42 (8).
6.2.8.3.3 Impossible differential cryptanalysis
In cryptography, impossible differential cryptanalysis is a form of differential
cryptanalysis for block ciphers. While ordinary differential cryptanalysis tracks
differences that propagate through the cipher with greater than expected probability,
impossible differential cryptanalysis exploits differences that are impossible (having
probability 0) at some intermediate state of the cipher algorithm.
Lars Knudsen appears to be the first to use a form of this attack, in the 1998 paper
where he introduced his AES candidate, DEAL.[1] The first presentation to attract the
attention of the cryptographic community was later the same year at the rump session
of CRYPTO '98, in which Eli Biham, Alex Biryukov, and Adi Shamir introduced the name
"impossible differential"[2] and used the technique to break 4.5 out of 8.5 rounds
of IDEA[3] and 31 out of 32 rounds of the NSA-designed cipher Skipjack.[4] This
development led cryptographer Bruce Schneier to speculate that the NSA had no
previous knowledge of impossible differential cryptanalysis.[5] The technique has since
been applied to many other ciphers: Khufu and Khafre, E2, variants
of Serpent, MARS, Twofish, Rijndael, CRYPTON, Zodiac, Hierocrypt-3, TEA, XTEA, Mini-
AES, ARIA, Camellia, and SHACAL-2.
Biham, Biryukov and Shamir also presented a relatively efficient specialized method for
finding impossible differentials that they called a miss-in-the-middle attack. This
consists of finding "two events with probability one, whose conditions cannot be met
together."[6]
6.2.8.3.4 References
1. Lars Knudsen (February 21, 1998). "DEAL - A 128-bit Block Cipher". Technical
report no. 151. Department of Informatics, University of Bergen, Norway.
Retrieved 2015-05-28.
2. Shamir, A. (August 25, 1998) Impossible differential attacks.CRYPTO '98 rump
session (video at Google Video—uses Flash)
3. Biryukov, A. (August 25, 1998) Miss-in-the-middle attacks on IDEA. CRYPTO '98
rump session (video at Google Video—uses Flash)
4. Biham, E. (August 25, 1998) Impossible cryptanalysis of Skipjack. CRYPTO '98
rump session (video at Google Video—uses Flash)
5. Bruce Schneier (September 15, 1998). "Impossible Cryptanalysis and
Skipjack". Crypto-Gram Newsletter.
6. E. Biham; A. Biryukov; A. Shamir (March 1999). Miss in the Middle Attacks on
IDEA, Khufu and Khafre. 6th International Workshop on Fast Software Encryption (FSE
1999). Rome: Springer-Verlag. pp. 124–138. Archived from the
original (gzippedPostScript) on 2011-05-15. Retrieved 2007-02-14.
7. Orr Dunkelman (March 1999). An Analysis of Serpent-p and Serpent-p-
ns (PDF/PostScript). Rump session, 2nd AES Candidate Conference. Rome: NIST.
Retrieved 2007-02-27.
8. E. Biham; A. Biryukov; A. Shamir (May 1999). Cryptanalysis of Skipjack Reduced
to 31 Rounds using Impossible Differentials(PDF/PostScript). Advances in Cryptology
- EUROCRYPT '99. Prague: Springer-Verlag. pp. 12–23. Retrieved 2007-02-13.
9. Kazumaro Aoki; Masayuki Kanda (1999). "Search for Impossible Differential of
E2" (PDF/PostScript). Retrieved 2007-02-27.
10. Eli Biham, Vladimir Furman (April 2000). Impossible Differential on 8-Round
MARS' Core (PDF/PostScript). 3rd AES Candidate Conference. pp. 186–194.
Retrieved 2007-02-27.
11. Eli Biham; Vladimir Furman (December 2000). Improved Impossible Differentials
on Twofish (PDF/PostScript). INDOCRYPT 2000. Calcutta: Springer-Verlag. pp. 80–92.
Retrieved 2007-02-27.
12. Deukjo Hong; Jaechul Sung; Shiho Moriai; Sangjin Lee; Jongin Lim (April
2001). Impossible Differential Cryptanalysis of Zodiac(PDF). 8th International Workshop
on Fast Software Encryption (FSE 2001). Yokohama: Springer-Verlag. pp. 300–311.
Retrieved 2006-12-30.
13. Raphael C.-W. Phan; Mohammad Umar Siddiqi (July 2001). "Generalised
Impossible Differentials of Advanced Encryption Standard" (PDF). Electronics
Letters. 37 (14): pp. 896–898. doi:10.1049/el:20010619. Retrieved 2007-07-17.
14. Jung Hee Cheon, MunJu Kim, and Kwangjo Kim (September 2001). Impossible
Differential Cryptanalysis of Hierocrypt-3 Reduced to 3 Rounds (PDF). Proceedings of
2nd NESSIE Workshop. Retrieved 2007-02-27.
15. Jung Hee Cheon; MunJu Kim; Kwangjo Kim; Jung-Yeun Lee; SungWoo Kang
(December 26, 2001). Improved Impossible Differential Cryptanalysis of Rijndael and
Crypton. 4th International Conference on Information Security and Cryptology (ICISC
2001). Seoul: Springer-Verlag. pp. 39–49. CiteSeerX 10.1.1.15.9966 .
16. Dukjae Moon; Kyungdeok Hwang; Wonil Lee; Sangjin Lee; AND Jongin Lim
(February 2002). Impossible Differential Cryptanalysis of Reduced Round XTEA and
TEA (PDF). 9th International Workshop on Fast Software Encryption (FSE 2002). Leuven:
Springer-Verlag. pp. 49–60. Retrieved 2007-02-27.
17. Raphael C.-W. Phan (May 2002). "Classes of Impossible Differentials of Advanced
Encryption Standard" (PDF). Electronics Letters. 38 (11): pp. 508–
510. doi:10.1049/el:20020347. Retrieved 2007-07-17.
18. Raphael C.-W. Phan (October 2003). "Impossible Differential Cryptanalysis of
Mini-AES" (PDF). Cryptologia. XXVII (4): pp. 283–292. doi:10.1080/0161-
110391891964. ISSN 0161-1194. Archived from the original (PDF) on 2007-09-26.
Retrieved 2007-02-27.
19. Raphael C.-W. Phan (July 2004). "Impossible Differential Cryptanalysis of 7-round
AES". Information Processing Letters. 91 (1): pp. 29–32. doi:10.1016/j.ipl.2004.03.006.
Retrieved 2007-07-19.
20. Wenling Wu; Wentao Zhang; Dengguo Feng (2006). "Impossible Differential
Cryptanalysis of ARIA and Camellia" (PDF). Retrieved 2007-02-27.
6.2.8.3.5 Integral cryptanalysis
In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly
applicable to block ciphers based on substitution-permutation networks. It was
originally designed by Lars Knudsen as a dedicated attack against Square, so it is
commonly known as the Square attack. It was also extended to a few other ciphers
related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack
to what he called a saturation attack and used it to attack Twofish, which is not at all
similar to Square, having a radically different Feistel network structure. Forms of
integral cryptanalysis have since been applied to a variety of ciphers,
including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD,
and FOX (now called IDEA NXT).
Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a
fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen
plaintexts of which part is held constant and another part varies through all
possibilities. For example, an attack might use 256 chosen plaintexts that have all but
8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR
sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide
information about the cipher's operation. This contrast between the differences of pairs
of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis",
borrowing the terminology of calculus.
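The structured plaintext sets used in such an attack are easy to construct. The sketch
below (arbitrary filler bytes, no real cipher) builds 256 plaintexts that agree in every
byte except one, which runs through all values, and verifies that the XOR sum of the
set is zero.

    # Build a set of 256 plaintexts identical except in one byte position.
    from functools import reduce

    BLOCK_SIZE = 16            # bytes; an AES-sized block, chosen for illustration
    ACTIVE_BYTE = 0            # position that takes all 256 values
    fixed = bytes(range(1, BLOCK_SIZE + 1))        # arbitrary constant filler

    plaintexts = []
    for v in range(256):
        block = bytearray(fixed)
        block[ACTIVE_BYTE] = v
        plaintexts.append(bytes(block))

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    total = reduce(xor_blocks, plaintexts)
    assert total == bytes(BLOCK_SIZE)  # the XOR sum of the whole set is the all-zero block
    print(len(plaintexts), total.hex())

In an actual attack the same property is traced through the cipher: the attacker
encrypts the whole set and examines the XOR sums (or other integral properties) of
the resulting ciphertexts.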
6.2.8.3.6 References
1. Joan Daemen, Lars Knudsen, Vincent Rijmen (January 1997). The Block Cipher
Square (PDF). 4th International Workshop on Fast Software Encryption (FSE '97),
Volume 1267 of Lecture Notes in Computer Science. Haifa: Springer-Verlag. pp. 149–
165. Retrieved 2007-02-15.
2. Carl D'Halluin, Gert Bijnens, Vincent Rijmen, Bart Preneel (March 1999). Attack
on Six Rounds of Crypton (PDF/PostScript). 6th International Workshop on Fast
Software Encryption (FSE '99). Rome: Springer-Verlag. pp. 46–59. Retrieved 2007-03-03.
3. N. Ferguson, J. Kelsey, S. Lucks, B. Schneier, M. Stay, D. Wagner, D. Whiting
(April 2000). Improved Cryptanalysis of Rijndael(PDF/PostScript). 7th International
Workshop on Fast Software Encryption (FSE 2000). New York City: Springer-Verlag.
pp. 213–230. Retrieved 2007-03-06.
4. Stefan Lucks (September 14, 2000). The Saturation Attack - a Bait for
Twofish (PDF/PostScript). 8th International Workshop on Fast Software Encryption (FSE
'01). Yokohama: Springer-Verlag. pp. 1–15. Retrieved 2006-11-30.
5. Paulo S. L. M. Barreto, Vincent Rijmen, Jorge Nakahara, Jr., Bart Preneel, Joos
Vandewalle, Hae Yong Kim (April 2001). Improved SQUARE Attacks against Reduced-
Round HIEROCRYPT (PDF). 8th International Workshop on Fast Software Encryption
(FSE '01). Yokohama: Springer-Verlag. pp. 165–173. Retrieved 2007-03-03.
6. Jorge Nakahara, Jr.; Paulo S.L.M. Barreto; Bart Preneel; Joos Vandewalle; Hae Y.
Kim (2001). "SQUARE Attacks on Reduced-Round PES and IDEA Block
Ciphers" (PDF/PostScript). Retrieved 2007-03-03.
7. Yongjin Yeom; Sangwoo Park; Iljun Kim (February 2002). On the Security of
CAMELLIA against the Square Attack (PDF). 9th International Workshop on Fast
Software Encryption (FSE '02). Leuven: Springer-Verlag. pp. 89–99. Retrieved 2007-03-
03.
8. Kyungdeok Hwang; Wonil Lee; Sungjae Lee; Sangjin Lee; Jongin Lim (February
2002). Saturation Attacks on Reduced Round Skipjack (PDF). 9th International
Workshop on Fast Software Encryption (FSE '02). Leuven: Springer-Verlag. pp. 100–111.
Retrieved 2007-03-03.
9. Lars Knudsen; David Wagner (December 11, 2001). Integral
cryptanalysis (PDF/PostScript). 9th International Workshop on Fast Software Encryption
(FSE '02). Leuven: Springer-Verlag. pp. 112–127. Retrieved 2006-11-30.
10. Gilles Piret, Jean-Jacques Quisquater (February 16, 2003). "Integral
Cryptanalysis on reduced-round Safer++" (PDF/PostScript). Retrieved 2007-03-03.
11. Frédéric Muller (December 2003). A New Attack against Khazad (PDF). Advances
in Cryptology - ASIACRYPT 2003. Taipei: Springer-Verlag. pp. 347–358. Retrieved 2007-
03-03.
12. Wu Wenling; Zhang Wentao; Feng Dengguo (August 25, 2005). "Improved Integral
Cryptanalysis of FOX Block Cipher" (PDF). Retrieved 2007-03-03.
6.2.8.3.7 Linear cryptanalysis
In cryptography, linear cryptanalysis is a general form of cryptanalysis based on
finding affine approximations to the action of a cipher. Attacks have been developed
for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most
widely used attacks on block ciphers; the other being differential cryptanalysis.
The discovery is attributed to Mitsuru Matsui, who first applied the technique to
the FEAL cipher (Matsui and Yamagishi, 1992).[1]Subsequently, Matsui published an
attack on the Data Encryption Standard (DES), eventually leading to the first
experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993;
1994).[2][3] The attack on DES is not generally practical, requiring 2^47 known
plaintexts.[3]
A variety of refinements to the attack have been suggested, including using multiple
linear approximations or incorporating non-linear expressions, leading to a generalized
partitioning cryptanalysis. Evidence of security against linear cryptanalysis is usually
expected of new cipher designs.
6.2.8.3.7.1 Overview
There are two parts to linear cryptanalysis. The first is to construct linear equations
relating plaintext, ciphertext and key bits that have a high bias; that is, whose
probabilities of holding (over the space of all possible values of their variables) are as
close as possible to 0 or 1. The second is to use these linear equations in conjunction
with known plaintext-ciphertext pairs to derive key bits.
6.2.8.3.7.2 Constructing linear equations
For the purposes of linear cryptanalysis, a linear equation expresses the equality of two
expressions which consist of binary variables combined with the exclusive-or (XOR)
operation. For example, the following equation, from a hypothetical cipher, states that
the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the
first ciphertext bit is equal to the second bit of the key:
P1 ⊕ P3 ⊕ C1 = K2
In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would
hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary
in probability, they are more accurately referred to as linear approximations.
The procedure for constructing approximations is different for each cipher. In the most
basic type of block cipher, a substitution-permutation network, analysis is
concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the
operation of an S-box cannot be encoded in a linear equation). For small enough S-
boxes, it is possible to enumerate every possible linear equation relating the S-box's
input and output bits, calculate their biases and choose the best ones. Linear
approximations for S-boxes then must be combined with the cipher's other actions,
such as permutation and key mixing, to arrive at linear approximations for the entire
cipher. The piling-up lemma is a useful tool for this combination step. There are also
techniques for iteratively improving linear approximations (Matsui 1994).
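For a toy 4-bit S-box the enumeration described above takes only a few lines; the
Python sketch below uses an arbitrary illustrative S-box (not one taken from a standard
cipher) and computes the bias of every input-mask/output-mask pair.

    # Enumerate linear approximations a.x = b.S(x) for a toy 4-bit S-box.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # illustrative 4-bit S-box

    def parity(x: int) -> int:
        return bin(x).count("1") & 1

    def bias(in_mask: int, out_mask: int) -> float:
        """Probability that the masked input parity equals the masked output parity, minus 1/2."""
        hits = sum(parity(in_mask & x) == parity(out_mask & SBOX[x]) for x in range(16))
        return hits / 16 - 0.5

    best = max(((a, b, bias(a, b)) for a in range(1, 16) for b in range(1, 16)),
               key=lambda t: abs(t[2]))
    print("best approximation: input mask %x, output mask %x, bias %+.3f" % best)

The approximations with the largest absolute bias are the ones worth chaining through
the rest of the cipher with the piling-up lemma.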
6.2.8.3.7.3 Deriving key bits
Having obtained a linear approximation of the form:
P_i1 ⊕ P_i2 ⊕ ... ⊕ C_j1 ⊕ C_j2 ⊕ ... = K_k1 ⊕ K_k2 ⊕ ...
we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known
plaintext-ciphertext pairs, to guess at the values of the key bits involved in the
approximation.
For each set of values of the key bits on the right-hand side (referred to as a partial
key), count how many times the approximation holds true over all the known plaintext-
ciphertext pairs; call this count T. The partial key whose T has the greatest absolute
difference from half the number of plaintext-ciphertext pairs is designated as the most
likely set of values for those key bits. This is because it is assumed that the correct
partial key will cause the approximation to hold with a high bias. The magnitude of the
bias is significant here, as opposed to the magnitude of the probability itself.
This procedure can be repeated with other linear approximations, obtaining guesses at
values of key bits, until the number of unknown key bits is low enough that they can be
attacked with brute force.
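The counting step itself can be sketched as follows. The helper names and the toy
approximation are illustrative assumptions; in a real attack the predicate that tests the
approximation (including any partial decryption of the last round) comes from analysis
of the specific cipher.

    # Sketch of the partial-key counting step in the style of Matsui's Algorithm 2.
    def rank_partial_keys(pairs, holds, key_space):
        """
        pairs     : list of (plaintext, ciphertext) tuples
        holds     : function (plaintext, ciphertext, partial_key) -> bool, True when the
                    linear approximation is satisfied (assumed to be given)
        key_space : iterable of candidate partial keys
        Returns the candidates sorted by |T - N/2|, largest bias first.
        """
        n = len(pairs)
        scored = []
        for k in key_space:
            t = sum(holds(p, c, k) for p, c in pairs)
            scored.append((abs(t - n / 2), k))
        return [k for _, k in sorted(scored, reverse=True)]

    def parity(x):
        return bin(x).count("1") & 1

    # Toy usage with made-up pairs and a made-up approximation predicate.
    toy_holds = lambda p, c, k: (parity(p & 0b101) ^ parity((c ^ k) & 0b011)) == 0
    print(rank_partial_keys([(3, 1), (5, 4), (6, 2), (7, 7), (1, 3), (2, 6)], toy_holds, range(4)))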
6.2.8.3.7.4 References
1. Matsui, M. & Yamagishi, A. "A new method for known plaintext attack of FEAL
cipher". Advances in Cryptology - EUROCRYPT 1992.
2. Matsui, M. "The first experimental cryptanalysis of the data encryption
standard". Advances in Cryptology - CRYPTO 1994.
3. Matsui, M. "Linear cryptanalysis method for DES cipher" (PDF). Advances in
Cryptology - EUROCRYPT 1993. Archived from the original (PDF) on 2007-09-26.
Retrieved 2007-02-22.
6.2.8.3.8 Differential cryptanalysis
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to
block ciphers, but also to stream ciphers and cryptographic hash functions. In the
broadest sense, it is the study of how differences in information input can affect the
resultant difference at the output. In the case of a block cipher, it refers to a set of
techniques for tracing differences through the network of transformation, discovering
where the cipher exhibits non-random behavior, and exploiting such properties to
recover the secret key (cryptography key).
6.2.8.3.8.1 History
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi
Shamir in the late 1980s, who published a number of attacks against various block
ciphers and hash functions, including a theoretical weakness in the Data Encryption
Standard (DES). It was noted by Biham and Shamir that DES is surprisingly resistant to
differential cryptanalysis but small modifications to the algorithm would make it much
more susceptible.[1]
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper
stating that differential cryptanalysis was known to IBM as early as 1974, and that
defending against differential cryptanalysis had been a design goal.[2] According to
author Steven Levy, IBM had discovered differential cryptanalysis on its own, and
the NSA was apparently well aware of the technique.[3] IBM kept some secrets, as
Coppersmith explains: "After discussions with NSA, it was decided that disclosure of
the design considerations would reveal the technique of differential cryptanalysis, a
powerful technique that could be used against many ciphers. This in turn would
weaken the competitive advantage the United States enjoyed over other countries in
the field of cryptography."[2] Within IBM, differential cryptanalysis was known as the
"T-attack"[2] or "Tickle attack".[4]
While DES was designed with resistance to differential cryptanalysis in mind, other
contemporary ciphers proved to be vulnerable. An early target for the attack was
the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be
broken using only eight chosen plaintexts, and even a 31-round version of FEAL is
susceptible to the attack. In contrast, differential cryptanalysis of DES requires an
effort on the order of 2^47 chosen plaintexts.
6.2.8.3.8.2 Attack mechanics
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the
attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing.
There are, however, extensions that would allow a known plaintext or even
a ciphertext-only attack. The basic method uses pairs of plaintext related by a
constant difference; the difference can be defined in several ways, but the exclusive-OR
(XOR) operation is usual. The attacker then computes the differences of the
corresponding ciphertexts, hoping to detect statistical patterns in their distribution.
The resulting pair of differences is called a differential. Their statistical properties
depend upon the nature of the S-boxes used for encryption, so the attacker analyses
differentials (ΔX, ΔY), where ΔY = S(X ⊕ ΔX) ⊕ S(X) (and ⊕ denotes exclusive or) for
each such S-box S. In the basic attack, one particular ciphertext difference is expected
to be especially frequent; in this way, the cipher can be distinguished from random.
More sophisticated variations allow the key to be recovered faster than exhaustive
search.
In the most basic form of key recovery through differential cryptanalysis, an attacker
requests the ciphertexts for a large number of plaintext pairs, then assumes that the
differential holds for at least r − 1 rounds, where r is the total number of rounds. The
attacker then deduces which round keys (for the final round) are possible, assuming
the difference between the blocks before the final round is fixed. When round keys are
short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one
round with each possible round key. When one round key has been deemed a potential
round key considerably more often than any other key, it is assumed to be the correct
round key.
For any particular cipher, the input difference must be carefully selected for the attack
to be successful. An analysis of the algorithm's internals is undertaken; the standard
method is to trace a path of highly probable differences through the various stages of
encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic
concern of cipher designers. New designs are expected to be accompanied by evidence
that the algorithm is resistant to this attack, and many, including the Advanced
Encryption Standard, have been proven secure against the attack.
6.2.8.3.8.3 Attack in detail
The attack relies primarily on the fact that a given input/output difference pattern only
occurs for certain values of inputs. Usually the attack is applied in essence to the non-
linear components as if they were a solid component (usually they are in fact look-up
tables or S-boxes). Observing the desired output difference (between two chosen or
known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (implying a difference in the least significant
bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of
4/256 (possible with the non-linear function in the AES cipher for instance) then for only
4 values (or 2 pairs) of inputs is that differential possible. Suppose we have a non-linear
function where the key is XOR'ed before evaluation and the values that allow the
differential are {2,3} and {4,5}. If the attacker sends in the values of {6, 7} and observes
the correct output difference it means the key is either 6 ⊕ K = 2, or 6 ⊕ K = 4, meaning
the key K is either 2 or 4.
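The reasoning in the example above can be reproduced mechanically. The sketch
below uses a toy 3-bit S-box (illustrative values only): it tabulates, for each input
difference, which inputs produce each output difference, and then turns an observed
plaintext pair and output difference into a short list of key candidates, exactly the
narrowing described in the paragraph above.

    # Difference distribution and key-candidate filtering for a toy S-box.
    from collections import defaultdict

    SBOX = [0x0, 0x1, 0x3, 0x6, 0x7, 0x4, 0x5, 0x2]   # illustrative 3-bit S-box

    def ddt():
        """table[dx][dy] = list of inputs x with S(x) ^ S(x ^ dx) == dy."""
        table = defaultdict(lambda: defaultdict(list))
        for dx in range(1, len(SBOX)):
            for x in range(len(SBOX)):
                table[dx][SBOX[x] ^ SBOX[x ^ dx]].append(x)
        return table

    def key_candidates(pt_pair, observed_dy, table):
        """The S-box input is pt XOR key, so each consistent x yields key = pt XOR x."""
        p1, p2 = pt_pair
        return sorted({p1 ^ x for x in table[p1 ^ p2][observed_dy]})

    T = ddt()
    # Send plaintexts 6 and 7 (input difference 1) and observe output difference 1:
    print(key_candidates((6, 7), 1, T))   # -> [6, 7]: only two keys remain consistent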
In essence, for an n-bit non-linear function one would ideally seek a maximum
differential probability as close to 2^−(n−1) as possible, to achieve differential
uniformity. When this happens, the differential attack requires as much work to
determine the key as simply brute-forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most
entries, however, are either 0 or 2), meaning that in theory one could determine the key
with half as much work as brute force; however, the high branch number of AES
prevents any high-probability trails from existing over multiple rounds. In fact, the AES
cipher would be just as immune to differential and linear attacks with a much weaker
non-linear function. The very high branch number (active S-box count) of 25 over 4
rounds means that over 8 rounds no attack involves fewer than 50 non-linear
transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack
on S-box]^50. For example, with the current S-box, AES emits no fixed differential with
a probability higher than (4/256)^50 = 2^−300 (since 4/256 = 2^−6), which is far below
the required threshold of 2^−128 for a 128-bit block cipher. This would have allowed
room for a more efficient S-box: even a 16-uniform S-box would still give an attack
probability of only (16/256)^50 = 2^−200.
There exist no bijections for even-sized inputs/outputs with 2-uniformity. They exist in
odd fields (such as GF(2^7)) using either cubing or inversion (there are other exponents
that can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to
differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-
bit functions in their 16-bit non-linear function. What these functions gain in immunity
to differential and linear attacks they lose to algebraic attacks; that is, they can be
described and solved via a SAT solver. This is in part why AES (for instance)
has an affine mapping after the inversion.
6.2.8.3.8.4 Specialized types
Specialised types can be found with:
• Higher-order differential cryptanalysis
• Truncated differential cryptanalysis
• Impossible differential cryptanalysis
• Boomerang attack
6.2.8.3.8.5 References
1. Biham and Shamir, 1993, pp. 8-9
2. Coppersmith, Don (May 1994). "The Data Encryption Standard (DES) and its
strength against attacks" (PDF). IBM Journal of Research and Development. 38 (3):
243. doi:10.1147/rd.383.0243. (subscription required)
3. Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government —
Saving Privacy in the Digital Age. Penguin Books. pp. 55–56. ISBN 0-14-024432-8.
4. Matt Blaze, sci.crypt, 15 August 1996, Re: Reverse engineering and the Clipper
chip"
5. Eli Biham, Adi Shamir, Differential Cryptanalysis of the Data Encryption Standard,
Springer Verlag, 1993. ISBN 0-387-97930-1, ISBN 3-540-97930-1.
6. Biham, E. and A. Shamir. (1990). Differential Cryptanalysis of DES-like
Cryptosystems. Advances in Cryptology — CRYPTO '90. Springer-Verlag. 2–21.
7. Eli Biham, Adi Shamir,"Differential Cryptanalysis of the Full 16-Round DES," CS
708, Proceedings of CRYPTO '92, Volume 740 of Lecture Notes in Computer Science,
December 1991. (Postscript)
6.2.8.3.9 Symmetric-key algorithm
Symmetric-key algorithms[1] are algorithms for cryptography that use the
same cryptographic keys for both encryption of plaintext and decryption of ciphertext.
The keys may be identical or there may be a simple transformation to go between the
two keys[2]. The keys, in practice, represent a shared secret between two or more
parties that can be used to maintain a private information link.[3] This requirement that
both parties have access to the secret key is one of the main drawbacks of symmetric
key encryption, in comparison to public-key encryption (also known as asymmetric key
encryption).[4]
6.2.8.3.9.1 Types of symmetric-key algorithms
Symmetric-key encryption can use either stream ciphers or block ciphers.[5]
Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that it is a multiple of the block size. Blocks of 64 bits were commonly
used. The Advanced Encryption Standard (AES) algorithm, approved by NIST in
December 2001, uses 128-bit blocks, as does the GCM block cipher mode of operation.
6.2.8.3.9.2 Implementations
Examples of popular symmetric-key algorithms
include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, Kuznyechik, RC4, 3DES, Ski
pjack, Safer+/++ (Bluetooth), and IDEA.[6][7]
6.2.8.3.9.3 Cryptographic primitives based on symmetric ciphers
Symmetric ciphers are commonly used to achieve other cryptographic primitives than
just encryption.
Encrypting a message does not guarantee that this message is not changed while
encrypted. Hence a message authentication code is often added to a ciphertext to
ensure that changes to the ciphertext will be noted by the receiver. Message
authentication codes can be constructed from symmetric ciphers (e.g. CBC-MAC).
However, symmetric ciphers cannot be used for non-repudiation purposes except by
involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way
compression function for descriptions of several such methods.
6.2.8.3.9.4 Construction of symmetric ciphers
Many modern block ciphers are based on a construction proposed by Horst Feistel.
Feistel's construction makes it possible to build invertible functions from other
functions that are themselves not invertible.
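A minimal Python sketch of the idea: the round function F below deliberately discards
information and is therefore not invertible, yet the network built from it encrypts and
decrypts correctly. The round function, key schedule and word sizes are illustrative
assumptions.

    # Toy Feistel network: invertible even though the round function F is not.
    def F(half: int, round_key: int) -> int:
        # Non-invertible mixing: multiply, add the key, shift and mask away bits.
        return ((half * 0x9E37 + round_key) >> 3) & 0xFFFF

    def feistel_encrypt(block: int, round_keys) -> int:
        left, right = block >> 16, block & 0xFFFF
        for k in round_keys:
            left, right = right, left ^ F(right, k)
        return (left << 16) | right

    def feistel_decrypt(block: int, round_keys) -> int:
        left, right = block >> 16, block & 0xFFFF
        for k in reversed(round_keys):
            left, right = right ^ F(left, k), left
        return (left << 16) | right

    keys = [0x1234, 0xBEEF, 0x0F0F, 0x7A7A]          # illustrative round keys
    pt = 0xCAFEBABE
    ct = feistel_encrypt(pt, keys)
    assert feistel_decrypt(ct, keys) == pt           # decryption inverts encryption
    print(hex(ct))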
6.2.8.3.9.5 Security of symmetric ciphers
Symmetric ciphers have historically been susceptible to known-plaintext
attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis.
Careful construction of the functions for each round can greatly reduce the chances of
a successful attack.
6.2.8.3.10 Key management
6.2.8.3.10.1 Key establishment
Symmetric-key algorithms require both the sender and the recipient of a message to
have the same secret key. All early cryptographic systems required one of those people
to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally
to encrypt the bulk of the messages, but they eliminate the need for a physically secure
channel by using Diffie–Hellman key exchange or some other public-key protocol to
securely come to agreement on a fresh new secret key for each message (forward
secrecy).
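As a minimal sketch of that pattern, the toy Diffie-Hellman exchange below (the prime
and generator are illustrative and far too small for real use) lets two parties agree on a
fresh shared secret over a public channel and then use it as a symmetric key.

    # Toy Diffie-Hellman key agreement followed by symmetric (XOR) encryption.
    import hashlib, secrets

    P = 0xFFFFFFFB        # small illustrative prime; real deployments use >= 2048-bit groups
    G = 5

    a = secrets.randbelow(P - 2) + 2       # Alice's private exponent
    b = secrets.randbelow(P - 2) + 2       # Bob's private exponent
    A = pow(G, a, P)                       # exchanged in the clear
    B = pow(G, b, P)                       # exchanged in the clear

    shared_alice = pow(B, a, P)
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob      # both sides derive the same secret

    key = hashlib.sha256(shared_alice.to_bytes(4, "big")).digest()

    def xor_encrypt(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    ct = xor_encrypt(b"fresh session message", key)
    print(xor_encrypt(ct, key))            # XORing again with the key restores the plaintext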
6.2.8.3.10.2 Key generation
When used with asymmetric ciphers for key transfer, pseudorandom key generators are
nearly always used to generate the symmetric cipher session keys. However, lack of
randomness in those generators or in their initialization vectors is disastrous and has
led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation
uses a source of high entropy for its initialization.[8][9][10]
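In Python, for example, session keys should be drawn from the operating system's
cryptographically secure generator rather than from a seedable general-purpose one; a
minimal sketch:

    # Generate symmetric key material from a high-entropy OS source.
    import secrets

    session_key = secrets.token_bytes(32)      # 256-bit key from the OS CSPRNG
    iv = secrets.token_bytes(16)               # fresh initialization vector per message
    print(session_key.hex(), iv.hex())

    # Anti-pattern: seeding a general-purpose PRNG (e.g. with the current time)
    # makes the resulting "keys" predictable and has caused real breaks.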
6.2.8.3.11 Reciprocal cipher
A reciprocal cipher is a cipher where, just as one enters the plaintext into
the cryptography system to get the ciphertext, one could enter the ciphertext into the
same place in the system to get the plaintext. A reciprocal cipher is also sometimes
referred to as a self-reciprocal cipher; a minimal sketch follows the list below.
Examples of reciprocal ciphers include:
• Beaufort cipher
• Enigma machine
• ROT13
• XOR cipher
• Vatsyayana cipher
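A minimal sketch of the reciprocal property using two of the examples above, ROT13
and an XOR cipher: applying the same operation twice, with the same key where one is
used, returns the plaintext.

    # Reciprocal ciphers: encryption and decryption are the same operation.
    import codecs

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    msg = "attack at dawn"
    assert codecs.encode(codecs.encode(msg, "rot13"), "rot13") == msg   # ROT13 is self-inverse

    key = b"\x5a\xc3\x99"                       # illustrative key
    ct = xor_cipher(msg.encode(), key)
    assert xor_cipher(ct, key) == msg.encode()  # the XOR cipher is self-inverse
    print(ct.hex())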
Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-
key, and private-key encryption. Use of the last and first terms can create ambiguity
with similar terminology used in public-key cryptography. Symmetric-key cryptography
is to be contrasted with asymmetric-key cryptography.
6.2.8.3.12 References
1. Kartit, Zaid (February 2016). "Applying Encryption Algorithms for Data Security in
Cloud Storage, Kartit, et. al". Advances in ubiquitous networking: proceedings of
UNet15: 147.
2. Delfs, Hans & Knebl, Helmut (2007). "Symmetric-key encryption". Introduction to
cryptography: principles and applications. Springer. ISBN 9783540492436.
3. Mullen, Gary & Mummert, Carl (2007). Finite fields and applications. American
Mathematical Society. p. 112. ISBN 9780821844182.
4. Pelzl & Paar (2010). Understanding Cryptography. Berlin: Springer-Verlag. p. 30.
5. Ayushi (2010). "A Symmetric Key Cryptographic Algorithm" (PDF). International
Journal of Computer Applications. 1-No 15.
6. Roeder, Tom. "Symmetric-Key Cryptography". www.cs.cornell.edu.
Retrieved 2017-02-05.
7. Ian Goldberg and David Wagner. "Randomness and the Netscape Browser".
January 1996 Dr. Dobb's Journal. quote: "it is vital that the secret keys be generated
from an unpredictable random-number source."
8. Thomas Ristenpart , Scott Yilek. "When Good Randomness Goes Bad: Virtual
Machine Reset Vulnerabilities and Hedging Deployed Cryptography
(2010)" CiteSeerx: 10.1.1.183.3583 quote from abstract: "Random number generators
(RNGs) are consistently a weak link in the secure use of cryptography."
9. "Symmetric Cryptography". James. 2006-03-11.
Lattice-based cryptography uses lattices either in the construction of cryptographic
primitives or in their security proofs, and is a leading approach to post-quantum
cryptography. Such constructions appear to resist attack by both classical and quantum
computers, since many lattice problems are not known to be efficiently solvable.
6.2.8.4.1 History
Ajtai introduced the first lattice-based cryptographic construction, showing how a
cryptographic hash function can be based on the computational hardness of the
classic lattice problem known as Short Integer Solutions.[1] Hoffstein, Pipher and
Silverman later proposed NTRU, a lattice-based public-key encryption scheme,[2]
although its security was not known to reduce to worst-case lattice problems. Regev[3]
then proved the security of a public-key scheme based on the Learning with Errors
problem, which was followed by extensions[4][5] and efficiency improvements.[6][7]
[8][9] Eventually Gentry formulated the first fully homomorphic encryption scheme,
also based on a lattice problem.[10]
6.2.8.4.2 Mathematical background
The theory applies a lattice as a set of all integer linear combinations of basis vectors
where basis for a lattice is not unique giving a Shortest Vector Problem or minimal
Euclidean length of a non-zero lattice vector. Both problems are hard to solve
efficiently, even with approximation factors that are polynomial leading to secure
lattice-based cryptographic constructions.
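The definitions above can be made concrete in two dimensions. The sketch below
(arbitrary example basis) enumerates small integer combinations of the basis vectors
and reports the shortest non-zero vector it finds within that search box; the same
brute-force approach becomes hopeless in the high dimensions used for cryptography.

    # Brute-force search for a short non-zero vector in a small 2-D lattice.
    from itertools import product
    from math import hypot

    b1 = (201, 37)            # illustrative basis vectors
    b2 = (1648, 297)

    def lattice_point(x: int, y: int):
        return (x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])

    BOUND = 30                # enumerate coefficients |x|, |y| <= BOUND
    shortest = min(
        (lattice_point(x, y) for x, y in product(range(-BOUND, BOUND + 1), repeat=2)
         if (x, y) != (0, 0)),
        key=lambda v: hypot(*v),
    )
    print(shortest, round(hypot(*shortest), 2))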
6.2.8.4.3 Lattice-based cryptosystems
Gentry used lattice techniques to construct a fully homomorphic encryption
scheme[11][12][13] supporting circuits of arbitrary depth. Notable lattice-based
constructions include the following:
6.2.8.4.3.1 Encryption
• Peikert's Ring - Learning With Errors (Ring-LWE) Key Exchange[7]
• GGH encryption scheme
• NTRUEncrypt
6.2.8.4.4 Signature
• Güneysu, Lyubashevsky, and Poppleman's Ring - Learning with Errors (Ring-LWE)
Signature[14]
• GGH signature scheme
• NTRUSign
6.2.8.4.5 Hash function
• SWIFFT
• LASH (Lattice Based Hash Function)[15][16]
6.2.8.4.6 Security
Lattice-based systems lead to the public-key cryptography race[17] and alternatives
which are based on the computational complexity for factoring and discrete
logarithm and related problems are solvable in polynomial time on a quantum
computer[18][1][3][4] depending on the probability of breaking the cipher.
6.2.8.4.7 Functionality
Lattice-based constructions also provide advanced functionality such as fully
homomorphic encryption,[10] indistinguishability obfuscation,[19] cryptographic
multilinear maps and functional encryption.[19]
6.2.8.4.8 References
1. Ajtai, Miklós (1996). "Generating Hard Instances of Lattice Problems".
Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing.
pp. 99–108. doi:10.1145/237814.237838. ISBN 0-89791-785-5.
2. Hoffstein, Jeffrey; Pipher, Jill; Silverman, Joseph H. (1998-06-21). "NTRU: A ring-
based public key cryptosystem". Algorithmic Number Theory. Springer, Berlin,
Heidelberg: 267–288. doi:10.1007/bfb0054868.
3. Regev, Oded (2005-01-01). "On Lattices, Learning with Errors, Random Linear
Codes, and Cryptography". Proceedings of the Thirty-seventh Annual ACM Symposium
on Theory of Computing. STOC '05. New York, NY, USA: ACM: 84–
93. doi:10.1145/1060590.1060603. ISBN 1581139608.
4. Peikert, Chris (2009-01-01). "Public-key Cryptosystems from the Worst-case
Shortest Vector Problem: Extended Abstract". Proceedings of the Forty-first Annual
ACM Symposium on Theory of Computing. STOC '09. New York, NY, USA: ACM: 333–
342. doi:10.1145/1536414.1536461. ISBN 9781605585062.
5. Brakerski, Zvika; Langlois, Adeline; Peikert, Chris; Regev, Oded; Stehlé, Damien
(2013-01-01). "Classical Hardness of Learning with Errors". Proceedings of the Forty-
fifth Annual ACM Symposium on Theory of Computing. STOC '13. New York, NY, USA:
ACM: 575–584. doi:10.1145/2488608.2488680. ISBN 9781450320290.
6. Lyubashevsky, Vadim; Peikert, Chris; Regev, Oded (2010-05-30). "On Ideal
Lattices and Learning with Errors over Rings". Advances in Cryptology – EUROCRYPT
2010. Springer, Berlin, Heidelberg: 1–23. doi:10.1007/978-3-642-13190-5_1.
7. Peikert, Chris (2014-07-16). "Lattice Cryptography for the Internet" (PDF). IACR.
Retrieved 2017-01-11.
8. Alkim, Erdem; Ducas, Léo; Pöppelmann, Thomas; Schwabe, Peter (2015-01-
01). "Post-quantum key exchange - a new hope".
9. Bos, Joppe; Costello, Craig; Ducas, Léo; Mironov, Ilya; Naehrig, Michael;
Nikolaenko, Valeria; Raghunathan, Ananth; Stebila, Douglas (2016-01-01). "Frodo: Take
off the ring! Practical, Quantum-Secure Key Exchange from LWE".
10. Gentry, Craig (2009-01-01). A Fully Homomorphic Encryption Scheme (Thesis).
Stanford, CA, USA: Stanford University.
11. Gentry, Craig (2009). Fully homomorphic encryption using ideal lattices.
Proceedings of the forty-first annual ACM symposium on Theory of computing. pp. 169–
178. doi:10.1145/1536414.1536440. ISBN 978-1-60558-506-2.
12. "IBM Researcher Solves Longstanding Cryptographic Challenge". IBM Research.
2009-06-25. Retrieved 2017-01-11.
13. Michael Cooney (2009-06-25). "IBM touts encryption innovation". Computer
World. Retrieved 2017-01-11.
14. Güneysu, Tim; Lyubashevsky, Vadim; Pöppelmann, Thomas (2012). "Practical
Lattice-Based Cryptography: A Signature Scheme for Embedded Systems" (PDF).
IACR. doi:10.1007/978-3-642-33027-8_31. Retrieved 2017-01-11.
15. "LASH: A Lattice Based Hash Function". Archived from the original on October
16, 2008. Retrieved 2008-07-31. CS1 maint: BOT: original-url status unknown
(link) (broken)
16. Scott Contini, Krystian Matusiewicz, Josef Pieprzyk, Ron Steinfeld, Jian Guo, San Ling and Huaxiong Wang (2008). "Cryptanalysis of LASH" (PDF). doi:10.1007/978-3-540-71039-4_13.
17. Micciancio, Daniele; Regev, Oded (2008-07-22). "Lattice-based
cryptography" (PDF). Retrieved 2017-01-11.
18. Shor, Peter W. (1997-10-01). "Polynomial-Time Algorithms for Prime Factorization
and Discrete Logarithms on a Quantum Computer". SIAM Journal on Computing. 26 (5):
1484–1509. doi:10.1137/S0097539795293172. ISSN 0097-5397.
19. Garg, Sanjam; Gentry, Craig; Halevi, Shai; Raykova, Mariana; Sahai, Amit; Waters,
Brent (2013-01-01). "Candidate Indistinguishability Obfuscation and Functional
Encryption for all circuits".
20. Oded Goldreich, Shafi Goldwasser, and Shai Halevi. "Public-key cryptosystems
from lattice reduction problems". In CRYPTO ’97: Proceedings of the 17th Annual
International Cryptology Conference on Advances in Cryptology, pages 112–131,
London, UK, 1997. Springer-Verlag.
21. Phong Q. Nguyen. "Cryptanalysis of the Goldreich–Goldwasser–Halevi
cryptosystem from crypto ’97". In CRYPTO ’99: Proceedings of the 19th Annual
International Cryptology Conference on Advances in Cryptology, pages 288–304,
London, UK, 1999. Springer-Verlag.
22. Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector
problem: extended abstract,” in Proceedings of the 41st annual ACM symposium on
Theory of computing (Bethesda, MD, USA: ACM, 2009), 333–342, DOI
10.1145/1536414.1536461
23. Oded Regev. Lattice-based cryptography. In Advances in cryptology (CRYPTO),
pages 131–141, 2006.
6.3 Summary
6.3.1 Defences
IoT solutions have an impact on the technologies and services that store, integrate, visualize, and analyze IoT data. Security has to tackle cyberbullying, cybercrime and cyberwarfare. Infectious malware consists of computer viruses and worms. Concealment types include Trojan horses, rootkits, backdoors, zombie computers, man-in-the-middle, man-in-the-browser, man-in-the-mobile and clickjacking. Malware for profit comes as privacy-invasive software, adware, spyware, botnets, keystroke logging, form grabbing, web threats, fraudulent dialers, malbots, scareware, rogue security software, ransomware and crimeware. Threats can appear as denial of service, eavesdropping, exploitation, rootkits and vulnerabilities.
Security aims to help identify and apply standards for protection and prevention at physical, personnel and organizational levels. It requires securing networks and allied infrastructure, and securing applications and databases. IoT security needs to cover information security, mobile security, network security, internet security, application security, computer security and data-centric security.
Security defence philosophies should be:
• reduction/mitigation – build ways to eliminate problems
• assign/transfer – divert the cost through insurance or outsourcing
• accept – where the cost of the countermeasure versus the threat is uneconomical
• ignore/reject – where the threat can safely be ignored
Cybersecurity requirements span five key areas:
• Identification—understanding risk profile and current state
• Protection—applying prevention strategies to mitigate vulnerabilities and threats
• Detection—detecting anomalies and events
• Response—incident response, mitigation, and improvements
• Recovery—continuous life cycle improvement
Defences are reduced to “find what defence and where” using known security threats
and risk analysis. Specifically security threats / risks are countered by software,
hardware and procedures.[4] Counter measure procedures are supported by security
testing, information systems auditing, business continuity planning and digital
forensics.
Protections are classified as anti-keyloggers, antivirus software, browser security, internet security, mobile security, network security, defensive computing, firewalls, intrusion detection systems (including application protocol-based intrusion detection systems, APIDS), intrusion prevention systems, ciphering and data loss prevention software. Operational countermeasures consist of computer and network surveillance, honeypots and operations such as Bot Roast.
6.3.2 Conclusion
From the analysis in the previous section, an antivirus program, a tuned firewall (configured to cover all input sources and all output destinations), an APIDS and agreed ciphers are sufficient to protect IoT security.
6.3.3 References
1. Turner, Dawn M. "Digital Authentication: The Basics". Cryptomathic. Retrieved 9
August 2016.
2. Ahi, Kiarash (May 26, 2016). "Advanced terahertz techniques for quality control
and counterfeit detection". Proc. SPIE 9856, Terahertz Physics, Devices, and Systems
X: Advanced Applications in Industry and Defense, 98560G. doi:10.1117/12.2228684.
Retrieved May 26, 2016.
3. "How to Tell – Software". microsoft.com. Retrieved 11 December 2016.
4. Federal Financial Institutions Examination Council (2008). "Authentication in an
Internet Banking Environment" (PDF). Retrieved 2009-12-31.
5. Committee on National Security Systems. "National Information Assurance (IA)
Glossary" (PDF). National Counterintelligence and Security Center. Retrieved 9
August 2016.
6. European Central Bank. "Recommendations for the Security of Internet
Payments"(PDF). European Central Bank. Retrieved 9 August 2016.
7. "FIDO Alliance Passes 150 Post-Password Certified Products". InfoSecurity
Magazine. 2016-04-05. Retrieved 2016-06-13.
8. Brocardo ML, Traore I, Woungang I, Obaidat MS. "Authorship verification using
deep belief network systems". Int J Commun Syst. 2017. doi:10.1002/dac.3259
9. "Draft NIST Special Publication 800-63-3: Digital Authentication Guideline".
National Institute of Standards and Technology, USA. Retrieved 9 August 2016.
10. Eliasson, C; Matousek (2007). "Noninvasive Authentication of Pharmaceutical
Products through Packaging Using Spatially Offset Raman Spectroscopy". Analytical
Chemistry. 79(4): 1696–1701. doi:10.1021/ac062223z. PMID 17297975. Retrieved 9
Nov 2014.
11. Li, Ling (March 2013). "Technology designed to combat fakes in the global supply
chain". Business Horizons. 56 (2): 167–177. doi:10.1016/j.bushor.2012.11.010.
Retrieved 9 Nov 2014.
12. "How Anti-shoplifting Devices Work", HowStuffWorks.com
13. Norton, D. E. (2004). The effective teaching of language arts. New York:
Pearson/Merrill/Prentice Hall.
14. McTigue, E.; Thornton, E.; Wiese, P. (2013). "Authentication Projects for
Historical Fiction: Do you believe it?". The Reading Teacher. 66: 495–
505. doi:10.1002/trtr.1132.
15. The Register, UK; Dan Goodin; 30 March 2008; Get your German Interior
Minister's fingerprint, here. Compared to other solutions, "It's basically like leaving the
password to your computer everywhere you go, without you being able to control it
anymore", one of the hackers comments.
16. https://technet.microsoft.com/en-us/library/ff687018.aspx
17. "AuthN, AuthZ and Gluecon – CloudAve". cloudave.com. 26 April 2010.
Retrieved 11 December 2016.
18. A mechanism for identity delegation at authentication level, N Ahmed, C Jensen
– Identity and Privacy in the Internet Age – Springer 2009
7 Methodology
7.1 Commentary
7.1.1 Introduction
This section reviews how some other technologies can contribute to language processing. It consists of further sub-sections, one for each of the theories that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results in the last part of this section.
7.1.2 Methodology from Search Theory
7.1.2.1 Introduction
The operations research technique known as the theory of search is applied to the study of a system. The results of this theoretical study have implications for the characteristics of the personnel using the system, the user and the rules governing the system. The theory of search has been used for firing missiles and prospecting for minerals. It uses three basic algorithms. The mean path theorem defines the chance of a missile hitting a target. The optimal search effort procedure determines the best place to search based on known probabilities and the income from the targets. The clustering technique uses a cheap scan to identify areas where more detailed searches can be performed.
7.1.2.2 Results
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies. The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet those requirements.
If no target is found then the error is reported and after review the target is added to
the system.
7.1.3 Methodology from Network Theory
7.1.3.1 Introduction
This section introduces the concepts of network theory as a system. It considers ways in
which methods can help the validation of connections within such a system. A
description is given of the concepts of such a system based on a set of network theory
structures (nodes and edges) analogous to that used in the system. A set of algorithms
are defined which are derived from network theory and can be used to resolve ordered
elements, single root trees and flows in networks. We developed the network structures
and algorithms to define a structure for entities with dependences. It demonstrates how
the method can be applied to the validation of a system for consistency, completeness
and use with and without dangles or loops.
7.1.3.2 Algorithms from Network Theory.
The algorithms from network theory follow the natural structures of a graph. They use
the adjacency matrix, the connectivity matrix and the flow matrix as the basis of the
analysis of the structures under consideration.
a. The first consideration is the property that nodes of a graph are adjacent. In an
undirected graph the adjacency matrix is symmetric about the major diagonal. For a
directed graph the symmetric property implies a loop.
b. The adjacency matrix of a sub-graph will have element values which are zero in the same position as in the full graph adjacency matrix and may have a zero where there is a one in the full graph adjacency matrix.
c. A bipartite graph is represented by a reachable matrix which shows a block structure.
d. A path from one node to another node is determined from the reachable matrix by the common row numbers and column numbers which are covered in the row corresponding to the starting node and the column corresponding to the end node.
e. A cycle or circuit is determined by diagonal entries of the reachable matrix being non
zero.
f. A tree is found when there are no non-zero diagonal elements in the reachable matrix.
g. A rooted tree is resolved through the condition that the reachable matrix has all
diagonal elements zero and only one row has all non-zero elements except on the
diagonal position.
h. A block structured graph is found through the circumstances where there is only one
element in a row of the reachable matrix.
i. A digraph is established when the non zero elements Aij of the adjacency matrix imply a
direction from node i to node j.
j. An ancestor and a descendent are determined in the same way as the derivation of a
path shown above.
k. A leaf is determined by finding the row of the adjacency matrix which is zero.
l. A terminal node in a network is decided through the method found as the leaf.
m. The height and depth are calculated by A(1 − A^(n−1))D(1 − A) where A is the adjacency matrix and D is the matrix with its elements all unity.
n. The flow through a node is derived from the formula A(1 − A^(n−1))D/(1 − A) where A is the flow matrix and D is the matrix of input flow values to the nodes.
o. The length of a network path is A(1 − A^(n−1))D/(1 − A) where A is the adjacency matrix and D is the matrix of input flow values to the nodes.
p. The critical path is defined by the algorithm finding the longest path in the network.
q. The number of paths from one node to another is determined by the value of the corresponding element of the A^k matrix, where A is the adjacency matrix.
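A minimal sketch of how several of items a–q above can be computed in practice is given below; the adjacency matrix is an invented example, and the reachability matrix is built by OR-ing boolean matrix powers.
```python
# A minimal sketch (assumed example): deriving the reachability matrix from an
# adjacency matrix and using it to spot cycles, leaves and path counts.
import numpy as np

A = np.array([[0, 1, 0, 0],      # example directed graph: 0->1, 1->2, 1->3
              [0, 0, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])

n = A.shape[0]

# Reachability: OR together A, A^2, ..., A^n (boolean matrix powers).
reach = np.zeros_like(A)
power = A.copy()
for _ in range(n):
    reach = reach | (power > 0)
    power = power @ A

print("cycle present:", bool(np.any(np.diag(reach))))       # non-zero diagonal => cycle
print("leaves:", [i for i in range(n) if not A[i].any()])   # all-zero row => leaf
k = 2
print(f"paths of length {k} from 0 to 2:", np.linalg.matrix_power(A, k)[0, 2])
```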
7.1.3.3 Results
There are six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
7.1.3.3.1 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in
section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.3.3.2 Consistency
An entity is accessed from two other different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.3.3.3 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.4 Methodology from Markov Theory
7.1.4.1 Introduction
This section introduces the concepts of Markov theory in a system. It considers ways
in which these methods can help the sizing of such a system. A description is given of
the concepts of such a system. It shows a set of Markov theory structures analogous to
that used in a system. A set of algorithms are defined which are derived from Markov
theory and can be used to resolve flows in networks. We developed the Markov
structures and algorithms to define a flow of information. It demonstrates how the
method can be applied to the sizing of a system and its parts.
7.1.4.2 Algorithms from Markov Theory.
The algorithms from Markov theory follow the natural structures of a graph. They use
the adjacency matrix, A, the connectivity matrix and the flow matrix as the basis of the
analysis of the structures under consideration for the sizing of properties by using
probabilities for the connections. They can use the matrices to reflect the transfer
between nodes of the graph one cycle at a time and assess the size of flow between
different nodes.
a. The first consideration is the property that vertices of a graph are adjacent. In an undirected graph the adjacency matrix has no meaning. For a directed graph, a non-zero symmetric property about the major diagonal implies a loop of some form.
b. The adjacency matrix of a sub-graph will have element values which are zero in the same position as in the full graph adjacency matrix and may have zero where there is a one in the full graph adjacency matrix.
c. A bipartite graph is represented by a reachable matrix which shows a block
structure and sizes of transfer in the graph.
d. A quantity path from one node to another node is determined from the reachable matrix by the common row numbers and column numbers which are covered in the row corresponding to the starting node and the column corresponding to the end node.
e. A cycle or circuit size is determined by diagonal entries of the reachable matrix
being non zero.
f. A tree is found when there are no non-zero diagonal elements in the reachable matrix.
g. A rooted tree is resolved through the condition that the reachable matrix has all
diagonal elements zero and only one row has all non-zero elements except on the
diagonal position.
h. A block structured graph is found through the circumstances where there is only one
element in a row of the reachable matrix.
i. A digraph is established when the non-zero elements Aij of the adjacency matrix imply a direction from node i to node j.
j. An ancestor and a descendent are determined in the same way as the derivation of a
path shown above.
k. A leaf is determined by finding the row of the adjacency matrix which is zero.
l. A terminal node in a network is decided through the method found as the leaf.
m. The height and depth are calculated by A(1 − A^(n−1))D(1 − A) where A is the adjacency matrix and D is the matrix with its elements all unity.
n. The flow through a node is derived from the formula A(1 − A^(n−1))D/(1 − A) where A is the flow matrix and D is the matrix of input flow values to the nodes.
o. The length of a network path is A(1 − A^(n−1))D/(1 − A) where A is the adjacency matrix and D is the matrix of input flow values to the nodes.
p. The critical path is defined by the algorithm finding the longest path in the network.
q. The number of paths from one node to another is determined by the value of the corresponding element of the A^k matrix, where A is the adjacency matrix.
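The following minimal sketch (with invented transition probabilities) illustrates the use of these matrices for sizing: flow is propagated one cycle at a time through a transition matrix that includes an error-sink node, anticipating the error-rate estimate discussed in the results below.
```python
# A minimal sketch (assumed example): propagating flow through a graph one cycle
# at a time with a transition-probability matrix and an extra "error sink" node.
import numpy as np

# Three processing nodes plus an error sink (node 3). Each row sums to 1.
P = np.array([[0.0, 0.8, 0.0, 0.2],   # node 0 -> node 1 (0.8) or error (0.2)
              [0.0, 0.0, 0.7, 0.3],   # node 1 -> node 2 (0.7) or error (0.3)
              [0.0, 0.0, 1.0, 0.0],   # node 2: terminal (absorbing)
              [0.0, 0.0, 0.0, 1.0]])  # error sink absorbs

flow = np.array([1.0, 0.0, 0.0, 0.0])   # all input enters at node 0
for _ in range(10):                      # iterate transfers cycle by cycle
    flow = flow @ P

print("flow reaching node 2:", round(flow[2], 3))
print("estimated total error rate:", round(flow[3], 3))
```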
7.1.4.3 Results
Using the algorithms in the previous sub-section we can determine what nodes have
flow through them and which do not. We can find the edges that are used and those
unused. We can ascertain what the flow is between the nodes and which are single
entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
7.1.5 Methodology from Algebraic Theory
7.1.5.1 Introduction
This section looks at the contribution that algebra can make to a system that we shall
use later in this paper. The algebraic system starts with a set of entities which have
operations. Formally we define a set and a function that maps domain elements onto
range elements. We define the type of function of the sets that we are considering and
define the associated domain and range.
7.1.5.2 Techniques of Algebraic Theory
We start any algebraic system with a set of elements. They have a set of operations which are subject to various rules. The elements have a set of properties which link them to each other. The operations that are of use to us in the field of linguistics are those of combination and valuation. Other operations that are helpful to us are those following the associative, commutative and distributive laws. Our basic system elements are entities, pictures or sounds which combine into services, sentences, paragraphs, etc. respectively. We give meaning to the combinations using the valuation technique, which becomes a more complicated procedure as we move to greater quantities of natural language. We use relationships between parts of language to define differences such as the generalisation or specialisation of a concept. We use equivalence of different elements to give a more varied experience of the language.
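A minimal sketch of such an algebraic system, with an invented element type and simple combination and valuation rules, is given below.
```python
# A minimal sketch (assumed example): a tiny algebraic system with a set of
# elements, a combination operation and a valuation function.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    kind: str    # e.g. "digit", "word"
    value: str

def combine(a, b):
    """Combination rule: elements of the same kind concatenate into a compound element."""
    if a.kind != b.kind:
        raise ValueError("combination rule violated: kinds differ")
    return Element(a.kind, a.value + b.value)

def valuation(e):
    """Valuation rule: digits evaluate to numbers, anything else to its text."""
    return int(e.value) if e.kind == "digit" else e.value

compound = combine(Element("digit", "4"), Element("digit", "2"))
print(valuation(compound))   # 42 -- meaning assigned to the combined element
```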
7.1.5.3 Results
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a linguistic system. The basic elements are derived from text,
pictures and/or sound. We restrict these basic elements by specifying what is allowed.
We apply rules of combination to the elements to form larger elements that we classify
as compound elements for which we have rules to say what is correct and what is
erroneous. We iterate on the combination for more elements to be validated against
format rules and evaluation rules.
We use valuation rules to classify symbols and numbers and evaluate the number. The
elements are classified into compound elements to help the format rules. Evaluation
rules give meaning (valuation) to elements and compound elements. Relations are
derived from another set of operations which give links such as synonyms, antonyms,
generalisation, and specification based on properties of the elements. Conjunctions
give ways of replicating actions and elements. Other rules are ways of defining
properties of objects or operations whilst some apply to the scope of meaning and the
scope of an object or operation or value.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
7.1.6 Methodology from Logic Theory
7.1.6.1 Introduction
This section looks at the contribution that logic can make to a system that we shall
use later in this paper. A logic system starts with a set of primitive elements with
logical values which can be combined to give statements of logical values and consider
the system for consistency, validity, soundness, and completeness. Formally we define
a set of sentences and derive values from them and then check that they are
consistent, valid, sound, and complete.
7.1.6.2 Techniques of Logic Theory
We start any logic system with a system which is a set of primitives. We give the
primitives values for their truth. The primitives include operations subject to various
rules. The operations that are of use to us in the fields of linguistics are those of
combination and valuation. Our basic system elements are entities, pictures or sounds which combine into services, sentences, paragraphs, etc. Other operations give associative, commutative and distributive laws for the evaluation of the combinations. We give meaning to the combinations using the valuation technique, which becomes a more complicated procedure as we move to greater quantities of natural language. We derive relationships from parts of language to define differences such as the generalisation or specialisation of a concept. We use equivalence of different elements to give a more varied experience of the language.
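The sketch below is a minimal, invented illustration of the same idea in logical terms: primitives with truth values are combined into statements, and the set of statements is checked for consistency.
```python
# A minimal sketch (assumed example): primitives with truth values combined into
# statements, then checked for consistency (some assignment makes all of them true).
from itertools import product

primitives = ["p", "q"]
# Statements built from the primitives as Python boolean expressions.
statements = ["p or q", "not (p and q)"]

def consistent(statements, primitives):
    """True if some assignment of truth values satisfies every statement."""
    for values in product([False, True], repeat=len(primitives)):
        env = dict(zip(primitives, values))
        if all(eval(s, {}, env) for s in statements):   # valuation of each statement
            return True
    return False

print(consistent(statements, primitives))   # True: e.g. p=True, q=False
```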
7.1.6.3 Results
We have used the scheme of logic theory to give us a set with elements and functions
to be a basis of a linguistic system. We have primitives which are derived from pictures
and/or sound and have values associated with them. We restrict these basic elements
by specifying what is allowed with the rules in the logic. We apply rules of combination
to the elements to form larger elements that we classify as services for which we have
rules to say what is correct and what is erroneous. We iterate on the combination to
form more elements to be validated against syntactic rule and eventually semantic
rules.
We use valuation rules to classify services and numbers and evaluate the meaning. The services are classified into parts of speech to help the syntactic rules. The valuation technique gives meaning (valuation) to services and combinations of them. Relations are derived from
another set of primitive functions which give links such as synonyms, antonyms,
generalisation, and specification based on values of entities. Conjunctions give ways of
replicating actions and elements with distributive laws defined by the logic. Other parts
of speech are ways of defining rules for properties of objects or operations whilst some
give the scope of meaning and the scope of an object, operation or value.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
7.1.7 Methodology from Programming Language Theory
7.1.7.1 Introduction
This section reviews the history of programming languages starting with the ways in
which they have been formalised over the years. The concepts of sound and graphics
are included into the study. It goes on to recount the ways that object oriented
techniques have been reflected in terms of natural language.
7.1.7.2 Results
Programming language theory gives us formalised rules of syntax and semantics for the definition of a programming language in terms of a formal language. From graphics and sound technologies we find a similar kind of definition. We discover we need to set a priority on the rules for catching idioms such as "jack in a box" = toy.
Object oriented programming gives us the concept of scope for meaning, nouns being
objects, adjective as properties, verbs as methods with arguments of nouns and
adverbs, pronouns the equivalent of the "this" operator and the concepts of synonyms,
generalisation and specification. Overloading of definitions allows for meaning to
change according to context. Conjunctions give ways of replicating actions (iterations)
under different cases and follow distributive rules for evaluation. Other parts of speech
are ways of defining properties of objects or actions with polymorphism for nouns and
verbs. Novels are analogous to procedures that do little action and return no results on exit. Research journals provide articles which are likened to procedures with results on exit. People specialise in the different packages and libraries that they use. Experts extend their knowledge network in one particular way whilst slaves do it in another.
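As an illustration of the object-oriented mapping described above, the following minimal sketch (invented names throughout) treats nouns as objects, adjectives as properties and verbs as methods taking nouns and adverbs as arguments.
```python
# A minimal sketch (assumed example): nouns as objects, adjectives as properties,
# verbs as methods with noun and adverb arguments.
class Noun:
    def __init__(self, name, **adjectives):
        self.name = name
        self.properties = adjectives          # adjectives become properties

    def verb(self, action, target=None, adverb=None):
        """Verbs act as methods taking nouns and adverbs as arguments."""
        parts = [self.name, action]
        if target:
            parts.append(target.name)
        if adverb:
            parts.append(adverb)
        return " ".join(parts)

dog = Noun("dog", colour="brown", size="small")
ball = Noun("ball", colour="red")
print(dog.verb("chases", ball, "quickly"))        # dog chases ball quickly
print(dog.properties["colour"])                   # adjective retrieved as a property
```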
If an object, property or method is not found then the error is reported as a stack dump
and after review adjust language structure.
7.1.8 Methodology from Quantitative Theory
7.1.8.1 Introduction
This section considers the quantitative aspects of languages. It reflects on software
physics developed by Halstead. It goes on to review extensions of predicted value
based on work of the mean path theorem. Then we consider some interesting results for
measures of complexity and develop formulae for the best strategy of search. Using this
formula we show that we need to search an area the number of times that we expect to
find separate targets and that the value of a measure for the best strategy is unity when
we need to find 6.4 targets.
7.1.8.2 Results
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 = total number of occurrences of operands
N1 = n1 log n1
N2 = n2 log n2
n = program vocabulary
N = program length
n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
V = actual program volume
V* = theoretical program volume
V = N log n
V* = N* log n*
L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n / (1000 n2)
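The quantities defined above can be computed directly; the sketch below uses invented operator and operand counts and follows the document's own formulae, assuming base-2 logarithms.
```python
# A minimal sketch (assumed example): computing the quantities defined above for
# an invented operator/operand tally, following the formulae in the text.
from math import log2

n1, n2 = 10, 15        # number of distinct operators and operands
N1, N2 = 40, 55        # total occurrences of operators and operands
S = 18                 # Stroud number

n = n1 + n2                                  # program vocabulary
N = N1 + N2                                  # program length
n_star = n
N_star = N1 * log2(n1) + N2 * log2(n2)
V = N * log2(n)                              # actual program volume
V_star = N_star * log2(n_star)               # theoretical program volume
L = V_star / V                               # program level
lam = L * V_star                             # programming language level
m = V / L                                    # mental discriminations
d = m / S                                    # development time

print(f"n={n} N={N} V={V:.1f} L={L:.2f} lambda={lam:.1f} d={d:.1f}")
```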
The mean free path theorem derives the relations:
P(m,C) = C^m/(m! e^C) = probability of hitting the target m times for a coverage ratio C.
C = n a s t/z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
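A minimal sketch evaluating the coverage ratio C and the hit probability P(m, C) for an invented search scenario is shown below.
```python
# A minimal sketch (assumed example): evaluating the coverage ratio C and the
# hit probability P(m, C) defined above.
from math import exp, factorial

def coverage_ratio(n, a, s, t, z):
    """C = n*a*s*t / z: area swept by n passes of range a at speed s for time t, over area z."""
    return n * a * s * t / z

def hit_probability(m, C):
    """P(m, C) = C**m / (m! * e**C): probability of hitting the target m times."""
    return C ** m / (factorial(m) * exp(C))

C = coverage_ratio(n=4, a=2.0, s=1.5, t=10.0, z=200.0)
print(f"coverage ratio C = {C:.2f}")
for m in range(3):
    print(f"P({m}, C) = {hit_probability(m, C):.3f}")
```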
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely the M relation gives us the expected number of errors, or the number of statements, when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Variable    Value    Value
m           0        1
mP(m,m)     0        6.4
The entropy of a training set containing p positive and n negative examples is
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
A chosen attribute A, with v distinct values, divides the training set E into subsets E1, … , Ev according to their values for A. The expected entropy remaining after the attribute test is
remainder(A) = Σ(i=1..v) ((pi + ni)/(p + n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
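The following minimal sketch computes IG(A) for an invented training-set split, following the definitions above.
```python
# A minimal sketch (assumed example): computing the information gain IG(A)
# for a small training set with p positive and n negative examples.
from math import log2

def I(p, n):
    """Entropy of a set containing p positive and n negative examples."""
    total = p + n
    h = 0.0
    for count in (p, n):
        if count:
            q = count / total
            h -= q * log2(q)
    return h

def remainder(splits):
    """Expected entropy after the attribute test; splits = [(p_i, n_i), ...]."""
    p = sum(pi for pi, _ in splits)
    n = sum(ni for _, ni in splits)
    return sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in splits)

def information_gain(splits):
    p = sum(pi for pi, _ in splits)
    n = sum(ni for _, ni in splits)
    return I(p, n) - remainder(splits)

# Attribute with two values splitting 6 positives / 6 negatives into (4,1) and (2,5).
print(round(information_gain([(4, 1), (2, 5)]), 3))
```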
Learning viewed as a Bayesian updating of a probability distribution over the hypothesis space uses predictions formed as a likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability of the hypothesis given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data per hypothesis, giving standard statistical learning.
To summarise, full Bayesian learning gives the best possible predictions but is intractable, MAP learning balances complexity with accuracy on the training data, and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, although modern optimization techniques do help (a minimal sketch of the simplest case follows below).
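The promised sketch covers the simplest case, a Gaussian model with invented data, where setting the derivatives of the log likelihood to zero has a closed-form solution.
```python
# A minimal sketch (assumed example) of maximum-likelihood learning following
# steps 1-4 above for a Gaussian model.
from math import sqrt

data = [2.1, 1.9, 2.4, 2.0, 2.6, 1.8]      # invented training data

# Step 1: model family = Gaussian with parameters (mu, sigma).
# Step 2: log likelihood L(mu, sigma) = sum over x of log N(x | mu, sigma).
# Steps 3-4: dL/dmu = 0 and dL/dsigma = 0 give the sample mean and the
# (biased) sample standard deviation as the ML estimates.
mu_ml = sum(data) / len(data)
sigma_ml = sqrt(sum((x - mu_ml) ** 2 for x in data) / len(data))

print(f"ML estimates: mu={mu_ml:.3f}, sigma={sigma_ml:.3f}")
```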
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.9.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
Summary of probabilities:
Event        Probability
A            A
not A        ¬A
A or B       A ∨ B
A and B      A ∧ B
A given B    A | B
When we consider the probability of an event in language research we are talking about events, recurring events or choices of events. In the case of strings of occurrences we have the probability of selecting the correct entity. We use the logical and operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical or operator. When we come across a situation where one entity for a particular language implies that we will always have to use specific further entities, we use the dependent forms of the and and or logical operations. The structures of linguistics imply a network form and we can use the techniques described in the part on network structures.
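A minimal sketch of these and/or combinations, using invented probabilities and assuming independence except where a conditional probability is supplied, is given below.
```python
# A minimal sketch (assumed example): combining event probabilities with the
# and/or operations listed in the summary above.
def p_and(p_a, p_b_given_a):
    """P(A and B) = P(A) * P(B | A); pass P(B) itself when the events are independent."""
    return p_a * p_b_given_a

def p_or(p_a, p_b, p_both):
    """P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_both

p_entity = 0.9            # probability of selecting the correct entity once
p_recurrence = p_and(p_entity, p_entity)          # two independent selections
p_alternative = p_or(0.6, 0.5, p_and(0.6, 0.5))   # either of two independent alternatives
print(round(p_recurrence, 3), round(p_alternative, 3))   # 0.81 0.8
```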
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
7.1.12 Methodology from Geographic Information Systems
7.1.12.1 Introduction
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. It is applied to engineering, planning, management,
transport/logistics, telecommunications, and business. It can relate objects to
locations and possibly time.
GIS uses digital information, collected by digitizing paper maps or survey plans with a CAD program and from ortho-rectified imagery. The data can be represented as discrete objects and continuous fields, as raster images and vectors. Displays can illustrate and analyse features, and enhance descriptive understanding and intelligence.
7.1.12.2 Results
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented as discrete objects and continuous fields, as raster images and vectors. Displays can illustrate and analyse features, and enhance descriptive understanding and intelligence.
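A minimal sketch of relating objects to locations and time, with an invented record layout and a simple bounding-box query over vector point data, is shown below.
```python
# A minimal sketch (assumed example): a GIS-style store relating objects to
# locations (and optionally time), with a simple bounding-box spatial query.
records = [
    {"object": "sensor-1", "lon": -0.12, "lat": 51.50, "time": "2018-05-01T10:00"},
    {"object": "sensor-2", "lon": 2.35,  "lat": 48.86, "time": "2018-05-01T10:05"},
]

def in_bbox(rec, min_lon, min_lat, max_lon, max_lat):
    """Spatial query: is the record's location inside the bounding box?"""
    return min_lon <= rec["lon"] <= max_lon and min_lat <= rec["lat"] <= max_lat

london_area = [r["object"] for r in records if in_bbox(r, -1.0, 51.0, 1.0, 52.0)]
print(london_area)   # ['sensor-1']
```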
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
7.1.13 Methodology from Communications Theory
7.1.13.1 Introduction
This section investigates results from communications theory. It first discusses the role of communications models in computing and then goes on to review the communications of people from the viewpoint of artificial intelligence.
7.1.13.2 Results
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between parts in a system so that they use the same language. Elements consist of user applications, e-mail facilities and terminals. Systems are computers, terminals or remote sensors. Key elements of a protocol are syntax (data formats, signal levels), semantics (control information, error handling) and timing (speed matching, sequencing). Protocol architecture is the task of communication broken up into modules. At each layer, protocols are used to communicate, and control information is added to user data at each layer.
Grammar specifies the compositional structure of complex messages. A formal
language is a set of strings of terminal symbols. Each string in the language can be
analysed/generated by the grammar. The grammar is a set of rewrite rules over non-terminals. Parse trees demonstrate the grammatical structure of a message, so viewing the structure is an essential step towards meaning. If we add dialogues to non-terminals to construct messages, communications can be incorporated into protocols.
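The following minimal sketch (an invented toy grammar) shows a set of rewrite rules over non-terminals being used to generate strings of terminal symbols.
```python
# A minimal sketch (assumed example): a formal grammar as a set of rewrite rules
# over non-terminals, used to generate strings of terminal symbols.
import random

grammar = {
    "MESSAGE": [["GREETING", "NAME"]],
    "GREETING": [["hello"], ["hi"]],
    "NAME": [["alice"], ["bob"]],
}

def generate(symbol):
    """Expand a non-terminal by repeatedly applying rewrite rules."""
    if symbol not in grammar:            # terminal symbol
        return [symbol]
    rule = random.choice(grammar[symbol])
    out = []
    for s in rule:
        out.extend(generate(s))
    return out

print(" ".join(generate("MESSAGE")))     # e.g. "hello bob"
```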
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
7.1.14 Methodology from Compiler Technology Theory
7.1.14.1 Introduction
This section reviews the make-up of compilers, in particular the Production Quality Compiler-Compiler Project of Carnegie Mellon University, and goes on to review how we can apply the techniques to the language studies. We first give an overall guide to general compiler workings. We then give a more detailed description of one particular implementation, viz. the Production Quality Compiler-Compiler Project of Carnegie Mellon University. Next we follow up the previous ideas with the way in which they may be applied to language studies.
7.1.14.2 Results
A compiler translates high-level language source programs to the target code for running on computer hardware. It follows a set of operations: lexical analysis, pre-processing, parsing, semantic analysis (syntax-directed translation), code generation, and optimization. A compiler-compiler is a parser generator which helps create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files: it replaces the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by defining pieces of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies syntax and semantics, and generates an intermediate representation. It generates error and warning messages. It uses the three phases of lexing, parsing, and semantic analysis. Lexing and parsing are the syntactic analysis for services and phrases and can be automatically generated from the grammar for the language. The lexical and phrase grammars help the processing of context-sensitivity, which is handled at the semantic analysis phase and can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
For language processing we start with a set of basic concepts of picture/sound. We start applying rules for basic entities and services, then combinations of services, through syntax and semantics to pragmatics. We can apply a bottom-up syntax analyser, as in a compiler, to give us rules to add to our knowledge universe. The priority of the rules gives us ways of catching idioms such as "jack in a box" = toy. We have rules to give us generalisation, e.g. animals, and specification, e.g. white tailed frog. Nouns define entities, verbs actions, and pronouns the equivalent of "this" in object oriented programming. Conjunctions give ways of replicating actions under different cases and follow distributive rules for evaluation. Other parts of speech are ways of defining properties of objects or actions. Novels provide enclosed knowledge to be deleted on exit, as in some procedures. Research journals provide enclosed knowledge that is not deleted on exit, as in procedures with their own variables. Experts extend their knowledge network in one particular way whilst slaves do it in another. We can prioritise rules and develop measures to determine the priority of meaning throughout a particular network of experience and learning.
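A minimal sketch of the rule-priority idea, using the document's own "jack in a box" example and otherwise invented rules, is given below.
```python
# A minimal sketch (assumed example): applying rewrite rules bottom-up with a
# priority order so that idioms are caught before the individual words.
rules = [
    (("jack", "in", "a", "box"), "toy"),     # idiom rule, highest priority
    (("jack",), "name"),
    (("box",), "container"),
]

def analyse(tokens):
    """Scan the token list, always trying the highest-priority rule first."""
    out, i = [], 0
    while i < len(tokens):
        for pattern, meaning in rules:       # rules are ordered by priority
            if tuple(tokens[i:i + len(pattern)]) == pattern:
                out.append(meaning)
                i += len(pattern)
                break
        else:
            out.append(tokens[i])            # no rule matched: keep the token
            i += 1
    return out

print(analyse("the jack in a box is red".split()))   # ['the', 'toy', 'is', 'red']
```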
If an element or function is not found then the error is reported as a stack dump and
after review adjust processing structure.
7.1.15 Methodology from Database Technology
7.1.15.1 Introduction
A database is an aggregation of data to support the modelling of processes using the
data. A database management system is a software application using the data in the
database for users and other applications to collect and analyse the data. It allows the
definition (create, change and remove definitions of the organization of the data using a
data definition language (conceptual definition)), querying (retrieve information usable
for the user or other applications using a query language), update (insert, modify, and
delete of actual data using data manipulation language), and administration (maintain
users, data security, performance, data integrity, concurrency and data recovery using
utilities (physical definition)) of the database. Databases and DBMSs are classified by
the database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security. The database
technology follows the conceptual model (data model structure) i.e. navigational,
relational and post-relational. Navigational data models have examples as the
hierarchical model e.g. IBM's IMS and network model e.g. CODASYL model
implemented as Univac’s DMS. A relational database (exemplified by DB2 and Oracle) is
a set of tables containing data fitted into predefined categories. Each table (relation)
contains one or more data categories in columns. Each row contains a unique instance
of data for the categories defined by the columns (each with constraints that apply to
the data value). The definition of a relational database is a table of metadata (formal
descriptions of the tables, columns, domains, and constraints). Post-relational databases (e.g. NoSQL/MongoDB and NewSQL/ScaleBase) are derived from object databases, developed to overcome the problems met in combining object programming with relational databases, and from the development of hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
So far we have classified databases by architecture. We can define them by content
e.g. bibliographic, document-text, statistical, or multimedia objects. Other categories
are:
in-memory database
event-driven architecture database
cloud database
data warehouse
deductive database
distributed database
embedded database
end-user databases
federated database system
multi-database
graph database
array database
hypertext hypermedia database
knowledge base
mobile database
operational databases
parallel database
probabilistic database
real-time database
spatial database
temporal database
terminology-oriented database
unstructured data database
Logical data models include:
Hierarchical database model
Network model
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object-relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Associative model
Multidimensional model
Array model
Multivalue model
Semantic model
XML database
7.1.15.2 Results
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of language studies.
The applications are bibliographic, document-text, statistical and multimedia objects.
The database management system must support users and other applications to
collect and analyse the data for language processes. The system allows the definition
(create, change and remove definitions of the organization of the data using a data
definition language (conceptual definition)), querying (retrieve information usable for
the user or other applications using a query language), update (insert, modify, and
delete of actual data using a data manipulation language), and administration (maintain
users, data security, performance, data integrity, concurrency and data recovery using
utilities (physical definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases, developed to overcome the problems met in combining object programming with relational databases, and from hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
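A minimal sketch of a toy document-oriented store offering the definition, query, update and delete operations expected above is given below; the collection and field names are invented.
```python
# A minimal sketch (assumed example): a toy document-oriented key-value store
# supporting define / insert / query / update / delete operations.
store = {}          # collection name -> {key: document}

def define(collection):                      # data definition
    store.setdefault(collection, {})

def insert(collection, key, document):       # data manipulation: insert
    store[collection][key] = dict(document)

def query(collection, **criteria):           # query language stand-in
    return [doc for doc in store[collection].values()
            if all(doc.get(k) == v for k, v in criteria.items())]

def update(collection, key, **changes):      # data manipulation: modify
    store[collection][key].update(changes)

def delete(collection, key):                 # data manipulation: delete
    store[collection].pop(key, None)

define("texts")
insert("texts", "doc1", {"language": "en", "kind": "bibliographic"})
insert("texts", "doc2", {"language": "fr", "kind": "multimedia"})
update("texts", "doc2", kind="document-text")
print(query("texts", language="fr"))   # [{'language': 'fr', 'kind': 'document-text'}]
```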
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
An object-relational database combines the two related structures.
Other models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
7.1.16 Curve Fitting
Curve fitting constructs a curve, or mathematical function, that best fits a series of given data points, subject to constraints. It uses two main methods, namely interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of a function where data values are missing. They also summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, and carries uncertainty depending on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve, and even then there is no guarantee that all constraints are met or that the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations from the given data points.
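A minimal sketch of a least squares line fit, with invented data, smoothing and extrapolating as described above, follows.
```python
# A minimal sketch (assumed example): fitting a straight line to noisy points by
# the least squares method and using it to smooth and extrapolate.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])          # roughly y = 2x + 1 with noise

slope, intercept = np.polyfit(x, y, deg=1)        # least squares line fit
residuals = y - (slope * x + intercept)
print(f"fit: y = {slope:.2f} x + {intercept:.2f}")
print("sum of squared deviations:", round(float(residuals @ residuals), 3))
print("extrapolated value at x = 6:", round(slope * 6 + intercept, 2))
```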
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.17 Configuration Management
7.1.17.1 Introduction
Configuration management gives an organised method for setting up and ensuring
consistency of a product throughout its life cycle from requirements via acceptance to
operations and maintenance through visibility and control of the reference attributes. It
gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation. It sets a technical and
administrative direction to policies, procedures, techniques, and tools for managing,
evaluating proposed changes, tracking status of changes, and maintaining the
inventory of system and support documents as the system changes. It emphasises
personnel, responsibilities and resources, training requirements, administrative
meeting guidelines, definition of procedures and tools, base-lining processes,
configuration control, configuration status accounting, naming conventions, audits and
reviews, and subcontractor/vendor requirements.
Configuration management is broken down into 4 aspects, viz. configuration
identification, configuration control, configuration status accounting and configuration
audits. The configuration identification defines all attributes of the item with an end-
user purpose documented and base-lined for later tracking of changes. Configuration
change control are processes and approval stages for changing an item's attributes
and making a new baseline. Configuration status accounting records and reports on the
baselines for each item at any time. Configuration audits occur at delivery or
completing the change to check that all requirements have been achieved.
7.1.17.2 Results
Configuration management requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
7.1.18 Continuous Integration
7.1.18.1 Introduction
Continuous integration (compared with frequent integration) merges all developer
working copies to a shared mainline several times a day to prevent integration
problems. It uses automated unit tests for test-driven development. It verifies all unit
tests in the developer's local environment before committing to the mainline to avoid
corrupting other developers' work. The concept evolved into build servers automatically running unit tests periodically or after every commit and reporting the results to developers. It extended this to applying quality control in general (small efforts, applied frequently) by executing the unit and integration tests alongside static and dynamic tests, measuring and profiling performance, extracting and formatting documentation from the source code and helping other QA processes, to improve quality and reduce delivery time.
The process needs a version control system used by all to make a buildable system on
each fresh checkout without additional dependencies. The tools should support
branching but this should be minimal and built into the main production line. The
control system should support atomic commits, i.e. all of a developer's changes may be
seen as a single commit operation. A build process, triggered on every commit to the repository or periodically, should be fast and automated, and should include compiling binaries, generating documentation, website pages, statistics and distribution media, integration, and deployment into a production-like environment, so that it is self-testing, i.e. the code is built and all tests run to confirm that it behaves as it should.
Everyone commits to the baseline every day to reduce the conflicting changes that can
be resolved quickly and every completed work unit commit (to baseline) should be built
again to reduce and resolve conflicts.
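A minimal sketch of such a commit-triggered, self-testing build loop is given below; the repository path and the git and pytest commands are assumptions standing in for whatever version control and test tools are actually used.
```python
# A minimal sketch (assumed example): the core loop of a build server that, on
# every new commit, checks out the code, runs the build and the automated test
# suite, and reports the result back to the developers.
import subprocess, time

def run(cmd, cwd):
    """Run a shell command; return True if it succeeded."""
    return subprocess.run(cmd, shell=True, cwd=cwd).returncode == 0

def current_commit(workdir):
    out = subprocess.run("git rev-parse HEAD", shell=True, cwd=workdir,
                         capture_output=True, text=True)
    return out.stdout.strip()

def build_loop(workdir, poll_seconds=60):
    last = None
    while True:
        run("git pull", workdir)                      # fetch the shared mainline
        commit = current_commit(workdir)
        if commit != last:                            # a new commit triggers a build
            ok = run("python -m pytest", workdir)     # self-testing build step
            print(f"{commit[:8]}: {'PASS' if ok else 'FAIL, notify developers'}")
            last = commit
        time.sleep(poll_seconds)

# build_loop("/path/to/checkout")   # run against a real working copy
```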
The test system is a clone of the production environment to reduce conflicts between
any test environment and deployment into production. The pre-production test
environment should be a scalable version of the actual production environment to help
minimise costs, and it verifies the production system by using service virtualisation for
dependencies. If the latest deliverables are easy for stakeholders and testers to use,
rework of new features is reduced and defects show up early.
Automation can be extended to deployment to the pre-production system and
eventually to the production environment once defect and regression tests have passed.
Continuous integration saves cost and time by:
• detecting integration bugs early with small change sets
• avoiding mass integrations near release dates
• reverting small changes when unit tests fail or bugs appear
• constant availability of a "current" build for testing, demo, or release purposes
• modular, less complex code
Automated testing gives:
• frequent testing
• fast feedback on impact of local changes
• collecting metrics on code coverage, code complexity, and features complete
• concentration on functional, quality code, and team momentum
7.1.18.2 Results
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository, then performs a build and a set of automated tests to
ensure that the local environment is valid for update. He performs his update work and
rebuilds the system using the build server, compiling binaries, generating documentation,
website pages, statistics and distribution media, and integrating and deploying into a
scalable clone of the production environment through service virtualisation for
dependencies. The system is then ready to run a set of automated tests consisting of all
unit and integration (defect or regression) tests together with static and dynamic tests
and performance measurement and profiling, to confirm that it behaves as it should. He
resubmits the updates to the repository, which triggers another build process and tests.
The new updates are committed to the repository when all the tests have been verified;
otherwise they are rolled back. At that stage the new system is available to stakeholders
and testers. The build process is repeated periodically with the tests to ensure that there
is no corruption of the system.
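As an illustration only, the commit-triggered build-and-test cycle above can be sketched as follows (Python, with hypothetical build and test commands; a real installation would substitute its own build server, test runner and version control operations):

import subprocess

# Hypothetical build and test commands (placeholders, not from this document).
BUILD_CMD = ["make", "build"]
TEST_CMDS = [["make", "unit-test"], ["make", "integration-test"]]

def run(cmd) -> bool:
    # Run one stage and report whether it succeeded.
    return subprocess.run(cmd).returncode == 0

def integrate(commit_changes, rollback_changes) -> bool:
    # Triggered on every commit or periodically: build the system, run all
    # automated tests, then commit on success or roll back on failure.
    if not run(BUILD_CMD):
        rollback_changes()
        return False
    for test_cmd in TEST_CMDS:
        if not run(test_cmd):
            rollback_changes()
            return False
    commit_changes()
    return True

A build server would call integrate() on every commit or on a timer, passing in the repository's own commit and rollback operations, so that only verified updates reach the baseline.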
The advantages are derived from frequent testing and fast feedback on the impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity and feature completeness, concentrating on functional, quality code and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
7.1.19 Continuous Delivery
7.1.19.1 Introduction
In continuous delivery, teams produce software in short cycles so that the system can be
released at any time. It performs the build, test and release phases faster and more
frequently, reducing the cost, time and risk of delivering changes as small incremental
updates. A simple and repeatable deployment process is important for continuous
delivery.
It uses a deployment pipeline to give visibility, feedback and continual deployment.
Visibility analyses the activities, viz. build, deploy, test and release, and reports the
status to the development team. Feedback informs the team of problems so that
they can soon be resolved. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
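As an illustration only, the deployment pipeline can be sketched as follows (Python, with placeholder stage functions standing in for the real build, deploy, test and release activities):

from typing import Callable, List, Tuple

def build() -> bool:
    # Placeholder: compile binaries, generate documentation and distribution media.
    return True

def deploy() -> bool:
    # Placeholder: deploy into a production-like environment.
    return True

def test() -> bool:
    # Placeholder: run the automated unit and integration tests.
    return True

def release() -> bool:
    # Placeholder: release the verified version.
    return True

PIPELINE: List[Tuple[str, Callable[[], bool]]] = [
    ("build", build), ("deploy", deploy), ("test", test), ("release", release),
]

def run_pipeline() -> bool:
    # Visibility: each stage reports its status to the development team.
    # Feedback: the pipeline stops at the first failing stage so that the
    # problem can be resolved quickly before anything is released.
    for name, stage in PIPELINE:
        ok = stage()
        print(f"{name}: {'ok' if ok else 'failed'}")
        if not ok:
            return False
    return True

run_pipeline()

The printed status gives the visibility described above, and stopping at the first failing stage gives the feedback that lets the team resolve problems before any version is released.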
7.1.19.2 Results
In continuous delivery, teams produce software in short cycles so that the system can be
released at any time, with the build, test and release phases performed faster and more
frequently to reduce the cost, time and risk of delivering changes as small incremental
updates. The deployment pipeline gives visibility of the build, deploy, test and release
activities to the development team, feedback on problems so that they can soon be
resolved, and automated deployment and release of any version of the system to any
environment. Automation covers source control through to production and includes
continuous integration, application release automation, build automation and application
life cycle management. It improves time to market, productivity and efficiency, product
quality, customer satisfaction, reliability of releases and consistency of the system with
requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
7.1.20 Virtual Reality
7.1.20.1 Introduction
Virtual reality is an immersive multimedia or computer-simulated reality which
replicates an environment and simulates the user's presence and interaction through
sight, touch, hearing and smell. Displays use a computer screen or a
special headset and some simulations include sound. Some systems use sight tracking
or tactile information, eg force feedback in medical, gaming and military applications.
Other aspects use remote communication to give virtual presence for
telepresence and telexistence or a virtual artifact through standard input devices or
multimodal devices eg a wired glove, data suit or treadmill. An immersive environment
presents a real world for a life-like experience eg flight simulation or video games.
Virtual reality is used in education, training eg combat situations in multiple
environments, entertainment and exposure therapy with phobia treatment. It is used
with artificial intelligence to react in different ways or assess the results in the
environment. It allows reset times to be truncated and processes to be repeated more
often in a shorter amount of time. There is the opportunity to reduce the transfer time
between the simulated and real environments, with corresponding safety, economy and
possible absence of pollution. It is applied to geographic and geometric situations for
the demonstration of architectural, urban planning, civil engineering and surveying work.
7.1.20.2 Results
Virtual reality simulates the user's presence in an environment and interaction through
sight, touch, hearing and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technologies such as remote communication,
artificial intelligence and spatial data assist the simulation.
If any error is found then an error report is generated and displayed as a device stack
and position, evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
7.1.21 Summary
This section has reviewed how some other technologies can contribute to language
processing. It consisted of 19 further sub-sections reflecting the 19 theories that are
helpful. They are search theory, network theory, Markov theory, algebraic theory, logic
theory, programming language theory, quantitative theory, learning theory, statistics
theory, probability theory, communications theory, compiler technology theory,
database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise now.
We have studied a theory for systems based on the operations research technique
known as the theory of search. We found that there were restrictions between the user,
the system and its documentation which resulted in a measurable set of requirements
and a method of assessing how well the system, the system user and the
documentation meet the requirements.
A communications model extended the definition of the system requirements for
source, transmission, reception and destination. It defined levels of support,
coordination and corrective recovery of errors. It developed descriptions of the
problems associated with networks due to noise.
There were six validation cases discussed for network analysis. They were:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examined the algorithms based on affinity, connectivity and flow matrices. Using
Markov processes, we determined the flow through nodes and edges and partitioning of
the systems. By including an error sink we resolve probabilities of error for each of the
node states. We used this network analogy with probability to provide overall
probabilities for elements where choices are splits at nodes and combination is
expressed as loops or sub-nodes. We showed how statistics from our experience with the
language processes can be used to develop a metric network model.
Using the algebraic, logical and programming language theory we found that the network
is reflected in valid combinations of basic elements through combination and valuation,
reflecting syntactic rules and dictionaries to generate objects and actions, and
generalisation and specification with properties. Other parts of the system are ways of
defining properties of objects or operations, whilst some apply to the scope of meaning
and the scope of an object, operation or value. We found how object oriented
programming gave us the concept of scope for objects, properties, methods with
arguments and the "this" operator, together with the concepts of synonyms,
generalisation and specification. Overloading of definitions allows meaning to change
according to context. The system has ways of expressing polymorphism and of
replicating actions (iterations) under different cases, and follows distributive rules for
evaluation. Some rules are analogous to procedures that perform little action and return
no results on exit; others are likened to procedures that return results on exit. Systems
specialise the different formats, purposes and persistence, and knowledge is extended
according to the speciality of the environment.
We considered the quantitative aspects of systems. It reflected on software physics
developed by Halstead. It went on to review extensions of predicted value based on
work of the mean path theorem. Then we considered some interesting results for
measures of complexity and developed formulae for the best strategy of search. Using
this formula we showed that we need to search an area the number of times that we
expect to find separate targets and that the value of a measure for the best strategy is
unity when we need to find 6.4 targets.
We introduced the effects from learning theory. It studied 3 methods that are used for
evaluating a learning hypothesis. It showed how learning can vary by considering the
effects of repetitive reinforcement, then the effects of standardisation and then the
effects of tiredness. The story continued by watching the way children are taught from
early years through pictures, sound and written form. We studied the effects of making
errors and how we overcome them. We saw how rules are developed for children to
determine easier ways of remembering the system and facts they need to know. We
described techniques to determine heuristics for remembering and learning by induction
and deduction.
Compiler technology led to a processor which follows the network representation of the
language, the macro pre-processor concepts using inclusion of files, macro definitions
and expansions, conditional compilation and line control, context sensitive translation
and the compiler-compiler intermediate language.
Database technology provided the storage basis for the processing. It used storage for
system data, control data, document-text, statistical and multimedia objects and user
data. It supported the irregular update of the system definition and the regular update,
insertion and deletion of data in the knowledge base. It used a post-relational database
model of hybrid object-relational databases based on XML. Other requirements were
support of event-driven architecture, deductive database, graph database, hypertext
hypermedia database, knowledge base, probabilistic database, real-time database and
temporal database.
Configuration management identifies item attributes for control, records and reports
on the baselines, and audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds all of the generated system and documentation and runs
automated unit and integration (defect or regression) tests together with static and
dynamic tests and performance measurement and profiling to ensure that the
environment is valid. The trigger points are before and after an update and at release to
the production system, when triggers force commits to the repository or a rollback to
avoid corruption of the system. Reports are collected on metrics about code coverage,
code complexity and feature completeness, concentrating on functional, quality code and
team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production, so that teams
produce software in short cycles and the system can be released at any time. The
deployment pipeline gives visibility of the build, deploy, test and release activities, fast
feedback on problems and automated deployment and release of any version of the
system to any environment. It includes continuous integration, application release
automation, build automation and application life cycle management, and improves time
to market, productivity and efficiency, product quality, customer satisfaction, reliability
of releases and consistency of the system with requirements.
Virtual reality simulates the user's presence in an environment and interaction through
sight, touch, hearing and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technologies such as remote communication,
artificial intelligence and spatial data assist the simulation.
If any error is found then an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
7.2 References
1. A.Payne, Paper, On Communications in Organisations. Computer Personnel
Review 1981
2. A.Payne, Paper, A Basis for Software Physics. Sigplan, 1981
3. A.Payne, Paper. Health Registration from Disease Registers. American
Association for Medical Systems and Informatics Conference, 1982
4. A.Payne, Paper. A Measure of Complexity. Sigart Newsletter, 1982
5. A.Payne, Paper. A Basis for the Rate of Change in Programs. Sigplan, 1982
6. A.Payne, Paper. Disease Registers Recycled for Computer Aided Learning. IEEE
General Systems Conference, 1982
7. A.Payne, Paper. Disease Registers and their Applications. IEEE MEDCOMP
Conference, Boston, USA, 1982
8. A.Payne, Paper. Disease Registers and Health Education. AAMSI Conference,
Boston, USA, 1982
9. A.Payne, Paper. A Markov Graph Model for the Planning of University Resources.
AMSE 83 Symposium, Bermuda
10. A.Payne, Paper. Microcomputers in Medical Records. Medinfo 80, Japan.
11. A.Payne, Paper. A Network Structure of Errors in Organisation Information Flow.
5th European Meeting on Cybernetics and Systems Research, 1980, Austria.
12. A.Payne, Seminar. Theory of Search Applied to System Development. Liverpool
University, Loughborough University, Imperial College (London), Manchester University,
Karlsruhe University, 1979.
13. A.Payne, Seminar. Medical Records and Cancer Treatment. Royal Berkshire
Hospital, UK, 1979.
14. A.Payne, Seminar. Theory of Search and System Development,. Hamilton, NZ,
1980.
15. A.Payne, Seminar. Medical Records and Microprocessor. Hamilton, 1980.
16. A.Payne, Paper. Micro Computer and Disease Registers. 20th Conference on
Physical Sciences and Engineering in Medicine and Biology, Christchurch, NZ, 1980.
17. A.Payne, Paper. A Network Structure of Errors in Databases. Congress on
Applied Systems Research and Cybernetics, Mexico, 1980.
18. A.Payne, Paper. The Theory of Search Applied to Programming Language
Studies. 3rd Hungarian Computer Science Conference, 1981
19. A.Payne, Paper. Innovations on Programming Language Studies. International
Computer Symposium, 1980, Taiwan.
20. A.Payne, Paper. Towards a Standard Simulation Language. IMACS TC3, 1981.
21. A.Payne, Paper. A System for Learning. Sigart Newsletter, 1978
22. A.Payne, Paper. Use of Standards in a Production Environment. Sigda
Newsletter, 1978
23. A.Payne, Paper. The Effects of Tiredness in a Production Environment.
Installation Management Review, 1978.
24. A.Payne, Paper. The O and M of System Development. New Zealand Computer
Conference, 1978.
25. A.Payne, Paper. The Application of Search Theory to Simulation Models. New
Zealand OR Conference, 1978.
26. A.Payne, Paper. The Application of Search Theory to Simulation Languages.
Symposium on Modelling and Simulation Methodology. Israel, 1978.
27. A.Payne, Paper. Properties Defining Usable Standard Languages. Sorrento
Workshop for International Standardisation of Simulation Languages, 1979.
28. A.Payne, Paper. A Review of Compiling Methods. LRRC. Maps Project Technical
Paper 10
29. A.Payne, Paper. Theoretical Consideration of Compiler Writing. LRRC. Maps
Project Technical Paper 14
30. A.Payne, Paper. Theoretical Consideration for Optimising Computer Programs.
LRRC. Maps Project Technical Paper 15.
31. A.Payne, Paper. An Approach to Compiler Construction. LRRC. Maps Project
Technical Paper 18.
32. A.Payne, Paper. A Language for Describing the Compilation Process. LRRC. Maps
Project Technical Paper 20
33. A.Payne, Paper. An Implementation of Program Optimisation. LRRC. Maps Project
Technical Paper 22.
34. Houlden, B.T., Some Techniques of Operational Research, WUP, 1962
35. Landau, L.D., and Lifshitz, E.M., Theory of Elasticity, Pergamon, London, 1959
36. Halstead, M.H. Elements of Software Science, Elsevier North Holland, New York,
1977
37. Mohanty, S.N., Models and Measures for Quality Assurance of Software, ACM
Computing Surveys, Vol. 11, No. 3, Sept. 1979.
38. L.R. Ford and D.R. Fulkerson, Flows in Networks, Princeton University Press,
Princeton, New Jersey, 1962.
39. L.J. Peters and L.L. Tripp, A Model of Software Engineering, ICSE '78: Proceedings
of the 3rd International Conference on Software Engineering, IEEE Press, Piscataway,
NJ, USA, 1978
40. D Teichroew, Problem Statement Analysis: Requirements For The Problem
Statement Analyzer (PSA) - System Analysis Techniques, 1974 - John Wiley & Sons
41. Halstead, M.H. Elements of Software Science, Elsevier, New York, 1978
42. Payne, A.J., The Theory of Search Applied to Systems Development, 8th
Australian Computer Conference, 1978
43. Payne, A.J., A Basis for Software Physics, Sigplan 1981
44. Leverett; Cattell; Hobbs; Newcomer; Reiner; Schatz; Wulf, An Overview of the
Production Quality Compiler-Compiler Project, in Computer 13(8):38-49 (August 1980)
45. Scott, Michael Lee, Programming Language Pragmatics, Morgan Kaufmann,
2005, 2nd edition, 912 pages. ISBN 0-12-633951-1.
46. A.J. Payne, Use of Standards in a Production Environment, SIGDA Newsletter,
Volume 8, Issue 1, March 1978
47. A.J. Payne, A Basis for Complexity Measurement?, SIGART Bulletin, Issue 82,
October 1982
48. A.Payne, Paper. Consistency in Systems. New Zealand Mathematics Colloquium,
May 1978
49. A.Payne, Paper. Completeness in Systems, New Zealand Mathematics
Colloquium, May 1980
50. A.Payne, Paper. On Network Structures. New Zealand Mathematics Colloquium,
May 1981
51. A.Payne, Paper. On Language Definitions. New Zealand Mathematics Colloquium,
May 1982
52. A. Payne, Chill Compiler, ITT Systems Journal, 1983
53. A. Payne, Chill Systems, ITTE Systems Journal, 1983
54. A Payne, ITT Optimising Compiler, CCITT Chill Conference, Cambridge, 1984
55. A.Payne, Paper. Consistency in Systems. CCITT Z200 Working Group, 1986
56. A.Payne, Paper. Completeness in Systems, CCITT Z200 Working Group, 1986
57. Gosling, James; Joy, Bill; Steele, Guy; Bracha, Gilad; Buckley, Alex (2014). The
Java® Language Specification (PDF) (Java SE 8 ed.).2015, Oracle America, Inc. and/or
its affiliates. 500 Oracle Parkway, Redwood City, California 94065, U.S.A.
58. Hornbeck, Robert W., Numerical Methods, Prentice Hall, 1982, ISBN-10:
0136266142, ISBN-13: 9780136266143
59. Rao, Singiresu S., Applied Numerical Methods for Engineers and Scientists,
Pearson, 2002, ISBN-10: 013089480X, ISBN-13: 9780130894809
8 Applications
8.1 Introduction
This section reviews how some other technologies can contribute to IoT security and
its processes. It consists of 21 further sub-sections. It considers how the
methodologies from the 19 theories of section 3 are helpful. They are search theory,
network theory, Markov theory, algebraic theory, logic theory, programming language
theory, quantitative theory, learning theory, statistics theory, probability theory,
communications theory, compiler technology theory, database technology, curve
fitting, configuration management, continuous integration/delivery and virtual reality.
The last part presents the summary of the section.
We study a theory for systems based on the operations research technique known as
the theory of search. We find that there are restrictions between the user, the system
and its documentation which result in a measurable set of requirements and a method
of assessing how well the system, the system user and the documentation meet
the requirements.
A communications model extends the definition of the system requirements for source,
transmission, reception and destination. It defines levels of support, coordination and
corrective recovery of errors. It develops descriptions of the problems associated with
communication due to noise, differing understanding of current context and ambiguity,
indexicality, vagueness, dialogue structure, metaphor and non-compositionality.
There are six validation cases discussed for network analysis. They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms based on affinity, connectivity and flow matrices. Using
Markov processes, we determine the flow through nodes and edges and partitioning of
the systems. By including an error sink we resolve probabilities of error for each of the
node states. We use this network analogy with probability to provide overall probabilities
for elements where choices are splits at nodes and combination is expressed as loops or
sub-nodes. We show how statistics from our experience with the language processes can
be used to develop a metric network model.
Using the algebraic, logical and programming language theory we find that the network
is reflected in valid combinations of basic elements through combination and valuation,
reflecting standards and techniques to generate objects and actions, and generalisation
and specification with properties. Others give ways of defining properties of objects or
operations whilst some apply to the scope of meaning and the scope of an object,
operation or value. We find how object oriented programming gives us the concept of
scope for meaning, nouns being objects, adjectives as properties, verbs as methods with
arguments of nouns and adverbs (properties of verbs), pronouns as the equivalent of the
"this" operator, and the concepts of synonyms, generalisation and specification.
Overloading of definitions allows meaning to change according to context.
Conjunctions give ways of replicating actions (iterations) under different cases and
follow distributive rules for evaluation. Other parts of speech are ways of defining
properties of objects or actions, with polymorphism for nouns and verbs. Novels are
analogous to procedures that perform little action and return no results on exit; research
journals provide articles which are likened to procedures with results on exit. People
specialise in the different packages and libraries that they use. Experts extend their
knowledge network in one particular way whilst slaves do it in another.
We consider the quantitative aspects of languages. It reflects on software physics
developed by Halstead. It goes on to review extensions of predicted value based on
work of the mean path theorem. Then we consider some interesting results for
measures of complexity and develop formulae for the best strategy of search. Using
this formula we show that we need to search an area the number of times that we
expect to find separate targets and that the value of a measure for the best strategy is
unity when we need to find 6.4 targets.
We introduce the effects from learning theory. It studies 3 methods that are used for
evaluating a learning hypothesis. It shows how learning can vary by considering the
effects of repetitive reinforcement, then the effects of standardisation and then the
effects of tiredness. The story continues by watching the way children are taught from
early years through pictures, sound and written form. We study the effects of making
errors and how we overcome them. We see how rules are developed for children to
determine easier ways of remembering the language and facts they need to know. We
describe techniques to determine heuristics for remembering and learning by induction
and deduction.
The compiler technology leads to a processor which follows the network representation
of the language, the macro pre-processor concepts using inclusion
of files, macro definitions and expansions, conditional compilation and line control,
context sensitive translation and the compiler-compiler intermediate language.
Database technology provides the storage basis for the language processing. It uses
storage for bibliographic, document-text, statistical and multimedia objects. It supports
the irregular update of the language definition and the regular update, insertion and
deletion of data in the knowledge base. It uses a post-relational database model of hybrid
object-relational databases based on XML. Other requirements are support of event-driven
architecture, deductive database, graph database, hypertext hypermedia database,
knowledge base, probabilistic database, real-time database and temporal database.
8.2 Search Theory
8.2.1 Introduction
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
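As an illustration only, the look-up, report and review-and-add cycle can be sketched as follows (Python, with hypothetical names):

import logging

class SearchableSystem:
    # A minimal sketch of the look-up / report / review-and-add cycle.
    def __init__(self):
        self.targets = {}       # known targets, keyed by name
        self.review_queue = []  # missed targets awaiting review

    def find(self, name):
        if name in self.targets:
            return self.targets[name]
        # No target found: report the error and queue the name for review.
        logging.error("target not found: %s", name)
        self.review_queue.append(name)
        return None

    def review_and_add(self, name, value):
        # After review, the missing target is added to the system.
        if name in self.review_queue:
            self.review_queue.remove(name)
        self.targets[name] = value

system = SearchableSystem()
system.find("temperature-limit")               # reported and queued for review
system.review_and_add("temperature-limit", 30)
assert system.find("temperature-limit") == 30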
8.2.2 Entities
The entity system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the entity system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
If no entity is found then the error is reported and after review the entity is added to the
system.
8.2.3 Services
The service system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the system should be modifiable to the experience of the
users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. It should be
standardised and have minimum number of pages and facts. Services should be small,
logically placed and have minimum number of reference strategies.
If no service is found then the error is reported and after review the service is added to
the system.
8.2.4 Techniques
The techniques system should be standardised, simple, specialised, logically
organised, concise, have minimum ambiguity, have minimum error cases and have
partitioning facilities. The facilities for systems should be modifiable to the experience
of the users.
All reference documentation should have stiff spines, and small thin stiff light pages
with simple content which is adjustable to the experience of the user. It should be
standardised and have minimum number of pages and facts. Facts should be small,
logically placed and have minimum number of reference strategies.
If no technique is found then the error is reported and after review the technique is
added to the system.
8.2.5 Communications
The transmission part of the dialogue system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users. The reception part of the dialogue system should be
standardised, simple, specialised, logically organised, concise, have minimum
ambiguity, have minimum error cases and have partitioning facilities. The facilities for
systems should be modifiable to the experience of the users. The interaction of the
dialogue system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for systems should be modifiable to the experience of the users.
Reference documentation for the transmission, reception and interaction parts should
have stiff spines, and small thin stiff light pages with simple content which is
adjustable to the experience of the user. The documentation should be standardised
and have minimum number of pages and facts. Facts should be small, logically placed
and have minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
system.
8.2.6 Antivirus
The antivirus system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the antivirus system should be modifiable to the experience
of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
antivirus system.
8.2.7 Firewall
The firewall system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the firewall system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
firewall system.
8.2.8 APIDS
The APIDS system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the APIDS system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
APIDS system.
8.2.9 Cipher
The cipher system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the cipher system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically placed and have minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
cipher system.
8.3 Algebraic Theory
8.3.1 Introduction
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
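As an illustration only, the following Python sketch (with hypothetical element kinds and a single invented valuation rule) shows basic elements being combined into a larger element and validated:

from dataclasses import dataclass
from typing import List

@dataclass
class Element:
    kind: str   # "entity", "service", "standard", "technique" or "communication"
    name: str

@dataclass
class Combination:
    # A larger element built from basic elements by a rule of combination.
    parts: List[Element]

def is_valid(combination: Combination) -> bool:
    # Valuation rule (a stand-in for a standard or technique): this sketch
    # accepts a combination only if it pairs an entity with a service.
    kinds = {part.kind for part in combination.parts}
    return {"entity", "service"} <= kinds

subsystem = Combination([Element("entity", "sensor"), Element("service", "report")])
assert is_valid(subsystem)

Larger systems would be built by iterating the combination, validating each new element against further standards and techniques.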
8.3.2 Entities
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.3 Services
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (entities) of combination of
services to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the services through entities.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.4 Standards
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (techniques) of combination
of standards to form larger elements that we classify as systems or subsystems for
which we have rules (techniques) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The standards are classified into parts of system with techniques.
Techniques give meaning to standards and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (techniques) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.5 Techniques
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (standards) of combination
of techniques to form larger elements that we classify as systems or subsystems for
which we have rules (standards) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards.
We use valuation rules to classify entities, services, standards, techniques and
communications. The techniques are classified into parts of system with standards.
Standards give meaning to techniques and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (standards) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.6 Communication
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a dialogue system based on the results of previous
subsections with basic elements derived from entities for the data/information for
sources, destinations and transmission systems. The basic elements are derived from
entities, services, standards, techniques and communications. We restrict these basic
elements by specifying what is allowed. We apply rules of combination to the elements
to form larger elements that we classify as systems or subsystems for which we have
rules to say what is correct and what is erroneous. We iterate on the combination for
more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.7 Antivirus
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Antivirus adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.8 Firewall
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Firewall adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.9 APIDS
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
APIDS adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.3.10 Ciphers
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Ciphers add extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
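As an illustration only, the extra restriction that a cipher places on a communication can be sketched as follows (Python, assuming the third-party cryptography package; the key handling is deliberately simplified and hypothetical):

# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

def make_channel():
    # The cipher is the extra restriction on the communication: only data
    # passed through encrypt and decrypt with the shared key is accepted.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    def send(plaintext: bytes) -> bytes:
        return cipher.encrypt(plaintext)   # applied before transmission

    def receive(ciphertext: bytes) -> bytes:
        return cipher.decrypt(ciphertext)  # applied at the destination

    return send, receive

send, receive = make_channel()
assert receive(send(b"reading: 21.5C")) == b"reading: 21.5C"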
8.4 Logic Theory
8.4.1 Introduction
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
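As an illustration only, the following Python sketch (with invented facts and rules) shows rules acting as standards and techniques to decide whether a combination of elements is correct:

from typing import Callable, Dict, List

# Invented facts about elements of the system, keyed by element name.
facts: Dict[str, Dict[str, str]] = {
    "thermostat": {"kind": "entity"},
    "report-temp": {"kind": "service", "uses": "thermostat"},
}

Rule = Callable[[List[str]], bool]

def every_part_known(parts: List[str]) -> bool:
    # A combination is erroneous if it names an unknown element.
    return all(p in facts for p in parts)

def services_use_known_entities(parts: List[str]) -> bool:
    # A service is correct only if the entity it uses is also known.
    return all(facts[p].get("uses", p) in facts
               for p in parts if facts[p]["kind"] == "service")

RULES: List[Rule] = [every_part_known, services_use_known_entities]

def validate(parts: List[str]) -> bool:
    # The combination is correct only when every rule (standard) holds.
    return all(rule(parts) for rule in RULES)

assert validate(["thermostat", "report-temp"])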
8.4.2 Entities
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.4.3 Services
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (entities) of combination of
services to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the services through entities.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.4.4 Standards
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (techniques) of combination
of standards to form larger elements that we classify as systems or subsystems for
which we have rules (techniques) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The standards are classified into parts of system with techniques.
Techniques give meaning to standards and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (techniques) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.4.5 Techniques
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (standards) of combination
of techniques to form larger elements that we classify as systems or subsystems for
which we have rules (standards) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards.
We use valuation rules to classify entities, services, standards, techniques and
communications. The techniques are classified into parts of system with standards.
Standards give meaning to techniques and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (standards) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.4.6 Communication
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a dialogue system based on the results of previous
subsections with basic elements derived from entities for the data/information for
sources, destinations and transmission systems. The basic elements are derived from
entities, services, standards, techniques and communications. We restrict these basic
elements by specifying what is allowed. We apply rules of combination to the elements
to form larger elements that we classify as systems or subsystems for which we have
rules to say what is correct and what is erroneous. We iterate on the combination for
more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.4.7 Antivirus
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Antivirus adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
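One way to picture such a restriction is signature matching; the sketch below, with illustrative signatures and payloads only, rejects any content carried by an entity or communication that contains a known signature.

    # Minimal sketch of antivirus as an extra restriction on content.
    # The signatures and payloads are illustrative only.
    KNOWN_SIGNATURES = [b"EICAR-TEST", b"evil-macro"]

    def passes_antivirus(payload):
        """Return True when no known signature occurs in the payload."""
        return not any(signature in payload for signature in KNOWN_SIGNATURES)

    for payload in [b"quarterly report", b"invoice EICAR-TEST attachment"]:
        verdict = "allowed" if passes_antivirus(payload) else "blocked"
        print(payload, "->", verdict)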
8.4.8 Firewall
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Firewall adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
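A minimal sketch of such a restriction on communications, assuming a first-match rule list with a default-deny policy; the rules and addresses are illustrative.

    # Minimal sketch of a firewall as an extra restriction on communications:
    # a connection is allowed only if a rule permits it. Rules are illustrative.
    import ipaddress

    RULES = [
        {"src": "10.0.0.0/8", "dst_port": 443, "action": "allow"},
        {"src": "any",        "dst_port": 22,  "action": "deny"},
    ]

    def decide(src_ip, dst_port, default="deny"):
        """Return the action of the first matching rule, else the default policy."""
        for rule in RULES:
            src_ok = rule["src"] == "any" or \
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
            if src_ok and rule["dst_port"] == dst_port:
                return rule["action"]
        return default

    print(decide("10.1.2.3", 443))   # allow
    print(decide("192.0.2.9", 22))   # deny
    print(decide("192.0.2.9", 80))   # deny (default policy)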
8.4.9 APIDS
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
APIDS adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
8.4.10 Ciphers
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Ciphers add extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
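As a sketch of the restriction, data handed to a communication must first pass through a cipher; the XOR transform below is a toy used only to keep the example self-contained and is not a real cipher.

    # Minimal sketch of ciphers as an extra restriction: data leaving an entity is
    # enciphered before it enters the communication channel. The XOR transform is
    # a toy, not a real cipher.
    def toy_xor_cipher(data, key):
        """Toy symmetric transform (encipher and decipher are the same operation)."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def send(data, key):
        """Apply the cipher restriction before the data enters the channel."""
        return toy_xor_cipher(data, key)

    key = b"k3y"
    ciphertext = send(b"configuration backup", key)
    print(ciphertext)                       # unreadable on the wire
    print(toy_xor_cipher(ciphertext, key))  # original restored at the destination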
8.5 Network Theory
8.5.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.1.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.1.2 Consistency
A unit is accessed from two different units. What interpretation will be placed on the meaning by the recipient unit? The consistency condition in section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.1.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
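A minimal sketch of the completeness and consistency checks above, assuming the system is held as a directed graph of units (nodes) and connections (edges); the node names and checks are illustrative rather than a fixed algorithm.

    # Minimal sketch of two of the validation cases on a directed graph.
    # Nodes and edges are illustrative.
    nodes = {"A", "B", "C", "D"}
    edges = [("A", "B"), ("C", "B")]   # D is defined but never connected

    def completeness(nodes, edges):
        """Completeness check: report nodes that are defined but unused."""
        used = {n for edge in edges for n in edge}
        return nodes - used

    def consistency(nodes, edges):
        """Consistency check: report nodes reached from two or more different sources."""
        sources = {}
        for src, dst in edges:
            sources.setdefault(dst, set()).add(src)
        return {dst: srcs for dst, srcs in sources.items() if len(srcs) > 1}

    print("unused nodes:", completeness(nodes, edges))       # {'D'}
    print("multi-source nodes:", consistency(nodes, edges))  # {'B': {'A', 'C'}}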
8.5.2 Entities
8.5.2.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on entities as
units and services, standards, techniques and communications as relations. There are
six validation cases discussed in this paper. They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.2.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.2.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.2.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3 Services
8.5.3.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on services
as units and entities, standards, techniques and communications as relations. There are
six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.3.2 Well Structured
Let us consider a system where a service is connected to other services. What will the
source of the connection be with the other services? Will it be with one particular
service or another? There will be confusion and the well structured criterion described in
section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3.3 Consistency
A service is accessed from two different services. What interpretation will be placed on the meaning by the recipient service? The consistency condition in section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3.4 Completeness
From the service viewpoint, we can assume that there are services being defined but
unused. The services are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4 Standards
8.5.4.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on standards
as units and services, entities, techniques and communications as relations. There are
six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.4.2 Well Structured
Let us consider a system where a standard is connected to other standards. What will
the source of the connection be with the other standards? Will it be with one particular
standard or another? There will be confusion and the well structured criterion described
in section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4.3 Consistency
A standard is accessed from two different standards. What interpretation will be placed on the meaning by the recipient standard? The consistency condition in section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4.4 Completeness
From the standard viewpoint, we can assume that there are standards being defined but
unused. The standards are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5 Techniques
8.5.5.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on techniques
as units and services, standards, entities and communications as relations. There are six
validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.5.2 Well Structured
Let us consider a system where a technique is connected to other techniques. What will
the source of the connection be with the other techniques? Will it be with one particular
technique or another? There will be confusion and the well structured criterion described
in section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5.3 Consistency
A technique is accessed from two different techniques. What interpretation will be placed on the meaning by the recipient technique? The consistency condition in section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5.4 Completeness
From the technique viewpoint, we can assume that there are techniques being defined
but unused. The techniques are a waste and would cause confusion if they are known.
The completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.6 Communications
8.5.6.1 Introduction
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
8.5.6.2 Network Structure
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on
communications as units and services, standards, techniques and entities as relations.
There are six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.6.3 Well Structured
Let us consider a system where a communication is connected to other communications.
What will the source of the connection be with the other communications? Will it be with
one particular communication or another? There will be confusion and the well
structured criterion described in section 3.2.3 would highlight this case in the definition
of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.6.4 Consistency
A communication is accessed from two different communications. What interpretation will be placed on the meaning by the recipient communication? The consistency condition in section 3.2.3 will detect the problem within the system.
8.5.6.5 Completeness
From the communication viewpoint, we can assume that there are communications
being defined but unused. The communications are a waste and would cause confusion if
they are known. The completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7 Antivirus
8.5.7.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and antivirus constraints. In this case the network system
is based on entities as units and services, standards, techniques, communications and
antivirus constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.7.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8 Firewall
8.5.8.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and firewall constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
firewall constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.8.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9 APIDS
8.5.9.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and APIDS constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
APIDS constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.9.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10 Ciphers
8.5.10.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and cipher constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
cipher constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.10.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on the meaning by the recipient entity? The consistency condition in section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.6 Markov Theory
8.6.1 Introduction
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and communications. In this case one of these is classified as nodes and the others as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
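A minimal sketch of this use of an error sink, assuming a small chain of nodes with illustrative transition probabilities: repeatedly propagating the probability distribution through the transition matrix shows how much probability mass ends in the error node, which estimates the total error rate.

    # Minimal sketch of the Markov view with an explicit error sink.
    # The nodes and transition probabilities are illustrative.
    nodes = ["entry", "process", "store", "done", "error"]
    P = {
        "entry":   {"process": 0.98, "error": 0.02},
        "process": {"store": 0.95, "error": 0.05},
        "store":   {"done": 0.99, "error": 0.01},
        "done":    {"done": 1.0},    # absorbing success state
        "error":   {"error": 1.0},   # absorbing error sink
    }

    dist = {n: 0.0 for n in nodes}
    dist["entry"] = 1.0
    for _ in range(100):             # propagate the distribution through the chain
        nxt = {n: 0.0 for n in nodes}
        for src, mass in dist.items():
            for dst, p in P[src].items():
                nxt[dst] += mass * p
        dist = nxt

    print(f"estimated total error rate: {dist['error']:.4f}")  # about 0.078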
8.6.2 Entities
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on entities classified as
nodes and the services, standards, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.3 Services
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on services classified as nodes and the entities, standards, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for services we can
determine what services have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
services and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.4 Standards
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on standards classified as nodes and the entities, services, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for standards we
can determine what standards have flow through them and which do not. We can find
the edges that are used and those unused. We can ascertain what the flow is between
the standards and which are single entry or single exit blocks (groupings) within the
system. If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.5 Techniques
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on techniques classified as
nodes and the entities, services, standards and communications as edges.
Using the algorithms in the previous sub-section on network theory for techniques we
can determine what techniques have flow through them and which do not. We can find
the edges that are used and those unused. We can ascertain what the flow is between
the techniques and which are single entry or single exit blocks (groupings) within the
system. If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.6 Communications
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on communications classified
as nodes and the entities, services, standards and techniques as edges.
Using the algorithms in the previous sub-section on network theory for communication
nodes we can determine what communication nodes have flow through them and which
do not. We can find the edges that are used and those unused. We can ascertain what
the flow is between the communication nodes and which are single entry or single exit
blocks (groupings) within the system. If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.7 Antivirus
The network system is based on entities, services, standards, techniques,
communications and antivirus constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and antivirus constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.8 Firewall
The network system is based on entities, services, standards, techniques,
communications and firewall constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and firewall constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.9 APIDS
The network system is based on entities, services, standards, techniques,
communications and APIDS constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and APIDS constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.10 Ciphers
The network system is based on entities, services, standards, techniques,
communications and ciphers constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and ciphers constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.7 Quantitative Theory
8.7.1 Introduction
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of operators
n2 = number of operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
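A small sketch of these relations, using base-2 logarithms and an assumed Stroud number of 18 discriminations per second (neither is fixed by the text), and taking the usual Halstead form n1 log n1 + n2 log n2 for the estimated length N*, since the notation above is ambiguous on that point; the counts are illustrative.

    # Sketch of the Halstead-style relations above. Base-2 logs, the Stroud number
    # and the operator/operand counts are assumptions for illustration.
    import math

    n1, n2 = 20, 30      # number of distinct operators / operands
    N1, N2 = 120, 180    # total occurrences of operators / operands

    n = n1 + n2                                         # program vocabulary
    N = N1 + N2                                         # program length
    N_star = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
    V = N * math.log2(n)                                # actual program volume
    V_star = N_star * math.log2(n)                      # theoretical volume (n* = n)
    L = V_star / V                                      # program level
    lam = L * V_star                                    # programming language level
    S = 18                                              # Stroud number (assumed)
    m = V / L                                           # mental discriminations
    d = m / S                                           # development time

    print(f"n={n} N={N} V={V:.0f} V*={V_star:.0f} L={L:.2f} "
          f"lambda={lam:.0f} m={m:.0f} d={d:.0f}")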
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C) / m! = probability of hitting the target m times for a coverage ratio C
C = n·a·s·t / z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
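Reading P(m,C) as the Poisson form given above, a short sketch with illustrative values: the coverage ratio is computed from the search parameters and then gives the probability of each number of hits.

    # Sketch of the coverage relation C = n*a*s*t/z and P(m, C) = C^m e^(-C) / m!.
    # All values are illustrative.
    import math

    def coverage_ratio(n, a, s, t, z):
        """Area swept by n attempts of range a at speed s for time t, over area z."""
        return n * a * s * t / z

    def p_hits(m, C):
        """Probability of hitting the target exactly m times for coverage ratio C."""
        return C ** m * math.exp(-C) / math.factorial(m)

    C = coverage_ratio(n=5, a=2.0, s=1.5, t=10.0, z=300.0)   # C = 0.5
    print(f"C = {C:.2f}")
    for m in range(4):
        print(f"P({m}, C) = {p_hits(m, C):.3f}")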
The Z equation, relating the search effort to the search results over an average search area, explains software physics in terms of search actions.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces with the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, ..., Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ (i = 1..v) ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
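A small sketch of this calculation for one attribute with illustrative counts: p and n are the positive and negative examples, and each (pi, ni) pair is one subset produced by the attribute test.

    # Sketch of the information-gain calculation above. The counts are illustrative.
    import math

    def info(p, n):
        """Information content I of a set with p positive and n negative examples."""
        total = p + n
        result = 0.0
        for count in (p, n):
            if count:
                q = count / total
                result -= q * math.log2(q)
        return result

    def remainder(splits, p, n):
        """Expected information after the attribute test; splits = [(p_i, n_i), ...]."""
        return sum((pi + ni) / (p + n) * info(pi, ni) for pi, ni in splits)

    def information_gain(splits, p, n):
        return info(p, n) - remainder(splits, p, n)

    # 6 positive and 6 negative examples split by a three-valued attribute.
    splits = [(4, 0), (2, 2), (0, 4)]
    print(f"IG(A) = {information_gain(splits, 6, 6):.3f}")   # about 0.667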
Learning viewed as a Bayesian updating of a probability distribution over the hypothesis space uses predictions formed as a likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability of the hypothesis given all outcomes of the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits to encode the hypothesis (minimum description length). For large datasets we can use maximum likelihood (ML) learning, maximising the probability of all the training data for each hypothesis, which gives standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
In outline, maximum-likelihood parameter learning proceeds as follows:
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques do help.
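For the simplest possible case, a sketch of steps 2 to 4: with a Bernoulli model and illustrative counts, setting the derivative of the log likelihood to zero gives the observed proportion, which a coarse grid search confirms.

    # Maximum-likelihood sketch for a Bernoulli model: L(theta) = theta^h (1-theta)^t
    # for h successes and t failures; d/dtheta log L = 0 gives theta = h / (h + t).
    # The counts are illustrative.
    import math

    h, t = 7, 3

    def log_likelihood(theta, h, t):
        return h * math.log(theta) + t * math.log(1.0 - theta)

    theta_ml = h / (h + t)   # closed form from the zero-derivative condition

    grid_best = max((i / 1000 for i in range(1, 1000)),
                    key=lambda th: log_likelihood(th, h, t))
    print(f"closed-form ML estimate: {theta_ml:.3f}, grid search: {grid_best:.3f}")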
8.8.1.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
remainder(A) = Σ (i = 1..v) ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting a unit. When we are considering the correctness of the alternatives of units in a service we use the logical 'or' operation. When we come across a situation where one unit for a particular system implies that we will always have to use specific further units, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
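A small sketch of these combinations, assuming independence for the simple 'and'/'or' forms and an explicit conditional probability for the dependent form; the values are illustrative.

    # Sketch of combining event probabilities as described above.
    # p_a, p_b and p_b_given_a are illustrative values.
    p_a, p_b = 0.9, 0.8     # probability that each unit is selected correctly
    p_b_given_a = 0.95      # dependent form: B given that A was used

    p_and_independent = p_a * p_b                 # A and B, independent
    p_or_independent = p_a + p_b - p_a * p_b      # A or B, independent
    p_and_dependent = p_a * p_b_given_a           # A and B, dependent form

    print(f"P(A and B), independent : {p_and_independent:.3f}")
    print(f"P(A or B), independent  : {p_or_independent:.3f}")
    print(f"P(A and B), dependent   : {p_and_dependent:.3f}")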
8.10.2 Entities
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical 'or' operation. When we come across a situation where one entity for a particular system implies that we will always have to use specific further entities, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.3 Services
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct service. We use the logical 'and' operator for selecting groups of services based on the recurrence of selecting a service. When we are considering the correctness of the alternatives of services in a service we use the logical 'or' operation. When we come across a situation where one service for a particular system implies that we will always have to use specific further services, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.4 Standards
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct standard. We use the logical 'and' operator for selecting groups of standards based on the recurrence of selecting a standard. When we are considering the correctness of the alternatives of standards in a service we use the logical 'or' operation. When we come across a situation where one standard for a particular system implies that we will always have to use specific further standards, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.5 Techniques
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct technique. We use the logical 'and' operator for selecting groups of techniques based on the recurrence of selecting a technique. When we are considering the correctness of the alternatives of techniques in a service we use the logical 'or' operation. When we come across a situation where one technique for a particular system implies that we will always have to use specific further techniques, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.6 Communications
The communications probability is based on a mixture of entities, services, standards
and techniques which seem to be too complicated to analyse at present. We use the
network model described above to give a basis for the collection of data about the
system. When we consider the occurrence of an event in system research we are
talking about events, recurring events or choices of event. In the case of sequences of
occurrences we have the count of using a particular node. We use the logical 'and' operator for using groups of nodes based on the recurrence of using a node. When we are considering the correctness of the alternatives of nodes in a system we use the logical 'or' operation. When we come across a situation where one node for a particular system implies that we will always have to use specific further nodes, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of systems imply a network form and we can use the methods described in the part on network structures.
The values show 2 forms of information: the first set of values is for the locality; the second set is the general statistics for the global system.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.7 Antivirus
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical 'or' operation. When we come across a situation where one entity for a particular system implies that we will always have to use specific further entities, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.8 Firewall
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical 'or' operation. When we come across a situation where one entity for a particular system implies that we will always have to use specific further entities, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.9 APIDS
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical 'or' operation. When we come across a situation where one entity for a particular system implies that we will always have to use specific further entities, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.10.10 Ciphers
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A˅B
A and B A˄B
A given B A│B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting an entity. When we are considering the correctness of the alternatives of entities in a service we use the logical 'or' operation. When we come across a situation where one entity for a particular system implies that we will always have to use specific further entities, we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the methods described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.11 Geographic Information Systems
8.11.1 Introduction
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented as discrete objects (vector data) and continuous fields (raster images). Displays can illustrate and analyse features and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
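A minimal sketch of the two representations, assuming a tiny in-memory store: vector features (coordinates plus attributes) for discrete objects and a raster grid for a continuous field; names, positions and values are illustrative.

    # Minimal sketch of GIS-style storage: vector features for discrete objects and
    # a raster grid for a continuous field. All names and values are illustrative.
    vector_features = [
        {"name": "router-1", "kind": "hardware", "position": (51.48, -3.18)},
        {"name": "server-a", "kind": "hardware", "position": (51.50, -3.20)},
    ]

    raster_field = [          # e.g. measured signal strength over the same area
        [0.2, 0.4, 0.5],
        [0.3, 0.7, 0.6],
        [0.1, 0.5, 0.9],
    ]

    def features_near(lat, lon, radius, features):
        """Very rough spatial query: features within a lat/lon box of size radius."""
        return [f["name"] for f in features
                if abs(f["position"][0] - lat) <= radius
                and abs(f["position"][1] - lon) <= radius]

    print(features_near(51.49, -3.19, 0.03, vector_features))
    print("field value at cell (1, 1):", raster_field[1][1])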
8.11.2 Entity
Entities fall into 2 forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.3 Services
Services fall into 2 forms. The first is pure data values which are not affected by position, e.g. the general description of a service type. The other is dependent on position, e.g. a service in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables services to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.4 Standards
Standards fall into 2 forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a physical electrical connection in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables standards to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.5 Techniques
Techniques fall into 2 forms. The first is pure data values which are not affected by position, e.g. the general description of a Bluetooth type. The other is dependent on position, e.g. a radio connection in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables techniques to be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
Communications fall into 2 forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables communications to be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.7 Antivirus
For antivirus, entities fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables the entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated containing the device stack and position, and after review the GIS database is adjusted.
8.11.8 Firewall
For the firewall, entities fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables the entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated containing the device stack and position, and after review the GIS database is adjusted.
8.11.9 APIDS
For APIDS, entities fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables the entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated containing the device stack and position, and after review the GIS database is adjusted.
8.11.10 Ciphers
For ciphers, entities fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data can be held as discrete objects (vector) and continuous fields (raster). This enables the entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
If a unit is not found then an error report is generated containing the device stack and position, and after review the GIS database is adjusted.
8.12 Curve Fitting
8.12.1 Introduction
Curve fitting constructs a curve or mathematical function that best fits a series of given data points, subject to constraints. It uses two main methods: interpolation, for an exact fit of the data, and smoothing, for a "smooth" curve function that approximates the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data, estimate values of the function where data values are missing, and summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, and carries additional uncertainty because it depends on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher-order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or that the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations from the given data points. With language processing it is found that affine matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
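As a brief illustration (a sketch only, assuming NumPy is available; the data values are hypothetical), a least squares polynomial fit can be used to interpolate missing values and, with more caution, to extrapolate beyond the observed range:

import numpy as np

# Observed data points (hypothetical measurements).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.7, 5.8, 11.1, 17.9])

# Least squares fit of a degree-2 polynomial; the residuals measure the deviations.
coeffs, residuals, *_ = np.polyfit(x, y, deg=2, full=True)
fitted = np.poly1d(coeffs)

print("interpolated value at x=2.5:", fitted(2.5))   # within the observed range
print("extrapolated value at x=6.0:", fitted(6.0))   # outside the range, higher uncertainty
print("sum of squared deviations:", residuals)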
8.12.2 Entities
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.3 Services
Curve fitting is a method of selecting the correct service from the group of services to
process an entity.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.4 Standards
Curve fitting is a method of selecting the correct standard from the group of standards
to process a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.5 Techniques
Curve fitting is a method of selecting the correct technique from the group of
techniques to apply to a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. Curve fitting is a
method of selecting the correct source from the group of nodes to apply to a
communications service. It is used to select the correct destination from the group of
nodes to apply to a communications service and then to select the correct connection
from the group of routes to apply to a communications service. Curve fitting is a
method of checking the entity, service, technique, standard and communications from
the components that make up the system.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.7 Antivirus
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.8 Firewall
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.9 APIDS
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.12.10 Ciphers
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found then an error report is generated containing the device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
8.17.5 Techniques
The requirements for the technique data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It contains iteration control, name as string, sound and picture, hardware representation, meaning, version, timestamp, properties (name and value), statistics, nesting, events (name, value and interrupt service), priority and relation to other techniques. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended standard and extended technique definition sections.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
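A minimal sketch of such an XML-tagged technique record (the field names are taken from the list above; the values are hypothetical) could be built with Python's standard library as follows:

import xml.etree.ElementTree as ET

# Build one technique record following the object oriented, XML-tagged structure
# described above; all values are illustrative placeholders.
technique = ET.Element("technique", version="1", timestamp="2020-01-01T00:00:00Z")
ET.SubElement(technique, "name").text = "bluetooth-pairing"
ET.SubElement(technique, "meaning").text = "establish a short-range radio connection"
props = ET.SubElement(technique, "properties")
ET.SubElement(props, "property", name="range", value="10m")
events = ET.SubElement(technique, "events")
ET.SubElement(events, "event", name="pair-failed", value="retry",
              interrupt_service="pairing-recovery")
ET.SubElement(technique, "priority").text = "5"

print(ET.tostring(technique, encoding="unicode"))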
8.17.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications metrics are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present.
The requirements for the communications data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It defines name (string, sound, picture), hardware representation, version, timestamp, statistics, entities, services, techniques and standards. Extensions are defined by a similar set of rules.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
8.17.7 Antivirus
The requirements for the service data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value, relation to other services and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
8.17.8 Firewall
The requirements for the service data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value, relation to other services and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
8.17.9 APIDS
The requirements for the service data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value, relation to other services and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
8.17.10 Ciphers
The requirements for the service data set are:
object oriented type
event-driven architecture data set
hypertext hypermedia data set
probabilistic data set
real-time data set
The logical data set structure must follow the object oriented type with the XML tags. It has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value, relation to other services and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
8.18 Compiler Technology Theory
8.18.1 Introduction
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the programming language. It provides the ability to include files, expand macros, compile conditionally and control line numbering. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining blocks of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University introduced the terms front end, middle end, and back end. The front end verifies standard and technique, generates an intermediate representation, and produces errors and warning messages. It uses the three phases of lexing, parsing and semantic analysis. Lexing and parsing perform the syntactic analysis of services and phrases and can be automatically generated from the grammar for the language. The lexical and phrase grammars cover the context-free parts of the processing; context-sensitivity is handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some optimizations for the back end. The back end generates the target code and performs further optimization.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
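To make the front-end phases concrete, here is a small, self-contained sketch (not from the source; a toy arithmetic language parsed left to right, ignoring precedence) that lexes, parses and emits a simple intermediate representation:

import re

# Toy front end: lexer -> parser -> three-address style intermediate representation (IR).
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\S))")

def lex(text):
    """Split the source text into number and operator tokens."""
    tokens = []
    for number, op in TOKEN_RE.findall(text):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

def parse(tokens):
    """Parse 'NUM (op NUM)*' left to right into a nested tree; unknown input is an error."""
    if not tokens or tokens[0][0] != "NUM":
        raise SyntaxError("expected a number")        # error reported for review
    tree, rest = tokens[0], tokens[1:]
    while rest:
        if len(rest) < 2 or rest[0][0] != "OP" or rest[1][0] != "NUM":
            raise SyntaxError("expected 'operator number'")
        tree = (rest[0][1], tree, rest[1])
        rest = rest[2:]
    return tree

def emit_ir(node, ir):
    """Flatten the tree into (temp, op, left, right) IR tuples."""
    if node[0] == "NUM":
        return node[1]
    op, left, right = node
    l = emit_ir(left, ir)
    r = emit_ir(right, ir)
    temp = f"t{len(ir) + 1}"
    ir.append((temp, op, l, r))
    return temp

ir = []
emit_ir(parse(lex("1 + 2 * 3")), ir)
print(ir)   # [('t1', '+', 1, 2), ('t2', '*', 't1', 3)]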
8.18.2 Entities
As regards entities, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then the input entity is queried to see if there is an error or whether the entity should be added to the entity set. An escape sequence can be used to extend the entity set.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
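A minimal sketch of this flow (the names and the escape sequence are hypothetical, not from the source) keeps an entity set, queries unrecognised input, and accepts an escape sequence to extend the set:

# Hypothetical entity set and an escape sequence ("\new ") for extending it.
entity_set = {"router", "sensor", "gateway"}

def process_entity(token):
    """Recognise an entity, extend the set via the escape sequence, or report an error."""
    if token.startswith("\\new "):
        new_entity = token[len("\\new "):]
        entity_set.add(new_entity)          # extension of the entity set
        return f"added entity: {new_entity}"
    if token in entity_set:
        return f"recognised entity: {token}"
    return f"error report: unrecognised entity '{token}' queued for review"

print(process_entity("sensor"))
print(process_entity("camera"))          # not recognised -> queried / reported
print(process_entity("\\new camera"))    # escape sequence extends the entity set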
8.18.3 Services
As regards services, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the services are processed based on the learning, probability, network analysis and Markov theory for the services sections. If a service is not recognised then the input service is queried to see if there is an error or whether the service should be added to the service set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.4 Standards
As regards standards, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They are defined by the standard definitions for each of the entities, services and techniques. They also give priorities of how the standards are processed based on the learning, probability, network analysis and Markov theory for the standards sections. If a standard is not recognised then the input standard is queried to see if there is an error or whether the standard should be added to the standard set. We define a set of rules for extending the standards of the system which are performed in coordination with the extended services and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.5 Techniques
As regards techniques, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They are defined by the technique definitions for each part of the system. They also give priorities of how the techniques are processed based on the learning, probability, network analysis and Markov theory for the technique sections. If a technique definition is not recognised then the input definition is queried to see if there is an error or whether the technique definition should be added to the technique definition set. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended standard and extended services definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. As regards techniques, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They are defined by the technique definitions for each part of the system. They also give priorities of how the techniques are processed based on the learning, probability, network analysis and Markov theory for the technique sections. If a technique definition is not recognised then the input definition is queried to see if there is an error or whether the technique definition should be added to the technique definition set. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended standard and extended services definition sections.
We start with a set of base elements as the entities, services, techniques and
standards of the system and a sequence for extending the entity set, services set,
techniques set and standards set.
If a system element is not recognised then the element is queried to see if there is an
error or the element should be added to the system set. An escape sequence can be
used to extend the system element set with a definition based on appropriate
references to previously defined elements.
By analogy with object oriented programming we find it gives us the concept of scope
for meaning, objects, properties, methods with arguments, the "this" operator and the
concepts of synonyms, generalisation and specification. Overloading of definitions
allows for meaning to change according to context. Replicating actions (iterations) can
be performed under different cases. Other operations are ways of defining properties of objects or actions with polymorphism. We note that the multiple definitions of system elements found in the system dictionaries are equivalent to the conditional compilation and macros found in the C programming language. We define a set of rules for extending the entities, services, standards and techniques of the system which are performed in coordination with the extended entities, extended services, extended standard and extended technique definition sections. This technique is also used to extend the power of the system.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
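As an illustrative sketch only (the classes are hypothetical, not from the source), scope, overloading by context and generalisation/specification can be pictured with a small class hierarchy whose method meaning changes with the specialised type:

# Generalisation: Device; specification: Sensor and Router specialise it.
# The overloaded describe() changes meaning according to the context (the subclass).
class Device:
    def __init__(self, name):
        self.name = name            # "this"/self gives scope for the object's properties

    def describe(self):
        return f"{self.name}: generic device"

class Sensor(Device):
    def describe(self):
        return f"{self.name}: measures its environment"

class Router(Device):
    def describe(self):
        return f"{self.name}: forwards traffic between networks"

# Polymorphism: the same call is replicated (iterated) over different cases.
for device in (Device("d0"), Sensor("s1"), Router("r1")):
    print(device.describe())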
8.18.7 Antivirus
As regards services, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the services are processed based on the learning, probability, network analysis and Markov theory for the services sections. If a service is not recognised then the input service is queried to see if there is an error or whether the service should be added to the service set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.8 Firewall
As regards services, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the services are processed based on the learning, probability, network analysis and Markov theory for the services sections. If a service is not recognised then the input service is queried to see if there is an error or whether the service should be added to the service set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.9 APIDS
As regards services, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the services are processed based on the learning, probability, network analysis and Markov theory for the services sections. If a service is not recognised then the input service is queried to see if there is an error or whether the service should be added to the service set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.18.10 Ciphers
As regards services, compiler technology follows the formal definition found in programming languages for the source (input), intermediate and target (output) languages. They also give priorities of how the services are processed based on the learning, probability, network analysis and Markov theory for the services sections. If a service is not recognised then the input service is queried to see if there is an error or whether the service should be added to the service set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
8.19 Communications Theory
8.19.1 Introduction
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system; the communicating entities must speak the same language. Entities include user applications, e-mail facilities and terminals. Systems are computers, terminals or remote sensors. Key elements of a protocol are standard (data formats, signal levels), technique (control information, error handling) and timing (speed matching, sequencing).
Protocol architecture breaks the task of communication up into modules. At each layer, protocols are used to communicate, and control information is added to the user data.
A formal language is a set of strings of terminal symbols. Each string in the language can be analysed or generated by the grammar. The grammar is a set of rewrite rules over non-terminals. Grammar types are regular, context-free, context-sensitive and recursively enumerable, with natural languages probably context-free and parsable in real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
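As a small sketch (a toy grammar, not from the source), rewrite rules over non-terminals can generate and recognise message strings; the nesting of the rules gives the parse-tree structure:

import random

# Toy context-free grammar as rewrite rules over non-terminals.
grammar = {
    "MESSAGE": [["HEADER", "BODY"]],
    "HEADER":  [["src", "dst"]],
    "BODY":    [["data"], ["data", "BODY"]],   # recursion allows bodies of any length
}

def generate(symbol):
    """Expand a non-terminal into a string of terminal symbols."""
    if symbol not in grammar:
        return [symbol]                        # terminal symbol
    production = random.choice(grammar[symbol])
    out = []
    for part in production:
        out.extend(generate(part))
    return out

print(generate("MESSAGE"))   # e.g. ['src', 'dst', 'data', 'data']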
8.19.2 Entities
As regards entities, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the entities are processed based on the learning, probability,
network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is not recovered, the entity is referred to a human to see if there is an error or whether the entity should be added to the entity set. An escape sequence can be used to extend the entity set.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
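A minimal sketch of this recovery flow (the functions and entity names are hypothetical; the parallel check is simulated by a few simple normalisations) might look like the following:

# Hypothetical recovery flow: retry recognition, then refer to a human reviewer.
known_entities = {"router", "sensor", "gateway"}

def parallel_check(token):
    """Repeated analysis of the situation by parallel checks (here: simple normalisations)."""
    for candidate in (token, token.strip(), token.strip().lower()):
        if candidate in known_entities:
            return candidate
    return None

def recognise(token):
    recovered = parallel_check(token)
    if recovered is not None:
        return f"recovered entity: {recovered}"
    # Not recovered: query a human and optionally extend the entity set.
    return f"query human: is '{token}' an error, or should it be added to the entity set?"

print(recognise("  Sensor "))   # recovered by the recovery process
print(recognise("camera"))      # escalated to a human reviewer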
8.19.3 Services
As regards services, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the services are processed based on the learning, probability,
network analysis and Markov theory for the services sections. If a service is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the service is not recovered, the service is referred to a human to see if there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.4 Standards
As regards standards, communications technology follows the formal definition found in programming languages for source, transmission and destination languages. They also give priorities of how the standards are processed based on the learning, probability, network analysis and Markov theory for the standards sections. If a standard is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the standard is not recovered, the standard is referred to a human to see if there is an error or whether the standard unit should be added to the standard dictionary set. We define a set of rules for extending the standards of the language which are performed in coordination with the extended services and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.5 Techniques
As regards techniques, communications technology follows the formal definition found in programming languages for source, transmission and destination languages. They also give priorities of how the techniques are processed based on the learning, probability, network analysis and Markov theory for the technique sections. If a technique is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the technique is not recovered, the technique is referred to a human to see if there is an error or whether the technique unit should be added to the technique dictionary set. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended services and extended standard definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.6 Communications
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system; the communicating entities must speak the same language. Entities include user applications, e-mail facilities and terminals. Systems are computers, terminals or remote sensors. Key elements of a protocol are standard (data formats, signal levels), technique (control information, error handling) and timing (speed matching, sequencing).
Protocol architecture breaks the task of communication up into modules. At each layer, protocols are used to communicate, and control information is added to the user data.
Grammar specifies the compositional structure of complex messages. A formal language is a set of strings of terminal symbols. Each string in the language can be analysed or generated by the grammar. The grammar is a set of rewrite rules over non-terminals. Grammar types are regular, context-free, context-sensitive and recursively enumerable, with natural languages probably context-free and parsable in real time. Parse trees demonstrate the grammatical structure of a message.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.7 Antivirus
As regards services, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the services are processed based on the learning, probability,
network analysis and Markov theory for the services sections. If a service is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the service is not recovered, the service is referred to a human to see if there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.8 Firewall
As regards services, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the services are processed based on the learning, probability,
network analysis and Markov theory for the services sections. If a service is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the service is not recovered, the service is referred to a human to see if there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.9 APIDS
As regards services, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the services are processed based on the learning, probability,
network analysis and Markov theory for the services sections. If a service is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the service is not recovered, the service is referred to a human to see if there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.19.10 Ciphers
As regards services, communications technology follows the formal definition found in
programming languages for source, transmission and destination languages. They also
give priorities of how the services are processed based on the learning, probability,
network analysis and Markov theory for the services sections. If a service is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the service is not recovered, the service is referred to a human to see if there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language which are performed in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
8.20 Database Technology
8.20.1 Introduction
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, together with the development of hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
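As a minimal sketch (using only the Python standard library; the table and field names are hypothetical), a small document-oriented store with XML-style records and a key-value index could look like this:

import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical document store: each entity is stored as an XML document keyed by id,
# giving a simple key-value / document-oriented layout for interoperability.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (id TEXT PRIMARY KEY, document TEXT)")

def put_entity(entity_id, entity_type, position):
    doc = ET.Element("entity", id=entity_id, type=entity_type, position=position)
    db.execute("INSERT OR REPLACE INTO entities VALUES (?, ?)",
               (entity_id, ET.tostring(doc, encoding="unicode")))

def get_entity(entity_id):
    row = db.execute("SELECT document FROM entities WHERE id = ?", (entity_id,)).fetchone()
    return ET.fromstring(row[0]) if row else None   # None -> report for review

put_entity("unit-1", "router", "51.48,-3.18")        # create / update (data manipulation)
entity = get_entity("unit-1")                         # query
print(entity.get("type"))                             # -> router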
8.20.2 Entities
The requirements for the entity database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.1 (Appendix – Database Scheme – Entities) and an escape sequence for
extending the entity set as in section 8.1 (Appendix – Database Scheme - Entities).
The entity definition set above is created once when the entity is added to the system
and changed and removed infrequently as the entity set is extended. It is queried
frequently for every entity that is read. The entities are updated (inserted, modified,
and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.3 Services
The requirements for the service database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.4 Standards
The requirements for the standard database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.3 (Appendix – Database Scheme – Standards).
We define a set of rules for extending the standard of the system which are performed
in coordination with the extended services and extended technique definition sections
as in section 8.3 (Appendix – Database Scheme – Standards).
The standard definition set above is created once when the standard is added to the system and changed and removed infrequently as the standard set is extended. It is queried frequently for every standard unit that is read. The standard
rules are updated (inserted, modified, and deleted) infrequently. The administration
(maintain users, data security, performance, data integrity, concurrency and data
recovery using utilities) of the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.5 Techniques
The requirements for the technique database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags as in section 8.4 (Appendix – Database Scheme – Techniques). We define a set of
rules for extending the techniques of the system which are performed in coordination
with the extended standard and extended technique definition sections as in section
8.4 (Appendix – Database Scheme – Techniques).
The technique definition set above is created once when the technique is added to the
system and changed and removed infrequently as the technique set is extended. It is
queried frequently for every technique rule that is read. The techniques are updated
(inserted, modified, and deleted) infrequently. The administration (maintain users, data
security, performance, data integrity, concurrency and data recovery using utilities) of
the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.6 Communications
In electronics a dialogue is a communications operation which uses a source,
generating data to be transmitted, a transmitter, converting data into transmittable
information, a transmission system, carrying information, a receiver, converting
received information into data, and a destination taking incoming data. Communications is a two-way process, with the source talking to the destination and the destination returning the conversation to the source.
Key communications tasks consist of transmission system utilization, interfacing,
information generation, synchronization, exchange management, error detection and
correction, addressing and routing, recovery, information formatting, security and
network management.
Protocols are used for communications between entities in a system; the communicating entities must speak the same language. Key elements of a protocol are standard (data and information formats), technique (control information, error handling) and timing (speed matching, sequencing). Protocol architecture breaks the task of communication up into modules for processing at different levels of functionality.
The requirements for the communications database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.5 (Appendix – Database Scheme – Communications). Extensions are defined as in section 8.5 (Appendix – Database Scheme – Communications).
The communications definition set above is created once when the communications is
added to the system and changed and removed infrequently as the communications set
is extended. It is queried frequently for every communications rule that is read. The
communications are updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.7 Antivirus
The requirements for the service database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.8 Firewall
The requirements for the service database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.9 APIDS
The requirements for the service database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.10 Ciphers
The requirements for the service database are:
object oriented type
event-driven architecture database
hypertext hypermedia database
probabilistic database
real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.21 Summary
8.21.1 Introduction
We have reviewed how some other technologies can contribute to IoT. This has consisted of 22 further sub-sections reflecting the 19 theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. They should use their memory rather than documentation. If they are forced to use documentation, they should have supple joints and long, light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation (standards, techniques) and specification based on the properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the dynamic input from a source to give valid results, with the rules reflecting the actions of the system.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes the properties of the algebraic and logic theory and views them in a different light, with the language entities as nodes and their connections as edges. We have discussed the following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine what the flow is between the nodes and the partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of the error at each point of the Markov process; the error sink node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
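As a short sketch (a hypothetical three-node system plus an error sink; the transition probabilities are illustrative), the probability that has flowed into the error sink after a number of steps gives an estimate of the error rate of the network:

import numpy as np

# Transition matrix over [node A, node B, node C, error sink]; rows sum to 1.
# Small probabilities lead into the error sink from each processing node.
P = np.array([
    [0.00, 0.90, 0.05, 0.05],   # A
    [0.10, 0.00, 0.85, 0.05],   # B
    [0.80, 0.10, 0.00, 0.10],   # C
    [0.00, 0.00, 0.00, 1.00],   # error sink (absorbing)
])

# Probability of having reached the error sink after n steps, starting at node A.
state = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(20):
    state = state @ P
print("probability of absorption in the error sink after 20 steps:", round(state[-1], 3))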
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, techniques, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They start with a set of basic concepts of entities, services, standards, techniques and communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in medical systems to build a data source from the learning process and then use the minimum "distance" to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System elements and their counts:
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
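As a sketch of how curve fitting might be applied to the collected statistics, the example below fits a quadratic in log(hours) to an invented learning curve and then interpolates and extrapolates from it; the data points, the model family and the prediction points are all assumptions made for the example.

import numpy as np

# Invented learning-curve statistics: hours of experience vs. errors per session.
hours = np.array([1, 2, 4, 8, 16, 32], dtype=float)
errors = np.array([20, 15, 11, 7, 5, 4], dtype=float)

# Fit a quadratic in log(hours) as one possible constraint on the curve shape.
coeffs = np.polyfit(np.log(hours), errors, deg=2)

# Interpolate (12 hours) and extrapolate (64 hours) from the fitted curve.
for h in (12.0, 64.0):
    predicted = np.polyval(coeffs, np.log(h))
    print(f"predicted errors after {h:g} hours: {predicted:.1f}")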
Configuration management identifies item attributes for control, recording and reporting against baselines for audits at delivery or on completion of changes, to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests with static and dynamic analysis, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics such as code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development / deployment activity is made smaller by automating all the processes from source control through to production.
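The trigger-and-rollback behaviour described above can be sketched as a simple pipeline driver; the stage names and the commit and rollback callbacks are placeholders for illustration, not the interface of any real continuous integration tool.

# Run the stages in order; commit to the repository only if every stage passes,
# otherwise roll back so the system is not corrupted by a failed update.
def run_pipeline(stages, commit, rollback):
    for name, stage in stages:
        if not stage():
            print(f"stage failed: {name}; rolling back")
            rollback()
            return False
    commit()
    return True

# Placeholder stage functions standing in for the real build, tests and profiling.
stages = [
    ("build system and documentation", lambda: True),
    ("unit and integration tests", lambda: True),
    ("static and dynamic analysis", lambda: True),
    ("performance profiling", lambda: True),
]

run_pipeline(stages,
             commit=lambda: print("committed to repository"),
             rollback=lambda: print("rolled back to last good baseline"))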
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data is held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence and interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standards and techniques for defining the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
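One way the entity record described above might be laid out is sketched below as a Python dataclass; every field name, and the example sensor, is illustrative rather than the scheme actually defined in the appendix.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str
    iteration_control: Optional[str] = None
    entity_type: Optional[str] = None
    sound_id: Optional[str] = None                    # identity for sound
    picture_id: Optional[str] = None                  # identity for picture
    hardware_representation: Optional[str] = None
    meaning: Optional[str] = None
    version: Optional[str] = None
    timestamp: Optional[str] = None
    geographic_position: Optional[tuple] = None       # e.g. (latitude, longitude)
    properties: dict = field(default_factory=dict)    # name -> value
    statistics: dict = field(default_factory=dict)
    children: list = field(default_factory=list)      # nesting

# Example record; an escape mechanism outside this class would extend the set.
sensor = Entity(name="temperature-sensor-01",
                entity_type="sensor",
                geographic_position=(51.48, -3.18),
                properties={"unit": "celsius"})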
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service with its arguments, a priority value (absolute and relative to other services) and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service) and priority (absolute and relative to other techniques). We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem to be too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
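A minimal sketch of this recognise-or-extend step, assuming the data sets are held as simple name sets and that an escape prefix of the form \kind:name requests an extension; both conventions are invented for the example.

# Data sets keyed by kind; the names inside each set are invented examples.
data_sets = {
    "entities": {"sensor"},
    "services": {"report"},
    "standards": {"iso-27000"},
    "techniques": {"encrypt"},
    "communications": {"mqtt"},
}

def process(token):
    # Escape sequence "\kind:name" extends the named data set.
    if token.startswith("\\"):
        kind, name = token[1:].split(":", 1)
        data_sets[kind].add(name)
        return f"added {name!r} to {kind}"
    for kind, names in data_sets.items():
        if token in names:
            return f"{token!r} recognised in {kind}"
    # Unrecognised: in the full system this is queried to decide between a
    # genuine error and a definition that should be added.
    return f"{token!r} not recognised: query whether it is an error or an addition"

print(process("sensor"))
print(process("\\services:calibrate"))
print(process("unknown-token"))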
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
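Read literally, the five-stage model can be sketched as a chain of functions; the text data and the UTF-8 encoding stand in for real signal conversion and are only illustrative.

def source():
    # Generates the data to be transmitted.
    return "temperature=21.5"

def transmitter(data):
    # Converts the data into transmittable signals (here, just bytes).
    return data.encode("utf-8")

def transmission_system(signal):
    # Carries the signal from transmitter to receiver (identity in this sketch).
    return signal

def receiver(signal):
    # Converts the received signal back into data.
    return signal.decode("utf-8")

def destination(data):
    # Takes the incoming data.
    print("received:", data)

destination(receiver(transmission_system(transmitter(source()))))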
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication up into modules, which are entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
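A toy sketch of control information being added to user data at each layer on the way down and stripped again by the peer on the way up; the four layer names are illustrative and not taken from this document.

layers = ["application", "transport", "network", "link"]

def send(user_data):
    pdu = user_data
    for layer in layers:                  # each layer wraps the unit in its own header
        pdu = f"[{layer}]" + pdu
    return pdu

def receive(pdu):
    for layer in reversed(layers):        # the peer strips headers outermost first
        header = f"[{layer}]"
        assert pdu.startswith(header), f"missing {header}"
        pdu = pdu[len(header):]
    return pdu

frame = send("temperature=21.5")
print(frame)                              # [link][network][transport][application]temperature=21.5
print(receive(frame))                     # temperature=21.5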
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is still not recovered, it is queried to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are performed in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
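Since the logical structure is object oriented with XML tags, a single entity record might be serialised as sketched below; the tag and attribute names here are guesses, the authoritative tags being those in section 8 (Appendix – Database Scheme).

import xml.etree.ElementTree as ET

# Build one illustrative entity record with invented tag and attribute names.
entity = ET.Element("entity", attrib={"version": "1",
                                      "timestamp": "2020-01-01T00:00:00Z"})
ET.SubElement(entity, "name").text = "temperature-sensor-01"
properties = ET.SubElement(entity, "properties")
ET.SubElement(properties, "property", attrib={"name": "unit"}).text = "celsius"

print(ET.tostring(entity, encoding="unicode"))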
The system definition set out in section 8 (Appendix – Database Scheme) is created once; it is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.2 Entities
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the documentation meet those requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than documentation. If they are forced to use documentation, they should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and the partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate which parts of the system are used. The structures of IoT imply a network form for both the static and dynamic views, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting against baselines for audits at delivery or on completion of changes, to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests with static and dynamic analysis, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics such as code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development / deployment activity is made smaller by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data is held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence and interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standards and techniques for defining the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service with its arguments, a priority value (absolute and relative to other services) and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service) and priority (absolute and relative to other techniques). We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem to be too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication up into modules, which are entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is still not recovered, it is queried to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are performed in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once; it is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.3 Services
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the documentation meet those requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than documentation. If they are forced to use documentation, they should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and the partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate which parts of the system are used. The structures of IoT imply a network form for both the static and dynamic views, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting against baselines for audits at delivery or on completion of changes, to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests with static and dynamic analysis, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics such as code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development / deployment activity is made smaller by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data is held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence and interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standards and techniques for defining the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service with its arguments, a priority value (absolute and relative to other services) and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service) and priority (absolute and relative to other techniques). We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem to be too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication up into modules, which are entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is still not recovered, it is queried to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are performed in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once; it is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.4 Standards
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the documentation meet those requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than documentation. If they are forced to use documentation, they should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and the partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate which parts of the system are used. The structures of IoT imply a network form for both the static and dynamic views, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting against baselines for audits at delivery or on completion of changes, to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests with static and dynamic analysis, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics such as code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development / deployment activity is made smaller by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data is held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence and interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standards and techniques for defining the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service with its arguments, a priority value (absolute and relative to other services) and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service) and priority (absolute and relative to other techniques). We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem to be too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication up into modules, which are entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
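A sketch of that layering, in which each layer wraps the user data with its own control information on the way down and removes it on the way up (the layer names are assumptions for illustration):

# Each layer adds a small header (control information) to the data it is given.
LAYERS = ["application", "transport", "network"]

def send(user_data: str) -> str:
    pdu = user_data
    for layer in LAYERS:                     # encapsulate, top layer first
        pdu = f"[{layer}]{pdu}"
    return pdu                               # what actually crosses the medium

def receive(pdu: str) -> str:
    for layer in reversed(LAYERS):           # strip headers in reverse order
        prefix = f"[{layer}]"
        assert pdu.startswith(prefix), f"missing {layer} header"
        pdu = pdu[len(prefix):]
    return pdu

wire = send("metre reading 42")
print(wire)            # [network][transport][application]metre reading 42
print(receive(wire))   # metre reading 42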
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is referred
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the
system is extended. It is queried frequently, for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.5 Techniques
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
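A sketch of two of those checks on a small directed graph of language entities (the graph itself is invented for illustration): being well structured and complete is read here as "every node is reachable from the entry", and having a way of completing its processes as "every node can reach the exit".

from collections import deque

edges = {                      # invented example graph: entity -> entities it connects to
    "entry": ["parse", "lookup"],
    "parse": ["validate"],
    "lookup": ["validate"],
    "validate": ["exit"],
    "exit": [],
}

def reachable(start: str, graph: dict) -> set:
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

nodes = set(edges)
reverse = {n: [] for n in nodes}
for src, targets in edges.items():
    for dst in targets:
        reverse[dst].append(src)

print("reachable from entry:", reachable("entry", edges) == nodes)       # well structured / complete
print("all nodes can reach exit:", reachable("exit", reverse) == nodes)  # can complete its processes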
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the
probability of error at different parts of the network, the size of the error at
each point of the Markov process and, from the error node, an estimate of the total error
rate of the network.
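A minimal numerical sketch (the transition probabilities are invented): an absorbing error sink is added to the chain, and the long-run probability of ending in it estimates the total error rate.

# States: 0 = input, 1 = process, 2 = done (absorbing), 3 = error sink (absorbing).
P = [
    [0.0, 0.9, 0.0, 0.1],    # from input: 90% to process, 10% to error
    [0.0, 0.0, 0.95, 0.05],  # from process: 95% to done, 5% to error
    [0.0, 0.0, 1.0, 0.0],    # done stays done
    [0.0, 0.0, 0.0, 1.0],    # error stays error
]

dist = [1.0, 0.0, 0.0, 0.0]          # start in the input state
for _ in range(50):                  # iterate until the distribution settles
    dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

print(f"estimated total error rate: {dist[3]:.3f}")   # ~0.145 for these numbers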
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
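A sketch of that minimum-"distance" selection (the feature vectors and part names are invented): the observed features are compared against each candidate part and the nearest one is chosen.

import math

# Invented feature list: candidate system parts with (size, rate, error-count) features.
candidates = {
    "sensor-entity": (1.0, 5.0, 0.1),
    "routing-service": (3.0, 50.0, 0.5),
    "report-service": (2.0, 10.0, 0.2),
}

def nearest(observed, parts):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(parts, key=lambda name: dist(observed, parts[name]))

print(nearest((1.9, 11.0, 0.25), candidates))   # -> report-service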
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
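A short sketch of backing those probabilities with collected statistics (the counts are invented): the relative usage of each element class is estimated from the counts recorded against the table above.

# Invented usage counts collected by the system for each element class.
counts = {"Entities": 420, "Services": 130, "Standards": 25,
          "Techniques": 60, "Communications": 365}

total = sum(counts.values())
usage = {name: n / total for name, n in counts.items()}

for name, p in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.2%}")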
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services, the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
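A sketch of such a fit (the timings are invented): a power-law learning curve t = a·n^b is fitted by least squares on log-transformed data and then used to extrapolate.

import math

# Invented measurements: task number vs. completion time (minutes).
n = [1, 2, 4, 8, 16]
t = [10.0, 8.1, 6.6, 5.3, 4.3]

# Least-squares fit of log(t) = log(a) + b*log(n).
xs = [math.log(v) for v in n]
ys = [math.log(v) for v in t]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

print(f"t ~= {a:.2f} * n^{b:.3f}")
print(f"predicted time for task 32: {a * 32 ** b:.1f} minutes")   # extrapolation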
Configuration management identifies item attributes for control, recording and reporting
against the baselines for audits at delivery or on completion of changes, to validate
requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and dynamic
tests, and measures and profiles performance to ensure that the environment is valid. The
trigger points are before and after an update and at release to the production system,
when triggers force commits to the repository or a rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity and
features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data
values which are not affected by position, e.g. the general description of a hardware
type. The other is dependent on position, e.g. a hardware unit in the network. The data is
held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data are other technologies that assist it. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media
technologies give us the rules for formalised standards and techniques for defining
the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilation, macros and package
libraries assist the reuse of previous work.
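A compact sketch of those concepts (the classes are invented): scope via self (the "this" operator), properties, methods with arguments, a meaning that changes with context (overloading), and generalisation and specification through subclassing.

class Entity:                                   # generalisation: common behaviour
    def __init__(self, name, **properties):
        self.name = name                        # "this"/self gives the scope of meaning
        self.properties = properties            # properties as name/value pairs

    def describe(self, verbose=False):          # method with arguments
        if verbose:
            return f"{self.name}: {self.properties}"
        return self.name

class Sensor(Entity):                           # specification of the general entity
    def describe(self, verbose=False):          # overloading: meaning changes with context
        return "sensor " + super().describe(verbose)

for e in (Entity("gateway", port=1883), Sensor("thermometer", unit="C")):   # replicated action
    print(e.describe(verbose=True))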
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to other
techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input), intermediate and target (output) languages. It
also gives priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is referred
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the
system is extended. It is queried frequently, for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.6 Communications
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the
probability of error at different parts of the network, the size of the error at
each point of the Markov process and, from the error node, an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services, the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
against the baselines for audits at delivery or on completion of changes, to validate
requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and dynamic
tests, and measures and profiles performance to ensure that the environment is valid. The
trigger points are before and after an update and at release to the production system,
when triggers force commits to the repository or a rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity and
features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data
values which are not affected by position, e.g. the general description of a hardware
type. The other is dependent on position, e.g. a hardware unit in the network. The data is
held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data are other technologies that assist it. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media
technologies give us the rules for formalised standards and techniques for defining
the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilation, macros and package
libraries assist the reuse of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to other
techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input), intermediate and target (output) languages. It
also gives priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is referred
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the
system is extended. It is queried frequently, for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.7 Antivirus
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the
probability of error at different parts of the network, the size of the error at
each point of the Markov process and, from the error node, an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic, and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element: Number of System Elements
Entities: Number of Entities in the System
Services: Number of Services in the System
Standards: Number of Standards in the System
Techniques: Number of Techniques in the System
Communications: Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services, the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
against the baselines for audits at delivery or on completion of changes, to validate
requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and dynamic
tests, and measures and profiles performance to ensure that the environment is valid. The
trigger points are before and after an update and at release to the production system,
when triggers force commits to the repository or a rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity and
features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data
values which are not affected by position, e.g. the general description of a hardware
type. The other is dependent on position, e.g. a hardware unit in the network. The data is
held as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data are other technologies that assist it. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media
technologies give us the rules for formalised standards and techniques for defining
the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilation, macros and package
libraries assist the reuse of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to other
techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. The communications data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input), intermediate and target (output) languages. It
also gives priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is referred
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the
system is extended. It is queried frequently, for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.8 Firewall
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the
probability of error at different parts of the network, the size of the error at
each point of the Markov process and, from the error node, an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
● Entities: Number of Entities in the System
● Services: Number of Services in the System
● Standards: Number of Standards in the System
● Techniques: Number of Techniques in the System
● Communications: Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
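A minimal sketch of such a fit, assuming invented session/error statistics and an ordinary least-squares polynomial; it interpolates one value and extrapolates another.

# Illustrative curve fit for a learning scheme: sessions of experience vs. error
# rate, interpolated and extrapolated with a low-order polynomial. Data invented.
import numpy as np

sessions = np.array([1, 2, 3, 4, 5, 6], dtype=float)
errors   = np.array([30, 22, 17, 13, 11, 9], dtype=float)   # observed statistics

coeffs = np.polyfit(sessions, errors, deg=2)      # least-squares fit
model = np.poly1d(coeffs)

print("interpolated error at session 3.5:", round(model(3.5), 1))
print("extrapolated error at session 8  :", round(model(8.0), 1))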
Configuration management identifies item attributes for control, recording and reporting
against the baselines, for audits at delivery or at completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and
dynamic analysis, and measures and profiles performance to ensure that the
environment is valid. The trigger points are before and after an update and at release
to the production system, when triggers force commits to the repository or a rollback
to avoid corruption of the system. Reports are collected on metrics such as code
coverage, code complexity and feature completeness, concentrating on functionality,
code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating
all the processes from source control through to production.
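The trigger logic can be sketched as follows; the test-runner invocation and the git commands are assumptions about a typical setup, not a prescribed pipeline.

# Sketch of the trigger logic described above: run the automated checks at a
# trigger point and either commit to the repository or roll back.
import subprocess

def checks_pass() -> bool:
    """Run the automated unit/integration tests; non-zero exit means failure."""
    return subprocess.run(["python", "-m", "pytest", "-q"]).returncode == 0

def on_update_trigger(message: str) -> None:
    if checks_pass():
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
        print("validated update committed")
    else:
        subprocess.run(["git", "checkout", "--", "."], check=True)  # roll back working tree
        print("checks failed: update rolled back to avoid corrupting the system")

if __name__ == "__main__":
    on_update_trigger("automated CI commit after validation")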
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data is represented as discrete objects (vector) or continuous fields (raster). It
enables entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes and
methods.
Virtual reality simulates the user's presence in an environment and interaction through
sight, touch, hearing and smell. Input is made through standard computer input, sight
tracking or tactile information. Other technologies, such as remote communication,
artificial intelligence and spatial data, assist the technology. In IoT we use the
technology to control all hardware and routing entities and perform remedial action
when this is required.
Programming language theory and media technologies give us the rules for formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
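A minimal sketch of one entity record with these fields, assuming Python types purely for illustration; the authoritative scheme is the XML definition referenced in section 8.

# One entity record with the fields listed above. Field names and types are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str
    iteration_control: int = 1
    entity_type: str = "generic"
    sound_id: Optional[str] = None          # identity for sound
    picture_id: Optional[str] = None        # identity for picture
    hardware_representation: Optional[str] = None
    meaning: str = ""
    version: str = "1.0"
    timestamp: str = ""                     # e.g. an ISO 8601 string
    geographic_position: tuple = (0.0, 0.0)
    properties: dict = field(default_factory=dict)   # name -> value
    statistics: dict = field(default_factory=dict)
    nested: list = field(default_factory=list)       # nested entities

sensor = Entity(name="temperature-sensor", meaning="reads ambient temperature",
                properties={"unit": "celsius"})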
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language. It
also gives priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the
communicating entities must speak the same language throughout. Entities consist of user
applications, items of hardware or the messages passing between source and destination.
Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
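The layering can be sketched as repeated encapsulation, with each layer prepending its own control information; the layer names and header format below are illustrative.

# Sketch of the layering idea: at each layer a protocol adds its own control
# information to the user data.
def encapsulate(user_data: bytes) -> bytes:
    layers = ["application", "transport", "network", "link"]
    pdu = user_data
    for layer in layers:
        header = f"[{layer}-hdr]".encode()
        pdu = header + pdu                  # control information added per layer
    return pdu

def decapsulate(pdu: bytes) -> bytes:
    for layer in ["link", "network", "transport", "application"]:
        header = f"[{layer}-hdr]".encode()
        assert pdu.startswith(header), f"missing {layer} header"
        pdu = pdu[len(header):]             # peel off one layer of control info
    return pdu

frame = encapsulate(b"temperature=21.5")
assert decapsulate(frame) == b"temperature=21.5"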
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended.
It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.9 APIDS
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
● Entities: Number of Entities in the System
● Services: Number of Services in the System
● Standards: Number of Standards in the System
● Techniques: Number of Techniques in the System
● Communications: Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
against the baselines, for audits at delivery or at completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and
dynamic analysis, and measures and profiles performance to ensure that the
environment is valid. The trigger points are before and after an update and at release
to the production system, when triggers force commits to the repository or a rollback
to avoid corruption of the system. Reports are collected on metrics such as code
coverage, code complexity and feature completeness, concentrating on functionality,
code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data is represented as discrete objects (vector) or continuous fields (raster). It
enables entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes and
methods.
Virtual reality simulates the user's presence in an environment and interaction through
sight, touch, hearing and smell. Input is made through standard computer input, sight
tracking or tactile information. Other technologies, such as remote communication,
artificial intelligence and spatial data, assist the technology. In IoT we use the
technology to control all hardware and routing entities and perform remedial action
when this is required.
Programming language theory and media technologies give us the rules for formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language. It
also gives priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the
communicating entities must speak the same language throughout. Entities consist of user
applications, items of hardware or the messages passing between source and destination.
Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended.
It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.10 Ciphers
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
● Entities: Number of Entities in the System
● Services: Number of Services in the System
● Standards: Number of Standards in the System
● Techniques: Number of Techniques in the System
● Communications: Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
against the baselines, for audits at delivery or at completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the generated system and documentation, runs
automated unit and integration (defect or regression) tests together with static and
dynamic analysis, and measures and profiles performance to ensure that the
environment is valid. The trigger points are before and after an update and at release
to the production system, when triggers force commits to the repository or a rollback
to avoid corruption of the system. Reports are collected on metrics such as code
coverage, code complexity and feature completeness, concentrating on functionality,
code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data is represented as discrete objects (vector) or continuous fields (raster). It
enables entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes and
methods.
Virtual reality simulates the user's presence in an environment and interaction through
sight, touch, hearing and smell. Input is made through standard computer input, sight
tracking or tactile information. Other technologies, such as remote communication,
artificial intelligence and spatial data, assist the technology. In IoT we use the
technology to control all hardware and routing entities and perform remedial action
when this is required.
Programming language theory and media technologies give us the rules for formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language. It
also gives priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the
communicating entities must speak the same language throughout. Entities consist of user
applications, items of hardware or the messages passing between source and destination.
Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended.
It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.22 Implementation
8.22.1 General Commentary
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.2 Entity
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.3 Services
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
8.22.4 Standards
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.5 Techniques
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.6 Communications
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.7 Antivirus
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.8 Firewall
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.9 APIDS
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
8.22.10 Ciphers
The implementation stage of the language studies reflects programming language theory,
learning theory and statistics theory.
The language definition set above is created once when the language is added to the
system and changed and removed infrequently as the language technique set is
extended. It is queried frequently for every technique rule that is read. The language
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
The logical database structure must follow the object oriented type with the XML tags:
Summary of probabilities
Event        Probability
A            A
not A        ¬A
A or B       A ∨ B
A and B      A ∧ B
A given B    A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical AND operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical OR operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units, we use the
dependent (conditional) forms of the AND and OR operations. The structures of a system
imply a network form and we can use the techniques described in the part on network
structures.
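A worked example of these forms, assuming independence for the AND/OR cases and the usual definition for the dependent (conditional) case; the numbers are invented.

# Worked example of the probability forms in the table above.
p_a, p_b = 0.9, 0.8                 # probability each unit is selected correctly
p_a_and_b = p_a * p_b               # both correct (independent events)
p_a_or_b  = p_a + p_b - p_a_and_b   # at least one alternative correct
p_ab = 0.75                         # P(A and B) when the events are dependent
p_a_given_b = p_ab / p_b            # conditional form used for dependent units

print(p_a_and_b, p_a_or_b, round(p_a_given_b, 3))   # 0.72 0.98 0.938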
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.2.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language, and media
technologies give us a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
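A short sketch of these object-oriented ideas, using Python's self where the text says "this"; the class names are illustrative only.

# Generalisation, specification, scope, "this"/self and overloading in miniature.
class Entity:                      # generalisation: the common concept
    def __init__(self, name: str):
        self.name = name           # self scopes the property to the object

    def describe(self) -> str:     # method operating on the current object
        return f"entity {self.name}"

class Sensor(Entity):              # specification: a more specific kind of Entity
    def describe(self) -> str:     # overriding lets meaning change with context
        return f"sensor {self.name}"

    def __add__(self, other):      # overloading an operator for a new context
        return Entity(f"{self.name}+{other.name}")

print(Entity("node").describe())          # entity node
print(Sensor("temp").describe())          # sensor temp
print((Sensor("a") + Sensor("b")).name)   # a+b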
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
9.2.3 Analysis
9.2.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.2.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
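A toy sketch of this algebra: a set of basic elements, an invented rule set saying which combinations are allowed, and a validation step that reports erroneous combinations.

# Basic elements, a combination rule and a validation rule. All values illustrative.
BASIC = {"entity", "service", "standard", "technique", "communication"}

# A toy rule set: which pairs may be combined into a larger element.
ALLOWED = {("entity", "service"), ("service", "standard"), ("technique", "standard")}

def combine(a: str, b: str) -> str:
    if (a, b) not in ALLOWED:
        raise ValueError(f"erroneous combination: {a} + {b}")
    return f"{a}->{b}"            # the larger, classified element

print(combine("entity", "service"))      # valid, yields a larger element
try:
    combine("entity", "communication")   # not allowed by the rule set
except ValueError as err:
    print(err)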
9.2.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.2.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-
processor directives are only weakly related to the programming language. The pre-
processor is often used to include other files. It replaces the directive line with the text
of the file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets code which
can be expanded when it is required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
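To make the front-end phases concrete, the following minimal Python sketch (the token names and the toy additive grammar are our own illustration, not taken from any particular compiler) lexes and parses simple expressions and emits a small tree-shaped intermediate representation.

import re

# Lexing: split the source text into tokens (the first front-end phase).
TOKEN_SPEC = [("NUM", r"\d+"), ("ID", r"[A-Za-z_]\w*"), ("PLUS", r"\+"), ("WS", r"\s+")]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def lex(source):
    """Return a list of (kind, text) tokens, reporting any unexpected character."""
    tokens, pos = [], 0
    while pos < len(source):
        match = TOKEN_RE.match(source, pos)
        if not match:
            raise SyntaxError(f"unexpected character at {pos}: {source[pos]!r}")
        if match.lastgroup != "WS":
            tokens.append((match.lastgroup, match.group()))
        pos = match.end()
    return tokens

def parse(tokens):
    """Recursive-descent parsing of 'operand (+ operand)*' into a tree-shaped IR."""
    def operand(i):
        kind, text = tokens[i]
        if kind in ("NUM", "ID"):
            return (kind.lower(), text), i + 1
        raise SyntaxError(f"expected operand, got {text!r}")
    node, i = operand(0)
    while i < len(tokens) and tokens[i][0] == "PLUS":
        rhs, i = operand(i + 1)
        node = ("add", node, rhs)          # intermediate representation node
    return node

print(parse(lex("a + 12 + b")))            # ('add', ('add', ('id', 'a'), ('num', '12')), ('id', 'b'))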
If an element or function is not found then the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.2.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
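To make the definition, querying, update and administration roles concrete, the following minimal sketch uses Python's standard sqlite3 module; it illustrates the same division of labour with a relational engine rather than the post-relational systems named above, and the table and column names are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")          # illustrative in-memory IoT store
cur = conn.cursor()

# Definition (data definition language): create the organisation of the data.
cur.execute("CREATE TABLE readings (device_id TEXT, ts INTEGER, value REAL)")

# Update (data manipulation language): insert, modify and delete actual data.
cur.execute("INSERT INTO readings VALUES (?, ?, ?)", ("sensor-1", 1700000000, 21.5))
cur.execute("UPDATE readings SET value = ? WHERE device_id = ?", (22.0, "sensor-1"))

# Querying (query language): retrieve information usable by the user.
for row in cur.execute("SELECT device_id, value FROM readings"):
    print(row)

# Administration: integrity and recovery are handled here by committing the
# transaction; a full DBMS would also manage users, security and concurrency.
conn.commit()
conn.close()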
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.2.4 Implementation
9.2.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.2.4.2 Learning Theory
9.2.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
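As an illustration, the short Python sketch below evaluates these formulas on an invented training set; we would choose the attribute whose split gives the largest IG.

from math import log2

def entropy(p, n):
    """I(p/(p+n), n/(p+n)) for p positive and n negative examples."""
    total = p + n
    terms = []
    for c in (p, n):
        if c:                        # 0 * log2(0) is taken as 0
            q = c / total
            terms.append(-q * log2(q))
    return sum(terms)

def information_gain(p, n, subsets):
    """IG(A) = I(p, n) - remainder(A); subsets is [(pi, ni), ...] for each value of A."""
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni) for pi, ni in subsets)
    return entropy(p, n) - remainder

# Example: 6 positive and 6 negative examples split by a hypothetical attribute
# into three value groups; the attribute with the largest IG would be chosen.
print(information_gain(6, 6, [(4, 0), (1, 4), (1, 2)]))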
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions that are a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises its probability given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data for each hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable, MAP learning balances complexity with accuracy on the training data, and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, although modern optimization techniques help (a worked sketch follows this list).
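As a worked instance of these four steps, the sketch below fits a single Bernoulli parameter by maximum likelihood; the data are invented, and for this simple model family the zero of the derivative has the closed form k/N, which the coarse numerical search confirms.

from math import log

data = [1, 0, 1, 1, 0, 1, 1, 1]          # invented outcomes acting as the training data

def log_likelihood(theta):
    """Step 2: log L(theta) = k log(theta) + (N - k) log(1 - theta)."""
    k, n = sum(data), len(data)
    return k * log(theta) + (n - k) * log(1 - theta)

# Steps 3 and 4: the derivative k/theta - (N - k)/(1 - theta) is zero at theta = k/N;
# here we simply confirm that by a coarse grid search over the parameter.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=log_likelihood)
print(best, sum(data) / len(data))        # both should be close to 0.75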
9.2.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
9.3 Entities
9.3.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for entities.
9.3.2 Theoretical Studies
9.3.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.3.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.3.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of operators
n2 = number of operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
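These relations can be evaluated directly; the sketch below follows the definitions given above (assuming base-2 logarithms and taking S = 18, a commonly quoted Stroud number) for a set of invented counts.

from math import log2

def halstead(n1, n2, N1, N2, stroud=18):
    """Evaluate the relations defined in the text; logs are assumed to be base 2."""
    n = n1 + n2                               # program vocabulary
    N = N1 + N2                               # program length
    N_star = N1 * log2(n1) + N2 * log2(n2)    # N* as defined above
    V = N * log2(n)                           # actual program volume
    V_star = N_star * log2(n)                 # theoretical volume, with n* = n
    L = V_star / V                            # program level, as defined above
    lam = L * V_star                          # programming language level
    m = V / L                                 # number of mental discriminations
    d = m / stroud                            # development time for Stroud number S
    return {"n": n, "N": N, "V": V, "V*": V_star, "L": L, "lambda": lam, "m": m, "d": d}

# Invented counts for a small program fragment.
print(halstead(n1=10, n2=7, N1=33, N2=19))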
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of hitting the target m times for a coverage ratio C.
C = n a s t/z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
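Assuming the Poisson form of P(m,C) reconstructed above, the brief sketch below tabulates P(m,m) and mP(m,m), the quantities referred to in the last sentence.

from math import exp, factorial

def p_hit(m, C):
    """P(m, C) = C**m * exp(-C) / m! -- probability of hitting the target m times
    for coverage ratio C (as reconstructed above)."""
    return C ** m * exp(-C) / factorial(m)

# Tabulate P(m, m) and m * P(m, m) for a few values of m.
for m in range(1, 6):
    p = p_hit(m, m)
    print(m, round(p, 4), round(m * p, 4))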
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
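A brief numerical sketch of these combinations, assuming independence for the simple and/or forms and a supplied conditional probability for the dependent form; the probability values themselves are invented.

# Probability of selecting the correct unit at each of two steps.
p_a, p_b = 0.9, 0.8

p_and = p_a * p_b                     # both selections correct (independent "and")
p_or = p_a + p_b - p_a * p_b          # at least one alternative correct ("or")

# Dependent form: P(A and B) = P(A) * P(B | A) when choosing A forces specific
# further units; the conditional value below is purely illustrative.
p_b_given_a = 0.95
p_and_dependent = p_a * p_b_given_a

print(p_and, p_or, p_and_dependent)   # 0.72, 0.98, 0.855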
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.3.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language. From media
technologies we find a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. Programming
language theory gives us the rules for formalised standard and technique for the
definition of a programming language in terms of a formal language and likewise for
media. We discover we need to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
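A compact Python sketch of these object oriented concepts, showing scope, properties, methods with arguments, self (Python's counterpart of the "this" operator), generalisation and specification through inheritance, and overloading of meaning by redefining a method; the class and attribute names are invented.

class Entity:
    """Generalisation: common properties and methods for any IoT entity."""
    def __init__(self, name):
        self.name = name              # 'self' plays the role of the "this" operator

    def describe(self, verbose=False):
        return f"entity {self.name}" + (" (verbose)" if verbose else "")

class Sensor(Entity):
    """Specification: a more specific kind of entity."""
    def __init__(self, name, unit):
        super().__init__(name)
        self.unit = unit

    def describe(self, verbose=False):        # meaning overloaded in this scope
        base = super().describe(verbose)
        return f"{base} measuring in {self.unit}"

print(Entity("gateway").describe())
print(Sensor("thermometer", "°C").describe(verbose=True))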
If an object, property or method is not found then the error is reported as a stack dump and, after review, the language structure is adjusted.
9.3.3 Analysis
9.3.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.3.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
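As a minimal sketch of this idea (the element names and the single rule of combination are invented for illustration), basic elements drawn from the allowed sets are combined, and each combination is classified as correct or erroneous against the rule.

# Allowed basic elements, restricted by specifying what is permitted.
ENTITIES = {"sensor", "gateway"}
SERVICES = {"report", "update"}

# Rule of combination: which (entity, service) pairs form a correct subsystem.
ALLOWED = {("sensor", "report"), ("gateway", "update"), ("gateway", "report")}

def combine(entity, service):
    """Form a larger element and classify it as correct or erroneous."""
    if entity not in ENTITIES or service not in SERVICES:
        raise LookupError(f"element not found: {entity!r}, {service!r}")
    status = "correct" if (entity, service) in ALLOWED else "erroneous"
    return {"subsystem": (entity, service), "status": status}

print(combine("sensor", "report"))    # correct
print(combine("sensor", "update"))    # erroneous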
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.3.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.3.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by the definition of named blocks of code which can be expanded wherever they are required in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.3.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.3.4 Implementation
9.3.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.3.4.2 Learning Theory
9.3.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions that are a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises its probability given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data for each hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable, MAP learning balances complexity with accuracy on the training data, and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, although modern optimization techniques help.
9.3.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
9.4 Services
9.4.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for services.
9.4.2 Theoretical Studies
9.4.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.4.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.4.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of operators
n2 = number of operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of hitting the target m times for a coverage ratio C.
C = n a s t/z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A ∨ B
A and B A ∧ B
A given B A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.4.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique for the definition of a programming language in terms of a formal language, and from media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump and, after review, the language structure is adjusted.
9.4.3 Analysis
9.4.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.4.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.4.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.4.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by the definition of named blocks of code which can be expanded wherever they are required in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
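Complementing the earlier front-end sketch, the short example below (hypothetical names throughout) flattens a nested expression tree into a three-address style intermediate representation of the kind a middle end could optimise before code generation.

import itertools

def to_three_address(tree):
    """Flatten a nested ('add', lhs, rhs) tree into three-address instructions."""
    code, temps = [], itertools.count()

    def walk(node):
        if node[0] in ("num", "id"):
            return node[1]                   # leaves are used directly
        op, lhs, rhs = node
        a, b = walk(lhs), walk(rhs)
        target = f"t{next(temps)}"
        code.append((op, a, b, target))      # e.g. ('add', 'a', '12', 't0')
        return target

    return code, walk(tree)

tree = ("add", ("add", ("id", "a"), ("num", "12")), ("id", "b"))
code, result = to_three_address(tree)
for instr in code:
    print(instr)
print("result in", result)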
If an element or function is not found then the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.4.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
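To contrast with the earlier relational sketch, the following is a minimal in-memory document-oriented store; it is a stand-in for the key-value and document databases mentioned above, not the API of any named product, and supports insert, query by field and update.

import itertools

class DocumentStore:
    """A tiny document-oriented store: documents are plain dicts keyed by id."""
    def __init__(self):
        self._docs = {}
        self._ids = itertools.count(1)

    def insert(self, document):
        doc_id = next(self._ids)
        self._docs[doc_id] = dict(document)
        return doc_id

    def find(self, **criteria):
        """Return documents whose fields match all of the given criteria."""
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]

    def update(self, doc_id, **changes):
        if doc_id not in self._docs:
            raise KeyError(f"document {doc_id} not found")
        self._docs[doc_id].update(changes)

store = DocumentStore()
i = store.insert({"device": "sensor-1", "type": "temperature", "value": 21.5})
store.update(i, value=22.0)
print(store.find(type="temperature"))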
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.4.4 Implementation
9.4.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.4.4.2 Learning Theory
9.4.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions that are a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises its probability given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data for each hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable, MAP learning balances complexity with accuracy on the training data, and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, although modern optimization techniques help.
9.4.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
9.5 Standards
9.5.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for standards.
9.5.2 Theoretical Studies
9.5.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.5.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.5.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of operators
n2 = number of operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of hitting the target m times for a coverage ratio C.
C = n a s t/z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Summary of probabilities
Event Probability
A A
not A ¬A
A or B A ∨ B
A and B A ∧ B
A given B A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.5.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique for the definition of a programming language in terms of a formal language, and from media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump and, after review, the language structure is adjusted.
9.5.3 Analysis
9.5.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.5.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.5.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.5.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by the definition of named blocks of code which can be expanded wherever they are required in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found, the error is reported as a stack dump and, after review, the processing structure is adjusted.
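As an illustrative sketch of the lexical-analysis phase described above, the following Python fragment tokenises a toy language with regular expressions; the token set and the lex function are assumptions made for this example, not the front end of any particular compiler.

# Minimal sketch of the lexical-analysis phase of a compiler front end.
# The toy token set is an illustrative assumption only.

import re

TOKEN_SPEC = [
    ("NUMBER",  r"\d+"),          # integer literals
    ("IDENT",   r"[A-Za-z_]\w*"), # identifiers
    ("OP",      r"[+\-*/=]"),     # operators
    ("SKIP",    r"\s+"),          # whitespace (discarded)
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(source: str):
    """Turn source text into a list of (token_kind, lexeme) pairs."""
    tokens, pos = [], 0
    while pos < len(source):
        match = MASTER.match(source, pos)
        if not match:
            # Unknown character: report an error, as the front end would.
            raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
        if match.lastgroup != "SKIP":
            tokens.append((match.lastgroup, match.group()))
        pos = match.end()
    return tokens

print(lex("rate = base + 42"))
# [('IDENT', 'rate'), ('OP', '='), ('IDENT', 'base'), ('OP', '+'), ('NUMBER', '42')]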
9.5.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
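A minimal sketch of the definition, query and update operations described above is given below as a toy in-memory document store in Python; the DocumentStore class and its methods are hypothetical and merely stand in for a real database management system such as a NoSQL engine.

# Minimal sketch of a document-oriented store with key-value access,
# illustrating the define / query / update operations described above.
# This is a toy in-memory structure, not a real database engine.

class DocumentStore:
    def __init__(self):
        self.collections = {}            # collection name -> {key: document}

    def define(self, collection: str):
        """Data definition: create a collection if it does not exist."""
        self.collections.setdefault(collection, {})

    def upsert(self, collection: str, key: str, document: dict):
        """Data manipulation: insert or modify a document."""
        self.collections[collection][key] = document

    def query(self, collection: str, **criteria):
        """Query language stand-in: return documents matching all criteria."""
        return [doc for doc in self.collections.get(collection, {}).values()
                if all(doc.get(field) == value for field, value in criteria.items())]

    def delete(self, collection: str, key: str):
        """Data manipulation: remove a document by key."""
        self.collections[collection].pop(key, None)

store = DocumentStore()
store.define("readings")
store.upsert("readings", "r1", {"device": "sensor-1", "type": "temperature", "value": 21.5})
store.upsert("readings", "r2", {"device": "sensor-2", "type": "humidity", "value": 40})
print(store.query("readings", type="temperature"))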
9.5.4 Implementation
9.5.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.5.4.2 Learning Theory
9.5.4.2.1 General Methods
Learning is concerned with finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ i=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ i=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is then:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
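The calculation can be illustrated with a short Python sketch implementing the entropy and information-gain formulas above; the example training set and the function names are hypothetical.

# Minimal sketch of the entropy and information-gain calculation above.
# The example training set split is hypothetical.

import math

def entropy(p: int, n: int) -> float:
    """I(p/(p+n), n/(p+n)) in bits; 0 when one class is empty."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:
            q = count / total
            result -= q * math.log2(q)
    return result

def information_gain(p, n, subsets):
    """IG(A) = I(p, n) - remainder(A) for an attribute A splitting (p, n)
    into the given (p_i, n_i) subsets."""
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni) for pi, ni in subsets)
    return entropy(p, n) - remainder

# Hypothetical attribute with three values splitting 6 positive / 6 negative examples.
print(information_gain(6, 6, [(4, 0), (2, 2), (0, 4)]))   # about 0.67 bits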
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques help.
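These four steps can be illustrated, under simplifying assumptions, with a maximum-likelihood estimate for a simple Bernoulli model in Python; the data values are hypothetical, and the closed-form solution theta = k/N follows from setting the derivative of the log likelihood to zero.

# Minimal sketch of the four maximum-likelihood steps for a simple
# Bernoulli model (hypothetical binary outcome data).

import math

data = [1, 1, 0, 1, 0, 1, 1, 1]      # step 1: model family P(x = 1) = theta

def log_likelihood(theta: float, xs) -> float:
    """Step 2: log L(theta) = sum_i x_i log theta + (1 - x_i) log(1 - theta)."""
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta) for x in xs)

# Steps 3 and 4: d/dtheta log L = k/theta - (N - k)/(1 - theta) = 0  =>  theta = k / N,
# where k is the number of positive outcomes and N the number of examples.
theta_ml = sum(data) / len(data)

print(theta_ml)                       # 0.75
print(log_likelihood(theta_ml, data)) # maximised log likelihood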
9.5.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position and, after review, the data and processing structures are adjusted.
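A small Python sketch, with hypothetical probability values, illustrates the independent and dependent forms of the 'and' and 'or' combinations discussed above.

# Minimal sketch of combining event probabilities with 'and' and 'or'.
# The probabilities are hypothetical illustrative values.

p_a = 0.9          # P(A): a unit is selected correctly
p_b = 0.8          # P(B): the next unit is selected correctly
p_b_given_a = 0.95 # P(B|A): dependent form, when A forces the use of B

# Independent 'and' / 'or'
p_and_indep = p_a * p_b                     # P(A and B) = P(A) P(B)
p_or_indep  = p_a + p_b - p_a * p_b         # P(A or B) = P(A) + P(B) - P(A and B)

# Dependent 'and' using a conditional probability
p_and_dep = p_a * p_b_given_a               # P(A and B) = P(A) P(B|A)

print(p_and_indep, p_or_indep, p_and_dep)   # 0.72 0.98 0.855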
9.6.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique for the definition of a programming language in terms of a formal language, and we find a similar kind of definition in media technologies. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found, the error is reported as a stack dump and, after review, the language structure is adjusted.
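These object-oriented concepts can be illustrated with a short Python sketch; the Device and Sensor classes are hypothetical examples of generalisation, specification and overloading of meaning by context.

# Minimal sketch of the object-oriented concepts mentioned above:
# scope, properties, methods with arguments, generalisation/specification
# (inheritance) and overloading of meaning by context (method overriding).
# The class names are hypothetical.

class Device:                                  # generalisation
    def __init__(self, name: str):
        self.name = name                       # property, bound to 'self' (Python's "this")

    def describe(self) -> str:                 # method
        return f"device {self.name}"

class Sensor(Device):                          # specification of the general concept
    def __init__(self, name: str, unit: str):
        super().__init__(name)
        self.unit = unit

    def describe(self) -> str:                 # overloaded meaning in a new context
        return f"sensor {self.name} reporting in {self.unit}"

for item in (Device("gateway"), Sensor("thermometer", "celsius")):
    print(item.describe())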
9.6.3 Analysis
9.6.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.6.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.6.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.6.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansion, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language itself. The pre-processor is often used to include other files, replacing the directive line with the text of the included file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase grammars simplify processing; context sensitivity is handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found, the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.6.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.6.4 Implementation
9.6.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.6.4.2 Learning Theory
9.6.4.2.1 General Methods
Learning is concerned with finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ i=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ i=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is then:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques help.
9.6.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
9.7 Communications
9.7.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories and technologies that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and implementation for communications.
9.7.2 Theoretical Studies
9.7.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.7.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.7.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 =total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n= program vocabulary
N= program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = n1 log n1 + n2 log n2
If V= actual program volume
V*= theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V= program level
λ = LV*= programming language level
S= Stroud Number then
m = V/L= number of mental discriminations
d = m/S=development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n) / (1000 n2)
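A short Python sketch, using hypothetical operator and operand counts and an assumed Stroud number of 18 discriminations per second, illustrates how the quantities above are computed; logarithms are taken to base 2 as an assumption for the example.

# Minimal sketch of the Halstead-style quantities defined above.
# Operator/operand counts and the Stroud number are hypothetical assumptions.

import math

n1, n2 = 12, 20          # distinct operators / operands
N1, N2 = 80, 110         # total occurrences of operators / operands
S = 18                   # Stroud number (assumed)

n = n1 + n2                                      # program vocabulary
N = N1 + N2                                      # program length
N_star = n1 * math.log2(n1) + n2 * math.log2(n2) # estimated length N*
V = N * math.log2(n)                             # actual program volume
V_star = N_star * math.log2(n)                   # theoretical volume (using n* = n)
L = V_star / V                                   # program level
lam = L * V_star                                 # programming language level
m = V / L                                        # number of mental discriminations
d = m / S                                        # development time (seconds)

print(f"n={n} N={N} V={V:.1f} L={L:.3f} m={m:.0f} d={d:.1f}s")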
The mean free path theorem derives the relations:
P(m,C) = (C^m / m!) e^(−C) = probability of hitting the target m times for a coverage ratio C.
C = n a s t / z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
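A minimal Python sketch, with hypothetical parameter values, evaluates the coverage ratio C and the hit probability P(m, C) defined above.

# Minimal sketch of the coverage-ratio hit probability P(m, C) = (C^m / m!) e^(-C)
# with C = n a s t / z, as defined above.  The parameter values are hypothetical.

import math

def coverage_ratio(n: int, a: float, s: float, t: float, z: float) -> float:
    """C = n a s t / z: area swept by the search divided by the search area."""
    return n * a * s * t / z

def hit_probability(m: int, C: float) -> float:
    """Poisson-style probability of hitting the target exactly m times."""
    return C ** m * math.exp(-C) / math.factorial(m)

C = coverage_ratio(n=4, a=2.0, s=1.5, t=10.0, z=200.0)   # C = 0.6
print(C, hit_probability(0, C), hit_probability(1, C), hit_probability(2, C))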
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position and, after review, the data and processing structures are adjusted.
9.7.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique for the definition of a programming language in terms of a formal language, and we find a similar kind of definition in media technologies. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found, the error is reported as a stack dump and, after review, the language structure is adjusted.
9.7.3 Analysis
9.7.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.7.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.7.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.7.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansion, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language itself. The pre-processor is often used to include other files, replacing the directive line with the text of the included file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase grammars simplify processing; context sensitivity is handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found, the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.7.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.7.4 Implementation
9.7.4.1 Introduction
The implementation stage of communications studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.7.4.2 Learning Theory
9.7.4.2.1 General Methods
Learning is concerned with finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ i=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ i=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is then:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques help.
9.7.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
9.8 Antivirus
9.8.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories and technologies that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and implementation for antivirus.
9.8.2 Theoretical Studies
9.8.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.8.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.8.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 =total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n= program vocabulary
N= program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = n1 log n1 + n2 log n2
If V= actual program volume
V*= theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V= program level
λ = LV*= programming language level
S= Stroud Number then
m = V/L= number of mental discriminations
d = m/S=development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n) / (1000 n2)
The mean free path theorem derives the relations:
P(m,C) = (C^m / m!) e^(−C) = probability of hitting the target m times for a coverage ratio C.
C = n a s t / z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position and, after review, the data and processing structures are adjusted.
9.8.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique for the definition of a programming language in terms of a formal language, and we find a similar kind of definition in media technologies. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found, the error is reported as a stack dump and, after review, the language structure is adjusted.
9.8.3 Analysis
9.8.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.8.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.8.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system using standards and techniques. Techniques give meaning to entities, services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found, the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.8.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansion, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language itself. The pre-processor is often used to include other files, replacing the directive line with the text of the included file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase grammars simplify processing; context sensitivity is handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found, the error is reported as a stack dump and, after review, the processing structure is adjusted.
9.8.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.8.4 Implementation
9.8.4.1 Introduction
The implementation stage of antivirus studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.8.4.2 Learning Theory
9.8.4.2.1 General Methods
Learning is concerned with finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ i=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ i=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is then:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be intractable. This can be overcome with
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques help.
9.8.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position and, after review, the data and processing structures are adjusted.
9.9.2.8 Programming Language Theory
Programming language theory gives us the rules, formalised as standards and techniques, for the definition of a programming language in terms of a formal language, and from media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover that we need to set a priority on the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
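A small Python sketch, with illustrative class and member names, of the object-oriented concepts listed above (scope, properties, methods with arguments, context-dependent overloading) and of reporting a missing member as a stack dump for review:

class Entity:
    """Illustrative entity with a property, a method with arguments and context-dependent meaning."""

    def __init__(self, name, scope="system"):
        self.name = name          # property
        self.scope = scope        # scope of meaning

    def describe(self, detail=False):
        # "Overloading" via a default argument: meaning changes according to context.
        return f"{self.name} in {self.scope}" + (" (detailed)" if detail else "")

def lookup(entity, member):
    """Report a missing object/property/method as a (simulated) stack dump for later review."""
    try:
        return getattr(entity, member)
    except AttributeError:
        import traceback
        traceback.print_exc()     # the "stack dump" reviewed before adjusting the language structure
        return None

e = Entity("sensor-01")
print(e.describe(), e.describe(detail=True))
print(lookup(e, "missing_method"))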
9.9.3 Analysis
9.9.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.9.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
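A minimal sketch, under assumed element and rule names, of the iteration described above: basic elements are combined under explicit rules of combination, and a combination not covered by a rule is reported for review:

# Hypothetical basic elements and allowed combination rules.
basic_elements = {"entity", "service", "standard", "technique", "communication"}
combination_rules = {
    ("entity", "service"): "subsystem",
    ("subsystem", "standard"): "system",
}

def combine(a, b):
    """Apply a rule of combination; unknown combinations are errors to be reviewed."""
    result = combination_rules.get((a, b))
    if result is None:
        raise ValueError(f"no rule for combining {a!r} with {b!r}; review and adjust rule structure")
    return result

sub = combine("entity", "service")       # -> "subsystem"
sys_ = combine(sub, "standard")          # iterate to a more complex element
print(sub, sys_)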
9.9.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.9.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the programming language. It provides the ability for the inclusion of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
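A toy front end, not tied to any grammar defined in this document, showing the lexing, parsing and intermediate-representation steps described above for simple 'name = number' assignments:

import re

TOKEN_RE = re.compile(r"\s*(?:(?P<NAME>[A-Za-z_]\w*)|(?P<NUMBER>\d+)|(?P<EQ>=))")

def lex(source):
    """Lexical analysis: turn the source text into (kind, value) tokens."""
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN_RE.match(source, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}")   # front end reports errors
        kind = m.lastgroup
        tokens.append((kind, m.group(kind)))
        pos = m.end()
    return tokens

def parse(tokens):
    """Parsing plus a little semantic analysis: produce a simple intermediate representation."""
    ir = []
    for i in range(0, len(tokens), 3):
        (k1, name), (k2, _), (k3, value) = tokens[i:i + 3]
        if (k1, k2, k3) != ("NAME", "EQ", "NUMBER"):
            raise SyntaxError("expected: NAME = NUMBER")
        ir.append(("store", name, int(value)))                    # IR retains the input information
    return ir

print(parse(lex("x = 1 y = 2")))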
9.9.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, and also on the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
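A minimal in-memory sketch of the definition, update and query operations described above for a document-oriented store; the collection and field names are illustrative assumptions only:

class DocumentStore:
    """Tiny document-oriented store illustrating definition, update and query operations."""

    def __init__(self):
        self.collections = {}                            # definition: organisation of the data

    def define(self, name):
        self.collections.setdefault(name, [])            # data definition language (create)

    def insert(self, name, document):
        self.collections[name].append(dict(document))    # data manipulation language (insert)

    def query(self, name, **criteria):
        # Query language: return documents matching all key/value criteria.
        return [d for d in self.collections[name]
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
store.define("entities")                                          # created once
store.insert("entities", {"id": 1, "type": "sensor", "cipher": "AES"})
store.insert("entities", {"id": 2, "type": "gateway", "cipher": "RSA"})
print(store.query("entities", type="sensor"))                     # queried frequently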
9.9.4 Implementation
9.9.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.9.4.2 Learning Theory
9.9.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p + n), n/(p + n)) = −(p/(p + n)) log2(p/(p + n)) − (n/(p + n)) log2(n/(p + n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v [(pi + ni)/(p + n)] · I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p + n), n/(p + n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability of the hypothesis given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets we can use maximum likelihood (ML) learning, maximising the probability of the training data under each hypothesis, which gives standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques do help.
9.9.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about events, recurring events or choices of event. In the case of sequences of occurrences we have the probability of selecting the correct unit. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting a unit. When we are considering the correctness of the alternatives of units in a service we use the logical 'or' operation. When we come across a situation where one unit for a particular system implies that we will always have to use specific further units, we use the dependent (conditional) forms of the 'and' and 'or' operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.10.2.8 Programming Language Theory
Programming language theory gives us the rules, formalised as standards and techniques, for the definition of a programming language in terms of a formal language, and from media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover that we need to set a priority on the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
9.10.3 Analysis
9.10.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.10.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.10.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.10.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the programming language. It provides the ability for the inclusion of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
9.10.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, and also on the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.10.4 Implementation
9.10.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.10.4.2 Learning Theory
9.10.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p + n), n/(p + n)) = −(p/(p + n)) log2(p/(p + n)) − (n/(p + n)) log2(n/(p + n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v [(pi + ni)/(p + n)] · I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p + n), n/(p + n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability of the hypothesis given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets we can use maximum likelihood (ML) learning, maximising the probability of the training data under each hypothesis, which gives standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques do help.
9.10.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
9.11 Ciphers
9.11.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and implementation for ciphers.
9.11.2 Theoretical Studies
9.11.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.11.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.11.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
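A short sketch of the relations above, using illustrative operator and operand counts; in practice the counts would be taken from a real program, and the relations are applied exactly as stated in the text:

import math

# Hypothetical counts taken from a small program.
n1, n2 = 10, 15        # distinct operators and operands
N1, N2 = 40, 55        # total occurrences of operators and operands

n = n1 + n2                                          # program vocabulary
N = N1 + N2                                          # program length
N_star = N1 * math.log2(n1) + N2 * math.log2(n2)     # N* as defined in the relations above
V = N * math.log2(n)                                 # actual program volume
V_star = N_star * math.log2(n)                       # theoretical program volume (n* = n)

L = V_star / V                                       # program level
lam = L * V_star                                     # programming language level
S = 18                                               # assumed Stroud number
m = V / L                                            # number of mental discriminations
d = m / S                                            # development time

print(round(V, 1), round(L, 3), round(m, 1), round(d, 1))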
The mean free path theorem derives the relations:
P(m,C) = C^m e^(−C) / m! = probability of hitting the target m times for a coverage ratio C.
C = n·a·s·t / z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically we can estimate the number of checks n that we expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
e.g.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
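A small sketch of the P(m,C) relation above, computing the probability of m hits for a coverage ratio built from assumed search parameters:

import math

def coverage_ratio(n, a, s, t, z):
    """C = n*a*s*t / z: area swept by the search relative to the search area."""
    return n * a * s * t / z

def p_hits(m, C):
    """P(m, C) = C**m * exp(-C) / m!: probability of hitting the target m times."""
    return C ** m * math.exp(-C) / math.factorial(m)

# Hypothetical search: 3 passes, range 2, speed 1.5, time 4, over an area of 100.
C = coverage_ratio(n=3, a=2, s=1.5, t=4, z=100)
for m in range(4):
    print(m, round(p_hits(m, C), 4))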
Summary of probabilities
Event         Probability
A             A
not A         ¬A
A or B        A ∨ B
A and B       A ∧ B
A given B     A | B
When we consider the probability of an event in system research we are talking about events, recurring events or choices of event. In the case of sequences of occurrences we have the probability of selecting the correct unit. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting a unit. When we are considering the correctness of the alternatives of units in a service we use the logical 'or' operation. When we come across a situation where one unit for a particular system implies that we will always have to use specific further units, we use the dependent (conditional) forms of the 'and' and 'or' operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.11.2.8 Programming Language Theory
Programming language theory gives us the rules, formalised as standards and techniques, for the definition of a programming language in terms of a formal language, and from media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover that we need to set a priority on the rules for evaluating units and processes. Object-oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and packages/libraries assist the reuse of previous work.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
9.11.3 Analysis
9.11.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.11.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.11.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.11.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the programming language. It provides the ability for the inclusion of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files, replacing the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
9.11.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, and also on the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
event-driven architecture database
deductive database
multi-database
graph database
hypertext hypermedia database
knowledge base
probabilistic database
real-time database
temporal database
Logical data models are:
object model
document model
object-relational database combines the two related structures.
Physical data models are:
Semantic model
XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.11.4 Implementation
9.11.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.11.4.2 Learning Theory
9.11.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p + n), n/(p + n)) = −(p/(p + n)) log2(p/(p + n)) − (n/(p + n)) log2(n/(p + n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v [(pi + ni)/(p + n)] · I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p + n), n/(p + n)) − remainder(A)
Finally we choose the attribute with the largest IG.
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from a likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability of the hypothesis given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits needed to encode the data given the hypothesis plus the bits needed to encode the hypothesis (minimum description length). For large datasets we can use maximum likelihood (ML) learning, maximising the probability of the training data under each hypothesis, which gives standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, but modern optimization techniques do help.
9.11.4.2.2 Theoretical Studies
The training of the users affects the speed and accuracy of the scan and can be defined by the function F1 as
10.11 Summary
IoT security processing is the function of extracting entities, services, standards,
techniques, communications, antivirus, firewall, APIDS and ciphers from a set of
information. The contributions of IoT technology are defined below:
10.11.1 IoT Technology
IoT processing is the function of extracting entities, services, standards, techniques
and communications from a set of information. The contributions of IoT technology are
defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold data dependent on position for position,
monitor, analyse and display for visualization, understanding and intelligence when
combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and
profiling performance to ensure that their environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production.
s) Virtual reality simulates an environment of the user's presence, environment and
interaction for entities.
The entity, service, standard, technique and communications definition set above is
created once when the entity, service, standard, technique or communications is added
to the system and changed and removed infrequently as the service set is extended. It
is queried frequently for every entity, service, standard, technique and communications
rule that is read. The definition set is updated (inserted, modified, and deleted)
infrequently. The administration (maintain users, data security, performance, data
integrity, concurrency and data recovery using utilities - services) of the database will
be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags given in the appendix, as are the escape sequences.
10.11.2 IoT Security Technology
IoT security processing is the function of extracting entities, services, standards,
techniques and communications from a set of information for antivirus, firewall, APIDS
and ciphers. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold data dependent on position for position,
monitor, analyse and display for visualization, understanding and intelligence when
combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and
profiling performance to ensure that their environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production.
s) Virtual reality simulates an environment of the user's presence, environment and
interaction for entities.
The entity, service, standard, technique and communications definition set above for
antivirus, firewall, APIDS and ciphers is created once when the entity, service,
standard, technique or communications for antivirus, firewall, APIDS and ciphers is
added to the system and changed and removed infrequently as the service set is
extended. It is queried frequently for every entity, service, standard, technique and
communications rule that is read. The definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags given in the appendix, as are the escape sequences.
Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process based on standard / technique definitions for each of the systems defined in formal specifications as described above.
12.10.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by a parallel check. If the entity or service is
not recovered, the entity / service stack dump is reviewed to determine whether there is an
error or whether the entity / service should be added to the entity / service set with an
escape sequence. Similarly, standards and techniques can be updated by experience.
12.10.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.10.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position; it is then evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
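As a minimal sketch of the kind of metric this leads to, the Python fragment below relates
hypothetical counts of elements, errors and tests; the record values and field names are
illustrative assumptions, not figures from the system.

# Hypothetical development records; values and field names are assumptions.
records = [
    {"pair": "entities/services",    "count_a": 40, "count_b": 25, "errors": 6, "tests": 120},
    {"pair": "services/techniques",  "count_a": 25, "count_b": 12, "errors": 9, "tests": 80},
    {"pair": "techniques/standards", "count_a": 12, "count_b": 8,  "errors": 2, "tests": 50},
]
for r in records:
    ratio = r["count_a"] / r["count_b"]      # observed relation between the two sets
    error_rate = r["errors"] / r["tests"]    # errors found per test executed
    print(f"{r['pair']}: ratio={ratio:.2f}, errors per test={error_rate:.3f}")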
12.10.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items used with the
network and Markov theory sections above.
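A minimal Python sketch of this idea, assuming hypothetical route names and counts: every
option starts with equal probability and the estimates shift towards the collected statistics
as observations accumulate.

from collections import Counter

def option_probabilities(observed, options, smoothing=1.0):
    # Start from the equal-probability heuristic (uniform prior via Laplace
    # smoothing) and move towards the collected statistics as counts grow.
    counts = Counter(observed)
    total = len(observed) + smoothing * len(options)
    return {o: (counts[o] + smoothing) / total for o in options}

routes = ["route-a", "route-b", "route-c"]        # hypothetical options
print(option_probabilities([], routes))            # no data: all equal
print(option_probabilities(["route-a", "route-a", "route-b"], routes))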
12.10.1.14 Curve Fitting
Curve fitting constructs a curve or mathematical function that best fits a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an exact
fit of the data, and smoothing, for a "smooth" curve function that approximates the data.
Regression analysis gives a measure of the uncertainty of the curve due to random data
errors. The fitted curves help to picture the data, estimate values of the function where
data values are missing, and summarise the relations between the variables. Extrapolation
takes the fitted curve to calculate values beyond the range of the observed data, and
carries greater uncertainty because it depends on which particular curve has been
determined. Curve fitting relies on various types of constraint such as a specific point,
angle, curvature or other higher-order constraints, especially at the ends of the points
being considered. The number of constraints sets a limit on the number of combined
functions defining the fitted curve, and even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which minimises the
squared deviations from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
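The fragment below is a small illustration of the least squares approach using NumPy; the
sampled link-load values are invented for the example and a low-order polynomial stands in
for whatever curve family the system would actually choose.

import numpy as np

# Hypothetical observations: link load sampled at a few points in time.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
load = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Least-squares fit of a low-order polynomial (smoothing rather than exact
# interpolation); the residuals give a feel for the random data error.
coeffs = np.polyfit(t, load, deg=1)
fitted = np.poly1d(coeffs)

print("estimate at t=2.5 (interpolation):", fitted(2.5))
print("estimate at t=6.0 (extrapolation):", fitted(6.0))
print("sum of squared deviations:", np.sum((fitted(t) - load) ** 2))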
12.10.1.15 Configuration Management
Configuration management follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.10.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.10.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.19.1.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.10.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technology such as remote communication, artificial intelligence and
spatial data to assist the technology. Errors for entities, services, standards,
techniques and communications are reported using this method so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position, then evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
12.10.2 Physical System
12.10.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with the appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme), with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge bases, probability, real-time processing, loading and executing ciphers from
libraries, and temporal information. It is a virtual store.
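As an illustration only, the sketch below reads a tiny service definition with Python's
standard XML parser; the instance document is hypothetical and uses underscores in the tag
names so that the fragment is well-formed XML, whereas the appendix schema writes the
names with spaces.

import xml.etree.ElementTree as ET

# A minimal, hypothetical instance document following the appendix schema.
doc = """
<service_set>
  <service_set_name>communications</service_set_name>
  <service>
    <service_name>data transfer</service_name>
    <service_priority>high</service_priority>
  </service>
</service_set>
"""

root = ET.fromstring(doc)
for service in root.findall("service"):
    name = service.findtext("service_name")
    priority = service.findtext("service_priority")
    print(f"service={name!r} priority={priority!r}")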
12.10.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.10.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet the
requirements. The database system should be standardised, simple, specialised,
logically organised and concise, have minimum ambiguity, have minimum error cases and
have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and the environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be well implemented: accurate, efficient, well
guided, helpful, precise, fast to adjust and able to start execution quickly and continue
for long periods. They should use previous data or automatic controls rather than human
intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than the
documentation. If he is forced to use documentation, he should have supple joints and
long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.10.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for good structure, consistency, completeness,
completion of processes, and optimal structure for minimum processing time and
maximum ease of look-up, applied to entities, services, techniques, standards and
communications.
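A minimal sketch of such a validation pass in Python, assuming a toy schema graph: it flags
links that point at unknown elements (consistency) and elements that cannot be reached from
the root (completeness).

from collections import deque

def validate_schema_links(nodes, edges, root):
    # Dangling links reference an element that is not in the schema.
    dangling = [(a, b) for a, b in edges if a not in nodes or b not in nodes]
    adjacency = {n: [] for n in nodes}
    for a, b in edges:
        if a in adjacency and b in nodes:
            adjacency[a].append(b)
    # Breadth-first search finds every element reachable from the root.
    seen, queue = {root}, deque([root])
    while queue:
        for nxt in adjacency[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dangling, sorted(nodes - seen)

nodes = {"entity", "service", "technique", "standard", "orphan"}
edges = [("entity", "service"), ("service", "technique"),
         ("technique", "standard"), ("service", "missing")]
print(validate_schema_links(nodes, edges, "entity"))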
12.10.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
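The sketch below illustrates the extension with invented usage counts: edge frequencies are
turned into per-node transition probabilities, and rarely used edges are flagged for review.

from collections import defaultdict

def edge_probabilities(usage):
    # Turn raw edge-usage counts into transition probabilities per source node.
    totals = defaultdict(int)
    for (src, _), count in usage.items():
        totals[src] += count
    return {(src, dst): count / totals[src]
            for (src, dst), count in usage.items()}

# Hypothetical counts of how often one schema element follows another.
usage = {("entity", "service"): 90, ("entity", "standard"): 10,
         ("service", "technique"): 60, ("service", "standard"): 40}
probs = edge_probabilities(usage)
rare = [edge for edge, p in probs.items() if p < 0.2]
print(probs)
print("candidate edges to review:", rare)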
12.10.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on the input using compiler technology. Entities are analysed with set theory to verify the
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
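A small illustration, with invented entity and service names, of entities being checked with
set operations against constraints on the input.

# Entities analysed with set theory to verify constraints, then checked
# against the services that reference them; all names are illustrative.
entities = {"sensor", "gateway", "database"}
allowed_inputs = {"sensor", "gateway"}                    # constraint on input entities
service_inputs = {"data transfer": {"sensor", "probe"},   # "probe" is not a known entity
                  "data analysis": {"gateway"}}

for service, inputs in service_inputs.items():
    unknown = inputs - entities
    outside_constraint = (inputs & entities) - allowed_inputs
    if unknown or outside_constraint:
        print(service, "violates constraints:", unknown | outside_constraint)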
12.10.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.10.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.10.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
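As a sketch of the translation step, the fragment below tokenises and checks a single
definition line against a toy grammar; the keyword "uses" and the line shape are assumptions
made for the example, not the system's actual definition language.

import re

# Toy grammar for definition lines such as:
#   service "data transfer" uses technique "time series analysis"
TOKEN = re.compile(r'"[^"]*"|\S+')
KINDS = {"entity", "service", "standard", "technique", "communication"}

def parse_definition(line):
    tokens = TOKEN.findall(line)
    if len(tokens) != 5 or tokens[0] not in KINDS or tokens[2] != "uses" \
            or tokens[3] not in KINDS:
        raise ValueError(f"unrecognised definition: {line!r}")
    return {"kind": tokens[0], "name": tokens[1].strip('"'),
            "uses_kind": tokens[3], "uses_name": tokens[4].strip('"')}

print(parse_definition('service "data transfer" uses technique "time series analysis"'))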
12.10.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.10.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.10.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.10.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items used with the
network and Markov theory sections above.
12.10.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multidimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service, and then the correct connection from the group of routes to apply to the
communications service.
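A minimal sketch of the Chebyshev fitting step using NumPy, with invented delay samples for
one candidate route; the spline variant and the Pearson-based trust measure are left out to
keep the example short.

import numpy as np
from numpy.polynomial import Chebyshev

# Hypothetical round-trip delay samples for one candidate route.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
delay = np.array([12.0, 11.5, 13.2, 15.8, 15.1, 17.9])

# Least-squares Chebyshev fit; a lower degree smooths, a higher one interpolates.
fit = Chebyshev.fit(t, delay, deg=2)

# Compare an extrapolated estimate for this route against another candidate.
candidates = {"route-a": 16.0, "route-b": float(fit(6.0))}
best = min(candidates, key=candidates.get)
print("extrapolated delay for route-b:", candidates["route-b"])
print("selected route:", best)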
12.10.2.15 Configuration Management
Configuration management has the database hold system construction, item identity and
status as entities. Services provide base-lining, configuration control with approval
stages and baselines, configuration status accounting, and audits against revisions and
defect corrections.
12.10.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for the update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect or regression) tests, together with
static and dynamic tests and performance measurement and profiling, confirm that the
system behaves as it should. The updated repository triggers another build process and
set of tests. The new updates are committed to the repository and delivered to
stakeholders and testers once all the tests have been verified; otherwise the updates are
rolled back. The build / test process is repeated periodically to ensure no corruption of
the system.
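The fragment below sketches the build-and-test loop with placeholder commands; the
repository URL and the make targets are assumptions, and a real pipeline would substitute
its own tooling and rollback mechanism.

import subprocess

# Placeholder pipeline steps; any real pipeline would use its own commands.
STEPS = [
    ["git", "clone", "https://example.org/iot-system.git", "workdir"],
    ["make", "-C", "workdir", "build"],
    ["make", "-C", "workdir", "test"],
]

def run_pipeline(steps):
    for step in steps:
        try:
            result = subprocess.run(step)
        except OSError as exc:
            print("step could not start:", " ".join(step), "-", exc)
            return False
        if result.returncode != 0:
            print("step failed, rolling back update:", " ".join(step))
            return False
    print("all steps passed; update can be committed and delivered")
    return True

if __name__ == "__main__":
    run_pipeline(STEPS)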
12.10.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life-cycle management.
12.10.2.18 Virtual Reality
Virtual reality is the user interface, with services, for monitoring and control of the IoT
system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported using this method so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position, then evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
12.10.2.19 Commentary
The cipher service definition set above is created once when the service is added to
the system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type with the XML tags
given in the appendix, as are the escape sequences.
The named entity recognition process has a basic process of syntax analysis and
semantics lookup (compiler technology) to analyse the input and the database
technology to look up the syntax rules, semantics rules, words and the classification
information for the activity. The logic theory is used to validate the language input and
in the case of error the human user of the system is queried to correct the error or
extend the language using the learning algorithms. The tuning of the database is
provided with the statistics data that is collected over the use of the database and
language data.
12.11 Summary
IoT security processing is the function of extracting entities, services, standards,
techniques, communications, antivirus, firewall, APIDS and ciphers from a set of
information. The contributions of IoT technology are defined below:
12.11.1 IoT Technology
IoT processing is the function of extracting entities, services, standards, techniques
and communications from a set of information. The contributions of IoT technology are
defined below:
t) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
u) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
v) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimises the structure of the
system to minimise the time of processing and maximise the ease of look-up.
w) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
x) Markov theory can determine usage and errors of the structure of the IoT system.
y) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
z) Programming language theory grants a basis for holding the structure of the
knowledge held by the IoT system and the processing so far.
aa) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
ab) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
ac) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
ad) Database technology bestows a method for easy access to the knowledge that
has accumulated about the IoT system being processed.
ae) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
af) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
ag) Geographical information systems hold position-dependent data for positioning,
monitoring, analysis and display, giving visualization, understanding and intelligence when
combined with other technologies, processes and methods.
ah) Curve fitting is used for the interpretation of the IoT system.
ai) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
aj) Continuous integration automates updates, builds and tests, measuring and
profiling performance to ensure that the environment is valid.
ak) Continuous delivery extends continuous integration by automating the process
from start to production.
al) Virtual reality simulates the user's presence, environment and interaction with
entities.
The entity, service, standard, technique and communications definition set above is
created once when the entity, service, standard, technique or communications is added
to the system and changed and removed infrequently as the service set is extended. It
is queried frequently for every entity, service, standard, technique and communications
rule that is read. The definition set is updated (inserted, modified, and deleted)
infrequently. The administration (maintain users, data security, performance, data
integrity, concurrency and data recovery using utilities - services) of the database will
be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags
given in the appendix, as are the escape sequences.
12.11.2 IoT Security Technology
IoT security processing is the function of extracting entities, services, standards,
techniques and communications from a set of information for antivirus, firewall, APIDS
and ciphers. The contributions of IoT security technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimises the structure of the
system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the
knowledge held by the IoT system and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that
has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold position-dependent data for positioning,
monitoring, analysis and display, giving visualization, understanding and intelligence when
combined with other technologies, processes and methods.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and
profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production.
s) Virtual reality simulates the user's presence, environment and interaction with
entities.
The entity, service, standard, technique and communications definition set above for
antivirus, firewall, APIDS and ciphers is created once when the entity, service,
standard, technique or communications for antivirus, firewall, APIDS and ciphers is
added to the system and changed and removed infrequently as the service set is
extended. It is queried frequently for every entity, service, standard, technique and
communications rule that is read. The definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags
given in the appendix, as are the escape sequences.
13 Conclusions
This paper reviews how some other technologies can contribute to IoT and its
processes. It consists of 12 further sections. The next gives a summary of IoT. Section
4 considers the Intel Active Management Technology whilst the fifth part describes IoT
security followed by a component on IoT security solutions. The seventh section is
devoted to methodologies that can be added to the techniques for IoT processing. There
are 20 theories that are helpful: search theory, network theory, Markov theory,
algebraic theory, logic theory, programming language theory, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, geographic information systems, curve fitting,
configuration management, continuous integration/delivery and virtual reality. These
techniques are applied in turn, in part eight, to entity (address,
database, entities, firmware, functions, hardware, languages, network hardware,
network media), services (access management, accounting management, address
management, application management, communications management, content
management, continuous delivery, data analysis, data transfer, network management,
protocol management, reliability and fault tolerance, resource management, search
management, security management, service engineering management, statistics
management, status management, test facility management, development facility
management, virtualisation), standards and techniques (cognitive networks,
cooperative networks, machine learning, neural networks, reinforcement learning, self-
organizing distributed networks, surrogate models, time series analysis) and
communications. In the ninth section we study IoT security processing from the view of
its different activities with the requirements being described in part 10. Part 11 gives a
specification of the IoT security tools and implementation specified in section 12. The
penultimate part presents the conclusions of the paper whilst the final section is a set
of references to help the reader.
14.2 Services
a. <service set>
<service set name><string></service set name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<service><iteration control>
<service name><string></service name>
<service name sound> <service name sound file> </service name sound>
<service hardware representation><string></service hardware representation>
<service picture> <service picture file> </service picture>
<service sound> <service sound file> </service sound>
<service meaning> <string> </service meaning>
<service version> <string> </service version>
<service timestamp> <timestamp> </service timestamp>
<geographic position><coordinates></geographic position>
<properties>
<property> <property name> <property value>
<service property system statistic> <number> </service property system statistic>
<service property personal statistic> <number> </service property personal statistic>
</property>
.........
<service subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<service priority> <string> </service priority>
...........
<service relations>
<service relation>
<relation name><string></relation name>
<service name>
</service relation>
...........
</service relations>
<service language statistic> <number> </service language statistic>
<service personal statistic> <number> </service personal statistic>
</service></iteration control>
.........
</service set>
b. <service subset>
<service subset name><string></service subset name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<service><iteration control>
<service name><string></service name>
<service name sound> <service name sound file> </service name sound>
<service hardware representation><string></service hardware representation>
<service picture> <service picture file> </service picture>
<service sound> <service sound file> </service sound>
<service meaning> <string> </service meaning>
<service version> <string> </service version>
<service timestamp> <timestamp> </service timestamp>
<properties>
<property> <property name> <property value>
<service property system statistic> <number> </service property system statistic>
<service property personal statistic> <number> </service property personal statistic>
</property>
.........
<service subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<service priority> <string> </service priority>
...........
<service relations>
<service relation>
<relation name><string></relation name>
<service name>
</service relation>
...........
</service relations>
<service language statistic> <number> </service language statistic>
<service personal statistic> <number> </service personal statistic>
</service></iteration control>
.........
</service subset>
14.3 Standards
a. <standard set>
<standard set name><string></standard set name>
<standard>
<standard name><string></standard name>
<standard hardware representation><string></standard hardware representation>
<standard rule>
<standard name>.............</standard name>
<standard version> <string> </standard version>
<standard timestamp> <timestamp> </standard timestamp>
<standard system rule statistic> <number> </standard system rule statistic>
<standard personal rule statistic> <number> </standard personal rule statistic>
</standard rule>
<standard entity>
<entity name>.............</entity name>
<standard system entity statistic> <number> </standard system entity statistic>
<standard personal entity statistic> <number> </standard personal entity statistic>
</standard entity>
<standard service>
<service name>.............</service name>
<standard system service statistic> <number> </standard system service statistic>
<standard personal service statistic> <number> </standard personal service statistic>
</standard service>
<standard technique>
<technique name>.............</technique name>
<standard system technique statistic> <number> </standard system technique
statistic>
<standard personal technique statistic> <number> </standard personal technique
statistic>
</standard technique>
...........
<standard system statistic> <number> </standard system statistic>
<standard personal statistic> <number> </standard personal statistic>
</standard>
.........
</standard set>
14.4 Techniques
a. <technique set>
<technique set name><string></technique set name>
<technique><iteration control>
<technique name><string></technique name>
<technique name sound> <technique name sound file> </technique name sound>
<technique hardware representation><string></technique hardware representation>
<technique picture> <technique picture file> </technique picture>
<technique sound> <technique sound file> </technique sound>
<technique meaning> <string> </technique meaning>
<technique version> <string> </technique version>
<technique timestamp> <timestamp> </technique timestamp>
<properties>
<property> <property name> <property value>
<technique property system statistic> <number> </technique property system statistic>
<technique property personal statistic> <number> </technique property personal statistic>
</property>
.........
<technique subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<technique priority> <string> </technique priority>
...........
<technique relations>
<technique relation>
<relation name><string></relation name>
<technique name>
</technique relation>
...........
</technique relations>
<technique language statistic> <number> </technique language statistic>
<technique personal statistic> <number> </technique personal statistic>
</technique></iteration control>
.........
</technique set>
b. <technique subset>
<technique subset name><string></technique subset name>
<technique><iteration control>
<technique name><string></technique name>
<technique name sound> <technique name sound file> </technique name sound>
<technique hardware representation><string></technique hardware representation>
<technique picture> <technique picture file> </technique picture>
<technique sound> <technique sound file> </technique sound>
<technique meaning> <string> </technique meaning>
<technique version> <string> </technique version>
<technique timestamp> <timestamp> </technique timestamp>
<properties>
<property> <property name> <property value>
<technique property system statistic> <number> </technique property system statistic>
<technique property personal statistic> <number> </technique property personal statistic>
</property>
.........
<technique subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<technique priority> <string> </technique priority>
...........
<technique relations>
<technique relation>
<relation name><string></relation name>
<technique name>
</technique relation>
...........
</technique relations>
<technique language statistic> <number> </technique language statistic>
<technique personal statistic> <number> </technique personal statistic>
</technique></iteration control>
.........
</technique subset>
14.5 Communications
a. <communication set>
<communication set name><string></communication set name>
<communication>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<communication name><string></communication name>
<communication name sound> <communication name sound file> </communication
name sound>
<communication hardware representation><string></communication hardware representation>
<communication picture> <communication picture file> </communication picture>
<communication sound> <communication sound file> </communication sound>
<communication version> <string> </communication version>
<communication timestamp> <timestamp> </communication timestamp>
<communication language statistic> <number> </communication language statistic>
<communication personal statistic> <number> </communication personal statistic>
<entity name> .........
<service name> .........
<technique name> .........
<standard name> .........
</communication>
........
</communication set>
<Anti Spam>
<Filter>
<Norton Antispam> Norton Antispam Can Be On Or Off </Norton Antispam>
<Address Book Exclusions> Address Book Exclusions Can Be Configured To
<Name> Name </Name>
<Email Address> Email Address </Email Address>
Operations Can Be Add, Edit Or Removed
</Address Book Exclusions>
<Allow List> Allow List Can Be Configured With
<allow Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</allow Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Allow List>
<block List> Block List Can Be Configured With
<block Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</block Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</block List>
<Web Query> Web Query Can Be On Or Off </Web Query>
<Protected Ports> Protected Ports Can Be Configured To
<Protected Port>
<Name> Name </Name>
<Number> Number </Number>
</Protected Port>*
Operations Can Be Add And Remove
</Protected Ports>
</Filter>
<Client Integration>
<Email Clients>
<Outlook> Outlook Can Be On Or Off
Operations Can Register Add-Ins
</Outlook>
<Address Books>
<Integrate With Outlook Contact List> Integrate With Outlook Contact List Can Be On
Or Off </Integrate With Outlook Contact List>
<Integrate With Windows Address Book> Integrate With Windows Address Book Can Be
On Or Off </Integrate With Windows Address Book>
</Address Books>
</Email Clients>
<Miscellaneous>
<Welcome Screen> Welcome Screen Can Be On Or Off </Welcome Screen>
<Feedback> Feedback Can Be On, Ask-Me Or Off </Feedback>
</Miscellaneous>
</Client Integration>
</Anti Spam>
<Identity Safe>
<Identity Safety> Identity Safety Can Be On Or Off
It Can Be Configured But It Has To Be Signed In
</Identity Safety>
<Safe Surfing>
<Anti Phishing> Anti Phishing Can Be On Or Off
<Submit Full Site Information> Submit Full Site Information Can Be On Or Off </Submit
Full Site Information>
</Anti Phishing>
<Norton Safe Web> Norton Safe Web Can Be On Or Off
<Block Malicious Pages> Block Malicious Pages Can Be On Or Off </Block Malicious
Pages>
<Site Rating Icons In Search Results> Site Rating Icons In Search Results Can Be On Or
Off </Site Rating Icons In Search Results>
<Scan In Sight> Scan In Sight Can Be On Or Off </Scan In Sight>
</Norton Safe Web>
</Safe Surfing>
</Identity Safe>
<Task Scheduling>
<Automatic Tasks>
<Automatic Task>
Automatic Task Can Configure
<Task Name> Task Name </Task Name>
<Description> Description </Description>
</Automatic Task>*
</Automatic Tasks>
<Scheduling>
<Schedule> Schedule Can Be Automatic, Weekly, Monthly Or Manual Schedule
</Schedule>
</Scheduling>
</Task Scheduling>
<Administrative Settings>
<Background Tasks> Background Tasks Can Be Configured
<Background Task>
<Norton Task> Norton Task Can Be One Of A List </Norton Task>
<Last Run> Last Run </Last Run>
<Duration> Duration </Duration>
<Run During Idle> Run During Idle Can Be Yes Or No</Run During Idle>
<Status> Status Can Be Complete Or Not Run</Status>
</Background Task>*
</Background Tasks>
<Idle Time Optimiser> Idle Time Optimiser Can Be On Or Off </Idle Time Optimiser>
<Report Card> Report Card Can Be On Or Off </Report Card>
<Automatic Download Of New Version> Automatic Download Of New Version Can Be On
Or Off </Automatic Download Of New Version>
<Search Short Key> Search Short Key Can Be On Or Off
<global> Global Can Be Set On Or Off </global>
<function Key> Function Key Can Be Control, Alt Or Win </function Key>
<key> Key Can Be A To Z Or F1 To F12 </key>
</Search Short Key>
<Network Proxy Settings> Network Proxy Settings Can Be Configured With
<Automatic Configuration>
<Configuration> Configuration With
<Automatic Detect Settings> Automatic Detect Settings Can Be On Or Off</Automatic
Detect Settings>
<Use An Automatic Configurations Script> Use An Automatic Configurations Script
</Use An Automatic Configurations Script>
<Url> Url </Url>
</Configuration>
<Proxy Settings> Proxy Settings
<Use Proxy Setting> Use Proxy Setting </Use Proxy Setting>
<Address> Address </Address>
<Port> Port </Port>
</Proxy Settings>
Authentication Can Be Set With I Need Authentication To Connect Through My Firewall
Or Proxy With Username And Password
</Automatic Configuration>
</Network Proxy Settings>
<Norton Community Watch> Norton Community Watch Can Be On Or Off
<Detailed Error Data Collection> Detailed Error Data Collection Can Be Ask-Me, Never
Or Always </Detailed Error Data Collection>
</Norton Community Watch>
<Remote Management> Remote Management Can Be On Or Off </Remote Management>
<Family> Family Is Not Installed And It Can Be Installed </Family>
<Norton Task Notification> Norton Task Notification Can Be On Or Off </Norton Task
Notification>
<Performance Monitoring> Performance Monitoring Can Be On Or Off
<Performance Alerting> Performance Alerting Can Be On, Off And Log-Only
</Performance Alerting>
</Performance Monitoring>
</Administrative Settings>
</Norton>
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be Aggressive, Normal, Off </Boot
Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be On, Off
<Removable Media Scan>Removable Media Scan Can Be On, Off </Removable Media Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be On, Off
<Network Drive Protection>Network Drive Protection Can Be On, Off </Network Drive
Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be Off, Aggressive, Automatic
<Removable Discs Automatically> Removable Discs Automatically Can Be Ask-Me,
Always, Highly-Certainty-Only </Removable Discs Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away Can Be Highly-Certainty-Only,
Ignore, Always </Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Shows Sonar Block Notifications>Shows Sonar Block Notifications Are Show-All Or
Log-Only </Shows Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
On Or Off </Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders>Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be On Or
Off</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be Automatic, Off Or Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be Remove, Ignore Or Ask-Me</Tracking
Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be Do Not Schedule This Scan, Run At A Specific
Time Interval (set The Number Of Days, Hours), Daily, Weekly, Monthly
<Only Time Idle Time> Only Time Idle Time Can Be On Or Off </Only Time Idle Time>
<On Ac Power> On Ac Power Can Be On Or Off </On Ac Power>
<Prevent Standby> Prevent Standby Can Be On Or Off </Prevent Standby>
<After Scan Completion> After Scan Completion Can Be Stay On, Turn Off, Sleep Or
Hibernate </After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders> Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be On
Or Off </Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be On Or Off </Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be On Or Off </Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be On Or Off
<How To Respond When The Output Threat> How To Respond When The Output Threat
Is Found Can Be Automatically Removed Or Ask-Me </How To Respond When The
Output Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do What To Do When Scanning Email Messages> What To Do What To Do
When Scanning Email Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be On Or Off </Protect
Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be On Or Off </Display
Process Indicator>
</What To Do What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be Removed, Ignore Or Ask-Me </Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections>
Signatures To Exclude From All Detections Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be On Or Off </Automatic Live
Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be On Or Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>
<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be On Or Off </Smart Firewall
Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be Enable, Disabled </Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be On Or Off </Stealth Blocked
Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be On Or Off </Stateful Protocol
Filter >
<Public Network Exceptions> Public Network Exceptions Can Be Configured To Allow
And Service
<Network Discovery> Network Discovery Can Be Allow Or Not Allow </Network
Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be Allow Or Not Allow </File
And Printer Sharing>
<Remote Desk Top Connection> Remote Desk Top Connection Can Be Allow Or Not
Allow </Remote Desk Top Connection>
<Remote Assistence> Remote Assistence Can Be Allow Or Not Allow </Remote
Assistence>
<Windows Media Play > Windows Media Play Can Be Allow Or Not Allow </Windows
Media Play>
<Windows Media Player Extended> Windows Media Player Extended Can Be Allow Or
Not Allow </Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be Allow Or Not Allow </Windows
Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be Allow Or Not Allow </Remote
Procedure Call>
<Internets Connection Sharing> Internets Connection Sharing Can Be Allow Or Not
Allow </Internets Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be On Or Off
</Network Cost Awareness Status>
Network Cost Awareness And Can Be Configured To
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be Auto, No Limit, Economy, No Traffic </Policy>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<trust Level>
Trust Level Can Be Full Trust, Private, Public Or Restricted
</trust Level>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be Full Trust Or Restricted</Trust Level>
<Address> Address Can Be Ip Or Physical Address </Address>
<Ips Extrusion> Ips Extrusion </Ips Extrusion>
The Operations Are Add And Remove
</Device Trust>
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be Allowed, Block Or Custom </Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be
<Traffic Rule>
<Active> Active Can Be On Or Off </Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be Allow, Block Or Monitor </Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be On Or Off </Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
On, If Explicitly Requested Or Off </Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>
15.2 Services
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be
set Aggressive
set Normal
set Off
</Boot Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be
set On
set Off
<Removable Media Scan>Removable Media Scan Can Be
set On
set Off
</Removable Media Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be
set On
set Off
<Network Drive Protection>Network Drive Protection Can Be
set On
set Off
</Network Drive Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be
set Off
set Aggressive
set Automatic
<Removable Discs Automatically> Removable Discs Automatically Can Be
set Ask-Me
set Always
set Highly-Certainty-Only
</Removable Discs Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away
set Highly-Certainty-Only
set Ignore
set Always
</Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Shows Sonar Block Notifications>Shows Sonar Block Notifications Are
set Show-All
set Log-Only
</Shows Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
set On
set Off
</Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be
set On
set Off
<Remove Infected Folders>Remove Infected Folders Can Be
set Automatic
set Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be
set On
set Off
</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be
set On
set Off
</Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be
set Automatic
set Off
set Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be
set Remove
set Ignore
set Ask-Me
</Tracking Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be
set Do Not Schedule This Scan
set Run At A Specific Time Interval (set The Number Of Days, Hours), Daily, Weekly,
Monthly
<Only Time Idle Time> Only Time Idle Time Can Be
set On
set Off
</Only Time Idle Time>
<On Ac Power> On Ac Power Can Be
set On
set Off
</On Ac Power>
<Prevent Standby> Prevent Standby Can Be
set On
set Off
</Prevent Standby>
<After Scan Completion> After Scan Completion Can Be
set Stay On
set Turn Off
set Sleep
set Hibernate
</After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be
set On
set Off
<Remove Infected Folders> Remove Infected Folders Can Be
set Automatic
set Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be
set On
set Off
</Network Drive Scan>
<Low Risks> Low Risks Can Be
set Remove
set Ignore
set Ask-Me
</Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be
set On
set Off
</Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be
set On
set Off
</Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be
set On
set Off
</Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be
set On
set Off
<How To Respond When The Output Threat> How To Respond When The Output Threat
Is Found Can Be
set Automatically Removed
set Ask-Me
</How To Respond When The Output Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do What To Do When Scanning Email Messages> What To Do What To Do
When Scanning Email Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be
set On
set Off
</Protect Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be
set On
set Off
</Display Process Indicator>
</What To Do What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be
set Removed
set Ignore
set Ask-Me
</Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections> Signatures To Exclude From All Detections
Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be
set On
set Off
</Automatic Live Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be
set On
set Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>
<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be
set On
set Off
</Smart Firewall Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be
set Enable
set Disabled
</Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be
set On
set Off
</Stealth Blocked Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be
set On
set Off
</Stateful Protocol Filter>
<Public Network Exceptions> Public Network Exceptions Can Be Configured To Allow
And Service
<Network Discovery> Network Discovery Can Be
set Allow
set Not Allow
</Network Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be
set Allow
set Not Allow
</File And Printer Sharing>
<Remote Desk Top Connection> Remote Desk Top Connection Can Be
set Allow
set Not Allow
</Remote Desk Top Connection>
<Remote Assistence> Remote Assistence Can Be
set Allow
set Not Allow
</Remote Assistence>
<Windows Media Play> Windows Media Play Can Be
set Allow
set Not Allow
</Windows Media Play>
<Windows Media Player Extended> Windows Media Player Extended Can Be
set Allow
set Not Allow
</Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be
set Allow
set Not Allow
</Windows Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be
set Allow
set Not Allow
</Remote Procedure Call>
<Internets Connection Sharing> Internets Connection Sharing Can Be
set Allow
set Not Allow
</Internets Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be
set On
set Off
</Network Cost Awareness Status>
Network Cost Awareness And Can Be Configured To
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be
set Auto
set No Limit
set Economy
set No Traffic
</Policy>
<In Use> In Use Can Be
set In Use
set Not Use
</In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<trust Level>
Trust Level Can Be
set Full Trust
set Private
set Public
set Restricted
</trust Level>
<In Use> In Use Can Be
set In Use
set Not Use
</In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be
set Full Trust
set Restricted
</Trust Level>
<Address> Address Can Be
set Ip
set Physical Address
</Address>
<Ips Exclusion> Ips Exclusion </Ips Exclusion>
The Operations Are Add And Remove
</Device Trust>
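Network Trust and Device Trust differ mainly in the trust levels they accept: Full Trust, Private, Public or Restricted for a connection, but only Full Trust or Restricted for a device, which is addressed by IP or physical address. A short sketch of the two entry types follows; the record and enum names are hypothetical.

// Illustrative model of the Network Trust and Device Trust entries above.
public class TrustEntries {
    enum ConnectionTrust { FULL_TRUST, PRIVATE, PUBLIC, RESTRICTED }
    enum DeviceTrust { FULL_TRUST, RESTRICTED }

    // One row of the Network Trust table: a connection, its trust level and
    // whether the connection is currently in use.
    record NetworkTrustEntry(String connection, ConnectionTrust trustLevel, boolean inUse) {}

    // One row of the Device Trust table, addressed by IP or physical (MAC) address.
    record DeviceTrustEntry(String name, String type, DeviceTrust trustLevel, String address) {}

    public static void main(String[] args) {
        // The adapter and device names below are placeholders used only for illustration.
        var lan = new NetworkTrustEntry("Fast Ethernet Adapter", ConnectionTrust.PUBLIC, true);
        var printer = new DeviceTrustEntry("Office printer", "Printer", DeviceTrust.FULL_TRUST, "192.168.1.20");
        System.out.println(lan + "\n" + printer);
    }
}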
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be Configured With
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be
set Allowed
set Block
set Custom
</Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be Configured With
<Traffic Rule>
<Active> Active Can Be
set On
set Off
</Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be
set Allow
set Block
set Monitor
</Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers, Or Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be
set On
set Off
</Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
set On
set If Explicitly Requested
set Off
</Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>
<Intrusion And Browser Protection>
<Intrusion Protection Prevention> Intrusion Protection Prevention Can Be
set On
set Off
</Intrusion Protection Prevention>
<Anti Spam>
<Filter>
<Norton Antispam> Norton Antispam Can Be
set On
set Off
</Norton Antispam>
<Address Book Exclusions> Address Book Exclusions Can Be Configured To
<Name> Name </Name>
<Email Address> Email Address </Email Address>
Operations Can Be Add, Edit Or Remove
</Address Book Exclusions>
<Allow List> Allow List Can Be Configured With
<allow Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</allow Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Allow List>
<block List> Block List Can Be Configured With
<block Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</block Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</block List>
<Web Query> Web Query Can Be
set On
set Off
</Web Query>
<Protected Ports> Protected Ports Can Be Configured To
<Protected Port>
<Name> Name </Name>
<Number> Number </Number>
</Protected Port>*
Operations Can Be Add And Remove
</Protected Ports>
</Filter>
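The Allow List and Block List above both hold Name/Type/Address entries, and the Ask-Me/Always/Never option governs how strictly each list is applied. The sketch below assumes, purely for illustration, that an address on the Allow List takes precedence over the Block List and that unlisted senders fall back to Ask-Me; the class and method names are hypothetical.

import java.util.Set;

// Illustrative antispam decision based on the Allow List and Block List above.
public class AntispamFilter {
    enum Verdict { ACCEPT, REJECT, ASK_ME }

    private final Set<String> allowList;
    private final Set<String> blockList;

    AntispamFilter(Set<String> allowList, Set<String> blockList) {
        this.allowList = allowList;
        this.blockList = blockList;
    }

    // Assumed precedence: allow list first, then block list, otherwise ask the user.
    Verdict classify(String senderAddress) {
        if (allowList.contains(senderAddress)) return Verdict.ACCEPT;
        if (blockList.contains(senderAddress)) return Verdict.REJECT;
        return Verdict.ASK_ME;
    }

    public static void main(String[] args) {
        AntispamFilter filter = new AntispamFilter(
                Set.of("colleague@example.com"), Set.of("bulkmail@example.net"));
        System.out.println(filter.classify("bulkmail@example.net")); // prints REJECT
    }
}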
<Client Integration>
<Email Clients>
<Outlook> Outlook Can Be
set On
set Off
Operations Can Register Add-Ins
</Outlook>
<Address Books>
<Integrate With Outlook Contact List> Integrate With Outlook Contact List Can Be
set On
set Off
</Integrate With Outlook Contact List>
<Integrate With Windows Address Book> Integrate With Windows Address Book Can Be
set On
set Off
</Integrate With Windows Address Book>
</Address Books>
</Email Clients>
<Miscellaneous>
<Welcome Screen> Welcome Screen Can Be
set On
set Off
</Welcome Screen>
<Feedback> Feedback Can Be
set On
set Off
set Ask-Me
</Feedback>
</Miscellaneous>
</Client Integration>
</Anti Spam>
<Identity Safe>
<Identity Safety> Identity Safety Can Be
set On
set Off
It Can Be Configured, But The User Has To Be Signed In
</Identity Safety>
<Safe Surfing>
<Anti Phishing> Anti Phishing Can Be
set On
set Off
<Submit Full Site Information> Submit Full Site Information Can Be
set On
set Off
</Submit Full Site Information>
</Anti Phishing>
<Norton Safe Web> Norton Safe Web Can Be
set On
set Off
<Block Malicious Pages> Block Malicious Pages Can Be
set On
set Off
</Block Malicious Pages>
<Site Rating Icons In Search Results> Site Rating Icons In Search Results Can Be
set On
set Off
</Site Rating Icons In Search Results>
<Scan In Sight> Scan In Sight Can Be
set On
set Off
</Scan In Sight>
</Norton Safe Web>
</Safe Surfing>
</Identity Safe>
<Task Scheduling>
<Automatic Tasks>
<Automatic Task>
Automatic Task Can Be Configured With
<Task Name> Task Name </Task Name>
<Description> Description </Description>
</Automatic Task>*
</Automatic Tasks>
<Scheduling>
<Schedule> Schedule Can Be
set Automatic
set Weekly
set Monthly
set Manual Schedule
</Schedule>
</Scheduling>
</Task Scheduling>
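The Schedule element selects between Automatic, Weekly, Monthly and Manual runs of the automatic tasks. The sketch below shows one way the next run time could be derived from that choice using standard java.time calls; the enum, the one-hour idle approximation and the method name are assumptions made only for this example.

import java.time.LocalDateTime;

// Illustrative computation of the next run time for the Schedule options above.
public class TaskScheduleExample {
    enum Schedule { AUTOMATIC, WEEKLY, MONTHLY, MANUAL }

    // Assumed interpretation: Automatic runs at the next idle window (approximated
    // as one hour from now), Weekly and Monthly add the obvious interval, and
    // Manual never schedules a run on its own.
    static LocalDateTime nextRun(Schedule schedule, LocalDateTime lastRun) {
        return switch (schedule) {
            case AUTOMATIC -> LocalDateTime.now().plusHours(1);
            case WEEKLY -> lastRun.plusWeeks(1);
            case MONTHLY -> lastRun.plusMonths(1);
            case MANUAL -> null; // run only when the user starts it
        };
    }

    public static void main(String[] args) {
        System.out.println(nextRun(Schedule.WEEKLY, LocalDateTime.of(2023, 1, 1, 3, 0)));
    }
}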
<Administrative Settings>
<Background Tasks> Background Tasks Can Be Configured
<Background Task>
<Norton Task> Norton Task Can Be One Of A List </Norton Task>
<Last Run> Last Run </Last Run>
<Duration> Duration </Duration>
<Run During Idle> Run During Idle Can Be
set Yes
set No
</Run During Idle>
<Status> Status Can Be
set Complete
set Not Run
</Status>
</Background Task>*
</Background Tasks>
<Idle Time Optimiser> Idle Time Optimiser Can Be
set On
set Off
</Idle Time Optimiser>
<Report Card> Report Card Can Be
set On
set Off
</Report Card>
<Automatic Download Of New Version> Automatic Download Of New Version Can Be
set On
set Off
</Automatic Download Of New Version>
<Search Short Key> Search Short Key Can Be
set On
set Off
<global> Global Can Be
set On
set Off
</global>
<function Key> Function Key Can Be Control, Alt Or Win </function Key>
<key> Key Can Be A To Z Or F1 To F12 </key>
</Search Short Key>
<Network Proxy Settings> Network Proxy Settings Can Be Configured With
<Automatic Configuration>
<Configuration> Configuration With
<Automatic Detect Settings> Automatic Detect Settings Can Be
set On
set Off
</Automatic Detect Settings>
<Use An Automatic Configuration Script> Use An Automatic Configuration Script
</Use An Automatic Configuration Script>
<Url> Url </Url>
</Configuration>
<Proxy Settings> Proxy Settings
<Use Proxy Setting> Use Proxy Setting </Use Proxy Setting>
<Address> Address </Address>
<Port> Port </Port>
</Proxy Settings>
Authentication Can Be Set With I Need Authentication To Connect Through My Firewall
Or Proxy, With A Username And Password
</Automatic Configuration>
</Network Proxy Settings>
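The proxy subtree maps naturally onto the standard Java HTTP client: the Address and Port become a ProxySelector, and the authentication option becomes an Authenticator carrying the username and password. A sketch under those assumptions follows; the host, port and credentials are placeholders.

import java.net.Authenticator;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.ProxySelector;
import java.net.http.HttpClient;

// Illustrative HTTP client configured from the proxy settings described above.
public class ProxyConfigExample {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                // Address and Port from the Proxy Settings element.
                .proxy(ProxySelector.of(new InetSocketAddress("proxy.example.org", 8080)))
                // Username and password from the authentication option.
                .authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("user", "secret".toCharArray());
                    }
                })
                .build();
        System.out.println("Proxy configured: " + client.proxy().isPresent());
    }
}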
<Norton Community Watch> Norton Community Watch Can Be
set On
set Off
<Detailed Error Data Collection> Detailed Error Data Collection Can Be
set Ask-Me
set Never
set Always
</Detailed Error Data Collection>
</Norton Community Watch>
<Remote Management> Remote Management Can Be
set On
set Off
</Remote Management>
<Family> Family Is Not Installed And It Can Be Installed </Family>
<Norton Task Notification> Norton Task Notification Can Be
set On
set Off
</Norton Task Notification>
<Performance Monitoring> Performance Monitoring Can Be
set On
set Off
<Performance Alerting> Performance Alerting Can Be
set On
set Off
set Log-Only
</Performance Alerting>
</Performance Monitoring>
</Administrative Settings>
</Norton>
15.3 Standards
15.4 Techniques
15.5 Communications
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be Aggressive, Normal, Off </Boot
Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be On, Off
<Removable Media Scan> Removable Media Scan Can Be On, Off </Removable Media Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be On, Off
<Network Drive Protection>Network Drive Protection Can Be On, Off </Network Drive
Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be Off, Aggressive, Automatic
<Remove Risks Automatically> Remove Risks Automatically Can Be Ask-Me,
Always, Highly-Certainty-Only </Remove Risks Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away Can Be Highly-Certainty-Only,
Ignore, Always </Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Show Sonar Block Notifications> Show Sonar Block Notifications Can Be Show-All Or
Log-Only </Show Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
On Or Off </Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders>Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be On Or
Off</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be Automatic, Off Or Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be Remove, Ignore Or Ask-Me</Tracking
Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be Do Not Schedule This Scan, Run At A Specific
Time Interval (set The Number Of Days, Hours), Daily, Weekly, Monthly
<Only At Idle Time> Only At Idle Time Can Be On Or Off </Only At Idle
Time>
<On Ac Power> On Ac Power Can Be On Or Off </On Ac Power>
<Prevent Standby> Prevent Standby Can Be On Or Off </Prevent Standby>
<After Scan Completion> After Scan Completion Can Be Stay On, Turn Off, Sleep Or
Hibernate </After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders> Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
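The Full System Scan above combines a recurrence choice with run-time constraints (only at idle time, on AC power, prevent standby) and an after-completion action. A compact sketch, with hypothetical field and method names, of how those options group together and gate a scheduled start.

// Illustrative grouping of the Full System Scan schedule options above.
public class FullSystemScanSchedule {
    enum Recurrence { DO_NOT_SCHEDULE, SPECIFIC_INTERVAL, DAILY, WEEKLY, MONTHLY }
    enum AfterCompletion { STAY_ON, TURN_OFF, SLEEP, HIBERNATE }

    Recurrence recurrence = Recurrence.WEEKLY;
    int intervalHours = 0;              // used only with SPECIFIC_INTERVAL
    boolean onlyAtIdleTime = true;      // Only At Idle Time: On / Off
    boolean onAcPowerOnly = true;       // On Ac Power: On / Off
    boolean preventStandby = true;      // Prevent Standby: On / Off
    AfterCompletion afterCompletion = AfterCompletion.STAY_ON;

    // Assumed check applied before a scheduled scan is allowed to start.
    boolean mayStart(boolean systemIdle, boolean onAcPower) {
        return (!onlyAtIdleTime || systemIdle) && (!onAcPowerOnly || onAcPower);
    }

    public static void main(String[] args) {
        FullSystemScanSchedule schedule = new FullSystemScanSchedule();
        System.out.println("May start now: " + schedule.mayStart(true, true));
    }
}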
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be On
Or Off </Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be On Or Off </Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be On Or Off </Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be On Or Off
<How To Respond When The Outbound Threat> How To Respond When The Outbound Threat
Is Found Can Be Automatically Remove Or Ask-Me </How To Respond When The
Outbound Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do When Scanning Email Messages> What To Do When Scanning Email
Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be On Or Off </Protect
Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be On Or Off </Display
Process Indicator>
</What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections>
Signatures To Exclude From All Detections Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be On Or Off </Automatic Live
Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be On Or Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>
<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be On Or Off </Smart Firewall
Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be Enabled, Disabled </Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be On Or Off </Stealth Blocked
Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be On Or Off </Stateful Protocol
Filter>
<Public Network Exceptions> Public Network Exceptions Can Be Configured Per Service
To Allow Or Not Allow
<Network Discovery> Network Discovery Can Be Allow Or Not Allow </Network
Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be Allow Or Not Allow </File
And Printer Sharing>
<Remote Desktop Connection> Remote Desktop Connection Can Be Allow Or Not
Allow </Remote Desktop Connection>
<Remote Assistance> Remote Assistance Can Be Allow Or Not Allow </Remote
Assistance>
<Windows Media Player> Windows Media Player Can Be Allow Or Not Allow </Windows
Media Player>
<Windows Media Player Extended> Windows Media Player Extended Can Be Allow Or
Not Allow </Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be Allow Or Not Allow </Windows
Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be Allow Or Not Allow </Remote
Procedure Call>
<Internet Connection Sharing> Internet Connection Sharing Can Be Allow Or Not
Allow </Internet Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be On Or Off
</Network Cost Awareness Status>
Network Cost Awareness Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be Auto, No Limit, Economy, No Traffic </Policy>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<trust Level>
Trust Level Can Be Full Trust, Private, Public Or Restricted
</trust Level>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be Full Trust Or Restricted</Trust Level>
<Address> Address Can Be Ip Or Physical Address </Address>
<Ips Exclusion> Ips Exclusion </Ips Exclusion>
The Operations Are Add And Remove
</Device Trust>
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be Configured With
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be Allowed, Block Or Custom </Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be Configured With
<Traffic Rule>
<Active> Active Can Be On Or Off </Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be Allow, Block Or Monitor </Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers, Or Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be On Or Off </Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
On, If Explicitly Requested Or Off </Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>
17.3 Profile
The profile is built from the core specifications, adding a set of specifications that
make services ready for the IoT (a short sketch follows the list below), including:
• Configuration Management
• Fault Tolerance
• Security
• Metrics
• Health Checks
• JWT Authorization
• Type-safe REST Client
• OpenAPI
• OpenTracing
• Recovery
• Fallback
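The list above mirrors the Eclipse MicroProfile specifications, so, assuming the profile is consumed in the MicroProfile way on a compatible runtime, a service would typically expose a health check and guard remote calls with retry and fallback annotations. The sketch below uses the standard MicroProfile annotations; the sensor service, its method and the fallback value are hypothetical, and older runtimes use javax.* rather than jakarta.* for the CDI scope.

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Illustrative health check reported under the runtime's /health/live endpoint
// (Health Checks in the list above).
@Liveness
@ApplicationScoped
class SensorLiveness implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("sensor-service");
    }
}

// Illustrative IoT-facing bean using Fault Tolerance (Recovery and Fallback above).
@ApplicationScoped
public class SensorService {

    @Retry(maxRetries = 3)
    @Fallback(fallbackMethod = "lastKnownReading")
    public double readTemperature() {
        // Hypothetical call to a device gateway that may fail transiently.
        throw new IllegalStateException("gateway unavailable");
    }

    double lastKnownReading() {
        return 21.5; // cached value returned when the read keeps failing
    }
}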