

An Amateur’s Look at IoT Security and Its Processes


By A. J. Payne
1. Abstract
This paper reviews how some other technologies can contribute to IoT security and its
processes.
2. Introduction
This paper reviews how some other technologies can contribute to IoT security and its
processes. It consists of 12 further sections. The next gives a summary of the IoT.
Section 4 considers Intel Active Management Technology, whilst the fifth part
describes IoT security, followed by a section on IoT security solutions. The seventh
section is devoted to methodologies that can be added to the techniques for IoT
processing. The helpful theories include search theory, network theory, Markov theory,
algebraic theory, logic theory, programming language theory, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology, database technology, geographic information systems, curve fitting,
configuration management, continuous integration/delivery and virtual reality. These
techniques are applied in turn to IoT studies in the eighth part, covering entities
(address, database, entities, firmware, functions, hardware, languages, network
hardware, network media), services (access management, accounting management,
address management, application management, communications management, content
management, continuous delivery, data analysis, data transfer, network management,
protocol management, reliability and fault tolerance, resource management, search
management, security management, service engineering management, statistics
management, status management, test facility management, development facility
management, virtualisation), standards and techniques (cognitive networks,
cooperative networks, machine learning, neural networks, reinforcement learning,
self-organizing distributed networks, surrogate models, time series analysis) and
communications. In the ninth section we study IoT security processing from the view
of its different activities, with the requirements described in part 10. Part 11 gives a
specification of the IoT security tools, whose implementation is specified in section 12.
The penultimate part presents the conclusions of the paper, whilst the final section is a
set of references to help the reader.
3. Internet of Things
3.1 Commentary
3.1.1 Definition
The Internet of things (IoT) is a set of connected devices and systems that enables
them to collect, store and exchange data,[1][2][3] giving virtual cyber-systems and the
smart grid.[13] It allows remote actions[4] for integration and automation to give
better efficiency, accuracy and economic benefit through reduced human
intervention,[5][6][7][8][9][10] using various protocols, domains, and applications.[12]
Each part is uniquely identifiable through its embedded computing system but is able
to interoperate within the existing Internet infrastructure. All this calls for techniques
for fast aggregation, storage and processing of the data. The IoT uses a unique
addressing system in which every thing has a URI;[93] things are controlled through
agents and referenced through IPv6 with its stateless address auto-configuration and
IETF 6LoWPAN header compression.[94][95][96][97] This is improved with the
GS1/EPCglobal EPC Information Services (EPCIS) specifications[98] and used for
objects in many industries.[99]
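To make the stateless address auto-configuration (SLAAC) mechanism concrete, here
is a minimal Python sketch that derives a modified EUI-64 interface identifier from a
device's MAC address and appends it to a router-advertised /64 prefix; the prefix and
MAC are hypothetical, for illustration only.

```python
# Sketch of IPv6 stateless address auto-configuration (SLAAC):
# build a modified EUI-64 interface identifier from a MAC address.
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                       # flip the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])  # insert FFFE

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)     # /64 advertised by the router
    return net[int.from_bytes(eui64_interface_id(mac), "big")]

# Hypothetical documentation prefix and MAC address.
print(slaac_address("2001:db8:1:2::/64", "00:1b:63:84:45:e6"))
# -> 2001:db8:1:2:21b:63ff:fe84:45e6
```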
3.1.2 Applications
3.1.2.1 Predictions
Gartner, Inc. predicts 20.8 billion devices on the IoT by 2020,[40] while ABI
Research anticipates more than 30 billion devices connected by wireless.[41]
3.1.2.2 General Information
IoT applications collect information in natural ecosystems, buildings (security
and automation[53]) and factories,[47] as in environmental sensing and urban planning.
[48] The IoT leads to intelligent shopping systems in which customers are profiled by
tracking their mobile phones; conversely, customers can automatically order favourites
from optimal suppliers via phone.[49][50] It is found in utility management and
assisted transportation systems.[51][52] Biological sensors allow cloud-based analysis
of DNA or other molecules.[54][55] This leads to IoT products and solutions with
different characteristics.[56]
3.1.2.3 Media
Media industries use data mining to target consumers with a message or content,
using the best technology at optimal times in optimal locations consistent with the
consumer's mindset.[57] The IoT creates opportunities to measure, collect and analyse
behavioural statistics, targeting the projection of products, services, policies,
propaganda and ideas to develop economic growth and competitiveness.[58][59][60]
3.1.2.4 Environmental monitoring
The IoT uses individual sensors in environmental protection[61] to check air or water
quality,[16] atmospheric or soil conditions,[62] and movements of wildlife and habitats.
[63] Distributed and mobile sensors over large areas give earthquake or tsunami early-
warning systems for emergency services.[47] 
3.1.2.5 Infrastructure management
The IoT is ideal for monitoring and controlling urban and rural infrastructure, e.g.
bridges, railways, wind-farms and waste processing, by monitoring events or changes
in structures for safety and risk.[65][67] It can also be used to improve maintenance
scheduling through coordination[47] and thus to give better operational management
and metrics.[66]
3.1.2.6 Manufacturing
The IoT gives opportunities for control, automation, optimisation and management of
the full manufacturing life cycle from asset and situation management, process control,
safety, security and supply chain through predictive metrics and analytics.[68] [47][65]
[69][70][71][72][73][74][75] 
3.1.2.7 Energy management
When devices can collect information, act on it and pass it to management centres
via communications, the IoT gives the opportunity to control and optimise utility
facilities from generation to consumption, including remote operation by the
customer.[47][77] The information collected gives the chance for automation to
improve the efficiency, reliability, economics, and sustainability of production and
distribution.[77]
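A minimal sketch of this collect, act and report loop is given below; the
management-centre URL, the meter-reading function and the load-shedding threshold
are all hypothetical.

```python
# Sketch of an IoT energy-management loop: collect a reading, act on it
# locally, and pass it to a management centre over HTTP.
import json, time, urllib.request

MANAGEMENT_CENTRE = "http://management.example.com/telemetry"  # hypothetical

def read_power_kw() -> float:
    return 3.2                        # stand-in for a real meter driver

while True:
    reading = {"device": "meter-01", "kw": read_power_kw(), "ts": time.time()}
    if reading["kw"] > 5.0:           # act locally on the information
        print("load shedding triggered")
    req = urllib.request.Request(     # pass the information to the centre
        MANAGEMENT_CENTRE, data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    time.sleep(60)
```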
3.1.2.8 Medical and healthcare
The concept of the IoT for collecting data from and activating devices promises great
opportunities for supervising health monitoring and emergency systems remotely.[47]
It can introduce smart beds, bed occupancy checks and vital-signs monitoring.[78][79]
[80]
3.1.2.9 Building and home automation
IoT devices can be used to monitor and control mechanical, electrical and electronic
systems used in all types of buildings [47] with automation systems.
3.1.2.10 Transportation
The IoT gives integration of communications, control, and information processing
across transportation systems (the vehicle, the infrastructure, and the driver or user).
It leads to communication within and between vehicles, smart traffic control, parking
and toll systems, logistics and fleet management, vehicle control, and safety and
assistance.[47]
3.1.2.11 Consumer applications
Entrepreneurs have taken the opportunity to create IoT devices for the connected car,
entertainment, residences and smart homes, wearable technology, connected health
and smart retail. Some are criticized for lack of useful value[90][91] and for poor
security.[92]
3.1.3 Trends and characteristics
3.1.3.1 Intelligence
The IoT has started to integrate ambient intelligence and autonomous control into its
environment. It could develop into a non-deterministic, open network of self-organized,
intelligent entities (Web services, SOA components) and virtual objects (avatars) that
are interoperable and act independently, driven by shared or individual objectives
arising from context, circumstances or environments.[100]
3.1.3.2 Architecture
The model for the IoT is based on event-driven architecture,[101] built bottom-up from
processes and operations in real time, with model-driven and functional approaches
evolving through multi-agent systems, B-ADSc, etc., a semantic web[102] and BPM
Everywhere towards general standards. Sensor networks can also be considered a
priority in the IoT.[103]
The IoT network architecture[104] uses IETF 6LoWPAN and IPv6[40] to deal with the
number of devices, and the IETF's Constrained Application Protocol (CoAP), MQTT and
ZeroMQ for lightweight data transport. The concept of fog computing helps to absorb
bursts of data through the network;[105] by distributing the analysis and processing of
data to the edge of the network, real-time responses are helped.
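As an illustration of the lightweight transports named above, the sketch below
publishes a sensor reading over MQTT with the paho-mqtt client library; the library
choice, broker host and topic are assumptions, not something the text prescribes.

```python
# Sketch of lightweight IoT data transport over MQTT (paho-mqtt 1.x API).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)   # hypothetical broker
client.loop_start()
payload = json.dumps({"sensor": "temp-01", "celsius": 21.5})
info = client.publish("site/room1/temperature", payload, qos=1)
info.wait_for_publish()                      # block until the broker acks
client.loop_stop()
client.disconnect()
```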
3.1.3.3 Complexity
The IoT is regarded as a complex system[106] because of its links, actors, and dynamic
structure. Some elements are kept local, as subsystems, to control some of the
complexity and to reduce risks to privacy, control and reliability.
3.1.3.4 Size considerations
The scale of the IoT leaves an impossible job for humans, so devices and connections
must be monitored by machine.[107]
3.1.3.5 Space considerations
The IoT makes the location and size of an element dependent on time[108] and on the
device.[109] This leads to problems of data accuracy, data storage, data access and
geographic operations.
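As one example of such a geographic operation, the following sketch computes the
great-circle (haversine) distance between two sensor locations; the coordinates are
illustrative.

```python
# Haversine distance between two (latitude, longitude) points, in km.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))        # mean Earth radius, 6371 km

print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))  # London-Paris, ~343 km
```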
3.1.3.6 Basket of Remotes
The variation in IoT devices and the lack of hardware and interface standards have led
manufacturers to release open APIs and cloud connections to predict the user's next
action and trigger a reaction.[111][112][113]
3.1.3.7 Enabling technologies for IoT
Communication and networking are the major basis of the IoT,[115][116][117][118]
built on wireless and wired technology. Examples of short-range wireless are
Bluetooth low energy (BLE), Light-Fidelity (Li-Fi), Near-field communication (NFC), QR
codes and barcodes, Radio-frequency identification (RFID), Thread (a network
protocol), Wi-Fi, Wi-Fi Direct, Z-Wave and ZigBee. Examples of medium-range wireless
are HaLow and LTE-Advanced. Examples of long-range wireless are Low-power
wide-area networking (LPWAN) and Very small aperture terminal (VSAT). Examples of
wired technology are Ethernet, Multimedia over Coax Alliance (MoCA) and Power-line
communication (PLC); HomePlug utilizes PLC for networking IoT devices.
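As a small illustration of one of the short-range technologies above, the sketch below
scans for nearby BLE devices using the bleak library (the library is an assumption; the
text names only the radio technology).

```python
# Sketch of BLE device discovery with the bleak library.
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

asyncio.run(main())
```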
3.1.4 Criticism and controversies
3.1.4.1 Platform fragmentation
Variations in IoT hardware and software and a lack of standards[127][128][129][130]
[131][132][133] lead to inconsistent interfaces[1] and make customers fear that
proprietary software, hardware or protocols will not be future-proof.[2] Security and
updates are troublesome for older and lower-priced devices.[134][135][136][137][138]
3.1.4.2 Privacy, autonomy and control
The IoT can empower the individual through the visibility of information, but it can
also give control and manipulation to authorities.[139] The aggregation of data can
lead to problems of privacy.[140] Basic problems to be considered are user consent,
freedom of choice, anonymity and time limits on the data collected.[151]
3.1.4.3 Data storage and analytics
IoT applications clean, process and interpret large quantities of data; Wireless Sensor
Networks have been proposed as a solution[154] and hold large amounts of data.
3.1.4.4 Security
The IoT's rate of expansion has led to problems with security[155] and regulation.[156]
Firewalls, security updates and anti-malware systems can overwhelm the limited
resources of IoT devices. Vehicles and medical devices have been shown to be
vulnerable to hackers,[161][162][163][164] and spying can be carried out on people
through devices and databases.[165][166][159][160] The Internet of Things Security
Foundation (IoTSF) was set up to counteract these security problems.[167][168]
3.1.4.5 Design
The IoT must be designed and managed to be future-proof in scale and interfaces,
constraining systems to avoid the risks of failure and of user error.[169][170]
3.1.4.6 Environmental sustainability impact
IoT devices tend to have short lives and impact the environment through their
manufacture, use and disposal.[171] The devices use heavy metals, expensive
rare-earths and toxic chemicals, which make them hard to recycle.
3.1.4.7 Intentional obsolescence of devices
It has become industry practice to intentionally make hardware or software obsolete
or to force a subscription for the lifetime of the system.[172][173]
3.1.4.8 Confusing terminology
IoT terminology is confusing because a company may be working in sensor technology,
networking, embedded systems or analytics.[174] There is overlap and technological
convergence among the Internet of Things (IoT), Internet of Everything (IoE), the
Industrial Internet, Pervasive Computing, Pervasive Sensing, Ubiquitous Computing,
Cyber-Physical Systems (CPS), Wireless Sensor Networks (WSN), Smart Objects,
Cooperating Objects, Machine-to-Machine (M2M), Ambient Intelligence (AmI),
Operational Technology (OT), and Information Technology (IT).[174] The Industrial
Internet Consortium's Vocabulary Task Group has defined a set of terms for consistent
terminology in the sector,[175][176] and IoT One has started an IoT Terms
Database[177][178] to remove ambiguity and complexity.
3.1.5 IoT adoption barriers
3.1.5.1 Complexity and unclear value propositions
The IoT has had trouble gaining adoption because solutions often lack focus on doing
the customer's job or are too expensive.[179][180][181][182][183]
3.1.5.2 Privacy and security concerns
The IoT raises worries over privacy invasion and over whether security can succeed,
with manufacturers concentrating on profit and users losing control of their data.[184]
[185][186][187][188][189][190][191][192]
3.1.5.3 Traditional governance structures
People feel that the IoT does not support their organizational structures or processes.
[193][194] Management does not lead the introduction of the IoT and has to be pushed
by customers, competitors or regulation.[195][196][197][198][199]
3.2 References
1. Brown, Eric (13 September 2016). "Who Needs the Internet of
Things?". Linux.com. Retrieved 23 October 2016.
2. Brown, Eric (20 September 2016). "21 Open Source Projects for IoT". Linux.com.
Retrieved 23 October 2016.
3. "Internet of Things Global Standards Initiative". ITU. Retrieved 26 June 2015.
4. "Internet of Things: Science Fiction or Business Fact?" (PDF). Harvard Business
Review. November 2014. Retrieved 23 October 2016.
5. Vermesan, Ovidiu; Friess, Peter (2013). Internet of Things: Converging
Technologies for Smart Environments and Integrated Ecosystems (PDF). Aalborg,
Denmark: River Publishers. ISBN 978-87-92982-96-4.
6. "An Introduction to the Internet of Things (IoT)" (PDF). Cisco.com. San Francisco,
California: Lopez Research. November 2013. Retrieved 23 October 2016.
7. Santucci, Gérald. "The Internet of Things: Between the Revolution of the Internet
and the Metamorphosis of Objects" (PDF). European Commission Community Research
and Development Information Service. Retrieved 23 October 2016.
8. Mattern, Friedemann; Floerkemeier, Christian. "From the Internet of Computers
to the Internet of Things" (PDF). ETH Zurich. Retrieved 23 October 2016.
9. Reddy, Aala Santhosh (May 2014). "Reaping the Benefits of the Internet of
Things"(PDF). Cognizant. Retrieved 23 October 2016.
10. Lindner, Tim (13 July 2015). "The Supply Chain: Changing at the Speed of
Technology". Connected World. Retrieved 18 September 2015.
11. Evans, Dave (April 2011). "The Internet of Things: How the Next Evolution of the
Internet Is Changing Everything" (PDF). Cisco. Retrieved 15 February 2016.
12. Höller, J.; Tsiatsis, V.; Mulligan, C.; Karnouskos, S.; Avesand, S.; Boyle, D.
(2014). From Machine-to-Machine to the Internet of Things: Introduction to a New Age
of Intelligence. Elsevier. ISBN 978-0-12-407684-6.
13. Monnier, Olivier (8 May 2014). "A smarter grid with the Internet of Things". Texas
Instruments.
14. Hwang, Jong-Sung; Choe, Young Han (February 2013). "Smart Cities Seoul: a
case study" (PDF). ITU-T Technology Watch. Retrieved 23 October 2016.
15. Zanella, Andrea; Bui, Nicola; Castellani, Angelo; Vangelista, Lorenzo; Zorzi,
Michele (February 2014). "Internet of Things for Smart Cities". IEEE Internet of Things
Journal. 1 (1): 22–32. Retrieved 26 June 2015.
16. "Molluscan eye". Retrieved 26 June 2015.
17. Erlich, Yaniv (2015). "A vision for ubiquitous sequencing". Genome
Research. 25 (10): 1411–1416. doi:10.1101/gr.191692.115. ISSN 1088-
9051. PMC 4579324 . PMID 26430149.
18. Wigmore, I. (June 2014). "Internet of Things (IoT)". TechTarget.
19. Noto La Diega, Guido; Walden, Ian (1 February 2016). "Contracting for the
'Internet of Things': Looking into the Nest". Queen Mary School of Law Legal Studies
Research Paper No. 219/2016. SSRN 2725913 .
20. Hendricks, Drew. "The Trouble with the Internet of Things". London Datastore.
Greater London Authority. Retrieved 10 August 2015.
21. Violino, Bob. "The 'Internet of things' will mean really, really big data". InfoWorld.
Retrieved 9 July 2014.
22. Hogan, Michael. "The 'The Internet of Things Database' Data Management
Requirements". ScaleDB. Retrieved 15 July 2014.
23. "Internet of Things (IoT)". gatewaytechnolabs.com.
24. "The "Only" Coke Machine on the Internet". Carnegie Mellon University.
Retrieved 10 November 2014.
25. "Internet of Things Done Wrong Stifles Innovation". InformationWeek. 7 July
2014. Retrieved 10 November 2014.
26. Mattern, Friedemann; Floerkemeier, Christian (2010). "From the Internet of
Computers to the Internet of Things" (PDF). Informatik-Spektrum. 33 (2): 107–
121. doi:10.1007/s00287-010-0417-7. Retrieved 3 February 2014.
27. Weiser, Mark (1991). "The Computer for the 21st Century" (PDF). Scientific
American. 265 (3): 94–
104. Bibcode:1991SciAm.265c..94W. doi:10.1038/scientificamerican0991-94.
Retrieved 5 November 2014.
28. Raji, RS (June 1994). "Smart networks for control". IEEE Spectrum.
29. Pontin, Jason (29 September 2005). "ETC: Bill Joy's Six Webs". MIT Technology
Review. Retrieved 17 November 2013.
30. Analyst Anish Gaddam interviewed by Sue Bushell in Computerworld, on 24 July
2000 ("M-commerce key to ubiquitous internet")
31. Magrassi, P. (2 May 2002). "Why a Universal RFID Infrastructure Would Be a Good
Thing". Gartner research report G00106518.
32. "Peter Day's World of Business". BBC World Service. BBC. Retrieved 4
October 2016.
33. Magrassi, P.; Berg, T (12 August 2002). "A World of Smart Objects". Gartner
research report R-17-2243.
34. Commission of the European Communities (18 June 2009). "Internet of Things —
An action plan for Europe" (PDF). COM(2009) 278 final.
35. Wood, Alex (31 March 2015). "The internet of things is revolutionizing our lives,
but standards are a must". The Guardian.
36. "From M2M to The Internet of Things: Viewpoints From Europe". Techvibes. 7
July 2011.
37. Sristava, Lara (16 May 2011). "The Internet of Things – Back to the Future
(Presentation)". European Commission Internet of Things Conference in Budapest – via
YouTube.
38. Magrassi, P.; Panarella, A.; Deighton, N.; Johnson, G. (28 September
2001). "Computers to Acquire Control of the Physical World". Gartner research report T-
14-0301.
39. "The Evolution of Internet of Things". Casaleggio Associati. February 2011.[need
quotation to verify]
40. "Gartner Says 6.4 Billion Connected "Things" Will Be in Use in 2016, Up 30
Percent From 2015". Gartner. 10 November 2015. Retrieved 21 April 2016.
41. "More Than 30 Billion Devices Will Wirelessly Connect to the Internet of
Everything in 2020". ABI Research. 9 May 2013.
42. Fickas, S.; Kortuem, G.; Segall, Z. (13–14 Oct 1997). "Software organization for
dynamic and adaptable wearable systems". International Symposium on Wearable
Computers: 56–63. doi:10.1109/ISWC.1997.629920. ISBN 0-8186-8192-6.
43. "Main Report: An In-depth Look at Expert Responses". Pew Research Center:
Internet, Science & Tech. 14 May 2014. Retrieved 26 June 2015.
44. "Behind The Numbers: Growth in the Internet of Things". The Connectivist.
Retrieved 26 June 2015.
45. "Budget 2015: some of the things we've announced". HM Treasury. Retrieved 31
March 2015.
46. Vongsingthong, S.; Smanchat, S. (2014). "Internet of Things: A review of
applications & technologies" (PDF). Suranaree Journal of Science and Technology.
47. Ersue, M.; Romascanu, D.; Schoenwaelder, J.; Sehgal, A. (4 July
2014). "Management of Networks with Constrained Devices: Use Cases". IETF Internet
Draft.
48. Mitchell, Shane; Villa, Nicola; Stewart-Weeks, Martin; Lange, Anne. "The Internet
of Everything for Cities: Connecting People, Process, Data, and Things To Improve the
'Livability' of Cities and Communities" (PDF). Cisco Systems. Retrieved 10 July 2014.
49. Narayanan, Ajit. "Impact of Internet of Things on the Retail Industry". PCQuest.
Cyber Media Ltd. Retrieved 20 May 2014.
50. CasCard; Gemalto; Ericsson. "Smart Shopping: spark deals" (PDF). EU FP7
BUTLER Project.
51. Kyriazis, D.; Varvarigou, T.; Rossi, A.; White, D.; Cooper, J. (4–7 June 2013).
"Sustainable smart city IoT applications: Heat and electricity management & Eco-
conscious cruise control for public transportation". IEEE International Symposium and
Workshops on a World of Wireless, Mobile and Multimedia Networks (WoWMoM):
1. doi:10.1109/WoWMoM.2013.6583500. ISBN 978-1-4673-5827-9.
52. Eggimann, Sven; Mutzner, Lena; Wani, Omar; Mariane Yvonne, Schneider;
Spuhler, Dorothee; Beutler, Philipp; Maurer, Max (2017). "The potential of knowing more
– a review of data-driven urban water management". Environmental Science &
Technology.
53. Witkovski, Adriano (2015). "An IdM and Key-based Authentication Method for
providing Single Sign-On in IoT" (PDF). Proceedings of the IEEE GLOBECOM:
1. doi:10.1109/GLOCOM.2015.7417597. ISBN 978-1-4799-5952-5.
54. Clark, Liat. "Oxford Nanopore: we want to create the internet of living
things". Wired UK. Retrieved 8 December 2015.
55. "Making your home 'smart', the Indian way". The Times of India. Retrieved 26
June 2015.
56. Perera, Charith; Liu, Harold; Jayawardena, Srimal (2015). "The Emerging Internet
of Things Marketplace From an Industrial Perspective: A Survey". Emerging Topics in
Computing, IEEE Transactions on. PrePrint (4): 585. doi:10.1109/TETC.2015.2390034.
Retrieved 1 February 2015.
57. Couldry, Nick; Turow, Joseph (2014). "Advertising, Big Data, and the Clearance
of the Public Realm: Marketers' New Approaches to the Content Subsidy". International
Journal of Communication. 8: 1710–1726.
58. Moss, Jamie (20 June 2014). "The internet of things: unlocking the marketing
potential". The Guardian. Retrieved 31 March 2015.
59. Meadows-Klue, Danny. "A new era of personal data unlocked in an "Internet of
Things"". Digital Strategy Consulting. Retrieved 26 January 2015.
60. Millman, Rene. "6 real-life examples of IoT disrupting retail". Internet of
Business. Retrieved 21 February 2016.
61. Davies, Nicola. "How the Internet of Things will enable 'smart
buildings'". Extreme Tech.
62. Li, Shixing; Wang, Hong; Xu, Tao; Zhou, Guiping (2011). "Application Study on
Internet of Things in Environment Protection Field". Lecture Notes in Electrical
Engineering Volume. Lecture Notes in Electrical Engineering. 133: 99–
106. doi:10.1007/978-3-642-25992-0_13. ISBN 978-3-642-25991-3.
63. "Use case: Sensitive wildlife monitoring". FIT French Project. Retrieved 10
July 2014.
64. Hart, Jane K.; Martinez, Kirk (1 May 2015). "Toward an environmental Internet of
Things". Earth & Space Science. doi:10.1002/2014EA000044.
65. Gubbi, Jayavardhana; Buyya, Rajkumar; Marusic, Slaven; Palaniswami,
Marimuthu (24 February 2013). "Internet of Things (IoT): A vision, architectural
elements, and future directions". Future Generation Computer Systems. 29 (7): 1645–
1660. doi:10.1016/j.future.2013.01.010.
66. Chui, Michael; Löffler, Markus; Roberts, Roger. "The Internet of
Things". McKinsey Quarterly. McKinsey & Company. Retrieved 10 July 2014.
67. "Smart Trash". Postscapes. Retrieved 10 July 2014.
68. Severi, S.; Abreu, G.; Sottile, F.; Pastrone, C.; Spirito, M.; Berens, F. (23–26 June
2014). "M2M Technologies: Enablers for a Pervasive Internet of Things". The European
Conference on Networks and Communications (EUCNC2014).
69. Tan, Lu; Wang, Neng (20–22 August 2010). "Future Internet: The Internet of
Things". 3rd International Conference on Advanced Computer Theory and Engineering
(ICACTE). 5: 376–380. doi:10.1109/ICACTE.2010.5579543. ISBN 978-1-4244-6539-2.
70. "Center for Intelligent Maintenance Systems". IMS Center. Retrieved 8
March 2016.
71. Lee, Jay (1 December 2003). "E-manufacturing—fundamental, tools, and
transformation". Robotics and Computer-Integrated Manufacturing. Leadership of the
Future in Manufacturing. 19 (6): 501–507. doi:10.1016/S0736-5845(03)00060-7.
72. Daugherty, Paul; Negm, Walid; Banerjee, Prith; Alter, Allan. "Driving
Unconventional Growth through the Industrial Internet of Things" (PDF). Accenture.
Retrieved 17 March 2016.
73. Lee, Jay; Bagheri, Behrad; Kao, Hung-An (2015). "A cyber-physical systems
architecture for industry 4.0-based manufacturing systems". Manufacturing Letters. 3:
18–23. doi:10.1016/j.mfglet.2014.12.001.
74. Lee, Jay (2015). Industrial Big Data. China: Mechanical Industry Press. ISBN 978-
7-111-50624-9.
75. "Industrial Internet Insights Report" (PDF). Accenture. Retrieved 17 March 2016.
76. Lee, Jay (19 November 2014). "Keynote Presentation: Recent Advances and
Transformation Direction of PHM". Roadmapping Workshop on Measurement Science
for Prognostics and Health Management of Smart Manufacturing Systems Agenda.
NIST.
77. Parello, J.; Claise, B.; Schoening, B.; Quittek, J. (28 April 2014). "Energy
Management Framework". IETF Internet Draft <draft-ietf-eman-framework-19>.
78. "Can we expect the Internet of Things in healthcare?". IoT Agenda.
Retrieved 2016-11-21.
79. Istepanian, R.; Hu, S.; Philip, N.; Sungoor, A. (2011). "The potential of Internet of
m-health Things "m-IoT" for non-invasive glucose level sensing". Annual International
Conference of the IEEE Engineering in Medicine and Biology Society
(EMBC). doi:10.1109/IEMBS.2011.6091302. ISBN 978-1-4577-1589-1.
80. Swan, Melanie (8 November 2012). "Sensor Mania! The Internet of Things,
Wearable Computing, Objective Metrics, and the Quantified Self 2.0". Sensor and
Actuator Networks. 1 (3): 217–253. doi:10.3390/jsan1030217.
81. Rico, Juan (22–24 April 2014). "Going beyond monitoring and actuating in large
scale smart cities". NFC & Proximity Solutions – WIMA Monaco.
82. "A vision for a city today, a city of vision tomorrow". Sino-Singapore Guangzhou
Knowledge City. Retrieved 11 July 2014.
83. "San Jose Implements Intel Technology for a Smarter City". Intel Newsroom.
Retrieved 11 July 2014.
84. "Western Singapore becomes test-bed for smart city solutions". Coconuts
Singapore. Retrieved 11 July 2014.
85. Lipsky, Jessica. "IoT Clash Over 900 MHz Options". EETimes. Retrieved 15
May 2015.
86. Alleven, Monica. "Sigfox launches IoT network in 10 UK cities". Fierce Wireless
Tech. Retrieved 13 May 2015.
87. Merritt, Rick. "13 Views of IoT World". EETimes. Retrieved 15 May 2015.
88. Fitchard, Kevin. "Sigfox brings its internet of things network to San
Francisco". Gigaom. Retrieved 15 May 2015.
89. "STE Security Innovation Awards Honorable Mention: The End of the
Disconnect". securityinfowatch.com. Retrieved 12 August 2015.
90. Franceschi-Bicchierai, Lorenzo. "When the Internet of Things Starts to Feel Like
the Internet of Shit". Motherboard. Vice Media Inc. Retrieved 27 June 2016.
91. Serebrin, Jacob. "Connected Lab Picks Up Where Xtreme Labs Left
Off". Techvibes. Techvibes Inc. Retrieved 27 June 2016.
92. Porup, J.M. ""Internet of Things" security is hilariously broken and getting
worse". Ars Technica. Condé Nast. Retrieved 27 June 2016.
93. Dan Brickley et al., c. 2001
94. Waldner, Jean-Baptiste (2008). Nanocomputers and Swarm Intelligence. London:
ISTE. pp. 227–231. ISBN 1-84704-002-0.
95. Kushalnagar, N.; Montenegro, G.; Schumacher, C. (August 2007). IPv6 over Low-
Power Wireless Personal Area Networks (6LoWPANs): Overview, Assumptions, Problem
Statement, and Goals. IETF. RFC 4919.
96. Sun, Charles C. (1 May 2014). "Stop using Internet Protocol Version
4!". Computerworld.
97. Thomson, S.; Narten, T.; Jinmei, T. (September 2007). IPv6 Stateless Address
Autoconfiguration. IETF. RFC 4862.
98. "EPCIS – EPC Information Services Standard". GS1. Retrieved 2 January 2014.
99. Miles, Stephen B. (2011). RFID Technology and Applications. London: Cambridge
University Press. pp. 6–8. ISBN 978-0-521-16961-5.
100. Alippi, C. (2014). Intelligence for Embedded Systems. Springer Verlag. ISBN 978-
3-319-05278-6.
101. Gautier, Philippe (2007). "RFID et acquisition de données évènementielles :
retours d'expérience chez Bénédicta". Systèmes d'Information et Management – revue
trimestrielle (in French). ESKA. 12 (2): 94–96. ISBN 978-2-7472-1290-8. ISSN 1260-4984.
102. Fayon, David (17 April 2010). "3 questions to Philippe Gautier". i-o-t.org.
Retrieved 26 June 2015.
103. Perera, Charith; Zaslavsky, Arkady; Christen, Peter & Georgakopoulos, Dimitrios
(2013). "Context Aware Computing for The Internet of Things: A
Survey". Communications Surveys Tutorials, IEEE. PP (n/a): 1–
44. doi:10.1109/SURV.2013.042313.00197.
104. Pal, Arpan (May–June 2015). "Internet of Things: Making the Hype a
Reality" (PDF). IT Pro. IEEE Computer Society. Retrieved 10 April 2016.
105. MIST: Fog-based Data Analytics Scheme with Cost-Efficient Resource
Provisioning for IoT Crowdsensing Applications.
106. Gautier, Philippe; Gonzalez, Laurent (2011). L'Internet des Objets... Internet, mais
en mieux (PDF). Foreword by Gérald Santucci (European commission), postword by
Daniel Kaplan (FING) and Michel Volle. Paris: AFNOR editions. ISBN 978-2-12-465316-4.
107. Waldner, Jean-Baptiste (2007). Nanoinformatique et intelligence ambiante.
Inventer l'Ordinateur du XXIeme Siècle. London: Hermes Science. p. 254. ISBN 2-7462-
1516-0.
108. "OGC SensorThings API standard specification". OGC. Retrieved 15
February 2016.
109. "OGC Sensor Web Enablement: Overview And High Level Architecture". OGC.
Retrieved 15 February 2016.
110. "The Enterprise Internet of Things Market". Business Insider. 25 February 2015.
Retrieved 26 June 2015.
111. Kharif, Olga (8 January 2014). "Cisco CEO Pegs Internet of Things as $19 Trillion
Market". Bloomberg.com. Retrieved 26 June 2015.
112. "Internet of Things: The "Basket of Remotes" Problem". Monday Note.
Retrieved 26 June 2015.
113. "Better Business Decisions with Advanced Predictive Analytics". Intel.
Retrieved 26 June 2015.
114. "Tech pages/IoT systems". Retrieved 26 June 2015.
115. Want, Roy; Bill N. Schilit, Scott Jenson (2015). "Enabling the Internet of
Things". 1. Sponsored by IEEE Computer Society. IEEE. pp. 28–35.
116. Al-Fuqaha, A.; Guizani, M.; Mohammadi, M.; Aledhari, M.; Ayyash, M. (2015-01-
01). "Internet of Things: A Survey on Enabling Technologies, Protocols, and
Applications". IEEE Communications Surveys Tutorials. 17 (4): 2347–
2376. doi:10.1109/COMST.2015.2444095. ISSN 1553-877X.
117. "The Internet of Things: a jumbled mess or a jumbled mess?". The Register.
Retrieved 5 June 2016.
118. "Can we talk? Internet of Things vendors face a communications
'mess'". Computerworld. Retrieved 5 June 2016.
119. Philip N. Howard, The Internet of Things is Posed to Change Democracy
Itself, Politico, 2015.06.01
120. Thompson, Kirsten; Mattalo, Brandon (24 November 2015). "The Internet of
Things: Guidance, Regulation and the Canadian Approach". CyberLex. Retrieved 23
October 2016.
121. "The Question of Who Owns the Data Is About to Get a Lot Trickier". Fortune. 6
April 2016. Retrieved 23 October 2016.
122. "The 'Internet of Things': Legal Challenges in an Ultra-connected World". Mason
Hayes & Curran. 22 January 2016. Retrieved 23 October 2016.
123. Brown, Ian (2015). "Regulation and the Internet of Things" (PDF). Oxford Internet
Institute. Retrieved 23 October 2016.
124. "FTC Report on Internet of Things Urges Companies to Adopt Best Practices to
Address Consumer Privacy and Security Risks". Federal Trade Commission. 27 January
2015. Retrieved 23 October 2016.
125. Lawson, Stephen (2 March 2016). "IoT users could win with a new bill in the US
Senate". MIS-Asia. Retrieved 23 October 2016.
126. Pittman, F. Paul (2 February 2016). "Legal Developments in Connected Car Arena
Provide Glimpse of Privacy and Data Security Regulation in Internet of
Things". Lexology. Retrieved 23 October 2016.
127. Wieland, Ken (25 February 2016). "IoT experts fret over fragmentation". Mobile
World.
128. Wallace, Michael (19 February 2016). "Fragmentation is the enemy of the
Internet of Things". Qualcomm.com.
129. Bauer, Harald; Patel, Mark; Veira, Jan (October 2015). "Internet of Things:
Opportunities and challenges for semiconductor companies". McKinsey & Co.
130. Ardiri, Aaron (8 July 2014). "Will fragmentation of standards only hinder the true
potential of the IoT industry?". evothings.com.
131. "IoT Brings Fragmentation in Platform" (PDF). arm.com.
132. Raggett, Dave (27 April 2016). "Countering Fragmentation with the Web of
Things: Interoperability across IoT platforms" (PDF). W3C.
133. Kovach, Steve (30 July 2013). "Android Fragmentation Report". Business Insider.
Retrieved 19 October 2013.
134. Piedad, Floyd N. "Will Android fragmentation spoil its IoT appeal?". TechBeacon.
135. Franceschi-Bicchierai, Lorenzo. "Goodbye, Android". Motherboard. Vice.
Retrieved August 2, 2015.
136. Kingsley-Hughes, Adrian. "The toxic hellstew survival guide". ZDnet. Retrieved 2
August 2015.
137. Tung, Liam (13 October 2015). "Android security a 'market for lemons' that
leaves 87 percent vulnerable". ZDNet. Retrieved 14 October 2015.
138. Thomas, Daniel R.; Beresford, Alastair R.; Rice, Andrew. "Security Metrics for the
Android Ecosystem" (PDF). Computer Laboratory, University of
Cambridge. doi:10.1145/2808117.2808118. Retrieved 14 October 2015.
139. Howard, Philip N. (2015). Pax Technica: How the internet of things May Set Us
Free, Or Lock Us Up. New Haven, CT: Yale University Press. ISBN 978-0-30019-947-5.
140. McEwan, Adrian (2014). "Designing the Internet of Things" (PDF). Retrieved 1
June 2016.
141. Reddington, Clare. "Connected Things and Civic Responsibilities". Storify.
Retrieved 20 May 2016.
142. "Panopticon as a metaphor for the internet of things" (PDF). The Council of the
Internet of Things. Retrieved 6 June 2016.
143. "Foucault" (PDF). UCLA.
144. "Deleuze - 1992 - Postscript on the Societies of Control" (PDF). UCLA.
145. Yoshigoe, Kenji; Dai, Wei; Abramson, Melissa; Jacobs, Alexander (2015).
"Overcoming Invasion of Privacy in Smart Home Environment with Synthetic Packet
Injection". TRON Symposium (TRONSHOW):
1. doi:10.1109/TRONSHOW.2014.7396875. ISBN 978-4-8936-2317-1.
146. Verbeek, Peter-Paul (2011). Moralizing Technology: Understanding and Designing
the Morality of Things. Chicago: The University of Chicago Press. ISBN 978-0-22685-
291-1.
147. Cardwell, Diane (18 February 2014). "At Newark Airport, the Lights Are On, and
They're Watching You". The New York Times.
148. Hardy, Quentin (4 February 2015). "Tim O'Reilly Explains the Internet of
Things". The New York Times Bits. The New York Times. Retrieved 18 May 2015.
149. Webb, Geoff (5 February 2015). "Say Goodbye to Privacy". WIRED. Retrieved 15
February 2015.
150. Crump, Catherine; Harwood, Matthew (25 March 2014). "The Net Closes Around
Us". TomDispatch.
151. Perera, Charith; Ranjan, Rajiv; Wang, Lizhe; Khan, Samee; Zomaya, Albert (2015).
"Privacy of Big Data in the Internet of Things Era". IEEE IT Professional
Magazine. arXiv:1412.8339 .
152. Brown, Ian (12 February 2013). "Britain's Smart Meter Programme: A Case Study
in Privacy by Design". ssrn.com. SSRN 2215646 .
153. "The Societal Impact of the Internet of Things" (PDF). British Computer Society.
14 February 2013. Retrieved 23 October 2016.
154. Gubbi, Jayavardhana; Buyya, Rajkumar; Marusic, Slaven; Palaniswami,
Marimuthu (2013-09-01). "Internet of Things (IoT): A vision, architectural elements, and
future directions". Future Generation Computer Systems. Including Special sections:
Cyber-enabled Distributed Computing for Ubiquitous Cloud and Network Services &
Cloud Computing and Scientific Applications — Big Data, Scalable Analytics, and
Beyond. 29 (7): 1645–1660. doi:10.1016/j.future.2013.01.010.
155. Singh, Jatinder; Pasquier, Thomas; Bacon, Jean; Ko, Hajoon; Eyers, David
(2015). "Twenty Cloud Security Considerations for Supporting the Internet of
Things". IEEE Internet of Things Journal. 3 (3): 1. doi:10.1109/JIoT.2015.2460333.
156. Clearfield, Chris. "Why The FTC Can't Regulate The Internet Of Things". Forbes.
Retrieved 26 June 2015.
157. "We Asked Executives About The Internet Of Things And Their Answers Reveal
That Security Remains A Huge Concern". Business Insider. Retrieved 26 June 2015.
158. Clearfield, Christopher (26 June 2013). "Rethinking Security for the Internet of
Things". Harvard Business Review Blog.
159. A. Witkovski; A. O. Santin; J. E. Marynowski; V. Abreu Jr. (December 2016). "An
IdM and Key-based Authentication Method for providing Single Sign-On in IoT" (PDF).
IEEE Globecom.
160. Steinberg, Joseph (27 January 2014). "These Devices May Be Spying On You
(Even In Your Own Home)". Forbes. Retrieved 27 May 2014.
161. Greenberg, Andy (21 July 2015). "Hackers Remotely Kill a Jeep on the Highway—
With Me in It". Wired. Retrieved 21 July 2015.
162. Scientific American, April 2015, p.68.
163. Loukas, George (June 2015). Cyber-Physical Attacks A growing invisible threat.
Oxford, UK: Butterworh-Heinemann (Elsevier). p. 65. ISBN 9780128012901.
164. Scientific American, November 2015, p.30.
165. "Disruptive Technologies Global Trends 2025" (PDF). National Intelligence
Council (NIC). April 2008. p. 27.
166. Ackerman, Spencer (15 March 2012). "CIA Chief: We'll Spy on You Through Your
Dishwasher". WIRED. Retrieved 26 June 2015.
167. Ward, Mark (23 September 2015). "Smart devices to get security tune-up". BBC
News.
168. "Executive Steering Board". IoT Security Foundation.
169. Fielding, Roy Thomas (2000). "Architectural Styles and the Design of Network-
based Software Architectures" (PDF). University Of California, Irvine.
170. Littman, Michael; Kortchmar, Samuel. "The Path To A Programmable
World". Footnote. Retrieved 14 June 2014.
171. Finley, Klint (6 May 2014). "The Internet of Things Could Drown Our Environment
in Gadgets". Wired.
172. Gilbert, Arlo (3 April 2016). "The time that Tony Fadell sold me a container of
hummus". Retrieved 7 April 2016.
173. Walsh, Kit (5 April 2016). "Nest Reminds Customers That Ownership Isn't What
It Used to Be". Electronic Frontier Foundation. Retrieved 7 April 2016.
174. c d "Taming the IoT terminology zoo: what does it all mean?". Information Age.
Vitesse Media Plc. Retrieved 14 March 2017.
175. "Technology Working Group". The Industrial Internet Consortium. Retrieved 21
March 2017.
176. "Vocabulary Technical Report". The Industrial Internet Consortium. Retrieved 21
March 2017.
177. "Acceleration Sensing". IoT One. Retrieved 21 March 2017.
178. "IoT Terms Database". IoT One. Retrieved 21 March 2017.
179. Yarmoluk, Dan. "5 Barriers to IoT Adoption & How to Overcome Them". ATEK
Access Technologies. Retrieved 30 March 2017.
180. Yarmoluk, Dan. "5 Barriers to IoT Adoption & How to Overcome Them". ATEK
Access Technologies. Retrieved 30 March 2017.
181. c "Why The Consumer Internet Of Things Is Stalling". Forbes. Retrieved 24
March 2017.
182. "Every.Thing.Connected. A Study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 24 March 2017.
183. "Every.Thing.Connected. A Study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 24 March 2017.
184. "Privacy of the Internet of Things: A Systematic Literature Review (Extended
Discussion)" (PDF). Retrieved 27 March 2017.
185. "Privacy of the Internet of Things: A Systematic Literature Review (Extended
Discussion)" (PDF). Retrieved 27 March 2017.
186. "Privacy of the Internet of Things: A Systematic Literature Review (Extended
Discussion)" (PDF). Retrieved 27 March 2017.
187. Basenese, Louis. "The Best Play on the Internet of Things Trend". Wall Street
Daily. Wall Street Daily. Retrieved 28 March 2017.
188. Basenese, Louis. "The Best Play on the Internet of Things Trend". Wall Street
Daily. Wall Street Daily. Retrieved 28 March 2017.
189. Basenese, Louis. "The Best Play on the Internet of Things Trend". Wall Street
Daily. Wall Street Daily. Retrieved 28 March 2017.
190. "Igniting Growth in Consumer Technology" (PDF). Accenture. Retrieved 27
March 2017.
191. "Igniting Growth in Consumer Technology" (PDF). Accenture. Retrieved 27
March 2017.
192. "Igniting Growth in Consumer Technology" (PDF). Accenture. Retrieved 27
March 2017.
193. "Every. Thing. Connected. A study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 28 March 2017.
194. "Every. Thing. Connected. A study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 28 March 2017.
195. "Every. Thing. Connected. A study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 28 March 2017.
196. "Every. Thing. Connected. A study of the adoption of 'Internet of Things' among
Danish companies" (PDF). Ericsson. Retrieved 28 March 2017.
197. Yarmoluk, Dan. "5 Barriers to IoT Adoption & How to Overcome Them". ATEK
Access Technologies. Retrieved 30 March 2017.
198. Anthony, Scott. "Disruptive Innovation: Kodak's Downfall Wasn't About
Technology". Harvard Business Review. Harvard Business Publishing. Retrieved 30
March 2017.
199. Anthony, Scott. "Disruptive Innovation: Kodak's Downfall Wasn't About
Technology". Harvard Business Review. Harvard Business Publishing. Retrieved 30
March 2017.
200. Anthony, Scott. "Disruptive Innovation: Kodak's Downfall Wasn't About
Technology". Harvard Business Review. Harvard Business Publishing. Retrieved 30
March 2017.
201. Anthony, Scott. "Disruptive Innovation: Kodak's Downfall Wasn't About
Technology". Harvard Business Review. Harvard Business Publishing. Retrieved 30
March 2017.
202. Chaouchi, Hakima. The Internet of Things. London: Wiley-ISTE, 2010.
203. Chabanne, Herve, Pascal Urien, and Jean-Ferdinand Susini. RFID and the Internet
of Things. London: ISTE, 2011.
204. Hersent, Olivier, David Boswarthick and Omar Elloumi. The Internet of Things:
Key Applications and Protocols. Chichester, West Sussex: Wiley, 2012.
205. Pfister, Cuno. Getting Started with the Internet of Things. Sebastapool, Calif:
O'Reilly Media, Inc., 2011.
206. Uckelmann, Dieter, Mark Harrison and Florian Michahelles. Architecting the
Internet of Things. Berlin: Springer, 2011.
207. Weber, Rolf H., and Romana Weber. Internet of Things: Legal Perspectives.
Berlin: Springer, 2010.
208. Zhou, Honbo. The Internet of Things in the Cloud: A Middleware Perspective.
Boca Raton: CRC Press, Taylor & Francis Group, 2013.
4. Intel Active Management Technology
4.1 Commentary
4.1.1 Introduction
Intel Active Management Technology (AMT) uses hardware and firmware for remote
out-of-band management of PCs,[1][2][3][4][5] in order to monitor, maintain, update,
upgrade, and repair them.[1] Out-of-band (hardware-based) management is different
from software-based (in-band) management and software management agents.[1][2]
Hardware-based management uses a communication channel that is different from
software-based communication (which goes through the software stack in the
operating system) and does not depend on the presence of an OS or a local
management agent. Hardware-based facilities have long been available on Intel/AMD-
based computers for auto-configuration, dynamic IP address allocation, diskless
workstations and wake-on-LAN for remote power control.[6] AMT, used with a
software management application,[1] gives the system administrator access to the PC
over the wire to do remote tasks that are almost impractical without remote functions
built in.[1][3][7]
AMT is a secondary service processor on the motherboard,[8] with TLS communication
and encryption for extra security,[2] extending standards for out-of-band
management[9] to PCs much as IPMI does for servers.[1][10][11] Security problems
have been documented in [12][13][14][15][16], and the facilities are not provided
free.[17][18]
4.1.2 Features
Intel AMT provides hardware-based remote management, security, power
management, and remote configuration, giving independent remote access to enabled
PCs[1][7][19] using security and management technology built into the PC.[1][6] It
relies on an out-of-band communication channel[1] that is independent of the
operating system, the PC's power state, any management agent, and the hardware
state. Some features work without power (e.g. remote power-up), whilst others need
the PC powered (e.g. console redirection via serial over LAN, agent presence checking,
and network traffic filtering).[1] The hardware can be scripted to automate
maintenance and service.[1]
Hardware for AMT includes:
• Encrypted, remote communication channel for network traffic between the IT
console and Intel AMT.[1][2]
• Ability for a wired PC (physically connected to the network) outside the
company's firewall on an open LAN to establish a secure communication tunnel (via
AMT) back to the IT console.[1][2] 
• Remote power control through encrypted WOL (the classic unencrypted WOL magic
packet is sketched after this list).[1][2]
• Remote boot, via integrated device electronics redirect (IDE-R).[1][2]
• Console redirection, via serial over LAN.[1]
• Keyboard, video, mouse over network.
• Hardware-based filters for monitoring network traffic for known threats and for
known/unknown threats based on time-based heuristics.[1][2][20]
• Isolation circuitry to port-block, rate-limit, or fully isolate a PC that might be
compromised.[1][2][20]
• Agent presence checking, via hardware-based, policy-based
programmable timers and alerts.[1][2][20]
• OOB alerting.[1][2]
• Persistent event log, stored in protected memory.[1][2]
• Access (preboot) the PC's universal unique identifier.[1][2]
• Access (preboot) hardware asset information.[1][2]
• Access (preboot) to a third-party data store (TPDS), a protected memory area that
software vendors can use to store version information, .DAT files, and other
information.[1][2]
• Remote configuration options.[1][2][21]
• Protected Audio/Video Pathway for playback protection of DRM-protected media.
• Support for IEEE 802.11 a/g/n[1][10][22][23]
• Cisco-compatible extensions for Voice over WLAN[1][10][22][23]
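AMT's encrypted WOL is Intel-specific, but the classic unencrypted Wake-on-LAN
mechanism it builds on is easy to show: a UDP broadcast of a "magic packet" of six
0xFF bytes followed by the target MAC address repeated 16 times. A minimal sketch,
with a hypothetical MAC:

```python
# Sketch of a classic (unencrypted) Wake-on-LAN magic packet.
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255") -> None:
    mac_bytes = bytes(int(b, 16) for b in mac.split(":"))
    packet = b"\xff" * 6 + mac_bytes * 16    # 6 x FF, then MAC x 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, 9))     # UDP port 9 by convention

send_magic_packet("00:1b:63:84:45:e6")       # hypothetical target MAC
```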
4.1.3 Applications
Most AMT features are usable when the PC is powered off, the operating system has
crashed, the software agent is missing, or hardware has failed.[1][2] The console-
redirection feature, agent presence checking, and network traffic filtering are available
after the PC is powered up.[1][2]
AMT supports these management tasks:
• Remote power functions.[1]
• Remote boot the PC.[1][7]
• Remote redirect the system's I/O[1] for remote system administration.
• Remote BIOS management.[1]
• Detect suspicious network traffic.[1][20]
• Block or rate-limit network traffic to and from suspect compromised systems.[1]
[20]
• Manage hardware packet filters in the on-board network adapter.[1][20]
• Automatically alert the IT console on software agent timeout.[1][20]
• Receive Platform Event Trap events out-of-band from the AMT subsystem.[1]
• Access persistent event log in protected memory.[1]
• Discover an AMT system.[1]
• Perform a software inventory on the PC.[1]
• Perform a hardware inventory by uploading the remote PC's hardware asset list.
[1]
• Use keyboard, video and mouse (KVM) redirection over the network to perform and
monitor AMT operations.
4.1.4 Provisioning and integration
AMT supports certificate-based or PSK-based remote provisioning (full remote
deployment), USB key-based provisioning ("one-touch" provisioning), manual
provisioning[1] and provisioning using an agent on the local host ("Host Based
Provisioning"). An OEM can also pre-provision AMT.[21]
The current version of AMT supports remote deployment on both laptop and desktop
PCs. (Remote deployment was one of the key features missing from earlier versions of
AMT and which delayed acceptance of AMT in the market.)[7] Remote deployment, until
recently, was only possible within a corporate network.[24] Remote deployment lets a
sys-admin deploy PCs without "touching" the systems physically.[1] It also allows a
sys-admin to delay deployments and put PCs into use for a period of time before making
AMT features available to the IT console.[25] As delivery and deployment models
evolve, AMT can now be deployed over the Internet, using both "Zero-Touch" and Host-
Based methods.[26]
PCs can be sold with AMT enabled or disabled. The OEM determines whether to ship
AMT with the capabilities ready for setup (enabled) or disabled. Your setup and
configuration process will vary, depending on the OEM build.[21]
AMT includes a Privacy Icon application, called IMSS,[27] that notifies the system's
user if AMT is enabled. It is up to the OEM to decide whether they want to display the
icon or not.
AMT supports different methods for disabling the management and security technology,
as well as different methods for reenabling the technology.[1][25][28][29]
AMT can be partially unprovisioned using the Configuration Settings, or fully
unprovisioned by erasing all configuration settings, security credentials, and
operational and networking settings.[30] A partial unprovisioning leaves the PC in the
setup state. In this state, the PC can self-initiate its automated, remote configuration
process. A full unprovisioning erases the configuration profile as well as the security
credentials and operational / networking settings required to communicate with the
Intel Management Engine. A full unprovisioning returns Intel AMT to its factory default
state.
Once AMT is disabled, in order to enable AMT again, an authorized sys-admin can
reestablish the security credentials required to perform remote configuration by either:
• Using the remote configuration process (full automated, remote config via
certificates and keys).[1]
• Physically accessing the PC to restore security credentials, either by USB key or
by entering the credentials and MEBx parameters manually.[1]
There is a way to totally reset AMT and return it to factory defaults. This can be done
in two ways:
• Setting the appropriate value in the BIOS.
• Clearing the CMOS memory and / or NVRAM.
Setup and integration of AMT is supported by a setup and configuration service
(for automated setup), an AMT Webserver tool (included with Intel AMT), and AMT
Commander, an unsupported and free, proprietary application available from the Intel
website.
4.1.5 Communication
All access to the Intel AMT features is through the Intel Management Engine in the
PC's hardware and firmware,[1] with the communication channel based on the TCP/IP
firmware stack built into the system hardware.[1] Because it is based on this TCP/IP
stack, remote communication with AMT occurs via the network data path before
communication is passed to the OS. AMT supports wired and wireless
networks[1][10][22][31] and can establish a secure communication tunnel between a
wired PC and an IT console outside the corporate firewall.[1][32][33]
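For illustration, the sketch below contacts a PC's AMT web interface, which listens on
TCP port 16992 (16993 with TLS) independently of the host OS. The host address,
credentials and the use of the Python requests library are assumptions.

```python
# Sketch: query the out-of-band AMT web UI with HTTP digest authentication.
import requests
from requests.auth import HTTPDigestAuth

AMT_HOST = "192.0.2.10"                      # hypothetical managed PC

resp = requests.get(
    f"http://{AMT_HOST}:16992/index.htm",    # AMT's out-of-band HTTP port
    auth=HTTPDigestAuth("admin", "P@ssw0rd"),  # AMT uses digest auth
    timeout=5,
)
print(resp.status_code)
```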
4.1.6 Design
The hardware design is described in [35][36][37][38][39][40][41][42], whilst the
firmware is defined in the Management Engine (mainstream chipsets), Server Platform
Services (servers) and the Trusted Execution Engine (tablets/mobile/low power).
4.1.7 Security
Security for communications between Intel AMT and the provisioning service and/or
management console is established in a manner that depends on the network
environment. It can be effected through certificates and keys, pre-shared keys, or an
administrator password.[1][2] AMT features are protected in the hardware and
firmware and remain active when the PC is not generally available.[1][2][43] Software
in AMT is not maintained on the normal operating system update cycle, so security
defects can cause serious vulnerabilities.[44]
4.1.7.1 Networking
Users must balance the network configuration between a fully secure network and the
use of remote management applications without secure communications to maintain
and service PCs;[1] the development of modern security technologies and hardware
designs allows remote management even in more secure environments.[1]
All AMT features are available in a secure network environment:
• The network can verify the security of an AMT-enabled PC at all stages of
operation on the network.
• PXE boot can be used while maintaining network security.
AMT can hold network security credentials in the hardware, with the AMT
Embedded Trust Agent and an AMT posture plug-in,[1][2] which aggregate security
information such as firmware configuration and security parameters from the BIOS
and protected memory, storing the data in protected, non-volatile memory; this helps
identify compromised PCs so that the system administrator can correct the
problem.[1][45][46]
AMT adopts various security schemes, technologies and methodologies for its
features in deployment and remote management,[1][2][43] e.g. TLS, HTTP
authentication, single sign-on to Intel AMT with Microsoft Windows domain
authentication, digitally signed firmware, random-number session keys, UUIDs,
hardware asset information, BIOS configuration settings, and access control lists.
4.1.7.2 Vulnerabilities
Exploits are documented in [47][48][49][50][14][51][12][52][53][54][55][56][57][58]
[59][60][61][62][63][64][65][66][67][68][69][70][71][72][73]. Mitigation is
documented in [74][75][76][77][78][79][80][81][82][83][84][85][86][87][88][89][90].
4.2 References
1. "Intel Centrino 2 with vPro Technology and Intel Core2 Processor with vPro
Technology" (PDF). Intel. 2008. Archived from the original (PDF) on 2008-12-06.
Retrieved 2008-08-07.
2. "Architecture Guide: Intel Active Management Technology". Intel. 2008-06-26.
Archived from the original on 2012-06-07. Retrieved 2008-08-12.
3. "Remote Pc Management with Intel's vPro". Tom's Hardware Guide. Retrieved 2007-
11-21.
4. "Intel vPro Chipset Lures MSPs, System Builders". ChannelWeb. Retrieved 1
August2007.
5. "Intel Mostly Launches Centrino 2 Notebook Platform". ChannelWeb. Retrieved 1
July2008.
6. "A new dawn for remote management? A first glimpse at Intel's vPro platform". ars
technica. Retrieved 2007-11-07.
7. "Revisiting vPro for Corporate Purchases". Gartner. Retrieved 2008-08-07.
8. "Answers to Frequently Asked Questions about libreboot". libreboot.org.
Retrieved 2015-09-25.
9. "Archived copy". Archived from the original on 2012-04-14. Retrieved 2012-04-30.
10. "Intel Centrino 2 with vPro Technology" (PDF). Intel. Archived from the
original(PDF) on 2008-03-15. Retrieved 2008-07-15.
11. "Intel MSP". Msp.intel.com. Retrieved 2016-05-25.
12. "Intel® Product Security Center". Security-center.intel.com. Retrieved 2017-05-07.
13. Charlie Demerjian (2017-05-01). "Remote security exploit in all 2008+ Intel
platforms". SemiAccurate. Retrieved 2017-05-07.
14. "Red alert! Intel patches remote execution hole that's been hidden in chips since
2010". Theregister.co.uk. Retrieved 2017-05-07.
15. HardOCP: Purism Is Offering Laptops with Intel's Management Engine Disabled
16. System76 to disable Intel Management Engine on its notebooks
17. Garrison, Justin (2011-03-28). "How to Remotely Control Your PC (Even When it
Crashes)". Howtogeek.com. Retrieved 2017-05-07.
18. "Open Manageability Developer Tool Kit | Intel® Software". Software.intel.com.
Retrieved 2017-05-07.
19. "Intel vPro Technology". Intel. Retrieved 2008-07-14.
20. "Intel Active Management Technology System Defense and Agent Presence
Overview" (PDF). Intel. February 2007. Retrieved 2008-08-16.
21. "Intel Centrino 2 with vPro Technology". Intel. Archived from the original on 2008-
03-15. Retrieved 2008-06-30.
22. "New Intel-Based Laptops Advance All Facets of Notebook PCs". Intel. Archived
from the original on 2008-07-17. Retrieved 2008-07-15.
23. "Understanding Intel AMT over wired vs. wireless (video)". Intel. Archived from the
original on March 26, 2008. Retrieved 2008-08-14.
24. "Intel® vPro™ Technology". Intel.
25. "Part 3: Post Deployment of Intel vPro in an Altiris Environment: Enabling and
Configuring Delayed Provisioning". Intel (forum). Retrieved 2008-09-12.
26. "Archived copy" (PDF). Archived from the original (PDF) on January 3, 2014.
Retrieved July 20, 2013.
27. "Archived copy". Archived from the original on February 20, 2011.
Retrieved December 26, 2010.
28. "Intel vPro Provisioning" (PDF). HP (Hewlett Packard). Retrieved 2008-06-02.
29. "vPro Setup and Configuration for the dc7700 Business PC with Intel vPro
Technology"(PDF). HP (Hewlett Packard). Retrieved 2008-06-02.[permanent dead link]
30. "Part 4: Post Deployment of Intel vPro in an Altiris Environment Intel: Partial
UnProvDefault". Intel (forum). Retrieved 2008-09-12.
31. "Technical Considerations for Intel AMT in a Wireless Environment". Intel. 2007-09-
27. Retrieved 2008-08-16.
32. "Intel Active Management Technology Setup and Configuration Service, Version
5.0"(PDF). Intel. Retrieved 2008-08-04.[permanent dead link]
33. "Intel AMT - Fast Call for Help". Intel. 2008-08-15. Retrieved 2008-08-17.[permanent
dead link](Intel developer's blog)
34. "Archived copy". Archived from the original on 2016-01-03. Retrieved 2016-01-16.
35. "Archived copy". Archived from the original on November 1, 2014.
Retrieved February 25, 2014.
36. "Positive Technologies Blog: Disabling Intel ME 11 via undocumented mode".
Retrieved 2017-08-30.
37. Igor Skochinsky (Hex-Rays) Rootkit in your laptop, Ruxcon Breakpoint 2012
38. "Intel Ethernet Controller I210 Datasheet" (PDF). Intel. 2013. pp. 1, 15, 52, 621–776.
Retrieved 2013-11-09.
39. "Intel Ethernet Controller X540 Product Brief" (PDF). Intel. 2012. Retrieved 2014-02-
26.
40. Joanna Rutkowska. "A Quest to the Core" (PDF). Invisiblethingslab.com.
Retrieved 2016-05-25.
41. "Archived copy" (PDF). Archived from the original (PDF) on February 11, 2014.
Retrieved February 26, 2014.
42. "Platforms II" (PDF). Users.nik.uni-obuda.hu. Retrieved 2016-05-25.
43. "New Intel vPro Processor Technology Fortifies Security for Business PCs (news
release)". Intel. 2007-08-27. Archived from the original on 2007-09-12. Retrieved 2007-
08-07.
44. "Intel® AMT Critical Firmware Vulnerability". Intel. Retrieved 10 June 2017.
45. "Intel Software Network, engineer / developers forum". Intel. Retrieved 2008-08-09.
[permanent dead link]
46. "Cisco Security Solutions with Intel Centrino Pro and Intel vPro Processor
Technology"(PDF). Intel. 2007.
47. "Invisible Things Lab to present two new technical presentations disclosing
system-level vulnerabilities affecting modern PC hardware at its
core" (PDF). Invisiblethingslab.com. Retrieved 2016-05-25.
48. "Berlin Institute of Technology : FG Security in telecommunications : Evaluating
"Ring-3" Rootkits" (PDF). Stewin.org. Retrieved 2016-05-25.
49. "Persistent, Stealthy Remote-controlled Dedicated Hardware
Malware" (PDF). Stewin.org. Retrieved 2016-05-25.
50. "Security Evaluation of Intel's Active Management Technology" (PDF). Web.it.kth.se.
Retrieved 2016-05-25.
51. "CVE - CVE-2017-5689". Cve.mitre.org. Retrieved 2017-05-07.
52. "Intel Hidden Management Engine - x86 Security Risk?". Darknet. 2016-06-16.
Retrieved 2017-05-07.
53. Garrett, Matthew (2017-05-01). "Intel's remote AMT
vulnerablity". mjg59.dreamwidth.org. Retrieved 2017-05-07.
54. "2017-05-05 ALERT! Intel AMT EXPLOIT OUT! IT'S BAD! DISABLE AMT
NOW!". Ssh.com\Accessdate=2017-05-07.
55. Dan Goodin (2017-05-06). "The hijacking flaw that lurked in Intel chips is worse than
anyone thought". Ars Technica. Retrieved 2017-05-08.
56. "General: BIOS updates due to Intel AMT IME vulnerability - General Hardware -
Laptop - Dell Community". En.community.dell.com. Retrieved 2017-05-07.
57. "Advisory note: Intel Firmware vulnerability – Fujitsu Technical Support pages from
Fujitsu Fujitsu Continental Europe, Middle East, Africa & India". Support.ts.fujitsu.com.
2017-05-01. Retrieved 2017-05-08.
58. "HPE | HPE CS700 2.0 for VMware". H22208.www2.hpe.com. 2017-05-01.
Retrieved 2017-05-07.
59. "Intel® Security Advisory regarding escalation o... |Intel
Communities". Communities.intel.com. Retrieved 2017-05-07.
60. "Intel Active Management Technology, Intel Small Business Technology, and Intel
Standard Manageability Remote Privilege Escalation". Support.lenovo.com.
Retrieved 2017-05-07.
61. "MythBusters: CVE-2017-5689". Embedi.com. Retrieved 2017-05-07.
62. Charlie Demerjian (2017-05-01). "Remote security exploit in all 2008+ Intel
platforms". SemiAccurate.com. Retrieved 2017-05-07.
63. "Sneaky hackers use Intel management tools to bypass Windows firewall".
Retrieved 10 June 2017.
64. Tung, Liam. "Windows firewall dodged by 'hot-patching' spies using Intel AMT, says
Microsoft - ZDNet". Retrieved 10 June 2017.
65. "PLATINUM continues to evolve, find ways to maintain invisibility". Retrieved 10
June2017.
66. "Malware Uses Obscure Intel CPU Feature to Steal Data and Avoid Firewalls".
Retrieved 10 June 2017.
67. "Hackers abuse low-level management feature for invisible backdoor". iTnews.
Retrieved 10 June 2017.
68. "Vxers exploit Intel's Active Management for malware-over-LAN • The
Register". www.theregister.co.uk. Retrieved 10 June 2017.
69. Security, heise. "Intel-Fernwartung AMT bei Angriffen auf PCs genutzt". Security.
Retrieved 10 June 2017.
70. "PLATINUM activity group file-transfer method using Intel AMT SOL". Channel 9.
Retrieved 10 June 2017.
71. Researchers find almost EVERY computer with an Intel Skylake and above CPU can
be owned via USB
72. "Intel® Management Engine Critical Firmware Update (Intel SA-00086)". Intel.
73. "Intel Chip Flaws Leave Millions of Devices Exposed". Wired.
74. "Disabling AMT in BIOS". software.intel.com. Retrieved 2017-05-17.
75. "Are consumer PCs safe from the Intel ME/AMT exploit? -
SemiAccurate". semiaccurate.com.
76. "Intel x86s hide another CPU that can take over your machine (you can't audit
it)". Boing Boing. 2016-06-15. Retrieved 2017-05-11.
77. "[coreboot] : AMT bug". Mail.coreboot.org. 2017-05-11. Retrieved 2017-06-13.
78. "Disabling Intel AMT on Windows (and a simpler CVE-2017-5689 Mitigation
Guide)". Social Media Marketing | Digital Marketing | Electronic Commerce. 2017-05-03.
Retrieved 2017-05-17.
79. "bartblaze/Disable-Intel-AMT". GitHub. Retrieved 2017-05-17.
80. "mjg59/mei-amt-check". GitHub. Retrieved 2017-05-17.
81. "Intel® AMT Critical Firmware Vulnerability". Intel. Retrieved 2017-05-17.
82. "Positive Technologies Blog: Disabling Intel ME 11 via undocumented mode".
Retrieved 2017-08-30.
83. "Intel Patches Major Flaws in the Intel Management Engine". Extreme Tech.
84. Vaughan-Nichols, Steven J. "Taurinus X200: Now the most 'Free Software' laptop on
the planet - ZDNet".
85. Kißling, Kristian. "Libreboot: Thinkpad X220 ohne Management Engine » Linux-
Magazin". Linux-Magazin.
86. online, heise. "Libiquity Taurinus X200: Linux-Notebook ohne Intels Management
Engine". heise online.
87. "Intel AMT Vulnerability Shows Intel's Management Engine Can Be Dangerous". 2
May 2017.
88. "Purism Explains Why It Avoids Intel's AMT And Networking Cards For Its Privacy-
Focused 'Librem' Notebooks". Tom's Hardware. 2016-08-29. Retrieved 2017-05-10.
89. "The Free Software Foundation loves this laptop, but you won't".
90. "FSF Endorses Yet Another (Outdated) Laptop - Phoronix". phoronix.com.
5 IoT security
5.1 Introduction
IoT security needs to cover information security, mobile security, network security,
internet security, application security, computer security and data-centric security.
Insecure devices can be exploited for cyberbullying, cybercrime and cyberwarfare.
Infectious malware consists of computer viruses and worms. Concealment types are
Trojan horses, rootkits, backdoors, zombie computers, man-in-the-middle,
man-in-the-browser, man-in-the-mobile and clickjacking. Malware for profit comes as
privacy-invasive software, adware, spyware, botnets, keystroke logging, form
grabbing, web threats, fraudulent dialers, malbots, scareware, rogue security
software, ransomware and crimeware. Protections are classified as anti-keyloggers,
antivirus software, browser security, internet security, mobile security, network
security, defensive computing, firewalls, intrusion detection systems and data loss
prevention software. Countermeasures consist of computer and network surveillance,
operations such as Operation: Bot Roast, and honeypots.
5.1.1 Information security
5.1.1.1 Commentary
5.1.1.1.1 Introduction
From a business perspective, information security must be balanced against cost;
the Gordon-Loeb Model provides a mathematical economic approach for addressing
this concern.[3]
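To make this concrete, the sketch below assumes the class-one breach-probability function S(z, v) = v/(αz + 1)^β from Gordon and Loeb's paper, with hypothetical parameter values, and searches numerically for the investment z that maximizes the expected net benefit [v − S(z, v)]·L − z:

```python
# Minimal Gordon-Loeb sketch. All parameter values below are hypothetical.
v, L = 0.65, 1_000_000          # vulnerability (breach probability) and loss
a, b = 0.0001, 1.5              # productivity parameters of the security function

def S(z):                        # breach probability after investing z
    return v / (a * z + 1) ** b

def enbis(z):                    # expected net benefit of the investment
    return (v - S(z)) * L - z

best_z = max(range(0, 300_000, 100), key=enbis)
print(best_z, round(enbis(best_z)), round(S(best_z), 3))
```

Gordon and Loeb's headline result is that the optimal investment never exceeds 1/e (about 37%) of the expected loss v·L, which for the figures above is roughly 239,000.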
IT systems are made up of hardware, software and communications. Security aims to
help identify and apply standards for protection and prevention at physical, personal
and organizational levels. Information security prevents unauthorized access, use,
disclosure, disruption, modification, inspection, recording or destruction of information.
[1] It refers to administrative (procedural), logical and physical categories in a system
to give security from cyber attacks trying to access private data or control of the
internal systems. Information assurance renders trust of a system as regards
confidentiality, integrity and availability.
Security must protect information throughout its full lifetime as it passes from process
to process or is replicated. Each component of the system has and requires its own
protection mechanisms. It can be considered as an onion starting with data at the
centre, then working through surrounding layers of people, network security, host-
based security and application security.
The data must be classified by its owner. A security classification policy must be
defined with labels, criteria for assigning each label (e.g. value, confidentiality,
obsolescence, age, legal or regulatory status) and the security controls required for
each label.
Personnel should know and understand the classification scheme, the security controls
and handling procedures for each classification. The security scheme must be
reviewed periodically for validity.
5.1.1.1.2 Information security
Information security covers:
• Internet security
• Cyberwarfare
• Computer security
• Mobile security
• Network security
5.1.1.1.3 Threats
Threat analysis asks what the threat is and where it applies. Threats arise from
software attacks, theft of intellectual property, identity theft, theft of equipment or
information, sabotage, and information extortion. Software attacks come in the form
of viruses,[2] worms, phishing attacks and Trojan horses. Countering them requires
securing networks and allied infrastructure as well as applications and databases.
Threats arise from:
• Computer crime
• Vulnerability
• Eavesdropping
• Exploits
• Trojans
• Viruses and worms
• Denial of service
• Malware
• Payloads
• Rootkits
• Keyloggers
5.1.1.1.4 Defences
Defence analysis asks which defence applies and where. Countermeasures include
security testing, information systems auditing, business continuity planning and
digital forensics. Specifically, security threats and risks are countered by:[4]
• reduction/mitigation – build ways to eliminate problems
• assign/transfer – divert the cost through insurance or outsourcing
• accept – when the cost of a countermeasure outweighs the threat
• ignore/reject – when the threat can safely be disregarded
Defences come from:
• Computer access control
• Application security
• Antivirus software
• Secure coding
• Security by design
• Secure operating systems
• Authentication
• Multi-factor authentication
• Authorization
• Data-centric security
• Firewall (computing)
• Intrusion detection system
• Intrusion prevention system
• Mobile secure gateway
5.1.1.1.5 Principles
Security is based on confidentiality, integrity and availability.[16] Other concepts can
include[7] accountability[17] and non-repudiation. Standards derived from the
OECD,[18] NIST[19] and the Open Group,[20] together with procedures and policies,
give administrators, users and operators a strategy for security within their
organizations.[7][8][9][10][11][12][13][14]
The security process is driven by risk management, which assesses vulnerabilities
and threats to an organization's business objectives and determines ways to reduce
risk in proportion to the value at stake.[23] It is an iterative process: the business
environment changes, and vulnerabilities and countermeasures alter constantly.[24]
Notably, the most vulnerable element is the human user, operator, designer or other
person.[25] The results of the assessment lead to controls. Controls fall into
administrative (procedural), logical and physical categories; international standards
such as ISO/IEC 27001 and ISO/IEC 27002[27] contribute to their selection.
Administrative controls are the written policies, procedures, standards and
guidelines for running the business and managing people; they give a basis for the
logical and physical controls. Logical controls use software and data to monitor and
control access to information and computing systems, using passwords, firewalls,
intrusion systems, access control lists and data encryption, and ensuring that no
person, program or system process has more privilege than necessary. Physical
controls monitor and control the environment of the workplace and computing
facilities.[28]
5.1.1.1.6 Risk analysis
The risk analysis process considers:
• security policy,
• organization of information security,
• asset management,
• human resources security,
• physical and environmental security,
• communications and operations management,
• access control,
• information systems acquisition, development and maintenance,
• information security incident management,
• business continuity management, and
• regulatory compliance.
The actual risk management process consists of:
• Identify assets and their value, including people, buildings, hardware, software,
data and supplies.
• Assess threats from acts of nature, acts of war, accidents and malicious acts,
wherever they originate.
• Assess vulnerabilities and, for each, evaluate the probability of exploitation. Judge
the effectiveness of policies, procedures, standards, training, physical security,
quality control and technical security.
• Assess the impact of each threat on each asset with qualitative or quantitative
analysis (a quantitative sketch follows this list).
• Identify, select and implement appropriate controls, with a response proportional
to productivity, cost effectiveness and the value of the asset.
• Evaluate the effectiveness of the control measures, aiming for cost-effective
protection without discernible loss of productivity.
• For each risk, management must select one of the following:
• accept the risk, based on the value of the asset, the frequency of occurrence and
the impact on the business;
• mitigate the risk by selecting and implementing appropriate control measures;
• transfer the risk to another business by buying insurance or outsourcing;[26]
• ignore the risk.
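A minimal sketch of the quantitative side of the impact assessment, using the standard annualized-loss-expectancy formula (ALE = SLE × ARO); the asset values, exposure factors and occurrence rates below are hypothetical:

```python
# Quantitative risk assessment sketch.
# SLE (single loss expectancy) = asset value * exposure factor;
# ALE (annualized loss expectancy) = SLE * ARO (occurrences per year).
assets = [
    # (asset, value, exposure factor, occurrences per year) -- hypothetical
    ("customer database", 500_000, 0.40, 0.5),
    ("web server",         80_000, 0.25, 2.0),
    ("office laptops",     60_000, 0.10, 4.0),
]

for name, value, ef, aro in assets:
    sle = value * ef
    ale = sle * aro
    print(f"{name:18} SLE={sle:>10,.0f}  ALE={ale:>10,.0f}")
# A control is economical when its yearly cost is below the ALE reduction it buys.
```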
5.1.1.1.7 Access
Access to information is restricted to authorized personnel and computer programs
only at the level appropriate for the data based on identification e.g. a
username, authentication verifying a claim of identity by providing:
• Something you know: e.g. a password
• Something you have: e.g. a magnetic swipe card
• Something you are: e.g. biometrics,
and authorization defining the informational resources permitted to be accessed and
the actions allowed according to prescribed policies possibly based on role. All
authentication attempts and all access to information are logged with an audit trail.
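As an illustration of the "something you know" factor, a minimal sketch (Python standard library only; the iteration count and storage layout are arbitrary choices) of salted password hashing with constant-time verification:

```python
import hashlib, hmac, secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)                     # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                                # store both for the user

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)      # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("guess", salt, stored))                          # False
```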
Cryptography helps information security by encrypting usable information into a form
that renders it unusable by anyone other than an authorized user and decrypting it into
useful information for the authorized user with a cryptographic key. It provides security
for authentication, message digests, digital signatures, non-repudiation, and
communications.
The process is extended with due care, under which process steps are verified,
measured or produce tangible artefacts, and due diligence, under which personnel
monitor and maintain the protection mechanisms on a continuing basis.
5.1.1.1.8 Incident management
Incident management is a formal process for defining and controlling security incidents
in the processing environment. It reduces the risks incidents pose and improves
stability and reliability. It uses a defined life cycle: an incident is recorded, escalated
and approved or rejected for response; the response is planned (including back-out),
tested, scheduled, communicated, implemented and documented, followed by a
post-incident review.
5.1.1.1.9 Change management
Change management gives a formal process for defining and controlling changes to the
system environment. It reduces the risks changes pose and improves stability and
reliability. It uses a defined life cycle: a change is requested, escalated and approved
or rejected, planned (including back-out), tested, scheduled, communicated,
implemented and documented, followed by a post-change review.
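A minimal sketch of enforcing such a life cycle programmatically; the state names follow the life cycle above, while the transition table itself is an illustrative assumption:

```python
# Hypothetical change-lifecycle state machine; states follow the text above.
ALLOWED = {
    "requested":    {"escalated"},
    "escalated":    {"approved", "rejected"},
    "approved":     {"planned"},
    "planned":      {"tested"},              # the plan includes a back-out path
    "tested":       {"scheduled"},
    "scheduled":    {"communicated"},
    "communicated": {"implemented"},
    "implemented":  {"documented"},
    "documented":   {"reviewed"},            # post-change review closes the change
    "rejected":     set(),
    "reviewed":     set(),
}

class Change:
    def __init__(self, summary: str):
        self.summary, self.state = summary, "requested"

    def advance(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

c = Change("rotate TLS certificates")
for s in ["escalated", "approved", "planned", "tested"]:
    c.advance(s)
print(c.state)  # tested
```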
5.1.1.2 References
1 44 U.S.C. § 3542(b)(1)
2 Stewart, James (2012). CISSP Study Guide. Canada: John Wiley & Sons, Inc.
pp. 255–257. ISBN 978-1-118-31417-3 – via Online PSU course resource, EBL Reader.
3 Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of
Information Security Investment". ACM Transactions on Information and System
Security. 5 (4): 438–457. doi:10.1145/581271.581274.
4 Stewart, James (2012). CISSP Certified Information Systems Security
Professional Study Guide Sixth Edition. Canada: John Wiley & Sons, Inc. pp. 255–
257. ISBN 978-1-118-31417-3.
5 Suetonius Tranquillus, Gaius (2008). Lives of the Caesars (Oxford World's
Classics). New York: Oxford University Press. p. 28. ISBN 978-0199537563.
6 Singh, Simon (2000). The Code Book. Anchor. pp. 289–290. ISBN 0-385-49532-3.
7 Cherdantseva Y. and Hilton J.: "Information Security and Information Assurance.
The Discussion about the Meaning, Scope and Goals". In: Organizational, Legal, and
Technological Dimensions of Information System Administrator. Almeida F., Portela, I.
(eds.). IGI Global Publishing. (2013)
8 ISO/IEC 27000:2009 (E). (2009). Information technology - Security techniques -
Information security management systems - Overview and vocabulary. ISO/IEC.
9 Committee on National Security Systems: National Information Assurance (IA)
Glossary, CNSS Instruction No. 4009, 26 April 2010.
10 ISACA. (2008). Glossary of terms, 2008. Retrieved
from http://www.isaca.org/Knowledge-Center/Documents/Glossary/glossary.pdf
11 Pipkin, D. (2000). Information security: Protecting the global enterprise. New
York: Hewlett-Packard Company.
12 Blakley, B.; McDermott, E. & Geer, D. (2001). Information security is information risk
management. In Proceedings of the 2001 Workshop on New Security Paradigms NSPW
'01, (pp. 97–104). ACM. doi:10.1145/508171.508187
13 Anderson, J. M. (2003). "Why we need a new definition of information
security". Computers & Security. 22 (4): 308–313. doi:10.1016/S0167-4048(03)00407-3.
14 Venter, H. S.; Eloff, J. H. P. (2003). "A taxonomy for information security
technologies". Computers & Security. 22 (4): 299–307. doi:10.1016/S0167-
4048(03)00406-1.
15 https://www.isc2.org/uploadedFiles/(ISC)2_Public_Content/2013%20Global
%20Information%20Security%20Workforce%20Study%20Feb%202013.pdf
16 Perrin, Chad. "The CIA Triad". Retrieved 31 May 2012.
17 "Engineering Principles for Information Technology Security" (PDF).
csrc.nist.gov.
18 "oecd.org" (PDF). Archived from the original (PDF) on May 16, 2011.
Retrieved 2014-01-17.
19 "NIST Special Publication 800-27 Rev A" (PDF). csrc.nist.gov.
20 Aceituno, Vicente. "Open Information Security Maturity Model". Retrieved 12
February 2017.
21 Boritz, J. Efrim. "IS Practitioners' Views on Core Concepts of Information
Integrity". International Journal of Accounting Information Systems. Elsevier. 6 (4):
260–279. doi:10.1016/j.accinf.2005.07.001. Retrieved 12 August 2011.
22 Loukas, G.; Oke, G. (September 2010) [August 2009]. "Protection Against Denial
of Service Attacks: A Survey" (PDF). Comput. J. 53 (7): 1020–
1037. doi:10.1093/comjnl/bxp078.
23 ISACA (2006). CISA Review Manual 2006. Information Systems Audit and Control
Association. p. 85. ISBN 1-933284-15-3.
24 Spagnoletti, Paolo; Resca A. (2008). "The duality of Information Security
Management: fighting against predictable and unpredictable threats". Journal of
Information System Security. 4 (3): 46–62.
25 Kiountouzis, E.A.; Kokolakis, S.A. Information systems security: facing the
information society of the 21st century. London: Chapman & Hall, Ltd. ISBN 0-412-
78120-4.
26 "NIST SP 800-30 Risk Management Guide for Information Technology
Systems"(PDF). Retrieved 2014-01-17.
27 ISO/IEC 27002. Information technology – Security techniques – Code of practice
for information security management. ISO/IEC.
28 "Segregation of Duties Control matrix". ISACA. 2008. Archived from the
original on 3 July 2011. Retrieved 2008-09-30.
29 Shon Harris (2003). All-in-one CISSP Certification Exam Guide (2nd
ed.). Emeryville, California: McGraw-Hill/Osborne. ISBN 0-07-222966-7.
30 itpi.org Archived December 10, 2013, at the Wayback Machine.
31 "book summary of The Visible Ops Handbook: Implementing ITIL in 4 Practical
and Auditable Steps". wikisummaries.org. Retrieved 2016-06-22.
32 Harris, Shon (2008). All-in-one CISSP Certification Exam Guide (4th ed.). New
York, NY: McGraw-Hill. ISBN 978-0-07-149786-2.
33 "The Disaster Recovery Plan". Sans Institute. Retrieved 7 February 2012.
34 Lim, Joo S., et al. "Exploring the Relationship between Organizational Culture
and Information Security Culture." Australian Information Security Management
Conference.
35 Schlienger, Thomas; Teufel, Stephanie (2003). "Information security culture-from
analysis to change". South African Computer Journal. 31: 46–52.
36 "BSI-Standards". BSI. Retrieved 29 November 2013.
5.1.2 Mobile security
5.1.2.1 Commentary
Mobile security is required because smartphones handle sensitive information and
are targets of attacks that exploit weaknesses in their communication channels
(e.g. SMS, MMS, Wi-Fi, Bluetooth and GSM), browsers and operating systems.
Countermeasures are applied at different layers of software, from design through
use, and from the operating system, through the software stack, to downloadable
apps.
Threats[1] can interrupt the phone's operation or transmit and manipulate user
data, so apps must be limited in what they can access, e.g. location information via
GPS, the address book, data transmission and manipulation, messaging and
availability.[2] Attacks come from spies, thieves and hackers.[3][4][5][6][7]
Infected phones allow an attacker to:
• use the smartphone as a zombie machine to send spam by SMS or email;[8]
• make phone calls;[8]
• record and monitor conversations;[8]
• steal the user's identity;[8]
• discharge the battery;[9][10]
• prevent operation and/or make the phone unusable;[11]
• remove personal data.[11]
Attacks also derive from flaws in the handling of SMS and MMS:
• mishandled binary SMS messages causing denial of service;[12]
• an over-long email address in an SMS triggering the "curse of silence";
• SMS flooding from the Internet causing distributed denial of service;[13]
• MMS sent to other phones with a virus-infected attachment.[11]
Attacks based on GSM networks break the encryption of the mobile network[15]
[16] or eavesdrop with a fake base station (IMSI catcher).
Wi-Fi-based attacks use access-point spoofing[17][18][19] and worms.[20]
Bluetooth-based attacks exploit unregistered services and virtual serial
ports[21] and transmit malware.[11]
The mobile web browser suffers from buffer and stack overflows,[22] phishing,
malicious websites, etc.
Operating systems suffer from manipulation of firmware and malicious signature
certificates, bypassing of the byte-code verifier to access the kernel, changes to the
central configuration file, changes to the firmware image[23] and the use of
valid signatures with invalid certificates.[24]
Hardware-based attacks come from electromagnetic waveforms,[25] juice jacking
and password cracking.[26]
Malware is loaded onto smartphones because they are a permanent point of access
to the internet.[27][28] Its phases of attack are infection[29] (with explicit or implied
permission and with common or no user interaction), accomplishing its goal
(monetary damage, damage to data and/or the device, or concealed damage)[30] and
spreading to other systems[31] through Wi-Fi, Bluetooth, infrared or remote networks
(telephone calls, SMS or email). It appears as viruses and Trojans,[11][32]
ransomware,[34] spyware[11] and malicious libraries.[35]
Countermeasures include: secure operating systems; rootkit detectors;[36][37][38]
process isolation;[36][39][40][41][42] file permissions;[43][44][45] memory
protection;[42] secure runtime environments;[46][47][42] security software such as
antivirus and firewalls;[48][49][50][37] visual notifications; Turing tests; biometric
identification; and resource monitoring[52] (battery, memory usage, network traffic,
services).[53] Further measures are network surveillance, spam filters, encryption of
stored or transmitted information and telecom network monitoring. Manufacturers
can remove debug mode,[54][55] ship safe default settings,[36][56] audit the security
of apps,[36] detect suspicious applications demanding rights,[57] provide revocation
procedures,[58][59] avoid heavily customized systems and improve software patch
processes.[57] Users can help through awareness: being sceptical,[60] reviewing the
permissions given to applications,[61][56][62] being careful,[63][64] securing their
data,[65] using centralized storage of text messages[66] and knowing the limitations
of certain security measures.[67][55] The next generation of mobile security[68]
builds on a rich operating system, a secure operating system, a trusted execution
environment and a secure element.
5.1.2.2 References
1 BYOD and Increased Malware Threats Help Driving Billion Dollar Mobile Security
Services Market in 2013, ABI Research
2 Bishop 2004.
3 Olson, Parmy. "Your smartphone is hackers' next big target". CNN.
Retrieved August 26, 2013.
4 "Guide on Protection Against Hacking" (PDF). http://www.gov.mu/portal/sites/cert/files/Guide%20on%20Protection%20Against%20Hacking.pdf
5 Lemos, Robert. "New laws make hacking a black-and-white choice". CNET
News.com. Retrieved September 23, 2002.
6 McCaney, Kevin. "'Unknowns' hack NASA, Air Force, saying 'We're here to help'".
Retrieved May 7, 2012.
7 Bilton 2010.
8 Guo, Wang & Zhu 2004, p. 3.
9 Dagon, Martin & Starner 2004, p. 12.
10 Dixon & Mishra 2010, p. 3.
11 Töyssy & Helenius 2006, p. 113.
12 Siemens 2010, p. 1.
13 "Brookstone spy tank app". limormoyal.com. Retrieved 2016-08-11.
14 Gendrullis 2008, p. 266.
15 European Telecommunications Standards Institute 2011, p. 1.
16 Jøsang, Miralabé & Dallot 2015.
17 Roth, Polak & Rieffel 2008, p. 220.
18 Gittleson, Kim (28 March 2014) Data-stealing Snoopy drone unveiled at Black
Hat BBC News, Technology, Retrieved 29 March 2014
19 Wilkinson, Glenn (25 September 2012) Snoopy: A distributed tracking and
profiling framework Sensepost, Retrieved 29 March 2014
20 Töyssy & Helenius 2006, p. 27.
21 Mulliner 2006, p. 113.
22 Dunham, Abu Nimeh & Becher 2008, p. 225.
23 Becher 2009, p. 65.
24 Becher 2009, p. 66.
25 Kasmi C, Lopes Esteves J (13 August 2015). "IEMI Threats for Information
Security: Remote Command Injection on Modern Smartphones". IEEE Transactions on
Electromagnetic Compatibility. doi:10.1109/TEMC.2015.2463089. Lay
summary – WIRED (14 October 2015).
26 Aviv, Adam J.; Gibson, Katherine; Mossop, Evan; Blaze, Matt; Smith, Jonathan
M. Smudge Attacks on Smartphone Touch Screens (PDF). 4th USENIX Workshop on
Offensive Technologies.
27 Schmidt et al. 2009a, p. 3.
28 Suarez-Tangil, Guillermo; Juan E. Tapiador; Pedro Peris-Lopez; Arturo Ribagorda
(2014). "Evolution, Detection and Analysis of Malware in Smart Devices" (PDF). IEEE
Communications Surveys & Tutorials.
29 Becher 2009, p. 87.
30 Becher 2009, p. 88.
31 Mickens & Noble 2005, p. 1.
32 Raboin 2009, p. 272.
33 Töyssy & Helenius 2006, p. 114.
34 Haas, Peter D. (2015-01-01). "Ransomware goes mobile: An analysis of the
threats posed by emerging methods". UTICA COLLEGE.
35 Becher 2009, p. 91-94.
36 Becher 2009, p. 12.
37 Schmidt, Schmidt & Clausen 2008, p. 5-6.
38 Halbronn & Sigwald 2010, p. 5-6.
39 Ruff 2011, p. 127.
40 Hogben & Dekker 2010, p. 50.
41 Schmidt, Schmidt & Clausen 2008, p. 50.
42 Shabtai et al. 2009, p. 10.
43 Becher 2009, p. 31.
44 Schmidt, Schmidt & Clausen 2008, p. 3.
45 Shabtai et al. 2009, p. 7-8.
46 Pandya 2008, p. 15.
47 Becher 2009, p. 22.
48 Becher et al. 2011, p. 96.
49 Becher 2009, p. 128.
50 Becher 2009, p. 140.
51 Thirumathyam & Derawi 2010, p. 1.
52 Schmidt, Schmidt & Clausen 2008, p. 7-12.
53 Becher 2009, p. 126.
54 Becher et al. 2011, p. 101.
55 Ruff 2011, p. 11.
56 Hogben & Dekker 2010, p. 45.
57 Becher 2009, p. 13.
58 Becher 2009, p. 34.
59 Ruff 2011, p. 7.
60 Hogben & Dekker 2010, p. 46-48.
61 Ruff 2011, p. 7-8.
62 Shabtai et al. 2009, p. 8-9.
63 Hogben & Dekker 2010, p. 43.
64 Hogben & Dekker 2010, p. 47.
65 Hogben & Dekker 2010, p. 43-45.
66 Charlie Sorrel (2010-03-01). "TigerText Deletes Text Messages From Receiver's
Phone". Wired. Archived from the original on 2010-10-17. Retrieved 2010-03-02.
67 Becher 2009, p. 40.
68 http://www.insidesecure.com/Markets-solutions/Payment-and-Mobile-
Banking/Mobile-Security
69 Bishop, Matt (2004). Introduction to Computer Security. Addison Wesley
Professional. ISBN 978-0-321-24744-5.
70 Dunham, Ken; Abu Nimeh, Saeed; Becher, Michael (2008). Mobile Malware Attack
and Defense. Syngress Media. ISBN 978-1-59749-298-0.
71 Rogers, David (2013). Mobile Security: A Guide for Users. Copper Horse Solutions
Limited. ISBN 978-1-291-53309-5.
72 Becher, Michael (2009). Security of Smartphones at the Dawn of Their
Ubiquitousness (PDF) (Dissertation). Mannheim University.
73 Becher, Michael; Freiling, Felix C.; Hoffmann, Johannes; Holz, Thorsten;
Uellenbeck, Sebastian; Wolf, Christopher (May 2011). Mobile Security Catching Up?
Revealing the Nuts and Bolts of the Security of Mobile Devices (PDF). 2011 IEEE
Symposium on Security and Privacy. pp. 96–111. doi:10.1109/SP.2011.29. ISBN 978-1-
4577-0147-4.
74 Bilton, Nick (26 July 2010). "Hackers With Enigmatic Motives Vex
Companies". The New York Times. p. 5.
75 Cai, Fangda; Chen, Hao; Wu, Yuanyi; Zhang, Yuan (2015). AppCracker:
Widespread Vulnerabilities in User and Session Authentication in Mobile
Apps (PDF) (Dissertation). University of California, Davis.
76 Crussell, Johnathan; Gibler, Clint; Chen, Hao (2012). Attack of the Clones:
Detecting Cloned Applications on Android Markets (PDF) (Dissertation). University of
California, Davis.
77 Dagon, David; Martin, Tom; Starner, Thad (October–December 2004). "Mobile
Phones as Computing Devices: The Viruses are Coming!". IEEE Pervasive
Computing. 3 (4): 11. doi:10.1109/MPRV.2004.21.
78 Dixon, Bryan; Mishra, Shivakant (June–July 2010). On Rootkit and Malware
Detection in Smartphones (PDF). 2010 International Conference on Dependable
Systems and Networks Workshops (DSN-W). ISBN 978-1-4244-7728-9.
79 Gendrullis, Timo (November 2008). A real-world attack breaking A5/1 within
hours. Proceedings of CHES ’08. Springer. pp. 266–282. doi:10.1007/978-3-540-85053-
3_17.
80 Guo, Chuanxiong; Wang, Helen; Zhu, Wenwu (November 2004). Smart-Phone
Attacks and Defenses (PDF). ACM SIGCOMM HotNets. Association for Computing
Machinery, Inc. Retrieved March 31, 2012.
81 Halbronn, Cedric; Sigwald, John (2010). Vulnerabilities & iPhone Security
Model (PDF). HITB SecConf 2010.
82 Hogben, Giles; Dekker, Marnix (December 2010). "Smartphones: Information
security Risks, Opportunities and Recommendations for users". ENISA.
83 Jøsang, Audun; Miralabé, Laurent; Dallot, Léonard (2015). "Vulnerability by
Design in Mobile Network Security" (PDF). Journal of Information Warfare
(JIF). 14 (4). ISSN 1445-3347.
84 Mickens, James W.; Noble, Brian D. (2005). Modeling epidemic spreading in
mobile environments. WiSe '05 Proceedings of the 4th ACM workshop on Wireless
security. Association for Computing Machinery, Inc. pp. 77–
86. doi:10.1145/1080793.1080806.
85 Mulliner, Collin Richard (2006). Security of Smart Phones (PDF) (M.Sc. thesis).
University of California, Santa Barbara.
86 Pandya, Vaibhav Ranchhoddas (2008). Iphone Security Analysis (PDF) (Thesis).
San Jose State University.
87 Raboin, Romain (December 2009). La sécurité des
smartphones (PDF). Symposium sur la sécurité des technologies de l'information et des
communications 2009. SSTIC09 (in French).
88 Racic, Radmilo; Ma, Denys; Chen, Hao (2006). Exploiting MMS Vulnerabilities to
Stealthily Exhaust Mobile Phone’s Battery (PDF) (Dissertation). University of California,
Davis.
89 Roth, Volker; Polak, Wolfgang; Rieffel, Eleanor (2008). Simple and Effective
Defense Against Evil Twin Access Points. ACM SIGCOMM
HotNets. doi:10.1145/1352533.1352569. ISBN 978-1-59593-814-5.
90 Ruff, Nicolas (2011). Sécurité du système Android (PDF). Symposium sur la
sécurité des technologies de l'information et des communications 2011. SSTIC11 (in
French).
91 Ruggiero, Paul; Foote, Jon. Cyber Threats to Mobile Phones (PDF) (thesis). US-
CERT.
92 Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Clausen, Jan; Yüksel, Kamer
Ali; Kiraz, Osman; Camtepe, Ahmet; Albayrak, Sahin (October 2008). Enhancing Security
of Linux-based Android Devices (PDF). Proceedings of 15th International Linux
Kongress.
93 Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Batyuk, Leonid; Clausen, Jan
Hendrik; Camtepe, Seyit Ahmet; Albayrak, Sahin (April 2009a). Smartphone Malware
Evolution Revisited: Android Next Target? (PDF). 4th International Conference on
Malicious and Unwanted Software (MALWARE). ISBN 978-1-4244-5786-1.
Retrieved 2010-11-30.
94 Shabtai, Asaf; Fledel, Yuval; Kanonov, Uri; Elovici, Yuval; Dolev, Shlomi (2009).
"Google Android: A State-of-the-Art Review of Security
Mechanisms". CoRR. arXiv:0912.5101v1.
95 Thirumathyam, Rubathas; Derawi, Mohammad O. (2010). Biometric Template
Data Protection in Mobile Device Using Environment XML-database. 2010 2nd
International Workshop on Security and Communication Networks (IWSCN). ISBN 978-1-
4244-6938-3.
96 Töyssy, Sampo; Helenius, Marko (2006). "About malicious software in
smartphones". Journal in Computer Virology. Springer Paris. 2 (2): 109–
119. doi:10.1007/s11416-006-0022-0. Retrieved 2010-11-30.
97 European Telecommunications Standards Institute (2011). "3GPP Confidentiality
and Integrity Algorithms & UEA1 UIA1". Archived from the original on 12 May 2012.
98 Siemens (2010). "Series M Siemens SMS DoS Vulnerability".
99 CIGREF (October 2010). "Sécurisation de la mobilité" (PDF) (in French).
100 Chong, Wei Hoo (November 2007). iDEN Smartphone Embedded Software
Testing (PDF). Fourth International Conference on Information Technology, 2007. ITNG
'07. doi:10.1109/ITNG.2007.103. ISBN 0-7695-2776-0.
101 Jansen, Wayne; Scarfone, Karen (October 2008). "Guidelines on Cell Phone and
PDA Security: Recommendations of the National Institute of Standards and
Technology" (PDF). National Institute of Standards and Technology. Retrieved April
21, 2012.
102 Lee, Sung-Min; Suh, Sang-bum; Jeong, Bokdeuk; Mo, Sangdok (January 2008). A
Multi-Layer Mandatory Access Control Mechanism for Mobile Devices Based on
Virtualization. 5th IEEE Consumer Communications and Networking Conference, 2008.
CCNC 2008. doi:10.1109/ccnc08.2007.63. ISBN 978-1-4244-1456-7. Archived from the
original on May 16, 2013.
103 Li, Feng; Yang, Yinying; Wu, Jie (March 2010). CPMC: An Efficient Proximity
Malware Coping Scheme in Smartphone-based Mobile Networks (PDF). INFOCOM, 2010
Proceedings IEEE. doi:10.1109/INFCOM.2010.5462113.
104 Ni, Xudong; Yang, Zhimin; Bai, Xiaole; Champion, Adam C.; Xuan, Dong (October
2009). Distribute: Differentiated User Access Control on Smartphones (PDF). 6th IEEE
International Conference on Mobile Adhoc and Periodic Sensor Systems, 2009. MASS
'09. ISBN 978-1-4244-5113-5.
105 Ongtang, Machigar; McLaughlin, Stephen; Enck, William; Mcdaniel, Patrick
(December 2009). Semantically Rich Application-Centric Security in Android (PDF).
Annual Computer Security Applications Conference, 2009. ACSAC '09. ISSN 1063-9527.
106 Schmidt, Aubrey-Derrick; Bye, Rainer; Schmidt, Hans-Gunther; Clausen, Jan;
Kiraz, Osman; Yüksel, Kamer A.; Camtepe, Seyit A.; Albayrak, Sahin (2009b). Static
Analysis of Executables for Collaborative Malware Detection on Android (PDF). IEEE
International Conference Communications, 2009. ICC '09. ISSN 1938-1883.
107 Yang, Feng; Zhou, Xuehai; Jia, Gangyong; Zhang, Qiyuan (2010). A Non-
cooperative Game Approach for Intrusion Detection Systems in Smartphone systems.
8th Annual Communication Networks and Services Research
Conference. doi:10.1109/CNSR.2010.24. ISBN 978-1-4244-6248-3. Archived from the
original on May 16, 2013.
108 How Applications Lead your mobile to be Hacked - Ujjwal Sahay
5.1.3 Network security
5.1.3.1 Commentary
Network security comprises the policies and practices adopted to prevent and
monitor unauthorized access, misuse, modification or denial of a computer
network and network-accessible resources. The administrator gives users
authenticating information that allows access to valid data and processes.
Network security starts with authentication, increasingly multi-factor. Once a user
is allowed in, the firewall applies access policies that coordinate the use of services
by users;[1] while this precludes unauthorized access, it does not rule out harmful
content such as worms or Trojans. An anti-virus or intrusion prevention system[2]
discovers and stops such malware by monitoring the network, logging anomalies
and applying machine learning / traffic analysis to find attackers, malicious insiders
or targeted external attackers.[3] In addition, communication can be encrypted for
privacy. Honeypots (decoy network-accessible resources that are not normally
accessed legitimately) placed in the network serve as surveillance and
early-warning tools.[4]
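As an illustration of the kind of anomaly logging such monitoring performs, a minimal sketch of a naive port-scan detector; the connection log and the threshold are hypothetical:

```python
from collections import defaultdict

# Hypothetical connection log: (source IP, destination port) pairs.
events = [
    ("10.0.0.5", 22), ("10.0.0.5", 23), ("10.0.0.5", 80), ("10.0.0.5", 443),
    ("10.0.0.5", 8080), ("10.0.0.5", 3306), ("192.168.1.9", 443),
]

SCAN_THRESHOLD = 5          # distinct ports per source before we flag it

ports_by_source = defaultdict(set)
for src, port in events:
    ports_by_source[src].add(port)

for src, ports in ports_by_source.items():
    if len(ports) >= SCAN_THRESHOLD:
        print(f"possible port scan from {src}: {len(ports)} distinct ports")
```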
Security management varies according to the network's role.
Network attacks are passive (the intruder intercepts data travelling through the
network) or active (the intruder initiates commands to disrupt the network's normal
operation, or to spy on and access network assets).[5]
Types of attacks include:[6]
• Passive
• Network
• Wiretapping
• Port scanner
• Idle scan
• Active
• Denial-of-service attack
• DNS spoofing
• Man in the middle
• ARP poisoning
• VLAN hopping
• Smurf attack
• Buffer overflow
• Heap overflow
• Format string attack
• SQL injection (see the sketch after this list)
• Phishing
• Cross-site scripting
• CSRF
• Cyber-attack
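Of these, SQL injection is easy to demonstrate end to end. A minimal sketch using Python's built-in sqlite3 module (the table and payload are hypothetical) contrasts a vulnerable string-built query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

payload = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string formatting splices the payload into the SQL text.
rows = conn.execute("SELECT * FROM users WHERE name = '%s'" % payload).fetchall()
print(len(rows))  # 2 -- the OR clause matched every row

# Safe: a parameterized query treats the payload as a literal value.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "alice' OR '1'='1"
```

The parameterized form sends the payload as data rather than SQL text, which is the standard defence.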
5.1.3.2 References
1. A Role-Based Trusted Network Provides Pervasive Security and Compliance -
interview with Jayshree Ullal, senior VP of Cisco
2. Dave Dittrich, Network monitoring/Intrusion Detection Systems (IDS), University
of Washington.
3. "Dark Reading: Automating Breach Detection For The Way Security Professionals
Think". October 1, 2015.
4. "''Honeypots, Honeynets''". Honeypots.net. 2007-05-26. Retrieved 2011-12-09.
5. Wright, Joe; Jim Harmening (2009) "15" Computer and Information Security
Handbook Morgan Kaufmann Publications Elsevier Inc p. 257
6. http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf
7. Case Study: Network Clarity, SC Magazine 2014
8. Cisco. (2011). What is network security?. Retrieved from cisco.com
9. Security of the Internet (The Froehlich/Kent Encyclopedia of
Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.)
10. Introduction to Network Security, Matt Curtin.
11. MPLS, SD-WAN and Network Security, Yishay Yovel.
12. Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco
Press, Jul. 6, 2007.
13. Self-Defending Networks: The Next Generation of Network Security, Duane
DeCapite, Cisco Press, Sep. 8, 2006.
14. Security Threat Mitigation and Response: Understanding CS-MARS, Dale
Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006.
15. Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco
Press, May 27, 2005.
16. Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006.
17. Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman
| Radia Perlman | Mike Speciner, Prentice-Hall, 2002.
18. Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009.
5.1.4 Internet security
5.1.4.1 Commentary
Internet security applies security techniques to the Internet, covering browsers
and networks. It establishes rules and measures to counteract attacks over the
Internet,[1] which is an insecure channel for exchanging information with a high risk
of intrusion or fraud, e.g. phishing.[2] Encryption and from-the-ground-up
engineering[3] help protection.
Threats arise from malicious software, denial-of-service attacks, phishing and
application vulnerabilities.
Malicious software comes as viruses, Trojan horses, spyware and worms; it is
written with malicious intent to disrupt computer operation, gather sensitive
information or gain access to private computer systems.
A botnet is a set of zombie systems taken over by a robot program that performs
mass malicious acts for its originator.
Computer viruses replicate their structures or effects by infecting other files or
structures on a system.
Worms replicate themselves throughout a network, performing malicious tasks as
they spread.
Ransomware restricts access to an infected system and demands a ransom before
the restriction is removed.
Scareware aims to cause shock, anxiety or the perception of a threat in the user.
Spyware monitors activity on a system and reports it to the originator.
A Trojan pretends to be harmless so that the user downloads it onto his computer.
A denial-of-service attack makes a computer resource unavailable to its users. [4]
Phishing pretends to be a trustworthy entity to gather information.[5][6][7]
Application vulnerabilities can lead to any of the previous threats.[8][9]
Network layer security is provided by TCP/IP protocols secured with cryptographic
methods and security protocols, e.g. Secure Sockets Layer (SSL), succeeded
by Transport Layer Security (TLS), for web traffic, Pretty Good Privacy (PGP) for email,
and IPsec for network layer security. IPsec protects TCP/IP communication with a
set of security extensions developed by the IETF, providing security and
authentication at the IP layer by encrypting data. It uses the Authentication Header
(AH) and the Encapsulating Security Payload (ESP) to give data integrity, data-origin
authentication and an anti-replay service. These protocols can be used alone or in
combination to provide the desired set of security services for the Internet
Protocol (IP) layer. Their building blocks are the security protocols AH and ESP,
security associations for policy management and traffic processing, manual and
automatic key management via the Internet Key Exchange (IKE), and algorithms
for authentication and encryption. The security services provided at the IP layer
include access control, data-origin integrity, protection against replays and
confidentiality. The algorithm sets work independently without affecting other parts
of the implementation. An IPsec implementation operates in a host or security-
gateway environment, giving protection to IP traffic.
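Of these, TLS is the easiest to exercise directly. A minimal sketch of a TLS client using Python's standard ssl module (the host is an arbitrary public example); the default context verifies the server certificate chain and hostname, which is what provides the authentication guarantee:

```python
import socket, ssl

hostname = "www.example.org"           # hypothetical server
context = ssl.create_default_context() # CA bundle + hostname checking

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())           # e.g. 'TLSv1.3'
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + hostname.encode() + b"\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```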
Some web sites offer users a security token whose code is regenerated at regular
intervals from built-in random number generation and is used, together with the
device's serial number, to validate access to the online service.[10]
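A widely used software analogue of such tokens is the time-based one-time password (TOTP) of RFC 6238. A minimal sketch, assuming a base32 shared secret (the value below is a made-up example):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time() // step))  # 8-byte time counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret; changes every 30 s
```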
Electronic mail security uses Pretty Good Privacy (PGP), MIME and message
authentication codes (MACs). Pretty Good Privacy applies encryption to messages
and data files with, for example, Triple DES or CAST-128; email messages are
signed to ensure integrity and sender identity and encrypted to ensure
confidentiality.[11] MIME transforms the sender's non-ASCII data into NVT ASCII
data and delivers it to the client SMTP for sending through the Internet;[12] the
server SMTP reverses the process. A message authentication code (MAC) uses
cryptography with a secret key shared by sender and receiver to authenticate a
message.[13]
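A minimal sketch of MAC generation and verification with the standard library's hmac module (the key and message are hypothetical):

```python
import hashlib, hmac

key = b"shared-secret-key"                 # hypothetical key known to both ends
message = b"From: alice\nTo: bob\n\nHi Bob"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # sender attaches this

# Receiver recomputes the tag and compares in constant time; any change to
# the message or a wrong key makes verification fail.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```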
A firewall controls access between networks, acting as an intermediate server that
filters, among others, SMTP and Hypertext Transfer Protocol (HTTP)
connections.[14] The types of filter are packet filters, stateful packet inspection and
application-level gateways.
The choice of web browser is driven by its facilities[15] and by its security[16] and
vulnerabilities.[17][18][19] Antivirus software and Internet security programs protect
programmable devices from attack by detecting and eliminating viruses and are
provided for all platforms.[20] A password manager helps a user store and organize
encrypted passwords, with a master password granting access to the entire
password database.[21]
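As a sketch of the underlying mechanism (not of any particular product): derive a vault key from the master password with PBKDF2 and encrypt one entry with Fernet. This assumes the third-party cryptography package; all names and values are hypothetical.

```python
import base64, hashlib, secrets
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Fernet expects a 32-byte key, url-safe base64 encoded.
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

salt = secrets.token_bytes(16)                 # stored alongside the vault file
vault = Fernet(derive_key("correct horse battery staple", salt))

entry = vault.encrypt(b"example.org: hunter2") # one encrypted password entry
print(vault.decrypt(entry))                    # only the master password recovers it
```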
Security suites contain firewalls, anti-virus, anti-spyware and more,[22] as well as
theft protection, portable storage device safety checks, private Internet browsing,
cloud anti-spam, a file shredder and the making of security-related decisions (such
as answering popup windows).[23]
5.1.4.2 References
1. Gralla, Preston (2007). How the Internet Works. Indianapolis: Que Pub. ISBN 0-
7897-2132-5.
2. Rhee, M. Y. (2003). Internet Security: Cryptographic Principles,Algorithms and
Protocols. Chichester: Wiley. ISBN 0-470-85285-2.
3. An example of a completely re-engineered computer is the Librem laptop which
uses components certified by web-security experts. It was launched after a crowd
funding campaign in 2015.
4. "Information Security: A Growing Need of Businesses and Industries
Worldwide". University of Alabama at Birmingham Business Program. Retrieved 20
November 2014.
5. Ramzan, Zulfikar (2010). "Phishing attacks and countermeasures". In Stamp,
Mark & Stavroulakis, Peter. Handbook of Information and Communication Security.
Springer. ISBN 9783642041174.
6. Van der Merwe, A J, Loock, M, Dabrowski, M. (2005), Characteristics and
Responsibilities involved in a Phishing Attack, Winter International Symposium on
Information and Communication Technologies, Cape Town, January 2005.
7. "2012 Global Losses From Phishing Estimated At $1.5 Bn". FirstPost. February
20, 2013. Retrieved December 21, 2014.
8. "Improving Web Application Security: Threats and
Countermeasures". msdn.microsoft.com. Retrieved 2016-04-05.
9. "Justice Department charges Russian spies and criminal hackers in Yahoo
intrusion". Washington Post. Retrieved 15 March 2017.
10. Margaret Rouse (September 2005). "What is a security token?".
SearchSecurity.com. Retrieved 2014-02-14.
11. "Virtual Private Network". NASA. Retrieved 2014-02-14.
12. Asgaut Eng (1996-04-10). "Network Virtual Terminal". The Norwegian Institute of
Technology ppv.org. Retrieved 2014-02-14.
13. "What Is a Message Authentication Code?". Wisegeek.com. Retrieved 2013-04-
20.
14. "Firewalls - Internet Security". sites.google.com. Retrieved 2016-06-30.
15. "Browser Statistics". W3Schools.com. Retrieved 2011-08-10.
16. Bradly, Tony. "It's Time to Finally Drop Internet Explorer 6". PCWorld.com.
Retrieved 2010-11-09.
17. Messmer, Ellen and NetworkWorld (2010-11-16). "Google Chrome Tops 'Dirty
Dozen' Vulnerable Apps List". PCWorld.com. Retrieved 2010-11-09.
18. Keizer, Greg (2009-07-15). "Firefox 3.5 Vulnerability Confirmed". PCWorld.com.
Retrieved 2010-11-09.
19. Skinner, Carrie-Ann. "Opera Plugs "Severe" Browser Hole". PC World.com.
Archived from the original on May 20, 2009. Retrieved 2010-11-09.
20. Larkin, Eric (2008-08-26). "Build Your Own Free Security Suite". Retrieved 2010-
11-09.
21. "USE A FREE PASSWORD MANAGER" (PDF). scsccbkk.org.
22. Rebbapragada, Narasu. "All-in-one Security". PC World.com. Archived from the
original on October 27, 2010. Retrieved 2010-11-09.
23. "Free products for PC security". 2015-10-08.
24. National Institute of Standards and Technology (NIST.gov) - Information
Technology portal with links to computer- and cyber security
25. National Institute of Standards and Technology (NIST.gov) -Computer Security
Resource Center -Guidelines on Electronic Mail Security, version 2
26. The Internet Engineering Task Force.org - UK organization -IP Authentication
Header 1998
27. The Internet Engineering Task Force.org - UK organization -Encapsulating
Security Payload
28. Wireless Safety.org - Up to date info on security threats, news stories, and step
by step tutorials
29. PwdHash Stanford University - Firefox & IE browser extensions that
transparently convert a user's password into a domain-specific password.
30. Internet security.net - by JC Montejo & Goio Miranda (free security programs),
est 2007.
31. Internet and Data Security Guide UK anonymous membership site
32. Cybertelecom.org Security - surveying federal Internet security work
33. DSL Reports.com- Broadband Reports, FAQs and forums on Internet security, est
1999
34. FBI Safe Online Surfing Internet Challenge - Cyber Safety for Young
Americans (FBI)
5.1.5 Application security
5.1.5.1 Commentary
Application security considers the security policy of the organization across the
application life cycle, focusing on technology/platform independence with
principles, patterns, practices[1] and security standards and regulations. It
examines threats to assets, vulnerabilities, attacks and countermeasures; secures
the network, host and application; and incorporates security into the development
process.
Threats fall into the categories of input validation, software tampering,
authentication, authorization, configuration management, sensitive information,
session management, cryptography, parameter manipulation, exception
management, and auditing and logging.
Mobile applications use many open systems,[2][3] so strategies for security include:
• Validating the application within the context of the system.
• Ensuring communication security and session control follow predefined
processes for the users.
• Encrypting data held in memory.
• Security testing, which checks for vulnerabilities or security holes in
applications using automated tools for penetration testing, code analysis and
interactive security testing.[4][5]
5.1.5.2 References
1. Improving Web Application Security: Threats and Countermeasures, published by
Microsoft Corporation.
2. "Platform Security Concepts", Simon Higginson.
3. Application Security Framework Archived March 29, 2009, at the Wayback
Machine., Open Mobile Terminal Platform
4. http://www.gartner.com/technology/reprints.do?id=1-
1GT3BKT&ct=130702&st=sb&mkt_tok=3RkMMJWWfF9wsRokvazAZKXonjHpfsX76%252
B4qX6WylMI%252F0ER3fOvrPUfGjI4CTsRmI%252BSLDwEYGJlv6SgFTbnFMbprzbgPUhA
%253D
5. "Continuing Business with Malware Infected Customers". Gunter Ollmann.
October 2008.
5.1.6 Computer security
5.1.6.1 Commentary
Computer security protects systems from theft of or damage to their hardware,
software or information, and from disruption or misdirection of the services they
provide.[1]
It includes controlling physical access to the hardware and protecting against harm
that may come via network access, data and code injection,[2] or malpractice by
operators, whether intentional, accidental, or the result of being tricked into
deviating from secure procedures.[3]
A vulnerability is a system susceptibility or flaw. Categories include:
• Back doors, which bypass normal access controls.
• Denial of service, which stops access to system resources.[6]
• Direct-access attacks, which give access to the system for malicious purposes.[7]
• Eavesdropping, which listens to network traffic.
• Spoofing, which masquerades as a source known to the receiver.[8]
• Tampering, which maliciously modifies products.[9]
• Privilege escalation, which gives the user access privileges beyond their
authority.
• Phishing, which collects information by exploiting the user's trust in the system.[10]
• Clickjacking, which fools the user into following a link to a false web page.
• Social engineering, which gathers information under false pretences.[11]
Even systems protected by firewalls can be hacked via unknown internet links,
untrusted networks and files from unsafe sites, but improved design can prevent
crashes and security failures.[14] Almost all systems, including IoT, are subject
to security problems.[45][46][47][48][49][50][51][52][53][54][55][56][57][58][59] The
impact of security breaches is assessed through the Gordon-Loeb model by
comparing the cost of security measures with the expected cost of a breach.[80]
Countermeasures are actions, devices, procedures or techniques that reduce a
threat, vulnerability or attack by eliminating or preventing it, minimizing the harm it
causes, or taking corrective action when a problem is found and
reported.[81][82][83]
Some common countermeasures are security by design, security architecture, security
measures, vulnerability management, hardware protection mechanisms and response
to breaches.
Security by design uses techniques such as:
• The principle of least privilege, where each part of the system has only the
privileges that are needed for its function.
• Automated correctness proof of subsystems.
• Code reviews and unit testing
• Defense in depth, where the design is such that more than one subsystem needs
to be violated to compromise the integrity of the system
• Default secure settings, and design to "fail secure" (see the sketch after this list)
• Audit trails tracking system activity to record breaches
• Full disclosure of all vulnerabilities, to ensure minimum window of vulnerability
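A minimal sketch of the default-secure / fail-secure principle named above: if the policy lookup fails for any reason, access is denied rather than granted (the policy table and names are hypothetical):

```python
# "Default secure" access check: anything not explicitly granted is denied.
POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # hypothetical grants

def is_allowed(user: str, action: str) -> bool:
    try:
        return action in POLICY[user]   # an explicit grant is required
    except KeyError:
        return False                    # unknown user: fail secure, deny

print(is_allowed("bob", "write"))     # True
print(is_allowed("mallory", "read"))  # False: no entry, so denied by default
```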
The Open Security Architecture organization uses a formal process to position
controls for the quality attributes of confidentiality, integrity, availability,
accountability and assurance services,[84] whilst Techopedia describes security
architecture as a unified security design addressing the necessities and risks of a
scenario or environment and specifying when and where to apply security controls.
The design process is reproducible. The key attributes of security architecture
are[85] the relationship of different components and their dependencies, controls
based on risk assessment, good practice, finances and legal matters, and the
standardization of controls.
Security measures are based on threat prevention, detection and response, yielding
policies and system components including:
• User account access controls and cryptography
• Firewalls
• Intrusion detection systems to detect network attacks and assist in post-
attack forensics, with audit trails and logs serving a similar function for individual
systems
• Responses, which depend on the system
• Machine learning to detect advanced persistent threats[86][87]
• Publicity for attacks[88]
Vulnerability management identifies and remediates or mitigates
vulnerabilities[89] using vulnerability scanners[90] and security auditors.[91][92]
[93][94][95][96] Cryptography and multi-factor authentication also help, whilst social
engineering and direct (physical) computer access attacks can only be prevented by
procedures.[97][98]
Hardware protection mechanisms that help security[99][100] include USB
dongles,[101][102][103] trusted platform modules,[104] computer-case intrusion
detection, drive locks,[105][106] disabled USB ports[107] and mobile-enabled access
devices.[108]
Response to breaches is based on identifying attackers and attack method.
Cyber warfare defense relies on the processes of security.[180]
5.1.6.2 References
1 Gasser, Morrie (1988). Building a Secure Computer System (PDF). Van Nostrand
Reinhold. p. 3. ISBN 0-442-23022-2. Retrieved 6 September 2015.
2 "Definition of computer security". Encyclopedia. Ziff Davis, PCMag. Retrieved 6
September 2015.
3 Rouse, Margaret. "Social engineering definition". TechTarget. Retrieved 6
September 2015.
4 "Reliance spells end of road for ICT amateurs", May 07, 2013, The Australian
5 "Computer Security and Mobile Security Challenges" (pdf). researchgate.net.
Retrieved 2016-08-04.
6 "Distributed Denial of Service Attack". csa.gov.sg. Retrieved 12 November 2014.
7 Wireless mouse leave billions at risk of computer hack: cyber security firm
8 "What is Spoofing? - Definition from Techopedia".
9 Gallagher, Sean (May 14, 2014). "Photos of an NSA "upgrade" factory show Cisco
router getting implant". Ars Technica. Retrieved August 3, 2014.
10 "Identifying Phishing Attempts". Case.
11 Arcos Sergio. "Social Engineering" (PDF).
12 Scannell, Kara (24 Feb 2016). "CEO email scam costs companies
$2bn". Financial Times (25 Feb 2016). Retrieved 7 May 2016.
13 "Bucks leak tax info of players, employees as result of email scam". Associated
Press. 20 May 2016. Retrieved 20 May 2016.
14 J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at
Committee on Science, House of Representatives, 2000.
15 "Financial Weapons of War (Minnesota Law Review)". ssrn.com. 2016.
16 Pagliery, Jose. "Hackers attacked the U.S. energy grid 79 times this year". CNN
Money. Cable News Network. Retrieved 16 April 2015.
17 "Vulnerabilities in Smart Meters and the C12.12 Protocol". SecureState. 2012-02-
16. Retrieved 4 November 2016.
18 P. G. Neumann, "Computer Security in Aviation," presented at International
Conference on Aviation Safety and Security in the 21st Century, White House
Commission on Safety and Security, 1997.
19 J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65–70.
20 "Air Traffic Control Systems Vulnerabilities Could Make for Unfriendly Skies
[Black Hat] - SecurityWeek.Com".
21 "Hacker Says He Can Break Into Airplane Systems Using In-Flight Wi-
Fi". NPR.org. 4 August 2014.
22 Jim Finkle (4 August 2014). "Hacker says to show passenger jets at risk of cyber
attack". Reuters.
23 "Pan-European Network Services (PENS) - Eurocontrol.int".
24 "Centralised Services: NewPENS moves forward - Eurocontrol.int".
25 "NextGen Program About Data Comm - FAA.gov".
26 "Is Your Watch Or Thermostat A Spy? Cybersecurity Firms Are On It". NPR.org. 6
August 2014.
27 Melvin Backman (18 September 2014). "Home Depot: 56 million cards exposed in
breach". CNNMoney.
28 "Staples: Breach may have affected 1.16 million customers' cards". Fortune.com.
December 19, 2014. Retrieved 2014-12-21.
29 "Target security breach affects up to 40M cards". Associated Press via
Milwaukee Journal Sentinel. 19 December 2013. Retrieved 21 December 2013.
30 Jim Finkle (23 April 2014). "Exclusive: FBI warns healthcare sector vulnerable to
cyber attacks". Reuters. Retrieved 23 May 2016.
31 Bright, Peter (February 15, 2011). "Anonymous speaks: the inside story of the
HBGary hack". Arstechnica.com. Retrieved March 29, 2011.
32 Anderson, Nate (February 9, 2011). "How one man tracked down Anonymous—
and paid a heavy price". Arstechnica.com. Retrieved March 29, 2011.
33 Palilery, Jose (December 24, 2014). "What caused Sony hack: What we know
now". CNN Money. Retrieved January 4, 2015.
34 James Cook (December 16, 2014). "Sony Hackers Have Over 100 Terabytes Of
Documents. Only Released 200 Gigabytes So Far". Business Insider.
Retrieved December 18, 2014.
35 Timothy B. Lee (18 January 2015). "The next frontier of hacking: your car". Vox.
36 Stephen Checkoway; Damon McCoy; Brian Kantor; Danny Anderson; Hovav
Shacham; Stefan Savage; Karl Koscher; Alexei Czeskis; Franziska Roesner; Tadayoshi
Kohno (2011). Comprehensive Experimental Analyses of Automotive Attack
Surfaces (PDF). SEC'11 Proceedings of the 20th USENIX conference on Security.
Berkeley, CA, US: USENIX Association. pp. 6–6.
37 Greenberg, Andy. "Hackers Remotely Kill a Jeep on the Highway—With Me in It".
WIRED. Retrieved 22 January 2017.
38 "Hackers take control of car, drive it into a ditch". The Independent. 22 July
2015. Retrieved 22 January 2017.
39 Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk (PDF)
(Report). 2015-02-06. Retrieved November 4, 2016.
40 Kang, Cecilia (19 September 2016). "Self-Driving Cars Gain Powerful Ally: The
Government". The New York Times. Retrieved 22 January 2017.
41 "Federal Automated Vehicles Policy" (PDF). Retrieved 22 January 2017.
42 Staff, AOL. "Cybersecurity expert: It will take a 'major event' for companies to
take this issue seriously". AOL.com. Retrieved 22 January 2017.
43 "The problem with self-driving cars: who controls the code?". The Guardian. 23
December 2015. Retrieved 22 January 2017.
44 "Tesla fixes software bug that allowed Chinese hackers to control car remotely".
The Telegraph. Retrieved 22 January 2017.
45 "Internet strikes back: Anonymous' Operation Megaupload explained". RT. 20
January 2012. Archived from the original on 5 May 2013. Retrieved May 5, 2013.
46 "Gary McKinnon profile: Autistic 'hacker' who started writing computer programs
at 14". The Daily Telegraph. London. 23 January 2009.
47 "Gary McKinnon extradition ruling due by 16 October". BBC News. September 6,
2012. Retrieved September 25, 2012.
48 Law Lords Department (30 July 2008). "House of Lords – Mckinnon V Government
of The United States of America and Another". Publications.parliament.uk. Retrieved 30
January 2010. 15. … alleged to total over $700,000
49 "NSA Accessed Mexican President's Email", October 20, 2013, Jens Glüsing,
Laura Poitras, Marcel Rosenbach and Holger Stark, spiegel.de
50 Sanders, Sam (4 June 2015). "Massive Data Breach Puts 4 Million Federal
Employees' Records At Risk". NPR. Retrieved 5 June 2015.
51 Liptak, Kevin (4 June 2015). "U.S. government hacked; feds think China is the
culprit". CNN. Retrieved 5 June 2015.
52 Sean Gallagher. "Encryption "would not have helped" at OPM, says DHS official".
53 "Schools Learn Lessons From Security Breaches". Education Week. 19 October
2015. Retrieved 23 May 2016.
54 "Internet of Things Global Standards Initiative". ITU. Retrieved 26 June 2015.
55 Singh, Jatinder; Pasquier, Thomas; Bacon, Jean; Ko, Hajoon; Eyers, David
(2015). "Twenty Cloud Security Considerations for Supporting the Internet of
Things". IEEE Internet of Things Journal: 1–1. doi:10.1109/JIoT.2015.2460333.
56 Chris Clearfield. "Why The FTC Can't Regulate The Internet Of Things". Forbes.
Retrieved 26 June 2015.
57 "Internet of Things: Science Fiction or Business Fact?" (PDF). Harvard Business
Review. Retrieved 4 November 2016.
58 Ovidiu Vermesan; Peter Friess. "Internet of Things: Converging Technologies for
Smart Environments and Integrated Ecosystems" (PDF). River Publishers. Retrieved 4
November 2016.
59 Christopher Clearfield "Rethinking Security for the Internet of Things" Harvard
Business Review Blog, 26 June 2013/
60 "Hotel room burglars exploit critical flaw in electronic door locks". Ars Technica.
Retrieved 23 May 2016.
61 "Hospital Medical Devices Used As Weapons In Cyberattacks". Dark Reading.
Retrieved 23 May 2016.
62 Jeremy Kirk (17 October 2012). "Pacemaker hack can deliver deadly 830-volt
jolt". Computerworld. Retrieved 23 May 2016.
63 "How Your Pacemaker Will Get Hacked". The Daily Beast. Retrieved 23
May 2016.
64 Leetaru, Kalev. "Hacking Hospitals And Holding Hostages: Cybersecurity In
2016". Forbes. Retrieved 29 December 2016.
65 "Cyber-Angriffe: Krankenhäuser rücken ins Visier der Hacker". Wirtschafts
Woche. Retrieved 29 December 2016.
66 "Hospitals keep getting attacked by ransomware — Here's why". Business
Insider. Retrieved 29 December 2016.
67 "MedStar Hospitals Recovering After 'Ransomware' Hack". NBC News.
Retrieved 29 December 2016.
68 Pauli, Darren. "US hospitals hacked with ancient exploits". The Register.
Retrieved 29 December 2016.
69 Pauli, Darren. "Zombie OS lurches through Royal Melbourne Hospital spreading
virus". The Register. Retrieved 29 December 2016.
70 "Grimsby hospital computer attack: 'No ransom has been demanded'". Grimsby
Telegraph. 31 October 2016. Retrieved 29 December 2016.
71 "Hacked Lincolnshire hospital computer systems 'back up'". BBC News. 2
November 2016. Retrieved 29 December 2016.
72 "Lincolnshire operations cancelled after network attack". BBC News. 31 October
2016. Retrieved 29 December 2016.
73 "Legion cyber-attack: Next dump is sansad.nic.in, say hackers". The Indian
Express. 12 December 2016. Retrieved 29 December 2016.
74 "15k patients' info shared on social media from NH Hospital data breach". RT
International. Retrieved 29 December 2016.
75 "Former New Hampshire Psychiatric Hospital Patient Accused Of Data Breach".
CBS Boston. Retrieved 29 December 2016.
76 "Texas Hospital hacked, affects nearly 30,000 patient records". Healthcare IT
News. 4 November 2016. Retrieved 29 December 2016.
77 Becker, Rachel (27 December 2016). "New cybersecurity guidelines for medical
devices tackle evolving threats". The Verge. Retrieved 29 December 2016.
78 "Postmarket Management of Cybersecurity in Medical Devices" (PDF). 28
December 2016. Retrieved 29 December 2016.
79 Cashell, B., Jackson, W. D., Jickling, M., & Webel, B. (2004). The Economic
Impact of Cyber-Attacks. Congressional Research Service, Government and Finance
Division. Washington DC: The Library of Congress.
80 Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of
Information Security Investment". ACM Transactions on Information and System
Security. 5 (4): 438–457. doi:10.1145/581271.581274.
81 RFC 2828 Internet Security Glossary
82 CNSS Instruction No. 4009 dated 26 April 2010
83 InfosecToday Glossary
84 Definitions: IT Security Architecture. SecurityArchitecture.org, Jan, 2006
85 Jannsen, Cory. "Security Architecture". Techopedia. Janalta Interactive Inc.
Retrieved 9 October 2014.
86 "Cybersecurity at petabyte scale".
87 Woodie, Alex (9 May 2016). "Why ONI May Be Our Best Hope for Cyber Security
Now". Retrieved 13 July 2016.
88 "Firms lose more to electronic than physical theft". Reuters.
89 Foreman, P: Vulnerability Management, page 1. Taylor & Francis Group,
2010. ISBN 978-1-4398-0150-5
90 Anna-Maija Juuso and Ari Takanen Unknown Vulnerability Management,
Codenomicon whitepaper, October 2010 [1].
91 Alan Calder and Geraint Williams. PCI DSS: A Pocket Guide, 3rd
Edition. ISBN 978-1-84928-554-4. network vulnerability scans at least quarterly and
after any significant change in the network
92 Harrison, J. (2003). "Formal verification at Intel": 45–
54. doi:10.1109/LICS.2003.1210044.
93 Umrigar, Zerksis D.; Pitchumani, Vijay (1983). "Formal verification of a real-time
hardware design". Proceeding DAC '83 Proceedings of the 20th Design Automation
Conference. IEEE Press. pp. 221–7. ISBN 0-8186-0026-8.
94 "Abstract Formal Specification of the seL4/ARMv6 API" (PDF). Retrieved May
19, 2015.
95 Christoph Baumann, Bernhard Beckert, Holger Blasum, and Thorsten
Bormer Ingredients of Operating System Correctness? Lessons Learned in the Formal
Verification of PikeOS
96 "Getting it Right" by Jack Ganssle
97 Arachchilage, Nalin; Love, Steve; Scott, Michael (June 1, 2012). "Designing a
Mobile Game to Teach Conceptual Knowledge of Avoiding 'Phishing
Attacks'". International Journal for e-Learning Security. Infonomics Society. 2 (1): 127–
132. Retrieved April 1, 2016.
98 Scott, Michael; Ghinea, Gheorghita; Arachchilage, Nalin (7 July 2014). Assessing
the Role of Conceptual Knowledge in an Anti-Phishing Educational Game (pdf).
Proceedings of the 14th IEEE International Conference on Advanced Learning
Technologies. IEEE. p. 218. doi:10.1109/ICALT.2014.70. Retrieved April 1, 2016.
99 "The Hacker in Your Hardware: The Next Security Threat". Scientific American.
100 Waksman, Adam; Sethumadhavan, Simha (2010), "Tamper Evident
Microprocessors"(PDF), Proceedings of the IEEE Symposium on Security and Privacy,
Oakland, California
101 "Sentinel HASP HL". E-Spin. Retrieved 2014-03-20.
102 "Token-based authentication". SafeNet.com. Retrieved 2014-03-20.
103 "Lock and protect your Windows PC". TheWindowsClub.com. Retrieved 2014-03-
20.
104 James Greene (2012). "Intel Trusted Execution Technology: White Paper" (PDF).
Intel Corporation. Retrieved 2013-12-18.
105 "SafeNet ProtectDrive 8.4". SCMagazine.com. 2008-10-04. Retrieved 2014-03-20.
106 "Secure Hard Drives: Lock Down Your Data". PCMag.com. 2009-05-11.
107 "Top 10 vulnerabilities inside the network". Network World. 2010-11-08.
Retrieved 2014-03-20.
108 "Forget IDs, use your phone as credentials". Fox Business Network. 2013-11-04.
Retrieved 2014-03-20.
109 Lipner, Steve (2015). "The Birth and Death of the Orange Book". IEEE Annals of
the History of Computing. 37 (2): 19–31. doi:10.1109/MAHC.2015.27.
110 Kelly Jackson Higgins (2008-11-18). "Secure OS Gets Highest NSA Rating, Goes
Commercial". Dark Reading. Retrieved 2013-12-01.
111 "Board or bored? Lockheed Martin gets into the COTS hardware biz". VITA
Technologies Magazine. December 10, 2010. Retrieved 9 March 2012.
112 Sanghavi, Alok (21 May 2010). "What is formal verification?". EE Times_Asia.
113 Jonathan Zittrain, 'The Future of The Internet', Penguin Books, 2008
114 Information Security. United States Department of Defense, 1986
115 "THE TJX COMPANIES, INC. VICTIMIZED BY COMPUTER SYSTEMS INTRUSION;
PROVIDES INFORMATION TO HELP PROTECT CUSTOMERS" (Press release). The TJX
Companies, Inc. 2007-01-17. Retrieved 2009-12-12.
116 Largest Customer Info Breach Grows. MyFox Twin Cities, 29 March 2007.
117 "The Stuxnet Attack On Iran's Nuclear Plant Was 'Far More Dangerous' Than
Previously Thought". Business Insider. 20 November 2013.
118 Reals, Tucker (24 September 2010). "Stuxnet Worm a U.S. Cyber-Attack on Iran
Nukes?". CBS News.
119 Kim Zetter (17 February 2011). "Cyberwar Issues Likely to Be Addressed Only
After a Catastrophe". Wired. Retrieved 18 February 2011.
120 Chris Carroll (18 October 2011). "Cone of silence surrounds U.S. cyberwarfare".
Stars and Stripes. Retrieved 30 October 2011.
121 John Bumgarner (27 April 2010). "Computers as Weapons of War" (PDF). IO
Journal. Retrieved 30 October 2011.
122 Newman, Lily Hay (9 October 2013). "Can You Trust NIST?". IEEE Spectrum.
123 "New Snowden Leak: NSA Tapped Google, Yahoo Data Centers", Oct 31, 2013,
Lorenzo Franceschi-Bicchierai, mashable.com
124 Seipel, Hubert. "Transcript: ARD interview with Edward Snowden". La Foundation
Courage. Retrieved 11 June 2014.
125 Michael Riley; Ben Elgin; Dune Lawrence; Carol Matlack. "Target Missed
Warnings in Epic Hack of Credit Card Data - Businessweek". Businessweek.com.
126 "Home Depot says 53 million emails stolen". CNET. CBS Interactive. 6 November
2014.
127 "Millions more Americans hit by government personnel data hack". Reuters.
2017-07-09. Retrieved 2017-02-25.
128 Barrett, Devlin. "U.S. Suspects Hackers in China Breached About 4 Million
People's Records, Officials Say". The Wall Street Journal.
129 Risen, Tom (5 June 2015). "China Suspected in Theft of Federal Employee
Records". US News & World Report.
130 Zengerle, Patricia (2015-07-19). "Estimate of Americans hit by government
personnel data hack skyrockets". Reuters.
131 Sanger, David (5 June 2015). "Hacking Linked to China Exposes Millions of U.S.
Workers". New York Times.
132 Mansfield-Devine, Steve (2015-09-01). "The Ashley Madison affair". Network
Security. 2015 (9): 8–16. doi:10.1016/S1353-4858(15)30080-5.
133 "Mikko Hypponen: Fighting viruses, defending the net". TED.
134 "Mikko Hypponen – Behind Enemy Lines". Hack In The Box Security Conference.
135 "Ensuring the Security of Federal Information Systems and Cyber Critical
Infrastructure and Protecting the Privacy of Personally Identifiable Information".
Government Accountability Office. Retrieved November 3, 2015.
136 Kirby, Carrie (June 24, 2011). "Former White House aide backs some Net
regulation / Clarke says government, industry deserve 'F' in cybersecurity". The San
Francisco Chronicle.
137 "FIRST website".
138 "First members".
139 "European council".
140 "MAAWG".
141 "MAAWG".
142 "Government of Canada Launches Canada's Cyber Security Strategy". Market
Wired. 3 October 2010. Retrieved 1 November 2014.
143 "Canada's Cyber Security Strategy". Public Safety Canada. Government of
Canada. Retrieved 1 November 2014.
144 "Action Plan 2010–2015 for Canada's Cyber Security Strategy". Public Safety
Canada. Government of Canada. Retrieved 3 November 2014.
145 "Cyber Incident Management Framework For Canada". Public Safety Canada.
Government of Canada. Retrieved 3 November 2014.
146 "Action Plan 2010–2015 for Canada's Cyber Security Strategy". Public Safety
Canada. Government of Canada. Retrieved 1 November 2014.
147 "Canadian Cyber Incident Response Centre". Public Safety Canada. Retrieved 1
November 2014.
148 "Cyber Security Bulletins". Public Safety Canada. Retrieved 1 November 2014.
149 "Report a Cyber Security Incident". Public Safety Canada. Government of
Canada. Retrieved 3 November 2014.
150 "Government of Canada Launches Cyber Security Awareness Month With New
Public Awareness Partnership". Market Wired. Government of Canada. 27 September
2012. Retrieved 3 November 2014.
151 "Cyber Security Cooperation Program". Public Safety Canada. Retrieved 1
November 2014.
152 "Cyber Security Cooperation Program". Public Safety Canada.
153 "GetCyberSafe". Get Cyber Safe. Government of Canada. Retrieved 3
November 2014.
154 "Cyber Security". Tier3 — Cyber Security Services Pakistan.
155 "National Response Centre For Cyber Crime".
156 "Tier3 - Cyber Security Services Pakistan". Tier3 - Cyber Security Services
Pakistan.
157 "Surfsafe® Pakistan". Surfsafe® Pakistan-report terrorist and extremist online-
content.
158 "South Korea seeks global support in cyber attack probe". BBC Monitoring Asia
Pacific. 7 March 2011.
159 Kwanwoo Jun (23 September 2013). "Seoul Puts a Price on Cyberdefense". Wall
Street Journal. Dow Jones & Company, Inc. Retrieved 24 September 2013.
160 "Text of H.R.4962 as Introduced in House: International Cybercrime Reporting
and Cooperation Act – U.S. Congress". OpenCongress. Retrieved 2013-09-25.
161 [2] Archived 20 January 2012 at the Wayback Machine.
162 "National Cyber Security Division". U.S. Department of Homeland Security.
Archived from the original on 11 June 2008. Retrieved June 14, 2008.
163 "FAQ: Cyber Security R&D Center". U.S. Department of Homeland Security S&T
Directorate. Retrieved June 14, 2008.
164 AFP-JiJi, "U.S. boots up cybersecurity center", October 31, 2009.
165 "Federal Bureau of Investigation – Priorities". Federal Bureau of Investigation.
166 "Internet Crime Complaint Center (IC3) - Home".
167 "Infragard, Official Site". Infragard. Retrieved 10 September 2010.
168 "Robert S. Mueller, III -- InfraGard Interview at the 2005 InfraGard
Conference". Infragard (Official Site) -- "Media Room". Retrieved 9 December 2009.
169 "CCIPS".
170 "U.S. Department of Defense, Cyber Command Fact Sheet". stratcom.mil. May 21,
2010.
171 "Speech:". Defense.gov. Retrieved 2010-07-10.
172 Shachtman, Noah. "Military's Cyber Commander Swears: "No Role" in Civilian
Networks", The Brookings Institution, 23 September 2010.
173 "FCC Cybersecurity". FCC.
174 "Cybersecurity for Medical Devices and Hospital Networks: FDA Safety
Communication". Retrieved 23 May 2016.
175 "Automotive Cybersecurity - National Highway Traffic Safety Administration
(NHTSA)". Retrieved 23 May 2016.
176 "U.S. GAO - Air Traffic Control: FAA Needs a More Comprehensive Approach to
Address Cybersecurity As Agency Transitions to NextGen". Retrieved 23 May 2016.
177 Aliya Sternstein (4 March 2016). "FAA Working on New Guidelines for Hack-Proof
Planes". Nextgov. Retrieved 23 May 2016.
178 Bart Elias (18 June 2015). "Protecting Civil Aviation from Cyberattacks" (PDF).
Retrieved 4 November 2016.
179 Verton, Dan (January 28, 2004). "DHS launches national cyber alert
system". Computerworld. IDG. Retrieved 2008-06-15.
180 Clayton, Mark. "The new cyber arms race". The Christian Science Monitor.
Retrieved 16 April 2015.
181 "Burning Glass Technologies, "Cybersecurity Jobs, 2015"". July 2015.
Retrieved 11 June 2016.
182 Oltsik, Jon. "Cybersecurity Skills Shortage Impact on Cloud
Computing". Network World. Retrieved 2016-03-23.
183 [3] Burning Glass Technologies, "Demand for Cybersecurity Workers Outstripping
Supply," July 30, 2015, accessed 2016-06-11
184 de Silva, Richard (11 Oct 2011). "Government vs. Commerce: The Cyber Security
Industry and You (Part One)". Defence IQ. Retrieved 24 Apr 2014.
185 "Department of Computer Science". Retrieved April 30, 2013.
186 "(Information for) Students". NICCS (US National Initiative for Cybercareers and
Studies). Retrieved 24 April 2014.
187 "Current Job Opportunities at DHS". U.S. Department of Homeland Security.
Retrieved 2013-05-05.
188 "Cybersecurity Training & Exercises". U.S. Department of Homeland Security.
Retrieved 2015-01-09.
189 "Cyber Security Awareness Free Training and Webcasts". MS-ISAC (Multi-State
Information Sharing & Analysis Center). Retrieved 9 January 2015.
190 "Security Training Courses". LearnQuest. Retrieved 2015-01-09.
191 "Confidentiality". Retrieved 2011-10-31.
192 "Data Integrity". Retrieved 2011-10-31.
193 "Endpoint Security". Retrieved 2014-03-15.
194 Wu, Chwan-Hwa (John); Irwin, J. David (2013). Introduction to Computer
Networks and Cybersecurity. Boca Raton: CRC Press. ISBN 978-1466572133.
195 Lee, Newton (2015). Counterterrorism and Cybersecurity: Total Information
Awareness (2nd ed.). Springer. ISBN 978-3-319-17243-9.
196 Singer, P. W.; Friedman, Allan (2014). Cybersecurity and Cyberwar: What
Everyone Needs to Know. Oxford University Press. ISBN 978-0199918119.
197 Kim, Peter (2014). The Hacker Playbook: Practical Guide To Penetration Testing.
Seattle: CreateSpace Independent Publishing Platform. ISBN 978-1494932633.
5.1.7 Data-centric security
5.1.7.1 Commentary
Data-centric security emphasizes the security of the data itself rather than the security
of the networks, servers, or applications that carry it.[1][2] It supports the objectives
of the business strategy by relating security services directly to the data.[3]
Common processes in a data-centric security model include:[4]
- Discover: know what data is stored and how sensitive it is (a discovery sketch follows
this list).
- Manage: define access policies determining whether data is accessible, editable, or
blocked for specific users or locations.
- Protect: defend against data loss or unauthorized use and prevent data reaching
unauthorized users or locations.
- Monitor: check usage for deviations from the norm that may signal malicious intent.
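Discovery often starts with pattern matching over stored records. The following Python
sketch labels a record by the sensitive patterns it contains; the two regular expressions
(an e-mail address and a 16-digit card-like number) are illustrative assumptions rather
than a complete classifier.

import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def classify(record):
    # Return the set of sensitive-data labels found in the record.
    return {label for label, rx in PATTERNS.items() if rx.search(record)}

print(classify("contact: alice@example.com, card 4111 1111 1111 1111"))
# {'email', 'card'} -> treat this record as sensitive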
From a technical point of view, information(data)-centric security relies on the
implementation of the following:[5]
- Information (data) that is self-describing and self-defending.
- Policies and controls that account for the business context.
- Information that remains protected as it moves in and out of applications and storage
systems and across changing business contexts.
- Policies that work consistently through the different data management technologies
and defensive layers implemented.
Data access control is the selective restriction of access to data, i.e. viewing, editing,
or using it. It requires mapping out the data, its position, its importance, its users and
its sensitivity, and then designing the controls.[6] Controls can be as granular as the
individual user.[7] They need to rely on standards and on central control of data
protection techniques and security policies.[8][9]
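A minimal sketch of such a control is a deny-by-default policy table keyed by role and
action; the roles, actions and entries below are assumptions chosen for illustration.

POLICY = {
    ("analyst", "view"): True,
    ("analyst", "edit"): False,
    ("admin", "view"): True,
    ("admin", "edit"): True,
}

def allowed(role, action):
    # Deny by default: anything not explicitly granted is refused.
    return POLICY.get((role, action), False)

assert allowed("admin", "edit")
assert not allowed("analyst", "edit")
assert not allowed("guest", "view")  # unknown roles fall through to deny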
Encryption helps avoid the risk of data theft, though it is useless when a hacker
operates with stolen valid user credentials.[10]
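As one concrete illustration, a record can be encrypted at rest with the Fernet recipe
from the widely used Python cryptography package. The key handling shown is simplified
for the sketch, and, as noted above, it offers no protection once an attacker logs in
with valid credentials.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a key vault, not in code
f = Fernet(key)

token = f.encrypt(b"patient-id: 12345")  # ciphertext safe to store
print(f.decrypt(token))                  # b'patient-id: 12345'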
Data masking hides data within a database to ensure that data security is maintained and
that sensitive information is hidden from unauthorized personnel. It is achieved either
by duplicating the data with the sensitive subset removed or by obscuring the data
dynamically as users perform requests.
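The dynamic variant can be sketched as a view function that masks a field unless the
caller is authorised; the field names and the keep-last-four-digits rule are assumptions
for the example.

def mask_card(number):
    # Keep only the last four digits visible.
    digits = [c for c in number if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

def view(record, authorised):
    # Return the record unchanged for authorised users, masked otherwise.
    if authorised:
        return record
    return {k: (mask_card(v) if k == "card" else v) for k, v in record.items()}

row = {"name": "Alice", "card": "4111 1111 1111 1111"}
print(view(row, authorised=False))  # {'name': 'Alice', 'card': '************1111'}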
Continuous monitoring of data activity combined with precise access control leads to
real-time detection of data breaches, limits the damage inflicted by a breach and can
even stop the intrusion if proper controls are in place.[11] In cloud computing,
data-centric security follows the same process.[12]
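One simple way to picture such monitoring is a per-user read counter that raises an alert
past a limit; the limit and the exfiltration interpretation are assumptions of the sketch.

from collections import defaultdict

LIMIT = 100  # assumed per-session read budget
reads = defaultdict(int)

def alert(user):
    # In a real deployment this would notify the security team or block the session.
    print("ALERT: %s exceeded %d reads; possible exfiltration" % (user, LIMIT))

def record_read(user):
    reads[user] += 1
    if reads[user] > LIMIT:
        alert(user)

for _ in range(101):
    record_read("mallory")  # the 101st read triggers the alert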
5.1.7.2 References
1. Gartner Group (2014). "Gartner Says Big Data Needs a Data-Centric Security Focus".
2. SANS Institute (2015). "Data-Centric Security Needed to Protect Big Data
Implementations".
3. IEEE (2007). "Elevating the Discussion on Security Management: The Data Centric
Paradigm".
4. Wired Magazine (2014). "Information-Centric Security: Protect Your Data From the
Inside-Out".
5. Mogull, Rich (2014). "The Information-Centric Security Lifecycle" (PDF).
6. Federal News Radio (2015). "NASA Glenn becoming more data-centric across many
fronts".
7. BlueTalon (2016). "Data-Centric Security: From Chaos to Order".
8. "Data-Centric Security". www.axiomatics.com. Retrieved 2017-03-29.
9. IAAP community (2014). "Data-Centric Security: Reducing Risk at the Endpoints of the
Organization".
10.MIT Technology Review (2015). "Encryption Wouldn't Have Stopped Anthem's Data
Breach".
11.Dark Reading (2016). "Databases Remain Soft Underbelly Of Cybersecurity".
12.IEEE (2010). "Security and Privacy Challenges in Cloud Computing
Environments" (PDF).
5.2 Practical Attacks of Security
5.2.1 Cyberbullying
5.2.1.1 Commentary
Cyberbullying,[18][19][14][21][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]
[38][39][40][41][42][43][44][45][46][47][48][49][50][60][61][62][63][64][65]
which is increasingly common, especially among teenagers,[1] uses electronic means of
contact to harass, e.g. by posting rumors, threats, sexual remarks, personal information
or hate speech,[2] through repeated actions with intent to harm.[3] Victims can become
psychologically impaired,[4] making it more harmful than conventional bullying.[5][11]
High-profile cases[6][7] have encouraged new laws on, or extensions of existing laws to,
cyberbullying.[8][9][10][51][52][53][54][55][66] Cyberbullying appears as trolling
(arousing reactions and causing disruption for personal entertainment[12][13][20]),
stalking (using the internet to threaten the victim[14]) and peer pressure exerted for
its impact on victims.[15][16][17]
Cyberbullying is countered by intervention, by breaking the bullying pattern[56][57] and
by legal action.[58][59][101][102][103][104][105][106][107][108][109][110][111]
Research into cyberbullying is documented internationally in [67][68][69][70][71][72]
[73][74][75][76][77][78][79][80][81][82][83][84][85][86][87][88][89][90][91][92][93][94]
[95][96][97][98][99][100][112][113][114][115][116][117][118][119][120][121][122] [123]
[124][125][126][127][128][129][130][131][132][133][134][135][136].
5.2.1.2 References
1. Smith, Peter K.; Mahdavi, Jess; Carvalho, Manuel; Fisher, Sonja; Russell, Shanette;
Tippett, Neil (2008). "Cyberbullying: its nature and impact in secondary school
pupils". The Journal of Child Psychology and Psychiatry. 49 (4): 376–
385. doi:10.1111/j.1469-7610.2007.01846.x.
2. Cyberbullying – Law and Legal Definitions US Legal
3. An Educator's Guide to Cyberbullying Brown Senate.gov, archived from the original on
10 April 2011
4. Hinduja, S.; Patchin, J. W. (2009). Bullying beyond the schoolyard: Preventing and
responding to cyberbullying. Thousand Oaks, CA: Corwin Press. ISBN 1-4129-6689-2.
5. "Archived copy". Archived from the original on August 13, 2015. Retrieved November
25, 2015.
6. Hu, Winnie (1 October 2010). "Legal Debate Swirls Over Charges in a Student's
Suicide". New York Times. Nate Schweber. Retrieved 1 December 2016.
7. Chapin, John (2014-08-17). "Adolescents and Cyber Bullying: The Precaution Adoption
Process Model". Education and Information Technologies. 21 (4): 719–
728. doi:10.1007/s10639-014-9349-1. ISSN 1360-2357.
8. Gregorie, Trudy. "Cyberstalking: dangers on the information superhighway" (PDF).
9. https://docs.education.gov.au/system/files/doc/other/australian_covert_bullying_prevalen
ce_study_chapter_1.pdf
10.What to Do About Cyberbullies: For Adults, by Rena Sherwood; YAHOO Contributor
network
11.Hinduja, S.; Patchin, J. W. (2008). "Cyberbullying: An Exploratory Analysis of Factors
Related to Offending and Victimization". Deviant Behavior. 29 (2): 129–
156. doi:10.1080/01639620701457816.
12.Diaz, Fernando L. (2016). "Trolling & the First Amendment: Protecting Internet Speech
in the Era of Cyberbullies & Internet Defamation". University of Illinois Journal of Law,
Technology & Policy: 135–160.
13.Duggan, Maeve. "5 facts about online harassment". Pew Research Center.
14.Smith, Alison M. (5 September 2008). Protection of Children Online: Federal and State
Laws Addressing Cyberstalking, Cyberharassment, and Cyberbullying (Report).
15.O'Keeffe, Gwenn Schurgin; Clarke-Pearson, Kathleen; Media, Council on
Communications and (2011-04-01). "The Impact of Social Media on Children,
Adolescents, and Families". Pediatrics. 127 (4): 800–804. doi:10.1542/peds.2011-
0054. ISSN 0031-4005. PMID 21444588.
16.Ramasubbu, Suren (2015-05-26). "Influence of Social Media on Teenagers". Huffington
Post. Retrieved 2017-11-30.
17.Wolpert, Stuart. "The teenage brain on social media". UCLA Newsroom. Retrieved 2017-
11-30.
18.Moreno, Megan A. (2014-05-01). "Cyberbullying". JAMA
Pediatrics. 168 (5). doi:10.1001/jamapediatrics.2013.3343. ISSN 2168-6203.
19.Pettalia, Jennifer L.; Levin, Elizabeth; Dickinson, Joël (2013-11-01). "Cyberbullying:
Eliciting harm without consequence". Computers in Human Behavior. 29 (6): 2758–
2765. doi:10.1016/j.chb.2013.07.020.
20.Phillips, Whitney (October 15, 2012). "What an Academic Who Wrote Her Dissertation
on Trolls Thinks of Violentacrez". The Atlantic.
21."Cyberbullying / Bullying Statistics". February 19, 2016.
22."How online abuse of women has spiraled out of control". TED (conference). January
18, 2017. Retrieved January 18, 2017.
23."Defining a Cyberbully". The National Science Foundation. Retrieved November 8,2011.
24.Görzig, Anke; Lara A. Frumkin (2013). "Cyberbullying experiences on-the-go: When
social media can become distressing". Cyberpsychology: Journal of Psychosocial
Research on Cyberspace.
25.Boyd, D. (2014). Bullying is social media amplifying meanness and cruelty? In It's
Complicated the social lives of networked teens (p. 137). New Haven, CT: Yale
University Press.
26."Teen and Young Adult Internet Use". Pew Internet Project. Retrieved 5 January 2015.
27.Kowalski, Giumetti, Schroeder & Lattanner, R. M., G. W., A. N., M. R. (2014). "Bullying in
the digital age: A critical review and meta-analysis of cyberbullying research among
youth". Psychological Bulletin: 1073–1137. – via google scholar.
28."Cyberbullying Statistics". Internet Safety 101. Retrieved 5 January 2015.
29."Teens, Social Media, and Privacy". Retrieved 15 November 2015.
30."Stranger Danger: Protecting Your Children from Cyber Bullying, Sexting, & Social
Media". Retrieved 15 November 2015.
31."Twitter abuse - '50% of misogynistic tweets from women'". BBC News. 2016-05-26.
Retrieved 2017-09-07.
32."New Demos study reveals scale of social media misogyny -
Demos". www.demos.co.uk. Retrieved 2017-09-07.
33.Douglas Fischer; The Daily Climate. "Cyber Bullying Intensifies as Climate Data
Questioned". scientificamerican.com.
34."Dominique Browning: When Grownups Bully Climate Scientists –
TIME.com". TIME.com.
35."Bullying climate change scientists". latrobe.edu.au.
36."Paths to Bullying in Online Gaming: The Effects of Gender, Preference for Playing
Violent Games, Hostility, and Aggressive Behavior on Bullying". National Sun Yat-sen
University. doi:10.2190/ec.47.3.a.
37.Lam L, Cheng Z and Liu X, 'Violent Online Games Exposure And
Cyberbullying/Victimization Among Adolescents' (2013) 16 Cyberpsychology, Behavior,
and Social Networking
38.Rosenberg, Alyssa. "Gamergate and how Internet users think about gaming and
harassment". The Washington Post.
39.Hu, Elise. "Pew: Gaming Is Least Welcoming Online Space For Women". All Tech
Considered. NPR.
40.MacDonald, Keza. "Are gamers really sexist?". The Guardian.
41.Norris, Kamala O. (2004). "Gender Stereotypes, Aggression, and Computer Games: An
Online Survey of Women". CyberPsychology & Behavior. CyberPsychology &
Behavior. 7(6): 714. doi:10.1089/cpb.2004.7.714.
42."Remarks by the President at Reception in Honor of Women's History Month". March 16,
2016.
43.Johnston, Casey. "Women are gamers, but largely absent from "e-sports"". Ars
Technica.
44.O'Leary, Amy. "In Virtual Play, Sex Harassment Is All Too Real". The New York Times.
45.Crecente, Brian. "Plague of game dev harassment erodes industry, spurs support
groups". Polygon.
46.Jenkins, Ria. "When will gamers understand that criticism isn't censorship?". The
Guardian.
47.Berlatsky, Noah. "Online Harassment of Women Isn't Just a Gamer Problem". Pacific
Standard.
48.Young, Cathy. "Blame GamerGate's Bad Rep on Smears and Shoddy Journalism". New
York Observer.
49.Jager, Chris. "Crowdsourcing Tends To Attract The Worst Kind Of People". Lifehacker.
Gawker Media.
50.Citron, Danielle (2014). Hate Crimes in Cyberspace. Cambridge, Mass., USA & London,
UK: Harvard University Press. ISBN 978-0-674-36829-3.
51."Cyberstalking, cyberharassment and cyberbullying". NCSL National Conference of
State Legislatures. Archived from the original on September 6, 2015.
52.Cyberstalking Washington State Legislature
53.Bailey, Melissa (May 28, 2012). "Back Off, Bully!".
54."What Is Cyberstalking?". Archived from the original on 2014-12-27.
55.Cyberbullying Enacted Legislation: 2006–2010 Archived June 9, 2013, at the Wayback
Machine. Legislation by State, NCSL
56.CT teens develop bullying app to protect peers 7 News; June 2012 Archived June 22,
2012, at the Wayback Machine.
57."http://onlinelibrary.wiley.com.jproxy.nuim.ie/doi/10.1002/pits.21603/abstract". doi:10.10
02/pits.21603/abstract. External link in |title= (help)
58.Current and pending cyberstalking-related United States federal and state laws WHOA
59.The Global Cyber Law Database Archived June 20, 2012, at the Wayback Machine.
GCLD
60.MacDonald, Gregg (September 1, 2010). "Cyber-bullying defies traditional stereotype:
Girls are more likely than boys to engage in this new trend, research suggests". Fairfax
Times. Archived from the original on May 26, 2013.
61.Cyberbullying in Adolescent Victims: Perception and Coping Journal of Psychosocial
Research on Cyberspace
62."Stop Cyberbullying". Stop Cyberbullying. 2005-06-27. Retrieved 2013-10-08.
63.Topping, Alexandra. "Cyberbullying on social networks spawning form of self-harm".
Guardian News and Media Limited. Retrieved 6 August 2013.
64.Englander, Elizabeth (June 2012). "Digital Self-Harm: Frequency, Type, Motivations, and
Outcomes". MARC Research Reports. Massachusetts Aggression Reduction Center,
Bridgewater State University. 5.
65.Wayne Petherick (2009). "Cyber-Stalking:Obsessional Pursuit and the Digital
Criminal". TrueTV. Archived from the original on February 9, 2009.
66.Cyberbullying Stalking and Harassment
67.Cross, D., Shaw, T., Hearn, L., Epstein, M., Monks, H., Lester, L., & Thomas, L. 2009.
Australian Covert Bullying Prevalence Study (ACBPS). Child Health Promotion Research
Centre, Edith Cowan University, Perth. Deewr.gov.au. Retrieved on July 6, 2011.
68.Zhou, Zongkui "Cyberbullying and its risk factors among Chinese high school
students"School Psychology International December 2013 34: 630–647, first published
on May 8, 2013
69.Fung, Annis L. C. "The Phenomenon Of Cyberbullying: Its Aetiology And
Intervention." Journal of Youth Studies (10297847) 13.2 (2010): 31–42. Academic
Search Complete. Web. 28 Oct. 2016.
70.Hasebrink, U (2011). "Patterns of risk and safety online. In-depth analyses from the EU
Kids Online survey of 9- to 16-year-olds and their parents in 25 European
countries"(PDF).
71.Hasebrink, U., Livingstone, S., Haddon, L. and Ólafsson, K.(2009) Comparing children's
online opportunities and risks across Europe: Cross-national comparisons for EU Kids
Online. LSE, London: EU Kids Online (Deliverable D3.2, 2nd edition), ISBN 978-0-85328-
406-2 secondedition.pdf lse.ac.uk
72.Sourander, A.; Klomek, A.B.; Ikonen, M.; Lindroos, J.; Luntamo, T.; Koskeiainen, M.;
Helenius, H. (2010). "Psychosocial risk factors associated with cyberbullying among
adolescents: A population-based study". Archives of General Psychiatry. 67 (7): 720–
728. doi:10.1001/archgenpsychiatry.2010.79.
73.Callaghan, Mary; Kelly, Colette; Molcho, Michal (2014). International Journal
of Public Health. 60 (2): 199. doi:10.1007/s00038-014-0638-7.
74.O'Neill, Brian; Dinh, Thuy (2016). "Cyberbullying among 9–16 year olds in Ireland".
Dublin Institute of Technology.
75.Cross-Tab Marketing Services & Telecommunications Research Group for Microsoft
Corporation
76.Campbell, Marilyn A. (2005). Cyber bullying: An old problem in a new guise?
77.Sugimori Shinkichi (2012). "Anatomy of Japanese Bullying". nippon.com.
Retrieved January 5, 2015.
78."Cyber bullying bedevils Japan". The Sydney Morning Herald. Retrieved January 5,2015.
79.Cyber Bullying: Student Awareness Palm Springs Unified School District Retrieved 5
January 2015
80.Data pulled from http://cyberbullying.org/summary-of-our-cyberbullying-research/
81.Finkelhor, D., Mitchell, K.J., & Wolak, J. (2000). Online victimization: A report on the
nation's youth. Alexandria, VA: National Center for Missing and Exploited Children.
82."What Parents Need to Know About Cyberbullying". ABC News Primetime. ABC News
Internet Ventures. 2006-09-12. Retrieved 2015-02-03.
83.Wolak, J., Mitchell, K.J., & Finkelhor, D. (2006). Online Victimization of Youth: Five
Years Later. Alexandria, VA: National Center for Missing and Exploited Children.
84.Ybarra, M.L.; Mitchell, K.J.; Wolak, J.; Finkelhor, D. (Oct 2006). "Examining
characteristics and associated distress related to Internet harassment: findings from
the Second Youth Internet Safety Survey". Pediatrics. 118 (4): e1169–
77. doi:10.1542/peds.2006-0815. PMID 17015505.
85.Ybarra, M.L.; Mitchell, K.J. (Aug 2007). "Prevalence and frequency of Internet
harassment instigation: implications for adolescent health". J Adolesc Health. 41 (2):
189–95. doi:10.1016/j.jadohealth.2007.03.005. PMID 17659224.
86."Statistics on Bullying" (PDF). Archived from the original (PDF) on October 4, 2013.
87.Hinduja, S. & Patchin, J. W. (2007). Offline Consequences of Online Victimization:
School Violence and Delinquency. Journal of School Violence, 6(3), 89–112.
88.National Children's Home. (2005).Putting U in the picture. Mobile Bullying Survey
2005. Archived October 28, 2005, at the Wayback Machine.(pdf)
89."Cyberbullying FAQ For Teens". National Crime Prevention Council. 2015.
Retrieved 2015-02-03.
90.Hertz, M. F.; David-Ferdon, C. (2008). Electronic Media and Youth Violence: A CDC Issue
Brief for Educators and Caregivers (PDF). Atlanta (GA): Centers for Disease Control.
p. 9. Retrieved 2015-02-03.
91.Ybarra, Michele L.; Diener-West, Marie; Leaf, Philip J. (December 2007). "Examining the
overlap in internet harassment and school bullying: implications for school
intervention". Journal of Adolescent Health. 41 (6 Suppl 1): S42–
S50. doi:10.1016/j.jadohealth.2007.09.004.
92.Kowalski, Robin M.; Limber, Susan P. (December 2007). "Electronic bullying among
middle school students". Journal of Adolescent Health. 41 (6 Suppl 1): S22–
S30. doi:10.1016/j.jadohealth.2007.08.017.
93.Hertz, M. F.; David-Ferdon, C. (2008). Electronic Media and Youth Violence: A CDC Issue
Brief for Educators and Caregivers (PDF). Atlanta (GA): Centers for Disease Control.
p. 7. Retrieved 2015-02-03.
94.Patchin, J. W.; Hinduja, S. (2006). "Bullies move beyond the schoolyard: A preliminary
look at cyberbullying". Youth Violence and Juvenile Justice. 4 (2): 148–
169. doi:10.1177/1541204006286288.
95.Snyder, Thomas D.; Robers, Simone; Kemp, Jana; Rathbun, Amy; Morgan, Rachel (2014-
06-10). "Indicator 11: Bullying at School and Cyber-Bullying Anywhere" (PDF). Indicators
of School Crime and Safety: 2013 (Compendium). Bureau of Justice Statistics (BJS) and
National Center for Education Statistics Institute of Education Sciences (ies). NCES
2014042. Retrieved 2015-02-03.
96.Kann, Laura; Kinchen, Steve; Shanklin, Shari L.; Flint, Katherine H.; Hawkins, Joseph;
Harris, William A.; Lowry, Richard; Olsen, Emily O'Malley; McManus, Tim; Chyen, David;
Whittle, Lisa; Taylor, Eboni; Demissie, Zewditu; Brener, Nancy; Thornton, Jemekia;
Moore, John; Zaza, Stephanie (2014-06-13). "Youth Risk Behavior Surveillance — United
States, 2013" (PDF). Morbidity and Mortality Weekly Report (MMWR). Centers for
Disease Control and Prevention. 63 (4): 66. Retrieved 16 February 2015.
97.Mehari, Krista; Farrell, Albert; Le, Anh-Thuy (2014). "Cyberbullying among adolescents:
Measures in search of a construct". Psychology of Violence. 4 (4): 399–
415. doi:10.1037/a0037521. Retrieved March 24, 2015.
98.Howlett-Brandon, Mary (2014). "CYBERBULLYING: AN EXAMINATION OF GENDER,
RACE, ETHNICITY, AND ENVIRONMENTAL FACTORS FROM THE NATIONAL CRIME
VICTIMIZATION SURVEY: STUDENT CRIME SUPPLEMENT, 2009". VCU Theses and
Dissertations, VCU Scholars Compass. Virginia Commonwealth University.
Retrieved March 30, 2015.
99."2015's Best & Worst States at Controlling Bullying". WalletHub. Retrieved 2015-11-25.
100."9-R students create teacher-bashing tweets". The Durango Herald. Retrieved 2015-
11-25.
101.PÉREZ-PEÑA, RICHARD. "Christie Signs Tougher Law on Bullying in Schools".
NewYork Times. Retrieved January 6, 2011.
102.Bill targets adults who cyberbully Pantagraph, by Kevin Mcdermott, December 20,
2007
103.A rallying cry against cyberbullying. CNET News, by Stefanie Olsen, June 7, 2008
104."Fort Bragg USD".
105."Education Legislation: Cyber-Bullying". Centerdigitaled.com. 2009-03-16.
Retrieved 2016-06-05.
106.Surdin, Ashley (January 1, 2009). "States Passing Laws to Combat Cyber-Bullying —
washingtonpost.com". The Washington Post. Retrieved January 2, 2009.
107.International IT and e-commerce legal info. Out-law.com. Retrieved on July 6, 2011.
108.Primack, Alvin J.; Johnson, Kevin A. (Spring 2017). "Student cyberbullying inside the
digital schoolhouse gate: Toward a standard for determining where a "School" is". First
Amendment Studies. 51.1: 30–48.
109."Bullying, Harassment and Stress in the Workplace —A European
Perspective" (PDF). Bullying, Harassment and Stress in the Workplace —A European
Perspective. Proskauer. Retrieved 18 November 2015.
110."Statutory guidance – Keeping children safe in education". GOV.UK. Department for
Education. Retrieved 20 November 2015.
111."New guidelines to help industry promote internet safety". GOV.UK. Department for
Education & Tim Loughton. Retrieved 20 November 2015.
112.Von Marees, N.; Petermann, F. (2012). "Cyberbullying: An increasing challenge for
schools". School Psychology International. 33 (5): 476.
113.Smyth, S. M. (2010). Cybercrime in Canadian criminal law. (pp. 105–122). Toronto, ON:
Carswell.
114.Walther, B (2012). "Cyberbullying: Holding grownups liable for negligent
entrustment". Houston Law Review. 49 (2): 531–562.
115.Stauffer, Sterling; Heath, Melissa Allen; Coyne, Sarah Marie; Ferrin, Scott (2012). "High
school teachers' perceptions of cyberbullying prevention and intervention
strategies". Psychology in the Schools. 49 (4): 352. doi:10.1002/pits.21603.
116.Alexandra Topping; Ellen Coyne and agencies (8 August 2013). "Cyberbullying
websites should be boycotted, says Cameron: Prime minister calls for website
operators to 'step up to the plate', following death of 14-year-old Hannah Smith". The
Guardian.
117.Kelly Running. "Cyber-bullying and popular culture". Carlyle Observer.
Retrieved January 5, 2015.
118."Cyberthreat: How to protect yourself from online bullying". Ideas and Discoveries.
Ideas and Discoveries: 76. 2011.
119.Alvarez, Lizette. "Girl's Suicide Points to Rise in Apps Used by Cyberbullies". The New
York Times. Retrieved 20 November 2013.
120."Effects of Bullying". StopBullying.gov. U.S. Department of Health & Human Services.
Retrieved 18 November 2015.
121."Cyber-Bullying and its Effect on our Youth". American Osteopathic Association.
American Osteopathic Association. Retrieved 21 November 2015.
122.Deniz, Metin (7 July 2015). "A Study on Primary School Students' Being Cyber Bullies
and Victims According to Gender, Grade, and Socioeconomic Status". Croatian Journal
of Education. 17 (3): 659–680. doi:10.15516/cje.v17i3.835.
123.Simmons, Rachel (2011). Odd Girl Out. Mariner Books.
124.Binns, Amy (2013). "Facebook's Ugly Sisters: Anonymity and abuse on Formspring and
Ask.fm". Media Education Research Journal. 4(1): 27.
125.Polsky, Carol (23 March 2010). "Family and Friends shocked at cyberposts after teen's
death". Newsday.
126."Teenager in rail suicide was sent abusive message on social networking site". The
Telegraph. 22 July 2011.
127."First Class Rank Requirements". US Scout Service Project. Retrieved August 5, 2008.
128."Always be prepared to battle bullies". NBC News.
129."Leelila Strogov – Fox 11 LA – Cyber Bullies". Fox 11.
130."FOX 11 Investigates: 'Anonymous'". Fox Television Stations, Inc.
131.Smith, Skylar. "2 dead after head-on collision with CSUF student". Dailytitan.com.
Retrieved 2016-06-05.
132."End Revenge Porn". End Revenge Porn.
133.Hertzog, J. (2015, October 5). October is National Bullying Prevention Awareness
Month. Retrieved November 3, 2015,
from http://www.stopbullying.gov/blog/2015/10/05/october-national-bullying-prevention-
awareness-month
134.YouTube tackles bullying online BBC News, November 19, 2007
135.Salazar, Cristian (2010-05-24). "Alexis Pilkington Facebook Horror: Cyber Bullies
Harass Teen Even After Suicide". Huffington Post. Retrieved 22 October 2012.
136."The BULLY Project". The BULLY Project.
137.Berson, I. R.; Berson, M. J.; Ferron, J. M. (2002). "Emerging risks of violence in the
digital age: Lessons for educators from an online study of adolescent girls in the United
States". Journal of School Violence. 1 (2): 51–71. doi:10.1300/j202v01n02_04.
138.Burgess-Proctor, A., Patchin, J. W., & Hinduja, S. (2009). Cyberbullying and online
harassment: Reconceptualizing the victimization of adolescent girls. In V. Garcia and J.
Clifford [Eds.]. Female crime victims: Reality reconsidered. Upper Saddle River, NJ:
Prentice Hall. In Print.
139.Keith, S. & Martin, M. E. (2005). Cyber-bullying: Creating a Culture of Respect in a
Cyber World. Reclaiming Children & Youth, 13(4), 224–228.
140.Hinduja, S.; Patchin, J. W. (2007). "Offline Consequences of Online Victimization:
School Violence and Delinquency". Journal of School Violence. 6 (3): 89–
112. doi:10.1300/j202v06n03_06.
141.Hinduja, S.; Patchin, J. W. (2008). "Cyberbullying: An Exploratory Analysis of Factors
Related to Offending and Victimization". Deviant Behavior. 29 (2): 129–
156. doi:10.1080/01639620701457816.
142.Hinduja, S. & Patchin, J. W. (2009). Bullying beyond the Schoolyard: Preventing and
Responding to Cyberbullying. Thousand Oaks, CA: Sage Publications.
143.Patchin, J. & Hinduja, S. (2006). Bullies Move beyond the Schoolyard: A Preliminary
Look at Cyberbullying. Youth Violence and Juvenile Justice', 4(2), 148–169.
144.Tettegah, S. Y., Betout, D., & Taylor, K. R. (2006). Cyber-bullying and schools in an
electronic era. In S. Tettegah & R. Hunter (Eds.) Technology and Education: Issues in
administration, policy and applications in k12 school. PP. 17–28. London: Elsevier.
145.Wolak, J. Mitchell, K.J., & Finkelhor, D. (2006). Online victimization of youth: 5 years
later. Alexandria, VA: National Center for Missing & Exploited Children. Available
at unh.edu
146.Ybarra, M. L.; Mitchell, J. K. (2004). "Online aggressor/targets, aggressors and targets:
A comparison of associated youth characteristics". Journal of Child Psychology and
Psychiatry. 45 (7): 1308–1316. doi:10.1111/j.1469-7610.2004.00328.x. PMID 15335350.
147.Ybarra ML (2004). Linkages between depressive symptomatology and Internet
harassment among young regular Internet users. Cyberpsychol and Behavior.
Apr;7(2):247-57.
148.Ybarra ML, Mitchell KJ (2004). Youth engaging in online harassment: associations
with caregiver-child relationships, Internet use, and personal characteristics. Journal
of Adolescence. Jun;27(3):319-36.
149.Frederick S. Lane, (Chicago: NTI Upstream, 2011)
5.2.2 Cybercrime
5.2.2.1 Commentary
Cybercrime is crime involving computers and networks.[1] The computer may be the
instrument of the crime or its target,[2][3] on anything from a personal to a national
scale,[4] covering breaches of privacy, scams, theft, harassment and threats (illegal,
obscene or offensive content),[15][16][17][18][19][20][21] hacking, fraud, identity theft,
unsolicited bulk email, copyright infringement, unwarranted mass surveillance, child
pornography and child grooming, and by various estimates it costs the global economy
heavily.[5][6][7][8][13][14] People committing cybercrime require sufficient technical
knowledge to construct and deploy the attack using viruses, denial-of-service attacks,
hacking, phishing and malware (malicious code).
Cybercrime is performed by cyberterrorists, foreign intelligence services, or other
groups that map potential security holes in critical systems in order to intimidate a
government or organization for political or social objectives.[9] Cyberextortionists
threaten repeated denial-of-service or other attacks by malicious hackers, demanding
money in return for promising to stop the attacks and to offer protection for a website,
e-mail server, or computer system.[10][11] Cyberwarfare occurs in war through hacking or
denial of service against strategic command and control systems.[12]
Documented cases cover many areas of commerce and government.[22][23][24][25][26]
[27][28][29][30]
Information about cybercrime techniques spreads over the internet,[31] and the service
architecture of the cloud facilitates spam and denial of service.[32][33]
Investigation is performed through logs kept on the computers involved and by Internet
Service Providers.
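Log analysis of this kind frequently begins by ranking traffic sources. The Python
sketch below counts requests per source IP address in a web-server log; the log format
(an IP at the start of each line) and the filename are assumptions, and the heaviest
talkers are investigative leads, not proof.

import re
from collections import Counter

IP = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})")

def top_sources(path, n=5):
    # Count requests per leading IP address and return the n most frequent.
    hits = Counter()
    with open(path) as log:
        for line in log:
            m = IP.match(line)
            if m:
                hits[m.group(1)] += 1
    return hits.most_common(n)

print(top_sources("access.log"))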
Legislation is weak in the area of cybercrime, so it is rare that criminals are prosecuted
successfully.[34][35][36]
Punishments for cybercrime include imprisonment, banning offenders from using particular
devices, restricting them to particular devices, or employing them as security
experts.[37][38] Awareness has been the best way of avoiding being caught up in
cybercrime.[39]
5.2.2.2 References
1. Moore, R. (2005) "Cyber crime: Investigating High-Technology Computer Crime,"
Cleveland, Mississippi: Anderson Publishing.
2. Warren G. Kruse, Jay G. Heiser (2002). Computer forensics: incident response
essentials. Addison-Wesley. p. 392. ISBN 0-201-70719-5.
3. Halder, D., & Jaishankar, K. (2011) Cyber crime and the Victimization of Women:
Laws, Rights, and Regulations. Hershey, PA, USA: IGI Global. ISBN 978-1-60960-830-9
4. Steve Morgan (January 17, 2016). "Cyber Crime Costs Projected To Reach $2
Trillion by 2019". Forbes. Retrieved September 22, 2016.
5. "Cyber crime costs global economy $445 billion a year: report". Reuters. 2014-06-
09. Retrieved 2014-06-17.
6. "Sex, Lies and Cybercrime Surveys" (PDF). Microsoft. 2011-06-15.
Retrieved 2015-03-11.
7. "#Cybercrime— what are the costs to victims - North Denver News". North
Denver News. Retrieved 16 May 2015.
8. "Cybercrime will Cost Businesses Over $2 Trillion by 2019" (Press release).
Juniper Research. Retrieved May 21, 2016.
9. "Cybercriminals Need Shopping Money in 2017, Too! -
SentinelOne". sentinelone.com. Retrieved 2017-03-24.
10. Lepofsky, Ron. "Cyberextortion by Denial-of-Service Attack" (PDF). Archived
from the original (PDF) on July 6, 2011.
11. Mohanta, Abhijit (6 December 2014). "Latest Sony Pictures Breach : A Deadly
Cyber Extortion". Retrieved 20 September 2015.
12. War is War? The utility of cyberspace operations in the contemporary operational
environment
13. "Cyber Crime definition".
14. "Save browsing". google.
15. "2011 U.S. Sentencing Guidelines Manual § 2G1.3(b)(3)".
16. "United States of America v. Neil Scott Kramer". Retrieved 2013-10-23.
17. "South Carolina". Retrieved 16 May 2015.
18. "1. In Connecticut , harassment by computer is now a crime". Nerac Inc.
February 3, 2003. Archived from the original on April 10, 2008.
19. "Section 18.2-152.7:1". Code of Virginia. Legislative Information System of
Virginia. Retrieved 2008-11-27.
20. Susan W. Brenner, Cybercrime: Criminal Threats from Cyberspace, ABC-CLIO,
2010, pp. 91
21. "We talked to the opportunist imitator behind Silk Road 3.0". 2014-11-07.
Retrieved 2016-10-04.
22. Weitzer, Ronald (2003). Current Controversies in Criminology. Upper Saddle
River, New Jersey: Pearson Education Press. p. 150.
23. David Mann And Mike Sutton (2011-11-06). ">>Netcrime". Bjc.oxfordjournals.org.
Retrieved 2011-11-10.
24. "A walk on the dark side". The Economist. 2007-09-30.
25. "DHS: Secretary Napolitano and Attorney General Holder Announce Largest U.S.
Prosecution of International Criminal Network Organized to Sexually Exploit Children".
Dhs.gov. Retrieved 2011-11-10.
26. DAVID K. LI (January 17, 2012). "Zappos cyber attack". New York Post.
27. Salvador Rodriguez (June 6, 2012). "Like LinkedIn, eHarmony is hacked; 1.5
million passwords stolen". Los Angeles Times.
28. Rick Rothacker (Oct 12, 2012). "Cyber attacks against Wells Fargo "significant,"
handled well: CFO". Reuters.
29. "AP Twitter Hack Falsely Claims Explosions at White House". Samantha Murphy.
April 23, 2013. Retrieved April 23, 2013.
30. "Fake Tweet Erasing $136 Billion Shows Markets Need Humans". Bloomberg.
April 23, 2013. Retrieved April 23, 2013.
31. Richet, Jean-Loup (2013). "From Young Hackers to Crackers". International
Journal of Technology and Human Interaction. 9 (1).
32. Richet, Jean-Loup (2011). "Adoption of deviant behavior and cybercrime 'Know
how' diffusion". York Deviancy Conference.
33. Richet, Jean-Loup (2012). "How to Become a Black Hat Hacker? An Exploratory
Study of Barriers to Entry Into Cybercrime.". 17th AIM Symposium.
34. Kshetri, Nir. "Diffusion and Effects of Cyber Crime in Developing Countries".
35. Northam, Jackie. "U.S. Creates First Sanctions Program Against Cybercriminals".
36. Adrian Cristian MOISE (2015). "Analysis of Directive 2013/40/EU on attacks
against information systems in the context of approximation of law at the European
level" (PDF). Journal of Law and Administrative Sciences. Archived from the
original (PDF) on December 8, 2015.
37. "Managing the Risks Posed by Offender Computer Use - Perspectives" (PDF).
December 2011.
38. Bowker, Art (2012). The Cybercrime Handbook for Community Corrections:
Managing Risk in the 21st Century. Springfield: Thomas. ISBN 9780398087289.
39. Feinberg, T (2008). "Whether it happens at school or off-campus, cyberbullying
disrupts and affects.". Cyberbullying: 10.
40. Balkin, J., Grimmelmann, J., Katz, E., Kozlovski, N., Wagman, S. & Zarsky, T.
(2006) (eds) Cybercrime: Digital Cops in a Networked Environment, New York University
Press, New York.
41. Bowker, Art (2012) "The Cybercrime Handbook for Community Corrections:
Managing Risk in the 21st Century" Charles C. Thomas Publishers, Ltd. Springfield.
42. Brenner, S. (2007) Law in an Era of Smart Technology, Oxford: Oxford University
Press
43. Broadhurst, R., and Chang, Lennon Y.C. (2013) "Cybercrime in Asia: trends and
challenges", in B. Hebenton, SY Shou, & J. Liu (eds), Asian Handbook of Criminology
(pp. 49–64). New York: Springer (ISBN 978-1-4614-5217-1)
44. Chang, L.Y. C. (2012) Cybercrime in the Greater China Region: Regulatory
Responses and Crime Prevention across the Taiwan Strait. Cheltenham: Edward Elgar.
(ISBN 978-0-85793-667-7)
45. Chang, Lennon Y.C., & Grabosky, P. (2014) "Cybercrime and establishing a
secure cyber world", in M. Gill (ed) Handbook of Security (pp. 321–339). NY: Palgrave.
46. Csonka P. (2000) Internet Crime; the Draft council of Europe convention on
cyber-crime: A response to the challenge of crime in the age of the internet? Computer
Law & Security Report Vol.16 no.5.
47. Easttom C. (2010) Computer Crime Investigation and the Law
48. Fafinski, S. (2009) Computer Misuse: Response, regulation and the
law Cullompton: Willan
49. Glenny, Misha, DarkMarket : cyberthieves, cybercops, and you, New York, NY :
Alfred A. Knopf, 2011. ISBN 978-0-307-59293-4
50. Grabosky, P. (2006) Electronic Crime, New Jersey: Prentice Hall
51. Halder, D., & Jaishankar, K. (2011) Cyber crime and the Victimization of Women:
Laws, Rights, and Regulations. Hershey, PA, USA: IGI Global. ISBN 978-1-60960-830-9
52. Jaishankar, K. (Ed.) (2011). Cyber Criminology: Exploring Internet Crimes and
Criminal behavior. Boca Raton, FL, USA: CRC Press, Taylor and Francis Group.
53. McQuade, S. (2006) Understanding and Managing Cybercrime, Boston: Allyn &
Bacon.
54. McQuade, S. (ed) (2009) The Encyclopedia of Cybercrime, Westport,
CT: Greenwood Press.
55. Parker D (1983) Fighting Computer Crime, U.S.: Charles Scribner's Sons.
56. Pattavina, A. (ed) Information Technology and the Criminal Justice
System, Thousand Oaks, CA: Sage.
57. Paul Taylor. Hackers: Crime in the Digital Sublime (November 3, 1999 ed.).
Routledge; 1 edition. p. 200. ISBN 0-415-18072-4.
58. Robertson, J. (2010, March 2). Authorities bust 3 in infection of 13m computers.
Retrieved March 26, 2010, from Boston News: Boston.com
59. Walden, I. (2007) Computer Crimes and Digital Investigations, Oxford: Oxford
University Press.
60. Rolón, Darío N. Control, vigilancia y respuesta penal en el ciberespacio, Latin
American's New Security Thinking, Clacso, 2014, pp. 167/182
61. Richet, J.L. (2013) From Young Hackers to Crackers, International Journal of
Technology and Human Interaction (IJTHI), 9(3), 53-62.
62. Wall, D.S. (2007) Cybercrimes: The transformation of crime in the information
age, Cambridge: Polity.
63. Williams, M. (2006) Virtually Criminal: Crime, Deviance and Regulation
Online, Routledge, London.
64. Yar, M. (2006) Cybercrime and Society, London: Sage.
5.2.3 Cyberwarfare
5.2.3.1 Commentary
Cyberwarfare involves the use and targeting of computers and networks in warfare. It
involves both offensive and defensive operations pertaining to the threat of
cyberattacks, espionage and sabotage.[1][2][3][4][5][6][7][8][9][10][11][12]
Cyberattacks, where immediate damage or disruption is caused, are the main concern.[13]
Cyber espionage can provide the information needed to mount a successful cyberattack, or
a scandal with which to launch an information war.[14][15][16][17][18]
[19][20]
Computers, networks and satellites coordinate the activities of commerce, industry and
the military, e.g. power, water, fuel, communications, transportation infrastructure,
weaponry, global politics and governmental agencies. All of these activities and
components are vulnerable to electronic attack and sabotage.[21][22][23]
A denial-of-service attack attempts to make a system (web server, service or root name
server) unavailable to its intended users.
Electrical power grids are susceptible to cyberwarfare,[24][25] and governments try to
identify vulnerabilities and improve the security of control-system networks,[26][27][28]
[29][30] for example by disconnecting the grid from the Internet and running it with
droop speed control only.[31][32][33][34][35] Militaries attempt to find and, when
necessary, neutralize cyberattacks and to defend military computer networks.[36][37]
[38][39][40][41][42][43]
Civil targets of Internet sabotage include all aspects of the Internet, from the
backbones of the web, through Internet service providers, to data communication
media and network equipment: web servers, enterprise information systems,
client-server systems, communication links, the desktops and laptops in businesses
and homes, electrical grids and telecommunication systems.
Hacktivism subverts computers and networks to promote an agenda, and can extend to
attacks, theft and virtual sabotage that is viewed as cyberwarfare, or mistaken for it.[44]
The private sector is hacked in ongoing global conflicts, and industrial
espionage occurs widely.[45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60]
[61][62][63][64][65][66][67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82]
[83][84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99][100][101][102][103]
[104][105][106][107][108][109][110][111][112][113][114][115][116][117][118][119][120]
[121][122][123][124][125][126][127][128][129][130][131][132][133]
Cyber counter-intelligence comprises measures to identify, penetrate or neutralize
foreign or industrial operations that use cyber means as their primary tradecraft, as
well as foreign intelligence services or industrial spying.[134][135][136][137][138][139]
[140][141]
There is controversy over the terms used to describe cyberwarfare.[140][141][142][143]
[144][145][146][147]
The legality of and rules for cyberwarfare are discussed in the references.[148][149]
[150][151][152][153][154]
5.2.3.2 References
1. Clarke, Richard A. Cyber War, HarperCollins (2010) ISBN 9780061962233
2. Blitz, James (1 November 2011). "Security: A huge challenge from China, Russia
and organised crime". Financial Times. Retrieved 6 June 2015.
3. Arquilla, John (1999). "Can information warfare ever be just?". Ethics and
Information Technology. 1 (3): 203–212. doi:10.1023/A:1010066528521.
4. Collins, Sean (April 2012). "Stuxnet: the emergence of a new cyber weapon and
its implications". Journal of Policing, Intelligence and Counter Terrorism. 7 (1).
Retrieved 6 June 2015.
5. "Critical infrastructure vulnerable to attack, warned cyber security
expert". gsnmagazine.com. Government Security News. 2014. Retrieved 6 June 2015.
6. Maniscalchi, Jago (4 September 2011). "What is Cyberwar?". Retrieved 6
June 2015.
7. Lynn, William J. III. "Defending a New Domain: The Pentagon's
Cyberstrategy", Foreign Affairs, Sept/Oct. 2010, pp. 97–108
8. Clapper, James R. "Worldwide Threat Assessment of the US Intelligence
Community ", Senate Armed Services Committee, 26 February 2015 p. 1
9. Lisa Lucile Owens, Justice and Warfare in Cyberspace, The Boston Review
(2015), available at [1]
10. Poole-Robb, Stuart. "Turkish blackout sparks fears of cyber attack on the West",
ITProPortal.com, 19 May 2015
11. USAF HQ, Annex 3–12 Cyberspace Ops, U.S. Air Force, 2011
12. James P. Farwell and Rafael Rohozinski, Stuxnet and the future of cyber war,
Survival, 2011
13. "Cyberattacks, Terrorism Top U.S. Security Threat Report". NPR.org. 12 March
2013.
14. "A Note on the Laws of War in Cyberspace", James A. Lewis, April 2010
15. Rayman, Noah (18 December 2013). "Merkel Compared NSA To Stasi in
Complaint To Obama". Time. Retrieved 1 February 2014.
16. Devereaux, Ryan; Greenwald, Glenn; Poitras, Laura (19 May 2014). "Data Pirates
of the Caribbean: The NSA Is Recording Every Cell Phone Call in the Bahamas". The
Intercept. First Look Media. Retrieved 21 May 2014.
17. Schonfeld, Zach (23 May 2014). "The Intercept Wouldn't Reveal a Country the
U.S. Is Spying On, So WikiLeaks Did Instead". Newsweek. Retrieved 26 May 2014.
18. Bodmer, Kilger, Carpenter, & Jones (2012). Reverse Deception: Organized Cyber
Threat Counter-Exploitation. New York: McGraw-Hill Osborne Media. ISBN
0071772499, ISBN 978-0071772495
19. Sanders, Sam (4 June 2015). "Massive Data Breach Puts 4 Million Federal
Employees' Records at Risk". NPR. Retrieved 5 June 2015.
20. Liptak, Kevin (4 June 2015). "U.S. government hacked; feds think China is the
culprit". CNN. Retrieved 5 June 2015.
21. "Clarke: More defense needed in cyberspace" HometownAnnapolis.com, 24
September 2010
22. "Malware Hits Computerized Industrial Equipment". The New York Times, 24
September 2010
23. Singer, P.W.; Friedman, Allan (2014). Cybersecurity and Cyberwar: What Everyone
Needs to Know. Oxford: Oxford University Press. p. 156. ISBN 978-0-19-991809-6.
24. Shiels, Maggie. (9 April 2009) BBC: Spies 'infiltrate US power grid'. BBC News.
Retrieved 8 November 2011.
25. Meserve, Jeanne (8 April 2009). "Hackers reportedly have embedded code in
power grid". CNN. Retrieved 8 November 2011.
26. "US concerned power grid vulnerable to cyber-attack". In.reuters.com (9 April
2009). Retrieved 8 November 2011.
27. Gorman, Siobhan. (8 April 2009) Electricity Grid in U.S. Penetrated By Spies. The
Wall Street Journal. Retrieved 8 November 2011.
28. NERC Public Notice. (PDF). Retrieved 8 November 2011.
29. Xinhua: China denies intruding into the U.S. electrical grid. 9 April 2009
30. 'China threat' theory rejected. China Daily (9 April 2009). Retrieved 8 November
2011.
31. ABC News: Video. ABC News. (20 April 2009). Retrieved 8 November 2011.
32. Disconnect electrical grid from Internet, former terror czar Clarke warns. The
Raw Story (8 April 2009). Retrieved 8 November 2011.
33. "White House Cyber Czar: 'There Is No Cyberwar'". Wired, 4 March 2010
34. Kim Zetter (3 March 2016). "Inside the Cunning, Unprecedented Hack of
Ukraine's Power Grid". Wired.
35. Evan Perez (12 February 2016). "U.S. official blames Russia for power grid attack
in Ukraine". CNN.
36. "Cyber-War Nominee Sees Gaps in Law", The New York Times, 14 April 2010
37. Cyber ShockWave Shows U.S. Unprepared For Cyber Threats.
Bipartisanpolicy.org. Retrieved 8 November 2011.
38. Drogin, Bob (17 February 2010). "In a doomsday cyber attack scenario, answers
are unsettling". Los Angeles Times.
39. Ali, Sarmad (16 February 2010). "Washington Group Tests Security in 'Cyber
ShockWave'". The Wall Street Journal.
40. Cyber ShockWave CNN/BPC wargame: was it a failure?. Computerworld (17
February 2010). Retrieved 8 November 2011.
41. Steve Ragan Report: The Cyber ShockWave event and its aftermath. The Tech
Herald. 16 February 2010
42. Lee, Andy (1 May 2012). "International Cyber Warfare: Limitations and
Possibilities".Jeju Peace Institute.
43. U.S. Navy Recruiting – Cyber Warfare Engineer.
44. Denning, D. E. (2008). The ethics of cyber conflict. The Handbook of Information
and Computer Ethics. 407–429.
45. Financial Weapons of War, 100 Minnesota Law Review 1377 (2016)
46. "Google Attack Is Tip Of Iceberg", McAfee Security Insights, 13 January 2010
47. Government-sponsored cyberattacks on the rise, McAfee says. Network
World (29 November 2007). Retrieved 8 November 2011.
48. "US embassy cables: China uses access to Microsoft source code to help plot
cyber warfare, US fears". The Guardian. London. 4 December 2010. Retrieved 31
December 2010.
49. "How China will use cyber warfare to leapfrog in military
competitiveness". Culture Mandala: The Bulletin of the Centre for East-West Cultural
and Economic Studies. 8 (1 October 2008). p. 37. Retrieved January 2013.
50. "China to make mastering cyber warfare A priority (2011)". Washington, D.C.:
NPR. Retrieved January 2013. Check date values in: |access-date= (help)
51. "How China will use cyber warfare to leapfrog in military
competitiveness". Culture Mandala: The Bulletin of the Centre for East-West Cultural
and Economic Studies. 8 (1 October 2008). p. 42. Retrieved January 2013.
52. "How China will use cyber warfare to leapfrog in military
competitiveness". Culture Mandala: The Bulletin of the Centre for East-West Cultural
and Economic Studies. 8 (1 October 2008). p. 43. Retrieved January 2013.
53. "Washington, Beijing in Cyber-War Standoff". Yahoo! News. 12 February 2013.
Retrieved January 2013.
54. Etzioni, Amitai (September 20, 2013). "MAR: A Model for US-China
Relations", The Diplomat.
55. Jim Finkle (3 August 2011). "State actor seen in "enormous" range of cyber
attacks". Reuters. Retrieved 3 August 2011.
56. Sudworth, John. (9 July 2009) "New cyberattacks hit South Korea". BBC News.
Retrieved 8 November 2011.
57. Williams, Martin. UK, Not North Korea, Source of DDOS Attacks, Researcher
Says. PC World.
58. "SK Hack by an Advanced Persistent Threat" (PDF). Command Five Pty Ltd.
Retrieved 24 September 2011.
59. Lee, Se Young. "South Korea raises alert after hackers attack broadcasters,
banks". Global Post. Retrieved 6 April 2013.
60. Kim, Eun-jung. "S. Korean military to prepare with U.S. for cyber warfare
scenarios". Yonhap News Agency. Retrieved 6 April 2013.
61. https://www.f-secure.com/documents/996508/1030745/nanhaishu_whitepaper.pdf
62. "Beware of the bugs: Can cyber attacks on India's critical infrastructure be
thwarted?". BusinessToday. Retrieved January 2013. Check date values in: |access-
date= (help)
63. "5 lakh cyber warriors to bolster India's e-defence". The Times of India. India. 16
October 2012. Retrieved 18 October 2012.
64. "36 government sites hacked by 'Indian Cyber Army'". The Express Tribune.
Retrieved 8 November 2011.
65. "Hacked by 'Pakistan cyber army', CBI website still not restored". Ndtv.com (4
December 2010). Retrieved 8 November 2011.
66. Pauli, Darren. "Copy paste slacker hackers pop corp locks in ode to stolen code".
The Register.
67. "APT Group 'Patchwork' Cuts-and-Pastes a Potent Attack". Threatpost. 7 July
2016. Retrieved 2 January 2017.
68. Mazanec, Brian M. (2015). The Evolution of Cyber War. USA: University of
Nebraska Press. pp. 235–236. ISBN 9781612347639.
69. Danchev, Dancho (11 August 2008). "Coordinated Russia vs Georgia
cyberattack". ZDNet. Retrieved 25 November 2008.
70. Cyberspace and the changing nature of warfare. Strategists must be aware that
part of every political and military conflict will take place on the internet, says Kenneth
Geers.
71. "www.axisglobe.com". Retrieved 1 August 2016.
72. Andrew Meier, Black Earth. W. W. Norton & Company, 2003, ISBN 0-393-05178-1,
pages 15-16.
73. Website of Kyrgyz Central Election Commission hacked by Estonian hackers,
Regnum, 14 December 2007
74. "Al Qaeda rocked by apparent cyberattack. But who did it?". The Christian
Science Monitor.
75. Britain faces serious cyber threat, spy agency head warns. The Globe and
Mail (13 October 2010). Retrieved 8 November 2011.
76. "Attack the City: why the banks are 'war gaming'".
77. "Wall Street banks learn how to survive in staged cyber attack". Reuters. 21
October 2013.
78. "Germany's 60-person Computer Network Operation (CNO) unit has been
practicing for cyber war for years."
79. "Hackers wanted to man front line in cyber war", The Local, 24 March 2013
80. "Germany to invest 100 million euros on internet surveillance: report",
Kazinform, 18 June 2013
81. "Nationaal Cyber Security Centrum – NCSC".
82. "Defensie Cyber Strategie".
83. "Cyber commando".
84. Ringstrom, Anna (January 25, 2017). Goodman, David, ed. "Swedish forces
exposed to extensive cyber attack: Dagens Nyheter". Reuters. Archived from the
original on January 25, 2017. Sweden's armed forces were recently exposed to an
extensive cyber attack that prompted them to shut down an IT system used in military
exercises, daily newspaper Dagens Nyheter reported on Wednesday. The attack that
affected the Caxcis IT system was confirmed to the Swedish newspaper by armed
forces spokesman Philip Simon.
85. Ukraine's military denies Russian hack attack, Yahoo! News (6 January 2017)
86. "Danger Close: Fancy Bear Tracking of Ukrainian Field Artillery Units".
CrowdStrike. 22 December 2016.
87. Defense ministry denies reports of alleged artillery losses because of Russian
hackers' break into software, Interfax-Ukraine (6 January 2017)
88. Mazanec, Brian M. (2015). The Evolution of Cyber War. USA: University of
Nebraska Press. pp. 221–222. ISBN 9781612347639.
89. "BlackEnergy malware activity spiked in runup to Ukraine power grid takedown".
The Register. Retrieved 26 December 2016.
90. "War in the fifth domain. Are the mouse and keyboard the new weapons of
conflict?". The Economist. 1 July 2010. Retrieved 2 July 2010. Important thinking about
the tactical and legal concepts of cyber-warfare is taking place in a former Soviet
barracks in Estonia, now home to NATO's "centre of excellence" for cyber-defence. It
was established in response to what has become known as "Web War 1", a concerted
denial-of-service attack on Estonian government, media and bank web servers that was
precipitated by the decision to move a Soviet-era war memorial in central Tallinn in
2007.
91. Estonia accuses Russia of 'cyber attack'. The Christian Science Monitor. (17 May
2007). Retrieved 8 November 2011.
92. Ian Traynor, "Russia accused of unleashing cyberwar to disable Estonia", The
Guardian, 17 May 2007
93. Boyd, Clark. (June 17, 2010) "Cyber-war a growing threat warn experts". BBC
News. Retrieved 8 November 2011.
94. Scott J. Shackelford, From Nuclear War to Net War: Analogizing Cyber Attacks in
International Law, 27 Berkeley J. Int'l Law. 192 (2009).
95. "Israel Adds Cyber-Attack to IDF", Military.com, 10 February 2010
96. Fulghum, David A. "Why Syria's Air Defenses Failed to Detect Israelis", Aviation
Week & Space Technology, 3 October 2007. Retrieved 3 October 2007.
97. Fulghum, David A. "Israel used electronic attack in air strike against Syrian
mystery target", Aviation Week & Space Technology, 8 October 2007. Retrieved 8
October 2007.
98. "Iran's military is preparing for cyber warfare". Flash//CRITIC Cyber Threat News.
Retrieved 18 March 2015.
99. AFP (1 October 2010). Stuxnet worm brings cyber warfare out of virtual world.
Google. Retrieved 8 November 2011.
100. Ralph Langner: Cracking Stuxnet, a 21st-century cyber weapon | Video on.
Ted.com. Retrieved 8 November 2011.
101. American Forces Press Service: Lynn Explains U.S. Cybersecurity Strategy.
Defense.gov. Retrieved 8 November 2011.
102. "Pentagon to Consider Cyberattacks Acts of War". The New York Times. 31 May
2006
103. Dilanian, Ken. "Cyber-attacks a bigger threat than Al Qaeda, officials say", Los
Angeles Times, 12 March 2013
104. "Intelligence Chairman: U.S. Fighting Cyber War 'Every Day'", PJ Media, 29 July
2013
105. "Cyberwar: War in the Fifth Domain" Economist, 1 July 2010
106. The Lipman Report, 15 October 2010
107. Clarke, Richard. "China's Cyberassault on America", The Wall Street Journal, 15
June 2011
108. "Cyberwarrior Shortage Threatens U.S. Security". NPR, 19 July 2010
109. "U.S. military cyberwar: What's off-limits?" CNET, 29 July 2010
110. "US Launched Cyber Attacks on Other Nations". RT, 26 January 2012.
111. Sanger, David E. "Obama Order Sped Up Wave of Cyberattacks Against Iran." The
New York Times, 1 June 2012.
112. ANNUAL REPORT TO CONGRESS Military and Security Developments Involving
the People's Republic of China 2010. US Defense Department (PDF). Retrieved 8
November 2011.
113. AP: Pentagon takes aim at China cyber threat Archived 23 August 2010 at
the Wayback Machine.
114. "The Joint Operating Environment", Joint Forces Command, 18 February 2010,
pp. 34–36
115. U.S. drone and predator fleet is being keylogged. Wired, October 2011. Retrieved
6 October 2011
116. Hennigan, W.J. "Air Force says drone computer virus poses 'no threat'". Los
Angeles Times, 13 October 2011.
117. Mathew J. Schwartz (21 November 2011). "Hacker Apparently Triggers Illinois
Water Pump Burnout". InformationWeek.
118. Kim Zetter (30 November 2011). "Exclusive: Comedy of Errors Led to False
'Water-Pump Hack' Report". Wired.
119. Barrett, Devlin (5 June 2015). "U.S. Suspects Hackers in China Breached About
four (4) Million People's Records, Officials Say". Wall Street Journal. Retrieved 5
June 2015.
120. "U.S. gov't hack may be four (4) times larger than first reported".
121. Sanders, Sam (4 June 2015). "Massive Data Breach Puts 4 Million Federal
Employees' Records At Risk". NPR.
122. "Joint Statement from the Department Of Homeland Security and Office of the
Director of National Intelligence on Election Security". Department Of Homeland
Security and Office of the Director of National Intelligence on Election Security.
October 7, 2016. Retrieved 15 October 2016.
123. "U.S. Says Russia Directed Hacks to Influence Elections". NYT. Oct 7, 2016.
124. "Presidential approval and reporting of covert actions". gpo.gov. United States
Code. Retrieved 16 October 2016.
125. "VP Biden Promises Response to Russian Hacking". NBC News Meet the Press.
Oct 14, 2016.
126. "Biden Hints at U.S. Response to Russia for Cyberattacks". NYT. Oct 15, 2016.
127. Lee, Carol E.; Sonne, Paul (December 30, 2016). "U.S. Sanctions Russia Over
Election Hacking; Moscow Threatens to Retaliate" – via Wall Street Journal.
128. "U.S. imposes sanctions on Russia over election interference". CBS News.
December 29, 2016. Retrieved December 29, 2016.
129. "US expels 35 Russian diplomats, closes two compounds: report". DW.COM.
December 29, 2016. Retrieved December 29, 2016.
130. Satter, Raphael. "US general: We hacked the enemy in Afghanistan.". Associated
Press, 24 August 2012.
131. Sanger, David E.; Broad, William J. (4 March 2017). "Trump Inherits a Secret
Cyberwar Against North Korean Missiles". The New York Times. Retrieved 4
March 2017.
132. A Bill. To amend the Homeland Security Act of 2002 and other laws to enhance
the security and resiliency of the cyber and communications infrastructure of the
United States. Senate.gov. 111th Congress, 2nd Session
133. Senators Say Cybersecurity Bill Has No 'Kill Switch', Information Week, 24 June
2010. Retrieved 25 June 2010.
134. DOD – Cyber Counterintelligence. Dtic.mil. Retrieved 8 November 2011.
135. Pentagon Bill To Fix Cyber Attacks: $100M. CBS News. Retrieved 8 November 2011.
136. "Senate Legislation Would Federalize Cybersecurity". The Washington Post.
Retrieved 8 November 2011.
137. "White House Eyes Cyber Security Plan". CBS News (10 February 2009).
Retrieved 8 November 2011.
138. CCD COE – Cyber Defence. Ccdcoe.org. Retrieved 8 November 2011.
139. Associated Press (11 May 2009) FBI to station cybercrime expert in
Estonia. Boston Herald. Retrieved 8 November 2011.
140. Reed, John. "Is the 'holy grail' of cyber security within reach?". Foreign Policy
Magazine, 6 September 2012.
141. Carroll, Chris. "US can trace cyberattacks, mount pre-emptive strikes, Panetta
says". Stars and Stripes, 11 October 2012.
142. "Latest viruses could mean 'end of world as we know it,' says man who
discovered Flame", The Times of Israel, 6 June 2012
143. "Cyber espionage bug attacking Middle East, but Israel untouched — so far", The
Times of Israel, 4 June 2013
144. Rid, Thomas (October 2011). "Cyber War Will Not Take Place". Journal of
Strategic Studies. 35: 5–32. doi:10.1080/01402390.2011.608939. Retrieved 21
October 2011.
145. Deibert, Ron (2011). "Tracking the emerging arms race in cyberspace". Bulletin
of the Atomic Scientists. 67 (1): 1–8. doi:10.1177/0096340210393703.
146. Sommer, Peter (January 2011). "Reducing Systemic Cybersecurity
Risk" (PDF). OECD Multi-Displinary Issues. Retrieved 21 May 2012.
147. Gaycken, Sandro (2010). "Cyberwar – Das Internet als Kriegsschauplatz".
148. Russian Embassy to the UK [2]. Retrieved 25 May 2012.
149. Tom Gjelten (23 September 2010). "Seeing The Internet As An 'Information
Weapon'". NPR. Retrieved 23 September 2010.
150. Gorman, Siobhan. (4 June 2010) WSJ: U.S. Backs Talks on Cyber Warfare. The
Wall Street Journal. Retrieved 8 November 2011.
151. Sean Gallagher, US, Russia to install "cyber-hotline" to prevent accidental
cyberwar, Arstechnica, 18 June 2013
152. Український центр політичного менеджменту – Зміст публікації – Конвенция о
запрещении использования кибервойны [Ukrainian Centre for Political Management -
Publication contents - Convention on the prohibition of the use of cyberwarfare].
Politik.org.ua. Retrieved 8 November 2011.
153. "'Digital Geneva Convention' needed to deter nation-state hacking: Microsoft
president". Reuters. 14 February 2017. Retrieved 20 February 2017.
154. Kaspersky, Eugene. "A Digital Geneva Convention? A Great Idea.". Forbes.
Retrieved 20 February 2017.
155. Bodmer, Kilger, Carpenter, & Jones (2012). Reverse Deception: Organized Cyber
Threat Counter-Exploitation. New York: McGraw-Hill Osborne Media. ISBN 0071772499,
"ISBN 978-0071772495"
156. NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE)
157. Cyberwar Twitter feed from Richard Stiennon
158. Cyberwar News community by Reza Rafati
159. "Sabotaging the System" video, "60 Minutes", 8 November 2009, CBS News, 15
minutes
160. ABC: Former White House security advisor warns of cyber war
161. Wall Street Journal: Fighting Wars in Cyberspace
162. Will There Be An Electronic Pearl Harbor, PC World by Ira Winkler, 1 December
2009
163. Senate panel: 80 percent of cyberattacks preventable, Wired, 17 November 2009
164. Consumer Reports Online Security Guide
165. Cyberwarfare reference materials
166. Duncan Gardham, 26 June 2009, Hackers recruited to fight 'new cold war',
Telegraph UK
167. Stefano Mele, Jan 2016, Cyber Strategy & Policy Brief (Volume 01 – January
2016)
168. Stefano Mele, Jun 2013, Cyber-Weapons: Legal and Strategic Aspects (version
2.0)
169. Stefano Mele, Sep 2010, Cyberwarfare and its damaging effects on citizens
170. Cybersecurity: Authoritative Reports and Resources, US Congressional Research
Service
171. Why the USA is Losing The Cyberwar Against China, by Joseph Steinberg,
VentureBeat, 9 November 2011
172. Michael Riley and Ashlee Vance, 20 July 2011, Cyber Weapons: The New Arms
Race
173. The Digital Arms Race: NSA Preps America for Future Battle, Der Spiegel,
January 2015
174. Andress, Jason. Winterfeld, Steve. (2011). Cyber Warfare: Techniques, Tactics
and Tools for Security Practitioners. Syngress. ISBN 1-59749-637-5
175. Brenner, S. (2009). Cyber Threats: The Emerging Fault Lines of the Nation State.
Oxford University Press. ISBN 0-19-538501-2
176. Carr, Jeffrey. (2010). Inside Cyber Warfare: Mapping the Cyber Underworld.
O'Reilly. ISBN 978-0-596-80215-8
177. Cordesman, Anthony H., Cordesman, Justin G. Cyber-threats, Information
Warfare, and Critical Infrastructure Protection, Greenwood Publ. (2002)
178. Costigan, Sean S.; Perry, Jake (2012). Cyberspaces and global affairs. Farnham,
Surrey: Ashgate. ISBN 9781409427544.
179. Gaycken, Sandro. (2012). Cyberwar – Das Wettrüsten hat längst begonnen.
Goldmann/Randomhouse. ISBN 978-3442157105
180. Geers, Kenneth. (2011). Strategic Cyber Security. NATO Cyber Centre. Strategic
Cyber Security, ISBN 978-9949-9040-7-5, 169 pages
181. Shane Harris (2014). @War: The Rise of the Military-Internet Complex. Eamon
Dolan/Houghton Mifflin Harcourt. ISBN 978-0544251793.
182. Hunt, Edward (2012). "US Government Computer Penetration Programs and the
Implications for Cyberwar". IEEE Annals of the History of Computing. 34 (3): 4–
21. doi:10.1109/mahc.2011.82.
183. Janczewski, Lech; Colarik, Andrew M. Cyber Warfare and Cyber Terrorism IGI
Global (2008)
184. Rid, Thomas (2011) "Cyber War Will Not Take Place," Journal of Strategic
Studies, doi:10.1080/01402390.2011.608939
185. Ventre, D. (2007). La guerre de l'information. Hermes-Lavoisier. 300 pages
186. Ventre, D. (2009). Information Warfare. Wiley – ISTE. ISBN 978-1-84821-094-3
187. Ventre, D. (Edit.) (2010). Cyberguerre et guerre de l'information. Stratégies,
règles, enjeux. Hermes-Lavoisier. ISBN 978-2-7462-3004-0
188. Ventre, D. (2011). Cyberespace et acteurs du cyberconflit. Hermes-Lavoisier. 288
pages
189. Ventre, D. (Edit.) (2011). Cyberwar and Information Warfare. Wiley. 460 pages
190. Ventre, D. (2011). Cyberattaque et Cyberdéfense. Hermes-Lavoisier. 336 pages
191. Ventre, D. (Edit.) (2012). Cyber Conflict. Competing National Perspectives. Wiley-
ISTE. 330 pages
192. Woltag, Johann-Christoph: 'Cyber Warfare' in Rüdiger Wolfrum (Ed.) Max Planck
Encyclopedia of Public International Law (Oxford University Press 2012).
5.3 Vulnerability
5.3.1 Commentary
5.3.1.1 Introduction
A vulnerability is a weakness through which an attacker can reduce a system's
information assurance; it requires a system susceptibility or flaw, attacker access to
the flaw, and attacker capability to exploit it.[1] Vulnerability management is the
cyclical process of identifying, classifying, remediating and mitigating vulnerabilities
on the basis of risk.[2]
A vulnerability for which at least one working attack exists is an exploitable
vulnerability. The window of vulnerability is the time from when the vulnerability is
introduced or manifested to when it is removed or the attacker is disabled. Definitions
of vulnerability are given in [3][4][5][6][7][8][9][10][11][12][13][14][15].
A resource (physical or logical) may have vulnerabilities that a threat agent can exploit
through a threat action, compromising the confidentiality, integrity or availability (the
CIA model) of the resources of an organization and/or other interested parties.
An attack is active when it attempts to alter system resources or operation, changing
integrity or availability. A passive attack tries to learn or make use of information from
the system without changing system resources, compromising confidentiality.[5]
OWASP traces the path from a threat agent through an attack that exploits a
vulnerability and bypasses the related security controls, to a technical impact on an IT
resource/asset connected to a business impact.[16]
An information security management system is a set of policies, defined by
management on the basis of risk-management principles, together with
countermeasures (controls and services) that implement a security strategy within the
rules and regulations applicable in a country.[17]
Vulnerabilities are categorized by asset class: hardware (susceptibility to humidity,
dust, soiling, unprotected storage), software (insufficient testing, lack of an audit trail),
network (unprotected communication lines, insecure network architecture), personnel
(inadequate recruiting process, inadequate security awareness), physical site (area
subject to flood, unreliable power source) and organizational (lack of regular audits,
lack of continuity plans, lack of security).[3]
Causes of vulnerability include complexity,[18] familiarity,[19] connectivity,[12]
password management flaws,[20][18] operating system design,[18][21] website
browsing,[22] software bugs,[18] unchecked user input[18] and recurrent mistakes,
[23][24][25] all of which can be summarized as the human user, operator, designer or
other human factor.[26]
Consequences can have high impact, and if management allows vulnerabilities to
persist this is regarded as misconduct. Privacy law obliges management to reduce
security risk. Security audits let independent parties certify that risk is minimized, and
penetration tests verify weaknesses and countermeasures.[27][17]
Key to security is defence in depth: a multi-layer defence that prevents exploits,
detects and intercepts attacks, and identifies and prosecutes threat agents, combining
an intrusion detection system with physical security to protect assets as defined by
standards.
Vulnerability (responsible, coordinated) disclosure alerts the user community to
problems in a system,[28] but this route yields researchers less income than selling
vulnerabilities to the markets serving cyberwarfare or cybercrime.[29][30]
Inventories of vulnerabilities are maintained: disclosed vulnerabilities are classified
and scored with the Common Vulnerability Scoring System (CVSS), while OWASP
catalogues potential vulnerabilities to educate system developers and reduce their
introduction into software.[31]
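Since the commentary leans on CVSS, a rough illustration may help: the Python sketch below implements the v3.1 base-score arithmetic for the common "Scope: Unchanged" case, with the metric weights taken from the published specification; the example vector is purely illustrative and the round-up helper is a simplification of the specification's definition.

import math

# CVSS v3.1 metric weights (Scope: Unchanged), from the specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact weights

def roundup(x):
    # Smallest number with one decimal place that is >= x (simplified).
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Example vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H scores 9.8.
print(base_score("N", "L", "N", "N", "H", "H", "H"))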
A vulnerability's disclosure date is the date on which it is first described publicly on a
trusted and independent channel or source, after analysis by experts who rate its risk.
Software exists to discover and remove vulnerabilities in a system and gives an auditor
an overview of them, but human judgement is still needed to weed out false positives
and to cover problems outside the tool's limited scope. Constant vigilance, careful
system maintenance, best practices in deployment (e.g. firewalls and access controls)
and auditing throughout the life cycle further reduce the risk.
Vulnerabilities in systems arise from the physical environment, personnel,
management, administrative procedures, security measures, business operations,
service delivery, hardware, software, communications equipment and facilities.
Technology alone does not protect physical assets, since maintenance staff need entry
to the facilities.
Four examples of vulnerability exploits:
• an attacker finds and uses an overflow weakness to install malware to export sensitive
data;
• an attacker convinces a user to open an email message with attached malware;
• an insider copies a hardened, encrypted program onto a thumb drive and cracks it at
home;
• a flood damages one's computer systems installed on the ground floor.
5.3.1.2 Software vulnerabilities
Common types of software flaws that lead to vulnerabilities include:
• Memory safety violations, such as:
  • Buffer overflows and over-reads
  • Dangling pointers
• Input validation errors, such as:
  • Format string attacks
  • SQL injection
  • Code injection
  • E-mail injection
  • Directory traversal
  • Cross-site scripting in web applications
  • HTTP header injection
  • HTTP response splitting
• Race conditions, such as:
  • Time-of-check-to-time-of-use bugs
  • Symlink races
• Privilege-confusion bugs, such as:
  • Cross-site request forgery in web applications
  • Clickjacking
  • FTP bounce attack
• Privilege escalation
• User interface failures, such as:
  • Warning fatigue[32] or user conditioning
  • Blaming the victim: prompting a user to make a security decision without giving the
user enough information to answer it[33]
  • Race conditions[34][35]
• Side-channel attack
• Timing attack
Sets of coding guidelines have been developed, and many static code analysers are
used to verify that code follows the guidelines.
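To make one of the listed flaws concrete, the sketch below (plain Python using the standard sqlite3 module; the table, data and hostile input are invented for illustration) contrasts a query built by string interpolation, which is open to SQL injection, with a parameterised query that treats the same input as plain data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # hostile input supplied by an attacker

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# rewrites the query and every row is returned.
bad = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()

# Safe: the input is bound as a parameter and treated as data rather
# than SQL, so nothing matches the hostile string.
good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(bad)   # [('alice', 's3cret')] - the injection succeeded
print(good)  # [] - the injection was neutralised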
5.3.2 References
1. "The Three Tenets of Cyber Security". U.S. Air Force Software Protection
Initiative. Retrieved 2009-12-15.
2. Foreman, P: Vulnerability Management, page 1. Taylor & Francis Group,
2010. ISBN 978-1-4398-0150-5
3. ISO/IEC, "Information technology -- Security techniques -- Information security
risk management" ISO/IEC FDIS 27005:2008
4. British Standard Institute, Information technology -- Security techniques --
Management of information and communications technology security -- Part 1:
Concepts and models for information and communications technology security
management BS ISO/IEC 13335-1-2004
5. Internet Engineering Task Force RFC 2828 Internet Security Glossary
6. CNSS Instruction No. 4009 dated 26 April 2010
7. "FISMApedia". fismapedia.org.
8. "Term:Vulnerability". fismapedia.org.
9. NIST SP 800-30 Risk Management Guide for Information Technology Systems
10. "Glossary". europa.eu.
11. Technical Standard Risk Taxonomy ISBN 1-931624-77-1 Document Number: C081
Published by The Open Group, January 2009.
12. "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk
Management Insight LLC, November 2006;
13. Matt Bishop and Dave Bailey. A Critical Analysis of Vulnerability Taxonomies.
Technical Report CSE-96-11, Department of Computer Science at the University of
California at Davis, September 1996
14. Schou, Corey (1996). Handbook of INFOSEC Terms, Version 2.0. CD-ROM (Idaho
State University & Information Systems Security Organization)
15. NIATEC Glossary
16. ISACA, The Risk IT Framework (registration required). Archived July 5, 2010, at
the Wayback Machine.
17. Wright, Joe; Harmening, Jim (2009). "15". In Vacca, John. Computer and
Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc.
p. 257. ISBN 978-0-12-374354-1.
18. Kakareka, Almantas (2009). "23". In Vacca, John. Computer and Information
Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 393. ISBN 978-0-12-
374354-1.
19. Krsul, Ivan (April 15, 1997). "Technical Report CSD-TR-97-026". The COAST
Laboratory Department of Computer Sciences, Purdue
University. CiteSeerX 10.1.1.26.5435.
20. Pauli, Darren (16 January 2017). "Just give up: 123456 is still the world's most
popular password". The Register. Retrieved 2017-01-17.
21. "The Six Dumbest Ideas in Computer Security". ranum.com.
22. "The Web Application Security Consortium / Web Application Security
Statistics". webappsec.org.
23. Ross Anderson. Why Cryptosystems Fail. Technical report, University Computer
Laboratory, Cambridge, January 1994.
24. Neil Schlager. When Technology Fails: Significant Technological Disasters,
Accidents, and Failures of the Twentieth Century. Gale Research Inc., 1994.
25. Hacking: The Art of Exploitation Second Edition
26. Kiountouzis, E. A.; Kokolakis, S. A. Information systems security: facing the
information society of the 21st century. London: Chapman & Hall, Ltd. ISBN 0-412-
78120-4.
27. Bavisi, Sanjay (2009). "22". In Vacca, John. Computer and Information Security
Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 375. ISBN 978-0-12-374354-
1.
28. "The new era of vulnerability disclosure - a brief chat with HD Moore". The Tech
Herald.
29. "Browse - Content - SecurityStreet". rapid7.com.
30. Betz, Chris (11 Jan 2015). "A Call for Better Coordinated Vulnerability Disclosure
- MSRC - Site Home - TechNet Blogs". blogs.technet.com. Retrieved 12 January 2015.
31. "Category:Vulnerability". owasp.org.
32. "Warning Fatigue". freedom-to-tinker.com.
33. [1] Archived October 21, 2007, at the Wayback Machine.
34. "Jesse Ruderman  » Race conditions in security dialogs". squarefree.com.
35. "lcamtuf's blog". lcamtuf.blogspot.com.
5.4 Exploits
5.4.1 Commentary
5.4.1.1 Exploit (computer security)
An exploit is a piece of software, a chunk of data, or a sequence of commands that
takes advantage of a bug or vulnerability in order to cause unintended or unanticipated
behaviour to occur on computer software, hardware, or something electronic (usually
computerized). Such behaviour frequently includes things like gaining control of a
computer system, allowing privilege escalation, or a denial-of-service (DoS or related
DDoS) attack.
5.4.1.2 Classification
There are several methods of classifying exploits. The most common is by how the
exploit contacts the vulnerable software. A remote exploit[1] works over a network and
exploits the security vulnerability without any prior access to the vulnerable system.
A local exploit[2] requires prior access to the vulnerable system and usually increases
the privileges of the person running the exploit past those granted by the system
administrator. Exploits against client applications also exist, usually consisting of
modified servers that send an exploit if accessed with a client application.
Exploits against client applications may also require some interaction with the user
and thus may be used in combination with the social engineering method. Another
classification is by the action against the vulnerable system; unauthorized data access,
arbitrary code execution, and denial of service are examples.
Many exploits are designed to provide superuser-level access to a computer system.
However, it is also possible to use several exploits, first to gain low-level access, then
to escalate privileges repeatedly until one reaches root.
Normally a single exploit can only take advantage of a specific software vulnerability.
Often, when an exploit is published, the vulnerability is fixed through a patch and the
exploit becomes obsolete until newer versions of the software become available. This
is the reason why some black hat hackers do not publish their exploits but keep them
private to themselves or other hackers.
Such exploits are referred to as zero-day exploits, and obtaining access to them is the
primary desire of unskilled attackers, often nicknamed script kiddies.[3]
5.4.1.3 Types
Exploits are commonly categorized and named[4][5] by the type of vulnerability they
exploit (see vulnerabilities for a list), whether they are local/remote and the result of
running the exploit (e.g. EoP, DoS, spoofing).
5.4.1.4 Pivoting
Pivoting refers to a method used by penetration testers that uses the compromised
system to attack other systems on the same network to avoid restrictions such
as firewall configurations, which may prohibit direct access to all machines. For
example, if an attacker compromises a web server on a corporate network, the
attacker can then use the compromised web server to attack other systems on the
network. These types of attacks are often called multi-layered attacks. Pivoting is also
known as island hopping.
Pivoting can further be distinguished into proxy pivoting and VPN pivoting. Proxy
pivoting generally describes the practice of channelling traffic through a compromised
target using a proxy payload on the machine and launching attacks from the computer.
[6] This type of pivoting is restricted to certain TCP and UDP ports that are supported
by the proxy.
VPN pivoting enables the attacker to create an encrypted layer to tunnel into the
compromised machine to route any network traffic through that target machine, for
example, to run a vulnerability scan on the internal network through the compromised
machine, effectively giving the attacker full network access as if they were behind the
firewall.
Typically, the proxy or VPN applications enabling pivoting are executed on the target
computer as the payload (software) of an exploit.
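As a rough sketch of what a proxy-pivot payload does at the socket level (plain Python; the listening port and the internal target address are invented for illustration), the relay below accepts the attacker's connection on the compromised host and forwards the bytes in both directions to a machine that the firewall hides from direct access.

import socket
import threading

LISTEN_PORT = 4444           # attacker connects here on the compromised host
TARGET = ("10.0.0.5", 3389)  # internal machine not directly reachable

def pipe(src, dst):
    # Copy bytes one way until either side closes the connection.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        inner = socket.create_connection(TARGET)
        # One thread per direction: attacker->target and target->attacker.
        threading.Thread(target=pipe, args=(client, inner), daemon=True).start()
        threading.Thread(target=pipe, args=(inner, client), daemon=True).start()

if __name__ == "__main__":
    serve()

This is the behaviour that proxy pivoting restricts to the TCP and UDP ports the proxy supports, and that VPN pivoting generalises to whole network layers.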
5.4.2 References
1. "Remote Exploits - Exploit Database". www.exploit-db.com.
2. "Privilege Escalation and Local Exploits - Exploit Database". www.exploit-db.com.
3. Whitman, Michael (2012). "Chapter 2: The Need for Security". Principles of
Information Security, Fourth Edition. Boston, Mass: Course Technology. p. 53.
4. "Exploits Database by Offensive Security". www.exploit-db.com.
5. "Exploit Database | Rapid7". www.rapid7.com.
6. "Metasploit Basics – Part 3: Pivoting and Interfaces". Digital Bond.
• Kahsari Alhadi, Milad. Metasploit Penetration Tester's Guide, ISBN 978-600-7026-62-
5.5 Malware topics
5.5.1 Commentary
Infectious malware consists of:
• Computer virus
• Comparison of computer viruses
• Computer worm
• List of computer worms
• Timeline of computer viruses and worms
Concealment consists of:
• Trojan horse
• Rootkit
• Backdoor
• Zombie computer
• Man-in-the-middle
• Man-in-the-browser
• Man-in-the-mobile
• Clickjacking
Malware for profit consists of:
• Privacy-invasive software
• Adware
• Spyware
• Botnet
• Keystroke logging
• Form grabbing
• Web threats
• Fraudulent dialer
• Malbot
• Scareware
• Rogue security software
• Ransomware
• Crimeware
Malware by operating system includes:
• Linux malware
• Palm OS viruses
• Mobile malware
• Macro virus
• Classic Mac OS viruses
• MacOS malware
• iOS malware
• Android malware
Protection is classified by:
• Anti-keylogger
• Antivirus software
• Browser security
• Internet security
• Mobile security
• Network security
• Defensive computing
• Firewall
• Intrusion detection system
• Data loss prevention software
Countermeasures consist of:
• Computer and network surveillance
• Operation: Bot Roast
• Honeypot
Categories for malware include:
• Trojan horses
• Social engineering (computer security)
• Spyware
• Web security exploits
• Cyberwarfare
5.6 Payload (computing)
5.6.1 Commentary
The payload is the intended message of transmitted data, excluding control information.
[1][2][4][5] In computer security, the payload is the part of malware that performs the
malicious action,[3] as distinct from the overhead code used for replication and hiding.
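A small illustration of the networking sense of the term (plain Python; the 6-byte version-plus-length header format is invented for illustration): the frame carries control information in a header, and stripping that header recovers the payload.

import struct

def build_frame(payload):
    # Control information: a 2-byte version field and a 4-byte payload length.
    header = struct.pack("!HI", 1, len(payload))
    return header + payload

def extract_payload(frame):
    version, length = struct.unpack("!HI", frame[:6])
    return frame[6:6 + length]  # the payload excludes the 6-byte header

frame = build_frame(b"hello, world")
assert extract_payload(frame) == b"hello, world"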
5.6.2 References
1. "Payload definition". Pcmag.com. 1994-12-01. Retrieved 2012-02-07.
2. "Payload definition". Techterms.com. Retrieved 2012-02-07.
3. "Payload definition". Securityfocus.com. Retrieved 2012-02-07.
4. "RFC 1122: Requirements for Internet Hosts — Communication Layers". IETF.
October 1989. p. 18. RFC 1122. Retrieved 2010-06-07.
5. "Data Link Layer (Layer 2)". The TCP/IP Guide. 2005-09-20. Retrieved 2010-01-31.
5.7 Infectious Malware
5.7.1 Computer virus
5.7.1.1 Commentary
5.7.1.1.1 Introduction
A virus is a type of malicious software that, when executed, replicates itself or infects
other programs, data files or the boot sector by modifying them.[1][2][3][4][5] It often
has harmful side effects. Viruses exploit security vulnerabilities to gain access to their
hosts' computers and computing resources, with Microsoft Windows the most
frequently attacked platform.[6][7][8][9][10][11][12][13] Motives for writing viruses
include seeking profit, the desire to send a political message, personal amusement,
demonstrating that a vulnerability exists in software, sabotage and denial of service, or
exploring cybersecurity issues, artificial life and evolutionary algorithms.[14]
Viruses cause much financial damage[15] through system failure, wasted computing
resources, corrupted data and increased maintenance costs; as a result, antivirus tools
have been developed,[16] together with research into detecting new viruses.[17]
The theoretical and practical study of viruses has been carried out since 1949.[18][19]
[20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41]
A virus has a search routine that locates targets for infection and a copy routine that
copies the virus into each program the search routine finds.[42] Its infection
mechanism is how it spreads.[43] Infection is triggered whenever the file carrying the
virus is used and the start condition is satisfied.[44][45][46] The payload is the body or
data that performs the actual malicious purpose of the virus,[43] which may instead be
a non-destructive hoax.[47]
The virus has a life cycle of four phases:
• Dormant phase, when the virus has reached the target user's software but takes no
action until it is triggered.[43]
• Propagation phase, when the virus replicates itself.[43]
• Triggering phase, when it is activated by its starting condition.[43]
• Execution phase, when the virus's payload is executed.[43]
Viruses infect programs, data files[54][55][59] or the boot sector of a hard drive.[48][49]
[50] They can be resident, running as part of the operating system, or non-resident,
with the virus held on backing storage.[51][52][53][56][49][57][58]
Stealth strategies include rewriting file properties so that no change is apparent,[60]
overwriting blank space in files,[61] killing antivirus tasks[62] and intercepting system
calls.[63]
Antivirus protection works best by booting from a clean medium and checking system
files against virus signatures, using heuristics[64][65] or a database of file "hashes" to
identify altered files,[66] and scanning for virus strings (signatures). Viruses evade
such scanning by encrypting themselves a number of times,[67][68][69][70][71][72]
using a metamorphic engine,[73][74] hidden extensions[75][76] and security holes.[77]
[78][6][7][8][79][80][81]
Antivirus software consists of a number of programs[82][83][84][85][86][87][88][89][90]
[91][92] that use virus signatures[93] and executional signatures.
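To make signature and hash checking concrete, the sketch below (plain Python; the signature bytes, file name and known-good digest are invented examples, the digest shown being the SHA-256 of an empty file) flags a file that contains a known bad byte string and reports when a file's hash no longer matches its recorded good value.

import hashlib
from pathlib import Path

# Invented example data: byte patterns standing in for known virus
# signatures, and the expected SHA-256 of a protected file.
SIGNATURES = {b"EVIL_MARKER_1", b"\xde\xad\xbe\xef"}
KNOWN_GOOD = {"demo.bin":
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def scan(path):
    data = path.read_bytes()
    findings = []
    # Signature check: any known bad byte string in the file is a match.
    if any(sig in data for sig in SIGNATURES):
        findings.append("known virus signature found")
    # Integrity check: a changed digest means the file was altered.
    expected = KNOWN_GOOD.get(path.name)
    if expected and hashlib.sha256(data).hexdigest() != expected:
        findings.append("file hash differs from known-good value")
    return findings

p = Path("demo.bin")
p.write_bytes(b"")                   # matches the recorded empty-file hash
print(scan(p))                       # []
p.write_bytes(b"xxEVIL_MARKER_1xx")  # simulate an infected, altered file
print(scan(p))                       # both checks now fire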
Recovery strategies and methods consist of frequent backups of data and the operating
system onto different media, generally kept unconnected to the system and read-only.
[94][95][96]
Viruses are removed by uploading suspect files to a website,[97][98] downloading and
running an antivirus program[99][100] or restoring the system from an earlier restore
point.[101][102][103][94][104]
Viruses spread via files swapped on removable devices,[105][106][107] via networked
devices[108] and via email and document macros.[109][110][111][112][113][114]
5.7.1.1.2 Operations and functions
5.7.1.1.2.1 Parts
A viable computer virus must contain a search routine, which locates new files or new
disks which are worthwhile targets for infection. Secondly, every computer virus must
contain a routine to copy itself into the program which the search routine locates.
[42] The three main virus parts are:
Infection mechanism
The infection mechanism (also called the 'infection vector') is how the virus spreads or
propagates. A virus typically has a search routine, which locates new files or new disks
for infection.[43]
5.7.1.1.2.2 Trigger
The trigger, also known as a logic bomb, is the part of the virus that determines the
event or condition for the malicious "payload" to be activated or delivered,[44] such as
a particular date, a particular time, the presence of another program, the capacity of
the disk exceeding some limit,[45] or a double-click that opens a particular file.[46]
It can be activated any time an executable file containing the virus is run.
5.7.1.1.2.3 Payload
The payload itself is the harmful activity,[43] or is sometimes non-destructive but
merely distributive, in which case it is called a virus hoax.[47]
5.7.1.1.2.4 Phases
The virus phases are the stages of a computer virus's life cycle, described by analogy
to biology. This life cycle can be divided into four phases:
5.7.1.1.2.4.1 Dormant phase
The virus program is idle during this stage. The virus program has managed to access
the target user's computer or software, but during this stage, the virus does not take
any action. The virus will eventually be activated by the "trigger" which states which
event will execute the virus, such as a date, the presence of another program or file,
the capacity of the disk exceeding some limit or the user taking a certain action (e.g.,
double-clicking on a certain icon, opening an e-mail, etc.). Not all viruses have this
stage.[43]
5.7.1.1.2.4.2 Propagation phase
The virus starts propagating, that is multiplying and replicating itself. The virus places
a copy of itself into other programs or into certain system areas on the disk. The copy
may not be identical to the propagating version; viruses often "morph" or change to
evade detection by IT professionals and anti-virus software. Each infected program will
now contain a clone of the virus, which will itself enter a propagation phase.[43]
5.7.1.1.2.4.3 Triggering phase
A dormant virus moves into this phase when it is activated, and will now perform the
function for which it was intended. The triggering phase can be caused by a variety of
system events, including a count of the number of times that this copy of the virus has
made copies of itself.[43]
5.7.1.1.2.4.4 Execution phase
This is the actual work of the virus, where the "payload" will be released. It can be
destructive such as deleting files on disk, crashing the system, or corrupting files or
relatively harmless such as popping up humorous or political messages on screen.[43]
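To make the four phases concrete, the following minimal Python sketch (an illustrative model of our own; the names Phase and next_phase are not taken from any real malware or library) treats the life cycle as a small state machine driven by a trigger predicate:

from enum import Enum, auto

class Phase(Enum):
    DORMANT = auto()      # present on the host but idle
    PROPAGATION = auto()  # copying itself to other programs/areas
    TRIGGERING = auto()   # start condition has been satisfied
    EXECUTION = auto()    # payload runs

def next_phase(phase, triggered):
    """Advance the abstract life-cycle model one step; `triggered`
    stands for the start condition (a date, a file appearing, ...)."""
    if phase in (Phase.DORMANT, Phase.PROPAGATION):
        return Phase.TRIGGERING if triggered else Phase.PROPAGATION
    return Phase.EXECUTION

# Example: a dormant copy propagates until its trigger fires.
assert next_phase(Phase.DORMANT, triggered=False) is Phase.PROPAGATION
assert next_phase(Phase.PROPAGATION, triggered=True) is Phase.TRIGGERING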
5.7.1.1.2.5 Infection targets and replication techniques
Computer viruses infect a variety of different subsystems on their host computers and
software.[48] One manner of classifying viruses is to analyse whether they reside
in binary executables (such as .EXE or .COM files), data files (such as Microsoft
Word documents or PDF files), or in the boot sector of the host's hard drive (or some
combination of all of these).[49][50]
5.7.1.1.2.6 Resident vs. non-resident viruses
A memory-resident virus (or simply "resident virus") installs itself as part of
the operating system when executed, after which it remains in RAM from the time the
computer is booted up to when it is shut down. Resident viruses overwrite interrupt
handling code or other functions, and when the operating system attempts to access
the target file or disk sector, the virus code intercepts the request and redirects
the control flow to the replication module, infecting the target. In contrast, a non-
memory-resident virus (or "non-resident virus"), when executed, scans the disk for
targets, infects them, and then exits (i.e. it does not remain in memory after it is done
executing).[51][52][53]
5.7.1.1.2.7 Macro viruses
Many common applications, such as Microsoft Outlook and Microsoft Word,
allow macro programs to be embedded in documents or emails, so that the programs
may be run automatically when the document is opened. A macro virus (or "document
virus") is a virus that is written in a macro language, and embedded into these
documents so that when users open the file, the virus code is executed, and can infect
the user's computer. This is one of the reasons that it is dangerous to open unexpected
or suspicious attachments in e-mails.[54][55] While not opening attachments in e-mails
from unknown persons or organizations can help to reduce the likelihood of contracting
a virus, in some cases, the virus is designed so that the e-mail appears to be from a
reputable organization (e.g., a major bank or credit card company).
5.7.1.1.2.8 Boot sector viruses
Boot sector viruses specifically target the boot sector and/or the Master Boot Record
[56] (MBR) of the host's hard drive or removable storage media (flash drives, floppy
disks, etc.).[49][57][58]
5.7.1.1.2.9 Email virus
An email virus is a virus that specifically, rather than accidentally, uses the email
system to spread. While virus-infected files may be accidentally sent as email attachments,
email viruses are aware of email system functions. They generally target a specific
type of email system (Microsoft’s Outlook is the most commonly used), harvest email
addresses from various sources, and may append copies of themselves to all email
sent, or may generate email messages containing copies of themselves as
attachments.[59]
5.7.1.1.3 Stealth techniques
In order to avoid detection by users, some viruses employ different kinds of deception.
Some old viruses, especially on the MS-DOS platform, make sure that the "last
modified" date of a host file stays the same when the file is infected by the virus. This
approach does not fool antivirus software, however, especially those which maintain
and date cyclic redundancy checks on file changes.[60] Some viruses can infect files
without increasing their sizes or damaging the files. They accomplish this by
overwriting unused areas of executable files. These are called cavity viruses. For
example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because
those files have many empty gaps, the virus, which was 1 KB in length, did not add to
the size of the file.[61] Some viruses try to avoid detection by killing the tasks
associated with antivirus software before it can detect them (for example, Conficker).
In the 2010s, as computers and operating systems grow larger and more complex, old
hiding techniques need to be updated or replaced. Defending a computer against
viruses may demand that a file system migrate towards detailed and explicit
permission for every kind of file access.[62]
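As a rough illustration of the change detection mentioned above, the Python sketch below (a minimal example of our own; the baseline file name and format are assumptions) records a CRC32, size and modification time for each monitored file and flags files whose contents changed even though the "last modified" date was preserved:

import json, os, zlib

def crc32_of(path):
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def make_baseline(paths, baseline_file="baseline.json"):
    """Record CRC32, size and mtime for each monitored file."""
    baseline = {p: {"crc": crc32_of(p),
                    "size": os.path.getsize(p),
                    "mtime": os.path.getmtime(p)} for p in paths}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f)

def check_baseline(baseline_file="baseline.json"):
    """Flag files whose contents changed; a stealth virus may preserve
    the mtime (and even the size, for cavity infectors)."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, rec in baseline.items():
        if crc32_of(path) != rec["crc"]:
            same_date = os.path.getmtime(path) == rec["mtime"]
            print(path + ": content changed"
                  + (" (mtime preserved - suspicious)" if same_date else ""))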
5.7.1.1.4 Read request intercepts
While some kinds of antivirus software employ various techniques to counter stealth
mechanisms, once the infection occurs any recourse to "clean" the system is
unreliable. In Microsoft Windows operating systems, the NTFS file system is
proprietary. This leaves antivirus software little alternative but to send a "read" request
to Windows OS files that handle such requests. Some viruses trick antivirus software
by intercepting its requests to the Operating system (OS). A virus can hide by
intercepting the request to read the infected file, handling the request itself, and
returning an uninfected version of the file to the antivirus software. The interception
can occur by code injection of the actual operating system files that would handle the
read request. Thus, an antivirus software attempting to detect the virus will either not
be given permission to read the infected file, or, the "read" request will be served with
the uninfected version of the same file.[63]
The only reliable method to avoid "stealth" viruses is to "reboot" from a medium that is
known to be "clear". Security software can then be used to check the dormant
operating system files. Most security software relies on virus signatures, or they
employ heuristics.[64]
[65] Security software may also use a database of file "hashes" for Windows OS files,
so the security software can identify altered files, and request Windows installation
media to replace them with authentic versions. In older versions of Windows,
file cryptographic hash functions of Windows OS files stored in Windows—to allow file
integrity/authenticity to be checked—could be overwritten so that the System File
Checker would report that altered system files are authentic, so using file hashes to
scan for altered files would not always guarantee finding an infection.[66]
5.7.1.1.5 Self-modification
Most modern antivirus programs try to find virus-patterns inside ordinary programs by
scanning them for so-called virus signatures.[67] Unfortunately, the term is misleading,
in that viruses do not possess unique signatures in the way that human beings do. Such
a virus "signature" is merely a sequence of bytes that an antivirus program looks for
because it is known to be part of the virus. A better term would be "search strings".
Different antivirus programs will employ different search strings, and indeed different
search methods, when identifying viruses. If a virus scanner finds such a pattern in a
file, it will perform other checks to make sure that it has found the virus, and not
merely a coincidental sequence in an innocent file, before it notifies the user that the
file is infected. The user can then delete, or (in some cases) "clean" or "heal" the
infected file. Some viruses employ techniques that make detection by means of
signatures difficult but probably not impossible. These viruses modify their code on
each infection. That is, each infected file contains a different variant of the virus.
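A minimal sketch of such signature ("search string") scanning in Python; the database below contains only a truncated copy of the harmless EICAR test string, as a placeholder for real signatures:

# Toy signature scanner. A real scanner would also examine memory and
# boot sectors and confirm any hit with further checks before alerting.
SIGNATURES = {
    "EICAR-Test-File": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",
}

def scan_file(path):
    """Return the names of signatures whose byte pattern occurs in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

Because a signature is just a byte string, encrypting or mutating the virus body on each infection (as described next) defeats this naive search.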
5.7.1.1.6 Encrypted viruses
One method of evading signature detection is to use simple encryption to encipher
(encode) the body of the virus, leaving only the encryption module and a
static cryptographic key in cleartext which does not change from one infection to the
next.[68] In this case, the virus consists of a small decrypting module and an encrypted
copy of the virus code. If the virus is encrypted with a different key for each infected
file, the only part of the virus that remains constant is the decrypting module, which
would (for example) be appended to the end. In this case, a virus scanner cannot
directly detect the virus using signatures, but it can still detect the decrypting module,
which still makes indirect detection of the virus possible. Since these would be
symmetric keys, stored on the infected host, it is entirely possible to decrypt the final
virus, but this is probably not required, since self-modifying code is such a rarity that it
may be reason for virus scanners to at least "flag" the file as suspicious.[69] An old but
compact approach is the use of arithmetic operations like addition or subtraction and
of logical operations such as XORing,[70] where each byte in a virus is XORed with a
constant, so that the exclusive-or operation need only be repeated for decryption. It is
suspicious for a code to modify itself, so the code to do the encryption/decryption may
be part of the signature in many virus definitions.[69] A simpler older approach did not
use a key, where the encryption consisted only of operations with no parameters, like
incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT.
[70] Some viruses will employ a means of encryption inside an executable in which the
virus is encrypted under certain events, such as the virus scanner being disabled for
updates or the computer being rebooted. This is called cryptovirology. At said times,
the executable will decrypt the virus and execute its hidden runtimes, infecting the
computer and sometimes disabling the antivirus software.
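Since XOR with a constant is its own inverse, an analyst or scanner can brute-force a one-byte key cheaply. A minimal sketch, assuming a known plaintext signature to search for:

def xor_bytes(data, key):
    """XOR every byte with a one-byte constant; applying the same key
    twice restores the original data."""
    return bytes(b ^ key for b in data)

def find_xor_key(ciphertext, known_signature):
    """Try all 256 one-byte keys and return the one under which the
    known plaintext signature appears, or None."""
    for key in range(256):
        if known_signature in xor_bytes(ciphertext, key):
            return key
    return None

# Example: recover the key from a toy "encrypted" buffer.
assert find_xor_key(xor_bytes(b"HELLO VIRUS BODY", 0x5A), b"VIRUS") == 0x5A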
5.7.1.1.6.1 Polymorphic code
Polymorphic code was the first technique that posed a serious threat to virus scanners.
Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted
copy of itself, which is decoded by a decryption module. In the case of polymorphic
viruses, however, this decryption module is also modified on each infection. A well-
written polymorphic virus therefore has no parts which remain identical between
infections, making it very difficult to detect directly using "signatures".[71]
[72] Antivirus software can detect it by decrypting the viruses using an emulator, or
by statistical pattern analysis of the encrypted virus body. To enable polymorphic code,
the virus has to have a polymorphic engine (also called "mutating engine" or
"mutation engine") somewhere in its encrypted body. See polymorphic code for
technical detail on how such engines operate.[73]
Some viruses employ polymorphic code in a way that constrains the mutation rate of
the virus significantly. For example, a virus can be programmed to mutate only slightly
over time, or it can be programmed to refrain from mutating when it infects a file on a
computer that already contains copies of the virus. The advantage of using such slow
polymorphic code is that it makes it more difficult for antivirus professionals and
investigators to obtain representative samples of the virus, because "bait" files that are
infected in one run will typically contain identical or similar samples of the virus. This
will make it more likely that the detection by the virus scanner will be unreliable, and
that some instances of the virus may be able to avoid detection.
5.7.1.1.6.2 Metamorphic code
To avoid being detected by emulation, some viruses rewrite themselves completely
each time they are to infect new executables. Viruses that utilize this technique are
said to be in metamorphic code. To enable metamorphism, a "metamorphic engine" is
needed. A metamorphic virus is usually very large and complex. For
example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of
which is part of the metamorphic engine.[74][75]
5.7.1.1.7 Vulnerabilities and infection vectors
5.7.1.1.7.1 Software bugs
As software is often designed with security features to prevent unauthorized use of
system resources, many viruses must exploit and manipulate security bugs, which
are security defects in a system or application software, to spread themselves and
infect other computers. Software development strategies that produce large numbers
of "bugs" will generally also produce potential exploitable "holes" or "entrances" for
the virus.
5.7.1.1.7.2 Social engineering and poor security practices
In order to replicate itself, a virus must be permitted to execute code and write to
memory. For this reason, many viruses attach themselves to executable files that may
be part of legitimate programs (see code injection). If a user attempts to launch an
infected program, the virus' code may be executed simultaneously.[76] In operating
systems that use file extensions to determine program associations (such as Microsoft
Windows), the extensions may be hidden from the user by default. This makes it
possible to create a file that is of a different type than it appears to the user. For
example, an executable may be created and named "picture.png.exe", in which the
user sees only "picture.png" and therefore assumes that this file is a digital image and
most likely is safe, yet when opened, it runs the executable on the client machine.[77]
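A small defensive check for this hidden-extension trick is easy to sketch in Python (the extension lists here are illustrative assumptions, not a complete policy):

import os

BENIGN_EXTS = {".png", ".jpg", ".pdf", ".docx", ".txt"}
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".js"}

def looks_disguised(filename):
    """Flag names like 'picture.png.exe' where a benign-looking inner
    extension hides an executable outer one."""
    root, outer = os.path.splitext(filename.lower())
    _, inner = os.path.splitext(root)
    return outer in EXECUTABLE_EXTS and inner in BENIGN_EXTS

assert looks_disguised("picture.png.exe")
assert not looks_disguised("report.pdf")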
5.7.1.1.7.3 Vulnerability of different operating systems
The vast majority of viruses target systems running Microsoft Windows. This is due to
Microsoft's large market share of desktop computer users.[78] The diversity of
software systems on a network limits the destructive potential of viruses and malware.
[79] Open-source operating systems such as Linux allow users to choose from a variety
of desktop environments, packaging tools, etc., which means that malicious code
targeting any of these systems will only affect a subset of all users. Many Windows
users are running the same set of applications, enabling viruses to rapidly spread
among Microsoft Windows systems by targeting the same exploits on large numbers of
hosts.[5][6][7][80]
While Linux and Unix in general have always natively prevented normal users from
making changes to the operating system environment without permission, Windows
users are generally not prevented from making these changes, meaning that viruses
can easily gain control of the entire system on Windows hosts. This difference has
continued partly due to the widespread use of administrator accounts in contemporary
versions like Windows XP. In 1997, researchers created and released a virus for Linux—
known as "Bliss".[81] Bliss, however, requires that the user run it explicitly, and it can
only infect programs that the user has the access to modify. Unlike Windows users,
most Unix users do not log in as an administrator, or root user, except to install or
configure software; as a result, even if a user ran the virus, it could not harm their
operating system. The Bliss virus never became widespread, and remains chiefly a
research curiosity. Its creator later posted the source code to Usenet, allowing
researchers to see how it worked.[82]
5.7.1.1.8 Countermeasures
5.7.1.1.8.1 Antivirus software
Many users install antivirus software that can detect and eliminate known viruses
when the computer attempts to download or run the executable file (which may be
distributed as an email attachment, or on USB flash drives, for example). Some
antivirus software blocks known malicious websites that attempt to install malware.
Antivirus software does not change the underlying capability of hosts to transmit
viruses. Users must update their software regularly to patch security
vulnerabilities ("holes"). Antivirus software also needs to be regularly updated in order
to recognize the latest threats. This is because malicious hackers and other individuals
are always creating new viruses. The German AV-TEST Institute publishes evaluations
of antivirus software for Windows[83] and Android.[84]
Examples of Microsoft Windows antivirus and anti-malware software include the
optional Microsoft Security Essentials
[85] (for Windows XP, Vista and Windows 7) for real-time protection, the Windows
Malicious Software Removal Tool
[86] (now included with Windows (Security) Updates on "Patch Tuesday", the second
Tuesday of each month), and Windows Defender (an optional download in the case of
Windows XP).[87] Additionally, several capable antivirus software programs are
available for free download from the Internet (usually restricted to non-commercial
use).[88] Some such free programs are almost as good as commercial competitors.
[89] Common security vulnerabilities are assigned CVE IDs and listed in the US National
Vulnerability Database. Secunia PSI[90] is an example of software, free for personal
use, that will check a PC for vulnerable out-of-date software, and attempt to update
it. Ransomware and phishing scam alerts appear as press releases on the Internet
Crime Complaint Center noticeboard. Ransomware is a virus that posts a message on
the user's screen saying that the screen or system will remain locked or unusable until
a ransom payment is made. Phishing is a deception in which the malicious individual
pretends to be a friend, computer security expert, or other benevolent individual, with
the goal of convincing the targeted individual to reveal passwords or other personal
information.
Other commonly used preventative measures include timely operating system updates,
software updates, careful Internet browsing (avoiding shady websites), and installation
of only trusted software.[91] Certain browsers flag sites that have been reported to
Google and that have been confirmed as hosting malware by Google.[92][93]
There are two common methods that an antivirus software application uses to detect
viruses. The first, and by far the most
common method of virus detection is using a list of virus signature definitions. This
works by examining the content of the computer's memory (its Random Access
Memory (RAM), and boot sectors) and the files stored on fixed or removable drives
(hard drives, floppy drives, or USB flash drives), and comparing those files against
a database of known virus "signatures". Virus signatures are just strings of code that
are used to identify individual viruses; for each virus, the antivirus designer tries to
choose a unique signature string that will not be found in a legitimate program.
Different antivirus programs use different "signatures" to identify viruses. The
disadvantage of this detection method is that users are only protected from viruses
that are detected by signatures in their most recent virus definition update, and not
protected from new viruses (see "zero-day attack").[94]
A second method to find viruses is to use a heuristic algorithm based on common virus
behaviours. This method has the ability to detect new viruses for which antivirus
security firms have yet to define a "signature", but it also gives rise to more false
positives than using signatures. False positives can be disruptive, especially in a
commercial environment, because they may lead to a company instructing staff not to use
the company computer system until IT services have checked it for viruses. This can
slow down the productivity of regular workers.
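The heuristic approach can be caricatured as a weighted checklist; the indicators, weights and threshold below are invented purely for illustration, and tuning the threshold trades missed viruses against false positives:

# Toy behaviour-based heuristic. Raising THRESHOLD reduces false
# positives but misses more new viruses; lowering it does the reverse.
INDICATORS = {
    "writes_to_other_executables": 5,
    "hooks_interrupt_handlers": 4,
    "self_modifying_code": 4,
    "disables_security_tools": 5,
    "packed_or_encrypted_body": 2,
}
THRESHOLD = 8

def is_suspicious(observed_behaviours):
    score = sum(w for name, w in INDICATORS.items()
                if name in observed_behaviours)
    return score >= THRESHOLD

# Example: self-modifying code alone is not enough to flag a file.
assert not is_suspicious({"self_modifying_code"})
assert is_suspicious({"self_modifying_code", "writes_to_other_executables"})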
5.7.1.1.8.2 Recovery strategies and methods
One may reduce the damage done by viruses by making regular backups of data (and
the operating systems) on different media, that are either kept unconnected to the
system (most of the time, as in a hard drive), read-only or not accessible for other
reasons, such as using different file systems. This way, if data is lost through a virus,
one can start again using the backup (which will hopefully be recent).[95] If a backup
session on optical media like CD and DVD is closed, it becomes read-only and can no
longer be affected by a virus (so long as a virus or infected file was not copied onto
the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the
computer if the installed operating systems become unusable. Backups on removable
media must be carefully inspected before restoration. The Gammima virus, for example,
propagates via removable flash drives.[96][97]
5.7.1.1.8.3 Virus removal
Many websites run by antivirus software companies provide free online virus scanning,
with limited "cleaning" facilities (after all, the purpose of the websites is to sell
antivirus products and services). Some websites—
like Google subsidiary VirusTotal.com—allow users to upload one or more suspicious
files to be scanned and checked by one or more antivirus programs in one operation.
[98][99] Additionally, several capable antivirus software programs are available for free
download from the Internet (usually restricted to non-commercial use).[100] Microsoft
offers an optional free antivirus utility called Microsoft Security Essentials, a Windows
Malicious Software Removal Tool that is updated as part of the regular Windows update
regime, and an older optional anti-malware (malware removal) tool Windows
Defender that has been upgraded to an antivirus product in Windows 8.
Some viruses disable System Restore and other important Windows tools such as Task
Manager and CMD. An example of a virus that does this is CiaDoor. Many such viruses
can be removed by rebooting the computer, entering Windows "safe mode" with
networking, and then using system tools or Microsoft Safety Scanner.[101] System
Restore on Windows Me, Windows XP, Windows Vista and Windows 7 can restore
the registry and critical system files to a previous checkpoint. Often a virus will cause
a system to "hang" or "freeze", and a subsequent hard reboot will render a system
restore point from the same day corrupted. Restore points from previous days should
work, provided the virus is not designed to corrupt the restore files and does not exist
in previous restore points.[102]
[103]
5.7.1.1.8.4 Operating system reinstallation
Microsoft's System File Checker (improved in Windows 7 and later) can be used to
check for, and repair, corrupted system files.[104] Restoring an earlier "clean" (virus-
free) copy of the entire partition from a cloned disk, a disk image, or a backup copy is
one solution—restoring an earlier backup disk "image" is relatively simple to do, usually
removes any malware, and may be faster than "disinfecting" the computer—or
reinstalling and reconfiguring the operating system and programs from scratch, as
described below, then restoring user preferences.[95] Reinstalling the operating
system is another approach to virus removal. It may be possible to recover copies of
essential user data by booting from a live CD, or connecting the hard drive to another
computer and booting from the second computer's operating system, taking great care
not to infect that computer by executing any infected programs on the original drive.
The original hard drive can then be reformatted and the OS and all programs installed
from original media. Once the system has been restored, precautions must be taken to
avoid reinfection from any restored executable files.[105]
5.7.1.1.8.5 Viruses and the Internet
Before computer networks became widespread, most viruses spread on removable
media, particularly floppy disks. In the early days of the personal computer, many users
regularly exchanged information and programs on floppies. Some viruses spread by
infecting programs stored on these disks, while others installed themselves into the
disk boot sector, ensuring that they would be run when the user booted the computer
from the disk, usually inadvertently. Personal computers of the era would attempt to
boot first from a floppy if one had been left in the drive. Until floppy disks fell out of
use, this was the most successful infection strategy and boot sector viruses were the
most common in the "wild" for many years. Traditional computer viruses emerged in
the 1980s, driven by the spread of personal computers and the resultant increase
in bulletin board system (BBS), modem use, and software sharing. Bulletin board–driven
software sharing contributed directly to the spread of Trojan horse programs, and
viruses were written to infect popularly traded
software. Shareware and bootleg software were equally common vectors for viruses on
BBSs.[106][107][108] Viruses can increase their chances of spreading to other
computers by infecting files on a network file system or a file system that is accessed
by other computers.[109]
Macro viruses have become common since the mid-1990s. Most of these viruses are
written in the scripting languages for Microsoft programs such as Microsoft
Word and Microsoft Excel and spread throughout Microsoft Office by infecting
documents and spreadsheets. Since Word and Excel were also available for Mac OS,
most could also spread to Macintosh computers. Although most of these viruses did not
have the ability to send infected email messages, those that could took advantage of
the Microsoft Outlook Component Object Model (COM) interface.[110]
[111] Some old versions of Microsoft Word allow macros to replicate themselves with
additional blank lines. If two macro viruses simultaneously infect a document, the
combination of the two, if also self-replicating, can appear as a "mating" of the two and
would likely be detected as a virus unique from the "parents".[112]
A virus may also send a web address link as an instant message to all the contacts
(e.g., friends and colleagues' e-mail addresses) stored on an infected machine. If the
recipient, thinking the link is from a friend (a trusted source) follows the link to the
website, the virus hosted at the site may be able to infect this new computer and
continue propagating.[113] Viruses that spread using cross-site scripting were first
reported in 2002,[114] and were academically demonstrated in 2005.[115] There have
been multiple instances of the cross-site scripting viruses in the "wild", exploiting
websites such as MySpace (with the Samy worm) and Yahoo!.
5.7.1.2 References
1. Stallings, William (2012). Computer security : principles and practice. Boston:
Pearson. p. 182. ISBN 978-0-13-277506-9.
2. Aycock, John (2006). Computer Viruses and Malware. Springer. p. 14. ISBN 978-0-
387-30236-2.
3. http://vx.netlux.org/lib/aas10.html
4. "Alan Solomon 'All About Viruses' (VX heavens)". Web.archive.org. 2011-06-14.
Archived from the original on January 17, 2012. Retrieved 2014-07-17.
5. Ludwig, Mark (1998). The giant black book of computer viruses. Show Low, Ariz:
American Eagle. p. 13. ISBN 978-0-929408-23-1.
6. Mookhey, K.K. et al. (2005). Linux: Security, Audit and Control Features. ISACA.
p. 128. ISBN 9781893209787.
7. Toxen, Bob (2003). Real World Linux Security: Intrusion Prevention, Detection, and
Recovery. Prentice Hall Professional. p. 365. ISBN 9780130464569.
8. Noyes, Katherine (Aug 3, 2010). "Why Linux Is More Secure Than Windows". PCWorld.
9. Skoudis, Edward (2004). "Infection mechanisms and targets". Malware: Fighting
Malicious Code. Prentice Hall Professional. pp. 31–48. ISBN 9780131014053.
10.Aycock, John (2006). Computer Viruses and Malware. Springer. p. 27. ISBN 978-0-
387-30236-2.
11.Ludwig, Mark A. (1996). The Little Black Book of Computer Viruses: Volume 1, The
Basic Technologies. pp. 16–17. ISBN 0-929408-02-0.
12.Harley, David et al. (2001). Viruses Revealed. McGraw-Hill. p. 6. ISBN 0-07-222818-0.
13.Filiol, Eric (2005). Computer viruses:from theory to applications. Springer.
p. 8. ISBN 978-2-287-23939-7.
14.Bell, David J. et al, eds. (2004). "Virus". Cyberculture: The Key Concepts. Routledge.
p. 154. ISBN 9780203647059.
15."Viruses that can cost you".
16.Granneman, Scott. "Linux vs. Windows Viruses". The Register. Retrieved September
4, 2015.
17.Kaspersky, Eugene (November 21, 2005). "The contemporary antivirus industry and
its problems". SecureLight.
18.The term "computer virus" was not used at that time.
19.von Neumann, John (1966). "Theory of Self-Reproducing Automata" (PDF). Essays on
Cellular Automata. University of Illinois Press: 66–87. Retrieved June 10, 2010.
20.Éric Filiol, Computer viruses: from theory to applications, Volume 1, Birkhäuser,
2005, pp. 19–38 ISBN 2-287-23939-1.
21.Risak, Veith (1972), "Selbstreproduzierende Automaten mit minimaler
Informationsübertragung", Zeitschrift für Maschinenbau und Elektrotechnik
22.Kraus, Jürgen (February 1980), Selbstreproduktion bei Programmen (PDF)
23."Virus list". Retrieved 2008-02-07.
24.Thomas Chen, Jean-Marc Robert (2004). "The Evolution of Viruses and Worms".
Retrieved 2009-02-16.
25.Parikka, Jussi (2007). Digital Contagions: A Media Archaeology of Computer Viruses.
New York: Peter Lang. p. 50. ISBN 978-0-8204-8837-0.
26.Russell, Deborah & Gangemi, G.T. (1991). Computer Security Basics. O'Reilly.
p. 86. ISBN 0-937175-71-4.
27.http://www.imdb.com/title/tt0070909/synopsis: IMDB synopsis of Westworld.
Retrieved November 28, 2015.
28.Michael Crichton (November 21, 1973). Westworld (movie). 201 S. Kinney Road,
Tucson, Arizona, USA: Metro-Goldwyn-Mayer. Event occurs at 32 minutes. And
there's a clear pattern here which suggests an analogy to an infectious disease
process, spreading from one resort area to the next." ... "Perhaps there are
superficial similarities to disease." "I must confess I find it difficult to belief in a
disease of machinery.
29.Anick Jesdanun (1 September 2007). "School prank starts 25 years of security
woes". CNBC. Retrieved April 12, 2013.
30."The anniversary of a nuisance".[dead link]
31.Cohen, Fred (1984), Computer Viruses – Theory and Experiments
32.Cohen, Fred, An Undetectable Computer Virus, 1987, IBM
33.Burger, Ralph, 1991. Computer Viruses and Data Protection, pp. 19–20
34.Dr. Solomon's Virus Encyclopedia, 1995. ISBN 1-897661-00-2. Abstract. Archived
August 4, 2008, at the Wayback Machine.
35.Gunn, J.B. (June 1984). "Use of virus functions to provide a virtual APL interpreter
under user control". ACM SIGAPL APL Quote Quad archive. ACM New York, NY,
USA. 14 (4): 163–168. doi:10.1145/384283.801093. ISSN 0163-6006.
36."Boot sector virus repair". Antivirus.about.com. 2010-06-10. Retrieved 2010-08-27.
37."Amjad Farooq Alvi Inventor of first PC Virus post by Zagham". YouTube.
Retrieved 2010-08-27.
38."winvir virus". Retrieved 10 June 2016.
39.Grimes, Roger (2001). Malicious Mobile Code: Virus Protection for Windows. O'Reilly.
pp. 99–100. ISBN 9781565926820.
40."SCA virus". Virus Test Center, University of Hamburg. 1990-06-05. Retrieved 2014-
01-14.
41.http://5-0-1.webs.com
42.Ludwig, Mark (1998). The giant black book of computer viruses. Show Low, Ariz:
American Eagle. p. 15. ISBN 978-0-929408-23-1.
43.Stallings, William (2012). Computer security : principles and practice. Boston:
Pearson. p. 183. ISBN 978-0-13-277506-9.
44.Ludwig, Mark (1998). The giant black book of computer viruses. Show Low, Ariz:
American Eagle. p. 292. ISBN 978-0-929408-23-1.
45."www.cs.colostate.edu" (PDF). Retrieved 2016-04-25.
46.Gregory, Peter (2004). Computer viruses for dummies (in Danish). Hoboken, NJ: Wiley
Pub. p. 210. ISBN 0-7645-7418-3.
47.Szor, Peter (2005). The art of computer virus research and defense. Upper Saddle
River, NJ: Addison-Wesley. p. 43. ISBN 0-321-30454-3.
48.Serazzi, Giuseppe & Zanero, Stefano (2004). "Computer Virus Propagation Models".
In Calzarossa, Maria Carla & Gelenbe, Erol. Performance Tools and Applications to
Networked Systems (PDF). Lecture Notes in Computer Science. Vol. 2965. pp. 26–50.
49.Avoine, Gildas et al. (2007). Computer System Security: Basic Concepts and Solved
Exercises. EPFL Press / CRC Press. pp. 21–22. ISBN 9781420046205.
50.Brain, Marshall; Fenton, Wesley. "How Computer Viruses Work". HowStuffWorks.com.
Retrieved 16 June 2013.
51.Grimes, Roger (2001). Malicious Mobile Code: Virus Protection for Windows. O'Reilly.
pp. 37–38. ISBN 9781565926820.
52.Salomon, David (2006). Foundations of Computer Security. Springer. pp. 47–
48. ISBN 9781846283413.
53.Polk, William T. (1995). Antivirus Tools and Techniques for Computer Systems.
William Andrew (Elsevier). p. 4. ISBN 9780815513643.
54.Grimes, Roger (2001). "Macro Viruses". Malicious Mobile Code: Virus Protection for
Windows. O'Reilly. ISBN 9781565926820.
55.Aycock, John (2006). Computer Viruses and Malware. Springer.
p. 89. ISBN 9780387341880.
56."What is boot sector virus?". Retrieved 2015-10-16.
57.Anonymous (2003). Maximum Security. Sams Publishing. pp. 331–
333. ISBN 9780672324598.
58.Skoudis, Edward (2004). "Infection mechanisms and targets". Malware: Fighting
Malicious Code. Prentice Hall Professional. pp. 37–38. ISBN 9780131014053.
59.Jones, Dave (December 2001). "Building an e-mail virus detection system for your
network". Linux Journal 2001 (92).
60.Lipták, Béla G., ed. (2002). Instrument engineers' handbook (3rd ed.).
Boca Raton: CRC Press. p. 874. ISBN 9781439863442. Retrieved September 4, 2015.
61."Computer Virus Strategies and Detection Methods" (PDF). Retrieved 2
September 2008.
62.Internet Communication. PediaPress. pp. 163–. GGKEY:Y43AS5T4TFD. Retrieved 16
April 2016.
63.Szor, Peter (2005). The Art of Computer Virus Research and Defense. Boston:
Addison-Wesley. p. 285. ISBN 0-321-30454-3.
64.Fox-Brewster, Thomas. "Netflix Is Dumping Anti-Virus, Presages Death Of An
Industry". Forbes. Retrieved September 4, 2015.
65."How Anti-Virus Software Works". Stanford University. Retrieved September 4, 2015.
66."www.sans.org". Retrieved 2016-04-16.
67.Bishop, Matt (2003). Computer Security: Art and Science. Addison-Wesley
Professional. p. 620. ISBN 9780201440997.
68.Internet Communication. PediaPress. pp. 165–. GGKEY:Y43AS5T4TFD.
69.John Aycock (19 September 2006). Computer Viruses and Malware. Springer. pp. 35–
36. ISBN 978-0-387-34188-0.
70.Kizza, Joseph M. (2009). Guide to Computer Network Security. Springer.
p. 341. ISBN 9781848009165.
71.Eilam, Eldad (2011). Reversing: Secrets of Reverse Engineering. John Wiley & Sons.
p. 216. ISBN 9781118079768.
72."Virus Bulletin : Glossary – Polymorphic virus". Virusbtn.com. 2009-10-01.
Retrieved 2010-08-27.
73.Perriot, Fredrick; Peter Ferrie; Peter Szor (May 2002). "Striking Similarities" (PDF).
Retrieved September 9, 2007.
74."Virus Bulletin : Glossary — Metamorphic virus". Virusbtn.com. Retrieved 2010-08-27.
75."Virus Basics". US-CERT.
76."Virus Notice: Network Associates' AVERT Discovers First Virus That Can Infect
JPEG Files, Assigns Low-Profiled Risk". Retrieved 2002-06-13.
77."Operating system market share". netmarketshare.com. Retrieved 2015-05-16.
78.This is analogous to how genetic diversity in a population decreases the chance of a
single disease wiping out a population in biology
79.Raggi, Emilio et al. (2011). Beginning Ubuntu Linux. Apress.
p. 148. ISBN 9781430236276.
80."McAfee discovers first Linux virus" (Press release). McAfee, via Axel Boldt. 5
February 1997.
81.Boldt, Axel (19 January 2000). "Bliss, a Linux 'virus'".
82."Detailed test reports—(Windows) home user". AV-Test.org.
83."Detailed test reports — Android mobile devices". AV-Test.org.
84."Microsoft Security Essentials". Retrieved June 21, 2012.
85."Malicious Software Removal Tool". Archived from the original on June 21, 2012.
Retrieved June 21, 2012.
86."Windows Defender". Retrieved June 21, 2012.
87.Rubenking, Neil J. (Feb 17, 2012). "The Best Free Antivirus for 2012". pcmag.com.
88.Rubenking, Neil J. (Jan 10, 2013). "The Best Antivirus for 2013". pcmag.com.
89.Rubenking, Neil J. "Secunia Personal Software Inspector 3.0 Review & Rating".
PCMag.com. Retrieved 2013-01-19.
90."10 Step Guide to Protect Against Viruses". GrnLight.net. Retrieved 23 May 2014.
91."Google Safe Browsing".
92."Report malicious software (URL) to Google".
93.Zhang, Yu et al. (2008). "A Novel Immune Based Approach For Detection of Windows
PE Virus". In Tang, Changjie et al. Advanced Data Mining and Applications: 4th
International Conference, ADMA 2008, Chengdu, China, October 8-10, 2008,
Proceedings. Springer. p. 250. ISBN 9783540881919.
94."Good Security Habits | US-CERT". Retrieved 2016-04-16.
95."W32.Gammima.AG". Symantec. Retrieved 2014-07-17.
96."Viruses! In! Space!". GrnLight.net. Retrieved 2014-07-
17.
97."VirusTotal.com (a subsidiary of Google)".
98."VirScan.org".
99.Rubenking, Neil J. "The Best Free Antivirus for 2014". pcmag.com.
100."Microsoft Safety Scanner".
101."Virus removal -Help". Retrieved 2015-01-31.
102."W32.Gammima.AG Removal — Removing Help". Symantec. 2007-08-27.
Retrieved 2014-07-17.
103."support.microsoft.com". Retrieved 2016-04-16.
104."www.us-cert.gov" (PDF). Retrieved 2016-04-16.
105.David Kim; Michael G. Solomon (17 November 2010). Fundamentals of Information
Systems Security. Jones & Bartlett Publishers. pp. 360–. ISBN 978-1-4496-7164-8.
106."1980s – Securelist – Information about Viruses, Hackers and Spam".
Retrieved 2016-04-16.
107.Internet Communication. PediaPress. pp. 160–. GGKEY:Y43AS5T4TFD.
108."What is a Computer Virus?". Actlab.utexas.edu. 1996-03-31. Retrieved 2010-08-27.
109.Realtimepublishers.com (1 January 2005). The Definitive Guide to Controlling
Malware, Spyware, Phishing, and Spam. Realtimepublishers.com. pp. 48–. ISBN 978-
1-931491-44-0.
110.Eli B. Cohen (2011). Navigating Information Challenges. Informing Science.
pp. 27–. ISBN 978-1-932886-47-4.
111.Vesselin Bontchev. "Macro Virus Identification Problems". FRISK Software
International.
112."Facebook 'photo virus' spreads via email.". Retrieved 2014-04-28.
113.Berend-Jan Wever. "XSS bug in hotmail login page". Retrieved 2014-04-07.
114.Wade Alcorn. "The Cross-site Scripting Virus". bindshell.net. Retrieved 2015-10-13.
Further reading
116.Burger, Ralf (16 February 2010) [1991]. Computer Viruses and Data Protection.
Abacus. p. 353. ISBN 978-1-55755-123-8.
117.Granneman, Scott (6 October 2003). "Linux vs. Windows Viruses". The Register.
118.Ludwig, Mark (1993). Computer Viruses, Artificial Life and Evolution. Tucson,
Arizona 85717: American Eagle Publications, Inc. ISBN 0-929408-07-1. Archived
from the original on July 4, 2008.
119.Mark Russinovich (November 2006). Advanced Malware Cleaning video (Web
(WMV / MP4)). Microsoft Corporation. Retrieved 24 July 2011.
120.Parikka, Jussi (2007). Digital Contagions. A Media Archaeology of Computer
Viruses. Digital Formations. New York: Peter Lang. ISBN 978-0-8204-8837-0.
5.7.2 Computer worm
5.7.2.1 Commentary
A worm is a standalone malware program that replicates itself in order to spread to
other computers,[1] using a network to spread itself and relying on security failures on
the target system to gain access. Worms harm the network itself, if only by
consuming bandwidth, whereas viruses corrupt files on a targeted computer.
Many worms that have been created are designed only to spread, and do not attempt to
change the systems they pass through.
Any code designed to do more than spread the worm is typically referred to as the
"payload". Typical payloads might delete files on a host system, encrypt files in
a ransomware attack, exfiltrate data such as confidential information, or install
a backdoor for remote control by the worm author, turning the host into a "zombie".
Such machines combine to form botnets which can be used to send spam or perform DoS
attacks.[7][8][9][10][11]
Worms exploit problems in operating systems; vendors issue regular security
patches[12] which nullify most of them. Infection can also be avoided by care when
opening unexpected email[13][14] or attachments and by avoiding questionable web sites.
Anti-virus and anti-spyware software and firewalls help limit worm damage but must be
kept current. One detection technique monitors the frequency and quantity of network
scans that a host performs, as sketched below.[15][16][17][18]
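A minimal sketch of that scan-monitoring idea (the window length and threshold are illustrative assumptions): flag any host that contacts unusually many distinct destinations within a short window, which is typical of a worm's propagation phase:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10.0
MAX_DISTINCT_DESTS = 50   # benign hosts rarely exceed this

class ScanDetector:
    """Track (timestamp, destination) pairs per source host and flag
    sources contacting too many distinct destinations per window."""
    def __init__(self):
        self.events = defaultdict(deque)

    def observe(self, src, dst, now=None):
        now = time.time() if now is None else now
        q = self.events[src]
        q.append((now, dst))
        while q and now - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        return len({d for _, d in q}) > MAX_DISTINCT_DESTS  # True = suspicious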
Mitigation measures include security settings on network hardware, sink (null) routes
and filtering of communications messages.
Worms have been used experimentally to update operating systems with patches, but they
drew too much network traffic.[19] Others have been applied similarly to study changes
in social activity or user behavior, or as link-layer agents for vulnerability
discovery.[20]
5.7.2.2 References
1. Barwise, Mike. "What is an internet worm?". BBC. Retrieved 9 September 2010.
2. Brunner, John (1975). The Shockwave Rider. New York: Ballantine
Books. ISBN 0-06-010559-3.
3. "The Submarine".
4. "Security of the Internet". CERT/CC.
5. "Phage mailing list". securitydigest.org.
6. Dressler, J. (2007). "United States v. Morris". Cases and Materials on Criminal
Law. St. Paul, MN: Thomson/West. ISBN 978-0-314-17719-3.
7. Ray, Tiernan (February 18, 2004). "Business & Technology: E-mail viruses blamed
as spam rises sharply". The Seattle Times.
8. McWilliams, Brian (October 9, 2003). "Cloaking Device Made for
Spammers". Wired.
9. "Mydoom Internet worm likely from Russia, linked to spam mail: security
firm". www.channelnewsasia.com. 31 January 2004. Archived from the original on
2006-02-19.
10. "Uncovered: Trojans as Spam Robots". Hiese online. 2004-02-21. Archived
from the original on 2009-05-28. Retrieved 2012-11-02.
11. "Hacker threats to bookies probed". BBC News. February 23, 2004.
12. "USN list". Ubuntu. Retrieved 2012-06-10.
13. Threat Description Email-Worm
14. Threat Description Email-Worm: VBS/LoveLetter
15. Sellke, S. H.; Shroff, N. B.; Bagchi, S. (2008). "Modeling and Automated
Containment of Worms". IEEE Transactions on Dependable and Secure
Computing. 5 (2): 71–86. doi:10.1109/tdsc.2007.70230. Archived from the original on 25
May 2015.
16. "A New Way to Protect Computer Networks from Internet Worms". Newswise.
Retrieved July 5, 2011.
17. Moskovitch R., Elovici Y., Rokach L. (2008), "Detection of unknown computer
worms based on behavioral classification of the host", Computational Statistics and
Data Analysis, 52(9):4544–4566, doi:10.1016/j.csda.2008.01.028
18. "Computer Worm Information and Removal Steps". Veracode. Retrieved 2015-04-
04.
19. "Virus alert about the Nachi worm". Microsoft.
20. Al-Salloum, Z. S.; Wolthusen, S. D. (2010). "A link-layer-based self-replicating
vulnerability discovery agent". The IEEE symposium on Computers and
Communications. p. 704. doi:10.1109/ISCC.2010.5546723. ISBN 978-1-4244-7754-8.
External links
22. Malware Guide – Guide for understanding, removing and preventing worm
infections on Vernalex.com.
23. "The 'Worm' Programs – Early Experience with a Distributed Computation", John
Shoch and Jon Hupp, Communications of the ACM, Volume 25 Issue 3 (March 1982),
pages 172–180.
24. "The Case for Using Layered Defenses to Stop Worms", Unclassified report from
the U.S. National Security Agency (NSA), 18 June 2004.
25. Worm Evolution, paper by Jago Maniscalchi on Digital Threat, 31 May 2009.
5.8 Denial-of-service attack
5.8.1 Commentary
A denial-of-service (DoS) attack makes a machine or service unavailable to its users by
disrupting the services of a host connected to the Internet. Denial of service is done by
flooding the targeted resource with requests to overload the system and prevent
legitimate requests from being fulfilled.[1][2][3][4][5]
A denial-of-service attack stops users of a service from using it, while in a distributed
denial-of-service (DDoS) attack the target is flooded from many different sources.
Attacks either crash services or flood them.[6][7][8][9]
Advanced persistent DoS attacks are characterised by:
• advanced reconnaissance
• tactical execution
• explicit motivation
• large computing capacity
• simultaneous multi-threaded OSI layer attacks
• persistence over extended periods[10].
Denial-of-service as a service is offered by "booter" or "stresser" vendors as web-based
stress testing, which may also be used to launch unauthorized denial-of-service
attacks.[11]
Symptoms of a denial-of-service attack include:[12]
• unusually slow network performance
• unavailability of a particular web site
• inability to access any web site
• dramatic increase in the number of spam emails received
• disconnection of a wireless or wired internet connection
• long-term denial of access to the web or any internet services.
Attack tools are examined in [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]
[28][29][30][31][32][33][34][35][36][37][40][41][42][43][44][45][46][47][48][49][50]
Defense from denial-of-service attacks usually involves a combination of attack
detection, traffic classification and response tools, aiming to block illegitimate
traffic and allow legitimate traffic.[51] A list of prevention and response tools is provided in
the following references.[52][53][54][55][56][57][58][59][60][61][62][63][64][65][66][67]
[68][69][70][71]
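As a minimal sketch of the classify-and-respond idea (the rates are illustrative assumptions), a per-client token bucket lets normally paced clients through while sources flooding requests exhaust their bucket and are dropped:

import time
from collections import defaultdict

RATE = 10.0    # requests per second refilled per client
BURST = 20.0   # bucket capacity

class TokenBucketLimiter:
    """Per-client token bucket: a request is served only if the
    client's bucket still holds a token."""
    def __init__(self):
        self.buckets = defaultdict(lambda: {"tokens": BURST, "last": time.time()})

    def allow(self, client_ip):
        b = self.buckets[client_ip]
        now = time.time()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False  # classified as flood traffic; drop or deprioritise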
An unintentional denial of service occurs when a system ends up denied not by a
deliberate attack but due to a sudden spike in popularity.[72][73][74][75]
Backscatter is a side effect of a spoofed denial-of-service attack, where the attacker
forges the source address in IP packets sent to the victim; the victim cannot distinguish
between spoofed and legitimate packets, so it responds to the spoofed
packets.[76] Backscatter is detected by network telescopes as indirect evidence of
attacks, with statistical analysis of the observed packets used to determine the
characteristics of DoS attacks and their victims.
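A minimal sketch of the telescope idea (the packet representation, flag names and telescope prefix are assumptions for illustration): count unsolicited replies landing in unused address space and rank the hosts that send them, since heavy senders are likely DoS victims answering spoofed packets:

from collections import Counter

UNSOLICITED = {"SYN-ACK", "RST"}  # typical replies to spoofed SYNs

def likely_victims(packets, telescope_prefix="203.0.113."):
    """Each packet is a dict with 'src', 'dst' and 'flags' keys;
    return the ten sources sending the most backscatter."""
    victims = Counter()
    for pkt in packets:
        if pkt["dst"].startswith(telescope_prefix) and pkt["flags"] in UNSOLICITED:
            victims[pkt["src"]] += 1
    return victims.most_common(10)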
Many countries have laws making denial-of-service attacks illegal.[77][78][79][80][81]
5.8.2 References
1. "denial of service attack". Retrieved 26 May 2016.
2. Prince, Matthew (25 April 2016). "Empty DDoS Threats: Meet the Armada
Collective". CloudFlare. Retrieved 18 May 2016.
3. "Brand.com President Mike Zammuto Reveals Blackmail Attempt". 5 March 2014.
Archived from the original on 11 March 2014.
4. "Brand.com's Mike Zammuto Discusses Meetup.com Extortion". 5 March 2014.
Archived from the original on 13 May 2014.
5. "The Philosophy of Anonymous". Radicalphilosophy.com. 2010-12-17. Retrieved 2013-
09-10.
6. Taghavi Zargar, Saman (November 2013). "A Survey of Defense Mechanisms Against
Distributed Denial of Service (DDoS) Flooding Attacks" (PDF). IEEE
COMMUNICATIONS SURVEYS & TUTORIALS. pp. 2046–2069. Retrieved 2014-03-07.
7. Smith, Steve. "5 Famous Botnets that held the internet hostage". tqaweekly.
Retrieved 20 November 2014.
8. Goodin, Dan (28 September 2016). "Record-breaking DDoS reportedly delivered by
>145k hacked cameras". Ars Technica. Archived from the original on 2 October 2016.
9. Khandelwal, Swati (26 September 2016). "World's largest 1 Tbps DDoS Attack
launched from 152,000 hacked Smart Devices". The Hacker News. Archived from the
original on 30 September 2016.
10.Gold, Steve (21 August 2014). "Video games company hit by 38-day DDoS attack". SC
Magazine UK. Retrieved 4 February 2016.
11.Krebs, Brian (August 15, 2015). "Stress-Testing the Booter Services,
Financially". Krebs on Security. Retrieved 2016-09-09.
12.McDowell, Mindi (November 4, 2009). "Cyber Security Tip ST04-015 - Understanding
Denial-of-Service Attacks". United States Computer Emergency Readiness
Team. Archived from the original on 2013-11-04. Retrieved December 11, 2013.
13.Dittrich, David (December 31, 1999). "The "stacheldraht" distributed denial of service
attack tool". University of Washington. Retrieved 2013-12-11.
14.Glenn Greenwald (2014-07-15). "HACKING ONLINE POLLS AND OTHER WAYS
BRITISH SPIES SEEK TO CONTROL THE INTERNET". The Intercept_. Retrieved 2015-
12-25.
15."Amazon CloudWatch". Amazon Web Services, Inc.
16.Encyclopaedia Of Information Technology. Atlantic Publishers & Distributors. 2007.
p. 397. ISBN 81-269-0752-5.
17.Schwabach, Aaron (2006). Internet and the Law. ABC-CLIO. p. 325. ISBN 1-85109-
731-7.
18.Lu, Xicheng; Wei Zhao (2005). Networking and Mobile Computing. Birkhäuser.
p. 424. ISBN 3-540-28102-9.
19."Has Your Website Been Bitten By a Zombie?". Cloudbric. 3 August 2015.
Retrieved 15 September 2015.
20.Boyle, Phillip (2000). "SANS Institute – Intrusion Detection FAQ: Distributed Denial of
Service Attack Tools: n/a". SANS Institute. Retrieved 2008-05-02.
21.Leyden, John (2004-09-23). "US credit card firm fights DDoS attack". The Register.
Retrieved 2011-12-02.
22.Swati Khandelwal (23 October 2015). "Hacking CCTV Cameras to Launch DDoS
Attacks". The Hacker News.
23.https://www.incapsula.com/blog/cctv-ddos-botnet-back-yard.html
24."Who's Behind DDoS Attacks and How Can You Protect Your Website?". Cloudbric.
10 September 2015. Retrieved 15 September 2015.
25.Solon, Olivia (9 September 2015). "Cyber-Extortionists Targeting the Financial Sector
Are Demanding Bitcoin Ransoms". Bloomberg. Retrieved 15 September 2015.
26.Greenberg, Adam (14 September 2015). "Akamai warns of increased activity from
DDoS extortion group". SC Magazine. Retrieved 15 September 2015.
27."OWASP Plan - Strawman - Layer_7_DDOS.pdf" (PDF). Open Web Application Security
Project. 18 March 2014. Retrieved 18 March 2014.
28."Types of DDoS Attacks". Distributed Denial of Service Attacks(DDoS) Resources,
Pervasive Technology Labs at Indiana University. Advanced Networking Management
Lab (ANML). December 3, 2009. Archived from the original on 2010-09-14.
Retrieved December 11, 2013.
29.Paul Sop (May 2007). "Prolexic Distributed Denial of Service Attack Alert". Prolexic
Technologies Inc. Prolexic Technologies Inc. Archived from the original on 2007-08-
03. Retrieved 2007-08-22.
30.Robert Lemos (May 2007). "Peer-to-peer networks co-opted for DOS attacks".
SecurityFocus. Retrieved 2007-08-22.
31.Fredrik Ullner (May 2007). "Denying distributed attacks". DC++: Just These Guys, Ya
Know?. Retrieved 2007-08-22.
32.Leyden, John (2008-05-21). "Phlashing attack thrashes embedded systems". The
Register. Retrieved 2009-03-07.
33.Jackson Higgins, Kelly (May 19, 2008). "Permanent Denial-of-Service Attack
Sabotages Hardware". Dark Reading. Archived from the original on December 8,
2008.
34."EUSecWest Applied Security Conference: London, U.K.". EUSecWest. 2008.
Archived from the original on 2009-02-01.
35.Rossow, Christian (February 2014). "Amplification Hell: Revisiting Network Protocols
for DDoS Abuse" (PDF). Internet Society. Retrieved 4 February 2016.
36.Paxson, Vern (2001). "An Analysis of Using Reflectors for Distributed Denial-of-
Service Attacks". ICIR.org.
37."Alert (TA14-017A) UDP-based Amplification Attacks". US-CERT. July 8, 2014.
Retrieved 2014-07-08.
38.van Rijswijk-Deij, Roland (2014). "DNSSEC and its potential for DDoS attacks - a
comprehensive measurement study". ACM Press.
39.Adamsky, Florian (2015). "P2P File-Sharing in Hell: Exploiting BitTorrent
Vulnerabilities to Launch Distributed Reflective DoS Attacks".
40.Vaughn, Randal; Evron, Gadi (2006). "DNS Amplification Attacks" (PDF). ISOTF.
Archived from the original (PDF) on 2010-12-14.
41."Alert (TA13-088A) DNS Amplification Attacks". US-CERT. July 8, 2013.
Retrieved 2013-07-17.
42.Yu Chen; Kai Hwang; Yu-Kwong Kwok (2005). "Filtering of shrew DDoS attacks in
frequency domain". The IEEE Conference on Local Computer Networks 30th
Anniversary (LCN'05). 8 pp. doi:10.1109/LCN.2005.70. ISBN 0-7695-2421-4.
43.Ben-Porat, U.; Bremler-Barr, A.; Levy, H. (2013-05-01). "Vulnerability of Network
Mechanisms to Sophisticated DDoS Attacks". IEEE Transactions on
Computers. 62 (5): 1031–1043. doi:10.1109/TC.2012.49. ISSN 0018-9340.
44.orbitalsatelite. "Slow HTTP Test". SourceForge.
45."RFC 4987 – TCP SYN Flooding Attacks and Common Mitigations". Tools.ietf.org.
August 2007. Retrieved 2011-12-02.
46."CERT Advisory CA-1997-28 IP Denial-of-Service Attacks". CERT. 1998.
Retrieved July 18, 2014.
47."Windows 7, Vista exposed to 'teardrop attack'". ZDNet. September 8, 2009.
Retrieved 2013-12-11.
48."Microsoft Security Advisory (975497): Vulnerabilities in SMB Could Allow Remote
Code Execution". Microsoft.com. September 8, 2009. Retrieved 2011-12-02.
49."FBI — Phony Phone Calls Distract Consumers from Genuine Theft". FBI.gov. 2010-
05-11. Retrieved 2013-09-10.
50."Internet Crime Complaint Center's (IC3) Scam Alerts January 7, 2013". IC3.gov.
2013-01-07. Retrieved 2013-09-10.
51.Loukas, G.; Oke, G. (September 2010) [August 2009]. "Protection Against Denial of
Service Attacks: A Survey" (PDF). Comput. J. 53 (7): 1020–
1037. doi:10.1093/comjnl/bxp078.
52.Alqahtani, S.; Gamble, R. F. (1 January 2015). "DDoS Attacks in Service
Clouds". 2015 48th Hawaii International Conference on System Sciences (HICSS):
5331–5340. doi:10.1109/HICSS.2015.627.
53.Kousiouris, George (2014). "KEY COMPLETION INDICATORS:minimizing the effect of
DoS attacks on elastic Cloud-based applications based on application-level markov
chain checkpoints". CLOSER Conference. Retrieved 2015-05-24.
54.Patrikakis, C.; Masikos, M.; Zouraraki, O. (December 2004). "Distributed Denial of
Service Attacks". The Internet Protocol Journal. 7 (4): 13–35.
55.Abante, Carl (March 2, 2013). "Relationship between Firewalls and Protection
against DDoS". Ecommerce Wisdom. Retrieved 2013-05-24.[dubious – discuss]
56.Froutan, Paul (June 24, 2004). "How to defend against DDoS
attacks". Computerworld. Retrieved May 15, 2010.
57.Suzen, Mehmet. "Some IoS tips for Internet Service (Providers)" (PDF). Archived
from the original (PDF) on 2008-09-10.
58."DDoS Mitigation via Regional Cleaning Centers (Jan 2004)" (PDF). SprintLabs.com.
Sprint ATL Research. Archived from the original (PDF) on 2008-09-21. Retrieved 2011-
12-02.
59.Gallagher, Sean. "Biggest DDoS ever aimed at Cloudflare's content delivery
network". Ars Technica. Retrieved 18 May 2016.
60."Level 3 DDoS Mitigation". level3.com. Retrieved 9 May 2016.
61."Defensepipe". radware.com. Retrieved November 2015. Check date values in: |
access-date= (help)
62."Clean Pipes DDoS Protection and Mitigation from Arbor Networks &
Cisco". ArborNetworks.com. 8 August 2013.
63."AT&T Internet Protect Distributed Denial of Service
Defense" (PDF). ATT.com (Product brief). 16 October 2012.
64."Silverline DDoS Protection service". f5.com. Retrieved March 2015. Check date
values in: |access-date= (help)
65."Infrastructure DDos Protection". incapsula.com. Retrieved June 2015. Check date
values in: |access-date= (help)
66."DDoS Protection". Neustar.biz. Retrieved November 2014. Check date values in: |
access-date= (help)
67.Lunden, Ingrid (December 2, 2013). "Akamai Buys DDoS Prevention Specialist
Prolexic For $370M To Ramp Up Security Offerings For Enterprises". TechCrunch.
Retrieved September 23, 2014.
68."DDoS Protection with Network Agnostic Option". Tatacommunications.com. 7
September 2011.
69."VeriSign Rolls Out DDoS Monitoring Service". Darkreading.com. 11 September 2009.
Retrieved 2 December 2011.
70."Security: Enforcement and Protection". Verizon.com. Retrieved January
2015. Check date values in: |access-date= (help)
71."Verizon Digital Media Services Launches Cloud-Based Web Application Firewall
That Increases Defenses Against Cyberattacks". Verizon.com. Retrieved January 2015.
72.Shiels, Maggie (2009-06-26). "Web slows after Jackson's death". BBC News.
73."We're Sorry. Automated Query error". Google Product Forums › Google Search
Forum. Google.com. October 20, 2009. Retrieved 2012-02-11.
74."YouTube sued by sound-alike site". BBC News. 2006-11-02.
75.Bill Chappell (12 March 2014). "People Overload Website, Hoping To Help Search For
Missing Jet". NPR. Retrieved 4 February 2016.
76."Backscatter Analysis (2001)". Animations (video). Cooperative Association for
Internet Data Analysis. Retrieved December 11, 2013.
77."United States Code: Title 18,1030. Fraud and related activity in connection with
computers | Government Printing Office". www.gpo.gov. 2002-10-25. Retrieved 2014-
01-15.
78."International Action Against DD4BC Cybercriminal Group". EUROPOL. 12 January
2016.
79."Computer Misuse Act 1990". legislation.gov.uk — The National Archives, of UK. 10
January 2008.
80."Anonymous DDoS Petition: Group Calls On White House To Recognize Distributed
Denial Of Service As Protest.". HuffingtonPost.com. 2013-01-12.
81."DDOS Attack: crime or virtual sit-in?". RT.com. YouTube.com. October 6, 2011.
82.Ethan Zuckerman; Hal Roberts; Ryan McGrady; Jillian York; John Palfrey (December
2011). "Distributed Denial of Service Attacks Against Independent Media and Human
Rights Sites" (PDF). The Berkman Center for Internet & Society at Harvard
University. Archived from the original (PDF) on 2011-03-02. Retrieved 2011-03-02.
83."DDOS Public Media Reports". Harvard. Archived from the original on 2011-03-02.
5.9 Malware
5.9.1 Commentary
Malware is software used with malicious intent to disrupt computer or mobile
operation, gather sensitive information, gain access to private systems, or display
unwanted advertising.[1][2][3][4] It can include computer viruses, worms, trojan
horses, ransomware, spyware, adware, scareware, and other malicious programs or
executable code, scripts, active content, and other software[5] disguised as, or
embedded in, non-malicious files.[6][7][8][9][11][12][13][14][15] 
Software such as anti-virus and firewalls are used to protect against activity identified
as malicious, and to recover from attacks.[10]
Infected "zombie computers" send email spam, host contraband data,[16] or force
distributed denial-of-service attacks for extortion.[17]
Spyware is installed through security holes or bundled covertly with unrelated user-
installed software;[18] it monitors web browsing, displays unsolicited advertisements,
or redirects affiliate-marketing revenue.
Ransomware infects a computer and demands payment to reverse the damage.
Click fraud simulates clicks on advertising links to generate payments from the
advertiser.[19]
Malware is used for criminal purposes and cyber warfare[20][21] and now proliferates
faster than legitimate applications,[22] at an increasing rate.[23][24][25][26][27][28][29]
Infectious malware takes the form of viruses and worms.[30]
These categories are not mutually exclusive, so malware may use multiple techniques.
[31]
Viruses hide inside otherwise legitimate programs, replicate themselves into other
programs and files, and perform malicious actions.[32]
Trojan horses misrepresent themselves as useful, routine, or interesting to persuade a
user to install them,[33][34][35][36][37] e.g. via e-mail attachments or downloads;[38]
unlike viruses, they neither inject themselves into files nor replicate.[39]
Rootkits conceal themselves within the operating system, hiding malware from the user
by making its processes invisible, preventing its files from being read,[40] blocking
deletion, and restarting cancelled processes.[41]
Backdoors bypass normal authentication to allow future access,[42] invisibly to the
user; they are also installed deliberately, e.g. for vendor technical support or
government-agency monitoring.[43] They are planted by Trojan horses, worms, implants,
or other methods.[44][45]
Malware combines multiple techniques to avoid detection and analysis,[46] e.g.
fingerprinting the environment when executed,[47] confusing automated detection
methods,[48] timing-based evasion (running only at particular times or on particular
user actions), changing internal data to confuse automated tools,[49] using stolen
certificates,[50] and using information-hiding techniques (stegomalware).
Malware exploits security defects (bugs or vulnerabilities) in the operating system, in
applications,[51] or in vulnerable versions of browser plug-ins, aided by insecure
design, user error, over-privileged users and over-privileged code;[52][53][54][56]
known defects are catalogued in the US National Vulnerability Database.[55] Running
the same operating system throughout a network is itself a risk: a single exploit then
suffices, and a worm can infect every machine.[57]
Anti-malware strategies include programs to combat malware together with preventive
and recovery measures (backup/recovery).
Anti-malware software typically includes an on-access (real-time) scanner that hooks
into the operating system's kernel, much as malware itself would, and checks each file
as it is accessed. If the file is identified as malicious, the access is blocked, the
file is handled in a pre-defined way, and the user is notified.
Anti-malware programs operate in two modes, static and dynamic. A static scan sweeps
all files, directories and control tables, reporting malware and taking predefined
action.[58] In dynamic mode the scanner inspects files and control tables as the
computer performs its work.[59][60][61][62][63][64]
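As a rough illustration of the static mode, the Python sketch below walks a directory
tree and flags files whose SHA-256 digest appears in a known-bad list. The signature
entry is an invented placeholder; real scanners match byte patterns and heuristics
rather than whole-file hashes.

import hashlib
import os

# Hypothetical signature database: SHA-256 digests of known-bad files.
# The entry below is a placeholder, not a real malware signature.
KNOWN_BAD = {"0" * 64}

def sha256_of(path):
    # Hash the file in chunks so large files do not exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def static_scan(root):
    # Sweep every regular file under 'root', as a static scan would.
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) in KNOWN_BAD:
                    print("signature match:", path)
            except OSError:
                pass  # unreadable file: a real scanner would log this

static_scan("/tmp")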
Some websites offer security scans, checking pages for malware, malicious redirects
and outdated software.[65][66][67][68]
"Air gap" isolation or "Parallel Network" has been suggested as a way to protect
computers from malware on infected computers however it is found that this is not the
case using electromagnetic, thermal and acoustic emissions.[69][70][71][72][73]
Grayware consists of unwanted applications and files, e.g. spyware, adware, fraudulent
dialers, joke programs and remote-access tools, that are not classed as malware but
that harm performance and security.[74] They are annoying or undesirable rather than
outright malicious.[75][76][77][50]
Academic research introduced worms,[78] viruses,[79][80] and the first ransomware and
evasion ideas.[81]
5.9.2 References
1 "Malware definition". techterms.com. Retrieved 27 September 2015.
2 Christopher Elisan (5 September 2012). Malware, Rootkits & Botnets A Beginner's
Guide. McGraw Hill Professional. pp. 10–. ISBN 978-0-07-179205-9.
3 Stallings, William (2012). Computer security : principles and practice. Boston:
Pearson. p. 182. ISBN 978-0-13-277506-9.
4 "Defining Malware: FAQ". technet.microsoft.com. Retrieved 10 September 2009.
5 "An Undirected Attack Against Critical Infrastructure" (PDF). United States Computer
Emergency Readiness Team(Us-cert.gov). Retrieved 28 September 2014.
6 "Evolution of Malware-Malware Trends". Microsoft Security Intelligence Report-
Featured Articles. Microsoft.com. Retrieved 28 April 2013.
7 "Virus/Contaminant/Destructive Transmission Statutes by State". National
Conference of State Legislatures. 2012-02-14. Retrieved 26 August 2013.
8 "§ 18.2-152.4:1 Penalty for Computer Contamination" (PDF). Joint Commission on
Technology and Science. Retrieved 17 September 2010.
9 Russinovich, Mark (2005-10-31). "Sony, Rootkits and Digital Rights Management
Gone Too Far". Mark's Blog. Microsoft MSDN. Retrieved 2009-07-29.
10 "Protect Your Computer from Malware". OnGuardOnline.gov. Retrieved 26
August 2013.
11 "Malware". FEDERAL TRADE COMMISSION- CONSUMER INFORMATION.
Retrieved 27 March 2014.
12 Hernandez, Pedro. "Microsoft Vows to Combat Government Cyber-Spying". eWeek.
Retrieved 15 December 2013.
13 Kovacs, Eduard. "MiniDuke Malware Used Against European Government
Organizations". Softpedia. Retrieved 27 February 2013.
14 "South Korea network attack 'a computer virus'". BBC. Retrieved 20 March 2013.
15 "Malware Revolution: A Change in Target". March 2007.
16 "Child Porn: Malware's Ultimate Evil". November 2009.
17 PC World – Zombie PCs: Silent, Growing Threat.
18 "Peer To Peer Information". NORTH CAROLINA STATE UNIVERSITY. Retrieved 25
March 2011.
19 "Another way Microsoft is disrupting the malware ecosystem". Retrieved 18
February 2015.
20 "Shamoon is latest malware to target energy sector". Retrieved 18 February 2015.
21 "Computer-killing malware used in Sony attack a wake-up call". Retrieved 18
February 2015.
22 "Symantec Internet Security Threat Report: Trends for July–December 2007
(Executive Summary)" (PDF). XIII. Symantec Corp. April 2008: 29. Retrieved 11
May 2008.
23 "F-Secure Reports Amount of Malware Grew by 100% during 2007" (Press release).
F-Secure Corporation. 4 December 2007. Retrieved 11 December 2007.
24 "F-Secure Quarterly Security Wrap-up for the first quarter of 2008". F-Secure. 31
March 2008. Retrieved 25 April 2008.
25 "Continuing Business with Malware Infected Customers". Gunter Ollmann. October
2008.
26 "New Research Shows Remote Users Expose Companies to Cybercrime". Webroot.
April 2013.
27 "Symantec names Shaoxing, China as world's malware capital". Engadget.
Retrieved 15 April 2010.
28 Rooney, Ben (2011-05-23). "Malware Is Posing Increasing Danger". Wall Street
Journal.
29 Suarez-Tangil, Guillermo; Juan E. Tapiador, Pedro Peris-Lopez, Arturo Ribagorda
(2014). "Evolution, Detection and Analysis of Malware in Smart Devices" (PDF). IEEE
Communications Surveys & Tutorials.
30 "computer virus – Encyclopedia Britannica". Britannica.com. Retrieved 28
April 2013.
31 All about Malware and Information Privacy
32 "What are viruses, worms, and Trojan horses?". Indiana University. The Trustees of
Indiana University. Retrieved 23 February 2015.
33 Landwehr, C. E; A. R Bull; J. P McDermott; W. S Choi (1993). A taxonomy of
computer program security flaws, with examples. DTIC Document. Retrieved 2012-
04-05.
34 "Trojan Horse Definition". Retrieved 2012-04-05.
35 "Trojan horse". Webopedia. Retrieved 2012-04-05.
36 "What is Trojan horse? – Definition from Whatis.com". Retrieved 2012-04-05.
37 "Trojan Horse: [coined By MIT-hacker-turned-NSA-spook Dan Edwards] N.".
Retrieved 2012-04-05.
38 "What is the difference between viruses, worms, and Trojans?". Symantec
Corporation. Retrieved 2009-01-10.
39 "VIRUS-L/comp.virus Frequently Asked Questions (FAQ) v2.00 (Question B3: What is
a Trojan Horse?)". 9 October 1995. Retrieved 2012-09-13.
40 McDowell, Mindi. "Understanding Hidden Threats: Rootkits and Botnets". US-CERT.
Retrieved 6 February 2013.
41 "Catb.org". Catb.org. Retrieved 15 April 2010.
42 Vincentas (11 July 2013). "Malware in SpyWareLoop.com". Spyware Loop.
Retrieved 28 July 2013.
43 Staff, SPIEGEL. "Inside TAO: Documents Reveal Top NSA Hacking Unit". SPIEGEL.
Retrieved 23 January 2014.
44 Edwards, John. "Top Zombie, Trojan Horse and Bot Threats". IT Security.
Retrieved 25 September 2007.
45 Appelbaum, Jacob. "Shopping for Spy Gear:Catalog Advertises NSA Toolbox".
SPIEGEL. Retrieved 29 December 2013.
46 Evasive malware
47 Kirat, Dhilung; Vigna, Giovanni; Kruegel, Christopher (2014). Barecloud: bare-metal
analysis-based evasive malware detection. ACM. pp. 287–301. ISBN 978-1-931971-15-
7.
48 The Four Most Common Evasive Techniques Used by Malware. April 27, 2015.
49 Young, Adam; Yung, Moti (1997). "Deniable Password Snatching: On the Possibility
of Evasive Electronic Espionage". Symp. on Security and Privacy. IEEE. pp. 224–
235. ISBN 0-8186-7828-3.
50 Casey, Henry T. (25 November 2015). "Latest adware disables antivirus
software". Tom's Guide. Yahoo.com. Retrieved 25 November 2015.
51 "Global Web Browser... Security Trends" (PDF). Kaspersky lab. November 2012.
52 Rashid, Fahmida Y. (27 November 2012). "Updated Browsers Still Vulnerable to
Attack if Plugins Are Outdated". pcmag.com.
53 Danchev, Dancho (18 August 2011). "Kaspersky: 12 different vulnerabilities
detected on every PC". pcmag.com.
54 "Adobe Security bulletins and advisories". Adobe.com. Retrieved 19 January 2013.
55 Rubenking, Neil J. "Secunia Personal Software Inspector 3.0 Review & Rating".
PCMag.com. Retrieved 19 January 2013.
56 "USB devices spreading viruses". CNET. CBS Interactive. Retrieved 18
February 2015.
57 "LNCS 3786 – Key Factors Influencing Worm Infection", U. Kanlayasiri, 2006, web
(PDF): SL40-PDF.
58 "How Antivirus Software Works?". Retrieved 2015-10-16.
59 "Microsoft Security Essentials". Microsoft. Retrieved 21 June 2012.
60 "Malicious Software Removal Tool". Microsoft. Retrieved 21 June 2012.
61 "Windows Defender". Microsoft. Retrieved 21 June 2012.
62 Rubenking, Neil J. (8 January 2014). "The Best Free Antivirus for 2014". pcmag.com.
63 "How do I remove a computer virus?". Microsoft. Retrieved 26 August 2013.
64 "Microsoft Safety Scanner". Microsoft. Retrieved 26 August 2013.
65 "An example of a website vulnerability scanner". Unmaskparasites.com.
Retrieved 19 January 2013.
66 "Redleg's File Viewer. Used to check a webpage for malicious redirects or
malicious HTML coding". Aw-snap.info. Retrieved 19 January 2013.
67 "Example Google.com Safe Browsing Diagnostic page". Google.com. Retrieved 19
January 2013.
68 "Safe Browsing (Google Online Security Blog)". Retrieved 21 June 2012.
69 Hanspach, Michael; Goetz, Michael (November 2013). "On Covert Acoustical Mesh
Networks in Air". Journal of Communications. doi:10.12720/jcm.8.11.758-767.
70 M. Guri, G. Kedma, A. Kachlon and Y. Elovici, "AirHopper: Bridging the air-gap
between isolated networks and mobile phones using radio frequencies," Malicious
and Unwanted Software: The Americas (MALWARE), 2014 9th International
Conference on, Fajardo, PR, 2014, pp. 58-67.
71 M. Guri, M. Monitz, Y. Mirski and Y. Elovici, "BitWhisper: Covert Signaling Channel
between Air-Gapped Computers Using Thermal Manipulations," 2015 IEEE 28th
Computer Security Foundations Symposium, Verona, 2015, pp. 276-289.
72 GSMem: Data Exfiltration from Air-Gapped Computers over GSM Frequencies.
Mordechai Guri, Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Yuval
Elovici, Ben-Gurion University of the Negev; USENIX Security Symposium 2015
73 https://arxiv.org/ftp/arxiv/papers/1606/1606.05915.pdf
74 Vincentas (11 July 2013). "Grayware in SpyWareLoop.com". Spyware Loop.
Retrieved 28 July 2013.
75 "Threat Encyclopedia – Generic Grayware". Trend Micro. Retrieved 27
November 2012.
76 "Rating the best anti-malware solutions". Arstechnica. Retrieved 28 January 2014.
77 "PUP Criteria". malwarebytes.org. Retrieved 13 February 2015.
78 William A Hendric (4 September 2014). "Computer Virus history". The Register.
Retrieved 29 March 2015.
79 John von Neumann, "Theory of Self-Reproducing Automata", Part 1: Transcripts of
lectures given at the University of Illinois, December 1949, Editor: A. W. Burks,
University of Illinois, USA, 1966.
80 Fred Cohen, "Computer Viruses", PhD Thesis, University of Southern California, ASP
Press, 1988.
81 Young, Adam; Yung, Moti (2004). Malicious cryptography - exposing cryptovirology.
Wiley. pp. 1–392. ISBN 978-0-7645-4975-5.
5.10 Payload (computing)
5.10.1 Commentary
The payload is the intended message within transmitted data, excluding control
information such as headers.[1][2][4][5] In computer security, the payload is the part
of malware that performs the malicious action,[3] as distinct from the overhead code
used for replication and concealment.
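The distinction can be made concrete with a toy frame format, invented here purely for
illustration: a fixed header carries the control information, and everything after it
is the payload.

import struct

# Toy frame: a 5-byte header (uint32 payload length + uint8 message type,
# network byte order) followed by the payload bytes themselves.
HEADER = struct.Struct("!IB")

def encode(msg_type, payload):
    return HEADER.pack(len(payload), msg_type) + payload

def decode(frame):
    length, msg_type = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    return msg_type, payload

frame = encode(1, b"temperature=21.5")
print(decode(frame))  # (1, b'temperature=21.5'): only the payload is the message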
5.10.2 References
1. "Payload definition". Pcmag.com. 1994-12-01. Retrieved 2012-02-07.
2. "Payload definition". Techterms.com. Retrieved 2012-02-07.
3. "Payload definition". Securityfocus.com. Retrieved 2012-02-07.
4. "RFC 1122: Requirements for Internet Hosts — Communication Layers". IETF.
October 1989. p. 18. RFC 1122. Retrieved 2010-06-07.
5. "Data Link Layer (Layer 2)". The TCP/IP Guide. 2005-09-20. Retrieved 2010-01-31.
5.11 Concealment
5.11.1 Trojan horse
5.11.1.1 Commentary
A Trojan horse is any malicious computer program that misleads users about its true
intent in order to get into a computer.[1][2][3][4][5] It is spread by email
attachments or downloads. It typically opens a backdoor through which a controller can
access the computer system to steal private data,[6][8] take destructive actions, or
use the machine for illicit purposes.[9][10][11][12][13]
5.11.1.2 References
1 Carnegie Mellon University (1999): "CERT Advisory CA-1999-02 Trojan Horses"
2 Landwehr, C. E; A. R Bull; J. P McDermott; W. S Choi (1993). A taxonomy of
computer program security flaws, with examples. DTIC Document. Retrieved 2012-04-
05.
3 "Trojan Horse Definition". Retrieved 2012-04-05.
4 "Trojan horse". Webopedia. Retrieved 2012-04-05.
5 "What is Trojan horse? – Definition from Whatis.com". Retrieved 2012-04-05.
6 "Trojan Horse: [coined By MIT-hacker-turned-NSA-spook Dan Edwards] N.".
Retrieved 2012-04-05.
7 "What is the difference between viruses, worms, and Trojans?". Symantec
Corporation. Retrieved 2009-01-10.
8 "VIRUS-L/comp.virus Frequently Asked Questions (FAQ) v2.00 (Question B3: What
is a Trojan Horse?)". 9 October 1995. Retrieved 2012-09-13.
9 "Hackers, Spyware and Trojans – What You Need to Know". Comodo.
Retrieved September 5, 2015.
10 Robert McMillan (2013): Trojan Turns Your PC Into Bitcoin Mining Slave,
Retrieved on 2015-02-01
11 Jamie Crapanzano (2003): "Deconstructing SubSeven, the Trojan Horse of
Choice", SANS Institute, Retrieved on 2009-06-11
12 Vincentas (11 July 2013). "Trojan Horse in SpyWareLoop.com". Spyware Loop.
Retrieved 28 July 2013.
13 Basil Cupa, Trojan Horse Resurrected: On the Legality of the Use of Government
Spyware (Govware), LISS 2013, pp. 419–428
14 "Dokument nicht gefunden!". Federal Department of Justice and Police. Archived
from the original on May 6, 2013.
15 "Swiss coder publicises government spy Trojan – Techworld.com".
News.techworld.com. Retrieved 2014-01-26.
16 BitDefender.com Malware and Spam Survey
17 Datta, Ganesh. "What are Trojans?". SecurAid.
18 https://sourceforge.net/projects/mega-panzer/
19 https://sourceforge.net/projects/mini-panzer/
20 https://blog.lookout.com/blog/2015/11/19/shedun-trojanized-adware/
21 http://www.theinquirer.net/inquirer/news/2435721/shedun-trojan-adware-is-
hitting-the-android-accessibility-service
22 https://blog.lookout.com/blog/2015/11/04/trojanized-adware/
23 http://betanews.com/2015/11/05/shuanet-shiftybug-and-shedun-malware-could-
auto-root-your-android/
24 http://www.techtimes.com/articles/104373/20151109/new-family-of-android-
malware-virtually-impossible-to-remove-say-hello-to-shedun-shuanet-and-shiftybug.htm
25 http://arstechnica.com/security/2015/11/android-adware-can-install-itself-even-
when-users-explicitly-reject-it/
5.11.2 Rootkit
5.11.2.1 Commentary
5.11.2.1.1 Introduction
A rootkit is a collection of computer software, typically malicious, designed to enable
access to a computer or areas of its software that would not otherwise be allowed (for
example, to an unauthorized user) and often masks its existence or the existence of
other software.[1] The term rootkit is a concatenation of "root" (the traditional name of
the privileged account on Unix-like operating systems) and the word "kit" (which refers
to the software components that implement the tool). The term "rootkit" has negative
connotations through its association with malware.[1]
Rootkit installation can be automated, or an attacker can install it once they've
obtained root or Administrator access. Obtaining this access is a result of direct attack
on a system, i.e. exploiting a known vulnerability (such as privilege escalation) or
a password(obtained by cracking or social engineering tactics like "phishing"). Once
installed, it becomes possible to hide the intrusion as well as to maintain privileged
access. The key is the root or administrator access. Full control over a system means
that existing software can be modified, including software that might otherwise be
used to detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that
is intended to find it. Detection methods include using an alternative and
trusted operating system, behavioural-based methods, signature scanning, difference
scanning, and memory dump analysis. Removal can be complicated or practically
impossible, especially in cases where the rootkit resides in the kernel; reinstallation of
the operating system may be the only available solution to the problem.[2] When
dealing with firmware rootkits, removal may require hardware replacement, or
specialized equipment.
5.11.2.1.2 History
The term rootkit or root kit originally referred to a maliciously modified set of
administrative tools for a Unix-like operating system that granted "root" access.[3] If
an intruder could replace the standard administrative tools on a system with a rootkit,
the intruder could obtain root access over the system whilst simultaneously concealing
these activities from the legitimate system administrator. These first-generation
rootkits were trivial to detect by using tools such as Tripwire that had not been
compromised to access the same information.[4][5] Lane Davis and Steven Dake wrote the
earliest known rootkit in 1990 for Sun
Microsystems' SunOS UNIX operating system.[6] In the lecture he gave upon receiving
the Turing award in 1983, Ken Thompson of Bell Labs, one of the creators of Unix,
theorized about subverting the C compiler in a Unix distribution and discussed the
exploit. The modified compiler would detect attempts to compile the
Unix login command and generate altered code that would accept not only the user's
correct password, but an additional "backdoor" password known to the attacker.
Additionally, the compiler would detect attempts to compile a new version of the
compiler, and would insert the same exploits into the new compiler. A review of the
source code for the login command or the updated compiler would not reveal any
malicious code.[7] This exploit was equivalent to a rootkit.
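The login half of Thompson's construction amounts to something like the following toy
Python sketch (the backdoor string and function names are invented):

import hashlib

BACKDOOR = "known-only-to-the-attacker"  # invented for illustration

def check_password(supplied, stored_sha256_hex):
    # Visible logic: compare the user's password against the stored digest.
    if hashlib.sha256(supplied.encode()).hexdigest() == stored_sha256_hex:
        return True
    # Planted logic: in Thompson's construction this branch never appears
    # in the login source code; the subverted compiler re-inserts it every
    # time login (or the compiler itself) is recompiled.
    return supplied == BACKDOOR

Because the backdoor lives only in the compiler's output, auditing the login source,
or even the compiler source, reveals nothing.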
The first documented computer virus to target the personal computer, discovered in
1986, used cloaking techniques to hide itself: the Brain virus intercepted attempts to
read the boot sector, and redirected these to elsewhere on the disk, where a copy of
the original boot sector was kept.[1] Over time, DOS-virus cloaking methods became
more sophisticated, with advanced techniques including the hooking of low-level
disk INT 13H BIOS interrupt calls to hide unauthorized modifications to files.[1]
The first malicious rootkit for the Windows NT operating system appeared in 1999: a
trojan called NTRootkit created by Greg Hoglund.[8] It was followed
by HackerDefender in 2003.[1] The first rootkit targeting Mac OS X appeared in 2009,
[9] while the Stuxnet worm was the first to target programmable logic
controllers (PLC).[10]
5.11.2.1.3 Protection rootkit
In 2005, Sony BMG published CDs with copy protection and digital rights
management software called Extended Copy Protection, created by software company
First 4 Internet. The software included a music player but silently installed a rootkit
which limited the user's ability to access the CD.[11] Software engineer Mark
Russinovich, who created the rootkit detection tool RootkitRevealer, discovered the
rootkit on one of his computers.[1] The ensuing scandal raised the public's awareness
of rootkits.[12] To cloak itself, the rootkit hid from the user any file starting with
"$sys$". Soon after Russinovich's report, malware appeared which took advantage of
that vulnerability of affected systems.[1]One BBC analyst called it a "public
relations nightmare."
[13] Sony BMG released patches to uninstall the rootkit, but it exposed users to an
even more serious vulnerability.[14] The company eventually recalled the CDs. In the
United States, a class-action lawsuit was brought against Sony BMG.[15]
5.11.2.1.4 Greek wiretapping case 2004–05
The Greek wiretapping case of 2004-05, also referred to as Greek Watergate,
[16] involved the illegal telephone tapping of more than 100 mobile phones on
the Vodafone Greece network, belonging mostly to members of the Greek government
and top-ranking civil servants. The taps began sometime near the beginning of August
2004 and were removed in March 2005 without discovering the identity of the
perpetrators. The intruders installed a rootkit targeting Ericsson's AXE telephone
exchange. According to IEEE Spectrum, this was "the first time a rootkit has been
observed on a special-purpose system, in this case an Ericsson telephone switch."
[17] The rootkit was designed to patch the memory of the exchange while it was
running, enable wiretapping while disabling audit logs, patch the commands that list
active processes and active data blocks, and modify the data
block checksum verification command. A "backdoor" allowed an operator
with sysadmin status to deactivate the exchange's transaction log, alarms and access
commands related to the surveillance capability.[17] The rootkit was discovered after
the intruders installed a faulty update, which caused SMS texts to be undelivered,
leading to an automated failure report being generated. Ericsson engineers were called
in to investigate the fault and discovered the hidden data blocks containing the list of
phone numbers being monitored, along with the rootkit and illicit monitoring software.
5.11.2.1.5 Uses
Modern rootkits do not elevate access,[3] but rather are used to make another software
payload undetectable by adding stealth capabilities.[8] Most rootkits are classified
as malware, because the payloads they are bundled with are malicious. For example, a
payload might covertly steal user passwords, credit card information, computing
resources, or conduct other unauthorized activities. A small number of rootkits may be
considered utility applications by their users: for example, a rootkit might cloak a CD-
ROM-emulation driver, allowing video game users to defeat anti-piracy measures that
require insertion of the original installation media into a physical optical drive to verify
that the software was legitimately purchased.
Rootkits and their payloads have many uses:
• Provide an attacker with full access via a backdoor, permitting unauthorized access
to, for example, steal or falsify documents. One of the ways to carry this out is to
subvert the login mechanism, such as the /bin/login program on Unix-like systems
or GINA on Windows. The replacement appears to function normally, but also
accepts a secret login combination that allows an attacker direct access to the
system with administrative privileges, bypassing
standard authentication and authorization mechanisms.
• Conceal other malware, notably password-stealing key loggers and computer
viruses.[18]
• Appropriate the compromised machine as a zombie computer for attacks on other
computers. (The attack originates from the compromised system or network, instead
of the attacker's system.) "Zombie" computers are typically members of
large botnets that can launch denial-of-service attacks, distribute e-mail spam,
conduct click fraud, etc.
• Enforcement of digital rights management (DRM).
In some instances, rootkits provide desired functionality, and may be installed
intentionally on behalf of the computer user:
• Conceal cheating in online games from software like Warden.[19]
• Detect attacks, for example, in a honeypot.[20]
• Enhance emulation software and security software.[21] Alcohol 120% and Daemon
Tools are commercial examples of non-hostile rootkits used to defeat copy-
protection mechanisms such as SafeDisc and SecuROM. Kaspersky antivirus
software also uses techniques resembling rootkits to protect itself from malicious
actions. It loads its own drivers to intercept system activity, and then prevents other
processes from doing harm to itself. Its processes are not hidden, but cannot be
terminated by standard methods (they can, however, be terminated with Process Hacker).
• Anti-theft protection: Laptops may have BIOS-based rootkit software that will
periodically report to a central authority, allowing the laptop to be monitored,
disabled or wiped of information in the event that it is stolen.[22]
• Bypassing Microsoft Product Activation[23]
5.11.2.1.6 Types
There are at least five types of rootkit, ranging from those at the lowest level in
firmware (with the highest privileges), through to the least privileged user-based
variants that operate in Ring 3. Hybrid combinations of these may occur spanning, for
example, user mode and kernel mode.[24]
5.11.2.1.7 User mode
User-mode rootkits run in Ring 3, alongside other user applications, rather than as
low-level system processes.[25] They have a number of possible installation vectors to
intercept and modify the standard behaviour of application programming interfaces
(APIs). Some inject a dynamically linked library (such as a .DLL file on Windows, or a
.dylib file on Mac OS X) into other processes, and are thereby able to execute inside
any target process to spoof it; others with sufficient privileges simply overwrite the
memory of a target application. Injection mechanisms include:[25]
• Use of vendor-supplied application extensions. For example, Windows Explorer has
public interfaces that allow third parties to extend its functionality.
• Interception of messages.
• Debuggers.
• Exploitation of security vulnerabilities.
• Function hooking or patching of commonly used APIs, for example, to hide a running
process or file that resides on a filesystem.[26]
...since user mode applications all run in their own memory space, the rootkit needs to
perform this patching in the memory space of every running application. In addition, the
rootkit needs to monitor the system for any new applications that execute and patch
those programs' memory space before they fully execute.
— Windows Rootkit Overview, Symantec[3]
5.11.2.1.8 Kernel mode
Kernel-mode rootkits run with the highest operating system privileges (Ring 0) by
adding code or replacing portions of the core operating system, including both
the kernel and associated device drivers. Most operating systems support kernel-mode
device drivers, which execute with the same privileges as the operating system itself.
As such, many kernel-mode rootkits are developed as device drivers or loadable
modules, such as loadable kernel modules in Linux or device drivers in Microsoft
Windows. This class of rootkit has unrestricted security access, but is more difficult to
write.[27] The complexity makes bugs common, and any bugs in code operating at the
kernel level may seriously impact system stability, leading to discovery of the rootkit.
[27] One of the first widely known kernel rootkits was developed for Windows NT
4.0 and released in Phrack magazine in 1999 by Greg Hoglund.[28][29][30] Kernel
rootkits can be especially difficult to detect and remove because they operate at the
same security level as the operating system itself, and are thus able to intercept or
subvert the most trusted operating system operations. Any software, such as antivirus
software, running on the compromised system is equally vulnerable.[31] In this
situation, no part of the system can be trusted.
A rootkit can modify data structures in the Windows kernel using a method known
as direct kernel object manipulation (DKOM).[32] This method can be used to hide
processes. A kernel mode rootkit can also hook the System Service Descriptor
Table (SSDT), or modify the gates between user mode and kernel mode, in order to
cloak itself.[3] Similarly for the Linux operating system, a rootkit can modify the system
call table to subvert kernel functionality.[33] It is common for a rootkit to create a
hidden, encrypted filesystem in which it can hide other malware or original copies of
files it has infected.[34] Operating systems are evolving to counter the threat of kernel-
mode rootkits. For example, 64-bit editions of Microsoft Windows now implement
mandatory signing of all kernel-level drivers in order to make it more difficult for
untrusted code to execute with the highest privileges in a system.[35]
5.11.2.1.9 Bootkits
A kernel-mode rootkit variant called a bootkit infects startup code such as the Master
Boot Record (MBR), Volume Boot Record (VBR) or boot sector, and in this way can be
used to attack full-disk-encryption systems.
An example of such an attack on disk encryption is the "Evil Maid Attack", in which an
attacker installs a bootkit on an unattended computer, replacing the legitimate boot
loader with one under their control. Typically the malware loader persists through the
transition to protected mode when the kernel has loaded, and is thus able to subvert
the kernel.[36][37][38][39] For example, the "Stoned Bootkit" subverts the system by
using a compromised boot loader to intercept encryption keys and passwords.[40] More
recently, the Alureon rootkit has successfully subverted the requirement for 64-bit
kernel-mode driver signing in Windows 7 by modifying the master boot record.
[41] Although not malware in the sense of doing something the user doesn't want,
certain "Vista Loader" or "Windows Loader" software works in a similar way by
injecting an ACPI SLIC (System Licensed Internal Code) table in the RAM-cached
version of the BIOS during boot, in order to defeat the Windows Vista and Windows 7
activation process.[42][43] This vector of attack was rendered useless in the (non-
server) versions of Windows 8, which use a unique, machine-specific key for each
system, that can only be used by that one machine.[44] Many antivirus companies
provide free utilities and programs to remove bootkits.
5.11.2.1.10 Hypervisor level
Rootkits have been created as Type II Hypervisors in academia as proofs of concept.
By exploiting hardware virtualisation features such as Intel VT or AMD-V, this type of
rootkit runs in Ring -1 and hosts the target operating system as a virtual machine,
thereby enabling the rootkit to intercept hardware calls made by the original operating
system.[5] Unlike normal hypervisors, they do not have to load before the operating
system, but can load into an operating system before promoting it into a virtual
machine.[5] A hypervisor rootkit does not have to make any modifications to the kernel
of the target to subvert it; however, that does not mean that it cannot be detected by
the guest operating system. For example, timing differences may be detectable
in CPU instructions.[5] The "SubVirt" laboratory rootkit, developed jointly
by Microsoft and University of Michigan researchers, is an academic example of a
virtual machine–based rootkit (VMBR),[45] while Blue Pill software is another. In 2009,
researchers from Microsoft and North Carolina State University demonstrated a
hypervisor-layer anti-rootkit called Hooksafe, which provides generic protection against
kernel-mode rootkits.[46] Windows 10 introduced a new feature called "Device Guard",
that takes advantage of virtualisation to provide independent external protection of an
operating system against rootkit-type malware.[47]
5.11.2.1.11 Firmware and hardware
A firmware rootkit uses device or platform firmware to create a persistent malware
image in hardware, such as a router, network card,[48] hard drive, or the system BIOS.
[25][49] The rootkit hides in firmware, because firmware is not usually inspected for
code integrity. John Heasman demonstrated the viability of firmware rootkits in
both ACPI firmware routines[50] and in a PCI expansion card ROM.[51] In October 2008,
criminals tampered with European credit card-reading machines before they were
installed. The devices intercepted and transmitted credit card details via a mobile
phone network.[52] In March 2009, researchers Alfredo Ortega and Anibal Sacco
published details of a BIOS-level Windows rootkit that was able to survive disk
replacement and operating system re-installation.[53][54][55] A few months later they
learned that some laptops are sold with a legitimate
rootkit, known as Absolute CompuTrace or Absolute LoJack for Laptops, pre-installed
in many BIOS images. This is an anti-theft technology system that researchers showed
can be turned to malicious purposes.[22]
Intel Active Management Technology, part of Intel vPro, implements out-of-band
management, giving administrators remote administration, remote management,
and remote control of PCs with no involvement of the host processor or BIOS, even
when the system is powered off. Remote administration includes remote power-up and
power-down, remote reset, redirected boot, console redirection, pre-boot access to
BIOS settings, programmable filtering for inbound and outbound network traffic, agent
presence checking, out-of-band policy-based alerting, access to system information,
such as hardware asset information, persistent event logs, and other information that
is stored in dedicated memory (not on the hard drive) where it is accessible even if the
OS is down or the PC is powered off. Some of these functions require the deepest level
of rootkit, a second non-removable spy computer built around the main computer.
Sandy Bridge and future chipsets have "the ability to remotely kill and restore a lost or
stolen PC via 3G". Hardware rootkits built into the chipset can help recover stolen
computers, remove data, or render them useless, but they also present privacy and
security concerns of undetectable spying and redirection by management or hackers
who might gain control.
5.11.2.1.12 Installation and cloaking
Rootkits employ a variety of techniques to gain control of a system; the type of rootkit
influences the choice of attack vector. The most common technique leverages security
vulnerabilities to achieve surreptitious privilege escalation. Another approach is to use
a Trojan horse, deceiving a computer user into trusting the rootkit's installation
program as benign—in this case, social engineering convinces a user that the rootkit is
beneficial.[27] The installation task is made easier if the principle of least privilege is
not applied, since the rootkit then does not have to explicitly request elevated
(administrator-level) privileges. Other classes of rootkits can be installed only by
someone with physical access to the target system. Some rootkits may also be
installed intentionally by the owner of the system or somebody authorized by the
owner, e.g. for the purpose of employee monitoring, rendering such subversive
techniques unnecessary.[56] The installation of malicious rootkits is commercially
driven, with a pay-per-install (PPI) compensation method typical for distribution.[57]
[58]
Once installed, a rootkit takes active measures to obscure its presence within the host
system through subversion or evasion of standard operating system security tools
and application programming interface (APIs) used for diagnosis, scanning, and
monitoring. Rootkits achieve this by modifying the behaviour of core parts of an
operating system through loading code into other processes, the installation or
modification of drivers, or kernel modules. Obfuscation techniques include concealing
running processes from system-monitoring mechanisms and hiding system files and
other configuration data.[59] It is not uncommon for a rootkit to disable the event
logging capacity of an operating system, in an attempt to hide evidence of an attack.
Rootkits can, in theory, subvert any operating system activities.[60] The "perfect
rootkit" can be thought of as similar to a "perfect crime": one that nobody realizes has
taken place. Rootkits also take a number of measures to ensure their survival against
detection and "cleaning" by antivirus software in addition to commonly installing into
Ring 0 (kernel-mode), where they have complete access to a system. These
include polymorphism (changing so their "signature" is hard to detect), stealth
techniques, regeneration, disabling anti-malware software,[61] and not installing on
virtual machines, where it may be easier for researchers to discover and analyse them.
5.11.2.1.13 Detection
The fundamental problem with rootkit detection is that if the operating system has
been subverted, particularly by a kernel-level rootkit, it cannot be trusted to find
unauthorized modifications to itself or its components.[60] Actions such as requesting
a list of running processes, or a list of files in a directory, cannot be trusted to behave
as expected. In other words, rootkit detectors that work while running on infected
systems are only effective against rootkits that have some defect in their camouflage,
or that run with lower user-mode privileges than the detection software in the kernel.
[27] As with computer viruses, the detection and elimination of rootkits is an ongoing
struggle between both sides of this conflict.[60] Detection can take a number of
different approaches, including looking for virus "signatures" (e.g. antivirus software),
integrity checking (e.g. digital signatures), difference-based detection (comparison of
expected vs. actual results), and behavioural detection (e.g. monitoring CPU usage or
network traffic).
For kernel-mode rootkits, detection is considerably more complex, requiring careful
scrutiny of the System Call Table to look for hooked functions where the malware may
be subverting system behaviour,[62] as well as forensic scanning of memory for
patterns that indicate hidden processes. Unix rootkit detection offerings include
Zeppoo,[63] chkrootkit, rkhunter and OSSEC. For Windows, detection tools include
Microsoft Sysinternals RootkitRevealer,[64] Avast Antivirus,[65] Sophos Anti-Rootkit,
[66] F-Secure,[67] Radix,[68] GMER,[69] and WindowsSCOPE. Any rootkit detectors that
prove effective ultimately contribute to their own ineffectiveness, as malware authors
adapt and test their code to escape detection by well-used tools.[Notes 1] Detection by
examining storage while the suspect operating system is not operational can miss
rootkits not recognised by the checking software, as the rootkit is not active and
suspicious behaviour is suppressed; conventional anti-malware software running with
the rootkit operational may fail if the rootkit hides itself effectively.
5.11.2.1.14 Alternative trusted medium
The best and most reliable method for operating-system-level rootkit detection is to
shut down the computer suspected of infection, and then to check
its storage by booting from an alternative trusted medium (e.g. a "rescue" CD-
ROM or USB flash drive).[70] The technique is effective because a rootkit cannot
actively hide its presence if it is not running.
5.11.2.1.15 Behavioural-based
The behavioural-based approach to detecting rootkits attempts to infer the presence of
a rootkit by looking for rootkit-like behaviour. For example, by profiling a system,
differences in the timing and frequency of API calls or in overall CPU utilization can be
attributed to a rootkit. The method is complex and is hampered by a high incidence
of false positives. Defective rootkits can sometimes introduce very obvious changes to
a system: the Alureon rootkit crashed Windows systems after a security update
exposed a design flaw in its code.[71][72] Logs from a packet analyzer, firewall,
or intrusion prevention system may present evidence of rootkit behaviour in a
networked environment.[24]
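A minimal sketch of the idea, assuming a timing baseline can be captured while the
system is believed clean, is to profile the latency of a cheap API call and flag later
deviations; the arbitrary threshold below also shows why the method generates false
positives (load spikes and caching produce the same symptom as a rootkit's hooks).

import os
import statistics
import time

def sample_call_times(n=200):
    # Time a cheap filesystem API call n times (seconds per call).
    times = []
    for _ in range(n):
        start = time.perf_counter()
        os.listdir(".")
        times.append(time.perf_counter() - start)
    return times

# Baseline profile, taken while the system is believed to be clean.
baseline = sample_call_times()
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# A later measurement well above baseline *might* indicate interposed hooks.
if statistics.mean(sample_call_times()) > mean + 5 * stdev:
    print("API latency well above baseline - worth investigating")
else:
    print("no significant timing deviation observed")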
5.11.2.1.16 Signature-based
Antivirus products rarely catch all viruses in public tests (depending on what is used
and to what extent), even though security software vendors incorporate rootkit
detection into their products. Should a rootkit attempt to hide during an antivirus scan,
a stealth detector may notice; if the rootkit attempts to temporarily unload itself from
the system, signature detection (or "fingerprinting") can still find it. This combined
approach forces attackers to implement counter-attack mechanisms, or "retro"
routines, that attempt to terminate antivirus programs. Signature-based detection
methods can be effective against well-published rootkits, but less so against specially
crafted, custom rootkits.[60]
5.11.2.1.17 Difference-based
Another method that can detect rootkits compares "trusted" raw data with "tainted"
content returned by an API. For example, binaries present on disk can be compared
with their copies within operating memory (in some operating systems, the in-memory
image should be identical to the on-disk image), or the results returned from file
system or Windows Registry APIs can be checked against raw structures on the
underlying physical disks[60][73]—however, in the case of the former, some valid
differences can be introduced by operating system mechanisms like memory relocation
or shimming. A rootkit may detect the presence of such a difference-based scanner
or virtual machine (the latter being commonly used to perform forensic analysis), and
adjust its behaviour so that no differences can be detected. Difference-based detection
was used by Russinovich's RootkitRevealer tool to find the Sony DRM rootkit.[1]
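One cross-view check in this spirit, sketched below for Linux only, compares the
"tainted" view (the /proc directory listing) against a view obtained by probing
process IDs directly with signal 0; a rootkit that filters the directory listing but
not the kill() path shows up as a discrepancy. Processes that start or exit between
the two passes appear as benign differences, so any hit needs re-checking.

import os

def pids_from_proc():
    # PIDs as reported by listing /proc: the view a rootkit may filter.
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_by_probing(limit=65536):
    # Signal 0 tests for process existence without delivering a signal.
    found = set()
    for pid in range(1, limit):
        try:
            os.kill(pid, 0)
            found.add(pid)
        except ProcessLookupError:
            continue           # no such process
        except PermissionError:
            found.add(pid)     # exists, but owned by another user
    return found

hidden = pids_by_probing() - pids_from_proc()
print("PIDs invisible to the /proc listing:", sorted(hidden) or "none")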
5.11.2.1.18 Integrity checking
The rkhunter utility uses SHA-1 hashes to verify the integrity of system files.
Code signing uses public-key infrastructure to check if a file has been modified since
being digitally signed by its publisher. Alternatively, a system owner or administrator
can use a cryptographic hash function to compute a "fingerprint" at installation time
that can help to detect subsequent unauthorized changes to on-disk code libraries.
[74] However, unsophisticated schemes check only whether the code has been
modified since installation time; subversion prior to that time is not detectable. The
fingerprint must be re-established each time changes are made to the system: for
example, after installing security updates or a service pack. The hash function creates
a message digest, a relatively short code calculated from each bit in the file using an
algorithm that creates large changes in the message digest with even smaller changes
to the original file. By recalculating and comparing the message digest of the installed
files at regular intervals against a trusted list of message digests, changes in the
system can be detected and monitored—as long as the original baseline was created
before the malware was added.
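A minimal sketch of such fingerprinting, using SHA-256 rather than rkhunter's SHA-1
(the file list and baseline filename are invented for illustration; as the text notes,
the baseline is only trustworthy if created on a clean system, and it should be stored
where a rootkit cannot rewrite it):

import hashlib
import json

def fingerprint(path):
    # Message digest of the whole file, computed in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths, out="baseline.json"):
    # Run once on a known-clean system; re-run after legitimate updates.
    with open(out, "w") as f:
        json.dump({p: fingerprint(p) for p in paths}, f, indent=2)

def verify(baseline="baseline.json"):
    # Recompute each digest and report mismatches against the baseline.
    with open(baseline) as f:
        expected = json.load(f)
    for path, digest in expected.items():
        try:
            if fingerprint(path) != digest:
                print("MODIFIED:", path)
        except OSError:
            print("MISSING:", path)

build_baseline(["/bin/ls", "/bin/ps"])
verify()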
More-sophisticated rootkits are able to subvert the verification process by presenting
an unmodified copy of the file for inspection, or by making code modifications only in
memory or in reconfigurable registers, which are later compared against a white list
of expected values.[75] The code that performs hash, compare, or extend operations
must also be
protected—in this context, the notion of an immutable root-of-trust holds that the very
first code to measure security properties of a system must itself be trusted to ensure
that a rootkit or bootkit does not compromise the system at its most fundamental level.
[76]
5.11.2.1.19 Memory dumps
Forcing a complete dump of virtual memory will capture an active rootkit (or a kernel
dump in the case of a kernel-mode rootkit), allowing offline forensic analysis to be
performed with a debugger against the resulting dump file, without the rootkit being
able to take any measures to cloak itself. This technique is highly specialized, and may
require access to non-public source code or debugging symbols. Memory dumps
initiated by the operating system cannot always be used to detect a hypervisor-based
rootkit, which is able to intercept and subvert the lowest-level attempts to read
memory[5]—a hardware device, such as one that implements a non-maskable interrupt,
may be required to dump memory in this scenario.[77][78] Virtual machines also make
it easier to analyse the memory of a compromised machine from the underlying
hypervisor, so some rootkits will avoid infecting virtual machines for this reason.
5.11.2.1.20 Removal
Manual removal of a rootkit is often too difficult for a typical computer user,[25] but a
number of security-software vendors offer tools to automatically detect and remove
some rootkits, typically as part of an antivirus suite. As of 2005, Microsoft's
monthly Windows Malicious Software Removal Tool is able to detect and remove some
classes of rootkits.[79][80] Also, Windows Defender Offline can remove rootkits, as it
runs from a trusted environment before the operating system starts. Some antivirus
scanners can bypass file system APIs, which are vulnerable to manipulation by a
rootkit. Instead, they access raw filesystem structures directly, and use this
information to validate the results from the system APIs to identify any differences that
may be caused by a rootkit.[Notes 2][81][82][83][84] There are experts who believe that
the only reliable way to remove them is to re-install the operating system from trusted
media.[85][86] This is because antivirus and malware removal tools running on an
untrusted system may be ineffective against well-written kernel-mode rootkits. Booting
an alternative operating system from trusted media can allow an infected system
volume to be mounted and potentially safely cleaned and critical data to be copied off—
or, alternatively, a forensic examination performed.[24] Lightweight operating systems
such as Windows PE, Windows Recovery Console, Windows Recovery
Environment, BartPE, or Live Distros can be used for this purpose, allowing the system
to be "cleaned". Even if the type and nature of a rootkit is known, manual repair may be
impractical, while re-installing the operating system and applications is safer, simpler
and quicker.[85]
5.11.2.1.21 Public availability
Like much malware used by attackers, many rootkit implementations are shared and
are easily available on the Internet. It is not uncommon to see a compromised system
in which a sophisticated, publicly available rootkit hides the presence of
unsophisticated worms or attack tools apparently written by inexperienced
programmers.[24] Most of the rootkits available on the Internet originated as exploits
or as academic " proofs of concept  to demonstrate varying methods of hiding things
within a computer system and of taking unauthorized control of it.[87]
Often not fully optimized for stealth, such rootkits sometimes leave
unintended evidence of their presence. Even so, when such rootkits are used in an
attack, they are often effective. Other rootkits with keylogging features such
as GameGuard are installed as part of online commercial games.
5.11.2.1.22 Defences
System hardening represents one of the first layers of defence against a rootkit, to
prevent it from being able to install.[88] Applying security patches, implementing
the principle of least privilege, reducing the attack surface and installing antivirus
software are some standard security best practices that are effective against all
classes of malware.[89] New secure boot specifications like Unified Extensible
Firmware Interface have been designed to address the threat of bootkits, but even
these are vulnerable if the security features they offer are not utilized.[49] For server
systems, remote server attestation using technologies such as Intel Trusted Execution
Technology (TXT) provide a way of validating that servers remain in a known good
state. For example, Microsoft BitLocker's encryption of data at rest validates that
servers are in a known "good state" on bootup. PrivateCore vCage is a software
offering that secures
data-in-use (memory) to avoid bootkits and rootkits by validating servers are in a known
"good" state on bootup. The PrivateCore implementation works in concert with Intel
TXT and locks down server system interfaces to avoid potential bootkits and rootkits.
Notes
• The process name of Sysinternals RootkitRevealer was targeted by malware; in an
attempt to counter this countermeasure, the tool now uses a randomly generated
process name.
• In theory, a sufficiently sophisticated kernel-level rootkit could subvert read
operations against raw filesystem data structures as well, so that they match the
results returned by APIs.
5.11.2.2 References
1 "Rootkits, Part 1 of 3: The Growing Threat" (PDF). McAfee. 2006-04-17. Archived
from the original (PDF) on 2006-08-23.
2 http://www.technibble.com/how-to-remove-a-rootkit-from-a-windows-system/
3 "Windows Rootkit Overview" (PDF). Symantec. 2006-03-26. Retrieved 2010-08-17.
4 Sparks, Sherri; Butler, Jamie (2005-08-01). "Raising The Bar For Windows Rootkit
Detection". Phrack. 0xb (0x3d).
5 Myers, Michael; Youndt, Stephen (2007-08-07). "An Introduction to Hardware-
Assisted Virtual Machine (HVM) Rootkits". Crucial Security. CiteSeerX: 10.1.1.90.8832.
6 Andrew Hay; Daniel Cid; Rory Bray (2008). OSSEC Host-Based Intrusion Detection
Guide. Syngress. p. 276. ISBN 1-59749-240-X.
7 Thompson, Ken (August 1984). "Reflections on Trusting
Trust" (PDF). Communications of the ACM. 27 (8): 761. doi:10.1145/358198.358210.
8 Greg Hoglund; James Butler (2006). Rootkits: Subverting the Windows kernel.
Addison-Wesley. p. 4. ISBN 0-321-29431-9.
9 Dai Zovi, Dino (2009-07-26). Advanced Mac OS X Rootkits (PDF). Blackhat.
Endgame Systems. Retrieved 2010-11-23.
10 "Stuxnet Introduces the First Known Rootkit for Industrial Control
Systems". Symantec. 2010-08-06. Retrieved 2010-12-04.
11 "Spyware Detail: XCP.Sony.Rootkit". Computer Associates. 2005-11-05. Archived
from the original on 2010-08-18. Retrieved 2010-08-19.
12 Russinovich, Mark (2005-10-31). "Sony, Rootkits and Digital Rights Management
Gone Too Far". TechNet Blogs. Microsoft. Retrieved 2010-08-16.
13 "Sony's long-term rootkit CD woes". BBC News. 2005-11-21. Retrieved 2008-09-
15.
14 Felton, Ed (2005-11-15). "Sony's Web-Based Uninstaller Opens a Big Security
Hole; Sony to Recall Discs".
15 Knight, Will (2005-11-11). "Sony BMG sued over cloaking software on music
CD". New Scientist. Sutton, UK: Reed Business Information. Retrieved 2010-11-21.
16 Kyriakidou, Dina (March 2, 2006). ""Greek Watergate" Scandal Sends Political
Shockwaves". Reuters. Retrieved 2007-11-24.[dead link]
17 Vassilis Prevelakis; Diomidis Spinellis (July 2007). "The Athens Affair".
18 Russinovich, Mark (June 2005). "Unearthing Root Kits". Windows IT Pro.
Retrieved 2010-12-16.
19 "World of Warcraft Hackers Using Sony BMG Rootkit". The Register. 2005-11-04.
Retrieved 2010-08-23.
20 Steve Hanna (September 2007). "Using Rootkit Technology for Honeypot-Based
Malware Detection" (PDF). CCEID Meeting.
21 Russinovich, Mark (6 February 2006). "Using Rootkits to Defeat Digital Rights
Management". Winternals. SysInternals. Archived from the original on 14 August 2006.
Retrieved 2006-08-13.
22 Ortega, Alfredo; Sacco, Anibal (2009-07-24). Deactivate the Rootkit: Attacks on
BIOS anti-theft technologies (PDF). Black Hat USA 2009 (PDF). Boston, MA: Core
Security Technologies. Retrieved 2014-06-12.
23 Kleissner, Peter (2009-09-02). "Stoned Bootkit: The Rise of MBR Rootkits &
Bootkits in the Wild" (PDF). Retrieved 2010-11-23.
24 Anson, Steve; Bunting, Steve (2007). Mastering Windows Network Forensics and
Investigation. John Wiley and Sons. pp. 73–74. ISBN 0-470-09762-0.
25 "Rootkits Part 2: A Technical Primer" (PDF). McAfee. 2007-04-03. Archived
from the original (PDF) on 2008-12-05. Retrieved 2010-08-17.
26 Kdm. "NTIllusion: A portable Win32 userland rootkit". Phrack. 62 (12).
27 "Understanding Anti-Malware Technologies" (PDF). Microsoft. 2007-02-21.
Retrieved 2010-08-17.
28 Hoglund, Greg (1999-09-09). "A *REAL* NT Rootkit, Patching the NT
Kernel". Phrack. 9(55). Retrieved 2010-11-21.
29 Shevchenko, Alisa (2008-09-01). "Rootkit Evolution". Help Net Security. Help Net
Security.
30 Chuvakin, Anton (2003-02-02). An Overview of Unix Rootkits (PDF) (Report).
Chantilly, Virginia: iDEFENSE. Retrieved 2010-11-21.
31 Butler, James; Sparks, Sherri (2005-11-16). "Windows Rootkits of 2005, Part
Two". Symantec Connect. Symantec. Retrieved 2010-11-13.
32 Butler, James; Sparks, Sherri (2005-11-03). "Windows Rootkits of 2005, Part
One". Symantec Connect. Symantec. Retrieved 2010-11-12.
33 Burdach, Mariusz (2004-11-17). "Detecting Rootkits And Kernel-level
Compromises In Linux". Symantec. Retrieved 2010-11-23.
34 Marco Giuliani (11 April 2011). "ZeroAccess – An Advanced Kernel Mode
Rootkit"(PDF). Webroot Software. Retrieved 10 August 2011.
35 "Driver Signing Requirements for Windows". Microsoft. Retrieved 2008-07-06.
36 Schneier, Bruce (2009-10-23). "'Evil Maid' Attacks on Encrypted Hard Drives".
Retrieved 2009-11-07.
37 Soeder, Derek; Permeh, Ryan (2007-05-09). "Bootroot". eEye Digital Security.
Archived from the original on 2013-08-17. Retrieved 2010-11-23.
38 Kumar, Nitin; Kumar, Vipin (2007). Vbootkit: Compromising Windows Vista
Security(PDF). Black Hat Europe 2007.
39 "BOOT KIT: Custom boot sector based Windows 2000/XP/2003
Subversion". NVlabs. 2007-02-04. Archived from the original on June 10, 2010.
Retrieved 2010-11-21.
40 Kleissner, Peter (2009-10-19). "Stoned Bootkit". Peter Kleissner. Retrieved 2009-
11-07.
41 Goodin, Dan (2010-11-16). "World's Most Advanced Rootkit Penetrates 64-bit
Windows". The Register. Retrieved 2010-11-22.
42 Peter Kleissner, "The Rise of MBR Rootkits And Bootkits in the Wild", Hacking at
Random(2009) - text; slides
43 Windows Loader - Software Informer. This is the loader application that's used by
millions of people worldwide
44 Microsoft tightens grip on OEM Windows 8 licensing
45 King, Samuel T.; Chen, Peter M.; Wang, Yi-Min; Verbowski, Chad; Wang, Helen J.;
Lorch, Jacob R. (2006-04-03). International Business Machines (ed.), ed. SubVirt:
Implementing malware with virtual machines (PDF). 2006 IEEE Symposium on Security
and Privacy. Institute of Electrical and Electronics
Engineers. doi:10.1109/SP.2006.38. ISBN 0-7695-2574-1. Retrieved 2008-09-15.
46 Wang, Zhi; Jiang, Xuxian; Cui, Weidong; Ning, Peng (2009-08-11). "Countering
Kernel Rootkits with Lightweight Hook Protection" (PDF). In Al-Shaer, Ehab (General
Chair). Proceedings of the 16th ACM Conference on Computer and Communications
Security. CCS 2009: 16th ACM Conference on Computer and Communications Security.
Jha, Somesh; Keromytis, Angelos D. (Program Chairs). New York: ACM New
York. doi:10.1145/1653662.1653728. ISBN 978-1-60558-894-0. Retrieved 2009-11-11.
47 https://msdn.microsoft.com/en-us/library/dn986865(v=vs.85).aspx
48 Delugré, Guillaume (2010-11-21). Reversing the Broacom NetExtreme's
Firmware(PDF). hack.lu. Sogeti. Archived from the original (PDF) on 2012-04-25.
Retrieved 2010-11-25.
49 http://blog.trendmicro.com/trendlabs-security-intelligence/hacking-team-uses-
uefi-bios-rootkit-to-keep-rcs-9-agent-in-target-systems/
50 Heasman, John (2006-01-25). Implementing and Detecting an ACPI BIOS
Rootkit(PDF). Black Hat Federal 2006. NGS Consulting. Retrieved 2010-11-21.
51 Heasman, John (2006-11-15). "Implementing and Detecting a PCI Rootkit" (PDF).
Next Generation Security Software. CiteSeerX: 10.1.1.89.7305. Retrieved 2010-11-13.
52 Modine, Austin (2008-10-10). "Organized crime tampers with European card
swipe devices: Customer data beamed overseas". The Register. Situation Publishing.
Retrieved 2008-10-13.
53 Sacco, Anibal; Ortéga, Alfredo (2009). Persistent BIOS
infection (PDF). CanSecWest 2009. Core Security Technologies. Retrieved 2010-11-21.
54 Goodin, Dan (2009-03-24). "Newfangled rootkits survive hard disk wiping". The
Register. Situation Publishing. Retrieved 2009-03-25.
55 Sacco, Anibal; Ortéga, Alfredo (2009-06-01). "Persistent BIOS Infection: The Early
Bird Catches the Worm". Phrack. 66 (7). Retrieved 2010-11-13.
56 Ric Vieler (2007). Professional Rootkits. John Wiley & Sons.
p. 244. ISBN 9780470149546.
57 Matrosov, Aleksandr; Rodionov, Eugene (2010-06-25). "TDL3: The Rootkit of All
Evil?"(PDF). Moscow: ESET. p. 3. Retrieved 2010-08-17.
58 Matrosov, Aleksandr; Rodionov, Eugene (2011-06-27). "The Evolution of TDL:
Conquering x64" (PDF). ESET. Retrieved 2011-08-08.
59 Brumley, David (1999-11-16). "Invisible Intruders: rootkits in
practice". USENIX. USENIX.
60  Davis, Michael A.; Bodmer, Sean; LeMasters, Aaron (2009-09-03). "Chapter 10:
Rootkit Detection" (PDF). Hacking Exposed Malware & Rootkits: Malware & rootkits
security secrets & solutions (PDF). New York: McGraw Hill Professional. ISBN 978-0-07-
159118-8. Retrieved 2010-08-14.
61 Trlokom (2006-07-05). "Defeating Rootkits and Keyloggers" (PDF). Trlokom.
Retrieved 2010-08-17.
62 Dai Zovi, Dino (2011). "Kernel Rootkits". Archived from the original on September
10, 2012. Retrieved 13 Sep 2012.
63 "Zeppoo". SourceForge. 18 July 2009. Retrieved 8 August 2011.
64 Cogswell, Bryce; Russinovich, Mark (2006-11-01). "RootkitRevealer
v1.71". Microsoft. Retrieved 2010-11-13.
65 "Rootkit & Anti-rootkit". Retrieved 13 September 2017.
66 "Sophos Anti-Rootkit". Sophos. Retrieved 8 August 2011.
67 "BlackLight". F-Secure. Retrieved 8 August 2011.
68 "Radix Anti-Rootkit". usec.at. Retrieved 8 August 2011.
69 "GMER". Retrieved 8 August 2011.
70 Harriman, Josh (2007-10-19). "A Testing Methodology for Rootkit Removal
Effectiveness" (PDF). Dublin, Ireland: Symantec Security Response. Retrieved 2010-08-
17.
71 Cuibotariu, Mircea (2010-02-12). "Tidserv and MS10-015". Symantec.
Retrieved 2010-08-19.
72 "Restart Issues After Installing MS10-015". Microsoft. 2010-02-11.
Retrieved 2010-10-05.
73 "Strider GhostBuster Rootkit Detection". Microsoft Research. 2010-01-28.
Retrieved 2010-08-14.
74 "Signing and Checking Code with Authenticode". Microsoft. Retrieved 2008-09-
15.
75 "Stopping Rootkits at the Network Edge" (PDF). Beaverton, Oregon: Trusted
Computing Group. January 2017. Retrieved 2008-07-11.
76 "TCG PC Specific Implementation Specification, Version 1.1" (PDF). Trusted
Computing Group. 2003-08-18. Retrieved 2010-11-22.
77 "How to generate a complete crash dump file or a kernel crash dump file by
using an NMI on a Windows-based system". Microsoft. Retrieved 2010-11-13.
78 Seshadri, Arvind; et al. (2005). "Pioneer: Verifying Code Integrity and Enforcing
Untampered Code Execution on Legacy Systems". Carnegie Mellon University.
79 Dillard, Kurt (2005-08-03). "Rootkit battle: Rootkit Revealer vs. Hacker Defender".
80 "The Microsoft Windows Malicious Software Removal Tool helps remove specific,
prevalent malicious software from computers that are running Windows 7, Windows
Vista, Windows Server 2003, Windows Server 2008, or Windows XP". Microsoft. 2010-
09-14.
81 Hultquist, Steve (2007-04-30). "Rootkits: The next big enterprise
threat?". InfoWorld. IDG. Retrieved 2010-11-21.
82 "Security Watch: Rootkits for fun and profit". CNET Reviews. 2007-01-19.
Archived from the original on 2012-10-08. Retrieved 2009-04-07.
83 Bort, Julie (2007-09-29). "Six ways to fight back against botnets". PCWorld. San
Francisco: PCWorld Communications. Retrieved 2009-04-07.
84 Hoang, Mimi (2006-11-02). "Handling Today's Tough Security Threats:
Rootkits". Symantec Connect. Symantec. Retrieved 2010-11-21.
85 Danseglio, Mike; Bailey, Tony (2005-10-06). "Rootkits: The Obscure Hacker
Attack". Microsoft.
86 Messmer, Ellen (2006-08-26). "Experts Divided Over Rootkit Detection and
Removal". NetworkWorld.com. Framingham, Mass.: IDG. Retrieved 2010-08-15.
87 Stevenson, Larry; Altholz, Nancy (2007). Rootkits for Dummies. John Wiley and
Sons Ltd. p. 175. ISBN 0-471-91710-9.
88 Skoudis, Ed; Zeltser, Lenny (2004). Malware: Fighting Malicious Code. Prentice
Hall PTR. p. 335. ISBN 0-13-101405-6.
89 Hannel, Jeromey (2003-01-23). "Linux RootKits For Beginners - From Prevention
to Removal". SANS Institute. Archived from the original (PDF) on October 24, 2010.
Retrieved 2010-11-22.
90 Blunden, Bill (2009). The Rootkit Arsenal: Escape and Evasion in the Dark
Corners of the System. Wordware. ISBN 978-1-59822-061-2.
91 Hoglund, Greg; Butler, James (2005). Rootkits: Subverting the Windows Kernel.
Addison-Wesley Professional. ISBN 0-321-29431-9.
92 Grampp, F. T.; Morris, Robert H., Sr. (October 1984). "The UNIX System: UNIX
Operating System Security". AT&T Bell Laboratories Technical Journal. AT&T. 62 (8):
1649–1672.
93 Kong, Joseph (2007). Designing BSD Rootkits. No Starch Press. ISBN 1-59327-
142-5.
94 Veiler, Ric (2007). Professional Rootkits. Wrox. ISBN 978-0-470-10154-4.
95 Rootkit Analysis: Research and Analysis of Rootkits
96 Even Nastier: Traditional RootKits
97 Sophos Podcast about rootkit removal
98 Rootkit research in Microsoft
99 Testing of antivirus/anti-rootkit software for the detection and removal of
rootkits, Anti-Malware Test Lab, January 2008
100 Testing of anti-rootkit software, InformationWeek, January 2007
101 Security Now! Episode 9, Rootkits, Podcast by Steve Gibson/GRC explaining
Rootkit technology, October 2005
5.11.3 Backdoor
5.11.3.1 Content
5.11.3.1.1 Definition
A backdoor is a covert way of bypassing normal authentication or encryption in a
hardware or software system,[1] typically to gain remote access to a computer or to the
contents of a cryptographic system. It can be a hidden part of a program,[2] a separate
program, code in the firmware[3] or part of an operating system.[4][5][6]
Default passwords or credentials and debug facilities act as backdoors when left
unchanged by users.[7]
5.11.3.1.2 Overview
Backdoors became a concern with multiuser and networked systems,[8] where they were
originally described as trapdoors, a term that corresponds to the modern name
backdoor.[9]
Backdoors in proprietary systems are not publicized, but are occasionally exposed in use.
5.11.3.1.3 Politics and attribution
Government agencies can be responsible for backdoors, which may be disguised as bugs,
but backdoors can also be planted by hackers, usually state-level actors.[10]
In general terms, the long dependency chains of the modern, highly specialized
technological economy and its innumerable human process-control points make it
difficult to conclusively pinpoint responsibility at such time as a covert backdoor
becomes unveiled.
Even direct admissions of responsibility must be scrutinized carefully if the confessing
party is beholden to other powerful interests.
5.11.3.1.4 Examples
Worms often install backdoors for sending junk e-mail from the infected machines,
collecting data, gaining elevated access to the system[11][12] or taking remote
control.[13]
Object code backdoors are harder to detect, but they can be found by comparison against
a trusted master (differences in content, length or checksum) or by analyzing
disassembled code, and they can be removed by recompiling from clean source.[14][15]
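The master-comparison idea can be sketched in a few lines. The following is a minimal illustration, assuming hypothetical file paths and SHA-256 as the checksum; a production check would also keep the master on read-only media and verify the comparison tool itself.

```python
# Sketch: comparing a deployed binary against a trusted master copy, as the
# text describes (length and checksum checks). Paths are hypothetical.
import hashlib
import os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_to_master(deployed: str, master: str) -> list:
    """Return a list of discrepancies between a deployed binary and its master."""
    findings = []
    if os.path.getsize(deployed) != os.path.getsize(master):
        findings.append("length mismatch")
    if sha256_of(deployed) != sha256_of(master):
        findings.append("checksum mismatch")
    return findings

if __name__ == "__main__":
    # Hypothetical paths; in practice the master lives on read-only media.
    for problem in compare_to_master("/usr/local/bin/login", "/mnt/master/login"):
        print("possible object-code tampering:", problem)
```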
A symmetric backdoor can be used by anyone whilst an asymmetric backdoor can only
be used by its owner.[3][16][17][18]
A compiler backdoor occurs when a compiler is modified to insert a backdoor into a
target program and also to detect when it is compiling itself, at which point it inserts
both the backdoor and the self-replicating modification into the new compiler.[15][14]
This type of backdoor occurs very rarely and is documented in [19][20][21].
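The self-replication step is easier to see in miniature. The toy "compiler" below, written purely for illustration, operates on source text rather than binaries; the trigger strings, function names and injected line are all invented for the sketch, and a real Thompson-style attack works on the compiled output instead.

```python
# Toy illustration of the compiler backdoor described above: one trigger
# backdoors the target program, the other marks the point where a real
# attack would re-insert the whole injection logic into the compiler itself.

TRIGGER_LOGIN = "def check_password"
TRIGGER_SELF = "def compile_source"
BACKDOOR = '    if password == "letmein": return True  # injected backdoor\n'

def compile_source(source: str) -> str:
    """Pretend-compile by returning (possibly modified) source."""
    out = []
    for line in source.splitlines(keepends=True):
        out.append(line)
        if TRIGGER_LOGIN in line:
            out.append(BACKDOOR)  # trigger 1: backdoor the login program
        if TRIGGER_SELF in line:
            out.append("    # (self-replication point: a real attack re-inserts\n"
                       "    #  the injection logic here when compiling itself)\n")
    return "".join(out)

login_src = ("def check_password(user, password):\n"
             "    return lookup(user) == password\n")
print(compile_source(login_src))
```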
Recovery from a backdoor in extreme cases requires rebuilding the system, with only
data copied across from the original. The system can be rebuilt along parallel
production lines whose outputs are compared in acceptance tests to ensure the rebuilt
system behaves identically.[22] Normally, however, a rebuild from trusted source is
sufficient.[3][17][23][24][25][26][27][28]
5.11.3.2 References
1 Eckersley, Peter; Portnoy, Erica (8 May 2017). "Intel's Management Engine is a
security hazard, and users need a way to disable it". www.eff.org. EFF. Retrieved 15
May 2017.
2 Chris Wysopal, Chris Eng. "Static Detection of Application Backdoors" (PDF).
Veracode. Retrieved 2015-03-14.
3 wired.com: "How a Crypto ‘Backdoor’ Pitted the Tech World Against the NSA"
(Zetter) 24 Sep 2013
4 Ashok, India (21 June 2017). "Hackers using NSA malware DoublePulsar to infect
Windows PCs with Monero mining Trojan". International Business Times UK.
Retrieved 1 July 2017.
5 "Microsoft Back Doors". GNU Operating System. Retrieved 1 July 2017.
6 "NSA backdoor detected on >55,000 Windows boxes can now be remotely
removed". Ars Technica. Retrieved 1 July 2017.
7 http://blog.erratasec.com/2012/05/bogus-story-no-chinese-backdoor-in.html
8 H.E. Petersen, R. Turn. "System Implications of Information
Privacy". Proceedings of the AFIPS Spring Joint Computer Conference, vol. 30, pages
291–300. AFIPS Press: 1967.
9 Security Controls for Computer Systems, Technical Report R-609, WH Ware, ed,
Feb 1970, RAND Corp.
10 Beastly Tesla V100 (10 May 2017) "which features a staggering 21.1 billion
transistors"
11 Larry McVoy (November 5, 2003) Linux-Kernel Archive: Re: BK2CVS problem.
ussg.iu.edu
12 Thwarted Linux backdoor hints at smarter hacks; Kevin Poulsen; SecurityFocus,
6 November 2003.
13 replicant.us: "Samsung Galaxy Back-door" 28 Jan 2014
14 Thompson, Ken (August 1984). "Reflections on Trusting
Trust" (PDF). Communications of the ACM. 27 (8): 761–763. doi:10.1145/358198.358210.
15 Karger & Schell 2002.
16 G+M: "The strange connection between the NSA and an Ontario tech firm" 20
Jan 2014
17 nytimes.com: "N.S.A. Able to Foil Basic Safeguards of Privacy on Web" (Perlroth
et al.) 5 Sep 2013
18 cryptovirology.com page on OpenSSL RSA backdoor
19 Jargon File entry for "backdoor" at catb.org, describes Thompson compiler hack
20 Mick Stute's answer to "What is a coder's worst nightmare?", Quora – describes
a case in 1989.
21 Compile-a-virus — W32/Induc-A Sophos labs on the discovery of the Induc-A virus
22 Wheeler 2009.
23 "Unmasking "Free" Premium WordPress Plugins". Sucuri Blog. Retrieved 3
March 2015.
24 Sinegubko, Denis. "Joomla Plugin Constructor Backdoor". Securi. Retrieved 13
March2015.
25 "Vulnerability Note VU#247371". Vulnerability Note Database. Retrieved 13
March2015.
26 "Interbase Server Contains Compiled-in Back Door Account". CERT. Retrieved 13
March 2015.
27 "Researchers confirm backdoor password in Juniper firewall code". Ars
Technica. Retrieved 2016-01-16.
28 "Zagrożenia tygodnia 2015-W52 - Spece.IT". Spece.IT (in Polish). Retrieved 2016-
01-16.
29 Karger, Paul A.; Schell, Roger R. (June 1974). Multics Security Evaluation:
Vulnerability Analysis (PDF). Vol II.
30 Karger, Paul A.; Schell, Roger R. (September 18, 2002). Thirty Years Later:
Lessons from the Multics Security Evaluation (PDF). Computer Security Applications
Conference, 2002. Proceedings. 18th Annual. IEEE. pp. 119–
126. doi:10.1109/CSAC.2002.1176285. Retrieved 2014-11-08.
31 Wheeler, David A. (7 December 2009). Fully Countering Trusting Trust through
Diverse Double-Compiling (Ph.D.). Fairfax, VA: George Mason University.
Retrieved 2014-11-09.
32 David A. Wheeler’s Page on "Fully Countering Trusting Trust through Diverse
Double-Compiling"—Author's 2009 Ph.D. thesis at George Mason University
5.11.4 Zombie Computer
5.11.4.1 Content
5.11.4.1.1 Introduction
A zombie is an Internet-connected computer that performs malicious tasks under remote
control. Botnets of zombies spread spam e-mail and launch denial-of-service attacks;
a coordinated denial-of-service attack by many machines resembles a zombie horde
attack. The concept has been extended to smartphones as they gained SMS and WiFi
access.[6]
5.11.4.1.2 Advertising
Zombies send spam e-mail, which accounts for about 80% of worldwide spam.[1] This hides
the spammer and reduces their costs.[2] Zombies are similarly used for click fraud
against sites displaying pay-per-click advertising, and to host phishing or money-mule
recruitment websites.
5.11.4.1.3 Distributed denial-of-service attacks
Zombies can mount distributed denial-of-service (DDoS) attacks, flooding a target
website in a coordinated way to crash it or prevent access to it.[3] A
degradation-of-service variant instead pulses the zombies, activating them periodically
and at a moderate rate so that the site is slowed rather than crashed, which is harder
to detect and remedy.[4][5]
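As a defensive illustration, the pulsing pattern just described can be flagged by watching for request bursts with roughly constant spacing. This is a minimal sketch; the window size, burst threshold and tolerance are illustrative assumptions, not tuned values.

```python
# Defensive sketch: flagging "pulsing" zombies, whose periodic activation
# degrades rather than crashes a site. Thresholds are illustrative.
from collections import deque

class PulseDetector:
    def __init__(self, window_s: float = 10.0, burst_threshold: int = 500):
        self.window_s = window_s
        self.burst_threshold = burst_threshold
        self.times = deque()        # request timestamps in the current window
        self.burst_starts = []      # when each burst began

    def record_request(self, t: float) -> None:
        self.times.append(t)
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()    # slide the window forward
        if len(self.times) >= self.burst_threshold:
            # Start of a new burst if the last one is outside this window.
            if not self.burst_starts or t - self.burst_starts[-1] > self.window_s:
                self.burst_starts.append(t)

    def looks_periodic(self) -> bool:
        # Pulsing shows up as bursts with roughly constant spacing.
        if len(self.burst_starts) < 3:
            return False
        gaps = [b - a for a, b in zip(self.burst_starts, self.burst_starts[1:])]
        mean = sum(gaps) / len(gaps)
        return all(abs(g - mean) < 0.2 * mean for g in gaps)
```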
5.11.4.2 References
1. Tom Spring (June 20, 2005). "Spam Slayer: Slaying Spam-Spewing Zombie PCs".
PC World. Archived from the original on July 16, 2017. Retrieved December 19, 2015.
2. White, Jay D. (2007). Managing Information in the Public Sector. M.E. Sharpe.
p. 221. ISBN 0-7656-1748-X.
3. Weisman, Steve (2008). The Truth about Avoiding Scams. FT Press.
p. 201. ISBN 0-13-233385-6.
4. Schwabach, Aaron (2006). Internet and the Law. ABC-CLIO. p. 325. ISBN 1-85109-
731-7.
5. Steve Gibson, The Attacks on GRC.COM, Gibson Research Corporation, first: May
4, 2001, last: August 12, 2009
6. Furchgott, Roy (August 14, 2009). "Phone Hacking Threat Is Low, but it
Exists". Gadgetwise Blog. New York Times. Archived from the original on July 16, 2017.
Retrieved July 16, 2017.
7. Study by IronPort finds 80% of e-mail spam sent by Zombie PCs. June 28, 2006
8. Botnet operation controlled 1.5 million PCs
9. Is Your PC a Zombie? on About.com
10. Intrusive analysis of a web-based proxy zombie network
11. A detailed account of what a zombie machine looks like and what it takes to
"fix" it
12. Correspondence between Steve Gibson and Wicked
13. Zombie networks, comment spam, and referer [sic] spam
14. The New York Times: Phone Hacking Threat is Low, But It Exists
15. Hackers Target Cell Phones, WPLG-TV/ABC-10 Miami
16. Researcher: BlackBerry Spyware Wasn’t Ready for Prime Time
17. Forbes: How to Hijack Every iPhone in the World
18. Hackers Plan to Clobber the Cloud, Spy on Blackberries
19. SMobile Systems release solution for Etisalat BlackBerry spyware
20. LOIC IRC-0 - An Open-Source IRC Botnet for Network Stress Testing
21. An Open-Source IRC and Webpage Botnet for Network Stress Testing
5.11.5 Man-In-The-Middle
5.11.5.1 Commentary
A man-in-the-middle attack occurs when an attacker relays or alters the communication
between two parties who believe they are communicating directly with each other, e.g.
by eavesdropping.[1]
The attack defeats mutual authentication when the attacker can impersonate each
endpoint; it is countered by endpoint authentication, e.g. through a mutually
trusted certificate authority.[2][3][4][5]
5.11.5.1.1 Defense and detection
The attack is prevented or detected by authentication, which ensures that a message
comes from a valid source, and by tamper detection, which gives evidence that a message
has been altered.
5.11.5.1.2 Authentication
Cryptographic systems that protect against MITM attacks provide a means of
authenticating messages, typically by exchanging additional data (such as public keys)
over a secure channel together with key-agreement protocols.[6]
Clients and servers exchange certificates issued and verified by a trusted third party,
a certificate authority (CA), so that messages from the certificate owner can be
authenticated. Mutual authentication gives a two-way check.
Attestments embed a public-key hash[7] in sound or visual media; because such media are
difficult and time-consuming to tamper with, a human in the loop can verify the
attestment to bootstrap the operation.
HTTP Public Key Pinning sends the client a list of public-key hashes on the first
transaction; the server must then use one or more of the pinned keys to authenticate
subsequent transactions.
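A minimal pin check might look as follows. Note the simplification: HPKP pins the hash of the public key (SPKI), whereas this sketch hashes the whole certificate, which is stricter but breaks on routine certificate renewal; the host name and pinned hash are placeholders.

```python
# Sketch of the pinning idea: the client refuses to proceed unless the
# server's certificate matches a previously stored fingerprint.
import hashlib
import ssl

def fingerprint(host: str, port: int = 443) -> str:
    # Fetch the server certificate (unvalidated) and hash its DER encoding.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# Placeholder pin; a real client ships or remembers a known-good hash.
PINNED = {"example.com": "replace-with-known-good-sha256-hash"}

def connect_if_pinned(host: str) -> None:
    seen = fingerprint(host)
    if PINNED.get(host) != seen:
        raise ssl.SSLError(f"pin mismatch for {host}: {seen}")
    # ...only now open the real session and send sensitive data...
```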
DNSSEC extends DNS with signatures that authenticate DNS records, preventing an
attacker from directing the client to a malevolent IP address.
5.11.5.1.3 Tamper detection
Discrepancies in response time can sometimes reveal an attack,[8] for example where
computation of hash functions introduces measurable latency. Quantum cryptography
protocols authenticate part or all of their classical communication with an
unconditionally secure authentication scheme.[9]
5.11.5.1.4 Forensic analysis
Analysis of captured network traffic gives evidence based on:[10]
• IP address of the server
• DNS name of the server
• X.509 certificate of the server
• Is the certificate self signed?
• Is the certificate signed by a trusted CA?
• Has the certificate been revoked?
• Has the certificate been changed recently?
• Do other clients, elsewhere on the Internet, also get the same certificate?
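Collecting several of these evidence items can be automated. The sketch below records the server's IP address, DNS name and certificate fingerprint with a timestamp; repeating the capture over time and from several vantage points answers the "changed recently?" and "same certificate for other clients?" questions. The host name is a placeholder.

```python
# Forensic sketch: capture evidence items from the checklist above for
# later comparison across time and vantage points.
import hashlib
import json
import socket
import ssl
import time

def capture_evidence(hostname: str, port: int = 443) -> dict:
    ip = socket.getaddrinfo(hostname, port)[0][4][0]      # resolved address
    pem = ssl.get_server_certificate((hostname, port))    # served certificate
    der = ssl.PEM_cert_to_DER_cert(pem)
    return {
        "time": time.time(),
        "dns_name": hostname,
        "ip": ip,
        "sha256_fingerprint": hashlib.sha256(der).hexdigest(),
    }

if __name__ == "__main__":
    # Append records to a log and diff them to spot certificate changes.
    print(json.dumps(capture_evidence("example.com"), indent=2))
```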
5.11.5.1.5 Notable instances
Instances of man-in-the-middle attack are documented in [11][12][13][14][15][16].
5.11.5.2 References
1. Tanmay Patange (November 10, 2013). "How to defend yourself against MITM or Man-in-
the-middle attack".
2. Callegati, Franco; Cerroni, Walter; Ramilli, Marco (2009). "IEEE Xplore - Man-in-the-
Middle Attack to the HTTPS Protocol". ieeexplore.ieee.org: 78–81. Retrieved 13
April2016.
3. MiTM on RSA public key encryption
4. How Encryption Works
5. Public-key cryptography
6. Merkle, Ralph C (April 1978). "Secure Communications Over Insecure
Channels". Communications of the ACM. 21 (4): 294–
299. doi:10.1145/359460.359473. Received August, 1975; revised September 1977
7. Heinrich, Stuart (2013). "Public Key Infrastructure based on Authentication of Media
Attestments". arXiv:1311.7182v1  .
8. Aziz, Benjamin; Hamilton, Geoff (2009). "Detecting man-in-the-middle attacks by precise
timing". 2009 Third International Conference on Emerging Security Information,
Systems and Technologies: 81–86. Retrieved 2017-02-25.
9. "5. Unconditionally secure authentication". liu.se.
10."Network Forensic Analysis of SSL MITM Attacks". NETRESEC Network Security Blog.
Retrieved March 27, 2011.
11.Leyden, John (2003-11-07). "Help! my Belkin router is spamming me". The Register.
12.Meyer, David (10 January 2013). "Nokia: Yes, we decrypt your HTTPS data, but don't
worry about it". Gigaom, Inc. Retrieved 13 June 2014.
13."NSA disguised itself as Google to spy, say reports". CNET. 12 Sep 2013. Retrieved 15
Sep 2013.
14."Comcast using man-in-the-middle attack to warn subscribers of potential copyright
infringement".
15."Comcast still uses MITM javascript injection to serve unwanted ads and messages".
16."Comcast continues to inject its own code into websites you visit".
5.11.6 Man-in-the-browser
5.11.6.1 Content
5.11.6.1.1 Introduction
Man-in-the-browser, a relative of man-in-the-middle, is a proxy Trojan horse[1] that
infects a web browser and modifies web pages, alters transaction content or inserts
transactions, hiding all of this from the user and the host application. It can be
countered by out-of-band transaction verification, although SMS verification can be
defeated by a man-in-the-mobile attack. Trojans are detected and removed by antivirus
software.[2][3][4]
5.11.6.1.2 Description
The term man-in-the-browser was adopted in [5][6]; the attack is a Trojan that exploits
browser facilities such as Browser Helper Objects, extensions and scripts.[6]
Antivirus software can detect some of these methods.[2]
Examples of the threats are documented in [7][8][9][10][11][12][1][2][11][13][14][15][16]
[17][18][19][15][1][20][20][21][15][12][10][1][23][24][3][25].
5.11.6.1.3 Protection
5.11.6.1.3.1 Antivirus
Antivirus software will find, block and remove known Trojans[2] but is not fully successful.[3][4]
5.11.6.1.3.2 Hardened software
• Browser security software blocks attacks from browser extensions and controls
communication.[11][12][15]
• Alternative software reduces or eliminates the risk of malware infection.[26][27]
• A secure Web browser with a two-factor security process can be used.
5.11.6.1.3.3 Out-of-band transaction verification
Out-of-band transaction verification relies on a transaction being validated over a
communication channel independent of the one carrying the transaction.[28]
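A minimal sketch of the idea follows, with an invented key and a stubbed SMS gateway: the code is derived from the transaction details themselves, so a browser that silently alters the payee or amount produces an SMS the user will not recognise.

```python
# Sketch of out-of-band transaction verification: the server derives a short
# code from the transaction details and delivers it over an independent
# channel. The key and SMS function are illustrative stand-ins.
import hashlib
import hmac

SERVER_KEY = b"demo-key-not-for-production"

def transaction_code(payee: str, amount_cents: int) -> str:
    msg = f"{payee}|{amount_cents}".encode()
    digest = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return str(int(digest[:8], 16) % 1_000_000).zfill(6)  # 6-digit code

def send_sms(phone: str, text: str) -> None:  # stub for a real gateway
    print(f"SMS to {phone}: {text}")

# Server side: the SMS restates what will actually be executed.
send_sms("+15550100", f"Pay ACME 120.00? Code: {transaction_code('ACME', 12000)}")

def verify(payee: str, amount_cents: int, code_from_user: str) -> bool:
    return hmac.compare_digest(transaction_code(payee, amount_cents),
                               code_from_user)
```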
5.11.6.1.3.4 Man-in-the-Mobile
Mobile-phone Trojan spyware, man-in-the-mobile,[29] can defeat out-of-band transaction
verification that relies on SMS.[30][31]
5.11.6.1.3.5 Web fraud detection
Web Fraud Detection uses automatic check for anomalous behaviour patterns in
transactions.[32]
5.11.6.1.4 Related attacks
5.11.6.1.4.1 Proxy trojans
Keyloggers are a primitive form of proxy trojan, and browser-session recorders are similar.[1]
5.11.6.1.4.2 Man-in-the-middle
Countermeasures against man-in-the-middle attacks do not help against a
man-in-the-browser attack, which operates inside the browser itself.
5.11.6.1.4.3 Boy-in-the-browser
Boy-in-the-browser malware changes the client computer's network routing to mount a
man-in-the-middle attack, then removes itself to avoid detection.[33]
5.11.6.1.4.4 Clickjacking
Clickjacking tricks the browser user into clicking something that executes malicious
code in the webpage.
5.11.6.2 References
1.  Bar-Yosef, Noa (2010-12-30). "The Evolution of Proxy Trojans". Retrieved 2012-02-03.
2. F-Secure (2007-02-11). "Threat Description: Trojan-Spy:W32/Nuklus.A". Retrieved 2012-
02-03.
3. Trusteer (2009-09-14). "Measuring the in-the-wild effectiveness of Antivirus against
Zeus" (PDF). Archived from the original (PDF) on November 6, 2011. Retrieved 2012-02-
05.
4. Quarri Technologies, Inc (2011). "Web Browsers: Your Weak Link in Achieving PCI
Compliance" (PDF). Retrieved 2012-02-05.
5. Paes de Barros, Augusto (15 September 2005). "O futuro dos backdoors - o pior dos
mundos" (PDF) (in Portuguese). Sao Paulo, Brazil: Congresso Nacional de Auditoria de
Sistemas, Segurança da Informação e Governança - CNASI. Archived from the
original(PDF) on July 6, 2011. Retrieved 2009-06-12.
6. Gühring, Philipp (27 January 2007). "Concepts against Man-in-the-Browser
Attacks"(PDF). Retrieved 2008-07-30.
7. Dunn, John E (2010-07-03). "Trojan Writers Target UK Banks With Botnets".
Retrieved 2012-02-08.
8. Dunn, John E (2010-10-12). "Zeus not the only bank Trojan threat, users warned".
Retrieved 2012-02-03.
9. Curtis, Sophie (2012-01-18). "Facebook users targeted in Carberp man-in-the-browser
attack". Retrieved 2012-02-03.
10.Marusceac Claudiu Florin (2008-11-28). "Trojan.PWS.ChromeInject.B Removal Tool".
Retrieved 2012-02-05.
11.Nattakant Utakrit, School of Computer and Security Science, Edith Cowan University
(2011-02-25). "Review of Browser Extensions, a Man-in-theBrowser Phishing
Techniques Targeting Bank Customers". Retrieved 2012-02-03.
12.Symantec Marc Fossi (2010-12-08). "ZeuS-style banking Trojans seen as greatest threat
to online banking: Survey". Retrieved 2012-02-03.
13.Ted Samson (2011-02-22). "Crafty OddJob malware leaves online bank accounts open
to plunder". Retrieved 2012-02-06.
14.Symantec Marc Fossi (2008-01-23). "Banking with Confidence". Retrieved 2008-07-30.
15.Trusteer. "Trusteer Rapport". Retrieved 2012-02-03.
16.CEO of Trusteer Mickey Boodaei (2011-03-31). "Man-in-the-Browser attacks target the
enterprise". Retrieved 2012-02-03.
17.www.net-security.org (2011-05-11). "Explosive financial malware targets Windows".
Retrieved 2012-02-06.
18.Jozsef Gegeny; Jose Miguel Esparza (2011-02-25). "Tatanga: a new banking trojan with
MitB functions". Retrieved 2012-02-03.
19."Tiny 'Tinba' Banking Trojan Is Big Trouble". msnbc.com. Retrieved 2016-02-28.
20.Borean, Wayne (2011-05-24). "The Mac OS X Virus That Wasn't". Retrieved 2012-02-08.
21.Fisher, Dennis (2011-05-02). "Crimeware Kit Emerges for Mac OS X". Archived from the
original on September 5, 2011. Retrieved 2012-02-03.
22.F-secure. "Threat DescriptionTrojan-Spy:W32/Zbot". Retrieved 2012-02-05.
23.Hyun Choi; Sean Kiernan (2008-07-24). "Trojan.Wsnpoem Technical Details". Symantec.
Retrieved 2012-02-05.
24.Microsoft (2010-04-30). "Encyclopedia entry: Win32/Zbot - Learn more about malware -
Microsoft Malware Protection Center". Symantec. Retrieved 2012-02-05.
25.Richard S. Westmoreland (2010-10-20). "Antisource - ZeuS". Archived from the
original on 2012-01-20. Retrieved 2012-02-05.
26.Horowitz, Michael (2012-02-06). "Online banking: what the BBC missed and a safety
suggestion". Retrieved 2012-02-08.
27.Purdy, Kevin (2009-10-14). "Use a Linux Live CD/USB for Online Banking".
Retrieved 2012-02-04.
28.Finextra Research (2008-11-13). "Commerzbank to deploy Cronto mobile phone-based
authentication technology". Retrieved 2012-02-08.
29.Chickowski, Ericka (2010-10-05). "'Man In The Mobile' Attacks Highlight Weaknesses In
Out-Of-Band Authentication". Retrieved 2012-02-09.
30.Schwartz, Mathew J. (2011-07-13). "Zeus Banking Trojan Hits Android Phones".
Retrieved 2012-02-04.
31.Balan, Mahesh (2009-10-14). "Internet Banking & Mobile Banking users beware – ZITMO
& SPITMO is here !!". Retrieved 2012-02-05.
32.Sartain, Julie (2012-02-07). "How to protect online transactions with multi-factor
authentication". Retrieved 2012-02-08.
33.Imperva (2010-02-14). "Threat Advisory Boy in the Browser". Retrieved 2015-03-12.
5.11.7 Man-In-The-Mobile
5.11.7.1 Commentary
Mobile-phone Trojan spyware[29] can defeat out-of-band transaction verification that
relies on SMS.[30][31]
5.11.7.2 References
1  Bar-Yosef, Noa (2010-12-30). "The Evolution of Proxy Trojans". Retrieved 2012-
02-03.
2 F-Secure (2007-02-11). "Threat Description: Trojan-Spy:W32/Nuklus.A".
Retrieved 2012-02-03.
3 Trusteer (2009-09-14). "Measuring the in-the-wild effectiveness of Antivirus
against Zeus" (PDF). Archived from the original (PDF) on November 6, 2011.
Retrieved 2012-02-05.
4 Quarri Technologies, Inc (2011). "Web Browsers: Your Weak Link in Achieving
PCI Compliance" (PDF). Retrieved 2012-02-05.
5 Paes de Barros, Augusto (15 September 2005). "O futuro dos backdoors - o pior
dos mundos" (PDF) (in Portuguese). Sao Paulo, Brazil: Congresso Nacional de Auditoria
de Sistemas, Segurança da Informação e Governança - CNASI. Archived from the
original(PDF) on July 6, 2011. Retrieved 2009-06-12.
6 Gühring, Philipp (27 January 2007). "Concepts against Man-in-the-Browser
Attacks"(PDF). Retrieved 2008-07-30.
7 Dunn, John E (2010-07-03). "Trojan Writers Target UK Banks With Botnets".
Retrieved 2012-02-08.
8 Dunn, John E (2010-10-12). "Zeus not the only bank Trojan threat, users warned".
Retrieved 2012-02-03.
9 Curtis, Sophie (2012-01-18). "Facebook users targeted in Carberp man-in-the-
browser attack". Retrieved 2012-02-03.
10 Marusceac Claudiu Florin (2008-11-28). "Trojan.PWS.ChromeInject.B Removal
Tool". Retrieved 2012-02-05.
11 Nattakant Utakrit, School of Computer and Security Science, Edith Cowan
University (2011-02-25). "Review of Browser Extensions, a Man-in-theBrowser Phishing
Techniques Targeting Bank Customers". Retrieved 2012-02-03.
12 Symantec Marc Fossi (2010-12-08). "ZeuS-style banking Trojans seen as greatest
threat to online banking: Survey". Retrieved 2012-02-03.
13 Ted Samson (2011-02-22). "Crafty OddJob malware leaves online bank accounts
open to plunder". Retrieved 2012-02-06.
14 Symantec Marc Fossi (2008-01-23). "Banking with Confidence". Retrieved 2008-
07-30.
15 Trusteer. "Trusteer Rapport". Retrieved 2012-02-03.
16 CEO of Trusteer Mickey Boodaei (2011-03-31). "Man-in-the-Browser attacks
target the enterprise". Retrieved 2012-02-03.
17 www.net-security.org (2011-05-11). "Explosive financial malware targets
Windows". Retrieved 2012-02-06.
18 Jozsef Gegeny; Jose Miguel Esparza (2011-02-25). "Tatanga: a new banking
trojan with MitB functions". Retrieved 2012-02-03.
19 "Tiny 'Tinba' Banking Trojan Is Big Trouble". msnbc.com. Retrieved 2016-02-28.
20 Borean, Wayne (2011-05-24). "The Mac OS X Virus That Wasn't". Retrieved 2012-
02-08.
21 Fisher, Dennis (2011-05-02). "Crimeware Kit Emerges for Mac OS X". Archived
from the original on September 5, 2011. Retrieved 2012-02-03.
22 F-secure. "Threat DescriptionTrojan-Spy:W32/Zbot". Retrieved 2012-02-05.
23 Hyun Choi; Sean Kiernan (2008-07-24). "Trojan.Wsnpoem Technical Details".
Symantec. Retrieved 2012-02-05.
24 Microsoft (2010-04-30). "Encyclopedia entry: Win32/Zbot - Learn more about
malware - Microsoft Malware Protection Center". Symantec. Retrieved 2012-02-05.
25 Richard S. Westmoreland (2010-10-20). "Antisource - ZeuS". Archived from the
original on 2012-01-20. Retrieved 2012-02-05.
26 Horowitz, Michael (2012-02-06). "Online banking: what the BBC missed and a
safety suggestion". Retrieved 2012-02-08.
27 Purdy, Kevin (2009-10-14). "Use a Linux Live CD/USB for Online Banking".
Retrieved 2012-02-04.
28 Finextra Research (2008-11-13). "Commerzbank to deploy Cronto mobile phone-
based authentication technology". Retrieved 2012-02-08.
29 Chickowski, Ericka (2010-10-05). "'Man In The Mobile' Attacks Highlight
Weaknesses In Out-Of-Band Authentication". Retrieved 2012-02-09.
30 Schwartz, Mathew J. (2011-07-13). "Zeus Banking Trojan Hits Android Phones".
Retrieved 2012-02-04.
31 Balan, Mahesh (2009-10-14). "Internet Banking & Mobile Banking users beware –
ZITMO & SPITMO is here !!". Retrieved 2012-02-05.
32 Sartain, Julie (2012-02-07). "How to protect online transactions with multi-factor
authentication". Retrieved 2012-02-08.
33 Imperva (2010-02-14). "Threat Advisory Boy in the Browser". Retrieved 2015-03-
12.
5.11.8 Clickjacking
Clickjacking tricks a Web user into clicking on something different from what the user
expects, in order to reveal confidential data or take control of their computer.
[1][2][3][4] It takes advantage of browser vulnerabilities, using embedded code or
script that executes without the user's knowledge.[5][6][7][8]
5.11.8.1.2 Examples
Examples of clickjacking processes are:
• Tricking users into enabling their webcam and microphone through Flash.[9]
• Tricking users into making their social networking profile information public
• Downloading and running a malware for a remote attacker to control of computers[10]
[11][12]
• Making users follow someone on Twitter[13]
• Sharing or liking links on Facebook[14][15]
• Getting likes on Facebook fan page[16] or +1 on Google+
• Clicking Google AdSense ads to generate pay-per-click revenue[17]
• Playing YouTube videos to gain views
• Following someone on Facebook
Clickjacking attacks are hard to develop because of browser differences, but tools
exist that automate the process of exploiting clients on vulnerable websites.
Clickjacking can also assist other web attacks.[18][19]
5.11.8.1.2.1 Likejacking
Likejacking tricks website users into liking a Facebook page that they did not intend
to like.[20][21][22] It was mitigated by delaying activation of the like button.[23][24]
5.11.8.1.2.2 Cursorjacking
Cursorjacking displaces the cursor from the location the user perceives.[25][26][27][28]
[29]
5.11.8.1.3 Password manager attack
The password manager attack demonstrates a lack of security in password managers that
automatically fill in credentials: a clickjacked page can trick the manager into
revealing passwords, including passwords synchronized across multiple devices.[30]
5.11.8.1.4 Prevention
5.11.8.1.4.1 Client-side
Clickjacking can be avoided with browser add-ons and hardened browsers, e.g.
NoScript,[31][32][33][26] GuardedID[34] and Gazelle.[35]
5.11.8.1.4.2 Server-side
Server-side protection against clickjacking includes framekiller scripts,[33] other
scripts[36] and the X-Frame-Options header.[37][38][39][40][41][42][43][44][45]
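The header-based defence can be added centrally. The following minimal WSGI middleware, with a placeholder application, attaches X-Frame-Options and its CSP successor (frame-ancestors) to every response so that compliant browsers refuse to render the site inside a hostile frame.

```python
# Sketch of the server-side defence: add anti-framing headers to every
# response via WSGI middleware. The wrapped app is a placeholder.
def anti_clickjacking(app):
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            headers = list(headers) + [
                ("X-Frame-Options", "DENY"),
                ("Content-Security-Policy", "frame-ancestors 'none'"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped

def app(environ, start_response):  # placeholder application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

application = anti_clickjacking(app)
```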
5.11.8.2 References
1. Robert McMillan (17 September 2008). "At Adobe's request, hackers nix 'clickjacking'
talk". PC World. Retrieved 2008-10-08.
2. Megha Dhawan (29 September 2008). "Beware, clickjackers on the prowl". India Times.
Retrieved 2008-10-08.
3. Dan Goodin (7 October 2008). "Net game turns PC into undercover surveillance
zombie". The Register. Retrieved 2008-10-08.
4. Fredrick Lane (8 October 2008). "Web Surfers Face Dangerous New Threat:
'Clickjacking'". newsfactor.com. Archived from the original on 13 October 2008.
Retrieved 2008-10-08.
5. Sumner Lemon (30 September 2008). "Business Center: Clickjacking Vulnerability to Be
Revealed Next Month". Retrieved 2008-10-08.
6. You don't know (click)jack Robert Lemos, October 2008
7. JAstine, Berry. "Facebook Help Number 1-888-996-3777". Retrieved 7 June 2016.
8. The Confused Deputy rides again!, Tyler Close, October 2008
9. Constantin, Lucian. "Adobe to fix Flash flaw that allows webcam
spying". Computerworld.
10."select element persistance allows for attacks". Retrieved 2012-10-09.
11."UI selection timeout missing on download prompts". Retrieved 2014-02-04.
12."Delay following click events in file download dialog too short on OS X".
Retrieved 2016-03-08.
13.Daniel Sandler (12 February 2009). "Twitter's "Don't Click" prank, explained
(dsandler.org)". Retrieved 2009-12-28.
14.Krzysztof Kotowicz (21 December 2009). "New Facebook clickjacking attack in the
wild". Retrieved 2009-12-29.
15.BBC (3 June 2010). "Facebook "clickjacking" spreads across site". BBC News.
Retrieved 2010-06-03.
16.Josh MacDonald. "Facebook Has No Defence Against Black Hat Marketing".
Retrieved 2016-02-03.
17."Clickjacking campaign avoids click fraud, abuses Google AdSense". SC Magazine US.
10 January 2017.
18."The Clickjacking meets XSS: a state of art". Exploit DB. 2008-12-26. Retrieved 2015-
03-31.
19.Krzysztof Kotowicz. "Exploiting the unexploitable XSS with clickjacking".
Retrieved 2015-03-31.
20.Cohen, Richard (31 May 2010). "Facebook Work - "Likejacking"". Sophos.
Retrieved 2010-06-05.
21.Ballou, Corey (2 June 2010). ""Likejacking" Term Catches On". jqueryin.com. Archived
from the original on 5 June 2010. Retrieved 2010-06-08.
22.Perez, Sarah (2 June 2010). ""Likejacking" Takes Off on Facebook". ReadWriteWeb.
Retrieved 2010-06-05.
23.Kushner, David (June 2011). "Facebook Philosophy: Move Fast and Break Things".
spectrum.ieee.org. Retrieved 2011-07-15.
24.Perez, Sarah (23 April 2010). "How to "Like" Anything on the Web
(Safely)". ReadWriteWeb. Retrieved 24 August 2011.
25.Podlipensky, Paul. "Cursor Spoofing and Cursorjacking". Podlipensky.com. Paul
Podlipensky. Retrieved 22 November 2017.
26.Krzysztof Kotowicz (18 January 2012). "Cursorjacking Again". Retrieved 2012-01-31.
27.Aspect Security. "Cursor-jacking attack could result in application security breaches".
Retrieved 2012-01-31.
28."Mozilla Foundation Security Advisory 2014-50". Mozilla. Retrieved 17 August 2014.
29."Mozilla Foundation Security Advisory 2015-35". Mozilla. Retrieved 25 October 2015.
30."Password Managers: Attacks and Defenses" (PDF). Retrieved 26 July 2015.
31.Giorgio Maone (24 June 2011). "NoScript Anywhere". hackademix.net. Retrieved 2011-
06-30.
32.Giorgio Maone (8 October 2008). "Hello ClearClick, Goodbye Clickjacking".
hackademix.net. Retrieved 2008-10-27.
33.Michal Zalevski (10 December 2008). "Browser Security Handbook, Part 2, UI
Redressing". Google Inc. Retrieved 2008-10-27.
34.Robert Hansen (4 February 2009). "Clickjacking and GuardedID ha.ckers.org web
application security lab". Retrieved 2011-11-30.
35.Wang, Helen J.; Grier, Chris; Moschchuk, Alexander; King, Samuel T.; Choudhury, Piali;
Venter, Herman (August 2009). "The Multi-Principal OS Construction of the Gazelle Web
Browser" (PDF). 18th Usenix Security Symposium, Montreal, Canada. Retrieved 2010-
01-26.
36.Giorgio Maone (27 October 2008). "Hey IE8, I Can Has Some Clickjacking Protection".
hackademix.net. Retrieved 2008-10-27.
37.Eric Lawrence (27 January 2009). "IE8 Security Part VII: ClickJacking Defenses".
Retrieved 2010-12-30.
38.Eric Lawrence (30 March 2010). "Combating ClickJacking With X-Frame-Options".
Retrieved 2010-12-30.
39.Ryan Naraine (8 June 2009). "Apple Safari jumbo patch: 50+ vulnerabilities fixed".
Retrieved 2009-06-10.
40.https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_response_header The X-Frame-
Options response header — MDC
41.Adam Barth (26 January 2010). "Security in Depth: New Security Features".
Retrieved 2010-01-26.
42."Web specifications support in Opera Presto 2.6". 12 October 2010. Retrieved 2012-01-
22.
43."HTTP Header Field X-Frame-Options". IETF. 2013.
44."Content Security Policy Level 2". w3.org. 2014-07-02. Retrieved 2015-01-29.
45."Clickjacking Defense Cheat Sheet". Retrieved 2016-01-15.
5.11.9 Eavesdropping
5.11.9.1 Commentary
Eavesdropping can also be carried out over telephone lines, e-mail and other methods
of instant messaging considered private. VoIP communications software is likewise
vulnerable to electronic eavesdropping via infections such as trojans.
Network eavesdropping is a network-layer attack that captures packets transmitted by
other computers on the network and reads their data content in search of any type of
information. This type of attack is generally most effective where no encryption is
used. It is also linked to the collection of metadata.
5.12 Malware For Profit
5.12.1 Privacy-Invasive Software
5.12.1.1 Content
Privacy-invasive software disregards users’ privacy and is spread with a specific
intent. Its three types are adware, spyware and content hijackers.
5.12.1.1.1 Background
Internet privacy threats range from the tracking of user activity to mass marketing,
such as spam and telemarketing, based on gathered personal information.
5.12.1.1.2 Definitions
Spyware is software that steals personal information from a user's computer;[1][2] its
classification depends on the degree of user consent and on the negative impact it has.
It is defined more precisely in [3][4] as:
Spyware is a term for tracking software deployed without adequate notice, consent, or
control for the user.
Spyware also describes technologies, distributed without user consent, that damage user
control over:
1) changes that affect the user experience, privacy, or system security;
2) the use of system resources;
3) the collection, use, and distribution of sensitive data.
Alternatively, spyware defined as badware is outlined as:[5]
1) a program that is deceptive or irreversible;
2) a program with objectionable behaviour.
More universal definitions attempt to classify system actions as acceptable or
unacceptable.[6][7] Spyware goes by alternative names such as trackware, evilware and
badware, all of which can be grouped as privacy-invasive software.[8]
5.12.1.1.3 Past, Present and Future
The Internet brought browsers and commercial websites[9][10][11] and, with them,
debatable practices[12][13] of amassing personal data and distributing advertisements
as spam e-mail, notice-board postings,[14][15][16] banner ads and dedicated programs,
adware, with pop-up windows. Targeting of adverts led to spyware that collects
information on user interests from browsing history and profiles,[17] eventually
producing further spam and other invasive, privacy-compromising advertising.[18]
Infections from unsolicited software crashed computers, changed application settings,
gathered personal data and degraded computer performance,[19] prompting the development
of countermeasures, anti-spyware tools, that clean computers of spyware, adware and
other shady software by identifying programs through signatures (semantics, program
code, or other identifying attributes), not always fully successfully.[20][21][22]
The spread of media centres and mobile devices means that spyware can now monitor
consumers' actions and movements.[23][24][25]
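The signature approach can be sketched as a hash-blacklist scan. The entries and scan root below are placeholders; real anti-spyware tools also match code patterns and behaviour, since a pure hash list misses repacked variants.

```python
# Sketch of signature-based detection: scan a directory tree and flag files
# whose hash appears in a blacklist of known spyware/adware samples.
import hashlib
import pathlib

KNOWN_BAD_SHA256 = {
    # "e3b0c442...": "ExampleAdware.A",   # placeholder entries
}

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        name = KNOWN_BAD_SHA256.get(digest)
        if name:
            print(f"match: {path} -> {name}")

scan("/tmp/downloads")  # hypothetical scan root
```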
5.12.1.2 References
1 Gibson, GRC OptOut -- Internet Spyware Detection and Removal, Gibson
Research Corporation
2 Weiss, A. (2005), "Spyware Be Gone", ACM netWorker, ACM Press, New York,
USA, 9(1)
3 ASC (2006-10-05). "Anti-Spyware Coalition".
4 StopBadware.org, StopBadware.org
5 StopBadware.org Guidelines, "StopBadware.org Software
Guidelines", StopBadware.org, archived from the original on September 28, 2007
6 Bruce, J. (2005), "Defining Rules for Acceptable Adware", Proceedings of the
15th Virus Bulletin Conference, Dublin, Ireland
7 Sipior, J.C. (2005), "A United States Perspective on the Ethical and Legal Issues
of Spyware", Proceedings of 7th International Conference on Electronic Commerce,
Xian, China
8 Saroiu, S.; Gribble, S.D.; Levy, H.M. (2004), "Measurement and Analysis of
Spyware in a University Environment", Proceedings of the 1st Symposium on
Networked Systems Design and Implementation (NSDI), San Francisco, USA
9 Andreessen, M. (1993), NCSA Mosaic Technical Summary, USA: National Center
for Supercomputing Applications
10 Rosenberg, R.S. (2004), The Social Impact of Computers (3rd ed.), Place=Elsevier
Academic Press, San Diego CA
11 Abhijit, C.; Kuilboer, J.P. (2002), E-Business & E-Commerce Infrastructure:
Technologies Supporting the E-Business Initiative, Columbus, USA: McGraw Hill
12 CDT (2006), Following the Money (PDF), Center for Democracy & Technology
13 Shukla, S.; Nah, F.F. (2005), "Web Browsing and Spyware
Intrusion", Communications of the ACM, New York, USA, 48 (8),
p. 85, doi:10.1145/1076211.1076245
14 McFedries, P. (2005), The Spyware Nightmare, Nebraska, USA: in IEEE Spectrum,
Volume 42, Issue 8
15 Zhang, X. (2005), "What Do Consumers Really Know About
Spyware?", Communications of the ACM, ACM, 48 (8),
p. 44, doi:10.1145/1076211.1076238
16 CNET (2005), The Money Game: How Adware Works and How it is Changing,
CNET Anti Spyware Workshop, San Francisco, US
17 Vincentas (11 July 2013). "Privacy Invasive Software in SpyWareLoop.com".
Spyware Loop. Archived from the original on 9 April 2014. Retrieved 27 July 2013.
18 Görling, S. (2004), An Introduction to the Parasite Economy, Luxemburg: In
Proceedings of EICAR
19 Pew, Internet (2005), "The Threat of Unwanted Software Programs is Changing
the Way People use the Internet" (PDF), PIP Spyware Report July 05, Pew Internet &
American Life Project, archived from the original (PDF) on July 13, 2007
20 Good, N.; et al. (2006), User Choices and Regret: Understanding Users’ Decision
Process about Consentually Acquired Spyware, Columbus, USA: I/S: A Journal of Law
and Policy for the Information Society, Volume 2, Issue 2
21 MTL (2006), AntiSpyware Comparison Reports, http://www.malware-
test.com/antispyware.html: Malware-Test Lab
22 Webroot (2006), "Differences between Spyware and Viruses", Spysweeper.com,
Webroot Software, archived from the original on 2007-10-01
23 CES, International Consumer Electronics Association
24 Newman, M.W. (2006), "Recipes for Digital Living", IEEE Computer, Vol. 39, Issue
2
25 Business 2.0 Magazine (October 26, 2006), 20 Smart Companies to Start Now
General
27 Boldt, M. (2007a), Privacy-Invasive Software - Exploring Effects and
Countermeasures (PDF), School of Engineering, Blekinge Institute of Technology,
Sweden: Licentiate Thesis Series No. 2007:01.
28 Boldt, M. (2010), Privacy-Invasive Software (PDF), Blekinge, Sweden: School of
Computing, Blekinge Institute of Technology
29 Boldt, M.; Carlsson, B.; Larsson, T.; Lindén, N. (2007b), Preventing Privacy-
Invasive Software using Online Reputations (PDF), Springer Verlag, Berlin Germany:
in Lecture Notes in Computer Science series, Volume 4721.
30 Boldt, M.; Carlsson, B. (2006a), Privacy-Invasive Software and Preventive
Mechanisms (PDF), Papeete French, Polynesia: in Proceedings of IEEE International
Conference on Systems and Networks Communications (ICSNC 2006).
31 Boldt, M.; Carlsson, B. (2006b), Analysing Privacy-Invasive Software
Countermeasures, Papeete, French Polynesia: in Proceedings of IEEE International
Conference on Systems and Networks Communications (ICSEA 2006).
32 Boldt, M.; Jacobsson, A.; Carlsson, B. (2004), "Exploring Spyware
Effects" (PDF), Proceedings of the Eighth Nordic Workshop on Secure IT
Systems (NordSec2004), Helsinki, Finland.
33 Jacobsson, A. (2007), Security in Information Networks - from Privacy-Invasive
Software to Plug and Play Business, School of Engineering, Blekinge Institute of
Technology, Sweden: Doctoral Thesis.
34 Jacobsson, A. (2004), Exploring Privacy Risks in Information Networks, School of
Engineering, Blekinge Institute of Technology, Sweden: Licentiate Thesis Series No.
2004:11.
35 Jacobsson, A.; Boldt, M.; Carlsson, B. (2004), Privacy-Invasive Software in File-
Sharing Tools (PDF), Kluwer Academic Publishers, Dordrecht NL, pp. 281-296:
Deswarte, F. Cuppens, S. Jajodia and L. Wang (Eds.) Information Security Management,
Education and Privacy.
5.12.2 Adware
5.12.2.1 Content
5.12.2.1.1 Introduction
Adware (advertising-supported software) obtains revenue for its developer by generating
online advertisements in the user interface during execution or installation. The
revenue can take two forms:
• display of the advertisement
• "pay-per-click", when the user clicks on the advertisement.
The software may display advertisements as a static box, a banner, full screen, a
video, a pop-up ad or in some other form. Software can be free, relying on advertising
revenue, or offered for a fee without advertising. The software may analyze the user's
location and browser history to serve adverts for relevant goods or services.
Sometimes, however, software displays unwanted adverts and acts as malware.[1]
5.12.2.1.2 Advertising-supported software
In valid adware, the advertising functions are bundled with the program and are usually
viewed as a way of recovering development costs and generating revenue. In return, the
developer may give the software to the user free of charge or at a reduced price. The
advertising income thus motivates and funds the development of the adware system,[2]
which looks like an expanding market[3] and gives a funding basis for open-source
software.
5.12.2.1.3 Application software
Software can offer both an advertising-supported mode and a paid, advertisement-free
mode, unlocked either by online purchase of a license or registration code or by
purchase and download of a separate version. Alternatively, businesses can fund the
development while advertisers pay more.[7][8][9][10][12][13][14]
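The registration-code unlock can be sketched with an HMAC over the licensee's identity, assuming an invented vendor key and code format; real schemes add expiry dates, product identifiers and tamper resistance.

```python
# Sketch of unlocking the advertisement-free mode with a registration code:
# the vendor issues an HMAC over the licensee's name, and the program enables
# the paid mode only when the code verifies. Key and format are illustrative.
import hashlib
import hmac

VENDOR_KEY = b"demo-vendor-key"

def issue_code(licensee: str) -> str:
    return hmac.new(VENDOR_KEY, licensee.encode(), hashlib.sha256).hexdigest()[:20]

def ad_free_mode(licensee: str, code: str) -> bool:
    return hmac.compare_digest(issue_code(licensee), code)

code = issue_code("alice@example.com")          # produced at purchase time
print(ad_free_mode("alice@example.com", code))  # True: ads disabled
```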
5.12.2.1.4 Software as a service
Adware follows the concept of software as a service,[2][15][3][16][17][18] but the
Federal Trade Commission regards it as spyware[19][28] when it is downloaded or
installed without the user's consent. How, what, and when consumers need to be told
about installed software for consent to be sufficient, e.g. for extra software bundled
with a primary download, remains in question.
5.12.2.1.5 Malware
Adware is malware[20][21] when it forces unwanted adverts onto a
computer,[22][23] e.g. as pop-ups or unclosable windows,[24] ranging from an
irritant[25] to an online threat,[26] including viruses, trojans[27] and spyware.[29]
If the adware is legal, its developer can sue antivirus companies for blocking it.[30]
Anti-adware tools, including antivirus software, can detect, quarantine and remove the
malware (adware and spyware).[31] Some adware uses stolen certificates to disable
anti-malware and virus protection, although technical means exist to overcome this.[30]
Adware has also been found in files, boot partitions and the firmware of mobile
devices.[32]
5.12.2.2 References
1. Tulloch, Mitch (2003). Koch, Jeff; Haynes, Sandra, eds. Microsoft Encyclopedia of
Security. Redmond, Washington: Microsoft Press. p. 16. ISBN 0-7356-1877-1. Any
software that installs itself on your system without your knowledge and
displays advertisements when the user browses the Internet.
2. Braue, David (4 September 2008). "Feature: Ad-supported software". ZDNet. Retrieved 4
December 2012.
3. Hayes Weier, Mary (5 May 2007). "Businesses Warm To No-Cost, Ad-Supported
Software". Information Week. Archived from the original on 8 August 2016. Retrieved 4
December 2012.
4. Foley, Mary Jo (30 July 2007). "Microsoft Works to become a free, ad-funded product".
Retrieved 4 December 2012.
5. Foley, Mary Jo (9 October 2009). "Microsoft adds an 'Office Starter' edition to its
distribution plans". ZDNet. Retrieved 4 December 2012.
6. Foley, Mary Jo (21 June 2012). "Microsoft begins phasing out Starter edition of its
Office suite". ZDNet. Retrieved 4 December 2012.
7. Levy, Ari (23 April 2012). "Ad-supported software reaches specialized audience". SF
Gate. Retrieved 4 December 2012.
8. https://adblockplus.org/acceptable-ads
9. Tung, Liam (11 March 2011). "Skype now free ad-supported software". iT News for
Australian Business. Retrieved 4 December 2012.
10."Kindle, Wi-Fi, Graphite, 6" Display with New E Ink Pearl Technology — includes Special
Offers & Sponsored Screensavers". Amazon.com. Amazon.com, Inc. Retrieved 4
August2011.
11."Microsoft Advertising Historical Timeline". Microsoft Advertising. September 2008.
Retrieved 20 November 2012.
12."Windows 8 Ads in Apps". Microsoft Advertising. Archived from the original on 21
November 2012. Retrieved 20 November 2012.
13.Kim, Stephen (1 October 2012). "Microsoft Advertising Unveils New Windows 8 Ads in
Apps Concepts with Agency Partners at Advertising Week 2012". Microsoft.
Retrieved 20 November 2012.
14.Fried, Ina. "Microsoft eyes making desktop apps free". CNET. Archived from the
original on 24 November 2005. Retrieved 20 November 2012.
15.Teeter, Ryan; Karl Barksdale (9 February 2011). Google Apps For Dummies. pp. 3–
27. ISBN 1-118-05240-4.
16.17 January 2011 by Jolie O'Dell 203 (17 January 2011). "Facebook's Ad Revenue Hit
$1.86B for 2010". Mashable.com. Retrieved 21 December 2011.
17.Womack, Brian (20 September 2011). "Facebook Revenue Will Reach $4.27 Billion,
EMarketer Says". Bloomberg. Retrieved 21 December 2011.
18.Foley, Mary Jo (3 May 2007). "Meet Microsoft, the advertising company". ZDNet.
Retrieved 20 November 2012.
19.Majoras, Deborah Platt. "FTC Staff Report. Monitoring Software on Your PC: Spyware,
Adware, and Other Software" (PDF). Federal Trade Commission. Retrieved 4 April 2005.
20.National Cyber Security Alliance. "Malware & Botnets". StaySafeOnline.org. Archived
from the original on 13 December 2012. Retrieved 4 December 2012. The terms
'spyware' and 'adware' apply to several different [malware] technologies...
21."Viruses and other forms of malicious software". Princeton University Office of
Information Technology. 5 July 2012. Archived from the original on 24 December 2012.
Retrieved 4 December 2012. malware also includes worms, spyware and adware.
22.Vincentas (11 July 2013). "Adware in SpyWareLoop.com". Spyware Loop. Archived
from the original on 1 October 2013. Retrieved 27 July 2013.
23."Malware from A to Z". Lavasoft. Retrieved 4 December 2012. [Adware] delivers
advertising content potentially in a manner or context that may be unexpected and
unwanted by users.
24.National Cyber Security Alliance. "Data Privacy Day Glossary". StaySafeOnline.org.
Archived from the original on 20 March 2013. Retrieved 4 December 2012. Adware: type
of malware that allows popup ads on a computer system, ultimately taking over a
user's Internet browsing.
25."Spyware, Adware and Malware — Advice for networks and network users". RM
Education. Retrieved 4 December 2012. [Adware] tend[s] to be more of an irritant than
do actual damage to your system, but [is] an unwanted presence nonetheless.
26."McAfee, Inc. Names Most Dangerous Celebrities in Cyberspace". McAfee. Archived
from the original on 4 June 2013. Retrieved 4 December 2012. online threats, such as
spyware, spam, phishing, adware, viruses and other malware... Copy available at
Bloomberg.
27.Stern, Jerry. "Spyware, Adware, Malware, Thief: Creating Business Income from Denial
of Service and Fraud" (PDF). ASPects, Newsletter of the Association of Shareware
Professionals. Association of Software Professionals. Archived from the
original (PDF) on 2012-09-17. Adware has become a bad word, linked to spyware and
privacy violations by everyone except the publishers of the products... [it was] a good
thing ten or fifteen years ago, and [is] bad now... [t]he lines for adware are even being
blended into virus and trojan territory.
28.Spyware Workshop: Monitoring Software On Your Personal Computer: Spyware, Adware
and Other Software. Federal Trade Commission. March 2005. p. 2. Retrieved 4
December 2012.
29.Schwabach, Aaron (2005). Internet and the Law: Technology, Society, and
Compromises. ABC-CLIO. p. 10. ISBN 978-1-85109-731-9. Retrieved 4 December 2012.
30.Casey, Henry T. (25 November 2015). "Latest adware disables antivirus
software". Tom's Guide. Yahoo.com. Retrieved 25 November 2015.
31.Honeycutt, Jerry (20 April 2004). "How to protect your computer from Spyware and
Adware". Microsoft.com. Microsoft corporation. Archived from the original on 7
February 2006.
32."Decompile: Technical analysis of the Trojan". Cheetah Mobile. 9 November 2015.
Retrieved 7 December 2015.
5.12.3 Phishing
5.12.3.1 Content
5.12.3.1.1 Introduction
Phishing gathers data about a user or organization without their knowledge and sends it
to a collection agent without permission, or takes control of a device without the
user's cognition.[1] Phishing tools can include adware, system monitors, tracking
cookies, keyloggers and trojans,[2] and they track and store Internet user history and
serve pop-up ads. When malicious, such software is hidden from the user and hard to
find, or it interferes with user control by installing software, redirecting web
browsers and changing computer settings, resulting in slow Internet connections and
unauthorized changes to browser or software settings.
Phishing software may be bundled with genuine software from a malicious website, or it
may have been added to the intentional functionality of the software; this can be
countered with anti-Phishing software, whose execution forms a basic element of
computer security. Phishing software used or made by a government, known as govware or
policeware, is normally a trojan horse that monitors communications from the computer,
and its use sits within a legal framework.[3][4][5] Browsing history is now recorded by
many websites.[6]
5.12.3.1.2 Routes of infection
Phishing is usually transmitted through downloads that deceive the user or exploit
software vulnerabilities without the user's knowledge, or through deceptive tactics
such as bundling with desired software in a Trojan horse. A web page containing
malicious code can also attack the browser and force the download and installation of
Phishing software.
5.12.3.1.3 Effects and behaviors
Phishing shows itself as unwanted behavior and degraded system performance: stability
issues such as applications freezing, failure to boot, system crashes, difficulty
connecting to the Internet, disabled firewalls and antivirus software, and reduced
browser security.[7]
5.12.3.1.4 Remedies and prevention
Following the rise of Phishing technology, counter-techniques have been developed to
remove or block it, along with procedures that minimise the chance of acquiring it;
in the worst case the system has to be rebuilt.
5.12.3.1.4.1 Anti-Phishing programs
Dedicated anti-Phishing programs have been constructed,[8] and anti-virus companies
have added anti-Phishing features to their products, though they have been reluctant to
enable them because of lawsuits by Phishing developers. Anti-virus software classifies
Phishing as an extended threat and gives real-time protection against it.
5.12.3.1.4.2 How anti-Phishing software works
Anti-Phishing software follows two strategies: scanning incoming data for Phishing and
blocking it, and regularly scanning sensitive places on the computer for installed
Phishing and deleting it. The search list is updated whenever new Phishing is found.
Phishing adopts counter-strategies to keep running, for example by running two watchdog
threads that each respawn the other if it is killed, or by locking the program catalog.
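The watchdog-pair trick is easy to see in miniature. The sketch below, kept deliberately benign and bounded by a deadline so that it terminates, shows two threads that each respawn the other if it disappears, which is why deleting one component of such Phishing does not stop it.

```python
# Sketch of the mutual-watchdog persistence trick described above: each
# thread checks its partner and respawns it if it is no longer alive.
import threading
import time

threads = {}
stop_at = time.time() + 3.0  # demo deadline so the sketch exits cleanly

def watchdog(me: str, partner: str) -> None:
    while time.time() < stop_at:
        buddy = threads.get(partner)
        if buddy is None or not buddy.is_alive():
            buddy = threading.Thread(target=watchdog, args=(partner, me))
            threads[partner] = buddy
            buddy.start()  # resurrect the partner
        time.sleep(0.1)

threads["a"] = threading.Thread(target=watchdog, args=("a", "b"))
threads["a"].start()  # "a" immediately spawns "b", and they guard each other
```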
5.12.3.1.5 Security practices
Preparation against Phishing includes closing security holes in Internet browsers.[9]
[10] Some ISPs use network firewalls and web proxies to block access to problem Web
sites,[11] and whitelists of valid web sites can restrict downloads to reputable
sources.[12]
5.12.3.1.6 Applications
5.12.3.1.6.1 Stealware and affiliate fraud
Stealware sends the payments from the legitimate affiliate to the Phishing vendor by
attacking affiliate networks replacing tags on the user's activity thus cutting user
choices whilst the affiliates lose revenue and pay for the privilege, the network loses
credibility and the pseudo affiliate gains income.[13] Affiliate fraud applies the
stealware concept when the pseudo affiliate is the same as an affiliate. Mobile devices
are vulnerable to chargeware, which charges users illegal mobile costs.
5.12.3.1.6.2 Identity theft and fraud
Phishing can be used for identity theft.[14][15][16][17]
5.12.3.1.6.3 Digital rights management
Copy-protection technologies use Phishing through rootkits[18] however this led to
trouble with bad implementations.[19][20][21][22][23][24]
5.12.3.1.6.4 Personal relationships
Phishing can be used to observe partners of relationships but this can be regarded as
wiretapping and other computer crimes.[25]
5.12.3.1.6.5 Browser cookies
Anti-Phishing lists cookies which can be seen as malicious and can be deleted.[26]
5.12.3.1.7 Examples
Phishing has a diversity of behaviors which groups them into families:
• sends traffic to advertisements on Web sites, displaying pop-up ads, modifying search
results, and DNS lookups to these sites.[27]
• surveillance suite for police and secret services with support by training and
technology updates.[28]
• personal dat collection,[29][30] download and execute of code.[31][32][33][34]
• Phishing installing other Phishing, tracking aggregate browsing behavior, redirecting
affiliate references, and displaying advertisements.[35][36]
• redirect error pages to advertising for broken links or erroneous URL and stopping
access to password-protected Web sites.[37]
• hide inside system-critical processes as rootkit and fighting anti-malware.[38]
• demanding payment for going away.[39][40][41]
• plugin to display window-panel and can be removed by itself.
• User internet history collection for advertisers and redirecting links[13]
• collect search-history, internet history, keystrokes[42] and hijack routers.[43]
5.12.3.1.8 History and development
Phishing devlopment is documented in [44][45][46][47][48][49][50][51][52][53[54][55]
[56][57][58][59][60][61][62][63][64][65][66][67]
5.12.3.1.9 Legal issues
Criminal law has been documented in [68][69][70][71][72][73] for Phishing. US FTC
actions are documented in [74][75][76]. Netherlands OPTAactions are documented in
[77]. Civil law is documented in [79][80]. Libel suits by Phishing developers is
documented in [81][82]. WebcamGate is documented in [83][84][85].
5.12.3.2 References
1. FTC Report (2005). "[1]"
2. SPYWARE ""Archived copy" (PDF). Archived from the original (PDF) on November 1,
2013. Retrieved 2016-02-05."
3. Basil Cupa, Trojan Horse Resurrected: On the Legality of the Use of Government
Spyware (Govware), LISS 2013, pp. 419–428
4. FAQ – Häufig gestellte Fragen Archived May 6, 2013, at the Wayback Machine.
5. Jeremy Reimer (July 20, 2007). "The tricky issue of spyware with a badge: meet
'policeware'". Ars Technica.
6. Cooley, Brian (March 7, 2011). "'Like,' 'tweet' buttons divulge sites you visit: CNET
News Video". CNet News. Retrieved March 7, 2011.
7. Edelman, Ben; December 7, 2004 (updated February 8, 2005); Direct Revenue Deletes
Competitors from Users' Disks; benedelman.com. Retrieved November 28, 2006.
8. ""Microsoft Acquires Anti-Spyware Leader GIANT Company". December 16, 2004.
Archived from the original on February 27, 2009. Retrieved April 10, 2009."
9. Stefan Frei, Thomas Duebendofer, Gunter Ollman, and Martin May, Understanding the
Web browser threat: Examination of vulnerable online Web browser populations and the
insecurity iceberg, Communication Systems Group, 2008
10.Nikos Virvilisa, Alexios Mylonasa, Nikolaos Tsalisa, and Dimitris Gritzalisa, Security
Busters: Web Browser security vs. rogue sites, Computers & Security, 2015
11.Schuster, Steve. ""Blocking Marketscore: Why Cornell Did It". Archived from the
original on February 14, 2007.". Cornell University, Office of Information Technologies.
March 31, 2005.
12.Vincentas (July 11, 2013). "Information About Spyware in SpyWareLoop.com". Spyware
Loop. Archived from the original on November 3, 2013. Retrieved July 27, 2013.
13.Edelman, Ben (2004). "The Effect of 180solutions on Affiliate Commissions and
Merchants". Benedelman.org. Retrieved November 14, 2006.
14.Ecker, Clint (2005). Massive spyware-based identity theft ring uncovered. Ars
Technica, August 5, 2005.
15.Eckelberry, Alex. "Massive identity theft ring", SunbeltBLOG, August 4, 2005.
16.Eckelberry, Alex. "Identity Theft? What to do?", SunbeltBLOG, August 8, 2005.
17.FTC Releases Survey of Identity Theft in U.S. 27.3 Million Victims in Past 5 Years,
Billions in Losses for Businesses and Consumers. Federal Trade Commission,
September 3, 2003.
18.Russinovich, Mark. "Sony, Rootkits and Digital Rights Management Gone Too
Far,", Mark's Blog, October 31, 2005. Retrieved November 22, 2006.
19.Press release from the Texas Attorney General's office, November 21, 2005; Attorney
General Abbott Brings First Enforcement Action In Nation Against Sony BMG For
Spyware Violations. Retrieved November 28, 2006.
20."Sony sued over copy-protected CDs; Sony BMG is facing three lawsuits over its
controversial anti-piracy software", BBC News, November 10, 2005. Retrieved
November 22, 2006.
21.Information About XCP Protected CDs. Retrieved November 29, 2006.
22.Microsoft.com – Description of the Windows Genuine Advantage Notifications
application. Retrieved June 13, 2006.
23.Weinstein, Lauren. Windows XP update may be classified as 'spyware', Lauren
Weinstein's Blog, June 5, 2006. Retrieved June 13, 2006.
24.Evers, Joris. Microsoft's antipiracy tool phones home daily, CNET, June 7, 2006.
Retrieved August 31, 2014.
25."Creator and Four Users of Loverspy Spyware Program Indicted". Department of
Justice. August 26, 2005. Archived from the original on November 19, 2013.
Retrieved November 21, 2014.
26."Tracking Cookie". Symantec. Retrieved 2013-04-28.
27.""CoolWebSearch". Parasite information database. Archived from the original on
January 6, 2006. Retrieved September 4, 2008.
28.Nicole Perlroth (August 30, 2012). "Software Meant to Fight Crime Is Used to Spy on
Dissidents". The New York Times. Retrieved August 31, 2012.
29."GO Keyboard - Emoji keyboard, Swipe input, GIFs". GOMO Dev Team.
30."GO Keyboard - Emoticon keyboard, Free Theme, GIF". GOMO Dev Team.
31."Malicious behavior". Google.
32."Virustotal detection". Betanews. September 21, 2017.
33."PRIVACY and security". GOMO Dev Team.
34."GO Keyboard spying warning". Betanews. September 21, 2017.
35."CA Spyware Information Center – HuntBar". .ca.com. Archived from the original on May
9, 2012. Retrieved September 11, 2010.
36."What is Huntbar or Search Toolbar?". Pchell.com. Retrieved September 11, 2010.
37.""InternetOptimizer". Parasite information database. Archived from the original on
January 6, 2006. Retrieved September 4, 2008.
38.Roberts, Paul F. "Spyware meets Rootkit Stealth". eweek.com. June 20, 2005.
39."FTC, Washington Attorney General Sue to Halt Unfair Movieland Downloads". Federal
Trade Commission. August 15, 2006.
40."Attorney General McKenna Sues Movieland.com and Associates for Spyware".
Washington State Office of the Attorney General. August 14, 2006.
41."Complaint for Permanent Injunction and Other Equitable Relief (PDF, 25
pages)"(PDF). Federal Trade Commission. August 8, 2006.
42."Anti-Spyware". Total Technology Resources. Retrieved 20 November 2017.
43.PCMAG, New Malware changes router settings, PC Magazine, June 13,
2008. Archived July 15, 2011, at the Wayback Machine.
44.Vossen, Roland (attributed); October 21, 1995; Win 95 Source code in c!! posted to
rec..programmer; retrieved from groups.google.com November 28, 2006.[dead link]
45.Wienbar, Sharon. "The Spyware Inferno". News.com. August 13, 2004.
46.Hawkins, Dana; "Privacy Worries Arise Over Spyware in Kids' Software". U.S. News &
World Report. June 25, 2000 Archived November 3, 2013, at the Wayback Machine.
47."AOL/NCSA Online Safety Study Archived December 13, 2005, at the Wayback
Machine.". America Online & The National Cyber Security Alliance. 2005.
48.Spanbauer, Scott. "Is It Time to Ditch IE?". Pcworld.com. September 1, 2004
49.Keizer, Gregg. "Analyzing IE At 10: Integration With OS Smart Or Not?". TechWeb
Technology News. August 25, 2005. Archived September 29, 2007, at the Wayback
Machine.
50.Edelman, Ben (2004). "Claria License Agreement Is Fifty Six Pages Long". Retrieved
July 27, 2005.
51.Edelman, Ben (2005). "Comparison of Unwanted Software Installed by P2P Programs".
Retrieved July 27, 2005.
52.""WeatherBug". Parasite information database. Archived from the original on February
6, 2005. Retrieved September 4, 2008.
53."Adware.WildTangent". Sunbelt Malware Research Labs. June 12, 2008.
Retrieved September 4, 2008.[permanent dead link]
54."Winpipe". Sunbelt Malware Research Labs. June 12, 2008. Retrieved September
4,2008. It is possible that this spyware is distributed with the adware bundler
WildTangent or from a threat included in that bundler.
55."How Did I Get Gator?". PC Pitstop. Retrieved July 27, 2005.
56."eTrust Spyware Encyclopedia – FlashGet". Computer Associates. Retrieved July 27,
2005. Archived May 5, 2007, at the Wayback Machine.
57."Jotti's malware scan of FlashGet 3". Virusscan.jotti.org. Archived from the original on
March 23, 2010. Retrieved September 11, 2010.
58.VirusTotal scan of FlashGet 3.
59."Jotti's malware scan of FlashGet 1.96". Virusscan.jotti.org. Archived from the
originalon May 10, 2011. Retrieved September 11, 2010.
60.VirusTotal scan of FlashGet 1.96.
61.Some caution is required since FlashGet 3 EULA makes mention of Third Party
Software, but does not name any third party producer of software. However, a scan
with SpyBot Search & Destroy, performed on November 20, 2009 after installing
FlashGet 3 did not show any malware on an already anti-spyware immunized system (by
SpyBot and SpywareBlaster).
62."Gadgets boingboing.net, ''MagicJack's EULA says it will spy on you and force you into
arbitration''". Gadgets.boingboing.net. April 14, 2008. Retrieved September 11, 2010.
63.Roberts, Paul F. (May 26, 2005). "Spyware-Removal Program Tagged as a Trap". eWeek.
Retrieved September 4, 2008.
64.Howes, Eric L. "The Spyware Warrior List of Rogue/Suspect Anti-Spyware Products &
Web Sites". Retrieved July 10, 2005.
65.Also known as WinAntiVirusPro, ErrorSafe, SystemDoctor, WinAntiSpyware,
AVSystemCare, WinAntiSpy, Windows Police Pro, Performance Optimizer,
StorageProtector, PrivacyProtector, WinReanimator, DriveCleaner, WinspywareProtect,
PCTurboPro, FreePCSecure, ErrorProtector, SysProtect, WinSoftware, XPAntivirus,
Personal Antivirus, Home Antivirus 20xx, VirusDoctor, and ECsecure
66.Elinor Mills (April 27, 2010). "Google: Fake antivirus is 15 percent of all
malware". CNET. Retrieved 2011-11-05.
67.McMillan, Robert. Antispyware Company Sued Under Spyware Law. PC World, January
26, 2006.
68."Lawsuit filed against 180solutions Archived June 22, 2008, at the Wayback
Machine.". zdnet.com September 13, 2005
69.Hu, Jim. "180solutions sues allies over adware". news.com July 28, 2004
70.Coollawyer; 2001–2006; Privacy Policies, Terms and Conditions, Website Contracts,
Website Agreements; coollawyer.com. Retrieved November 28, 2006.
71."CHAPTER 715 Computer Spyware and Malware Protection Archived April 6, 2012, at
the Wayback Machine.". nxtsearch.legis.state.ia.us. Retrieved May 11, 2011.
72.Chapter 19.270 RCW: Computer spyware. apps.leg.wa.gov. Retrieved November 14,
2006.
73.Gross, Grant. US lawmakers introduce I-Spy bill. InfoWorld, March 16, 2007. Retrieved
March 24, 2007.
74.See Federal Trade Commission v. Sperry & Hutchinson Trading Stamp Co.
75.FTC Permanently Halts Unlawful Spyware Operations Archived November 2, 2013, at
the Wayback Machine. (FTC press release with links to supporting documents); see
also FTC cracks down on spyware and PC hijacking, but not true lies, Micro Law, IEEE
MICRO (Jan.-Feb. 2005), also available at IEEE Xplore.
76.See Court Orders Halt to Sale of Spyware (FTC press release November 17, 2008, with
links to supporting documents).
77.OPTA, "Besluit van het college van de Onafhankelijke Post en Telecommunicatie
Autoriteit op grond van artikel 15.4 juncto artikel 15.10 van de Telecommunicatiewet
tot oplegging van boetes ter zake van overtredingen van het gestelde bij of krachtens
de Telecommunicatiewet" from November 5,
2007, http://opta.nl/download/202311+boete+verspreiding+ongewenste+software.pdf
78."State Sues Major "Spyware" Distributor" (Press release). Office of New York State
Attorney General. April 28, 2005. Archived from the original on January 10, 2009.
Retrieved September 4, 2008. Attorney General Spitzer today sued one of the nation's
leading internet marketing companies, alleging that the firm was the source of
"spyware" and "adware" that has been secretly installed on millions of home
computers.
79.Gormley, Michael. "Intermix Media Inc. says it is settling spyware lawsuit with N.Y.
attorney general". Yahoo! News. June 15, 2005. Archived from the original on June 22,
2005.
80.Gormley, Michael (June 25, 2005). "Major advertisers caught in spyware net". USA
Today. Retrieved September 4, 2008.
81.Festa, Paul. "See you later, anti-Gators?". News.com. October 22, 2003.
82."Gator Information Center". pcpitstop.com November 14, 2005.
83."Initial LANrev System Findings" Archived June 15, 2010, at the Wayback Machine.,
LMSD Redacted Forensic Analysis, L-3 Services – prepared for Ballard Spahr(LMSD's
counsel), May 2010. Retrieved August 15, 2010.
84.Doug Stanglin (February 18, 2010). "School district accused of spying on kids via laptop
webcams". USA Today. Retrieved February 19, 2010.
85."Suit: Schools Spied on Students Via Webcam". CBS NEWS. March 8, 2010.Spyware
5.13 Spyware
5.13.1 Content
5.13.1.1 Introduction
Spyware gathers data about a user or organization without their knowledge and sends
it to a collection agent without permission, or takes control of a device without the
user's awareness.[1] Spyware includes adware, system monitors, tracking cookies,
keyloggers and trojans,[2] and it tracks and stores Internet browsing history and serves
pop-up ads. When it is malicious it is hidden from the user and hard to find, and it can
interfere with user control by installing software, redirecting web browsers or changing
computer settings, resulting in slow Internet connections and unauthorized changes to
browser or software settings.
Spyware may be bundled with genuine software from a malicious website or added to
the intended functionality of the software; this can be countered with anti-spyware
software, whose use forms a basic element of computer security. Spyware used or made
by governments, known as govware or policeware, is normally a trojan horse that
monitors communications from the computer, and its use is governed by a legal
framework.[3][4][5] Browsing history is now recorded by many websites.[6]
5.13.1.2 Routes of infection
Spyware is usually transmitted through downloads that deceive the user or exploit
software vulnerabilities without the user's knowledge, or through deceptive tactics
such as bundling itself with desired software as a Trojan horse. A web page containing
malicious code can also attack the browser and force the download and installation of
spyware.
5.13.1.3 Effects and behaviors
Spyware shows itself through unwanted behavior and degraded system performance:
stability issues such as applications freezing, failure to boot and system crashes,
difficulty connecting to the Internet, disabled firewalls and antivirus software, and
reduced browser security.[7]
5.13.1.4 Remedies and prevention
Following the rise of spyware, counter-techniques have been developed to remove or
block it, along with procedures that minimise the chance of acquiring spyware; in
severe cases, however, the system must be rebuilt.
5.13.1.5 Anti-spyware programs
Anti-spyware programs have been developed,[8] and anti-virus companies have added
anti-spyware capabilities to their products, although they were initially reluctant to
release them because of lawsuits brought by spyware developers. Anti-virus products
classify spyware as an extended threat and provide real-time protection against it.
5.13.1.6 How anti-spyware software works
Anti-spyware software follows two strategies: scanning incoming data for spyware and
blocking it, and regularly scanning sensitive places on the computer for installed
spyware and deleting it. The signature list is updated as new spyware is found. Spyware
adopts its own strategies to keep running, for example running two watchdog threads
that each respawn the other if it is killed, or locking the program catalog.
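The scan-and-delete strategy can be illustrated with a minimal sketch, assuming a
hypothetical signature database of SHA-256 file digests; this is not any vendor's actual
engine, and the signature and scan path below are placeholders:

    import hashlib
    from pathlib import Path

    # Hypothetical signature list: SHA-256 digests of known spyware binaries.
    # A real product ships and continuously updates a far larger database.
    SIGNATURES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(directory: str) -> list[Path]:
        """Return files whose digest matches a known spyware signature."""
        return [p for p in Path(directory).rglob("*")
                if p.is_file() and sha256(p) in SIGNATURES]

    # Report matches in a sensitive location; deletion or quarantine would follow.
    for hit in scan("/tmp"):
        print("possible spyware:", hit)

Exact-hash matching is only the simplest case; production scanners also use heuristics
and behavioural monitoring, since trivially repacked spyware changes its digest.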
5.13.1.7 Security practices
Preparation against spyware includes closing security holes in Internet browsers.[9]
[10] Some ISPs use network firewalls and web proxies to block access to problem
websites,[11] and some systems keep lists of valid websites, allowing downloads only
from reputable sources.[12]
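As a sketch of the blocking practice just described, a web proxy can consult a blocklist
of problem sites, and an allowlist of reputable download sources, before forwarding a
request. The host names below are hypothetical placeholders, not real threat feeds:

    from urllib.parse import urlparse

    # Hypothetical policy lists; real deployments pull these from threat feeds.
    BLOCKED_HOSTS = {"malicious.example.net"}
    TRUSTED_DOWNLOAD_HOSTS = {"downloads.example.org"}

    def allow_request(url: str, is_download: bool = False) -> bool:
        host = urlparse(url).hostname or ""
        if host in BLOCKED_HOSTS:
            return False  # known problem site: always refuse
        if is_download:
            # Downloads are permitted only from reputable sources.
            return host in TRUSTED_DOWNLOAD_HOSTS
        return True

    print(allow_request("http://malicious.example.net/page"))            # False
    print(allow_request("http://downloads.example.org/tool.exe", True))  # True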
5.13.1.8 Applications
5.13.1.8.1 Stealware and affiliate fraud
Stealware diverts payments from the legitimate affiliate to the spyware vendor by
attacking affiliate networks and replacing the tags on a user's activity. The user's
choices are narrowed, the affiliates lose revenue while paying for the privilege, the
network loses credibility, and the pseudo-affiliate gains income.[13] Affiliate fraud
applies the stealware concept when the pseudo-affiliate is itself an affiliate. Mobile
devices are also vulnerable to chargeware, which bills users for illegitimate mobile
charges.
5.13.1.8.2 Identity theft and fraud
Spyware can be used for identity theft.[14][15][16][17]
5.13.1.8.3 Digital rights management
Copy-protection technologies have used spyware in the form of rootkits,[18] although
poor implementations led to serious trouble.[19][20][21][22][23][24]
5.13.1.8.4 Personal relationships
Spyware can be used to observe partners in personal relationships, but this may be
regarded as wiretapping and other computer crimes.[25]
5.13.1.8.5 Browser cookies
Anti-spyware software lists cookies that can be considered malicious so that they can
be deleted.[26]
5.13.1.9 Examples
Spyware exhibits a diversity of behaviors that group it into families:
• sending traffic to advertisements on websites, displaying pop-up ads, and
modifying search results and DNS lookups to point to these sites.[27]
• surveillance suites for police and secret services, supported by training and
technology updates.[28]
• personal data collection,[29][30] and download and execution of code.[31][32][33][34]
• spyware installing other spyware, tracking aggregate browsing behavior,
redirecting affiliate references, and displaying advertisements.[35][36]
• redirecting error pages for broken links or erroneous URLs to advertising, and
blocking access to password-protected websites.[37]
• hiding inside system-critical processes as a rootkit and fighting anti-malware.[38]
• demanding payment to go away.[39][40][41]
• a plugin that displays a window panel and can remove itself.
• collecting users' Internet history for advertisers and redirecting links.[13]
• collecting search history, Internet history and keystrokes,[42] and hijacking routers.[43]
5.13.1.10 History and development
The development of spyware is documented in [44][45][46][47][48][49][50][51][52][53]
[54][55][56][57][58][59][60][61][62][63][64][65][66][67].
5.13.1.11 Legal issues
Criminal law relating to spyware is documented in [68][69][70][71][72][73]. US FTC
actions are documented in [74][75][76]. Netherlands OPTA actions are documented in
[77]. Civil law is documented in [78][79][80]. Libel suits by spyware developers are
documented in [81][82]. WebcamGate is documented in [83][84][85].
5.13.2 References
1. FTC Report (2005). "[1]"
2. "Spyware" (PDF). Archived from the original (PDF) on November 1, 2013.
Retrieved February 5, 2016.
3. Basil Cupa, Trojan Horse Resurrected: On the Legality of the Use of Government
Spyware (Govware), LISS 2013, pp. 419–428
4. FAQ – Häufig gestellte Fragen Archived May 6, 2013, at the Wayback Machine.
5. Jeremy Reimer (July 20, 2007). "The tricky issue of spyware with a badge: meet
'policeware'". Ars Technica.
6. Cooley, Brian (March 7, 2011). "'Like,' 'tweet' buttons divulge sites you visit:
CNET News Video". CNet News. Retrieved March 7, 2011.
7. Edelman, Ben; December 7, 2004 (updated February 8, 2005); Direct Revenue
Deletes Competitors from Users' Disks; benedelman.com. Retrieved November 28,
2006.
8. ""Microsoft Acquires Anti-Spyware Leader GIANT Company". December 16, 2004.
Archived from the original on February 27, 2009. Retrieved April 10, 2009."
9. Stefan Frei, Thomas Duebendofer, Gunter Ollman, and Martin May, Understanding
the Web browser threat: Examination of vulnerable online Web browser populations and
the insecurity iceberg, Communication Systems Group, 2008
10. Nikos Virvilisa, Alexios Mylonasa, Nikolaos Tsalisa, and Dimitris
Gritzalisa, Security Busters: Web Browser security vs. rogue sites, Computers &
Security, 2015
11. Schuster, Steve. "Blocking Marketscore: Why Cornell Did It". Cornell University,
Office of Information Technologies. March 31, 2005. Archived from the original on
February 14, 2007.
12. Vincentas (July 11, 2013). "Information About Spyware in SpyWareLoop.com".
Spyware Loop. Archived from the original on November 3, 2013. Retrieved July
27, 2013.
13. Edelman, Ben (2004). "The Effect of 180solutions on Affiliate Commissions and
Merchants". Benedelman.org. Retrieved November 14, 2006.
14. Ecker, Clint (2005). Massive spyware-based identity theft ring uncovered. Ars
Technica, August 5, 2005.
15. Eckelberry, Alex. "Massive identity theft ring", SunbeltBLOG, August 4, 2005.
16. Eckelberry, Alex. "Identity Theft? What to do?", SunbeltBLOG, August 8, 2005.
17. FTC Releases Survey of Identity Theft in U.S. 27.3 Million Victims in Past 5
Years, Billions in Losses for Businesses and Consumers. Federal Trade Commission,
September 3, 2003.
18. Russinovich, Mark. "Sony, Rootkits and Digital Rights Management Gone Too
Far,", Mark's Blog, October 31, 2005. Retrieved November 22, 2006.
19. Press release from the Texas Attorney General's office, November 21,
2005; Attorney General Abbott Brings First Enforcement Action In Nation Against Sony
BMG For Spyware Violations. Retrieved November 28, 2006.
20. "Sony sued over copy-protected CDs; Sony BMG is facing three lawsuits over its
controversial anti-piracy software", BBC News, November 10, 2005. Retrieved
November 22, 2006.
21. Information About XCP Protected CDs. Retrieved November 29, 2006.
22. Microsoft.com – Description of the Windows Genuine Advantage Notifications
application. Retrieved June 13, 2006.
23. Weinstein, Lauren. Windows XP update may be classified as 'spyware', Lauren
Weinstein's Blog, June 5, 2006. Retrieved June 13, 2006.
24. Evers, Joris. Microsoft's antipiracy tool phones home daily, CNET, June 7, 2006.
Retrieved August 31, 2014.
25. "Creator and Four Users of Loverspy Spyware Program Indicted". Department of
Justice. August 26, 2005. Archived from the original on November 19, 2013.
Retrieved November 21, 2014.
26. "Tracking Cookie". Symantec. Retrieved 2013-04-28.
27. ""CoolWebSearch". Parasite information database. Archived from the original on
January 6, 2006. Retrieved September 4, 2008.
28. Nicole Perlroth (August 30, 2012). "Software Meant to Fight Crime Is Used to Spy
on Dissidents". The New York Times. Retrieved August 31, 2012.
29. "GO Keyboard - Emoji keyboard, Swipe input, GIFs". GOMO Dev Team.
30. "GO Keyboard - Emoticon keyboard, Free Theme, GIF". GOMO Dev Team.
31. "Malicious behavior". Google.
32. "Virustotal detection". Betanews. September 21, 2017.
33. "PRIVACY and security". GOMO Dev Team.
34. "GO Keyboard spying warning". Betanews. September 21, 2017.
35. "CA Spyware Information Center – HuntBar". .ca.com. Archived from the
original on May 9, 2012. Retrieved September 11, 2010.
36. "What is Huntbar or Search Toolbar?". Pchell.com. Retrieved September
11, 2010.
37. ""InternetOptimizer". Parasite information database. Archived from the
original on January 6, 2006. Retrieved September 4, 2008.
38. Roberts, Paul F. "Spyware meets Rootkit Stealth". eweek.com. June 20, 2005.
39. "FTC, Washington Attorney General Sue to Halt Unfair Movieland
Downloads". Federal Trade Commission. August 15, 2006.
40. "Attorney General McKenna Sues Movieland.com and Associates for Spyware".
Washington State Office of the Attorney General. August 14, 2006.
41. "Complaint for Permanent Injunction and Other Equitable Relief (PDF, 25
pages)"(PDF). Federal Trade Commission. August 8, 2006.
42. "Anti-Spyware". Total Technology Resources. Retrieved 20 November 2017.
43. PCMAG, New Malware changes router settings, PC Magazine, June 13,
2008. Archived July 15, 2011, at the Wayback Machine.
44. Vossen, Roland (attributed); October 21, 1995; Win 95 Source code in c!! posted
to rec..programmer; retrieved from groups.google.com November 28, 2006.[dead link]
45. Wienbar, Sharon. "The Spyware Inferno". News.com. August 13, 2004.
46. Hawkins, Dana; "Privacy Worries Arise Over Spyware in Kids' Software". U.S.
News & World Report. June 25, 2000 Archived November 3, 2013, at the Wayback
Machine.
47. "AOL/NCSA Online Safety Study Archived December 13, 2005, at the Wayback
Machine.". America Online & The National Cyber Security Alliance. 2005.
48. Spanbauer, Scott. "Is It Time to Ditch IE?". Pcworld.com. September 1, 2004
49. Keizer, Gregg. "Analyzing IE At 10: Integration With OS Smart Or Not?". TechWeb
Technology News. August 25, 2005. Archived September 29, 2007, at the Wayback
Machine.
50. Edelman, Ben (2004). "Claria License Agreement Is Fifty Six Pages Long".
Retrieved July 27, 2005.
51. Edelman, Ben (2005). "Comparison of Unwanted Software Installed by P2P
Programs". Retrieved July 27, 2005.
52. ""WeatherBug". Parasite information database. Archived from the original on
February 6, 2005. Retrieved September 4, 2008.
53. "Adware.WildTangent". Sunbelt Malware Research Labs. June 12, 2008.
Retrieved September 4, 2008.[permanent dead link]
54. "Winpipe". Sunbelt Malware Research Labs. June 12, 2008. Retrieved September
4,2008. It is possible that this spyware is distributed with the adware bundler
WildTangent or from a threat included in that bundler.
55. "How Did I Get Gator?". PC Pitstop. Retrieved July 27, 2005.
56. "eTrust Spyware Encyclopedia – FlashGet". Computer Associates. Retrieved July
27, 2005. Archived May 5, 2007, at the Wayback Machine.
57. "Jotti's malware scan of FlashGet 3". Virusscan.jotti.org. Archived from the
original on March 23, 2010. Retrieved September 11, 2010.
58. VirusTotal scan of FlashGet 3.
59. "Jotti's malware scan of FlashGet 1.96". Virusscan.jotti.org. Archived from the
originalon May 10, 2011. Retrieved September 11, 2010.
60. VirusTotal scan of FlashGet 1.96.
61. Some caution is required since FlashGet 3 EULA makes mention of Third Party
Software, but does not name any third party producer of software. However, a scan
with SpyBot Search & Destroy, performed on November 20, 2009 after installing
FlashGet 3 did not show any malware on an already anti-spyware immunized system (by
SpyBot and SpywareBlaster).
62. "Gadgets boingboing.net, ''MagicJack's EULA says it will spy on you and force
you into arbitration''". Gadgets.boingboing.net. April 14, 2008. Retrieved September
11, 2010.
63. Roberts, Paul F. (May 26, 2005). "Spyware-Removal Program Tagged as a
Trap". eWeek. Retrieved September 4, 2008.
64. Howes, Eric L. "The Spyware Warrior List of Rogue/Suspect Anti-Spyware
Products & Web Sites". Retrieved July 10, 2005.
65. Also known as WinAntiVirusPro, ErrorSafe, SystemDoctor, WinAntiSpyware,
AVSystemCare, WinAntiSpy, Windows Police Pro, Performance Optimizer,
StorageProtector, PrivacyProtector, WinReanimator, DriveCleaner, WinspywareProtect,
PCTurboPro, FreePCSecure, ErrorProtector, SysProtect, WinSoftware, XPAntivirus,
Personal Antivirus, Home Antivirus 20xx, VirusDoctor, and ECsecure
66. Elinor Mills (April 27, 2010). "Google: Fake antivirus is 15 percent of all
malware". CNET. Retrieved 2011-11-05.
67. McMillan, Robert. Antispyware Company Sued Under Spyware Law. PC
World, January 26, 2006.
68. "Lawsuit filed against 180solutions Archived June 22, 2008, at the Wayback
Machine.". zdnet.com September 13, 2005
69. Hu, Jim. "180solutions sues allies over adware". news.com July 28, 2004
70. Coollawyer; 2001–2006; Privacy Policies, Terms and Conditions, Website
Contracts, Website Agreements; coollawyer.com. Retrieved November 28, 2006.
71. "CHAPTER 715 Computer Spyware and Malware Protection Archived April 6,
2012, at the Wayback Machine.". nxtsearch.legis.state.ia.us. Retrieved May 11, 2011.
72. Chapter 19.270 RCW: Computer spyware. apps.leg.wa.gov. Retrieved November
14, 2006.
73. Gross, Grant. US lawmakers introduce I-Spy bill. InfoWorld, March 16, 2007.
Retrieved March 24, 2007.
74. See Federal Trade Commission v. Sperry & Hutchinson Trading Stamp Co.
75. FTC Permanently Halts Unlawful Spyware Operations Archived November 2,
2013, at the Wayback Machine. (FTC press release with links to supporting documents);
see also FTC cracks down on spyware and PC hijacking, but not true lies, Micro Law,
IEEE MICRO (Jan.-Feb. 2005), also available at IEEE Xplore.
76. See Court Orders Halt to Sale of Spyware (FTC press release November 17, 2008,
with links to supporting documents).
77. OPTA, "Besluit van het college van de Onafhankelijke Post en Telecommunicatie
Autoriteit op grond van artikel 15.4 juncto artikel 15.10 van de Telecommunicatiewet
tot oplegging van boetes ter zake van overtredingen van het gestelde bij of krachtens
de Telecommunicatiewet" from November 5,
2007, http://opta.nl/download/202311+boete+verspreiding+ongewenste+software.pdf
78. "State Sues Major "Spyware" Distributor" (Press release). Office of New York
State Attorney General. April 28, 2005. Archived from the original on January 10, 2009.
Retrieved September 4, 2008. Attorney General Spitzer today sued one of the nation's
leading internet marketing companies, alleging that the firm was the source of
"spyware" and "adware" that has been secretly installed on millions of home
computers.
79. Gormley, Michael. "Intermix Media Inc. says it is settling spyware lawsuit with
N.Y. attorney general". Yahoo! News. June 15, 2005. Archived from the original on June
22, 2005.
80. Gormley, Michael (June 25, 2005). "Major advertisers caught in spyware
net". USA Today. Retrieved September 4, 2008.
81. Festa, Paul. "See you later, anti-Gators?". News.com. October 22, 2003.
82. "Gator Information Center". pcpitstop.com November 14, 2005.
83. "Initial LANrev System Findings" Archived June 15, 2010, at the Wayback
Machine., LMSD Redacted Forensic Analysis, L-3 Services – prepared for Ballard
Spahr(LMSD's counsel), May 2010. Retrieved August 15, 2010.
84. Doug Stanglin (February 18, 2010). "School district accused of spying on kids via
laptop webcams". USA Today. Retrieved February 19, 2010.
85. "Suit: Schools Spied on Students Via Webcam". CBS NEWS. March 8, 2010.
5.13.3 Botnet
5.13.3.1 Commentary
5.13.3.2 Reference
5.13.4 Keystroke logging
5.13.4.1 Commentary
Keylogging records the keys struck on a keyboard, or other human–computer
interaction, with the operator unaware of the monitoring.[1] Keyloggers also have
legitimate uses, such as troubleshooting technical problems with computers and
networks, and they range from hardware- and software-based approaches to acoustic
analysis. A keylogger's log file may be supplemented with screen captures tailored to
the target computer's software.[2] Keyloggers are not stopped by HTTPS encryption,
because HTTPS only protects data in transit between computers; the threat arises on
the user's own computer.
Keyloggers can be hypervisor-based (residing below the operating system), kernel-
based (resident in the operating system), API-based (hooking the keyboard APIs of an
application), operating-system-API-based (polling keyboard states or events),[3][4]
form-grabbing based (recording web form submissions), JavaScript-based (a script tag
injected into web pages),[5] memory-injection based (altering memory tables of browser
and system functions),[6] packet analyzers (capturing network traffic associated with
HTTP POSTs), and remote-access software keyloggers (local keyloggers allowing access
to locally recorded data from remote locations).[28][29][30][31][32][33]
Keyloggers are a recognised method for the study of writing processes,[7][8][9] e.g.
cognitive writing processes, research writing, educational domains such as second-
language learning, programming skills, and typing skills.
Software keyloggers are enhanced with features such as clipboard logging, screen
logging, programmatic capture of the text in a control,[10] and recording of every
program, folder and window opened, search-engine queries, instant-messenger
conversations, FTP transfers and other Internet-based activities.
Hardware-based keyloggers are independent of software, as they work at the hardware
level of a computer system. They appear as firmware-based keyloggers,[11] keyboard
hardware,[12] wireless keyboard and mouse sniffers,[13] keyboard overlays,[14]
acoustic keyloggers,[15][16] electromagnetic-emission capture,[17][18] optical
surveillance,[19][20] physical evidence and smartphone sensors.[21][22][23][24][25][26][27]
Trojans are used to install keyloggers deployed as malware,[34][35] and keyloggers
have been exploited by authorities to track criminal activity.[36][37][40]
Keyloggers are countered by a number of techniques, each corresponding to a specific
keylogging technique.
An anti-keylogger detects keyloggers by comparing the files on a computer against a
database of keylogger signatures, much as anti-virus software does.[38]
Keyloggers can be avoided by rebooting the computer from a read-only device and by
using automatic form-filler programs, one-time passwords, security tokens,[39]
keystroke-interference software,[41] speech recognition, handwriting recognition and
mouse gestures, and macro expanders/recorders (the sketch below shows why one-time
passwords help).
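One-time passwords blunt keyloggers because a captured code is worthless after a
single use. The following is a minimal sketch of the HMAC-based one-time password
construction of RFC 4226 (HOTP); the shared secret here is a placeholder:

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Even if a keylogger records this code, it is invalid for the next login,
    # because the server advances the counter after each successful use.
    print(hotp(b"shared-secret", counter=1))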
Non-technological methods include moving input fields around the screen[42] and
replacing parts of the typed text with dummy segments.
5.13.4.2 References
1. "Keylogger". Oxford dictionaries.
2. Keyloggers: How they work and how to detect them (Part 1), Secure List, "Today,
keyloggers are mainly used to steal user data relating to various online payment
systems, and virus writers are constantly writing new keylogger Trojans for this very
purpose."
3. "What is a Keylogger?". PC Tools.
4. Caleb Chen (2017-03-20). "Microsoft Windows 10 has a keylogger enabled by
default – here's how to disable it".
5. "The Evolution of Malicious IRC Bots" (PDF). Symantec. 2005-11-26: 23–24.
Retrieved 2011-03-25.
6. Jonathan Brossard (2008-09-03). "Bypassing pre-boot authentication passwords
by instrumenting the BIOS keyboard buffer (practical low level attacks against x86 pre-
boot authentication software)" (PDF). Iviz Technosolutions. Archived from the
original (PDF) on 2008-09-13. Retrieved 2008-09-23.
7. "Web-Based Keylogger Used to Steal Credit Card Data from Popular
Sites". Threatpost | The first stop for security news. 2016-10-06. Retrieved 2017-01-24.
8. "SpyEye Targets Opera, Google Chrome Users". Krebs on Security. Retrieved 26
April2011.
9. K.P.H. Sullivan & E. Lindgren (Eds., 2006), Studies in Writing: Vol. 18. Computer
Key-Stroke Logging and Writing: Methods and Applications. Oxford: Elsevier.
10. V. W. Berninger (Ed., 2012), Past, present, and future contributions of cognitive
writing research to cognitive psychology. New York/Sussex: Taylor &
Francis. ISBN 9781848729636
11. Vincentas (11 July 2013). "Keystroke Logging in SpyWareLoop.com". Spyware
Loop. Archived from the original on 7 December 2013. Retrieved 27 July 2013.
12. Microsoft. "EM_GETLINE Message()". Microsoft. Retrieved 2009-07-15.
13. "Apple keyboard hack". Apple keyboard hack. Digital Society. Archived from the
original on 26 August 2009. Retrieved 9 June 2011.
14. "Keylogger Removal". Keylogger Removal. SpyReveal Anti Keylogger. Archived
from the original on 29 April 2011. Retrieved 25 April 2011.
15. "Keylogger Removal". Keylogger Removal. SpyReveal Anti Keylogger.
Retrieved 26 February 2016.
16. Jeremy Kirk (2008-12-16). "Tampered Credit Card Terminals". IDG News Service.
Retrieved 2009-04-19.
17. Andrew Kelly (2010-09-10). "Cracking Passwords using Keyboard Acoustics and
Language Modeling" (PDF).
18. Sarah Young (14 September 2005). "Researchers recover typed text using audio
recording of keystrokes". UC Berkeley NewsCenter.
19. "Remote monitoring uncovered by American techno activists". ZDNet. 2000-10-
26. Retrieved 2008-09-23.
20. Martin Vuagnoux and Sylvain Pasini (2009-06-01). "Compromising
Electromagnetic Emanations of Wired and Wireless Keyboards". Lausanne: Security and
Cryptography Laboratory (LASEC).
21. "ATM camera". snopes.com. Retrieved 2009-04-19. External link in |
publisher=(help)
22. Maggi, Federico; Volpatto, Alberto; Gasparini, Simone; Boracchi, Giacomo;
Zanero, Stefano (2011). A fast eavesdropping attack against touchscreens (PDF). 7th
International Conference on Information Assurance and Security.
IEEE. doi:10.1109/ISIAS.2011.6122840.
23. Marquardt, Philip; Verma, Arunabh; Carter, Henry; Traynor, Patrick
(2011). (sp)iPhone: decoding vibrations from nearby keyboards using mobile phone
accelerometers. Proceedings of the 18th ACM conference on Computer and
communications security. ACM. pp. 561–562. doi:10.1145/2046707.2046771.
24. "iPhone Accelerometer Could Spy on Computer Keystrokes". Wired. 19 October
2011. Retrieved August 25, 2014. External link in |publisher= (help)
25. Owusu, Emmanuel; Han, Jun; Das, Sauvik; Perrig, Adrian; Zhang, Joy
(2012). ACCessory: password inference using accelerometers on smartphones.
Proceedings of the Thirteenth Workshop on Mobile Computing Systems and
Applications. ACM. doi:10.1145/2162081.2162095.
26. Aviv, Adam J.; Sapp, Benjamin; Blaze, Matt; Smith, Jonathan M.
(2012). Practicality of accelerometer side channels on smartphones. Proceedings of the
28th Annual Computer Security Applications Conference.
ACM. doi:10.1145/2420950.2420957.
27. Cai, Liang; Chen, Hao (2011). TouchLogger: inferring keystrokes on touch screen
from smartphone motion (PDF). Proceedings of the 6th USENIX conference on Hot
topics in security. USENIX. Retrieved 25 August 2014.
28. Xu, Zhi; Bai, Kun; Zhu, Sencun (2012). TapLogger: inferring user inputs on
smartphone touchscreens using on-board motion sensors. Proceedings of the fifth ACM
conference on Security and Privacy in Wireless and Mobile Networks. ACM. pp. 113–
124. doi:10.1145/2185448.2185465.
29. Miluzzo, Emiliano; Varshavsky, Alexander; Balakrishnan, Suhrid; Choudhury,
Romit Roy (2012). Tapprints: your finger taps have fingerprints. Proceedings of the 10th
international conference on Mobile systems, applications, and services. ACM. pp. 323–
336. doi:10.1145/2307636.2307666.
30. Spreitzer, Raphael (2014). PIN Skimming: Exploiting the Ambient-Light Sensor in
Mobile Devices. Proceedings of the 4th ACM Workshop on Security and Privacy in
Smartphones & Mobile Devices. ACM. pp. 51–62. doi:10.1145/2666620.2666622.
31. "The Security Digest Archives". Retrieved 2009-11-22.
32. "Soviet Spies Bugged World's First Electronic Typewriters". qccglobal.com.
33. Geoffrey Ingersoll. "Russia Turns To Typewriters To Protect Against Cyber
Espionage". 2013.
34. Sharon A. Maneki. "Learning from the Enemy: The GUNMAN Project". 2012.
35. Agence France-Presse, Associated Press. "Wanted: 20 electric typewriters for
Russia to avoid leaks". inquirer.net.
36. Anna Arutunyan. "Russian security agency to buy typewriters to avoid
surveillance"Archived 2013-12-21 at the Wayback Machine..
37. Young, Adam; Yung, Moti (1997). "Deniable Password Snatching: On the
Possibility of Evasive Electronic Espionage". Proceedings of IEEE Symposium on
Security and Privacy. IEEE: 224–235. doi:10.1109/SECPRI.1997.601339.
38. Young, Adam; Yung, Moti (1996). "Cryptovirology: extortion-based security
threats and countermeasures". Proceedings of IEEE Symposium on Security and
Privacy. IEEE: 129–140. doi:10.1109/SECPRI.1996.502676.
39. John Leyden (2000-12-06). "Mafia trial to test FBI spying tactics: Keystroke
logging used to spy on mob suspect using PGP". The Register. Retrieved 2009-04-19.
40. John Leyden (2002-08-16). "Russians accuse FBI Agent of Hacking". The
Register.
41. Alex Stim (2015-10-28). "3 methods to disable Windows 10 built-in Spy
Keylogger".
42. Theron, kristen (19 February 2016). "What is Anti Keylogger".
43. Austin Modine (2008-10-10). "Organized crime tampers with European card swipe
devices". The Register. Retrieved 2009-04-18.
44. Scott Dunn (2009-09-10). "Prevent keyloggers from grabbing your passwords".
Windows Secrets. Retrieved 2014-05-10.
45. Christopher Ciabarra (2009-06-10). "Anti Keylogger". Networkintercept.com.
Archived from the original on 2010-06-26.
46. Cormac Herley and Dinei Florencio (2006-02-06). "How To Login From an Internet
Cafe Without Worrying About Keyloggers" (PDF). Microsoft Research. Retrieved 2008-
09-23.
5.13.5 Form grabbing
5.13.5.1 Content
5.13.5.1.1 Introduction
Form grabbing retrieves authorization and login credentials from a web form before the
data is passed over the Internet to a secure server, thereby bypassing HTTPS
encryption; it is more effective than a keylogger because it acquires credentials even
when they are entered via a virtual keyboard, auto-fill, or copy and paste.[1] It
identifies the data of interest by the form's variable names.[2]
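The role of variable names can be made concrete: before the browser applies TLS, form
data exists as plain name/value pairs, and a grabber hooked into the browser simply
picks out fields whose names look valuable. This sketch only parses a sample POST
body; the field names are illustrative, not taken from any real malware:

    from urllib.parse import parse_qs

    # A form submission as it exists inside the browser, before encryption.
    post_body = "username=alice&passwd=s3cret&remember=on"

    # Hypothetical list of field names a grabber might target.
    INTERESTING = {"username", "passwd", "password", "pin"}

    fields = parse_qs(post_body)
    captured = {name: values[0] for name, values in fields.items()
                if name in INTERESTING}
    print(captured)  # {'username': 'alice', 'passwd': 's3cret'}

This is also why the countermeasures listed later focus on keeping the malware off the
machine rather than on transport encryption.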
5.13.5.1.2 History
Form grabbing is based on trojans; its development over time is described in [3][4][5][6].
5.13.5.1.3 Countermeasures
Countermeasures include antivirus software,[1] limiting user privileges, and firewall
lists of known malicious servers.[2]
5.13.5.2 References
1. "Capturing Online Passwords and Antivirus." Web log post. Business Information
Technology Services, 24 July 2013.
2. Graham, James, Richard Howard, and Ryan Olson. Cyber Security Essentials. Auerbach
Publications, 2011. Print.
3. Shevchenko, Sergei. "Downloader.Berbew." Symantec, 13 Feb. 2007.
4. Abrams, Lawrence. "CryptoLocker Ransomware Information Guide and FAQ." Bleeping
Computer. 20 Dec. 2013.
5. "Form Grabbing." Web log post. Rochester Institute of Technology, 10 Sept. 2011.
6. Kruse, Peter. "Crimekit for MacOSX Launched." Web log post. Canadian Security
Intelligence Service, 02 May 2011.
5.13.6 Web Threats
5.13.6.1 Commentary
5.13.6.1.1 Introduction
A web threat is a threat that uses the Web to carry out cybercrime, employing malware
and fraud delivered via Internet protocols and components. Cybercriminals profit by
stealing data for later sale and by infecting networks of machines. Web threats create
risks such as financial damage, identity theft, loss of confidential information or data,
theft of network resources, damaged brand or personal reputation, and erosion of
consumer confidence in e-commerce and online banking.[1][2][3]
5.13.6.1.2 Delivery methods
Web threats[4] arrive via spam, phishing, or websites that collect data and/or dispense
malware. Some use phishing, DNS poisoning and other means of appearing to be a
trusted source, while others apply social engineering to persuade the recipient to open
an email and follow links or attachments crafted for malevolent purposes. Any website
can be infected without the knowledge of its users or owner.
5.13.6.1.3 Growth of web threats
Web threats have increased alongside the growth of the web[5] and of the scripting
languages it uses.[6] Examples are documented in [7][8].
5.13.6.1.4 Prevention and detection
Web threats can only be dealt with by multi-layered protection, i.e. protection in the
cloud, at the Internet gateway, across network servers and on the client.
5.13.6.2 References
1. Cortada, James W. (2003-12-04). The Digital Hand: How Computers Changed the Work
of American Manufacturing, Transportation, and Retail Industries. USA: Oxford
University Press. p. 512. ISBN 0-19-516588-8.
2. Cortada, James W. (2005-11-03). The Digital Hand: Volume II: How Computers Changed
the Work of American Financial, Telecommunications, Media, and Entertainment
Industries. USA: Oxford University Press. ISBN 978-0-19-516587-6.
3. Cortada, James W. (2007-11-06). The Digital Hand, Vol 3: How Computers Changed the
Work of American Public Sector Industries. USA: Oxford University Press.
p. 496. ISBN 978-0-19-516586-9.
4. Trend Micro (2008) Web Threats: Challenges and Solutions
from http://us.trendmicro.com/imperia/md/content/us/pdf/webthreats/wp01_webthreats_
080303.pdf
5. Maone, Giorgio (2008) Malware 2.0 is Now!
from http://hackademix.net/2008/01/12/malware-20-is-now/
6. Horwath, Fran (2008) Web 2.0: next-generation web threats from http://www.it-
director.com/technology/security/content.php?cid=10162
7. Naraine, Ryan (2008) Business Week site hacked, serving drive-by exploits
from http://blogs.zdnet.com/security/?p=1902#more-1902
8. Danchev, Dancho (2008) Compromised Web Servers Serving Fake Flash Players
from http://ddanchev.blogspot.com/2008/08/compromised-web-servers-serving-fake.html
5.13.7 Fraudulent dialer
5.13.7.1 Commentary
5.13.7.1.1 Introduction
Fraudulent dialers connect to premium-rate numbers by exploiting security holes in an
operating system. Some dialers tell the user that special content is available only
through the premium number. Setting up such a service is cheap, so premium-rate calls
are very profitable with few overheads. Users with broadband connections are usually
unaffected, because a DSL network uses no regular phone number and no dial-up
modem, whereas an ISDN adapter or analog modem can be subverted.
Malicious dialers can be identified by the following traits:
•A popup opens when new websites are visited.
•The price, if shown at all, appears only as a small hint.
•Downloads ignore the cancel button.
•The dialer installs itself as the default connection without notice.
•The dialer creates unwanted connections without user interaction.
•No notice of the price is given before dialing.
•The connection is charged at a premium rate.
•The dialer cannot be uninstalled.
5.13.7.1.2 Installation routes
Malicious dialers are loaded through links in spam email or on compromised web pages.
5.13.7.1.3 German regulatory law
The law contains the following regulations:
•Forced price notices for service providers.
•Maximum price limits, legitimacy checks and automatic disconnects.
•Registration of dialers.
•Blocking of dialers.
•Right of information for consumers from the RegTP.
5.13.7.2 References
1. Krogue, Ken. "Inside Sales Jobs and Career Demand Up 54%: But Most Leverage Comes
With Dialer Software and Lead Research". Forbes. Forbes.com. Retrieved 2013-04-24.
5.13.8 Malbot
5.13.8.1 Commentary
5.13.8.1.1 Introduction
Bots coordinate and operate automated attacks on networked computers, such as
denial-of-service attacks and click fraud.[1] A spambot is an Internet bot that sends
spam onto the Internet.[2]
Malicious bots include:
a)spambots that harvest email addresses from contact or guestbook pages
b)downloader programs that flood a channel
c)website scrapers that grab website content and re-use it to generate doorway pages
d)viruses and worms
e)DDoS attacks
f)botnets, zombie computers, etc.
g)bots that buy up good seats for concerts and similar events[5]
h)bots that farm resources in massively multiplayer online role-playing games to save
time or effort
i)bots that inflate view counts for YouTube videos
j)bots that inflate traffic counts on analytics reports to draw money from advertisers[6][7]
k)bots that post inflammatory or nonsensical messages to disrupt a forum and anger its users.
A common anti-bot technique is the CAPTCHA, a form of Turing test that uses
graphically encoded, human-readable text.
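A minimal sketch of the idea, assuming the Pillow imaging library is available: render a
random code as an image that a human can read back but a naive bot cannot parse as
text (production CAPTCHAs add distortion and noise to resist OCR):

    import secrets
    import string
    from PIL import Image, ImageDraw  # Pillow

    def make_captcha(path: str, length: int = 5) -> str:
        """Render a random code into an image; return the expected answer."""
        alphabet = string.ascii_uppercase + string.digits
        code = "".join(secrets.choice(alphabet) for _ in range(length))
        image = Image.new("RGB", (30 * length, 40), "white")
        ImageDraw.Draw(image).text((10, 12), code, fill="black")
        image.save(path)
        return code

    expected = make_captcha("captcha.png")
    answer = input("Type the characters shown in captcha.png: ")
    print("accepted" if answer.strip().upper() == expected else "rejected")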
5.13.8.2 References
1. Dunham, Ken; Melnick, Jim (2008). Malicious Bots: An Inside Look into the Cyber-
Criminal Underground of the Internet. CRC Press. ISBN 9781420069068.
2. Zeifman, Igal. "Bot Traffic Report 2016". Incapsula. Retrieved 1 February 2017.
3. "Touch Arcade Forum Discussion on fraud in the Top 25 Free Ranking".
4. "App Store fake reviews: Here's how they encourage your favourite developers to
cheat". Electricpig.
5. Safruti, Ido. "Why Detecting Bot Attacks Is Becoming More Difficult". DARKReading.
6. Holiday, Ryan. "Fake Traffic Means Real Paydays". BetaBeat.
7. von Lipinski, Percy (28 May 2013). "CNN's iReport hit hard by pay-per-view scandal".
PulsePoint. Retrieved 21 July 2016.
5.13.9 Scareware
5.13.9.1 Commentary
5.13.9.1.1 Introduction
Scareware uses social engineering to cause shock, anxiety, or the perception of a
threat in order to make users purchase unneeded software. It includes rogue security
software, ransomware and other scam software that makes users believe their system
is infected and suggests a paid remedy.[1][2][3][4]
5.13.9.1.2 Scam scareware
Scam scareware produces frivolous and alarming warnings or threat notices, typically
for fictitious or useless commercial firewall and registry-cleaner software, barraging
the user with constant warning messages that do not increase the software's
effectiveness in any way.[5][6] It may prompt the download of fictitious software[7][8][9]
[10][11] and can show up as spyware.[12][13][14][15][16] Other antispyware scareware
may be promoted through a phishing scam. Another approach is to trick the user into
disabling legitimate security software.[17]
5.13.9.1.3 Legal action
Legal action has been taken successfully against scareware makers.[18][19][20][21]
[22][23]
5.13.9.1.4 Prank software
Prank scareware frightens the user with unanticipated shocking images, sounds or
video.[24][25]
5.13.9.2 References
1. "Millions tricked by 'scareware'". BBC News. 2009-10-19. Retrieved 2009-10-20.
2. 'Scareware' scams trick searchers. BBC News (2009-03-23). Retrieved on 2009-
03-23.
3. "Scareware scammers adopt cold call tactics". The Register. 2009-04-10.
Retrieved 2009-04-12.
4. Phishing Activity Trends Report: 1st Half 2009
5. John Leyden (2009-10-20). "Scareware Mr Bigs enjoy 'low risk' crime bonanza".
The Register. Retrieved 2009-10-21.
6. Carine Febre (2014-10-20). "Fake Warning Example". Carine Febre.
Retrieved 2014-11-21.
7. "Symantec Security Response: Misleading Applications". Symantec. 2007-08-31.
Retrieved 2010-04-15.
8. JM Hipolito (2009-06-04). "Air France Flight 447 Search Results Lead to Rogue
Antivirus". Trend Micro. Retrieved 2009-06-06.
9. Moheeb Abu Rajab and Luca Ballard (2010-04-13). "The Nocebo Effect on the
Web: An Analysis of Fake Anti-Virus Distribution" (PDF). Google. Retrieved 2010-11-18.
10. "Mass 'scareware' attack hits 1.5M websites, still spreading". On Deadline. April
1, 2011.
11. "Malicious Web attack hits a million site addresses". Reuters.com. April 1, 2011.
12. "Google to Warn PC Virus Victims via Search Site". BBC News. 2011-07-21.
Retrieved 2011-07-22.
13. "Smart Fortress 2012"
14. "bugs on the screen". Windows Sysinternals.
15. Vincentas (11 July 2013). "Scareware in SpyWareLoop.com". Spyware Loop.
Retrieved 27 July 2013.
16. spywarewarrior.com filed under "Brave Sentry."
17. theregister.co.uk
18. Etengoff, Aharon (2008-09-29). "Washington and Microsoft target spammers". The
Inquirer. Retrieved 2008-10-04.
19. Tarun (2008-09-29). "Microsoft to sue scareware security vendors". Lunarsoft.
Retrieved 2009-09-24. [...] the Washington attorney general (AG) [...] has also brought
lawsuits against companies such as Securelink Networks and High Falls Media, and the
makers of a product called QuickShield, all of whom were accused of marketing their
products using deceptive techniques such as fake alert messages.
20. "Fighting the scourge of scareware". BBC News. 2008-10-01. Retrieved 2008-10-
02.
21. "Win software". Federal Trade Commission.
22. "Wanted by the FBI - SHAILESHKUMAR P. JAIN". FBI.
23. "D'Souza Final Order" (PDF). Federal Trade Commission.
24. Contents of disk #448. Amiga-stuff.com - see DISK 448.
25. Dark Drive Prank
26. O’Dea, Hamish (2009-10-16). "The Modern Rogue – Malware With a Face".
Australia: Microsoft.
5.13.10 Rogue security software
5.13.10.1 Commentary
5.13.10.1.1 Introduction
Rogue security software misinforms users that there is a virus on their computer in
order to make them pay for a fake removal tool that actually installs malware. It is a
form of scareware and, increasingly, of ransomware.[1][2]
5.13.10.1.2 Propagation
Rogue security software uses social engineering (fraud) to defeat the security built into
operating systems and browsers,[2] e.g. by displaying a false infection warning on a
website and leading to the installation or purchase of scareware, with the Trojan
component disguised as:
• A browser plug-in or extension (typically a toolbar)
• An image, screensaver or archive file attached to an e-mail message
• A multimedia codec required to play a certain video clip
• Software shared on peer-to-peer networks[3]
• A free online malware-scanning service[4]
Some rogue software spreads between computers as a drive-by download, using
weaknesses in browsers, file viewers or email clients to install without user action.[3][5]
More recently, distribution has prioritised placing infected URLs in search-engine
results.[6][7][8][9] Cold-calling is also used for distribution, with callers claiming to be
from a support organisation.[10]
5.13.10.1.3 Common infections
Common infections follow these strategies:
• Search-engine optimization tricks the engine into displaying malicious URLs in
search results: malicious webpages are stuffed with popular keywords, e.g. taken from
Google Trends, to reach a top ranking in the results.[11][12]
• Websites carrying third-party advertising services can be infected so that all the
sites they serve are contaminated.[13]
• Spam messages with malicious attachments, links to binaries and drive-by
download sites are other ways of spreading rogue software.[14]
5.13.10.1.4 Operation
After installation, the software tries to get the user to buy a service or extra software
by:
• Warning the user of the fake discovery of malware or pornography.[15]
• Showing a simulated system crash.[2]
• Preventing removal of the malware.
• Installing actual malware and then "detecting" it.
• Changing system registries and security settings, then alerting the user.
• Offering to fix performance problems on the computer.[15]
• Showing pop-up warnings and security alerts that mimic system notices.[16][2]
Rogue security software draws on the same profitable business model as spyware and
adware distribution, against which the FTC and anti-malware vendors have acted.[17]
[18][19][20][21]
5.13.10.1.5 Countermeasures
Legal action was slow to respond to rogue software, pursuing only well-established
crimes, although forums published lists of dangerous products.[22][23][24] Eventually
antivirus vendors absorbed the job,[25][26] and police developed policies to counteract
rogueware.[27]
5.13.10.2 References
1. "Symantec Report on Rogue Security Software" (PDF). Symantec. 2009-10-28.
Retrieved 2010-04-15.
2. "Microsoft Security Intelligence Report volume 6 (July - December
2008)". Microsoft. 2009-04-08. p. 92. Retrieved 2009-05-02.
3. Doshi, Nishant (2009-01-19), Misleading Applications – Show Me The
Money!, Symantec, retrieved 2016-03-22
4. Doshi, Nishant (2009-01-21), Misleading Applications – Show Me The Money! (Part
2), Symantec, retrieved 2016-03-22
5. "News Adobe Reader and Acrobat Vulnerability". blogs.adobe.com. Retrieved 25
November 2010.
6. Chu, Kian; Hong, Choon (2009-09-30), Samoa Earthquake News Leads To Rogue
AV, F-Secure, retrieved 2010-01-16
7. Hines, Matthew (2009-10-08), Malware Distributors Mastering News SEO, eWeek,
retrieved 2010-01-16
8. Raywood, Dan (2010-01-15), Rogue anti-virus prevalent on links that relate to
Haiti earthquake, as donors encouraged to look carefully for genuine sites, SC
Magazine, retrieved 2010-01-16
9. Moheeb Abu Rajab and Luca Ballard (2010-04-13). "The Nocebo Effect on the
Web: An Analysis of Fake Anti-Virus Distribution" (PDF). Google. Retrieved 2010-11-18.
10. "Warning over anti-virus cold-calls to UK internet users". BBC News. Retrieved 7
March2012.
11. "Sophos Technical Papers - Sophos SEO Insights". sophos.com.
12. "Sophos Fake Antivirus Journey from Trojan tpna" (PDF).
13. "Sophos Fake Antivirus Journey from Trojan tpna" (PDF).
14. "Sophos Fake Antivirus Journey from Trojan tpna" (PDF).
15. "Free Security Scan" Could Cost Time and Money, Federal Trade Commission,
2008-12-10, retrieved 2009-05-02
16. "SAP at a crossroads after losing $1.3B verdict". Yahoo! News. 24 November
2010. Retrieved 25 November 2010.
17. Testimony of Ari Schwartz on "Spyware" (PDF), Senate Committee on Commerce,
Science, and Transportation, 2005-05-11
18. Leyden, John (2009-04-11). "Zango goes titsup: End of desktop adware market".
The Register. Retrieved 2009-05-05.
19. Cole, Dave (2006-07-03), Deceptonomics: A Glance at The Misleading Application
Business Model, Symantec, retrieved 2016-03-22
20. Doshi, Nishant (2009-01-27), Misleading Applications – Show Me The Money! (Part
3), Symantec, retrieved 2016-03-22
21. Stewart, Joe. "Rogue Antivirus Dissected - Part 2". Secureworks.com.
SecureWorks. Retrieved 9 March 2016.
22. Rogue security software
23. "Spyware Warrior: Rogue/Suspect Anti-Spyware Products & Web
Sites". spywarewarrior.com.
24. "Virus, Spyware, & Malware Removal Guides". BleepingComputer.
25. Ex Parte Temporary Restraining Order RDB08CV3233 (PDF), United States
District Court for the District of Maryland, 2008-12-03, retrieved 2009-05-02
26. Lordan, Betsy (2008-12-10), Court Halts Bogus Computer Scans, Federal Trade
Commission, retrieved 2009-05-02
27. Krebs, Brian (2009-03-20), "Rogue Antivirus Distribution Network
Dismantled", Washington Post, retrieved 2009-05-02
28. Howes, Eric L (2007-05-04), iSynergy Media Seo Services Guest posting Services,
retrieved 2009-05-02
5.13.11 Ransomware
5.13.11.1 Commentary
5.13.11.1.1 Introduction
Ransomware threatens to expose a victim's data or block access to it unless
a ransom is paid. A simple locking scheme can be easy for an expert to reverse, but advanced
malware uses cryptoviral extortion, encrypting the victim's files to make them
inaccessible.[1][2][3][4] The latter makes recovery impossible without the
decryption key. Ransomware attacks use Trojans masked as legitimate data to get the
user to download or open an email attachment.[5][6][7] The technique is documented in [8][9][10].
5.13.11.1.2 Operation
The concept of file-encrypting ransomware originated with Young and Yung, who
called it cryptoviral extortion.[11] It works as a three-round protocol carried out
between the attacker and the victim:[1][12][13][14]
• [attacker→victim] The attacker generates a key pair and places the
corresponding public key in the malware, which is then released.
• [victim→attacker] The malware generates a random symmetric key and encrypts
the victim's data with it, then uses the public key embedded in the malware to encrypt the symmetric
key. It zeroizes the symmetric key and the original plaintext data to prevent recovery, and shows the user a
message containing the asymmetric ciphertext and the ransom demand. The victim sends
the asymmetric ciphertext and e-money to the attacker.
• [attacker→victim] The attacker receives the payment, deciphers the asymmetric
ciphertext with the attacker's private key, and sends the symmetric key to the victim.
The victim deciphers the encrypted data with the recovered symmetric key, thereby
completing the cryptovirology attack.
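The asymmetry in this protocol is easiest to see in code. The sketch below is a minimal illustration in Python using the pyca/cryptography package; the Fernet and RSA-OAEP choices, and all names, are assumptions made for illustration rather than details of any real sample. It shows why, once the symmetric key is zeroized, only the holder of the attacker's private key can recover the data.

# Hybrid (public-key + symmetric) encryption, the primitive that cryptoviral
# extortion abuses. Requires the pyca/cryptography package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Attacker side, done once offline: only the public half ships in the malware.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Victim side: a fresh random symmetric key encrypts the data...
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"victim's plaintext data")

# ...and the embedded public key encrypts ("wraps") the symmetric key. After
# sym_key and the plaintext are zeroized, only this asymmetric ciphertext
# remains on the machine.
wrapped_key = public_key.encrypt(sym_key, OAEP)

# Attacker side, on payment: unwrap the symmetric key and return it.
recovered_key = private_key.decrypt(wrapped_key, OAEP)
assert Fernet(recovered_key).decrypt(ciphertext) == b"victim's plaintext data"

The design point is that the private key never touches the victim's machine, so recovery without payment is computationally infeasible unless the implementation is flawed (as with CryptoDefense's accessible key[23]).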
Some payloads simply lock or restrict the system until payment is made,[15]
[16] while others encrypt the victim's files so that only the malware author holds the
decryption key;[1][17][18] in either case the malware demands money to reverse the damage.
[19][5][20][21][22][23][24][25]
5.13.11.1.3 History
The history of encrypting ransomware is documented in [26][27][1][11][28][29][30][31]
[32][33][34][23][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49]. Non-
encrypting ransomware history is given in [14][50][12][51][52][53].
5.13.11.1.4 Leakware (Doxware)
Leakware (doxware) is a ransomware attack that threatens to release stolen information
from the victim's data unless the ransom is paid.[54][55]
5.13.11.1.5 Mobile ransomware
Ransomware aimed at mobile devices mostly consists of blockers,[56][57] which
display a blocking message on top of other apps,[57] use clickjacking,[58] lock access to
the device,[59] and exploit security holes.[60]
5.13.11.1.6 Examples
Examples of ransomware are described in [61][62][63][5][64][65][5][13][66][6][7][64][67]
[68][22][69][70][71][72][73][74][75][9][76][77][78][79][80][35][81][82][83][10][84][85][86]
[87][88][89][90][91][92][93][94][95][96][97][98][99][100][101][102][103][104][105][22]
[106][107][108][109][110][111][112][113][114][115][116][117][118].
5.13.11.1.7 Freedom of speech challenges and criminal punishment
Researchers develop proof-of-concept attack code and vulnerability analyses, which
provide definitions of threats, prioritization of issues, and countermeasures. Lawyers and police want
to make ransomware illegal.[119][120][121][122]
5.13.11.2 References
1. Young, A.; M. Yung (1996). Cryptovirology: extortion-based security threats and
countermeasures. IEEE Symposium on Security and Privacy. pp. 129–
140. doi:10.1109/SECPRI.1996.502676. ISBN 0-8186-7417-2.
2. Jack Schofield (28 July 2016). "How can I remove a ransomware infection?". The
Guardian. Retrieved 28 July 2016.
3. Michael Mimoso (28 March 2016). "Petya Ransomware Master File Table
Encryption". threatpost.com. Retrieved 28 July 2016.
4. Justin Luna (September 21, 2016). "Mamba ransomware encrypts your hard
drive, manipulates the boot process". Neowin. Retrieved 5 November 2016.
5. Dunn, John E. "Ransom Trojans spreading beyond Russian heartland".
TechWorld. Retrieved 10 March 2012.
6. "New Internet scam: Ransomware.." FBI. 9 August 2012.
7. "Citadel malware continues to deliver Reveton ransomware.." Internet Crime
Complaint Center (IC3). 30 November 2012.
8. "Update: McAfee: Cyber criminals using Android malware and ransomware the
most". InfoWorld. Retrieved 16 September 2013.
9. "Cryptolocker victims to get files back for free". BBC News. 6 August 2014.
Retrieved 18 August 2014.
10. "FBI says crypto ransomware has raked in >$18 million for cybercriminals". Ars
Technica. Retrieved 25 June 2015.
11. Young, Adam L.; Yung, Moti (2017). "Cryptovirology: The Birth, Neglect, and
Explosion of Ransomware". 60 (7). Communications of the ACM: 24–26. Retrieved 27
June 2017.
12. "Ransomware squeezes users with bogus Windows activation demand".
Computerworld. Retrieved 9 March 2012.
13. "Police warn of extortion messages sent in their name". Helsingin Sanomat.
Retrieved 9 March 2012.
14. McMillian, Robert. "Alleged Ransomware Gang Investigated by Moscow Police".
PC World. Retrieved 10 March 2012.
15. "Ransomware: Fake Federal German Police (BKA) notice". SecureList (Kaspersky
Lab). Retrieved 10 March 2012.
16. "And Now, an MBR Ransomware". SecureList (Kaspersky Lab). Retrieved 10
March2012.
17. Adam Young (2005). Zhou, Jianying; Lopez, Javier, eds. "Building a Cryptovirus
Using Microsoft's Cryptographic API". Information Security: 8th International
Conference, ISC 2005. Springer-Verlag. pp. 389–401.
18. Young, Adam (2006). "Cryptoviral Extortion Using Microsoft's Crypto API: Can
Crypto APIs Help the Enemy?". International Journal of Information Security. Springer-
Verlag. 5 (2): 67–76. doi:10.1007/s10207-006-0082-7.
19. Danchev, Dancho (22 April 2009). "New ransomware locks PCs, demands
premium SMS for removal". ZDNet. Retrieved 2 May 2009.
20. "Ransomware plays pirated Windows card, demands $143". Computerworld.
Retrieved 9 March 2012.
21. Cheng, Jacqui (18 July 2007). "New Trojans: give us $300, or the data gets it!".
Ars Technica. Retrieved 16 April 2009.
22. "You're infected—if you want to see your data again, pay us $300 in
Bitcoins". Ars Technica. Retrieved 23 October 2013.
23. "CryptoDefense ransomware leaves decryption key accessible". Computerworld.
IDG. Retrieved 7 April 2014.
24. "What to do if Ransomware Attacks on your Windows Computer?". Techie Motto.
Retrieved 25 April 2016.
25. Parker, Luke (9 June 2016). "Large UK businesses are holding bitcoin to pay
ransoms". Retrieved 9 June 2016.
26. Kassner, Michael. "Ransomware: Extortion via the Internet". TechRepublic.
Retrieved 10 March 2012.
27. Sebastiaan von Solms; David Naccache. "On Blind 'Signatures and Perfect
Crimes"(PDF). Pdfs.sematicscholar.org. Retrieved 25 October 2017.
28. Schaibly, Susan (26 September 2005). "Files for ransom". Network World.
Retrieved 17 April 2009.
29. Leyden, John (24 July 2006). "Ransomware getting harder to break". The
Register. Retrieved 18 April 2009.
30. Naraine, Ryan (6 June 2008). "Blackmail ransomware returns with 1024-bit
encryption key". ZDNet. Retrieved 3 May 2009.
31. Lemos, Robert (13 June 2008). "Ransomware resisting crypto cracking
efforts". SecurityFocus. Retrieved 18 April 2009.
32. Krebs, Brian (9 June 2008). "Ransomware Encrypts Victim Files with 1,024-Bit
Key". The Washington Post. Retrieved 16 April 2009.
33. "Kaspersky Lab reports a new and dangerous blackmailing virus". Kaspersky
Lab. 5 June 2008. Retrieved 11 June 2008.
34. Violet Blue (22 December 2013). "CryptoLocker's crimewave: A trail of millions in
laundered Bitcoin". ZDNet. Retrieved 23 December 2013.
35. "Encryption goof fixed in TorrentLocker file-locking malware". PC World.
Retrieved 15 October 2014.
36. "Cryptolocker 2.0 – new version, or copycat?". WeLiveSecurity. ESET.
Retrieved 18 January 2014.
37. "New CryptoLocker Spreads via Removable Drives". Trend Micro. Retrieved 18
January2014.
38. "Synology NAS devices targeted by hackers, demand Bitcoin ransom to decrypt
files". ExtremeTech. Ziff Davis Media. Retrieved 18 August 2014.
39. "File-encrypting ransomware starts targeting Linux web servers". PC World. IDG.
Retrieved 31 May 2016.
40. "Cybercriminals Encrypt Website Databases in "RansomWeb"
Attacks". SecurityWeek. Retrieved 31 May 2016.
41. "Hackers holding websites to ransom by switching their encryption keys". The
Guardian. Retrieved 31 May 2016.
42. "The new .LNK between spam and Locky
infection". Blogs.technet.microsoft.com. Retrieved 25 October 2017.
43. Muncaster, Phil (13 April 2016). "PowerShell Exploits Spotted in Over a Third of
Attacks".
44. "Locky Ransomware Has Evolved—The Dangers of PowerShell
Scripting". Sentinelone.com. Retrieved 24 May 2017.
45. "New ransomware employs Tor to stay hidden from security". The Guardian.
Retrieved 31 May 2016.
46. "The current state of ransomware: CTB-Locker". Sophos Blog. Sophos.
Retrieved 31 May 2016.
47. Brook, Chris (4 June 2015). "Author Behind Ransomware Tox Calls it Quits, Sells
Platform". Retrieved 6 August 2015.
48. Dela Paz, Roland (29 July 2015). "Encryptor RaaS: Yet another new Ransomware-
as-a-Service on the Block". Retrieved 6 August 2015.
49. "Symantec classifies ransomware as the most dangerous cyber threat – Tech2".
2016-09-22. Retrieved 2016-09-22.
50. Leyden, John. "Russian cops cuff 10 ransomware Trojan suspects". The Register.
Retrieved 10 March 2012.
51. "Criminals push ransomware hosted on GitHub and SourceForge pages by
spamming 'fake nude pics' of celebrities". TheNextWeb. Retrieved 17 July 2013.
52. "New OS X malware holds Macs for ransom, demands $300 fine to the FBI for
'viewing or distributing' porn". TheNextWeb. Retrieved 17 July 2013.
53. "Man gets ransomware porn pop-up, goes to cops, gets arrested on child porn
charges". Ars Technica. Retrieved 31 July 2013.
54. Young, A. (2003). Non-Zero Sum Games and Survivable Malware. IEEE Systems,
Man and Cybernetics Society Information Assurance Workshop. pp. 24–29.
55. A. Young, M. Yung (2004). Malicious Cryptography: Exposing Cryptovirology.
Wiley. ISBN 0-7645-4975-8.
56. "Ransomware on mobile devices: knock-knock-block". Kaspersky Lab.
Retrieved 6 Dec 2016.
57. "Your Android phone viewed illegal porn. To unlock it, pay a $300 fine". Ars
Technica. Retrieved 9 April 2017.
58. "New Android ransomware uses clickjacking to gain admin privileges". PC World.
Retrieved 9 April 2017.
59. "Here's How to Overcome Newly Discovered iPhone Ransomware". Fortune.
Retrieved 9 April 2017.
60. "Ransomware scammers exploited Safari bug to extort porn-viewing iOS
users". Ars Technica. Retrieved 9 April 2017.
61. "Gardaí warn of 'Police Trojan' computer locking virus". TheJournal.ie.
Retrieved 31 May 2016.
62. "Barrie computer expert seeing an increase in the effects of the new
ransomware". Barrie Examiner. Postmedia Network. Retrieved 31 May 2016.
63. "Fake cop Trojan 'detects offensive materials' on PCs, demands money". The
Register. Retrieved 15 August 2012.
64. "Reveton Malware Freezes PCs, Demands Payment". InformationWeek.
Retrieved 16 August 2012.
65. Dunn, John E. "Police alert after ransom Trojan locks up 1,100 PCs". TechWorld.
Retrieved 16 August 2012.
66. Constantian, Lucian. "Police-themed Ransomware Starts Targeting US and
Canadian Users". PC World. Retrieved 11 May 2012.
67. "Reveton 'police ransom' malware gang head arrested in Dubai". TechWorld.
Retrieved 18 October 2014.
68. "'Reveton' ransomware upgraded with powerful password stealer". PC World.
Retrieved 18 October 2014.
69. "Disk encrypting Cryptolocker malware demands $300 to decrypt your
files". Geek.com. Retrieved 12 September 2013.
70. "CryptoLocker attacks that hold your computer to ransom". The Guardian.
Retrieved 23 October 2013.
71. "Destructive malware "CryptoLocker" on the loose – here's what to do". Naked
Security. Sophos. Retrieved 23 October 2013.
72. "CryptoLocker crooks charge 10 Bitcoins for second-chance decryption
service". NetworkWorld. Retrieved 5 November 2013.
73. "CryptoLocker creators try to extort even more money from victims with new
service". PC World. Retrieved 5 November 2013.
74. "Wham bam: Global Operation Tovar whacks CryptoLocker ransomware &
GameOver Zeus botnet". Computerworld. IDG. Retrieved 18 August 2014.
75. "U.S. Leads Multi-National Action Against "Gameover Zeus" Botnet and
"Cryptolocker" Ransomware, Charges Botnet Administrator". Justice.gov. U.S.
Department of Justice. Retrieved 18 August 2014.
76. "Australians increasingly hit by global tide of cryptomalware". Symantec.
Retrieved 15 October 2014.
77. Grubb, Ben (17 September 2014). "Hackers lock up thousands of Australian
computers, demand ransom". Sydney Morning Herald. Retrieved 15 October 2014.
78. "Australia specifically targeted by Cryptolocker: Symantec". ARNnet. 3 October
2014. Retrieved 15 October 2014.
79. "Scammers use Australia Post to mask email attacks". Sydney Morning Herald.
15 October 2014. Retrieved 15 October 2014.
80. Steve Ragan (October 7, 2014). "Ransomware attack knocks TV station off
air". CSO. Retrieved 15 October 2014.
81. "Over 9,000 PCs in Australia infected by TorrentLocker
ransomware". CSO.com.au. Retrieved 18 December 2014.
82. "Malvertising campaign delivers digitally signed CryptoWall ransomware". PC
World. Retrieved 25 June 2015.
83. "CryptoWall 3.0 Ransomware Partners With FAREIT Spyware". Trend Micro.
Retrieved 25 June 2015.
84. Andra Zaharia (5 November 2015). "Security Alert: CryptoWall 4.0 – new,
enhanced and more difficult to detect". HEIMDAL. Retrieved 5 January 2016.
85. "Ransomware on mobile devices: knock-knock-block". Kaspersky Lab.
Retrieved 4 Dec2016.
86. "The evolution of mobile ransomware". Avast. Retrieved 4 Dec 2016.
87. "Mobile ransomware use jumps, blocking access to phones". PCWorld. IDG
Consumer & SMB. Retrieved 4 Dec 2016.
88. "Cyber-attack: Europol says it was unprecedented in scale". BBC News. 13 May
2017. Retrieved 13 May 2017.
89. "'Unprecedented' cyberattack hits 200,000 in at least 150 countries, and the
threat is escalating". CNBC. 14 May 2017. Retrieved 16 May 2017.
90. "The real victim of ransomware: Your local corner store". CNET. Retrieved 2017-
05-22.
91. Marsh, Sarah (12 May 2017). "The NHS trusts hit by malware – full list". The
Guardian. Retrieved 12 May 2017.
92. "Honda halts Japan car plant after WannaCry virus hits computer
network". Reuters. 21 June 2017. Retrieved 21 June 2017.
93. "Ransomware virus plagues 75k computers across 99 countries". RT
International. Retrieved 2017-05-12.
94. Scott, Paul Mozur, Mark; Goel, Vindu (2017-05-19). "Victims Call Hackers' Bluff as
Ransomware Deadline Nears". The New York Times. ISSN 0362-4331. Retrieved 2017-
05-22.
95. Constantin, Lucian. "Petya ransomware is now double the
trouble". NetworkWorld. Retrieved 2017-06-27.
96. "Tuesday's massive ransomware outbreak was, in fact, something much
worse". Ars Technica. Retrieved 2017-06-28.
97. "Cyber-attack was about data and not money, say experts". BBC News. 29 June
2017. Retrieved 29 June 2017.
98. "'Bad Rabbit' ransomware strikes Ukraine and Russia". BBC. October 24, 2017.
Retrieved October 24, 2017.
99. Hern, Alex (25 October 2017). "Bad Rabbit: Game of Thrones-referencing
ransomware hits Europe". Theguardian.com. Retrieved 25 October 2017.
100. Larson, Selena (October 25, 2017). "New ransomware attack hits Russia and
spreads around globe". CNN. Retrieved October 25, 2017.
101. Cameron, Dell (October 24, 2017). "'Bad Rabbit' Ransomware Strikes Russia and
Ukraine". Gizmodo. Retrieved October 24, 2017.
102. Palmer, Danny (October 24, 2017). "Bad Rabbit ransomware: A new variant of
Petya is spreading, warn researchers". ZDNet. Retrieved October 24, 2017.
103. "Yuma Sun weathers malware attack". Yuma Sun. Retrieved 18 August 2014.
104. Cannell, Joshua. "Cryptolocker Ransomware: What You Need To Know, last
updated 06/02/2014". Malwarebytes Unpacked. Retrieved 19 October 2013.
105. Leyden, Josh. "Fiendish CryptoLocker ransomware: Whatever you do, don't
PAY". The Register. Retrieved 18 October 2013.
106. "Cryptolocker Infections on the Rise; US-CERT Issues Warning". SecurityWeek.
19 November 2013. Retrieved 18 January 2014.
107. "'Petya' Ransomware Outbreak Goes Global". krebsonsecurity.com. Krebs on
Security. Retrieved 29 June 2017.
108. "How to protect yourself from Petya malware". CNET. Retrieved 29 June 2017.
109. "Petya ransomware attack: What you should do so that your security is not
compromised". The Economic Times. 29 June 2017. Retrieved 29 June 2017.
110. "New 'Petya' Ransomware Attack Spreads: What to Do". Tom's Guide. 27 June
2017. Retrieved 29 June 2017.
111. "India worst hit by Petya in APAC, 7th globally: Symantec". The Economic Times.
29 June 2017. Retrieved 29 June 2017.
112. "TRA issues advice to protect against latest ransomware Petya | The National".
Retrieved 29 June 2017.
113. "Petya Ransomware Spreading Via EternalBlue Exploit « Threat Research Blog".
FireEye. Retrieved 29 June 2017.
114. Chang, Yao-Chung (2012). Cybercrime in the Greater China Region: Regulatory
Responses and Crime Prevention Across the Taiwan Strait. Edward Elgar
Publishing. ISBN 9780857936684. Retrieved 30 June 2017.
115. "Infection control for your computers: Protecting against cyber crime - GP
Practice Management Blog". GP Practice Management Blog. 18 May 2017. Retrieved 30
June2017.
116. "List of free Ransomware Decryptor Tools to unlock files". Thewindowsclub.com.
Retrieved 28 July 2016.
117. "Emsisoft Decrypter for HydraCrypt and UmbreCrypt
Ransomware". Thewindowsclub.com. Retrieved 28 July 2016.
118. "Ransomware removal tools". Retrieved 19 September 2017.
119. Logan M. Fields (25 February 2017). "The Minority Report – Week 7 – The Half-
Way Point". World News.
120. NetSec Editor (15 February 2017). "Maryland Ransomware Bill Makes Attacks
Felonies". Network Security News.
121. Wang Wei (6 June 2017). "14-Year-Old Japanese Boy Arrested for Creating
Ransomware". The Hacker News.
122. Young, Adam L.; Yung, Moti (2005). "An Implementation of Cryptoviral Extortion
Using Microsoft's Crypto API" (PDF). Cryptovirology Labs. Retrieved 16 August 2017.
123. A. Young, M. Yung (2004). Malicious Cryptography: Exposing Cryptovirology.
Wiley. ISBN 0-7645-4975-8.
124. Russinovich, Mark (7 January 2013). "Hunting Down and Killing Ransomware
(Scareware)". Microsoft TechNet blog.
125. Simonite, Tom (4 February 2015). "Holding Data Hostage: The Perfect Internet
Crime? Ransomware (Scareware)". MIT Technology Review.
126. Brad, Duncan (2 March 2015). "Exploit Kits and CryptoWall 3.0". The Rackspace
Blog! & NewsRoom.
127. "Ransomware on the Rise". The Federal Bureau of Investigation JANUARY 2015.
128. Yang, T.; Yang, Y.; Qian, K.; Lo, D.C.T.; Qian, L. & Tao, L. "Automated Detection
and Analysis for Android Ransomware". IEEE Internet of Things Journal, conference,
August 2015.
129. Richet, Jean-Loup. "Extortion on the Internet: the Rise of Crypto-
Ransomware" (PDF). Harvard.
5.13.12 Crimeware
5.13.12.1 Commentary
Crimeware is crime involving computers and networks.[1] The computer may initiate
the crime or be its target,[2][3] on anything from a personal to a national scale,[4] covering privacy
violations, scams, theft, harassment and threats (illegal, obscene or offensive content),[15][16][17]
[18][19][20][21] hacking, fraud, identity theft, unsolicited sending of bulk email,
copyright infringement, unwarranted mass surveillance, child pornography, and child
grooming, and it hits the global economy hard by various estimates.[5][6][7][8][13][14] People
committing cybercrime need sufficient technical knowledge to construct and
deploy an attack using viruses, denial-of-service attacks, hacking, phishing and
malware (malicious code).
Cybercrime is performed by cyberterrorists, foreign intelligence services, or other
groups that map potential security holes in critical systems to intimidate a government or
organization for political or social objectives.[9] Cyberextortionists threaten repeated
denial-of-service or other attacks, demanding money in return
for a promise to stop the attacks and to offer protection for a website, e-mail server, or
computer system.[10][11] Cyberwarfare occurs in war, through hacking or denial of service against
strategic command and control systems.[12]
Documented cases cover many areas of commerce and government.[22][23][24][25][26]
[27][28][29][30]
Cybercrime technology information is spread across the internet,[31] and the service
architecture of the cloud encourages spam and denial of service.[32][33]
Investigation is performed through logs kept on the computers involved and by Internet Service
Providers.
Legislation is weak in the area of cybercrime, so criminals are rarely prosecuted
successfully.[34][35][36]
Punishment for cybercrime can include imprisonment, bans on using particular
devices or networks, and restriction to supervised computer use, sometimes combined with
work as a security expert.[37][38] Awareness remains the best defence against falling victim to cybercrime.[39]
5.13.12.2 References
1. Moore, R. (2005) "Cyber crime: Investigating High-Technology Computer Crime,"
Cleveland, Mississippi: Anderson Publishing.
2. Warren G. Kruse, Jay G. Heiser (2002). Computer forensics: incident response
essentials. Addison-Wesley. p. 392. ISBN 0-201-70719-5.
3. Halder, D., & Jaishankar, K. (2011) Cyber crime and the Victimization of Women:
Laws, Rights, and Regulations. Hershey, PA, USA: IGI Global. ISBN 978-1-60960-830-9
4. Steve Morgan (January 17, 2016). "Cyber Crime Costs Projected To Reach $2
Trillion by 2019". Forbes. Retrieved September 22, 2016.
5. "Cyber crime costs global economy $445 billion a year: report". Reuters. 2014-06-
09. Retrieved 2014-06-17.
6. "Sex, Lies and Cybercrime Surveys" (PDF). Microsoft. 2011-06-15.
Retrieved 2015-03-11.
7. "#Cybercrime— what are the costs to victims - North Denver News". North
Denver News. Retrieved 16 May 2015.
8. "Cybercrime will Cost Businesses Over $2 Trillion by 2019" (Press release).
Juniper Research. Retrieved May 21, 2016.
9. "Cybercriminals Need Shopping Money in 2017, Too! -
SentinelOne". sentinelone.com. Retrieved 2017-03-24.
10. Lepofsky, Ron. "Cyberextortion by Denial-of-Service Attack" (PDF). Archived
from the original (PDF) on July 6, 2011.
11. Mohanta, Abhijit (6 December 2014). "Latest Sony Pictures Breach : A Deadly
Cyber Extortion". Retrieved 20 September 2015.
12. War is War? The utility of cyberspace operations in the contemporary operational
environment
13. "Cyber Crime definition".
14. "Save browsing". google.
15. "2011 U.S. Sentencing Guidelines Manual § 2G1.3(b)(3)".
16. "United States of America v. Neil Scott Kramer". Retrieved 2013-10-23.
17. "South Carolina". Retrieved 16 May 2015.
18. "1. In Connecticut , harassment by computer is now a crime". Nerac Inc.
February 3, 2003. Archived from the original on April 10, 2008.
19. "Section 18.2-152.7:1". Code of Virginia. Legislative Information System of
Virginia. Retrieved 2008-11-27.
20. Susan W. Brenner, Cybercrime: Criminal Threats from Cyberspace, ABC-CLIO,
2010, pp. 91
21. "We talked to the opportunist imitator behind Silk Road 3.0". 2014-11-07.
Retrieved 2016-10-04.
22. Weitzer, Ronald (2003). Current Controversies in Criminology. Upper Saddle
River, New Jersey: Pearson Education Press. p. 150.
23. David Mann and Mike Sutton (2011-11-06). "Netcrime". Bjc.oxfordjournals.org.
Retrieved 2011-11-10.
24. "A walk on the dark side". The Economist. 2007-09-30.
25. "DHS: Secretary Napolitano and Attorney General Holder Announce Largest U.S.
Prosecution of International Criminal Network Organized to Sexually Exploit Children".
Dhs.gov. Retrieved 2011-11-10.
26. DAVID K. LI (January 17, 2012). "Zappos cyber attack". New York Post.
27. Salvador Rodriguez (June 6, 2012). "Like LinkedIn, eHarmony is hacked; 1.5
million passwords stolen". Los Angeles Times.
28. Rick Rothacker (Oct 12, 2012). "Cyber attacks against Wells Fargo "significant,"
handled well: CFO". Reuters.
29. "AP Twitter Hack Falsely Claims Explosions at White House". Samantha Murphy.
April 23, 2013. Retrieved April 23, 2013.
30. "Fake Tweet Erasing $136 Billion Shows Markets Need Humans". Bloomberg.
April 23, 2013. Retrieved April 23, 2013.
31. Richet, Jean-Loup (2013). "From Young Hackers to Crackers". International
Journal of Technology and Human Interaction. 9 (1).
32. Richet, Jean-Loup (2011). "Adoption of deviant behavior and cybercrime 'Know
how' diffusion". York Deviancy Conference.
33. Richet, Jean-Loup (2012). "How to Become a Black Hat Hacker? An Exploratory
Study of Barriers to Entry Into Cybercrime.". 17th AIM Symposium.
34. Kshetri, Nir. "Diffusion and Effects of Cyber Crime in Developing Countries".
35. Northam, Jackie. "U.S. Creates First Sanctions Program Against Cybercriminals".
36. Adrian Cristian MOISE (2015). "Analysis of Directive 2013/40/EU on attacks
against information systems in the context of approximation of law at the European
level" (PDF). Journal of Law and Administrative Sciences. Archived from the
original (PDF) on December 8, 2015.
37. "Managing the Risks Posed by Offender Computer Use - Perspectives" (PDF).
December 2011.
38. Bowker, Art (2012). The Cybercrime Handbook for Community Corrections:
Managing Risk in the 21st Century. Springfield: Thomas. ISBN 9780398087289.
39. Feinberg, T (2008). "Whether it happens at school or off-campus, cyberbullying
disrupts and affects.". Cyberbullying: 10.
5.14 Protection
5.14.1 Anti-keylogger
5.14.1.1 Content
5.14.1.1.1 Introduction
An anti-keylogger is designed to detect keystroke loggers, with the ability to delete or
immobilize hidden keystroke-logger software, but it cannot distinguish between
a legitimate keystroke-logging program and an illegitimate one. Each keylogger found is flagged for
optional removal.
5.14.1.1.2 Use of anti-keyloggers
Keyloggers can be hidden inside downloaded malware packages. They are not
easy to detect; however, anti-keyloggers, used properly, are often effective in scanning for and
removing or immobilizing keystroke-logging software. Regular anti-keylogging scans are
recommended to reduce the period during which a keylogger can act.
5.14.1.1.3 Public computers
Keyloggers are often found on public computers[1] because so many people access the
machines and can install hardware and software keyloggers,[2] so anti-keyloggers are
run frequently on them to ensure clean systems.
5.14.1.1.4 Gaming usage
Keyloggers are often found in online gaming, where they capture a gamer's access
credentials and send them back to the attacker, who can then steal the account.
5.14.1.1.5 Financial institutions
Financial institutions with only basic security are targeted by keyloggers,[3][4] so anti-
keyloggers must be used regularly.
5.14.1.1.6 Personal use
Anti-keyloggers are useful for individuals wanting to protect privacy on their computers:
online banking, passwords, personal communication, and so on. Keyloggers are often used to
spy on an ex-partner's activities and chat.[5]
5.14.1.1.7 Types
5.14.1.1.7.1 Signature-based
A signature base uniquely identifies a keylogger against a list of all known keyloggers;
the list is maintained by good vendors, and the method is only of use if a logger is already in the list.
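As a minimal sketch of the idea (Python; the hash-list format and every name here are hypothetical, since real vendors ship signatures in proprietary formats), a signature-based scan can be as simple as hashing files and matching them against the vendor's list:

import hashlib
from pathlib import Path

# Hypothetical vendor signature list: SHA-256 hashes of known keylogger
# binaries. The single entry is a placeholder, not a real signature.
KNOWN_KEYLOGGER_HASHES: set[str] = {"00" * 32}

def sha256_of(path: Path) -> str:
    # Hash the file in chunks so large binaries do not exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path):
    # Yield files whose hash matches a known keylogger signature; each is
    # flagged for optional removal, as described above.
    for p in directory.rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_KEYLOGGER_HASHES:
            yield p

As the text notes, such a scan only helps when the logger is already in the list; a new or even slightly modified keylogger passes unnoticed.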
5.14.1.1.7.2 Heuristic analysis
Heuristic analysis applies a checklist of known features, attributes, and methods of known
keyloggers to the programs on the computer system, and is more successful than signature-based
methods. Its drawback is false finds on legitimate applications or operating-system modules,
so the final choice of action is left to the user.
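A hedged sketch of the heuristic approach follows (Python; the feature names and weights are invented for illustration, and real checklists are far richer):

# Hypothetical weighted checklist of keylogger-like behaviours.
SUSPICIOUS_FEATURES = {
    "installs_keyboard_hook": 5,
    "runs_with_hidden_window": 2,
    "autostarts_at_login": 1,
    "writes_periodic_log_file": 3,
    "sends_data_to_remote_host": 4,
}

def heuristic_score(observed: set[str]) -> int:
    # Sum the weights of the suspicious behaviours a program exhibits.
    return sum(w for name, w in SUSPICIOUS_FEATURES.items() if name in observed)

# A legitimate macro recorder may also hook the keyboard, so the score only
# raises a flag; the final choice of action stays with the user.
candidate = {"installs_keyboard_hook", "writes_periodic_log_file"}
print(heuristic_score(candidate) >= 6)  # True; the threshold trades misses vs false finds

The threshold is the tuning knob: lowering it catches more loggers but produces more of the false finds mentioned above.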
5.14.1.2 References
1. "Keyloggers found plugged into library computers". SC Magazine. Retrieved 25
April 2011.
2. "Anti Keylogging & Public Computers". Anti Keylogging & Public
Computers. Archived from the original on 22 May 2011. Retrieved 10 May 2011.
3. "Cyber threat landscape faced by financial and insurance industry". Dr Kim-Kwang
Raymond Choo. Retrieved 21 February 2011.
4. "Privacy Watch: More Criminals Use Keystroke Loggers". Privacy Watch: More
Criminals Use Keystroke Loggers. PC World About.
5. "Is someone you know spying on you?". USA Today. 4 March 2010. Retrieved 25
April 2011.
5.14.2 Antivirus software
5.14.2.1 Content
Antivirus software is computer software used to prevent, detect and remove malicious software.
[1] It was originally developed against viruses, but has been extended to cover all malware.[2][3]
Methods used to identify malware are:
 Sandbox detection: trialling the program in a virtual environment[79] or a real
environment[80] and observing its behaviour.
 Data mining: profiling programs by data mining and machine learning.[81][82][83][84][85]
[86][87][88][89][90][91][92][93][94]
 Signature-based detection: extracting signatures[95] with dynamic analysis
systems and adding them to a signature database[96] (see the sketch after this
list). This can fail against disguised variants of the same malware.[97]
 Heuristics: generic signatures that identify whole families of related malware variants.[98]
 Rootkit detection: scanning for rootkits that gain administrative-level control over a
system.[102]
 Real-time protection: continuously monitoring systems for suspicious activity.[103]
 Issues of concern are rogue security applications,[107] false positives,[108][109]
[110][111][112][113][114][115][116][117][118] system and interoperability-related
issues,[119][120][121][122][123][124][125][126][127][128] effectiveness,[129]
[130][131][132][133][134][135] new viruses,[136][137][138] rootkits,[139]
damaged files,[140][141][142][143] performance[150] and a false sense of security.
[151][152][153][154] Firmware issues can interfere with the update process.
[144][145][146][147][148][149]
 Cloud antivirus uses a lightweight agent on the protected computer and offloads
most data analysis to the provider's infrastructure,[155][156][157][158] whilst online
scanning is offered by websites that scan the computer free of charge.[159]
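At its core, signature-based detection reduces to searching files for known byte patterns. The sketch below (Python) is a toy illustration of that search, not any vendor's engine; the one signature included is the EICAR test string, a real and harmless pattern published precisely so that antivirus pipelines can be exercised safely:

# Toy signature scanner: flag files containing known byte patterns.
EICAR = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-"
         b"ANTIVIRUS-TEST-FILE!$H+H*")

SIGNATURES = {"EICAR-Test-File": EICAR}

def scan_bytes(data: bytes) -> list[str]:
    # Return the names of all signatures found in the data.
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_file(path: str) -> list[str]:
    with open(path, "rb") as f:
        return scan_bytes(f.read())

if __name__ == "__main__":
    sample = b"prefix " + EICAR + b" suffix"
    print(scan_bytes(sample))  # ['EICAR-Test-File']

The weakness cited above falls straight out of this structure: any byte-level change to the sample (packing, encryption, recompilation) defeats an exact-pattern match, which is what pushes engines toward heuristics and the data-mining approaches referenced in the list.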
5.14.2.2 References
1. Naveen, Sharanya. "Anti-virus software". Retrieved May 31, 2016.
2. Henry, Alan. "The Difference Between Antivirus and Anti-Malware (and Which to
Use)".
3. "What is antivirus software?". Microsoft. Archived from the original on April 11,
2011.
4. von Neumann, John (1966) Theory of self-reproducing automata. University of
Illinois Press.
5. Thomas Chen, Jean-Marc Robert (2004). "The Evolution of Viruses and Worms".
Retrieved February 16, 2009.
6. From the first email to the first YouTube video: a definitive internet history. Tom
Meltzer and Sarah Phillips. The Guardian. October 23, 2009
7. IEEE Annals of the History of Computing, Volumes 27–28. IEEE Computer Society,
2005. 74: "[...]from one machine to another led to experimentation with
the Creeper program, which became the world's first computer worm: a
computation that used the network to recreate itself on another node, and
spread from node to node."
8. John Metcalf (2014). "Core War: Creeper & Reaper". Retrieved May 1, 2014.
9. "Creeper – The Virus Encyclopedia".
10. What was the First Antivirus Software?. Anti-virus-software-review.toptenreviews.com. Retrieved on January 3, 2017.
11. "Elk Cloner". Retrieved December 10, 2010.
12. "Top 10 Computer Viruses: No. 10 – Elk Cloner". Retrieved December 10, 2010.
13. "List of Computer Viruses Developed in 1980s". Retrieved December 10, 2010.
14. Fred Cohen: "Computer Viruses – Theory and Experiments" (1983). Eecs.umich.edu (November 3, 1983). Retrieved on 2017-01-03.
15. Cohen, Fred (April 1, 1988). "Invited Paper: On the Implications of Computer Viruses and Methods of Defense". Computers & Security. 7 (2): 167–184. doi:10.1016/0167-4048(88)90334-3 – via ACM Digital Library.
16. Szor, Peter (February 13, 2005). The Art of Computer Virus Research and Defense. Addison-Wesley Professional. ASIN 0321304543 – via Amazon.
17. "Virus Bulletin :: In memoriam: Péter Ször 1970–2013".
18. "History of Viruses".
19. Leyden, John (January 19, 2006). "PC virus celebrates 20th birthday". The Register. Retrieved March 21, 2011.
20. "About computer viruses of 1980's" (PDF). Retrieved February 17, 2016.
21. Panda Security (April 2004). "(II) Evolution of computer viruses". Archived from the original on August 2, 2009. Retrieved June 20, 2009.
22. Kaspersky Lab Virus list. viruslist.com.
23. Wells, Joe (August 30, 1996). "Virus timeline". IBM. Archived from the original on June 4, 2008. Retrieved June 6, 2008.
24. G Data Software AG (2011). "G Data presents security firsts at CeBIT 2010". Retrieved August 22, 2011.
25. G Data Software AG (2016). "Virus Construction Set II". Retrieved July 3, 2016.
26. Karsmakers, Richard (January 2010). "The ultimate Virus Killer Book and Software". Retrieved July 6, 2016.
27. "McAfee Becomes Intel Security". McAfee Inc. Retrieved January 15, 2014.
28. Cavendish, Marshall (2007). Inventors and Inventions, Volume 4. Paul Bernabeo. p. 1033. ISBN 0761477675.
29. "About ESET Company".
30. "ESET NOD32 Antivirus". Vision Square. February 16, 2016.
Cohen, Fred. An Undetectable Computer Virus (Archived). IBM, 1987.
31. Yevics, Patricia A. "Flu Shot for Computer Viruses". americanbar.org.
32. Strom, David (April 1, 2010). "How friends help friends on the Internet: The Ross Greenberg Story". wordpress.com.
33. "Anti-virus is 30 years old". spgedwards.com. April 2012.
34. "A Brief History of Antivirus Software". techlineinfo.com.
35. Grimes, Roger A. (June 1, 2001). Malicious Mobile Code: Virus Protection for Windows. O'Reilly Media, Inc. p. 522. ISBN 9781565926820.
36. "F-PROT Tækniþjónusta – CYREN Iceland". frisk.is.
37. Direccion General del Derecho de Autor, SEP, Mexico D.F. Registry 20709/88 Book 8, page 40, dated November 24, 1988.
"The 'Security Digest' Archives (TM): www.phreak.org-virus_l".
38. "Symantec Softwares and Internet Security at PCM".
39. SAM Identifies Virus-Infected Files, Repairs Applications. InfoWorld, May 22, 1989.
40. SAM Update Lets Users Program for New Viruses. InfoWorld, February 19, 1990.
41. Naveen, Sharanya. "Panda Security". Retrieved May 31, 2016.
42. TG Soft S.a.s. "Who we are – TG Soft Software House". tgsoft.it.
43. "A New Virus Naming Convention (1991) – CARO – Computer Antivirus Research Organization".
44. "CARO Members". CARO. Retrieved June 6, 2011.
45. CAROids, Hamburg 2003. Archived November 7, 2014, at the Wayback Machine.
46. "F-Secure Weblog: News from the Lab". F-secure.com. Retrieved September 23, 2012.
47. "About EICAR". EICAR official website. Retrieved October 28, 2013.
48. David Harley, Lysa Myers & Eddy Willems. "Test Files and Product Evaluation: the Case for and against Malware Simulation" (PDF). AVAR2010 13th Association of anti Virus Asia Researchers International Conference. Archived from the original (PDF) on September 29, 2011. Retrieved June 30, 2011.
49. "Dr. Web LTD Doctor Web / Dr. Web Reviews, Best AntiVirus Software Reviews, Review Centre". Reviewcentre.com. Retrieved February 17, 2014.
In 1994, AV-Test.org reported 28,613 unique malware samples (based on MD5). "A Brief History of Malware; The First 25 Years".
50. "BitDefender Product History".
51. "InfoWatch Management". InfoWatch. Retrieved August 12, 2013.
52. "Linuxvirus – Community Help Wiki".
53. "Sorry – recovering...".
54. "Sourcefire acquires ClamAV". ClamAV. August 17, 2007. Retrieved February 12, 2008.
55. "Cisco Completes Acquisition of Sourcefire". cisco.com. October 7, 2013. Retrieved June 18, 2014.
56. Der Unternehmer – brand eins online. Brandeins.de (July 2009). Retrieved on January 3, 2017.
57. Williams, Greg (April 2012). "The digital detective: Mikko Hypponen's war on malware is escalating". Wired.
58. "Everyday cybercrime – and what you can do about it".
59. Szor 2005, pp. 66–67.
60. "New virus travels in PDF files". August 7, 2001. Retrieved October 29, 2011.
61. Slipstick Systems (February 2009). "Protecting Microsoft Outlook against Viruses". Archived from the original on June 2, 2009. Retrieved June 18, 2009.
62. "CloudAV: N-Version Antivirus in the Network Cloud". usenix.org.
63. McAfee Artemis Preview Report. av-comparatives.org.
64. McAfee Third Quarter 2008. corporate-ir.net.
65. "AMTSO Best Practices for Testing In-the-Cloud Security Products". AMTSO.
66. "TECHNOLOGY OVERVIEW". AVG Security. Retrieved February 16, 2015.
67. "Magic Quadrant Endpoint Protection Platforms 2016". Gartner Research.
68. Messmer, Ellen. "Start-up offers up endpoint detection and response for behavior-based malware detection". networkworld.com.
69. "Homeland Security Today: Bromium Research Reveals Insecurity in Existing Endpoint Malware Protection Deployments".
70. "Duelling Unicorns: CrowdStrike Vs. Cylance In Brutal Battle To Knock Hackers Out". Forbes. July 6, 2016.
71. Potter, Davitt (June 9, 2016). "Is Anti-virus Dead? The Shift Toward Next-Gen Endpoints".
72. "CylancePROTECT® Achieves HIPAA Security Rule Compliance Certification". Cylance.
73. "Trend Micro-XGen". Trend Micro. October 18, 2016.
74. "Next-Gen Endpoint". Sophos.
75. The Forrester Wave™: Endpoint Security Suites, Q4 2016. Forrester.com (October 19, 2016). Retrieved on January 3, 2017.
76. Sandboxing Protects Endpoints | Stay Ahead Of Zero Day Threats. Enterprise.comodo.com (June 20, 2014). Retrieved on January 3, 2017.
77. Szor 2005, pp. 474–481.
78. Kiem, Hoang; Thuy, Nguyen Yhanh and Quang, Truong Minh Nhat (December 2004). "A Machine Learning Approach to Anti-virus System". Joint Workshop of Vietnamese Society of AI, SIGKBS-JSAI, ICS-IPSJ and IEICE-SIGAI on Active Mining; Session 3: Artificial Intelligence, Vol. 67, pp. 61–65.
79. Data Mining Methods for Malware Detection. ProQuest. 2008. pp. 15–. ISBN 978-0-549-88885-7.
80. Dua, Sumeet; Du, Xian (April 19, 2016). Data Mining and Machine Learning in Cybersecurity. CRC Press. pp. 1–. ISBN 978-1-4398-3943-0.
81. Firdausi, Ivan; Lim, Charles; Erwin, Alva; Nugroho, Anto Satriyo (2010). "Analysis of Machine learning Techniques Used in Behavior-Based Malware Detection". 2010 Second International Conference on Advances in Computing, Control, and Telecommunication Technologies. p. 201. doi:10.1109/ACT.2010.33. ISBN 978-1-4244-8746-2.
82. Siddiqui, Muazzam; Wang, Morgan C.; Lee, Joohan (2008). "A survey of data mining techniques for malware detection using file features". Proceedings of the 46th Annual Southeast Regional Conference on XX – ACM-SE 46. p. 509. doi:10.1145/1593105.1593239. ISBN 9781605581057.
83. Deng, P.S.; Jau-Hwang Wang; Wen-Gong Shieh; Chih-Pin Yen; Cheng-Tan Tung (2003). "Intelligent automatic malicious code signatures extraction". IEEE 37th Annual 2003 International Carnahan Conference on Security Technology, 2003. Proceedings. p. 600. doi:10.1109/CCST.2003.1297626. ISBN 0-7803-7882-2.
84. Komashinskiy, Dmitriy; Kotenko, Igor (2010). "Malware Detection by Data Mining Techniques Based on Positionally Dependent Features". 2010 18th Euromicro Conference on Parallel, Distributed and Network-based Processing. p. 617. doi:10.1109/PDP.2010.30. ISBN 978-1-4244-5672-7.
85. Schultz, M.G.; Eskin, E.; Zadok, F.; Stolfo, S.J. (2001). "Data mining methods for detection of new malicious executables". Proceedings 2001 IEEE Symposium on Security and Privacy. S&P 2001. p. 38. doi:10.1109/SECPRI.2001.924286. ISBN 0-7695-1046-9.
86. Ye, Yanfang; Wang, Dingding; Li, Tao; Ye, Dongyi (2007). "IMDS". Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining – KDD '07. p. 1043. doi:10.1145/1281192.1281308. ISBN 9781595936097.
87. Kolter, J. Zico; Maloof, Marcus A. (December 1, 2006). "Learning to Detect and Classify Malicious Executables in the Wild". 7: 2721–2744.
88. Tabish, S. Momina; Shafiq, M. Zubair; Farooq, Muddassar (2009). "Malware detection using statistical analysis of byte-level file content". Proceedings of the ACM SIGKDD Workshop on Cyber Security and Intelligence Informatics – CSI-KDD '09. p. 23. doi:10.1145/1599272.1599278. ISBN 9781605586694.
89. Ye, Yanfang; Wang, Dingding; Li, Tao; Ye, Dongyi; Jiang, Qingshan (2008). "An intelligent PE-malware detection system based on association mining". Journal in Computer Virology. 4 (4): 323. doi:10.1007/s11416-008-0082-4.
90. Sami, Ashkan; Yadegari, Babak; Peiravian, Naser; Hashemi, Sattar; Hamze, Ali (2010). "Malware detection based on mining API calls". Proceedings of the 2010 ACM Symposium on Applied Computing – SAC '10. p. 1020. doi:10.1145/1774088.1774303. ISBN 9781605586397.
91. Shabtai, Asaf; Kanonov, Uri; Elovici, Yuval; Glezer, Chanan; Weiss, Yael (2011). ""Andromaly": A behavioral malware detection framework for android devices". Journal of Intelligent Information Systems. 38: 161. doi:10.1007/s10844-010-0148-x.
92. Fox-Brewster, Thomas. "Netflix Is Dumping Anti-Virus, Presages Death Of An Industry". Forbes. Retrieved September 4, 2015.
93. Automatic Malware Signature Generation (PDF). Retrieved on January 3, 2017.
94. Szor 2005, pp. 252–288.
95. "Generic detection". Kaspersky. Retrieved July 11, 2013.
96. Symantec Corporation (February 2009). "Trojan.Vundo". Archived from the original on April 9, 2009. Retrieved April 14, 2009.
97. Symantec Corporation (February 2007). "Trojan.Vundo.B". Archived from the original on April 27, 2009. Retrieved April 14, 2009.
98. "Antivirus Research and Detection Techniques". ExtremeTech. Archived from the original on February 27, 2009. Retrieved February 24, 2009.
99. "Terminology – F-Secure Labs".
100. Kaspersky Lab Technical Support Portal. Archived February 14, 2011, at WebCite.
101. Kelly, Michael (October 2006). "Buying Dangerously". Retrieved November 29, 2009.
102. Bitdefender (2009). "Automatic Renewal". Retrieved November 29, 2009.
103. Symantec (2014). "Norton Automatic Renewal Service FAQ". Retrieved April 9, 2014.
104. SpywareWarrior (2007). "Rogue/Suspect Anti-Spyware Products & Web Sites". Retrieved November 29, 2009.
105. Protalinski, Emil (November 11, 2008). "AVG incorrectly flags user32.dll in Windows XP SP2/SP3". Ars Technica. Retrieved February 24, 2011.
106. McAfee to compensate businesses for buggy update, retrieved December 2, 2010.
107. Buggy McAfee update whacks Windows XP PCs, archived from the original on January 13, 2011, retrieved December 2, 2010.
108. Tan, Aaron (May 24, 2007). "Flawed Symantec update cripples Chinese PCs". CNET Networks. Retrieved April 5, 2009.
Harris, David (June 29, 2009). "January 2010 – Pegasus Mail v4.52 Release". Pegasus Mail. Archived from the original on May 28, 2010. Retrieved May 21, 2010.
109. "McAfee DAT 5958 Update Issues". April 21, 2010. Archived from the original on April 24, 2010. Retrieved April 22, 2010.
110. "Botched McAfee update shutting down corporate XP machines worldwide". April 21, 2010. Archived from the original on April 22, 2010. Retrieved April 22, 2010.
111. Leyden, John (December 2, 2010). "Horror AVG update ballsup bricks Windows 7". The Register. Retrieved December 2, 2010.
112. MSE false positive detection forces Google to update Chrome, retrieved October 3, 2011.
113. Sophos Antivirus Detects Itself as Malware, Deletes Key Binaries. The Next Web, retrieved March 5, 2014.
114. Shh/Updater-B false positive by Sophos anti-virus products. Sophos, retrieved March 5, 2014.
115. "Plus! 98: How to Remove McAfee VirusScan". Microsoft. January 2007. Archived from the original on April 8, 2010. Retrieved September 27, 2014.
116. Vamosi, Robert (May 28, 2009). "G-Data Internet Security 2010". PC World. Retrieved February 24, 2011.
117. Higgins, Kelly Jackson (May 5, 2010). "New Microsoft Forefront Software Runs Five Antivirus Vendors' Engines". Darkreading. Retrieved February 24, 2011.
118. "Steps to take before you install Windows XP Service Pack 3". Microsoft. April 2009. Archived from the original on December 8, 2009. Retrieved November 29, 2009.
119. "Upgrading from Windows Vista to Windows 7". Retrieved March 24, 2012. Mentioned within "Before you begin".
120. "Upgrading to Microsoft Windows Vista recommended steps". Retrieved March 24, 2012.
121. "How to troubleshoot problems during installation when you upgrade from Windows 98 or Windows Millennium Edition to Windows XP". May 7, 2007. Retrieved March 24, 2012. Mentioned within "General troubleshooting".
122. "Troubleshooting". Retrieved February 17, 2011.
123. "Spyware, Adware, and Viruses Interfering with Steam". Retrieved April 11, 2013. Steam
support page.
124.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
128https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-128 "Field Notice:
FN – 63204 – Cisco Clean Access has Interoperability issue with Symantec Anti-
virus – delays Agent start-up".
125.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
129https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-129 Goodin, Dan
(December 21, 2007). "Anti-virus protection gets worse". Channel Register.
Retrieved February 24, 2011.
126.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
130https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-130 "ZeuS Tracker ::
Home".
127.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
131https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-131 Illett, Dan (July
13, 2007). "Hacking poses threats to business". Computer Weekly.
Retrieved November 15, 2009.
128.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
132https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-132 Espiner, Tom
(June 30, 2008). "Trend Micro: Antivirus industry lied for 20 years". ZDNet.
Retrieved September 27, 2014.
129.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
133https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-133 AV
Comparatives (December 2013). "Whole Product Dynamic "Real World"
Production Test" (PDF). Archived (PDF) from the original on January 2, 2014.
Retrieved January 2, 2014.
130.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
134https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-134 Kirk,
Jeremy. "Guidelines released for antivirus software tests".
131.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-Harley_2011_135-
0https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-Harley_2011_135-
0 Harley, David (2011). AVIEN Malware Defense Guide for the
Enterprise. Elsevier. p. 487. ISBN 9780080558660.
132.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
136https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-136 Kotadia, Munir
(July 2006). "Why popular antivirus apps 'do not work'". Retrieved April 14, 2010.
to:a b The Canadian Press (April 2010). "Internet scam uses adult game to extort
cash". CBC News. Archived from the original on April 18, 2010. Retrieved April
17, 2010.
133.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
138https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-138 Exploit Code;
Data Theft; Information Security; Privacy; Hackers; system, Security mandates
aim to shore up shattered SSL; Reader, Adobe kills two actively exploited bugs
in; stalker, Judge dismisses charges against accused Twitter. "Researchers up
evilness ante with GPU-assisted malware".
134.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
139https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-139 Iresh, Gina
(April 10, 2010). "Review of Bitdefender Antivirus Security Software 2017
edition". www.digitalgrog.com.au. Digital Grog. Retrieved November 20, 2016.
135.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
140https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-140 "Why F-PROT
Antivirus fails to disinfect the virus on my computer?". Retrieved August
20, 2015.
136.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
141https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-141 "Actions to be
performed on infected objects". Retrieved August 20, 2015.
137.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
142https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-142 "Cryptolocker
Ransomware: What You Need To Know". Retrieved March 28, 2014.
138.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
143https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-143 "How Anti-Virus
Software Works". Retrieved February 16, 2011.
139.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
144https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-144 "BT Home Hub
Firmware Upgrade Procedure". Retrieved March 6, 2011.
140.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
145https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-145 "The 10 faces of
computer malware". July 17, 2009. Retrieved March 6, 2011.
141.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
146https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-146 "New BIOS
Virus Withstands HDD Wipes". March 27, 2009. Retrieved March 6, 2011.
142.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
147https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-147 "Phrack Inc.
Persistent BIOS Infection". June 1, 2009. Archived from the original on April 30,
2011. Retrieved March 6, 2011.
143.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
148https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-148 "Turning USB
peripherals into BadUSB". Retrieved October 11, 2014.
144.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
149https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-149 "Why the
Security of USB Is Fundamentally Broken". July 31, 2014. Retrieved October
11, 2014.
145.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
150https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-150 "How Antivirus
Software Can Slow Down Your Computer". Support.com Blog. Retrieved July
26, 2010.
146.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
151https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-151 "Softpedia
Exclusive Interview: Avira 10". Ionut Ilascu. Softpedia. April 14, 2010.
Retrieved September 11, 2011.
147.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
152https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-152 "Norton
AntiVirus ignores malicious WMI instructions". Munir Kotadia. CBS Interactive.
October 21, 2004. Retrieved April 5, 2009.
148.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
153https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-153 "NSA and GCHQ
attacked antivirus software so that they could spy on people, leaks indicate".
June 24, 2015. Retrieved October 30, 2016. to:a b "Popular security software
came under relentless NSA and GCHQ attacks". Andrew Fishman, Morgan
Marquis-Boire. June 22, 2015. Retrieved October 30, 2016.
149.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
155https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-155 Zeltser, Lenny
(October 2010). "What Is Cloud Anti-Virus and How Does It Work?". Archived from
the original on October 10, 2010. Retrieved October 26, 2010.
150.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
156https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-156 Erickson, Jon
(August 6, 2008). "Antivirus Software Heads for the Clouds". Information Week.
Retrieved February 24, 2010.
151.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
157https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-157 "Comodo Cloud
Antivirus released". wikipost.org. Retrieved May 30, 2016.
152.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
158https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-158 "Comodo Cloud
Antivirus User Guideline PDF" (PDF). help.comodo.com. Retrieved May 30, 2016.
153.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
159https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-159 Krebs, Brian
(March 9, 2007). "Online Anti-Virus Scans: A Free Second Opinion". Washington
Post. Retrieved February 24, 2011.
154.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
160https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-160 Naraine, Ryan
(February 2, 2007). "Trend Micro ships free 'rootkit buster'". ZDNet.
Retrieved February 24, 2011. to:a b Rubenking, Neil J. (March 26, 2010). "Avira
AntiVir Personal 10". PC Magazine. Retrieved February 24, 2011.
155.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
162https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-162 Rubenking, Neil
J. (September 16, 2010). "PC Tools Spyware Doctor with AntiVirus 2011". PC
Magazine. Retrieved February 24, 2011.
156.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
163https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-163 Rubenking, Neil
J. (October 4, 2010). "AVG Anti-Virus Free 2011". PC Magazine.
Retrieved February 24, 2011.
157.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
164https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-164 Rubenking, Neil
J. (November 19, 2009). "PC Tools Internet Security 2010". PC Magazine.
Retrieved February 24, 2011. to:a b Skinner, Carrie-Ann (March 25, 2010). "AVG
Offers Free Emergency Boot CD". PC World. Retrieved February 24, 2011.
158.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
166https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-166 "FBI estimates
major companies lose $12m annually from viruses". January 30, 2007.
Retrieved February 20, 2011.
159.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
167https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-167 Kaiser, Michael
(April 17, 2009). "Small and Medium Size Businesses are Vulnerable". National
Cyber Security Alliance. Retrieved February 24, 2011.
160.https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
168https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-168 Nearly 50%
Women Don’t Use Anti-virus Software. Spamfighter.com (September 2, 2010).
Retrieved on January 3, 2017.
161.Bibliography[edit]
162.Szor, Peter (2005), The Art of Computer Virus Research and Defense, Addison-
Wesley, ISBN 0-321-30454-3
5.14.3 Browser security
5.14.3.1 Commentary
5.14.3.1.1 Introduction
Browser security is the application of Internet security to web browsers to protect networked data and computers from breaches of privacy and from malware. Browsers are commonly exploited through JavaScript, cross-site scripting[1] and Adobe Flash,[2] as well as through other vulnerabilities (security holes).[3][4][5][6][7]
5.14.3.1.2 Security
Browsers can be compromised in several ways:
• the operating system is breached and malware reads or modifies the browser memory in privileged mode;[8]
• malware running as a background process reads or modifies the browser memory in privileged mode;
• the browser executable itself is hacked;
• browser plugins are hacked;
• communications are intercepted outside the machine.[9]
Users are often unaware whether a connection is actually safe. Websites read browser information to format the page,[10] and malware can use the same information to target known weaknesses of that browser.[11] Once an attacker can run processes on the visitor's machine, exploiting known vulnerabilities can give privileged access, letting the infection spread across the machine and the network.[12]
Breaches aim to bypass protections in order to display pop-up advertising,[13] collect personal data for marketing or identity theft, track users without their knowledge[14][15][16][17][2] and install malware.
Keeping the browser and operating system up to date reduces exposure of vulnerable browser features[18] and the risk of operating system attack by a rootkit.[19] Browser scripting, add-ons and cookies[20][21][22] are especially unsafe and must also be addressed.
Browsers can use secure methods of network communication to help prevent some of
these attacks:
• DNS with security and encryption.
• HTTP with security (HTTPS) and digitally signed public key certificates / extended validation certificates, as illustrated in the sketch below.
Network defences also help: firewalls and filtering proxy servers can block malicious websites and perform antivirus scans of downloads, as commonly implemented in large organisations.[23]
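As an illustration of the second method above, the following minimal Python sketch (standard library only; the host name is just an example) performs the certificate validation behind HTTPS: the connection is refused unless the server's certificate chains to a trusted authority and matches the host name.

import socket
import ssl

def check_certificate(host, port=443):
    """Connect over TLS, validating the server certificate against the
    system CA store, and return the peer certificate on success."""
    context = ssl.create_default_context()  # host name and CA checks enabled
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # ssl.SSLError is raised if invalid

cert = check_certificate("example.org")
print(cert["subject"], "expires", cert["notAfter"])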
5.14.3.1.3 Plugins and extensions
Plugins and extensions widen the attack surface, through vulnerabilities of their own or as carriers of specially designed malware.[24][25][26][2][13][27][28][29]
5.14.3.1.4 Password security model
The web's password security model expects the user to check the address bar before entering a password,[30] yet a full-screen browser window can hide the address bar and present a fake web address without warning.[31]
5.14.3.1.5 Privacy
Hardware browsers aim to remove security problems by running from a device to which data cannot be written, so the operating system and browser cannot be modified. LiveCDs take the same approach: a malware-free operating system and browser booted from a non-writable image loads clean every time.
Browser hardening runs the browser under the least-privileged user account possible, limiting what a successful attack can do to the rest of the operating system.[32] Blacklisting[33][34][35] and whitelisting[36][37] of plugins and extensions add further protection, while protected mode improves security through sandboxing and mandatory integrity control.[38][39] Suspect sites can be reported to Google,[40] which examines them and, once confirmed, flags them so that browsers warn users.[41] Some extensions and plugins help assess a site's security.[42][43][44]
5.14.3.1.6 Best practice
Best practice includes:
• booting from a known clean OS with a clean browser;
• using a hardened browser or an add-on-free browsing mode;
• using trusted and secure DNS;
• employing link-checking browser plug-ins;
• employing firewalls and anti-malware software.
5.14.3.2 References
1. Maone, Giorgio. "NoScript :: Add-ons for Firefox". Mozilla Add-ons. Mozilla Foundation.
2. NC (Social Science Research Network). "BetterPrivacy :: Add-ons for Firefox". Mozilla
Add-ons. Mozilla Foundation.
3. Keizer, Greg. Firefox 3.5 Vulnerability Confirmed. Retrieved 19 November 2010.
4. Messmer, Ellen and NetworkWorld. "Google Chrome Tops 'Dirty Dozen' Vulnerable Apps
List". Retrieved 19 November 2010.
5. Skinner, Carrie-Ann. Opera Plugs "Severe" Browser Hole Archived 20 May 2009 at
the Wayback Machine.. Retrieved 19 November 2010.
6. Bradly, Tony. "It's Time to Finally Drop Internet Explorer 6". Retrieved 19 November 2010.
7. "Browser". Mashable. Retrieved 2 September 2011.
8. Smith, Dave. "The Yontoo Trojan: New Mac OS X Malware Infects Google Chrome,
Firefox And Safari Browsers Via Adware". IBT Media Inc. Retrieved 21 March 2013.
9. Goodin, Dan. "MySQL.com breach leaves visitors exposed to malware". Retrieved 26
September 2011.
10.Clinton Wong. "HTTP Transactions". O'Reilly. Archived from the original on 13 June
2013.
11."9 Ways to Know Your PC is Infected with Malware".
12."Symantec Security Response Whitepapers".
13.Palant, Wladimir. "Adblock Plus :: Add-ons for Firefox". Mozilla Add-ons. Mozilla
Foundation.
14."Facebook privacy probed over 'like,' invitations". CBC News. 23 September 2010.
Retrieved 24 August 2011.
15.Albanesius, Chloe (19 August 2011). "German Agencies Banned From Using Facebook,
'Like' Button". PC Magazine. Retrieved 24 August 2011.
16.McCullagh, Declan (2 June 2010). "Facebook 'Like' button draws privacy
scrutiny". CNET News. Retrieved 19 December 2011.
17.Roosendaal, Arnold (30 November 2010). "Facebook Tracks and Traces Everyone: Like This!". SSRN 1717563.
18.State of Vermont. "Web Browser Attacks". Archived from the original on 13 February
2012. Retrieved 11 April 2012.
19."Windows Rootkit Overview" (PDF). Symantec. Retrieved 2013-04-20.
20."Cross Site Scripting Attack". Retrieved 20 May 2013.
21.Lenny Zeltser. "Mitigating Attacks on the Web Browser and Add-Ons". Retrieved 20 May 2013.
22.Dan Goodin. "Two new attacks on SSL decrypt authentication cookies". Retrieved 20
May 2013.
23."beefproject.com".
24."How to Create a Rule That Will Block or Log Browser Helper Objects in Symantec
Endpoint Protection". Symantec.com. Retrieved 12 April 2012.
25."Soltani, Ashkan, Canty, Shannon, Mayo, Quentin, Thomas, Lauren and Hoofnagle, Chris
Jay: Flash Cookies and Privacy". 2009-08-10. SSRN 1446862  .
26."Local Shared Objects -- "Flash Cookies"". Electronic Privacy Information Center. 2005-
07-21. Archived from the original on 16 April 2010. Retrieved 2010-03-08.
27.Chee, Philip. "Flashblock :: Add-ons for Firefox". Mozilla Add-ons. Mozilla Foundation.
28."Pwn2Own 2010: interview with Charlie Miller". 2010-03-01. Retrieved 2010-03-27.
29."Expert says Adobe Flash policy is risky". 2009-11-12. Retrieved 2010-03-27.
30.John C. Mitchell. "Browser Security Model" (PDF).
31.http://feross.org/html5-fullscreen-api-attack/
32."Using a Least-Privileged User Account". Microsoft. Retrieved 2013-04-20.
33."How to Stop an ActiveX control from running in Internet Explorer". Microsoft.
Retrieved 2014-11-22.
34."Internet Explorer security zones registry entries for advanced users". Microsoft.
Retrieved 2014-11-22.
35."Out-of-date ActiveX control blocking". Microsoft. Retrieved 2014-11-22.
36."Internet Explorer Add-on Management and Crash Detection". Microsoft.
Retrieved 2014-11-22.
37."How to Manage Internet Explorer Add-ons in Windows XP Service Pack 2". Microsoft.
Retrieved 2014-11-22.
38.Matthew Conover. "Analysis of the Windows Vista Security Model" (PDF). Symantec
Corporation. Retrieved 2007-10-08.
39."Browser Security: Lessons from Google Chrome".
40."Report malicious software (URL) to Google".
41."Google Safe Browsing".
42."5 Ways to Secure Your Web Browser". ZoneAlarm.
43."Adblock Plus Will Soon Block Fewer Ads — SiliconFilter". Siliconfilter.com.
Retrieved 2013-04-20.
44."Securing Your Web Browser". Archived from the original on 26 March 2010.
Retrieved 2010-03-27.
5.14.4 Internet security
5.14.4.1 Commentary
5.14.4.1.1 Introduction
Internet security applies security techniques to the Internet, covering browsers and networks. It establishes rules and measures to counteract attacks over the Internet,[1] which is an insecure channel for exchanging information and therefore carries a high risk of intrusion or fraud eg phishing.[2] Encryption and from-the-ground-up engineering[3] help with protection.
Threats arise from malicious software, denial-of-service attacks, phishing and
application vulnerabilities.
Malicious software comes as viruses, Trojan horses, spyware and worms; it disrupts computer operation, gathers sensitive information or gains access to private computer systems.
• A botnet is a set of zombie systems taken over by a robot program that performs mass malicious acts for the originator.
• Computer viruses replicate their structures or effects by infecting other files or structures on a system.
• Worms replicate themselves throughout a network, performing malicious tasks along the way.
• Ransomware restricts access to an infected system and demands a ransom to remove the restriction.
• Scareware aims to cause shock, anxiety or the perception of a threat in the user.
• Spyware monitors activity on a system and reports it to the originator.
• A Trojan pretends to be harmless so that the user downloads it onto his computer.
A denial-of-service attack makes a computer resource unavailable to its users.[4] Phishing impersonates a trustworthy entity in order to gather information.[5][6][7] Application vulnerabilities can open the door to any of the previous threats.[8][9]
Network layer security is provided by TCP/IP protocols secured with cryptographic methods and security protocols, eg Secure Sockets Layer (SSL), succeeded by Transport Layer Security (TLS), for web traffic, Pretty Good Privacy (PGP) for email, and IPsec for the network layer itself. IPsec protects TCP/IP communication with a set of IETF security extensions providing security and authentication at the IP layer by encrypting data. It uses the Authentication Header (AH) and the Encapsulating Security Payload (ESP) to give data integrity, data origin authentication and an anti-replay service. These protocols can be used alone or in combination to provide the desired set of security services for the Internet Protocol (IP) layer. Its building blocks are the AH and ESP security protocols, security associations for policy management and traffic processing, manual and automatic key management via the Internet Key Exchange (IKE), and algorithms for authentication and encryption. The security services provided at the IP layer include access control, data origin integrity, protection against replays and confidentiality. The design lets these components work independently without affecting other parts of the implementation. An IPsec implementation operates in a host or security gateway environment, protecting IP traffic.
Some web sites give users a security token that generates a fresh code at regular intervals; the code, derived from a secret tied to the device, is used to validate access to the online account.[10]
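One widely deployed scheme of this kind is the time-based one-time password (TOTP, RFC 6238). The minimal Python sketch below, with a made-up base32 shared secret, shows how the token and the web site can compute the same short-lived code from a shared key and the current time.

import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the secret here is illustrative only

Because both sides derive the code from the same secret and clock, a stolen code becomes useless within a minute or so.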
Electronic mail security uses Pretty Good Privacy (PGP), MIME and message authentication codes (MACs). PGP encrypts messages and data files with algorithms such as Triple DES or CAST-128; email messages are signed to ensure integrity and sender identity and encrypted to ensure confidentiality.[11] MIME transforms the sender's non-ASCII data into NVT ASCII data and delivers it to the client SMTP for transmission through the Internet[12] to the server SMTP, which reverses the process. A message authentication code (MAC) uses a secret key shared by sender and receiver so the receiver can verify that a message is authentic and unmodified.[13]
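As a minimal illustration, the Python sketch below uses the standard hmac module with an assumed pre-shared key: the sender attaches a tag to each message and the receiver recomputes it to verify integrity and origin.

import hashlib
import hmac

SECRET = b"shared-secret-key"  # assumption: exchanged in advance over a safe channel

def tag(message):
    """Sender: compute a MAC over the message with the shared key."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, received_tag):
    """Receiver: recompute the MAC and compare in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"Invoice attached."
t = tag(msg)
print(verify(msg, t))                   # True: untampered
print(verify(b"Invoice attached?", t))  # False: message was altered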
A firewall controls access between networks and can act as an intermediate server, filtering SMTP and Hypertext Transfer Protocol (HTTP) connections.[14] The main types of filter are the packet filter, stateful packet inspection and the application-level gateway.
Choice of web browser is driven by features[15] as well as by security[16] and vulnerabilities.[17][18][19] Antivirus software and Internet security programs, available for all platforms, protect programmable devices from attack by detecting and eliminating viruses.[20] A password manager stores and organizes passwords in encrypted form; a single master password grants the user access to the entire password database.[21]
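A sketch of the idea behind the master password, assuming PBKDF2 as the key-derivation function (real password managers vary in the exact scheme): the key that encrypts the vault is derived from the master password and a stored random salt, so only someone who knows the master password can unlock the database.

import hashlib
import os

def derive_vault_key(master_password, salt=None):
    """Derive the vault encryption key from the master password; the
    salt is random and must be stored alongside the vault."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                              salt, 600_000)  # high iteration count slows guessing
    return key, salt

key, salt = derive_vault_key("correct horse battery staple")
print(key.hex())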
Security suites bundle firewalls, anti-virus and anti-spyware[22] together with theft protection, portable storage device safety checks, private Internet browsing, cloud anti-spam, a file shredder, or assistance with security-related decisions such as answering popup windows.[23]
5.14.4.2 References
1. Gralla, Preston (2007). How the Internet Works. Indianapolis: Que Pub. ISBN 0-
7897-2132-5.
2. Rhee, M. Y. (2003). Internet Security: Cryptographic Principles, Algorithms and Protocols. Chichester: Wiley. ISBN 0-470-85285-2.
3. An example of a completely re-engineered computer is the Librem laptop which
uses components certified by web-security experts. It was launched after a crowd
funding campaign in 2015.
4. "Information Security: A Growing Need of Businesses and Industries
Worldwide". University of Alabama at Birmingham Business Program. Retrieved 20
November 2014.
5. Ramzan, Zulfikar (2010). "Phishing attacks and countermeasures". In Stamp,
Mark & Stavroulakis, Peter. Handbook of Information and Communication Security.
Springer. ISBN 9783642041174.
6. Van der Merwe, A J, Loock, M, Dabrowski, M. (2005), Characteristics and
Responsibilities involved in a Phishing Attack, Winter International Symposium on
Information and Communication Technologies, Cape Town, January 2005.
7. "2012 Global Losses From Phishing Estimated At $1.5 Bn". FirstPost. February
20, 2013. Retrieved December 21, 2014.
8. "Improving Web Application Security: Threats and
Countermeasures". msdn.microsoft.com. Retrieved 2016-04-05.
9. "Justice Department charges Russian spies and criminal hackers in Yahoo
intrusion". Washington Post. Retrieved 15 March 2017.
10. Margaret Rouse (September 2005). "What is a security token?".
SearchSecurity.com. Retrieved 2014-02-14.
11. "Virtual Private Network". NASA. Retrieved 2014-02-14.
12. Asgaut Eng (1996-04-10). "Network Virtual Terminal". The Norwegian Institute of
Technology ppv.org. Retrieved 2014-02-14.
13. "What Is a Message Authentication Code?". Wisegeek.com. Retrieved 2013-04-
20.
14. "Firewalls - Internet Security". sites.google.com. Retrieved 2016-06-30.
15. "Browser Statistics". W3Schools.com. Retrieved 2011-08-10.
16. Bradly, Tony. "It's Time to Finally Drop Internet Explorer 6". PCWorld.com.
Retrieved 2010-11-09.
17. Messmer, Ellen and NetworkWorld (2010-11-16). "Google Chrome Tops 'Dirty
Dozen' Vulnerable Apps List". PCWorld.com. Retrieved 2010-11-09.
18. Keizer, Greg (2009-07-15). "Firefox 3.5 Vulnerability Confirmed". PCWorld.com.
Retrieved 2010-11-09.
19. Skinner, Carrie-Ann. "Opera Plugs "Severe" Browser Hole". PC World.com.
Archived from the original on May 20, 2009. Retrieved 2010-11-09.
20. Larkin, Eric (2008-08-26). "Build Your Own Free Security Suite". Retrieved 2010-
11-09.
21. "USE A FREE PASSWORD MANAGER" (PDF). scsccbkk.org.
22. Rebbapragada, Narasu. "All-in-one Security". PC World.com. Archived from the
original on October 27, 2010. Retrieved 2010-11-09.
23. "Free products for PC security". 2015-10-08.
24. National Institute of Standards and Technology (NIST.gov) - Information
Technology portal with links to computer- and cyber security
25. National Institute of Standards and Technology (NIST.gov) -Computer Security
Resource Center -Guidelines on Electronic Mail Security, version 2
26. The Internet Engineering Task Force.org - UK organization -IP Authentication
Header 1998
27. The Internet Engineering Task Force.org - UK organization -Encapsulating
Security Payload
28. Wireless Safety.org - Up to date info on security threats, news stories, and step
by step tutorials
29. PwdHash Stanford University - Firefox & IE browser extensions that
transparently convert a user's password into a domain-specific password.
30. Internet security.net - by JC Montejo & Goio Miranda (free security programs),
est 2007.
31. Internet and Data Security Guide UK anonymous membership site
32. Cybertelecom.org Security - surveying federal Internet security work
33. DSL Reports.com- Broadband Reports, FAQs and forums on Internet security, est
1999
34. FBI Safe Online Surfing Internet Challenge - Cyber Safety for Young
Americans (FBI)
5.14.5 Mobile security
5.14.5.1 Commentary
5.14.5.1.1 Introduction
Mobile security is needed because smart phones hold sensitive information and are targets of attacks exploiting weaknesses in their communication modes eg SMS, MMS, wifi, Bluetooth and GSM, as well as in the browser and operating system.
Counter-measures apply security at different layers, from design through to use, and from the operating system, through intermediate software layers, to downloadable apps.
Threats[1] that interrupt a phone's operation come from apps that are in fact malware, which should be limited in what they can reach eg location information via GPS, the address book, data transmission and manipulation, messaging and availability.[2] Attacks come from spies, thieves and hackers.[3][4][5][6][7]
Infected phones allow an attacker to:
• manipulate the smartphone as a zombie machine for spam by SMS or email;[8]
• make phone calls;[8]
• record and relay conversations;[8]
• steal the user's identity;[8]
• discharge the battery;[9][10]
• prevent operation and/or make the phone unusable;[11]
• remove personal data.[11]
Attacks also derive from flaws in the management of SMS and MMS:
• malformed binary SMS messages causing denial of service;[12]
• over-long email addresses giving the "curse of silence";
• SMS floods from the Internet causing distributed denial of service;[13]
• MMS sent to other phones with a virus-infected attachment.[11]
Attacks on GSM networks break the encryption of the mobile network[15][16] or eavesdrop with a fake base station (IMSI catcher).
Wi-Fi attacks use access point spoofing[17][18][19] and worms.[20] Bluetooth attacks exploit unregistered services and virtual serial ports[21] and transmit malware.[11]
The mobile web browser suffers from buffer and stack overflow,[22] phishing, malicious
websites, etc.
Operating systems suffer from manipulation of firmware and malicious signing certificates, bypassing of the bytecode verifier to access the kernel, changes to the central configuration file, changes to the firmware image[23] and use of valid signatures with an invalid certificate.[24]
Attacks based on hardware come from electromagnetic waveforms,[25] juice jacking,
and password cracking.[26]
Malware is loaded onto smart phones because they are a permanent point of access to the internet.[27][28] An attack proceeds in phases: infection,[29] gaining explicit or implied permission, with or without user interaction. Once a phone is infected the malware accomplishes its goal, whether monetary damage, damage to data and/or the device, or concealed damage,[30] and then spreads to other systems[31] through Wi-Fi, Bluetooth, infrared or remote networks (telephone calls, SMS or emails). It shows up as viruses and trojans,[11][32] ransomware[34] and spyware,[11] sometimes embedded in libraries.[35]
Countermeasures include secure operating systems, rootkit detectors,[36][37][38] process isolation,[36][39][40][41][42] file permissions,[43][44][45] memory protection,[42] runtime environments,[46][47][42] security software (antivirus and firewall),[48][49][50][37] visual notifications, Turing tests, biometric identification,[51] resource monitoring[52] (battery, memory usage, network traffic, services),[53] network surveillance, spam filters, encryption of stored or transmitted information and telecom network monitoring. Manufacturers can remove debug mode,[54][55] ship safe default settings,[36][56] audit the security of apps,[36] detect suspicious applications demanding rights,[57] provide revocation procedures,[58][59] avoid heavily customized systems and improve software patch processes.[57] Users should stay aware and skeptical,[60] review the permissions given to applications,[61][56][62] be careful,[63][64] secure their data,[65] consider centralized storage of text messages[66] and know the limitations of certain security measures.[67][55] The next generation of mobile security[68] builds on a rich operating system, a secure operating system, a trusted execution environment and a secure element. A sketch of one of these countermeasures, flagging apps that demand suspicious rights, follows below.
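A minimal sketch of detecting suspicious applications demanding rights: the permission names follow Android conventions, but the risky combinations and the app list are invented for illustration.

RISKY_COMBINATIONS = [
    {"android.permission.RECEIVE_SMS", "android.permission.INTERNET"},
    {"android.permission.RECORD_AUDIO", "android.permission.INTERNET"},
]

def suspicious(requested):
    # Flag an app whose requested permissions cover a risky combination.
    return any(combo <= requested for combo in RISKY_COMBINATIONS)

apps = {
    "flashlight": {"android.permission.CAMERA"},
    "freegame": {"android.permission.RECEIVE_SMS",
                 "android.permission.INTERNET"},
}
for name, perms in apps.items():
    if suspicious(perms):
        print("review before installing:", name)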
5.14.5.2 References
1. BYOD and Increased Malware Threats Help Driving Billion Dollar Mobile Security
Services Market in 2013, ABI Research
2. Bishop 2004.
3. Olson, Parmy. "Your smartphone is hackers' next big target". CNN.
Retrieved August 26, 2013.
4. Guide on Protection Against Hacking (PDF). http://www.gov.mu/portal/sites/cert/files/Guide%20on%20Protection%20Against%20Hacking.pdf.
5. Lemos, Robert. "New laws make hacking a black-and-white choice". CNET
News.com. Retrieved September 23, 2002.
6. McCaney, Kevin. "'Unknowns' hack NASA, Air Force, saying 'We're here to help'".
Retrieved May 7, 2012.
7. Bilton 2010.
8. Guo, Wang & Zhu 2004, p. 3.
9. Dagon, Martin & Starder 2004, p. 12.
10. Dixon & Mishra 2010, p. 3.
11. Töyssy & Helenius 2006, p. 113.
12. Siemens 2010, p. 1.
13. "Brookstone spy tank app". limormoyal.com. Retrieved 2016-08-11.
14. Gendrullis 2008, p. 266.
15. European Telecommunications Standards Institute 2011, p. 1.
16. Jøsang, Miralabé & Dallot 2015.
17. Roth, Polak & Rieffel 2008, p. 220.
18. Gittleson, Kim (28 March 2014) Data-stealing Snoopy drone unveiled at Black
Hat BBC News, Technology, Retrieved 29 March 2014
19. Wilkinson, Glenn (25 September 2012) Snoopy: A distributed tracking and
profiling framework Sensepost, Retrieved 29 March 2014
20. Töyssy & Helenius 2006, p. 27.
21. Mulliner 2006, p. 113.
22. Dunham, Abu Nimeh & Becher 2008, p. 225.
23. Becher 2009, p. 65.
24. Becher 2009, p. 66.
25. Kasmi C, Lopes Esteves J (13 August 2015). "IEMI Threats for Information
Security: Remote Command Injection on Modern Smartphones". IEEE Transactions on
Electromagnetic Compatibility. doi:10.1109/TEMC.2015.2463089. Lay
summary – WIRED (14 October 2015).
26. Aviv, Adam J.; Gibson, Katherine; Mossop, Evan; Blaze, Matt; Smith, Jonathan
M. Smudge Attacks on Smartphone Touch Screens (PDF). 4th USENIX Workshop on
Offensive Technologies.
27. Schmidt et al. 2009a, p. 3.
28. Suarez-Tangil, Guillermo; Juan E. Tapiador; Pedro Peris-Lopez; Arturo Ribagorda
(2014). "Evolution, Detection and Analysis of Malware in Smart Devices" (PDF). IEEE
Communications Surveys & Tutorials.
29. Becher 2009, p. 87.
30. Becher 2009, p. 88.
31. Mickens & Noble 2005, p. 1.
32. Raboin 2009, p. 272.
33. Töyssy & Helenius 2006, p. 114.
34. Haas, Peter D. (2015-01-01). "Ransomware goes mobile: An analysis of the
threats posed by emerging methods". UTICA COLLEGE.
35. Becher 2009, p. 91-94.
36. Becher 2009, p. 12.
37. Schmidt, Schmidt & Clausen 2008, p. 5-6.
38. Halbronn & Sigwald 2010, p. 5-6.
39. Ruff 2011, p. 127.
40. Hogben & Dekker 2010, p. 50.
41. Schmidt, Schmidt & Clausen 2008, p. 50.
42. Shabtai et al. 2009, p. 10.
43. Becher 2009, p. 31.
44. Schmidt, Schmidt & Clausen 2008, p. 3.
45. Shabtai et al. 2009, p. 7-8.
46. Pandya 2008, p. 15.
47. Becher 2009, p. 22.
48. Becher et al. 2011, p. 96.
49. Becher 2009, p. 128.
50. Becher 2009, p. 140.
51. Thirumathyam & Derawi 2010, p. 1.
52. Schmidt, Schmidt & Clausen 2008, p. 7-12.
53. Becher 2009, p. 126.
54. Becher et al. 2011, p. 101.
55. Ruff 2011, p. 11.
56. Hogben & Dekker 2010, p. 45.
57. Becher 2009, p. 13.
58. Becher 2009, p. 34.
59. Ruff 2011, p. 7.
60. Hogben & Dekker 2010, p. 46-48.
61. Ruff 2011, p. 7-8.
62. Shabtai et al. 2009, p. 8-9.
63. Hogben & Dekker 2010, p. 43.
64. Hogben & Dekker 2010, p. 47.
65. Hogben & Dekker 2010, p. 43-45.
66. Charlie Sorrel (2010-03-01). "TigerText Deletes Text Messages From Receiver's
Phone". Wired. Archived from the original on 2010-10-17. Retrieved 2010-03-02.
67. Becher 2009, p. 40.
68. http://www.insidesecure.com/Markets-solutions/Payment-and-Mobile-Banking/Mobile-
Bishop, Matt (2004). Introduction to Computer Security. Addison Wesley Professional. ISBN 978-0-321-24744-5.
69. Dunham, Ken; Abu Nimeh, Saeed; Becher, Michael (2008). Mobile Malware Attack
and Defense. Syngress Media. ISBN 978-1-59749-298-0.
70. Rogers, David (2013). Mobile Security: A Guide for Users. Copper Horse Solutions
Limited. ISBN 978-1-291-53309-5.
Articles
72. Becher, Michael (2009). Security of Smartphones at the Dawn of Their
Ubiquitousness (PDF) (Dissertation). Mannheim University.
73. Becher, Michael; Freiling, Felix C.; Hoffmann, Johannes; Holz, Thorsten;
Uellenbeck, Sebastian; Wolf, Christopher (May 2011). Mobile Security Catching Up?
Revealing the Nuts and Bolts of the Security of Mobile Devices (PDF). 2011 IEEE
Symposium on Security and Privacy. pp. 96–111. doi:10.1109/SP.2011.29. ISBN 978-1-
4577-0147-4.
74. Bilton, Nick (26 July 2010). "Hackers With Enigmatic Motives Vex
Companies". The New York Times. p. 5.
75. Cai, Fangda; Chen, Hao; Wu, Yuanyi; Zhang, Yuan (2015). AppCracker:
Widespread Vulnerabilities in Userand Session Authentication in Mobile
Apps (PDF) (Dissertation). University of California, Davis.
76. Crussell, Johnathan; Gibler, Clint; Chen, Hao (2012). Attack of the Clones:
Detecting Cloned Applications on Android Markets (PDF) (Dissertation). University of
California, Davis.
77. Dagon, David; Martin, Tom; Starder, Thad (October–December 2004). "Mobile
Phones as Computing Devices: The Viruses are Coming!". IEEE Pervasive
Computing. 3 (4): 11. doi:10.1109/MPRV.2004.21.
78. Dixon, Bryan; Mishra, Shivakant (June–July 2010). On and Rootkit and Malware
Detection in Smartphones (PDF). 2010 International Conference on Dependable
Systems and Networks Workshops (DSN-W). ISBN 978-1-4244-7728-9.
79. Gendrullis, Timo (November 2008). A real-world attack breaking A5/1 within
hours. Proceedings of CHES ’08. Springer. pp. 266–282. doi:10.1007/978-3-540-85053-
3_17.
80. Guo, Chuanxiong; Wang, Helen; Zhu, Wenwu (November 2004). Smart-Phone
Attacks and Defenses (PDF). ACM SIGCOMM HotNets. Association for Computing
Machinery, Inc. Retrieved March 31, 2012.
81. Halbronn, Cedric; Sigwald, John (2010). Vulnerabilities & iPhone Security
Model (PDF). HITB SecConf 2010.
82. Hogben, Giles; Dekker, Marnix (December 2010). "Smartphones: Information
security Risks, Opportunities and Recommendations for users". ENISA.
83. Jøsang, Audun; Miralabé, Laurent; Dallot, Léonard (2015). "Vulnerability by
Design in Mobile Network Security" (PDF). Journal of Information Warfare
(JIF). 14 (4). ISSN 1445-3347.
84. Mickens, James W.; Noble, Brian D. (2005). Modeling epidemic spreading in
mobile environments. WiSe '05 Proceedings of the 4th ACM workshop on Wireless
security. Association for Computing Machinery, Inc. pp. 77–
86. doi:10.1145/1080793.1080806.
85. Mulliner, Collin Richard (2006). Security of Smart Phones (PDF) (M.Sc. thesis).
University of California, Santa Barbara.
86. Pandya, Vaibhav Ranchhoddas (2008). Iphone Security Analysis (PDF) (Thesis).
San Jose State University.
87. Raboin, Romain (December 2009). La sécurité des
smartphones (PDF). Symposium sur la sécurité des technologies de l'information et des
communications 2009. SSTIC09 (in French).
88. Racic, Radmilo; Ma, Denys; Chen, Hao (2006). Exploiting MMS Vulnerabilities to
Stealthily Exhaust Mobile Phone’s Battery (PDF) (Dissertation). University of California,
Davis.
89. Roth, Volker; Polak, Wolfgang; Rieffel, Eleanor (2008). Simple and Effective
Defense Against Evil Twin Access Points. ACM SIGCOMM
HotNets. doi:10.1145/1352533.1352569. ISBN 978-1-59593-814-5.
90. Ruff, Nicolas (2011). Sécurité du système Android (PDF). Symposium sur la
sécurité des technologies de l'information et des communications 2011. SSTIC11 (in
French).
91. Ruggiero, Paul; Foote, Jon. Cyber Threats to Mobile Phones (PDF) (thesis). US-
CERT.
92. Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Clausen, Jan; Yüksel, Kamer
Ali; Kiraz, Osman; Camtepe, Ahmet; Albayrak, Sahin (October 2008). Enhancing Security
of Linux-based Android Devices (PDF). Proceedings of 15th International Linux
Kongress.
93. Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Batyuk, Leonid; Clausen, Jan
Hendrik; Camtepe, Seyit Ahmet; Albayrak, Sahin (April 2009a). Smartphone Malware
Evolution Revisited: Android Next Target? (PDF). 4th International Conference on
Malicious and Unwanted Software (MALWARE). ISBN 978-1-4244-5786-1.
Retrieved 2010-11-30.
94. Shabtai, Asaf; Fledel, Yuval; Kanonov, Uri; Elovici, Yuval; Dolev, Shlomi (2009).
"Google Android: A State-of-the-Art Review of Security
Mechanisms". CoRR. arXiv:0912.5101v1 .
95. Thirumathyam, Rubathas; Derawi, Mohammad O. (2010). Biometric Template
Data Protection in Mobile Device Using Environment XML-database. 2010 2nd
International Workshop on Security and Communication Networks (IWSCN). ISBN 978-1-
4244-6938-3.
96. Töyssy, Sampo; Helenius, Marko (2006). "About malicious software in
smartphones". Journal in Computer Virology. Springer Paris. 2 (2): 109–
119. doi:10.1007/s11416-006-0022-0. Retrieved 2010-11-30.
97. European Telecommunications Standards Institute (2011). "3GPP Confidentiality
and Integrity Algorithms & UEA1 UIA1". Archived from the original on 12 May 2012.
98. Siemens (2010). "Series M Siemens SMS DoS Vulnerability".
Further reading
100. CIGREF (October 2010). "Sécurisation de la mobilité" (PDF) (in French).
101. Chong, Wei Hoo (November 2007). iDEN Smartphone Embedded Software
Testing (PDF). Fourth International Conference on Information Technology, 2007. ITNG
'07. doi:10.1109/ITNG.2007.103. ISBN 0-7695-2776-0.
102. Jansen, Wayne; Scarfone, Karen (October 2008). "Guidelines on Cell Phone and
PDA Security: Recommendations of the National Institute of Standards and
Technology" (PDF). National Institute of Standards and Technology. Retrieved April
21, 2012.
103. Lee, Sung-Min; Suh, Sang-bum; Jeong, Bokdeuk; Mo, Sangdok (January 2008). A
Multi-Layer Mandatory Access Control Mechanism for Mobile Devices Based on
Virtualization. 5th IEEE Consumer Communications and Networking Conference, 2008.
CCNC 2008. doi:10.1109/ccnc08.2007.63. ISBN 978-1-4244-1456-7. Archived from the
original on May 16, 2013.
104. Li, Feng; Yang, Yinying; Wu, Jie (March 2010). CPMC: An Efficient Proximity
Malware Coping Scheme in Smartphone-based Mobile Networks (PDF). INFOCOM, 2010
Proceedings IEEE. doi:10.1109/INFCOM.2010.5462113.
105. Ni, Xudong; Yang, Zhimin; Bai, Xiaole; Champion, Adam C.; Xuan, Dong (October
2009). Distribute: Differentiated User Access Control on Smartphones (PDF). 6th IEEE
International Conference on Mobile Adhoc and Periodic Sensor Systems, 2009. MASS
'09. ISBN 978-1-4244-5113-5.
106. Ongtang, Machigar; McLaughlin, Stephen; Enck, William; Mcdaniel, Patrick
(December 2009). Semantically Rich Application-Centric Security in Android (PDF).
Annual Computer Security Applications Conference, 2009. ACSAC '09. ISSN 1063-9527.
107. Schmidt, Aubrey-Derrick; Bye, Rainer; Schmidt, Hans-Gunther; Clausen, Jan;
Kiraz, Osman; Yüksel, Kamer A.; Camtepe, Seyit A.; Albayrak, Sahin (2009b). Static
Analysis of Executables for Collaborative Malware Detection on Android (PDF). IEEE
International Conference Communications, 2009. ICC '09. ISSN 1938-1883.
108. Yang, Feng; Zhou, Xuehai; Jia, Gangyong; Zhang, Qiyuan (2010). A Non-
cooperative Game Approach for Intrusion Detection Systems in Smartphone systems.
8th Annual Communication Networks and Services Research
Conference. doi:10.1109/CNSR.2010.24. ISBN 978-1-4244-6248-3. Archived from the
original on May 16, 2013.
5.14.6 Network security
5.14.6.1 Commentary
5.14.6.1.1 Introduction
Network security consists of the policies and practices adopted to prevent and monitor unauthorized access, misuse, modification or denial of a computer network and network-accessible resources. The administrator issues users authenticating information that grants access to the data and processes within their authority.
Network security increasingly relies on multi-factor authentication. Once a user is authenticated, a firewall enforces access policies that govern which services users may reach,[1] but while this precludes unauthorized access it does not rule out worms or Trojans. Anti-virus software or an intrusion prevention system[2] discovers and stops such malware by monitoring the network, logging anomalies, and applying machine learning and traffic analysis to find attackers, whether malicious insiders or targeted external attackers.[3] In addition, communication can be encrypted for privacy. Honeypots, decoy network-accessible resources that are never accessed legitimately, serve as surveillance and early-warning tools; a minimal sketch follows below.[4]
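A minimal honeypot sketch in Python (standard library; the port number is arbitrary): it listens on an otherwise unused port and logs every connection attempt, since no legitimate client has a reason to connect there.

import datetime
import socket

def honeypot(port=2323):
    """Log every connection attempt to a decoy port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            print(datetime.datetime.now(), "probe from", addr, src_port)
            conn.close()

# honeypot()  # runs until interrupted; feed the log to the early-warning system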
Security management varies with the network's role.
Network attacks are passive (the intruder intercepts data travelling through the network) or active (the intruder initiates commands to disrupt the network's normal operation or to spy on and gain access to network assets).[5]
Types of attacks include:[6]
• Passive
  • Wiretapping
  • Port scanner
  • Idle scan
• Active
  • Denial-of-service attack
  • DNS spoofing
  • Man in the middle
  • ARP poisoning
  • VLAN hopping
  • Smurf attack
  • Buffer overflow
  • Heap overflow
  • Format string attack
  • SQL injection
  • Phishing
  • Cross-site scripting
  • CSRF
  • Cyber-attack
5.14.6.2 References
1. A Role-Based Trusted Network Provides Pervasive Security and Compliance -
interview with Jayshree Ullal, senior VP of Cisco
2. Dave Dittrich, Network monitoring/Intrusion Detection Systems (IDS), University
of Washington.
3. "Dark Reading: Automating Breach Detection For The Way Security Professionals
Think". October 1, 2015.
4. "''Honeypots, Honeynets''". Honeypots.net. 2007-05-26. Retrieved 2011-12-09.
5. Wright, Joe; Jim Harmening (2009) "15" Computer and Information Security
Handbook Morgan Kaufmann Publications Elsevier Inc p. 257
6. http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf
7. Case Study: Network Clarity, SC Magazine 2014
8. Cisco. (2011). What is network security?. Retrieved from cisco.com
9. Security of the Internet (The Froehlich/Kent Encyclopedia of
Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.)
10. Introduction to Network Security, Matt Curtin.
11. MPLS, SD-WAN and Network Security', Yishay Yovel.
12. Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco
Press, Jul. 6, 2007.
13. Self-Defending Networks: The Next Generation of Network Security, Duane
DeCapite, Cisco Press, Sep. 8, 2006.
14. Security Threat Mitigation and Response: Understanding CS-MARS, Dale
Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006.
15. Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco
Press, May 27, 2005.
16. Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006.
17. Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman, Radia Perlman and Mike Speciner, Prentice-Hall, 2002.
18. Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009.
5.14.7 Defensive computing
5.14.7.1 Commentary
5.14.7.1.1 Introduction
Defensive computing helps computer users reduce the risk of problems by avoiding dangerous practices. It tries to anticipate problems, including the user's own mistakes, and to prepare their resolution before they happen, through general and specific practices and techniques. It relies on network security and on the backup and restoration of data.
5.14.7.1.2 Network security
Users reduce the risks of network access with firewalls controlling the edges of the computing environment, anti-malware protecting within the computer, and skeptical consideration of all data and processes.
5.14.7.1.2.1 Firewall
A firewall protects a computer from unsafe Internet traffic and prevents unauthorized access to computer systems.[1] It may be a software package running on the computer itself, or be built into routers and modems configured with appropriate filters.[2]
5.14.7.1.2.2 Anti-malware software
Anti-malware software protects the user from the ever-growing quantity of malware through anti-virus, anti-phishing and email filtering tools,[3] which must be kept up to date.[2]
5.14.7.1.2.3 Skepticism
Users must accept responsibility for their actions on the computer and think about the consequences of what they do with their data.[4] Procedures include scanning email attachments before opening them and filtering suspicious emails.[2] Users should configure their computers to show file extensions, exposing dangerous files that appear harmless.[4] Routinely evaluating websites, emails and files builds the right attitude for the defensive user.
5.14.7.1.3 Backup and recovery procedures
Malware, equipment failure and general misuse can cause loss or corruption of data, so defensive practice encourages backup and restoration so that systems can be reset to a previous known state.
5.14.7.1.3.1 Backup of data
Backup of data and systems can take various forms, such as keeping multiple copies of data in parallel or moving data to an external device or the internet cloud; a minimal sketch follows below.[5]
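A minimal sketch of the external-device approach (Python standard library; the source and destination paths are examples): each run copies the data into a fresh timestamped folder, so earlier backups are never overwritten.

import shutil
import time
from pathlib import Path

def backup(source, dest_root):
    """Copy a directory tree into a new timestamped backup folder."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(dest_root).expanduser() / ("backup-" + stamp)
    shutil.copytree(Path(source).expanduser(), target)
    return target

# backup("~/documents", "/mnt/external")  # example paths only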
5.14.7.1.3.2 Restoration
Restoration of data and systems lets the user return a computer to a given state after system failure, serious corruption or data loss, removing malicious files that did not exist in that state.[5]
5.14.7.1.4 Good practices for protecting data
Procedures to protect data are:
• back up important data regularly;
• use administrator privileges only in emergencies;
• keep software up to date;
• keep anti-malware up to date;
• use different passwords;
• disable the auto-run feature except for reserved units;[6]
• always use a correctly configured firewall;
• be sceptical.
5.14.7.2 References
1. http://www.cs.unm.edu/~treport/tr/02-12/firewall.pdf, A History and Survey of Network
Firewalls
2. http://news.cnet.com/8301-13554_3-9923976-33.html, The Pillars of Defensive
Computing
3. https://www.washingtonpost.com/wp-
dyn/content/article/2008/03/19/AR2008031901439.html, Antivirus Firms Scrambling to
Keep Up
4. http://www.melbpc.org.au/pcupdate/2206/2206article6.htm Archived 2006-07-24 at
the Wayback Machine., How To Protect Yourself From Virus Infection
5. http://www.microsoft.com/protect/yourself/data/what.mspx, How to Decide what Data to
Back Up
6. http://news.cnet.com/8301-13554_3-10027754-33.html, Be safer than NASA: Disable
autorun
5.14.8 Firewall
5.14.8.1 Commentary
5.14.8.1.1 Introduction
A firewall is a network security system that monitors and controls network traffic according to predetermined security rules.[1] It mediates the transfer of traffic between a trusted internal network and the outside network.[2] Firewalls are classified as network-based or host-based: network firewalls filter traffic between networks and run as software on general-purpose computers or on specialist hardware, while host-based firewalls run on one host and control its traffic.[3][4] Firewalls can provide extra functionality for the network, such as acting as a DHCP[5][6] or VPN[7][8][9][10] server.[11][12][28][29][30]
Firewalls arrived with the Internet's global use and connectivity[14] and succeeded the routers used for the purpose in the late 1980s.[15][16][17] The first generation of firewalls filtered packets on network addresses and ports to decide whether to allow or block them;[18][19] knowledge of the ports conventionally used by services lets the filtering be refined.[20][21][22] Second generation firewalls were circuit-level gateways[23] working at the transport layer.[24][25][26] Third generation firewalls work at the application layer (eg File Transfer Protocol (FTP), Domain Name System (DNS) or Hypertext Transfer Protocol (HTTP)), beginning with the Firewall Toolkit and growing into the next-generation firewall (NGFW) with intrusion prevention systems (IPS), user identity management integration and the web application firewall (WAF).[27] A proxy server can act as a firewall by responding to input packets in the manner of an application, serving as an application's gateway from one network to another[2] and making it harder to tamper with a system. Network address translation is commonly applied with firewalls to hide local addressing; a sketch of first-generation packet filtering follows below.[31]
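A minimal sketch of first-generation packet filtering, with invented rules and documentation-range addresses: each packet's source address and destination port are checked against an ordered rule list, and the first matching rule decides.

from dataclasses import dataclass

@dataclass
class Rule:
    action: str        # "allow" or "block"
    src: str = "*"     # source address, "*" means any
    dst_port: int = 0  # destination port, 0 means any

RULES = [
    Rule("block", src="203.0.113.7"),  # a known-bad host (example address)
    Rule("allow", dst_port=443),       # HTTPS
    Rule("allow", dst_port=25),        # SMTP
    Rule("block"),                     # default deny
]

def decide(src, dst_port):
    for rule in RULES:
        if rule.src in ("*", src) and rule.dst_port in (0, dst_port):
            return rule.action
    return "block"

print(decide("198.51.100.2", 443))  # allow
print(decide("203.0.113.7", 443))   # block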
5.14.8.2 References
1. Boudriga, Noureddine (2010). Security of mobile communications. Boca Raton:
CRC Press. pp. 32–33. ISBN 0849379423.
2. Oppliger, Rolf (May 1997). "Internet Security: FIREWALLS and
BEYOND". Communications of the ACM. 40 (5): 94. doi:10.1145/253769.253802.
3. Vacca, John R. (2009). Computer and information security handbook.
Amsterdam: Elsevier. p. 355. ISBN 9780080921945.
4. "What is Firewall?". Retrieved 2015-02-12.
5. "Firewall as a DHCP Server and Client". Palo Alto Networks. Retrieved 2016-02-
08.
6. "DHCP". www.shorewall.net. Retrieved 2016-02-08.
7. "What is a VPN Firewall? - Definition from Techopedia". Techopedia.com.
Retrieved 2016-02-08.
8. "VPNs and Firewalls". technet.microsoft.com. Retrieved 2016-02-08.
9. "VPN and Firewalls (Windows Server)". Resources and Tools for IT Professionals
| TechNet.
10. "Configuring VPN connections with firewalls".
11. Andrés, Steven; Kenyon, Brian; Cohen, Jody Marc; Johnson, Nate; Dolly, Justin
(2004). Birkholz, Erik Pack, ed. Security Sage's Guide to Hardening the Network
Infrastructure. Rockland, MA: Syngress. pp. 94–95. ISBN 9780080480831.
12. Naveen, Sharanya. "Firewall". Retrieved 7 June 2016.
13. Canavan, John E. (2001). Fundamentals of Network Security (1st ed.). Boston,
MA: Artech House. p. 212. ISBN 9781580531764.
14. Liska, Allan (Dec 10, 2014). Building an Intelligence-Led Security Program.
Syngress. p. 3. ISBN 0128023708.
15. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). Retrieved 2011-11-25.
16. [1] Firewalls by Dr.Talal Alkharobi
17. RFC 1135 The Helminthiasis of the Internet
18. Peltier, Justin; Peltier, Thomas R. (2007). Complete Guide to CISM Certification.
Hoboken: CRC Press. p. 210. ISBN 9781420013252.
19. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). p. 4. Retrieved 2011-11-25.
20. TCP vs. UDP By Erik Rodriguez
21. William R. Cheswick, Steven M. Bellovin, Aviel D. Rubin (2003). "Google Books
Link". Firewalls and Internet Security: repelling the wily hacker
22. Aug 29, 2003 Virus may elude computer defenses by Charles Duhigg, Washington
Post
23. Proceedings of National Conference on Recent Developments in Computing and
Its Applications, August 12–13, 2009. I.K. International Pvt. Ltd. 2009-01-01.
Retrieved 2014-04-22.
24. Conway, Richard (2004). Code Hacking: A Developer's Guide to Network Security. Hingham, Massachusetts: Charles River Media. p. 281. ISBN 1-58450-314-9.
25. Andress, Jason (May 20, 2014). The Basics of Information Security:
Understanding the Fundamentals of InfoSec in Theory and Practice (2nd ed.). Elsevier
Science. ISBN 9780128008126.
26. Chang, Rocky (October 2002). "Defending Against Flooding-Based Distributed
Denial-of-Service Attacks: A Tutorial". IEEE Communications Magazine. 40 (10): 42–
43. doi:10.1109/mcom.2002.1039856.
27. "WAFFle: Fingerprinting Filter Rules of Web Application Firewalls". 2012.
28. "Firewalls". MemeBridge. Retrieved 13 June 2014.
29. "Software Firewalls: Made of Straw? Part 1 of 2". Symantec Connect Community.
2010-06-29. Retrieved 2014-03-28.
30. "Auto Sandboxing". Comodo Inc. Retrieved 2014-08-28.
31. "Advanced Security: Firewall". Microsoft. Retrieved 2014-08-28.
32. Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus
Ranum and Paul Robertson.
33. Firewalls Aren’t Just About Security - Cyberoam Whitepaper focusing on Cloud
Applications Forcing Firewalls to Enable Productivity.
34. Evolution of the Firewall Industry - Discusses different architectures and their
differences, how packets are processed, and provides a timeline of the evolution.
35. A History and Survey of Network Firewalls - provides an overview of firewalls at
the various ISO levels, with references to the original papers where first firewall work
was reported.
36. Software Firewalls: Made of Straw? Part 1 and Software Firewalls: Made of
Straw? Part 2 - a technical view on software firewall design and potential weaknesses
5.14.9 Intrusion detection system
5.14.9.1 Commentary
5.14.9.1.1 Introduction
An intrusion detection system (IDS) monitors a network or systems for malicious
activity or policy violations. Any detected activity or violation is logged and reported to
an administrator or collected centrally by a security information and event
management (SIEM) system, which applies filtering techniques to distinguish malicious
activity from false alarms. An IDS monitors all activities, whereas a firewall monitors
only external traffic.
IDS can be classified by where detection takes place (network or host) and by the
detection method employed. Network intrusion detection systems (NIDS) monitor
traffic to and from all devices on the network, analyse passing traffic on the entire
subnet, and match it to profiles of known attacks. If a match is found, the
administrator is notified.[1] Host intrusion detection systems (HIDS) monitor the traffic
on a single device and alert the administrator if suspicious activity is detected, for
example by taking a snapshot of existing system files and comparing it with the
previous snapshot. Intrusion detection systems can also be system-specific, using
custom tools and honeypots.
Intrusion prevention systems use signature-based, statistical anomaly-based, and
stateful protocol analysis detection methods.[8][12] Signature-based IDS looks for
specific patterns, e.g. known malicious instruction sequences (signatures) used by
malware. It can easily detect known attacks but cannot detect new attacks for which
no signature exists. Anomaly-based intrusion detection systems find unknown attacks
by using machine learning to model trusted activity and then checking new behaviour
against this model.[2][3][4][13] Stateful protocol analysis detection identifies deviations
of protocol states by comparing observed events with predetermined profiles of
generally accepted definitions of benign activity.[8]
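To make the contrast concrete, the following minimal Python sketch compares
signature matching with a simple statistical anomaly check. The byte-pattern
signatures, the traffic baseline and the threshold are hypothetical illustrations, not
values from any real IDS.

```python
# Minimal sketch: signature-based vs. anomaly-based detection.
# Signatures, baseline and threshold are hypothetical illustrations.
from statistics import mean, stdev

SIGNATURES = [b"\x90\x90\x90\x90", b"' OR '1'='1"]  # example malicious patterns

def signature_match(payload: bytes) -> bool:
    """Signature-based: flag payloads containing any known pattern."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_score(observation: float, baseline: list) -> float:
    """Anomaly-based: z-score of an observation against trusted activity."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observation - mu) / sigma if sigma else 0.0

# Requests per minute learned while the system was known to be behaving.
baseline_rpm = [52.0, 48.0, 50.0, 55.0, 49.0, 51.0]

print(signature_match(b"GET /login?user=' OR '1'='1"))  # True: known pattern
print(anomaly_score(300.0, baseline_rpm) > 3.0)         # True: far from baseline
```

A signature engine only fires on patterns it already knows, whereas the anomaly score
fires on anything sufficiently far from the learned baseline, including attacks never
seen before.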
Intrusion detection and prevention systems (IDPS) identify incidents, log information
about them, report attempts, and record problems with security policies, threats and
security violations by individuals.[5][6][7][8][9][10]
Intrusion prevention systems are categorised as:[6][11]
1. Network-based intrusion prevention system monitoring the entire network for
suspicious traffic by analyzing protocol activity.
2. Wireless intrusion prevention systems as above but for a wireless network.
3. Network behavior analysis examining network traffic to identify threats that
generate unusual traffic flows.
4. Host-based intrusion prevention system monitoring a single host for suspicious activity
by analyzing events occurring within that host.
Intrusion systems are limited by device errors, false addresses, out-of-date
software and signatures,[14] delayed updates,[13] encrypted information, erroneous
packet addresses and network/host problems.[15]
Attackers use simple techniques to evade protection, e.g. fragmented message
packets, changed communication ports, coordinated attacks, address
spoofing/proxying and changing the pattern of attack data.
Tools help administrators review audit trails.[16] They can require ever-increasing
resources.[17] They follow a basic model[18] using statistics for anomaly detection in
profiles of users, hosts and target systems[19] and rule-based expert systems to detect
known intrusions.[20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]
[38]
5.14.9.2 References
1. Abdullah A. Mohamed, "Design Intrusion Detection System Based On Image
Block Matching", International Journal of Computer and Communication Engineering,
IACSIT Press, Vol. 2, No. 5, September 2013.
2. "Gartner report: Market Guide for User and Entity Behavior Analytics".
September 2015.
3. "Gartner: Hype Cycle for Infrastructure Protection, 2016".
4. "Gartner: Defining Intrusion Detection and Prevention Systems".
Retrieved September 20, 2016.
5. Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800–94). Retrieved 1 January 2010.
6. "NIST – Guide to Intrusion Detection and Prevention Systems (IDPS)" (PDF).
February 2007. Retrieved 2010-06-25.
7. Robert C. Newman (19 February 2009). Computer Security: Protecting Digital
Resources. Jones & Bartlett Learning. ISBN 978-0-7637-5994-0. Retrieved 25
June 2010.
8. Michael E. Whitman; Herbert J. Mattord (2009). Principles of Information
Security. Cengage Learning EMEA. ISBN 978-1-4239-0177-8. Retrieved 25 June 2010.
9. Tim Boyles (2010). CCNA Security Study Guide: Exam 640-553. John Wiley and
Sons. p. 249. ISBN 978-0-470-52767-2. Retrieved 29 June 2010.
10. Harold F. Tipton; Micki Krause (2007). Information Security Management
Handbook. CRC Press. p. 1000. ISBN 978-1-4200-1358-0. Retrieved 29 June 2010.
11. John R. Vacca (2010). Managing Information Security. Syngress.
p. 137. ISBN 978-1-59749-533-2. Retrieved 29 June 2010.
12. Engin Kirda; Somesh Jha; Davide Balzarotti (2009). Recent Advances in Intrusion
Detection: 12th International Symposium, RAID 2009, Saint-Malo, France, September
23–25, 2009, Proceedings. Springer. p. 162. ISBN 978-3-642-04341-3. Retrieved 29
June 2010.
13. nitin.; Mattord, verma (2008). Principles of Information Security. Course
Technology. pp. 290–301. ISBN 978-1-4239-0177-8.
14. c Anderson, Ross (2001). Security Engineering: A Guide to Building Dependable
Distributed Systems. New York: John Wiley & Sons. pp. 387–388. ISBN 978-0-471-38922-
4.
15. http://www.giac.org/paper/gsec/235/limitations-network-intrusion-
detection/100739
16. Anderson, James P., "Computer Security Threat Monitoring and Surveillance,"
Washing, PA, James P. Anderson Co., 1980.
17. David M. Chess; Steve R. White (2000). "An Undetectable Computer
Virus". Proceedings of Virus Bulletin Conference.
18. Denning, Dorothy E., "An Intrusion Detection Model," Proceedings of the Seventh
IEEE Symposium on Security and Privacy, May 1986, pages 119–131
19. Lunt, Teresa F., "IDES: An Intelligent System for Detecting Intruders,"
Proceedings of the Symposium on Computer Security; Threats, and Countermeasures;
Rome, Italy, November 22–23, 1990, pages 110–121.
20. Lunt, Teresa F., "Detecting Intruders in Computer Systems," 1993 Conference on
Auditing and Computer Technology, SRI International
21. Sebring, Michael M., and Whitehurst, R. Alan., "Expert Systems in Intrusion
Detection: A Case Study," The 11th National Computer Security Conference, October,
1988
22. Smaha, Stephen E., "Haystack: An Intrusion Detection System," The Fourth
Aerospace Computer Security Applications Conference, Orlando, FL, December, 1988
23. Vaccaro, H.S., and Liepins, G.E., "Detection of Anomalous Computer Session
Activity," The 1989 IEEE Symposium on Security and Privacy, May, 1989
24. Teng, Henry S., Chen, Kaihu, and Lu, Stephen C-Y, "Adaptive Real-time Anomaly
Detection Using Inductively Generated Sequential Patterns," 1990 IEEE Symposium on
Security and Privacy
25. Heberlein, L. Todd, Dias, Gihan V., Levitt, Karl N., Mukherjee, Biswanath, Wood,
Jeff, and Wolber, David, "A Network Security Monitor," 1990 Symposium on Research in
Security and Privacy, Oakland, CA, pages 296–304
26. Winkeler, J.R., "A UNIX Prototype for Intrusion and Anomaly Detection in Secure
Networks," The Thirteenth National Computer Security Conference, Washington, DC.,
pages 115–124, 1990
27. Dowell, Cheri, and Ramstedt, Paul, "The ComputerWatch Data Reduction Tool,"
Proceedings of the 13th National Computer Security Conference, Washington, D.C.,
1990
28. Snapp, Steven R, Brentano, James, Dias, Gihan V., Goan, Terrance L., Heberlein,
L. Todd, Ho, Che-Lin, Levitt, Karl N., Mukherjee, Biswanath, Smaha, Stephen E., Grance,
Tim, Teal, Daniel M. and Mansur, Doug, "DIDS (Distributed Intrusion Detection System)
-- Motivation, Architecture, and An Early Prototype," The 14th National Computer
Security Conference, October, 1991, pages 167–176.
29. Jackson, Kathleen, DuBois, David H., and Stallings, Cathy A., "A Phased
Approach to Network Intrusion Detection," 14th National Computing Security
Conference, 1991
30. Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time,"
Proceedings of The 7th USENIX Security Symposium, San Antonio, TX, 1998
31. Amoroso, Edward, "Intrusion Detection: An Introduction to Internet Surveillance,
Correlation, Trace Back, Traps, and Response," Intrusion.Net Books, Sparta, New
Jersey, 1999, ISBN 0-9666700-7-8
32. Kohlenberg, Toby (Ed.), Alder, Raven, Carter, Dr. Everett F. (Skip), Jr., Esler,
Joel., Foster, James C., Jonkman Marty, Raffael, and Poor, Mike, "Snort IDS and IPS
Toolkit," Syngress, 2007, ISBN 978-1-59749-099-3
33. Barbara, Daniel, Couto, Julia, Jajodia, Sushil, Popyack, Leonard, and Wu,
Ningning, "ADAM: Detecting Intrusions by Data Mining," Proceedings of the IEEE
Workshop on Information Assurance and Security, West Point, NY, June 5–6, 2001
34. Intrusion Detection Techniques for Mobile Wireless Networks, ACM WINET 2003
<http://www.cc.gatech.edu/~wenke/papers/winet03.pdf>
35. Viegas, E.; Santin, A. O.; França, A.; Jasinski, R.; Pedroni, V. A.; Oliveira, L. S.
(2017-01-01). "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine
for Embedded Systems". IEEE Transactions on Computers. 66 (1): 163–
177. doi:10.1109/TC.2016.2560839. ISSN 0018-9340.
36. França, A. L.; Jasinski, R.; Cemin, P.; Pedroni, V. A.; Santin, A. O. (2015-05-
01). "The energy cost of network security: A hardware vs. software comparison". 2015
IEEE International Symposium on Circuits and Systems (ISCAS): 81–
84. doi:10.1109/ISCAS.2015.7168575.
37. França, A. L. P. d; Jasinski, R. P.; Pedroni, V. A.; Santin, A. O. (2014-07-
01). "Moving Network Protection from Software to Hardware: An Energy Efficiency
Analysis". 2014 IEEE Computer Society Annual Symposium on VLSI: 456–
461. doi:10.1109/ISVLSI.2014.89.
38. "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine for
Embedded Systems" (PDF). SecPLab.
39. This article incorporates public domain material from the National Institute of
Standards and Technology document "Guide to Intrusion Detection and Prevention
Systems, SP800-94" by Karen Scarfone, Peter Mell (retrieved on 1 January 2010).
Further reading:
41. Hansen, James V.; Benjamin Lowry, Paul; Meservy, Rayman; McDonald, Dan
(2007). "Genetic programming for prevention of cyberterrorism through dynamic and
evolving intrusion detection". Decision Support Systems (DSS). 43 (4): 1362–
1374. doi:10.1016/j.dss.2006.04.004. SSRN 877981 .
42. Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800-94). Retrieved 1 January 2010.
43. Saranya, J.; Padmavathi, G. (2015). "A Brief Study on Different Intrusions and
Machine Learning-based Anomaly Detection Methods in Wireless Sensor
Networks" (PDF). Avinashilingam Institute for Home Science and Higher Education for
Women (6(4)). Retrieved 4 April 2015.
44. Singh, Abhishek. "Evasions In Intrusion Prevention Detection Systems". Virus
Bulletin. Retrieved April 2010.
45. Bezroukov, Nikolai (11 December 2008). "Architectural Issues of Intrusion
Detection Infrastructure in Large Enterprises (Revision 0.82)". Softpanorama.
Retrieved 30 July 2010.
46. P.M. Mafra and J.S. Fraga and A.O. Santin (2014). "Algorithms for a distributed
IDS in MANETs". Journal of Computer and System Sciences. 80 (3): 554–
570. doi:10.1016/j.jcss.2013.06.011.
47. NIST SP 800-83, Guide to Malware Incident Prevention and Handling
48. NIST SP 800-94, Guide to Intrusion Detection and Prevention Systems (IDPS)
49. Study by Gartner "Magic Quadrant for Network Intrusion Prevention System
Appliances"
5.14.10 Data loss prevention software
5.14.10.1 Commentary
5.14.10.1.1 Introduction
Data loss prevention software detects potential data breaches/data exfiltration
transmissions and prevents them by monitoring, detecting and blocking sensitive data
while in use (endpoint actions), in motion (network traffic), and at rest (data storage).
The terms data loss and data leak are closely related and often used together.[1] A
data loss incident becomes a data leak incident when the lost data is eventually taken
by an unauthorized party; a data leak can also happen without losing the original data.
5.14.10.1.2 Categories
Technologies for handling data leakage incidents fall into several categories: standard
security measures; advanced/intelligent security measures; access control and
encryption; and designated systems.[2]
5.14.10.1.2.1 Standard measures
Standard security uses firewalls, intrusion detection systems and antivirus software as
anti-malware protection.
5.14.10.1.2.2 Advanced measures
Advanced security measures adopt machine learning and temporal reasoning
algorithms for detecting unusual data access or email exchange, use honeypots to find
authorized personnel with malevolent intentions, and apply activity-based verification
and monitoring to detect abnormal data access.
5.14.10.1.2.3 Designated systems
Designated systems use exact data matching, structured data fingerprinting, statistical
methods, rule and regular expression matching, published lexicons, conceptual
definitions and keywords[3] to identify sensitive information and detect/prevent
unauthorized attempts to use it.
5.14.10.1.3 Types
5.14.10.1.3.1 Network
Network (data in motion) technology acts at network exit points near the perimeter. It
studies network traffic for violations of information security policies. Multiple security
control points report the activity they examine to a central management server.[1]
5.14.10.1.3.2 Endpoint
Endpoint (data in use) systems operate on internal end-user devices. They address
internal as well as external communications and can control information flow between
groups of users. They can also control email and instant messaging before messages
reach the corporate archive, so that a blocked communication is not identified in later
legal discovery. They can monitor and control access to physical devices, access
information before it is encrypted, and supply application controls to block attempted
transmissions of confidential information while providing immediate user feedback.
5.14.10.1.3.3 Data identification
This includes techniques to determine whether sensitive data is structured, with fixed
fields within the file, or unstructured, as in free-form files.[4][5] Detection can be based
on content analysis, which inspects the data itself, or contextual analysis, which uses
the place of origin or the application or system generating the data.[6]
Methods for describing sensitive content can be precise or imprecise. Precise methods
use content registration and trigger almost zero false positives. Imprecise methods
include keywords, lexicons, regular expressions, extended regular expressions,
metadata tags, Bayesian analysis and statistical techniques such as machine
learning.[7]
The strength of the analysis engine lies in its accuracy, which is important for lowering
false positives and negatives. Accuracy depends on many variables, some situational
and some technological, and should be validated.
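As a toy illustration of the imprecise methods above, the sketch below flags text using
regular expressions and keywords. The patterns, keywords and sample text are
hypothetical; production DLP engines combine many more techniques and tune them
to reduce false positives.

```python
# Minimal sketch of imprecise data identification using regular expressions
# and keywords. Patterns, keywords and sample text are hypothetical.
import re

PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
KEYWORDS = {"confidential", "internal only", "do not distribute"}

def classify(text: str) -> set:
    """Return the labels of every pattern or keyword found in the text."""
    hits = {name for name, rx in PATTERNS.items() if rx.search(text)}
    lowered = text.lower()
    hits |= {kw for kw in KEYWORDS if kw in lowered}
    return hits

sample = "CONFIDENTIAL: card 4111 1111 1111 1111 must not leave the CDE."
print(classify(sample))  # e.g. {'payment_card', 'confidential'}
```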
5.14.10.1.3.4 Data leak detection
When data that has been passed to third parties ends up in an unauthorized place, the
source of the leak must be investigated.[8]
5.14.10.1.3.5 Data at-rest
Data at rest is archived information, which remains of interest to its originator. The
longer data sits unused, the greater the chance it could be accessed for illicit
purposes.[9] Protection is obtained through access control, data encryption and data
retention policies.[1]
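As one hedged example of the encryption element, the sketch below encrypts an
archived record with the third-party cryptography package (an assumption: the
package must be installed separately, and a real deployment pairs this with key
management, access control and retention policies).

```python
# Minimal sketch of encrypting data at rest with the third-party
# "cryptography" package (pip install cryptography). Key management,
# access control and retention policies are out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice kept in a key management system
f = Fernet(key)

record = b"archived customer record"
token = f.encrypt(record)          # ciphertext that is safe to store at rest
assert f.decrypt(token) == record  # only key holders can recover the data
```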
5.14.10.1.3.6 Data in-use
Data in use is data the user is currently employing. Systems protecting data in use
monitor and flag unauthorized activities,[1] including screen-capture, copy/paste, print
and fax operations involving sensitive data, and attempts to transmit it over
communication channels.[10]
5.14.10.1.3.7 Data in-motion
Data in motion is information travelling over the network to an endpoint; it is
monitored across the network's communication channels.[1]
5.14.10.2 References
1. Asaf Shabtai, Yuval Elovici, Lior Rokach, A Survey of Data Leakage Detection and
Prevention Solutions, Springer-Verlag New York Incorporated, 2012
2. Phua, C., Protecting organisations from personal data breaches, Computer Fraud and
Security, 1:13-18, 2009
3. Ouellet, E., Magic Quadrant for Content-Aware Data Loss Prevention, Technical Report,
RA4 06242010, Gartner RAS Core Research, 2012
4. "unstructured data Definition from PC Magazine Encyclopedia".
5. Brian E. Burke, “Information Protection and Control survey: Data Loss Prevention and
Encryption trends,” IDC, May 2008
6. "Understanding and Selecting a Data Loss Prevention Solution" (PDF). Securosis, L.L.C.
Retrieved January 13, 2017.
7. "Core DLP Technology - © GTB Technologies, Inc. 2006".
8. Panagiotis Papadimitriou, Hector Garcia-Molina (January 2011), "Data Leakage
Detection" (PDF), IEEE Transactions on Knowledge and Data Engineering, 23 (1): 51–
63, doi:10.1109/TKDE.2010.100
9. Costante, E., Vavilis, S., Etalle, S., Petkovic, M., & Zannone, N. Database Anomalous
Activities: Detection and Quantification .SECRYPT 2013
10. Gugelmann, D.; Studerus, P.; Lenders, V.; Ager, B. (2015-07-01). "Can Content-Based
Data Loss Prevention Solutions Prevent Data Leakage in Web Traffic?". IEEE Security
Privacy. 13 (4): 52–59. doi:10.1109/MSP.2015.88. ISSN 1540-7993.
5.14.11 IoT Authorization
Authorization controls access rights to resources related to IoT. It relies on access
policies, which are divided into a policy definition phase, where access is defined, and a
policy enforcement phase, where requests are accepted or rejected.
5.14.12 FIDO Alliance
5.14.12.1 Commentary
The FIDO ("Fast IDentity Online") Alliance is a consortium launched in 2013 to promote
interoperability among strong authentication devices and the problems of multiple
usernames and passwords.[1][2][3] Its specifications will support a full range of
authentication technologies, including biometrics and communications standards.[4][5]
[6][7][8][9][10]
5.14.12.2 References
1."PayPal, Lenovo Launch New Campaign to Kill the Password". MIT Technology Review.
2."FIDO Alliance Members". FIDO Alliance.
3.https://fidoalliance.org/membership/members/
4."FIDO Alliance >> Specifications overview". FIDO Alliance.
5."Specifications Overview". FIDO Alliance. Retrieved 31 October 2014.
6."FIDO 1.0 Specifications Published and Final". FIDO Alliance. Retrieved 31
December2014.
7."Computerworld, December 10, 2014: "Open authentication spec from FIDO Alliance
moves beyond passwords"". Computerworld. Retrieved 10 December 2014.
8."eWeek, July 1, 2015: "FIDO Alliance Extends Two-Factor Security Standards to
Bluetooth, NFC"". eWeek. Retrieved 1 July 2015.
9."W3C Member Submission, November20, 2015: "FIDO 2.0: Web API for accessing FIDO
2.0 credentials"". W3C. Retrieved March 14, 2016.
10."PayPal Engineering Blog, February 17, 2016: "Acceptance of FIDO 2.0 Specifications
by the W3C accelerates the movement to end passwords"". PayPal. Retrieved March
14,2016.
5.15 Summary
5.15.1 Threats
Threats are defined by asking “what threat can I use, and where?” and work by
exploiting vulnerabilities of systems. Threats arise in software attacks, theft of
intellectual property, identity theft, theft of equipment or information, sabotage, and
information extortion. They can be used for cyberbullying, cybercrime and cyberwarfare. They
take the form of infectious malware, concealment type and malware for profit.
Infectious malware consists of computer viruses and worms with their payloads.
Concealment types are Trojan horses, rootkits, backdoors, zombie computers, man-in-
the-middle, man-in-the-browser, man-in-the-mobile and clickjacking. Malware for
profit comes as privacy-invasive software, adware, phishing, spyware, botnet,
keystroke logging, form grabbing, web threats, fraudulent dialer, malbot, scareware,
rogue security software, ransomware and crimeware. Other threats arise from
eavesdropping and denial of service.
These threats can be classified as in the following table.
Threat | Classification | Type Of Basic Threat
Software Attacks | Threat Type |
Theft Of Intellectual Property | Threat Type |
Identity Theft | Threat Type |
Theft Of Equipment Or Information | Threat Type |
Sabotage | Threat Type |
Information Extortion | Threat Type |
Cyberbullying | Application |
Cybercrime | Application |
Cyberwarfare | Application |
Viruses | Infectious Malware | Running program
Worms | Infectious Malware | Running program; Transmit program
Trojan | Concealment Type | Running program; Send information
Rootkits | Concealment Type | Running program
Backdoors | Concealment Type | Running program
Zombie Computer | Concealment Type | Running program; Send information
Man-In-The-Middle | Concealment Type | Running program; Send information
Man-In-The-Browser | Concealment Type | Running program; Send information
Man-In-The-Mobile | Concealment Type | Running program; Send information
Clickjacking | Concealment Type | Running program; Send information
Privacy-Invasive Software | Malware For Profit | Running program; Send information
Adware | Malware For Profit | Running program
Phishing | Malware For Profit | Running program; Send information
Spyware | Malware For Profit | Running program; Send information
Botnet | Malware For Profit | Running program; Send information
Keystroke Logging | Malware For Profit | Running program
Form Grabbing | Malware For Profit | Running program; Send information
Web Threats | Malware For Profit | Running program
Fraudulent Dialer | Malware For Profit | Running program
Malbot | Malware For Profit | Running program
Scareware | Malware For Profit | Running program
Rogue Security Software | Malware For Profit | Running program
Ransomware | Malware For Profit | Running program
Crimeware | Malware For Profit | Running program
Eavesdropping | Other Threats | Running program
Denial Of Service | Other Threats | Running program
6 IoT Security Solutions


6.1 Commentary
6.1.1 Introduction
This section is the sixth part of the article considering IoT security. It is set in the
context of:
• search theory
• iot
• iot security
• threats
• generalisation
• defenses
• generalisation
• applications
• conclusions
IoT solution costs fall on the technologies and services that store, integrate, visualize,
and analyze IoT data.
Cybersecurity requirements span five key areas:
• Identification—understanding risk profile and current state
• Protection—applying prevention strategies to mitigate vulnerabilities and threats
• Detection—detecting anomalies and events
• Response—incident response, mitigation, and improvements
• Recovery—continuous life cycle improvement
Application protocol-based intrusion detection systems (APIDS) are associated with:
• Artificial immune system
• Bypass switch
• Denial-of-service attack
• DNS analytics
• Intrusion Detection Message Exchange Format
• Protocol-based intrusion detection system (PIDS)
• Real-time adaptive security
• Security management
• Software-defined protection
• Defensive programming
• Secure input and output handling
• Security bug
IoT security is dependent on information security which is composed of internet
security, cyberwarfare, computer security, mobile security and network security.
Threats come from:
• Computer crime
• Vulnerability
• Eavesdropping
• Exploits
• Trojans
• Viruses
• Worms
• Denial of service
• Malware
• Payloads
• Rootkits
• Keyloggers
Defenses come from:
• Computer access control
• Application security
◦ Antivirus software
◦ Secure coding
◦ Security by design
◦ Secure operating systems
• Authentication
◦ Multi-factor authentication
• Authorization
• Data-centric security
• Firewall (computing)
• Intrusion detection system
• Intrusion prevention system
• Mobile secure gateway
Threats can be counteracted by:
• Computer security
• Cyber security standards
• Hardening
• Multiple Independent Levels of Security
• Secure by default
• Security through obscurity
• Software Security Assurance
• Software quality
• Software development philosophies
• Software development process
• Computer security
• Computer security procedures
• Information security
• Internet security
• Cyberwarfare
• Computer security
• Mobile security
• Network security
Information security comes from:
• Internet security
• Cyberwarfare
• Computer security
• Mobile security
• Network security
Operating systems give a basis for controlling security:
• Damn Vulnerable Linux
• IX (operating system)
• OpenBSM
• Operating system (section Security)
• Security engineering
• Security-evaluated operating system
• Trusted operating system
• AIX
• OpenBSD
• Hardened Linux
• Qubes
6.1.2 Authentication
6.1.2.1 Commentary
Authentication confirms the truth of an attribute of a single piece of data claimed true
by an entity. Identification states or indicates a person's identity; authentication
confirms that identity. It might involve confirming the identity of a person by validating
their identity documents, or verifying the authenticity of a website with a digital
certificate.[1] In other words, authentication often involves verifying the validity of at
least one form of identification.
In computing, verifying a person's identity is required to allow access to confidential
data or systems. Authentication can be considered to be of three types. The first
accepts proof of identity given by a credible person who has first-hand evidence that
the identity is genuine. Centralized authority-based trust relationships back most
secure internet communication through known public certificate authorities;
decentralized peer-based trust is used for personal services, where trust is established
by known individuals signing each other's cryptographic keys at key signing parties.[2]
Alternatively, we can compare the attributes of the object itself to what is known about
objects of that origin; in computing, a user can be given access to secure systems
based on user credentials that imply authenticity, such as a password or a key
card.[3]
Authentication can fall into three categories, based on factors of authentication:
something the user knows, something the user has, and something the user is.[4]
Levels of security can be raised by combining different factors from various categories,
with the number of factors giving progressively stronger security.[1]
Strong authentication is defined as layered authentication using two or more
authenticators to establish the identity of an originator or receiver of information,[5]
with the factors mutually independent and at least one “non-reusable and
non-replicable”, except for an inherence factor, which is incapable of being stolen off
the Internet,[1][6] as supported by the Fast IDentity Online (FIDO) Alliance
standards.[7] Continuous authentication improves on the initial log-in session by using
biometric traits, e.g. behavioural biometrics based on writing styles.[8]
Digital authentication (e-authentication) is used for electronic communication, where
man-in-the-middle attacks are a risk and extra identity factors are used to authenticate
each party's identity. It is the group of processes by which confidence in user identities
is established and presented via electronic methods to an information system.
NIST has a generic model for digital authentication. It consists of:
Enrollment – application to a credential service provider to become a subscriber.
Authentication – the subscriber presents a token and credentials for verification and is
permitted to perform transactions in an authenticated session with a relying party.
Life-cycle maintenance – the credential service provider maintains the user's credential
for its lifetime, while the subscriber is responsible for maintaining their
authenticator(s).[1][9]
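The sketch below models the enrollment and authentication phases in deliberately
simplified Python; the subscriber name, secret and PBKDF2 parameters are
hypothetical stand-ins, not NIST-mandated mechanisms.

```python
# Minimal sketch of the generic digital authentication model: enrollment
# with a credential service provider (CSP), then authentication of the
# subscriber. The credential scheme here is a simplified illustration.
import hashlib, hmac, os

CREDENTIALS = {}  # subscriber -> (salt, derived key); life-cycle maintenance
                  # would cover rotation and revocation of these entries

def enroll(subscriber: str, secret: str) -> None:
    """Enrollment: the applicant registers with the CSP as a subscriber."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    CREDENTIALS[subscriber] = (salt, digest)

def authenticate(subscriber: str, secret: str) -> bool:
    """Authentication: verify the credential for a session with a relying party."""
    if subscriber not in CREDENTIALS:
        return False
    salt, digest = CREDENTIALS[subscriber]
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

enroll("alice", "correct horse battery staple")
print(authenticate("alice", "correct horse battery staple"))  # True
print(authenticate("alice", "wrong password"))                # False
```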
Authorization verifies that "you are permitted to do what you are trying to do".[16][17]
6.1.2.2 References
1. Turner, Dawn M. "Digital Authentication: The Basics". Cryptomathic. Retrieved 9
August 2016.
2. Ahi, Kiarash (May 26, 2016). "Advanced terahertz techniques for quality control
and counterfeit detection". Proc. SPIE 9856, Terahertz Physics, Devices, and Systems
X: Advanced Applications in Industry and Defense, 98560G. doi:10.1117/12.2228684.
Retrieved May 26, 2016.
3. "How to Tell – Software". microsoft.com. Retrieved 11 December 2016.
4. Federal Financial Institutions Examination Council (2008). "Authentication in an
Internet Banking Environment" (PDF). Retrieved 2009-12-31.
5. Committee on National Security Systems. "National Information Assurance (IA)
Glossary" (PDF). National Counterintelligence and Security Center. Retrieved 9
August 2016.
6. European Central Bank. "Recommendations for the Security of Internet
Payments"(PDF). European Central Bank. Retrieved 9 August 2016.
7. "FIDO Alliance Passes 150 Post-Password Certified Products". InfoSecurity
Magazine. 2016-04-05. Retrieved 2016-06-13.
8. Brocardo ML, Traore I, Woungang I, Obaidat MS. "Authorship verification using
deep belief network systems". Int J Commun Syst. 2017. doi:10.1002/dac.3259
9. "Draft NIST Special Publication 800-63-3: Digital Authentication Guideline".
National Institute of Standards and Technology, USA. Retrieved 9 August 2016.
10. Eliasson, C; Matousek (2007). "Noninvasive Authentication of Pharmaceutical
Products through Packaging Using Spatially Offset Raman Spectroscopy". Analytical
Chemistry. 79(4): 1696–1701. doi:10.1021/ac062223z. PMID 17297975. Retrieved 9
Nov 2014.
11. Li, Ling (March 2013). "Technology designed to combat fakes in the global supply
chain". Business Horizons. 56 (2): 167–177. doi:10.1016/j.bushor.2012.11.010.
Retrieved 9 Nov 2014.
12. "How Anti-shoplifting Devices Work", HowStuffWorks.com
13. Norton, D. E. (2004). The effective teaching of language arts. New York:
Pearson/Merrill/Prentice Hall.
14. McTigue, E.; Thornton, E.; Wiese, P. (2013). "Authentication Projects for
Historical Fiction: Do you believe it?". The Reading Teacher. 66: 495–
505. doi:10.1002/trtr.1132.
15. The Register, UK; Dan Goodin; 30 March 2008; Get your German Interior
Minister's fingerprint, here. Compared to other solutions, "It's basically like leaving the
password to your computer everywhere you go, without you being able to control it
anymore", one of the hackers comments.
16. https://technet.microsoft.com/en-us/library/ff687018.aspx
17. "AuthN, AuthZ and Gluecon – CloudAve". cloudave.com. 26 April 2010.
Retrieved 11 December 2016.
18. A mechanism for identity delegation at authentication level, N Ahmed, C Jensen
– Identity and Privacy in the Internet Age – Springer 2009
6.1.3 IoT Authorization
6.1.3.1 Commentary
Authorization controls access rights to resources related to IoT. It relies on access
policies, which are divided into a policy definition phase, where access is defined, and a
policy enforcement phase, where requests are accepted or rejected.
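A minimal sketch of the two phases might look like the following; the policy entries,
subjects and resource names are hypothetical illustrations.

```python
# Minimal sketch of IoT authorization. Policy definition happens ahead of
# time; policy enforcement accepts or rejects each request against it.
# Subjects, resources and actions are hypothetical illustrations.

# Policy definition phase: who may do what to which resource.
POLICY = {
    ("sensor-42", "read"): {"gateway-1", "analytics-svc"},
    ("sensor-42", "write"): {"gateway-1"},
}

def enforce(subject: str, resource: str, action: str) -> bool:
    """Policy enforcement phase: grant only requests the policy defines."""
    return subject in POLICY.get((resource, action), set())

print(enforce("analytics-svc", "sensor-42", "read"))   # True: accepted
print(enforce("analytics-svc", "sensor-42", "write"))  # False: rejected
```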
6.1.3.2 References
1. Fraser, B. (1997), RFC 2196 – Site Security Handbook, IETF
2. Jøsang, Audun (2017), A Consistent Definition of Authorization, Proceedings of
the 13th International Workshop on Security and Trust Management (STM 2017)
6.1.4 Multi-factor authentication
6.1.4.1 Commentary
6.1.4.1.1 Introduction
Multi-factor authentication protects access by requiring several separate pieces of
evidence to be submitted to an authentication mechanism, drawn from the categories
knowledge (something they know), possession (something they have), and inherence
(something they are).[1][2]
Examples include:
• some physical object in the possession of the user, such as a USB stick with a secret
token, a bank card, a key, etc.
• some secret known to the user, such as a password, PIN, TAN, etc.
• some physical characteristic of the user (biometrics), such as a fingerprint, eye iris,
voice, typing speed, pattern in key press intervals, etc.[3]
The factor types are knowledge factors,[4] possession factors (disconnected tokens[5]
and connected tokens[6]) and inherence factors.[7]
Mobile phone two-factor authentication[8][9][10]
Advantages of mobile phone two-factor authentication:
• No additional tokens are necessary because it uses mobile devices that are (usually)
carried all the time.
• As they are constantly changed, dynamically generated passcodes are safer to use
than fixed (static) log-in information.
• Depending on the solution, passcodes that have been used are automatically replaced
in order to ensure that a valid code is always available; acute transmission/reception
problems do not therefore prevent logins.
• The option to specify a maximum permitted number of incorrect entries reduces the
risk of attacks by unauthorized persons.
• It is user friendly.
Disadvantages of mobile phone two-factor authentication:
• The mobile phone must be carried by the user, charged, and kept in range of a cellular
network whenever authentication might be necessary. If the phone is unable to display
messages, such as if it becomes damaged or shuts down for an update or due to
temperature extremes (e.g. winter exposure), access is often impossible without
backup plans.
• The user must share their personal mobile number with the provider, reducing personal
privacy and potentially allowing spam.
• Text messages to mobile phones using SMS are insecure and can be intercepted. The
token can thus be stolen and used by third parties.[11]
• Text messages may not be delivered instantly, adding additional delays to the
authentication process.
• Account recovery typically bypasses mobile phone two-factor authentication.[12]
• Modern smart phones are used both for browsing email and for receiving SMS. Email is
usually always logged in, so if the phone is lost or stolen, all accounts for which the
email is the key can be hacked, as the phone can receive the second factor; a smart
phone thus combines the two factors into one.
• Mobile phones can be stolen, potentially allowing the thief to gain access into the
user's accounts.
• SIM cloning gives hackers access to mobile phone connections.
6.1.4.1.2 Advances in mobile two-factor authentication
Advances in research on two-factor authentication for mobile devices consider
different methods in which a second factor can be implemented while not posing a
hindrance to the user. With the continued use of, and improvements in the accuracy of,
mobile hardware such as GPS, microphone, and gyro/accelerometer, the ability to use
them as a second factor of authentication is becoming more trustworthy. For example,
recording the ambient noise of the user’s location from a mobile device and comparing
it with the recording of the ambient noise from the computer in the same room on
which the user is trying to authenticate makes an effective second factor of
authentication.[13] This also reduces the amount of time and effort needed to complete
the process.
6.1.4.1.3 United States
6.1.4.1.3.1 Regulation
Details for authentication in the USA are defined with the Homeland Security
Presidential Directive 12 (HSPD-12).[14]
Existing authentication methodologies involve the three types of basic "factors"
explained above. Authentication methods that depend on more than one factor are
more difficult to compromise than single-factor methods.[15]
IT regulatory standards for access to Federal Government systems require the use of
multi-factor authentication to access sensitive IT resources, for example when logging
on to network devices to perform administrative tasks[16] and when accessing any
computer using a privileged login.[17]
6.1.4.1.3.2 Guidance
NIST Special Publication 800-63-2 discusses various forms of two-factor authentication
and provides guidance on using them in business processes requiring different levels of
assurance.[18]
In 2005, the United States' Federal Financial Institutions Examination Council issued
guidance for financial institutions recommending financial institutions conduct risk-
based assessments, evaluate customer awareness programs, and develop security
measures to reliably authenticate customers remotely accessing online financial
services, officially recommending the use of authentication methods that depend on
more than one factor (specifically, what a user knows, has, and is) to determine the
user's identity.[19] In response to the publication, numerous authentication vendors
began improperly promoting challenge-questions, secret images, and other knowledge-
based methods as "multi-factor" authentication. Due to the resulting confusion and
widespread adoption of such methods, on August 15, 2006, the FFIEC published
supplemental guidelines—which states that by definition, a "true" multi-factor
authentication system must use distinct instances of the three factors of
authentication it had defined, and not just use multiple instances of a single factor.[20]
6.1.4.1.3.3 Security
According to proponents, multi-factor authentication could drastically reduce the
incidence of online identity theft and other online fraud, because the victim's password
would no longer be enough to give a thief permanent access to their information.
However, many multi-factor authentication approaches remain vulnerable to phishing,
[21] man-in-the-browser, and man-in-the-middle attacks.[22]
Multi-factor authentication may be ineffective against modern threats, like ATM
skimming, phishing, and malware.[23]
6.1.4.1.3.4 Industry regulation
Payment Card Industry Data Security Standard (PCI-DSS)
The Payment Card Industry (PCI) Data Security Standard, requirement 8.3, requires the
use of MFA for all remote network access that originates from outside the network to a
Card Data Environment (CDE).[24] Beginning with PCI-DSS version 3.2, the use of MFA
is required for all administrative access to the CDE, even if the user is within a trusted
network.[25]
6.1.4.1.3.5 Implementation considerations
Many multi-factor authentication products require users to deploy client software to
make multi-factor authentication systems work. Some vendors have created separate
installation packages
for network login, Web access credentials and VPN connection credentials. For such
products, there may be four or five different software packages to push down to
the client PC in order to make use of the token or smart card. This translates to four or
five packages on which version control has to be performed, and four or five packages
to check for conflicts with business applications. If access can be operated using web
pages, it is possible to limit the overheads outlined above to a single application. With
other multi-factor authentication solutions, such as "virtual" tokens and some hardware
token products, no software must be installed by end users.
There are drawbacks to multi-factor authentication that are keeping many approaches
from becoming widespread. Some consumers have difficulty keeping track of a
hardware token or USB plug. Many consumers do not have the technical skills needed
to install a client-side software certificate by themselves. Generally, multi-factor
solutions require additional investment for implementation and costs for maintenance.
Most hardware token-based systems are proprietary and some vendors charge an
annual fee per user. Deployment of hardware tokens is logistically challenging.
Hardware tokens may get damaged or lost and issuance of tokens in large industries
such as banking or even within large enterprises needs to be managed. In addition to
deployment costs, multi-factor authentication often carries significant additional
support costs. A 2008 survey of over 120 U.S. credit unions by the Credit Union
Journal reported on the support costs associated with two-factor authentication. In
their report, software certificates and software toolbar approaches were reported to
have the highest support costs.
6.1.4.1.3.6 Examples
Several popular web services employ multi-factor authentication, usually as an optional
feature that is deactivated by default.[26]
Many Internet services (among them Google and Amazon AWS) use the open Time-based
One-time Password Algorithm (TOTP) to support multi-factor or two-factor
authentication.
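As a concrete illustration, TOTP (RFC 6238, built on the HOTP construction of
RFC 4226) derives a short-lived code from a shared secret and the current time. The
sketch below is a minimal standard-library implementation; the Base32 secret is a
made-up demonstration value.

```python
# Minimal TOTP (RFC 6238) sketch using only the Python standard library.
# The Base32 secret is a made-up demonstration value.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // period)        # 30-second steps since the epoch
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code that changes every 30 seconds
```

Because both sides compute the same code from the shared secret and the clock, the
server can verify the submitted code without the code ever travelling over SMS.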
6.1.4.2 References
1. "Two-factor authentication: What you need to know (FAQ) - CNET". CNET.
Retrieved 2015-10-31.
2. "How to extract data from an iCloud account with two-factor authentication
activated". iphonebackupextractor.com. Retrieved 2016-06-08.
3. "What is 2FA?". Retrieved 19 February 2015.
4. "Securenvoy - what is 2 factor authentication?". Retrieved April 3, 2015.
5. de Borde, Duncan. "Two-factor authentication" (PDF). Archived from the
original(PDF) on January 12, 2012.
6. van Tilborg, Henk C.A.; Jajodia, Sushil, eds. (2011). Encyclopedia of
Cryptography and Security, Volume 1. Springer Science & Business Media.
p. 1305. ISBN 9781441959058.
7. Biometrics for Identification and Authentication - Advice on Product Selection
8. http://eprint.iacr.org/2014/135.pdf
9. "Mobile Two Factor Authentication" (PDF). securenvoy.com. Retrieved August
30, 2016. 2012 copyright
10. "How Russia Works on Intercepting Messaging Apps - bellingcat". bellingcat.
2016-04-30. Retrieved 2016-04-30.
11. SSMS – A Secure SMS Messaging Protocol for the M-Payment Systems,
Proceedings of the 13th IEEE Symposium on Computers and Communications (ISCC'08),
pp. 700–705, July 2008 arXiv:1002.3171
12. Rosenblatt, Seth; Cipriani, Jason (June 15, 2015). "Two-factor authentication:
What you need to know (FAQ)". CNET. Retrieved 2016-03-17.
13. "Sound-Proof: Usable Two-Factor Authentication Based on Ambient Sound |
USENIX". www.usenix.org. Retrieved 2016-02-24.
14. US Security Directive as issued on August 12, 2007 Archived September 16,
2012, at the Wayback Machine.
15. "Frequently Asked Questions on FFIEC Guidance on Authentication in an Internet
Banking Environment", August 15, 2006[dead link]
16. "SANS Institute, Critical Control 10: Secure Configurations for Network Devices
such as Firewalls, Routers, and Switches".
17. "SANS Institute, Critical Control 12: Controlled Use of Administrative Privileges".
18. "Electronic Authentication Guide" (PDF). Special Publication 800-63-2. NIST.
2013. Retrieved 2014-11-06.
19. "FFIEC Press Release". 2005-10-12. Retrieved 2011-05-13.
20. FFIEC (2006-08-15). "Frequently Asked Questions on FFIEC Guidance on
Authentication in an Internet Banking Environment" (PDF). Retrieved 2012-01-14.
21. Brian Krebs (July 10, 2006). "Security Fix - Citibank Phish Spoofs 2-Factor
Authentication". Washington Post. Retrieved 20 September 2016.
22. Bruce Schneier (March 2005). "The Failure of Two-Factor
Authentication". Schneier on Security. Retrieved 20 September 2016.
23. "The Failure of Two-Factor Authentication - Schneier on Security". schneier.com.
Retrieved 23 October 2015.
24. "Official PCI Security Standards Council Site - Verify PCI Compliance, Download
Data Security and Credit Card Security Standards". www.pcisecuritystandards.org.
Retrieved 2016-07-25.
25. "For PCI MFA Is Now Required For Everyone | Centrify Blog". blog.centrify.com.
Retrieved 2016-07-25.
26. GORDON, WHITSON (3 September 2012). "Two-Factor Authentication: The Big
List Of Everywhere You Should Enable It Right Now". LifeHacker. Australia. Retrieved 1
6.1.5 Best Practices
6.1.5.1 Commentary
IoT security, like all information security, cannot be absolute or guaranteed. Threats
are constantly discovered, requiring monitoring, maintenance and regular review of
policy and practice.
The IoT Security Foundation (IoTSF) publishes, reviews and maintains guidance and
framework information on a regular basis, or in exceptional circumstances when
required. It builds on feedback from users and experts to provide continuous
improvement, provides publicity when needed, and gives an audit trail on its website.
The IoT security compliance framework is a structured checklist process to direct
organisations through the IoT security assurance process where information collected
demonstrates conformance with best practice.
6.1.5.2 References
6.1.6 FIDO Alliance
6.1.6.1 Commentary
The FIDO ("Fast IDentity Online") Alliance is a consortium launched in 2013 to promote
interoperability among strong authentication devices and the problems of multiple
usernames and passwords.[1][2][3] Its specifications will support a full range of
authentication technologies, including biometrics and communications standards.[4][5]
[6][7][8][9][10]
6.1.6.2 References
6. "PayPal, Lenovo Launch New Campaign to Kill the Password". MIT Technology
Review.
7. "FIDO Alliance Members". FIDO Alliance.
8. https://fidoalliance.org/membership/members/
9. "FIDO Alliance >> Specifications overview". FIDO Alliance.
10. "Specifications Overview". FIDO Alliance. Retrieved 31 October 2014.
11. "FIDO 1.0 Specifications Published and Final". FIDO Alliance. Retrieved 31
December2014.
12. "Computerworld, December 10, 2014: "Open authentication spec from FIDO
Alliance moves beyond passwords"". Computerworld. Retrieved 10 December 2014.
13. "eWeek, July 1, 2015: "FIDO Alliance Extends Two-Factor Security Standards to
Bluetooth, NFC"". eWeek. Retrieved 1 July 2015.
14. "W3C Member Submission, November20, 2015: "FIDO 2.0: Web API for accessing
FIDO 2.0 credentials"". W3C. Retrieved March 14, 2016.
15. "PayPal Engineering Blog, February 17, 2016: "Acceptance of FIDO 2.0
Specifications by the W3C accelerates the movement to end passwords"". PayPal.
Retrieved March 14,2016.
6.1.7 Antivirus
6.1.7.1 Commentary
An antivirus prevents, detects, and removes malware such as malicious browser helper
objects, browser hijackers, ransomware, key-loggers, back-doors, rootkits, Trojan
horses, worms, malicious Layered Service Providers, dialers, fraud
tools, adware and spyware. Some products include protection from other threats, like
infected and malicious URLs, spam, scam and phishing attacks, online
identity (privacy), online banking attacks, social engineering techniques, advanced
persistent threat (APT) and botnet DDoS attacks.
Antivirus is computer software used to prevent, detect and remove malicious
software.[1] It was originally developed for viruses but is now extended to all
malware.[2][3]
Methods used to identify malware are:
• Sandbox detection: running the program in a virtual environment[79] or real
environment and observing its behaviour.[80]
• Data mining: profiling by data mining and machine learning.[81][82][83][84][85]
[86][87][88][89][90][91][92][93][94]
• Signature-based detection: forming signatures[95] using dynamic analysis
systems, which are added to the signatures database.[96] This can fail for disguised
variants of the same malware.[97] (A minimal sketch follows this list.)
• Heuristics: generic signatures that group variants of the same malware family
under one detection.[98]
• Rootkit detection: scanning for rootkits that gain administrative-level control over a
system.[102]
• Real-time protection: monitoring systems for suspicious activity.[103]
• Issues of concern are rogue security applications,[107] false positives,[108][109]
[110][111][112][113][114][115][116][117][118] system and interoperability related
issues,[119][120][121][122][123][124][125][126][127][128] effectiveness,[129][130][131]
[132][133][134][135] new viruses,[136][137][138] rootkits,[139] damaged files,[140]
[141][142][143] performance[150] and a false sense of security.[151][152][153][154]
Firmware issues can interfere with the update process.[144][145][146][147][148][149]
• Cloud antivirus uses a lightweight agent on the protected computer and offloads
most data analysis to the provider's infrastructure,[155][156][157][158] whilst online
scanning is offered by websites that scan the computer free of charge.[159]
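The sketch below illustrates the signature-based item from the list above in its
simplest form, a hash lookup against a database of known-bad file hashes; real engines
also match byte patterns inside files and combine this with the heuristic and
behavioural methods listed. The single database entry is the widely published SHA-256
of the harmless EICAR test file, included here only as an example.

```python
# Minimal sketch of signature-based detection: hash a file and look the
# digest up in a database of known-malware hashes. The one entry below is
# the widely published SHA-256 of the harmless EICAR test file.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_known_malware(path: Path) -> bool:
    """Return True if the file's SHA-256 appears in the signature database."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# Usage: is_known_malware(Path("suspicious_download.bin"))
```

This is exactly why disguised variants of the same malware evade such a scan:
changing a single byte changes the hash.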
Problems arise from:
• Rogue security applications
• False positives
• Running multiple antivirus programs concurrently, degrading performance and
creating conflicts
• Disabling virus protection when installing major updates
• Program conflicts that cause malfunction or impair performance and stability
• Support issues around interoperability with remote and network access control
• Damaged files after removing viruses
Virus removal tools are available to help remove stubborn infections or certain types of
infection. A bootable rescue disk can be used to run antivirus software outside of the
installed operating system, in order to remove infections while they are dormant.
6.1.7.2 References
1. Naveen, Sharanya. "Anti-virus software". Retrieved May 31, 2016.
2. Henry, Alan. "The Difference Between Antivirus and Anti-Malware (and Which to
Use)".
3. "What is antivirus software?". Microsoft. Archived from the original on April 11,
2011.
4. von Neumann, John (1966) Theory of self-reproducing automata. University of
Illinois Press.
5. Thomas Chen, Jean-Marc Robert (2004). "The Evolution of Viruses and Worms".
Retrieved February 16, 2009.
6. From the first email to the first YouTube video: a definitive internet history. Tom
Meltzer and Sarah Phillips. The Guardian. October 23, 2009
7. IEEE Annals of the History of Computing, Volumes 27–28. IEEE Computer Society,
2005. 74: "[...]from one machine to another led to experimentation with
the Creeper program, which became the world's first computer worm: a computation
that used the network to recreate itself on another node, and spread from node to
node."
8. John Metcalf (2014). "Core War: Creeper & Reaper". Retrieved May 1, 2014.
9. "Creeper – The Virus Encyclopedia".
10. What was the First Antivirus Software?. Anti-virus-software-
review.toptenreviews.com. Retrieved on January 3, 2017.
11. "Elk Cloner". Retrieved December 10, 2010.
12. "Top 10 Computer Viruses: No. 10 – Elk Cloner". Retrieved December 10, 2010.
13. "List of Computer Viruses Developed in 1980s". Retrieved December 10, 2010.
14. Fred Cohen: "Computer Viruses – Theory and Experiments" (1983).
Eecs.umich.edu (November 3, 1983). Retrieved on 2017-01-03.
15. Cohen, Fred (April 1, 1988). "Invited Paper: On the Implications of Computer
Viruses and Methods of Defense". Computers & Security. 7 (2): 167–
184. doi:10.1016/0167-4048(88)90334-3 – via ACM Digital Library.
16. Szor, Peter (February 13, 2005). The Art of Computer Virus Research and
Defense. Addison-Wesley Professional. ASIN 0321304543 – via Amazon.
17. "Virus Bulletin :: In memoriam: Péter Ször 1970–2013".
18. "History of Viruses".
19. Leyden, John (January 19, 2006). "PC virus celebrates 20th birthday". The
Register. Retrieved March 21, 2011.
20. "About computer viruses of 1980's" (PDF). Retrieved February 17, 2016.
21. Panda Security (April 2004). "(II) Evolution of computer viruses". Archived from
the original on August 2, 2009. Retrieved June 20, 2009.
22. Kaspersky Lab Virus list. viruslist.com
23. Wells, Joe (August 30, 1996). "Virus timeline". IBM. Archived from the original on
June 4, 2008. Retrieved June 6, 2008.
24. G Data Software AG (2011). "G Data presents security firsts at CeBIT 2010".
Retrieved August 22, 2011.
25. G Data Software AG (2016). "Virus Construction Set II". Retrieved July 3, 2016.
26. Karsmakers, Richard (January 2010). "The ultimate Virus Killer Book and
Software". Retrieved July 6, 2016.
27. "McAfee Becomes Intel Security". McAfee Inc. Retrieved January 15, 2014.
28. Cavendish, Marshall (2007). Inventors and Inventions, Volume 4. Paul Bernabeo.
p. 1033. ISBN 0761477675.
29. "About ESET Company".
30. "ESET NOD32 Antivirus". Vision Square. February 16, 2016.
31. Cohen, Fred, An Undetectable Computer Virus (Archived), 1987, IBM
32. Yevics, Patricia A. "Flu Shot for Computer Viruses". americanbar.org.
33. Strom, David (April 1, 2010). "How friends help friends on the Internet: The Ross
Greenberg Story". wordpress.com.
34. "Anti-virus is 30 years old". spgedwards.com. April 2012.
35. "A Brief History of Antivirus Software". techlineinfo.com.
36. Grimes, Roger A. (June 1, 2001). Malicious Mobile Code: Virus Protection for
Windows. O'Reilly Media, Inc. p. 522. ISBN 9781565926820.
37. "F-PROT Tækniþjónusta – CYREN Iceland". frisk.is.
38. Direccion General del Derecho de Autor, SEP, Mexico D.F. Registry 20709/88
Book 8, page 40, dated November 24, 1988.
39. "The 'Security Digest' Archives (TM) : www.phreak.org-virus_l".
40. "Symantec Softwares and Internet Security at PCM".
41. SAM Identifies Virus-Infected Files, Repairs Applications, InfoWorld, May 22, 1989
42. SAM Update Lets Users Program for New Viruses, InfoWorld, February 19, 1990
43. Naveen, Sharanya. "Panda Security". Retrieved May 31, 2016.
44. TG Soft S.a.s. "Who we are – TG Soft Software House". tgsoft.it.
45. "A New Virus Naming Convention (1991) – CARO – Computer Antivirus Research
Organization".
46. "CARO Members". CARO. Retrieved June 6, 2011.
47. CAROids, Hamburg 2003. Archived November 7, 2014, at the Wayback Machine.
48. "F-Secure Weblog : News from the Lab". F-secure.com. Retrieved September 23,
2012.
49. "About EICAR". EICAR
official website. Retrieved October 28, 2013.
48. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
50https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-50 David Harley, Lysa Myers
& Eddy Willems. "Test Files and Product Evaluation: the Case for and against Malware
Simulation" (PDF). AVAR2010 13th Association of anti Virus Asia Researchers
International Conference. Archived from the original (PDF) on September 29, 2011.
Retrieved June 30, 2011.
49. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
51https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-51 "Dr. Web LTD Doctor Web
/ Dr. Web Reviews, Best AntiVirus Software Reviews, Review Centre".
Reviewcentre.com. Retrieved February 17, 2014. to:a b c d [In 1994, AV-Test.org
reported 28,613 unique malware samples (based on MD5). "A Brief History of Malware;
The First 25 Years"]
50. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
53https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-53 "BitDefender Product
History".
51. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
54https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-54 "InfoWatch
Management". InfoWatch. Retrieved August 12, 2013.
52. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
55https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-55 "Linuxvirus – Community
Help Wiki".
53. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
56https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-56 "Sorry – recovering...".
54. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
57https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-57 "Sourcefire acquires
ClamAV". ClamAV. August 17, 2007. Retrieved February 12, 2008.
55. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
58https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-58 "Cisco Completes
Acquisition of Sourcefire". cisco.com. October 7, 2013. Retrieved June 18, 2014.
56. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
59https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-59 Der Unternehmer – brand
eins online. Brandeins.de (July 2009). Retrieved on January 3, 2017.
57. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
60https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-60 Williams, Greg (April
2012). "The digital detective: Mikko Hypponen's war on malware is escalating". Wired.
58. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
61https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-61 "Everyday cybercrime –
and what you can do about it".
59. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
62https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-62 Szor 2005, pp. 66–67
60. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
63https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-63 "New virus travels in PDF
files". August 7, 2001. Retrieved October 29, 2011.
61. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
64https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-64 Slipstick Systems
(February 2009). "Protecting Microsoft Outlook against Viruses". Archived from the
original on June 2, 2009. Retrieved June 18, 2009.
62. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
65https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-65 "CloudAV: N-Version
Antivirus in the Network Cloud". usenix.org.
63. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
66https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-66 McAfee Artemis Preview
Report. av-comparatives.org
64. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
67https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-67 McAfee Third Quarter
2008. corporate-ir.net
65. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
68https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-68 "AMTSO Best Practices
for Testing In-the-Cloud Security Products » AMTSO".
66. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
69https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-69 "TECHNOLOGY
OVERVIEW". AVG Security. Retrieved February 16, 2015.
67. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
70https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-70 "Magic Quadrant
Endpoint Protection Platforms 2016". Gartner Research.
68. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-NetworkWorld_71-
0https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-NetworkWorld_71-0 Messmer,
Ellen. "Start-up offers up endpoint detection and response for behavior-based malware
detection". networkworld.com.
69. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-HSToday.US_72-
0https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-HSToday.US_72-0 "Homeland
Security Today: Bromium Research Reveals Insecurity in Existing Endpoint Malware
Protection Deployments".
70. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
73https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-73 "Duelling Unicorns:
CrowdStrike Vs. Cylance In Brutal Battle To Knock Hackers Out". Forbes. July 6, 2016.
71. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
74https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-74 Potter, Davitt (June 9,
2016). "Is Anti-virus Dead? The Shift Toward Next-Gen Endpoints".
72. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
75https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-75 "CylancePROTECT®
Achieves HIPAA Security Rule Compliance Certification". Cylance.
73. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
76https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-76 "Trend Micro-XGen".
Trend Micro. October 18, 2016.
74. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
77https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-77 "Next-Gen Endpoint".
Sophos.
75. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
78https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-78 The Forrester Wave™:
Endpoint Security Suites, Q4 2016. Forrester.com (October 19, 2016). Retrieved on
2017-01-03.
76. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
79https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-79 Sandboxing Protects
Endpoints | Stay Ahead Of Zero Day Threats. Enterprise.comodo.com (June 20, 2014).
Retrieved on 2017-01-03.
77. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
80https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-80 Szor 2005, pp. 474–481
78. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
81https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-81 Kiem, Hoang; Thuy,
Nguyen Yhanh and Quang, Truong Minh Nhat (December 2004) "A Machine Learning
Approach to Anti-virus System", Joint Workshop of Vietnamese Society of AI, SIGKBS-
JSAI, ICS-IPSJ and IEICE-SIGAI on Active Mining ; Session 3: Artificial Intelligence, Vol.
67, pp. 61–65
79. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
82https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-82 Data Mining Methods for
Malware Detection. ProQuest. 2008. pp. 15–. ISBN 978-0-549-88885-7.
80. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
83https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-83 Dua, Sumeet; Du, Xian
(April 19, 2016). Data Mining and Machine Learning in Cybersecurity. CRC Press.
pp. 1–. ISBN 978-1-4398-3943-0.
81. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
84https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-84 Firdausi, Ivan; Lim,
Charles; Erwin, Alva; Nugroho, Anto Satriyo (2010). "Analysis of Machine learning
Techniques Used in Behavior-Based Malware Detection". 2010 Second International
Conference on Advances in Computing, Control, and Telecommunication Technologies.
p. 201. doi:10.1109/ACT.2010.33. ISBN 978-1-4244-8746-2.
82. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
85https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-85 Siddiqui, Muazzam;
Wang, Morgan C.; Lee, Joohan (2008). "A survey of data mining techniques for malware
detection using file features". Proceedings of the 46th Annual Southeast Regional
Conference on XX – ACM-SE 46.
p. 509. doi:10.1145/1593105.1593239. ISBN 9781605581057.
83. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
86https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-86 Deng, P.S.; Jau-Hwang
Wang; Wen-Gong Shieh; Chih-Pin Yen; Cheng-Tan Tung (2003). "Intelligent automatic
malicious code signatures extraction". IEEE 37th Annual 2003 International Carnahan
Conference on Security Technology, 2003. Proceedings.
p. 600. doi:10.1109/CCST.2003.1297626. ISBN 0-7803-7882-2.
84. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
87https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-87 Komashinskiy, Dmitriy;
Kotenko, Igor (2010). "Malware Detection by Data Mining Techniques Based on
Positionally Dependent Features". 2010 18th Euromicro Conference on Parallel,
Distributed and Network-based Processing. p. 617. doi:10.1109/PDP.2010.30. ISBN 978-
1-4244-5672-7.
85. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
88https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-88 Schultz, M.G.; Eskin, E.;
Zadok, F.; Stolfo, S.J. (2001). "Data mining methods for detection of new malicious
executables". Proceedings 2001 IEEE Symposium on Security and Privacy. S&P 2001.
p. 38. doi:10.1109/SECPRI.2001.924286. ISBN 0-7695-1046-9.
86. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
89https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-89 Ye, Yanfang; Wang,
Dingding; Li, Tao; Ye, Dongyi (2007). "IMDS". Proceedings of the 13th ACM SIGKDD
international conference on Knowledge discovery and data mining – KDD '07.
p. 1043. doi:10.1145/1281192.1281308. ISBN 9781595936097.
87. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
90https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-90 Kolter, J. Zico; Maloof,
Marcus A. (December 1, 2006). "Learning to Detect and Classify Malicious Executables
in the Wild". 7: 2721–2744.
88. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
91https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-91 Tabish, S. Momina;
Shafiq, M. Zubair; Farooq, Muddassar (2009). "Malware detection using statistical
analysis of byte-level file content". Proceedings of the ACM SIGKDD Workshop on
Cyber Security and Intelligence Informatics – CSI-KDD '09.
p. 23. doi:10.1145/1599272.1599278. ISBN 9781605586694.
89. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
92https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-92 Ye, Yanfang; Wang,
Dingding; Li, Tao; Ye, Dongyi; Jiang, Qingshan (2008). "An intelligent PE-malware
detection system based on association mining". Journal in Computer Virology. 4 (4):
323. doi:10.1007/s11416-008-0082-4.
90. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
93https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-93 Sami, Ashkan; Yadegari,
Babak; Peiravian, Naser; Hashemi, Sattar; Hamze, Ali (2010). "Malware detection based
on mining API calls". Proceedings of the 2010 ACM Symposium on Applied Computing –
SAC '10. p. 1020. doi:10.1145/1774088.1774303. ISBN 9781605586397.
91. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
94https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-94 Shabtai, Asaf; Kanonov,
Uri; Elovici, Yuval; Glezer, Chanan; Weiss, Yael (2011). ""Andromaly": A behavioral
malware detection framework for android devices". Journal of Intelligent Information
Systems. 38: 161. doi:10.1007/s10844-010-0148-x.
92. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
95https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-95 Fox-Brewster,
Thomas. "Netflix Is Dumping Anti-Virus, Presages Death Of An Industry". Forbes.
Retrieved September 4, 2015.
93. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
96https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-96 Automatic Malware
Signature Generation. (PDF) . Retrieved on January 3, 2017.
94. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
97https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-97 Szor 2005, pp. 252–288
95. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
98https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-98 "Generic detection".
Kaspersky. Retrieved July 11, 2013.
96. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
99https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-99 Symantec Corporation
(February 2009). "Trojan.Vundo". Archived from the original on April 9, 2009.
Retrieved April 14, 2009.
97. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
100https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-100 Symantec Corporation
(February 2007). "Trojan.Vundo.B". Archived from the original on April 27, 2009.
Retrieved April 14, 2009.
98. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
101https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-101 "Antivirus Research
and Detection Techniques". ExtremeTech. Archived from the original on February 27,
2009. Retrieved February 24, 2009.
99. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
102https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-102 "Terminology – F-
Secure Labs".
100. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
103https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-103 Kaspersky Lab
Technical Support Portal Archived February 14, 2011, at WebCite
101. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
104https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-104 Kelly, Michael
(October 2006). "Buying Dangerously". Retrieved November 29, 2009.
102. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
105https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-105 Bitdefender
(2009). "Automatic Renewal". Retrieved November 29, 2009.
103. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
106https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
106 Symantec (2014). "Norton Automatic Renewal Service FAQ". Retrieved April
9, 2014.
104. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
107https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-107 SpywareWarrior
(2007). "Rogue/Suspect Anti-Spyware Products & Web Sites". Retrieved November
29, 2009.
105. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
108https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-108 Protalinski, Emil
(November 11, 2008). "AVG incorrectly flags user32.dll in Windows XP SP2/SP3". Ars
Technica. Retrieved February 24, 2011.
106. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
109https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-109 McAfee to
compensate businesses for buggy update, retrieved December 2, 2010
107. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
110https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-110 Buggy McAfee update
whacks Windows XP PCs, archived from the original on January 13, 2011,
retrieved December 2, 2010
108. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
111https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-111 Tan, Aaron (May 24,
2007). "Flawed Symantec update cripples Chinese PCs". CNET Networks.
Retrieved April 5, 2009. to:a b Harris, David (June 29, 2009). "January 2010 – Pegasus
Mail v4.52 Release". Pegasus Mail. Archived from the original on May 28, 2010.
Retrieved May 21, 2010.
109. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
113https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-113 "McAfee DAT 5958
Update Issues". April 21, 2010. Archived from the original on April 24, 2010.
Retrieved April 22, 2010.
110. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
114https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-114 "Botched McAfee
update shutting down corporate XP machines worldwide". April 21, 2010. Archived from
the original on April 22, 2010. Retrieved April 22, 2010.
111. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
115https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-115 Leyden, John
(December 2, 2010). "Horror AVG update ballsup bricks Windows 7". The Register.
Retrieved December 2, 2010.
112. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
116https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-116 MSE false positive
detection forces Google to update Chrome, retrieved October 3, 2011
113. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
117https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-117 Sophos Antivirus
Detects Itself as Malware, Deletes Key Binaries, The Next Web, retrieved March 5, 2014
114. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
118https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-118 Shh/Updater-B false
positive by Sophos anti-virus products, Sophos, retrieved March 5, 2014
115. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
119https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-119 "Plus! 98: How to
Remove McAfee VirusScan". Microsoft. January 2007. Archivedfrom the original on
April 8, 2010. Retrieved September 27, 2014.
116. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
120https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-120 Vamosi, Robert (May
28, 2009). "G-Data Internet Security 2010". PC World. Retrieved February 24, 2011.
117. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
121https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-121 Higgins, Kelly Jackson
(May 5, 2010). "New Microsoft Forefront Software Runs Five Antivirus Vendors'
Engines". Darkreading. Retrieved February 24, 2011.
118. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
122https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-122 "Steps to take before
you install Windows XP Service Pack 3". Microsoft. April 2009. Archived from the
original on December 8, 2009. Retrieved November 29, 2009.
119. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
123https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-123 "Upgrading from
Windows Vista to Windows 7". Retrieved March 24, 2012. Mentioned within "Before you
begin".
120. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
124https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-124 "Upgrading to
Microsoft Windows Vista recommended steps.". Retrieved March 24, 2012.
121. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
125https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-125 "How to troubleshoot
problems during installation when you upgrade from Windows 98 or Windows
Millennium Edition to Windows XP". May 7, 2007. Retrieved March 24, 2012.Mentioned
within "General troubleshooting".
122. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
126https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-126 "Troubleshooting".
Retrieved February 17, 2011.
123. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
127https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-127 "Spyware, Adware,
and Viruses Interfering with Steam". Retrieved April 11, 2013.Steam support page.
124. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
128https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-128 "Field Notice: FN –
63204 – Cisco Clean Access has Interoperability issue with Symantec Anti-virus –
delays Agent start-up".
125. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
129https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-129 Goodin, Dan
(December 21, 2007). "Anti-virus protection gets worse". Channel Register.
Retrieved February 24, 2011.
126. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
130https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-130 "ZeuS Tracker ::
Home".
127. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
131https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-131 Illett, Dan (July 13,
2007). "Hacking poses threats to business". Computer Weekly. Retrieved November
15, 2009.
128. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
132https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-132 Espiner, Tom (June 30,
2008). "Trend Micro: Antivirus industry lied for 20 years". ZDNet. Retrieved September
27, 2014.
129. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
133https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-133 AV Comparatives
(December 2013). "Whole Product Dynamic "Real World" Production
Test" (PDF). Archived (PDF) from the original on January 2, 2014. Retrieved January
2, 2014.
130. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
134https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-134 Kirk,
Jeremy. "Guidelines released for antivirus software tests".
131. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-Harley_2011_135-
0https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-Harley_2011_135-0 Harley,
David (2011). AVIEN Malware Defense Guide for the Enterprise. Elsevier.
p. 487. ISBN 9780080558660.
132. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
136https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-136 Kotadia, Munir (July
2006). "Why popular antivirus apps 'do not work'". Retrieved April 14, 2010. to:a b The
Canadian Press (April 2010). "Internet scam uses adult game to extort cash". CBC
News. Archived from the original on April 18, 2010. Retrieved April 17, 2010.
133. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
138https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-138 Exploit Code; Data
Theft; Information Security; Privacy; Hackers; system, Security mandates aim to shore
up shattered SSL; Reader, Adobe kills two actively exploited bugs in; stalker, Judge
dismisses charges against accused Twitter. "Researchers up evilness ante with GPU-
assisted malware".
134. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
139https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-139 Iresh, Gina (April 10,
2010). "Review of Bitdefender Antivirus Security Software 2017
edition". www.digitalgrog.com.au. Digital Grog. Retrieved November 20, 2016.
135. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
140https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-140 "Why F-PROT Antivirus
fails to disinfect the virus on my computer?". Retrieved August 20, 2015.
136. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
141https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-141 "Actions to be
performed on infected objects". Retrieved August 20, 2015.
137. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
142https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-142 "Cryptolocker
Ransomware: What You Need To Know". Retrieved March 28, 2014.
138. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
143https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-143 "How Anti-Virus
Software Works". Retrieved February 16, 2011.
139. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
144https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-144 "BT Home Hub
Firmware Upgrade Procedure". Retrieved March 6, 2011.
140. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
145https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-145 "The 10 faces of
computer malware". July 17, 2009. Retrieved March 6, 2011.
141. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
146https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-146 "New BIOS Virus
Withstands HDD Wipes". March 27, 2009. Retrieved March 6, 2011.
142. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
147https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-147 "Phrack Inc.
Persistent BIOS Infection". June 1, 2009. Archived from the original on April 30, 2011.
Retrieved March 6, 2011.
143. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
148https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-148 "Turning USB
peripherals into BadUSB". Retrieved October 11, 2014.
144. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
149https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-149 "Why the Security of
USB Is Fundamentally Broken". July 31, 2014. Retrieved October 11, 2014.
145. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
150https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-150 "How Antivirus
Software Can Slow Down Your Computer". Support.com Blog. Retrieved July 26, 2010.
146. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
151https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-151 "Softpedia Exclusive
Interview: Avira 10". Ionut Ilascu. Softpedia. April 14, 2010. Retrieved September
11, 2011.
147. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
152https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-152 "Norton AntiVirus
ignores malicious WMI instructions". Munir Kotadia. CBS Interactive. October 21, 2004.
Retrieved April 5, 2009.
148. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
153https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-153 "NSA and GCHQ
attacked antivirus software so that they could spy on people, leaks indicate". June 24,
2015. Retrieved October 30, 2016. to:a b "Popular security software came under
relentless NSA and GCHQ attacks". Andrew Fishman, Morgan Marquis-Boire. June 22,
2015. Retrieved October 30, 2016.
149. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
155https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-155 Zeltser, Lenny
(October 2010). "What Is Cloud Anti-Virus and How Does It Work?". Archived from the
original on October 10, 2010. Retrieved October 26, 2010.
150. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
156https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-156 Erickson, Jon (August
6, 2008). "Antivirus Software Heads for the Clouds". Information Week.
Retrieved February 24, 2010.
151. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
157https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-157 "Comodo Cloud
Antivirus released". wikipost.org. Retrieved May 30, 2016.
152. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
158https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-158 "Comodo Cloud
Antivirus User Guideline PDF" (PDF). help.comodo.com. Retrieved May 30, 2016.
153. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
159https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-159 Krebs, Brian (March 9,
2007). "Online Anti-Virus Scans: A Free Second Opinion". Washington Post.
Retrieved February 24, 2011.
154. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
160https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-160 Naraine, Ryan
(February 2, 2007). "Trend Micro ships free 'rootkit buster'". ZDNet. Retrieved February
24, 2011. to:a b Rubenking, Neil J. (March 26, 2010). "Avira AntiVir Personal 10". PC
Magazine. Retrieved February 24, 2011.
155. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
162https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-162 Rubenking, Neil J.
(September 16, 2010). "PC Tools Spyware Doctor with AntiVirus 2011". PC Magazine.
Retrieved February 24, 2011.
156. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
163https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-163 Rubenking, Neil J.
(October 4, 2010). "AVG Anti-Virus Free 2011". PC Magazine. Retrieved February
24, 2011.
157. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
164https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-164 Rubenking, Neil J.
(November 19, 2009). "PC Tools Internet Security 2010". PC Magazine.
Retrieved February 24, 2011. to:a b Skinner, Carrie-Ann (March 25, 2010). "AVG Offers
Free Emergency Boot CD". PC World. Retrieved February 24, 2011.
158. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
166https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-166 "FBI estimates major
companies lose $12m annually from viruses". January 30, 2007. Retrieved February
20, 2011.
159. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
167https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-167 Kaiser, Michael (April
17, 2009). "Small and Medium Size Businesses are Vulnerable". National Cyber Security
Alliance. Retrieved February 24, 2011.
160. https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-
168https://en.wikipedia.org/wiki/Antivirus_software - cite_ref-168 Nearly 50% Women
Don’t Use Anti-virus Software. Spamfighter.com (September 2, 2010). Retrieved on
January 3, 2017.
161. Szor, Peter (2005), The Art of Computer Virus Research and Defense, Addison-
Wesley, ISBN 0-321-30454-3

6.1.8 Firewall
6.1.8.1 Commentary
A firewall is a network security system that monitors and controls network traffic according to predetermined security rules.[1] It forms a controlled transfer point between a trusted internal network and the outside network.[2] Firewalls can be classified as network-based or host-based. Network firewalls filter traffic between networks and run as software on general-purpose computers or on specialist hardware. Host-based firewalls run on a single host and control that machine's traffic.[3][4] Firewalls can also provide extra functionality for the network, such as acting as a DHCP[5][6] or VPN[7][8][9][10] server.[11][12][28][29][30]
Firewalls arrived with the Internet and its global use and connectivity[14], succeeding the routers used to separate networks in the late 1980s.[15][16][17] The first firewalls were packet filters that inspected the network addresses and ports of each packet to decide whether to allow or block it.[18][19] Because most services listen on well-known ports, filtering on those ports lets the firewall enforce policy per service.[20][21][22] Second-generation firewalls were circuit-level gateways[23] working at the transport layer.[24][25][26] Third-generation firewalls work at the application layer, understanding protocols such as the File Transfer Protocol (FTP), the Domain Name System (DNS) or the Hypertext Transfer Protocol (HTTP); this line of development runs from the Firewall Toolkit to the next-generation firewall (NGFW), intrusion prevention systems (IPS), user identity management integration and the web application firewall (WAF).[27] A proxy server can act as a firewall by responding to incoming packets on behalf of an application, serving as the gateway from one network to another for that application[2] and making it harder to tamper with the systems behind it. Network address translation is commonly applied with firewalls to hide local addressing.[31]
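To make the first-generation packet-filtering idea concrete, the sketch below applies an ordered rule list to a packet's addresses and destination port. This is a minimal illustration, not any product's API: the rule fields, sample addresses and the first-match/default-deny policy are assumptions chosen for the example.

import socket  # not required for the logic; shown to signal the networking context
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    action: str            # "allow" or "block"
    src: str               # source address prefix; "*" matches any
    dst: str               # destination address prefix; "*" matches any
    port: Optional[int]    # destination port; None matches any

def matches(rule: Rule, src: str, dst: str, port: int) -> bool:
    return ((rule.src == "*" or src.startswith(rule.src)) and
            (rule.dst == "*" or dst.startswith(rule.dst)) and
            (rule.port is None or rule.port == port))

def filter_packet(rules: List[Rule], src: str, dst: str, port: int) -> str:
    # First matching rule wins; anything unmatched is blocked by default.
    for rule in rules:
        if matches(rule, src, dst, port):
            return rule.action
    return "block"

rules = [
    Rule("allow", "*", "10.0.0.", 443),  # allow HTTPS into the internal subnet
    Rule("block", "*", "*", None),       # explicit catch-all deny
]
print(filter_packet(rules, "203.0.113.7", "10.0.0.5", 443))  # -> "allow"
print(filter_packet(rules, "203.0.113.7", "10.0.0.5", 23))   # -> "block"

Real packet filters work on parsed IP headers in the kernel or on dedicated hardware; the default-deny fall-through shown here mirrors the usual recommended firewall policy.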

6.1.8.2 References
1. Boudriga, Noureddine (2010). Security of mobile communications. Boca Raton:
CRC Press. pp. 32–33. ISBN 0849379423.
2. Oppliger, Rolf (May 1997). "Internet Security: FIREWALLS and
BEYOND". Communications of the ACM. 40 (5): 94. doi:10.1145/253769.253802.
3. Vacca, John R. (2009). Computer and information security handbook.
Amsterdam: Elsevier. p. 355. ISBN 9780080921945.
4. "What is Firewall?". Retrieved 2015-02-12.
5. "Firewall as a DHCP Server and Client". Palo Alto Networks. Retrieved 2016-02-
08.
6. "DHCP". www.shorewall.net. Retrieved 2016-02-08.
7. "What is a VPN Firewall? - Definition from Techopedia". Techopedia.com.
Retrieved 2016-02-08.
8. "VPNs and Firewalls". technet.microsoft.com. Retrieved 2016-02-08.
9. "VPN and Firewalls (Windows Server)". Resources and Tools for IT Professionals
| TechNet.
10. "Configuring VPN connections with firewalls".
11. Andrés, Steven; Kenyon, Brian; Cohen, Jody Marc; Johnson, Nate; Dolly, Justin
(2004). Birkholz, Erik Pack, ed. Security Sage's Guide to Hardening the Network
Infrastructure. Rockland, MA: Syngress. pp. 94–95. ISBN 9780080480831.
12. Naveen, Sharanya. "Firewall". Retrieved 7 June 2016.
13. Canavan, John E. (2001). Fundamentals of Network Security (1st ed.). Boston,
MA: Artech House. p. 212. ISBN 9781580531764.
14. Liska, Allan (Dec 10, 2014). Building an Intelligence-Led Security Program.
Syngress. p. 3. ISBN 0128023708.
15. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). Retrieved 2011-11-25.
16. Firewalls by Dr. Talal Alkharobi
17. RFC 1135 The Helminthiasis of the Internet
18. Peltier, Justin; Peltier, Thomas R. (2007). Complete Guide to CISM Certification.
Hoboken: CRC Press. p. 210. ISBN 9781420013252.
19. Ingham, Kenneth; Forrest, Stephanie (2002). "A History and Survey of Network
Firewalls" (PDF). p. 4. Retrieved 2011-11-25.
20. TCP vs. UDP By Erik Rodriguez
21. William R. Cheswick, Steven M. Bellovin, Aviel D. Rubin (2003). "Google Books
Link". Firewalls and Internet Security: repelling the wily hacker
22. Aug 29, 2003 Virus may elude computer defenses by Charles Duhigg, Washington
Post
23. Proceedings of National Conference on Recent Developments in Computing and
Its Applications, August 12–13, 2009. I.K. International Pvt. Ltd. 2009-01-01.
Retrieved 2014-04-22.
24. Conway, Richard (2004). Code Hacking: A Developer's Guide to Network Security.
Hingham, Massachusetts: Charles River Media. p. 281. ISBN 1-58450-314-9.
25. Andress, Jason (May 20, 2014). The Basics of Information Security:
Understanding the Fundamentals of InfoSec in Theory and Practice (2nd ed.). Elsevier
Science. ISBN 9780128008126.
26. Chang, Rocky (October 2002). "Defending Against Flooding-Based Distributed
Denial-of-Service Attacks: A Tutorial". IEEE Communications Magazine. 40 (10): 42–
43. doi:10.1109/mcom.2002.1039856.
27. "WAFFle: Fingerprinting Filter Rules of Web Application Firewalls". 2012.
28. "Firewalls". MemeBridge. Retrieved 13 June 2014.
29. "Software Firewalls: Made of Straw? Part 1 of 2". Symantec Connect Community.
2010-06-29. Retrieved 2014-03-28.
30. "Auto Sandboxing". Comodo Inc. Retrieved 2014-08-28.
31. "Advanced Security: Firewall". Microsoft. Retrieved 2014-08-28.
32. Internet Firewalls: Frequently Asked Questions, compiled by Matt Curtin, Marcus
Ranum and Paul Robertson.
33. Firewalls Aren’t Just About Security - Cyberoam Whitepaper focusing on Cloud
Applications Forcing Firewalls to Enable Productivity.
34. Evolution of the Firewall Industry - Discusses different architectures and their
differences, how packets are processed, and provides a timeline of the evolution.
35. A History and Survey of Network Firewalls - provides an overview of firewalls at
the various ISO levels, with references to the original papers where first firewall work
was reported.
36. Software Firewalls: Made of Straw? Part 1 and Software Firewalls: Made of
Straw? Part 2 - a technical view on software firewall design and potential weaknesses
6.1.9 Intrusion detection system
6.1.9.1 Commentary
An intrusion detection system (IDS) monitors a network or systems for malicious activity or policy violations. Detected activity or violations are logged and reported to an administrator, or collected centrally by a security information and event management (SIEM) system that applies alarm-filtering techniques to separate malicious activity from false alarms. An IDS watches activity inside the network as well, whereas a firewall monitors only the traffic crossing the network boundary.
IDS can be classified by where detection takes place (network or host) and by the detection method employed. Network intrusion detection systems (NIDS) monitor traffic to and from all devices on the network, analyse the traffic passing across the entire subnet and match it against profiles of known attacks; when a match is found, the administrator is notified.[1] Host intrusion detection systems (HIDS) monitor the traffic and state of a single device, taking snapshots of existing system files, comparing them with earlier snapshots and alerting the administrator when suspicious changes are detected, as sketched below. Intrusion detection can also be made system-specific using custom tools and honeypots.
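A minimal sketch of the HIDS snapshot-and-compare idea follows, assuming a small set of monitored files; a real file-integrity checker adds tamper-proof baseline storage, scheduling and much richer policy.

import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its current contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_since(baseline, current):
    """List files whose digests differ from the stored baseline."""
    return [p for p, digest in current.items() if baseline.get(p) != digest]

watched = ["/etc/passwd", "/etc/hosts"]   # assumed monitored files
baseline = snapshot(watched)              # taken while the host is known-good
# ... later, on a schedule ...
for path in changed_since(baseline, snapshot(watched)):
    print("ALERT: %s modified since baseline" % path)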
Intrusion prevention systems use signature-based detection, statistical anomaly-based detection and stateful protocol analysis.[8][12] A signature-based IDS looks for specific patterns, e.g. known malicious instruction sequences (signatures) used by malware; it detects known attacks easily but cannot detect new attacks for which no signature yet exists. Anomaly-based intrusion detection systems aim at unknown attacks by using machine learning to model trusted activity and then checking observed behaviour against that model.[2][3][4][13] Stateful protocol analysis detection identifies deviations of protocol state by comparing observed events with predetermined profiles of generally accepted definitions of benign activity.[8]
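The toy sketch below contrasts the first two detection styles under stated assumptions: a list of known-bad byte patterns for signature matching, and a z-score over a baseline of benign event rates for anomaly detection. The signatures, event counts and the threshold of 3 are invented for illustration only.

from statistics import mean, stdev

SIGNATURES = [b"\x90\x90\x90\x90", b"/etc/shadow"]  # assumed known-bad patterns

def signature_match(payload: bytes) -> bool:
    # Signature-based: flag any payload containing a known pattern.
    return any(sig in payload for sig in SIGNATURES)

def anomaly_score(history, current) -> float:
    # Anomaly-based: z-score of current activity against a model
    # of trusted historical behaviour.
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma else 0.0

logins_per_hour = [4, 6, 5, 7, 5, 6, 4, 5]           # assumed benign baseline
print(signature_match(b"GET /etc/shadow HTTP/1.1"))  # True: known signature
print(anomaly_score(logins_per_hour, 42) > 3.0)      # True: flagged as anomalous

The trade-off described above is visible here: the signature matcher can never fire on a pattern absent from SIGNATURES, while the anomaly scorer flags anything sufficiently unusual, including benign but rare behaviour (false alarms).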
Intrusion detection and prevention systems (IDPS) identify possible incidents, log information about them, attempt to stop them and report them, as well as recording problems with security policies, documenting existing threats and deterring security violations by individuals.[5][6][7][8][9][10]
Intrusion prevention systems are categorised as:[6][11]
Network-based intrusion prevention systems, which monitor the entire network for suspicious traffic by analysing protocol activity.
Wireless intrusion prevention systems, which do the same for a wireless network.
Network behaviour analysis, which examines network traffic for anomalies that indicate threats generating unusual flows, such as distributed denial-of-service attacks.
Host-based intrusion prevention systems, which monitor a single host for suspicious activity by analysing events occurring within that host.
Intrusion systems are limited by noise from device errors and forged addresses, by out-of-date software and signatures[14] and delayed updates[13], by encrypted traffic they cannot inspect, by erroneous packet addresses and by network or host problems.[15]
Attackers use simple techniques to evade this protection, e.g. fragmenting message packets, changing communication ports, coordinating attacks across several sources, spoofing or proxying addresses, and changing the byte patterns of attack data.
Tools help administrators review audit trails[16], although they can require ever-increasing resources.[17] They follow a basic model[18] that combines statistical anomaly detection over profiles of users, hosts and target systems[19] with rule-based expert systems that detect known intrusions.[20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38]
6.1.9.2 References
1 Abdullah A. Mohamed, "Design Intrusion Detection System Based On Image
Block Matching", International Journal of Computer and Communication Engineering,
IACSIT Press, Vol. 2, No. 5, September 2013.
2 "Gartner report: Market Guide for User and Entity Behavior Analytics".
September 2015.
3 "Gartner: Hype Cycle for Infrastructure Protection, 2016".
4 "Gartner: Defining Intrusion Detection and Prevention Systems".
Retrieved September 20, 2016.
5 Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800–94). Retrieved 1 January 2010.
6 "NIST – Guide to Intrusion Detection and Prevention Systems (IDPS)" (PDF).
February 2007. Retrieved 2010-06-25.
7 Robert C. Newman (19 February 2009). Computer Security: Protecting Digital
Resources. Jones & Bartlett Learning. ISBN 978-0-7637-5994-0. Retrieved 25
June 2010.
8 Michael E. Whitman; Herbert J. Mattord (2009). Principles of Information
Security. Cengage Learning EMEA. ISBN 978-1-4239-0177-8. Retrieved 25 June 2010.
9 Tim Boyles (2010). CCNA Security Study Guide: Exam 640-553. John Wiley and
Sons. p. 249. ISBN 978-0-470-52767-2. Retrieved 29 June 2010.
10 Harold F. Tipton; Micki Krause (2007). Information Security Management
Handbook. CRC Press. p. 1000. ISBN 978-1-4200-1358-0. Retrieved 29 June 2010.
11 John R. Vacca (2010). Managing Information Security. Syngress.
p. 137. ISBN 978-1-59749-533-2. Retrieved 29 June 2010.
12 Engin Kirda; Somesh Jha; Davide Balzarotti (2009). Recent Advances in Intrusion
Detection: 12th International Symposium, RAID 2009, Saint-Malo, France, September
23–25, 2009, Proceedings. Springer. p. 162. ISBN 978-3-642-04341-3. Retrieved 29
June 2010.
13 Whitman, Michael E.; Mattord, Herbert J. (2008). Principles of Information Security. Course Technology. pp. 290–301. ISBN 978-1-4239-0177-8.
14 Anderson, Ross (2001). Security Engineering: A Guide to Building Dependable Distributed Systems. New York: John Wiley & Sons. pp. 387–388. ISBN 978-0-471-38922-4.
15 http://www.giac.org/paper/gsec/235/limitations-network-intrusion-
detection/100739
16 Anderson, James P., "Computer Security Threat Monitoring and Surveillance,"
Washing, PA, James P. Anderson Co., 1980.
17 David M. Chess; Steve R. White (2000). "An Undetectable Computer
Virus". Proceedings of Virus Bulletin Conference.
18 Denning, Dorothy E., "An Intrusion Detection Model," Proceedings of the Seventh
IEEE Symposium on Security and Privacy, May 1986, pages 119–131
19 Lunt, Teresa F., "IDES: An Intelligent System for Detecting Intruders,"
Proceedings of the Symposium on Computer Security; Threats, and Countermeasures;
Rome, Italy, November 22–23, 1990, pages 110–121.
20 Lunt, Teresa F., "Detecting Intruders in Computer Systems," 1993 Conference on
Auditing and Computer Technology, SRI International
21 Sebring, Michael M., and Whitehurst, R. Alan., "Expert Systems in Intrusion
Detection: A Case Study," The 11th National Computer Security Conference, October,
1988
22 Smaha, Stephen E., "Haystack: An Intrusion Detection System," The Fourth
Aerospace Computer Security Applications Conference, Orlando, FL, December, 1988
23 Vaccaro, H.S., and Liepins, G.E., "Detection of Anomalous Computer Session
Activity," The 1989 IEEE Symposium on Security and Privacy, May, 1989
24 Teng, Henry S., Chen, Kaihu, and Lu, Stephen C-Y, "Adaptive Real-time Anomaly
Detection Using Inductively Generated Sequential Patterns," 1990 IEEE Symposium on
Security and Privacy
25 Heberlein, L. Todd, Dias, Gihan V., Levitt, Karl N., Mukherjee, Biswanath, Wood,
Jeff, and Wolber, David, "A Network Security Monitor," 1990 Symposium on Research in
Security and Privacy, Oakland, CA, pages 296–304
26 Winkeler, J.R., "A UNIX Prototype for Intrusion and Anomaly Detection in Secure
Networks," The Thirteenth National Computer Security Conference, Washington, DC.,
pages 115–124, 1990
27 Dowell, Cheri, and Ramstedt, Paul, "The ComputerWatch Data Reduction Tool,"
Proceedings of the 13th National Computer Security Conference, Washington, D.C.,
1990
28 Snapp, Steven R, Brentano, James, Dias, Gihan V., Goan, Terrance L., Heberlein,
L. Todd, Ho, Che-Lin, Levitt, Karl N., Mukherjee, Biswanath, Smaha, Stephen E., Grance,
Tim, Teal, Daniel M. and Mansur, Doug, "DIDS (Distributed Intrusion Detection System)
-- Motivation, Architecture, and An Early Prototype," The 14th National Computer
Security Conference, October, 1991, pages 167–176.
29 Jackson, Kathleen, DuBois, David H., and Stallings, Cathy A., "A Phased
Approach to Network Intrusion Detection," 14th National Computing Security
Conference, 1991
30 Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time,"
Proceedings of The 7th USENIX Security Symposium, San Antonio, TX, 1998
31 Amoroso, Edward, "Intrusion Detection: An Introduction to Internet Surveillance,
Correlation, Trace Back, Traps, and Response," Intrusion.Net Books, Sparta, New
Jersey, 1999, ISBN 0-9666700-7-8
32 Kohlenberg, Toby (Ed.), Alder, Raven, Carter, Dr. Everett F. (Skip), Jr., Esler,
Joel., Foster, James C., Jonkman Marty, Raffael, and Poor, Mike, "Snort IDS and IPS
Toolkit," Syngress, 2007, ISBN 978-1-59749-099-3
33 Barbara, Daniel, Couto, Julia, Jajodia, Sushil, Popyack, Leonard, and Wu,
Ningning, "ADAM: Detecting Intrusions by Data Mining," Proceedings of the IEEE
Workshop on Information Assurance and Security, West Point, NY, June 5–6, 2001
34 Intrusion Detection Techniques for Mobile Wireless Networks, ACM WINET 2003
<http://www.cc.gatech.edu/~wenke/papers/winet03.pdf>
35 Viegas, E.; Santin, A. O.; França, A.; Jasinski, R.; Pedroni, V. A.; Oliveira, L. S.
(2017-01-01). "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine
for Embedded Systems". IEEE Transactions on Computers. 66 (1): 163–
177. doi:10.1109/TC.2016.2560839. ISSN 0018-9340.
36 França, A. L.; Jasinski, R.; Cemin, P.; Pedroni, V. A.; Santin, A. O. (2015-05-
01). "The energy cost of network security: A hardware vs. software comparison". 2015
IEEE International Symposium on Circuits and Systems (ISCAS): 81–
84. doi:10.1109/ISCAS.2015.7168575.
37 França, A. L. P. d; Jasinski, R. P.; Pedroni, V. A.; Santin, A. O. (2014-07-
01). "Moving Network Protection from Software to Hardware: An Energy Efficiency
Analysis". 2014 IEEE Computer Society Annual Symposium on VLSI: 456–
461. doi:10.1109/ISVLSI.2014.89.
38 "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine for
Embedded Systems" (PDF). SecPLab. This article incorporates public domain
material from the National Institute of Standards and Technology document "Guide to
Intrusion Detection and Prevention Systems, SP800-94" by Karen Scarfone, Peter Mell
(retrieved on 1 January 2010).
39 Hansen, James V.; Benjamin Lowry, Paul; Meservy, Rayman; McDonald, Dan
(2007). "Genetic programming for prevention of cyberterrorism through dynamic and
evolving intrusion detection". Decision Support Systems (DSS). 43 (4): 1362–
1374. doi:10.1016/j.dss.2006.04.004. SSRN 877981.
40 Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800-94). Retrieved 1 January 2010.
41 Saranya, J.; Padmavathi, G. (2015). "A Brief Study on Different Intrusions and
Machine Learning-based Anomaly Detection Methods in Wireless Sensor
Networks" (PDF). Avinashilingam Institute for Home Science and Higher Education for
Women (6(4)). Retrieved 4 April 2015.
42 Singh, Abhishek. "Evasions In Intrusion Prevention Detection Systems". Virus Bulletin. Retrieved April 2010.
43 Bezroukov, Nikolai (11 December 2008). "Architectural Issues of Intrusion
Detection Infrastructure in Large Enterprises (Revision 0.82)". Softpanorama.
Retrieved 30 July 2010.
44 P.M. Mafra and J.S. Fraga and A.O. Santin (2014). "Algorithms for a distributed
IDS in MANETs". Journal of Computer and System Sciences. 80 (3): 554–
570. doi:10.1016/j.jcss.2013.06.011.
45 Common vulnerabilities and exposures (CVE) by product
46 NIST SP 800-83, Guide to Malware Incident Prevention and Handling
47 NIST SP 800-94, Guide to Intrusion Detection and Prevention Systems (IDPS)
48 Study by Gartner "Magic Quadrant for Network Intrusion Prevention System
Appliances"
6.1.10 Computational Linguistics methods
Computational linguistics methods study the fundamental principles of encoding, transmitting and comprehending information, drawing on logic, philosophy, linguistics, musicology, mathematics, computer science, artificial intelligence and cognitive science, and mixing top-down and bottom-up approaches such as a 'model approach' and an ontology approach coupled with distributional semantics and vector models.
We consider knowledge representation; computational linguistics, visualisation and geometrical algorithmics; probabilistic parsing and grammar induction; computational pragmatics and dialogue modelling; information retrieval; statistical morpho-syntactic parsing and machine translation; optimality theory and discourse; and computational cognitive science and mathematics, focusing on consequence and plurality.
6.1.11 Deep Learning and Machine Intelligence
• Deep learning and neural networks for computer vision and natural language processing
• Topic modelling and probabilistic graphical models for sequential data and dynamic systems
• Meta-heuristic algorithms, reinforcement learning, game algorithms and knowledge engineering
• Big data processing frameworks and platforms: Hadoop MapReduce, Spark, Hadoop clusters
• Data mining algorithms, data visualisation and practical big data analysis for scientific and industrial data
• Pattern mining and intelligent services for the IoT (Internet of Things) and stream data
• Big data modelling and big data quality management

6.1.12 Research Engineer OBELICS


Optimization of scalable real-time models and functional testing for e-drive concepts (OBELICS) is a European project to develop new tools for the multi-level modelling and testing of electric vehicles and their components, in order to deliver more efficient vehicle designs faster while supporting the modularity needed for mass production and improved affordability. Within this project Siemens is strengthening and extending its competitive position in simulation tools, in test methodologies and tools for model/software/hardware-in-the-loop work, and in design and validation tools for control-algorithm development for such mechatronic systems. This includes co-simulation with environment modelling tools (i.e. traffic, sensor imperfections, …) involving actuator control, identification of the parameters of multi-physical models, model-reduction strategies for real-time applications, virtual sensor modelling, setting up an efficient co-simulation framework, new validation and test methods for the high-frequency behaviour of inverters, ECU tuning, and the implementation of new modelling and testing methods in industrial use cases.
6.1.13 Mobile secure gateway
6.1.13.1 Commentary
A mobile secure gateway provides secure communication between a mobile application and its backend resources, implemented as a software or hardware system. It consists of two parts, a client library and a gateway. The client library is linked into the mobile application and exposes an API giving secure connectivity to the physical or virtual gateway server[1]; a cryptographic protocol forms a secured channel for communication between the mobile application and the hosts behind the gateway. The gateway isolates the internal IT infrastructure from the Internet, permitting only authorised client requests to reach a specific set of hosts inside the restricted network.
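A minimal sketch of the client side of such a channel follows, using Python's standard ssl module. The host name, port and certificate file names are assumptions for the example; a production client library would add certificate pinning, retries and session management.

import socket
import ssl

# Verify the gateway's certificate against a pinned CA (assumed file name).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="gateway-ca.pem")
# A client certificate lets the gateway authenticate the mobile client.
context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection(("gateway.example.com", 8443)) as raw:
    with context.wrap_socket(raw, server_hostname="gateway.example.com") as tls:
        # Only requests arriving over this authenticated channel are
        # forwarded by the gateway to the permitted internal hosts.
        tls.sendall(b"GET /backend/status HTTP/1.1\r\nHost: internal\r\n\r\n")
        print(tls.recv(4096))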
6.1.13.2 References
1 "Mobile Security". www.peerlyst.com. Retrieved 6 May 2016.
6.1.14 Secure by design
6.1.14.1 Commentary
Security is designed into software development in all its aspects. Secure design assumes that malicious activity will occur and minimises its impact, applies the principle of least privilege, and holds that making the design public can improve security (Linus' law). Software must assume its security environment cannot be fully trusted and must be built for fault tolerance.
Server/client architectures are prone to security flaws because authorisation easily fails, but these risks can be reduced by encryption, hashing and other security mechanisms combined with good system development; one such mechanism is sketched below.
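As a hedged illustration of "hashing and other security mechanisms", the sketch below stores and verifies a password server-side with a salted, slow key-derivation function rather than trusting the client. The iteration count is an assumption to be tuned to the deployment hardware.

import hashlib
import hmac
import os

ITERATIONS = 600_000   # assumed work factor; tune to your hardware

def hash_password(password: str, salt: bytes = None):
    # A fresh random salt per password defeats precomputed-table attacks.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False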
6.1.14.2 References
1  Dougherty, Chad; Sayre, Kirk; Seacord, Robert C.; Svoboda, David; Togashi,
Kazuya. "Secure Design Patterns". CMU. Retrieved 14 October 2017.
6.1.15 Secure coding
6.1.15.1 Commentary
Secure coding develops software in a way that avoids security vulnerabilities. Defects, bugs and logic flaws are the main cause of software vulnerabilities, and most derive from a small number of basic programming errors; secure-coding practice and education therefore target these errors through techniques such as buffer overflow prevention, format string attack prevention and integer overflow prevention, illustrated below.
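The sketch below shows the defensive pattern behind those practices (in Python for brevity, though the errors it guards against are most dangerous in lower-level languages): validate an untrusted length field before using it, so a hostile value cannot cause an over-read or an integer wrap-around. The 64 KiB cap and the message format are assumed for the example.

import struct

MAX_PAYLOAD = 64 * 1024   # assumed protocol limit

def parse_message(data: bytes) -> bytes:
    """Extract a length-prefixed payload from untrusted input."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", data[:4])   # untrusted 32-bit length
    if length > MAX_PAYLOAD:
        raise ValueError("declared length exceeds protocol limit")
    if len(data) - 4 < length:
        raise ValueError("payload shorter than declared length")
    return data[4:4 + length]                   # slice is bounds-checked

print(parse_message(b"\x00\x00\x00\x05hello"))  # b'hello'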
6.1.15.2 References
1 Taylor, Art; Brian Buege; Randy Layman (2006). Hacking Exposed J2EE & Java.
McGraw-Hill Primis. p. 426. ISBN 0-390-59975-1.
6.1.16 Security-focused operating system
6.1.16.1 Commentary
Security-focused operating systems seek certification from an external security-auditing organisation; an operating system that provides sufficient support for multilevel security and evidence of correctness to meet a particular set of government requirements is termed a "trusted operating system".[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34]
6.1.16.2 References
1 Quinn Norton (January 14, 2006). "Anonymity on a Disc". Wired.com. Retrieved November 6, 2011.
2 McIntire, Tim (8 August 2006). "Take a closer look at OpenBSD". IBM. Archived
from the original on January 27, 2007. Retrieved 19 February 2015.
3 "HardenedBSD – Proactive Security Project". bsdmag.org. HAKIN9 MEDIA SP. Z
O.O. SP. K. Retrieved 12 June 2015.
4 Carlier, David (December 4, 2014). "Hardened BSD - A Proactive Security
Project". BSD Magazine. HAKIN9 MEDIA SP. Z O.O. SP. K. 8 (10/2014 (63)). ISSN 1898-
9144. Retrieved 12 June 2015.
5 Percival, Colin, Fabian Keil, delphij, mahrens, et. al. (12 June
2015). "HardenedBSD — sys/dev/xen/blkback blkback.c, sys/dev/xen/blkfront blkfront.c
block.h" (4443). secure.freshbsd.org. Retrieved 12 June 2015.
6 "BlackArch Linux". Retrieved 5 Feb 2016.
7 Porup, J.M. (9 August 2016). "Copperhead OS: The startup that wants to solve
Android's woeful security". arstechnica.co.uk. Ars Technica UK.
8 Corbet, Jonathan (17 February 2016). "CopperheadOS: Securing the
Android". lwn.net.
9 Linder, Brad (29 March 2016). "F-Droid, Copperhead, Guardian Project partner to
create a security-focused, Android-based ecosystem". liliputing.com.
10 "Securing Debian Manual". debian.org. Retrieved 19 April 2015.
11 "SELinux". debian.org. Retrieved 19 April 2015.
12 "SELinux: бронежилет для корпоративного пингвина" [SELinux: bullet-proof
vest for corporate penguin] (in Russian). 6 September 2011. Retrieved 26 October 2011.
13 "Hardened Linux Features". Hardened Linux. Hardened Linux Developers. 26
April 2014. Retrieved 3 January 2017.
14 "Hardened Linux End-Of-Life Announcement". Hardened Linux at SourceForge.
Hardened Linux Developers. Retrieved 3 January 2017.
15 Mempo
16 "Redirecting...". qubes-os.org. Retrieved 19 April 2015.
17 "Overview - Replicant". Redmine.replicant.us. Retrieved 2013-09-30.
18 Paul Kocialkowski (February 4, 2012). "WikiStart – Replicant".
Redmine.replicant.us. Retrieved 2013-09-30.
19 "Android and Users' Freedom - GNU Project - Free Software Foundation".
Gnu.org. Retrieved 2013-09-30.
20 "About". Replicant project. Retrieved 2013-09-30.
21 Don Reisinger (13 March 2014). "Samsung Galaxy devices may have backdoor to
user data, developer says". CNET. Retrieved 25 April 2014.
22 Michael Larabel (12 March 2014). "Replicant Developers Find Backdoor In
Android Samsung Galaxy Devices". Phoronix. Retrieved 25 April 2014.
23 Paul Kocialkowski. "Samsung Galaxy Back-door". Replicant Wiki. Archived
from the original on 6 April 2014. Retrieved 25 April 2014.
24 "Tin Hat". D'Youville College.
25 Grey One. "Linux Distributions Built For Anonymity". GreyCoder. Retrieved 19
April 2015.
26 "Ipredia". ipredia.org. Retrieved 19 April 2015.
27 "IprediaOS". ipredia.org/os. Retrieved 19 May 2015.
28 "IprediaI2P-out-of-date". reddit.com/r/i2p/comments/2hcbhl/anyone_else_here_use
_iprediaos/. Retrieved 19 May 2015.
29 "DE(E)SU - Liberté Linux". dee.su. Retrieved 19 April 2015.
30 Security Onion (official website).
31 Security Onion Wiki (on SourceForge).
32 Security Onion Solutions: Tools (on GitHub).
33 "Sun Patch: Trusted Solaris 8 4/01: in.telnet patch". 4 October 2002. Retrieved 13
August 2012. 4734086 in.telnetd vulnerable to buffer overflow ?? (Solaris bug 4483514)
34 "What is Server Core?". Microsoft TechNet. Microsoft Corporation. Retrieved 17
October 2013.
6.1.17 Intel Active Management Technology
6.1.17.1 Commentary
Intel Active Management Technology (AMT), part of Intel vPro, implements out-of-band
management, giving administrators remote administration, remote management, and remote
control of PCs with no involvement of the host processor or BIOS, even when the system is
powered off. Remote administration includes remote power-up and power-down, remote reset,
redirected boot, console redirection, pre-boot access to BIOS settings, programmable
filtering of inbound and outbound network traffic, agent presence checking, out-of-band
policy-based alerting, and access to system information such as hardware asset data,
persistent event logs, and other information stored in dedicated memory (not on the hard
drive), which remains accessible even when the OS is down or the PC is powered off. Some
of these functions amount to the deepest level of rootkit: a second, non-removable
computer built around the main computer. Sandy Bridge and later chipsets have "the
ability to remotely kill and restore a lost or stolen PC via 3G". Hardware features built
into the chipset can help recover stolen computers, remove data, or render machines
useless, but they also raise privacy and security concerns about undetectable spying and
redirection, whether by management or by attackers who gain control of the mechanism.
6.1.17.2 References
1 "Intel Centrino 2 with vPro Technology and Intel Core2 Processor with vPro
Technology" (PDF). Intel. 2008. Archived from the original (PDF) on 2008-12-06.
Retrieved 2008-08-07.
2 "Architecture Guide: Intel Active Management Technology". Intel. 2008-06-26.
Archived from the original on 2012-06-07. Retrieved 2008-08-12.
3 "Remote Pc Management with Intel's vPro". Tom's Hardware Guide.
Retrieved 2007-11-21.
4 "Intel vPro Chipset Lures MSPs, System Builders". ChannelWeb. Retrieved 1
August 2007.
5 "Intel Mostly Launches Centrino 2 Notebook Platform". ChannelWeb. Retrieved 1
July 2008.
6 "A new dawn for remote management? A first glimpse at Intel's vPro platform".
ars technica. Retrieved 2007-11-07.
7 "Revisiting vPro for Corporate Purchases". Gartner. Retrieved 2008-08-07.
8 "Answers to Frequently Asked Questions about libreboot". libreboot.org.
Retrieved 2015-09-25.
9 "Archived copy". Archived from the original on 2012-04-14. Retrieved 2012-04-30.
10 "Intel Centrino 2 with vPro Technology" (PDF). Intel. Archived from the
original(PDF) on 2008-03-15. Retrieved 2008-07-15.
11 "Intel MSP". Msp.intel.com. Retrieved 2016-05-25.
12 "Intel® Product Security Center". Security-center.intel.com. Retrieved 2017-05-
07.
13 Charlie Demerjian (2017-05-01). "Remote security exploit in all 2008+ Intel
platforms". SemiAccurate. Retrieved 2017-05-07.
14 "Red alert! Intel patches remote execution hole that's been hidden in chips since
2010". Theregister.co.uk. Retrieved 2017-05-07.
15 HardOCP: Purism Is Offering Laptops with Intel's Management Engine Disabled
16 System76 to disable Intel Management Engine on its notebooks
17 Garrison, Justin (2011-03-28). "How to Remotely Control Your PC (Even When it
Crashes)". Howtogeek.com. Retrieved 2017-05-07.
18 "Open Manageability Developer Tool Kit | Intel® Software". Software.intel.com.
Retrieved 2017-05-07.
19 "Intel vPro Technology". Intel. Retrieved 2008-07-14.
20 "Intel Active Management Technology System Defense and Agent Presence
Overview" (PDF). Intel. February 2007. Retrieved 2008-08-16.
21 "Intel Centrino 2 with vPro Technology". Intel. Archived from the original on
2008-03-15. Retrieved 2008-06-30.
22 "New Intel-Based Laptops Advance All Facets of Notebook PCs". Intel. Archived
from the original on 2008-07-17. Retrieved 2008-07-15.
23 "Understanding Intel AMT over wired vs. wireless (video)". Intel. Archived
from the original on March 26, 2008. Retrieved 2008-08-14.
24 "Intel® vPro™ Technology". Intel.
25 "Part 3: Post Deployment of Intel vPro in an Altiris Environment: Enabling and
Configuring Delayed Provisioning". Intel (forum). Retrieved 2008-09-12.
26 "Archived copy" (PDF). Archived from the original (PDF) on January 3, 2014.
Retrieved July 20, 2013.
27 "Archived copy". Archived from the original on February 20, 2011.
Retrieved December 26, 2010.
28 "Intel vPro Provisioning" (PDF). HP (Hewlett Packard). Retrieved 2008-06-02.
29 "vPro Setup and Configuration for the dc7700 Business PC with Intel vPro
Technology"(PDF). HP (Hewlett Packard). Retrieved 2008-06-02.[permanent dead link]
30 "Part 4: Post Deployment of Intel vPro in an Altiris Environment Intel: Partial
UnProvDefault". Intel (forum). Retrieved 2008-09-12.
31 "Technical Considerations for Intel AMT in a Wireless Environment". Intel. 2007-
09-27. Retrieved 2008-08-16.
32 "Intel Active Management Technology Setup and Configuration Service, Version
5.0"(PDF). Intel. Retrieved 2008-08-04.[permanent dead link]
33 "Intel AMT - Fast Call for Help". Intel. 2008-08-15. Retrieved 2008-08-17.
[permanent dead link](Intel developer's blog)
34 "Archived copy". Archived from the original on 2016-01-03. Retrieved 2016-01-16.
35 "Archived copy". Archived from the original on November 1, 2014.
Retrieved February 25, 2014.
36 "Positive Technologies Blog: Disabling Intel ME 11 via undocumented mode".
Retrieved 2017-08-30.
37 Igor Skochinsky (Hex-Rays) Rootkit in your laptop, Ruxcon Breakpoint 2012
38 "Intel Ethernet Controller I210 Datasheet" (PDF). Intel. 2013. pp. 1, 15, 52, 621–
776. Retrieved 2013-11-09.
39 "Intel Ethernet Controller X540 Product Brief" (PDF). Intel. 2012. Retrieved 2014-
02-26.
40 Joanna Rutkowska. "A Quest to the Core" (PDF). Invisiblethingslab.com.
Retrieved 2016-05-25.
41 "Archived copy" (PDF). Archived from the original (PDF) on February 11, 2014.
Retrieved February 26, 2014.
42 "Platforms II" (PDF). Users.nik.uni-obuda.hu. Retrieved 2016-05-25.
43 "New Intel vPro Processor Technology Fortifies Security for Business PCs (news
release)". Intel. 2007-08-27. Archived from the original on 2007-09-12. Retrieved 2007-
08-07.
44 "Intel® AMT Critical Firmware Vulnerability". Intel. Retrieved 10 June 2017.
45 "Intel Software Network, engineer / developers forum". Intel. Retrieved 2008-08-
09.[permanent dead link]
46 "Cisco Security Solutions with Intel Centrino Pro and Intel vPro Processor
Technology"(PDF). Intel. 2007.
47 "Invisible Things Lab to present two new technical presentations disclosing
system-level vulnerabilities affecting modern PC hardware at its
core" (PDF). Invisiblethingslab.com. Retrieved 2016-05-25.
48 "Berlin Institute of Technology : FG Security in telecommunications : Evaluating
"Ring-3" Rootkits" (PDF). Stewin.org. Retrieved 2016-05-25.
49 "Persistent, Stealthy Remote-controlled Dedicated Hardware
Malware" (PDF). Stewin.org. Retrieved 2016-05-25.
50 "Security Evaluation of Intel's Active Management
Technology" (PDF). Web.it.kth.se. Retrieved 2016-05-25.
51 "CVE - CVE-2017-5689". Cve.mitre.org. Retrieved 2017-05-07.
52 "Intel Hidden Management Engine - x86 Security Risk?". Darknet. 2016-06-16.
Retrieved 2017-05-07.
53 Garrett, Matthew (2017-05-01). "Intel's remote AMT
vulnerablity". mjg59.dreamwidth.org. Retrieved 2017-05-07.
54 "2017-05-05 ALERT! Intel AMT EXPLOIT OUT! IT'S BAD! DISABLE AMT
NOW!". Ssh.com\Accessdate=2017-05-07.
55 Dan Goodin (2017-05-06). "The hijacking flaw that lurked in Intel chips is worse
than anyone thought". Ars Technica. Retrieved 2017-05-08.
56 "General: BIOS updates due to Intel AMT IME vulnerability - General Hardware -
Laptop - Dell Community". En.community.dell.com. Retrieved 2017-05-07.
57 "Advisory note: Intel Firmware vulnerability – Fujitsu Technical Support pages
from Fujitsu Fujitsu Continental Europe, Middle East, Africa & India".
Support.ts.fujitsu.com. 2017-05-01. Retrieved 2017-05-08.
58 "HPE | HPE CS700 2.0 for VMware". H22208.www2.hpe.com. 2017-05-01.
Retrieved 2017-05-07.
59 "Intel® Security Advisory regarding escalation o... |Intel
Communities". Communities.intel.com. Retrieved 2017-05-07.
60 "Intel Active Management Technology, Intel Small Business Technology, and
Intel Standard Manageability Remote Privilege Escalation". Support.lenovo.com.
Retrieved 2017-05-07.
61 "MythBusters: CVE-2017-5689". Embedi.com. Retrieved 2017-05-07.
62 Charlie Demerjian (2017-05-01). "Remote security exploit in all 2008+ Intel
platforms". SemiAccurate.com. Retrieved 2017-05-07.
63 "Sneaky hackers use Intel management tools to bypass Windows firewall".
Retrieved 10 June 2017.
64 Tung, Liam. "Windows firewall dodged by 'hot-patching' spies using Intel AMT,
says Microsoft - ZDNet". Retrieved 10 June 2017.
65 "PLATINUM continues to evolve, find ways to maintain invisibility". Retrieved 10
June 2017.
66 "Malware Uses Obscure Intel CPU Feature to Steal Data and Avoid Firewalls".
Retrieved 10 June 2017.
67 "Hackers abuse low-level management feature for invisible backdoor". iTnews.
Retrieved 10 June 2017.
68 "Vxers exploit Intel's Active Management for malware-over-LAN • The
Register". www.theregister.co.uk. Retrieved 10 June 2017.
69 Security, heise. "Intel-Fernwartung AMT bei Angriffen auf PCs genutzt". Security.
Retrieved 10 June 2017.
70 "PLATINUM activity group file-transfer method using Intel AMT SOL". Channel 9.
Retrieved 10 June 2017.
71 Researchers find almost EVERY computer with an Intel Skylake and above CPU
can be owned via USB
72 "Intel® Management Engine Critical Firmware Update (Intel SA-00086)". Intel.
73 "Intel Chip Flaws Leave Millions of Devices Exposed". Wired.
74 "Disabling AMT in BIOS". software.intel.com. Retrieved 2017-05-17.
75 "Are consumer PCs safe from the Intel ME/AMT exploit? -
SemiAccurate". semiaccurate.com.
76 "Intel x86s hide another CPU that can take over your machine (you can't audit
it)". Boing Boing. 2016-06-15. Retrieved 2017-05-11.
77 "[coreboot] : AMT bug". Mail.coreboot.org. 2017-05-11. Retrieved 2017-06-13.
78 "Disabling Intel AMT on Windows (and a simpler CVE-2017-5689 Mitigation
Guide)". Social Media Marketing | Digital Marketing | Electronic Commerce. 2017-05-03.
Retrieved 2017-05-17.
79 "bartblaze/Disable-Intel-AMT". GitHub. Retrieved 2017-05-17.
80 "mjg59/mei-amt-check". GitHub. Retrieved 2017-05-17.
81 "Intel® AMT Critical Firmware Vulnerability". Intel. Retrieved 2017-05-17.
82 "Positive Technologies Blog: Disabling Intel ME 11 via undocumented mode".
Retrieved 2017-08-30.
83 "Intel Patches Major Flaws in the Intel Management Engine". Extreme Tech.
84 Vaughan-Nichols, Steven J. "Taurinus X200: Now the most 'Free Software' laptop
on the planet - ZDNet".
85 Kißling, Kristian. "Libreboot: Thinkpad X220 ohne Management Engine » Linux-
Magazin". Linux-Magazin.
86 online, heise. "Libiquity Taurinus X200: Linux-Notebook ohne Intels Management
Engine". heise online.
87 "Intel AMT Vulnerability Shows Intel's Management Engine Can Be Dangerous". 2
May 2017.
88 "Purism Explains Why It Avoids Intel's AMT And Networking Cards For Its
Privacy-Focused 'Librem' Notebooks". Tom's Hardware. 2016-08-29. Retrieved 2017-05-
10.
89 "The Free Software Foundation loves this laptop, but you won't".
90 "FSF Endorses Yet Another (Outdated)
6.2 Countermeasures
6.2.1 Computer and network surveillance
6.2.1.1 Commentary
6.2.1.1.1 Introduction
Computer and network surveillance monitors computer activity, data stored on a system,
and data transferred over networks. It is often carried out covertly and may be conducted
by governments, corporations or individuals. It may or may not be legal, and may or may
not require authorization from a court or other agency. Almost all Internet traffic can
be monitored[1] and is generally available to governments,[2] though civil liberties
organisations have challenged this situation.[2][3][4][5]
6.2.1.1.2 Network surveillance
Network surveillance, the monitoring of data and traffic on the Internet,[6] is a major
aim of intelligence agencies; in the US, phone calls and broadband internet traffic
(e-mails, web traffic, instant messaging, etc.) can be subject to unimpeded, real-time
monitoring.[7][8][9] Packet capture (sniffing) monitors data traffic on a network[10] and
is used by law enforcement and intelligence agencies.[11][12][13][14][15][16][17][18][19][20][21]
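For illustration, here is a minimal packet-capture sketch in Python. It assumes a Linux host (AF_PACKET raw sockets are Linux-only), root privileges, and Python 3.8+ for bytes.hex() with a separator; it simply prints the Ethernet header of every frame seen on the wire.

import socket
import struct

ETH_P_ALL = 0x0003  # Linux constant: capture frames of every protocol

# Requires root; every frame reaching any interface is delivered here.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
while True:
    frame, _ = sniffer.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])  # Ethernet header
    print(f"{src.hex(':')} -> {dst.hex(':')} type=0x{ethertype:04x} len={len(frame)}")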
6.2.1.1.3 Corporate surveillance
Corporate surveillance logs are often shared with other organisations for business
intelligence and marketing.[22][23] Business purposes for monitoring may include the
following:
• Preventing misuse of resources: discouraging unproductive personal activities on
company time, monitoring employee performance, and reducing unnecessary network traffic
and consumption of network bandwidth.
• Promoting adherence to policies.
• Preventing lawsuits for discrimination or harassment in the workplace or infringement
suits.
• Safeguarding records, e.g. personal data, and monitoring compliance with policies.
• Safeguarding company assets, e.g. intellectual property, trade secrets and business
strategies, and monitoring employee actions against company policy.
Monitoring also rests on ownership of the technology resources: the firm's networks,
servers, computers, files and e-mail are company property.[24][25][26]
6.2.1.1.4 Malicious software
Malicious monitoring software examines the data stored on a computer and watches user
activity, catching suspicious data, computer use and passwords, and reporting the
activity back to administrators.[27][28][29][30][31][32]
6.2.1.1.5 Social network analysis
Surveillance can create maps of social networks from data held by social networking
sites and from traffic analysis of phone calls[33] and internet traffic. These maps can
then be mined for information such as personal interests.[34][35][36]
The Defense Advanced Research Projects Agency (DARPA), the National Security
Agency (NSA), and the Department of Homeland Security (DHS) are performing research
on social network analysis.[37][38] Intelligence organisations use these techniques to
find and remove risks.[36][39]
6.2.1.1.6 Monitoring from a distance
Computers and devices can be monitored from a distance through the electromagnetic
radiation they emit.[40][41][42][43][44] Lawmakers have enacted laws restricting
investigators' use of such distance monitoring and of the devices that perform it.[45][46][47]
6.2.1.1.7 Policeware and govware
Policeware is software designed to police citizens by monitoring their discussions and
interactions.[48][49][50][51][52][53] Spyware used or made by the government is sometimes
called govware.[54][55][56]
6.2.1.1.8 Surveillance as an aid to censorship
Surveillance can be carried out without censorship, but censorship is difficult without
surveillance,[57] and surveillance itself encourages self-censorship.[58][59][60][61]
6.2.1.2 References
1. Anne Broache. "FBI wants widespread monitoring of 'illegal' Internet activity". CNET.
Retrieved 25 March 2014.
2. "Is the U.S. Turning Into a Surveillance Society?". American Civil Liberties Union.
Retrieved March 13, 2009.
3. "Bigger Monster, Weaker Chains: The Growth of an American Surveillance
Society" (PDF). American Civil Liberties Union. January 15, 2003. Retrieved March
13, 2009.
4. "Anonymous hacks UK government sites over 'draconian surveillance' ", Emil
Protalinski, ZDNet, 7 April 2012, retrieved 12 March 2013
5. Hacktivists in the frontline battle for the internet retrieved 17 June 2012
6. Diffie, Whitfield; Susan Landau (August 2008). "Internet Eavesdropping: A Brave New
World of Wiretapping". Scientific American. Retrieved 2009-03-13.
7. "CALEA Archive -- Electronic Frontier Foundation". Electronic Frontier Foundation
(website). Retrieved 2009-03-14.
8. "CALEA: The Perils of Wiretapping the Internet". Electronic Frontier Foundation
(website). Retrieved 2009-03-14.
9. "CALEA: Frequently Asked Questions". Electronic Frontier Foundation (website).
Retrieved 2009-03-14.
10.Kevin J. Connolly (2003). Law of Internet Security and Privacy. Aspen Publishers.
p. 131. ISBN 978-0-7355-4273-0.
11.American Council on Education vs. FCC Archived 2012-09-07 at the Wayback Machine.,
Decision, United States Court of Appeals for the District of Columbia Circuit, 9 June
2006. Retrieved 8 September 2013.
12.Hill, Michael (October 11, 2004). "Government funds chat room surveillance research".
USA Today. Associated Press. Retrieved 2009-03-19.
13.McCullagh, Declan (January 30, 2007). "FBI turns to broad new wiretap method". ZDNet
News. Retrieved 2009-03-13.
14."First round in Internet war goes to Iranian intelligence", Debkafile, 28 June
2009. (subscription required)
15.O'Reilly, T. (2005). What is Web 2.0: Design Patterns and Business Models for the Next
Generation of Software. O’Reilly Media, 1-5.
16.Fuchs, C. (2011). New Media, Web 2.0 and Surveillance. Sociology Compass, 134-147.
17.Fuchs, C. (2011). Web 2.0, Presumption, and Surveillance. Surveillance & Society, 289-
309.
18. Muise, A., Christofides, E., & Desmarais, S. (2014). "Creeping" or just information
seeking? Gender differences in partner monitoring in response to jealousy on
Facebook. Personal Relationships, 21(1), 35-50.
19. "How Stuff Works". electronics.howstuffworks.com/gadgets/high-tech-gadgets/should-smart-devices-automatically-call-cops.htm. Retrieved November 10, 2017.
20. "How Stuff Works". electronics.howstuffworks.com/gadgets/high-tech-gadgets/should-smart-devices-automatically-call-cops.htm. Retrieved November 10, 2017.
21. "Alexa Takes the Stand: Listening Devices Raise Privacy Issues". Time. time.com/4766611/alexa-takes-the-stand-listening-devices-raise-privacy-issues. Retrieved November 10, 2017.
22.Story, Louise (November 1, 2007). "F.T.C. to Review Online Ads and Privacy". New York
Times. Retrieved 2009-03-17.
23.Butler, Don (January 31, 2009). "Are we addicted to being watched?". The Ottawa
Citizen. canada.com. Retrieved 26 May 2013.
24.Soghoian, Chris (September 11, 2008). "Debunking Google's log anonymization
propaganda". CNET News. Retrieved 2009-03-21.
25.Joshi, Priyanki (March 21, 2009). "Every move you make, Google will be watching
you". Business Standard. Retrieved 2009-03-21.
26."Advertising and Privacy". Google (company page). 2009. Retrieved 2009-03-21.
27."Spyware Workshop: Monitoring Software on Your OC: Spywae, Adware, and Other
Software", Staff Report, U.S. Federal Trade Commission, March 2005. Retrieved 7
September 2013.
28.Aycock, John (2006). Computer Viruses and Malware. Springer. ISBN 978-0-387-30236-
2.
29."Office workers give away passwords for a cheap pen", John Leyden, The Register, 8
April 2003. Retrieved 7 September 2013.
30."Passwords are passport to theft", The Register, 3 March 2004. Retrieved 7 September
2013.
31."Social Engineering Fundamentals, Part I: Hacker Tactics", Sarah Granger, 18
December 2001.
32."Stuxnet: How does the Stuxnet worm spread?". Antivirus.about.com. 2014-03-03.
Retrieved 2014-05-17.
33.Keefe, Patrick (March 12, 2006). "Can Network Theory Thwart Terrorists?". New York
Times. Retrieved 14 March 2009.
34.Albrechtslund, Anders (March 3, 2008). "Online Social Networking as Participatory
Surveillance". First Monday. 13 (3). Retrieved March 14, 2009.
35.Fuchs, Christian (2009). Social Networking Sites and the Surveillance Society. A
Critical Case Study of the Usage of studiVZ, Facebook, and MySpace by Students in
Salzburg in the Context of Electronic Surveillance (PDF). Salzburg and Vienna:
Forschungsgruppe Unified Theory of Information. ISBN 978-3-200-01428-2.
Retrieved March 14, 2009.
36.Ethier, Jason (27 May 2006). "Current Research in Social Network Theory" (PDF).
Northeastern University College of Computer and Information Science. Retrieved 15
March 2009.
37.Marks, Paul (June 9, 2006). "Pentagon sets its sights on social networking
websites". New Scientist. Retrieved 2009-03-16.
38.Kawamoto, Dawn (June 9, 2006). "Is the NSA reading your MySpace profile?". CNET
News. Retrieved 2009-03-16.
39.Ressler, Steve (July 2006). "Social Network Analysis as an Approach to Combat
Terrorism: Past, Present, and Future Research". Homeland Security Affairs. II (2).
Retrieved March 14, 2009.
40.McNamara, Joel (4 December 1999). "Complete, Unofficial Tempest Page". Retrieved 7
September 2013.
41.Van Eck, Wim (1985). "Electromagnetic Radiation from Video Display Units: An
Eavesdropping Risk?" (PDF). Computers & Security. 4: 269–286. doi:10.1016/0167-
4048(85)90046-X.
42.Kuhn, M.G. (26–28 May 2004). "Electromagnetic Eavesdropping Risks of Flat-Panel
Displays" (PDF). 4th Workshop on Privacy Enhancing Technologies. Toronto: 23–25.
43.Asonov, Dmitri; Agrawal, Rakesh (2004), Keyboard Acoustic Emanations (PDF), IBM
Almaden Research Center
44.Yang, Sarah (14 September 2005), "Researchers recover typed text using audio
recording of keystrokes", UC Berkeley News
45."LA Times". Retrieved November 10, 2017.
46."Wikipedia Stingray phone tracker". Retrieved November 10,2017.
47.Adi Shamir & Eran Tromer. "Acoustic cryptanalysis". Blavatnik School of Computer
Science, Tel Aviv University. Retrieved 1 November 2011.
48.Jeremy Reimer (20 July 2007). "The tricky issue of spyware with a badge: meet
'policeware'". Ars Technica.
49.Hopper, D. Ian (4 May 2001). "FBI's Web Monitoring Exposed". ABC News.
50."Clipper Chip". Retrieved November 10, 2017.
51."New York Times". Retrieved November 10, 2017.
52. "Stanford University Clipper Chip". cs.stanford.edu/people/eroberts/cs201/projects/1995-96/clipper-chip/history.html. Retrieved November 10, 2017.
53."Consumer Broadband and Digital Television Promotion Act", U.S. Senate bill S.2048,
107th Congress, 2nd session, 21 March 2002. Retrieved 8 September 2013.
54."Swiss coder publicises government spy Trojan". News.techworld.com. Retrieved 25
March 2014.
55.Basil Cupa, Trojan Horse Resurrected: On the Legality of the Use of Government
Spyware (Govware), LISS 2013, pp. 419-428
56."FAQ – Häufig gestellte Fragen". Ejpd.admin.ch. 2011-11-23. Archived from the
original on 2013-05-06. Retrieved 2014-05-17.
57."Censorship is inseparable from surveillance", Cory Doctorow, The Guardian, 2 March
2012
58."Trends in transition from classical censorship to Internet censorship: selected
country overviews"
59.The Enemies of the Internet Special Edition : Surveillance, Reporters Without Borders,
12 March 2013
60."When Secrets Aren’t Safe With Journalists", Christopher Soghoian, New York Times,
26 October 2011
61.Everyone's Guide to By-passing Internet Censorship, The Citizen Lab, University of
Toronto, September 2007
6.2.2 Operation: Bot Roast
6.2.2.1 Commentary
6.2.2.1.1 Introduction
The FBI used Operation: Bot Roast to track down bot herders, crackers and virus coders
who install malware on computers without the users' knowledge. The malware turns a
computer into a zombie that sends spam to other computers from the compromised machine,
the infected machines together forming a botnet. The operation was mounted because the
massive accumulation of botnet resources poses a threat to national security.[1]
6.2.2.1.2 The results
The operation disrupted and broke up a number of bot-herding operations. Over one million
compromised computers were identified, and several malware authors were arrested. Most
owners of infected computers, when informed, had been unaware of the compromise.[1][2]
6.2.2.2 References
1. "OPERATION: BOT ROAST 'Bot-herders' Charged as Part of Initiative" (Press release).
Federal Bureau of Investigation. 2007-06-13. Retrieved 2012-11-26.
2. "FBI tries to fight zombie hordes" (Press release). BBC News. 2007-06-14.
Retrieved 2007-06-20.
3. Dan Goodin (13 June 2007). "FBI logs its millionth zombie address". the register.
Retrieved 2008-09-26.
4. Akill pleads guilty to all charges, By Ulrika Hedquist, 1 April 2008, Computerworld
6.2.3 Honeypot
6.2.3.1 Commentary
6.2.3.1.1 Introduction
A honeypot is set up as a decoy to lure cyberattackers and to detect, deflect or study
attempts to gain unauthorized access to computers. It uses a computer, applications and
data to imitate a real system and appears to be part of a network, but is in fact
isolated and closely supervised. A honeypot assumes that all communication with it is
hostile, because no legitimate user has any reason to access it. Its logs give insight
into the threats facing a network infrastructure while deflecting attackers from real
assets.
High-interaction honeypots reproduce a full production system and capture extensive
data. Pure honeypots are complete production systems with a tap on the honeypot's link
to the network. These honeypots let the attacker gain root access and then record every
action: the attacker can reach all commands and files on the system, which carries the
most risk but offers the best chance of gathering knowledge. Low-interaction honeypots
implement only the most frequently attacked services, so they carry less risk and are
easier to maintain. Virtual machines are a common base for honeypots because a
compromised guest can be restored easily. Two or more honeypots on a network form a
honeynet, while a honeyfarm is a centralized collection of honeypots and analysis tools.
6.2.3.1.2 Types
Honeypots are categorised by purpose and by level of involvement into production
honeypots and research honeypots.
Production honeypots are easy to use, capture only limited information, and are placed
inside the production network alongside other servers to improve overall security. They
are usually low-interaction honeypots and are easier to deploy, but yield less
information about attacks or attackers than research honeypots.
Research honeypots gather data about the motives and tactics of attackers; they add
little direct value to the home organization.[2] They are complex to set up and
maintain, capture extensive information, and are used primarily by research
organizations.[3]
Design criteria classify honeypots as pure honeypots, high-interaction honeypots and
low-interaction honeypots.
Pure honeypots are full production systems; the attack is monitored via a bug tap
installed on the honeypot's link to the network, so no extra software is required.
High-interaction honeypots imitate production systems hosting a variety of services,
causing an attacker to waste time on many services. Virtual machines allow multiple
honeypots to be hosted on a single computer, so a compromised honeypot can be restored
quickly. High-interaction honeypots give more security because they are hard to detect,
but they are costly to maintain; without virtual machines, each honeypot requires its
own physical computer, which is expensive.
Low-interaction honeypots provide only the services frequently requested by attackers.
Because they consume few resources, multiple virtual honeypots are easy to host on one
system, response times are short, and the smaller code base reduces the complexity of
securing the virtual system. A minimal sketch of such a honeypot follows.
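This sketch, in Python, logs every connection attempt to a fake login service; the port number, banner and log file name are illustrative assumptions, not a hardened deployment.

import datetime
import socket

PORT = 2323  # hypothetical choice; real deployments often bind the telnet port 23

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", PORT))
server.listen(5)
while True:
    conn, addr = server.accept()      # every connection is suspect by definition
    conn.sendall(b"login: ")          # minimal banner to bait further interaction
    data = conn.recv(1024)            # capture whatever the attacker sends
    with open("honeypot.log", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} {addr[0]} {data!r}\n")
    conn.close()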
6.2.3.1.3 Deception technology
Deception technology builds on basic honeypot technology, adding advanced automation for
scale. It addresses the automated deployment of honeypot resources across a large
organisation.[4]
6.2.3.1.4 Malware honeypots
Malware honeypots exploit the known replication and attack vectors of malware to detect
it. Replication vectors can easily be checked for evidence of modification, either
manually or with special-purpose honeypots that emulate drives. Malware increasingly
searches for and steals crypto-currencies,[5] which makes it possible to offer services
that create and monitor honeypots holding small sums of money in order to give early
warning of infection.[6]
6.2.3.1.5 Spam versions
Spammers abuse vulnerable resources such as open mail relays and open proxies. System
administrators have created honeypots that imitate these resources in order to identify
spammer activity. Such honeypots make abuse more difficult and risky, and are a potent
countermeasure.
These tools reveal the abuser's IP address and capture spam in bulk, which lets
operators determine spammers' URLs and response mechanisms. Open-relay honeypots can
ascertain the e-mail addresses (dropboxes) that spammers use as targets for the test
messages they employ to discover open relays. The countermeasure is then to deliver any
illicit relay e-mail received that is addressed to such a dropbox: delivery tells the
spammer the honeypot is a genuine open relay, and they often respond by sending large
quantities of relay spam to that honeypot, which stops it.[7] The apparent source may be
another abused system, since spammers and other abusers may use a chain of abused
systems to make the original starting point of the abuse traffic difficult to detect.
Spam, which mostly originated in the U.S.[8] and flowed through open relays, has been
much reduced over the past ten years, and several open-relay honeypots have been
developed.[9][10][11]
6.2.3.1.6 Email trap
An e-mail address established solely to receive spam is a spam honeypot (spamtrap). Spam
arrives at such an address exactly as it arrives at any legitimate address. Project
Honey Pot uses honeypot pages embedded in websites to disseminate uniquely tagged
spamtrap e-mail addresses, allowing the spammers who harvest them to be tracked.
6.2.3.1.7 Database honeypot
Databases are attacked using SQL injection, which basic firewalls do not recognize, so
database firewalls are used for protection. Some of them provide or support honeypot
architectures that let the intruder run against a trap database.[12]
6.2.3.1.8 Detection
Honeypot detection systems are counter-weapons employed by spammers. They use the unique
characteristics of specific honeypots to identify them, although doing this across many
honeypots can demand disproportionate effort.[13]
6.2.3.1.9 Honey nets
A honey net is a network of high interaction honeypots that simulates a production
network and configured for all activity to be monitored, recorded and in a degree,
discreetly regulated.
Honeynet Project
Two or more honeypots on a network form a honey net. It monitors a larger and/or more
diverse network where one honeypot is insufficient. Honey nets and honeypots
implement parts of larger network intrusion detection systems. A honey farm is a
centralized collection of honeypots and analysis tools[14] and leading to the Honeynet
Project.[15]
6.2.3.2 References
1 Cole, Eric; Northcutt, Stephen. "Honeypots: A Security Manager's Guide to
Honeypots".
2 Lance Spitzner (2002). Honeypots tracking hackers. Addison-Wesley. pp. 68–
70. ISBN 0-321-10895-7.
3 Katakoglu, Onur (2017-04-03). "Attacks Landscape in the Dark Side of the
Web" (PDF). acm.org. Retrieved 2017-08-09.
4 "Deception related technology – its not just a "nice to have", its a new strategy
of defense – Lawrence Pingree". 28 September 2016.
5 Litke, Pat. "Cryptocurrency-Stealing Malware Landscape". Secureworks.com.
SecureWorks. Retrieved 9 March 2016.
6 "Bitcoin Vigil: Detecting Malware Through Bitcoin". cryptocoins news. May 5,
2014.
7 Edwards, M. "Antispam Honeypots Give Spammers Headaches". Windows IT Pro.
Retrieved 11 March 2015.
8 "Sophos reveals latest spam relaying countries". Help Net Security. Help Net
Security. 24 July 2006. Retrieved 14 June 2013.
9 "Honeypot Software, Honeypot Products, Deception Software". Intrusion
Detection, Honeypots and Incident Handling Resources. Honeypots.net. 2013.
Retrieved 14 June 2013.
10 dustintrammell (27 February 2013). "spamhole – The Fake Open SMTP Relay
Beta". SourceForge. Dice Holdings, Inc. Retrieved 14 June 2013.
11 Ec-Council (5 July 2009). Certified Ethical Hacker: Securing Network
Infrastructure in Certified Ethical Hacking. Cengage Learning. pp. 3–. ISBN 978-1-4354-
8365-1. Retrieved 14 June 2013.
12 "Secure Your Database Using Honeypot Architecture". www.dbcoretech.com.
August 13, 2010. Archived from the original on March 8, 2012.
13 "Deception Toolkit". All.net. All.net. 2013. Retrieved 14 June 2013.
14 "cisco router Customer support". Clarkconnect.com. Retrieved 2015-07-31.
15 "Know Your Enemy: GenII Honey Nets Easier to deploy, harder to detect, safer to
maintain". Honeynet Project. Honeynet Project. 12 May 2005. Retrieved 14 June 2013.
16 "The word for "bear"". Pitt.edu. Retrieved 12 Sep 2014.
17 Lance Spitzner (2002). Honeypots tracking hackers. Addison-Wesley. ISBN 0-321-
10895-7.
18 Sean Bodmer, CISSP, CEH, Dr Max Kilger, PhD, DrPH(c) Gregory Carpenter, CISM,
Jade Jones, Esq., JD (2012). Reverse Deception: Organized Cyber Threat Counter-
Exploitation. McGraw-Hill Education. ISBN 978-0-07-177249-5.
6.2.4 Application protocol-based intrusion detection system
6.2.4.1 Commentary
An application protocol-based intrusion detection system (APIDS) detects intrusions by
monitoring and analysing one or more specific application protocols in use on a
computing system.
An APIDS monitors the dynamic behavior and state of the protocol. It typically consists
of a system or agent sitting between a process and a group of servers, monitoring and
analyzing the application protocol passing between the connected devices. For example,
the SQL protocol flowing through the middleware/business logic between a web server and
its database management system can be checked to corroborate and enforce valid use of
the protocol.
A profile of the application protocol can be built from logged data, either
automatically through statistical analysis or by a manual process, yielding a
fingerprinted application. Anything that happens outside the bounds of the fingerprinted
profile can then be escalated as an alert, since it may indicate that the application
has been subverted or changed. A sketch of this fingerprinting approach follows.
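This toy Python sketch fingerprints SQL statements: statements are normalized into templates during a trusted training phase, and statements whose template falls outside the learned profile are escalated. The normalization rules are deliberately simplistic assumptions, not a production parser.

import re

def fingerprint(sql: str) -> str:
    # Collapse literals so structurally identical statements share one template.
    sql = re.sub(r"'[^']*'", "?", sql)
    sql = re.sub(r"\b\d+\b", "?", sql)
    return re.sub(r"\s+", " ", sql).strip().lower()

profile = set()  # learned during the training phase

def train(statement: str) -> None:
    profile.add(fingerprint(statement))

def check(statement: str) -> bool:
    # Escalate any statement whose template lies outside the learned profile.
    ok = fingerprint(statement) in profile
    if not ok:
        print(f"ALERT: anomalous statement: {statement!r}")
    return ok

train("SELECT name FROM users WHERE id = 42")
check("SELECT name FROM users WHERE id = 7")         # same template: passes
check("SELECT name FROM users WHERE id = 7 OR 1=1")  # injection shape: alert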
6.2.4.2 References
1. Abdullah A. Mohamed, "Design Intrusion Detection System Based On Image
Block Matching", International Journal of Computer and Communication Engineering,
IACSIT Press, Vol. 2, No. 5, September 2013.
2. Brandon Lokesak (December 4, 2008). "A Comparison Between Signature Based
and Anomaly Based Intrusion Detection Systems" (PPT). www.iup.edu.
3. "Gartner report: Market Guide for User and Entity Behavior Analytics".
September 2015.
4. "Gartner: Hype Cycle for Infrastructure Protection, 2016".
5. "Gartner: Defining Intrusion Detection and Prevention Systems".
Retrieved September 20, 2016.
6. Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800–94). Retrieved 1 January 2010.
7. "NIST – Guide to Intrusion Detection and Prevention Systems (IDPS)" (PDF).
February 2007. Retrieved 2010-06-25.
8. Robert C. Newman (19 February 2009). Computer Security: Protecting Digital
Resources. Jones & Bartlett Learning. ISBN 978-0-7637-5994-0. Retrieved 25
June 2010.
9. c Michael E. Whitman; Herbert J. Mattord (2009). Principles of Information
Security. Cengage Learning EMEA. ISBN 978-1-4239-0177-8. Retrieved 25 June 2010.
10. Tim Boyles (2010). CCNA Security Study Guide: Exam 640-553. John Wiley and
Sons. p. 249. ISBN 978-0-470-52767-2. Retrieved 29 June 2010.
11. Harold F. Tipton; Micki Krause (2007). Information Security Management
Handbook. CRC Press. p. 1000. ISBN 978-1-4200-1358-0. Retrieved 29 June 2010.
12. John R. Vacca (2010). Managing Information Security. Syngress.
p. 137. ISBN 978-1-59749-533-2. Retrieved 29 June 2010.
13. Engin Kirda; Somesh Jha; Davide Balzarotti (2009). Recent Advances in Intrusion
Detection: 12th International Symposium, RAID 2009, Saint-Malo, France, September
23–25, 2009, Proceedings. Springer. p. 162. ISBN 978-3-642-04341-3. Retrieved 29
June 2010.
14. Whitman, Michael.; Mattord, verma (2008). Principles of Information Security.
Course Technology. pp. 290–301. ISBN 978-1-4239-0177-8.
15. Anderson, Ross (2001). Security Engineering: A Guide to Building Dependable
Distributed Systems. New York: John Wiley & Sons. pp. 387–388. ISBN 978-0-471-38922-
4.
16. http://www.giac.org/paper/gsec/235/limitations-network-intrusion-
detection/100739
17. Anderson, James P., "Computer Security Threat Monitoring and Surveillance,"
Washing, PA, James P. Anderson Co., 1980.
18. David M. Chess; Steve R. White (2000). "An Undetectable Computer
Virus". Proceedings of Virus Bulletin Conference.
19. Denning, Dorothy E., "An Intrusion Detection Model," Proceedings of the Seventh
IEEE Symposium on Security and Privacy, May 1986, pages 119–131
20. Lunt, Teresa F., "IDES: An Intelligent System for Detecting Intruders,"
Proceedings of the Symposium on Computer Security; Threats, and Countermeasures;
Rome, Italy, November 22–23, 1990, pages 110–121.
21. Lunt, Teresa F., "Detecting Intruders in Computer Systems," 1993 Conference on
Auditing and Computer Technology, SRI International
22. Sebring, Michael M., and Whitehurst, R. Alan., "Expert Systems in Intrusion
Detection: A Case Study," The 11th National Computer Security Conference, October,
1988
23. Smaha, Stephen E., "Haystack: An Intrusion Detection System," The Fourth
Aerospace Computer Security Applications Conference, Orlando, FL, December, 1988
24. McGraw, Gary (May 2007). "Silver Bullet Talks with Becky Bace" (PDF). IEEE
Security & Privacy Magazine. 5 (3): 6–9. doi:10.1109/MSP.2007.70. Retrieved 18
April 2017.
25. Vaccaro, H.S., and Liepins, G.E., "Detection of Anomalous Computer Session
Activity," The 1989 IEEE Symposium on Security and Privacy, May, 1989
26. Teng, Henry S., Chen, Kaihu, and Lu, Stephen C-Y, "Adaptive Real-time Anomaly
Detection Using Inductively Generated Sequential Patterns," 1990 IEEE Symposium on
Security and Privacy
27. Heberlein, L. Todd, Dias, Gihan V., Levitt, Karl N., Mukherjee, Biswanath, Wood,
Jeff, and Wolber, David, "A Network Security Monitor," 1990 Symposium on Research in
Security and Privacy, Oakland, CA, pages 296–304
28. Winkeler, J.R., "A UNIX Prototype for Intrusion and Anomaly Detection in Secure
Networks," The Thirteenth National Computer Security Conference, Washington, DC.,
pages 115–124, 1990
29. Dowell, Cheri, and Ramstedt, Paul, "The ComputerWatch Data Reduction Tool,"
Proceedings of the 13th National Computer Security Conference, Washington, D.C.,
1990
30. Snapp, Steven R, Brentano, James, Dias, Gihan V., Goan, Terrance L., Heberlein,
L. Todd, Ho, Che-Lin, Levitt, Karl N., Mukherjee, Biswanath, Smaha, Stephen E., Grance,
Tim, Teal, Daniel M. and Mansur, Doug, "DIDS (Distributed Intrusion Detection System)
-- Motivation, Architecture, and An Early Prototype," The 14th National Computer
Security Conference, October, 1991, pages 167–176.
31. Jackson, Kathleen, DuBois, David H., and Stallings, Cathy A., "A Phased
Approach to Network Intrusion Detection," 14th National Computing Security
Conference, 1991
32. Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time,"
Proceedings of the 7th USENIX Security Symposium, San Antonio, TX, 1998
33. Amoroso, Edward, "Intrusion Detection: An Introduction to Internet Surveillance,
Correlation, Trace Back, Traps, and Response," Intrusion.Net Books, Sparta, New
Jersey, 1999, ISBN 0-9666700-7-8
34. Kohlenberg, Toby (Ed.), Alder, Raven, Carter, Dr. Everett F. (Skip), Jr., Esler,
Joel., Foster, James C., Jonkman Marty, Raffael, and Poor, Mike, "Snort IDS and IPS
Toolkit," Syngress, 2007, ISBN 978-1-59749-099-3
35. Barbara, Daniel, Couto, Julia, Jajodia, Sushil, Popyack, Leonard, and Wu,
Ningning, "ADAM: Detecting Intrusions by Data Mining," Proceedings of the IEEE
Workshop on Information Assurance and Security, West Point, NY, June 5–6, 2001
36. Intrusion Detection Techniques for Mobile Wireless Networks, ACM WINET 2003
<http://www.cc.gatech.edu/~wenke/papers/winet03.pdf>
37. Viegas, E.; Santin, A. O.; França, A.; Jasinski, R.; Pedroni, V. A.; Oliveira, L. S.
(2017-01-01). "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine
for Embedded Systems". IEEE Transactions on Computers. 66 (1): 163–
177. doi:10.1109/TC.2016.2560839. ISSN 0018-9340.
38. França, A. L.; Jasinski, R.; Cemin, P.; Pedroni, V. A.; Santin, A. O. (2015-05-
01). "The energy cost of network security: A hardware vs. software comparison". 2015
IEEE International Symposium on Circuits and Systems (ISCAS): 81–
84. doi:10.1109/ISCAS.2015.7168575.
39. França, A. L. P. d; Jasinski, R. P.; Pedroni, V. A.; Santin, A. O. (2014-07-
01). "Moving Network Protection from Software to Hardware: An Energy Efficiency
Analysis". 2014 IEEE Computer Society Annual Symposium on VLSI: 456–
461. doi:10.1109/ISVLSI.2014.89.
40. "Towards an Energy-Efficient Anomaly-Based Intrusion Detection Engine for
Embedded Systems" (PDF). SecPLab.
41. This article incorporates public domain material from the National Institute of
Standards and Technology document "Guide to Intrusion Detection and Prevention
Systems, SP800-94" by Karen Scarfone, Peter Mell. Retrieved on 1 January 2010.
42. Bace, Rebecca Gurley (2000). Intrusion Detection. Indianapolis, IN: Macmillan
Technical. ISBN 1578701856.
43. Bezroukov, Nikolai (11 December 2008). "Architectural Issues of Intrusion
Detection Infrastructure in Large Enterprises (Revision 0.82)". Softpanorama.
Retrieved 30 July 2010.
44. P.M. Mafra and J.S. Fraga and A.O. Santin (2014). "Algorithms for a distributed
IDS in MANETs". Journal of Computer and System Sciences. 80 (3): 554–
570. doi:10.1016/j.jcss.2013.06.011.
45. Hansen, James V.; Benjamin Lowry, Paul; Meservy, Rayman; McDonald, Dan
(2007). "Genetic programming for prevention of cyberterrorism through dynamic and
evolving intrusion detection". Decision Support Systems (DSS). 43 (4): 1362–
1374. doi:10.1016/j.dss.2006.04.004. SSRN 877981  .
46. Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and
Prevention Systems (IDPS)" (PDF). Computer Security Resource Center. National
Institute of Standards and Technology (800-94). Retrieved 1 January 2010.
47. Saranya, J.; Padmavathi, G. (2015). "A Brief Study on Different Intrusions and Machine
Learning-based Anomaly Detection Methods in Wireless Sensor
Networks" (PDF). Avinashilingam Institute for Home Science and Higher Education for
Women (6(4)). Retrieved 4 April 2015.
48. Singh, Abhishek. "Evasions In Intrusion Prevention Detection Systems". Virus
Bulletin. Retrieved 1 April 2010.
6.2.5 Cryptography
6.2.5.1 Commentary
6.2.5.1.1 Introduction
Cryptography (or cryptology) is the study and practice of techniques for secure
communication in the presence of third parties (adversaries)[2] and of the protocols
that support the process,[3] covering the information security goals of data
confidentiality, data integrity, authentication, and non-repudiation.[4] The field draws
on mathematics, computer science, electrical engineering, communication science, and
physics. Applications include electronic commerce, chip-based payment cards, digital
currencies, computer passwords, and military communications.
Originally cryptography was the encryption of messages into, and back from,
unintelligible form for transmission between two points.[5] Modern cryptography uses
mathematics and computer science to develop schemes of high computational complexity,
resting on problems such as integer factorization that remain hard even for enhanced
computing technology. Its spread has led to legal problems: it can serve as a tool for
espionage and sedition[6] (with investigators in some jurisdictions able to force
disclosure of encryption keys),[7][8] and it underpins digital rights management and
copyright control.[9]
6.2.5.1.2 Terminology
Encryption converts ordinary information (plaintext) into unintelligible text
(ciphertext)[11] and decryption reverses the process; a cipher performs both operations
under the control of a key known only to the sender and receiver.
Cryptosystems are symmetric when the same key encrypts and decrypts a message, and
asymmetric when a public key encrypts a message and a private key decrypts it.
Asymmetric systems improve the security of communication but are slower;[12] examples
include Rivest–Shamir–Adleman (RSA) and elliptic-curve cryptography. Symmetric examples
are the Advanced Encryption Standard (AES) and the Data Encryption Standard (DES).[13]
Cryptography refers specifically to the use and practice of cryptographic techniques;
cryptology refers to the combined study of cryptography and cryptanalysis.[14][15][16]
6.2.5.1.3 Cryptography and Cryptanalysis
The modern field of cryptography can be divided into several areas of study.
6.2.5.1.3.1 Symmetric-key Cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a
message,[24] though a message or group of messages may use a different key than others.
A significant disadvantage of symmetric ciphers is the key management necessary to use
them securely: each distinct pair of communicating parties must share a different key,
and ideally a different key for each ciphertext exchanged as well. The number of keys
required increases as the square of the number of network members, which quickly demands
complex key management schemes to keep them all consistent and secret. The difficulty of
securely establishing a secret key between two communicating parties when a secure
channel does not already exist between them is also a considerable practical obstacle.
Symmetric ciphers are implemented either as block ciphers, which process fixed-size
blocks of input, or as stream ciphers, which operate on a continuous stream of
characters. The Data Encryption Standard (DES) and the Advanced Encryption Standard
(AES) are block cipher designs[27] with applications in ATM encryption,[28] e-mail
privacy[29] and secure remote access,[30] while FEAL[4][31] is a notable failed design.
The IDEA cipher, used in some versions of Pretty Good Privacy, provides high-speed
encryption of e-mail. Stream ciphers are exemplified by RC4, A5/1, A5/2, Chameleon,
FISH, Helix, ISAAC, MUGI, Panama, Phelix, Pike, SEAL, SOBER, SOBER-128, and WAKE,[4]
found in web browsers, e-mail encryption, smart cards and mobile phones.
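As a concrete illustration of the symmetric pattern, here is a minimal sketch in Python, assuming the third-party 'cryptography' package is installed; Fernet is an authenticated symmetric scheme built on AES.

from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()      # must be shared secretly by both parties
cipher = Fernet(key)
token = cipher.encrypt(b"meet at dawn")   # ciphertext: unintelligible without the key
assert cipher.decrypt(token) == b"meet at dawn"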
6.2.5.1.3.2 Asymmetric Key Cryptography
Asymmetric-key (public-key) cryptography uses two different but mathematically related
keys, a public key and a private key,[34] constructed so that calculating the private
key from the public key is computationally infeasible, even though the two are
necessarily related. Instead, both keys are generated secretly as an interrelated
pair.[35][36]
In public-key cryptosystems, the public key may be freely distributed, while its paired
private key must remain secret. In a public-key encryption system, the public key is
used for encryption, while the private or secret key is used for decryption. Rivest,
Shamir and Adleman found a practical public-key encryption system known as the RSA
algorithm.[37]
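A minimal sketch of the asymmetric pattern, again assuming the Python 'cryptography' package: the freely distributable public key encrypts, and only the holder of the private key can decrypt (here RSA with OAEP padding, one common combination).

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # may be distributed freely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet at dawn", oaep)           # anyone can encrypt
assert private_key.decrypt(ciphertext, oaep) == b"meet at dawn"  # only the key holder decrypts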
6.2.5.1.3.3 Hash Functions
Hash functions take a message of any length and produce a short, fixed-length hash, for
instance for use in a digital signature. For a good hash function, an attacker cannot
find two messages that produce the same hash. MD4, MD5, SHA-0 and SHA-1 have been found
wanting, and concerns about SHA-2 led to the development of SHA-3.[32][33] Cryptographic
hash functions are not invertible: the original input data cannot feasibly be recovered
from the hashed output. They are used to verify the authenticity of data retrieved from
an untrusted source or to add a layer of security.
Message authentication codes (MACs) are like cryptographic hash functions except that a
secret key is used to authenticate the hash value upon receipt;[4] this blocks a class
of attacks that work against bare digest algorithms.
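A short sketch using only the Python standard library illustrates both ideas: a fixed-length one-way digest, and a MAC that binds the digest to a shared secret key.

import hashlib
import hmac

digest = hashlib.sha256(b"the message").hexdigest()  # fixed-length, one-way

key = b"shared-secret-key"
tag = hmac.new(key, b"the message", hashlib.sha256).hexdigest()

# The receiver recomputes the tag with the shared key and compares in
# constant time; without the key, a forged message cannot be authenticated.
expected = hmac.new(key, b"the message", hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)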
6.2.5.1.3.4 Digital Signature
Public-key cryptography can also be used for implementing digital signature schemes.
A digital signature is like an ordinary signature permanently tied to the content of the
message being signed. A digital signature scheme comprises two algorithms: one for
signing, in which a secret key is used to process the message, and one for verification,
in which the matching public key is used with the message to check the validity of the
signature. RSA and DSA are two of the most popular digital signature schemes. Digital
signatures are central to the
operation of public key infrastructures and many network security schemes
(e.g. SSL/TLS, many VPNs, etc.).[31]
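A minimal signing and verification sketch, assuming the Python 'cryptography' package; RSA with PSS padding is one common choice, not the only one.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"signed content"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signer.sign(message, pss, hashes.SHA256())   # uses the secret key
try:
    # Verification uses only the public key; tampering raises InvalidSignature.
    signer.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature rejected")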
6.2.5.1.3.5 Diffie–Hellman key exchange protocol
Diffie and Hellman showed that public-key cryptography was possible with the
Diffie–Hellman key exchange protocol, used in secure communications to let two parties
secretly agree on a shared encryption key.[24] Other public-key systems include the
Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic-curve techniques.
Cryptographers at GCHQ had quietly anticipated several of these academic
developments.[38][39][40]
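A toy sketch of the Diffie–Hellman exchange in plain Python; the small prime is for illustration only, and nothing of this size is secure in practice.

import secrets

# Toy parameters: 2**31 - 1 is prime but absurdly small; real deployments
# use vetted groups of 2048 bits or more.
p, g = 2_147_483_647, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value
A = pow(g, a, p)                   # public values, exchanged in the clear
B = pow(g, b, p)

# Each side combines its own private value with the other's public value
# and arrives at the same shared secret.
assert pow(B, a, p) == pow(A, b, p)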
6.2.5.1.3.6 Mathematical Basis
Public-key algorithms are most often based on the computational complexity of hard
problems, often from number theory: the hardness of RSA is related to the integer
factorization problem, while Diffie–Hellman and DSA are based on the discrete logarithm
problem. More recently, elliptic-curve cryptography has developed around
number-theoretic problems involving elliptic curves. Most public-key algorithms involve
operations such as modular multiplication and exponentiation, which are much more
computationally costly than the techniques used in most block ciphers, so practical
systems are hybrid cryptosystems: a fast, high-quality symmetric-key algorithm encrypts
the message itself, and the symmetric key is encrypted with a public-key algorithm and
sent along with the message. Similarly, hybrid signature schemes are often used, in
which a cryptographic hash function is computed, and only the resulting hash is
digitally signed.[4]
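A sketch of the hybrid pattern, assuming the Python 'cryptography' package: the bulk message is encrypted with a fast symmetric scheme (Fernet) and only the short session key is encrypted with RSA.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: fast symmetric cipher for the bulk message, RSA only for the key.
session_key = Fernet.generate_key()
body = Fernet(session_key).encrypt(b"a long message body " * 1000)
wrapped_key = recipient.public_key().encrypt(session_key, oaep)

# Receiver: unwrap the session key, then decrypt the bulk ciphertext.
key = recipient.decrypt(wrapped_key, oaep)
assert Fernet(key).decrypt(body).startswith(b"a long message body")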
6.2.5.1.3.7 Cryptanalysis
Cryptanalysis looks for weaknesses or insecurities in a cryptographic scheme that permit
its subversion or evasion; not every scheme can be broken.[11][41] A brute-force attack
is always possible in principle, but the computational effort it requires grows
exponentially with the key size relative to the effort needed to use the cipher; if no
method substantially better than brute force can be found, the cipher is considered
effectively secure.[4][42][43]
Public-key algorithms rest on the computational difficulty of problems such as integer
factorization and the discrete logarithm problem, including its elliptic-curve variant.
Cryptanalysis exploits weaknesses in the algorithms themselves or, in side-channel
attacks, weaknesses in their actual use in real devices: if an attacker can measure the
time a device takes to perform an operation, or the pattern and length of messages, the
cipher may be broken through timing or traffic analysis.[44] The systematic analysis of
cipher systems has led to the study of cryptographic primitives and operations such as
pseudorandom functions and one-way functions.[45][46][47][48][49]
6.2.5.1.3.8 Legal issues
6.2.5.1.3.8.1 Prohibitions
Cryptography has long been of interest to intelligence-gathering and law-enforcement
agencies.[8] Because secret communications can be treated as criminal or even
treasonous, its use has been contested by civil-rights supporters.[6][50][8] Export of
cryptography and cryptanalysis technology is often restricted by governments on
national-security grounds,[51] although with the advent of the Internet such
restrictions have become largely ineffective.
6.2.5.1.3.8.2 Export controls
Various legal challenges have been brought against governments to remove export
controls,[52][53][54] while international agreements have tended to preserve
restrictions.[56] Much Internet software originates in the US and incorporates
cryptography such as Transport Layer Security, giving US export rules wide reach.
6.2.5.1.3.8.3 NSA involvement
The NSA has been involved in cipher development and policy,[8] both through design
consultancy and through standards bodies.[57][58][59][53]
6.2.5.1.3.8.4 Digital rights management
Digital rights management uses cryptography for copyright control, producing a running
conflict between developers, internet users and authorities.[60][61][62][9]
6.2.5.1.3.8.5 Forced disclosure of encryption keys
In several jurisdictions police can compel suspects to decrypt files or hand over
passwords and encryption keys.[7][63][64][65][66][67]
6.2.5.2 References
1. Liddell, Henry George; Scott, Robert; Jones, Henry Stuart; McKenzie, Roderick
(1984). A Greek-English Lexicon. Oxford University Press.
2. Rivest, Ronald L. (1990). "Cryptography". In J. Van Leeuwen. Handbook of
Theoretical Computer Science. 1. Elsevier.
3. Bellare, Mihir; Rogaway, Phillip (21 September 2005). "Introduction". Introduction
to Modern Cryptography. p. 10.
4. Menezes, A. J.; van Oorschot, P. C.; Vanstone, S. A. Handbook of Applied
Cryptography. ISBN 0-8493-8523-7. Archived from the original on 7 March 2005.
5. Biggs, Norman (2008). Codes: An introduction to Information Communication and
Cryptography. Springer. p. 171.
6. "Overview per country". Crypto Law Survey. February 2013. Retrieved 26
March 2015.
7. "UK Data Encryption Disclosure Law Takes Effect". PC World. 1 October 2007.
Retrieved 26 March 2015.
8. Ranger, Steve (24 March 2015). "The undercover war on your internet secrets:
How online surveillance cracked our trust in the web". TechRepublic. Archived from the
original on 2016-06-12. Retrieved 2016-06-12.
9. Doctorow, Cory (2 May 2007). "Digg users revolt over AACS key". Boing Boing.
Retrieved 26 March 2015.
10. Rosenheim 1997, p. 20
11. Kahn, David (1967). The Codebreakers. ISBN 0-684-83130-9.
12. "An Introduction to Modern Cryptosystems".
13. Sharbaf, M.S. (2011-11-01). "Quantum cryptography: An emerging technology in
network security". 2011 IEEE International Conference on Technologies for Homeland
Security (HST): 13–19. doi:10.1109/THS.2011.6107841.
14. Oded Goldreich, Foundations of Cryptography, Volume 1: Basic Tools, Cambridge
University Press, 2001, ISBN 0-521-79172-3
15. "Cryptology (definition)". Merriam-Webster's Collegiate Dictionary (11th
ed.). Merriam-Webster. Retrieved 26 March 2015.
16. "RFC 2828 – Internet Security Glossary". Internet Engineering Task Force. May
2000. Retrieved 26 March 2015.
17. Yashchenko, V. V. (2002). Cryptography: an introduction. AMS Bookstore. p. 6. ISBN 0-8218-2986-6.
18. http://www.iranicaonline.org/articles/codes-romuz-sg
19. Singh, Simon (2000). The Code Book. New York: Anchor Books. pp. 14–
20. ISBN 9780385495325.
20. Al-Kadi, Ibrahim A. (April 1992). "The origins of cryptology: The Arab
contributions". Cryptologia. 16 (2): 97–126. doi:10.1080/0161-119291866801.
21. Schrödel, Tobias (October 2008). "Breaking Short Vigenère
Ciphers". Cryptologia. 32 (4): 334–337. doi:10.1080/01611190802336097.
22. Hakim, Joy (1995). A History of US: War, Peace and all that Jazz. New
York: Oxford University Press. ISBN 0-19-509514-6.
23. Gannon, James (2001). Stealing Secrets, Telling Lies: How Spies and
Codebreakers Helped Shape the Twentieth Century. Washington, D.C.:
Brassey's. ISBN 1-57488-367-4.
24. Diffie, Whitfield; Hellman, Martin (November 1976). "New Directions in
Cryptography" (PDF). IEEE Transactions on Information Theory. IT-22: 644–
654. doi:10.1109/tit.1976.1055638.
25. Cryptography: Theory and Practice, Third Edition (Discrete Mathematics and Its
Applications), 2005, by Douglas R. Stinson, Chapman and Hall/CRC
26. Blaze, Matt; Diffie, Whitefield; Rivest, Ronald L.; Schneier, Bruce; Shimomura,
Tsutomu; Thompson, Eric; Wiener, Michael (January 1996). "Minimal key lengths for
symmetric ciphers to provide adequate commercial security". Fortify. Retrieved 26
March 2015.
27. "FIPS PUB 197: The official Advanced Encryption Standard"(PDF). Computer
Security Resource Center. National Institute of Standards and Technology.
Retrieved 26 March 2015.
28. "NCUA letter to credit unions" (PDF). National Credit Union Administration. July
2004. Retrieved 26 March 2015.
29. "RFC 2440 - Open PGP Message Format". Internet Engineering Task Force.
November 1998. Retrieved 26 March 2015.
30. Golen, Pawel (19 July 2002). "SSH". WindowSecurity. Retrieved 26 March 2015.
31. Schneier, Bruce (1996). Applied Cryptography (2nd ed.). Wiley. ISBN 0-471-11709-
9.
32. "Notices". Federal Register. 72 (212). 2 November 2007.
Archived 28 February 2008 at the Wayback Machine.
33. "NIST Selects Winner of Secure Hash Algorithm (SHA-3) Competition". Tech
Beat. National Institute of Standards and Technology. October 2, 2012. Retrieved 26
March 2015.
34. Diffie, Whitfield; Hellman, Martin (8 June 1976). "Multi-user cryptographic
techniques". AFIPS Proceedings. 45: 109–112.
35. Ralph Merkle was working on similar ideas at the time and encountered
publication delays, and Hellman has suggested that the term used should be Diffie–
Hellman–Merkle asymmetric key cryptography.
36. Kahn, David (Fall 1979). "Cryptology Goes Public". Foreign Affairs. 58 (1):
153. doi:10.2307/20040343.
37. Rivest, Ronald L.; Shamir, A.; Adleman, L. (1978). "A Method for Obtaining Digital
Signatures and Public-Key Cryptosystems". Communications of the ACM. Association
for Computing Machinery. 21 (2): 120–126. doi:10.1145/359340.359342.
Archived 16 November 2001 at the Wayback Machine.
Previously released as an MIT "Technical Memo" in April 1977, and published in Martin
Gardner's Scientific American Mathematical recreations column
38. Wayner, Peter (24 December 1997). "British Document Outlines Early Encryption
Discovery". New York Times. Retrieved 26 March 2015.
39. Cocks, Clifford (20 November 1973). "A Note on 'Non-Secret
Encryption'" (PDF). CESG Research Report.
40. Singh, Simon (1999). The Code Book. Doubleday. pp. 279–292.
41. Shannon, Claude; Weaver, Warren (1963). The Mathematical Theory of
Communication. University of Illinois Press. ISBN 0-252-72548-4.
42. "An Example of a Man-in-the-middle Attack Against Server Authenticated SSL-
sessions" (PDF).
43. Junod, Pascal (2001). "On the Complexity of Matsui's Attack"(PDF). Selected
Areas in Cryptography.
44. Song, Dawn; Wagner, David A.; Tian, Xuqing (2001). "Timing Analysis of
Keystrokes and Timing Attacks on SSH" (PDF). Tenth USENIX Security Symposium.
45. Brands, S. (1994). "Untraceable Off-line Cash in Wallets with
Observers". Advances in Cryptology—Proceedings of CRYPTO. Springer-Verlag.
Archived from the original on 26 July 2011.
46. Babai, László (1985). "Trading group theory for randomness". Proceedings of the
Seventeenth Annual Symposium on the Theory of Computing. Association for
Computing Machinery.
47. Goldwasser, S.; Micali, S.; Rackoff, C. (1989). "The Knowledge Complexity of
Interactive Proof Systems". SIAM Journal on Computing. 18 (1): 186–
208. doi:10.1137/0218012.
48. Blakley, G. (June 1979). "Safeguarding cryptographic keys". Proceedings of
AFIPS 1979. 48: 313–317.
49. Shamir, A. (1979). "How to share a secret". Communications of the
ACM. Association for Computing Machinery. 22: 612–613. doi:10.1145/359168.359176.
50. "6.5.1 WHAT ARE THE CRYPTOGRAPHIC POLICIES OF SOME COUNTRIES?". RSA
Laboratories. Retrieved 26 March 2015.
51. Rosenoer, Jonathan (1995). "CRYPTOGRAPHY & SPEECH". CyberLaw. 
Archived 1 December 2005 at the Wayback Machine.
52. "Case Closed on Zimmermann PGP Investigation". IEEE Computer Society's
Technical Committee on Security and Privacy. 14 February 1996. Retrieved 26
March 2015.
53. Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government—Saving
Privacy in the Digital Age. Penguin Books. p. 56. ISBN 0-14-024432-8. OCLC 244148644.
54. "Bernstein v USDOJ". Electronic Privacy Information Center. United States Court
of Appeals for the Ninth Circuit. 6 May 1999. Retrieved 26 March 2015.
55. "DUAL-USE LIST - CATEGORY 5 – PART 2 – "INFORMATION
SECURITY"" (DOC). Wassenaar Arrangement. Retrieved 26 March 2015.[permanent
dead link]
56. "6.4 UNITED STATES CRYPTOGRAPHY EXPORT/IMPORT LAWS". RSA
Laboratories. Retrieved 26 March 2015.
57. Schneier, Bruce (15 June 2000). "The Data Encryption Standard (DES)". Crypto-
Gram. Retrieved 26 March 2015.
58. Coppersmith, D. (May 1994). "The Data Encryption Standard (DES) and its
strength against attacks" (PDF). IBM Journal of Research and Development. 38 (3): 243–
250. doi:10.1147/rd.383.0243. Retrieved 26 March 2015.
59. Biham, E.; Shamir, A. (1991). "Differential cryptanalysis of DES-like
cryptosystems" (PDF). Journal of Cryptology. Springer-Verlag. 4 (1): 3–
72. doi:10.1007/bf00630563. Retrieved 26 March 2015.
60. "The Digital Millennium Copyright Act of 1998" (PDF). United States Copyright
Office. Retrieved 26 March 2015.
61. Ferguson, Niels (15 August 2001). "Censorship in action: why I don't publish my
HDCP results". 
Archived 1 December 2001 at the Wayback Machine.
62. Schneier, Bruce (2001-08-06). "Arrest of Computer Researcher Is Arrest of First
Amendment Rights". InternetWeek. Retrieved 2017-03-07.
63. Williams, Christopher (11 August 2009). "Two convicted for refusal to decrypt
data". The Register. Retrieved 26 March 2015.
64. Williams, Christopher (24 November 2009). "UK jails schizophrenic for refusal to
decrypt files". The Register. Retrieved 26 March 2015.
65. Ingold, John (January 4, 2012). "Password case reframes Fifth Amendment rights
in context of digital world". The Denver Post. Retrieved 26 March 2015.
66. Leyden, John (13 July 2011). "US court test for rights not to hand over crypto
keys". The Register. Retrieved 26 March 2015.
67. "ORDER GRANTING APPLICATION UNDER THE ALL WRITS ACT REQUIRING
DEFENDANT FRICOSU TO ASSIST IN THE EXECUTION OF PREVIOUSLY ISSUED
SEARCH WARRANTS"(PDF). United States District Court for the District of Colorado.
Retrieved 26 March 2015.
68. Becket, B (1988). Introduction to Cryptology. Blackwell Scientific
Publications. ISBN 0-632-01836-4. OCLC 16832704. Excellent coverage of many
classical ciphers and cryptography concepts and of the "modern" DES and RSA
systems.
69. Cryptography and Mathematics by Bernhard Esslinger, 200 pages, part of the
free open-source package CrypTool, PDF download at the Wayback Machine (archived
22 July 2011). CrypTool is the most widespread e-learning program about cryptography
and cryptanalysis, open source.
70. In Code: A Mathematical Journey by Sarah Flannery (with David Flannery).
Popular account of Sarah's award-winning project on public-key cryptography, co-
written with her father.
71. James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers
Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001, ISBN 1-57488-
367-4.
72. Oded Goldreich, Foundations of Cryptography, in two volumes, Cambridge
University Press, 2001 and 2004.
73. Introduction to Modern Cryptography by Jonathan Katz and Yehuda Lindell.
74. Alvin's Secret Code by Clifford B. Hicks (children's novel that introduces some
basic cryptography and cryptanalysis).
75. Ibrahim A. Al-Kadi, "The Origins of Cryptology: the Arab Contributions,"
Cryptologia, vol. 16, no. 2 (April 1992), pp. 97–126.
76. Christof Paar, Jan Pelzl, Understanding Cryptography, A Textbook for Students
and Practitioners. Springer, 2009. (Slides, online cryptography lectures and other
information are available on the companion web site.) Very accessible introduction to
practical cryptography for non-mathematicians.
77. Introduction to Modern Cryptography by Phillip Rogaway and Mihir Bellare, a
mathematical introduction to theoretical cryptography including reduction-based
security proofs. PDF download.
78. Johann-Christoph Woltag, 'Coded Communications (Encryption)' in Rüdiger
Wolfrum (ed) Max Planck Encyclopedia of Public International Law (Oxford University
Press 2009).
79. "Max Planck Encyclopedia of Public International Law"., giving an overview of
international law issues regarding cryptography.
80. Jonathan Arbib & John Dwyer, Discrete Mathematics for Cryptography, 1st
Edition ISBN 978-1-907934-01-8.
81. Stallings, William (March 2013). Cryptography and Network Security: Principles
and Practice (6th ed.). Prentice Hall. ISBN 978-0133354690
82. Wikibooks has more on the topic of: Cryptography
83. At Wikiversity, you can learn more and teach others about Cryptography at
the Department of Cryptography
84. Wikisource has the text of the 1911 Encyclopædia
Britannica article Cryptography.
85. The dictionary definition of cryptography at Wiktionary
86. Media related to Cryptography at Wikimedia Commons
87. Cryptography on In Our Time at the BBC.
88. Crypto Glossary and Dictionary of Technical Cryptography
89. NSA's CryptoKids.
90. Overview and Applications of Cryptology by the CrypTool Team; PDF; 3.8 MB.
July 2008
91. A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in
the form of lecture notes.
92. For more on the use of cryptographic elements in fiction, see: Dooley, John F.,
William and Marilyn Ingersoll Professor of Computer Science, Knox College (23 August
2012). "Cryptology in Fiction".
93. The George Fabyan Collection at the Library of Congress has early editions of
works of seventeenth-century English literature, publications relating to cryptography.
6.2.6 Diffie–Hellman key exchange
6.2.6.1 Commentary
6.2.6.1.1 Introduction
Diffie–Hellman key exchange enables cryptographic keys to be agreed securely over a
public channel.[1][2] It overcomes the need to first exchange keys over a secure
physical channel by allowing two parties with no prior knowledge of each other to
jointly establish a shared secret key over an insecure channel; that key can then be
used for encrypted communications with a symmetric-key cipher, although weakly
chosen groups or keys undermine the exchange in practice.[3][4][5][6][7]
6.2.6.1.2 Description
Both nodes agree on a finite cyclic group G of order n and a generating element g in G.
The group G is written multiplicatively. The first node generates a random key a, where
1 ≤ a < n, and sends g^a to the second node, which picks another random key b, where
1 ≤ b < n, and sends g^b to the first node. The first node computes (g^b)^a whilst the
second node computes (g^a)^b. Both now hold the group element g^(ab), which can
serve as the shared secret key. The group G satisfies the requisite condition for secure
communication if there is no efficient algorithm for determining g^(ab) given g, g^a,
and g^b.[8][9][10][11]
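To make the arithmetic concrete, here is a minimal Python sketch of the exchange just
described, using illustrative toy parameters (p = 23, g = 5) far too small for real use:

import random

p, g = 23, 5                        # toy group parameters, illustration only
a = random.randrange(1, p - 1)      # first node's private key
b = random.randrange(1, p - 1)      # second node's private key
A = pow(g, a, p)                    # g^a, sent openly to the second node
B = pow(g, b, p)                    # g^b, sent openly to the first node
assert pow(B, a, p) == pow(A, b, p) # both sides now hold g^(ab) mod p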
Variations of the technique are elliptic curve Diffie–Hellman protocol based on elliptic
curves or hyperelliptic and supersingular isogeny key exchange.
The operation can be extended to more parties: all agree on n and g, and each node
generates its own private key. Each node applies its private exponent to the value it
receives and passes the result to the next node, and the process repeats until the
value has travelled the whole circuit. The key used for the connection is the last one
calculated in the process. An eavesdropper sees only partially exponentiated values,
never the fully processed secret. The scheme rests on two basic principles:
• Start with the "empty" key consisting only of g and raise the current value to every
participant's private exponent exactly once.
• Any intermediate value (having up to N-1 exponents applied, where N is the number
of participants in the group) may be revealed publicly, but the final value (having had
all N exponents applied) is the shared secret and is kept private. Thus, each participant
must obtain their copy of the secret by applying their own private key last.
These principles leave open various options for choosing the order in which
participants contribute to keys. The simplest and most obvious solution is to arrange
the N participants in a circle and have N keys rotate around the circle, until eventually
every key has been contributed to by all N participants (ending with its owner) and
each participant has contributed to N keys (ending with their own). However, this
requires that every participant perform N modular exponentiations. The process can be
improved by partitioning the set of nodes.
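The circle arrangement can be sketched as follows (a toy Python illustration; the prime
2087 and the choice of four nodes are arbitrary assumptions for the example):

import random

p, g = 2087, 5                                   # toy parameters only
private = [random.randrange(2, p - 1) for _ in range(4)]

def final_key(i):
    # The value arriving at node i has been exponentiated by every other
    # node in ring order; node i applies its own private exponent last.
    value = g
    for j in range(i + 1, i + len(private)):
        value = pow(value, private[j % len(private)], p)
    return pow(value, private[i], p)

keys = [final_key(i) for i in range(len(private))]
assert len(set(keys)) == 1                       # everyone shares g^(abcd)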
6.2.6.1.3 Security Application
The protocol is secure against eavesdroppers if G and g are chosen well; in particular,
the order of the group G should be large, especially when the same group is used for
large amounts of traffic. The eavesdropper must solve the Diffie–Hellman problem to
find the key, which is believed to be difficult for large groups; an efficient algorithm
for the discrete logarithm problem would break it, as it would many other public-key
cryptosystems, and fields of small characteristic are already vulnerable.[12] Security
requires the group order to have a large prime factor, to prevent use of the Pohlig–
Hellman algorithm to recover the chosen keys; safe primes p with p = 2q + 1, where q
is also prime, are often used for this reason.[13]
Attacks on Internet traffic that reuses common groups can amortise one large
precomputation across many connections, after which individual keys fall quickly.
[14][8][3]
6.2.6.1.4 Other uses
Diffie–Hellman key exchange has been used in public-key encryption schemes such as
the integrated encryption scheme. Its fast key generation also makes it well suited to
deriving fresh session keys and to password-authenticated key agreement. The basic
protocol is vulnerable to man-in-the-middle attacks, which are overcome by
authenticating the exchanged values.
6.2.6.2 References
1. Merkle, Ralph C (April 1978). "Secure Communications Over Insecure Channels".
Communications of the ACM. 21 (4): 294–299. doi:10.1145/359460.359473. Received
August, 1975; revised September 1977.
2. Diffie, W.; Hellman, M. (1976). "New directions in cryptography" (PDF). IEEE
Transactions on Information Theory. 22 (6): 644–654. doi:10.1109/TIT.1976.1055638.
3. Adrian, David; Bhargavan, Karthikeyan; Durumeric, Zakir; Gaudry, Pierrick;
Green, Matthew; Halderman, J. Alex; Heninger, Nadia; Springall, Drew; Thomé,
Emmanuel; Valenta, Luke; VanderSloot, Benjamin; Wustrow, Eric; Zanella-Béguelin,
Santiago; Zimmermann, Paul (October 2015). "Imperfect Forward Secrecy: How Diffie-
Hellman Fails in Practice" (PDF).
4. Ellis, J. H. (January 1970). "The possibility of Non-Secret digital
encryption" (PDF). CESG Research Report. Archived from the original (PDF) on 2014-10-
30. Retrieved 2015-08-28.
5. https://www.gchq.gov.uk/sites/default/files/document_files/CESG_Research_Repor
t_No_3006_0.pdf
6. "GCHQ trio recognised for key to secure shopping online". BBC News. 5 October
2010. Retrieved 5 August 2014.
7. Hellman, Martin E. (May 2002), "An overview of public key cryptography" (PDF),
IEEE Communications Magazine, 40 (5): 42–49, doi:10.1109/MCOM.2002.1006971.
8. "Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice" (PDF).
Retrieved 30 October 2015.
9. Garzia, F. (2013), Handbook of Communications Security, WIT Press,
p. 182, ISBN 1845647688.
10. Buchanan, Bill, "Diffie-Hellman Example in ASP.NET", Bill's Security Tips,
retrieved 2015-08-27.
11. Buchmann, Johannes A. (2013), Introduction to Cryptography (2nd ed.), Springer
Science & Business Media, pp. 190–191, ISBN 1441990038.
12. Barbulescu, Razvan; Gaudry, Pierrick; Joux, Antoine; Thomé, Emmanuel (2014).
"A Heuristic Quasi-Polynomial Algorithm for Discrete Logarithm in Finite Fields of Small
Characteristic". Advances in Cryptology – EUROCRYPT 2014. Proceedings 33rd Annual
International Conference on the Theory and Applications of Cryptographic Techniques.
Lecture Notes in Computer Science. 8441. Copenhagen, Denmark. pp. 1–16.
doi:10.1007/978-3-642-55220-5_1. ISBN 978-3-642-55220-5.
13. C. Kaufman (Microsoft) (December 2005). "RFC 4306 Internet Key Exchange
(IKEv2) Protocol". Internet Engineering Task Force (IETF).
14. Whitfield Diffie, Paul C. Van Oorschot, and Michael J. Wiener "Authentication and
Authenticated Key Exchanges", in Designs, Codes and Cryptography, 2, 107-125 (1992),
Section 5.2, available as Appendix B to U.S. Patent 5,724,425.
15. Gollman, Dieter (2011). Computer Security (2nd ed.). West Sussex, England: John
Wiley & Sons, Ltd. ISBN 0470741155.
16. Williamson, Malcolm J. (January 21, 1974). Non-secret encryption using a finite
field (PDF) (Technical report). Communications Electronics Security Group. Retrieved
2017-03-22.
17. Williamson, Malcolm J. (August 10, 1976). Thoughts on Cheaper Non-Secret
Encryption (PDF) (Technical report). Communications Electronics Security Group.
Retrieved 2015-08-25.
18. The History of Non-Secret Encryption, JH Ellis, 1987 (28K PDF file) (HTML version).
19. The First Ten Years of Public-Key Cryptography, Whitfield Diffie, Proceedings of
the IEEE, vol. 76, no. 5, May 1988, pp. 560–577 (1.9MB PDF file).
20. Menezes, Alfred; van Oorschot, Paul; Vanstone, Scott (1997). Handbook of
Applied Cryptography. Boca Raton, Florida: CRC Press. ISBN 0-8493-8523-7. (Available
online.)
21. Singh, Simon (1999). The Code Book: the evolution of secrecy from Mary Queen of
Scots to quantum cryptography. New York: Doubleday. ISBN 0-385-49531-5.
22. An Overview of Public Key Cryptography, Martin E. Hellman, IEEE
Communications Magazine, May 2002, pp. 42–49. (123kB PDF file.)
23. Oral history interview with Martin Hellman, Charles Babbage Institute, University
of Minnesota. Leading cryptography scholar Martin Hellman discusses the
circumstances and fundamental insights of his invention of public key
cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University
in the mid-1970s.
24. RFC 2631 – Diffie–Hellman Key Agreement Method. E. Rescorla. June 1999.
25. RFC 3526 – More Modular Exponential (MODP) Diffie-Hellman groups for Internet
Key Exchange (IKE). T. Kivinen, M. Kojo, SSH Communications Security. May 2003.
26. Summary of ANSI X9.42: Agreement of Symmetric Keys Using Discrete
Logarithm Cryptography (64K PDF file) (Description of ANSI 9 Standards).
27. Talk by Martin Hellman in 2007, YouTube video.
28. Crypto dream team Diffie & Hellman wins $1M 2015 Turing Award (a.k.a. "Nobel
Prize of Computing").
6.2.7 Public-key cryptography
Algorithms based on integer factorization: Benaloh, Blum–Goldwasser, Cayley–Purser,
Damgård–Jurik, GMR, Goldwasser–Micali, Paillier, Rabin, RSA, Okamoto–Uchiyama,
Schmidt–Samoa.
Algorithms based on the discrete logarithm: Cramer–Shoup, DH, DSA, ECDH, ECDSA,
EdDSA, EKE, ElGamal, ElGamal signature scheme, MQV, Schnorr, SPEKE, SRP, STS.
Other algorithms: AE, CEILIDH, EPOC, HFE, IES, Lamport, McEliece, Merkle–Hellman,
Naccache–Stern, Naccache–Stern knapsack cryptosystem, NTRUEncrypt, NTRUSign,
Three-pass protocol, XTR.
Theory: Discrete logarithm, Elliptic-curve cryptography, Non-commutative
cryptography, RSA problem, Trapdoor function.
Standardization: CRYPTREC, IEEE P1363, NESSIE, NSA Suite B, Post-Quantum
Cryptography Standardization.
Topics: Digital signature, OAEP, Fingerprint, PKI, Web of trust, Key size, Post-quantum
cryptography.
6.2.8 Cryptanalysis
6.2.8.1 Commentary
Cryptanalysis analyses information systems in order to discover their hidden
aspects[1] and is applied to break ciphers and gain access to the contents of encrypted
messages. Beyond the mathematical study of cryptographic algorithms (including tools
such as integer factorization), it also covers side-channel attacks, which do not target
weaknesses in the algorithms themselves but instead exploit weaknesses in their
implementation.
The problem can be stated as: given some encrypted ciphertext, acquire maximum
information about the original unencrypted plaintext by discerning the encipherment
process and then finding the key for the encrypted messages.
Attacks are classified according to the type of information available to the attacker. It
is normally assumed that the general algorithm is known (Shannon's maxim[2],
Kerckhoffs' principle[3])[4], viz.
• Ciphertext-only
• Known-plaintext
• Chosen-plaintext (chosen-ciphertext)
• Adaptive chosen-plaintext
• Adaptive chosen ciphertext attack.
• Related-key attack
Attacks can also be defined by the computational resources required e.g.
• Time
• Memory
• Data
Such estimates are often rough and the attacks may never be tested in practice,[5] but
demonstrating any weakness is enough to call a cipher into question.[6] The results of
cryptanalysis are also graded by how useful the recovered information is, e.g.
• Total break
• Global deduction
• Instance (local) deduction
• Information deduction
• Distinguishing algorithm
Academic attacks are often mounted against weakened versions of cryptosystems,
since attacks normally grow exponentially harder with each added round,[7] so the full
cryptosystem can remain strong, by virtue of its resource requirements, even though
reduced variants are weak. Nevertheless, some partial breaks have eventually led to
full breaks, as with DES, MD5, and SHA-1.[6]
The history of cryptanalysis is described in [8][9][10][11][12][13][14][15][16][17][18]
[19][20][21][22][23][24].
Multiple messages encrypted with the same key are insecure and are said to be "in
depth."[25] Depth is detected when messages carry the same indicator, by which the
sending operator informs the receiving operator of the key-generator initial
settings.[26]
Cryptanalysts benefit from lining up identical enciphering operations among a set of
messages. The individual plaintexts can then be worked out linguistically by
trying probable words (or phrases), known as "cribs," at various locations; a correct
guess, when combined with the merged plaintext stream, produces intelligible text
from the other plaintext component. Any recovered fragment of plaintext can often be
extended, and the extra characters can be combined with the merged plaintext stream
to extend the plaintexts back and forth, using intelligibility as the criterion for checking
guesses, until much or all of the original plaintexts are recovered. When a recovered
plaintext is then combined with its ciphertext, the key is revealed. Knowledge of a key
exposes the other messages encrypted with the same key, and knowledge of a set of
related keys may allow the system used for constructing them to be diagnosed.
[24][27][28]
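The additive case is easy to demonstrate: when two messages are enciphered with the
same keystream, XORing the ciphertexts cancels the key, which is what makes crib-
dragging possible. A small Python illustration (the plaintexts and keystream are
invented for the example):

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
keystream = bytes([0x3C, 0xA7, 0x19, 0xE2] * 4)      # illustrative only

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
c1, c2 = xor(p1, keystream), xor(p2, keystream)
assert xor(c1, c2) == xor(p1, p2)                    # the keystream cancels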
Symmetric ciphers use differential cryptanalysis, impossible differential cryptanalysis,
improbable differential cryptanalysis, integral cryptanalysis, linear cryptanalysis and
mod-n cryptanalysis. Attacks come from boomerang attack, brute-force attack, Davies'
attack, meet-in-the-middle attack, related-key attack, sandwich attack, slide attack
and XSL attack.
Asymmetric cryptography relies on two mathematically related keys, one private and
one public, and bases its security on hard mathematical problems, so an attack
requires solving the underlying problem. Two-key cryptography thus raises different
mathematical questions from single-key cryptography and draws on wider
mathematical research.
Advances in computing technology make cryptanalysis easier by raising raw speed,
and factoring techniques have developed through mathematical insight and creativity.
Asymmetric schemes additionally give the attacker whatever can be learned from the
public key.[29]
Attacks on cryptographic hash systems include the birthday attack and are analysed
with the hash function security summary and rainbow tables.
Side-channel attacks are analysed alongside black-bag cryptanalysis, power analysis,
rubber-hose cryptanalysis and timing analysis. Related attacks include the man-in-the-
middle attack and the replay attack.
Quantum computers, which can cut the computational complexity of key problems,
have potential use in cryptanalysis.[30][31]
6.2.8.2 References
1. "Cryptanalysis/Signals Analysis". Nsa.gov. 2009-01-15. Retrieved 2013-04-15.
2. Shannon, Claude (4 October 1949). "Communication Theory of Secrecy
Systems". Bell System Technical Journal. 28: 662. Retrieved 20 June2014.
3. Kahn, David (1996), The Codebreakers: the story of secret writing (second ed.),
Scribners, p. 235
4. Schmeh, Klaus (2003). Cryptography and public key infrastructure on the
Internet. John Wiley & Sons. p. 45. ISBN 978-0-470-84745-9.
5. McDonald, Cameron; Hawkes, Philip; Pieprzyk, Josef, SHA-1 collisions now
2^52 (PDF), retrieved 4 April 2012
6. Schneier 2000
7. For an example of an attack that cannot be prevented by additional rounds,
see slide attack.
8. Smith 2000, p. 4
9. "Breaking codes: An impossible task?". BBC News. June 21, 2004.
10. History of Islamic philosophy: With View of Greek Philosophy and Early history of
Islam P.199
11. The Biographical Encyclopedia of Islamic Philosophy P.279
12. Crypto History Archived August 28, 2008, at the Wayback Machine.
13. Singh 1999, p. 17
14. Singh 1999, pp. 45–51
15. Singh 1999, pp. 63–78
16. Singh 1999, p. 116
17. Winterbotham 2000, p. 229.
18. Hinsley 1993.
19. Copeland 2006, p. 1
20. Singh 1999, p. 244
21. Churchhouse 2002, pp. 33, 34
22. Budiansky 2000, pp. 97–99
23. Calvocoressi 2001, p. 66
24. Tutte 1998
25. Churchhouse 2002, p. 34
26. Churchhouse 2002, pp. 33, 86
27. David Kahn Remarks on the 50th Anniversary of the National Security Agency,
November 1, 2002.
28. Tim Greene, Network World, Former NSA tech chief: I don't trust the
cloud Archived 2010-03-08 at the Wayback Machine.. Retrieved March 14, 2010.
29. Stallings, William (2010). Cryptography and Network Security: Principles and
Practice. Prentice Hall. ISBN 0136097049.
30. "Shor's Algorithm – Breaking RSA Encryption". AMS Grad Blog. 2014-04-30.
Retrieved 2017-01-17.
31. Daniel J. Bernstein (2010-03-03). "Grover vs. McEliece" (PDF).
32. Ibrahim A. Al-Kadi,"The origins of cryptology: The Arab
contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
33. Friedrich L. Bauer: "Decrypted Secrets". Springer 2002. ISBN 3-540-42674-4
34. Budiansky, Stephen (10 October 2000), Battle of wits: The Complete Story of
Codebreaking in World War II, Free Press, ISBN 978-0-684-85932-3
35. Burke, Colin B. (2002). "It Wasn't All Magic: The Early Struggle to Automate
Cryptanalysis, 1930s-1960s". Fort Meade: Center for Cryptologic History, National
Security Agency.
36. Calvocoressi, Peter (2001) [1980], Top Secret Ultra, Cleobury Mortimer,
Shropshire: M & M Baldwin, ISBN 0-947712-41-0
37. Churchhouse, Robert (2002), Codes and Ciphers: Julius Caesar, the Enigma and
the Internet, Cambridge: Cambridge University Press, ISBN 978-0-521-00890-7
38. Copeland, B. Jack, ed. (2006), Colossus: The Secrets of Bletchley Park's
Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4
39. Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. ISBN 0-486-20097-3
40. David Kahn, "The Codebreakers - The Story of Secret Writing", 1967. ISBN 0-684-
83130-9
41. Lars R. Knudsen: Contemporary Block Ciphers. Lectures on Data Security 1998:
105-126
42. Schneier, Bruce (January 2000). "A Self-Study Course in Block-Cipher
Cryptanalysis". Cryptologia. 24 (1): 18–34. doi:10.1080/0161-110091888754
43. Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach,
Mathematical Association of America, 1966. ISBN 0-88385-622-0
44. Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code
Breaking, ISBN 978-0-470-13593-8
45. Friedman, William F., Military Cryptanalysis, Part I, ISBN 0-89412-044-1
46. Friedman, William F., Military Cryptanalysis, Part II, ISBN 0-89412-064-6
47. Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of
Aperiodic Substitution Systems, ISBN 0-89412-196-0
48. Friedman, William F., Military Cryptanalysis, Part IV, Transposition and
Fractionating Systems, ISBN 0-89412-198-7
49. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I,
Volume 1, ISBN 0-89412-073-5
50. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I,
Volume 2, ISBN 0-89412-074-3
51. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II,
Volume 1, ISBN 0-89412-075-1
52. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II,
Volume 2, ISBN 0-89412-076-X
53. Hinsley, F.H. (1993), Introduction: The influence of Ultra in the Second World
War in Hinsley & Stripp 1993, pp. 1–13
54. Singh, Simon (1999), The Code Book: The Science of Secrecy from Ancient Egypt
to Quantum Cryptography, London: Fourth Estate, pp. 143–189, ISBN 1-85702-879-1
55. Smith, Michael (2000), The Emperor's Codes: Bletchley Park and the breaking of
Japan's secret ciphers, London: Random House, ISBN 0-593-04641-2
56. Tutte, W. T. (19 June 1998), Fish and I (PDF), archived from the original (PDF) on
10 July 2007, retrieved 7 October 2010. Transcript of a lecture given by Prof. Tutte at
the University of Waterloo.
57. Winterbotham, F.W. (2000) [1974], The Ultra secret: the inside story of Operation
Ultra, Bletchley Park and Enigma, London: Orion Books Ltd, ISBN 978-0-7528-3751-
2, OCLC 222735270
58. Bard, Gregory V. (2009). Algebraic Cryptanalysis. Springer. ISBN 978-1-4419-
1019-6.
59. Hinek, M. Jason (2009). Cryptanalysis of RSA and Its Variants. CRC
Press. ISBN 978-1-4200-7518-2.
60. Joux, Antoine (2009). Algorithmic Cryptanalysis. CRC Press. ISBN 978-1-4200-
7002-6.
61. Junod, Pascal; Canteaut, Anne (2011). Advanced Linear Cryptanalysis of Block
and Stream Ciphers. IOS Press. ISBN 978-1-60750-844-1.
62. Stamp, Mark & Low, Richard (2007). Applied Cryptanalysis: Breaking Ciphers in
the Real World. John Wiley & Sons. ISBN 978-0-470-11486-5.
63. Sweigart, Al (2013). Hacking Secret Ciphers with Python. Al Sweigart. ISBN 978-
1482614374.
64. Swenson, Christopher (2008). Modern cryptanalysis: techniques for advanced
code breaking. John Wiley & Sons. ISBN 978-0-470-13593-8.
65. Wagstaff, Samuel S. (2003). Cryptanalysis of number-theoretic ciphers. CRC
Press. ISBN 978-1-58488-153-7.
66. Basic Cryptanalysis (files contain a 5-line header that has to be removed first)
67. Distributed Computing Projects
68. List of tools for cryptanalysis on modern cryptography
69. Simon Singh's crypto corner
70. The National Museum of Computing
71. UltraAnvil tool for attacking simple substitution ciphers
72. How Alan Turing Cracked The Enigma Code Imperial War Museums
6.2.8.3 Examples
6.2.8.3.1 Mod n cryptanalysis
In cryptography, mod n cryptanalysis is an attack applicable to block and stream
ciphers. It is a form of partitioning cryptanalysis that exploits unevenness in how
the cipher operates over equivalence classes (congruence classes) modulo n. The
method was first suggested in 1999 by John Kelsey, Bruce Schneier, and David
Wagner and applied to RC5P (a variant of RC5) and M6 (a family of block ciphers used in
the FireWire standard). These attacks used the properties of binary addition and bit
rotation modulo a Fermat prime.
6.2.8.3.2 References
1. John Kelsey, Bruce Schneier, David Wagner (March 1999). Mod n Cryptanalysis,
with Applications Against RC5P and M6(PDF/PostScript). Fast Software Encryption,
Sixth International Workshop Proceedings. Rome: Springer-Verlag. pp. 139–155.
Retrieved 2007-02-12.
2. Vincent Rijmen (2003-12-01). ""mod n" Cryptanalysis of Rabbit" (PDF). White
paper, Version 1.0. Cryptico. Retrieved 2007-02-12.
3. Toshio Tokita; Tsutomu Matsumoto. "M8". Ipsj Journal. 42 (8).
6.2.8.3.3 Impossible differential cryptanalysis
In cryptography, impossible differential cryptanalysis is a form of differential
cryptanalysis for block ciphers. While ordinary differential cryptanalysis tracks
differences that propagate through the cipher with greater than expected probability,
impossible differential cryptanalysis exploits differences that are impossible (having
probability 0) at some intermediate state of the cipher algorithm.
Lars Knudsen appears to be the first to use a form of this attack, in the 1998 paper
where he introduced his AES candidate, DEAL.[1] The first presentation to attract the
attention of the cryptographic community was later the same year at the rump session
of CRYPTO '98, in which Eli Biham, Alex Biryukov, and Adi Shamir introduced the name
"impossible differential"[2] and used the technique to break 4.5 out of 8.5 rounds
of IDEA[3] and 31 out of 32 rounds of the NSA-designed cipher Skipjack.[4] This
development led cryptographer Bruce Schneier to speculate that the NSA had no
previous knowledge of impossible differential cryptanalysis.[5] The technique has since
been applied to many other ciphers: Khufu and Khafre, E2, variants
of Serpent, MARS, Twofish, Rijndael, CRYPTON, Zodiac, Hierocrypt-3, TEA, XTEA, Mini-
AES, ARIA, Camellia, and SHACAL-2.
Biham, Biryukov and Shamir also presented a relatively efficient specialized method for
finding impossible differentials that they called a miss-in-the-middle attack. This
consists of finding "two events with probability one, whose conditions cannot be met
together."[6]
6.2.8.3.4 References
1. Lars Knudsen (February 21, 1998). "DEAL - A 128-bit Block Cipher". Technical
report no. 151. Department of Informatics, University of Bergen, Norway.
Retrieved 2015-05-28.
2. Shamir, A. (August 25, 1998) Impossible differential attacks. CRYPTO '98 rump
session (video at Google Video—uses Flash)
3. Biryukov, A. (August 25, 1998) Miss-in-the-middle attacks on IDEA. CRYPTO '98
rump session (video at Google Video—uses Flash)
4. Biham, E. (August 25, 1998) Impossible cryptanalysis of Skipjack. CRYPTO '98
rump session (video at Google Video—uses Flash)
5. Bruce Schneier (September 15, 1998). "Impossible Cryptanalysis and
Skipjack". Crypto-Gram Newsletter.
6. E. Biham; A. Biryukov; A. Shamir (March 1999). Miss in the Middle Attacks on
IDEA, Khufu and Khafre. 6th International Workshop on Fast Software Encryption (FSE
1999). Rome: Springer-Verlag. pp. 124–138. Archived from the
original (gzippedPostScript) on 2011-05-15. Retrieved 2007-02-14.
7. Orr Dunkelman (March 1999). An Analysis of Serpent-p and Serpent-p-
ns (PDF/PostScript). Rump session, 2nd AES Candidate Conference. Rome: NIST.
Retrieved 2007-02-27.
8. E. Biham; A. Biryukov; A. Shamir (May 1999). Cryptanalysis of Skipjack Reduced
to 31 Rounds using Impossible Differentials(PDF/PostScript). Advances in Cryptology
- EUROCRYPT '99. Prague: Springer-Verlag. pp. 12–23. Retrieved 2007-02-13.
9. Kazumaro Aoki; Masayuki Kanda (1999). "Search for Impossible Differential of
E2" (PDF/PostScript). Retrieved 2007-02-27.
10. Eli Biham, Vladimir Furman (April 2000). Impossible Differential on 8-Round
MARS' Core (PDF/PostScript). 3rd AES Candidate Conference. pp. 186–194.
Retrieved 2007-02-27.
11. Eli Biham; Vladimir Furman (December 2000). Improved Impossible Differentials
on Twofish (PDF/PostScript). INDOCRYPT 2000. Calcutta: Springer-Verlag. pp. 80–92.
Retrieved 2007-02-27.
12. Deukjo Hong; Jaechul Sung; Shiho Moriai; Sangjin Lee; Jongin Lim (April
2001). Impossible Differential Cryptanalysis of Zodiac(PDF). 8th International Workshop
on Fast Software Encryption (FSE 2001). Yokohama: Springer-Verlag. pp. 300–311.
Retrieved 2006-12-30.
13. Raphael C.-W. Phan; Mohammad Umar Siddiqi (July 2001). "Generalised
Impossible Differentials of Advanced Encryption Standard" (PDF). Electronics
Letters. 37 (14): pp. 896–898. doi:10.1049/el:20010619. Retrieved 2007-07-17.
14. Jung Hee Cheon, MunJu Kim, and Kwangjo Kim (September 2001). Impossible
Differential Cryptanalysis of Hierocrypt-3 Reduced to 3 Rounds (PDF). Proceedings of
2nd NESSIE Workshop. Retrieved 2007-02-27.
15. Jung Hee Cheon; MunJu Kim; Kwangjo Kim; Jung-Yeun Lee; SungWoo Kang
(December 26, 2001). Improved Impossible Differential Cryptanalysis of Rijndael and
Crypton. 4th International Conference on Information Security and Cryptology (ICISC
2001). Seoul: Springer-Verlag. pp. 39–49. CiteSeerX 10.1.1.15.9966  .
16. Dukjae Moon; Kyungdeok Hwang; Wonil Lee; Sangjin Lee; AND Jongin Lim
(February 2002). Impossible Differential Cryptanalysis of Reduced Round XTEA and
TEA (PDF). 9th International Workshop on Fast Software Encryption (FSE 2002). Leuven:
Springer-Verlag. pp. 49–60. Retrieved 2007-02-27.
17. Raphael C.-W. Phan (May 2002). "Classes of Impossible Differentials of Advanced
Encryption Standard" (PDF). Electronics Letters. 38 (11): pp. 508–
510. doi:10.1049/el:20020347. Retrieved 2007-07-17.
18. Raphael C.-W. Phan (October 2003). "Impossible Differential Cryptanalysis of
Mini-AES" (PDF). Cryptologia. XXVII (4): pp. 283–292. doi:10.1080/0161-
110391891964. ISSN 0161-1194. Archived from the original (PDF) on 2007-09-26.
Retrieved 2007-02-27.
19. Raphael C.-W. Phan (July 2004). "Impossible Differential Cryptanalysis of 7-round
AES". Information Processing Letters. 91 (1): pp. 29–32. doi:10.1016/j.ipl.2004.03.006.
Retrieved 2007-07-19.
20. Wenling Wu; Wentao Zhang; Dengguo Feng (2006). "Impossible Differential
Cryptanalysis of ARIA and Camellia" (PDF). Retrieved 2007-02-27.
6.2.8.3.5 Integral cryptanalysis
In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly
applicable to block ciphers based on substitution-permutation networks. It was
originally designed by Lars Knudsen as a dedicated attack against Square, so it is
commonly known as the Square attack. It was also extended to a few other ciphers
related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack
to what he called a saturation attack and used it to attack Twofish, which is not at all
similar to Square, having a radically different Feistel network structure. Forms of
integral cryptanalysis have since been applied to a variety of ciphers,
including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD,
and FOX (now called IDEA NXT).
Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a
fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen
plaintexts of which part is held constant and another part varies through all
possibilities. For example, an attack might use 256 chosen plaintexts that have all but
8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR
sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide
information about the cipher's operation. This contrast between the differences of pairs
of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis",
borrowing the terminology of calculus.
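The zero XOR-sum property of such a set is easy to verify. The following Python sketch
(with an arbitrary 8-byte base block chosen for the example) varies one byte over all
256 values and checks that the set XORs to zero in every position:

from functools import reduce

base = bytearray(b"\x11\x22\x33\x44\x55\x66\x77\x88")   # arbitrary block
texts = []
for v in range(256):
    t = bytearray(base)
    t[3] = v                      # the "active" byte takes all 256 values
    texts.append(bytes(t))

xor_sum = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), texts)
assert xor_sum == bytes(8)        # XOR sum is zero in every byte position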
6.2.8.3.6 References
1. Joan Daemen, Lars Knudsen, Vincent Rijmen (January 1997). The Block Cipher
Square (PDF). 4th International Workshop on Fast Software Encryption (FSE '97),
Volume 1267 of Lecture Notes in Computer Science. Haifa: Springer-Verlag. pp. 149–
165. Retrieved 2007-02-15.
2. Carl D'Halluin, Gert Bijnens, Vincent Rijmen, Bart Preneel (March 1999). Attack
on Six Rounds of Crypton (PDF/PostScript). 6th International Workshop on Fast
Software Encryption (FSE '99). Rome: Springer-Verlag. pp. 46–59. Retrieved 2007-03-03.
3. N. Ferguson, J. Kelsey, S. Lucks, B. Schneier, M. Stay, D. Wagner, D. Whiting
(April 2000). Improved Cryptanalysis of Rijndael(PDF/PostScript). 7th International
Workshop on Fast Software Encryption (FSE 2000). New York City: Springer-Verlag.
pp. 213–230. Retrieved 2007-03-06.
4. Stefan Lucks (September 14, 2000). The Saturation Attack - a Bait for
Twofish (PDF/PostScript). 8th International Workshop on Fast Software Encryption (FSE
'01). Yokohama: Springer-Verlag. pp. 1–15. Retrieved 2006-11-30.
5. Paulo S. L. M. Barreto, Vincent Rijmen, Jorge Nakahara, Jr., Bart Preneel, Joos
Vandewalle, Hae Yong Kim (April 2001). Improved SQUARE Attacks against Reduced-
Round HIEROCRYPT (PDF). 8th International Workshop on Fast Software Encryption
(FSE '01). Yokohama: Springer-Verlag. pp. 165–173. Retrieved 2007-03-03.
6. Jorge Nakahara, Jr.; Paulo S.L.M. Barreto; Bart Preneel; Joos Vandewalle; Hae Y.
Kim (2001). "SQUARE Attacks on Reduced-Round PES and IDEA Block
Ciphers" (PDF/PostScript). Retrieved 2007-03-03.
7. Yongjin Yeom; Sangwoo Park; Iljun Kim (February 2002). On the Security of
CAMELLIA against the Square Attack (PDF). 9th International Workshop on Fast
Software Encryption (FSE '02). Leuven: Springer-Verlag. pp. 89–99. Retrieved 2007-03-
03.
8. Kyungdeok Hwang; Wonil Lee; Sungjae Lee; Sangjin Lee; Jongin Lim (February
2002). Saturation Attacks on Reduced Round Skipjack (PDF). 9th International
Workshop on Fast Software Encryption (FSE '02). Leuven: Springer-Verlag. pp. 100–111.
Retrieved 2007-03-03.
9. Lars Knudsen; David Wagner (December 11, 2001). Integral
cryptanalysis (PDF/PostScript). 9th International Workshop on Fast Software Encryption
(FSE '02). Leuven: Springer-Verlag. pp. 112–127. Retrieved 2006-11-30.
10. Gilles Piret, Jean-Jacques Quisquater (February 16, 2003). "Integral
Cryptanalysis on reduced-round Safer++" (PDF/PostScript). Retrieved 2007-03-03.
11. Frédéric Muller (December 2003). A New Attack against Khazad (PDF). Advances
in Cryptology - ASIACRYPT 2003. Taipei: Springer-Verlag. pp. 347–358. Retrieved 2007-
03-03.
12. Wu Wenling; Zhang Wentao; Feng Dengguo (August 25, 2005). "Improved Integral
Cryptanalysis of FOX Block Cipher" (PDF). Retrieved 2007-03-03.
6.2.8.3.7 Linear cryptanalysis
In cryptography, linear cryptanalysis is a general form of cryptanalysis based on
finding affine approximations to the action of a cipher. Attacks have been developed
for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most
widely used attacks on block ciphers; the other being differential cryptanalysis.
The discovery is attributed to Mitsuru Matsui, who first applied the technique to
the FEAL cipher (Matsui and Yamagishi, 1992).[1] Subsequently, Matsui published an
attack on the Data Encryption Standard (DES), eventually leading to the first
experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993;
1994).[2][3] The attack on DES is not generally practical, requiring 2^47 known
plaintexts.[3]
A variety of refinements to the attack have been suggested, including using multiple
linear approximations or incorporating non-linear expressions, leading to a
generalized partitioning cryptanalysis. Evidence of security against linear cryptanalysis
is usually expected of new cipher designs.
6.2.8.3.7.1 Overview
There are two parts to linear cryptanalysis. The first is to construct linear equations
relating plaintext, ciphertext and key bits that have a high bias; that is, whose
probabilities of holding (over the space of all possible values of their variables) are as
close as possible to 0 or 1. The second is to use these linear equations in conjunction
with known plaintext-ciphertext pairs to derive key bits.
6.2.8.3.7.2 Constructing linear equations
For the purposes of linear cryptanalysis, a linear equation expresses the equality of two
expressions which consist of binary variables combined with the exclusive-or (XOR)
operation. For example, the following equation, from a hypothetical cipher, states that
the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the
first ciphertext bit is equal to the second bit of the key:
P1 ⊕ P3 ⊕ C1 = K2
In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would
hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary
in probability, they are more accurately referred to as linear approximations.
The procedure for constructing approximations is different for each cipher. In the most
basic type of block cipher, a substitution-permutation network, analysis is
concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the
operation of an S-box cannot be encoded in a linear equation). For small enough S-
boxes, it is possible to enumerate every possible linear equation relating the S-box's
input and output bits, calculate their biases and choose the best ones. Linear
approximations for S-boxes then must be combined with the cipher's other actions,
such as permutation and key mixing, to arrive at linear approximations for the entire
cipher. The piling-up lemma is a useful tool for this combination step. There are also
techniques for iteratively improving linear approximations (Matsui 1994).
6.2.8.3.7.3 Deriving key bits
Having obtained a linear approximation of the form:
Pi1 ⊕ Pi2 ⊕ ... ⊕ Cj1 ⊕ Cj2 ⊕ ... = Kk1 ⊕ Kk2 ⊕ ...
we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known
plaintext-ciphertext pairs, to guess at the values of the key bits involved in the
approximation.
For each set of values of the key bits on the right-hand side (referred to as a partial
key), count how many times the approximation holds true over all the known plaintext-
ciphertext pairs; call this count T. The partial key whose T has the greatest absolute
difference from half the number of plaintext-ciphertext pairs is designated as the most
likely set of values for those key bits. This is because it is assumed that the correct
partial key will cause the approximation to hold with a high bias. The magnitude of the
bias is significant here, as opposed to the magnitude of the probability itself.
This procedure can be repeated with other linear approximations, obtaining guesses at
values of key bits, until the number of unknown key bits is low enough that they can be
attacked with brute force.
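The enumeration step described under "Constructing linear equations" can be sketched
in a few lines of Python. The 4-bit S-box below is purely illustrative; the script ranks
every input/output mask pair of the S-box by its bias:

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # illustrative 4-bit S-box

def parity(x):
    # XOR of all the bits of x
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

# For input mask a and output mask b, count inputs where the masked input
# bits XOR to the same value as the masked output bits; bias = |count - 8|/16.
table = sorted(((abs(sum(parity(x & a) == parity(SBOX[x] & b)
                         for x in range(16)) - 8), a, b)
                for a in range(1, 16) for b in range(1, 16)), reverse=True)
print("strongest approximations (|count-8|, in-mask, out-mask):", table[:3])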
6.2.8.3.7.4 References
1. Matsui, M. & Yamagishi, A. "A new method for known plaintext attack of FEAL
cipher". Advances in Cryptology - EUROCRYPT 1992.
2. Matsui, M. "The first experimental cryptanalysis of the data encryption
standard". Advances in Cryptology - CRYPTO 1994.
3. Matsui, M. "Linear cryptanalysis method for DES cipher" (PDF). Advances in
Cryptology - EUROCRYPT 1993. Archived from the original (PDF) on 2007-09-26.
Retrieved 2007-02-22.
6.2.8.3.8 Differential cryptanalysis
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to
block ciphers, but also to stream ciphers and cryptographic hash functions. In the
broadest sense, it is the study of how differences in information input can affect the
resultant difference at the output. In the case of a block cipher, it refers to a set of
techniques for tracing differences through the network of transformation, discovering
where the cipher exhibits non-random behavior, and exploiting such properties to
recover the secret key (cryptography key).
6.2.8.3.8.1 History
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi
Shamir in the late 1980s, who published a number of attacks against various block
ciphers and hash functions, including a theoretical weakness in the Data Encryption
Standard (DES). It was noted by Biham and Shamir that DES is surprisingly resistant to
differential cryptanalysis but small modifications to the algorithm would make it much
more susceptible.[1]
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper
stating that differential cryptanalysis was known to IBM as early as 1974, and that
defending against differential cryptanalysis had been a design goal.[2] According to
author Steven Levy, IBM had discovered differential cryptanalysis on its own, and
the NSA was apparently well aware of the technique.[3] IBM kept some secrets, as
Coppersmith explains: "After discussions with NSA, it was decided that disclosure of
the design considerations would reveal the technique of differential cryptanalysis, a
powerful technique that could be used against many ciphers. This in turn would
weaken the competitive advantage the United States enjoyed over other countries in
the field of cryptography."[2] Within IBM, differential cryptanalysis was known as the
"T-attack"[2] or "Tickle attack".[4]
While DES was designed with resistance to differential cryptanalysis in mind, other
contemporary ciphers proved to be vulnerable. An early target for the attack was
the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be
broken using only eight chosen plaintexts, and even a 31-round version of FEAL is
susceptible to the attack. In contrast, differential cryptanalysis can break full DES
with an effort on the order of 2^47 chosen plaintexts.
6.2.8.3.8.2 Attack mechanics
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the
attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing.
There are, however, extensions that would allow a known plaintext or even
a ciphertext-only attack. The basic method uses pairs of plaintext related by a
constant difference; difference can be defined in several ways, but the eXclusive OR
(XOR)operation is usual. The attacker then computes the differences of the
corresponding ciphertexts, hoping to detect statistical patterns in their distribution.
The resulting pair of differences is called a differential. Their statistical properties
depend upon the nature of the S-boxes used for encryption, so the attacker analyses
differentials (ΔX, ΔY), where ΔY = S(X ⊕ ΔX) ⊕ S(X) (and ⊕ denotes exclusive or) for
each such S-box S. In the basic attack, one particular ciphertext difference is expected
to be especially frequent; in this way, the cipher can be distinguished from random.
More sophisticated variations allow the key to be recovered faster than exhaustive
search.
In the most basic form of key recovery through differential cryptanalysis, an attacker
requests the ciphertexts for a large number of plaintext pairs, then assumes that the
differential holds for at least r − 1 rounds, where r is the total number of rounds. The
attacker then deduces which round keys (for the final round) are possible, assuming
the difference between the blocks before the final round is fixed. When round keys are
short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one
round with each possible round key. When one round key has been deemed a potential
round key considerably more often than any other key, it is assumed to be the correct
round key.
For any particular cipher, the input difference must be carefully selected for the attack
to be successful. An analysis of the algorithm's internals is undertaken; the standard
method is to trace a path of highly probable differences through the various stages of
encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic
concern of cipher designers. New designs are expected to be accompanied by evidence
that the algorithm is resistant to this attack, and many, including the Advanced
Encryption Standard, have been proven secure against the attack.
6.2.8.3.8.3 Attack in detail
The attack relies primarily on the fact that a given input/output difference pattern only
occurs for certain values of inputs. Usually the attack is applied in essence to the non-
linear components as if they were a solid component (usually they are in fact look-up
tables or S-boxes). Observing the desired output difference (between two chosen or
known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (implying a difference in the least significant
bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of
4/256 (possible with the non-linear function in the AES cipher for instance) then for only
4 values (or 2 pairs) of inputs is that differential possible. Suppose we have a non-linear
function where the key is XOR'ed before evaluation and the values that allow the
differential are {2,3} and {4,5}. If the attacker sends in the values of {6, 7} and observes
the correct output difference it means the key is either 6 ⊕ K = 2, or 6 ⊕ K = 4, meaning
the key K is either 2 or 4.
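The same deduction can be reproduced with a toy 4-bit S-box in Python (the S-box
values, key, and plaintext here are invented for the illustration): the observed output
difference restricts the key to the few values consistent with the difference table.

SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]     # illustrative only

def encrypt(x, k):
    return SBOX[x ^ k]            # key XORed in before the S-box lookup

dx, true_key, p = 0x1, 0x4, 0x9   # chosen input difference; key is secret
dy = encrypt(p, true_key) ^ encrypt(p ^ dx, true_key)   # observed difference

# Inputs u for which the differential dx -> dy holds through the S-box;
# any key mapping p onto one of those inputs fits the observation.
good = [u for u in range(16) if SBOX[u] ^ SBOX[u ^ dx] == dy]
candidates = [k for k in range(16) if (p ^ k) in good]
assert true_key in candidates
print("key candidates:", candidates)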
In essence, for an n-bit non-linear function one would ideally seek a maximum
differential probability as close to 2^-(n-1) as possible, to achieve differential
uniformity. When this happens, the differential attack requires as much work to
determine the key as simply brute-forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most
entries however are either 0 or 2), meaning that in theory one could determine the key
with half as much work as brute force; however, the high branch number of AES
prevents any high-probability trails from existing over multiple rounds. In fact, the AES
cipher would be just as immune to differential and linear attacks with a much weaker
non-linear function. The incredibly high branch number (active S-box count) of 25 over
4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear
transforms, meaning that the probability of success does not exceed Pr[attack] ≤
Pr[best attack on S-box]^50. For example, with the current S-box AES emits no fixed
differential with a probability higher than (4/256)^50 or 2^-300, which is far lower than
the required threshold of 2^-128 for a 128-bit block cipher. This would have allowed
room for a more efficient S-box: even if it were 16-uniform, the probability of attack
would still have been only 2^-200.
There exist no bijections for even-sized inputs/outputs with 2-uniformity. They exist in
odd fields (such as GF(2^7)) using either cubing or inversion (there are other exponents
that can be used as well). For instance S(x) = x^3 in any odd binary field is immune to
differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-
bit functions in the 16-bit non-linear function. What these functions gain in immunity to
differential and linear attacks they lose to algebraic attacks, since they can be
described and solved via a SAT solver. This is in part why AES (for instance) has an
affine mapping after the inversion.
6.2.8.3.8.4 Specialized types
Specialised types can be found with:
• Higher-order differential cryptanalysis
• Truncated differential cryptanalysis
• Impossible differential cryptanalysis
• Boomerang attack
6.2.8.3.8.5 References
1. Biham and Shamir, 1993, pp. 8-9
2. Coppersmith, Don (May 1994). "The Data Encryption Standard (DES) and its
strength against attacks" (PDF). IBM Journal of Research and Development. 38 (3):
243. doi:10.1147/rd.383.0243. (subscription required)
3. Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government —
Saving Privacy in the Digital Age. Penguin Books. pp. 55–56. ISBN 0-14-024432-8.
4. Matt Blaze, sci.crypt, 15 August 1996, Re: Reverse engineering and the Clipper
chip"
5. Eli Biham, Adi Shamir, Differential Cryptanalysis of the Data Encryption Standard,
Springer Verlag, 1993. ISBN 0-387-97930-1, ISBN 3-540-97930-1.
6. Biham, E. and A. Shamir. (1990). Differential Cryptanalysis of DES-like
Cryptosystems. Advances in Cryptology — CRYPTO '90. Springer-Verlag. 2–21.
7. Eli Biham, Adi Shamir,"Differential Cryptanalysis of the Full 16-Round DES," CS
708, Proceedings of CRYPTO '92, Volume 740 of Lecture Notes in Computer Science,
December 1991. (Postscript)
6.2.8.3.9 Symmetric-key algorithm
Symmetric-key algorithms[1] are algorithms for cryptography that use the
same cryptographic keys for both encryption of plaintext and decryption of ciphertext.
The keys may be identical or there may be a simple transformation to go between the
two keys[2]. The keys, in practice, represent a shared secret between two or more
parties that can be used to maintain a private information link.[3] This requirement that
both parties have access to the secret key is one of the main drawbacks of symmetric
key encryption, in comparison to public-key encryption (also known as asymmetric key
encryption).[4]
6.2.8.3.9.1 Types of symmetric-key algorithms
Symmetric-key encryption can use either stream ciphers or block ciphers.[5]
Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that it is a multiple of the block size. Blocks of 64 bits were commonly
used. The Advanced Encryption Standard (AES) algorithm, approved by NIST in
December 2001, uses 128-bit blocks, as does the GCM block cipher mode of operation.
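The distinction can be illustrated with a short Python sketch (the toy keystream and
PKCS#7-style padding below are assumptions for the example, not drawn from any real
cipher):

import itertools

def stream_encrypt(plaintext, keystream):
    # a stream cipher XORs each byte with the next keystream byte
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

def pad_to_block(plaintext, block_size=16):
    # a block cipher first pads the plaintext to a whole number of blocks
    n = block_size - len(plaintext) % block_size
    return plaintext + bytes([n]) * n

msg = b"attack at dawn"
print(stream_encrypt(msg, itertools.cycle(b"\x5a\xc3\x99\x01")))
print(pad_to_block(msg))          # 14 bytes padded out to one 16-byte block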
6.2.8.3.9.2 Implementations
Examples of popular symmetric-key algorithms
include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, Kuznyechik, RC4, 3DES, Ski
pjack, Safer+/++ (Bluetooth), and IDEA.[6][7]
6.2.8.3.9.3 Cryptographic primitives based on symmetric ciphers
Symmetric ciphers are commonly used to achieve other cryptographic primitives than
just encryption.
Encrypting a message does not guarantee that this message is not changed while
encrypted. Hence a message authentication code is often added to a ciphertext to
ensure that changes to the ciphertext will be noted by the receiver. Message
authentication codes can be constructed from symmetric ciphers (e.g. CBC-MAC).
However, symmetric ciphers cannot be used for non-repudiation purposes except by
involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way
compression function for descriptions of several such methods.
6.2.8.3.9.4 Construction of symmetric ciphers
Many modern block ciphers are based on a construction proposed by Horst Feistel.
Feistel's construction makes it possible to build invertible functions from other
functions that are themselves not invertible.
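A minimal Python sketch makes the point: even with a deliberately non-invertible round
function F (an arbitrary squaring map here, chosen only for illustration), the Feistel
round itself is invertible.

def F(half, k):
    return (half * half + k) & 0xFFFF        # deliberately non-invertible

def feistel_round(L, R, k):
    return R, L ^ F(R, k)                    # swap halves, mix in F(R, k)

def feistel_round_inv(L, R, k):
    return R ^ F(L, k), L                    # undo using the same F

L, R, k = 0x1234, 0x5678, 0xBEEF
assert feistel_round_inv(*feistel_round(L, R, k), k) == (L, R)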
6.2.8.3.9.5 Security of symmetric ciphers
Symmetric ciphers have historically been susceptible to known-plaintext
attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis.
Careful construction of the functions for each round can greatly reduce the chances of
a successful attack.
6.2.8.3.10 Key management
6.2.8.3.10.1 Key establishment
Symmetric-key algorithms require both the sender and the recipient of a message to
have the same secret key. All early cryptographic systems required one of those people
to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally
to encrypt the bulk of the messages, but they eliminate the need for a physically secure
channel by using Diffie–Hellman key exchange or some other public-key protocol to
securely come to agreement on a fresh new secret key for each message (forward
secrecy).
6.2.8.3.10.2 Key generation
When used with asymmetric ciphers for key transfer, pseudorandom key generators are
nearly always used to generate the symmetric cipher session keys. However, lack of
randomness in those generators or in their initialization vectors is disastrous and has
led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation
uses a source of high entropy for its initialization.[8][9][10]
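In Python, for example, the standard secrets module draws key material from the
operating system's entropy pool, which is the kind of high-entropy source the
paragraph above calls for:

import secrets

session_key = secrets.token_bytes(32)   # fresh 256-bit symmetric session key
print(session_key.hex())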
6.2.8.3.11 Reciprocal cipher
A reciprocal cipher is a cipher where, just as one enters the plaintext into
the cryptography system to get the ciphertext, one could enter the ciphertext into the
same place in the system to get the plaintext. A reciprocal cipher is also sometimes
referred to as a self-reciprocal cipher. Examples of reciprocal ciphers include:
• Beaufort cipher
• Enigma machine
• ROT13
• XOR cipher
• Vatsyayana cipher
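The XOR cipher shows the property directly: encrypting twice with the same key
returns the plaintext, so the same routine serves for both encryption and decryption (a
small Python illustration with an arbitrary demonstration key):

def xor_cipher(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg, key = b"reciprocal", b"\x42\x17"     # arbitrary demonstration values
assert xor_cipher(xor_cipher(msg, key), key) == msg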
Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-
key, and private-key encryption. Use of the last and first terms can create ambiguity
with similar terminology used in public-key cryptography. Symmetric-key cryptography
is to be contrasted with asymmetric-key cryptography.
6.2.8.3.12 References
1. Kartit, Zaid (February 2016). "Applying Encryption Algorithms for Data Security in
Cloud Storage". Advances in Ubiquitous Networking: Proceedings of UNet15: 147.
2. Delfs, Hans & Knebl, Helmut (2007). "Symmetric-key encryption". Introduction to
cryptography: principles and applications. Springer. ISBN 9783540492436.
3. Mullen, Gary & Mummert, Carl (2007). Finite fields and applications. American
Mathematical Society. p. 112. ISBN 9780821844182.
4. Pelzl & Paar (2010). Understanding Cryptography. Berlin: Springer-Verlag. p. 30.
5. Ayushi (2010). "A Symmetric Key Cryptographic Algorithm" (PDF). International
Journal of Computer Applications. 1-No 15.
6. Roeder, Tom. "Symmetric-Key Cryptography". www.cs.cornell.edu.
Retrieved 2017-02-05.
7. Ian Goldberg and David Wagner. "Randomness and the Netscape Browser".
Dr. Dobb's Journal, January 1996. Quote: "it is vital that the secret keys be generated
from an unpredictable random-number source."
8. Thomas Ristenpart, Scott Yilek. "When Good Randomness Goes Bad: Virtual
Machine Reset Vulnerabilities and Hedging Deployed Cryptography" (2010).
CiteSeerX: 10.1.1.183.3583. Quote from abstract: "Random number generators
(RNGs) are consistently a weak link in the secure use of cryptography."
9. "Symmetric Cryptography". James. 2006-03-11.
6.2.8.4 Lattice-based cryptography
Lattice-based cryptography uses lattices either in the construction of cryptographic
primitives or in their security proofs, and is a leading approach to post-quantum
cryptography. Such schemes resist attack by both classical and quantum computers
because many lattice problems cannot be solved efficiently.
6.2.8.4.1 History
Ajtai developed a lattice-based cryptographic protocol whose security rests on the
computational complexity of the classic lattice problem known as Short Integer
Solutions[1], together with a cryptographic hash function whose security is equivalent
to that problem. Hoffstein, Pipher and Silverman introduced the NTRU lattice-based
public-key encryption scheme[2], which lacks a comparable security proof. Regev[3]
proved the security of a scheme based on the Learning with Errors problem, with later
extensions[4][5] and improved efficiency.[6][7][8][9] Eventually Gentry formulated a
fully homomorphic encryption scheme using a lattice problem.[10]
6.2.8.4.2 Mathematical background
A lattice is the set of all integer linear combinations of a set of basis vectors; the
basis for a lattice is not unique. The central hard problem is the Shortest Vector
Problem: finding a non-zero lattice vector of minimal Euclidean length. Such problems
are hard to solve efficiently, even to within approximation factors that are polynomial,
which is what makes lattice-based cryptographic constructions secure.
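A toy sketch of the Learning with Errors problem underlying many of these constructions, with invented small parameters (real schemes use much larger dimensions and carefully chosen error distributions):

# Toy LWE instance: recover s from (A, b = A.s + e mod q) is believed hard.
import numpy as np

n, m, q = 8, 16, 97
rng = np.random.default_rng()
s = rng.integers(0, q, size=n)       # secret vector
A = rng.integers(0, q, size=(m, n))  # public random matrix
e = rng.integers(-2, 3, size=m)      # small error terms
b = (A @ s + e) % q                  # public LWE samples
# An attacker sees (A, b); without e this is linear algebra, with e it is hard.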
6.2.8.4.3 Lattice-based cryptosystems
Gentry used these techniques to construct the first fully homomorphic encryption
scheme[11][12][13], supporting circuits of arbitrary depth. Notable lattice-based
cryptosystems include:
6.2.8.4.3.1 Encryption
• Peikert's Ring - Learning With Errors (Ring-LWE) Key Exchange[7]
• GGH encryption scheme
• NTRUEncrypt
6.2.8.4.4 Signature
• Güneysu, Lyubashevsky, and Poppleman's Ring - Learning with Errors (Ring-LWE)
Signature[14]
• GGH signature scheme
• NTRUSign
6.2.8.4.5 Hash function
• SWIFFT
• LASH (Lattice Based Hash Function)[15][16]
6.2.8.4.6 Security
Lattice-based systems are leading candidates in the race to replace current public-key
cryptography[17]. The incumbent alternatives, whose security rests on the
computational complexity of factoring, discrete logarithms and related problems, are
solvable in polynomial time on a quantum computer[18], whereas no efficient quantum
algorithm is known for the underlying lattice problems.[1][3][4]
6.2.8.4.7 Functionality
Lattice-based constructions also supply functionality beyond encryption and
signatures, including fully homomorphic encryption,[10] indistinguishability
obfuscation,[19] cryptographic multilinear maps, and functional encryption.[19]
6.2.8.4.8 References
1. Ajtai, Miklós (1996). "Generating Hard Instances of Lattice Problems".
Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing.
pp. 99–108. doi:10.1145/237814.237838. ISBN 0-89791-785-5. 
2. Hoffstein, Jeffrey; Pipher, Jill; Silverman, Joseph H. (1998-06-21). "NTRU: A ring-
based public key cryptosystem". Algorithmic Number Theory. Springer, Berlin,
Heidelberg: 267–288. doi:10.1007/bfb0054868. 
3. Regev, Oded (2005-01-01). "On Lattices, Learning with Errors, Random Linear
Codes, and Cryptography". Proceedings of the Thirty-seventh Annual ACM Symposium
on Theory of Computing. STOC '05. New York, NY, USA: ACM: 84–
93. doi:10.1145/1060590.1060603. ISBN 1581139608. 
4. Peikert, Chris (2009-01-01). "Public-key Cryptosystems from the Worst-case
Shortest Vector Problem: Extended Abstract". Proceedings of the Forty-first Annual
ACM Symposium on Theory of Computing. STOC '09. New York, NY, USA: ACM: 333–
342. doi:10.1145/1536414.1536461. ISBN 9781605585062.
5. Brakerski, Zvika; Langlois, Adeline; Peikert, Chris; Regev, Oded; Stehlé, Damien
(2013-01-01). "Classical Hardness of Learning with Errors". Proceedings of the Forty-
fifth Annual ACM Symposium on Theory of Computing. STOC '13. New York, NY, USA:
ACM: 575–584. doi:10.1145/2488608.2488680. ISBN 9781450320290.
6. Lyubashevsky, Vadim; Peikert, Chris; Regev, Oded (2010-05-30). "On Ideal
Lattices and Learning with Errors over Rings". Advances in Cryptology – EUROCRYPT
2010. Springer, Berlin, Heidelberg: 1–23. doi:10.1007/978-3-642-13190-5_1. 
7. Peikert, Chris (2014-07-16). "Lattice Cryptography for the Internet" (PDF). IACR.
Retrieved 2017-01-11. 
8. Alkim, Erdem; Ducas, Léo; Pöppelmann, Thomas; Schwabe, Peter (2015-01-
01). "Post-quantum key exchange - a new hope". 
9. Bos, Joppe; Costello, Craig; Ducas, Léo; Mironov, Ilya; Naehrig, Michael;
Nikolaenko, Valeria; Raghunathan, Ananth; Stebila, Douglas (2016-01-01). "Frodo: Take
off the ring! Practical, Quantum-Secure Key Exchange from LWE". 
10. Gentry, Craig (2009-01-01). A Fully Homomorphic Encryption Scheme (Thesis).
Stanford, CA, USA: Stanford University. 
11. Gentry, Craig (2009). Fully homomorphic encryption using ideal lattices.
Proceedings of the forty-first annual ACM symposium on Theory of computing. pp. 169–
178. doi:10.1145/1536414.1536440. ISBN 978-1-60558-506-2. 
12. "IBM Researcher Solves Longstanding Cryptographic Challenge". IBM Research.
2009-06-25. Retrieved 2017-01-11. 
13. Michael Cooney (2009-06-25). "IBM touts encryption innovation". Computer
World. Retrieved 2017-01-11. 
14. Güneysu, Tim; Lyubashevsky, Vadim; Pöppelmann, Thomas (2012). "Practical
Lattice-Based Cryptography: A Signature Scheme for Embedded Systems" (PDF).
IACR. doi:10.1007/978-3-642-33027-8_31. Retrieved 2017-01-11. 
15. "LASH: A Lattice Based Hash Function". Archived from the original on October
16, 2008. Retrieved 2008-07-31. CS1 maint: BOT: original-url status unknown
(link) (broken)
16. Scott Contini, Krystian Matusiewicz, Josef Pieprzyk, Ron Steinfeld, Jian Guo, San
Ling and Huaxiong Wang (2008). "Cryptanalysis of LASH" (PDF). doi:10.1007/978-3-540-
71039-4_13.
17. Micciancio, Daniele; Regev, Oded (2008-07-22). "Lattice-based
cryptography" (PDF). Retrieved 2017-01-11. 
18. Shor, Peter W. (1997-10-01). "Polynomial-Time Algorithms for Prime Factorization
and Discrete Logarithms on a Quantum Computer". SIAM Journal on Computing. 26 (5):
1484–1509. doi:10.1137/S0097539795293172. ISSN 0097-5397. 
19. Garg, Sanjam; Gentry, Craig; Halevi, Shai; Raykova, Mariana; Sahai, Amit; Waters,
Brent (2013-01-01). "Candidate Indistinguishability Obfuscation and Functional
Encryption for all circuits". 
20. Oded Goldreich, Shafi Goldwasser, and Shai Halevi. "Public-key cryptosystems
from lattice reduction problems". In CRYPTO ’97: Proceedings of the 17th Annual
International Cryptology Conference on Advances in Cryptology, pages 112–131,
London, UK, 1997. Springer-Verlag.
21. Phong Q. Nguyen. "Cryptanalysis of the Goldreich–Goldwasser–Halevi
cryptosystem from crypto ’97". In CRYPTO ’99: Proceedings of the 19th Annual
International Cryptology Conference on Advances in Cryptology, pages 288–304,
London, UK, 1999. Springer-Verlag.
22. Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector
problem: extended abstract,” in Proceedings of the 41st annual ACM symposium on
Theory of computing (Bethesda, MD, USA: ACM, 2009), 333–342, DOI
10.1145/1536414.1536461
23. Oded Regev. Lattice-based cryptography. In Advances in cryptology (CRYPTO),
pages 131–141, 2006.

6.3 Summary
6.3.1 Defences
IoT solutions have an impact on the technologies and services that store, integrate,
visualize and analyze IoT data. Security has to tackle cyberbullying, cybercrime and
cyberwarfare. Infectious malware consists of computer viruses and worms.
Concealment types are Trojan horses, rootkits, backdoors, zombie computers, man-in-
the-middle, man-in-the-browser, man-in-the-mobile and clickjacking. Malware for profit
comes as privacy-invasive software, adware, spyware, botnets, keystroke logging, form
grabbing, web threats, fraudulent dialers, malbots, scareware, rogue security software,
ransomware and crimeware. Threats can manifest as denial of service, eavesdropping,
exploitation, rootkits and vulnerabilities.
Security aims to help identify and apply standards for protection and prevention at
physical, personnel and organizational levels. It requires securing networks,
allied infrastructure, securing applications and databases. IoT security needs to cover
information security, mobile security, network security, internet security, application
security, computer security and data-centric security.
Security defence philosophies should be:
• reduction/mitigation – build ways to eliminate problems
• assign/transfer – divert the cost through insurance or outsourcing
• accept – if the cost of the countermeasure versus the threat is uneconomical
• ignore/reject – if the threat is negligible
Cybersecurity requirements span five key areas:
• Identification—understanding risk profile and current state
• Protection—applying prevention strategies to mitigate vulnerabilities and threats
• Detection—detecting anomalies and events
• Response—incident response, mitigation, and improvements
• Recovery—continuous life cycle improvement
Defences are reduced to "find what defence and where" using known security threats
and risk analysis. Specifically, security threats and risks are countered by software,
hardware and procedures.[4] Countermeasure procedures are supported by security
testing, information systems auditing, business continuity planning and digital
forensics.
Protections are classified as anti-keyloggers, antivirus software, browser security,
internet security, mobile security, network security, defensive computing, firewalls,
intrusion detection systems (including application protocol-based intrusion detection
systems, APIDS), intrusion prevention systems, ciphering and data loss prevention
software. Operational countermeasures consist of computer and network surveillance,
honeypots and Operation Bot Roast.

This can be summarised by the following tables.


Threat | Classification | Type Of Basic Threat | Defence
Software Attacks | Threat Type | – | –
Theft Of Intellectual Property | Threat Type | – | –
Identity Theft | Threat Type | – | –
Theft Of Equipment Or Information | Threat Type | – | –
Sabotage | Threat Type | – | –
Information Extortion | Threat Type | – | –
Cyberbullying | Application | – | –
Cybercrime | Application | – | –
Cyberwarfare | Application | – | –
Viruses | Infectious Malware | Running program | Antivirus
Worms | Infectious Malware | Running program, transmit program | Antivirus, Firewall
Trojan Horses | Concealment Type | Running program, send information | Antivirus, Firewall
Rootkits | Concealment Type | Running program | Antivirus
Backdoors | Concealment Type | Running program | Antivirus
Zombie Computer | Concealment Type | Running program, send information | Antivirus, Firewall
Man-In-The-Middle | Concealment Type | Running program, send information | Antivirus, Firewall
Man-In-The-Browser | Concealment Type | Running program, send information | Antivirus, Firewall
Man-In-The-Mobile | Concealment Type | Running program, send information | Antivirus, Firewall
Clickjacking | Concealment Type | Running program, send information | Antivirus, Firewall
Privacy-Invasive Software | Malware For Profit | Running program, send information | Antivirus, Firewall
Adware | Malware For Profit | Running program | Antivirus
Phishing | Malware For Profit | Running program, send information | Antivirus, Firewall
Spyware | Malware For Profit | Running program, send information | Antivirus, Firewall
Botnet | Malware For Profit | Running program, send information | Antivirus, Firewall
Keystroke Logging | Malware For Profit | Running program | Antivirus
Form Grabbing | Malware For Profit | Running program, send information | Antivirus, Firewall
Web Threats | Malware For Profit | Running program | Antivirus
Fraudulent Dialer | Malware For Profit | Running program | Antivirus
Malbot | Malware For Profit | Running program | Antivirus
Scareware | Malware For Profit | Running program | Antivirus
Rogue Security Software | Malware For Profit | Running program | Antivirus
Ransomware | Malware For Profit | Running program | Antivirus
Crimeware | Malware For Profit | Running program | Antivirus
Eavesdropping | Other Threats | Running program, radiation capture | Ciphers
Denial Of Service | Other Threats | Running program, send information | Antivirus, Firewall

Threat | Job | Defence
Computer Virus | Run program | Antivirus
Computer Worm | Spread program | Antivirus, Firewall
Trojan Horse | Collect information, send information | Antivirus, Firewall
Rootkit | Run program | Antivirus
Backdoor | Run program | Antivirus
Zombie Computer | Run program, send information | Antivirus, Firewall
Man-In-The-Middle | Divert information | Antivirus, Firewall
Man-In-The-Browser | Divert information | Antivirus, Firewall
Man-In-The-Mobile | Divert information | Antivirus, Firewall
Clickjacking | Take over form | Antivirus, Firewall
Privacy-Invasive Software | Adware, spyware and content-hijacking programs | Antivirus, Firewall
Adware | Select website | Antivirus, Firewall
Spyware | Collect information, send to web | Antivirus, Firewall
Botnet | Send email | Antivirus, Firewall
Keystroke Logging | Take over form | Antivirus, Firewall
Form Grabbing | Take over form | Antivirus, Firewall
Web Threats | Attack from site | Firewall
Fraudulent Dialer | Dial phone number | Antivirus
Malbot | Send email | Antivirus, Firewall
Scareware | Send message to console | Antivirus
Rogue Security Software | Combine | Antivirus, Firewall
Ransomware | Send message to console | Antivirus, Firewall
Crimeware | Combine | Antivirus, Firewall

6.3.2 Conclusion
From the analysis in the previous section, an antivirus program, a tuned firewall
(covering all input sources and all output destinations), an APIDS and agreed ciphers
are sufficient to protect IoT security.

6.3.3 References
1. Turner, Dawn M. "Digital Authentication: The Basics". Cryptomathic. Retrieved 9
August 2016.
2. Ahi, Kiarash (May 26, 2016). "Advanced terahertz techniques for quality control
and counterfeit detection". Proc. SPIE 9856, Terahertz Physics, Devices, and Systems
X: Advanced Applications in Industry and Defense, 98560G. doi:10.1117/12.2228684.
Retrieved May 26, 2016.
3. "How to Tell – Software". microsoft.com. Retrieved 11 December 2016.
4. Federal Financial Institutions Examination Council (2008). "Authentication in an
Internet Banking Environment" (PDF). Retrieved 2009-12-31.
5. Committee on National Security Systems. "National Information Assurance (IA)
Glossary" (PDF). National Counterintelligence and Security Center. Retrieved 9
August 2016.
6. European Central Bank. "Recommendations for the Security of Internet
Payments"(PDF). European Central Bank. Retrieved 9 August 2016.
7. "FIDO Alliance Passes 150 Post-Password Certified Products". InfoSecurity
Magazine. 2016-04-05. Retrieved 2016-06-13.
8. Brocardo ML, Traore I, Woungang I, Obaidat MS. "Authorship verification using
deep belief network systems". Int J Commun Syst. 2017. doi:10.1002/dac.3259
9. "Draft NIST Special Publication 800-63-3: Digital Authentication Guideline".
National Institute of Standards and Technology, USA. Retrieved 9 August 2016.
10. Eliasson, C; Matousek (2007). "Noninvasive Authentication of Pharmaceutical
Products through Packaging Using Spatially Offset Raman Spectroscopy". Analytical
Chemistry. 79(4): 1696–1701. doi:10.1021/ac062223z. PMID 17297975. Retrieved 9
Nov 2014.
11. Li, Ling (March 2013). "Technology designed to combat fakes in the global supply
chain". Business Horizons. 56 (2): 167–177. doi:10.1016/j.bushor.2012.11.010.
Retrieved 9 Nov 2014.
12. "How Anti-shoplifting Devices Work", HowStuffWorks.com
13. Norton, D. E. (2004). The effective teaching of language arts. New York:
Pearson/Merrill/Prentice Hall.
14. McTigue, E.; Thornton, E.; Wiese, P. (2013). "Authentication Projects for
Historical Fiction: Do you believe it?". The Reading Teacher. 66: 495–
505. doi:10.1002/trtr.1132.
15. The Register, UK; Dan Goodin; 30 March 2008; Get your German Interior
Minister's fingerprint, here. Compared to other solutions, "It's basically like leaving the
password to your computer everywhere you go, without you being able to control it
anymore", one of the hackers comments.
16. https://technet.microsoft.com/en-us/library/ff687018.aspx
17. "AuthN, AuthZ and Gluecon – CloudAve". cloudave.com. 26 April 2010.
Retrieved 11 December 2016.
18. A mechanism for identity delegation at authentication level, N Ahmed, C Jensen
– Identity and Privacy in the Internet Age – Springer 2009

7 Methodology
7.1 Commentary
7.1.1 Introduction
This section reviews how some other technologies can contribute to language
processing. It consists of 22 further sub-sections reflecting the 20 theories that are
helpful. They are search theory, network theory, Markov theory, algebraic theory, logic
theory, programming language theory, geographic information systems, quantitative
theory, learning theory, statistics theory, probability theory, communications theory,
compiler technology theory, database technology, curve fitting, configuration
management, continuous integration/delivery and virtual reality. We summarise the
results in the last part of this section.
7.1.2 Methodology from Search Theory
7.1.2.1 Introduction
The operations research technique known as the theory of search is applied to the
study of a system. The results of this theoretical study have implications for the
characteristics of the personnel using the system and the rules governing the system.
The theory of search is used for firing missiles and prospecting minerals. It uses three
basic algorithms. The mean path theorem defines the chances of a missile hitting a
target. The optimal search effort procedure determines the best place to search based
on known probabilities and income from the targets. The clustering technique uses a
cheap scan to identify areas where more detailed searches can be performed.
7.1.2.2 Results
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints and long light fingers which allow pages
to slip through them when making a reference. Finger motion should be kept gentle,
within the range of movement and confined to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and need a minimum number of reference strategies. The
theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
7.1.3 Methodology from Network Theory
7.1.3.1 Introduction
This section introduces the concepts of network theory as a system. It considers ways in
which methods can help the validation of connections within such a system. A
description is given of the concepts of such a system based on a set of network theory
structures (nodes and edges) analogous to that used in the system. A set of algorithms
is defined, derived from network theory, which can be used to resolve ordered
elements, single-root trees and flows in networks. We developed the network structures
and algorithms to define a structure for entities with dependences. It demonstrates how
the method can be applied to the validation of a system for consistency, completeness
and use with and without dangles or loops.
7.1.3.2 Algorithms from Network Theory.
The algorithms from network theory follow the natural structures of a graph. They use
the adjacency matrix, the connectivity matrix and the flow matrix as the basis of the
analysis of the structures under consideration.
a. The first consideration is the property that nodes of a graph are adjacent. In an
undirected graph the adjacency matrix is symmetric about the major diagonal. For a
directed graph the symmetric property implies a loop.
b. The adjacency matrix of a sub-graph will have element values which are zero in the
same position as in the full graph adjacency matrix and may have a zero where there is
a one in the full graph adjacency matrix.
c. A bipartite graph is represented by a reachable matrix which shows a block structure.
d. A path from one node to another node is determined from the reachable matrix by the
common row numbers and column numbers which are covered in the row corresponding
to the starting node and the column corresponding to the end node.
e. A cycle or circuit is determined by diagonal entries of the reachable matrix being non
zero.
f. A tree is found when there are no non-zero diagonal elements in the reachable matrix.
g. A rooted tree is resolved through the condition that the reachable matrix has all
diagonal elements zero and only one row has all non-zero elements except on the
diagonal position.
h. A block structured graph is found through the circumstances where there is only one
element in a row of the reachable matrix.
i. A digraph is established when the non zero elements Aij of the adjacency matrix imply a
direction from node i to node j.
j. An ancestor and a descendent are determined in the same way as the derivation of a
path shown above.
k. A leaf is determined by finding the row of the adjacency matrix which is zero.
l. A terminal node in a network is decided through the method found as the leaf.
m. The height and depth are calculated by A(I - A^(n-1))D(I - A)^(-1) where A is the
adjacency matrix and D is the matrix with its elements all unity.
n. The flow through a node is derived from the formula A(I - A^(n-1))D(I - A)^(-1) where
A is the flow matrix and D is the vector of input flow values to the nodes.
o. The length of a network path is A(I - A^(n-1))D(I - A)^(-1) where A is the adjacency
matrix and D is the vector of input flow values to the nodes.
p. The critical path is defined by the algorithm finding the longest path in the network.
q. The number of paths from one node to another is determined by the value of the
corresponding element of the A^k matrix, where A is the adjacency matrix.
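As an illustration of item (q), a short sketch with an invented four-node graph: the number of length-k paths between two nodes is the corresponding entry of A^k.

# Path counting via powers of the adjacency matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])  # directed acyclic graph on 4 nodes

paths_2 = np.linalg.matrix_power(A, 2)
print(paths_2[0, 3])  # number of length-2 paths from node 0 to node 3 (here: 1)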
7.1.3.3 Results
There are six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
7.1.3.3.1 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There would be confusion, and the well-structured criterion described in
section 3.2.3 would highlight this case in the definition of the system by the fact that
there is such a connection.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.3.3.2 Consistency
An entity is accessed from two other different entities. What interpretation will be placed
on the meaning by the recipient entity? The consistency condition under portion 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.3.3.3 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after
review the network structure is adjusted as appropriate.
7.1.4 Methodology from Markov Theory
7.1.4.1 Introduction
This section introduces the concepts of Markov theory in a system. It considers ways
in which these methods can help the sizing of such a system. A description is given of
the concepts of such a system. It shows a set of Markov theory structures analogous to
that used in a system. A set of algorithms is defined, derived from Markov
theory, which can be used to resolve flows in networks. We developed the Markov
structures and algorithms to define a flow of information. It demonstrates how the
method can be applied to the sizing of a system and its parts.
7.1.4.2 Algorithms from Markov Theory.
The algorithms from Markov theory follow the natural structures of a graph. They use
the adjacency matrix, A, the connectivity matrix and the flow matrix as the basis of the
analysis of the structures under consideration for the sizing of properties by using
probabilities for the connections. They can use the matrices to reflect the transfer
between nodes of the graph one cycle at a time and assess the size of flow between
different nodes.
a. The first consideration is the property that vertices of a graph are adjacent. In an
undirected graph the adjacency matrix has no meaning. For a directed graph a non-
zero symmetric property about the major diagonal implies a loop of some form.
b. The adjacency matrix of a sub-graph will have element values which are zero in the
same position as in the full graph adjacency matrix and may have zero where there is
a one in the full graph adjacency matrix.
c. A bipartite graph is represented by a reachable matrix which shows a block
structure and sizes of transfer in the graph.
d. A quantity path from one node to another node is determined from the reachable
matrix by the common row numbers and column numbers which are covered in the row
corresponding to the starting node and the column corresponding to the end node.
e. A cycle or circuit size is determined by diagonal entries of the reachable matrix
being non zero.
f. A tree is found when there are no non-zero diagonal elements in the reachable matrix.
g. A rooted tree is resolved through the condition that the reachable matrix has all
diagonal elements zero and only one row has all non-zero elements except on the
diagonal position.
h. A block structured graph is found through the circumstances where there is only one
element in a row of the reachable matrix.
i. A digraph is established when the non-zero elements Aij of the adjacency matrix
imply a direction from node i to node j.
j. An ancestor and a descendent are determined in the same way as the derivation of a
path shown above.
k. A leaf is determined by finding the row of the adjacency matrix which is zero.
l. A terminal node in a network is decided through the method found as the leaf.
m. The height and depth are calculated by A(I - A^(n-1))D(I - A)^(-1) where A is the
adjacency matrix and D is the matrix with its elements all unity.
n. The flow through a node is derived from the formula A(I - A^(n-1))D(I - A)^(-1) where
A is the flow matrix and D is the vector of input flow values to the nodes.
o. The length of a network path is A(I - A^(n-1))D(I - A)^(-1) where A is the adjacency
matrix and D is the vector of input flow values to the nodes.
p. The critical path is defined by the algorithm finding the longest path in the network.
q. The number of paths from one node to another is determined by the value of the
corresponding element of the A^k matrix, where A is the adjacency matrix.
7.1.4.3 Results
Using the algorithms in the previous sub-section we can determine what nodes have
flow through them and which do not. We can find the edges that are used and those
unused. We can ascertain what the flow is between the nodes and which are single
entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network. Using the algorithms in the previous sub-section
we can determine what nodes have flow through them and which do not. We can find
the edges that are used and those unused. We can ascertain what the flow is between
the nodes and which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
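A hedged numeric sketch of the error-sink idea, with invented transition probabilities: the long-run probability mass in the sink state estimates the total error rate.

# Markov chain with an explicit error sink (state 3); states 2 and 3 absorb.
import numpy as np

P = np.array([[0.0, 0.9, 0.0, 0.1],
              [0.0, 0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])  # rows sum to 1

start = np.array([1.0, 0.0, 0.0, 0.0])
dist = start @ np.linalg.matrix_power(P, 50)  # distribution after many steps
print(f"estimated total error rate: {dist[3]:.2f}")  # 0.1 + 0.9*0.2 = 0.28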
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
7.1.5 Methodology from Algebraic Theory
7.1.5.1 Introduction
This section looks at the contribution that algebra can make to a system that we shall
use later in this paper. The algebraic system starts with a set of entities which have
operations. Formally we define a set and a function that maps domain elements onto
range elements. We define the type of function of the sets that we are considering and
define the associated domain and range.
7.1.5.2 Techniques of Algebraic Theory
We start any algebraic system with a set of elements. The elements have a set of
operations which are subject to various rules, and a set of properties linking them to
each other. The operations that are of use to us in the fields of linguistics are those of
combination and valuation. Other operations that are helpful to
us are those following the associative, commutative and distributive laws. Our basic
system elements are entities, pictures or sounds which combine to services,
sentences, paragraphs, etc. respectively. We obtain meaning to the combinations using
the valuation technique which becomes a more complicated procedure as we continue
to greater quantities of natural language. We use relationships between parts of
language to define difference such as generalisation of a concept or specialisation. We
use equivalence of different elements to give a more varied experience of the language.
7.1.5.3 Results
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a linguistic system. The basic elements are derived from text,
pictures and/or sound. We restrict these basic elements by specifying what is allowed.
We apply rules of combination to the elements to form larger elements that we classify
as compound elements for which we have rules to say what is correct and what is
erroneous. We iterate on the combination for more elements to be validated against
format rules and evaluation rules.
We use valuation rules to classify symbols and numbers and to evaluate numbers. The
elements are classified into compound elements to help the format rules. Evaluation
rules give meaning (valuation) to elements and compound elements. Relations are
derived from another set of operations which give links such as synonyms, antonyms,
generalisation, and specification based on properties of the elements. Conjunctions
give ways of replicating actions and elements. Other rules are ways of defining
properties of objects or operations whilst some apply to the scope of meaning and the
scope of an object or operation or value.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
7.1.6 Methodology from Logic Theory
7.1.6.1 Introduction
This section looks at the contribution that logic can make to a system that we shall
use later in this paper. A logic system starts with a set of primitive elements with
logical values which can be combined to give statements of logical values and consider
the system for consistency, validity, soundness, and completeness. Formally we define
a set of sentences and derive values from them and then check that they are
consistent, valid, sound, and complete.
7.1.6.2 Techniques of Logic Theory
We start any logic system with a system which is a set of primitives. We give the
primitives values for their truth. The primitives include operations subject to various
rules. The operations that are of use to us in the fields of linguistics are those of
combination and valuation. Our basic system elements are entities, pictures or sounds
which combine to services, sentences, paragraphs, etc. Other operations give
associative, commutative and distributive laws for the evaluation of the combinations.
We give meaning to the combinations using the valuation technique, which becomes a
more complicated procedure as we continue to greater quantities of natural language.
We derive relationships from parts of language to define difference such as
generalisation of a concept or specialisation. We use equivalence of different elements
to give a more varied experience of the language.
7.1.6.3 Results
We have used the scheme of logic theory to give us a set with elements and functions
to be a basis of a linguistic system. We have primitives which are derived from pictures
and/or sound and have values associated with them. We restrict these basic elements
by specifying what is allowed with the rules in the logic. We apply rules of combination
to the elements to form larger elements that we classify as services for which we have
rules to say what is correct and what is erroneous. We iterate on the combination to
form more elements to be validated against syntactic rule and eventually semantic
rules.
We use valuation rules to classify services and numbers and to evaluate the meaning.
The services are classified into parts of speech to help the syntactic rules. The valuation
technique gives meaning to services and combinations of them. Relations are derived from
another set of primitive functions which give links such as synonyms, antonyms,
generalisation, and specification based on values of entities. Conjunctions give ways of
replicating actions and elements with distributive laws defined by the logic. Other parts
of speech are ways of defining rules for properties of objects or operations whilst some
give the scope of meaning and the scope of an object, operation or value.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
7.1.7 Methodology from Programming Language Theory
7.1.7.1 Introduction
This section reviews the history of programming languages starting with the ways in
which they have been formalised over the years. The concepts of sound and graphics
are included into the study. It goes on to recount the ways that object oriented
techniques have been reflected in terms of natural language.
7.1.7.2 Results
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language. From
graphics and sound technologies we find a similar kind of definition. We discover we
need to set a priority of the rules for catching idioms such as "jack in a box" = toy.
Object oriented programming gives us the concept of scope for meaning, nouns being
objects, adjective as properties, verbs as methods with arguments of nouns and
adverbs, pronouns the equivalent of the "this" operator and the concepts of synonyms,
generalisation and specification. Overloading of definitions allows for meaning to
change according to context. Conjunctions give ways of replicating actions (iterations)
under different cases and follow distributive rules for evaluation. Other parts of speech
are ways of defining properties of objects or actions with polymorphism for nouns and
verbs. Novels are analogous to procedures that do little action and return no results on
exit. Research journals provide articles which are likened to procedures with results on
exit. People specialise in the different packages/libraries that they use. Experts extend
their knowledge network in one particular way whilst slaves do it in another.
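A small illustrative sketch of this object-oriented analogy (class and method names are invented): nouns become classes, adjectives become properties, and verbs become methods taking nouns and adverbs as arguments.

# Nouns as classes, adjectives as properties, verbs as methods with adverbs.
class Noun:
    def __init__(self, name: str, **adjectives):
        self.name = name
        self.adjectives = adjectives  # adjectives stored as properties

class Dog(Noun):
    def chase(self, obj: Noun, manner: str = "quickly"):  # verb with adverb
        colour = self.adjectives.get("colour", "")
        return f"The {colour} dog chases the {obj.name} {manner}"

print(Dog("dog", colour="brown").chase(Noun("cat")))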
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
7.1.8 Methodology from Quantitative Theory
7.1.8.1 Introduction
This section considers the quantitative aspects of languages. It reflects on software
physics developed by Halstead. It goes on to review extensions of predicted value
based on work of the mean path theorem. Then we consider some interesting results for
measures of complexity and develop formulae for the best strategy of search. Using this
formula we show that we need to search an area the number of times that we expect to
find separate targets and that the value of a measure for the best strategy is unity when
we need to find 6.4 targets.
7.1.8.2 Results
Software physics, introduced by Halstead, led to the following relations for programs
and languages, with deviations due to impurities in programs:
n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
N1 = n1 log n1 (estimated)
N2 = n2 log n2 (estimated)
n = program vocabulary = n1 + n2
n* = n
N = program length = N1 + N2
N* = N1 log n1 + N2 log n2
V = actual program volume = N log n
V* = theoretical program volume = N* log n*
L = V*/V = program level
λ = LV* = programming language level
S = Stroud number
m = V/L = number of mental discriminations
d = m/S = development time
Mohanty showed that the error rate E for a program is given by
E = n1 log n / (1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C) / m! = probability of hitting the target m times for a coverage ratio C
C = n a s t / z = coverage ratio, the ratio between the area covered by the search
process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation, relating the search effort to the search results over an average search
area, explains software physics in terms of search actions.
The N relation shows that the number of targets can be calculated as the average
number of attempts in a particular search area. Specifically, we can estimate the number
of checks n that we can expect to apply to find m errors in a text of size A, or the number
of rules n that we expect to apply when writing a text of m units in a language of size z.
Conversely, the M relation gives us the expected number of errors, or the number of
statements, when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time of a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst
the s, t and a relations reflect the relations between S, T and A described earlier; m
shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values:
m          0    6.4
mP(m,m)    0    1

mP(m,m)=0 when m = 0 or -0.5665


The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times.
The function m^(m+1) e^(-m)/m! is increasing for m above zero and corresponds to a
measure of complexity with a value of 1 for m = 6.4 approximately, the lucky
seven.
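A quick numeric check of this claim (a sketch, using the Poisson form P(m,C) = C^m e^(-C)/m! given earlier; by Stirling's approximation mP(m,m) is roughly sqrt(m/2π), which reaches 1 near m = 2π, about 6.3):

# mP(m,m) crosses 1 between m = 6 and m = 7.
from math import exp, factorial

def mP(m: int) -> float:
    return m * (m ** m) * exp(-m) / factorial(m)

for m in range(1, 9):
    print(m, round(mP(m), 3))  # e.g. mP(6) = 0.964, mP(7) = 1.043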
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
7.1.9 Methodology from Learning Theory
7.1.9.1 Introduction
This section introduces the effects from learning theory. It starts with 3 methods that
are used for evaluating a learning hypothesis. It shows how learning can vary by
considering the effects of repetitive reinforcement, then the effects of standardisation
and then the effects of tiredness. The story is continued by watching the way children
are taught from early years through pictures, sound and written form. We study the
effects of making errors and how we overcome them. We see how rules are developed
for children to determine easier ways of remembering the language and facts they need
to know. We describe techniques to determine heuristics for remembering and learning
by induction and deduction.
7.1.9.2 Results
7.1.9.2.1 General Methods
Learning is the process of finding how to improve the state in some environment. It can
be done by observation or by training. There are 2 different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), …, P(vn)) = Σ_{i=1}^{n} -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))

The information gain for a chosen attribute A divides the training set E into subsets
E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ_{i=1}^{v} ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))

The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
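A minimal sketch of the information-gain calculation above for a binary-labelled training set (the toy counts are invented):

# Entropy and information gain for a p-positive / n-negative training set.
from math import log2

def entropy(p: int, n: int) -> float:
    total = p + n
    out = 0.0
    for c in (p, n):
        if c:
            out -= (c / total) * log2(c / total)
    return out

def information_gain(p, n, splits):
    # splits: list of (p_i, n_i) pairs, one per distinct attribute value
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni) for pi, ni in splits)
    return entropy(p, n) - remainder

# 6 positive / 6 negative examples; the attribute splits them into two subsets
print(information_gain(6, 6, [(4, 1), (2, 5)]))  # about 0.196 bits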
Learning viewed as Bayesian updating of a probability distribution over the hypothesis
space uses predictions from a likelihood-weighted average over the hypotheses to
assess the results, but this can be too problematic. This can be overcome with maximum
a posteriori (MAP) learning, which chooses the hypothesis that maximises the probability
of the training data: expressing it in terms of the full data for each hypothesis and taking
logs gives a measure of the bits needed to encode the data given the hypothesis plus the
bits to encode the hypothesis (minimum description length). For large datasets, we can
use maximum likelihood (ML) learning by maximising the probability of all the training
data per hypothesis, giving standard statistical learning.
To summarise full Bayesian learning gives best possible predictions but is intractable,
MAP learning balances complexity with accuracy on training data and maximum
likelihood assumes uniform prior, and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires
substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may
require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, though modern optimization techniques do help.
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.9.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n00, D) = [n0 (1 - f^(ak)) 2^(1-D) + n00 (Gs + Gf) f^(KT) (1 - f^(aDK))]
               / [(1 - f^(ak)) 2^(1-D) + (Gs + Gf) f^(KT) (1 - f^(aDK))]
where Gs is the reinforcement for each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0<f<1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications, where u0 applies to the untrained
user, u∞ to the fully trained user, and u'0, u'∞ are the corresponding values under the
changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax (1 - U1 (m - m1)^2) + uijmin U1 (m - m1)^2
where the uij have minimum values uijmin and maximum values uijmax, m1 is the value
of m giving maximum productivity and U1 is a normalising factor dependent on the
energy consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.9.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules to basic
concepts, then to combinations of concepts, through to pragmatics. They apply a
bottom-up analysis, as in a compiler, to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules that give
them generalisation, e.g. animals, and specification, e.g. white-tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background the
language will be understood in different ways.
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.9.2.4 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value: the
first is numeric and the second is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification,
when the value cannot be specified. The value of each characteristic can change over
a set of time periods, so that at period k the value of characteristic i is bik, which can
take one of the pi values ai1, …, aipi. The values bik reflect the profile of the
system at period k for all the n characteristics and the variation of a characteristic i
over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = Σ r l=1 cikl/r
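A minimal sketch of this imputation rule, with None standing for a "don't know" value (toy data):

# Impute a missing profile value as the mean of the known values.
def impute(values):
    known = [v for v in values if v is not None]
    return sum(known) / len(known) if known else None

# characteristic i at period k across elements: two known, one missing
print(impute([4.0, None, 6.0]))  # -> 5.0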
Statistics can be calculated for a system from the values of the profile characteristics bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for
i = q + 1, …, n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q, the student supplies the fi, and the computer
performs the check as stated above.
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
7.1.10 Methodology from Statistics Theory
7.1.10.1 Introduction
This section deals with statistics theory. Statistics studies the collection, analysis,
interpretation, presentation, and organization of data. It works with populations and
processes. Statisticians develop experiment designs and survey samples
and ensure inferences and conclusions can be extended from the sample to the
population. Experiments take measurements of the system, change the system, and
take additional measurements with the same procedure to find the changed
measurements. An observational study does not involve experimental manipulation. We
use descriptive statistics to summarize sample data with indexes and inferential
statistics to draw conclusions from data.
7.1.10.2 Results
We use the network model described above to give a basis for the collection of data
about the language. When we consider the occurrence of an event in language research
we are talking about events, recurring events or choices of event. In the case of strings
of occurrences we have the count of uses of a particular entity. We use the logical
'and' operator for groups of entities based on the recurrent use of an entity. When
we are considering the correctness of the alternatives of entities in a service we use
the logical 'or' operation. When we come across a situation where one entity for a
particular language implies that we will always have to use specific further entities, we
use the dependent forms of the 'and' and 'or' logical operations. The structures of
linguistics imply a network form and we can use the techniques described in the part
on network structures.
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
7.1.11 Methodology from Probability Theory
7.1.11.1 Introduction
This section deals with probability. It defines the term and goes onto describe the
logical operators used in the work of linguistics. Finally the correspondence with
network structures is demonstrated.
7.1.11.2 Results
Probability is a measure of the likelihood that an event will occur.

Summary of probabilities
Event       | Probability
A           | P(A)
not A       | P(¬A) = 1 - P(A)
A or B      | P(A ˅ B) = P(A) + P(B) - P(A ˄ B)
A and B     | P(A ˄ B) = P(A│B) P(B)
A given B   | P(A│B) = P(A ˄ B) / P(B)

When we consider the probability of an event in language research we are talking about
events, recurring events or choices of event. In the case of strings of occurrences we
have the probability of selecting the correct entity. We use the logical 'and' operator for
selecting groups of entities based on the recurrent selection of an entity. When we are
considering the correctness of the alternatives of entities in a service we use the
logical 'or' operation. When we come across a situation where one entity for a particular
language implies that we will always have to use specific further entities, we use
the dependent forms of the 'and' and 'or' logical operations. The structures of linguistics
imply a network form and we can use the techniques described in the part on network
structures.
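A small numeric sketch of the dependent 'and' and independent 'or' rules above, with assumed illustrative probabilities:

# Dependent 'and' uses a conditional probability; 'or' here assumes independence.
p_a = 0.6           # P(A): first entity is used
p_b_given_a = 0.9   # P(B|A): second entity follows once A is used
p_b = 0.7           # P(B) on its own

p_a_and_b = p_a * p_b_given_a         # dependent 'and': P(A ˄ B) = P(B|A) P(A)
p_a_or_b = p_a + p_b - p_a * p_b      # 'or' for independent events
print(p_a_and_b, p_a_or_b)            # 0.54, 0.88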
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
7.1.12 Methodology from Geographic Information Systems
7.1.12.1 Introduction
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. It is applied to engineering, planning, management,
transport/logistics, telecommunications, and business. It can relate objects to
locations and possibly time.
GIS uses digital information, collected by digitizing paper maps or survey plans with a
CAD program and by ortho-rectified imagery. The data can be represented as discrete
objects and continuous fields, stored as raster images and vectors. Displays can
illustrate and analyse features and enhance descriptive understanding and intelligence.
7.1.12.2 Results
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
and continuous fields, as raster images and vector data. Displays can illustrate and
analyse features and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
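As a minimal sketch of relating objects to locations, the Python fragment below stores hypothetical vector point features and answers a bounding-box query; the feature names and coordinates are illustrative only.

# A minimal sketch of discrete vector features queried by location.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    lon: float
    lat: float

features = [Feature("gateway-1", -0.12, 51.50), Feature("sensor-7", 2.35, 48.86)]

def in_bbox(f, min_lon, min_lat, max_lon, max_lat):
    """Discrete-object query: is the feature inside the rectangle?"""
    return min_lon <= f.lon <= max_lon and min_lat <= f.lat <= max_lat

hits = [f.name for f in features if in_bbox(f, -1.0, 50.0, 1.0, 52.0)]
print(hits)  # -> ['gateway-1']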
7.1.13 Methodology from Communications Theory
7.1.13.1 Introduction
This section investigates results from communications theory. It first discusses the
role of communications models in computing and then goes on to review the
communications of people from the viewpoint of artificial intelligence.
7.1.13.2 Results
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between the parts of a system so that they
share the same language. Elements consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing). Protocol architecture is the task of
communication broken up into modules. At each layer, protocols are used to
communicate and control information is added to user data at each layer.
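A minimal sketch of this layering idea follows: each layer adds its own control information to the user data on the way down and strips it on the way up. The three-layer model and the header formats are invented for the example.

# A minimal sketch of layered protocol architecture: each layer adds its own
# control information (a header) and the receiver strips it in reverse order.
def send(user_data: bytes) -> bytes:
    segment = b"APP|" + user_data          # application layer control info
    packet = b"NET|" + segment             # network layer control info
    frame = b"LNK|" + packet               # link layer control info
    return frame

def receive(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"LNK|")   # each layer strips its own header
    segment = packet.removeprefix(b"NET|")
    return segment.removeprefix(b"APP|")

assert receive(send(b"hello")) == b"hello"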
Grammar specifies the compositional structure of complex messages. A formal
language is a set of strings of terminal symbols. Each string in the language can be
analysed/generated by the grammar. The grammar is a set of rewrite rules over
non-terminals. Parse trees demonstrate the grammatical structure of a message, so
viewing structure is an essential step towards meaning. If we add dialogues to the
non-terminals used to construct messages, communications can be incorporated into protocols.
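As a small illustration, the Python sketch below treats a formal language as rewrite rules over non-terminals and generates strings from it; the message grammar itself is hypothetical, not one taken from this paper.

# A minimal sketch: a grammar as rewrite rules over non-terminals, used here
# to generate strings (one half of the analyse/generate pair described above).
import random

GRAMMAR = {
    "MESSAGE": [["COMMAND"], ["QUERY"]],
    "COMMAND": [["set", "ENTITY", "VALUE"]],
    "QUERY":   [["get", "ENTITY"]],
    "ENTITY":  [["temperature"], ["status"]],
    "VALUE":   [["high"], ["low"]],
}

def generate(symbol: str) -> list[str]:
    if symbol not in GRAMMAR:              # terminal symbol of the language
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [t for s in production for t in generate(s)]

print(" ".join(generate("MESSAGE")))  # e.g. "get status"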
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
7.1.14 Methodology from Compiler Technology Theory
7.1.14.1 Introduction
This section reviews the makeup of compilers, in particular the Production Quality
Compiler-Compiler Project of Carnegie Mellon University, and goes on to review how we
can apply the techniques to the language studies. We first give an overall guide to
general compiler workings. We then give a more detailed description of one particular
implementation viz. the Production Quality Compiler-Compiler Project of Carnegie
Mellon University. Next we follow up the previous ideas with the way in which the ideas
may be applied to language studies.
7.1.14.2 Results
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a sequence of operations: lexical analysis, pre-
processing, parsing, semantic analysis (syntax-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-
processor directive language is only weakly related to the programming language. The pre-
processor is often used to include other files. It replaces the directive line with the text
of the file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets of code which
can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
syntax and semantics, and generates an intermediate representation. It generates
error and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing comprise the syntactic analysis of words and phrases and can be
automatically generated from the grammar for the language. Context-sensitivity that
the lexical and phrase grammars cannot express is handled at the semantic analysis
phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
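To make the front-end pipeline concrete, the sketch below runs lexing, a recursive-descent parse and emission of a simple three-address intermediate representation for expressions such as "a + b * c"; the token set and IR format are illustrative and are not the PQCC design.

# A minimal front-end sketch: lexing, parsing, and emission of three-address
# intermediate code. The grammar covers only identifiers, '+' and '*'.
import itertools
import re

def lex(src: str) -> list[str]:
    """Lexical analysis: split the source into identifier and operator tokens."""
    return re.findall(r"[a-z]+|[+*]", src)

def parse(tokens: list[str]):
    """Recursive descent: expr -> term ('+' term)*, term -> id ('*' id)*."""
    def term(i):
        node, i = tokens[i], i + 1
        while i < len(tokens) and tokens[i] == "*":
            node, i = ("*", node, tokens[i + 1]), i + 2
        return node, i
    node, i = term(0)
    while i < len(tokens) and tokens[i] == "+":
        rhs, i = term(i + 1)
        node = ("+", node, rhs)
    return node

_temps = itertools.count(1)

def emit_ir(node, out: list[str]) -> str:
    """Flatten the parse tree into three-address intermediate code."""
    if isinstance(node, str):
        return node
    op, lhs, rhs = node
    left, right = emit_ir(lhs, out), emit_ir(rhs, out)
    tmp = f"t{next(_temps)}"
    out.append(f"{tmp} = {left} {op} {right}")
    return tmp

ir: list[str] = []
emit_ir(parse(lex("a + b * c")), ir)
print("\n".join(ir))   # t1 = b * c, then t2 = a + t1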
For language processing we start with a set of basic concepts of picture/sound. We
start applying rules for basic entities, then services, then combinations of services,
through standard and technique to pragmatics. We can apply a bottom-up "standard analyser"
as in a compiler to give us rules to add to our knowledge universe. The priority of the
rules gives us ways of catching idioms such as "jack in a box" = toy. We have rules to
give us generalisation e.g. animals and specification e.g. white tailed frog. Nouns
define entities, verbs actions, pronouns the equivalent of "this" in object oriented
programming. Conjunctions give ways of replicating actions under different cases and
follow distributive rules for evaluation. Other parts of speech are ways of defining
properties of objects or actions. Novels provide enclosed knowledge to be deleted on
exit, as in some procedures. Research journals provide enclosed knowledge that is not
deleted, as in procedures with own variables on exit. Experts extend their knowledge
network in one particular way whilst slaves do it in another. We can prioritise rules and
develop measures to determine the priority of meaning throughout a particular network
of experience and learning. 
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
7.1.15 Methodology from Database Technology
7.1.15.1 Introduction
A database is an aggregation of data to support the modelling of processes using the
data. A database management system is a software application using the data in the
database for users and other applications to collect and analyse the data. It allows the
definition (create, change and remove definitions of the organization of the data using a
data definition language (conceptual definition)), querying (retrieve information usable
for the user or other applications using a query language), update (insert, modify, and
delete of actual data using data manipulation language), and administration (maintain
users, data security, performance, data integrity, concurrency and data recovery using
utilities (physical definition)) of the database. Databases and DBMSs are classified by
the database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security. The database
technology follows the conceptual model (data model structure) i.e. navigational,
relational and post-relational. Navigational data models have examples as the
hierarchical model e.g. IBM's IMS and network model e.g. CODASYL model
implemented as Univac’s DMS. A relational database (exemplified by DB2 and Oracle) is
a set of tables containing data fitted into predefined categories. Each table (relation)
contains one or more data categories in columns. Each row contains a unique instance
of data for the categories defined by the columns (each with constraints that apply to
the data value). The definition of a relational database is a table of metadata (formal
descriptions of the tables, columns, domains, and constraints). Post-relational
databases (e.g. NoSQL/MongoDB and NewSQL/ScaleBase) are derived from object
databases, developed to overcome the problems met in combining object programming
with relational databases, and from hybrid object-relational databases. They use fast
key-value stores and document-oriented databases with XML to give interoperability
between different implementations.
So far we have classified databases by architecture. We can define them by content
e.g. bibliographic, document-text, statistical, or multimedia objects. Other categories
are:
• in-memory database
• event-driven architecture database
• cloud database
• data warehouse
• deductive database
• distributed database
• embedded database
• end-user databases
• federated database system
• multi-database
• graph database
• array database
• hypertext hypermedia database
• knowledge base
• mobile database
• operational databases
• parallel database
• probabilistic database
• real-time database
• spatial database
• temporal database
• terminology-oriented database
• unstructured data database
Logical data models include:
• Hierarchical database model
• Network model
• Relational model
• Entity–relationship model
• Enhanced entity–relationship model
• Object model
• Document model
• Entity–attribute–value model
• Star schema
• An object-relational database combines the two related structures.
Physical data models include:
• Inverted index
• Flat file
Other models include:
• Associative model
• Multidimensional model
• Array model
• Multivalue model
• Semantic model
• XML database
7.1.15.2 Results
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of language studies.
The applications are bibliographic, document-text, statistical and multimedia objects.
The database management system must support users and other applications to
collect and analyse the data for language processes. The system allows the definition
(create, change and remove definitions of the organization of the data using a data
definition language (conceptual definition)), querying (retrieve information usable for
the user or other applications using a query language), update (insert, modify, and
delete of actual data using a data manipulation language), and administration (maintain
users, data security, performance, data integrity, concurrency and data recovery using
utilities (physical definition)) of the database. The database model most suitable for the
applications relies on post-relational databases (e.g. NoSQL/MongoDB or
NewSQL/ScaleBase), which are derived from object databases, developed to overcome
the problems met in combining object programming with relational databases, and from
hybrid object-relational databases. They use fast key-value stores and document-oriented
databases with XML to give interoperability between different implementations.
Other requirements are:
• event-driven architecture database
• deductive database
• multi-database
• graph database
• hypertext hypermedia database
• knowledge base
• probabilistic database
• real-time database
• temporal database
Logical data models are:
• object model
• document model
• object-relational database combines the two related structures.
Physical data models are:
• Semantic model
• XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
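As a minimal sketch of the post-relational, key-value and document-oriented style described above, the Python fragment below stores records as XML documents under keys; the record schema and key names are invented for the example.

# A minimal sketch: a fast key-value store holding document-oriented records
# serialised as XML for interoperability between implementations.
import xml.etree.ElementTree as ET

store = {}  # key-value store: key -> XML document string

def put(key: str, entity: str, status: str) -> None:
    doc = ET.Element("record")
    ET.SubElement(doc, "entity").text = entity
    ET.SubElement(doc, "status").text = status
    store[key] = ET.tostring(doc, encoding="unicode")

def get(key: str) -> dict:
    doc = ET.fromstring(store[key])
    return {child.tag: child.text for child in doc}

put("device:42", "temperature-sensor", "active")
print(get("device:42"))  # {'entity': 'temperature-sensor', 'status': 'active'}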
7.1.16 Curve Fitting
Curve fitting constructs a curve or mathematical function that best fits a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" function approximating the data.
Regression analysis gives a measure of uncertainty of the curve due to random data
errors. The fitted curves help picture the data and estimate values of a function for
missing data values. They also summarize the relations of the variables. Extrapolation
takes the fitted curve to calculate values beyond the range of the observed data, with
additional uncertainty depending on which particular curve has been determined. Curve
fitting relies on various types of constraints such as a specific point, angle, curvature
or other higher-order constraints, especially at the ends of the run of points being
considered. The number of constraints sets a limit on the number of combined functions
defining the fitted curve, and even then there is no guarantee that all constraints are
met or that the exact curve is found. Curves are assessed by various measures, a
popular procedure being the least squares method, which measures the deviations of
the fitted curve from the given data points.
If any error is found then an error report is generated as a device stack and position;
it is then evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
7.1.17 Configuration Management
7.1.17.1 Introduction
Configuration management gives an organised method for setting up and ensuring
consistency of a product throughout its life cycle from requirements via acceptance to
operations and maintenance through visibility and control of the reference attributes. It
gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation. It sets a technical and
administrative direction to policies, procedures, techniques, and tools for managing,
evaluating proposed changes, tracking status of changes, and maintaining the
inventory of system and support documents as the system changes. It emphasises
personnel, responsibilities and resources, training requirements, administrative
meeting guidelines, definition of procedures and tools, base-lining processes,
configuration control, configuration status accounting, naming conventions, audits and
reviews, and subcontractor/vendor requirements.
Configuration management is broken down into 4 aspects, viz. configuration
identification, configuration control, configuration status accounting and configuration
audits. Configuration identification defines all attributes of the item, with an end-user
purpose, documented and base-lined for later tracking of changes. Configuration
change control comprises the processes and approval stages for changing an item's
attributes and making a new baseline. Configuration status accounting records and reports on the
baselines for each item at any time. Configuration audits occur at delivery or
completing the change to check that all requirements have been achieved.
7.1.17.2 Results
Configuration management requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
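As a minimal sketch of configuration identification and audit, the Python fragment below base-lines an item's attributes with a digest and detects a later change; the item and its attributes are hypothetical.

# A minimal sketch: baseline an item's attributes, then audit against it.
import hashlib
import json

def baseline(attributes: dict) -> str:
    """Configuration identification: a stable digest of the item's attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

item = {"name": "gateway-fw", "version": "1.4.2", "checksum": "ab12"}
base = baseline(item)

item["version"] = "1.4.3"                  # a proposed change to the item
changed = baseline(item) != base           # configuration audit against baseline
print("change detected:", changed)         # -> change detected: True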
7.1.18 Continuous Integration
7.1.18.1 Introduction
Continuous integration (compared with frequent integration) merges all developer
working copies to a shared mainline several times a day to prevent integration
problems. It uses automated unit tests for test-driven development. It verifies all unit
tests in the developer's local environment before committing to the mainline to avoid
corrupting other developers' work. The concept evolved into build servers
automatically running unit tests periodically or after every commit and reporting the
results to developers. It extended to applying quality control in general, in small pieces
of effort applied frequently: executing the unit and integration tests together with static
and dynamic tests, measuring and profiling performance, extracting and formatting
documentation from the source code, and helping other QA processes to improve the
quality and reduce delivery time.
The process needs a version control system used by all to make a buildable system on
each fresh checkout without additional dependencies. The tools should support
branching but this should be minimal and built into the main production line. The
control system should support atomic commits, i.e. all of a developer's changes may be
seen as a single commit operation. A build process which is triggered on every commit
to the repository or periodically should be fast and automated to include compiling
binaries, generating documentation, website pages, statistics and distribution media,
integration, deployment into a production-like environment so that it is self-testing i.e.
the code is built and all tests should run to confirm that it behaves as it should.
Everyone commits to the baseline every day so that conflicting changes stay small and
can be resolved quickly, and every completed work unit committed to the baseline
should be built again to surface and resolve conflicts.
The test system is a clone of the production environment to reduce conflicts between
any test environment and deployment into production. The pre-production test
environment should be a scalable version of the actual production environment to help
minimise costs and verify the production system by using service virtualisation for
dependencies. If it is easy to use the latest deliverables for stakeholders and testers
then rework of new features can be reduced whilst defects can be shown up easily.
Automation can be extended to deployment to the pre-production system and
eventually the production environment after defects or regressions have been passed.
Continuous integration saves cost and time by:
• detecting integration bugs early with small change sets
• avoiding mass integrations near release dates
• reverting small changes due to failure of unit tests or bugs
• constant availability of a "current" build for testing, demo, or release purposes
• modular, less complex code
Automated testing gives:
• frequent testing
• fast feedback on impact of local changes
• collecting metrics on code coverage, code complexity, and features complete
• concentration on functional, quality code, and team momentum
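As a minimal sketch of a commit-triggered build server, the Python fragment below runs a stubbed build and unit-test suite on every commit and reports the result back to the developer; the build and test functions are illustrative stand-ins.

# A minimal sketch: on every commit, build, run the tests, report the result.
def build() -> bool:
    return True                     # stub: compile binaries, generate docs, media

def unit_tests() -> list[str]:
    return []                       # stub: return names of failing tests, if any

def on_commit(author: str) -> None:
    """Triggered on every commit to the shared mainline."""
    if not build():
        print(f"report to {author}: build broken")
        return
    failures = unit_tests()
    if failures:
        print(f"report to {author}: failing tests {failures}")
    else:
        print(f"report to {author}: green build, safe to integrate")

on_commit("dev-1")   # -> report to dev-1: green build, safe to integrate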
7.1.18.2 Results
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository, performs a build and runs a set of automated tests to
ensure that their environment is valid for update. He performs his update work and
rebuilds the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualisation for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests together with static
and dynamic tests, measuring and profiling performance to confirm that the system
behaves as it should. He resubmits the updates to the repository, which triggers another
build process and tests. The new updates are committed to the repository when all the
tests have been verified; otherwise they are rolled back. At that stage the new system is available to stakeholders and testers. The
build process is repeated periodically with the tests to ensure that there is no
corruption of the system.
The advantages are derived from frequent testing and fast feedback on impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
7.1.19 Continuous Delivery
7.1.19.1 Introduction
In continuous delivery, teams produce software in short cycles so that a system
release is possible at any time. It does the build, test, and release phases faster and
more frequently, reducing the cost, time, and risk of delivered changes with small
incremental updates. A simple and repeatable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment.
Visibility analyses the activities, viz. build, deploy, test, and release, and reports the
status to the development team. Feedback informs the team of problems so that
they can be resolved quickly. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
7.1.19.2 Results
In continuous delivery, teams produce software in short cycles so that a system
release is possible at any time. It does the build, test, and release phases faster and
more frequently, reducing the cost, time, and risk of delivered changes with small
incremental updates. A simple and repeatable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment.
Visibility analyses the activities, viz. build, deploy, test, and release, and reports the
status to the development team. Feedback informs the team of problems so that
they can be resolved quickly. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
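A minimal sketch of such a deployment pipeline follows: each stage reports its status for visibility, and a failure produces feedback and stops the release. The stage names mirror the text; the stage bodies are stubs.

# A minimal sketch of a deployment pipeline with visibility and feedback.
STAGES = ["build", "deploy", "test", "release"]

def run_stage(name: str) -> bool:
    print(f"status: {name} started")       # visibility for the development team
    ok = True                              # stub: real work would happen here
    print(f"status: {name} {'passed' if ok else 'FAILED'}")
    return ok

def pipeline() -> bool:
    for stage in STAGES:
        if not run_stage(stage):
            print(f"feedback: fix {stage} before continuing")  # fast feedback
            return False
    return True                            # the system is releasable at any time

pipeline()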
7.1.20 Virtual Reality
7.1.20.1 Introduction
Virtual reality is an immersive multimedia or computer-simulated reality which
replicates an environment and simulates the user's presence in it and interaction
through sight, touch, hearing, and smell. Displays use a computer screen or a
special headset, and some simulations include sound. Some systems use sight tracking
or tactile information, e.g. force feedback in medical, gaming and military applications.
Other aspects use remote communication to give virtual presence for
telepresence and telexistence, or a virtual artifact through standard input devices or
multimodal devices, e.g. a wired glove, data suit or treadmill. An immersive environment
presents a real world for a life-like experience, e.g. flight simulation or video games.
Virtual reality is used in education, training (e.g. combat situations in multiple
environments), entertainment and exposure therapy for phobia treatment. It is used
with artificial intelligence to react in different ways or to assess the results in the
environment. It allows reset times to be truncated and more repetitions of a process in
a shorter amount of time. There is the opportunity to reduce the transfer time
between the simulated and real environments, with corresponding safety, economy and
possible absence of pollution. It is applied to geographic and geometric situations for
the demonstration of architectural, urban planning, civil engineering and surveying work.
7.1.20.2 Results
Virtual reality simulates an environment incorporating the user's presence and
interaction through sight, touch, hearing, and smell. It uses a screen or a special
headset to display sight and sound information. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data assist the technology.
If any error is found then an error report is generated and displayed as a device stack
and position; it is then evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
7.1.21 Summary
This section has reviewed how some other technologies can contribute to language
processing. It consisted of 19 further sub-sections reflecting the 19 theories that are
helpful. They are search theory, network theory, Markov theory, algebraic theory, logic
theory, programming language theory, quantitative theory, learning theory, statistics
theory, probability theory, communications theory, compiler technology theory,
database technology, geographic information systems, curve fitting, configuration
management, continuous integration/delivery and virtual reality. We summarise them now.
We have studied a theory for systems based on the operations research technique
known as the theory of search. We found that there were restrictions between the user,
the system and its documentation which resulted in a measurable set of requirements
and a method of assessing how well the system, the system user and the
documentation measure up to the requirements.
A communications model extended the definition of the system requirements for
source, transmission, reception and destination. It defined levels of support,
coordination and corrective recovery of errors. It developed descriptions of the
problems associated with networks due to noise.
There were six validation cases discussed for network analysis. They were:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examined the algorithms based on affinity, connectivity and flow matrices. Using
Markov processes, we determined the flow through nodes and edges and partitioning of
the systems. By including an error sink we resolve probabilities of error for each of the
node states. We used this network analogy with probability to provide overall
probabilities for elements where choices are splits at nodes and combination is
expressed as loops or sub-nodes. We showed how statistics from our experience with
the language processes can be used to develop a metric network model.
Using the algebraic, logical and programming language theory we found that the network
is reflected with valid combinations of basic elements through combination and valuation
to reflect syntactic rules and dictionaries to generate objects and actions, generalisation
specification with properties. Other parts of the system are ways of defining properties
of objects or operations whilst some apply to the scope of meaning and the scope of an
object or operation or value. We found how object oriented programming gave us the
concept of scope for objects, properties, methods with arguments, "this" operator and
the concepts of synonyms, generalisation and specification. Overloading of definitions
allows for meaning to change according to context. The system has ways of expressing
polymorphism and of replicating actions (iterations) under different cases, following
distributive rules for evaluation. Rules are analogous to procedures that do little action
and return no results on exit, whilst others are likened to procedures with results on
exit. Systems specialise in the different formats, purposes and persistence, and
knowledge is extended according to the speciality of the environment.
We considered the quantitative aspects of systems. It reflected on software physics
developed by Halstead. It went on to review extensions of predicted value based on
work of the mean path theorem. Then we considered some interesting results for
measures of complexity and developed formulae for the best strategy of search. Using
this formula we showed that we need to search an area the number of times that we
expect to find separate targets and that the value of a measure for the best strategy is
unity when we need to find 6.4 targets.
We introduced the effects from learning theory. It studied 3 methods that are used for
evaluating a learning hypothesis. It showed how learning can vary by considering the
effects of repetitive reinforcement, then the effects of standardisation and then the
effects of tiredness. The story continued by watching the way children are taught from
early years through pictures, sound and written form. We studied the effects of making
errors and how we overcome them. We saw how rules are developed for children to
determine easier ways of remembering the system and facts they need to know. We
described techniques to determine heuristics for remembering and learning by
induction and deduction.
Compiler technology led to a processor which follows the network representation of the
language, the macro pre-processor concepts using inclusion of files, macro definitions
and expansions, conditional compilation and line control, context sensitive translation
and the compiler-compiler intermediate language.
Database technology provided the storage basis for the processing. It used storage for
system data, control data, document-text, statistical and multimedia objects and user
data. It supported the irregular update of system definition and the regular update,
insert and deletion data of the knowledge base. It used a post-relational database
model of hybrid object-relational databases based on XML. Other requirements were
support of event-driven architecture, deductive database, graph database, hypertext
hypermedia database, knowledge base, probabilistic database, real-time database and
temporal database.
Configuration management identifies item attributes for control, with recording and
reporting on the baselines and audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation and
runs automated unit and integration (defect or regression) tests together with static
and dynamic tests, measuring and profiling performance to ensure that the environment is valid. The
trigger points are before and after update and at release to the production system
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
features complete concentrating on functional, quality code, and team momentum.
In continuous delivery, the development / deployment activity is made smaller by
automating all the processes from source control through to production.
The advantages are derived from frequent testing and fast feedback on impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
Virtual reality simulates an environment incorporating the user's presence and
interaction through sight, touch, hearing, and smell. It uses a screen or a special
headset to display sight and sound information. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data assist the technology.
If any error is found then an error report is generated as a device stack and position;
it is then evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
7.2 References
1. A.Payne, Paper, On Communications in Organisations. Computer Personnel
Review 1981
2. A.Payne, Paper, A Basis for Software Physics. Sigplan, 1981
3. A.Payne, Paper. Health Registration from Disease Registers. American
Association for Medical Systems and Informatics Conference, 1982
4. A.Payne, Paper. A Measure of Complexity. Sigart Newsletter, 1982
5. A.Payne, Paper. A Basis for the Rate of Change in Programs. Sigplan, 1982
6. A.Payne, Paper. Disease Registers Recycled for Computer Aided Learning. IEEE
General Systems Conference, 1982
7. A.Payne, Paper. Disease Registers and their Applications. IEEE MEDCOMP
Conference, Boston, USA, 1982
8. A.Payne, Paper. Disease Registers and Health Education. AAMSI Conference,
Boston, USA, 1982
9. A.Payne, Paper. A Markov Graph Model for the Planning of University Resources.
AMSE 83 Symposium, Bermuda
10. A.Payne, Paper. Microcomputers in Medical Records. Medinfo 80, Japan.
11. A.Payne, Paper. A Network Structure of Errors in Organisation Information Flow.
5th European Meeting on Cybernetics and Systems Research, 1980, Austria.
12. A.Payne, Seminar. Theory of Search Applied to System Development. Liverpool
University, Loughborough University, Imperial College (London), Manchester University,
Karlsruhe University, 1979.
13. A.Payne, Seminar. Medical Records and Cancer Treatment. Royal Berkshire
Hospital, UK, 1979.
14. A.Payne, Seminar. Theory of Search and System Development. Hamilton, NZ,
1980.
15. A.Payne, Seminar. Medical Records and Microprocessor. Hamilton, 1980.
16. A.Payne, Paper. Micro Computer and Disease Registers. 20th Conference on
Physical Sciences and Engineering in Medicine and Biology, Christchurch, NZ, 1980.
17. A.Payne, Paper. A Network Structure of Errors in Databases. Congress on
Applied Systems Research and Cybernetics, Mexico, 1980.
18. A.Payne, Paper. The Theory of Search Applied to Programming Language
Studies. 3rd Hungarian Computer Science Conference, 1981
19. A.Payne, Paper. Innovations on Programming Language Studies. International
Computer Symposium, 1980, Taiwan.
20. A.Payne, Paper. Towards a Standard Simulation Language. IMACS TC3, 1981.
21. A.Payne, Paper. A System for Learning. Sigart Newsletter, 1978
22. A.Payne, Paper. Use of Standards in a Production Environment. Sigda
Newsletter, 1978
23. A.Payne, Paper. The Effects of Tiredness in a Production Environment.
Installation Management Review, 1978.
24. A.Payne, Paper. The O and M of System Development. New Zealand Computer
Conference, 1978.
25. A.Payne, Paper. The Application of Search Theory to Simulation Models. New
Zealand OR Conference, 1978.
26. A.Payne, Paper. The Application of Search Theory to Simulation Languages.
Symposium on Modelling and Simulation Methodology. Israel, 1978.
27. A.Payne, Paper. Properties Defining Usable Standard Languages. Sorrento
Workshop for International Standardisation of Simulation Languages, 1979.
28. A.Payne, Paper. A Review of Compiling Methods. LRRC. Maps Project Technical
Paper 10
29. A.Payne, Paper. Theoretical Consideration of Compiler Writing. LRRC. Maps
Project Technical Paper 14
30. A.Payne, Paper. Theoretical Consideration for Optimising Computer Programs.
LRRC. Maps Project Technical Paper 15.
31. A.Payne, Paper. An Approach to Compiler Construction. LRRC. Maps Project
Technical Paper 18.
32. A.Payne, Paper. A Language for Describing the Compilation Process. LRRC. Maps
Project Technical Paper 20
33. A.Payne, Paper. An Implementation of Program Optimisation. LRRC. Maps Project
Technical Paper 22.
34. Houlden, B.T., Some Techniques of Operational Research, WUP, 1962
35. Landau, L.D., and Lifshitz, E.M., Theory of Elasticity, Pergamon, London, 1959
36. Halstead, M.H. Elements of Software Science, Elsevier North Holland, New York,
1977
37. Mohanty, S.N. Models and Measures for Quality Assurance of Software, Computer
Surveys, Vol. 11, No. 3., Sept. 79.
38. L.R. Ford Jr. and D.R. Fulkerson, Flows in Networks, Princeton University Press,
Princeton, New Jersey, 1962.
39. L.J. Peters and L.L. Tripp, A Model of Software Engineering, ICSE '78: Proceedings
of the 3rd International Conference on Software Engineering, IEEE Press, Piscataway,
NJ, USA, 1978
40. D. Teichroew, Problem Statement Analysis: Requirements for the Problem
Statement Analyzer (PSA), in System Analysis Techniques, John Wiley & Sons, 1974
41. Halstead, M.H. Elements of Software Science, Elsevier, New York, 1978
42. Payne, A.J., The Theory of Search Applied to Systems Development, 8th
Australian Computer Conference, 1978
43. Payne, A.J., A Basis for Software Physics, Sigplan 1981
44. Leverett, Cattell, Hobbs, Newcomer, Reiner, Schatz and Wulf, An Overview of the
Production Quality Compiler-Compiler Project, Computer 13(8):38-49, August 1980
45. Scott, Michael Lee, Programming Language Pragmatics, Morgan Kaufmann,
2005, 2nd edition, 912 pages. ISBN 0-12-633951-1.
46. A.J. Payne, Use of Standards in a Production Environment. SIGDA Newsletter,
Volume 8, Issue 1, March 1978
47. A.J. Payne, A Basis for Complexity Measurement? SIGART Bulletin, Issue
82, October 1982
48. A.Payne, Paper. Consistency in Systems. New Zealand Mathematics Colloquium,
May 1978
49. A.Payne, Paper. Completeness in Systems, New Zealand Mathematics
Colloquium, May 1980
50. A.Payne, Paper. On Network Structures. New Zealand Mathematics Colloquium,
May 1981
51. A.Payne, Paper. On Language Definitions. New Zealand Mathematics Colloquium,
May 1982
52. A. Payne, Chill Compiler. ITT Systems Journal, 1983
53. A. Payne, Chill Systems. ITT Systems Journal, 1983
54. A Payne, ITT Optimising Compiler, CCITT Chill Conference, Cambridge, 1984
55. A.Payne, Paper. Consistency in Systems. CCITT Z200 Working Group, 1986
56. A.Payne, Paper. Completeness in Systems, CCITT Z200 Working Group, 1986
57. Gosling, James; Joy, Bill; Steele, Guy; Bracha, Gilad; Buckley, Alex. The
Java® Language Specification (Java SE 8 ed.). Oracle America, Inc., Redwood City,
California, 2015.
58. Robert W. Hornbeck. Numerical Methods. Prentice Hall, 1982. ISBN-10:
0136266142, ISBN-13: 9780136266143
59. Singiresu S. Rao. Applied Numerical Methods for Engineers and Scientists.
Pearson, 2002. ISBN-10: 013089480X, ISBN-13: 9780130894809
8 Applications
8.1 Introduction
This section reviews how some other technologies can contribute to IoT security and
its processes. It consists of 21 further sub-sections. It considers how the
methodologies from the 19 theories of section 7 are helpful. They are search theory,
network theory, Markov theory, algebraic theory, logic theory, programming language
theory, quantitative theory, learning theory, statistics theory, probability theory,
communications theory, compiler technology theory, database technology, curve
fitting, configuration management, continuous integration/delivery and virtual reality.
The last part presents the summary of the section.
We study a theory for systems based on the operations research technique known as
the theory of search. We find that there are restrictions between the user, the system
and its documentation which result in a measurable set of requirements and a method
of assessing how well the system, the system user and the documentation measure up
to the requirements.
A communications model extends the definition of the system requirements for source,
transmission, reception and destination. It defines levels of support, coordination and
corrective recovery of errors. It develops descriptions of the problems associated with
communication due to noise, differing understanding of current context and ambiguity,
indexicality, vagueness, dialogue structure, metaphor and non-compositionality.
There are six validation cases discussed for network analysis. They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms based on affinity, connectivity and flow matrices. Using
Markov processes, we determine the flow through nodes and edges and partitioning of
the systems. By including an error sink we resolve probabilities of error for each of the
node states. We use this network analogy with probability to provide overall probabilities
for elements where choices are splits at nodes and combination is expressed as loops or
sub-nodes. We show how statistics from our experience with the language processes
can be used to develop a metric network model.
Using the algebraic, logical and programming language theory we find that the network
is reflected with valid combinations of basic elements through combination and valuation
to reflect standards and techniques to generate objects and actions, and generalisation
and specification with properties. Others give ways of defining properties of objects or
operations whilst some apply to the scope of meaning and the scope of an object or
operation or value. We find how object oriented programming gives us the concept of
scope for meaning, nouns being objects, adjectives as properties, verbs as methods with
arguments of nouns and adverbs (properties of verbs), pronouns the equivalent of the
"this" operator and the concepts of synonyms, generalisation and specification.
Overloading of definitions allows for meaning to change according to context.
Conjunctions give ways of replicating actions (iterations) under different cases and
follow distributive rules for evaluation. Other parts of speech are ways of defining
properties of objects or actions with polymorphism for nouns and verbs. Novels are
analogous to procedures that do little action and return no results on exit. Research
journals provide articles which are likened to procedures with results on exit. People
specialise in the different packages and libraries that they use. Experts extend their
knowledge network in one particular way whilst slaves do it in another.
We consider the quantitative aspects of languages. It reflects on software physics
developed by Halstead. It goes on to review extensions of predicted value based on
work of the mean path theorem. Then we consider some interesting results for
measures of complexity and develop formulae for the best strategy of search. Using
this formula we show that we need to search an area the number of times that we
expect to find separate targets and that the value of a measure for the best strategy is
unity when we need to find 6.4 targets.
We introduce the effects from learning theory. It studies 3 methods that are used for
evaluating a learning hypothesis. It shows how learning can vary by considering the
effects of repetitive reinforcement, then the effects of standardisation and then the
effects of tiredness. The story continues by watching the way children are taught from
early years through pictures, sound and written form. We study the effects of making
errors and how we overcome them. We see how rules are developed for children to
determine easier ways of remembering the language and facts they need to know. We
describe techniques to determine heuristics for remembering and learning by induction
and deduction.
The compiler technology leads to a processor which follows the network representation
of the language, the macro pre-processor concepts using inclusion
of files, macro definitions and expansions, conditional compilation and line control,
context sensitive translation and the compiler-compiler intermediate language.
Database technology provides the storage basis for the language processing. It uses
storage for bibliographic, document-text, statistical and multimedia objects. It supports
the irregular update of language definition and the regular update, insert and deletion
data of the knowledge base. It uses a post-relational database model of hybrid object-
relational databases based on XML. Other requirements are support of event-driven
architecture, deductive database, graph database, hypertext hypermedia database,
knowledge base, probabilistic database, real-time database and temporal database.
8.2 Search Theory
8.2.1 Introduction
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation measure up to the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
8.2.2 Entities
The entity system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the entity system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no entity is found then the error is reported and after review the entity is added to the
system.
8.2.3 Services
The service system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the system should be modifiable to the experience of the
users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. It should be
standardised and have a minimum number of pages and facts. Services should be small,
logically placed and have a minimum number of reference strategies.
If no service is found then the error is reported and after review the service is added to
the system.
8.2.4 Techniques
The techniques system should be standardised, simple, specialised, logically
organised, concise, have minimum ambiguity, have minimum error cases and have
partitioning facilities. The facilities for systems should be modifiable to the experience
of the users.
All reference documentation should have stiff spines, and small thin stiff light pages
with simple content which is adjustable to the experience of the user. It should be
standardised and have a minimum number of pages and facts. Facts should be small,
logically placed and have a minimum number of reference strategies.
If no technique is found then the error is reported and after review the technique is
added to the system.
8.2.5 Communications
The transmission part of the dialogue system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users. The reception part of the dialogue system should be
standardised, simple, specialised, logically organised, concise, have minimum
ambiguity, have minimum error cases and have partitioning facilities. The facilities for
systems should be modifiable to the experience of the users. The interaction of the
dialogue system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for systems should be modifiable to the experience of the users.
Reference documentation for the transmission, reception and interaction parts should
have stiff spines, and small thin stiff light pages with simple content which is
adjustable to the experience of the user. The documentation should be standardised
and have a minimum number of pages and facts. Facts should be small, logically placed
and have a minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
system.
8.2.6 Antivirus
The antivirus system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the antivirus system should be modifiable to the experience
of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
antivirus system.
8.2.7 Firewall
The firewall system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the firewall system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
firewall system.
8.2.8 APIDS
The APIDS system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the APIDS system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
APIDS system.
8.2.9 Cipher
The cipher system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for the cipher system should be modifiable to the experience of
the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If any error is found then the error is reported and after review the error is added to the
cipher system.
8.3 Algebraic Theory
8.3.1 Introduction
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
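As a minimal sketch of this algebraic view, the Python fragment below defines a set of basic elements, a combination rule restricted by an allowed-pairs table standing in for the standards, and validation of the result; all element names and allowed pairs are invented.

# A minimal sketch: basic elements, a rule of combination, and validation
# against a standards table that says which combinations are allowed.
ELEMENTS = {"sensor", "gateway", "cipher-pipe", "report-service"}

# Standards: which pairs of elements may legally be combined.
ALLOWED = {("sensor", "gateway"), ("gateway", "cipher-pipe"),
           ("cipher-pipe", "report-service")}

def combine(a: str, b: str) -> str:
    """Rule of combination: form a larger (sub)system or flag an error."""
    if a not in ELEMENTS or b not in ELEMENTS:
        raise ValueError(f"unknown element: {a!r} or {b!r}")
    if (a, b) not in ALLOWED:
        raise ValueError(f"standard violated: {a} + {b}")
    return f"({a}+{b})"

print(combine("sensor", "gateway"))        # valid subsystem
# combine("sensor", "report-service")      # would raise: standard violated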
8.3.2 Entities
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.3 Services
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (entities) of combination of
services to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the services through entities.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.4 Standards
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (techniques) of combination
of standards to form larger elements that we classify as systems or subsystems for
which we have rules (techniques) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The standards are classified into parts of system with techniques.
Techniques give meaning to standards and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (techniques) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.5 Techniques
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (standards) of combination
of techniques to form larger elements that we classify as systems or subsystems for
which we have rules (standards) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards.
We use valuation rules to classify entities, services, standards, techniques and
communications. The techniques are classified into parts of system with standards.
Standards give meaning to techniques and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (standards) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.6 Communication
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a dialogue system based on the results of previous
subsections with basic elements derived from entities for the data/information for
sources, destinations and transmission systems. The basic elements are derived from
entities, services, standards, techniques and communications. We restrict these basic
elements by specifying what is allowed. We apply rules of combination to the elements
to form larger elements that we classify as systems or subsystems for which we have
rules to say what is correct and what is erroneous. We iterate on the combination for
more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.7 Antivirus
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Antivirus adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.8 Firewall
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Firewall adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.9 APIDS
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
APIDS adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.3.10 Ciphers
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Ciphers add extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4 Logic Theory
8.4.1 Introduction
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
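One way to make this concrete is forward chaining over facts and Horn-style rules, sketched below in Python. The atoms and rules are invented for illustration; deriving an atom corresponds to validating a combination, and an atom that can never be derived corresponds to the reported error case.

def forward_chain(facts, rules):
    # Iterate the rules to a fixed point: the analogue of combining
    # elements until no further valid combinations arise.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("entity(sensor)", "service(read)"), "combines(sensor,read)"),
    (("combines(sensor,read)", "standard(tls)"), "valid(sensor,read)"),
]
facts = {"entity(sensor)", "service(read)", "standard(tls)"}
print("valid(sensor,read)" in forward_chain(facts, rules))  # True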
8.4.2 Entities
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.3 Services
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (entities) of combination of
services to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the services through entities.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.4 Standards
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (techniques) of combination
of standards to form larger elements that we classify as systems or subsystems for
which we have rules (techniques) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The standards are classified into parts of system with techniques.
Techniques give meaning to standards and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (techniques) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.5 Techniques
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (standards) of combination
of techniques to form larger elements that we classify as systems or subsystems for
which we have rules (standards) to say what is correct and what is erroneous. We
iterate on the combination for more complex elements to be validated against further
standards.
We use valuation rules to classify entities, services, standards, techniques and
communications. The techniques are classified into parts of system with standards.
Standards give meaning to techniques and combinations of them. Relations are derived
from another set of operations which give links such as generalisation (standards) and
specification based on properties of the entities and services.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.6 Communication
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a dialogue system based on the results of previous
subsections with basic elements derived from entities for the data/information for
sources, destinations and transmission systems. The basic elements are derived from
entities, services, standards, techniques and communications. We restrict these basic
elements by specifying what is allowed. We apply rules of combination to the elements
to form larger elements that we classify as systems or subsystems for which we have
rules to say what is correct and what is erroneous. We iterate on the combination for
more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.7 Antivirus
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Antivirus adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.8 Firewall
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Firewall adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.9 APIDS
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
APIDS adds extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.4.10 Ciphers
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities and
services and communications. We restrict these basic elements by specifying what is
allowed through standards and techniques. We apply rules (services) of combination of
entities to form larger elements that we classify as systems or subsystems for which
we have rules (standards and techniques) to say what is correct and what is erroneous.
We iterate on the combination for more complex elements to be validated against
further standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
Ciphers add extra restrictions on entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
8.5 Network Theory
8.5.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.1.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.1.2 Consistency
A unit is accessed from two different units. What interpretation will be placed on the
meaning by the recipient unit? The consistency condition of section 3.2.3 will detect the
problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.1.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
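The first three cases lend themselves to direct graph computations. The Python sketch below, over an invented adjacency-map representation of units and their connections, shows one possible form of the well-structured, consistency and completeness checks; it is illustrative rather than a definitive implementation.

def well_structured(graph):
    # Report units reached from more than one source: the ambiguous
    # connection that the well-structured criterion highlights.
    sources = {}
    for src, dsts in graph.items():
        for dst in dsts:
            sources.setdefault(dst, set()).add(src)
    return {u: s for u, s in sources.items() if len(s) > 1}

def consistent(graph):
    # Report edges whose target is not a defined unit: the "node or edge
    # not found" case that triggers the error report in the text.
    return {(s, d) for s, dsts in graph.items() for d in dsts if d not in graph}

def complete(graph):
    # Report units defined but never referenced by any other unit; an
    # entry point is reported too and would be whitelisted on review.
    referenced = {d for dsts in graph.values() for d in dsts}
    return set(graph) - referenced

graph = {"a": {"b", "c"}, "b": {"c"}, "c": set(), "d": set()}
print(well_structured(graph))  # {'c': {'a', 'b'}} -- two competing sources
print(consistent(graph))       # set() -- every edge target is defined
print(complete(graph))         # {'a', 'd'} -- never referenced by any unit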
8.5.2 Entities
8.5.2.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on entities as
units and services, standards, techniques and communications as relations. There are
six validation cases discussed in this paper. They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.2.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.2.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on
the meaning by the recipient entity? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.2.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3 Services
8.5.3.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on services
as units and entities, standards, techniques and communications as relations. There are
six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.3.2 Well Structured
Let us consider a system where a service is connected to other services. What will the
source of the connection be with the other services? Will it be with one particular
service or another? There will be confusion and the well structured criterion described in
section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3.3 Consistency
A service is accessed from two different services. What interpretation will be placed on
the meaning by the recipient service? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.3.4 Completeness
From the service viewpoint, we can assume that there are services being defined but
unused. The services are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4 Standards
8.5.4.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on standards
as units and services, entities, techniques and communications as relations. There are
six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.4.2 Well Structured
Let us consider a system where a standard is connected to other standards. What will
the source of the connection be with the other standards? Will it be with one particular
standard or another? There will be confusion and the well structured criterion described
in section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4.3 Consistency
A standard is accessed from two different standards. What interpretation will be placed
on the meaning by the recipient standard? The consistency condition of section 3.2.3
will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.4.4 Completeness
From the standard viewpoint, we can assume that there are standards being defined but
unused. The standards are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5 Techniques
8.5.5.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on techniques
as units and services, standards, entities and communications as relations. There are six
validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.5.2 Well Structured
Let us consider a system where a technique is connected to other techniques. What will
the source of the connection be with the other techniques? Will it be with one particular
technique or another? There will be confusion and the well structured criterion described
in section 3.2.3 would highlight this case in the definition of the system by the fact that
there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5.3 Consistency
A technique is accessed from two different techniques. What interpretation will be
placed on the meaning by the recipient technique? The consistency condition of section
3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.5.4 Completeness
From the technique viewpoint, we can assume that there are techniques being defined
but unused. The techniques are a waste and would cause confusion if they are known.
The completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.6 Communications
8.5.6.1 Introduction
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
8.5.6.2 Network Structure
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. In this case the network system is based on
communications as units and services, standards, techniques and entities as relations.
There are six validation cases discussed in this paper. They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.6.3 Well Structured
Let us consider a system where a communication is connected to other communications.
What will the source of the connection be with the other communications? Will it be with
one particular communication or another? There will be confusion and the well
structured criterion described in section 3.2.3 would highlight this case in the definition
of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.6.4 Consistency
A communication is accessed from two different communications. What interpretation
will be placed on the meaning by the recipient communication? The consistency
condition of section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.6.5 Completeness
From the communication viewpoint, we can assume that there are communications
being defined but unused. The communications are a waste and would cause confusion if
they are known. The completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7 Antivirus
8.5.7.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and antivirus constraints. In this case the network system
is based on entities as units and services, standards, techniques, communications and
antivirus constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.7.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on
the meaning by the recipient entity? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.7.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8 Firewall
8.5.8.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and firewall constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
firewall constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.8.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on
the meaning by the recipient entity? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.8.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9 APIDS
8.5.9.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and APIDS constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
APIDS constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.9.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on
the meaning by the recipient entity? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.9.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10 Ciphers
8.5.10.1 Introduction
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques, communications and cipher constraints. In this case the network system is
based on entities as units and services, standards, techniques, communications and
cipher constraints as relations. There are six validation cases discussed in this paper.
They are:
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
8.5.10.2 Well Structured
Let us consider a system where an entity is connected to other entities. What will the
source of the connection be with the other entities? Will it be with one particular entity
or another? There will be confusion and the well structured criterion described in section
3.2.3 would highlight this case in the definition of the system by the fact that there is a
connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10.3 Consistency
An entity is accessed from two different entities. What interpretation will be placed on
the meaning by the recipient entity? The consistency condition of section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.5.10.4 Completeness
From the entity viewpoint, we can assume that there are entities being defined but
unused. The entities are a waste and would cause confusion if they are known. The
completeness prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
8.6 Markov Theory
8.6.1 Introduction
Using the algorithms in the previous sub-section on network theory we can determine
which nodes have flow through them and which do not. We can find the edges that are
used and those that are unused. We can ascertain what the flow is between the nodes
and which are single entry or single exit blocks of nodes.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the network system and the size of
the error at each point of the Markov process; the error node then gives an estimate of
the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In each case one of these is classified as the nodes and the others as
the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
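The error-sink construction is the standard absorbing Markov chain calculation. A minimal sketch, assuming an invented three-node transition matrix and using numpy for the linear algebra:

import numpy as np

# Q[i, j]: probability of flowing from transient node i to transient node j.
Q = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.7],
              [0.0, 0.0, 0.0]])
# r[i]: probability of going from node i straight to the error sink; the
# remaining probability mass is absorbed by normal completion.
r = np.array([0.2, 0.3, 0.1])

# Absorbing-chain identity: b = (I - Q)^(-1) r gives, for each start node,
# the probability of eventually reaching the error sink.
b = np.linalg.solve(np.eye(3) - Q, r)
print(b)  # [0.496 0.37  0.1  ]; b[0] estimates the whole system's error rate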
8.6.2 Entities
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on entities classified as
nodes and the services, standards, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.3 Services
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on services classified as
nodes and the entities, standards, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for services we can
determine what services have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
services and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.4 Standards
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on standards classified as
nodes and the entities, services, techniques and communications as edges.
Using the algorithms in the previous sub-section on network theory for standards we
can determine what standards have flow through them and which do not. We can find
the edges that are used and those unused. We can ascertain what the flow is between
the standards and which are single entry or single exit blocks (groupings) within the
system. If we add a node that is taken as the error sink, we can use the extra edges to
discover what the probability of error is at different parts of the system; the error node
gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.5 Techniques
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on techniques classified as
nodes and the entities, services, standards and communications as edges.
Using the algorithms in the previous sub-section on network theory for techniques we
can determine what techniques have flow through them and which do not. We can find
the edges that are used and those unused. We can ascertain what the flow is between
the techniques and which are single entry or single exit blocks (groupings) within the
system. If we add a node that is taken as the error sink, we can use the extra edges to
discover what the probability of error is at different parts of the system; the error node
gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.6 Communications
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on communications classified
as nodes and the entities, services, standards and techniques as edges.
Using the algorithms in the previous sub-section on network theory for communication
nodes we can determine what communication nodes have flow through them and which
do not. We can find the edges that are used and those unused. We can ascertain what
the flow is between the communication nodes and which are single entry or single exit
blocks (groupings) within the system. If we add a node that is taken as the error sink,
we can use the extra edges to discover what the probability of error is at different parts
of the system; the error node gives an estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.7 Antivirus
The network system is based on entities, services, standards, techniques,
communications and antivirus constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and antivirus constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.8 Firewall
The network system is based on entities, services, standards, techniques,
communications and firewall constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and firewall constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.9 APIDS
The network system is based on entities, services, standards, techniques,
communications and APIDS constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and APIDS constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.6.10 Ciphers
The network system is based on entities, services, standards, techniques,
communications and cipher constraints. In this case the network system is based on
entities classified as nodes and the services, standards, techniques, communications
and cipher constraints as edges.
Using the algorithms in the previous sub-section on network theory for entities we can
determine what entities have flow through them and which do not. We can find the
edges that are used and those unused. We can ascertain what the flow is between the
entities and which are single entry or single exit blocks (groupings) within the system.
If we add a node that is taken as the error sink, we can use the extra edges to discover
what the probability of error is at different parts of the system; the error node gives an
estimate of the total error rate of the system.
If a node or edge is not found then the error is reported as a stack dump and after
review the matrix structure is adjusted as appropriate.
8.7 Quantitative Theory
8.7.1 Introduction
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of operators,
n2 = number of operands,
N1 = total number of occurrences of operators, and
N2 = total number of occurrences of operands,
then N1 = n1 log n1 and N2 = n2 log n2.
If n = program vocabulary and N = program length, then
n = n1 + n2, with n* = n,
N = N1 + N2, and
N* = N1 log n1 + N2 log n2.
If V = actual program volume and V* = theoretical program volume, then
V = N log n and
V* = N* log n*.
If L = V*/V = program level,
λ = L V* = programming language level, and
S = Stroud number, then
m = V/L = number of mental discriminations and
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n) / (1000 n2).
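These relations translate directly into a small calculator. The Python sketch below implements them exactly as stated, assuming base-2 logarithms (Halstead's usual convention; the text leaves the base implicit), a Stroud number of S = 18 discriminations per second (a commonly quoted value) and invented operator/operand counts.

import math

def software_physics(n1, n2, N1, N2, S=18):
    n = n1 + n2                                       # vocabulary
    N = N1 + N2                                       # observed length
    N_star = N1 * math.log2(n1) + N2 * math.log2(n2)  # theoretical length
    V = N * math.log2(n)                              # actual volume
    V_star = N_star * math.log2(n)                    # theoretical volume (n* = n)
    L = V_star / V                                    # program level
    lam = L * V_star                                  # language level
    m = V / L                                         # mental discriminations
    d = m / S                                         # development time (seconds)
    E = n1 * math.log2(n) / (1000 * n2)               # Mohanty error-rate estimate
    return {"n": n, "N": N, "V": V, "L": L, "lambda": lam,
            "m": m, "d": d, "E": E}

print(software_physics(n1=15, n2=30, N1=120, N2=180))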
The mean free path theorem derives the relations:
P(m,C) = C^m / (m! e^C) = probability of hitting the target m times for a coverage ratio C
C = n a s t / z = coverage ratio, the ratio between the area covered by the search process
and the search area
a = search range
z = search area size
m = number of successful hits
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding a target m times is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
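These values are easy to check numerically; the short sketch below evaluates P(m,C) = C^m e^(-C)/m! with the gamma function standing in for m! at non-integer m:

from math import exp, gamma

def P(m, C):
    # P(m, C) = C**m * e**(-C) / m!, with gamma(m + 1) extending m!
    return C**m * exp(-C) / gamma(m + 1)

for m in [0.0, 1.0, 6.4, 7.0]:
    print(m, round(m * P(m, m), 3))
# m*P(m,m) rises from 0 at m = 0 and passes through 1 near m = 6.4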
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.2 Entities
Using the concepts of Halstead's software physics we have the relations for entities and
services in systems with deviations due to impurities in systems:
If n1 = number of entity types in the system
n2 = number of service types in the system
N1 = total number of occurrences of entities in the system
N2 = total number of occurrences of services in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of entity types and service types in the system
N = entity system size
N* = theoretical entity system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual entity system volume
V* = theoretical entity system volume
then V = N log n
V* = N* log n*
If L = V*/V = entity system level
λ = LV* = possible entity system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an entity system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the entity m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of entities covered by the search process and the entity set
a = entity search rate
z = the size of the entity set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the entity set
t = time for which the searcher passes over the entity set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of entities used
S = total speed of search
T = total time of search
Z = total virtual entity set
A = total entity search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of entities used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual entity set
A1 = average entity search rate
The Z equation with the relation between the number of attempts and the entity system
size over an average entity set size explains the system in terms of actions of search.
The N relation shows that an entity system size can be calculated as the average
number of attempts in a particular entity set. Specifically we can estimate the number of
checks n that we can expect to apply to find m errors in an entity system of size A or the
number of attempts n that we expect to apply when constructing an entity system of m
units in an entity set of size z. Conversely the M relation gives us the expected number of errors or the number of entities when we apply a specific number of checks or produce a number of entities.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of entities, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m entities is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.3 Services
Using the concepts of Halstead's software physics we have the relations for services and techniques in systems with deviations due to impurities in systems:
If n1 = number of service types in the system
n2 = number of technique types in the system
N1 = total number of occurrences of services in the system
N2 = total number of occurrences of techniques in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of service types and technique types in the system
N = service system size
N* = theoretical service system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual service system volume
V* = theoretical service system volume
then V = N log n
V* = N* log n*
If L = V*/V = service system level
λ = LV* = possible service system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a service system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the service m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of services covered by the search process and the service set
a = service search rate
z = the size of the service set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the service set
t = time for which the searcher passes over the service set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of services used
S = total speed of search
T = total time of search
Z = total virtual service set
A = total service search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of services used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual service set
A1 = average service search rate
The Z equation with the relation between the number of attempts and the service system
size over an average service set size explains a system in terms of actions of search.
The N relation shows that a service system size can be calculated as the average
number of attempts in a particular service set. Specifically we can estimate the number
of checks n that we can expect to apply to find m errors in a service system of size A or
the number of attempts n that we expect to apply when constructing a service system of
m units in a service set of size z. Conversely the M relation gives us the expected number of errors or the number of services when we apply a specific number of checks or produce a number of services.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of services, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m services is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.4 Standards
Using the concepts of Halstead's software physics we have the relations for standards
and IoT standards in systems with deviations due to impurities in systems:
If n1 = number of IoT standard types in the system
n2 = number of standard types in the system
N1 = total number of occurrences of IoT standards in the system
N2 = total number of occurrences of standards in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of IoT standard types and standard types in the system
N = IoT standard system size
N* = theoretical IoT standard system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual IoT standard system volume
V* = theoretical IoT standard system volume
then V = N log n
V* = N* log n*
If L = V*/V = IoT standard system level
λ = LV* = possible IoT standard system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an IoT standard system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the IoT standard m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of IoT standards covered by the search process and the IoT standard set
a = IoT standard search rate
z = the size of the IoT standard set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the IoT standard set
t = time for which the searcher passes over the IoT standard set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of IoT standards used
S = total speed of search
T = total time of search
Z = total virtual IoT standard set
A = total IoT standard search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of IoT standards used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual IoT standard set
A1 = average IoT standard search rate
The Z equation with the relation between the number of attempts and the IoT standard
system size over an average IoT standard set size explains a system in terms of actions
of search.
The N relation shows that an IoT standard system size can be calculated as the average number of attempts in a particular IoT standard set. Specifically we can estimate the number of checks n that we can expect to apply to find m errors in an IoT standard system of size A, or the number of attempts n that we expect to apply when constructing an IoT standard system of m units in an IoT standard set of size z. Conversely the M relation gives us the expected number of errors or the number of IoT standards when we apply a specific number of checks or produce a number of IoT standards.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of IoT standards, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m IoT standards is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.5 Techniques
Using the concepts of Halstead's software physics we have the relations for standards and techniques in systems with deviations due to impurities in systems:
If n1 = number of technique types in the system
n2 = number of standard types in the system
N1 = total number of occurrences of techniques in the system
N2 = total number of occurrences of standards in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of technique types and standard types in the system
N = technique system size
N* = theoretical technique system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual technique system volume
V* = theoretical technique system volume
then V = N log n
V* = N* log n*
If L = V*/V = technique system level
λ = LV* = possible technique system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a technique system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the technique m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of techniques covered by the search process and the technique set
a = technique search rate
z = the size of the technique set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the technique set
t = time for which the searcher passes over the technique set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of techniques used
S = total speed of search
T = total time of search
Z = total virtual technique set
A = total technique search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of techniques used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual technique set
A1 = average technique search rate
The Z equation with the relation between the number of attempts and the technique
system size over an average technique set size explains a system in terms of actions of
search.
The N relation shows that a technique system size can be calculated as the average number of attempts in a particular technique set. Specifically we can estimate the number of checks n that we can expect to apply to find m errors in a technique system of size A, or the number of attempts n that we expect to apply when constructing a technique system of m units in a technique set of size z. Conversely the M relation gives us the expected number of errors or the number of techniques when we apply a specific number of checks or produce a number of techniques.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of techniques, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m techniques is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications metrics are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present.
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.7 Antivirus
Using the concepts of Halstead's software physics we have the relations for entities,
services and antivirus constraints in systems with deviations due to impurities in
systems:
If n1 = number of entity types in the system
n2 = number of service types in the system
N1 = total number of occurrences of entities in the system
N2 = total number of occurrences of services in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of entity types and service types in the system
N = entity system size
N* = theoretical entity system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual entity system volume
V* = theoretical entity system volume
then V = N log n
V* = N* log n*
If L = V*/V = entity system level
λ = LV* = possible entity system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an entity system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the entity m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of entities covered by the search process and the entity set
a = entity search rate
z = the size of the entity set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the entity set
t = time for which the searcher passes over the entity set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of entities used
S = total speed of search
T = total time of search
Z = total virtual entity set
A = total entity search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of entities used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual entity set
A1 = average entity search rate
The Z equation with the relation between the number of attempts and the entity system
size over an average entity set size explains the system in terms of actions of search.
The N relation shows that an entity system size can be calculated as the average
number of attempts in a particular entity set. Specifically we can estimate the number of
checks n that we can expect to apply to find m errors in an entity system of size A or the
number of attempts n that we expect to apply when constructing an entity system of m
units in an entity set of size z. Conversely the M relation gives us the expected number of errors or the number of entities when we apply a specific number of checks or produce a number of entities.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of entities, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m entities is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.8 Firewall
Using the concepts of Halstead's software physics we have the relations for entities,
services and firewall constraints in systems with deviations due to impurities in systems:
If n1 = number of entity types in the system
n2 = number of service types in the system
N1 = total number of occurrences of entities in the system
N2 = total number of occurrences of services in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of entity types and service types in the system
N = entity system size
N* = theoretical entity system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual entity system volume
V* = theoretical entity system volume
then V = N log n
V* = N* log n*
If L = V*/V = entity system level
λ = LV* = possible entity system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an entity system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the entity m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of entities covered by the search process and the entity set
a = entity search rate
z = the size of the entity set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the entity set
t = time for which the searcher passes over the entity set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of entities used
S = total speed of search
T = total time of search
Z = total virtual entity set
A = total entity search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of entities used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual entity set
A1 = average entity search rate
The Z equation with the relation between the number of attempts and the entity system
size over an average entity set size explains the system in terms of actions of search.
The N relation shows that an entity system size can be calculated as the average
number of attempts in a particular entity set. Specifically we can estimate the number of
checks n that we can expect to apply to find m errors in an entity system of size A or the
number of attempts n that we expect to apply when constructing an entity system of m
units in an entity set of size z. Conversely the M relation gives us the expected number of errors or the number of entities when we apply a specific number of checks or produce a number of entities.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of entities, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m entities is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.9 APIDS
Using the concepts of Halstead's software physics we have the relations for entities,
services and APIDS constraints in systems with deviations due to impurities in systems:
If n1 = number of entity types in the system
n2 = number of service types in the system
N1 = total number of occurrences of entities in the system
N2 = total number of occurrences of services in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of entity types and service types in the system
N = entity system size
N* = theoretical entity system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual entity system volume
V* = theoretical entity system volume
then V = N log n
V* = N* log n*
If L = V*/V = entity system level
λ = LV* = possible entity system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an entity system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the entity m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of entities covered by the search process and the entity set
a = entity search rate
z = the size of the entity set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the entity set
t = time for which the searcher passes over the entity set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of entities used
S = total speed of search
T = total time of search
Z = total virtual entity set
A = total entity search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of entities used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual entity set
A1 = average entity search rate
The Z equation with the relation between the number of attempts and the entity system
size over an average entity set size explains the system in terms of actions of search.
The N relation shows that an entity system size can be calculated as the average
number of attempts in a particular entity set. Specifically we can estimate the number of
checks n that we can expect to apply to find m errors in an entity system of size A or the
number of attempts n that we expect to apply when constructing an entity system of m
units in an entity set of size z. Conversely the M relation gives us the expected number of errors or the number of entities when we apply a specific number of checks or produce a number of entities.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of entities, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m entities is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.7.10 Ciphers
Using the concepts of Halstead's software physics we have the relations for entities, services and cipher constraints in systems with deviations due to impurities in systems:
If n1 = number of entity types in the system
n2 = number of service types in the system
N1 = total number of occurrences of entities in the system
N2 = total number of occurrences of services in the system
then N1 = n1 log n1
N2 = n2 log n2
If n = number of entity types and service types in the system
N = entity system size
N* = theoretical entity system size
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V = actual entity system volume
V* = theoretical entity system volume
then V = N log n
V* = N* log n*
If L = V*/V = entity system level
λ = LV* = possible entity system level
S = Stroud Number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for an entity system is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C)/m! = probability of finding the entity m times for a coverage ratio C.
C = nast/z = coverage ratio = ratio between the number of entities covered by the search process and the entity set
a = entity search rate
z = the size of the entity set
m = number of finds that are successful
n = number of attempts
s = speed at which the searcher passes over the entity set
t = time for which the searcher passes over the entity set
p = probability of being used each time it is found
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of entities used
S = total speed of search
T = total time of search
Z = total virtual entity set
A = total entity search rate
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of entities used
S1 = average speed of search
T1 = average time of search
Z1 = average virtual entity set
A1 = average entity search rate
The Z equation with the relation between the number of attempts and the entity system
size over an average entity set size explains the system in terms of actions of search.
The N relation shows that an entity system size can be calculated as the average
number of attempts in a particular entity set. Specifically we can estimate the number of
checks n that we can expect to apply to find m errors in an entity system of size A or the
number of attempts n that we expect to apply when constructing an entity system of m
units in an entity set of size z. Conversely the M relation gives us the expected number of errors or the number of entities when we apply a specific number of checks or produce a number of entities.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time for a search activity.
In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of entities, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values:
Variable    Value    Value
m           0        6.4
mP(m,m)     0        1
d(mP(m,m))/dm = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimum.
Thus the best policy for finding m entities is to search the whole area m times; m^(m+1) e^(-m)/m! is an increasing function for m above zero, corresponding to a measure of complexity that reaches a value of 1 at approximately m = 6.4, the "lucky seven".
If any error is found, it is reported with a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
8.8 Learning Theory
8.8.1 Introduction
8.8.1.1 General Methods
Learning is the process of finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), …, P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
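A small worked instance of this rule (with an invented six-example training set over two binary attributes) might look as follows; here attribute 1 separates the classes perfectly, so it has the larger gain and wins:

from math import log2

def I(p, n):
    # entropy of a p-positive, n-negative split
    if p == 0 or n == 0:
        return 0.0
    pp, pn = p / (p + n), n / (p + n)
    return -pp * log2(pp) - pn * log2(pn)

# examples: (value of attribute 0, value of attribute 1, positive?)
examples = [(0, 0, True), (0, 0, True), (1, 0, True),
            (1, 1, False), (0, 1, False), (1, 1, False)]

def gain(attr):
    p = sum(e[2] for e in examples)
    n = len(examples) - p
    remainder = 0.0
    for v in {e[attr] for e in examples}:
        subset = [e for e in examples if e[attr] == v]
        pi = sum(e[2] for e in subset)
        remainder += len(subset) / (p + n) * I(pi, len(subset) - pi)
    return I(p, n) - remainder

print({a: round(gain(a), 3) for a in (0, 1)})  # attribute 1 has the larger IG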
Learning viewed as a Bayesian updating of a probability distribution over the hypothesis space uses predictions of the likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, choosing to maximise the probability of each hypothesis for all outcomes of the training data, expressing it in terms of the full data for each hypothesis and taking logs to give a measure of bits to encode data given the hypothesis and bits to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, though modern optimization techniques do help (a small sketch follows).
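As a minimal illustration of steps 3 and 4, take a Bernoulli model for p positive and n negative observations (counts invented here): the log likelihood is l(t) = p log t + n log(1 - t), its derivative is p/t - n/(1 - t), and setting it to zero gives t = p/(p + n). The sketch below recovers this numerically by bisecting on the sign of the derivative:

p, n = 7, 3  # assumed counts of positive and negative observations

def dldt(t):
    return p / t - n / (1 - t)  # derivative of the log likelihood

lo, hi = 1e-6, 1 - 1e-6
for _ in range(60):              # bisection: the derivative is decreasing in t
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if dldt(mid) > 0 else (lo, mid)

print(round((lo + hi) / 2, 6), "vs analytic", p / (p + n))  # both give 0.7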
8.8.1.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined by the function F1 as
F1(u0, u∞, D) = [u0 (1 - f^aK) 2^(1-D) + u∞ (Gs + Gf) f^KT (1 - f^aDK)] / [(1 - f^aK) 2^(1-D) + (Gs + Gf) f^KT (1 - f^aDK)]
where Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1 applications out of a total training of D applications; u0 applies to the untrained user, u∞ to the fully trained user, and u'0, u'∞ are the corresponding values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower operation speeds and increased randomness in probabilities and search scan, following the inverted-U curves from ergonomics.
Thus:
uij = uijmax (1 - U1(m - m1)^2) + uijmin U1(m - m1)^2
where uij have minimum values uijmin and maximum values uijmax. m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the specialised field of the system. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
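The inverted-U relation above is simple to tabulate; the sketch below uses invented bounds uijmin and uijmax and an assumed workload m1 at which productivity peaks:

def u(m, u_max=1.0, u_min=0.2, m1=5.0, U1=0.04):
    # performance falls away quadratically either side of the optimum m1
    w = min(U1 * (m - m1) ** 2, 1.0)   # clamp so u stays within its bounds
    return u_max * (1 - w) + u_min * w

for m in [1, 3, 5, 7, 9]:
    print(m, round(u(m), 3))   # peaks at m = m1 = 5 and is symmetric about it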
8.8.1.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture and sound and develop written script from that position. They start applying rules for basic concepts, then combinations of concepts through rules to meaning. They apply a bottom-up analysis, as in a compiler, to give us rules to add to the knowledge base. The priority of the rules gives them ways of catching idioms. They develop rules to give them generalisation (e.g. animals) and specification (e.g. white-tailed frog). Nouns define objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of replicating actions under different situations. Other parts of speech are ways of defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been processed; other language needs to be retained for later use. These aspects vary from person to person depending on their background, and with that background the language will be understood in different ways.
8.8.1.4 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i has pi possible values aij for j = 1 to pi. We find that there are two types of value: the first is numeric and the second is a classification value such as yes or no. On many occasions we find that we need the condition "don't know" with classification when the value cannot be specified. The value of each characteristic can change over a set of time periods, so that at period k the characteristic takes the value bik, which can be one of the pi values ai1, …, aipi. The values bik will reflect the profile of the system at period k for all the n characteristics and the variation of a characteristic i over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = Σl=1..r cikl / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing mode and the revision mode. The browsing phase can be expressed as specifying characteristic values ei for i = 1 to q and finding the other characteristic values fi for i = q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the computer specifies the values of q, the student supplies the fi, and the computer performs the check as stated above.
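A compact sketch of these profile calculations (with invented data: two characteristics, three time periods, and r = 3 known elements for the missing value) is given below; nearest_period implements the minimum-distance match of relation (a):

profiles = {              # b[i][k]: value of characteristic i at period k
    0: [1.0, 1.4, 2.1],
    1: [0.3, None, 0.9],  # "don't know" at period 1 for characteristic 1
}
known = {(1, 1): [0.5, 0.7, 0.6]}   # c_ikl values from r = 3 elements

for (i, k), cs in known.items():
    if profiles[i][k] is None:
        profiles[i][k] = sum(cs) / len(cs)   # b_ik = sum over l of c_ikl / r

def nearest_period(element):
    # given values c_ikl for all i, find k minimising sum over i of |b_ik - c_ikl|
    periods = range(len(profiles[0]))
    return min(periods, key=lambda k: sum(abs(profiles[i][k] - element[i])
                                          for i in profiles))

print(profiles, nearest_period({0: 1.5, 1: 0.55}))   # matches period 1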
8.8.2 Entities
8.8.2.1 General Methods
Applying the general techniques defined above to entities, we find that we can assess
the priority of attributes that we need to use to develop the decision tree structure
(reflected in the network section for entities). We evaluate the largest information gain
to give the entity.
Using the maximum a posteriori learning strategy we evaluate the derivative of the log
likelihood with respect to each parameter and find the parameter values such that the
derivatives are zero using computer optimization techniques to select the entity.
8.8.2.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be experienced, particularly in the specialised selection of an entity. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
8.8.2.3 Child Learning
As entities are added to the system they are defined by their connections through
services, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for entities only.
8.8.2.4 Medical Systems
We use the concepts of the medical systems approach to build a data source from the learning process and then use the minimum “distance” to select the entity from the feature list. At this stage of the study we select the Markov matrix structure with error analysis for entities only.
8.8.3 Services
8.8.3.1 General Methods
Applying the general techniques defined above to services, we find that we can assess the priority of attributes that we need to use to develop the decision tree structure (reflected in the network section for services). We evaluate the largest information gain to give the service.
Using the maximum a posteriori learning strategy we evaluate the derivative of the log likelihood with respect to each parameter and find the parameter values such that the derivatives are zero, using computer optimization techniques to select the service.
8.8.3.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be experienced, particularly in the specialised selection of a service. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
8.8.3.3 Child Learning
As services are added to the system they are defined by their connections through
entities, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for services only.
8.8.3.4 Medical Systems
We use the concepts of the medical systems approach to build a data source from the learning process and then use the minimum “distance” to select the service from the feature list. At this stage of the study we select the Markov matrix structure with error analysis for services only.
8.8.4 Standards
8.8.4.1 General Methods
Applying the general techniques defined above to standards, we find that we can assess the priority of attributes that we need to use to develop the decision tree structure (reflected in the network section for standards). We evaluate the largest information gain to give the standard.
Using the maximum a posteriori learning strategy we evaluate the derivative of the log likelihood with respect to each parameter and find the parameter values such that the derivatives are zero, using computer optimization techniques to select the standard.
8.8.4.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be experienced, particularly in the specialised selection of a standard. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
8.8.4.3 Child Learning
As standards are added to the system they are defined by their connections
through entities, techniques, services and communications to generalise, standardise
and specify rules to reflect the network model defined in previous sections. At this stage
of the study we select the network structure with error analysis for standards only.
8.8.4.4 Medical Systems
We use the concepts of the medical systems approach to build a data source from the learning process and then use the minimum “distance” to select the standard from the feature list. At this stage of the study we select the Markov matrix structure with error analysis for standards only.
8.8.5 Techniques
8.8.5.1 General Methods
Applying the general techniques defined above to techniques, we find that we can assess the priority of attributes that we need to use to develop the decision tree structure (reflected in the network section for techniques). We evaluate the largest information gain to give the technique.
Using the maximum a posteriori learning strategy we evaluate the derivative of the log likelihood with respect to each parameter and find the parameter values such that the derivatives are zero, using computer optimization techniques to select the technique.
8.8.5.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be experienced, particularly in the specialised selection of a technique. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
8.8.5.3 Child Learning
As techniques are added to the system they are defined by their connections
through entities, standards, services and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for techniques only.
8.8.5.4 Medical Systems
We use the concepts of the medical systems approach to build a data source from the learning process and then use the minimum “distance” to select the technique from the feature list. At this stage of the study we select the Markov matrix structure with error analysis for techniques only.
8.8.6 Communications
8.8.6.1 Introduction
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications metrics are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present.
8.8.6.2 General Methods
Learning is the process of finding how to improve the state in some environment. It can be done by observation or by training. It can be reflected as inductive learning using a set of examples with attributes expressed as tables or a decision tree.
Using information theory we can assess the priority of attributes that we need to use to
develop the decision tree structure. We calculate the information content (entropy)
using the formula:
I(P(v1), …, P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
Learning can be viewed as a Bayesian updating of a probability distribution over the hypothesis space. It uses predictions of the likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, choosing to maximise the probability of each hypothesis for all outcomes of the training data, expressing it in terms of the full data for each hypothesis and taking logs to give a measure of bits to encode data given the hypothesis and bits to encode the hypothesis (minimum description length). For large datasets, we can use maximum likelihood (ML) learning by maximising the probability of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, though modern optimization techniques do help.
8.8.6.3 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined by the function F1 as
F1(u0, u∞, D) = [u0 (1 - f^aK) 2^(1-D) + u∞ (Gs + Gf) f^KT (1 - f^aDK)] / [(1 - f^aK) 2^(1-D) + (Gs + Gf) f^KT (1 - f^aDK)]
where Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1 applications out of a total training of D applications; u0 applies to the untrained user, u∞ to the fully trained user, and u'0, u'∞ are the corresponding values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower operation speeds and increased randomness in probabilities and search scan, following the inverted-U curves from ergonomics.
Thus:
uij = uijmax (1 - U1(m - m1)^2) + uijmin U1(m - m1)^2
where uij have minimum values uijmin and maximum values uijmax. m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the specialised field of the system. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
8.8.6.4 Child Learning
As communications are added to the system they are defined by their
connections through entities, standards, services and techniques to generalise,
standardise and specify rules to reflect the network model defined in previous sections.
At this stage of the study we select the network structure with error analysis for
communications only.
8.8.6.5 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i has pi possible values aij for j = 1 to pi. We find that there are two types of value: the first is numeric and the second is a classification value such as yes or no. On many occasions we find that we need the condition "don't know" with classification when the value cannot be specified. The value of each characteristic can change over a set of time periods, so that at period k the characteristic takes the value bik, which can be one of the pi values ai1, …, aipi. The values bik will reflect the profile of the system at period k for all the n characteristics and the variation of a characteristic i over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = Σl=1..r cikl / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing mode and the revision mode. The browsing phase can be expressed as specifying characteristic values ei for i = 1 to q and finding the other characteristic values fi for i = q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the computer specifies the values of q, the student supplies the fi, and the computer performs the check as stated above.
8.8.7 Antivirus
8.8.7.1 General Methods
Applying the general techniques defined above to entities, we can assess the priority of
the attributes needed to develop the decision tree structure (reflected in the network
section for entities), evaluating the attribute with the largest information gain to give
the entity.
Using the maximum a posteriori learning strategy, we evaluate the derivative of the log
likelihood with respect to each parameter and find the parameter values at which the
derivatives are zero, using computer optimization techniques to select the entity.
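A minimal sketch of the information-gain calculation driving the decision tree (standard ID3-style gain; the row/label layout is an assumption for illustration):

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label sequence.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    # Gain from splitting rows (a list of attribute dicts) on one attribute.
    gain = entropy(labels)
    for value in set(r[attribute] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attribute] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

The attribute with the largest gain is taken first, and the procedure recurses on each branch.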
8.8.7.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be
experienced, particularly in the specialised selection of an entity. They should be good
workers (accurate, efficient, good memory, careful, precise, fast learners) who are able
to settle to work quickly and continue to concentrate for long periods. They should have
aptitude and fast recall.
8.8.7.3 Child Learning
As entities are added to the system they are defined by their connections through
services, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for entities only.
8.8.7.4 Medical Systems
We use the concepts in the medical systems section to build a data source from the
learning process and then use the minimum "distance" to select the entity from the
feature list. At this stage of the study we select the Markov matrix structure with error
analysis for entities only.
8.8.8 Firewall
8.8.8.1 General Methods
Applying the general techniques defined above to entities, we can assess the priority of
the attributes needed to develop the decision tree structure (reflected in the network
section for entities), evaluating the attribute with the largest information gain to give
the entity.
Using the maximum a posteriori learning strategy, we evaluate the derivative of the log
likelihood with respect to each parameter and find the parameter values at which the
derivatives are zero, using computer optimization techniques to select the entity.
8.8.8.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be
experienced, particularly in the specialised selection of an entity. They should be good
workers (accurate, efficient, good memory, careful, precise, fast learners) who are able
to settle to work quickly and continue to concentrate for long periods. They should have
aptitude and fast recall.
8.8.8.3 Child Learning
As entities are added to the system they are defined by their connections through
services, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for entities only.
8.8.8.4 Medical Systems
We use the concepts in the medical systems section to build a data source from the
learning process and then use the minimum "distance" to select the entity from the
feature list. At this stage of the study we select the Markov matrix structure with error
analysis for entities only.
8.8.9 APIDS
8.8.9.1 General Methods
Applying the general techniques defined above to entities, we can assess the priority of
the attributes needed to develop the decision tree structure (reflected in the network
section for entities), evaluating the attribute with the largest information gain to give
the entity.
Using the maximum a posteriori learning strategy, we evaluate the derivative of the log
likelihood with respect to each parameter and find the parameter values at which the
derivatives are zero, using computer optimization techniques to select the entity.
8.8.9.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be
experienced, particularly in the specialised selection of an entity. They should be good
workers (accurate, efficient, good memory, careful, precise, fast learners) who are able
to settle to work quickly and continue to concentrate for long periods. They should have
aptitude and fast recall.
8.8.9.3 Child Learning
As entities are added to the system they are defined by their connections through
services, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for entities only.
8.8.9.4 Medical Systems
We use the concepts in the medical systems section to build a data source from the
learning process and then use the minimum "distance" to select the entity from the
feature list. At this stage of the study we select the Markov matrix structure with error
analysis for entities only.
8.8.10 Ciphers
8.8.10.1 General Methods
Applying the general techniques defined above to entities, we can assess the priority of
the attributes needed to develop the decision tree structure (reflected in the network
section for entities), evaluating the attribute with the largest information gain to give
the entity.
Using the maximum a posteriori learning strategy, we evaluate the derivative of the log
likelihood with respect to each parameter and find the parameter values at which the
derivatives are zero, using computer optimization techniques to select the entity.
8.8.10.2 Theoretical Studies
Using the formulae in the introduction of this section, we find that the user should be
experienced, particularly in the specialised selection of an entity. They should be good
workers (accurate, efficient, good memory, careful, precise, fast learners) who are able
to settle to work quickly and continue to concentrate for long periods. They should have
aptitude and fast recall.
8.8.10.3 Child Learning
As entities are added to the system they are defined by their connections through
services, techniques, standards and communications to generalise, standardise and
specify rules to reflect the network model defined in previous sections. At this stage of
the study we select the network structure with error analysis for entities only.
8.8.10.4 Medical Systems
We use the concepts in the medical systems section to build a data source from the
learning process and then use the minimum "distance" to select the entity from the
feature list. At this stage of the study we select the Markov matrix structure with error
analysis for entities only.
8.9 Statistics Theory
8.9.1 Introduction
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
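One plausible reading of these counts, as a minimal sketch (the min/sum treatment of the "and"/"or" groups is an assumption for illustration, not the paper's definition):

from collections import Counter

usage = Counter()          # per-unit usage counts collected from the network model

def record_use(unit):
    usage[unit] += 1

def group_and(units):
    # Occasions on which the whole group is used together, bounded by the rarest member.
    return min(usage[u] for u in units)

def group_or(units):
    # Upper bound on occasions on which any alternative in the group is used.
    return sum(usage[u] for u in units)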
8.9.2 Entities
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular entity. We use the
logical and operator for using groups of entities based on the recurrence of using an
entity. When we are considering the correctness of the alternatives of entities in a
system we use the logical or operation. When we come across a situation where one
entity for a particular system implies that we will always have to use specific further
entities we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.3 Services
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular service. We use the
logical and operator for using groups of services based on the recurrence of using a
service. When we are considering the correctness of the alternatives of services in a
system we use the logical or operation. When we come across a situation where one
service for a particular system implies that we will always have to use specific further
services we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.4 Standards
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular standard. We use the
logical and operator for using groups of standards based on the recurrence of using a
standard. When we are considering the correctness of the alternatives of standards in a
system we use the logical or operation. When we come across a situation where one
standard for a particular system implies that we will always have to use specific
further standards we will use the dependent forms of the and and or logical operations.
The structures of systems imply a network form and we can use the methods described
in the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.5 Techniques
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular technique. We use
the logical and operator for using groups of techniques based on the recurrence of
using a technique. When we are considering the correctness of the alternatives of
techniques in a system we use the logical or operation. When we come across a
situation where one technique for a particular system implies that we will always have
to use specific further techniques we will use the dependent forms of the and and or
logical operations. The structures of systems imply a network form and we can use the
methods described in the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.6 Communications
The communications statistics are based on a mixture of entities, services, standards
and techniques which seems too complicated to analyse at present. We use the
network model described above to give a basis for the collection of data about the
system. When we consider the occurrence of an event in system research we are
talking about events, recurring events or choices of event. In the case of sequences of
occurrences we have the count of using a particular unit. We use the logical and
operator for using groups of units based on the recurrence of using a unit. When we are
considering the correctness of the alternatives of units in a system we use the logical
or operation. When we come across a situation where one unit for a particular system
implies that we will always have to use specific further units we will use the dependent
forms of the and and or logical operations. The structures of systems imply a network
form and we can use the methods described in the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.7 Antivirus
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular entity. We use the
logical and operator for using groups of entities based on the recurrence of using an
entity. When we are considering the correctness of the alternatives of entities in a
system we use the logical or operation. When we come across a situation where one
entity for a particular system implies that we will always have to use specific further
entities we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.8 Firewall
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular entity. We use the
logical and operator for using groups of entities based on the recurrence of using an
entity. When we are considering the correctness of the alternatives of entities in a
system we use the logical or operation. When we come across a situation where one
entity for a particular system implies that we will always have to use specific further
entities we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.9 APIDS
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular entity. We use the
logical and operator for using groups of entities based on the recurrence of using an
entity. When we are considering the correctness of the alternatives of entities in a
system we use the logical or operation. When we come across a situation where one
entity for a particular system implies that we will always have to use specific further
entities we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.9.10 Ciphers
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular entity. We use the
logical and operator for using groups of entities based on the recurrence of using an
entity. When we are considering the correctness of the alternatives of entities in a
system we use the logical or operation. When we come across a situation where one
entity for a particular system implies that we will always have to use specific further
entities we will use the dependent forms of the and and or logical operations. The
structures of systems imply a network form and we can use the methods described in
the part on network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10 Probability Theory
8.10.1 Introduction
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of units based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
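Worked directly, the combinations above reduce to simple arithmetic; a minimal sketch (the numeric probabilities are assumed for illustration, and A, B are taken as independent for the "and" case):

p_a, p_b = 0.6, 0.5              # assumed selection probabilities
p_and = p_a * p_b                # P(A ˄ B) for independent A, B
p_or = p_a + p_b - p_and         # P(A ˅ B) by inclusion-exclusion
p_a_given_b = p_and / p_b        # P(A │ B) = P(A ˄ B) / P(B)
print(p_and, p_or, p_a_given_b)  # approximately 0.3, 0.8, 0.6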
8.10.2 Entities
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting an entity. When we
are considering the correctness of the alternatives of entities in a service we use the
logical or operation. When we come across a situation where one entity for a particular
system implies that we will always have to use specific further entities we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the methods described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.3 Services
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct service. We use the logical and
operator for selecting groups of services based on the recurrence of selecting a
service. When we are considering the correctness of the alternatives of services in a
service we use the logical or operation. When we come across a situation where one
service for a particular system implies that we will always have to use specific further
services we will use the dependent forms of the and and or logical operations. The
structures of a system imply a network form and we can use the methods described in
the part on network structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.4 Standards
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct standard. We use the logical and
operator for selecting groups of standards based on the recurrence of selecting a
standard. When we are considering the correctness of the alternatives of standards in a
service we use the logical or operation. When we come across a situation where one
standard for a particular system implies that we will always have to use specific
further standards we will use the dependent forms of the and and or logical operations.
The structures of a system imply a network form and we can use the methods
described in the part on network structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.5 Techniques
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct technique. We use the logical and
operator for selecting groups of techniques based on the recurrence of selecting a
technique. When we are considering the correctness of the alternatives of techniques
in a service we use the logical or operation. When we come across a situation where
one technique for a particular system implies that we will always have to use specific
further techniques we will use the dependent forms of the and and or logical
operations. The structures of a system imply a network form and we can use the
methods described in the part on network structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.6 Communications
The communications probability is based on a mixture of entities, services, standards
and techniques which seems too complicated to analyse at present. We use the
network model described above to give a basis for the collection of data about the
system. When we consider the occurrence of an event in system research we are
talking about events, recurring events or choices of event. In the case of sequences of
occurrences we have the count of using a particular node. We use the logical and
operator for using groups of nodes based on the recurrence of using a node. When we
are considering the correctness of the alternatives of nodes in a system we use the
logical or operation. When we come across a situation where one node for a particular
system implies that we will always have to use specific further nodes we will use the
dependent forms of the and and or logical operations. The structures of systems imply
a network form and we can use the methods described in the part on network
structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.7 Antivirus
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting an entity. When we
are considering the correctness of the alternatives of entities in a service we use the
logical or operation. When we come across a situation where one entity for a particular
system implies that we will always have to use specific further entities we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the methods described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.8 Firewall
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting an entity. When we
are considering the correctness of the alternatives of entities in a service we use the
logical or operation. When we come across a situation where one entity for a particular
system implies that we will always have to use specific further entities we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the methods described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.9 APIDS
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting an entity. When we
are considering the correctness of the alternatives of entities in a service we use the
logical or operation. When we come across a situation where one entity for a particular
system implies that we will always have to use specific further entities we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the methods described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.10.10 Ciphers
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities:
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct entity. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting an entity. When we
are considering the correctness of the alternatives of entities in a service we use the
logical or operation. When we come across a situation where one entity for a particular
system implies that we will always have to use specific further entities we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the methods described in the part on network
structures.
If any error is found it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
8.11 Geographic Information Systems
8.11.1 Introduction
A geographic information system (GIS) is a database system for holding geographic data.
It collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence, in association with a number of technologies,
processes and methods. GIS uses digital information representing discrete objects
(typically stored as vector data) and continuous fields (typically stored as raster
images). Displays can illustrate and analyse features and enhance descriptive
understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
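A minimal sketch of the two forms of positional data described in the subsections below (the class and field names are assumptions for illustration):

from dataclasses import dataclass

@dataclass
class EntityType:
    # Pure data values, independent of position.
    name: str
    description: str

@dataclass
class PlacedEntity:
    # Position-dependent record, held as a vector point.
    entity_type: EntityType
    lat: float
    lon: float

router = EntityType("router", "general description of a hardware type")
unit = PlacedEntity(router, 51.5, -0.12)   # a hardware unit in the network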
8.11.2 Entity
Entities fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.3 Services
Services fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a service type. The other is dependent on
position, e.g. a service in the network. The data can be held as discrete objects (vector)
and continuous fields (raster). This enables services to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.4 Standards
Standards fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a physical electrical connection in the network. The data can be held as
discrete objects (vector) and continuous fields (raster). This enables standards to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.5 Techniques
Techniques fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a Bluetooth type. The other is dependent on
position, e.g. a radio connection in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables techniques to be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
Communications fall into two forms. The first is pure data values which are not affected
by position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables communications to be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.7 Antivirus
Entities fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.8 Firewall
Entities fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.9 APIDS
Entities fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.11.10 Ciphers
Entities fall into two forms. The first is pure data values which are not affected by
position, e.g. the general description of a hardware type. The other is dependent on
position, e.g. a hardware unit in the network. The data can be held as discrete objects
(vector) and continuous fields (raster). This enables entities to be positioned, monitored,
analysed and displayed for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
8.12 Curve Fitting
8.12.1 Introduction
Curve fitting constructs a curve or mathematical function that best fits a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an exact
fit of the data, and smoothing, for a "smooth" function approximating the data.
Regression analysis gives a measure of the uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the function
where data values are missing. They also summarize the relations of the variables.
Extrapolation takes the fitted curve beyond the range of the observed data, with an
uncertainty that depends on which particular curve has been determined. Curve fitting
relies on various types of constraints, such as a specific point, angle, curvature or other
higher-order constraints, especially at the ends of the points being considered. The
number of constraints limits the number of combined functions defining the fitted curve,
and even then there is no guarantee that all constraints are met or that the exact curve
is found. Curves are assessed by various measures, a popular procedure being the
least squares method, which measures the deviations from the given data points. With
language processing it is found that affine matrix transformations help deal with
problems of translation and different axes.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
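A minimal least-squares sketch (the sample data are assumed for illustration; numpy's polyfit is used as the fitting routine):

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])      # noisy samples of an assumed quadratic
coeffs = np.polyfit(x, y, deg=2)               # minimises the squared deviations
model = np.poly1d(coeffs)
residuals = y - model(x)                       # deviations assessed by least squares
print(model(5.0))                              # extrapolation beyond the observed range

The printed value inherits the extra uncertainty of extrapolation discussed above.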
8.12.2 Entities
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.3 Services
Curve fitting is a method of selecting the correct service from the group of services to
process an entity.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.4 Standards
Curve fitting is a method of selecting the correct standard from the group of standards
to process a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.5 Techniques
Curve fitting is a method of selecting the correct technique from the group of
techniques to apply to a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. Curve fitting is a
method of selecting the correct source from the group of nodes to apply to a
communications service. It is used to select the correct destination from the group of
nodes to apply to a communications service and then to select the correct connection
from the group of routes to apply to a communications service. Curve fitting is a
method of checking the entity, service, technique, standard and communications from
the components that make up the system.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.7 Antivirus
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.8 Firewall
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.9 APIDS
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
8.12.10 Ciphers
Curve fitting is a method of selecting the correct entity from the group of entities that
make up a service.
If any error is found an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.

8.13 Configuration Management


8.13.1 Introduction
Configuration management requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required,
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, and reduced cost, risk and liability, at small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts,
so that changes are documented, assessed in a standardised way to avoid any
disadvantages, and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
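A minimal sketch of a configuration item record covering the four activities named above (the class and field names are assumptions for illustration):

from dataclasses import dataclass, field

@dataclass
class ConfigItem:
    name: str
    version: str                                  # configuration identification
    baseline: str                                 # the approved baseline this version belongs to
    history: list = field(default_factory=list)   # configuration status accounting

    def change(self, new_version, approved_by):
        # Configuration control: changes are approved, documented and tracked.
        self.history.append((self.version, new_version, approved_by))
        self.version = new_version

    def audit(self):
        # Configuration audit: every recorded change names an approver.
        return all(approver for _, _, approver in self.history)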
8.13.2 Entities
Configuration management requires configuration identification defining attributes of
the entity for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.3 Services
Configuration management requires configuration identification defining attributes of
the service for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.4 Standards
Configuration management requires configuration identification defining attributes of
the standard for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.5 Techniques
Configuration management requires configuration identification defining attributes of
the technique for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.6 Communications
Communications is a dialogue process using protocols defining services and
techniques between entities. Services, techniques and entities are defined in terms of
modules performing tasks with data at nested levels.
Configuration management requires configuration identification defining attributes of
the communication entity for base-lining, configuration control at approval stages and
baselines, configuration status accounting recording and reporting on the baselines as
required and configuration audits at delivery or completion of changes to validate
requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.7 Antivirus
Configuration management requires configuration identification defining attributes of
the entity for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.8 Firewall
Configuration management requires configuration identification defining attributes of
the entity for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.9 APIDS
Configuration management requires configuration identification defining attributes of
the entity for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.13.10 Ciphers
Configuration management requires configuration identification defining attributes of
the entity for base-lining, configuration control at approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It uses the structure of the system in its parts so that changes are documented,
assessed in a standardised way to avoid any disadvantages and then tracked to
implementation. It is reflected by a version for the configuration identification.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.14 Continuous Integration
8.14.1 Introduction
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. He performs his update work and
rebuilds the system using the build server for compiling binaries, generating
documentation, website pages, statistics and distribution media, integration, and
deployment into a scalable versioned clone of the production environment, with service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that the system behaves as it
should. He resubmits the updates to the repository, which triggers another build
process and tests. The new updates are committed to the repository when all the tests
have been verified; otherwise they are rolled back. At that stage the new system is
available to stakeholders and testers. The build process is repeated periodically with
the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
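A minimal sketch of the commit gate described above (the build and test commands are assumptions for illustration, not a prescribed toolchain):

import subprocess

def ci_cycle():
    # Build, then test; commit the updates on success, roll them back on failure.
    for step in (["make", "build"], ["make", "test"]):
        if subprocess.run(step).returncode != 0:
            subprocess.run(["git", "checkout", "--", "."])      # roll back the working copy
            return False
    subprocess.run(["git", "commit", "-am", "verified build"])  # verified updates are committed
    return True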
8.14.2 Entities
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. He performs his update work and
rebuilds the system using the build server for compiling binaries, generating
documentation, website pages, statistics and distribution media, integration, and
deployment into a scalable versioned clone of the production environment, with service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that the system behaves as it
should. He resubmits the updates to the repository, which triggers another build
process and tests. The new updates are committed to the repository when all the tests
have been verified; otherwise they are rolled back. At that stage the new system is
available to stakeholders and testers. The build process is repeated periodically with
the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.14.3 Services
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. He performs his update work and
rebuilds the system using the build server for compiling binaries, generating
documentation, website pages, statistics and distribution media, integration, and
deployment into a scalable versioned clone of the production environment, with service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that the system behaves as it
should. He resubmits the updates to the repository, which triggers another build
process and tests. The new updates are committed to the repository when all the tests
have been verified; otherwise they are rolled back. At that stage the new system is
available to stakeholders and testers. The build process is repeated periodically with
the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.14.4 Standards
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. He performs his update work and
rebuilds the system using the build server for compiling binaries, generating
documentation, website pages, statistics and distribution media, integration, and
deployment into a scalable versioned clone of the production environment, with service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that the system behaves as it
should. He resubmits the updates to the repository, which triggers another build
process and tests. The new updates are committed to the repository when all the tests
have been verified; otherwise they are rolled back. At that stage the new system is
available to stakeholders and testers. The build process is repeated periodically with
the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.14.5 Techniques
The continuous integration workflow described above applies unchanged to techniques.
8.14.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
The continuous integration workflow described above applies unchanged to communications.
8.14.7 Antivirus
The continuous integration workflow described above applies unchanged to antivirus.
8.14.8 Firewall
The continuous integration workflow described above applies unchanged to the firewall.
8.14.9 APIDS
The continuous integration workflow described above applies unchanged to APIDS.
8.14.10 Ciphers
The continuous integration workflow described above applies unchanged to ciphers.
8.15 Continuous Delivery
8.15.1 Introduction
In continuous delivery, teams produce software in short cycles so that the system can be
released at any time. Performing the build, test and release phases faster and more
frequently reduces the cost, time and risk of delivered changes by keeping updates small
and incremental. A simple, repeatable deployment process is essential for continuous
delivery.
It uses a deployment pipeline to provide visibility, feedback and continual deployment.
Visibility analyses the activities (build, deploy, test and release) and reports their
status to the development team. Feedback informs the team of problems so that they can be
resolved quickly. Continual deployment uses an automated process to deploy and release
any version of the system to any environment.
Continuous delivery automates everything from source control through to production. It
includes continuous integration, application release automation, build automation and
application life-cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with its requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
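As a sketch of the deployment pipeline just described, the following Python fragment pushes a version through build, deploy, test and release stages and reports per-stage status back to the team; the stage names and stage functions are illustrative assumptions, not a specific tool's interface.
```python
# Minimal sketch of a deployment pipeline: each version moves through the
# stages in order, stopping at the first failure so the team gets feedback.

def pipeline(version, stages):
    """Push one version through the pipeline; stop at the first failure."""
    status = {}
    for name, stage in stages:
        ok = stage(version)
        status[name] = "ok" if ok else "failed"
        if not ok:                        # feedback: the team sees exactly
            break                         # which stage broke this version
    return status                         # visibility: status per stage

stages = [
    ("build",   lambda v: True),
    ("deploy",  lambda v: True),          # automated deploy to any environment
    ("test",    lambda v: v != "1.0.3"),  # pretend version 1.0.3 fails tests
    ("release", lambda v: True),
]
for v in ("1.0.2", "1.0.3"):
    print(v, pipeline(v, stages))
```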
8.15.2 Entity
The continuous delivery process described above applies unchanged to entities.
8.15.3 Services
The continuous delivery process described above applies unchanged to services.
8.15.4 Standards
The continuous delivery process described above applies unchanged to standards.
8.15.5 Techniques
The continuous delivery process described above applies unchanged to techniques.
8.15.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
The continuous delivery process described above applies unchanged to communications.
8.15.7 Antivirus
The continuous delivery process described above applies unchanged to antivirus.
8.15.8 Firewall
The continuous delivery process described above applies unchanged to the firewall.
8.15.9 APIDS
The continuous delivery process described above applies unchanged to APIDS.
8.15.10 Ciphers
The continuous delivery process described above applies unchanged to ciphers.
8.16 Virtual Reality
8.16.1 Introduction
Virtual reality simulates an environment, the user's presence within it, and interaction
through sight, touch, hearing and smell. It uses a screen or a special headset to present
visual and sound information. Input is made through standard computer input, gaze
tracking or tactile devices. Supporting technologies include remote communication,
artificial intelligence and spatial data.
If any error is found then an error report is generated and displayed as a device stack
and position, evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
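As an illustration, the following Python sketch shows one possible shape for such an error report (device stack plus the time/device/type/position context used in review); all field names are assumptions made for the example.
```python
# Sketch of the VR error report described above; field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VRErrorReport:
    device: str
    device_type: str
    position: tuple                      # (x, y, z) in the simulated scene
    device_stack: list = field(default_factory=list)
    time: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = VRErrorReport("headset-01", "HMD", (1.2, 0.0, -3.5),
                       ["display", "tracker", "render loop"])
print(report)
```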
8.16.2 Entities
The virtual reality technology described above applies unchanged to entities.
8.16.3 Services
The virtual reality technology described above applies unchanged to services.
8.16.4 Standards
The virtual reality technology described above applies unchanged to standards.
8.16.5 Techniques
The virtual reality technology described above applies unchanged to techniques.
8.16.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process.
The virtual reality technology described above applies unchanged to communications.
8.16.7 Antivirus
The virtual reality technology described above applies unchanged to antivirus.
8.16.8 Firewall
The virtual reality technology described above applies unchanged to the firewall.
8.16.9 APIDS
The virtual reality technology described above applies unchanged to APIDS.
8.16.10 Ciphers
The virtual reality technology described above applies unchanged to ciphers.
8.17 Programming Language Theory
8.17.1 Introduction
Programming language theory gives us the rules, in formalised standards and techniques,
for defining a programming language in terms of a formal language; media technologies
offer a similar kind of definition. We use the network model described above as a basis
for collecting data about the system. We find that we need to set priorities among the
rules for evaluating units and processes. Object-oriented programming gives us the
concepts of scope for meaning, objects, properties, methods with arguments, the "this"
operator, and the notions of synonyms, generalisation and specialisation. Overloading of
definitions allows meaning to change according to context. Replicating actions use
iteration under different cases. Conditional compilation, macros and package libraries
help reuse previous work; a short illustration follows.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
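The following Python fragment is a small, invented illustration of the concepts just listed (scope via self, specialisation by subclassing, and overloading resolved by context); it is not drawn from any particular system.
```python
# Illustration of scope, objects, methods, "this", specialisation and
# overloading; the classes are invented for the example.

class Sensor:                        # generalisation: the base concept
    def __init__(self, name):
        self.name = name             # "this"/self scopes the property

    def read(self):
        return 0.0

class TemperatureSensor(Sensor):     # specialisation of the general entity
    def read(self):                  # meaning changes according to context
        return 21.5

    def __add__(self, other):        # overloading: '+' redefined for sensors
        return self.read() + other.read()

a, b = TemperatureSensor("t1"), TemperatureSensor("t2")
print(a + b)                         # 43.0: '+' resolved by object context
```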
8.17.2 Entities
Programming language theory gives us the rules, as a formalised standard and technique,
for defining the system in terms of a formal language; media technologies offer a similar
kind of definition. We define a set of base elements as the entities of the system. Each
entity record has a name, iteration control, type, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics and nesting. An escape sequence provides a way of extending
the entity set, as sketched below.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
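A minimal sketch of such an entity record follows, together with an escape-style helper for extending the entity set; the field names mirror the list above, but everything else is an assumption for illustration.
```python
# Sketch of the entity record described above; field names follow the text,
# types and the escape_extend helper are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    iteration_control: str
    type: str
    sound_id: str                  # identity by sound
    picture_id: str                # identity by picture
    hardware_repr: str
    meaning: str
    version: str
    timestamp: str
    geo_position: tuple
    properties: dict = field(default_factory=dict)   # name -> value
    statistics: dict = field(default_factory=dict)
    children: list = field(default_factory=list)     # nesting

ENTITY_SET = {}                    # the base entity set, extensible

def escape_extend(entity):
    """Escape sequence: add a new entity to the entity set."""
    ENTITY_SET[entity.name] = entity

escape_extend(Entity("thermostat", "once", "device", "beep.wav", "icon.png",
                     "reg-0x40", "room temperature control", "1.0",
                     "2018-01-01T00:00:00Z", (51.5, -0.1)))
print(sorted(ENTITY_SET))
```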
8.17.3 Services
The requirements for the service data set are:
• object oriented type
• event-driven architecture data set
• hypertext hypermedia data set
• probabilistic data set
• real-time data set
The logical data set structure must follow the object-oriented type with XML tags. It has
an iteration control, name, identity by sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value), statistics,
events (name and value), an interrupt recovery service with arguments, a priority value
both absolute and relative to other services, and nesting. We define a set of rules for
extending the services of the system, performed in coordination with the extended
standard and extended technique definition sections; an illustrative record is sketched
below.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
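As an illustration, the following Python fragment builds one such XML-tagged service record; the tag and attribute names are assumptions chosen to mirror the fields listed above, not a published schema.
```python
# Sketch of one XML-tagged service record; tag/attribute names are invented.
import xml.etree.ElementTree as ET

service = ET.Element("service", name="data-transfer", version="1.0",
                     timestamp="2018-01-01T00:00:00Z", priority="5")
ET.SubElement(service, "identity", sound="chime.wav", picture="icon.png")
ET.SubElement(service, "property", name="max_rate", value="100")
ET.SubElement(service, "event", name="overrun", value="raise",
              recovery="interrupt_handler")           # interrupt recovery
ET.SubElement(service, "service", name="checksum")    # nesting
print(ET.tostring(service, encoding="unicode"))
```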
8.17.4 Standards
The requirements for the standard data set are:
• object oriented type
• event-driven architecture data set
• hypertext hypermedia data set
• probabilistic data set
• real-time data set
The logical data set structure must follow the object-oriented type with XML tags. It has
a name, hardware representation, rules, version, timestamp, statistics, entities,
services and techniques. We define a set of rules for extending the standards of the
system, performed in coordination with the extended services and extended technique
definition sections.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
8.17.5 Techniques
The requirements for the technique data set are:
• object oriented type
• event-driven architecture data set
• hypertext hypermedia data set
• probabilistic data set
• real-time data set
The logical data set structure must follow the object-oriented type with XML tags. It
contains iteration control, a name as string, sound and picture, hardware representation,
meaning, version, timestamp, properties (name and value), statistics, nesting, events
(name, value and interrupt service), and a priority absolute and relative to other
techniques. We define a set of rules for extending the techniques of the system,
performed in coordination with the extended standard and extended services definition
sections.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
8.17.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The communications
metrics are based on a mixture of entities, services, standards and techniques, which
seem too complicated to analyse at present.
The requirements for the communications data set are:
• object oriented type
• event-driven architecture data set
• hypertext hypermedia data set
• probabilistic data set
• real-time data set
The logical data set structure must follow the object-oriented type with XML tags. It
defines a name (string, sound, picture), hardware representation, version, timestamp,
statistics, entities, services, techniques and standards. Extensions are defined by a
similar set of rules.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
8.17.7 Antivirus
The service data set requirements and record structure of section 8.17.3 apply unchanged
to antivirus.
8.17.8 Firewall
The service data set requirements and record structure of section 8.17.3 apply unchanged
to the firewall.
8.17.9 APIDS
The service data set requirements and record structure of section 8.17.3 apply unchanged
to APIDS.
8.17.10 Ciphers
The service data set requirements and record structure of section 8.17.3 apply unchanged
to ciphers.
8.18 Compiler Technology Theory
8.18.1 Introduction
A compiler translates high-level source programs into target code for running on computer
hardware. It follows a sequence of operations: lexical analysis, pre-processing, parsing,
semantic analysis (standard-directed translation), code generation and optimisation. A
compiler-compiler is a parser generator which helps create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the
programming language. It provides file inclusion, macro expansion, conditional
compilation and line control. The pre-processor directives are only weakly related to the
programming language itself. The pre-processor is often used to include other files,
replacing the directive line with the text of the file. Conditional compilation
directives allow the inclusion or exclusion of lines of code. Macro definition and
expansion let sets of code be defined once and expanded wherever required in the text of
the code unit. A toy example follows.
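The following toy pre-processor sketches these three facilities (inclusion, macros, conditional compilation); the C-style directive syntax and the in-memory files dict are used purely for illustration.
```python
# Toy pre-processor: #include, #define and #ifdef/#endif over lists of lines.
# Directive syntax is borrowed from C for illustration only.

def preprocess(lines, files, defines):
    out, skipping = [], False
    for line in lines:
        if line.startswith("#include "):
            name = line.split()[1].strip('"')
            out += preprocess(files[name], files, defines)  # inline the file
        elif line.startswith("#define "):
            _, key, val = line.split(maxsplit=2)
            defines[key] = val
        elif line.startswith("#ifdef "):
            skipping = line.split()[1] not in defines       # conditional
        elif line.startswith("#endif"):
            skipping = False
        elif not skipping:
            for key, val in defines.items():                # macro expansion
                line = line.replace(key, val)
            out.append(line)
    return out

files = {"limits.h": ["#define MAX 42"]}
src = ['#include "limits.h"', "#ifdef MAX", "size = MAX", "#endif"]
print(preprocess(src, files, {}))      # ['size = 42']
```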
The Production Quality Compiler-Compiler Project at Carnegie Mellon University introduced
the terms front end, middle end and back end. The front end verifies standard and
technique and generates an intermediate representation, producing error and warning
messages. It uses three phases: lexing, parsing and semantic analysis. Lexing and parsing
perform the syntactic analysis of services and phrases and can be generated automatically
from the grammar of the language. The lexical and phrase grammars support the handling of
context sensitivity at the semantic analysis phase, which can be automated using
attribute grammars. The middle end performs some optimisations for the back end. The back
end generates the target code and performs further optimisation.
An intermediate language is used to aid the analysis of computer programs within
compilers: the source code of a program is translated into a form more suitable for
code-improving transformations before being used to generate object code for a target
machine. An intermediate representation (IR) is a data structure constructed from the
input to a program, from which part or all of the program's output is constructed in
turn. Use of the term usually implies that most of the information present in the input
is retained by the intermediate representation, with further annotations or rapid-lookup
features added. A minimal front end is sketched below.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
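As a minimal illustration of a front end, the following Python fragment lexes and parses a small expression grammar into a nested-tuple intermediate representation; the grammar and the IR shape are invented for the example.
```python
# Minimal front end: lexer + recursive-descent parser emitting an IR.
# Grammar (invented): expr := term (('+' | '-') term)*
#                     term := NUMBER | '(' expr ')'
import re

def lex(src):                              # lexical analysis
    return re.findall(r"\d+|[+\-()]", src)

def parse(tokens):                         # parsing -> intermediate repr
    def term(i):
        tok = tokens[i]
        if tok == "(":
            node, i = expr(i + 1)
            return node, i + 1             # skip the closing ')'
        return ("num", int(tok)), i + 1
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = tokens[i]
            right, i = term(i + 1)
            node = (op, node, right)       # IR retains the full structure
        return node, i
    node, i = expr(0)
    if i != len(tokens):                   # front end reports errors
        raise SyntaxError("unexpected token: " + tokens[i])
    return node

print(parse(lex("1+(2-3)")))   # ('+', ('num', 1), ('-', ('num', 2), ('num', 3)))
```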
8.18.2 Entities
As regards entities, compiler technology follows the formal definitions found in
programming languages for the source (input) language, the intermediate language and the
target (output) language. These also give priorities for how entities are processed,
based on the learning, probability, network analysis and Markov theory entity sections.
If an entity is not recognised then the input entity is queried to see whether there is
an error or the entity should be added to the entity set. An escape sequence can be used
to extend the entity set.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
8.18.3 Services
As regards services, compiler technology follows the formal definitions found in
programming languages for the source (input) language, the intermediate language and the
target (output) language. These also give priorities for how services are processed,
based on the learning, probability, network analysis and Markov theory services sections.
If a service is not recognised then the input service is queried to see whether there is
an error or the service should be added to the service set. We define a set of rules for
extending the services of the language, performed in coordination with the extended
standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
8.18.4 Standards
As regards standards, compiler technology follows the formal definitions found in
programming languages for the source (input) language, the intermediate language and the
target (output) language. Standards are defined by the standard definitions for each of
the entities, services and techniques. These also give priorities for how standards are
processed, based on the learning, probability, network analysis and Markov theory
standard sections. If a standard is not recognised then the input standard is queried to
see whether there is an error or the standard should be added to the standard set. We
define a set of rules for extending the standards of the system, performed in
coordination with the extended services and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
8.18.5 Techniques
As regards techniques, compiler technology follows the formal definitions found in
programming languages for the source (input) language, the intermediate language and the
target (output) language. Techniques are defined by the technique definitions for each
part of the system. These also give priorities for how techniques are processed, based on
the learning, probability, network analysis and Markov theory technique sections. If a
technique definition is not recognised then the input definition is queried to see
whether there is an error or the definition should be added to the technique definition
set. We define a set of rules for extending the techniques of the system, performed in
coordination with the extended standard and extended services definition sections.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
8.18.6 Communications
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. As regards
techniques, compiler technology follows the formal definitions of section 8.18.5, with
the same priorities and the same treatment of unrecognised definitions.
We start with a set of base elements as the entities, services, techniques and standards
of the system, together with a sequence for extending the entity, services, techniques
and standards sets.
If a system element is not recognised then the element is queried to see whether there is
an error or the element should be added to the system set. An escape sequence can be used
to extend the system element set with a definition based on appropriate references to
previously defined elements.
By analogy with object-oriented programming we obtain the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator, and the notions of
synonyms, generalisation and specialisation. Overloading of definitions allows meaning to
change according to context. Replicating actions (iterations) can be performed under
different cases. Other operations are ways of defining properties of objects or actions
with polymorphism. We note that the multiple definitions of system elements found in the
system dictionaries are equivalent to the conditional compilation and macros found in the
C programming language. We define a set of rules for extending the entities, services,
standards and techniques of the system, performed in coordination with the extended
entities, extended services, extended standard and extended technique definition
sections. This technique is also used to extend the power of the system; a sketch of such
an extensible element dictionary follows.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
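A minimal sketch of such an extensible element dictionary follows: unknown elements are reported for review, while an explicit escape sequence defines a new element from previously defined ones. The element names and the "\new" escape syntax are illustrative assumptions.
```python
# Extensible system-element dictionary: look up, report unknowns, or extend
# the set via an escape sequence built from previously defined elements.

elements = {"read": "base service", "write": "base service"}

def resolve(token):
    if token.startswith("\\new:"):            # escape sequence: extend set
        name, definition = token[5:].split("=", 1)
        for part in definition.split("+"):    # only previously defined parts
            if part not in elements:
                raise KeyError("undefined element in definition: " + part)
        elements[name] = definition
        return name
    if token not in elements:                 # unknown: report for review
        raise KeyError("unknown element, review needed: " + token)
    return token

print(resolve("read"))
print(resolve("\\new:copy=read+write"))       # copy defined from read+write
print(resolve("copy"))
```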
8.18.7 Antivirus
The treatment of services in section 8.18.3 applies unchanged to antivirus.
8.18.8 Firewall
The treatment of services in section 8.18.3 applies unchanged to the firewall.
8.18.9 APIDS
The treatment of services in section 8.18.3 applies unchanged to APIDS.
8.18.10 Ciphers
The treatment of services in section 8.18.3 applies unchanged to ciphers.
8.19 Communications Theory
8.19.1 Introduction
A communications model consists of a source generating the data to be transmitted, a
transmitter converting the data into transmittable signals, a transmission system
carrying the data, a receiver converting the received signal back into data, and a
destination taking the incoming data. Key communications tasks include transmission
system utilisation, interfacing, signal generation, synchronisation, exchange management,
error detection and correction, addressing and routing, recovery, message formatting,
security and network management.
Protocols are used for communications between entities in a system, and the entities must
speak the same language. Entities include user applications, e-mail facilities and
terminals; systems include computers, terminals and remote sensors. The key elements of a
protocol are standard (data formats, signal levels), technique (control information,
error handling) and timing (speed matching, sequencing).
Protocol architecture breaks the task of communication into modules arranged in layers.
At each layer, protocols are used to communicate, and control information is added to the
user data; a sketch of this encapsulation follows.
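As an illustration, the following Python fragment sketches layered encapsulation: each layer adds its own control information (a header) on the way down and strips it on the way up. The layer names and header format are assumptions made for the example.
```python
# Layered protocol sketch: headers are added per layer on send and removed
# by the peer layer on receive. Layer names are illustrative.

LAYERS = ["APP", "TRANSPORT", "NETWORK"]

def send(data):
    for layer in LAYERS:                 # encapsulation at each layer
        data = f"{layer}|" + data
    return data                          # what the transmission system carries

def receive(frame):
    for layer in reversed(LAYERS):       # peer layer removes its header
        header, frame = frame.split("|", 1)
        assert header == layer, "protocol mismatch"
    return frame

frame = send("temperature=21.5")
print(frame)                             # NETWORK|TRANSPORT|APP|temperature=21.5
print(receive(frame))                    # temperature=21.5
```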
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar, which is a set of rewrite rules over
non-terminals. Grammar types are regular, context-free, context-sensitive and recursively
enumerable, with natural languages probably context-free and parsable in real time. Parse
trees demonstrate the grammatical structure of a sentence. A toy rewrite-rule grammar is
sketched below.
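The following toy grammar illustrates rewrite rules: non-terminals (upper case) are rewritten until only terminal symbols remain, and the recursion mirrors the parse-tree structure. The rules themselves are invented for the example.
```python
# Toy rewrite-rule grammar: rewrite non-terminals until only terminals remain.
import random

RULES = {
    "MSG":  [["HDR", "BODY"]],
    "HDR":  [["hello"], ["status"]],
    "BODY": [["ok"], ["error", "BODY"], ["done"]],
}

def generate(symbol, rng):
    if symbol not in RULES:                  # terminal symbol: emit it
        return [symbol]
    production = rng.choice(RULES[symbol])   # pick one rewrite rule
    out = []
    for s in production:                     # recursion = parse-tree shape
        out += generate(s, rng)
    return out

rng = random.Random(0)                       # seeded for repeatability
print(" ".join(generate("MSG", rng)))        # e.g. "status done"
```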
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.19.2 Entities
As regards entities, communications technology follows the formal definitions found in
programming languages for the source, transmission and destination languages. These also
give priorities for how entities are processed, based on the learning, probability,
network analysis and Markov theory entity sections. If an entity is not recognised then
it is passed to a recovery process based on repeated analysis of the situation by some
parallel check. If the entity is not recovered, it is referred to a human to decide
whether there is an error or the entity should be added to the entity set; an escape
sequence can be used to extend the entity set. A sketch of this recovery flow follows.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
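A minimal sketch of this recovery flow follows: an unrecognised entity is re-analysed by a parallel check, and only if that also fails is it referred to a human, who decides between "error" and "add to the set". The parallel check and the human-query callback are illustrative stand-ins.
```python
# Recognition with recovery: parallel check first, then human referral.

KNOWN = {"sensor", "reading", "alarm"}

def recognise(token, parallel_check, ask_human):
    if token in KNOWN:
        return token
    recovered = parallel_check(token)        # e.g. spelling/context analysis
    if recovered in KNOWN:
        return recovered
    if ask_human(token):                     # human: add to the entity set
        KNOWN.add(token)
        return token
    raise ValueError("entity rejected as an error: " + token)

fix_case = lambda t: t.lower()               # a trivial parallel check
approve = lambda t: t == "gateway"           # stand-in for the human query
print(recognise("Sensor", fix_case, approve))   # recovered -> 'sensor'
print(recognise("gateway", fix_case, approve))  # human adds it -> 'gateway'
```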
8.19.3 Services
As regards services, communications technology follows the formal definitions found in
programming languages for the source, transmission and destination languages. These also
give priorities for how services are processed, based on the learning, probability,
network analysis and Markov theory services sections. If a service is not recognised then
it is passed to a recovery process based on repeated analysis of the situation by some
parallel check. If the service is not recovered, it is referred to a human to decide
whether there is an error or the service should be added to the dictionary set. We define
a set of rules for extending the services of the language, performed in coordination with
the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.19.4 Standards
As regards standards, communications technology follows the formal definitions found in
programming languages for the source, transmission and destination languages. These also
give priorities for how the standard is processed, based on the learning, probability,
network analysis and Markov theory standard sections. If a standard is not recognised
then it is passed to a recovery process based on repeated analysis of the situation by
some parallel check. If the standard is not recovered, it is referred to a human to
decide whether there is an error or the standard unit should be added to the standard
dictionary set. We define a set of rules for extending the standards of the language,
performed in coordination with the extended services and extended technique definition
sections.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.19.5 Techniques
As regards techniques, communications technology follows the formal definitions found in
programming languages for the source, transmission and destination languages. These also
give priorities for how the technique is processed, based on the learning, probability,
network analysis and Markov theory technique sections. If a technique is not recognised
then it is passed to a recovery process based on repeated analysis of the situation by
some parallel check. If the technique is not recovered, it is referred to a human to
decide whether there is an error or the technique unit should be added to the technique
dictionary set. We define a set of rules for extending the techniques of the system,
performed in coordination with the extended services and extended standard definition
sections.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
8.19.6 Communications
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
Grammar specifies the compositional structure of complex messages. A formal
language is a set of strings of terminal symbols. Each string in the language can be
analysed-generated by the grammar. The grammar is a set of rewrite rules to form non
terminals. Grammar types are regular, context-free and context-sensitive and
recursively enumerable with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a message.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
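To make the grammar machinery above concrete, the sketch below tokenizes and parses a toy context-free message grammar by recursive descent and returns a small parse result. The message format (HDR ... END) and the token names are invented for illustration and are not taken from the appendix.

```python
# A minimal sketch of grammar-based message analysis, assuming a
# hypothetical format: HDR <id> { FIELD <name> = <value> ... } END.
import re

TOKEN_RE = re.compile(r"\s*(HDR|END|FIELD|=|\{|\}|[A-Za-z0-9_]+)")

def tokenize(text):
    """Split a message into tokens; fail loudly on unrecognised input."""
    pos, tokens = 0, []
    while pos < len(text):
        match = TOKEN_RE.match(text, pos)
        if not match:
            raise ValueError(f"unrecognised input at position {pos}")
        tokens.append(match.group(1))
        pos = match.end()
    return tokens

def parse_message(tokens):
    """MESSAGE -> 'HDR' id '{' FIELD* '}' 'END'  (recursive descent)."""
    def expect(tok):
        if not tokens or tokens[0] != tok:
            raise SyntaxError(f"expected {tok!r}, got {tokens[:1]}")
        tokens.pop(0)
    expect("HDR")
    msg_id = tokens.pop(0)                 # message identifier
    expect("{")
    fields = {}
    while tokens and tokens[0] == "FIELD":
        tokens.pop(0)
        name = tokens.pop(0)
        expect("=")
        fields[name] = tokens.pop(0)
    expect("}")
    expect("END")
    return {"id": msg_id, "fields": fields}

print(parse_message(tokenize("HDR m1 { FIELD src = a FIELD dst = b } END")))
```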
8.19.7 Antivirus
As regards services, communications technology follows the formal definition found in programming languages for the source, transmission and destination languages. These definitions also give priorities for how the services are processed, based on the learning, probability, network analysis and Markov theory sections for services. If a service is not recognised, it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the service is not recovered, it is referred to a human to decide whether there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language, applied in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
8.19.8 Firewall
As regards services, communications technology follows the formal definition found in programming languages for the source, transmission and destination languages. These definitions also give priorities for how the services are processed, based on the learning, probability, network analysis and Markov theory sections for services. If a service is not recognised, it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the service is not recovered, it is referred to a human to decide whether there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language, applied in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
8.19.9 APIDS
As regards services, communications technology follows the formal definition found in programming languages for the source, transmission and destination languages. These definitions also give priorities for how the services are processed, based on the learning, probability, network analysis and Markov theory sections for services. If a service is not recognised, it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the service is not recovered, it is referred to a human to decide whether there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language, applied in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
8.19.10 Ciphers
As regards services, communications technology follows the formal definition found in programming languages for the source, transmission and destination languages. These definitions also give priorities for how the services are processed, based on the learning, probability, network analysis and Markov theory sections for services. If a service is not recognised, it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the service is not recovered, it is referred to a human to decide whether there is an error or whether the service should be added to the dictionary set. We define a set of rules for extending the services of the language, applied in coordination with the extended standard and extended technique definition sections.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
8.20 Database Technology
8.20.1 Introduction
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database models most suitable for these applications are post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which derive from object databases to overcome the problems met when combining object programming with relational databases, together with hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
Other requirements are:
 event-driven architecture database
 deductive database
 multi-database
 graph database
 hypertext hypermedia database
 knowledge base
 probabilistic database
 real-time database
 temporal database
Logical data models are:
 object model
 document model
 object-relational database combines the two related structures.
Physical data models are:
 Semantic model
 XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
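As a sketch of the key-value and document-oriented approach with XML interoperability described above, the following shows a minimal in-memory document store. The class and the XML element names (entity, property) are illustrative assumptions, not the appendix schema.

```python
# A minimal sketch of a document-oriented key-value store that can
# export documents as XML for interoperability between implementations.
import xml.etree.ElementTree as ET

class DocumentStore:
    """In-memory key-value store holding document (dict) values."""
    def __init__(self):
        self._docs = {}

    def put(self, key, doc):
        self._docs[key] = doc

    def get(self, key):
        if key not in self._docs:
            # mirror the paper's convention: report the missing element
            raise KeyError(f"document {key!r} not found")
        return self._docs[key]

    def to_xml(self, key):
        """Serialise one document to XML for exchange between systems."""
        root = ET.Element("entity", id=key)
        for name, value in self.get(key).items():
            prop = ET.SubElement(root, "property", name=name)
            prop.text = str(value)
        return ET.tostring(root, encoding="unicode")

store = DocumentStore()
store.put("sensor-1", {"type": "thermometer", "location": "rack-3"})
print(store.to_xml("sensor-1"))
```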
8.20.2 Entities
The requirements for the entity database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.1 (Appendix – Database Scheme – Entities) and an escape sequence for
extending the entity set as in section 8.1 (Appendix – Database Scheme - Entities).
The entity definition set above is created once when the entity is added to the system
and changed and removed infrequently as the entity set is extended. It is queried
frequently for every entity that is read. The entities are updated (inserted, modified,
and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.3 Services
The requirements for the service database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.4 Standards
The requirements for the standard database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.3 (Appendix – Database Scheme – Standards).
We define a set of rules for extending the standard of the system which are performed
in coordination with the extended services and extended technique definition sections
as in section 8.3 (Appendix – Database Scheme – Standards).
The standard definition set above is created once when the language is added to the
system and changed and removed infrequently as the language standard set is
extended. It is queried frequently for every standard unit that is read. The standard
rules are updated (inserted, modified, and deleted) infrequently. The administration
(maintain users, data security, performance, data integrity, concurrency and data
recovery using utilities) of the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.5 Techniques
The requirements for the technique database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags as in section 8.4 (Appendix – Database Scheme – Techniques). We define a set of
rules for extending the techniques of the system which are performed in coordination
with the extended standard and extended technique definition sections as in section
8.4 (Appendix – Database Scheme – Techniques).
The technique definition set above is created once when the technique is added to the
system and changed and removed infrequently as the technique set is extended. It is
queried frequently for every technique rule that is read. The techniques are updated
(inserted, modified, and deleted) infrequently. The administration (maintain users, data
security, performance, data integrity, concurrency and data recovery using utilities) of
the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.6 Communications
In electronics, a dialogue is a communications operation which uses a source, generating data to be transmitted, a transmitter, converting data into transmittable information, a transmission system, carrying information, a receiver, converting received information into data, and a destination taking incoming data. Communication is a two-way process, with the source talking to the destination and the destination returning the conversation to the source.
Key communications tasks consist of transmission system utilization, interfacing,
information generation, synchronization, exchange management, error detection and
correction, addressing and routing, recovery, information formatting, security and
network management.
Protocols are used for communications between entities in a system; the communicating entities must speak the same language. Key elements of a protocol are standard (data information formats),
technique (control information, error handling) and timing (speed matching,
sequencing). Protocol architecture is the task of communication broken up into
modules for processing at different levels of functionality.
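A minimal sketch of this layered modularisation, assuming three illustrative layers (transport, network, link) and made-up control fields; control information is added on the way down and stripped on the way up:

```python
# Layered protocol encapsulation: each layer wraps the payload with its
# own control information, mirroring the description above.
def wrap(layer, payload, **control):
    """Prepend one layer's control information to the payload."""
    return {"layer": layer, "control": control, "payload": payload}

def send(user_data):
    segment = wrap("transport", user_data, seq=1)      # sequencing
    packet = wrap("network", segment, dst="node-7")    # addressing, routing
    frame = wrap("link", packet, crc="0x1d0f")         # error detection
    return frame

def receive(frame):
    packet = frame["payload"]     # link layer checks the CRC, then strips it
    segment = packet["payload"]   # network layer routes, then strips
    return segment["payload"]     # transport layer reorders, then strips

assert receive(send("temperature=21.5")) == "temperature=21.5"
```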
The requirements for the communications database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.5 (Appendix – Database Scheme – Communications). Extensions are defined as in section 8.5 (Appendix – Database Scheme – Communications).
The communications definition set above is created once when the communications is
added to the system and changed and removed infrequently as the communications set
is extended. It is queried frequently for every communications rule that is read. The
communications are updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.7 Antivirus
The requirements for the service database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.8 Firewall
The requirements for the service database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.9 APIDS
The requirements for the service database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.20.10 Ciphers
The requirements for the service database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8.2 (Appendix – Database Scheme – Services). We define a set of rules for
extending the services of the system which are performed in coordination with the
extended standard and extended technique definition sections as in section 8.2
(Appendix – Database Scheme - Services).
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every service that is read. The services are updated (inserted,
modified, and deleted) infrequently. The administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities) of the
database will be done on a regular basis.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
8.21 Summary
8.21.1 Introduction
We have reviewed how some other technologies can contribute to IoT. The review reflected the theories and technologies that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, a fast learner), able to settle to work quickly and to concentrate for long periods. They should rely on memory rather than documentation. If they are forced to use documentation, they should have supple joints and long, light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and confined to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and need a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the dynamic input from a source to give valid results, with the rules reflecting the actions of the system.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes the properties of the algebraic and logic theory and views them in a different light, with the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
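As an illustration of these structural checks, the sketch below verifies that a small entity and service graph is connected, i.e. no part of the system is isolated, using a breadth-first search over plain adjacency sets. The nodes and edges are hypothetical.

```python
# Structural validation of a system graph: build adjacency sets from
# edges and check that every node is reachable (the graph is connected).
from collections import deque

edges = {("entity:sensor", "service:read"),
         ("service:read", "standard:fmt"),
         ("service:read", "technique:parse")}

nodes = {n for edge in edges for n in edge}
adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)                 # treat the links as undirected here

def reachable(start):
    """Breadth-first search: the set of nodes reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

assert reachable("entity:sensor") == nodes   # well structured: no isolated part
```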
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and a partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
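A worked sketch of the error-sink construction: model the processing nodes as an absorbing Markov chain with a completion state and an error sink, then compute the absorption probabilities with the standard (I - Q)^-1 R formula. The transition probabilities are invented for illustration.

```python
# Absorbing Markov chain with an error sink, as described above.
import numpy as np

# states: 0 = parse, 1 = validate, 2 = done (absorbing), 3 = error sink
P = np.array([[0.00, 0.90, 0.00, 0.10],
              [0.00, 0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

Q, R = P[:2, :2], P[:2, 2:]              # transient and absorbing blocks
B = np.linalg.inv(np.eye(2) - Q) @ R     # absorption probabilities

print("P(error | start at parse)    =", B[0, 1])   # 0.10 + 0.90 * 0.05 = 0.145
print("P(error | start at validate) =", B[1, 1])   # 0.05
```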
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied quantitative analysis to estimate the sizes of entities, errors, the system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts found in medical systems to build a data source from the learning process and then use the minimum “distance” to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The structures of IoT imply a network form for both the static and the dynamic views, and we can use the techniques described in the part on network structures. We can back up the probability with the collection of statistics.
The system elements and their measures are:
 Entities – the number of entities in the system
 Services – the number of services in the system
 Standards – the number of standards in the system
 Techniques – the number of techniques in the system
 Communications – the number of communications in the system
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
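As a sketch of this kind of estimate, the following fits a power-law learning curve to invented lookup-time observations and extrapolates it; the data points are illustrative, not collected statistics.

```python
# Curve fitting for a learning-curve estimate: fit T(n) = a * n**b by a
# log-log linear least-squares fit and extrapolate to a later session.
import numpy as np

sessions = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # practice sessions
lookup_ms = np.array([90.0, 62.0, 48.0, 40.0, 36.0])  # observed lookup times

b, log_a = np.polyfit(np.log(sessions), np.log(lookup_ms), deg=1)
a = np.exp(log_a)

print(f"T(n) ~ {a:.1f} * n**{b:.2f}")
print("predicted time at session 7:", round(a * 7 ** b, 1), "ms")
```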
Configuration management identifies item attributes for control, recording and reporting on the baselines for audits at delivery or on completion of changes to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests together with static and dynamic tests, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics about code coverage, code complexity and feature completeness, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating all the processes from source control through to production.
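A minimal sketch of a trigger-driven pipeline in this spirit: run the staged checks in order and only report a commit when every stage passes, otherwise roll back. The stage commands are placeholders, not a real project's build.

```python
# Staged continuous-integration runner: any failing stage aborts the
# pipeline (rollback); success at every stage allows the commit.
import subprocess

STAGES = [
    ("build", ["python", "-c", "print('build ok')"]),
    ("unit tests", ["python", "-c", "print('tests ok')"]),
    ("static checks", ["python", "-c", "print('lint ok')"]),
]

def run_pipeline():
    for name, cmd in STAGES:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"stage {name!r} failed -> rolling back")
            return False
        print(f"stage {name!r} passed: {result.stdout.strip()}")
    print("all stages passed -> committing to the repository")
    return True

run_pipeline()
```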
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data are discrete objects (vector) and continuous fields (raster). It enables entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment around the user's presence and supports interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technologies, such as remote communication, artificial intelligence and spatial data, assist the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules of formalised standard and technique for defining the language. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority on the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and package libraries assist the reuse of previous work.
The requirements for the IoT data set are:
 object oriented type
 event-driven architecture data set
 hypertext hypermedia data set
 probabilistic data set
 real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
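One way the entity record just described might be rendered as a data structure is sketched below; the field types and the escape prefix are assumptions made for illustration.

```python
# A sketch of the entity record, with an escape hook for extending the
# entity set at run time, as described above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str
    entity_type: str
    version: str
    timestamp: float
    iteration: int = 0                        # iteration control
    meaning: str = ""
    hardware_repr: str = ""
    sound_id: Optional[str] = None            # identity for sound
    picture_id: Optional[str] = None          # identity for picture
    position: Optional[tuple] = None          # geographic position
    properties: dict = field(default_factory=dict)   # name -> value
    statistics: dict = field(default_factory=dict)
    children: list = field(default_factory=list)     # nesting

ESCAPE = "\\x"   # hypothetical escape prefix marking an extension request

def lookup(name, entity_set):
    """Return a known entity, or register a new one via the escape hook."""
    if name.startswith(ESCAPE):
        ent = Entity(name=name[len(ESCAPE):], entity_type="extension",
                     version="0.1", timestamp=0.0)
        entity_set[ent.name] = ent
        return ent
    return entity_set[name]
```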
The services data set has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. These definitions also give priorities for how the entities, services, standards, techniques and communications are processed, based on the learning, probability, network analysis and Markov theory for those sections. If an element is not recognised then the input element is queried to see if there is an error or the element should be added to the appropriate data set. An escape sequence can be used to extend the data set in conjunction with the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the communicating entities must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed, based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the entity is not recovered, it is referred to a human to decide whether there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication, applied in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once when the system is set up, and is changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.2 Entities
The operations research technique, search theory, gives us a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, a fast learner), able to settle to work quickly and to concentrate for long periods. They should rely on memory rather than documentation. If they are forced to use documentation, they should have supple joints and long, light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and confined to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and need a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the dynamic input from a source to give valid results, with the rules reflecting the actions of the system.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes the properties of the algebraic and logic theory and views them in a different light, with the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and a partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied quantitative analysis to estimate the sizes of entities, errors, the system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts found in medical systems to build a data source from the learning process and then use the minimum “distance” to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The structures of IoT imply a network form for both the static and the dynamic views, and we can use the techniques described in the part on network structures. We can back up the probability with the collection of statistics.
The system elements and their measures are:
 Entities – the number of entities in the system
 Services – the number of services in the system
 Standards – the number of standards in the system
 Techniques – the number of techniques in the system
 Communications – the number of communications in the system
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting on the baselines for audits at delivery or on completion of changes to validate requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation, runs automated unit and integration (defect or regression) tests together with static and dynamic tests, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics about code coverage, code complexity and feature completeness, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data are discrete objects (vector) and continuous fields (raster). It enables entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment around the user's presence and supports interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technologies, such as remote communication, artificial intelligence and spatial data, assist the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules of formalised standard and technique for defining the language. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority on the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and package libraries assist the reuse of previous work.
The requirements for the IoT data set are:
 object oriented type
 event-driven architecture data set
 hypertext hypermedia data set
 probabilistic data set
 real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), interrupt recovery service and arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. These definitions also give priorities for how the entities, services, standards, techniques and communications are processed, based on the learning, probability, network analysis and Markov theory for those sections. If an element is not recognised then the input element is queried to see if there is an error or the element should be added to the appropriate data set. An escape sequence can be used to extend the data set in conjunction with the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the communicating entities must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed, based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by a parallel check. If the entity is not recovered, it is referred to a human to decide whether there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication, applied in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
 object oriented type
 event-driven architecture database
 hypertext hypermedia database
 probabilistic database
 real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once when the system is set up, and is changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
8.21.3 Services
The operations research technique, search theory, gives us a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, a fast learner), able to settle to work quickly and to concentrate for long periods. They should rely on memory rather than documentation. If they are forced to use documentation, they should have supple joints and long, light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and confined to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and need a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the dynamic input from a source to give valid results, with the rules reflecting the actions of the system.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes the properties of the algebraic and logic theory and views them in a different light, with the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine which nodes have flow through them and which do not. We find the edges that are used and those that are unused. We can determine the flow between the nodes and a partitioning of the structures through single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level (entities, services, standards, techniques, communications), we have applied quantitative analysis to estimate the sizes of entities, errors, the system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system, they are defined by their connections through
entities, techniques, standards and communications so as to generalise, standardise and
specify rules that reflect the network model defined in previous sections. At this stage
of the study we select the network structure, with error analysis, for the additional part
only.
We used the concepts found in medical systems to build a data source from the learning
process, and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure, with error
analysis, for the part only.
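A minimal sketch of the minimum-“distance” selection, assuming hypothetical feature vectors for three system parts:

```python
# A minimal sketch of minimum-distance selection from a feature list,
# as in medical diagnostic systems. The feature vectors are hypothetical.
import numpy as np

features = {
    "sensor":  np.array([1.0, 0.0, 0.2]),
    "gateway": np.array([0.1, 1.0, 0.5]),
    "service": np.array([0.3, 0.4, 1.0]),
}

def select_part(observed: np.ndarray) -> str:
    # Pick the system part whose feature vector is nearest to the input.
    return min(features, key=lambda k: np.linalg.norm(features[k] - observed))

print(select_part(np.array([0.9, 0.1, 0.3])))   # -> "sensor"
```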
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic views, so we
can use the techniques described in the part on network structures. We can back up the
probability estimates with the collection of statistics.
System Elements
System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting supports interpolation and extrapolation of sets of values under different
kinds of constraints. It is particularly good for estimates in learning schemes and for
predicting performance from the statistics collected in the IoT system.
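A minimal curve-fitting sketch, assuming a hypothetical set of collected learning statistics and a least-squares fit used for extrapolation:

```python
# A minimal curve-fitting sketch: extrapolating a collected performance
# statistic with a least-squares fit. The data points are hypothetical.
import numpy as np

hours = np.array([1, 2, 4, 8, 16], dtype=float)      # training time
errors = np.array([30, 22, 15, 9, 6], dtype=float)   # observed error counts

# Fit errors ~ a*log(t) + b, then predict beyond the observed range.
coeffs = np.polyfit(np.log(hours), errors, deg=1)
predict = np.poly1d(coeffs)
print("predicted errors after 32 hours:", predict(np.log(32.0)))
```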
Configuration management identifies item attributes for control, recording and
reporting against the baselines, so that audits at delivery, or on completion of changes,
can validate the requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate the
stages of the update process. It builds the whole generated system and its
documentation, runs automated unit and integration (defect or regression) tests
together with static and dynamic tests, and measures and profiles performance to
ensure that the environment is valid. The trigger points are before and after an update,
and at release to the production system, when triggers force commits to the repository,
or a rollback, to avoid corruption of the system. Reports are collected on metrics such
as code coverage, code complexity and features completed, concentrating on
functionality, code quality and team momentum.
In continuous delivery, the development and deployment steps are made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data of two kinds. The first is pure data values
which are not affected by position, e.g. the general description of a hardware type. The
other is dependent on position, e.g. a hardware unit in the network. The data comprises
discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualisation, understanding and
intelligence when combined with other technologies, processes and methods.
Virtual reality simulates an environment of the user's presence and surroundings, with
interaction through sight, touch, hearing and smell. Input is made through standard
computer input, sight tracking or tactile information. Remote communication, artificial
intelligence and spatial data assist the technology. In IoT we use the technology to
control all hardware and routing entities and to perform remedial action when this is
required.
Programming language theory and media technologies give us the rules for a formalised
standard and technique for defining the language. We use the network model described
above as a basis for the collection of data about the system. We discover that we need
to set a priority on the rules for evaluating units and processes. Object-oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator, and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilation, macros and package libraries assist the reuse of previous work.
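A minimal sketch of these object-oriented concepts (scope through "this"/self, specification by subclassing, and overloading of meaning by context); the classes and fields are hypothetical:

```python
# A minimal sketch of the object-oriented concepts named above.
# The classes, fields and values are hypothetical.
class Entity:
    def __init__(self, name, properties=None):
        self.name = name                     # "this"/self gives the scope
        self.properties = properties or {}   # properties as name -> value

    def describe(self):
        return f"entity {self.name}"

class Sensor(Entity):                        # specification of Entity
    def describe(self):                      # overloaded: meaning changes by context
        return f"sensor {self.name} ({self.properties.get('unit', '?')})"

print(Sensor("t1", {"unit": "°C"}).describe())
```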
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
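As an illustration only, the entity record could be sketched as follows; the field types are assumptions, since the paper names the fields but not their representations.

```python
# A minimal sketch of the entity record described above, as a dataclass.
# Field types are assumptions; the source names the fields only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EntityRecord:
    name: str
    iteration_control: str
    entity_type: str
    sound_id: str                 # identity for sound
    picture_id: str               # identity for picture
    hardware_representation: str
    meaning: str
    version: int
    timestamp: datetime
    geographic_position: tuple    # (latitude, longitude)
    properties: dict = field(default_factory=dict)   # name -> value
    statistics: dict = field(default_factory=dict)
    children: list = field(default_factory=list)     # nesting
```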
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service
with its arguments, a priority value relative to other services, and nesting. We define a
set of rules for extending the services of the system, which are performed in
coordination with the extended standard and extended technique definition sections.
The standards data set has a name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standards of the system, which are performed in coordination with the extended
services and extended technique definition sections.
The techniques data set contains an iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and a priority relative to
other techniques. We define a set of rules for extending the techniques of the system,
which are performed in coordination with the extended standard and extended services
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seems too complicated to analyse at present. The communications
data set defines a name (string, sound, picture), hardware representation, version,
timestamp, statistics, entities, services, techniques and standards. Extensions are
defined from a similar set of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output)
language. It also gives priorities for how the entities, services, standards, techniques
and communications are processed, based on the learning, probability, network analysis
and Markov theory of the earlier sections. If an element is not recognised, the input
element is queried to see whether there is an error or the element should be added to
the appropriate data set. An escape sequence can be used to extend the data set in
conjunction with the other entities, services, standards, techniques and communications.
A communications model consists of a source generating the data to be transmitted, a
transmitter converting the data into transmittable signals, a transmission system
carrying the data, a receiver converting the received signal back into data, and a
destination taking the incoming data. Key communications tasks are transmission
system utilisation, interfacing, signal generation, synchronisation, exchange
management, error detection and correction, addressing and routing, recovery, message
formatting, security and network management; these are classified as services.
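A minimal sketch of this source-to-destination model, with a hypothetical text encoding standing in for signal generation:

```python
# A minimal sketch of the source -> transmitter -> transmission system
# -> receiver -> destination model. The encoding is hypothetical.
def transmitter(data: str) -> bytes:
    return data.encode("utf-8")        # data -> transmittable signal

def transmission_system(signal: bytes) -> bytes:
    return signal                      # carries the signal unchanged

def receiver(signal: bytes) -> str:
    return signal.decode("utf-8")      # signal -> data

def destination(data: str) -> None:
    print("received:", data)

# The source generates the data at the head of the pipeline.
destination(receiver(transmission_system(transmitter("sensor reading 21.5"))))
```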
Protocols are techniques used for communications between entities in a system, which
must speak the same language throughout. Entities are user applications, items of
hardware, or the messages passing between source and destination. Systems are made
up of computers, terminals or remote sensors. The key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). Protocols become standards as they are
formalised.
In a protocol architecture, the task of communication is broken up into modules, which
are entities when they are stored as files and become services when they are executed.
At each layer, protocols are used to communicate, and control information is added to
the user data.
Each element gives priorities for how the entities are processed, based on the learning,
probability, network analysis and Markov theory of the entities sections. If an entity is
not recognised, it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the entity is not recovered, a human is queried to
see whether there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication, which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended. It
is queried frequently, for every element that is read. The definition set is updated
(inserted, modified and deleted) infrequently. The administration of the database
(maintaining users, data security, performance, data integrity, concurrency and data
recovery using utilities) will be done on a regular basis.
8.21.4 Standards
The operations research technique of search theory gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation come up to those requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and to
concentrate for long periods. They should use their memory rather than the
documentation. If forced to use documentation, they should have supple joints and
long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement, and concentrated
in the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised and concise;
it should have minimum ambiguity, minimum error cases and partitioning facilities. The
facilities of the system should be modifiable to the experience of the users.
Reference documentation should have stiff spines and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and have a minimum number of reference strategies.
If no target is found, the error is reported and, after review, the target is added to the
system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements, which we again classify as entities, services, standards,
techniques and communications. We iterate on the combination so that more elements
can be validated against techniques (using recursion) and standards. We have rules
that say what is correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities,
services and combinations of them. Relations are derived from a further set of
operations which give links such as generalisation (standards, techniques) and
specification, based on the properties of the entities through services.
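A minimal sketch of the rules of combination, applied iteratively; the rule table is hypothetical, and erroneous combinations are reported:

```python
# A minimal sketch of rules of combination applied iteratively.
# The rule table below is hypothetical.
RULES = {
    ("entity", "service"): "service",        # an entity used by a service
    ("service", "standard"): "standard",     # a service validated by a standard
    ("standard", "technique"): "technique",  # a standard applied as a technique
}

def combine(left: str, right: str) -> str:
    # Combine two classified elements; report an erroneous combination.
    try:
        return RULES[(left, right)]
    except KeyError:
        raise ValueError(f"erroneous combination: {left} + {right}")

# Iterating the combination classifies a larger element step by step.
print(combine(combine("entity", "service"), "standard"))   # -> "standard"
```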
The remaining analysis (the static and dynamic definition sets, network analysis, Markov
processes, software theory, learning theory, probability and statistics, curve fitting,
configuration management, continuous integration and delivery, geographical
information systems, virtual reality, programming language and compiler technology,
the data set and communications model definitions, and the database requirements) is
identical to that set out above, applied here with standards as the subject.
8.21.5 Techniques
The analysis for techniques is identical to that set out in section 8.21.4, from search
theory through to the database requirements, applied here with techniques as the
subject.
8.21.6 Communications
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically place and have minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services processes the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and there connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes uses the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learner) who is able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basics concepts of entities, services, standards, technique and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then uses the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Elements Number of System Elements
Entities Number of Entities in the System
Services Number of Services in the System
Standards Number of Standards in the System
Techniques Number of Techniques in the System
Communications Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standard, the probability of correctness is improved by the use of formal
standard rules.
● For technique, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds all generated system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, measure and profile performance to ensure that their environment is valid. The
trigger points are before and after update and at release to the production system
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
features complete concentrating on functional, quality code, and team momentum.
In continuous delivery, the development / deployment activity is smaller by automating
all the processes for source control through to production.
Geographical information systems hold data that fall into 2 forms. The first is pure data
values which are not effected by position eg the general description of a hardware
type. The other is dependent on position eg hardware unit in the network. The data is
discrete objects (raster) and continuous fields (vector). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology is remote
communication, artificial intellegence and spacial data to assist the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is requiredProgramming language theory and media
technologies gives us the rules for formalised standard and technique for the defining
the language. We use the network model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
 object oriented type
 event-driven architecture data set
 hypertext hypermedia data set
 probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
both source (input) language, intermediate language and target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities,, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
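A minimal sketch in Python of the five-stage model (the encoding chosen is an assumption for illustration) might be:

    # A sketch of the five-stage communications model described above.
    def source() -> str:
        return "sensor reading: 21.5C"        # data to be transmitted

    def transmitter(data: str) -> bytes:
        return data.encode("utf-8")           # convert data into transmittable signals

    def transmission_system(signal: bytes) -> bytes:
        return signal                         # carries the signal (ideal, lossless channel)

    def receiver(signal: bytes) -> str:
        return signal.decode("utf-8")         # convert received signal back into data

    def destination(data: str) -> None:
        print("received:", data)              # takes the incoming data

    destination(receiver(transmission_system(transmitter(source()))))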
Protocols are techniques used for communications between entities in a system, which
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). Protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication into modules, which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate, and control information is added to the
user data.
Each element gives priorities for how the entities are processed, based on the learning,
probability, network analysis and Markov theory sections for entities. If an entity is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the entity is not recovered, a human is queried to
see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once; it is added to, changed and removed infrequently as the system is extended. It is
queried frequently, for every element that is read. The definition set is updated
(inserted, modified and deleted) infrequently. The administration of the database
(maintaining users, data security, performance, data integrity, concurrency and data
recovery using utilities) will be done on a regular basis.
8.21.7 Antivirus
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, a fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints
and long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and confined to the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have a stiff spine and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system by standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
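Two of the validation cases above can be checked mechanically by reachability over the nodes and edges. A minimal sketch in Python (the graph and its node names are invented for illustration) might be:

    # A sketch of two validation cases on a graph of language entities
    # (nodes) and their connections (edges), using plain reachability.
    from collections import deque

    edges = {"start": ["entity"], "entity": ["service"], "service": ["end"], "end": []}

    def reachable(graph, root):
        seen, queue = {root}, deque([root])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Completeness: every node can be reached from the start node.
    assert reachable(edges, "start") == set(edges)
    # A way of completing its processes: the end node is reachable.
    assert "end" in reachable(edges, "start")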
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those that are unused. We can determine the flow between the nodes and the
partitioning of the structures into single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability
of error at different parts of the network system and the size of the error at each point of
the Markov process, and the error node gives an estimate of the total error rate of the
network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
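A minimal sketch in Python of the error sink idea (all transition probabilities are invented) might be:

    # Each state leaks some probability to an absorbing "error" node, and
    # repeated multiplication of the transition matrix estimates the total
    # error rate of the network.
    states = ["entity", "service", "done", "error"]
    P = [
        [0.0, 0.9, 0.0, 0.1],   # entity -> service, or error
        [0.0, 0.0, 0.8, 0.2],   # service -> done, or error
        [0.0, 0.0, 1.0, 0.0],   # done is absorbing
        [0.0, 0.0, 0.0, 1.0],   # error sink is absorbing
    ]

    dist = [1.0, 0.0, 0.0, 0.0]            # start in "entity"
    for _ in range(50):                    # iterate the chain to (near) absorption
        dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

    print("total error rate:", round(dist[states.index("error")], 3))   # 0.28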
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the changes made to people over a period
of training and experience with the system, using the network analysis structure for the
system. It has given us estimates for the improvement in learning the language and for
the attributes of the learner. We have found that the learner should be experienced,
particularly in the specialised field of the system. They should be good students
(accurate, efficient, good memory, careful, precise, fast learners) who are able to settle
to work quickly and continue to concentrate for long periods. They should have aptitude
and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities and services, then for combinations of them
through communications, standards and techniques. They develop rules to give them
generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic views, and
we can use the techniques described in the part on network structures. We can back up
the probability estimates with the collection of statistics.
System Elements
System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
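A minimal sketch in Python of such a prediction (the observed times are invented, and a power-law learning curve is an assumption) might be:

    # Fit a curve to observed task times across sessions and extrapolate.
    import numpy as np

    sessions = np.array([1, 2, 3, 4, 5])
    seconds = np.array([60.0, 42.0, 33.0, 28.0, 25.0])   # time per task, improving

    # Learning curves are roughly linear in log-log space: t = a * n**b.
    b, log_a = np.polyfit(np.log(sessions), np.log(seconds), 1)
    predict = lambda n: np.exp(log_a) * n ** b

    print("predicted time at session 10:", round(predict(10), 1), "seconds")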
Configuration management identifies item attributes for control, recording and
reporting against baselines, with audits at delivery or on completion of changes to
validate requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete system and documentation and runs
automated unit and integration (defect and regression) tests together with static and
dynamic tests, and measures and profiles performance to ensure that the environment
is valid. The trigger points are before and after an update and at release to the
production system, when triggers force commits to the repository or a rollback to avoid
corruption of the system. Reports are collected on metrics such as code coverage, code
complexity and feature completeness, concentrating on functionality, code quality and
team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). This enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment and the user's presence in and interaction with
it through sight, touch, hearing and smell. Input is made through standard computer
input, sight tracking or tactile information. Remote communication, artificial intelligence
and spatial data assist the technology. In IoT we use the technology to control all
hardware and routing entities and to perform remedial action when this is required.
Programming language theory and media technologies give us the rules for the
formalised standards and techniques for defining the language. We use the network
model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows meaning to change according to context. Repeated actions use
iteration over different cases. Conditional compilation, macros and package libraries
assist the reuse of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service
with arguments, a priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and a priority relative to
other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. It defines a name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output)
language. These definitions also give priorities for how the entities, services, standards,
techniques and communications are processed, based on the learning, probability,
network analysis and Markov theory sections. If an element is not recognised then the
input element is queried to see if there is an error or whether the element should be
added to the appropriate data set. An escape sequence can be used to extend the data
set in conjunction with the other entities, services, standards, techniques and
communications.
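A minimal sketch in Python of this recognition step (the data-set contents and escape prefix are invented) might be:

    # Each input element is looked up in its data set; unknown elements are
    # queried, and an escape prefix marks a deliberate extension.
    data_sets = {
        "entities": {"sensor", "gateway"},
        "services": {"read", "report"},
    }
    ESCAPE = "\\new:"

    def recognise(kind: str, element: str) -> str:
        if element.startswith(ESCAPE):            # escape sequence: extend the set
            data_sets[kind].add(element[len(ESCAPE):])
            return "added"
        if element in data_sets[kind]:
            return "ok"
        return "query"                            # error, or candidate for addition

    print(recognise("entities", "sensor"))           # ok
    print(recognise("services", "calibrate"))        # query
    print(recognise("services", "\\new:calibrate"))  # added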
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system, which
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). Protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication into modules, which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate, and control information is added to the
user data.
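A minimal sketch in Python of this wrapping (the header format is invented) might be:

    # Each layer wraps the user data with its own control information on the
    # way down the stack and strips it again on the way up.
    def wrap(layer: str, payload: str) -> str:
        return f"[{layer}]{payload}[/{layer}]"      # add control information

    def unwrap(layer: str, frame: str) -> str:
        return frame[len(layer) + 2 : -(len(layer) + 3)]   # strip it again

    frame = "temperature=21.5"
    for layer in ["application", "transport", "network"]:   # down the stack
        frame = wrap(layer, frame)
    print(frame)
    for layer in ["network", "transport", "application"]:   # up the stack
        frame = unwrap(layer, frame)
    print(frame)   # temperature=21.5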
Each element gives priorities for how the entities are processed, based on the learning,
probability, network analysis and Markov theory sections for entities. If an entity is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the entity is not recovered, a human is queried to
see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once; it is added to, changed and removed infrequently as the system is extended. It is
queried frequently, for every element that is read. The definition set is updated
(inserted, modified and deleted) infrequently. The administration of the database
(maintaining users, data security, performance, data integrity, concurrency and data
recovery using utilities) will be done on a regular basis.
8.21.8 Firewall
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, a fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints
and long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and confined to the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have a stiff spine and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
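A minimal sketch in Python of these rules of combination (the set of allowed pairs is invented) might be:

    # Basic elements combine into larger ones, and a recursive rule says
    # which combinations are correct.
    BASIC = {"entity", "service", "standard", "technique", "communication"}
    ALLOWED = {("entity", "service"), ("service", "standard"),
               ("technique", "standard")}

    def kind_of(element) -> str:
        # Combined elements act as entities for further combination.
        return element if isinstance(element, str) else "entity"

    def valid(element) -> bool:
        if isinstance(element, str):                  # a basic element
            return element in BASIC
        left, right = element                         # a combination (pair)
        kinds = (kind_of(left), kind_of(right))
        return valid(left) and valid(right) and kinds in ALLOWED

    print(valid(("entity", "service")))               # True
    print(valid((("entity", "service"), "service")))  # True: iteration on the combination
    print(valid(("standard", "entity")))              # False: not an allowed rule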
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system by standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those that are unused. We can determine the flow between the nodes and the
partitioning of the structures into single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability
of error at different parts of the network system and the size of the error at each point of
the Markov process, and the error node gives an estimate of the total error rate of the
network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the changes made to people over a period
of training and experience with the system, using the network analysis structure for the
system. It has given us estimates for the improvement in learning the language and for
the attributes of the learner. We have found that the learner should be experienced,
particularly in the specialised field of the system. They should be good students
(accurate, efficient, good memory, careful, precise, fast learners) who are able to settle
to work quickly and continue to concentrate for long periods. They should have aptitude
and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities and services, then for combinations of them
through communications, standards and techniques. They develop rules to give them
generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic views, and
we can use the techniques described in the part on network structures. We can back up
the probability estimates with the collection of statistics.
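A minimal sketch in Python of backing the estimates with collected statistics (the event log is invented) might be:

    # Count how often each system element is used and turn the counts into
    # empirical usage probabilities.
    from collections import Counter

    log = ["entity", "service", "entity", "communication",
           "service", "entity", "standard"]

    counts = Counter(log)
    total = sum(counts.values())
    usage = {element: count / total for element, count in counts.items()}

    for element, p in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{element:15s} {p:.2f}")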
System Elements
System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and
reporting against baselines, with audits at delivery or on completion of changes to
validate requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete system and documentation and runs
automated unit and integration (defect and regression) tests together with static and
dynamic tests, and measures and profiles performance to ensure that the environment
is valid. The trigger points are before and after an update and at release to the
production system, when triggers force commits to the repository or a rollback to avoid
corruption of the system. Reports are collected on metrics such as code coverage, code
complexity and feature completeness, concentrating on functionality, code quality and
team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). This enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
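A minimal sketch in Python of the two forms (all values are invented) might be:

    # A position-independent description, a position-dependent vector object
    # (a discrete hardware unit), and a raster grid sampling a continuous field.
    hardware_type = {"name": "gateway-x", "power": "5W"}       # not affected by position

    unit = {"type": "gateway-x", "lat": 51.50, "lon": -0.12}   # vector: discrete object

    # Raster: a continuous field (e.g. signal strength) sampled on a grid.
    raster = [[0.2, 0.4, 0.3],
              [0.5, 0.9, 0.6],
              [0.3, 0.5, 0.4]]

    def strongest_cell(grid):
        return max(((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
                   key=lambda rc: grid[rc[0]][rc[1]])

    print("strongest signal at grid cell:", strongest_cell(raster))   # (1, 1)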
Virtual reality simulates an environment and the user's presence in and interaction with
it through sight, touch, hearing and smell. Input is made through standard computer
input, sight tracking or tactile information. Remote communication, artificial intelligence
and spatial data assist the technology. In IoT we use the technology to control all
hardware and routing entities and to perform remedial action when this is required.
Programming language theory and media technologies give us the rules for the
formalised standards and techniques for defining the language. We use the network
model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows meaning to change according to context. Repeated actions use
iteration over different cases. Conditional compilation, macros and package libraries
assist the reuse of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service
with arguments, a priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and a priority relative to
other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. It defines a name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output)
language. These definitions also give priorities for how the entities, services, standards,
techniques and communications are processed, based on the learning, probability,
network analysis and Markov theory sections. If an element is not recognised then the
input element is queried to see if there is an error or whether the element should be
added to the appropriate data set. An escape sequence can be used to extend the data
set in conjunction with the other entities, services, standards, techniques and
communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system, which
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). Protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication into modules, which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate, and control information is added to the
user data.
Each element gives priorities for how the entities are processed, based on the learning,
probability, network analysis and Markov theory sections for entities. If an entity is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the entity is not recovered, a human is queried to
see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once; it is added to, changed and removed infrequently as the system is extended. It is
queried frequently, for every element that is read. The definition set is updated
(inserted, modified and deleted) infrequently. The administration of the database
(maintaining users, data security, performance, data integrity, concurrency and data
recovery using utilities) will be done on a regular basis.
8.21.9 APIDS
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, a fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints
and long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and confined to the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have a stiff spine and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system by standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those that are unused. We can determine the flow between the nodes and the
partitioning of the structures into single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability
of error at different parts of the network system and the size of the error at each point of
the Markov process, and the error node gives an estimate of the total error rate of the
network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the changes made to people over a period
of training and experience with the system, using the network analysis structure for the
system. It has given us estimates for the improvement in learning the language and for
the attributes of the learner. We have found that the learner should be experienced,
particularly in the specialised field of the system. They should be good students
(accurate, efficient, good memory, careful, precise, fast learners) who are able to settle
to work quickly and continue to concentrate for long periods. They should have aptitude
and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities and services, then for combinations of them
through communications, standards and techniques. They develop rules to give them
generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
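A minimal sketch in Python of the minimum-distance selection (the feature vectors are invented) might be:

    # Each system part has a feature vector, and an observation is matched
    # to the nearest part.
    import math

    parts = {
        "sensor":  [1.0, 0.2, 0.0],
        "gateway": [0.3, 0.9, 0.5],
        "server":  [0.0, 0.4, 1.0],
    }

    def nearest(features):
        return min(parts, key=lambda name: math.dist(parts[name], features))

    print(nearest([0.9, 0.3, 0.1]))   # sensor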
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and the dynamic views, and
we can use the techniques described in the part on network structures. We can back up
the probability estimates with the collection of statistics.
System Elements
System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and
reporting against baselines, with audits at delivery or on completion of changes to
validate requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete system and documentation and runs
automated unit and integration (defect and regression) tests together with static and
dynamic tests, and measures and profiles performance to ensure that the environment
is valid. The trigger points are before and after an update and at release to the
production system, when triggers force commits to the repository or a rollback to avoid
corruption of the system. Reports are collected on metrics such as code coverage, code
complexity and feature completeness, concentrating on functionality, code quality and
team momentum.
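A minimal sketch in Python of such a flow (the build and test bodies are stand-ins, not a real CI tool's interface) might be:

    # A trigger runs the build and the automated tests, and the result
    # forces a commit or a rollback.
    def build() -> bool:
        return True                      # build system and documentation

    def run_tests() -> bool:
        return all([True, True])         # unit and integration (regression) tests

    def on_trigger() -> str:
        if build() and run_tests():
            return "commit"              # safe to commit to the repository
        return "rollback"                # avoid corrupting the system

    print(on_trigger())                  # commit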
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). This enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment and the user's presence in and interaction with
it through sight, touch, hearing and smell. Input is made through standard computer
input, sight tracking or tactile information. Remote communication, artificial intelligence
and spatial data assist the technology. In IoT we use the technology to control all
hardware and routing entities and to perform remedial action when this is required.
Programming language theory and media technologies give us the rules for the
formalised standards and techniques for defining the language. We use the network
model described above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows meaning to change according to context. Repeated actions use
iteration over different cases. Conditional compilation, macros and package libraries
assist the reuse of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), an interrupt recovery service
with arguments, a priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and a priority relative to
other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques, which seem too complicated to analyse at present. It defines a name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output)
language. These definitions also give priorities for how the entities, services, standards,
techniques and communications are processed, based on the learning, probability,
network analysis and Markov theory sections. If an element is not recognised then the
input element is queried to see if there is an error or whether the element should be
added to the appropriate data set. An escape sequence can be used to extend the data
set in conjunction with the other entities, services, standards, techniques and
communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system, which
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). Protocols become standards as they are
formalised.
Protocol architecture breaks the task of communication into modules, which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate, and control information is added to the
user data.
Each element gives priorities for how the entities are processed, based on the learning,
probability, network analysis and Markov theory sections for entities. If an entity is not
recognised then it is passed to a recovery process based on repeated analysis of the
situation by some parallel check. If the entity is not recovered, a human is queried to
see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
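As an illustration, a minimal sketch in Python of holding such a structure as XML tags (the tag names are hypothetical; the real scheme is in the appendix) might be:

    # The object oriented logical structure held as nested XML tags.
    import xml.etree.ElementTree as ET

    system = ET.Element("system")
    entity = ET.SubElement(system, "entity", name="sensor-1", version="1")
    prop = ET.SubElement(entity, "property", name="units")
    prop.text = "celsius"

    print(ET.tostring(system, encoding="unicode"))
    # <system><entity name="sensor-1" version="1">
    #   <property name="units">celsius</property></entity></system>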
The system definition set out in section 8 (Appendix – Database Scheme) is created
once; it is added to, changed and removed infrequently as the system is extended. It is
queried frequently, for every element that is read. The definition set is updated
(inserted, modified and deleted) infrequently. The administration of the database
(maintaining users, data security, performance, data integrity, concurrency and data
recovery using utilities) will be done on a regular basis.
8.21.10 Ciphers
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, a fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints
and long, light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and confined to the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have a stiff spine and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system by standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those that are unused. We can determine the flow between the nodes and the
partitioning of the structures into single-entry or single-exit blocks of nodes.
By introducing an error sink node we can use the extra edges to discover the probability
of error at different parts of the network system and the size of the error at each point of
the Markov process, and the error node gives an estimate of the total error rate of the
network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the changes made to people over a period
of training and experience with the system, using the network analysis structure for the
system. It has given us estimates for the improvement in learning the language and for
the attributes of the learner. We have found that the learner should be experienced,
particularly in the specialised field of the system. They should be good students
(accurate, efficient, good memory, careful, precise, fast learners) who are able to settle
to work quickly and continue to concentrate for long periods. They should have aptitude
and fast recall.
We looked at child learning and the way children develop their use of a system. They start with a set of basic concepts of entities, services, standards, techniques and communications and develop an understanding of the system from that position. They start applying rules for basic entities and services, then for combinations of them through communications, standards and techniques. They develop rules to give them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts from medical systems to build a data source from the learning process and then use the minimum “distance” to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis for the part only.
Probability has been used to estimate the usage of the parts of the system. The structures of IoT imply a network form for both the static and the dynamic views, and we can use the techniques described in the part on network structures. We can back up the probability estimates with the collection of statistics.
System Elements
Element Type       Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques and standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values under different kinds of constraints. It is particularly good for estimates in learning schemes and for predicting performance based on the statistics collected in the IoT system.
Configuration management identifies item attributes for control, recording and reporting on the baselines, and for audits at delivery or at the completion of changes to validate requirements. It requires versions or timestamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds the complete system and documentation, runs automated unit and integration (defect or regression) tests together with static and dynamic tests, and measures and profiles performance to ensure that the environment is valid. The trigger points are before and after an update and at release to the production system, when triggers force commits to the repository or a rollback to avoid corruption of the system. Reports are collected on metrics such as code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating all the processes from source control through to production.
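As a minimal sketch of the commit-or-rollback gate just described: build, run the automated tests, and either commit the update or roll it back. The make targets and git commands here are illustrative assumptions, not the tooling of this paper.

# Build and test stages guard the repository: a failing stage rolls back.
import subprocess

def run(cmd):
    # Run one pipeline stage; True when the command exits cleanly.
    return subprocess.run(cmd, shell=True).returncode == 0

def integrate():
    stages = [
        "make build",        # compile binaries and generate documentation
        "make unit-test",    # automated unit tests
        "make integration",  # defect / regression tests
    ]
    for stage in stages:
        if not run(stage):
            run("git reset --hard HEAD")             # rollback, avoid corruption
            return False
    return run("git commit -am 'validated update'")  # commit after all pass

if __name__ == "__main__":
    print("update accepted" if integrate() else "update rolled back")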
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data is held as discrete objects (vector) and continuous fields (raster). It enables entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes and methods.
Virtual reality simulates an environment of the user's presence, environment and interaction of sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standards and techniques for defining the language. We use the network model described above to give a basis for the collection of data about the system. We discover that we need to set a priority on the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and package libraries assist the reuse of previous work.
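A small Python sketch of these object oriented ideas, using illustrative IoT names (Entity, Sensor) that are assumptions of the example rather than definitions from this paper:

# Generalisation, specification, scope, "this" and overloading in miniature.
class Entity:
    """Generalisation: a base class for anything in the system."""
    def __init__(self, name, **properties):
        self.name = name                 # "self" plays the role of "this"
        self.properties = properties     # properties as name/value pairs

    def describe(self):
        return f"{self.name}: {self.properties}"

class Sensor(Entity):
    """Specification: a specialised entity with extra meaning."""
    def describe(self):
        # Overloading by overriding: meaning changes according to context.
        return "sensor " + super().describe()

for e in [Entity("gateway", vendor="acme"), Sensor("thermo", unit="C")]:
    print(e.describe())                  # replicating an action by iteration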
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
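One possible reading of this entity set as a Python dataclass; the field types and defaults are assumptions of the sketch, chosen to mirror the list above:

# Entity record: name, iteration control, identity, representation, meaning,
# versioning, position, properties, statistics, nesting and an escape hook.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EntityRecord:
    name: str
    iteration_control: Optional[str] = None
    entity_type: str = ""
    sound_id: Optional[str] = None                  # identity for sound
    picture_id: Optional[str] = None                # identity for picture
    hardware_representation: Optional[str] = None
    meaning: str = ""
    version: int = 1
    timestamp: float = 0.0
    geographic_position: Optional[tuple] = None     # e.g. (lat, lon)
    properties: dict = field(default_factory=dict)  # name/value pairs
    statistics: dict = field(default_factory=dict)
    children: list = field(default_factory=list)    # nesting
    escape_sequence: Optional[str] = None           # extends the entity set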
The services data set has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), an interrupt recovery service with its arguments, a priority value relative to other services, and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
The standards data set has a name, hardware representation, rules, version, timestamp, statistics, entities, services and techniques. We define a set of rules for extending the standards of the system which are performed in coordination with the extended services and extended technique definition sections.
The techniques data set contains an iteration control, name as string, sound and picture, hardware representation, meaning, version, timestamp, properties (name and value), statistics, nesting, events (name, value and interrupt service), and a priority relative to other techniques. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended standard and extended technique definition sections.
Communications consist of a dialogue between a source and a destination over a transmission medium. We use protocols (rules) to govern the process. The communications processes are based on a mixture of entities, services, standards and techniques which seem too complicated to analyse at present. The data set defines a name (string, sound, picture), hardware representation, version, timestamp, statistics, entities, services, techniques and standards. Extensions are defined from a similar set of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. It also gives priorities for how the entities, services, standards, techniques and communications are processed, based on the learning, probability, network analysis and Markov theory sections. If an element is not recognised then the input element is queried to see if there is an error or whether the element should be added to the appropriate data set. An escape sequence can be used to extend the data set in conjunction with the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system, and the entities must speak the same language throughout. Entities consist of user applications, items of hardware or the messages passing between source and destination. Systems are made up of computers, terminals or remote sensors. Key elements of a protocol are standards (data formats, signal levels), techniques (control information, error handling) and timing (speed matching, sequencing). The protocols become standards as they are formalised.
Protocol architecture breaks the task of communication up into modules, which are entities when they are stored as files and become services as they are executed. At each layer, protocols are used to communicate, and control information is added to the user data.
Each element gives priorities for how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is not recovered, it is referred to a human to see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are performed in coordination with the extensions of entities, services, techniques and standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once; it is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
8.22 Implementation
8.22.1 General Commentary
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.2 Entity
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.3 Services
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.4 Standards
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.5 Techniques
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.6 Communications
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.7 Antivirus
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.8 Firewall
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.9 APIDS
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
8.22.10 Ciphers
The implementation stage of the language studies reflects programming language theory, learning theory and statistics theory.
The language definition set above is created once when the language is added to the system, and is changed and removed infrequently as the language technique set is extended. It is queried frequently for every technique rule that is read. The language definition set is updated (inserted, modified and deleted) infrequently. The administration of the database (maintaining users, data security, performance, data integrity, concurrency and data recovery using utilities) will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags:
Escape sequences are defined as follows:
The logical database structure must follow the object oriented type with the XML tags:
9 IoT Security Processing
9.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories and technologies that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and execution for entities, services, standards, techniques, communications, antivirus, firewall, APIDS and ciphers.
9.2 General Review
9.2.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories and technologies that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and execution.
9.2.2 Theoretical Studies
9.2.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory, network theory, communications theory, Markov theory, probability theory and programming language theory.
9.2.2.2 Search Theory
We have studied a theory for systems based on the operations research technique known as the theory of search. We have found that the user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. They should use their memory rather than documentation. If forced to use documentation, they should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.2.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the following relations for programs and languages, with deviations due to impurities in programs. If $n_1$ is the number of distinct operators, $n_2$ the number of distinct operands, $N_1$ the total number of occurrences of operators and $N_2$ the total number of occurrences of operands, then the estimated occurrence counts are
$$\hat{N}_1 = n_1 \log_2 n_1, \qquad \hat{N}_2 = n_2 \log_2 n_2.$$
The program vocabulary is $n = n_1 + n_2$ and the program length is $N = N_1 + N_2$, with the estimated length
$$N^* = n_1 \log_2 n_1 + n_2 \log_2 n_2.$$
If $V$ is the actual program volume and $V^*$ the theoretical program volume, then
$$V = N \log_2 n, \qquad V^* = N^* \log_2 n^*,$$
where $n^*$ is the potential vocabulary. The program level is $L = V^*/V$, the programming language level is $\lambda = L V^*$ and, with the Stroud number $S$, the number of mental discriminations is $m = V/L$ and the development time is $d = m/S$.
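A short calculator for the first of these relations (vocabulary, length, estimated length and volume); the sample counts are illustrative:

# Halstead measures from operator/operand counts.
import math

def halstead(n1, n2, N1, N2):
    # n1/n2: distinct operators/operands; N1/N2: total occurrences.
    n = n1 + n2                                       # program vocabulary
    N = N1 + N2                                       # program length
    N_star = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length
    V = N * math.log2(n)                              # actual program volume
    return n, N, round(N_star, 1), round(V, 1)

# Illustrative counts: 10 distinct operators, 15 distinct operands,
# 60 operator occurrences and 40 operand occurrences.
print(halstead(10, 15, 60, 40))   # (25, 100, 91.8, 464.4)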
Mohanty showed that the error rate $E$ for a program is given by
$$E = \frac{n_1 \log_2 n}{1000\, n_2}.$$
The mean free path theorem derives the relations
$$P(m, C) = \frac{C^m e^{-C}}{m!} = \text{probability of hitting the target } m \text{ times for a coverage ratio } C,$$
$$C = \frac{n a s t}{z} = \text{coverage ratio, the ratio between the area covered by the search process and the search area,}$$
where
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts, where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts, where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically, we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely, the M relation gives us the expected number of errors, or the number of statements, when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and the actual values for the range, the speed and the time of a search. In each case we see that the effort needed to be expended on the search is proportional to the search area and decreases with the elimination probability raised to the search number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst the s, t and a relations reflect the relations between S, T and A described earlier; m shows the normalised result for M, and n is rather too complicated to envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Variable        Value      Value
m               0          6.4 (approx.)
mP(m,m)         0          1.0
mP(m,m) = 0 when m = 0 or −0.5665. The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times; $m^{m+1} e^{-m}/m!$ is an increasing function for m increasing above zero, corresponding to a measure of complexity, and reaches a value of 1 for m = 6.4 approximately (the "lucky seven").
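A quick numerical check of this claim, evaluating mP(m,m) = m^(m+1) e^(-m)/m! with the gamma function standing in for m! at non-integer m:

# mP(m,m) rises with m and passes through 1.0 close to m = 6.4.
import math

def mP(m):
    # m * P(m, m) with P(m, C) = C^m e^(-C) / m!  and  m! = gamma(m + 1)
    return m ** (m + 1) * math.exp(-m) / math.gamma(m + 1)

for m in (1.0, 2.0, 4.0, 6.0, 6.4, 7.0):
    print(f"m = {m:3.1f}   mP(m,m) = {mP(m):.3f}")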
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.2.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.2.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source of the connection be among the other units? Will it be one particular unit or another? There will be confusion, and the well structured criterion described in section 3.2.3 would highlight this case in the definition of the system by the fact that the connection exists.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.2.2.4.2 Consistency
A unit is accessed from two other, different units. What interpretation will be placed on the meaning by the recipient unit? The consistency condition in section 3.2.3 will detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.2.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
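A minimal sketch of the first three checks applied to a directed graph of system units; the adjacency list and the unit names are assumptions of the example:

# Validate a system graph for structure, consistency and completeness.
from collections import defaultdict

graph = {
    "entity": ["service", "technique"],
    "service": ["standard", "technique"],
    "standard": [],
    "technique": [],
    "orphan": [],        # defined but never connected or used
}

def validate(graph):
    indegree = defaultdict(int)
    for src, targets in graph.items():
        for dst in targets:
            if dst not in graph:                     # well structured check
                print(f"not well structured: {src} -> undefined unit {dst}")
            indegree[dst] += 1
    for node, targets in graph.items():
        if indegree[node] > 1:                       # consistency check
            print(f"consistency: {node} is reached from several units")
        if indegree[node] == 0 and not targets:      # completeness check
            print(f"incomplete: {node} is defined but unused")

validate(graph)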
9.2.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system, and the entities must speak the same language. Entities consist of user applications, e-mail facilities and terminals. Systems are computers, terminals or remote sensors. Key elements of a protocol are standards (data formats, signal levels), techniques (control information, error handling) and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language can be analysed or generated by the grammar. The grammar is a set of rewrite rules over non-terminal symbols. Grammar types are regular, context-free, context-sensitive and recursively enumerable, with natural languages probably context-free and parsable in real time. Parse trees demonstrate the grammatical structure of a sentence.
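As a small illustration, the following sketch recognises the toy context-free language a^n b^n, generated by the rewrite rule S -> a S b | ε; the grammar is an assumption chosen for brevity:

# Recursive-descent recogniser for S -> a S b | ε, i.e. { a^n b^n : n >= 0 }.
def parse_S(s, i=0):
    # Return the index reached after matching S starting at i, or None.
    if i < len(s) and s[i] == "a":
        j = parse_S(s, i + 1)                   # S -> a S b
        if j is not None and j < len(s) and s[j] == "b":
            return j + 1
        return None
    return i                                    # S -> ε

def in_language(s):
    return parse_S(s) == len(s)

for s in ("", "ab", "aabb", "aab", "ba"):
    print(repr(s), in_language(s))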
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.2.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine which nodes have flow through them and which do not. We can find the edges that are used and those that are unused. We can ascertain the flow between the nodes and which blocks of nodes are single entry or single exit.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and communications. In this case one of these classifications forms the nodes and the others form the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
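A sketch of the error-sink idea as an absorbing Markov chain: from an assumed transition matrix we compute, for each transient node, the probability of ending in the error sink (the states and the probabilities are illustrative):

# Absorbing Markov chain with an error sink: solve for absorption probabilities.
import numpy as np

# States: 0=input, 1=process, 2=output (absorbing), 3=error sink (absorbing).
P = np.array([
    [0.0, 0.9, 0.0, 0.1],   # input   -> process, or error
    [0.0, 0.0, 0.8, 0.2],   # process -> output, or error
    [0.0, 0.0, 1.0, 0.0],   # output stays in output
    [0.0, 0.0, 0.0, 1.0],   # error sink stays in error
])

Q = P[:2, :2]                        # transient-to-transient block
R = P[:2, 2:]                        # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
B = N @ R                            # absorption probabilities

for i, name in enumerate(["input", "process"]):
    print(f"{name}: P(reach output) = {B[i, 0]:.2f}, P(error) = {B[i, 1]:.2f}")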
9.2.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event          Probability
A              P(A)
not A          P(¬A)
A or B         P(A ∨ B)
A and B        P(A ∧ B)
A given B      P(A | B)
When we consider the probability of an event in system research we are talking about events, recurring events or choices of event. In the case of sequences of occurrences we have the probability of selecting the correct unit. We use the logical 'and' operator for selecting groups of entities based on the recurrence of selecting a unit. When we are considering the correctness of the alternatives of units in a service we use the logical 'or' operation. When we come across a situation where one unit for a particular system implies that we will always have to use specific further units, we use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.2.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standards and techniques for the definition of a programming language in terms of a formal language, and media technologies provide a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover that we need to set a priority on the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilation, macros and package libraries assist the reuse of previous work.
If an object, property or method is not found then the error is reported as a stack dump and, after review, the language structure is adjusted.
9.2.3 Analysis
9.2.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.2.3.2 Algebraic Theory
We have used concepts from algebraic theory to give us a set with elements and functions as a basis for a system. The basic elements are derived from entities, services, standards, techniques and communications. We restrict these basic elements by specifying what is allowed. We apply rules of combination to the elements to form larger elements that we classify as systems or subsystems, for which we have rules to say what is correct and what is erroneous. We iterate on the combination for more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.2.3.3 Logic Theory
We have used concepts from logic theory to give us a set with elements and functions as a basis for a system. The basic elements are derived from entities, services, standards, techniques and communications. We restrict these basic elements by specifying what is allowed. We apply rules of combination to the elements to form larger elements that we classify as systems or subsystems, for which we have rules to say what is correct and what is erroneous. We iterate on the combination for more complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and communications. The entities, services and communications are classified into parts of the system with standards and techniques. Techniques give meaning to entities and services and to combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on properties of the entities and services. Other parts of entities and services/communications are ways of defining properties of objects or operations, whilst some apply to the scope of entities, services, standards, techniques and communications.
If an element or function is not found then the error is reported as a stack dump and, after review, the rule structure is adjusted.
9.2.3.4 Compiler Technology Theory
A compiler translates high-level language source programs into target code for running on computer hardware. It follows a sequence of operations: lexical analysis, pre-processing, parsing, semantic analysis (standard-directed translation), code generation and optimization. A compiler-compiler is a parser generator which helps create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for the programming language. It provides the ability to include files, expand macros, compile conditionally and control line information. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files: it replaces the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion are provided by defining sets of code which can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University introduced the terms front end, middle end and back end. The front end verifies standards and techniques and generates an intermediate representation; it produces error and warning messages. It uses the three phases of lexing, parsing and semantic analysis. Lexing and parsing are the syntactic analysis for services and phrases and can be generated automatically from the grammar for the language. The lexical and phrase grammars help with the processing of context-sensitivity, which is handled at the semantic analysis phase and can be automated using attribute grammars. The middle end does some optimizations for the back end. The back end generates the target code and performs more optimisation.
An intermediate language is used to aid the analysis of computer programs within compilers: the source code of a program is translated into a form more suitable for code-improving transformations before being used to generate object code for a target machine. An intermediate representation (IR) is a data structure that is constructed from the input data to a program, and from which part or all of the output data of the program is constructed in turn. Use of the term usually implies that most of the information present in the input is retained by the intermediate representation, with further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and, after review, the processing structure is adjusted.
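A toy front end showing the lexing, parsing and intermediate-representation stages named above; the expression grammar and the postfix IR are assumptions chosen for brevity:

# Lex "1 + 2 * 3" into tokens, parse with precedence, and emit a postfix IR
# suitable for a stack machine.
import re

def lex(src):
    tokens = re.findall(r"\d+|[-+*/()]", src)            # lexical analysis
    if "".join(tokens) != src.replace(" ", ""):
        raise SyntaxError("unrecognised input element")  # query the element
    return tokens

def parse(tokens):
    # Recursive-descent parser emitting a postfix intermediate representation.
    pos, ir = 0, []
    def expr():                      # expr := term (('+'|'-') term)*
        nonlocal pos
        term()
        while pos < len(tokens) and tokens[pos] in "+-":
            op = tokens[pos]; pos += 1
            term(); ir.append(op)
    def term():                      # term := factor (('*'|'/') factor)*
        nonlocal pos
        factor()
        while pos < len(tokens) and tokens[pos] in "*/":
            op = tokens[pos]; pos += 1
            factor(); ir.append(op)
    def factor():                    # factor := NUMBER | '(' expr ')'
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1; expr(); pos += 1      # skip ')'
        else:
            ir.append(tokens[pos]); pos += 1
    expr()
    return ir

print(parse(lex("1 + 2 * 3")))       # ['1', '2', '3', '*', '+']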
9.2.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, together with the development of hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, combining the two related structures.
Physical data models are:
● semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and, after review, the database structure is adjusted.
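A minimal sketch of the key-value, document-oriented storage with XML interoperability mentioned above; the element layout is an assumption of the example:

# Document store: key-value access to entity documents, with XML export.
import xml.etree.ElementTree as ET

store = {}                                    # fast key-value store

def put(key, document):
    store[key] = document                     # document is a plain dict

def to_xml(key):
    doc = store[key]
    root = ET.Element("entity", name=key)
    for name, value in doc.items():           # properties as child tags
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

put("sensor-42", {"type": "thermometer", "version": 3, "unit": "C"})
print(to_xml("sensor-42"))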
9.2.4 Implementation
9.2.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.2.4.2 Learning Theory
9.2.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique: the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information content (entropy) using the formula
$$I(P(v_1), \ldots, P(v_n)) = \sum_{i=1}^{n} -P(v_i) \log_2 P(v_i).$$
For a training set containing $p$ positive examples and $n$ negative examples this gives
$$I\left(\frac{p}{p+n}, \frac{n}{p+n}\right) = -\frac{p}{p+n}\log_2\frac{p}{p+n} - \frac{n}{p+n}\log_2\frac{n}{p+n}.$$
A chosen attribute $A$ with $v$ distinct values divides the training set $E$ into subsets $E_1, \ldots, E_v$ according to their values for $A$, giving the remainder
$$\mathrm{remainder}(A) = \sum_{i=1}^{v} \frac{p_i + n_i}{p+n}\, I\left(\frac{p_i}{p_i+n_i}, \frac{n_i}{p_i+n_i}\right).$$
The information gain (IG), or reduction in entropy from the attribute test, is
$$IG(A) = I\left(\frac{p}{p+n}, \frac{n}{p+n}\right) - \mathrm{remainder}(A).$$
Finally we choose the attribute with the largest IG.
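A short sketch of this attribute choice: compute the entropy and the information gain over a toy training set (the dataset is an assumption of the example):

# Entropy and information gain for choosing a decision-tree attribute.
import math

def entropy(p, n):
    total = p + n
    result = 0.0
    for c in (p, n):
        if c:
            q = c / total
            result -= q * math.log2(q)
    return result

def info_gain(rows, attr):
    # rows: list of (attribute dict, positive bool) training examples.
    p = sum(1 for _, pos in rows if pos)
    n = len(rows) - p
    remainder = 0.0
    for value in {r[attr] for r, _ in rows}:      # subsets E_1 .. E_v
        subset = [(r, pos) for r, pos in rows if r[attr] == value]
        sp = sum(1 for _, pos in subset if pos)
        remainder += len(subset) / len(rows) * entropy(sp, len(subset) - sp)
    return entropy(p, n) - remainder

rows = [({"patrons": "full"}, False), ({"patrons": "some"}, True),
        ({"patrons": "some"}, True), ({"patrons": "none"}, False)]
print(f"IG(patrons) = {info_gain(rows, 'patrons'):.3f}")   # 1.000 here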
Learning viewed as Bayesian updating of a probability distribution over the hypothesis space uses predictions from the likelihood-weighted average over the hypotheses to assess the results, but this can be too problematic. This can be overcome with maximum a posteriori (MAP) learning, choosing the hypothesis that maximises the probability given the training data; expressing this in terms of the full data for each hypothesis and taking logs gives a measure of the bits to encode the data given the hypothesis plus the bits to encode the hypothesis (minimum description length). For large datasets we can use maximum likelihood (ML) learning, maximising the probability of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on the training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
The general procedure is:
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e. inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, though modern optimization techniques do help.
9.2.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and its accuracy, and can be defined by the function $F_1$ as
$$F_1(n_0, n_{00}, D) = \frac{n_0 (1-f^{aK})\, 2^{1-D} + n_{00} (G_s + G_f)\, f^{KT} (1-f^{aDK})}{(1-f^{aK})\, 2^{1-D} + (G_s + G_f)\, f^{KT} (1-f^{aDK})}$$
where
$G_s$ is the reinforcement for each successful scan,
$G_f$ is the reinforcement for each erroneous scan,
$a$ is the reinforcement rate,
$f$ is the extinction rate for memory ($0 < f < 1$),
$T$ is the time over which analyses are made, and
$K$ is the power law describing extinction of memory.
When part of the process is standard we have
$$F_2(u_0, u_1, R_1, D_1) = (1-R_1)\, F_1(u_0, u_1, D) + R_1\, F_1(u'_0, u'_1, D - D_1)$$
to define the modification resulting from changing the work by a proportion $R_1$ after $D_1$ applications out of a total training of $D$ applications, where $u_0$ is the value for the untrained user, $u_1$ for the fully trained user, and the $u'$ are the values under the changed regime.
The effect of exhaustion on the performance of the user is demonstrated by slower operation speeds and increased randomness in probabilities and search scan, following the inverted-U graphs from ergonomics. Thus
$$u_{ij} = u_{ij}^{\max} \left(1 - U_1 (m - m_1)^2\right) + u_{ij}^{\min}\, U_1 (m - m_1)^2$$
where the $u_{ij}$ have minimum values $u_{ij}^{\min}$ and maximum values $u_{ij}^{\max}$, $m_1$ is the value of $m$ giving maximum productivity and $U_1$ is a normalising factor dependent on the energy consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the specialised field of the system. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
9.2.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound and develop written script from that position. They start applying rules for basic concepts, then for combinations of concepts, through rules to meaning. They apply a bottom-up analysis, as in a compiler, to give us rules to add to the knowledge base. The priority of the rules gives them ways of catching idioms. They develop rules to give them generalisation (e.g. animals) and specification (e.g. white tailed frog). Nouns define objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of replicating actions under different situations. Other parts of speech are ways of defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been processed; other language needs to be retained for later times. These aspects vary from person to person depending on their background, and depending on that background will be understood in different ways.
9.2.4.2.4 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i has $p_i$ possible values $a_{ij}$ for $j = 1$ to $p_i$. We find that there are two types of value: the first kind is numeric and the second kind is a classification value such as yes or no. On many occasions we find that we need the condition "don't know" with a classification when the value cannot be specified. The value of each characteristic can change over a set of time periods, so that at period k the characteristic has the value $b_{ik}$, which can be one of the $p_i$ values ranging over $a_{i1}, \ldots, a_{ip_i}$. The values $b_{ik}$ reflect the profile of the system at period k over all n characteristics, and the variation of a characteristic i over the time periods k.
To resolve "don't know" values in the profile, if element l has a "known" decoded value $c_{ikl}$ for characteristic i at time period k, for r such elements, then the "don't know" decoded profile value can be calculated by
$$b_{ik} = \frac{1}{r} \sum_{l=1}^{r} c_{ikl}.$$
Statistics can be calculated for a system from the values of the profile characteristics $b_{ik}$. When we accumulate data for the characteristics of elements over time periods for a system, we can use the data to predict various attributes. We can use the system data to extrapolate the trend of the values of the profile. If we add a new element to the set we can predict its pseudo time period from the profile of the data. We can use that time period to forecast the development of the values of the characteristics of the new element over time. We can assess from the library of data the most effective form of calculation for the system and express these actions mathematically by:
a. given $c_{ikl}$ for all i we can find k so that $|b_{ik} - c_{ikl}|$ is a minimum
b. given $b_{ik}$ for all i we can find j so that $|b_{ik} - a_{ij}|$ is a minimum
c. given $b_{ik}$ for all i and all k, these tend to values $d_i$, where the $d_i$ are limit values for characteristic i.
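A small sketch of case (a), selecting the time period whose profile is closest, in the minimum-distance sense, to an observed element; the profile values are illustrative:

# Find the period k whose profile b[k] minimises the distance to observation c.
import numpy as np

b = np.array([[0.1, 0.2, 0.9],      # profile values b[k][i] per period k
              [0.4, 0.5, 0.6],
              [0.8, 0.7, 0.2]])
c = np.array([0.5, 0.5, 0.5])       # observed characteristic values

distances = np.abs(b - c).sum(axis=1)      # |b_ik - c_ikl| summed over i
k = int(distances.argmin())
print(f"closest period k = {k}, distance = {distances[k]:.2f}")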
The concept can be used in two different ways in the educational field: the browsing mode and the revision mode. The browsing phase can be expressed as specifying characteristic values $e_i$ for i = 1 to q and finding the other characteristic values $f_i$ for i = q+1, ..., n. In revision mode the student suggests the values of $f_i$; when we are assessing, the computer specifies the values of q and the $e_i$, the student supplies the $f_i$, and the computer performs the check as stated above.
9.2.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the values for the locality, and the general statistics for the global system.
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.2.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It collects, processes and reports on all types of spatial information for working with maps, visualization and intelligence, associated with a number of technologies, processes and methods. GIS uses digital information representing discrete objects (vector) and continuous fields (raster images). Displays can illustrate and analyse features and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.2.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given data points, subject to constraints. It uses two main methods, namely interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of the function for missing data values. They also summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, and carries uncertainty due to which particular curve has been determined. Curve fitting relies on various types of constraints, such as a specific point, angle, curvature or other higher-order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or that the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations from the given data points. With language processing it is found that affine matrix transformations help deal with problems of translation and different axes.
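A minimal least squares example: fit a quadratic to noisy samples with numpy.polyfit, then interpolate and extrapolate from the fitted curve (the sample data are illustrative):

# Least squares curve fitting, interpolation and extrapolation.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])     # roughly 2x^2 + 1 with noise

coeffs = np.polyfit(x, y, deg=2)               # least squares fit
fit = np.poly1d(coeffs)

print("interpolated f(2.5) =", round(fit(2.5), 2))
print("extrapolated f(6.0) =", round(fit(6.0), 2))   # beyond observed range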
If any error is found then an error report is generated as a device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
9.2.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and, after review, the database structure is adjusted.
9.2.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of the system from the repository and performs a build and a set of automated tests to ensure that the environment is valid for an update. They perform their update work and rebuild the system, using the build server for compiling binaries; generating documentation, website pages, statistics and distribution media; integration; and deployment into a scalable clone of the production environment, through service virtualization for dependences. The system is then ready to run a set of automated tests consisting of all unit and integration (defect or regression) tests together with static and dynamic tests, and to measure and profile performance, to confirm that it behaves as it should. The developer resubmits the updates to the repository, which triggers another build process and tests. The new updates are committed to the repository when all the tests have been verified; otherwise they are rolled back. At that stage the new system is available to stakeholders and testers. The build process is repeated periodically with the tests to ensure that there is no corruption of the system.
The advantages derive from frequent testing and fast feedback on the impact of local changes. By collecting metrics, information can be accumulated on code coverage, code complexity and features completed, concentrating on functionality, code quality and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.2.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles so that a system release is possible at any time. It makes the build, test and release phases faster and more frequent, reducing the cost, time and risk of delivering changes through small incremental updates. A simple and iterable deployment process is important for continuous delivery.
It uses a deployment pipeline to give visibility, feedback and continual deployment. The visibility analyses the activities, viz. build, deploy, test and release, and reports the status to the development team. The feedback informs the team of problems so that they can be resolved quickly. Continual deployment uses an automated process to deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It includes continuous integration, application release automation, build automation and application life cycle management. It improves time to market, productivity and efficiency, product quality, customer satisfaction, reliability of releases and the consistency of the system with its requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.2.4.9 Virtual Reality
Virtual reality simulates an environment of the user's presence, environment and interaction through sight, touch, hearing and smell. It uses a screen or a special headset to display sight and sound information. Input is made through standard computer input, sight tracking or tactile information. Other technology, such as remote communication, artificial intelligence and spatial data, assists the technology.
If any error is found then an error report is generated and displayed as a device stack
and position then evaluated with respect to time, device, device type, position and
after review the system structure is modified appropriately.
9.2.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 19 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than the
documentation. If they are forced to use documentation, they should have supple joints
and long light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to
concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then used the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements

System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds all generated system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, measure and profile performance to ensure that their environment is valid. The
trigger points are before and after update and at release to the production system
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
features complete concentrating on functional, quality code, and team momentum.
In continuous delivery, the development / deployment activity is smaller by automating
all the processes for source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). GIS
enables entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment, giving the user a sense of presence and
interaction through sight, touch, hearing and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technologies such as
remote communication, artificial intelligence and spatial data assist the technology.
In IoT we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised
standards and techniques for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
both source (input) language, intermediate language and target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications or item
of hardware or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
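A minimal sketch of this layering, with invented header strings, shows control
information being added to the user data on the way down and stripped in the reverse
order at the destination:

def send(data: bytes) -> bytes:
    """Each layer prepends its own control information to the user data."""
    application = b"APPHDR|" + data       # application-layer header
    transport = b"SEQ1|" + application    # sequencing / exchange management
    return b"ADDR|" + transport           # addressing and routing

def receive(packet: bytes) -> bytes:
    """The peer layers strip the headers in the reverse order."""
    for header in (b"ADDR|", b"SEQ1|", b"APPHDR|"):
        assert packet.startswith(header)  # protocol check at this layer
        packet = packet[len(header):]
    return packet

assert receive(send(b"user data")) == b"user data"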
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity
is not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is
queried to a human to see if there is an error or whether the entity should be added to
the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once when the system is added to and changed and removed infrequently as the
system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.3 Entities
9.3.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 19 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for entities.
9.3.2 Theoretical Studies
9.3.2.1 Introduction
The theoretical studies for IoT security consists of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.3.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. They should use their memory rather than the documentation. If they are forced
to use documentation, they should have supple joints and long light fingers which allow
pages to slip through them when making a reference. Finger motion should be kept
gentle, within the range of movement and concentrated in the fingers only. The user
should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.3.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log2 n1
N2 = n2 log2 n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log2 n1 + N2 log2 n2
If V = actual program volume
V* = theoretical program volume
then V = N log2 n
V* = N* log2 n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number
then m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log2 n) / (1000 n2)
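These relations are easy to evaluate mechanically; the sketch below uses illustrative
counts (a real tool would obtain n1, n2, N1 and N2 from a lexer) and takes logarithms
to base 2 as in Halstead's formulation.

from math import log2

n1, n2 = 20, 30      # distinct operators / distinct operands (example values)
N1, N2 = 100, 160    # total occurrences of operators / operands (example values)

n = n1 + n2                               # program vocabulary
N = N1 + N2                               # observed program length
N_hat = n1 * log2(n1) + n2 * log2(n2)     # estimated length from the vocabulary
V = N * log2(n)                           # actual program volume
E = n1 * log2(n) / (1000 * n2)            # Mohanty's error-rate estimate
print(round(N_hat), round(V), round(E, 5))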
The mean free path theorem derives the relations:
P(m,C) = C^m e^(−C) / m! = probability of hitting the target m times for a coverage ratio C
C = nast/z = coverage ratio, the ratio between the area covered by the search process
and the search area
where
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
The corresponding totals and averages are:
P = total value of probability
N = total number of attempts
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that number of targets can be calculated as the average number of
attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when we
apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values:

Variable    Value    Value
m           0        6.4
mP(m,m)     0        1

mP(m,m) = 0 when m = 0 or -0.5665.
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times,
and m^(m+1) e^(−m)/m! is an increasing function for m increasing above zero,
corresponding to a measure of complexity with a value of 1 for m = 6.4 approximately,
or the lucky seven.
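This behaviour is easy to confirm numerically; the sketch below evaluates the Poisson
form of P(m,C) given above and shows mP(m,m) first exceeding 1 between m = 6 and m = 7.

from math import exp, factorial

def P(m: int, C: float) -> float:
    """Probability of hitting the target m times at coverage ratio C."""
    return C ** m * exp(-C) / factorial(m)

for m in range(1, 9):                 # search the whole area m times: C = m
    print(m, round(m * P(m, m), 3))   # mP(m,m) passes 1 near m = 6.4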
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
9.3.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.3.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.3.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under portion 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.3.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
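As an illustration, the completeness check amounts to a reachability test over the
network; in the Python sketch below the graph and its node names are invented, and the
unreached node is the one to be reported for review.

from collections import deque

edges = {"entry": ["parse", "report"],   # illustrative system graph
         "parse": ["validate"],
         "validate": ["report"],
         "orphan": []}                   # defined but never used

def reachable(start, edges):
    """Breadth-first search for every node reachable from the entry point."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unused = set(edges) - reachable("entry", edges)
print(unused)   # -> {'orphan'}: reported as an error for review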
9.3.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed-generated by the grammar. The grammar is a set of rewrite rules to
form non terminals. Grammar types are regular, context-free and context-sensitive and
recursively enumerable with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.3.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one of these classified as
nodes and the others as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
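A small numerical sketch of the error-sink construction is given below; the transition
probabilities are invented, and iterating the chain until the flow is absorbed gives
the total error rate of the network.

P = {                                 # row-stochastic transition probabilities
    "A":     {"B": 0.9, "error": 0.1},
    "B":     {"C": 0.95, "error": 0.05},
    "C":     {"done": 0.98, "error": 0.02},
    "done":  {"done": 1.0},           # successful completion (absorbing)
    "error": {"error": 1.0},          # error sink (absorbing)
}

mass = {n: 0.0 for n in P}
mass["A"] = 1.0                       # all flow enters at node A
for _ in range(100):                  # iterate the chain to absorption
    nxt = {n: 0.0 for n in P}
    for node, row in P.items():
        for succ, p in row.items():
            nxt[succ] += mass[node] * p
    mass = nxt
print(round(mass["error"], 4))        # -> 0.1621 = 1 - 0.9 * 0.95 * 0.98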
9.3.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities:

Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
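A small numeric illustration of these combinations, with invented probabilities, is:

p_a, p_b = 0.9, 0.8          # probability each unit is selected correctly

p_and_independent = p_a * p_b          # A and B, independent units
p_or = p_a + p_b - p_a * p_b           # A or B (alternative units)
p_b_given_a = 0.95                     # dependence: B usually follows A
p_and_dependent = p_a * p_b_given_a    # A and B, dependent form
print(p_and_independent, p_or, p_and_dependent)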
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
9.3.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standards and techniques
for the definition of a programming language in terms of a formal language, and media
technologies provide a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
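The sketch below illustrates these concepts in Python, where self plays the role of the
"this" operator; the class names and properties are invented for the example.

class Entity:                            # generalisation: a common base class
    def __init__(self, name):
        self.name = name                 # 'self' plays the role of 'this'

    def describe(self, detail=False):    # meaning overloaded by context
        return f"{self.name} (detailed)" if detail else self.name

class Service(Entity):                   # specification of the general case
    def __init__(self, name, priority):
        super().__init__(name)
        self.priority = priority         # property specific to services

s = Service("address lookup", priority=1)
print(s.describe(), "/", s.describe(detail=True))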
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
9.3.3 Analysis
9.3.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.3.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.3.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.3.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations: lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion of files, macro
expansions, conditional compilation and line control. The pre-processor directives are
only weakly related to the programming language. The pre-processor is often used to
include other files, replacing the directive line with the text of the file.
Conditional compilation directives allow the inclusion or exclusion of lines of code.
Macro definition and expansion is provided by the definition of sets of code which can
be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
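A toy front end makes this division of labour concrete; the one-rule grammar below
(expr := term ('+' term)*) is invented for the illustration, with lexing, parsing and a
nested-tuple intermediate representation.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(\+))")

def lex(src):
    """Lexical analysis: turn source text into (kind, value) tokens."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"unrecognised input at {pos}")  # front-end error
        num, name, op = m.groups()
        tokens.append(("num", num) if num else ("name", name) if name else ("op", op))
        pos = m.end()
    return tokens

def parse(tokens):
    """Parsing: build a nested-tuple intermediate representation."""
    def term(i):
        return tokens[i], i + 1
    node, i = term(0)
    while i < len(tokens) and tokens[i] == ("op", "+"):
        rhs, i = term(i + 1)
        node = ("add", node, rhs)
    return node

print(parse(lex("x + 1 + y")))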
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
9.3.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which
are derived from object databases to overcome the problems met with object programming
and relational databases, and also on the development of hybrid object-relational
databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
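As a sketch of this document-oriented, key-value style of storage (the tags and field
names are invented; the paper's actual scheme is given in its appendix), a record can
be held in a key-value store and rendered as XML for interoperability:

import xml.etree.ElementTree as ET

store = {}                               # fast key-value store

entity = {"name": "sensor-17", "version": 3,
          "properties": {"type": "temperature", "unit": "C"}}
store["entity:sensor-17"] = entity       # insert / modify by key

# Render the stored document as XML for exchange between implementations.
root = ET.Element("entity", name=entity["name"], version=str(entity["version"]))
for key, value in entity["properties"].items():
    ET.SubElement(root, "property", name=key, value=str(value))
print(ET.tostring(root, encoding="unicode"))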
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, which combines the two related structures.
Physical data models are:
● Semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.3.4 Implementation
9.3.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.3.4.2 Learning Theory
9.3.4.2.1 General Methods
Learning is performed in finding how to improve the state in some environment. It can
be done by observation or by training. There are 2 different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), ..., P(vn)) = Σ_{i=1..n} −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:

I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))

The information gain for a chosen attribute A divides the training set E into subsets
E1, ..., Ev according to their values for A, where A has v distinct values:

remainder(A) = Σ_{i=1..v} ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))

The information gain (IG) or reduction in entropy from the attribute test is shown to be:

IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
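These formulas transcribe directly into code; in the sketch below the counts are
illustrative and the attribute is two-valued.

from math import log2

def I(p, n):
    """Information content of a set with p positive and n negative examples."""
    def h(x):
        return -x * log2(x) if x > 0 else 0.0
    return h(p / (p + n)) + h(n / (p + n))

def information_gain(p, n, subsets):
    """subsets: one (pi, ni) pair per distinct value of the attribute A."""
    remainder = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in subsets)
    return I(p, n) - remainder

# 8 positive and 4 negative examples split by a two-valued attribute:
print(information_gain(8, 4, [(6, 1), (2, 3)]))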
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions of the likelihood-weighted average over the
hypotheses to assess the results, but this can be too problematic. This can be
overcome with the
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is
intractable; MAP learning balances complexity with accuracy on the training data; and
maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
The standard maximum-likelihood procedure is:
1. Choose a parametrized family of models to describe the data; this requires
substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may
require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, although modern optimization techniques help.
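A minimal instance of this recipe, for a boolean model with one parameter theta and
invented counts, shows a numerical search agreeing with the analytic optimum
theta = successes/trials:

from math import log

successes, trials = 7, 10

def log_likelihood(theta):
    """Step 2: likelihood of the data as a function of the parameter."""
    return successes * log(theta) + (trials - successes) * log(1 - theta)

# Steps 3-4 by brute force: the maximum sits where the derivative vanishes.
best = max((i / 100 for i in range(1, 100)), key=log_likelihood)
print(best, successes / trials)   # -> 0.7 0.7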
9.3.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n∞, D) = [n0 (1−f^(aK)) 2^(1−D) + n∞ (Gs + Gf) f^(KT) (1−f^(aDK))] /
[(1−f^(aK)) 2^(1−D) + (Gs + Gf) f^(KT) (1−f^(aDK))]

where Gs is the reinforcement for each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1−R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D−D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications. Here u0 is the value for the
untrained user, u∞ for the fully trained user, and u'0 and u'∞ are the corresponding
values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax (1 − U1 (m−m1)^2) + uijmin U1 (m−m1)^2
where the uij have minimum values uijmin and maximum values uijmax, m1 is the value of
m giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.3.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later use. These aspects vary from
person to person depending on their background, and because of that background the
language will be understood in different ways.
9.3.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value.
The first case is numeric and the second kind is a classification value such as yes or
no. On many occasions we find that we need the condition "don't know" with
classification when the value cannot be specified. The value of each characteristic can
change over a set of time periods so that at period k the characteristic has the value
bik, which can be one of the pi values ranging over ai1 ... aipi. The values bik
reflect the profile of the system at period k for all the n characteristics and the
variation of a characteristic i over the time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = (Σ_{l=1..r} cikl) / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding that other characteristic values fi for i
= q + 1,..., n.
In revision mode the student suggests values of fi and when we are assessing the
computer specifies the values of q and fi so the student specified the fi and the
computer performs the check as stated above.
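The sketch below illustrates both operations with invented profiles: "don't know"
values are imputed (here by the mean of the known values in the same profile, a
simplification of the per-characteristic average above) and a new element is matched to
the nearest stored profile by minimum squared distance.

def impute(values):
    """Replace None ("don't know") by the mean of the known values."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

profiles = {                       # bik per time period k (invented data)
    "period-1": [0.2, 1.0, 3.0],
    "period-2": [0.5, None, 4.0],  # contains a "don't know" value
    "period-3": [0.9, 3.0, 5.0],
}
profiles = {k: impute(v) for k, v in profiles.items()}

def nearest(profile, profiles):
    """Select the stored profile at minimum squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda k: dist(profiles[k], profile))

print(nearest([0.6, 2.0, 4.1], profiles))   # pseudo time period for the new element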
9.3.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the 'and' and 'or' logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the first comprises the values for the
locality; the second set of values is the general statistics for the global system.
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
9.3.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
(vector) and continuous fields (raster images). Displays can illustrate and analyse
features, and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.3.4.5 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods namely interpolation, for
an exact fit of data or smoothing, for a "smooth" curve function approximating to the
data. Regression analysis, gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of a function
where data values are missing. They also summarize the relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and gives uncertainty due to which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher order constraints especially at the ends of the
points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve even then there is no guarantee that all
constraints are met or the exact curve is found. Curves are assessed by various
measures with a popular procedure being the least squares method which measures
the deviations of the given data points. With language processing it is found that affine
matrix transformations help deal with problems of transition and different axes.
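As a short illustration, a least squares fit with interpolation and (cautious)
extrapolation can be done with numpy; the data points below are invented.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.2, 2.9, 4.1, 4.8])   # noisy, roughly linear data

coefficients = np.polyfit(x, y, deg=1)    # least squares straight line
fit = np.poly1d(coefficients)

print(fit(2.5))   # interpolation within the observed range
print(fit(6.0))   # extrapolation: increasingly uncertain beyond the data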
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
9.3.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.3.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that the environment is valid for update. The developer then makes the update
and rebuilds the system using the build server, which compiles binaries, generates
documentation, website pages, statistics and distribution media, and handles
integration and deployment into a scalable clone of the production environment through
service virtualization for its dependences. The system is then ready to run a set of
automated tests consisting of all unit and integration (defect or regression) tests
together with static and dynamic tests, and to measure and profile performance to
confirm that it behaves as it should. The developer resubmits the updates to the
repository, which triggers another build process and tests. The new updates are
committed to the repository when all the tests have passed; otherwise they are rolled
back. At that stage the new system is available to stakeholders and testers. The
build process is repeated periodically with the tests to ensure that there is no
corruption of the system.
The advantages derive from frequent testing and fast feedback on the impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity and feature completeness, concentrating on functional, quality code
and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.3.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles so that a system
release is possible at any time. It performs the build, test and release phases faster
and more frequently, reducing the cost, time and risk of delivered changes through
small incremental updates. A simple and repeatable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback and continuous deployment.
Visibility analyses the activities, namely build, deploy, test and release, and
reports their status to the development team. Feedback informs the team of problems so
that they can be resolved quickly. Continuous deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and the consistency of the system with its
requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.3.4.9 Virtual Reality
Virtual reality simulates an environment, giving the user a sense of presence and
interaction through sight, touch, hearing and smell. It uses a screen or a special
headset to display sight and sound information. Input is made through standard
computer input, sight tracking or tactile information. Other technologies such as
remote communication, artificial intelligence and spatial data assist the technology.
If any error is found then an error report is generated and displayed as a device stack
and position then evaluated with respect to time, device, device type, position and
after review the system structure is modified appropriately.
9.3.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 19 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than the
documentation. If they are forced to use documentation, they should have supple joints
and long light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines, and for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, and measures and profiles performance to ensure that the environment is valid. The
trigger points are before and after update and at release to the production system,
when triggers force commits to the repository or a rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
feature completeness, concentrating on functional, quality code, and team momentum.
In continuous delivery, the development and deployment effort is reduced by automating
all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data
values which are not affected by position, e.g. the general description of a hardware
type. The other is dependent on position, e.g. a hardware unit in the network. The data
comprises discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment for the user's presence and interaction through
sight, touch, hearing, and smell. Input is made through standard computer input, sight
tracking or tactile information. Remote communication, artificial intelligence and spatial
data assist the technology. In IoT we use the technology to control all hardware and
routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised
standards and techniques for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilation, macros and packages/libraries
assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority (value and relative to other services) and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service) and priority (value and
relative to other techniques). We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the
entities must speak the same language throughout. Entities consist of user applications,
items of hardware or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
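A small sketch of this layering: each layer prepends its own control information to the user data and the receiver strips it in reverse order; the layer names are examples, not a prescribed stack.

# A sketch of layered encapsulation: each layer wraps the user data
# with its own control information (a header). Layer names are examples.
def encapsulate(data: bytes, layers: list[str]) -> bytes:
    for layer in layers:                        # application first, link last
        header = f"[{layer}]".encode()
        data = header + data                    # control info added per layer
    return data

def decapsulate(frame: bytes, layers: list[str]) -> bytes:
    for layer in reversed(layers):              # strip in reverse order
        header = f"[{layer}]".encode()
        assert frame.startswith(header), f"bad {layer} header"
        frame = frame[len(header):]
    return frame

layers = ["application", "transport", "network", "link"]
frame = encapsulate(b"reading=21.5C", layers)
print(frame)                      # b'[link][network][transport][application]reading=21.5C'
print(decapsulate(frame, layers)) # b'reading=21.5C'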
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed or removed infrequently as the system is extended.
It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.4 Services
9.4.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for services.
9.4.2 Theoretical Studies
9.4.2.1 Introduction
The theoretical studies for IoT security consists of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.4.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. They should use their memory rather than documentation. If forced to use
documentation, they should have supple joints and long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle,
within the range of movement and concentrated in the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet those
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.4.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 =total number of occurrences of operands
then N1 = n1log n1
N2 = n2log n2
If n= program vocabulary
N= program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V= actual program volume
V*= theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V= program level
λ = LV*= programming language level
S= Stroud Number then
m = V/L= number of mental discriminations
d = m/S=development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log n/1000n2
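As a minimal sketch of these relations, assuming the standard Halstead forms and a Stroud number of 18, the following Python fragment computes the quantities above from given operator and operand counts (the counts are invented):

# A sketch computing the Halstead quantities above from given counts.
import math

def halstead(n1: int, n2: int, N1: int, N2: int, stroud: float = 18.0):
    n = n1 + n2                                        # program vocabulary
    N = N1 + N2                                        # program length
    N_star = n1 * math.log2(n1) + n2 * math.log2(n2)   # theoretical length
    V = N * math.log2(n)                               # actual volume
    V_star = N_star * math.log2(n)                     # theoretical volume (n* = n)
    L = V_star / V                                     # program level
    m = V / L                                          # mental discriminations
    d = m / stroud                                     # development time (Stroud number S)
    E = n1 * math.log2(n) / (1000 * n2)                # Mohanty error rate
    return dict(vocabulary=n, length=N, est_length=N_star,
                volume=V, est_volume=V_star, level=L,
                discriminations=m, dev_time=d, error_rate=E)

print(halstead(n1=10, n2=15, N1=40, N2=35))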
The mean free path theorem derives the relations:
P(m,C) = C^m e^(−C) / m! = probability of hitting the target m times for a coverage ratio C
C = n a s t / z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of
attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors, or the number of statements, when we
apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst
the s, t and a relations reflect the relations between S, T and A described earlier; m
shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values.
m          0      6.4
mP(m,m)    0      1
d[mP(m,m)]/dm = 0 when m = 0 or m = −0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times.
The function mP(m,m) = m^(m+1) e^(−m)/m! is an increasing function for m increasing above
zero, corresponding to a measure of complexity, with a value of 1 for m = 6.4
approximately, or the lucky seven.
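As a numeric check of this claim, the following Python fragment evaluates mP(m,m) with P(m,C) taken as the Poisson form C^m e^(−C)/m! and m! generalised through the gamma function; it suggests the complexity measure does pass through 1 near m = 6.4:

# Numeric check of the claim above: with P(m,C) = C**m * exp(-C) / m!,
# the complexity measure m*P(m,m) grows with m and passes 1 near m = 6.4.
from math import exp, lgamma, log

def P(m: float, C: float) -> float:
    # Poisson probability, with m! generalised via the gamma function
    return exp(m * log(C) - C - lgamma(m + 1))

for m in [1.0, 3.0, 6.0, 6.4, 7.0]:
    print(f"m = {m:4.1f}   m*P(m,m) = {m * P(m, m):.3f}")
# m*P(m,m) is roughly 0.37, 0.67, 0.96, 1.00, 1.04 -- about 1 at m = 6.4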
If any error is found then it is reported as a device stack and position, then evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.4.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.4.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion, and the well-structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is such a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.4.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under portion 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.4.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.4.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system; the entities must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules
over non-terminal symbols. Grammar types are regular, context-free, context-sensitive and
recursively enumerable, with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.4.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case one of these is classified as nodes and the others as edges.
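A minimal sketch of the error-sink idea, assuming a small invented transition matrix: the sink is made absorbing and the standard absorbing-chain calculation gives, for each node, the probability of eventually reaching the sink:

# A sketch of the error sink: make the sink absorbing and compute, for
# every transient node, the probability of eventually reaching the sink.
# The transition matrix here is an invented three-node example.
import numpy as np

P = np.array([
    # to: n0    n1    exit  sink(error)
    [0.0, 0.7, 0.2, 0.1],    # from n0
    [0.0, 0.0, 0.9, 0.1],    # from n1
    [0.0, 0.0, 1.0, 0.0],    # exit is absorbing
    [0.0, 0.0, 0.0, 1.0],    # error sink is absorbing
])
transient = [0, 1]                               # n0, n1
Q = P[np.ix_(transient, transient)]              # transient-to-transient flows
R = P[np.ix_(transient, [3])]                    # transient-to-sink edges
N = np.linalg.inv(np.eye(len(transient)) - Q)    # fundamental matrix
print(N @ R)   # absorption probability into the error sink per start node
# n0: 0.1 + 0.7 * 0.1 = 0.17, n1: 0.1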
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
9.4.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units, we use the
dependent forms of the "and" and "or" logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
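As a small sketch of these operators, with invented probabilities: the dependent form of "and" uses a conditional probability, and "or" subtracts the overlap:

# A sketch of the probability rules used above, including the dependent
# ("given") forms; the numbers are invented for illustration.
def p_and(p_a: float, p_b_given_a: float) -> float:
    return p_a * p_b_given_a                # P(A and B) = P(A) P(B|A)

def p_or(p_a: float, p_b: float, p_ab: float) -> float:
    return p_a + p_b - p_ab                 # P(A or B) = P(A)+P(B)-P(A and B)

p_unit = 0.9        # P(correct unit selected)
p_next = 0.8        # P(required further unit correct | unit correct)
both = p_and(p_unit, p_next)
either = p_or(p_unit, 0.8, both)
print(both, either)  # 0.72  0.98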
If any error is found then it is reported as a device stack and position, then evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.4.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standards and techniques
for the definition of a programming language in terms of a formal language, and from media
technologies we find a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional compilation,
macros and packages/libraries assist the use of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
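A minimal Python sketch of these object-oriented ideas; the class names are invented for illustration:

# A sketch of the object-oriented ideas above: generalisation (a base
# class), specification (a subclass), properties, methods and the
# "this"/self reference. Class names are invented for illustration.
class Service:                          # generalisation
    def __init__(self, name: str):
        self.name = name                # the "this" operator -> self
        self.properties: dict = {}

    def describe(self) -> str:
        return f"service {self.name}"

class SecureService(Service):           # specification by inheritance
    def describe(self) -> str:          # overriding: meaning changes with context
        return super().describe() + " (authenticated, encrypted)"

for s in (Service("transfer"), SecureService("transfer")):
    print(s.describe())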
9.4.3 Analysis
9.4.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.4.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
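As a sketch of this view, assuming a toy rule table rather than the paper's actual rules: elements carry a classification, and a combination is validated against the rules:

# A sketch of the algebraic view: basic elements carry a class, and a
# rule table says which combinations form a valid larger element.
# The rule table here is a toy assumption, not the paper's rule set.
BASIC_CLASSES = {"entity", "service", "standard", "technique", "communication"}

# (class of left part, class of right part) -> class of the combination
RULES = {
    ("entity", "service"): "communication",
    ("service", "standard"): "service",
    ("service", "technique"): "service",
}

def combine(left: tuple, right: tuple) -> tuple:
    """Each element is (name, class); invalid combinations are errors."""
    (ln, lc), (rn, rc) = left, right
    assert lc in BASIC_CLASSES and rc in BASIC_CLASSES
    result_class = RULES.get((lc, rc))
    if result_class is None:
        raise ValueError(f"erroneous combination: {lc} + {rc}")
    return (f"{ln}+{rn}", result_class)

sensor = ("sensor", "entity")
transfer = ("data-transfer", "service")
print(combine(sensor, transfer))   # ('sensor+data-transfer', 'communication')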
9.4.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
the system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.4.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor
directives are only weakly related to the programming language. The pre-processor is
often used to include other files, replacing the directive line with the text of the file.
Conditional compilation directives allow the inclusion or exclusion of lines of code.
Macro definition and expansion is provided by the definition of sets of code which
can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing of context-sensitivity handled at the semantic analysis
phase which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or features for rapid lookup.
If an element or function is not found then the error is reported as a stack dump and
after review adjust processing structure.
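A toy illustration of the front end, assuming a small expression language rather than any particular IoT language: a lexer, a recursive-descent parser and a flat three-address intermediate representation:

# A toy front end for the pipeline above: a lexer, a recursive-descent
# parser and a flat three-address IR, for expressions like "a + b * 2".
import re

TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|([+*]))")

def lex(src: str):
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character at {pos}")  # front end reports errors
        num, name, op = m.groups()
        out.append(("num", num) if num else ("id", name) if name else ("op", op))
        pos = m.end()
    return out

def parse(tokens):
    """expr := term ('+' term)*; term := factor ('*' factor)*"""
    ir, tmp = [], iter(range(1000))
    def emit(op, a, b):
        t = f"t{next(tmp)}"
        ir.append((op, a, b, t))          # three-address intermediate code
        return t
    def factor(i):
        _kind, val = tokens[i]
        return val, i + 1
    def term(i):
        a, i = factor(i)
        while i < len(tokens) and tokens[i] == ("op", "*"):
            b, i = factor(i + 1)
            a = emit("mul", a, b)
        return a, i
    def expr(i):
        a, i = term(i)
        while i < len(tokens) and tokens[i] == ("op", "+"):
            b, i = term(i + 1)
            a = emit("add", a, b)
        return a, i
    expr(0)
    return ir

print(parse(lex("a + b * 2")))  # [('mul', 'b', '2', 't0'), ('add', 'a', 't0', 't1')]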
9.4.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQLMongoDB or NewSQL/ScaleBase) are
derived from object databases to overcome the problems met with object programming
and relational database and also the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
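As a minimal sketch of the key-value and document-oriented idea, a service document is serialised to XML so that different implementations can exchange it; the tag names are illustrative and not the schema of section 8:

# A sketch of the key-value / document-store idea: a service document is
# serialised to XML so different implementations can exchange it.
# The tag names are illustrative, not the schema of section 8.
import xml.etree.ElementTree as ET

store: dict[str, str] = {}      # key-value store: key -> XML document

def put_service(key: str, name: str, version: int, props: dict) -> None:
    doc = ET.Element("service", name=name, version=str(version))
    for pname, pvalue in props.items():
        ET.SubElement(doc, "property", name=pname).text = str(pvalue)
    store[key] = ET.tostring(doc, encoding="unicode")

put_service("svc:42", "data-transfer", 3, {"protocol": "MQTT", "qos": 1})
print(store["svc:42"])
# <service name="data-transfer" version="3"><property name="protocol">MQTT</property>...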
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, which combines the two related structures.
Physical data models are:
● Semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.4.4 Implementation
9.4.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.4.4.2 Learning Theory
9.4.4.2.1 General Methods
Learning is the process of finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2 (p/(p+n)) − (n/(p+n)) log2 (n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E 1,
… , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
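A small sketch of this choice with invented counts: compute I, remainder(A) and IG(A) for each attribute and keep the largest gain:

# A sketch of the attribute choice above: compute I, remainder(A) and
# IG(A) from positive/negative counts, then pick the largest gain.
from math import log2

def I(p: int, n: int) -> float:
    total = p + n
    return sum(-x / total * log2(x / total) for x in (p, n) if x)

def gain(p: int, n: int, subsets: list[tuple[int, int]]) -> float:
    # subsets: (p_i, n_i) counts of E_1 .. E_v for the attribute's v values
    remainder = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in subsets)
    return I(p, n) - remainder

# Invented example: 6 positive / 6 negative, each attribute splits three ways
splits = {"A1": [(4, 0), (2, 2), (0, 4)], "A2": [(2, 2), (2, 2), (2, 2)]}
best = max(splits, key=lambda a: gain(6, 6, splits[a]))
print(best, {a: round(gain(6, 6, s), 3) for a, s in splits.items()})
# A1 {'A1': 0.667, 'A2': 0.0}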
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions of a likelihood-weighted average over the hypotheses
to assess the results, but this can be too problematic. This can be overcome with
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise full Bayesian learning gives best possible predictions but is intractable,
MAP learning balances complexity with accuracy on training data and maximum
likelihood assumes uniform prior, and is satisfactory for large data sets.
For maximum likelihood learning the steps are:
1. Choose a parametrized family of models to describe the data; this requires substantial
insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require
summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, though modern optimization techniques do help.
9.4.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n∞, D) = [n0 (1 − f^aK) 2^(1−D) + n∞ (Gs + Gf) f^KT (1 − f^aDK)] /
[(1 − f^aK) 2^(1−D) + (Gs + Gf) f^KT (1 − f^aDK)]
where n0 and n∞ are the values for the untrained and the fully trained user, D is the total
training,
Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0<f<1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 − R1) F1(u0, u∞, D) + R1 F1(u′0, u′∞, D − D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications, where u0 is the value for the
untrained user, u∞ for the fully trained user, and u′0, u′∞ are the corresponding values
under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics. Thus:
uij = uijmax (1 − U1 (m − m1)²) + uijmin U1 (m − m1)²
where the uij have minimum values uijmin and maximum values uijmax, m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.4.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background will
be understood in different ways.
9.4.4.2.4 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value. The
first case is numeric and the second kind is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods, so that at period k the value of the characteristic is bik, which can
take one of the pi values ranging over ai1 … aipi. The values bik reflect the profile of the
system at period k for all the n characteristics and the variation of a characteristic i
over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value cikl for a characteristic i at time period k, for r elements, then the "don't know"
decoded profile value can be calculated by:
bik = Σl=1..r cikl / r
Statistics can be calculated for a system from the values of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for
i = q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the value of q, the student supplies the fi, and the
computer performs the check as stated above.
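As a sketch of this matching, with invented profile data: "don't know" values are resolved as the mean of the known values, and the nearest time period is chosen by minimum distance:

# A sketch of the medical-systems matching above: resolve "don't know"
# values as the mean of known values, then pick the time period k whose
# profile b_ik is nearest (minimum distance) to a new element's values.
def resolve(values: list) -> float:
    known = [v for v in values if v is not None]     # None = "don't know"
    return sum(known) / len(known)

# profiles[k][i] = b_ik: value of characteristic i at time period k
profiles = {0: [1.0, 0.2], 1: [2.0, 0.5], 2: [3.0, 0.9]}

def nearest_period(c: list) -> int:
    def dist(k: int) -> float:
        return sum((b - x) ** 2 for b, x in zip(profiles[k], c))
    return min(profiles, key=dist)

new_element = [resolve([2.1, None, 1.9]), 0.6]       # c_ikl with a gap
print(nearest_period(new_element))                    # 1: the closest profile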
9.4.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units, we
use the dependent forms of the "and" and "or" logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information. The first is the set of values for the locality;
the second is the general statistics for the global system.
If any error is found then it is reported as a device stack and position, then evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.4.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
(vector) and continuous fields (raster images). Displays can illustrate and analyse
features, and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.4.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function that best fits a series of given
data points, subject to constraints. It uses two main methods: interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the function
where data values are missing. They also summarize relations among the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and carries an uncertainty which depends on the particular curve
that has been determined. Curve fitting relies on various types of constraints such as a
specific point, angle, curvature or other higher-order constraints, especially at the ends
of the points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures
the deviations from the given data points. With language processing it is found that affine
matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated as a device stack and position,
then evaluated with respect to time, device, device type and position, and after review
the system structure is modified appropriately.
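A minimal sketch of least-squares fitting with interpolation and extrapolation, assuming numpy and invented learning statistics:

# A sketch of least-squares curve fitting, using numpy's polynomial fit;
# the data points are invented learning statistics.
import numpy as np

t = np.array([1, 2, 3, 4, 5], dtype=float)        # training sessions
errors = np.array([9.1, 6.8, 5.2, 4.1, 3.4])      # observed error counts

coeffs = np.polyfit(t, errors, deg=2)             # least squares fit
fit = np.poly1d(coeffs)

print(fit(3.5))     # interpolation: estimate between observed points
print(fit(7.0))     # extrapolation: prediction beyond the data, less certain
residuals = errors - fit(t)
print(np.sqrt(np.mean(residuals ** 2)))           # deviation measure (RMSE)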
9.4.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.4.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to ensure
that their environment is valid for update. They perform their update work and rebuild the
system using the build server for compiling binaries, generating documentation,
website pages, statistics and distribution media, with integration and deployment into a
scalable clone of the production environment through service virtualization for
dependencies. It is then ready to run a set of automated tests consisting of all unit and
integration (defect or regression) tests with static and dynamic tests, measuring and
profiling performance to confirm that the system behaves as it should. The developer
resubmits the updates to the repository, which triggers another build process and tests.
The new updates are committed to the repository when all the tests have been verified,
otherwise they are rolled back. At that stage the new system is available to stakeholders
and testers. The build process is repeated periodically with the tests to ensure that
there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and feature completeness, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
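As a sketch of this trigger logic, with placeholder build and test commands standing in for a real project's pipeline: each stage must pass before the changes are committed, otherwise they are rolled back:

# A sketch of the trigger logic above: build and test on every change,
# commit on success, roll back on failure. The build/test commands are
# placeholders, not a real project's pipeline.
import subprocess

STAGES = [
    ["make", "build"],             # compile binaries, docs, media (placeholder)
    ["make", "unit-tests"],        # automated unit tests (placeholder)
    ["make", "integration-tests"], # defect / regression tests (placeholder)
]

def ci_run() -> bool:
    for cmd in STAGES:
        if subprocess.run(cmd).returncode != 0:
            print("stage failed:", " ".join(cmd), "- rolling back")
            subprocess.run(["git", "reset", "--hard", "HEAD~1"])  # rollback
            return False
    subprocess.run(["git", "push", "origin", "main"])             # commit/release
    print("all stages passed - changes committed")
    return True

if __name__ == "__main__":
    ci_run()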
9.4.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to allow a system
release at any time. It does the build, test, and release phases faster and more
frequently, reducing the cost, time, and risk of delivered changes through small
incremental updates. A simple and repeatable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment.
Visibility analyses the activities viz. build, deploy, test, and release, and reports the
status to the development team. Feedback informs the team of problems so that
they can be resolved soon. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.4.4.9 Virtual Reality
Virtual reality simulates an environment for the user's presence and interaction through
sight, touch, hearing, and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Remote communication, artificial intelligence and
spatial data assist the technology.
If any error is found then an error report is generated and displayed as a device stack
and position, then evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
9.4.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 19 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet those requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be a good worker (accurate, efficient, good
memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If forced to use documentation, they should have supple joints and long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element    Number of System Elements
Entities          Number of Entities in the System
Services          Number of Services in the System
Standards         Number of Standards in the System
Techniques        Number of Techniques in the System
Communications    Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standard, the probability of correctness is improved by the use of formal
standard rules.
● For technique, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to ensure that the environment is valid. The
trigger points are before and after an update and at release to the production system,
when triggers force commits to the repository or a rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity and
features complete, concentrating on functional, quality code and team momentum.
In continuous delivery, the development/deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data
values which are not affected by position, e.g. the general description of a hardware
type. The other is dependent on position, e.g. a hardware unit in the network. The data is
represented as discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology, such as remote
communication, artificial intelligence and spatial data, assists the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem too complicated to analyse at present. The data set defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
both source (input) language, intermediate language and target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once when the system is added to and changed and removed infrequently as the
system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.5 Standards
9.5.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for standards.
9.5.2 Theoretical Studies
9.5.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.5.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. They should use their memory rather than documentation. If forced to use
documentation, they should have supple joints and long light fingers which allow pages
to slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated in the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and require a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how good the system, the system user and the documentation come up to the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.5.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages, with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log2 n1
N2 = n2 log2 n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log2 n1 + N2 log2 n2
If V = actual program volume
V* = theoretical program volume
then V = N log2 n
V* = N* log2 n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number
then m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log2 n / (1000 n2)
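As an illustration only, the following sketch evaluates the relations above for an
invented program; the operator/operand counts and the conventional Stroud number
S = 18 are assumptions, not measurements from any real system.

import math

def halstead_metrics(n1, n2, N1, N2):
    """Evaluate the software-physics relations stated above.

    n1, n2: distinct operators / operands (assumed counts)
    N1, N2: total occurrences of operators / operands (assumed counts)
    """
    n = n1 + n2                                   # program vocabulary
    N = N1 + N2                                   # program length
    N_star = N1 * math.log2(n1) + N2 * math.log2(n2)   # theoretical length N*
    V = N * math.log2(n)                          # actual program volume
    V_star = N_star * math.log2(n)                # theoretical volume (n* = n)
    L = V_star / V                                # program level
    lam = L * V_star                              # programming language level
    m = V / L                                     # mental discriminations
    S = 18                                        # Stroud number (conventional value)
    d = m / S                                     # development time
    E = n1 * math.log2(n) / (1000 * n2)           # Mohanty error-rate estimate
    return dict(n=n, N=N, N_star=N_star, V=V, V_star=V_star,
                L=L, lam=lam, m=m, d=d, E=E)

print(halstead_metrics(n1=12, n2=7, N1=27, N2=15))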
The mean free path theorem derives the relations:
P(m,C) = (C^m / m!) e^(-C) = probability of hitting the target m times for a coverage ratio C
C = nast/z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time over which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of
attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when we
apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst
the s, t and a relations reflect the relations between S, T and A described earlier; m
shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values.
mP(m,m) = 0 when m = 0 or m = -0.5665. The negative value is a minimum whereas the
zero value is an inflexion point which is not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times.
The function m^(m+1) e^(-m)/m! is an increasing function for m increasing above zero,
corresponds to a measure of complexity, and reaches a value of 1 for m = 6.4
approximately, the lucky seven.
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.5.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.5.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.5.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under portion 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.5.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
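A minimal sketch of how these three structural checks might be mechanised over a
graph of system units; the node and edge sets here are invented for illustration.

from collections import defaultdict

# Directed graph of system units: an edge u -> v means unit u references unit v.
edges = [("entity:A", "service:S1"), ("service:S1", "standard:X"),
         ("entity:B", "service:S1")]
nodes = {"entity:A", "entity:B", "service:S1", "standard:X", "technique:T1"}

out_edges, in_edges = defaultdict(set), defaultdict(set)
for u, v in edges:
    out_edges[u].add(v)
    in_edges[v].add(u)

# Well structured: every end of every connection must be a declared node.
dangling = [(u, v) for u, v in edges if u not in nodes or v not in nodes]

# Consistency: a unit accessed from several different units is flagged so that
# the interpretation placed on it by each recipient can be reviewed.
shared = [v for v in nodes if len(in_edges[v]) > 1]

# Completeness: units defined but never used (no edges at all) are waste.
unused = [v for v in nodes if not in_edges[v] and not out_edges[v]]

print("dangling references:", dangling)
print("multiply-accessed units to review:", shared)
print("defined but unused units:", unused)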
9.5.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computers, terminals or remote sensors. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules
over non-terminals. Grammar types are regular, context-free, context-sensitive and
recursively enumerable, with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
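To make the grammar discussion concrete, here is a toy context-free grammar with a
naive recursive-descent recogniser; the rule set and token names are invented for
illustration.

# Toy context-free grammar: message -> header body | header body trailer.
# Tokens stand in for protocol units rather than natural-language words.
GRAMMAR = {
    "message": [["header", "body"], ["header", "body", "trailer"]],
}
TERMINALS = {"header", "body", "trailer"}

def derives(symbol, tokens):
    """Return the leftover tokens for every way `symbol` can derive a
    prefix of `tokens` (a naive recursive-descent parse)."""
    if symbol in TERMINALS:
        return [tokens[1:]] if tokens and tokens[0] == symbol else []
    rests = []
    for production in GRAMMAR[symbol]:
        partial = [tokens]
        for part in production:
            partial = [r for t in partial for r in derives(part, t)]
        rests.extend(partial)
    return rests

def accepts(tokens):
    return any(rest == [] for rest in derives("message", tokens))

print(accepts(["header", "body"]))             # True
print(accepts(["header", "body", "trailer"]))  # True
print(accepts(["body", "header"]))             # False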
9.5.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
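The following sketch, with an invented three-node flow plus done and error sinks,
shows one way such an error-sink estimate could be computed using the standard
absorbing-chain calculation (it assumes the numpy library is available):

import numpy as np

# States: 0=input, 1=process, 2=output (transient); 3=done, 4=error (absorbing).
# The transition probabilities are illustrative only.
P = np.array([
    [0.0, 0.9, 0.00, 0.00, 0.10],
    [0.0, 0.0, 0.85, 0.00, 0.15],
    [0.0, 0.0, 0.00, 0.95, 0.05],
    [0.0, 0.0, 0.00, 1.00, 0.00],
    [0.0, 0.0, 0.00, 0.00, 1.00],
])

Q = P[:3, :3]                      # transient -> transient
R = P[:3, 3:]                      # transient -> absorbing (done, error)
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities per start state

for state, (p_done, p_error) in zip(["input", "process", "output"], B):
    print(f"{state}: done={p_done:.3f} error={p_error:.3f}")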
9.5.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A ˅ B)
A and B      P(A ˄ B)
A given B    P(A │ B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical 'and' operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical 'or' operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units, we use the
dependent (conditional) forms of the 'and' and 'or' logical operations. The structures of a
system imply a network form and we can use the techniques described in the part on
network structures.
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
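A small sketch of the event algebra in the table above, with illustrative unit
probabilities; the independence and dependence assumptions are marked in the
comments.

# Event algebra from the summary table, for invented unit probabilities.
p_a = 0.9           # P(A): unit A is selected correctly
p_b = 0.8           # P(B): unit B is selected correctly
p_b_given_a = 0.95  # P(B|A): B is correct once A has been used (dependent form)

p_not_a = 1 - p_a                     # P(not A)
p_a_and_b = p_a * p_b_given_a         # P(A and B) via the dependent 'and'
p_a_or_b = p_a + p_b - p_a * p_b      # P(A or B), assuming independence here

print(p_not_a, p_a_and_b, p_a_or_b)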
9.5.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language, and from
media technologies we find a similar kind of definition. We use the network model
described above to give a basis for the collection of data about the system. We
discover we need to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and, after review, the language structure is adjusted.
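An illustrative sketch (all names invented) of the object-oriented ideas this subsection
leans on: generalisation via inheritance, specification via properties, and
overloading-like behaviour where meaning changes with context.

class Entity:
    """Generalisation: every system unit shares the name/properties scope."""
    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties      # specification via named properties

    def describe(self):
        return f"{self.name} {self.properties}"

class Service(Entity):
    """Specification: a Service is an Entity with its own behaviour."""
    def describe(self, context=None):
        # Overloading by context: meaning changes according to context.
        if context == "audit":
            return f"service {self.name} (audited)"
        return f"service {self.name}"

units = [Entity("sensor-1", position="rack 3"), Service("data-transfer")]
for u in units:
    print(u.describe())                   # dispatch picks the right meaning
print(units[1].describe(context="audit"))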
9.5.3 Analysis
9.5.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.5.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
9.5.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
9.5.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-
processor directives are only weakly related to the programming language. The pre-
processor is often used to include other files. It replaces the directive line with the text
of the file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets of code which
can be expanded when it is required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help the processing of context-sensitivity, which is handled at the semantic
analysis phase and can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object  code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and,
after review, the processing structure is adjusted.
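A toy front end in the spirit of this subsection: a lexer and a parser for a two-operator
expression language producing a small nested-tuple intermediate representation; the
token set and IR shape are invented for illustration.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|([+*]))")

def lex(src):
    """Lexical analysis: turn source text into (kind, value) tokens."""
    tokens, pos, src = [], 0, src.strip()
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")  # front-end error report
        num, ident, op = m.groups()
        tokens.append(("num", int(num)) if num else ("id", ident) if ident else ("op", op))
        pos = m.end()
    return tokens

def parse(tokens):
    """Parsing to an IR: expr -> term ('+' term)*, term -> factor ('*' factor)*."""
    def factor(i):
        if i >= len(tokens):
            raise SyntaxError("unexpected end of input")
        kind, val = tokens[i]
        if kind in ("num", "id"):
            return val, i + 1
        raise SyntaxError("expected operand")
    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == ("op", "*"):
            rhs, i = factor(i + 1)
            node = ("mul", node, rhs)
        return node, i
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == ("op", "+"):
            rhs, i = term(i + 1)
            node = ("add", node, rhs)
        return node, i
    node, i = expr(0)
    if i != len(tokens):
        raise SyntaxError("trailing input")
    return node

print(parse(lex("rate * 2 + base")))  # -> ('add', ('mul', 'rate', 2), 'base')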
9.5.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are
derived from object databases to overcome the problems met with object programming
and relational databases, together with the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, combining the two related structures
Physical data models are:
● semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
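A minimal sketch of the key-value/document flavour described above, using only
Python's standard library; the schema and records are invented.

import json, sqlite3

# A tiny document store: documents are JSON values held under string keys,
# in the key-value style the section attributes to post-relational databases.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (key TEXT PRIMARY KEY, body TEXT)")

def put(key, doc):
    db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)", (key, json.dumps(doc)))

def get(key):
    row = db.execute("SELECT body FROM docs WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else None   # None plays the 'not found' role

put("entity:sensor-1", {"type": "sensor", "version": 3, "position": [51.5, -0.1]})
print(get("entity:sensor-1"))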
9.5.4 Implementation
9.5.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.5.4.2 Learning Theory
9.5.4.2.1 General Methods
Learning is performed in finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ_{i=1..n} −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2 (p/(p+n)) − (n/(p+n)) log2 (n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1,
… , Ev according to their values for A, where A has v distinct values:
remainder(A) = Σ_{i=1..v} ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
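A small sketch computing the entropy and information-gain formulas above for an
invented training set:

import math

def I(*probs):
    """Entropy I(P(v1), ..., P(vn)) = sum over i of -P(vi) log2 P(vi)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(p, n, splits):
    """IG(A) = I(p/(p+n), n/(p+n)) - remainder(A).
    `splits` lists (pi, ni) for each of attribute A's v distinct values."""
    total = I(p / (p + n), n / (p + n))
    remainder = sum((pi + ni) / (p + n) * I(pi / (pi + ni), ni / (pi + ni))
                    for pi, ni in splits)
    return total - remainder

# Invented example: 6 positive, 6 negative; attribute A has three values.
print(info_gain(6, 6, [(4, 0), (1, 4), (1, 2)]))   # about 0.47 bits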
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions taken as the likelihood-weighted average over the
hypotheses to assess the results, but this can be too problematic. This can be overcome
with maximum a posteriori (MAP) learning, choosing the hypothesis that maximises the
probability of the training data, expressing it in terms of the full data for each
hypothesis and taking logs to give a measure of bits to encode the data given the
hypothesis plus bits to encode the hypothesis (minimum description length). For large
datasets we can use maximum likelihood (ML) learning, maximising the probability
of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable,
MAP learning balances complexity with accuracy on training data, and maximum
likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires
substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may
require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, though modern optimization techniques help.
9.5.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n∞, D) = [n0 (1 − f^(ak)) 2^(1−D) + n∞ (Gs + Gf) f^(KT) (1 − f^(aDK))]
              / [(1 − f^(ak)) 2^(1−D) + (Gs + Gf) f^(KT) (1 − f^(aDK))]
where Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0<f<1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 − R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D − D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications. u0 is the value for the untrained
user, u∞ for the fully trained user, and u'0, u'∞ are the values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U curves from ergonomics.
Thus:
uij = uijmax (1 − U1 (m − m1)^2) + uijmin U1 (m − m1)^2
where uij have minimum values uijmin and maximum values uijmax, m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
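A quick sketch of the inverted-U relation just stated, with invented parameter values;
performance peaks at the optimum workload m1 and degrades either side:

# Inverted-U exhaustion model: performance is highest at workload m1 and
# falls off quadratically either side, clamped between u_min and u_max.
def u(m, u_min=0.2, u_max=1.0, m1=5.0, U1=0.04):
    w = min(U1 * (m - m1) ** 2, 1.0)   # normalised distance from the optimum
    return u_max * (1 - w) + u_min * w

for m in range(0, 11, 2):
    print(m, round(u(m), 3))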
9.5.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts, then combinations of concepts through rules to meaning. They apply a
bottom-up analysis, as in a compiler, to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation, e.g. animals, and specification, e.g. white-tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later use. These aspects vary from
person to person depending on their background, and depending on that background the
language will be understood in different ways.
9.5.4.2.4 Medical Systems
We assume that an element of a system has n characteristics, so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value. The
first case is numeric and the second kind is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods, so that at period k characteristic i has the value bik, which can
take one of the pi values ai1, …, aipi. The values bik reflect the profile of the
system at period k for all the n characteristics and the variation of a characteristic i
over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value cikl for characteristic i at time period k, for r such elements, then the "don't know"
decoded profile value can be calculated by:
bik = (1/r) Σ_{l=1..r} cikl
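A sketch of the "don't know" imputation just given and of the minimum-"distance"
selection used later in this paper; the element profiles are invented.

import math

# Profiles: value of each of three characteristics for several known elements.
known = {
    "e1": [0.9, 1.2, 3.0],
    "e2": [1.1, 1.0, 2.6],
    "e3": [1.0, 0.8, 2.8],
}

# b_ik for a characteristic whose value is "don't know": mean of known values.
def impute(values):
    return sum(values) / len(values)

profile = [impute([v[i] for v in known.values()]) for i in range(3)]

# Minimum-"distance" selection: pick the element whose profile is nearest.
def nearest(candidate):
    return min(known, key=lambda e: math.dist(known[e], candidate))

print(profile)
print(nearest([1.05, 0.95, 2.65]))   # -> 'e2'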
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for i
= q + 1,..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q and the fi, the student supplies the fi, and the
computer performs the check as stated above.
9.5.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information. There are the values for the locality, and the
second set of values is the general statistics for the global system.
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.5.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
(vector) and continuous fields (raster). Displays can illustrate and analyse features, and
enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.5.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of a function
for empty data values. They also summarize relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and carries uncertainty due to which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher order constraints, especially at the ends of the
points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or the exact curve is found. Curves are assessed by various
measures, with a popular procedure being the least squares method, which measures
the deviations from the given data points. With language processing it is found that affine
matrix transformations help deal with problems of translation and different axes.
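A compact least-squares illustration of the interpolation and extrapolation points above
(assuming the numpy library is available); the learning-curve data are invented.

import numpy as np

# Invented learning-curve data: training sessions vs. error rate.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([0.42, 0.31, 0.26, 0.21, 0.19, 0.17])

# Least-squares fit of a quadratic: three basis functions, so at most
# three independent constraints can be honoured exactly.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

print("interpolated at x=3.5:", model(3.5))
print("extrapolated at x=9 (uncertain):", model(9.0))
residuals = y - model(x)
print("sum of squared deviations:", float(residuals @ residuals))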
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
9.5.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.5.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository performs a build and a set of automated tests to ensure
that their environment is valid for update. The developer performs the update work and
rebuilds the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualization for dependencies. The system is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that it behaves as it should. The
developer resubmits the updates to the repository, which triggers another build process
and tests. The new updates are committed to the repository when all the tests have been
verified; otherwise they are rolled back. At that stage the new system is available to
stakeholders and testers. The build process is repeated periodically with the tests to
ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and
team momentum.
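A minimal sketch of such a commit-or-rollback gate; the make targets are hypothetical
placeholders for a real build server's steps.

import subprocess, sys

# A minimal continuous-integration gate: build, test, then commit or roll back.
STEPS = [
    ["make", "build"],          # compile binaries, documentation, media (placeholder)
    ["make", "unit-tests"],     # automated unit tests (placeholder)
    ["make", "integration"],    # defect / regression tests (placeholder)
]

def gate():
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            subprocess.run(["git", "reset", "--hard", "HEAD"])   # roll back
            sys.exit(f"step {' '.join(step)} failed; update rolled back")
    subprocess.run(["git", "commit", "-am", "CI: all checks passed"])

if __name__ == "__main__":
    gate()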
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.5.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system
release at any time. It makes the build, test, and release phases faster and more
frequent, reducing the cost, time, and risk of delivering changes through small
incremental updates. A simple and repeatable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment. The
visibility analyses the activities, viz. build, deploy, test, and release, and reports the
status to the development team. The feedback informs the team of problems so that
they can be resolved soon. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.5.4.9 Virtual Reality
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technology, such as remote communication,
artificial intelligence and spatial data, assists the technology.
If any error is found then an error report is generated and displayed as a device stack
and position, evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
9.5.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 20 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If forced to use documentation, they should have supple joints and long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and require a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and,
after review, the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element    Number of System Elements
Entities          Number of Entities in the System
Services          Number of Services in the System
Standards         Number of Standards in the System
Techniques        Number of Techniques in the System
Communications    Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standard, the probability of correctness is improved by the use of formal
standard rules.
● For technique, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete generated system and documentation and
runs automated unit and integration (defect or regression) tests, with static and
dynamic tests that measure and profile performance, to ensure that the environment is
valid. The trigger points are before and after an update and at release to the
production system, when triggers force commits to the repository or a rollback to
avoid corruption of the system. Reports are collected on metrics about code coverage,
code complexity and feature completeness, concentrating on functionality, code
quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). It enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction through sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology, such as remote
communication, artificial intelligence and spatial data, assists the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for a formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
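As an illustration of this record, a minimal Python sketch is given below; the field
names are our own, not taken from the paper's database scheme.

```python
# Minimal sketch of the entity record described above.
# Field names are illustrative, not the paper's database scheme.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str                                       # entity name
    entity_type: str                                # type classification
    sound_id: Optional[str] = None                  # identity for sound
    picture_id: Optional[str] = None                # identity for picture
    hardware_rep: Optional[str] = None              # hardware representation
    meaning: str = ""                               # meaning of the entity
    version: int = 1                                # version number
    timestamp: float = 0.0                          # last-change timestamp
    position: Optional[tuple] = None                # geographic position
    properties: dict = field(default_factory=dict)  # name -> value
    statistics: dict = field(default_factory=dict)  # usage counters
    children: list = field(default_factory=list)    # nesting
```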
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to
other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. The communications
data set defines name (string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once; it is added to, changed and removed infrequently as the system is extended. It
is queried frequently, for every element that is read. The definition set is updated
(inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.6 Techniques
9.6.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 19 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected in the theoretical studies, analysis and implementation sub-sections for
techniques.
9.6.2 Theoretical Studies
9.6.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.6.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation come up to the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.6.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the following relations for programs and
languages, with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log2 n1
N2 = n2 log2 n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = n1 log2 n1 + n2 log2 n2
If V = actual program volume
V* = theoretical program volume
then V = N log2 n
V* = N* log2 n*
If L = V*/V = program level
λ = L V* = programming language level
S = Stroud number
then m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log2 n / (1000 n2)
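These relations can be evaluated directly. The following is a hedged Python sketch of
the Halstead-style calculation, assuming the conventional Stroud number S = 18
discriminations per second; the sample counts are illustrative.

```python
# Hedged sketch of the Halstead-style relations above; S = 18 is a
# conventional assumption, and the counts in the example are made up.
import math

def halstead(n1, n2, N1, N2, S=18):
    n = n1 + n2                                   # program vocabulary
    N = N1 + N2                                   # actual program length
    N_star = n1*math.log2(n1) + n2*math.log2(n2)  # estimated length N*
    V = N * math.log2(n)                          # actual volume
    V_star = N_star * math.log2(n)                # theoretical volume (n* = n)
    L = V_star / V                                # program level
    lam = L * V_star                              # programming language level
    m = V / L                                     # mental discriminations
    d = m / S                                     # development time (seconds)
    E = n1 * math.log2(n) / (1000 * n2)           # Mohanty error-rate relation
    return dict(vocabulary=n, length=N, est_length=N_star, volume=V,
                level=L, language_level=lam, discriminations=m,
                dev_time=d, error_rate=E)

print(halstead(n1=10, n2=20, N1=60, N2=40))
```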
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C) / m! = probability of hitting the target m times for a coverage ratio C
C = n·a·s·t / z = coverage ratio, the ratio between the area covered by the search
process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number
of attempts in a particular search area. Specifically, we can estimate the number of
checks n that we can expect to apply to find m errors in a text of size A, or the number
of rules n that we expect to apply when writing a text of m units in a language of size z.
Conversely, the M relation gives us the expected number of errors or the number of
statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits,
whilst the s, t and a relations reflect the relations between S, T and A described
earlier; m shows the normalised result for M, and n is rather too complicated to
envisage generally. P(m,m) is a function of m, and the function mP(m,m) has interesting
coincidences of values.
Variable    Value   Value
m           0       6.4
mP(m,m)     0       1
mP(m,m) = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times.
The quantity m^(m+1) e^(-m)/m! is an increasing function for m increasing above zero
and corresponds to a measure of complexity, with a value of 1 for m = 6.4
approximately, or the lucky seven.
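These claims are easy to verify numerically. In the sketch below, gamma(m+1) stands in
for m! at non-integer m, and m·P(m,m) is seen to rise from 0 and pass through 1 near
m = 6.4:

```python
# Numerical check of the claim above: P(m,C) = C**m * exp(-C) / m! is the
# Poisson hit probability, and m*P(m,m) passes through 1 near m = 6.4.
from math import exp, gamma

def P(m, C):
    """Probability of hitting the target m times at coverage ratio C."""
    return C**m * exp(-C) / gamma(m + 1)   # gamma(m+1) = m! for integer m

for m in (0.5, 1, 2, 4, 6.4, 8):
    print(f"m = {m:4}:  m*P(m,m) = {m * P(m, m):.3f}")
# m*P(m,m) grows from 0 and is close to 1 at m ~ 6.4, as stated.
```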
If any error is found then it is reported with a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.6.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.6.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.6.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition in section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.6.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
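As a sketch of these checks, the following Python fragment represents the system as an
adjacency-list graph (the node names are hypothetical) and reports units that are
defined but unused, and units entered from more than one place:

```python
# Illustrative completeness / consistency checks on a plain adjacency-list
# graph; node names are hypothetical.
from collections import deque

edges = {
    "entity": ["service"],
    "service": ["standard", "technique"],
    "technique": ["standard"],     # technique validated against a standard
    "standard": [],
    "orphan": [],                  # defined but never reached
}

def reachable(graph, root):
    """Breadth-first search: every node the root can actually reach."""
    seen, queue = {root}, deque([root])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Completeness: defined but unused units are reported for review.
unused = set(edges) - reachable(edges, "entity")
print("unused units:", unused)                     # -> {'orphan'}

# Consistency: a unit reached from two different units is flagged so that
# its interpretation can be reviewed.
indegree = {}
for src, targets in edges.items():
    for t in targets:
        indegree[t] = indegree.get(t, 0) + 1
print("multi-entry units:", [u for u, d in indegree.items() if d > 1])
```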
9.6.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
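A minimal sketch of this layered encapsulation is given below; the three layer names
are illustrative rather than a real protocol stack:

```python
# Sketch of layered encapsulation: each layer wraps user data with its own
# control information on the way down and strips it on the way up.
layers = ["application", "transport", "network"]

def send(data: bytes) -> bytes:
    for layer in layers:                  # top of the stack downwards
        header = f"[{layer}]".encode()
        data = header + data              # control info added per layer
    return data

def receive(frame: bytes) -> bytes:
    for layer in reversed(layers):        # bottom of the stack upwards
        header = f"[{layer}]".encode()
        assert frame.startswith(header), f"bad {layer} header"
        frame = frame[len(header):]       # control info removed per layer
    return frame

wire = send(b"reading=42")
print(wire)              # b'[network][transport][application]reading=42'
print(receive(wire))     # b'reading=42'
```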
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules on
non-terminals. Grammar types are regular, context-free, context-sensitive and
recursively enumerable, with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
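As a toy illustration of these grammar classes, the context-free grammar
S -> 'a' S 'b' | empty generates the strings a^n b^n, which no regular grammar can. A
recursive-descent membership check in Python:

```python
# Toy formal language: S -> 'a' S 'b' | empty, i.e. the strings a^n b^n.
def in_language(s: str) -> bool:
    """Recursive-descent check that s is derivable from S."""
    if s == "":
        return True                          # S -> empty
    if s.startswith("a") and s.endswith("b"):
        return in_language(s[1:-1])          # S -> 'a' S 'b'
    return False

for s in ["", "ab", "aabb", "aab", "ba"]:
    print(f"{s!r:8} {'in' if in_language(s) else 'not in'} the language")
```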
9.6.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case one of these is classified as the nodes and the others
as the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
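A small sketch of the error-sink construction follows; the transition probabilities
are illustrative. Iterating the transition matrix gives the probability of ending in
the absorbing error node from each starting node:

```python
# Error-sink sketch: states 0=entity, 1=service, 2=done (absorbing),
# 3=error (absorbing sink). Transition probabilities are illustrative.
P = [[0.0, 0.9, 0.0, 0.1],   # entity -> service, or error
     [0.0, 0.0, 0.8, 0.2],   # service -> done, or error
     [0.0, 0.0, 1.0, 0.0],   # done stays done
     [0.0, 0.0, 0.0, 1.0]]   # error stays error

def absorb_probs(P, sink, steps=200):
    """Probability of being absorbed in `sink`, per starting state."""
    n = len(P)
    dist = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):                 # dist <- dist . P
        dist = [[sum(row[k] * P[k][j] for k in range(n)) for j in range(n)]
                for row in dist]
    return [row[sink] for row in dist]

# From entity: 0.1 + 0.9*0.2 = 0.28; from service: 0.2.
print(absorb_probs(P, sink=3))
```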
9.6.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A ∨ B)
A and B      P(A ∧ B)
A given B    P(A | B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units, we use the
dependent forms of the "and" and "or" logical operations. The structures of a system
imply a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported with a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
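A small worked example of these operations, assuming independence where none is
stated and an illustrative conditional probability for the dependent case:

```python
# Event algebra from the table above; the per-unit values are illustrative.
p_a, p_b = 0.9, 0.8                    # per-unit correctness probabilities

p_and = p_a * p_b                      # both units correct (independent)
p_or = p_a + p_b - p_a * p_b           # at least one alternative correct
p_b_given_a = 0.95                     # dependent case: B usually follows A
p_and_dep = p_a * p_b_given_a          # P(A and B) = P(A) P(B|A)

print(f"A and B (independent): {p_and:.3f}")      # 0.720
print(f"A or B:                {p_or:.3f}")       # 0.980
print(f"A and B (dependent):   {p_and_dep:.3f}")  # 0.855
```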
9.6.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language. From media
technologies we find a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. Programming
language theory gives us the rules for formalised standard and technique for the
definition of a programming language in terms of a formal language and likewise for
media. We discover we need to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
9.6.3 Analysis
9.6.3.1 Introduction
The analysis portion of the processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.6.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.6.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.6.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a sequence of operations: lexical analysis,
pre-processing, parsing, semantic analysis (syntax-directed translation), code
generation and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor
directives are only weakly related to the programming language. The pre-processor is
often used to include other files; it replaces the directive line with the text of the
file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets of code
which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars assist the processing of context-sensitivity, which is handled at the
semantic analysis phase and can be automated using attribute grammars. The middle end
does some optimizations for the back end. The back end generates the target code and
performs more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object  code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation,
augmented with further annotations or rapid-lookup features.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
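A minimal front-end sketch in Python is given below: a lexer splits the input into
tokens, and an unrecognised element is queried for review rather than silently
dropped, as described above. The token set and the known names are illustrative.

```python
# Minimal lexing sketch; unknown elements are queried, not dropped.
import re

TOKEN_SPEC = [("NUMBER", r"\d+"), ("NAME", r"[A-Za-z_]\w+|[A-Za-z_]"),
              ("OP", r"[+\-*/=]"), ("SKIP", r"\s+"), ("UNKNOWN", r".")]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

known_names = {"sensor", "reading"}     # illustrative symbol data set

def lex(text):
    for m in MASTER.finditer(text):
        kind, value = m.lastgroup, m.group()
        if kind == "SKIP":
            continue
        if kind == "UNKNOWN" or (kind == "NAME" and value not in known_names):
            # element not recognised: query it for error-or-extend review
            print(f"query: {value!r} - error, or add to the data set?")
            continue
        yield kind, value

print(list(lex("reading = sensor + 42 @")))
```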
9.6.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which
are derived from object databases to overcome the problems met with object programming
and relational databases, and on the development of hybrid object-relational
databases. They use fast key-value stores and document-oriented databases with XML to
give interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database combines the two related structures.
Physical data models are:
● Semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
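A hedged sketch of the document-oriented, XML-tagged storage suggested above follows;
the tag names and key scheme are illustrative, not the paper's database scheme.

```python
# Document-oriented storage sketch: entities serialised as XML documents
# held in a key-value store. Tag names and keys are illustrative.
import xml.etree.ElementTree as ET

def entity_to_xml(name, version, properties):
    root = ET.Element("entity", name=name, version=str(version))
    for key, value in properties.items():
        prop = ET.SubElement(root, "property", name=key)
        prop.text = str(value)
    return ET.tostring(root, encoding="unicode")

doc = entity_to_xml("thermostat", 3, {"units": "celsius", "interval": 60})
print(doc)

store = {}                               # key-value store: key -> document
store["entity/thermostat"] = doc

root = ET.fromstring(store["entity/thermostat"])
print(root.get("version"), [p.get("name") for p in root])
# -> 3 ['units', 'interval']
```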
9.6.4 Implementation
9.6.4.1 Introduction
The implementation stage of the IoT studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.6.4.2 Learning Theory
9.6.4.2.1 General Methods
Learning is performed in finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), ..., P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
A chosen attribute A divides the training set E into subsets E1, ..., Ev according to
their values for A, where A has v distinct values:
remainder(A) = Σi=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
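A worked example of these formulas for a training set of p = 6 positive and n = 6
negative examples, with an illustrative attribute split:

```python
# Entropy and information gain for the formulas above.
from math import log2

def I(p, n):
    """Information content of a (p, n) split, in bits; I(6, 6) = 1."""
    total = p + n
    return sum(-x/total * log2(x/total) for x in (p, n) if x > 0)

def remainder(subsets, p, n):
    """Expected information still needed after splitting on the attribute."""
    return sum((pi + ni)/(p + n) * I(pi, ni) for pi, ni in subsets)

p, n = 6, 6
subsets = [(4, 0), (1, 4), (1, 2)]         # (pi, ni) per attribute value
ig = I(p, n) - remainder(subsets, p, n)    # information gain IG(A)
print(f"I = {I(p, n):.3f}, remainder = {remainder(subsets, p, n):.3f}, "
      f"IG = {ig:.3f}")                    # IG ~ 0.470 bits
```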
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions of a likelihood-weighted average over the hypotheses
to assess the results, but this can be too problematic. This can be overcome with
maximum a posteriori (MAP) learning, choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of the bits to encode the data
given the hypothesis and the bits to encode the hypothesis (minimum description
length). For large datasets, we can use maximum likelihood (ML) learning by maximising
the probability of all the training data per hypothesis, giving standard statistical
learning.
To summarise: full Bayesian learning gives the best possible predictions but is
intractable, MAP learning balances complexity with accuracy on training data, and
maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choosing a parametrized family of models to describe the data requires substantial
insight and sometimes new models.
2. Writing down the likelihood of the data as a function of the parameters may require
summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Finding the parameter values such that the derivatives are zero may be hard or
impossible; modern optimization techniques do help.
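As a tiny illustration of steps 2 to 4 for the simplest case, the Bernoulli model has
a closed form: the log likelihood of h heads in N flips is maximised where its
derivative is zero, at theta = h/N.

```python
# Maximum likelihood for a Bernoulli model: theta_ML = heads / flips.
from math import log

def log_likelihood(theta, heads, tails):
    return heads * log(theta) + tails * log(1 - theta)

heads, tails = 7, 3
theta_ml = heads / (heads + tails)       # derivative of log L is zero here
print(f"ML estimate: {theta_ml}")        # 0.7

# Numerical check: nearby values score worse than the ML estimate.
for theta in (0.5, 0.7, 0.9):
    print(theta, round(log_likelihood(theta, heads, tails), 3))
```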
9.6.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n∞, D) = [n0 (1 - f^(aK)) 2^(1-D) + n∞ (Gs + Gf) f^(KT) (1 - f^(aDK))] /
[(1 - f^(aK)) 2^(1-D) + (Gs + Gf) f^(KT) (1 - f^(aDK))]
where Gs is the reinforcement for each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications. u0 applies to the untrained user,
u∞ to the fully trained user, and u' are the values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax (1 - U1 (m - m1)^2) + uijmin U1 (m - m1)^2
where the uij have minimum values uijmin and maximum values uijmax, m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
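A sketch of the inverted-U relation as reconstructed above; the workload values and
the constants u_min, u_max, m1 and U1 are illustrative assumptions:

```python
# Inverted-U sketch: performance interpolates between u_max at the optimum
# workload m1 and u_min as (m - m1)**2 grows. All constants are assumptions.
def performance(m, u_min=0.2, u_max=1.0, m1=5.0, U1=0.04):
    w = min(U1 * (m - m1)**2, 1.0)       # clamp so u stays in range
    return u_max * (1 - w) + u_min * w

for m in (1, 3, 5, 7, 9, 12):
    print(f"workload {m:2}: performance {performance(m):.2f}")
# peaks at m = m1 and falls off on both sides (the inverted U)
```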
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.6.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom-up analysis as in a compiler to give rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background the
language will be understood in different ways.
9.6.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value. The
first case is numeric and the second kind is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods so that at period k the characteristic has the value bik, which can
take one of the pi values ai1 ... aipi. The values bik reflect the profile of the
system at period k for all the n characteristics and the variation of a characteristic i
over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k of cikl for r elements, then the "don't know"
decoded profile value can be calculated by:
bik = (Σl=1..r cikl) / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for i
= q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q and the student supplies the fi, with the
computer performing the check as stated above.
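A minimal sketch of this minimum-"distance" selection follows, combining the "don't
know" averaging above with a nearest-profile search; the profiles are illustrative:

```python
# Minimum-distance profile matching with "don't know" filling.
profiles = {                       # characteristic values per part
    "sensor":   [0.9, 0.6, 0.4],
    "gateway":  [0.2, 0.8, 0.7],
    "actuator": [0.5, 0.5, 0.1],
}

def fill_unknown(values, known):
    """Replace None ("don't know") with the mean of the known values."""
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def nearest(observed):
    """Profile with the minimum Euclidean distance to the observation."""
    dist = lambda p: sum((a - b)**2 for a, b in zip(observed, p)) ** 0.5
    return min(profiles, key=lambda name: dist(profiles[name]))

obs = fill_unknown([0.85, None, 0.5], known=[0.85, 0.5])
print(obs)            # [0.85, 0.675, 0.5]
print(nearest(obs))   # sensor
```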
9.6.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units, we
use the dependent forms of the "and" and "or" logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values provide two forms of information: the values for the locality, and the
general statistics for the global system.
If any error is found then it is reported with a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.6.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
(vector) and continuous fields (raster images). Displays can illustrate and analyse
features, and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
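A minimal sketch of the two data forms follows: position-independent descriptions
keyed by type, and position-dependent units queried spatially through a bounding box;
all the values are illustrative.

```python
# Position-independent descriptions versus position-dependent units.
descriptions = {"camera": "fixed IP camera, PoE"}   # not tied to position

units = [                                           # position-dependent
    {"id": "cam-01", "type": "camera", "pos": (51.50, -0.12)},
    {"id": "cam-02", "type": "camera", "pos": (51.52, -0.10)},
    {"id": "gw-01",  "type": "gateway", "pos": (51.51, -0.08)},
]

def in_bbox(pos, south, west, north, east):
    lat, lon = pos
    return south <= lat <= north and west <= lon <= east

hits = [u["id"] for u in units
        if in_bbox(u["pos"], 51.49, -0.13, 51.51, -0.09)]
print(hits)                                         # ['cam-01']
```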
9.6.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function that best fits a series of
given data points, subject to constraints. It uses two main methods, namely
interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function
approximating the data. Regression analysis gives a measure of the uncertainty of the
curve due to random data errors. The fitted curves help picture the data and estimate
values of a function for empty data values. They also summarize relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and carries uncertainty depending on which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher order constraints, especially at the ends of the
points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures
the deviations of the given data points. With this processing it is found that affine
matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated with a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
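A least-squares example for the method named above, assuming NumPy is available: fit a
quadratic to noisy points, interpolate at an empty value and extrapolate beyond the
observed range, where the uncertainty grows.

```python
# Least-squares quadratic fit, interpolation and extrapolation.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1, 30.9])   # roughly y = x**2 + x + 1

coeffs = np.polyfit(x, y, deg=2)       # minimises sum of squared deviations
fit = np.poly1d(coeffs)

print("coefficients:", np.round(coeffs, 2))       # ~ [1.0, 1.0, 1.0]
print("interpolate f(2.5):", round(fit(2.5), 2))
print("extrapolate f(7.0):", round(fit(7.0), 2))  # beyond the data: less certain
residuals = y - fit(x)
print("sum of squared deviations:", round(float(residuals @ residuals), 4))
```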
9.6.4.6 Configuration Management
Configuration management requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required,
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.6.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository, performs a build and runs a set of automated tests to
ensure that their environment is valid for update. They perform their update work and
rebuild the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests, with static and
dynamic tests that measure and profile performance, to confirm that it behaves as it
should. The developer resubmits the updates to the repository, which triggers another
build process and tests. The new updates are committed to the repository when all the
tests have been verified; otherwise they are rolled back. At that stage the new system
is available to stakeholders and testers. The build process is repeated periodically
with the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity and feature completeness, concentrating on functionality, code quality
and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
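A hedged sketch of this commit-or-rollback trigger follows; the make targets are
placeholders for a project's real build and test commands, and only standard git
operations are used.

```python
# Commit-on-green / rollback-on-red trigger sketch. The make targets are
# placeholders; substitute a project's real build and test commands.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0

def ci_pipeline():
    steps = [["make", "build"], ["make", "unit-test"],
             ["make", "integration-test"]]            # placeholder commands
    if all(run(step) for step in steps):
        run(["git", "commit", "-am", "CI: verified update"])  # commit on green
        return "committed"
    run(["git", "checkout", "--", "."])               # roll back on failure
    return "rolled back"

if __name__ == "__main__":
    print(ci_pipeline())
```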
9.6.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system
release at any time. It performs the build, test and release phases faster and more
frequently, reducing the cost, time and risk of delivered changes with small
incremental updates. A simple and iterable deployment process is important for
continuous delivery.
It uses a deployment pipeline to give visibility, feedback and continual deployment.
Visibility analyses the activities, viz. build, deploy, test and release, and reports
their status to the development team. Feedback informs the team of problems so that
they can be resolved quickly. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.6.4.9 Virtual Reality
Virtual reality simulates an environment of the user's presence, environment and
interaction through sight, touch, hearing, and smell. It uses a screen or a special
headset to display sight and sound information. Input is made through standard
computer input, sight tracking or tactile information. Other technology, such as remote
communication, artificial intelligence and spatial data, assists the technology.
If any error is found then an error report is generated and displayed with a device
stack and position, evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
9.6.5 Summary
We have reviewed how some other technologies can contribute to IoT security. It has
consisted of 22 further sub-sections reflecting the 19 theories that are helpful. They
are search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If they are forced to use documentation, they should have supple joints
and long light fingers which allow pages to slip through them when making a reference.
Finger motion should be kept gentle and within the range of movement and concentrated
to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions
of the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light,
with the system entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and the
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis for an IoT system. At each level
(entities, services, standards, techniques, communications), we have applied the
quantitative analysis to estimate the sizes of entities, errors, the system, etc.
Learning theory has given us an insight into the changes that are made to people
over the period of training and experience with the system, using the network
analysis structure for the system. It has given us estimates for the improvement in
the learning of the language and the attributes of the learner. We have found that
the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to
concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the usage of the parts of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element    Number of System Elements
Entities          Number of Entities in the System
Services          Number of Services in the System
Standards         Number of Standards in the System
Techniques        Number of Techniques in the System
Communications    Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines, and for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the complete generated system and documentation and
runs automated unit and integration (defect or regression) tests, with static and
dynamic tests that measure and profile performance, to ensure that the environment is
valid. The trigger points are before and after an update and at release to the
production system, when triggers force commits to the repository or a rollback to
avoid corruption of the system. Reports are collected on metrics about code coverage,
code complexity and feature completeness, concentrating on functionality, code
quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). It enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction through sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology, such as remote
communication, artificial intelligence and spatial data, assists the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for a formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language.
It also gives priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items
of hardware or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as
the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.7 Communications
9.7.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 19 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for communications.
9.7.2 Theoretical Studies
9.7.2.1 Introduction
The theoretical studies for IoT security consists of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.7.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. They should use their memory rather than documentation. If forced to use
documentation, they should have supple joints and long light fingers which allow pages
to slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated in the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.7.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then the estimated occurrence counts are N1* = n1 log2 n1
N2* = n2 log2 n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1* + N2* = n1 log2 n1 + n2 log2 n2
If V = actual program volume
V* = theoretical program volume
then V = N log2 n
V* = N* log2 n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = n1 log2 n / (1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^-C / m! = probability of hitting the target m times for a coverage ratio C
C = nast/z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts (where x = … and D = …)
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts (where x = … and D = …)
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation, with the relation between the search effort and the search results over
an average search area, explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number
of attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when
we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst
the s, t and a relations reflect the relations between S, T and A described earlier; m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m and the function mP(m,m) has interesting coincidences of
values.
Variable     Value    Value
m            0        6.4
mP(m,m)      0        1
d(mP(m,m))/dm = 0 when m = 0 or -0.5665.
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times;
m^(m+1) e^-m / m! is an increasing function for m increasing above zero, corresponding
to a measure of complexity, with a value of 1 for m = 6.4 approximately, or the lucky
seven.
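As an illustrative check of these values, the following Python sketch (the function names
and the bracketing interval are our own assumptions) evaluates P(m,C) = C^m e^-C / m!
and uses bisection on a continuous extension of mP(m,m) to locate the point where it
reaches 1:

import math

def hit_probability(m: int, coverage: float) -> float:
    # P(m, C) = C^m e^(-C) / m!, the probability of exactly m hits at coverage C
    return coverage ** m * math.exp(-coverage) / math.factorial(m)

def m_p(m: float) -> float:
    # Continuous extension of m * P(m, m) via the gamma function (lgamma)
    return m * math.exp(m * math.log(m) - m - math.lgamma(m + 1))

lo, hi = 1.0, 10.0          # m * P(m, m) is below 1 at m = 1 and above 1 at m = 10
for _ in range(60):         # bisection to locate m * P(m, m) = 1
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if m_p(mid) < 1.0 else (lo, mid)
print(f"m * P(m, m) reaches 1 near m = {lo:.2f}")   # approximately 6.4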
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
9.7.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.7.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.7.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under portion 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.7.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they become known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
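As an illustration of the completeness check, here is a minimal Python sketch, assuming
the network model is held as an adjacency list (the unit names and the root are our own):
units defined but unreachable from the root are reported for review.

from collections import deque

# Hypothetical network model: unit -> units it connects to.
edges = {
    "gateway": ["sensor-a", "sensor-b"],
    "sensor-a": ["store"],
    "sensor-b": ["store"],
    "store": [],
    "orphan": [],            # defined but never used: a completeness violation
}

def reachable(root):
    seen, queue = {root}, deque([root])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unused = set(edges) - reachable("gateway")
if unused:
    print("defined but unused units:", sorted(unused))   # report for review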
9.7.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules for
expanding non-terminals. Grammar types are regular, context-free, context-sensitive and
recursively enumerable, with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
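As a sketch of these ideas (the grammar is a toy of our own choosing, not one from the
paper), a recursive-descent parser analyses a string against context-free rewrite rules
and returns its parse tree:

# Toy context-free grammar:  expr -> NUM | "(" expr "+" expr ")"
def parse(tokens, i=0):
    # Each call applies one rewrite rule; the returned tuple is the parse tree.
    if tokens[i] == "(":
        left, i = parse(tokens, i + 1)
        assert tokens[i] == "+", "expected '+'"    # error reported for review
        right, i = parse(tokens, i + 1)
        assert tokens[i] == ")", "expected ')'"
        return ("+", left, right), i + 1
    return ("num", tokens[i]), i + 1

tree, _ = parse(list("(1+(2+3))"))
print(tree)   # ('+', ('num', '1'), ('+', ('num', '2'), ('num', '3')))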
9.7.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case one of them is classified as the nodes and the others as
the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
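A minimal sketch, assuming a small hand-written transition matrix (all transition
probabilities are illustrative): adding an absorbing error sink E lets us estimate the
probability of hitting an error within a few transitions.

# Hypothetical transitions: row = current node, columns = next-node probabilities.
P = {
    "A": {"A": 0.0, "B": 0.9, "E": 0.1},
    "B": {"A": 0.2, "B": 0.0, "E": 0.8},
    "E": {"A": 0.0, "B": 0.0, "E": 1.0},   # error sink: absorbing state
}

def error_probability(start, steps=5):
    # Propagate the state distribution; the mass in E after `steps` transitions
    # estimates the probability of having reached the error sink by then.
    dist = {n: 1.0 if n == start else 0.0 for n in P}
    for _ in range(steps):
        nxt = {n: 0.0 for n in P}
        for src, row in P.items():
            for dst, p in row.items():
                nxt[dst] += dist[src] * p
        dist = nxt
    return dist["E"]

print(f"probability of reaching the error sink from A within 5 steps: "
      f"{error_probability('A'):.3f}")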
9.7.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities
Event        Probability
A            P(A)
not A        P(¬A) = 1 - P(A)
A or B       P(A ∨ B) = P(A) + P(B) - P(A ∧ B)
A and B      P(A ∧ B) = P(A|B) P(B)
A given B    P(A|B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
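The combinations above can be sketched directly in Python (the correctness values are
invented for illustration): independent units in sequence combine with the and rule,
interchangeable alternatives with the or rule.

from math import prod

def p_and(probs):
    # All independent units must be correct (logical and): multiply.
    return prod(probs)

def p_or(probs):
    # At least one alternative is correct (logical or): complement rule.
    return 1 - prod(1 - p for p in probs)

sequence = [0.99, 0.95]            # units that must all be correct
alternatives = [0.90, 0.80]        # interchangeable units: either will do
overall = p_and(sequence + [p_or(alternatives)])
print(f"estimated probability the service is correct: {overall:.3f}")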
9.7.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standards and techniques
for the definition of a programming language in terms of a formal language, and from
media technologies we find a similar kind of definition. We use the network model
described above to give a basis for the collection of data about the system. We discover
we need to set a priority of the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and package libraries assist the use of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and after review adjust language structure.
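These object oriented ideas can be sketched as follows (the class names are our own): a
generalisation, a specialisation that overloads a method so that meaning changes with
context, and a replicated action over different cases.

class Entity:
    """Generalisation: the base concept all system units share."""
    def __init__(self, name):
        self.name = name                    # property scoped to the object

    def describe(self, detail=False):       # method with an argument
        return f"entity {self.name}" + (" (detailed)" if detail else "")

class Sensor(Entity):
    """Specialisation of Entity; describe() is overloaded, so its meaning
    changes according to context."""
    def describe(self, detail=False):
        return super().describe(detail) + ", kind=sensor"   # reuse of previous work

for unit in (Entity("hub"), Sensor("thermo")):   # replicated action over cases
    print(unit.describe())    # `self` plays the role of the "this" operator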
9.7.3 Analysis
9.7.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.7.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
9.7.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation, and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
9.7.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations from lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor
directives are only weakly related to the programming language. The pre-processor is
often used to include other files; it replaces the directive line with the text
of the file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets of code which
can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help with the processing of context-sensitivity, handled at the semantic
analysis phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and
after review adjust processing structure.
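As a sketch of the lexing phase of the front end (the token set is a toy of our own, not
one prescribed here), the lexical grammar is written as regular expressions and the lexer
emits (kind, text) pairs for the parser; unrecognised characters would be queried as
described above.

import re

# Toy lexical grammar; a real lexer would report unmatched characters for review.
TOKENS = [("NUM", r"\d+"), ("ID", r"[A-Za-z_]\w*"), ("OP", r"[+\-*/=]"), ("WS", r"\s+")]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKENS))

def lex(source):
    for match in MASTER.finditer(source):
        if match.lastgroup != "WS":          # whitespace is dropped
            yield match.lastgroup, match.group()

print(list(lex("rate = base + 42")))
# [('ID', 'rate'), ('OP', '='), ('ID', 'base'), ('OP', '+'), ('NUM', '42')]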
9.7.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which
are derived from object databases to overcome the problems met with object programming
and relational databases, and also on the development of hybrid object-relational
databases. They use fast key-value stores and document-oriented databases with XML to
give interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, combining the two related structures
Physical data models are:
● semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
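As a sketch of a document-oriented record with XML tags in the spirit of the object
model above (the element and attribute names are illustrative, not the schema of
section 8):

import xml.etree.ElementTree as ET

# An entity record carrying name, version, timestamp and a nested property.
entity = ET.Element("entity", name="sensor-a", version="3",
                    timestamp="2024-01-01T00:00:00Z")
prop = ET.SubElement(entity, "property", name="location")
prop.text = "rack-7"

print(ET.tostring(entity, encoding="unicode"))
# <entity name="sensor-a" ...><property name="location">rack-7</property></entity>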
9.7.4 Implementation
9.7.4.1 Introduction
The implementation stage of languages studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.7.4.2 Learning Theory
9.7.4.2.1 General Methods
Learning is performed in finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2 (p/(p+n)) - (n/(p+n)) log2 (n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E 1,
… , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
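These formulae can be checked with a short Python sketch (the training-set counts are
invented for illustration): entropy of the whole set, the remainder after splitting on an
attribute, and the resulting information gain.

from math import log2

def entropy(p, n):
    # I(p/(p+n), n/(p+n)); the 0 log 0 terms are taken as 0.
    total = p + n
    return sum(-c / total * log2(c / total) for c in (p, n) if c)

def information_gain(p, n, subsets):
    # subsets: (p_i, n_i) pairs produced by splitting on attribute A.
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni) for pi, ni in subsets)
    return entropy(p, n) - remainder

# 6 positive and 6 negative examples, split by an attribute into three subsets.
print(f"IG(A) = {information_gain(6, 6, [(4, 0), (1, 4), (1, 2)]):.3f}")   # 0.470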
Learning viewed as Bayesian updating of a probability distribution over the
hypothesis space uses predictions as a likelihood-weighted average over the hypotheses
to assess the results, but this can be too problematic. This can be overcome with
maximum a posteriori (MAP) learning, choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is
intractable; MAP learning balances complexity with accuracy on training data; and
maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires
substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may
require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, though modern optimization techniques do help.
9.7.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(u0, u∞, D) = [u0 (1 - f^aK) 2^(1-D) + u∞ (Gs + Gf) f^KT (1 - f^aDK)]
/ [(1 - f^aK) 2^(1-D) + (Gs + Gf) f^KT (1 - f^aDK)]
where Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications. u0 is the value for the untrained
user, u∞ for the fully trained user, and u'0 and u'∞ are the corresponding values under
the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax (1 - U1 (m - m1)^2) + uijmin U1 (m - m1)^2
where uij have minimum values uijmin and maximum values uijmax, m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.7.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background will
be understood in different ways.
9.7.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value. The
first case is numeric and the second kind is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods so that at period k the value of the characteristic is bik, which can
take one of the pi values ranging over ai1 ... aipi. The values bik will reflect the profile
of the system at period k for all the n characteristics and the variation of a
characteristic i over time periods k.
To resolve "don't know" values in the profile, if elements l = 1 to r have a "known"
decoded value cikl for characteristic i at time period k, then the "don't know"
decoded profile value can be calculated by:
bik = (Σl=1..r cikl) / r
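A minimal Python sketch of this resolution (the characteristic and the decoded values
are illustrative): the "don't know" entry is replaced by the mean of the r known values.

# Replace a "don't know" profile value b_ik by the mean of the known c_ikl.
def impute(known_values):
    return sum(known_values) / len(known_values) if known_values else None

known = [21.0, 22.5, 20.5]                      # c_ikl for r = 3 elements
profile = {("temperature", 3): impute(known)}   # b_ik at period k = 3
print(profile)                                  # {('temperature', 3): 21.333...}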
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for i
= q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q and fi, the student specifies the fi, and the
computer performs the check as stated above.
9.7.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information. There are the values for the locality. The
second set of values is the general statistics for the global system.
If any error is found then it is reported as a device stack and position then it is evaluated
with respect to time, device, device type, position and after review the data and
processing structures are adjusted.
9.7.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
as vector features and continuous fields as raster images. Displays can illustrate and
analyse features, and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.7.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of a function
where data values are missing. They also summarize relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, with uncertainty due to which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher order constraints, especially at the ends of the
points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures
the deviations of the given data points. With language processing it is found that affine
matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated as a device stack and position
then evaluated with respect to time, device, device type, position and after review the
system structure is modified appropriately.
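As a sketch of the least squares method (the data points are invented;
statistics.linear_regression requires Python 3.10 or later), a fitted line is used to
interpolate within the observed range and to extrapolate beyond it:

from statistics import linear_regression

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]              # noisy, roughly linear observations

fit = linear_regression(x, y)               # minimises the squared deviations
predict = lambda v: fit.slope * v + fit.intercept
print(round(predict(3.5), 2))               # interpolation within the data
print(round(predict(6.0), 2))               # extrapolation beyond the data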
9.7.4.6 Configuration Management
Configuration management requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.7.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. They perform their update work and
rebuild the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualization for dependencies. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, and to measure and profile performance to confirm that the system behaves as it
should. The developer resubmits the updates to the repository, which triggers another
build process and tests. The new updates are committed to the repository when all the
tests have been verified; otherwise they are rolled back. At that stage the new system is
available to stakeholders and testers. The build process is repeated periodically with the
tests to ensure that there is no corruption of the system.
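The trigger logic can be sketched as follows (the build, test and repository commands
are placeholders rather than a prescribed toolchain): the update is committed only when
the build and all the tests pass, otherwise the working copy is rolled back.

import subprocess

def gate(build_cmd, test_cmd, commit_cmd, rollback_cmd):
    # Run the build and the automated tests; commit on success, else roll back.
    for cmd in (build_cmd, test_cmd):
        if subprocess.run(cmd).returncode != 0:
            subprocess.run(rollback_cmd)     # avoid corrupting the system
            return False
    subprocess.run(commit_cmd)               # all tests verified: commit
    return True

ok = gate(["make", "build"], ["make", "test"],
          ["git", "commit", "-am", "validated update"],
          ["git", "checkout", "--", "."])
print("committed" if ok else "rolled back")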
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity and features completed, concentrating on functionality, code quality
and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.7.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to allow a system
release at any time. It makes the build, test, and release phases faster and more
frequent, reducing the cost, time, and risk of delivered changes with small incremental
updates. A simple and repeatable deployment process is important for continuous
delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment. The
visibility analyses the activities, viz. build, deploy, test, and release, and reports the
status to the development team. The feedback informs the team of problems so that
they can soon be resolved. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.7.4.9 Virtual Reality
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technology such as remote communication,
artificial intelligence and spatial data assists the technology.
If any error is found then an error report is generated and displayed as a device stack
and position then evaluated with respect to time, device, device type, position and
after review the system structure is modified appropriately.
9.7.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 19 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation meet the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If forced to use documentation, they should have supple joints and long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated in
the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and the
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, technique and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Element       Number of System Elements
Entities             Number of Entities in the System
Services             Number of Services in the System
Standards            Number of Standards in the System
Techniques           Number of Techniques in the System
Communications       Number of Communications in the System
We found that:
● For entities, the probability of correctness is improved by the use of services
validated by standards and techniques.
● For services, the probability of correctness is improved by the use of techniques
and standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines, and for audits at delivery or at completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation, runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, and measures and profiles performance to ensure that the environment is valid.
The trigger points are before and after an update and at release to the production
system, when triggers force commits to the repository or a rollback to avoid corruption
of the system. Reports are collected on metrics about code coverage, code complexity
and features completed, concentrating on functionality, code quality and team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data is held as discrete objects (vector) and continuous fields (raster). It enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology such as remote
communication, artificial intelligence and spatial data assists the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised
standards and techniques for defining the language. We use the network model
described above to give a basis for the collection of data about the system. We
discover we need to set a priority of the rules for evaluating units and processes.
Object oriented programming gives us the concept of scope for meaning, objects,
properties, methods with arguments, the "this" operator and the concepts of synonyms,
generalisation and specification. Overloading of definitions allows for meaning to
change according to context. Replicating actions use iterations under different cases.
Conditional compilations, macros and package libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language.
It also gives priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware, or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
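As an illustration of this layering, the following minimal Python sketch (the framing and layer names are our own illustrative assumptions, not a real protocol stack) shows control information being added to user data at each layer and stripped off again:

def encapsulate(payload: bytes, layers: list[str]) -> bytes:
    # Wrap the payload with a simple length-prefixed header per layer.
    for layer in layers:
        payload = f"{layer}:{len(payload)}|".encode() + payload
    return payload

def decapsulate(frame: bytes) -> bytes:
    # Strip the headers again, checking each recorded length on the way in.
    # (Toy framing: assumes the user data itself contains no '|'.)
    while b"|" in frame:
        header, _, frame = frame.partition(b"|")
        layer, _, length = header.partition(b":")
        assert len(frame) == int(length)   # control information is consistent
    return frame

frame = encapsulate(b"user data", ["transport", "network", "link"])
assert decapsulate(frame) == b"user data"

Each layer sees only its own header, which is the essence of the modular protocol architecture described above.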
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is not recovered, the entity is queried to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once, and is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.8 Antivirus
9.8.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and implementation for antivirus.
9.8.2 Theoretical Studies
9.8.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.8.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. They should use their memory rather than documentation. If forced to use documentation, they should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and confined to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing how well the system, the system user and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.8.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log2 n1
N2 = n2 log2 n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = n1 log2 n1 + n2 log2 n2
If V = actual program volume
V* = theoretical program volume
then V = N log2 n
V* = N* log2 n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
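As a worked example, the sketch below evaluates these relations in Python; the counts n1, n2, N1, N2 and the Stroud number are illustrative assumptions, and logarithms are taken to base 2:

import math

n1, n2 = 20, 35          # distinct operators / distinct operands (assumed)
N1, N2 = 120, 180        # total occurrences of operators / operands (assumed)
S = 18                   # Stroud number (assumed)

n = n1 + n2                                       # program vocabulary
N = N1 + N2                                       # observed program length
N_star = n1 * math.log2(n1) + n2 * math.log2(n2)  # theoretical length N*
V = N * math.log2(n)                              # actual program volume
V_star = N_star * math.log2(n)                    # theoretical volume (n* = n)
L = V_star / V                                    # program level
lam = L * V_star                                  # programming language level
m = V / L                                         # mental discriminations
d = m / S                                         # development time
E = n1 * math.log2(n) / (1000 * n2)               # Mohanty's error rate
print(V, L, d, E)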
Mohanty showed that the error rate E for a program is given by
E = n1 log2 n / (1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(-C) / m! = probability of hitting the target m times for a coverage ratio C
C = nast/z = coverage ratio = ratio between the area covered by the search process and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of attempts in a particular search area. Specifically we can estimate the number of checks n that we can expect to apply to find m errors in a text of size A, or the number of rules n that we expect to apply when writing a text of m units in a language of size z. Conversely the M relation gives us the expected number of errors or the number of statements when we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of values.
Variable    Value   Value
m           0       1
mP(m,m)     0       6.4

mP(m,m) = 0 when m = 0 or -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times; m^(m+1) e^(-m) / m! is an increasing function for m increasing above zero and corresponds to a measure of complexity, with a value of 1 for m = 6.4 approximately, or the lucky seven.
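This claim can be checked numerically. The following sketch (our own verification, taking m! as Gamma(m+1) so that m may be non-integral) evaluates m^(m+1) e^(-m) / m!:

import math

def f(m: float) -> float:
    # m^(m+1) * e^(-m) / m!, computed via logs for numerical stability
    return math.exp((m + 1) * math.log(m) - m - math.lgamma(m + 1))

for m in (0.5, 1.0, 2.0, 6.4, 10.0):
    print(m, round(f(m), 3))   # increases with m; f(6.4) is close to 1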
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.8.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.8.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.8.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under portion 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.8.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
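A minimal sketch of such a check, assuming the system is represented as a directed graph with a designated root, is:

from collections import deque

def unreachable_units(edges: dict[str, list[str]], root: str) -> set[str]:
    # Breadth-first search from the root; any unit never reached is
    # defined but unused and should be reported for review.
    seen, queue = {root}, deque([root])
    while queue:
        unit = queue.popleft()
        for nxt in edges.get(unit, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(edges) - seen

system = {"root": ["a", "b"], "a": ["b"], "b": [], "orphan": ["a"]}
print(unreachable_units(system, "root"))   # {'orphan'}: report and review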
9.8.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language can be analysed or generated by the grammar. The grammar is a set of rewrite rules over non-terminal symbols. Grammar types are regular, context-free, context-sensitive and recursively enumerable, with natural languages probably context-free and parsable in real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.8.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
The network system is based on entities, services, standards, techniques and communications. In this case one of these is classified as the nodes and the others as the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
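A small sketch of the error-sink construction follows; the transition matrix is an illustrative assumption, and the standard absorbing-chain algebra reads off the expected behaviour before an error:

import numpy as np

# States: A, B, C and an absorbing error sink; each row sums to 1.
P = np.array([
    [0.00, 0.70, 0.25, 0.05],
    [0.10, 0.00, 0.85, 0.05],
    [0.60, 0.30, 0.00, 0.10],
    [0.00, 0.00, 0.00, 1.00],   # the error sink is absorbing
])
Q = P[:3, :3]                          # transitions among ordinary nodes
N = np.linalg.inv(np.eye(3) - Q)       # expected visits to each node
steps = N @ np.ones(3)                 # expected transitions before an error
print(steps)                           # larger values mean a lower error rate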
9.8.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities

Event       Probability
A           P(A)
not A       P(¬A)
A or B      P(A˅B)
A and B     P(A˄B)
A given B   P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular system implies that we will always have to use specific further units we will use the dependent forms of the 'and' and 'or' logical operations. The structures of a system imply a network form and we can use the techniques described in the part on network structures.
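For instance, these combinations can be written out directly (a small sketch with illustrative probabilities):

p_a, p_b = 0.9, 0.8        # probability each unit is selected correctly
p_b_given_a = 0.95         # dependent case: B's correctness given A

p_and_indep = p_a * p_b                    # A and B, independent
p_or_indep = p_a + p_b - p_a * p_b         # A or B, independent
p_and_dep = p_a * p_b_given_a              # A and B, dependent form
print(p_and_indep, p_or_indep, p_and_dep)  # 0.72 0.98 0.855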
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.8.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique for the definition of a programming language in terms of a formal language, and likewise for media, where we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
9.8.3 Analysis
9.8.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.8.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.8.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.8.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for running on computer hardware. It follows a set of operations: lexical analysis, pre-processing, parsing, semantic analysis (standard-directed translation), code generation, and optimization. A compiler-compiler is a parser generator which helps create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directives are only weakly related to the programming language. The pre-processor is often used to include other files. It replaces the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by the definition of sets of code which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase grammars help the processing; context-sensitivity is handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some optimizations for the back end. The back end generates the target code and performs more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object  code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
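A toy front end in the spirit of these phases, for an illustrative grammar of integer additions (our assumption, not the paper's language), can be sketched as:

import re

TOKEN = re.compile(r"\s*(?:(\d+)|(\+))")

def lex(src: str):
    # Lexical analysis: turn source text into (kind, text) tokens.
    tokens, pos, src = [], 0, src.rstrip()
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"unrecognised input at position {pos}")
        tokens.append(("NUM", m.group(1)) if m.group(1) else ("PLUS", "+"))
        pos = m.end()
    return tokens

def parse_to_ir(tokens):
    # Parse expr := NUM (PLUS NUM)* into a stack-machine intermediate form.
    ir = [("push", int(tokens[0][1]))]
    for i in range(1, len(tokens), 2):
        if tokens[i][0] != "PLUS":
            raise SyntaxError("expected '+'")
        ir += [("push", int(tokens[i + 1][1])), ("add", None)]
    return ir

print(parse_to_ir(lex("1 + 2 + 40")))

An unrecognised input element raises an error for review, matching the query-and-extend behaviour described above.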
9.8.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, and also on the development of hybrid object-relational databases. They use fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database combines the two related structures.
Physical data models are:
● semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
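A minimal sketch of this style of storage, holding entity documents as XML strings in a key-value map (the tags here are illustrative, not the Appendix scheme), is:

import xml.etree.ElementTree as ET

store = {}   # key-value store holding XML documents as strings

def put_entity(name, version, props):
    # Serialise an entity and its properties into an XML document.
    doc = ET.Element("entity", name=name, version=version)
    for k, v in props.items():
        ET.SubElement(doc, "property", name=k, value=v)
    store[name] = ET.tostring(doc, encoding="unicode")

def get_entity(name):
    # Parse the stored document back into a queryable element tree.
    return ET.fromstring(store[name])

put_entity("sensor-7", "1.2", {"type": "thermometer", "position": "51.5,-0.1"})
print(get_entity("sensor-7").get("version"))   # -> 1.2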
9.8.4 Implementation
9.8.4.1 Introduction
The implementation stage of antivirus studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.8.4.2 Learning Theory
9.8.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ(i=1 to n) -P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ(i=1 to v) ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Finally we choose the attribute with the largest IG.
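A short sketch of this computation, for an illustrative training set split, is:

import math

def I(p, n):
    # Information content (entropy) of a (p, n) split, in bits.
    total = p + n
    return sum(-x / total * math.log2(x / total) for x in (p, n) if x)

p, n = 6, 6                        # positive / negative examples (assumed)
splits = [(4, 0), (1, 2), (1, 4)]  # (p_i, n_i) per value of attribute A

remainder = sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in splits)
IG = I(p, n) - remainder
print(round(IG, 3))                # choose the attribute with the largest IG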
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions of likelihood-weighted average over the hypotheses
to assess the results but this can be too problematic. This can be overcome with the
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise, full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or impossible, although modern optimization techniques do help.
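A worked instance of these four steps for the simplest family, a single Bernoulli parameter (our own example), runs: the log likelihood of s successes in t trials is s log θ + (t - s) log(1 - θ); its derivative s/θ - (t - s)/(1 - θ) is zero at θ = s/t.

s, t = 37, 50
theta_ml = s / t
print(theta_ml)    # 0.74, the maximum likelihood estimate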
9.8.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined by the function F1 as
F1(n0, n∞, D) = [n0 (1 - f^(aK)) 2^(1-D) + n∞ (Gs + Gf) f^(KT) (1 - f^(aDK))] / [(1 - f^(aK)) 2^(1-D) + (Gs + Gf) f^(KT) (1 - f^(aDK))]
where Gs is the reinforcement of each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0<f<1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 - R1) F1(u0, u∞, D) + R1 F1(u'0, u'∞, D - D1)
to define the modification resulting from changing the work by a proportion R1 after D1 applications out of a total training of D applications. Here u0 is the value for the untrained user, u∞ for the fully trained user, and u' are the corresponding values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower operation speeds and increased randomness in probabilities and search scan, following inverted-U graphs from ergonomics.
Thus:
uij = uijmax (1 - U1 (m - m1)^2) + uijmin U1 (m - m1)^2
where uij has minimum value uijmin and maximum value uijmax, m1 is the value of m giving maximum productivity, and U1 is a normalising factor dependent on the energy consumed in the process.
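A small sketch of this inverted-U relation, with assumed values for uijmin, uijmax, m1 and U1:

def performance(m, u_min=0.2, u_max=1.0, m1=5.0, U1=0.04):
    # Inverted-U: best at workload m1, degrading on either side.
    w = min(U1 * (m - m1) ** 2, 1.0)   # keep the blend inside [0, 1]
    return u_max * (1 - w) + u_min * w

for m in (1, 3, 5, 7, 9):
    print(m, performance(m))           # peaks at m = m1, falls either side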
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good memory, careful, precise, fast learner) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
9.8.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been processed; other language must be retained for later use. These aspects vary from person to person depending on their background, and depending on that background it will be understood in different ways.
9.8.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i has pi possible values aij for j = 1 to pi. We find that there are two types of value. The first case is numeric and the second kind is a classification value such as yes or no. On many occasions we find that we need the condition "don't know" with classification when the value cannot be specified. The value of each characteristic can change over a set of time periods so that at period k the characteristic has the value bik, which can take one of the pi values ai1 … aipi. The values bik reflect the profile of the system at period k for all the n characteristics and the variation of a characteristic i over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = (Σ(l=1 to r) cikl) / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
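A sketch of these operations, imputing a "don't know" value as the mean of the known values and then matching the nearest stored profile (the data is illustrative):

def impute(values):
    # Replace None ("don't know") entries by the mean of the known values.
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def nearest(profile, library):
    # Relations (a)/(b): pick the stored profile minimising sum |bik - cikl|.
    def dist(other):
        return sum(abs(a - b) for a, b in zip(profile, other))
    return min(library, key=lambda k: dist(library[k]))

b = impute([0.7, None, 0.4])       # profile bik with one unknown value
library = {"period-1": [0.6, 0.5, 0.3], "period-2": [0.9, 0.6, 0.8]}
print(nearest(b, library))         # -> period-1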
The concept can be used in two different ways in the educational field – the browsing mode and the revision mode. The browsing mode can be expressed as specifying characteristic values ei for i = 1 to q and finding the other characteristic values fi for i = q + 1, ..., n. In revision mode the student suggests values of fi; when we are assessing, the computer specifies the values of q, the student specifies the fi, and the computer performs the check as stated above.
9.8.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information. The first set comprises the values for the locality. The second set of values is the general statistics for the global system.
If any error is found then it is reported as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.8.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects and continuous fields, as vector geometry and raster images respectively. Displays can illustrate and analyse features, and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
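A tiny sketch of the kind of positional query such a system supports, with illustrative device positions stored as vector point data:

import math

devices = {"cam-1": (51.50, -0.12), "gw-4": (51.51, -0.09), "s-9": (48.85, 2.35)}

def within(centre, radius):
    # Planar distance as a rough stand-in for proper geodesic distance.
    cx, cy = centre
    return [name for name, (x, y) in devices.items()
            if math.hypot(x - cx, y - cy) <= radius]

print(within((51.50, -0.10), 0.05))   # -> ['cam-1', 'gw-4']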
9.8.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given data points, subject to constraints. It uses two main methods, namely interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of the function where data values are missing. They also summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, and carries uncertainty depending on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations from the given data points. With language processing it is found that affine matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
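A short least-squares sketch, with illustrative data points, showing interpolation within the data and extrapolation beyond it:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

coeffs = np.polyfit(x, y, deg=1)   # least-squares fit of y ~ a*x + b
fit = np.poly1d(coeffs)

print(fit(2.5))    # interpolation between observed points
print(fit(6.0))    # extrapolation beyond the data; uncertainty grows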
9.8.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and after review the database structure is adjusted.
9.8.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of the system from the repository and performs a build and a set of automated tests to ensure that their environment is valid for update. They perform their update work and rebuild the system using the build server, compiling binaries and generating documentation, website pages, statistics and distribution media, with integration and deployment into a scalable version clone of the production environment through service virtualization for dependences. It is then ready to run a set of automated tests consisting of all unit and integration (defect or regression) tests with static and dynamic tests, measuring and profiling performance to confirm that the system behaves as it should. The developer resubmits the updates to the repository, which triggers another build process and tests. The new updates are committed to the repository when all the tests have been verified, otherwise they are rolled back. At that stage the new system is available to stakeholders and testers. The build process is repeated periodically with the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features complete, concentrating on functional, quality code, and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
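A minimal sketch of the commit-triggered build-and-test loop, where the commands are placeholders standing in for a real build server's steps:

import subprocess

def ci_pipeline():
    steps = [
        ["git", "pull"],    # fetch the latest copy from the repository
        ["make", "build"],  # compile binaries, documentation, media
        ["make", "test"],   # unit and integration (regression) tests
    ]
    for step in steps:
        if subprocess.run(step).returncode != 0:
            return False    # fail fast: the update must not be committed
    return True             # all green: commit the update

if not ci_pipeline():
    print("build or tests failed - rolling back the update")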
9.8.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system release at any time. It performs the build, test, and release phases faster and more frequently, reducing the cost, time, and risk of delivered changes with small incremental updates. A simple and iterable deployment process is important for continuous delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment. The visibility analyses the activities, viz. build, deploy, test, and release, and reports the status to the development team. The feedback informs the team of problems so that they can be resolved quickly. Continual deployment uses an automated process to deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It includes continuous integration, application release automation, build automation, and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.8.4.9 Virtual Reality
Virtual reality simulates an environment with the user's presence and interaction through sight, touch, hearing, and smell. It uses a screen or a special headset to display sight and sound information. Input is made through standard computer input, sight tracking or tactile information. Other technology such as remote communication, artificial intelligence and spatial data assists the technology.
If any error is found then an error report is generated and displayed as a device stack and position; it is then evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
9.8.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted of further sub-sections reflecting the theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. They should use their memory rather than documentation. If forced to use documentation, they should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and confined to the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise, have minimum ambiguity, have minimum error cases and have partitioning facilities. The facilities for systems should be modifiable to the experience of the users. Reference documentation should have stiff spines, and small thin stiff light pages with simple content which is adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the dynamic input from a source to give valid results with the rules reflecting the actions of the system.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learner) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They start with a set of basic concepts of entities, services, standards, technique and communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning process and then use the minimum "distance" to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Elements     Number of System Elements
Entities            Number of Entities in the System
Services            Number of Services in the System
Standards           Number of Standards in the System
Techniques          Number of Techniques in the System
Communications      Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal standard rules.
● For techniques, the probability of correctness is improved by the use of standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages of the update process. It builds all generated system components and documentation and runs automated unit and integration (defect or regression) tests with static and dynamic tests, measuring and profiling performance to ensure that the environment is valid. The trigger points are before and after update and at release to the production system, when triggers force commits to the repository or rollback to avoid corruption of the system. Reports are collected on metrics about code coverage, code complexity, and features complete, concentrating on functional, quality code, and team momentum.
In continuous delivery, the development and deployment activity is made smaller by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data comprises discrete objects (vector) and continuous fields (raster). It enables entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment with the user's presence and interaction through sight, touch, hearing, and smell. Input is made through standard computer input, sight tracking or tactile information. Other technology such as remote communication, artificial intelligence and spatial data assists the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standard and technique for defining the language. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and processes. Object oriented programming gives us the concept of scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification. Overloading of definitions allows for meaning to change according to context. Replicating actions use iterations under different cases. Conditional compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture, hardware representation, meaning, version, timestamp, geographic position, properties (name and value), statistics, events (name and value), an interrupt recovery service with arguments, a priority value relative to other services, and nesting. We define a set of rules for extending the services of the system which are performed in coordination with the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture, hardware representation, meaning, version, timestamp, properties (name and value), statistics, nesting, events (name, value and interrupt service), and a priority relative to other techniques. We define a set of rules for extending the techniques of the system which are performed in coordination with the extended standard and extended technique definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
both source (input) language, intermediate language and target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications, items of hardware, or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is not recovered, the entity is queried to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once, and is added to, changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.9 Firewall
9.9.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It consists of further sub-sections reflecting the theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now. They are reflected as theoretical studies, analysis and implementation for firewalls.
9.9.2 Theoretical Studies
9.9.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.9.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation measure up to the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.9.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 =total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n= program vocabulary
N= program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V= actual program volume
V*= theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V= program level
λ = LV*= programming language level
S= Stroud Number then
m = V/L= number of mental discriminations
d = m/S=development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n) / (1000 n2)
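A small Python sketch of these relations, assuming base-2 logarithms (the document
writes log without a base) and illustrative token counts:

    import math

    def halstead(operators, operands):
        """Software-physics measures from token counts.

        operators/operands map each distinct token to its occurrence count.
        """
        n1, n2 = len(operators), len(operands)            # distinct operators/operands
        N1, N2 = sum(operators.values()), sum(operands.values())
        n = n1 + n2                                       # program vocabulary
        N = N1 + N2                                       # program length
        N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length N*
        V = N * math.log2(n)                              # actual volume
        E = n1 * math.log2(n) / (1000 * n2)               # Mohanty's error-rate estimate
        return {"n": n, "N": N, "N*": N_hat, "V": V, "E": E}

    # e.g. counts taken from a toy expression such as  a = b + b * c
    print(halstead({"=": 1, "+": 1, "*": 1}, {"a": 1, "b": 2, "c": 1}))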
The mean free path theorem derives the relations:
P(m,C) = C^m e^(−C) / m! = probability of hitting the target m times for a coverage ratio C.
C = n·a·s·t / z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation, relating the search effort to the search results over an average search
area, explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number of
attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when we
apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits, whilst
the s, t and a relations reflect the relations between S, T and A described earlier; m
shows the normalised result for M, and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values.
Variable     Value   Value
m            0       6.4
mP(m,m)      0       1
mP(m,m) = 0 when m = 0 or m = −0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times,
and m^(m+1) e^(−m)/m! is an increasing function for m increasing above zero,
corresponding to a measure of complexity with a value of 1 for m = 6.4 approximately,
or the lucky seven.
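The following Python sketch checks the P(m,C) relation and the behaviour of mP(m,m)
numerically; extending m! to the gamma function for non-integer m is our assumption:

    import math

    def P(m: int, C: float) -> float:
        """Probability of hitting the target m times for coverage ratio C,
        the Poisson form P(m, C) = C^m e^(-C) / m! used above."""
        return C ** m * math.exp(-C) / math.factorial(m)

    def mP(m: float) -> float:
        """m * P(m, m), with m! generalised via the gamma function so that
        non-integer m such as 6.4 can be checked."""
        return m * m ** m * math.exp(-m) / math.gamma(m + 1)

    print(P(0, 1.0), P(1, 1.0))   # both ~0.368 for coverage ratio 1
    print(mP(0.5), mP(6.4))       # mP rises with m and passes through ~1 near m = 6.4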
If any error is found then it is reported as a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.9.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
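Before the detailed subsections, here is a minimal Python sketch of three of these
checks on a small directed graph of units; the node and edge names are illustrative:

    from collections import defaultdict

    edges = [("source", "parse"), ("parse", "store"), ("store", "sink")]
    nodes = {"source", "parse", "store", "sink", "orphan"}

    preds, succs = defaultdict(set), defaultdict(set)
    for a, b in edges:
        succs[a].add(b)
        preds[b].add(a)

    # Completeness: every defined unit is actually connected somewhere.
    unused = {n for n in nodes if not preds[n] and not succs[n]}
    print("defined but unused units:", unused)          # -> {'orphan'}

    # Consistency: a unit reached from two different units is flagged for review.
    ambiguous = {n for n in nodes if len(preds[n]) > 1}
    print("units with multiple sources:", ambiguous)    # none in this toy graph

    # Completing its processes: every working unit must reach the sink.
    def reaches(start, goal):
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n == goal:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(succs[n])
        return False

    print("all paths complete:",
          all(reaches(n, "sink") for n in nodes - {"sink", "orphan"}))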
9.9.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.9.2.4.2 Consistency
A unit is accessed from two different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.9.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
Such units are a waste and would cause confusion if they become known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.9.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system and must speak
the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules
over non-terminal symbols. Grammar types are regular, context-free, context-sensitive
and recursively enumerable, with natural languages probably context-free and parsable
in real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.9.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink we can use the extra edges to
discover what is the probability of error at different parts in the network system, the
size of error at each point of the Markov process and the error node gives an estimate
of the total error rate of the network.
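A minimal Python sketch of the error-sink idea, assuming illustrative transition
probabilities: repeated multiplication by the transition matrix shows how much of the
flow is eventually absorbed by the error node.

    states = ["entity", "service", "done", "error"]
    T = [  # rows/columns ordered as in `states`; "done" and "error" absorb
        [0.0, 0.9, 0.0, 0.1],
        [0.0, 0.0, 0.95, 0.05],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

    dist = [1.0, 0.0, 0.0, 0.0]             # all flow starts at "entity"
    for _ in range(100):                     # iterate to (near) steady state
        dist = [sum(dist[i] * T[i][j] for i in range(4)) for j in range(4)]

    print("estimated total error rate:", round(dist[states.index("error")], 4))
    # -> 0.1 + 0.9 * 0.05 = 0.145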
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
9.9.2.7 Probability Theory
Probability is a measure of the likelihood that an event will occur.
Summary of probabilities
Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A˅B)
A and B      P(A˄B)
A given B    P(A│B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
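A small Python sketch of these operators for independent and dependent selections,
with illustrative probabilities:

    p_a, p_b = 0.9, 0.8          # probability each unit is selected correctly

    p_and = p_a * p_b                         # A and B, independent
    p_or = p_a + p_b - p_a * p_b              # A or B
    p_b_given_a = 0.95                        # dependent form: B always follows A
    p_and_dep = p_a * p_b_given_a             # A and B when B depends on A

    print(p_and, p_or, p_and_dep)             # 0.72, 0.98, 0.855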
If any error is found then it is reported as a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.9.2.8 Programming Language Theory
Programming language theory gives us the rules for a formalised standard and technique
for the definition of a programming language in terms of a formal language; from media
technologies we find a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
9.9.3 Analysis
9.9.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.9.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.9.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.9.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations: lexical analysis,
pre-processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor
directives are only weakly related to the programming language. The pre-processor is
often used to include other files; it replaces the directive line with the text of the
file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion is provided by the definition of sets of code which
can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase
grammars help processing, with context-sensitivity handled at the semantic analysis
phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object  code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
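As a minimal sketch of the front end and IR stages, the following Python fragment lexes
a toy expression grammar (single-digit operands with + and * only, our assumption),
parses it and emits a three-address intermediate representation:

    import re

    def lex(src):
        return re.findall(r"\d|\+|\*", src)       # lexical analysis

    def parse(tokens):                             # expr := term ('+' term)*
        def term(i):                               # term := digit ('*' digit)*
            node, i = tokens[i], i + 1
            while i < len(tokens) and tokens[i] == "*":
                node, i = ("*", node, tokens[i + 1]), i + 2
            return node, i
        node, i = term(0)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i2 = term(i + 1)
            node, i = ("+", node, rhs), i2
        return node

    def emit(node, code, counter=[0]):             # code generation to the IR
        if isinstance(node, str):
            return node
        op, l, r = node
        l, r = emit(l, code), emit(r, code)
        counter[0] += 1
        tmp = f"t{counter[0]}"
        code.append(f"{tmp} = {l} {op} {r}")
        return tmp

    code = []
    emit(parse(lex("1+2*3")), code)
    print("\n".join(code))    # t1 = 2 * 3, then t2 = 1 + t1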
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
9.9.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are
derived from object databases to overcome the problems met with object programming
and relational databases, and on the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
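A minimal Python sketch of the key-value and document-oriented style with XML export
for interoperability; the field names are illustrative:

    import xml.etree.ElementTree as ET

    store = {}                                  # fast key-value store

    def put(key, document):
        store[key] = document                   # document-oriented: schemaless dict

    def to_xml(key):
        """Export a stored document as XML for interoperability."""
        doc = ET.Element("entity", id=key)
        for name, value in store[key].items():
            child = ET.SubElement(doc, name)
            child.text = str(value)
        return ET.tostring(doc, encoding="unicode")

    put("sensor-17", {"type": "temperature", "version": 3, "status": "active"})
    print(to_xml("sensor-17"))
    # <entity id="sensor-17"><type>temperature</type>...</entity>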
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database, which combines the two related structures
Physical data models are:
● semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.9.4 Implementation
9.9.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.9.4.2 Learning Theory
9.9.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), …, P(vn)) = Σ i=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2 (p/(p+n)) − (n/(p+n)) log2 (n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets
E1, …, Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ i=1..v ((pi + ni)/(p + n)) I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
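A small Python sketch of the entropy and information-gain formulas above, with an
illustrative training set:

    import math

    def entropy(p, n):
        """I(p/(p+n), n/(p+n)) in bits; 0 log 0 is taken as 0."""
        total, result = p + n, 0.0
        for k in (p, n):
            if k:
                result -= (k / total) * math.log2(k / total)
        return result

    def information_gain(p, n, subsets):
        """subsets is a list of (p_i, n_i) pairs, one per value of attribute A."""
        remainder = sum((pi + ni) / (p + n) * entropy(pi, ni)
                        for pi, ni in subsets)
        return entropy(p, n) - remainder

    # e.g. 6 positive and 6 negative examples split by a three-valued attribute
    print(information_gain(6, 6, [(4, 0), (1, 4), (1, 2)]))   # ~0.47 bits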
Learning viewed as Bayesian updating of a probability distribution over the
hypothesis space uses predictions from a likelihood-weighted average over the
hypotheses to assess the results, but this can be too problematic. This can be overcome with the
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is
intractable; MAP learning balances complexity with accuracy on training data; and
maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choose a parametrized family of models to describe the data; this requires substantial
insight and sometimes new models.
2. Write down the likelihood of the data as a function of the parameters; this may require
summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Find the parameter values such that the derivatives are zero; this may be hard or
impossible, though modern optimization techniques do help.
9.9.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as
F1(n0, n∞, D) = [n0(1−f^aK)·2^(1−D) + n∞(Gs+Gf)·f^KT·(1−f^aDK)] / [(1−f^aK)·2^(1−D) + (Gs+Gf)·f^KT·(1−f^aDK)]
where Gs is the reinforcement for each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1−R1)·F1(u0, u∞, D) + R1·F1(u'0, u'∞, D−D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications. u0 is the value for the untrained
user, u∞ for the fully trained user, and u'0 and u'∞ are the values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax(1 − U1(m − m1)^2) + uijmin·U1(m − m1)^2
where uij have minimum values uijmin and maximum values uijmax. m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
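A minimal Python sketch of the inverted-U relation, with illustrative parameter values;
performance peaks at m1 and degrades either side of it:

    u_max, u_min = 1.0, 0.2     # best and worst values of the performance measure
    m1, U1 = 5.0, 0.02          # workload giving peak productivity; normalising factor

    def u(m):
        """Performance at workload m: peaks at m1, falls away either side."""
        w = min(U1 * (m - m1) ** 2, 1.0)       # clamp so u stays within bounds
        return u_max * (1 - w) + u_min * w

    for m in (1, 3, 5, 7, 9):
        print(m, round(u(m), 3))   # rises to the peak at m = 5, then falls away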
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.9.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation (e.g. animals) and specification (e.g. white-tailed frog). Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background will
be understood in different ways.
9.9.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value:
the first is numeric and the second is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods, so that at period k the characteristic has the value bik, which
can be one of the pi values ai1 ... aipi. The values bik reflect the profile of the
system at period k over all the n characteristics and the variation of a characteristic i
over the time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = (Σ l=1..r cikl) / r
Statistics can be calculated for a system from the values of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by (a code sketch follows this list):
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
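A minimal Python sketch of these profile calculations, filling a "don't know" value from
the known values of other elements (the bik average above) and finding the time period
nearest to a new element (case a); the characteristic names and values are illustrative:

    profiles = {                 # b[i][k]: characteristic i over time periods k
        "temp": [36.5, 37.0, 38.2, 39.0],
        "rate": [70.0, 75.0, 85.0, 95.0],
    }

    def fill_dont_know(known_values):
        """b_ik = sum of the known decoded values c_ikl over r elements / r."""
        return sum(known_values) / len(known_values)

    def nearest_period(element):
        """Find k minimising the total |b_ik - c_ikl| distance (case a above)."""
        periods = range(len(next(iter(profiles.values()))))
        return min(periods,
                   key=lambda k: sum(abs(profiles[i][k] - element[i])
                                     for i in element))

    print(fill_dont_know([37.9, 38.4, 38.3]))            # -> 38.2
    print(nearest_period({"temp": 38.0, "rate": 84.0}))  # -> period 2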
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing mode can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi
for i = q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q and the student supplies the fi, and the computer
performs the check as stated above.
9.9.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found then it is reported as a device stack and position, evaluated
with respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
9.9.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information represented by discrete objects
(vectors) and continuous fields (raster images). Displays can illustrate and analyse
features, enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.9.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an
exact fit of the data, or smoothing, for a "smooth" function approximating the data.
Regression analysis gives a measure of uncertainty of the curve due to random data
errors. The fitted curves help picture the data and estimate values of a function where
data values are missing. They also summarize relations of the variables. Extrapolation
takes the fitted curve to calculate values beyond the range of the observed data, and
carries uncertainty depending on which particular curve has been determined. Curve
fitting relies on various types of constraints such as a specific point, angle,
curvature or other higher order constraints, especially at the ends of the points being
considered. The number of constraints sets a limit on the number of combined functions
defining the fitted curve; even then there is no guarantee that all constraints are met
or the exact curve is found. Curves are assessed by various measures, a popular
procedure being the least squares method, which measures the deviations of the given
data points. With language processing it is found that affine matrix transformations
help deal with problems of translation and different axes.
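A minimal Python sketch of least-squares fitting in its simplest form: a straight line
fitted with the closed-form formulas, then used for interpolation and extrapolation;
the data values are illustrative:

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.1, 2.9, 5.2, 7.1, 8.8]

    # Closed-form least-squares line minimising the squared deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    fit = lambda x: slope * x + intercept
    print(fit(2.5))      # interpolation within the observed range
    print(fit(6.0))      # extrapolation beyond it, with growing uncertainty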
If any error is found then an error report is generated as a device stack and position,
evaluated with respect to time, device, device type and position, and after review the
system structure is modified appropriately.
9.9.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
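A minimal Python sketch of configuration identification, control and status accounting
for a single item; the item and attribute names are illustrative:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ConfigItem:
        name: str
        attributes: dict
        version: int = 1
        history: list = field(default_factory=list)

        def change(self, updates: dict, approved_by: str):
            # configuration control: record who approved what, and when,
            # so audits can replay the item's baseline history
            self.history.append((self.version, dict(self.attributes),
                                 approved_by, datetime.now(timezone.utc)))
            self.attributes.update(updates)
            self.version += 1

    item = ConfigItem("gateway-firmware", {"release": "1.0", "checksum": "abc"})
    item.change({"release": "1.1", "checksum": "def"}, approved_by="review-board")
    print(item.version, len(item.history))   # 2, with 1 baseline recorded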
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.9.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository, performs a build and runs a set of automated tests to
ensure that their environment is valid for update. They perform the update work and
rebuild the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualization for dependencies. The system is then ready to run a set of automated
tests consisting of all unit and integration (defect or regression) tests with static
and dynamic tests, and to measure and profile performance to confirm that it behaves as
it should. The developer resubmits the updates to the repository, which triggers another
build process and tests. The new updates are committed to the repository when all the
tests have been verified; otherwise they are rolled back. At that stage the new system
is available to stakeholders and testers. The build process is repeated periodically
with the tests to ensure that there is no corruption of the system.
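A minimal Python sketch of the commit-or-rollback trigger described above; the build
and test steps are stubs standing in for real tooling:

    def build(workspace):
        return True                      # compile binaries, docs, media, ...

    def run_tests(workspace):
        return [("unit", True), ("integration", True), ("regression", True)]

    def integrate(workspace, repository):
        if not build(workspace):
            return "rejected: build failed"
        results = run_tests(workspace)
        if all(passed for _, passed in results):
            repository.append(workspace)     # commit triggers the next build
            return "committed"
        return "rolled back"                 # avoid corrupting the system

    repo = []
    print(integrate({"change": "fix-overflow"}, repo))   # -> committed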
The advantages are derived from frequent testing and fast feedback on impact of local
changes. By collecting metrics, information can be accumulated on code coverage,
code complexity, and features completed, concentrating on functional, quality code, and
team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.9.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system
release at any time. It performs the build, test, and release phases faster and more
frequently, reducing the cost, time, and risk of delivered changes with small incremental
updates. A simple and iterable deployment process is important for continuous
delivery.
It uses a deployment pipeline to give visibility, feedback, and continually deploy. The
visibility analyses the activities viz. build, deploy, test, and release and reports the
status to the development team. The feedback informs the team of problems so that
they can soon be resolved. Continual deployment uses an automated process to deploy
and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation,
and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.9.4.9 Virtual Reality
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technology is remote communication,
artificial intellegence and spacial data to assist the technology.
If any error is found then an error report is generated and displayed as a device stack
and position, evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
9.9.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 20 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation measure up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use their memory rather than
documentation. If forced to use documentation, they should have supple joints and long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated on the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
Network analysis for entity, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, techniques and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation (e.g. standards, techniques) and specification (e.g. entity properties).
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements
System Elements     Number of System Elements
Entities            Number of Entities in the System
Services            Number of Services in the System
Standards           Number of Standards in the System
Techniques          Number of Techniques in the System
Communications      Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines, and for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds the whole generated system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to ensure that the environment is valid. The
trigger points are before and after update and at release to the production system
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
features completed, concentrating on functional, quality code, and team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data is discrete objects (vector) and continuous fields (raster). It enables entities to be
positioned, monitored, analysed and displayed for visualization, understanding and
intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment of the user's presence, environment and
interaction of sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technologies such as remote
communication, artificial intelligence and spatial data assist the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for a formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
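A minimal Python sketch of an entity record carrying the fields listed above; the
concrete representation (a dataclass, an ASCII escape byte as the extension marker)
is our assumption:

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        iteration_control: str
        entity_type: str
        sound_id: str                  # identity for sound
        picture_id: str                # identity for picture
        hardware_representation: str
        meaning: str
        version: int
        timestamp: str
        position: tuple                # geographic position (lat, lon)
        properties: dict = field(default_factory=dict)   # name -> value
        statistics: dict = field(default_factory=dict)
        nested: list = field(default_factory=list)       # nested entities

    ESCAPE = "\x1b"    # escape sequence marking an extension to the entity set

    def is_extension(token: str) -> bool:
        return token.startswith(ESCAPE)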
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and priority relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
both source (input) language, intermediate language and target (output) language. They
also give priorities of how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system, and
the entities must speak the same language throughout. Entities consist of user
applications, items of hardware or the messages passing between source and
destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standards.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended.
It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.10 APIDS
9.10.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and execution for APIDS.
9.10.2 Theoretical Studies
9.10.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.10.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. He should use his memory rather than documentation. If he is forced to use
documentation, he should have supple joints, long light fingers which allow pages to
slip through them when making a reference. Finger motion should be kept gentle and
within the range of movement and concentrated to the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation measure up to the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.10.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1=number of operators
n2 = number of operands
N1 =total number of occurrences of operators
N2 =total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n= program vocabulary
N= program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = N1 log n1 + N2 log n2
If V= actual program volume
V*= theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V= program level
λ = LV*= programming language level
S= Stroud Number then
m = V/L= number of mental discriminations
d = m/S=development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n) / (1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m e^(−C) / m! = probability of hitting the target m times for a coverage ratio C.
C = n·a·s·t / z = coverage ratio = ratio between the area covered by the search process and
the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed searcher passes over search area
t = time searcher passes over search area
p= probability of being eliminated each time it is hit
P == total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number
of attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when
we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values.
Variable     Value   Value
m            0       6.4
mP(m,m)      0       1

mP(m,m) = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times, and
m^(m+1)·e^(−m)/m! is an increasing function for m increasing above zero, corresponding
to a measure of complexity with a value of 1 for m = 6.4 approximately, or the lucky
seven.
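As a quick numerical check (again an illustration, not from the paper), the following sketch evaluates mP(m,m) = m·m^m/(m!·e^m), using the gamma function so non-integer m can be tried; by Stirling's approximation mP(m,m) ≈ √(m/2π), which passes through 1 near m = 2π ≈ 6.28, consistent with the 6.4 quoted above:

```python
import math

def mP(m: float) -> float:
    """m * P(m, m) with P(m, C) = C**m / (m! * e**C); math.gamma
    generalises the factorial to non-integer m."""
    return m * m**m / (math.gamma(m + 1) * math.exp(m))

# mP(m, m) increases with m and crosses 1 near m = 2*pi.
for m in (0.5, 1.0, 2.0, 6.28, 6.4, 10.0):
    print(f"m = {m:5.2f}   mP(m, m) = {mP(m):.4f}")
```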
If any error is found then it is reported as a device stack dump with its position; it is then
evaluated with respect to time, device, device type and position, and after review the data
and processing structures are adjusted.
9.10.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.10.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection with the other units be? Will it be with one particular unit or another?
There will be confusion, and the well-structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.10.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition in section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.10.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
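A minimal sketch of how the completeness and consistency checks might be run over a system graph; the unit names and adjacency structure are invented for illustration:

```python
from collections import defaultdict

# Invented system graph: unit -> units it connects to.
edges = {
    "sensor": ["gateway"],
    "backup": ["store"],
    "gateway": ["store"],
    "store": [],
    "orphan": [],          # defined but never used
}

def completeness(graph):
    """Report units defined but unused (no connections in or out)."""
    used = {u for u, targets in graph.items() if targets}
    used |= {t for targets in graph.values() for t in targets}
    return [u for u in graph if u not in used]

def consistency(graph):
    """Report units accessed from two or more different units, whose
    interpretation by the recipient therefore needs checking."""
    sources = defaultdict(set)
    for unit, targets in graph.items():
        for t in targets:
            sources[t].add(unit)
    return [t for t, s in sources.items() if len(s) > 1]

print("unused units:", completeness(edges))       # ['orphan']
print("multi-source units:", consistency(edges))  # ['store']
```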
9.10.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system; both entities must
speak the same language. Entities consist of user applications, e-mail facilities and
terminals. Systems are computers, terminals or remote sensors. Key elements of a
protocol are standards (data formats, signal levels), techniques (control information,
error handling) and timing (speed matching, sequencing).
In a protocol architecture the task of communication is broken up into modules. At each
layer, protocols are used to communicate, and control information is added to user data.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules over
non-terminal symbols. Grammar types are regular, context-free, context-sensitive and
recursively enumerable, with natural languages probably context-free and parsable in
real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
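As an illustration of rewrite rules (the grammar and vocabulary are invented), a toy context-free grammar can generate strings of terminal symbols as follows:

```python
import random

# A toy context-free grammar as rewrite rules: non-terminal -> alternatives.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "device"], ["the", "gateway"]],
    "VP": [["sends", "NP"], ["fails"]],
}

def generate(symbol="S"):
    """Rewrite non-terminals until only terminal symbols remain."""
    if symbol not in GRAMMAR:          # terminal symbol
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [t for part in production for t in generate(part)]

print(" ".join(generate()))   # e.g. "the device sends the gateway"
```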
9.10.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink, we can use the extra edges to
discover the probability of error at different parts of the network system and the size of
error at each point of the Markov process, and the error node gives an estimate of the
total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case one of these is classified as the nodes and the others as
the edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
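A minimal numpy sketch of the error-sink idea, assuming an invented four-node system with an absorbing success node and an absorbing error node; the absorption probabilities come from the standard fundamental-matrix calculation for absorbing Markov chains:

```python
import numpy as np

# Invented four-node system: rows give transition probabilities from each
# node; nodes 2 (success) and 3 (error sink) are absorbing, so their rows
# are unused by the calculation below.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],   # source
    [0.0, 0.0, 0.9, 0.1],   # intermediate
    [0.0, 0.0, 0.0, 0.0],   # success sink
    [0.0, 0.0, 0.0, 0.0],   # error sink
])
transient, absorbing = [0, 1], [2, 3]
Q = P[np.ix_(transient, transient)]              # transient-to-transient
R = P[np.ix_(transient, absorbing)]              # transient-to-absorbing
N = np.linalg.inv(np.eye(len(transient)) - Q)    # fundamental matrix
B = N @ R                                        # absorption probabilities
print("P(error) from each node:", B[:, 1])       # [0.18, 0.1]
```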
9.10.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.
Summary of probabilities

Event        Probability
A            P(A)
not A        P(¬A)
A or B       P(A ∨ B)
A and B      P(A ∧ B)
A given B    P(A | B)
When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical 'and' operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical 'or' operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units, we use the
dependent forms of the 'and' and 'or' logical operations. The structures of a system
imply a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported as a device stack dump with its position; it is then
evaluated with respect to time, device, device type and position, and after review the data
and processing structures are adjusted.
9.10.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique
for the definition of a programming language in terms of a formal language. From media
technologies we find a similar kind of definition. We use the network model described
above to give a basis for the collection of data about the system. Programming
language theory gives us the rules for formalised standard and technique for the
definition of a programming language in terms of a formal language and likewise for
media. We discover we need to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump
and after review the language structure is adjusted.
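A brief Python sketch of the object-oriented concepts mentioned above (the class names are invented); Python's self stands in for the "this" operator:

```python
class Entity:
    """Scope and properties; self plays the role of "this"."""
    def __init__(self, name):
        self.name = name                 # property in the object's scope

    def describe(self, detail=""):       # method with an argument
        return f"entity {self.name} {detail}".strip()

class NetworkEntity(Entity):             # generalisation -> specification
    def describe(self, detail=""):       # overloading: meaning changes by context
        return "network " + super().describe(detail)

print(Entity("sensor").describe())                     # entity sensor
print(NetworkEntity("gateway").describe("on port 2"))  # network entity gateway on port 2
```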
9.10.3 Analysis
9.10.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.10.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.10.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
9.10.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations: lexical analysis, pre-
processing, parsing, semantic analysis (standard-directed translation), code
generation, and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor
directives are only weakly related to the programming language. The pre-processor is
often used to include other files. It replaces the directive line with the text
of the file. Conditional compilation directives allow the inclusion or exclusion of lines of
code. Macro definition and expansion are provided by the definition of sets of code
which can be expanded when required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. Context-sensitivity, which
the lexical and phrase grammars cannot capture, is handled at the semantic analysis
phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object  code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid-lookup features added.
If an element or function is not found then the error is reported as a stack dump and
after review the processing structure is adjusted.
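A minimal front-end/back-end sketch in Python (invented grammar, for illustration only): a lexer, a recursive-descent parser producing a tuple-based intermediate representation, and a trivial back end that walks the IR:

```python
import re

TOKENS = re.compile(r"\d+|[+*]")

def lex(source):
    """Lexical analysis: split the source into number and operator tokens."""
    return TOKENS.findall(source)

def parse(tokens):
    """Parse expr := term ('+' term)*, term := num ('*' num)* into a tuple IR."""
    def term(i):
        node, i = ("num", int(tokens[i])), i + 1
        while i < len(tokens) and tokens[i] == "*":
            node, i = ("mul", node, ("num", int(tokens[i + 1]))), i + 2
        return node, i
    node, i = term(0)
    while i < len(tokens) and tokens[i] == "+":
        rhs, i = term(i + 1)
        node = ("add", node, rhs)
    return node

def evaluate(ir):
    """A trivial back end that walks the intermediate representation."""
    if ir[0] == "num":
        return ir[1]
    lhs, rhs = evaluate(ir[1]), evaluate(ir[2])
    return lhs + rhs if ir[0] == "add" else lhs * rhs

ir = parse(lex("1+2*3"))
print(ir, "=", evaluate(ir))   # prints the IR, then 7
```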
9.10.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications
relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which
are derived from object databases to overcome the problems met with object
programming and relational databases, and also on the development of hybrid
object-relational databases. They use fast key-value stores and document-oriented
databases with XML to give interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database combines the two related structures.
Physical data models are:
● Semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
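A minimal sketch of a document-oriented key-value store of the kind suggested above; the keys and fields are invented, and JSON serialisation stands in for the XML interchange the paper mentions:

```python
import json, time

store = {}   # the key-value store itself

def put(key, document):
    """Store a document under a key, stamping it for versioning."""
    document["timestamp"] = time.time()
    store[key] = json.dumps(document)    # documents held as serialised text

def get(key):
    return json.loads(store[key])

put("entity:42", {"name": "sensor", "type": "temperature",
                  "position": [51.5, -0.1]})
print(get("entity:42")["name"])          # sensor
```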
9.10.4 Implementation
9.10.4.1 Introduction
The implementation stage of the IoT studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.10.4.2 Learning Theory
9.10.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can
be done by observation or by training. There are two different types of technique – the
inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σ_{i=1..n} −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1,
… , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σ_{i=1..v} ((pi + ni)/(p + n)) · I(pi/(pi + ni), ni/(pi + ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
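These formulas translate directly into code; the following sketch (with invented counts) computes the entropy and the information gain for a binary attribute:

```python
import math

def entropy(p, n):
    """I(p/(p+n), n/(p+n)) from the formula above; 0 log 0 is taken as 0."""
    total = p + n
    return -sum((c / total) * math.log2(c / total) for c in (p, n) if c)

def information_gain(p, n, subsets):
    """IG(A) = I(p/(p+n), n/(p+n)) - remainder(A).

    subsets is a list of (pi, ni) pairs, one per distinct value of the
    chosen attribute A; the data below is illustrative only.
    """
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni)
                    for pi, ni in subsets)
    return entropy(p, n) - remainder

# 6 positive / 6 negative examples split by a binary attribute:
print(information_gain(6, 6, [(4, 1), (2, 5)]))   # about 0.196 bits
```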
Learning viewed as Bayesian updating of a probability distribution over the
hypothesis space uses predictions of a likelihood-weighted average over the hypotheses
to assess the results, but this can be too problematic. This can be overcome with
maximum a posteriori (MAP) learning, choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis plus bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis, giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is
intractable, MAP learning balances complexity with accuracy on the training data, and
maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choosing a parametrized family of models to describe the data requires substantial
insight and sometimes new models.
2. Writing down the likelihood of the data as a function of the parameters may require
summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Finding the parameter values such that the derivatives are zero may be hard or
impossible; modern optimization techniques do help.
9.10.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and its accuracy and can be
defined by the function F1 as
F1(n0, n∞, D) = [n0(1 − f^aK)·2^(1−D) + n∞(Gs + Gf)·f^KT·(1 − f^aDK)]
              / [(1 − f^aK)·2^(1−D) + (Gs + Gf)·f^KT·(1 − f^aDK)]
where Gs is the reinforcement for each successful scan
Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0 < f < 1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 − R1)·F1(u0, u∞, D) + R1·F1(u′0, u′∞, D − D1)
to define the modification resulting from changing the work by a proportion R1 after D1
applications out of a total training of D applications; u0 applies to the untrained user, u∞
to the fully trained user, and u′ are the values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower
operation speeds and increased randomness in probabilities and search scan, following
inverted-U graphs from ergonomics.
Thus:
uij = uijmax(1 − U1(m − m1)²) + uijmin·U1(m − m1)²
where uij has minimum value uijmin and maximum value uijmax, m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
Using these formulae we find that the user should be experienced, particularly in the
specialised field of the system. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should have aptitude and fast recall.
9.10.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts, then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been
processed; other language needs to be retained for later times. These aspects vary from
person to person depending on their background, and depending on that background the
language will be understood in different ways.
9.10.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i
has pi possible values aij for j = 1 to pi. We find that there are two types of value. The
first case is numeric and the second kind is a classification value such as yes or no. On
many occasions we find that we need the condition "don't know" with classification
when the value cannot be specified. The value of each characteristic can change over
a set of time periods so that at period k the characteristic has the value bik, which can
take one of the pi values ranging over ai1 ... aipi. The values bik form the profile of the
system at period k for all the n characteristics and the variation of a characteristic i
over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k of cikl for each of r elements, then the "don't
know" decoded profile value can be calculated by:
bik = (Σ_{l=1..r} cikl) / r
Statistics can be calculated for a system from the values of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing mode can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for i
= q + 1, ..., n.
In revision mode the student suggests values of fi; when we are assessing, the
computer specifies the values of q and fi, the student supplies the fi, and the
computer performs the check as stated above.
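A small sketch of the profile calculations with invented data: resolving a "don't know" value by averaging the known values, and case (a) above, finding the period k minimising Σ|bik − cikl|:

```python
# profiles[k][i] = b_ik, the value of characteristic i at time period k.
profiles = [
    [0.2, 1.0, 3.0],   # period 0
    [0.4, 1.0, 2.5],   # period 1
    [0.9, 0.0, 2.0],   # period 2
]

def resolve_dont_know(known_values):
    """b_ik for a "don't know" value: the mean of the r known values c_ikl."""
    return sum(known_values) / len(known_values)

def nearest_period(element):
    """Case (a): given c_ikl for all i, find k minimising sum |b_ik - c_ikl|."""
    distance = lambda k: sum(abs(b - c) for b, c in zip(profiles[k], element))
    return min(range(len(profiles)), key=distance)

print(resolve_dont_know([2.5, 3.1, 2.8]))   # 2.8
print(nearest_period([0.5, 0.9, 2.4]))      # pseudo time period 1
```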
9.10.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical 'and' operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical 'or' operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units, we use
the dependent forms of the 'and' and 'or' logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the values for the locality, and the general
statistics for the global system.
If any error is found then it is reported as a device stack dump with its position; it is then
evaluated with respect to time, device, device type and position, and after review the data
and processing structures are adjusted.
9.10.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information, representing discrete objects as
vector geometry and continuous fields as raster images. Displays can illustrate and
analyse features and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.10.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the function
for empty data values. They also summarize relations of the variables. Extrapolation
takes the fitted curve to calculate values beyond the range of the observed data, and
carries uncertainty from which particular curve has been determined. Curve fitting
relies on various types of constraints such as a specific point, angle, curvature or other
higher order constraints, especially at the ends of the points being considered. The
number of constraints sets a limit on the number of combined functions defining the
fitted curve; even then there is no guarantee that all constraints are met or that the
exact curve is found. Curves are assessed by various measures, a popular procedure
being the least squares method, which measures the deviations of the curve from the
given data points. With language processing it is found that affine matrix
transformations help deal with problems of translation and different axes.
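As an illustration of the least squares method and extrapolation (the data points are invented), numpy's polynomial fitting minimises the squared deviations described above:

```python
import numpy as np

# Least-squares fit of a quadratic to noisy points, then extrapolation
# beyond the observed range.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])

coeffs = np.polyfit(x, y, deg=2)        # minimises squared deviations
fit = np.poly1d(coeffs)

residuals = y - fit(x)                  # a simple measure of uncertainty
print("coefficients:", coeffs)
print("interpolated f(2.5):", fit(2.5))
print("extrapolated f(6.0):", fit(6.0)) # beyond the data: larger uncertainty
```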
If any error is found then an error report is generated as a device stack dump with its
position, then evaluated with respect to time, device, device type and position, and
after review the system structure is modified appropriately.
9.10.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.10.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of
the system from the repository and performs a build and a set of automated tests to
ensure that their environment is valid for update. They perform their update work and
rebuild the system using the build server, compiling binaries and generating
documentation, website pages, statistics and distribution media, with integration and
deployment into a scalable clone of the production environment through service
virtualization for dependences. It is then ready to run a set of automated tests
consisting of all unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to confirm that it behaves as it should. The
developer resubmits the updates to the repository, which triggers another build process
and tests. The new updates are committed to the repository when all the tests have
been verified, otherwise they are rolled back. At that stage the new system is available
to stakeholders and testers. The build process is repeated periodically with the tests to
ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of
local changes. By collecting metrics, information can be accumulated on code coverage,
code complexity and features completed, concentrating on functionality, quality code
and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.10.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system that
can be released at any time. It does the build, test, and release phases faster and more
frequently, reducing the cost, time, and risk of delivered changes with small incremental
updates. A simple and iterable deployment process is important for continuous
delivery.
It uses a deployment pipeline to give visibility, feedback, and continual deployment. The
visibility analyses the activities, viz. build, deploy, test, and release, and reports the
status to the development team. The feedback informs the team of problems so that
they can be resolved quickly. Continual deployment uses an automated process to
deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life-cycle management.
It improves time to market, productivity and efficiency, product quality, customer
satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.10.4.9 Virtual Reality
Virtual reality simulates the user's presence in an environment and their interaction
through sight, touch, hearing, and smell. It uses a screen or a special headset to
display sight and sound information. Input is made through standard computer input,
sight tracking or tactile information. Other technology such as remote communication,
artificial intelligence and spatial data assists the technology.
If any error is found then an error report is generated and displayed as a device stack
dump with its position, then evaluated with respect to time, device, device type and
position, and after review the system structure is modified appropriately.
9.10.5 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted
of 22 further sub-sections reflecting the 19 theories that are helpful. They are search
theory, network theory, Markov theory, algebraic theory, logic theory, programming
language theory, geographic information systems, quantitative theory, learning theory,
statistics theory, probability theory, communications theory, compiler technology
theory, database technology, curve fitting, configuration management, continuous
integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how good the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. They should be good workers (accurate, efficient, good
memory, careful, precise, fast learners) who are able to settle to work quickly and
continue to concentrate for long periods. They should use memory rather than
documentation. If forced to use documentation, they should have supple joints and long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle, within the range of movement and concentrated in the
fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results, with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and
after review adjust rule structure.
Network analysis for entities, services, standards, techniques and communications takes
the properties of the algebraic and logic theory and views them in a different light, with
the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
which nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and the
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful,
precise, fast learners) who are able to settle to work quickly and continue to concentrate
for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They
start with a set of basic concepts of entities, services, standards, technique and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning
process and then use the minimum “distance” to select the system part from a feature
list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements

System Element     Number of System Elements
Entities           Number of Entities in the System
Services           Number of Services in the System
Standards          Number of Standards in the System
Techniques         Number of Techniques in the System
Communications     Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standards, the probability of correctness is improved by the use of formal
standard rules.
● For techniques, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, with recording and
reporting on the baselines and audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds all of the generated system and documentation and runs
automated unit and integration (defect or regression) tests with static and dynamic
tests, measuring and profiling performance to ensure that the environment is valid. The
trigger points are before and after update and at release to the production system,
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity and
features completed, concentrating on functionality, quality code and team momentum.
In continuous delivery, the development and deployment activity is made smaller by
automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure
data values which are not affected by position, e.g. the general description of a
hardware type. The other is dependent on position, e.g. a hardware unit in the network.
The data comprises discrete objects (vector) and continuous fields (raster). It enables
entities to be positioned, monitored, analysed and displayed for visualization,
understanding and intelligence when combined with other technologies, processes, and
methods.
Virtual reality simulates the user's presence in an environment and their interaction
through sight, touch, hearing, and smell. Input is made through standard
computer input, sight tracking or tactile information. Other technology such as remote
communication, artificial intelligence and spatial data assists the technology. In IoT
we use the technology to control all hardware and routing entities and perform
remedial action when this is required.
Programming language theory and media technologies give us the rules for a formalised
standard and technique for defining the language. We use the network model described
above to give a basis for the collection of data about the system. We discover we need
to set a priority on the rules for evaluating units and processes. Object oriented
programming gives us the concept of scope for meaning, objects, properties, methods
with arguments, the "this" operator and the concepts of synonyms, generalisation and
specification. Overloading of definitions allows for meaning to change according to
context. Replicating actions use iterations under different cases. Conditional
compilations, macros and packages-libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value relative to other services, and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), and priority relative to
other techniques. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for
the source (input) language, the intermediate language and the target (output) language.
They also give priorities for how the entities, services, standards, techniques and
communications are processed based on the learning, probability, network analysis and
Markov theory for the sections. If an element is not recognised then the input element
is queried to see if there is an error or the element should be added to the appropriate
data set. An escape sequence can be used to extend the data set in conjunction with
the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system; the
entities must speak the same language throughout. Entities consist of user applications,
items of hardware or the messages passing between source and destination. Systems
are made up of computers, terminals or remote sensors. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
In a protocol architecture the task of communication is broken up into modules, which
are entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate, and control information is added to user
data.
Each element gives priorities for how the entities are processed based on the learning,
probability, network analysis and Markov theory for the entities sections. If an entity is
not recognised then it is passed to a recovery process based on repeated analysis of
the situation by some parallel check. If the entity is not recovered, the entity is queried
to a human to see if there is an error or the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created
once, and is added to, changed and removed infrequently as the system is extended. It
is queried frequently, for every element that is read. The definition set is updated
(inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
9.11 Ciphers
9.11.1 Introduction
This section reviews how some other technologies can contribute to IoT security. It
consists of 22 further sub-sections reflecting the 19 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now. They
are reflected as theoretical studies, analysis and implementation for ciphers.
9.11.2 Theoretical Studies
9.11.2.1 Introduction
The theoretical studies for IoT security consist of search theory, quantitative theory,
network theory, communications theory, Markov theory, probability theory and
programming language theory.
9.11.2.2 Search Theory
We have studied a theory for systems based on the operations research technique
known as the theory of search. We have found that the user should be experienced,
particularly in the specialised field of the system and its reference documentation. The
user should be a good worker (accurate, efficient, good memory, careful, precise, fast
learner) who is able to settle to work quickly and continue to concentrate for long
periods. They should use memory rather than documentation. If forced to use
documentation, they should have supple joints and long light fingers which allow pages
to slip through them when making a reference. Finger motion should be kept gentle,
within the range of movement and concentrated in the fingers only. The user should
have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The theory has resulted in a measurable set of requirements and a method of assessing
how well the system, the system user and the documentation meet the
requirements.
If no target is found then the error is reported and after review the target is added to
the system.
9.11.2.3 Quantitative Theory
Software physics, introduced by Halstead, led to the relations for programs and
languages with deviations due to impurities in programs:
If n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of occurrences of operators
N2 = total number of occurrences of operands
then N1 = n1 log n1
N2 = n2 log n2
If n = program vocabulary
N = program length
then n = n1 + n2
n* = n
N = N1 + N2
N* = n1 log n1 + n2 log n2
If V = actual program volume
V* = theoretical program volume
then V = N log n
V* = N* log n*
If L = V*/V = program level
λ = LV* = programming language level
S = Stroud number, then
m = V/L = number of mental discriminations
d = m/S = development time.
Mohanty showed that the error rate E for a program is given by
E = (n1 log n)/(1000 n2)
The mean free path theorem derives the relations:
P(m,C) = C^m/(m! e^C) = probability of hitting the target m times for a coverage ratio C.
C = n·a·s·t/z = coverage ratio = ratio between the area covered by the search process
and the search area
a = search range
z = search area size
m = number of hits that are successful
n = number of attempts
s = speed at which the searcher passes over the search area
t = time for which the searcher passes over the search area
p = probability of being eliminated each time it is hit
P = total value of probability
N = total number of attempts
where x = and D =
M = total number of hits
S = total speed of movement
T = total time of movement
Z = total search area
A = total hit range
P1 = average value of probability
N1 = average number of attempts
where x = and D =
M1 = average number of hits
S1 = average speed of movement
T1 = average time of movement
Z1 = average search area
A1 = average hit range
The Z equation with the relation between the search effort and the search results over
an average search area explains software physics in terms of actions of search.
The N relation shows that the number of targets can be calculated as the average number
of attempts in a particular search area. Specifically we can estimate the number of checks
n that we can expect to apply to find m errors in a text of size A, or the number of rules n
that we expect to apply when writing a text of m units in a language of size z. Conversely
the M relation gives us the expected number of errors or the number of statements when
we apply a specific number of checks or produce a number of ideas.
The A, S and T relations show that there are simple relations between the expected and
the actual values for the range, the speed and the time for a search.
In each case we see that the effort needed to be expended on the search is proportional
to the search area and decreases with the elimination probability raised to the search
number. This means that we need to consider the total effort in all our calculations.
The P relation shows that the probability reduces in relation to the number of hits whilst
the s, t and a relations reflect the relations between S, T and A described earlier, m
shows the normalised result for M and n is rather too complicated to envisage generally.
P(m,m) is a function of m, and the function mP(m,m) has interesting coincidences of
values.
Variable     Value   Value
m            0       6.4
mP(m,m)      0       1

mP(m,m) = 0 when m = 0 or m = -0.5665
The negative value is a minimum whereas the zero value is an inflexion point which is
not a genuine optimal value.
Thus the best policy for finding a target m times is to search the whole area m times, and
m^(m+1)·e^(−m)/m! is an increasing function for m increasing above zero, corresponding
to a measure of complexity with a value of 1 for m = 6.4 approximately, or the lucky
seven.
If any error is found then it is reported as a device stack dump with its position; it is then
evaluated with respect to time, device, device type and position, and after review the data
and processing structures are adjusted.
9.11.2.4 Network Theory
The network theory model reflects the properties of the algebraic and logic theory
sections of this paper. The network system is based on entities, services, standards,
techniques and communications. There are six validation cases discussed in this paper.
They are
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing.
f. There is an optimal way for structuring the system to maximise the ease of look up.
We examine the algorithms of each of the cases in the following subsections.
9.11.2.4.1 Well Structured
Let us consider a system where a unit is connected to other units. What will the source
of the connection be with the other units? Will it be with one particular unit or another?
There will be confusion and the well structured criterion described in section 3.2.3 would
highlight this case in the definition of the system by the fact that there is a connection.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.11.2.4.2 Consistency
A unit is accessed from two other different units. What interpretation will be placed on
the meaning by the recipient unit? The consistency condition under section 3.2.3 will
detect the problem within the system.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
9.11.2.4.3 Completeness
From the unit viewpoint, we can assume that there are units being defined but unused.
The units are a waste and would cause confusion if they are known. The completeness
prerequisite will eliminate this difficulty.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
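A minimal sketch of the three checks above, assuming the network is held as a mapping from each unit to the set of units it connects to; the unit names are hypothetical.

def validate(network):
    """Well-structured, consistency and completeness checks over a
    network held as {unit: set of units it connects to}."""
    errors = []
    declared = set(network)
    referenced = {t for targets in network.values() for t in targets}
    # Well structured: every connection must point at a declared unit.
    for unit, targets in network.items():
        for t in targets:
            if t not in declared:
                errors.append(f"{unit} -> {t}: connection to an undeclared unit")
    # Consistency: flag units accessed from two different units, whose
    # interpretation then needs review.
    sources = {}
    for unit, targets in network.items():
        for t in targets:
            sources.setdefault(t, set()).add(unit)
    for t, srcs in sources.items():
        if len(srcs) > 1:
            errors.append(f"{t}: accessed from {sorted(srcs)}")
    # Completeness: flag units defined but never connected at all.
    for unit in declared:
        if unit not in referenced and not network[unit]:
            errors.append(f"{unit}: defined but unused")
    return errors

print(validate({"a": {"b"}, "b": {"c"}, "c": set(), "d": set()}))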
9.11.2.5 Communications Theory
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management.
Protocols are used for communications between entities in a system, and the entities must speak the same language. Entities consist of user applications, e-mail facilities and terminals.
Systems are computer, terminal or remote sensor. Key elements of a protocol are
standard (data formats, signal levels), technique (control information, error handling)
and timing (speed matching, sequencing).
Protocol architecture is the task of communication broken up into modules. At each
layer, protocols are used to communicate and control information is added to user data
at each layer.
A formal language is a set of strings of terminal symbols. Each string in the language
can be analysed or generated by the grammar. The grammar is a set of rewrite rules over terminals and non-terminals. Grammar types are regular, context-free, context-sensitive and recursively enumerable, with natural languages probably context-free and parsable in real time. Parse trees demonstrate the grammatical structure of a sentence.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.11.2.6 Markov Theory
Using the algorithms in the previous sub-section on network theory we can determine
what nodes have flow through them and which do not. We can find the edges that are
used and those unused. We can ascertain what the flow is between the nodes and
which are single entry or single exit blocks of nodes.
If we make a node which is to be taken as the error sink, we can use the extra edges to discover the probability of error at different parts of the network system and the size of error at each point of the Markov process, and the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case one of these is classified as nodes and the others as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
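A sketch of this construction, assuming the flows are held as a row-stochastic transition matrix whose last state is the added error sink and that numpy is available; the matrix values are illustrative only.

import numpy as np

# States 0 and 1 are ordinary nodes; state 2 is normal completion and
# state 3 is the error sink, both absorbing.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
Q, R = P[:2, :2], P[:2, 2:]          # transient-to-transient / to-absorbing
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits
B = N @ R                            # absorption probabilities
print("error probability from each node:", B[:, 1])   # [0.18, 0.1]
print("total error rate of the network:", B[0, 1])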
9.11.2.7 Probability Theory
Probability is a measure of the likeliness that an event will occur.

Summary of probabilities
Event       Probability
A           P(A)
not A       P(¬A)
A or B      P(A ∨ B)
A and B     P(A ∧ B)
A given B   P(A │ B)

When we consider the probability of an event in system research we are talking about
events, recurring events or choices of event. In the case of sequences of occurrences
we have the probability of selecting the correct unit. We use the logical and operator
for selecting groups of entities based on the recurrence of selecting a unit. When we
are considering the correctness of the alternatives of units in a service we use the
logical or operation. When we come across a situation where one unit for a particular
system implies that we will always have to use specific further units we will use the
dependent forms of the and and or logical operations. The structures of a system imply
a network form and we can use the techniques described in the part on network
structures.
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
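A minimal sketch of these combination rules, assuming independence except where a conditional probability is supplied for the dependent form.

def p_and(pa, pb):
    return pa * pb                 # A and B, independent units in sequence

def p_or(pa, pb):
    return pa + pb - pa * pb       # A or B, alternative units in a service

def p_and_dependent(pa, pb_given_a):
    return pa * pb_given_a         # dependent form: P(A)·P(B|A)

# Three unit selections in sequence, each correct with probability 0.9,
# and a choice between two alternatives with probabilities 0.9 and 0.8:
print(p_and(p_and(0.9, 0.9), 0.9))   # 0.729
print(p_or(0.9, 0.8))                # 0.98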
9.11.2.8 Programming Language Theory
Programming language theory gives us the rules for formalised standard and technique for the definition of a programming language in terms of a formal language. From media technologies we find a similar kind of definition. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules for evaluating units and
processes. Object oriented programming gives us the concept of scope for meaning,
objects, properties, methods with arguments, the "this" operator and the concepts of
synonyms, generalisation and specification. Overloading of definitions allows for
meaning to change according to context. Replicating actions use iterations under
different cases. Conditional compilations, macros and packages-libraries assist the use
of previous work.
If an object, property or method is not found then the error is reported as a stack dump and after review the language structure is adjusted.
9.11.3 Analysis
9.11.3.1 Introduction
The analysis portion of the language processing is made up of algebraic theory, logic
theory, compiler technology theory and database technology.
9.11.3.2 Algebraic Theory
We have used the concept from algebraic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.11.3.3 Logic Theory
We have used the concept from logic theory to give us a set with elements and
functions to be a basis of a system. The basic elements are derived from entities,
services, standards, techniques and communications. We restrict these basic elements
by specifying what is allowed. We apply rules of combination to the elements to form
larger elements that we classify as systems or subsystems for which we have rules to
say what is correct and what is erroneous. We iterate on the combination for more
complex elements to be validated against standards and techniques.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and services and combinations of them. Relations are derived from another set of operations which give links such as generalisation and specification based on
properties of the entities and services. Other parts of entities and
services/communications are ways of defining properties of objects or operations
whilst some apply to the scope of entities, services, standards, techniques and
communications.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
9.11.3.4 Compiler Technology Theory
A compiler translates high-level language source programs to the target code for
running on computer hardware. It follows a set of operations: lexical analysis, pre-processing, parsing, semantic analysis (standard-directed translation), code generation and optimization. A compiler-compiler is a parser generator which helps
create the lexer and parser.
A pre-processor can accompany a compiler and is usually a macro pre-processor for
the programming language. It provides the ability for the inclusion
of files, macro expansions, conditional compilation and line control. The pre-processor directive language is only weakly related to the programming language. The pre-processor is often used to include other files. It replaces the directive line with the text of the file. Conditional compilation directives allow the inclusion or exclusion of lines of code. Macro definition and expansion is provided by the definition of sets of code which can be expanded where required at various points in the text of the code unit.
The Production Quality Compiler-Compiler Project of Carnegie Mellon University
introduced the terms front end, middle end, and back end. The front end verifies
standard and technique, and generates an intermediate representation. It generates
errors and warning messages. It uses the three phases of lexing, parsing, and semantic
analysis. Lexing and parsing are syntactic analysis for services and phrases and can be
automatically generated from the grammar for the language. The lexical and phrase grammars leave context-sensitivity to be handled at the semantic analysis phase, which can be automated using attribute grammars. The middle end does some
optimizations for the back end. The back end generates the target code and performs
more optimisation.
An intermediate language is used to aid in the analysis of computer programs
within compilers, where the source code of a program is translated into a form more
suitable for code-improving transformations before being used to generate object code
for a target machine. An intermediate representation (IR) is a data structure that is
constructed from input data to a program, and from which part or all of the output data
of the program is constructed in turn. Use of the term usually implies that most of
the information present in the input is retained by the intermediate representation, with
further annotations or rapid lookup features.
If an element or function is not found then the error is reported as a stack dump and after review the processing structure is adjusted.
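By way of illustration only, the toy front end below lexes and parses a hypothetical grammar of sums and products into a tuple-based intermediate representation; a real front end would add semantic analysis over the same tree.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def lex(src):
    # Lexing: turn the source text into (kind, value) tokens.
    for number, op in TOKEN.findall(src):
        yield ("NUM", int(number)) if number else ("OP", op)
    yield ("EOF", None)

def parse(tokens):
    # Parsing: build the intermediate representation (a nested tuple tree).
    tokens = list(tokens)
    pos = 0
    def peek():
        return tokens[pos]
    def term():
        nonlocal pos
        kind, value = tokens[pos]; pos += 1
        if kind != "NUM":
            raise SyntaxError(f"expected number, got {value!r}")  # front-end error message
        node = ("num", value)
        while peek() == ("OP", "*"):
            pos += 1
            node = ("mul", node, term())
        return node
    def expr():
        node = term()
        nonlocal pos
        while peek() == ("OP", "+"):
            pos += 1
            node = ("add", node, term())
        return node
    return expr()

print(parse(lex("1 + 2 * 3")))   # ('add', ('num', 1), ('mul', ('num', 2), ('num', 3)))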
9.11.3.5 Database Technology
Databases and database management systems are classified by the application,
database model, the execution computer, the query language and the internal
engineering, reflecting performance, scalability, resilience and security.
The database is an aggregation of data to support the modelling of IoT studies. The
applications are bibliographic, document-text, statistical and multimedia objects. The
database management system must support users and other applications to collect and
analyse the data for IoT processes. The system allows the definition (create, change
and remove definitions of the organization of the data using a data definition language
(conceptual definition)), querying (retrieve information usable for the user or other
applications using a query language), update (insert, modify, and delete of actual data
using a data manipulation language), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities (physical
definition)) of the database. The database model most suitable for the applications relies on post-relational databases (e.g. NoSQL/MongoDB or NewSQL/ScaleBase), which are derived from object databases to overcome the problems met with object programming and relational databases, and also on the development of hybrid object-relational databases.
They use fast key-value stores and document-oriented databases with XML to give
interoperability between different implementations.
Other requirements are:
● event-driven architecture database
● deductive database
● multi-database
● graph database
● hypertext hypermedia database
● knowledge base
● probabilistic database
● real-time database
● temporal database
Logical data models are:
● object model
● document model
● object-relational database combines the two related structures.
Physical data models are:
● Semantic model
● XML database
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.11.4 Implementation
9.11.4.1 Introduction
The implementation stage of language studies reflects learning theory, statistics
theory, geographic information systems, curve fitting, configuration management,
continuous integration, continuous delivery and virtual reality.
9.11.4.2 Learning Theory
9.11.4.2.1 General Methods
Learning is performed by finding how to improve the state in some environment. It can be done by observation or by training. There are two different types of technique – the inductive method and the Bayesian procedure.
Inductive learning uses a set of examples with attributes expressed as tables or a
decision tree. Using information theory we can assess the priority of attributes that we
need to use to develop the decision tree structure. We calculate the information
content (entropy) using the formula:
I(P(v1), … , P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples this would
give:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
The information gain for a chosen attribute A divides the training set E into subsets E1,
… , Ev according to their values for A, where A has v distinct values.
remainder(A) = Σi=1..v ((pi+ni)/(p+n)) I(pi/(pi+ni), ni/(pi+ni))
The information gain (IG) or reduction in entropy from the attribute test is shown to be:
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Finally we choose the attribute with the largest IG.
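The three formulas transcribe directly into code. A sketch, assuming the training examples are held as (attribute-value mapping, positive?) pairs; the example attribute and values are hypothetical.

import math

def entropy(p, n):
    # I(p/(p+n), n/(p+n)); terms with a zero count contribute nothing.
    total = p + n
    return -sum((x / total) * math.log2(x / total) for x in (p, n) if x)

def information_gain(examples, attribute):
    p = sum(1 for _, pos in examples if pos)
    n = len(examples) - p
    remainder = 0.0
    for v in {e[attribute] for e, _ in examples}:
        subset = [(e, pos) for e, pos in examples if e[attribute] == v]
        pi = sum(1 for _, pos in subset if pos)
        ni = len(subset) - pi
        remainder += (pi + ni) / (p + n) * entropy(pi, ni)
    return entropy(p, n) - remainder   # IG(A) = I(...) - remainder(A)

examples = [({"patrons": "full"}, False), ({"patrons": "some"}, True),
            ({"patrons": "some"}, True), ({"patrons": "none"}, False)]
print(information_gain(examples, "patrons"))   # pick the largest IG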
Learning viewed as a Bayesian updating of a probability distribution over the
hypothesis space uses predictions of likelihood-weighted average over the hypotheses
to assess the results but this can be too problematic. This can be overcome with the
maximum a posteriori (MAP) learning choosing to maximise the probability of each
hypothesis for all outcomes of the training data, expressing it in terms of the full data
for each hypothesis and taking logs to give a measure of bits to encode data given the
hypothesis and bits to encode the hypothesis (minimum description length). For large
datasets, we can use maximum likelihood (ML) learning by maximising the probability
of all the training data per hypothesis giving standard statistical learning.
To summarise: full Bayesian learning gives the best possible predictions but is intractable; MAP learning balances complexity with accuracy on training data; and maximum likelihood assumes a uniform prior and is satisfactory for large data sets.
1. Choosing a parametrized family of models to describe the data requires substantial
insight and sometimes new models.
2. Writing down the likelihood of the data as a function of the parameters may require
summing over hidden variables, i.e., inference.
3. Write down the derivative of the log likelihood with respect to each parameter.
4. Finding the parameter values such that the derivatives are zero may be hard or
impossible; modern optimization techniques do help.
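A worked instance of the four steps for the simplest case, a Bernoulli model over p positive and n negative outcomes: the log likelihood is L(t) = p log t + n log(1−t), its derivative p/t − n/(1−t) is zero at t = p/(p+n), and that closed form is the whole optimisation.

def ml_estimate(p, n):
    # Step 4 in closed form: dL/dt = p/t - n/(1-t) = 0 gives t = p/(p+n).
    return p / (p + n)

print(ml_estimate(p=7, n=3))   # 0.7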
9.11.4.2.2 Theoretical Studies
The training of the users affects the speed of the scan and accuracy and can be defined
by the function F1 as

F1(n0, n∞, D) = [n0 (1 − f^aK)^(2(1−D)) + n∞ (Gs + Gf) f^KT (1 − f^aDK)] / [(1 − f^aK)^(2(1−D)) + (Gs + Gf) f^KT (1 − f^aDK)]

where Gs is the reinforcement of each successful scan


Gf is the reinforcement for each erroneous scan
a is the reinforcement rate
f is the extinction rate for memory (0<f<1)
T is the time over which analyses are made
K is the power law describing extinction of memory
When part of the process is standard we have
F2(u0, u∞, R1, D1) = (1 − R1) F1(u0, u∞, D) + R1 F1(u′0, u′∞, D − D1)
to define the modification resulting from changing the work by a proportion R1 after D1 applications out of a total training of D applications, where u0 applies for the untrained user, u∞ for the fully trained user, and u′0, u′∞ are the corresponding values under the changed regime.
The effects of exhaustion on the performance of the user are demonstrated by slower operation speeds and increased randomness in probabilities and search scan, following inverted-U curves from ergonomics.
Thus:
uij = uijmax (1 − U1 (m − m1)^2) + uijmin U1 (m − m1)^2
where uij have minimum values uijmin and maximum values uijmax. m1 is the value of m
giving maximum productivity and U1 is a normalising factor dependent on the energy
consumed in the process.
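A sketch of the inverted-U relation, with illustrative values assumed for uijmin, uijmax, m1 and the normalising factor U1.

def performance(m, u_min=0.2, u_max=1.0, m1=5.0, U1=0.04):
    # uij = uijmax(1 - U1(m - m1)^2) + uijmin U1(m - m1)^2
    w = U1 * (m - m1) ** 2
    return u_max * (1 - w) + u_min * w

for m in (0, 2.5, 5, 7.5, 10):
    print(m, round(performance(m), 3))   # peaks at m = m1, falls either side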
Using these formulae we find that the user should be experienced, particularly in the specialised field of the system. They should be good workers (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
9.11.4.2.3 Child Learning
When a child starts learning, they start with a set of basic concepts of picture/sound
and develop written script from that position. They start applying rules for basic
concepts then combinations of concepts through rules to meaning. They apply a
bottom up analysis as in a compiler to give us rules to add to the knowledge base. The
priority of the rules gives them ways of catching idioms. They develop rules to give
them generalisation e.g. animals and specification e.g. white tailed frog. Nouns define
objects, verbs actions, pronouns the replacement for nouns. Conjunctions give ways of
replicating actions under different situations. Other parts of speech are ways of
defining specifics for objects or actions.
Some language is used for pleasure and can be forgotten as soon as it has been processed; other language needs to be retained for later times. These aspects vary from person to person depending on their background, and depending on that background will be understood in different ways.
9.11.4.2.4 Medical Systems
We assume that an element of a system has n characteristics so that characteristic i has pi possible values aij for j = 1 to pi. We find that there are two types of value. The first case is numeric and the second kind is a classification value such as yes or no. On many occasions we find that we need the condition "don't know" with classification when the value cannot be specified. The value of each characteristic can change over a set of time periods so that at period k the value of characteristic i is bik, which can have one of the pi values ranging over ai1 … aipi. The values bik reflect the profile of the system at period k for all the n characteristics and the variation of a characteristic i over time periods k.
To resolve "don't know" values in the profile, if an element l has a "known" decoded
value for a characteristic i at time period k as cikl for r elements then the "don't know"
decoded profile value can be calculated by:
bik = (Σl=1..r cikl) / r
Statistics can be calculated for a system from the value of the profile characteristic bik.
When we accumulate data for characteristics of elements over time periods for a
system we can use the data to predict various attributes. We can use the system data
to extrapolate the trend of the values of the profile. If we add a new element to the set
we can predict its pseudo time period from the profile of the data. We can use that time
period to forecast the development of values of the characteristics of the new element
over time. We can assess from the library of data the most effective form of calculation
for the system and express these actions mathematically by
a. given cikl for all i we can find k so that |bik - cikl| is a minimum
b. given bik for all i we can find j so that |bik - aij| is a minimum
c. given bik for all i and all k then these tend to values di where di are limit values for
characteristic i.
The concept can be used in two different ways in the educational field – the browsing
mode and the revision mode. The browsing phase can be expressed as specifying
characteristic values ei for i = 1 to q and finding the other characteristic values fi for i = q + 1, …, n.
In revision mode the student suggests values of fi; when assessing, the computer specifies the values of q, the student specifies the fi, and the computer performs the check as stated above.
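A sketch of the profile calculations, assuming each profile row holds the bik values for one time period and None marks "don't know"; the numbers are illustrative.

def resolve(values):
    """b_ik = sum of the r known values c_ikl divided by r."""
    known = [v for v in values if v is not None]
    return sum(known) / len(known) if known else None

def nearest_period(profile, c):
    """Given values c_i for an element, find k minimising sum |b_ik - c_i|."""
    def distance(row):
        return sum(abs(b - x) for b, x in zip(row, c))
    return min(range(len(profile)), key=lambda k: distance(profile[k]))

# profile[k][i] = b_ik for characteristic i at period k
profile = [[1.0, 0.2], [2.0, 0.4], [3.0, 0.9]]
print(resolve([1.9, None, 2.1]))             # 2.0
print(nearest_period(profile, [2.1, 0.5]))   # period 1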
9.11.4.3 Statistics Theory
We use the network model described above to give a basis for the collection of data
about the system. When we consider the occurrence of an event in system research we
are talking about events, recurring events or choices of event. In the case of
sequences of occurrences we have the count of using a particular unit. We use the
logical and operator for using groups of units based on the recurrence of using a unit.
When we are considering the correctness of the alternatives of units in a system we
use the logical or operation. When we come across a situation where one unit for a
particular system implies that we will always have to use specific further units we will
use the dependent forms of the and and or logical operations. The structures of
systems imply a network form and we can use the methods described in the part on
network structures.
The values show two forms of information: the values for the locality, and the general statistics for the global system.
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
9.11.4.4 Geographic Information Systems
A geographic information system is a database system for holding geographic data. It
collects, processes and reports on all types of spatial information for working with
maps, visualization and intelligence associated with a number of technologies,
processes, and methods. GIS uses digital information representing discrete objects as vector graphics and continuous fields as raster images. Displays can illustrate and analyse features and enhance descriptive understanding and intelligence.
If a unit is not found then an error report is generated as a device stack and position
and after review the GIS database is adjusted.
9.11.4.5 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given data points, subject to constraints. It uses two main methods, namely interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of a function where data values are missing. They also summarize relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, with uncertainty due to which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve, and even then there is no guarantee that all constraints are met or the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which minimises the squared deviations from the given data points. With language processing it is found that affine matrix transformations help deal with problems of translation and different axes.
If any error is found then an error report is generated as a device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
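A minimal least-squares fit and extrapolation, assuming numpy and a quadratic model; the data points are illustrative only.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])

coeffs = np.polyfit(x, y, deg=2)   # minimise the sum of squared deviations
fit = np.poly1d(coeffs)

print(fit(2.5))                    # interpolation: estimate a missing value
print(fit(6.0))                    # extrapolation: beyond the data, less certain
residuals = y - fit(x)
print(np.sum(residuals**2))        # the quantity least squares minimises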
9.11.4.6 Configuration Management
Configuration management  requires configuration identification defining attributes of
the item for base-lining, configuration control with approval stages and baselines,
configuration status accounting recording and reporting on the baselines as required
and configuration audits at delivery or completion of changes to validate requirements.
It gives the benefits of easier revision and defect correction, improved performance,
reliability and maintainability, extended life, reduced cost, risk and liability for small
cost compared with the situation where there is no control. It allows for root cause
analysis, impact analysis, change management, and assessment for future
development. Configuration management uses the structure of the system in its parts
so that changes are documented, assessed in a standardised way to avoid any
disadvantages and then tracked to implementation.
If an element or relation is not found then the error is reported as a stack dump and after review the database structure is adjusted.
9.11.4.7 Continuous Integration
Continuous integration uses a version control system. The developer extracts a copy of the system from the repository and performs a build and a set of automated tests to ensure that their environment is valid for update. They perform their update work and rebuild the system using the build server for compiling binaries, generating documentation, website pages, statistics and distribution media, integration, and deployment into a scalable version clone of the production environment through service virtualization for dependencies. The system is then ready to run a set of automated tests consisting of all unit and integration (defect or regression) tests with static and dynamic tests, and to measure and profile performance to confirm that it behaves as it should. The developer resubmits the updates to the repository, which triggers another build process and tests. The new updates are committed to the repository when all the tests have been verified, otherwise they are rolled back. At that stage the new system is available to stakeholders and testers. The build process is repeated periodically with the tests to ensure that there is no corruption of the system.
The advantages are derived from frequent testing and fast feedback on the impact of local changes. By collecting metrics, information can be accumulated on code coverage, code complexity and features complete, concentrating on functional, quality code and team momentum.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
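The trigger logic can be sketched as below; the build, test and repository commands (make build, make test, vcs) are hypothetical stand-ins for whatever tools a real pipeline uses.

import subprocess
import sys

def run(cmd):
    # Run a shell command and report whether it succeeded.
    return subprocess.run(cmd, shell=True).returncode == 0

def integrate():
    # Gate the commit on the build and the full automated test suite.
    if not (run("make build") and run("make test")):
        run("vcs rollback")   # failed: roll back rather than corrupt the trunk
        sys.exit("build or tests failed; update rolled back")
    run("vcs commit -m 'verified build'")   # all tests verified: commit

if __name__ == "__main__":
    integrate()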
9.11.4.8 Continuous Delivery
In continuous delivery, teams produce software in short cycles to give a system
release at any time. It does the build, test, and release phases faster and more frequently, reducing the cost, time and risk of delivered changes with small incremental updates. A simple and iterable deployment process is important for continuous
delivery.
It uses a deployment pipeline to give visibility, feedback and continual deployment. The visibility analyses the activities, viz. build, deploy, test and release, and reports the status to the development team. The feedback informs the team of problems so that they can soon be resolved. Continual deployment uses an automated process to deploy and release any version of the system to any environment.
Continuous delivery automates source control all the way through to production. It includes continuous integration, application release automation, build automation and application life cycle management.
It improves time to market, productivity and efficiency, product quality, customer satisfaction, reliability of releases and consistency of the system with requirements.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
9.11.4.9 Virtual Reality
Virtual reality simulates an environment, the user's presence within it, and interaction through sight, touch, hearing and smell. It uses a screen or a special headset to display sight and sound information. Input is made through standard computer input, sight tracking or tactile information. Other technologies such as remote communication, artificial intelligence and spatial data assist the technology.
If any error is found then an error report is generated and displayed as a device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
9.12 Summary
We have reviewed how some other technologies can contribute to IoT. It has consisted of 22 further sub-sections reflecting the theories that are helpful. They are search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now.
The operations research technique, search theory, gives us a measurable set of
requirements and a method of assessing how well the system, the system user and the
documentation come up to the requirements.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. They should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. They should use their memory rather than documentation. If forced to use documentation, they should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
The system should be standardised, simple, specialised, logically organised, concise,
have minimum ambiguity, have minimum error cases and have partitioning facilities.
The facilities for systems should be modifiable to the experience of the users.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
If no target is found then the error is reported and after review the target is added to
the system.
Algebraic and logic theory use a set of basic elements (entities, services, standards,
techniques, communications). We apply rules of combination to the basic elements to
form larger elements that we classify as entities, services, standards, techniques and
communications. We iterate on the combination for more elements to be validated
against techniques (using recursion) and standards. We have rules to say what is
correct and what is erroneous.
We use valuation rules to classify entities, services, standards, techniques and
communications. The entities, services and communications are classified into parts of
system with standards and techniques. Techniques give meaning to entities and
services and combinations of them. Relations are derived from another set of
operations which give links such as generalisation (standards, techniques) and
specification based on properties of the entities through services.
We use a static set of definitions to specify the entities, services, standards,
techniques and communications of the system to define the language properties and a
dynamic set of definitions to determine the schema for the entities, services,
standards, techniques and communications of the input source. Services process the
dynamic input from a source to give valid results with the rules reflecting the actions of
the system.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
Network analysis for entities, services, standards, techniques and communications takes the properties of the algebraic and logic theory and views them in a different light, with the language entities as nodes and their connections as edges. We have discussed the
following six validation cases:
● The system is well structured
● It is consistent
● It is complete
● It has a way of completing its processes
● There is an optimal way for structuring the system to minimise the time of
processing.
● There is an optimal way for structuring the system to maximise the ease of look
up.
If a node or edge is not found then the error is reported as a stack dump and after review
the network structure is adjusted as appropriate.
Markov processes use the connections of the network analysis model to determine
what nodes have flow through them and which do not. We find the edges that are used
and those unused. We can determine what the flow is between the nodes and
partitioning of the structures through single entry or single exit blocks of nodes.
By the introduction of an error sink node we can use the extra edges to discover what
is the probability of error at different parts in the network system, the size of error at
each point of the Markov process and the error node gives an estimate of the total error
rate of the network.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
Software theory has given us a quantitative basis of an IoT system. At each level
(entities, services, standards, technique, communications), we have applied the
quantitative analysis to estimate sizes of entities, errors, system, etc.
Learning theory has given us an insight into the processes of the changes that are
made to people over the period of training and experience with the system using the
network analysis structure for the system. It has given us estimates for the
improvement to the learning of the language and the attributes of the learner. We have
found that the learner should be experienced, particularly in the specialised field of the
system. They should be good students (accurate, efficient, good memory, careful, precise, fast learners) who are able to settle to work quickly and continue to concentrate for long periods. They should have aptitude and fast recall.
We looked at child learning and the way children develop their use of a system. They start with a set of basic concepts of entities, services, standards, technique and
communications and develop an understanding of the system from that position. They
start applying rules for basic entities, services then combinations of them through
communications, standards and techniques to the system. They develop rules to give
them generalisation e.g. standards, techniques and specification e.g. entity properties.
Each reflects the network analysis section for the system.
As things are added to the system they are defined by their connections through entities,
techniques, standards and communications to generalise, standardise and specify rules
to reflect the network model defined in previous sections. At this stage of the study we
select the network structure with error analysis for the additional part only.
We used the concepts in the medical systems to build a data source from the learning process and then use the minimum “distance” to select the system part from a feature list. At this stage of the study we select the Markov matrix structure with error analysis
for the part only.
Probability has been used to estimate the parts of the usage of the system. The
structures of IoT imply a network form for both the static and dynamic and we can use
the techniques described in the part on network structures. We can back up the
probability with the collection of statistics.
System Elements

Element          Number of System Elements
Entities         Number of Entities in the System
Services         Number of Services in the System
Standards        Number of Standards in the System
Techniques       Number of Techniques in the System
Communications   Number of Communications in the System
We found that:
● For entities, the correctness is improved by the use of services validated by
standards and techniques.
● For services the correctness is improved by the use of techniques and
standards.
● For standard, the probability of correctness is improved by the use of formal
standard rules.
● For technique, the probability of correctness is improved by the use of
standards.
● For communications, the probability of correctness is improved by the use of
services, techniques and standards.
Curve fitting helps illustrate interpolation and extrapolation of sets of values with
different kinds of constraints. It is particularly good for estimates in learning schemes
and for predicting performance based on the statistics collected into the IoT system.
Configuration management identifies item attributes for control, recording and reporting
on the baselines for audits at delivery or completion of changes to validate
requirements. It requires versions or time stamps.
Continuous integration uses version control and automatic triggers to validate stages
of the update process. It builds all generated system and documentation and runs automated unit and integration (defect or regression) tests with static and dynamic tests, and measures and profiles performance to ensure that the environment is valid. The
trigger points are before and after update and at release to the production system
when triggers force commits to the repository or rollback to avoid corruption of the
system. Reports are collected on metrics about code coverage, code complexity, and
features complete concentrating on functional, quality code, and team momentum.
In continuous delivery, the development and deployment activity is made smaller by automating all the processes from source control through to production.
Geographical information systems hold data that fall into two forms. The first is pure data values which are not affected by position, e.g. the general description of a hardware type. The other is dependent on position, e.g. a hardware unit in the network. The data comprises discrete objects (vector) and continuous fields (raster). It enables entities to be positioned, monitored, analysed and displayed for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
Virtual reality simulates an environment, the user's presence within it, and interaction through sight, touch, hearing and smell. Input is made through standard computer input, sight tracking or tactile information. Other technologies such as remote communication, artificial intelligence and spatial data assist the technology. In IoT we use the technology to control all hardware and routing entities and perform remedial action when this is required.
Programming language theory and media technologies give us the rules for formalised standard and technique for defining the language. We use the network model described above to give a basis for the collection of data about the system. We discover we need to set a priority of the rules
collection of data about the system. We discover we need to set a priority of the rules
for evaluating units and processes. Object oriented programming gives us the concept
of scope for meaning, objects, properties, methods with arguments, the "this" operator
and the concepts of synonyms, generalisation and specification. Overloading of
definitions allows for meaning to change according to context. Replicating actions use
iterations under different cases. Conditional compilations, macros and packages-
libraries assist the use of previous work.
The requirements for the IoT data set are:
● object oriented type
● event-driven architecture data set
● hypertext hypermedia data set
● probabilistic data set
● real-time data set
We define a set of base elements as the entities of the system. The entity set has a
name, iteration control, type, identity for sound and picture, hardware representation,
meaning, version, timestamp, geographic position, properties (name and value),
statistics and nesting. An escape sequence gives a way for extending the entity set.
The services data set has an iteration control, name, identity by sound and picture,
hardware representation, meaning, version, timestamp, geographic position, properties
(name and value), statistics, events (name and value), interrupt recovery service and
arguments, priority value and relative to other services and nesting. We define a set of
rules for extending the services of the system which are performed in coordination with
the extended standard and extended technique definition sections.
The standards data set has name, hardware representation, rules, version, timestamp,
statistics, entities, services and techniques. We define a set of rules for extending the
standard of the system which are performed in coordination with the extended services
and extended technique definition sections.
The techniques data set contains iteration control, name as string, sound and picture,
hardware representation, meaning, version, timestamp, properties (name and value),
statistics, nesting, events (name, value and interrupt service), priority and relative to
technique. We define a set of rules for extending the techniques of the system which
are performed in coordination with the extended standard and extended technique
definition sections.
Communications consists of a dialogue between a source and a destination over a
transmission medium. We use protocols (rules) to govern the process. The
communications processes are based on a mixture of entities, services, standards and
techniques which seem to be too complicated to analyse at present. It defines name
(string, sound, picture), hardware representation, version, timestamp, statistics,
entities, services, techniques and standards. Extensions are defined from a similar set
of rules.
Compiler technology follows the formal definition found in programming languages for the source (input) language, the intermediate language and the target (output) language. It also gives priorities of how the entities, services, standards, techniques and communications are processed based on the learning, probability, network analysis and Markov theory for the sections. If an element is not recognised then the input element is queried to see if there is an error or whether the element should be added to the appropriate data set. An escape sequence can be used to extend the data set in conjunction with the other entities, services, standards, techniques and communications.
A communications model consists of a source, generating data to be transmitted, a
transmitter, converting data into transmittable signals, a transmission system, carrying
data, a receiver, converting received signal into data, and a destination taking
incoming data. Key communications tasks consist of transmission system utilization,
interfacing, signal generation, synchronization, exchange management, error detection
and correction, addressing and routing, recovery, message formatting, security and
network management – these are classified as services.
Protocols are techniques used for communications between entities in a system and
must speak the same language throughout. Entities consist of user applications or item
of hardware or the messages passing between source and destination. Systems are
made up of computer, terminal or remote sensor. Key elements of a protocol are
standards (data formats, signal levels), techniques (control information, error handling)
and timing (speed matching, sequencing). The protocols become standards as they are
formalised.
Protocol architecture is the task of communication broken up into modules which are
entities when they are stored as files and become services as they are executed. At
each layer, protocols are used to communicate and control information is added to user
data at each layer.
Each element gives priorities of how the entities are processed based on the learning, probability, network analysis and Markov theory for the entities sections. If an entity is not recognised then it is passed to a recovery process based on repeated analysis of the situation by some parallel check. If the entity is not recovered, the entity is queried to a human to see if there is an error or whether the entity should be added to the entity set.
We define a set of rules for extending the elements of the communication which are
performed in coordination with the extensions of entities, services, techniques and
standard.
The requirements for the system database are:
● object oriented type
● event-driven architecture database
● hypertext hypermedia database
● probabilistic database
● real-time database
The logical database structure must follow the object oriented type with the XML tags
as in section 8 (Appendix – Database Scheme).
The system definition set out in section 8 (Appendix – Database Scheme) is created once when the system is set up, and is changed and removed infrequently as the system is extended. It is queried frequently for every element that is read. The
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities) of the database will be done on a regular basis.
10 IoT Security Implementation Requirements
10.1 Introduction
This section reviews how some other technologies can contribute to IoT security. They
are reflected as theoretical studies, analysis and execution for entities, services,
standards, techniques, communications, antivirus, firewall, APIDS and ciphers. It
consists of 22 further sub-sections reflecting the 20 theories that are helpful. They are
search theory, network theory, Markov theory, algebraic theory, logic theory,
programming language theory, geographic information systems, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, curve fitting, configuration management,
continuous integration/delivery and virtual reality. We summarise the results now.
10.2 Entity Processing
Entity processing is the subfunction of extracting entities from a set of information.
The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display it for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and
profiling performance to ensure that their environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production status.
s) Virtual reality simulates an environment of the user's presence, environment and
interaction for entities.
The entity definition set above is created once when the entity is added to the system
and changed and removed infrequently as the entity set is extended. It is queried
frequently for every service, standard and technique rule that is read. The entity
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
10.3 Service Processing
Services processing is the subfunction of extracting services from a set of information.
The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of services, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of services, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display it for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and
profiling performance to ensure that their environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production status.
s) Virtual reality simulates an environment of the user's presence, environment and interaction for services.
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
10.4 Standard Processing
Standards processing is the subfunction of extracting standards from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of standards, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of standards, groups,
modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold data dependent on position for position,
monitor, analyse and display for visualization, understanding and intelligence when
combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation ( extrapolation / interpolation) of the
IoT system through various measures.
p) Configuration management identifies standard attributes for control, recording
and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and
profiling performance to ensure that their environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production status.
s) Virtual reality simulates an environment of the user's presence, environment and
interaction for standards.
The standard definition set above is created once when the standard is added to the
system and changed and removed infrequently as the standard set is extended. It is
queried frequently for every service, entity and technique rule that is read. The
standard definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
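As a concrete illustration of item o) above, here is a minimal sketch of curve fitting over an IoT measure, assuming numpy is available; the series, time points and polynomial degree are invented for illustration.

import numpy as np

hours = np.array([0, 1, 2, 3, 4, 5], dtype=float)
load = np.array([10.0, 12.1, 13.9, 16.2, 18.0, 20.1])  # e.g. messages per second

coeffs = np.polyfit(hours, load, deg=1)      # least-squares straight-line fit
model = np.poly1d(coeffs)

print("interpolated at t=2.5:", model(2.5))  # inside the observed range
print("extrapolated at t=6.0:", model(6.0))  # beyond the observed range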
10.5 Technique Processing
Technique processing is the subfunction of extracting techniques from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system (a short sketch follows at the end of this subsection).
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of techniques, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of techniques, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies technique attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with techniques.
The technique definition set above is created once when the technique is added to the
system and changed and removed infrequently as the technique set is extended. It is
queried frequently for every service, standard and entity rule that is read. The
technique definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
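As a concrete illustration of item e) above, here is a minimal sketch that estimates a Markov transition matrix from an invented log of processing states and reads off how often processing leads to an error state.

from collections import Counter, defaultdict

log = ["idle", "process", "process", "error", "idle", "process", "done",
       "idle", "process", "error", "idle", "process", "done"]

counts = defaultdict(Counter)
for current, nxt in zip(log, log[1:]):
    counts[current][nxt] += 1

# Normalise the transition counts into probabilities per state.
transitions = {
    state: {nxt: n / sum(follows.values()) for nxt, n in follows.items()}
    for state, follows in counts.items()
}
print(transitions["process"])  # e.g. P(error | process), P(done | process)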
10.6 Communications Processing
Communications processing is the subfunction of extracting communications from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system (a short sketch follows at the end of this subsection).
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of communications, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of communications, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies communications attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with communications.
The communications definition set above is created once when the communications is
added to the system and changed and removed infrequently as the communications set
is extended. It is queried frequently for every entity, service, standard and technique
rule that is read. The communications definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
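As a concrete illustration of item a) above, here is a minimal sketch of a search with an error report and a learning fallback; the definition set contents and the review flag are invented.

definitions = {"MQTT": "publish/subscribe transport", "CoAP": "constrained REST"}

def find_or_learn(target: str, reviewed_ok: bool = False) -> str:
    """Look the target up; on a miss, report the error and learn it after review."""
    if target in definitions:
        return definitions[target]
    print(f"error: no definition for {target!r} - queued for review")
    if reviewed_ok:  # a human reviewer has approved adding the target
        definitions[target] = "definition pending refinement"
        return definitions[target]
    raise KeyError(target)

print(find_or_learn("MQTT"))
print(find_or_learn("AMQP", reviewed_ok=True))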
10.7 Antivirus Processing
Antivirus services processing is the subfunction of extracting Antivirus services from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of Antivirus services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of Antivirus services, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies Antivirus service attributes for control, recording and reporting on the system status (a short sketch follows at the end of this subsection).
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with Antivirus services.
The Antivirus service definition set above is created once when the Antivirus service is added to the system and changed and removed infrequently as the Antivirus service set is extended. It is queried frequently for every entity, standard and technique rule that is read. The Antivirus service definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
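As a concrete illustration of item p) above, here is a minimal configuration-management sketch that records the identifying attributes of a controlled item and reports its status changes; the field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConfigItem:
    name: str
    version: str
    status: str = "registered"
    history: list = field(default_factory=list)

    def change_status(self, new_status: str) -> None:
        # Record the transition so the item's history can be reported.
        self.history.append((datetime.now(timezone.utc), self.status, new_status))
        self.status = new_status

item = ConfigItem(name="antivirus signature set", version="2024.1")
item.change_status("deployed")
print(item.name, item.version, item.status, len(item.history))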
10.8 Firewall Processing
Firewall services processing is the subfunction of extracting Firewall services from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of Firewall services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of Firewall services, groups, modification, substitution and valuation (a short sketch follows at the end of this subsection).
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies Firewall service attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with Firewall services.
The Firewall service definition set above is created once when the Firewall service is added to the system and changed and removed infrequently as the Firewall service set is extended. It is queried frequently for every entity, standard and technique rule that is read. The Firewall service definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
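As a concrete illustration of item i) above, here is a minimal sketch that validates a Firewall service record against a few propositional rules; the rule set and record fields are invented.

RULES = [
    ("action must be allow or deny", lambda r: r["action"] in {"allow", "deny"}),
    ("port must be 1-65535",         lambda r: 1 <= r["port"] <= 65535),
    ("deny rules need a reason",     lambda r: r["action"] != "deny" or r.get("reason")),
]

def validate(record: dict) -> list:
    """Return the descriptions of every rule the record violates."""
    return [desc for desc, holds in RULES if not holds(record)]

print(validate({"action": "deny", "port": 23}))    # -> ['deny rules need a reason']
print(validate({"action": "allow", "port": 443}))  # -> []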
10.9 APIDS Processing
APIDS services processing is the subfunction of extracting APIDS services from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of APIDS services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of APIDS services, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time (a short sketch follows at the end of this subsection).
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies APIDS service attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with APIDS services.
The APIDS service definition set above is created once when the APIDS service is added to the system and changed and removed infrequently as the APIDS service set is extended. It is queried frequently for every entity, standard and technique rule that is read. The APIDS service definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
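As a concrete illustration of item m) above, here is a minimal sketch that summarises how a processing measure changed over successive runs (mean, spread and simple trend); the run data is invented.

import statistics

detections_per_run = [4, 6, 5, 9, 11, 10, 14]

mean = statistics.mean(detections_per_run)
stdev = statistics.stdev(detections_per_run)
# First differences give the change from one run to the next.
deltas = [b - a for a, b in zip(detections_per_run, detections_per_run[1:])]

print(f"mean={mean:.1f} stdev={stdev:.1f} "
      f"mean change per run={statistics.mean(deltas):.1f}")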
10.10 Cipher Processing
Cipher services processing is the subfunction of extracting Cipher services from a set
of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of Cipher services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of Cipher services, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed (a short sketch follows at the end of this subsection).
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies Cipher service attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with Cipher services.
The Cipher service definition set above is created once when the Cipher service is added to the system and changed and removed infrequently as the Cipher service set is extended. It is queried frequently for every entity, standard and technique rule that is read. The Cipher service definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
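As a concrete illustration of item k) above, here is a minimal sketch that holds a Cipher service definition set in a small relational table, inserted rarely and queried frequently; the table and column names are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cipher_service (name TEXT PRIMARY KEY, definition TEXT)")

# Created once when the service is added; updated infrequently.
conn.execute("INSERT INTO cipher_service VALUES (?, ?)",
             ("AES-128-GCM", "authenticated symmetric cipher"))
conn.commit()

# Queried frequently for every rule that is read.
row = conn.execute("SELECT definition FROM cipher_service WHERE name = ?",
                   ("AES-128-GCM",)).fetchone()
print(row[0])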
The named entity recognition process uses syntax analysis and semantics lookup (compiler technology) to analyse the input, and database technology to look up the syntax rules, semantics rules, words and classification information for the activity. Logic theory is used to validate the language input; in the case of error, the human user of the system is queried to correct the error or to extend the language using the learning algorithms. The tuning of the database is driven by the statistics collected over the use of the database and the language data, as sketched below.
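A minimal sketch of this recognition loop, assuming an invented lexicon and a toy whitespace tokeniser in place of the full syntax and semantics rules:

lexicon = {"sensor": "entity", "MQTT": "communications", "AES": "cipher"}

def recognise(text: str) -> list:
    results = []
    for token in text.split():            # syntax analysis (toy tokeniser)
        category = lexicon.get(token)     # semantics lookup in the rule database
        if category is None:              # validation failed: query the human user
            category = input(f"unknown token {token!r} - category? ").strip()
            lexicon[token] = category     # learning step: extend the language
        results.append((token, category))
    return results

print(recognise("sensor MQTT AES"))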
10.11 Summary
IoT security processing is the function of extracting entities, services, standards,
techniques, communications, antivirus, firewall, APIDS and ciphers from a set of
information. The contributions of IoT technology are set out in the two subsections below.
10.11.1 IoT Technology
IoT processing is the function of extracting entities, services, standards, techniques
and communications from a set of information. The contributions of IoT technology are
defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of entities, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production.
s) Virtual reality simulates an environment that supports the user's presence and interaction with entities.
The entity, service, standard, technique and communications definition set above is created once when the entity, service, standard, technique or communications item is added to the system and changed and removed infrequently as the definition set is extended. It is queried frequently for every entity, service, standard, technique and communications rule that is read. The definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
10.11.2 IoT Security Technology
IoT security processing is the function of extracting entities, services, standards,
techniques and communications from a set of information for antivirus, firewall, APIDS
and ciphers. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of entities, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production.
s) Virtual reality simulates an environment that supports the user's presence and interaction with entities.
The entity, service, standard, technique and communications definition set above for antivirus, firewall, APIDS and ciphers is created once when the corresponding entity, service, standard, technique or communications item is added to the system and changed and removed infrequently as the definition set is extended. It is queried frequently for every entity, service, standard, technique and communications rule that is read. The definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
11 Specification of IoT Security Tools
11.1 Introduction
This section reviews how technologies can be developed into IoT security processing tools. We consider entities, services, standards, techniques, communications, antivirus, firewall, APIDS and ciphers. We take the results from sections four and five above. We have drawn on the methodologies from the theories of section 7: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, geographic information systems, curve fitting, configuration management, continuous integration/delivery and virtual reality.
In general, we note that there is a need to load a control set of data (the language definition) and to extend it. We need an input processor, an analyser, a database management system and an output processor.
The specification of the tools requires a number of data inputs, processing of that data with a number of tests, and output of the results.
The input requirements are:
a. Roles of personnel
b. Types of products
c. Life cycle states for the products and personnel
d. Structural connections between the products
e. Frequency flows between states
f. Time connections between states
Processing of the input data uses the procedures listed in section 7.1.
a. The system is well structured
b. It is consistent
c. It is complete
d. It has a way of completing its processes
e. There is an optimal way for structuring the system to minimise the time of processing
f. There is an optimal way for structuring the system to maximise the ease of look up.
The output is a set of error reports, diagnosed against the cases that would arise in the context of a real system. Finally, we provide a specification for a tool that would help validate a system with these algorithms; a skeleton sketch of such a tool follows.
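A minimal skeleton of such a validation tool, assuming invented input records and placeholder implementations of the structural checks listed above:

def check_structured(items):  # every record names a role
    return all("role" in i for i in items)

def check_consistent(items):  # no two records claim the same role
    return len({i["role"] for i in items}) == len(items)

def check_complete(items):    # every record carries a life-cycle state
    return all(i.get("state") for i in items)

CHECKS = [("well structured", check_structured),
          ("consistent", check_consistent),
          ("complete", check_complete)]

def validate(items: list) -> list:
    """Run every check and return one error report per failure."""
    return [f"system is not {name}" for name, check in CHECKS if not check(items)]

items = [{"role": "operator", "state": "active"},
         {"role": "product", "state": ""}]   # invented input records
print(validate(items))                        # -> ['system is not complete']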
11.2 Entity Processing
Entity processing is the subfunction of extracting entities from a set of information.
The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them (a short sketch follows at the end of this subsection).
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of entities, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with entities.
The entity definition set above is created once when the entity is added to the system
and changed and removed infrequently as the entity set is extended. It is queried
frequently for every service, standard and technique rule that is read. The entity
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
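As a concrete illustration of item b) above, here is a minimal sketch of size and error-rate estimates for the processing parts and the relations between them; all figures are invented.

parts = {"input processor": 1200, "analyser": 3400, "database": 2100}  # lines of code
relation_errors = {"input->analyser": (3, 5000), "analyser->database": (1, 5000)}

total = sum(parts.values())
for part, size in parts.items():
    print(f"{part}: {size} LOC ({100 * size / total:.0f}% of system)")
for relation, (errors, messages) in relation_errors.items():
    print(f"{relation}: {errors / messages:.2%} error rate")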
11.3 Service Processing
Services processing is the subfunction of extracting services from a set of information.
The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of services, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies service attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid (a short sketch follows at the end of this subsection).
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with services.
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
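As a concrete illustration of item q) above, here is a minimal continuous-integration sketch that runs each build and test step, times it, and stops if a step fails; the step commands are placeholders for a real pipeline.

import subprocess
import time

steps = [("build", ["python", "-c", "print('built')"]),
         ("test",  ["python", "-c", "print('tested')"])]

for name, command in steps:
    start = time.perf_counter()
    result = subprocess.run(command, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    # Measure and profile each step so the environment is shown to be valid.
    print(f"{name}: rc={result.returncode} in {elapsed:.2f}s")
    if result.returncode != 0:
        raise SystemExit(f"{name} failed: {result.stderr}")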
11.4 Standard Processing
Standards processing is the subfunction of extracting standards from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of standards, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of standards, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence (a short sketch follows at the end of this subsection).
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies standard attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with standards.
The standard definition set above is created once when the standard is added to the
system and changed and removed infrequently as the standard set is extended. It is
queried frequently for every service, entity and technique rule that is read. The
standard definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
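As a concrete illustration of item n) above, here is a minimal sketch that stores readings keyed by position and answers nearest-reading queries for monitoring and display; the coordinates and values are invented.

import math

readings = {(51.50, -0.12): 18.5, (51.51, -0.10): 19.1, (51.49, -0.14): 17.9}

def nearest(lat: float, lon: float):
    """Return the stored position closest to (lat, lon) and its reading."""
    pos = min(readings, key=lambda p: math.dist(p, (lat, lon)))
    return pos, readings[pos]

print(nearest(51.506, -0.105))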
11.5 Technique Processing
Technique processing is the subfunction of extracting techniques from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time (a short sketch follows at the end of this subsection).
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of techniques, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of techniques, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies technique attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with techniques.
The technique definition set above is created once when the technique is added to the
system and changed and removed infrequently as the technique set is extended. It is
queried frequently for every service, standard and entity rule that is read. The
technique definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
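As a concrete illustration of item f) above, here is a minimal sketch that predicts the next value of a changing measure by exponential smoothing of the series so far; the series and smoothing factor are invented.

def smooth(series, alpha=0.5):
    """Exponentially smooth the series and return the final estimate."""
    estimate = series[0]
    for value in series[1:]:
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate

rule_changes_per_week = [2, 3, 3, 5, 4, 6]
print("predicted next week:", round(smooth(rule_changes_per_week), 1))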
11.6 Communications Processing
Communications processing is the subfunction of extracting communications from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of communications, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of communications, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data (a short sketch follows at the end of this subsection).
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies communications attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with communications.
The communications definition set above is created once when the communications is
added to the system and changed and removed infrequently as the communications set
is extended. It is queried frequently for every entity, service, standard and technique
rule that is read. The communications definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
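As a concrete illustration of item j) above, here is a minimal lexer/parser sketch for one input record of the invented form "name : category".

import re

TOKEN = re.compile(r"\s*([A-Za-z][\w-]*|:)")

def parse(record: str):
    """Tokenise the record and check it against the toy grammar name : category."""
    tokens = TOKEN.findall(record)
    if len(tokens) == 3 and tokens[1] == ":":
        return {"name": tokens[0], "category": tokens[2]}
    raise ValueError(f"syntax error in record: {record!r}")

print(parse("gateway : entity"))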
11.7 Antivirus Processing
Antivirus services processing is the subfunction of extracting Antivirus services from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet those requirements. If no target is found then the error is reported and, after review, the target is added to the system.
b) Quantitative theory provides estimates of the size and errors of the IoT processing parts and of the relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT system knowledge database.
e) Markov theory can determine the usage and errors of the structure of the IoT system.
f) Probability theory provides a method of predicting the changes that occur as the IoT system's experience accumulates over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and of the processing so far.
h) Algebraic theory allows the processing and validation of Antivirus services, groups, modification, substitution and valuation.
i) Logic theory enables the processing and validation of Antivirus services, groups, modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method of easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the processing of the IoT system's experience over time.
n) Geographical information systems hold position-dependent data and, combined with other technologies, processes and methods, position, monitor, analyse and display it for visualization, understanding and intelligence.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies Antivirus service attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment that supports the user's presence and interaction with Antivirus services.
The Antivirus service definition set above is created once when the Antivirus service is added to the system and changed and removed infrequently as the Antivirus service set is extended. It is queried frequently for every entity, standard and technique rule that is read. The Antivirus service definition set is updated (inserted, modified, and deleted) infrequently. The administration (maintain users, data security, performance, data integrity, concurrency and data recovery using utilities - services) of the database will be done on a regular basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences given in the appendix.
11.8 Firewall Processing
Firewall services processing is the subfunction of extracting Firewall services from a
set of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how good the process and the documentation come up to the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing and optimising structuring the system
to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for the holding the structure of the
knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of Firewall services,
groups, modification, substitution and valuation.
i) Logic theory endows processing and validation of Firewall services, groups,
modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur as the processing of the IoT system accumulates experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment giving the user presence and interaction with entities.
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
11.9 APIDS Processing
APIDS services processing is the subfunction of extracting APIDS services from a set of
information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur as the processing of the IoT system accumulates experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and the processing so far.
h) Algebraic theory allows the processing and validation of APIDS services, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of APIDS services, groups,
modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur as the processing of the IoT system accumulates experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment giving the user presence and interaction with entities.
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
11.10 Cipher Processing
Cipher services processing is the subfunction of extracting Cipher services from a set
of information. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur as the processing of the IoT system accumulates experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and the processing so far.
h) Algebraic theory allows the processing and validation of Cipher services, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of Cipher services, groups,
modification, substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur as the processing of the IoT system accumulates experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation (extrapolation/interpolation) of the IoT system through various measures.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production status.
s) Virtual reality simulates an environment giving the user presence and interaction with entities.
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
The named entity recognition process uses syntax analysis and semantics lookup (compiler technology) to analyse the input, and database technology to look up the syntax rules, semantics rules, words and the classification information for the activity. Logic theory is used to validate the language input; in the case of error the human user of the system is queried to correct the error or extend the language using the learning algorithms. The tuning of the database is driven by the statistics collected over the use of the database and language data.
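As an illustration, the sketch below shows the shape of this recognition loop under simplifying assumptions: the rule table, the classify routine and the token-level analysis are hypothetical stand-ins for the compiler and database technology described above, not the system's actual implementation.

# A minimal sketch of the recognition loop: syntax analysis splits the input,
# a database lookup classifies each token, statistics are collected for tuning,
# and an unknown token triggers the user query / learning path.
SEMANTICS = {"sensor": "entity", "read": "service", "mqtt": "standard"}
USAGE_STATS = {}

def classify(token):
    USAGE_STATS[token] = USAGE_STATS.get(token, 0) + 1   # statistics for tuning
    if token in SEMANTICS:                               # semantics lookup
        return SEMANTICS[token]
    answer = input("Unknown token '%s': correct or classify it: " % token)
    SEMANTICS[token] = answer                            # learning: extend the language
    return answer

def recognise(sentence):
    return [(tok, classify(tok)) for tok in sentence.split()]  # syntax analysis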
11.11 Summary
IoT security processing is the function of extracting entities, services, standards,
techniques, communications, antivirus, firewall, APIDS and ciphers from a set of
information. The contributions of IoT technology are defined below:
11.11.1 IoT Technology
IoT processing is the function of extracting entities, services, standards, techniques
and communications from a set of information. The contributions of IoT technology are
defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur as the processing of the IoT system accumulates experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur as the processing of the IoT system accumulates experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production.
s) Virtual reality simulates an environment giving the user presence and interaction with entities.
The entity, service, standard, technique and communications definition set above is
created once when the entity, service, standard, technique or communications is added
to the system and changed and removed infrequently as the service set is extended. It
is queried frequently for every entity, service, standard, technique and communications
rule that is read. The definition set is updated (inserted, modified, and deleted)
infrequently. The administration (maintain users, data security, performance, data
integrity, concurrency and data recovery using utilities - services) of the database will
be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
11.11.2 IoT Security Technology
IoT security processing is the function of extracting entities, services, standards,
techniques and communications from a set of information for antivirus, firewall, APIDS
and ciphers. The contributions of IoT technology are defined below:
a) Search theory gives a measurable set of requirements and a method of assessing how well the process and the documentation meet the requirements.
If no target is found then the error is reported and after review the target is added to
the system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and complete, defines a way of completing processing, and optimises the structure of the system to minimise the time of processing and maximise the ease of look up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur as the processing of the IoT system accumulates experience over time.
g) Programming language theory grants a basis for holding the structure of the knowledge held by the IoT system and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input encountered in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur as the processing of the IoT system accumulates experience over time.
n) Geographical information systems hold position-dependent data to position, monitor, analyse and display for visualization, understanding and intelligence when combined with other technologies, processes, and methods.
o) Curve fitting is used for the interpretation of the IoT system.
p) Configuration management identifies entity attributes for control, recording and reporting on the system status.
q) Continuous integration automates updates, builds and tests by measuring and profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process from start to production.
s) Virtual reality simulates an environment giving the user presence and interaction with entities.
The entity, service, standard, technique and communications definition set above for
antivirus, firewall, APIDS and ciphers is created once when the entity, service,
standard, technique or communications for antivirus, firewall, APIDS and ciphers is
added to the system and changed and removed infrequently as the service set is
extended. It is queried frequently for every entity, service, standard, technique and
communications rule that is read. The definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.

12 IoT Security Implementation


12.1 Introduction
This section reviews how some other technologies can contribute to IoT security. They
are reflected as theoretical studies, analysis and execution for entities, services,
standards, techniques, communications, antivirus, firewall, APIDS and ciphers
functions. It consists of further sub-sections reflecting the theories that are helpful: search theory, network theory, Markov theory, algebraic theory, logic theory, programming language theory, geographic information systems, quantitative theory, learning theory, statistics theory, probability theory, communications theory, compiler technology theory, database technology, curve fitting, configuration management, continuous integration/delivery and virtual reality. We summarise the results now.
IoT technology is based on a logical database defined in the Appendix - Database Scheme for IoT. The physical database is a one level store laid across the IoT network as a ciphered set of entities, services, standards, techniques and communications. Communication actions follow the Diffie–Hellman key exchange method at each level of transmission. Database processes follow the Diffie–Hellman key exchange method at each level of transfer. IoT security processing is the function of extracting entities, services, standards, techniques and communications from a set of information for antivirus, firewall, APIDS and ciphers.
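A minimal sketch of the per-hop key agreement follows, assuming textbook Diffie–Hellman over a small modular group; the 64-bit prime below is for illustration only, and a real deployment would use a vetted group and derive the cipher key from the shared secret with a key derivation function.

import secrets

P, G = 0xFFFFFFFFFFFFFFC5, 5   # toy 64-bit prime group; far too small for real use

def keypair():
    private = secrets.randbelow(P - 2) + 1   # random secret exponent
    public = pow(G, private, P)
    return private, public

# Each side of a hop combines its private key with the peer's public key and
# obtains the same shared secret, which then keys the cipher for that transfer.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)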

12.2 Entity Processing


Entity processing is the subfunction of extracting entities from a set of information.
12.2.1 Logical Design
12.2.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the applications is a hybrid object-relational database. It uses fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL (especially for entities and services) and cleaned up with PDL. DDL, QL, DML and PDL are implemented as services; a minimal sketch follows the list below.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
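The sketch below illustrates one way the four language roles could be exposed as services over a store; the in-memory backend and the method names are hypothetical, chosen only to make the division of labour concrete.

class DatabaseService:
    def __init__(self):
        self.schema = {}   # table -> attribute names (maintained by DDL)
        self.data = {}

    def ddl_create(self, table, attrs):
        """DDL: define the organisation of the data."""
        self.schema[table] = attrs
        self.data[table] = []

    def dml_insert(self, table, row):
        """DML: insert actual data, validated against the schema."""
        assert set(row) <= set(self.schema[table]), "row violates schema"
        self.data[table].append(row)

    def ql_query(self, table, **where):
        """QL: retrieve information usable by users or other applications."""
        return [r for r in self.data[table]
                if all(r.get(k) == v for k, v in where.items())]

    def pdl_vacuum(self, table):
        """PDL: administration utility, e.g. discarding empty rows."""
        self.data[table] = [r for r in self.data[table] if r]

db = DatabaseService()
db.ddl_create("entity", ["id", "kind"])
db.dml_insert("entity", {"id": 1, "kind": "sensor"})
print(db.ql_query("entity", kind="sensor"))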
12.2.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
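To make the position / non-position split concrete, the sketch below holds both kinds of attribute for an entity and answers a bounding-box query for display; the entity names and coordinates are hypothetical.

from dataclasses import dataclass

@dataclass
class PlacedEntity:
    name: str
    lat: float
    lon: float
    status: str                      # a non-position attribute

def within(entities, lat_min, lat_max, lon_min, lon_max):
    """Return the entities positioned inside a bounding box for display."""
    return [e for e in entities
            if lat_min <= e.lat <= lat_max and lon_min <= e.lon <= lon_max]

fleet = [PlacedEntity("pump-1", 51.50, -0.12, "ok"),
         PlacedEntity("pump-2", 48.85, 2.35, "fault")]
print(within(fleet, 48.0, 52.0, -1.0, 1.0))   # monitor entities in a region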
12.2.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how good the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.2.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
Validation cases are well structured, consistent, complete, completion of processes,
optimal structure for minimum time of processing and maximum ease of look up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused; unused elements are a waste and would cause confusion if they became known, and the completeness prerequisite eliminates this difficulty (for entity v entity, entity v service, service v technique, service v standard, technique v standard). A minimal check of cases a-c is sketched after this list.
d. It has a way of completing its processes – every entity has an input or an output
service
e. There is an optimal way for structuring the system to minimise the time of processing
using a CPM process on the entities, services, techniques and standard respectively.
f. There is an optimal way for structuring the system to maximise the ease of look up by
minimising the height / depth of the network for entities, services, techniques and
standard respectively.
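The following sketch checks cases a-c on a small directed graph of elements; the adjacency-list representation and the node names are hypothetical, and cases d-f would be layered on top with topological ordering and critical-path analysis.

from collections import defaultdict

edges = {"entity.temp": ["service.read"], "service.read": ["standard.mqtt"],
         "standard.mqtt": [], "service.orphan": []}

incoming = defaultdict(list)
for src, dests in edges.items():
    for dst in dests:
        incoming[dst].append(src)

# a/c: an element with no connection at all is unstructured / defined-but-unused
unconnected = [n for n in edges if not edges[n] and not incoming[n]]
# b: an element accessed from two different elements breaks consistency
multi_access = [n for n, srcs in incoming.items() if len(srcs) > 1]
print("unconnected:", unconnected, "multiple access paths:", multi_access)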
12.2.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges, and hence the unused nodes and edges with zero flow through them. It also finds the flow between the nodes and which are single entry or single exit blocks (groupings) of nodes.
By introducing a node as an error sink, with extra edges from the remaining nodes, the probability of error at different parts of the network system can be calculated along with the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges. The implementation is based on services for the schema and
the database contents separately on a timed basis.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
• When entities can form the network system services can be considered as edges
for the Markov analysis.
• When entities can form the network system standards can be considered as
edges for the Markov analysis.
• When entities can form the network system techniques can be considered as
edges for the Markov analysis.
• When entities can form the network system services and standards can be
considered as edges for the Markov analysis.
• When entities can form the network system services and techniques can be
considered as edges for the Markov analysis.
• When entities can form the network system services, standards and techniques
can be considered as edges for the Markov analysis.
• When services can form the network system entities can be considered as edges
for the Markov analysis.
• When services can form the network system standards can be considered as
edges for the Markov analysis.
• When services can form the network system techniques can be considered as
edges for the Markov analysis.
• When services can form the network system entities and standards can be
considered as edges for the Markov analysis.
• When services can form the network system entities and techniques can be
considered as edges for the Markov analysis.
• When services can form the network system entities, standards and techniques
can be considered as edges for the Markov analysis.
• When standards can form the network system entities can be considered as
edges for the Markov analysis.
• When standards can form the network system services can be considered as
edges for the Markov analysis.
• When standards can form the network system techniques can be considered as
edges for the Markov analysis.
• When standards can form the network system entities and services can be
considered as edges for the Markov analysis.
• When standards can form the network system entities and techniques can be
considered as edges for the Markov analysis.
• When standards can form the network system entities, services and techniques
can be considered as edges for the Markov analysis.
• When techniques can form the network system entities can be considered as
edges for the Markov analysis.
• When techniques can form the network system services can be considered as
edges for the Markov analysis.
• When techniques can form the network system standards can be considered as
edges for the Markov analysis.
• When techniques can form the network system entities and services can be
considered as edges for the Markov analysis.
• When techniques can form the network system entities and standards can be
considered as edges for the Markov analysis.
• When techniques can form the network system entities, services and standards
can be considered as edges for the Markov analysis.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
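The sketch below illustrates the error-sink construction: services act as edges between entity and standard nodes, every node leaks a small probability into an absorbing error node, and iterating the chain over a bounded interaction estimates the total error rate. All transition probabilities here are invented for illustration.

# Row-stochastic transition probabilities with an absorbing error sink.
P = {
    "entity":   {"service": 0.95, "error": 0.05},
    "service":  {"standard": 0.90, "entity": 0.05, "error": 0.05},
    "standard": {"entity": 0.98, "error": 0.02},
    "error":    {"error": 1.0},
}

flow = {n: 0.0 for n in P}
flow["entity"] = 1.0                       # all traffic starts at an entity
for _ in range(10):                        # a bounded, 10-step interaction
    nxt = {n: 0.0 for n in P}
    for src, mass in flow.items():
        for dst, p in P[src].items():
            nxt[dst] += mass * p
    flow = nxt
print("estimated error rate over 10 steps: %.3f" % flow["error"])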
12.2.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are classified as systems or subsystems with rules for correct and erroneous forms.
Iterations on the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation, and specification
based on properties of the elements. Other parts are ways of defining properties of
elements or functions whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and techniques define combinations, validation rules and valuations. Similarly the duality gives services as elements and entities as functions, with standards and techniques validating combinations and valuations.
Communications consists of a dialogue between a source and a destination over a
transmission medium. Protocols (rules) govern the process which may be standards or
techniques to give entities and services.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
12.2.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.2.1.8 Programming Language Theory
Programming language theory gives rules for formalised standards and techniques for entities, graphics, sound and media technologies, services, database schema, techniques and standards and database models. The IoT system schema and data are covered by nesting, scope for meaning, objects, properties, methods with arguments,
the "this" operator and the concepts of synonyms, generalisation and specification,
overloaded definitions changed by context, replication, iteration, conditional meaning,
libraries, events, priority as set out in Section 14 (Appendix Database Schema).
12.2.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process based on standard /
technique definitions for each of the system defined in formal specifications as
described above.
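A minimal sketch of this resolution step follows; the base sets, the "\new:" escape syntax and the error handling are hypothetical stand-ins for the appendix's actual escape sequences.

SETS = {"entity": {"sensor"}, "service": {"read"},
        "standard": {"mqtt"}, "technique": {"ml"}}

def resolve(token):
    """Classify a token against the base sets; extend a set via an escape."""
    for kind, members in SETS.items():
        if token in members:
            return kind
    if token.startswith("\\new:"):                 # escape, e.g. \new:entity=actuator
        kind, name = token[5:].split("=")
        SETS[kind].add(name)
        return kind
    raise ValueError("unrecognised token %r: query whether it is an error "
                     "or should be added to the database set" % token)

print([resolve(t) for t in ["sensor", "read", "\\new:entity=actuator", "actuator"]])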

12.2.1.10 Communications Theory


Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by some parallel check. If the entity or service is
not recovered, the entity / service stack dump is reviewed to determine if there is an error or the entity / service should be added to the entity / service set with an escape sequence. Similarly standards and techniques can be updated by experience.
12.2.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.2.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
12.2.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
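A small sketch of this bootstrap, assuming Laplace (add-one) smoothing so the equal-probability heuristic governs until collected statistics dominate; the item names and counts are hypothetical.

counts = {"service.read": 0, "service.write": 0, "service.delete": 0}

def probability(item):
    """Add-one smoothing: uniform when no statistics, data-driven later."""
    total = sum(counts.values())
    return (counts[item] + 1) / (total + len(counts))

counts["service.read"] += 90    # statistics collected during use
counts["service.write"] += 10
print({k: round(probability(k), 3) for k in counts})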
12.2.1.14 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods namely interpolation, for
an exact fit of data or smoothing, for a "smooth" curve function approximating to the
data. Regression analysis gives a measure of uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of a function for empty data values. They also summarise relations between the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, with an uncertainty that depends on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations of the curve from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
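The sketch below shows a least squares fit with interpolation, extrapolation and the residual measure discussed above, using numpy's polynomial fitting; the sample points are hypothetical.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])         # e.g. time
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])         # e.g. a measured value

fit = np.poly1d(np.polyfit(x, y, deg=1))        # least squares straight line
residuals = y - fit(x)                          # deviations at the data points
print("interpolated at x=2.5:", fit(2.5))       # within the observed range
print("extrapolated at x=6.0:", fit(6.0))       # beyond it, so less certain
print("sum of squared deviations:", float(residuals @ residuals))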
12.2.1.15 Configuration Management
Configuration management follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.2.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.2.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section 7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.2.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It works with other technology such as remote communication, artificial intelligence and spatial data to assist the technology. Errors for entities, services, standards, techniques and communications are reported using this method so that corrective actions can be made remotely. The reported error is displayed as a device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
12.2.2 Physical System
12.2.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time and temporal information. It is a virtual store.
12.2.2.2 Geographic Information Systems
Entities use GIS data and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.2.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how good the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.2.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for well-structuredness, consistency, completeness,
completion of processes, optimal structure for minimum time of processing and
maximum ease of look up applied to entities, services, techniques, standards and
communications.
12.2.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
12.2.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.2.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.2.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.2.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.2.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.2.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.2.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.2.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
12.2.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve fitting process. The curve fitting uses Chebyshev polynomials and splines for interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
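A minimal sketch of the Chebyshev side of this, using numpy's Chebyshev class for a one-dimensional fit; the sampled link metric is invented, and the spline and multi-dimensional machinery would follow the same pattern.

import numpy as np
from numpy.polynomial import Chebyshev

x = np.linspace(-1.0, 1.0, 9)                   # normalised route parameter
y = np.exp(x) + 0.05 * np.sin(8 * x)            # hypothetical sampled metric

cheb = Chebyshev.fit(x, y, deg=6)               # least squares Chebyshev fit
print("interpolated:", cheb(0.37))              # inside the sample range
print("extrapolated:", cheb(1.2))               # outside it: treat warily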
12.2.2.15 Configuration Management
Configuration management has the database include system construction, item identity and status as entities. Services provide base-lining, configuration control with
approval stages and baselines, configuration status accounting and audits versus
revision and defect correction.
12.2.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests to ensure that the environment is valid for update. A build server builds the system, documentation, statistics and distribution media, and integrates and deploys into a scalable version clone of the production environment through service virtualization for dependencies. Automated unit and integration (defect or regression) tests, together with static and dynamic tests that measure and profile performance, confirm that the system behaves as it should. The updated repository triggers another build process and tests. The new updates are committed to the repository when all the tests have been verified, and are delivered to stakeholders and testers; otherwise the updates are rolled back. The build / test process is repeated periodically to ensure no corruption of the system.
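The loop below sketches this cycle as a driver script; the make targets and git commands are hypothetical placeholders, since the real steps would live in the build server's own configuration.

import subprocess

def run(cmd):
    """Run one pipeline step; True when it exits cleanly."""
    return subprocess.run(cmd).returncode == 0

def ci_cycle():
    if not run(["git", "pull", "--ff-only"]):          # extract a copy
        return
    steps = [["make", "build"],                        # system, docs, media
             ["make", "test-unit"],                    # automated unit tests
             ["make", "test-integration"],             # defect / regression tests
             ["make", "profile"]]                      # measure performance
    if all(run(step) for step in steps):
        run(["make", "deploy-staging"])                # deliver to testers
    else:
        run(["git", "reset", "--hard", "HEAD~1"])      # roll the update back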
12.2.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It includes continuous integration, application release automation, build automation and application life cycle management.
12.2.2.18 Virtual Reality
Virtual reality is the user interface with services for monitoring and control of the IoT system. It works with other technology such as remote communication, artificial intelligence and spatial data to assist the technology. Errors for entities, services, standards, techniques and communications are reported using this method so that corrective actions can be made remotely. The reported error is displayed as a device stack and position, evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
12.2.2.19 Commentary
The entity definition set above is created once when the entity is added to the system
and changed and removed infrequently as the entity set is extended. It is queried
frequently for every service, standard and technique rule that is read. The entity
definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object oriented type with the XML tags in the appendix, as are the escape sequences.
12.3 Service Processing
Services processing is the subfunction of extracting services from a set of information.
12.3.1 Logical Design
12.3.1.1 Database Processing
The database supports IoT activities with applications reflecting execution libraries,
bibliographic, document-text, statistical and multimedia objects. The database
management system supports users and other applications to collect and analyse the
data for IoT processes. The system allows the definition (create, change and remove
definitions of the organization of the data using a data definition language - schema
definition (DDL)), querying (retrieve information usable for the user or other
applications using a query language (QL)), update (insert, modify, and delete of actual
data using a data manipulation language (DML)), and administration (maintain users,
data security, performance, data integrity, concurrency and data recovery using
utilities - physical definition (PDL)) of the database. The most suitable database model
for the applications is a hybrid object-relational database. It uses fast key-value
stores and document-oriented databases with XML to give interoperability between
different implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL (especially for entities and services) and cleaned up with PDL. DDL, QL, DML and PDL are implemented as services.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
• loading and executing services from libraries – implemented as services
12.3.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
12.3.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how good the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.3.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
Validation cases are well structured, consistent, complete, completion of processes,
optimal structure for minimum time of processing and maximum ease of look up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused; unused elements are a waste and would cause confusion if they became known, and the completeness prerequisite eliminates this difficulty (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
d. It has a way of completing its processes – every entity has an input or an output
service
e. There is an optimal way for structuring the system to minimise the time of processing
using a CPM process on the entities, services, techniques and standard respectively.
f. There is an optimal way for structuring the system to maximise the ease of look up by
minimising the height / depth of the network for entities, services, techniques and
standard respectively.
12.3.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges, and hence the unused nodes and edges with zero flow through them. It also finds the flow between the nodes and which are single entry or single exit blocks (groupings) of nodes.
By introducing a node as an error sink, with extra edges from the remaining nodes, the probability of error at different parts of the network system can be calculated along with the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges. The implementation is based on services for the schema and
the database contents separately on a timed basis.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
• When entities form the nodes of the network system, the edges for the Markov
analysis can be services; standards; techniques; services and standards; services and
techniques; or services, standards and techniques.
• When services form the nodes, the edges can be entities; standards; techniques;
entities and standards; entities and techniques; or entities, standards and techniques.
• When standards form the nodes, the edges can be entities; services; techniques;
entities and services; entities and techniques; or entities, services and techniques.
• When techniques form the nodes, the edges can be entities; services; standards;
entities and services; entities and standards; or entities, services and standards.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
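A minimal numerical sketch of the error-sink construction follows, treating the error
node as an absorbing state of a Markov chain; numpy is assumed available and all
transition probabilities are invented for the example.

import numpy as np

# Sketch of the error-sink calculation as an absorbing Markov chain.
# States: three processing nodes (transient) plus two absorbing states,
# "done" and the error sink.  All probabilities are invented.
#              A     B     C    done  error
P = np.array([
    [0.0, 0.85, 0.10, 0.00, 0.05],   # node A
    [0.0, 0.00, 0.90, 0.05, 0.05],   # node B
    [0.0, 0.00, 0.00, 0.90, 0.10],   # node C
])

Q = P[:, :3]                         # transient-to-transient part
R = P[:, 3:]                         # transient-to-absorbing part
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits
absorb = N @ R                       # absorption probabilities per start node

print("P(error) from A, B, C:", absorb[:, 1])
print("estimated total error rate from A:", absorb[0, 1])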
12.3.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules for correct and erroneous combinations.
Iterations on the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation, and specification
based on properties of the elements. Other parts are ways of defining properties of
elements or functions whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. Similarly, the
duality gives services as elements and entities as functions, with standards and
techniques validating the combinations and valuations.
Communications consist of a dialogue between a source and a destination over a
transmission medium. Protocols (rules) govern the process which may be standards or
techniques to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
12.3.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.3.1.8 Programming Language Theory
Programming language theory gives rules for formalising standards and techniques for
entities, graphics, sound and media technologies, services, database schemas,
techniques and standards, and database models. The IoT system schema and data are
covered by nesting, scope for meaning, objects, properties, methods with arguments,
the "this" operator and the concepts of synonyms, generalisation and specification,
overloaded definitions changed by context, replication, iteration, conditional meaning,
libraries, events and priority as set out in Section 14 (Appendix Database Schema).
12.3.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
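A minimal sketch of this macro-like translation with an escape sequence is given below;
the translation table and the "!" escape marker are assumptions made for illustration.

# Sketch of macro-like translation of a service definition written in terms
# of known techniques / standards.  The table contents and the "!" escape
# marker are illustrative assumptions.
known = {
    "aes": "technique:aes",
    "tls": "standard:tls",
}

def translate(tokens):
    out = []
    for tok in tokens:
        if tok.startswith("!"):        # escape sequence: extend the set
            known[tok[1:]] = "extension:" + tok[1:]
            out.append(known[tok[1:]])
        elif tok in known:
            out.append(known[tok])     # ordinary macro-style expansion
        else:                          # unrecognised: query the input
            raise ValueError("unknown element %r: error or extension?" % tok)
    return out

print(translate(["tls", "aes", "!hmac"]))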
Communications use protocols (rules) to govern the process based on standard /
technique definitions for each of the system defined in formal specifications as
described above.

12.3.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by a parallel check. If the entity or service is not
recovered, the entity / service stack dump is reviewed to determine whether there is an
error or the entity / service should be added to the entity / service set with an escape
sequence. Similarly, standards and techniques can be updated by experience.
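A minimal sketch of such a recovery process is given below; the protocol vocabulary,
the particular re-analyses tried and the dump format are assumptions made for
illustration.

import string

# Sketch of the recovery process for an unrecognised token in a protocol
# dialogue.  The vocabulary, the re-analyses and the dump format are
# illustrative assumptions.
vocabulary = {"hello", "data", "ack", "bye"}

def receive(message):
    attempts = [                       # repeated, progressively stronger
        message,                       # re-analyses of the same input
        message.strip(),
        message.strip().lower(),
        message.strip().lower().strip(string.punctuation),
    ]
    for candidate in attempts:
        if candidate in vocabulary:
            return candidate
    # not recovered: dump for review - an error, or a candidate extension?
    raise ValueError("stack dump: unrecognised token %r" % message)

print(receive(" ACK! "))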
12.3.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.3.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
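The following small sketch shows the kind of ratio metrics this yields; all counts and
hours are invented for the example.

# Sketch of the quantitative-theory ratio metrics.  The element pairs are
# from the text; all counts and hours are invented examples.
pairs = {
    ("entities", "services"):    (120, 14, 260),  # (dev hours, errors, tests)
    ("services", "techniques"):  (80, 9, 150),
    ("techniques", "standards"): (45, 3, 70),
}

for (a, b), (hours, errors, tests) in pairs.items():
    print("%s v %s: %.3f errors/hour, %.1f tests per error"
          % (a, b, errors / hours, tests / errors))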
12.3.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
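A minimal sketch of the move from the equal-probability heuristic to collected statistics
is given below; the service names and counts are invented, and the add-one (Laplace)
smoothing term is an assumption made so that unseen items keep a small non-zero
probability.

from collections import Counter

# Sketch of moving from the equal-probability heuristic to collected
# statistics.  The names and counts are invented; the add-one (Laplace)
# term is an assumption keeping unseen items non-zero.
items = ["read", "write", "update"]
observed = Counter({"read": 40, "write": 9, "update": 1})

def probability(item, alpha=1.0):
    total = sum(observed.values()) + alpha * len(items)
    return (observed[item] + alpha) / total

for item in items:                  # before any data this would be 1/3 each
    print(item, round(probability(item), 3))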
12.3.1.14 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an
exact fit of the data, and smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the function
where data values are missing. They also summarize relations between the variables.
Extrapolation takes the fitted curve to calculate values beyond the range of the
observed data, with uncertainty that depends on which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific point,
angle, curvature or other higher order constraints, especially at the ends of the points
being considered. The number of constraints sets a limit on the number of combined
functions defining the fitted curve; even then there is no guarantee that all constraints
are met or the exact curve is found. Curves are assessed by various measures, a
popular procedure being the least squares method, which measures the deviations of
the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
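A minimal sketch of a least-squares fit with interpolation and extrapolation is given
below, using numpy; the sample points are invented.

import numpy as np

# Least-squares fit of a quadratic to invented sample points, followed by
# interpolation inside the observed range and extrapolation beyond it.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])

coeffs = np.polyfit(x, y, deg=2)       # minimises the squared deviations
fit = np.poly1d(coeffs)

residuals = y - fit(x)                 # deviations at the given points
print("sum of squared deviations:", float(np.sum(residuals ** 2)))
print("interpolated f(2.5):", float(fit(2.5)))
print("extrapolated f(6.0):", float(fit(6.0)))   # inherently less certain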
12.3.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.3.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.3.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.19.1.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.3.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technologies such as remote communication, artificial intelligence
and spatial data to support the monitoring. Errors for entities, services, standards,
techniques and communications are reported through this interface so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position, evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
12.3.2 Physical System
12.3.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time, loading and executing services from libraries
and temporal information. It is a virtual store.
12.3.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.3.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet
the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.3.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for well-structuredness, consistency, completeness,
completion of processes, and optimal structure for minimum processing time and
maximum ease of look up, applied to entities, services, techniques, standards and
communications.
12.3.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
12.3.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
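A minimal sketch of the set-theoretic entity check and the output-to-input consistency
check is given below; the domains, the entity and the service flow are invented for the
example.

# Sketch of the set-theoretic checks: entity attributes must lie in their
# declared domains, and each service's inputs must be produced by some
# earlier service.  All sets and names are invented for the example.
domains = {"state": {"on", "off"}, "zone": {"north", "south"}}
entity = {"state": "on", "zone": "west"}

violations = {k: v for k, v in entity.items() if v not in domains[k]}
print("domain violations:", violations)          # {'zone': 'west'}

flow = [                               # (service, inputs, outputs)
    ("read",   set(),            {"raw"}),
    ("filter", {"raw"},          {"clean"}),
    ("store",  {"clean", "key"}, set()),         # 'key' is never produced
]
produced = set()
for name, inputs, outputs in flow:
    missing = inputs - produced
    if missing:
        print(name, "inputs not produced upstream:", missing)
    produced |= outputs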
12.3.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.3.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.3.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.3.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.3.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.3.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.3.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
12.3.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
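A minimal one-dimensional sketch of the Chebyshev and spline fits is given below, using
numpy and scipy; the sample data are invented and scipy's CubicSpline is an assumed
choice of spline.

import numpy as np
from numpy.polynomial import chebyshev
from scipy.interpolate import CubicSpline

# Invented one-dimensional samples standing in for a route-cost curve.
x = np.linspace(0.0, 4.0, 9)
y = np.sin(x) + 0.1 * x

cheb = chebyshev.Chebyshev.fit(x, y, deg=4)   # Chebyshev least-squares fit
spline = CubicSpline(x, y)                    # exact interpolating spline

print("Chebyshev f(2.5):", float(cheb(2.5)))
print("spline    f(2.5):", float(spline(2.5)))
print("Chebyshev f(5.0):", float(cheb(5.0)))  # extrapolation: treat with care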
12.3.2.15 Configuration Management
Configuration management requires the database to include system construction and
item identity and status as entities. Services provide base-lining, configuration control with
approval stages and baselines, configuration status accounting and audits versus
revision and defect correction.
12.3.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect and regression) tests, together
with static and dynamic analysis and performance measurement and profiling, confirm
that the system behaves as it should. The updated repository triggers another build
process and tests. The new updates are committed to the repository, and delivered to
stakeholders and testers, when all the tests have been verified; otherwise the updates
are rolled back. The build / test process is repeated periodically to ensure no corruption
of the system.
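A minimal sketch of the extract-build-test cycle as a script is given below; the
repository URL and the build and test commands are assumptions made for illustration.

import subprocess, sys

# Sketch of one continuous-integration cycle: extract a copy, build, test,
# and only deliver when every step succeeds.  The repository URL and the
# build / test commands are assumptions for illustration.
REPO = "https://example.org/iot-system.git"
STEPS = [
    ["git", "clone", "--depth", "1", REPO, "workdir"],
    ["make", "-C", "workdir", "build"],
    ["make", "-C", "workdir", "test"],     # unit + integration tests
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        print("step failed, update rolled back:", " ".join(step))
        sys.exit(1)

print("all tests verified: deliver the build to stakeholders and testers")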
12.3.2.17 Continuous Delivery
Continuous delivery automates the pipeline from source control all the way through to
production. It includes continuous integration, application release automation, build
automation, and application life cycle management.
12.3.2.18 Virtual Reality
Virtual reality is the user interface, with supporting services, for monitoring and control
of the IoT system. It works with other technologies such as remote communication,
artificial intelligence and spatial data to support the monitoring. Errors for entities,
services, standards, techniques and communications are reported through this
interface so that corrective actions can be made remotely. The reported error is
displayed as a device stack and position, evaluated with respect to time, device, device
type and position, and after review the system structure is modified appropriately.
12.3.2.19 Commentary
The service definition set above is created once when the service is added to the
system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as must the escape sequences.
12.4 Standard Processing
Standards processing is the subfunction of extracting standards from a set of
information.
12.4.1 Logical Design
12.4.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database, using fast key-value stores and
document-oriented databases with XML to give interoperability between different
implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL
(especially for entities and services) and cleaned up with PDL. DDL, QL, DML and PDL
are implemented as services; a sketch of this follows the list below.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
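As a minimal sketch, DDL, DML and QL can be exposed as services over an embedded
store; sqlite3 stands in here for the hybrid object-relational database, and the table and
sample XML fragment are illustrative only.

import sqlite3

# Sketch of DDL / DML / QL exposed as services over an embedded store.
# sqlite3 stands in for the hybrid object-relational database; the table
# and the sample XML fragment are illustrative only.
db = sqlite3.connect(":memory:")

def ddl_define():                      # schema definition service
    db.execute("CREATE TABLE standard (name TEXT PRIMARY KEY, body TEXT)")

def dml_update(name, body):            # insert / modify service
    db.execute("INSERT OR REPLACE INTO standard VALUES (?, ?)", (name, body))

def ql_query(name):                    # retrieval service
    cur = db.execute("SELECT body FROM standard WHERE name = ?", (name,))
    return cur.fetchone()

ddl_define()
dml_update("tls", "<standard name='tls'>...</standard>")
print(ql_query("tls"))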
12.4.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
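A minimal sketch of an entity with position and non-position attributes and a simple
spatial query is given below; the haversine great-circle distance is a standard formula,
and the device records and monitoring point are invented.

from math import radians, sin, cos, asin, sqrt

# Sketch of entities carrying position and non-position attributes, with a
# simple spatial query using the standard haversine great-circle distance.
# The device records and the monitoring point are invented.
entities = [
    {"id": "cam-1",  "lat": 51.501, "lon": -0.142, "status": "ok"},
    {"id": "lock-7", "lat": 51.503, "lon": -0.119, "status": "fault"},
]

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # Earth mean radius in km

here = (51.500, -0.130)                # monitoring point
for e in entities:
    d = haversine_km(here[0], here[1], e["lat"], e["lon"])
    if e["status"] != "ok" and d < 5.0:
        print("faulty entity within 5 km:", e["id"], round(d, 2), "km")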
12.4.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet
the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.4.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
The validation cases cover well-structuredness, consistency, completeness, completion
of processes, and optimal structure for minimum processing time and maximum ease of
look up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused. Unused elements are a
waste and would cause confusion if they were known. The completeness prerequisite
eliminates this difficulty (for entity v entity, entity v service, service v technique, service v
standard, technique v standard).
d. It has a way of completing its processes: every entity has an input or an output
service.
e. There is an optimal way for structuring the system to minimise the time of processing
using a CPM process on the entities, services, techniques and standard respectively.
f. There is an optimal way for structuring the system to maximise the ease of look up by
minimising the height / depth of the network for entities, services, techniques and
standard respectively.
12.4.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges,
and hence to identify unused nodes and edges (those with zero flow through them). It
also finds the flow between the nodes and which of them form single entry or single exit
blocks (groupings) of nodes.
By introducing a node as an error sink and extra edges from the remaining nodes the
probability of error at different parts in the network system can be calculated along
with the size of error at each point of the Markov process and the error node gives an
estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications: one of these is classified as nodes and the others as edges. The
implementation is based on services applied to the schema and the database contents
separately on a timed basis.
• When entities form the nodes of the network system, the edges for the Markov
analysis can be services; standards; techniques; services and standards; services and
techniques; or services, standards and techniques.
• When services form the nodes, the edges can be entities; standards; techniques;
entities and standards; entities and techniques; or entities, standards and techniques.
• When standards form the nodes, the edges can be entities; services; techniques;
entities and services; entities and techniques; or entities, services and techniques.
• When techniques form the nodes, the edges can be entities; services; standards;
entities and services; entities and standards; or entities, services and standards.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
12.4.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules for correct and erroneous combinations.
Iterations on the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation, and specification
based on properties of the elements. Other parts are ways of defining properties of
elements or functions whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. Similarly, the
duality gives services as elements and entities as functions, with standards and
techniques validating the combinations and valuations.
Communications consist of a dialogue between a source and a destination over a
transmission medium. Protocols (rules) govern the process which may be standards or
techniques to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
12.4.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.4.1.8 Programming Language Theory
Programming language theory gives rules for formalising standards and techniques for
entities, graphics, sound and media technologies, services, database schemas,
techniques and standards, and database models. The IoT system schema and data are
covered by nesting, scope for meaning, objects, properties, methods with arguments,
the "this" operator and the concepts of synonyms, generalisation and specification,
overloaded definitions changed by context, replication, iteration, conditional meaning,
libraries, events and priority as set out in Section 14 (Appendix Database Schema).
12.4.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process based on standard /
technique definitions for each of the system defined in formal specifications as
described above.

12.4.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by a parallel check. If the entity or service is not
recovered, the entity / service stack dump is reviewed to determine whether there is an
error or the entity / service should be added to the entity / service set with an escape
sequence. Similarly, standards and techniques can be updated by experience.
12.4.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.4.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
12.4.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
12.4.1.14 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an
exact fit of the data, and smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the function
where data values are missing. They also summarize relations between the variables.
Extrapolation takes the fitted curve to calculate values beyond the range of the
observed data, with uncertainty that depends on which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific point,
angle, curvature or other higher order constraints, especially at the ends of the points
being considered. The number of constraints sets a limit on the number of combined
functions defining the fitted curve; even then there is no guarantee that all constraints
are met or the exact curve is found. Curves are assessed by various measures, a
popular procedure being the least squares method, which measures the deviations of
the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
12.4.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.4.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.4.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.19.1.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.4.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technologies such as remote communication, artificial intelligence
and spatial data to support the monitoring. Errors for entities, services, standards,
techniques and communications are reported through this interface so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position, evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
12.4.2 Physical System
12.4.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time and temporal information. It is a virtual store.
12.4.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.4.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet
the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.4.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for well-structuredness, consistency, completeness,
completion of processes, and optimal structure for minimum processing time and
maximum ease of look up, applied to entities, services, techniques, standards and
communications.
12.4.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
12.4.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.4.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.4.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.4.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.4.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.4.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.4.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.4.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items with the
network and the Markov theory sections above.
12.4.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
12.4.2.15 Configuration Management
Configuration management requires the database to include system construction and
item identity and status as entities. Services provide base-lining, configuration control with
approval stages and baselines, configuration status accounting and audits versus
revision and defect correction.
12.4.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect and regression) tests, together
with static and dynamic analysis and performance measurement and profiling, confirm
that the system behaves as it should. The updated repository triggers another build
process and tests. The new updates are committed to the repository, and delivered to
stakeholders and testers, when all the tests have been verified; otherwise the updates
are rolled back. The build / test process is repeated periodically to ensure no corruption
of the system.
12.4.2.17 Continuous Delivery
Continuous delivery automates the pipeline from source control all the way through to
production. It includes continuous integration, application release automation, build
automation, and application life cycle management.
12.4.2.18 Virtual Reality
Virtual reality is the user interface, with supporting services, for monitoring and control
of the IoT system. It works with other technologies such as remote communication,
artificial intelligence and spatial data to support the monitoring. Errors for entities,
services, standards, techniques and communications are reported through this
interface so that corrective actions can be made remotely. The reported error is
displayed as a device stack and position, evaluated with respect to time, device, device
type and position, and after review the system structure is modified appropriately.
12.4.2.19 Commentary
The standard definition set above is created once when the standard is added to the
system and changed and removed infrequently as the standard set is extended. It is
queried frequently for every service, entity and technique rule that is read. The
standard definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as must the escape sequences.
12.5 Technique Processing
Technique processing is the subfunction of extracting techniques from a set of
information.
12.5.1 Logical Design
12.5.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database, using fast key-value stores and
document-oriented databases with XML to give interoperability between different
implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL
(especially for entities and services) and cleaned up with PDL. DDL, QL, DML and PDL
are implemented as services.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
12.5.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
12.5.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet
the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should
be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.5.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
The validation cases cover well-structuredness, consistency, completeness, completion
of processes, and optimal structure for minimum processing time and maximum ease of
look up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused. Unused elements are a
waste and would cause confusion if they were known. The completeness prerequisite
eliminates this difficulty (for entity v entity, entity v service, service v technique, service v
standard, technique v standard).
d. It has a way of completing its processes: every entity has an input or an output
service.
e. There is an optimal way for structuring the system to minimise the time of processing
using a CPM process on the entities, services, techniques and standard respectively.
f. There is an optimal way for structuring the system to maximise the ease of look up by
minimising the height / depth of the network for entities, services, techniques and
standard respectively.
12.5.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges,
and hence to identify unused nodes and edges (those with zero flow through them). It
also finds the flow between the nodes and which of them form single entry or single exit
blocks (groupings) of nodes.
By introducing a node as an error sink and extra edges from the remaining nodes the
probability of error at different parts in the network system can be calculated along
with the size of error at each point of the Markov process and the error node gives an
estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications: one of these is classified as nodes and the others as edges. The
implementation is based on services, applied to the schema and the database contents
separately on a timed basis.
Each of entities, services, standards and techniques can in turn form the nodes of the
network system, with the others considered as edges for the Markov analysis:
• entities as nodes, with services, standards or techniques singly, services together
with standards or with techniques, or services, standards and techniques together as
edges;
• services as nodes, with entities, standards or techniques singly, entities together
with standards or with techniques, or entities, standards and techniques together as
edges;
• standards as nodes, with entities, services or techniques singly, entities together
with services or with techniques, or entities, services and techniques together as
edges;
• techniques as nodes, with entities, services or standards singly, entities together
with services or with standards, or entities, services and standards together as edges.
Communications are based on protocols, which are standards or techniques, so the
analyses above can be applied. Additionally, the protocols can be taken as nodes with
entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
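A minimal sketch of the error-sink analysis above, assuming a small hypothetical
transition matrix: one absorbing state is the normal exit and the other is the added
error sink, and the absorption probabilities estimate the error rate from each node.

import numpy as np

# Hypothetical transition probabilities: nodes 0-1 are transient,
# node 2 is the normal exit and node 3 is the added error sink.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:2, :2]                        # transient-to-transient flows
R = P[:2, 2:]                        # flows into the exit / error sink
N = np.linalg.inv(np.eye(2) - Q)     # expected visits; zero entries flag unused flow
B = N @ R                            # absorption probabilities
print("P(error) from each node:", B[:, 1])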
12.5.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules for correct and erroneous combinations.
Iterations of the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation and specialisation
based on properties of the elements. Other parts are ways of defining properties of
elements or functions, whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. Similarly, by
duality, services can be taken as elements and entities as functions, with standards
and techniques validating the combinations and valuations.
Communications consist of a dialogue between a source and a destination over a
transmission medium. Protocols (rules), which may be standards or techniques, govern
the process to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
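A minimal sketch of the combination rules, with hypothetical element kinds and a single
rule table; a real system would derive its rules from the standards and techniques.

VALID = {("entity", "service"), ("service", "technique"),
         ("service", "standard"), ("technique", "standard")}

def combine(a, a_kind, b, b_kind):
    """Form a larger element if the rules classify the pair as correct."""
    if (a_kind, b_kind) not in VALID:
        raise ValueError(f"erroneous combination: {a_kind} with {b_kind}")
    return {"parts": (a, b), "kind": b_kind}   # larger element

subsystem = combine("sensor", "entity", "ingest", "service")
system = combine(subsystem, "service", "TLS", "standard")  # iterate the combining
print(system["kind"])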
12.5.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.5.1.8 Programming Language Theory
Programming language theory gives rules for formalised standards and techniques for
entities, graphics, sound and media technologies, services, database schema, techniques
and standards, and database models. The IoT system schema and data are covered by
nesting, scope for meaning, objects, properties, methods with arguments, the "this"
operator and the concepts of synonyms, generalisation and specialisation, overloaded
definitions changed by context, replication, iteration, conditional meaning, libraries,
events and priority, as set out in Section 14 (Appendix – Database Schema).
12.5.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process based on standard /
technique definitions for each of the system defined in formal specifications as
described above.
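The macro-like expansion, the query on unrecognised names and the escape sequence for
extending the sets can be sketched as follows; the definition table and the \new:
escape syntax are hypothetical stand-ins for the forms in section 14.

definitions = {                       # hypothetical base element sets
    "encrypt": "technique:AES",
    "transport": "standard:TLS",
}

def translate(tokens):
    out = []
    for tok in tokens:
        if tok.startswith("\\new:"):  # escape sequence extends the set
            name, _, body = tok[5:].partition("=")
            definitions[name] = body
        elif tok in definitions:
            out.append(definitions[tok])   # expand like a macro
        else:                              # query: an error, or to be added?
            raise KeyError(f"unrecognised element: {tok!r}")
    return out

print(translate(["encrypt", "\\new:audit=service:log", "audit"]))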

12.5.1.10 Communications Theory


Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process
based on repeated analysis of the situation by a parallel check. If the entity or
service is not recovered, the entity / service stack dump is reviewed to determine
whether there is an error or whether the entity / service should be added to the entity
/ service set with an escape sequence. Similarly, standards and techniques can be
updated by experience.
12.5.1.11 Learning Theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.5.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, then evaluated
with respect to time, device, device type and position; after review the data and
processing structures are adjusted.
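A small example of such metrics, computed over hypothetical development records; the
ideal ratios would come from the accumulated statistics.

records = [                           # hypothetical development data
    {"entities": 12, "services": 30, "dev_days": 40, "errors": 9, "tests": 120},
    {"entities": 20, "services": 44, "dev_days": 65, "errors": 15, "tests": 200},
]
IDEAL_SERVICES_PER_ENTITY = 2.5       # hypothetical ideal relation

for r in records:
    ratio = r["services"] / r["entities"]
    print(f"services/entity = {ratio:.2f} (ideal {IDEAL_SERVICES_PER_ENTITY}), "
          f"errors/test = {r['errors'] / r['tests']:.3f}, "
          f"days/service = {r['dev_days'] / r['services']:.2f}")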
12.5.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is
driven by statistics collected in the database schema and the database for the items in
the network and Markov theory sections above.
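A minimal sketch of this scheme: equal initial probabilities, replaced by frequencies
once statistics have been collected (the item names and counts are hypothetical).

from collections import Counter

items = ["route-a", "route-b", "route-c"]
prob = {i: 1 / len(items) for i in items}        # initial equal heuristic

observed = Counter({"route-a": 70, "route-b": 25, "route-c": 5})
total = sum(observed.values())
prob = {i: observed[i] / total for i in items}   # statistics-driven update
print(prob)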
12.5.1.14 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an
exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the
function where data values are missing. They also summarize the relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and carries uncertainty depending on which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher-order constraints, especially at the ends of
the points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures the
deviations of the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
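A minimal numpy sketch of least-squares fitting with interpolation and extrapolation;
the data points and the quadratic model are hypothetical.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 4.2, 8.8, 16.1])              # hypothetical observations

coeffs = np.polyfit(x, y, deg=2)                      # least-squares quadratic
residual = np.sum((y - np.polyval(coeffs, x)) ** 2)   # sum of squared deviations
estimate = np.polyval(coeffs, 2.5)                    # interpolate a missing value
beyond = np.polyval(coeffs, 5.0)                      # extrapolate past the data
print(coeffs, residual, estimate, beyond)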
12.5.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.5.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.5.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.5.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technology such as remote communication, artificial intelligence and
spatial data to assist the technology. Errors for entities, services, standards,
techniques and communications are reported using this method so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position then evaluated with respect to time, device, device type, position and after
review the system structure is modified appropriately.
12.5.2 Physical System
12.5.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time and temporal information. It is a virtual store.
12.5.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.5.2.3 Search Theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.5.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for well structured-ness, consistency, completeness,
completion of processes, optimal structure for minimum time of processing and
maximum ease of look up applied to entities, services, techniques, standards and
communications.
12.5.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
12.5.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.5.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.5.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.5.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.5.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.5.2.11 Learning Theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.5.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.5.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is
driven by statistics collected in the database schema and the database for the items in
the network and Markov theory sections above.
12.5.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
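A minimal sketch of this assessment in one dimension, using numpy's Chebyshev fit and a
Pearson coefficient between the data and the fitted values; the sample data are
hypothetical.

import numpy as np

x = np.linspace(-1.0, 1.0, 20)
y = np.tanh(3 * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

cheb = np.polynomial.Chebyshev.fit(x, y, deg=7)  # Chebyshev interpolation
r = np.corrcoef(y, cheb(x))[0, 1]                # Pearson coefficient as trust measure
print(f"Pearson r = {r:.4f}")
print(cheb(1.2))                                 # extrapolation beyond the data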
12.5.2.15 Configuration Management
Configuration management  has the database include system construction and item
identity and status as entities. Services provide base-lining, configuration control with
approval stages and baselines, configuration status accounting and audits versus
revision and defect correction.
12.5.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a copy
of the system from a repository and perform a build and a set of automated tests to
ensure that the environment is valid for update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization
for dependencies. Automated unit and integration (defect and regression) tests,
together with static and dynamic tests that measure and profile performance, confirm
that the system behaves as it should. The updated repository triggers another build
process and tests. The new updates are committed to the repository and delivered to
stakeholders and testers when all the tests have been verified; otherwise the updates
are rolled back. The build / test process is repeated periodically to ensure no
corruption of the system.
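The build / test / commit-or-rollback cycle can be sketched as below; the repository
URL, make targets and rollback step are hypothetical, not a prescription for any
particular build server.

import subprocess

def run(cmd):
    """Run one pipeline step; a non-zero exit code fails the build."""
    return subprocess.run(cmd, shell=True).returncode == 0

steps = [
    "git clone https://repo.example/iot-system build",  # extract a copy
    "make -C build all docs",                           # build system and docs
    "make -C build test",                               # automated tests
]

if all(run(s) for s in steps):
    run("git -C build push origin main")     # commit and deliver the update
else:
    run("git -C build reset --hard HEAD~1")  # otherwise roll the update back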
12.5.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation and
application life-cycle management.
12.5.2.18 Virtual Reality
Virtual reality is the user interface with services for monitoring and control of the
IoT system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported using this method so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position then evaluated with respect to time, device, device type, position
and after review the system structure is modified appropriately.
12.5.2.19 Commentary
The technique definition set above is created once when the technique is added to the
system and changed and removed infrequently as the technique set is extended. It is
queried frequently for every service, standard and entity rule that is read. The
technique definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as do the escape sequences.
12.6 Communications Processing
Communications processing is the subfunction of extracting communications from a
set of information.
12.6.1 Logical Design
12.6.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database. It uses fast key-value stores and
document-oriented databases with XML to give interoperability between different
implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL
(especially for entities and services) and cleaned up with PDL.
DDL, QL, DML and PDL are implemented as services (a minimal sketch follows the list
below).
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
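A minimal sketch of the four language facilities as services over an in-memory store;
the schema and records are hypothetical and stand in for the XML forms in section 14.

store, schema = {}, {}

def ddl_define(name, fields):            # DDL: create / change a definition
    schema[name] = fields

def dml_insert(name, key, record):       # DML: insert actual data
    assert set(record) <= set(schema[name]), "record violates schema"
    store.setdefault(name, {})[key] = record

def ql_query(name, predicate):           # QL: retrieve usable information
    return [r for r in store.get(name, {}).values() if predicate(r)]

def pdl_cleanup(name):                   # PDL: administration / clean-up
    store.get(name, {}).clear()

ddl_define("entity", ["id", "position"])
dml_insert("entity", "e1", {"id": "e1", "position": (51.5, -0.1)})
print(ql_query("entity", lambda r: r["position"][0] > 50))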
12.6.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
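A minimal sketch of position and non-position attributes with a bounding-box selection
for visualization; the entities and the box are hypothetical.

entities = [                                     # hypothetical GIS entities
    {"id": "sensor-1", "lat": 51.50, "lon": -0.12, "status": "ok"},
    {"id": "sensor-2", "lat": 48.85, "lon": 2.35, "status": "fault"},
]

def in_box(e, south, west, north, east):
    """Keep entities whose position lies inside the bounding box."""
    return south <= e["lat"] <= north and west <= e["lon"] <= east

box = (49.9, -8.6, 60.9, 1.8)                    # rough UK extent
print([e["id"] for e in entities if in_box(e, *box)])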
12.6.1.3 Search Theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.6.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
The validation cases are: well-structuredness, consistency, completeness, completion of
processes, optimal structure for minimum processing time, and maximum ease of look-up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two different elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when no elements are defined but left unused; unused elements are a
waste and would cause confusion if they became known, and the completeness prerequisite
eliminates this difficulty (for entity v entity, entity v service, service v technique,
service v standard, technique v standard).
d. It has a way of completing its processes: every entity has an input or an output
service.
e. There is an optimal way of structuring the system to minimise the time of processing,
using a CPM process on the entities, services, techniques and standards respectively.
f. There is an optimal way of structuring the system to maximise the ease of look-up by
minimising the height / depth of the network for the entities, services, techniques and
standards respectively.
12.6.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges and
hence the unused nodes and edges with zero flow through them. It also finds the flow
between the nodes and identifies which nodes form single-entry or single-exit blocks
(groupings) of nodes.
By introducing a node as an error sink, with extra edges from the remaining nodes, the
probability of error at different parts of the network system can be calculated, along
with the size of error at each point of the Markov process; the error node gives an
estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications: one of these is classified as nodes and the others as edges. The
implementation is based on services, applied to the schema and the database contents
separately on a timed basis.
Each of entities, services, standards and techniques can in turn form the nodes of the
network system, with the others considered as edges for the Markov analysis:
• entities as nodes, with services, standards or techniques singly, services together
with standards or with techniques, or services, standards and techniques together as
edges;
• services as nodes, with entities, standards or techniques singly, entities together
with standards or with techniques, or entities, standards and techniques together as
edges;
• standards as nodes, with entities, services or techniques singly, entities together
with services or with techniques, or entities, services and techniques together as
edges;
• techniques as nodes, with entities, services or standards singly, entities together
with services or with standards, or entities, services and standards together as edges.
Communications are based on protocols, which are standards or techniques, so the
analyses above can be applied. Additionally, the protocols can be taken as nodes with
entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
12.6.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules for correct and erroneous combinations.
Iterations of the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation and specialisation
based on properties of the elements. Other parts are ways of defining properties of
elements or functions, whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. Similarly, by
duality, services can be taken as elements and entities as functions, with standards
and techniques validating the combinations and valuations.
Communications consist of a dialogue between a source and a destination over a
transmission medium. Protocols (rules), which may be standards or techniques, govern
the process to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
12.6.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.6.1.8 Programming Language Theory
Programming language theory gives rules for formalised standards and techniques for
entities, graphics, sound and media technologies, services, database schema, techniques
and standards, and database models. The IoT system schema and data are covered by
nesting, scope for meaning, objects, properties, methods with arguments, the "this"
operator and the concepts of synonyms, generalisation and specialisation, overloaded
definitions changed by context, replication, iteration, conditional meaning, libraries,
events and priority, as set out in Section 14 (Appendix – Database Schema).
12.6.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process based on standard /
technique definitions for each of the system defined in formal specifications as
described above.

12.6.1.10 Communications Theory


Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process
based on repeated analysis of the situation by a parallel check. If the entity or
service is not recovered, the entity / service stack dump is reviewed to determine
whether there is an error or whether the entity / service should be added to the entity
/ service set with an escape sequence. Similarly, standards and techniques can be
updated by experience.
12.6.1.11 Learning Theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.6.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, then evaluated
with respect to time, device, device type and position; after review the data and
processing structures are adjusted.
12.6.1.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is
driven by statistics collected in the database schema and the database for the items in
the network and Markov theory sections above.
12.6.1.14 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods: interpolation, for an
exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of uncertainty of the curve due to random
data errors. The fitted curves help picture the data and estimate values of the
function where data values are missing. They also summarize the relations of the
variables. Extrapolation takes the fitted curve to calculate values beyond the range of
the observed data, and carries uncertainty depending on which particular curve has been
determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher-order constraints, especially at the ends of
the points being considered. The number of constraints sets a limit on the number of
combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures the
deviations of the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
12.6.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.6.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.6.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.6.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technology such as remote communication, artificial intelligence and
spatial data to assist the technology. Errors for entities, services, standards,
techniques and communications are reported using this method so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position then evaluated with respect to time, device, device type, position and after
review the system structure is modified appropriately.
12.6.2 Physical System
12.6.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time and temporal information. It is a virtual store.
12.6.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.6.2.3 Search Theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have a minimum number of pages and facts. Facts should be
small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.6.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for well structured-ness, consistency, completeness,
completion of processes, optimal structure for minimum time of processing and
maximum ease of look up applied to entities, services, techniques, standards and
communications.
12.6.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges for schema structure and database structure for entities, services,
techniques, standards and communications respectively and review any problems
reported in the database.
12.6.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.6.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.6.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.6.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.6.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.6.2.11 Learning Theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.6.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.6.2.13 Probability Theory
Probability is driven by initial heuristics of equal probability. After that it is
driven by statistics collected in the database schema and the database for the items in
the network and Markov theory sections above.
12.6.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
12.6.2.15 Configuration Management
Configuration management  has the database include system construction and item
identity and status as entities. Services provide base-lining, configuration control with
approval stages and baselines, configuration status accounting and audits versus
revision and defect correction.
12.6.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a copy
of the system from a repository and perform a build and a set of automated tests to
ensure that the environment is valid for update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization
for dependencies. Automated unit and integration (defect and regression) tests,
together with static and dynamic tests that measure and profile performance, confirm
that the system behaves as it should. The updated repository triggers another build
process and tests. The new updates are committed to the repository and delivered to
stakeholders and testers when all the tests have been verified; otherwise the updates
are rolled back. The build / test process is repeated periodically to ensure no
corruption of the system.
12.6.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation and
application life-cycle management.
12.6.2.18 Virtual Reality
Virtual reality is the user interface with services for monitoring and control of the
IoT system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported using this method so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position then evaluated with respect to time, device, device type, position
and after review the system structure is modified appropriately.
12.6.2.19 Commentary
The communications definition set above is created once when a communications element
is added to the system, and is changed and removed infrequently as the communications
set is extended. It is queried frequently for every entity, service, standard and technique
rule that is read. The communications definition set is updated (inserted, modified, and
deleted) infrequently. The administration (maintain users, data security, performance,
data integrity, concurrency and data recovery using utilities - services) of the database
will be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as do the escape sequences.
12.7 Antivirus Processing
An antivirus prevents, detects, and removes malware such as malicious browser helper
objects, browser hijackers, ransomware, key-loggers, back-doors, rootkits, Trojan
horses, worms, malicious Layered Service Providers, dialers, fraud
tools, adware and spyware. Some products include protection from other threats, like
infected and malicious URLs, spam, scam and phishing attacks, online
identity (privacy), online banking attacks, social engineering techniques, advanced
persistent threat (APT) and botnet DDoS attacks.
Antivirus engines identify malware as follows:
a) Sandbox detection - pseudo simulation mode to classify file behaviour.
b) Data mining and machine learning algorithms classifying the behaviour of the
file.
c) Signature-based detection against a dictionary of known signatures, which authors of
oligomorphic, polymorphic and metamorphic viruses attempt to evade.
d) Heuristic detection of variants (mutations / refinements) through a generic
signature or through an inexact match to an existing signature.
e) Rootkit detection of administrator-level malware.
f) Real-time protection with automatic monitoring for suspicious activity while data is
opened or executed.
Problems arise from:
a) Rogue security applications
b) False positives detected
c) Running multiple antiviruses concurrently degrades performance and creates
conflicts.
d) Disabling virus protection when installing major updates.
e) Program conflicts that cause malfunction or impair performance and stability.
f) Support issues of interoperability of remote and network access control.
g) Damaged files after removing viruses
Other techniques comprise:
• Cloud antivirus uses lightweight agents on the protected computer, while
offloading the majority of data analysis to the provider's infrastructure.
• One approach to implementing cloud antivirus involves scanning suspicious files
using multiple antivirus engines.
• Some websites provide online scanning capability of the computer, critical
areas only, local disks, folders or files. Periodic full scans and incremental scans
give a basis of optimal effort.
Virus removal tools are available to help remove stubborn infections or certain types of
infection. A bootable rescue disk can be used to run antivirus software outside of the
installed operating system, in order to remove infections while they are dormant.
Antivirus services processing is the subfunction of extracting antivirus services from a
set of information. Antivirus software detects and eliminates known viruses when the
computer attempts to download or run an executable file, based on a list of virus
signature definitions and a heuristic algorithm derived from common virus behaviours.
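Signature-based detection with a heuristic fallback can be sketched as follows; the
digest set, byte patterns and file name are hypothetical illustrations, not real
signatures.

import hashlib

SIGNATURES = {"44d88612fea8a8f36de82e1278abb02f"}   # hypothetical known-bad MD5s
SUSPICIOUS = [b"CreateRemoteThread", b"keylog"]     # heuristic byte patterns

def scan(path):
    data = open(path, "rb").read()
    if hashlib.md5(data).hexdigest() in SIGNATURES:
        return "known virus (signature match)"
    hits = sum(1 for p in SUSPICIOUS if p in data)  # generic, inexact match
    return "suspicious (heuristic)" if hits else "clean"

print(scan("download.exe"))   # check the file before it is executed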
12.7.1 Logical Design
12.7.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database. It uses fast key-value stores and
document-oriented databases with XML to give interoperability between different
implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, is used with DML and QL
(especially for entities and services) and cleaned up with PDL.
DDL, QL, DML and PDL are implemented as services.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
• loading and executing antivirus from libraries – implemented as services
12.7.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
12.7.1.3 Search Theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation come
up to the requirements. The database system should be standardised, simple,
specialised, logically organised, concise, have minimum ambiguity, have minimum error
cases and have partitioning facilities. The facilities for systems should be modifiable to
the experience of the users and environment. If no element is found then the error is
reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of
the application system. They should be a good implementation leading to accurate,
efficient, well guided, helpful, precise, fast adjustment and able to start execution
quickly and continue for long periods. They should use previous data or automatic
controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised,
concise, have minimum ambiguity, have minimum error cases and have partitioning
facilities. The facilities for input should be modifiable to the experience of the users
and environment.
Reference documentation should have stiff spines, and small thin stiff light pages with
simple content which is adjustable to the experience of the user. The documentation
should be standardised and have minimum number of pages and facts. Facts should be
small, logically place and have minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
continue to concentrate for long periods. He should use his memory rather than
documentation. If he is forced to use documentation, he should have supple joints, long
light fingers which allow pages to slip through them when making a reference. Finger
motion should be kept gentle and within the range of movement and concentrated to
the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.7.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
The validation cases are: well-structured, consistent, complete, completing its
processes, optimally structured for minimum processing time, and optimally structured
for maximum ease of look-up (a minimal sketch of checks a-c follows the list).
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused. Unused elements are a
waste and would cause confusion if they became known; the completeness prerequisite
eliminates this difficulty (for entity v entity, entity v service, service v technique, service v
standard, technique v standard).
d. It has a way of completing its processes – every entity has an input or an output
service
e. There is an optimal way of structuring the system to minimise the time of processing,
using a CPM (critical path method) process on the entities, services, techniques and
standards respectively.
f. There is an optimal way of structuring the system to maximise the ease of look-up, by
minimising the height / depth of the network for the entities, services, techniques and
standards respectively.
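The sketch below, in Python, illustrates checks a-c on a small directed element graph;
the example nodes and edges are hypothetical:

    # Sketch: graph validation checks (a)-(c) on a directed element graph.
    from collections import defaultdict

    edges = [("entityA", "service1"), ("entityB", "service1"),
             ("service1", "standardX")]
    nodes = {"entityA", "entityB", "service1", "standardX", "techniqueY"}

    incoming = defaultdict(set)
    connected = set()
    for src, dst in edges:
        incoming[dst].add(src)
        connected.update((src, dst))

    well_structured = nodes <= connected   # (a) every element is connected
    inconsistent = {n for n, srcs in incoming.items() if len(srcs) > 1}
    unused = nodes - connected             # (c) defined but never used

    # here: not well structured, service1 accessed from two elements (b),
    # and techniqueY is defined but unused (c)
    print(well_structured, inconsistent, unused)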
12.7.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges,
and hence the unused nodes and edges (those with zero flow). It also finds the flow
between the nodes and identifies which nodes form single-entry or single-exit blocks
(groupings).
By introducing a node as an error sink, with extra edges from the remaining nodes, the
probability of error at different parts of the network system can be calculated, along
with the size of error at each point of the Markov process; the error node gives an
estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. One class is taken as the nodes and the others as the edges. The
implementation is based on services for the schema and the database contents
separately, on a timed basis.
Any one class may supply the nodes, with any combination of the remaining classes
supplying the edges for the Markov analysis:
• entities as nodes, with services, standards, techniques, or combinations of them, as
edges;
• services as nodes, with entities, standards, techniques, or combinations of them, as
edges;
• standards as nodes, with entities, services, techniques, or combinations of them, as
edges;
• techniques as nodes, with entities, services, standards, or combinations of them, as
edges.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
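A minimal sketch of the error-sink construction as an absorbing Markov chain, using
numpy's fundamental-matrix calculation to give the flow through each node and the
probability of reaching the error sink; the transition probabilities are illustrative:

    # Sketch: absorbing-Markov-chain estimate of flow and error probability.
    import numpy as np

    # states 0-2: ordinary nodes; state 3: error sink; state 4: completion
    P = np.array([[0.0, 0.7, 0.2, 0.05, 0.05],
                  [0.0, 0.0, 0.8, 0.10, 0.10],
                  [0.0, 0.0, 0.0, 0.20, 0.80],
                  [0.0, 0.0, 0.0, 1.00, 0.00],
                  [0.0, 0.0, 0.0, 0.00, 1.00]])

    Q, R = P[:3, :3], P[:3, 3:]
    N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
    flow = N[0]                        # flow through each node starting at node 0
    absorb = (N @ R)[0]                # [P(error), P(completion)] from node 0
    print(flow, absorb)                # near-zero flow flags unused nodes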
12.7.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules determining whether they are correct
or erroneous.
Iterations on the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation, and specification
based on properties of the elements. Other parts are ways of defining properties of
elements or functions whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. By duality,
services can be taken as elements and entities as functions, with standards and
techniques validating the combinations and valuations.
Communications consists of a dialogue between a source and a destination over a
transmission medium. Protocols (rules) govern the process which may be standards or
techniques to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
12.7.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.7.1.8 Programming Language Theory
Programming language theory gives rules for formalising standards and techniques for
entities, graphics, sound and media technologies, services, database schemas and
database models. The IoT system schema and data are covered by nesting, scope for
meaning, objects, properties, methods with arguments, the "this" operator, and the
concepts of synonyms, generalisation and specification, overloaded definitions
changed by context, replication, iteration, conditional meaning, libraries, events and
priority, as set out in section 14 (Appendix - Database Schema).
12.7.1.9 Compiler Technology Theory
A compiler translates high-level language source programs into target code. It takes
the formal definition in section 14 (Appendix - Database Scheme) to the physical
database representation, starting with a set of base elements (the entities, services,
techniques and standards of the system) and a sequence for extending the entity,
service, technique and standard sets.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process, based on standard /
technique definitions for each part of the system, defined in formal specifications as
described above.
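As a sketch of the translation step, the following Python fragment maps a
line-oriented definition onto the base sets and queries unrecognised kinds; the
"kind name: parts" grammar is a hypothetical stand-in for the appendix schema:

    # Sketch: translating formal definitions into base-set entries, with the
    # query-on-unrecognised behaviour described above.
    BASE = {"entity": set(), "service": set(), "technique": set(), "standard": set()}

    def translate(line):
        head, _, parts = line.partition(":")
        kind, name = head.split()
        if kind not in BASE:
            # unrecognised input: query whether it is an error or an extension
            raise ValueError(f"unrecognised kind, query the input: {kind!r}")
        BASE[kind].add(name)
        return kind, name, [p.strip() for p in parts.split(",") if p.strip()]

    print(translate("service antivirus_scan: signature_check, heuristic_check"))
    print(translate("entity sensor_1:"))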

12.7.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by a parallel check. If the entity or service is not
recovered, the entity / service stack dump is reviewed to determine whether there is an
error or the entity / service should be added to the entity / service set with an escape
sequence. Similarly, standards and techniques can be updated by experience.
12.7.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.7.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
12.7.1.13 Probability Theory
Probability is driven by an initial heuristic of equal probability. After that it is driven by
statistics collected into the database schema and the database for items, with the
network theory and Markov theory sections above.
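A sketch of this scheme: each edge starts from the equal-probability heuristic and is
re-estimated as usage statistics accumulate (the edge names and counts are
illustrative):

    # Sketch: equal-probability prior, then estimation from collected counts.
    from collections import Counter

    edges = ["e1", "e2", "e3"]
    counts = Counter()                        # filled from collected statistics

    def edge_probability(edge):
        total = sum(counts.values())
        if total == 0:                        # initial heuristic: equal probability
            return 1.0 / len(edges)
        return counts[edge] / total

    counts.update(["e1", "e1", "e3"])         # statistics gathered over time
    print([round(edge_probability(e), 2) for e in edges])  # [0.67, 0.0, 0.33]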
12.7.1.14 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of the uncertainty of the curve due to
random data errors. The fitted curves help picture the data, estimate values of a
function where data values are missing, and summarize the relations of the variables.
Extrapolation uses the fitted curve to calculate values beyond the range of the
observed data, and carries extra uncertainty depending on which particular curve has
been determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher-order constraints, especially at the ends of the
range of points being considered. The number of constraints sets a limit on the number
of combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures the
deviations from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
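A minimal least-squares example with numpy, fitting a quadratic to illustrative points,
reporting the residual sum of squares as an assessment measure, and extrapolating
beyond the observed range:

    # Sketch: least-squares quadratic fit, residual assessment, extrapolation.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])    # illustrative observations

    coeffs = np.polyfit(x, y, deg=2)             # minimises squared deviations
    residuals = y - np.polyval(coeffs, x)
    rss = float(residuals @ residuals)           # assessment of the fit
    print(coeffs, rss)

    print(np.polyval(coeffs, 5.0))               # extrapolation beyond the data;
                                                 # uncertainty grows out of range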
12.7.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.7.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.7.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.7.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technology such as remote communication, artificial intelligence and
spatial data to assist the technology. Errors for entities, services, standards,
techniques and communications are reported through this interface so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position, evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
12.7.2 Physical System
12.7.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time, loading and executing antivirus from libraries
and temporal information. It is a virtual store.
12.7.2.2 Geographic Information Systems
Entities use GIS data, while communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.7.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet the
requirements. The database system should be standardised, simple, specialised,
logically organised and concise, with minimum ambiguity, minimum error cases and
partitioning facilities. The facilities for systems should be adaptable to the experience
of the users and the environment. If no element is found then the error is reported and,
after review, the element is added to the system.
The utilities and services should be well suited to the specialised field of the
application system. They should be well implemented: accurate, efficient, well guided,
helpful, precise, fast to adjust, and able to start execution quickly and continue for
long periods. They should use previous data or automatic controls rather than human
intervention.
The input system should be standardised, simple, specialised, logically organised and
concise, with minimum ambiguity, minimum error cases and partitioning facilities. The
facilities for input should be adaptable to the experience of the users and the
environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages
with simple content which is adjustable to the experience of the user. The
documentation should be standardised, with a minimum number of pages and facts.
Facts should be small, logically placed and reachable with a minimum number of
reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
concentrate for long periods, relying on memory rather than documentation. If forced
to use documentation, the user should have supple joints and long, light fingers which
allow pages to slip through them when making a reference; finger motion should be
gentle, within the range of movement and concentrated in the fingers. The user should
have natural dexterity, aptitude and fast recall.
12.7.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for good structure, consistency, completeness,
completion of processes, and optimal structure for minimum processing time and
maximum ease of look-up, applied to entities, services, techniques, standards and
communications.
12.7.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges in the schema structure and database structure for entities, services,
techniques, standards and communications respectively, and to review any problems
reported in the database.
12.7.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.7.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.7.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.7.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.7.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.7.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.7.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.7.2.13 Probability Theory
Probability is driven by an initial heuristic of equal probability. After that it is driven by
statistics collected into the database schema and the database for items, with the
network theory and Markov theory sections above.
12.7.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service, and then the correct connection from the group of routes to apply to a
communications service.
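A sketch of the Chebyshev-based fitting named above, using numpy's chebyshev module
on an illustrative data set (Runge's function, a standard interpolation test case):

    # Sketch: least-squares fit in the Chebyshev basis, then interpolation
    # and (cautious) extrapolation.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    x = np.linspace(-1, 1, 9)
    y = 1.0 / (1.0 + 25.0 * x**2)          # Runge's function as sample data

    coef = C.chebfit(x, y, deg=6)          # fit in the Chebyshev basis
    print(C.chebval(0.05, coef))           # interpolated value near the centre
    print(C.chebval(1.10, coef))           # extrapolated value, to be treated warily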
12.7.2.15 Configuration Management
Configuration management has the database record system construction and item
identity and status as entities. Services provide baselining, configuration control with
approval stages and baselines, configuration status accounting, and audits against
revisions and defect corrections.
12.7.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for the update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect or regression) tests, together
with static and dynamic tests and performance measurement and profiling, confirm
that the system behaves as it should. The updated repository triggers another build
process and set of tests. New updates are committed to the repository and delivered to
stakeholders and testers when all the tests have passed; otherwise the updates are
rolled back. The build / test process is repeated periodically to ensure no corruption of
the system.
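A sketch of this extract-build-test cycle, driving hypothetical shell commands through
Python's subprocess module; the command names are placeholders for the project's
real build and test tooling:

    # Sketch: extract, build and test; commit only if every step succeeds.
    import subprocess

    STEPS = [
        ["git", "pull", "--ff-only"],      # extract a copy from the repository
        ["make", "build"],                 # build system and documentation
        ["make", "test"],                  # unit and integration tests
    ]

    def run_pipeline():
        for step in STEPS:
            if subprocess.run(step).returncode != 0:
                return False               # failed step: roll back, do not commit
        return True                        # all tests passed: deliver the build

    if __name__ == "__main__":
        print("delivered" if run_pipeline() else "rolled back")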
12.7.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life-cycle management.
12.7.2.18 Virtual Reality
Virtual reality is the user interface, with services, for monitoring and control of the IoT
system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported through this interface so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position, evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
12.7.2.19 Commentary
The antivirus service definition set above is created once when the service is added to
the system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - Antivirus services) of the database will be done on a
regular basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as must the escape sequences.
12.8 Firewall Processing
A firewall monitors and controls incoming and outgoing network traffic in a system
using predetermined rules to provide security giving a gateway between trusted and
untrusted networks.
Software firewalls use a daemon / service of the operating system or an agent to give
endpoint security / protection, working at the application level of the TCP/IP stack,
whilst hardware units filter traffic between networks at a low level of the TCP/IP
protocol stack, blocking packets from passing through the firewall unless they match
the administrator-established rule set.
An application firewall uses software processes with defined sockets and process
identities to filter the connections between the application layer and the lower layers
of the OSI model. It can allow or block on a per-process basis instead of filtering
connections on a per-port basis as hardware firewalls do.
Application firewalls rely on mandatory access control, or sandboxing, to protect
vulnerable services.
A proxy server can act as a firewall by responding to input packets like an application
while blocking other packets. It is a gateway between networks for a specific network
application, providing network address translation for network users.
Firewall services processing is the subfunction of extracting firewall services from a
set of information. A firewall is a network security system that monitors and controls
incoming and outgoing network traffic based on predetermined security rules set up by
the administrator.
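A minimal rule-based filter in Python illustrating first-match semantics with a
default-deny rule; the rule set and packet records are simplified placeholders for real
packet or socket filtering:

    # Sketch: rule-based packet filtering; the first matching rule wins.
    RULES = [
        {"action": "allow", "port": 443, "proto": "tcp"},   # HTTPS
        {"action": "allow", "port": 53,  "proto": "udp"},   # DNS
        {"action": "deny",  "port": None, "proto": None},   # default deny
    ]

    def filter_packet(packet):
        for rule in RULES:
            if rule["port"] in (None, packet["port"]) and \
               rule["proto"] in (None, packet["proto"]):
                return rule["action"]

    print(filter_packet({"port": 443, "proto": "tcp"}))  # allow
    print(filter_packet({"port": 23,  "proto": "tcp"}))  # deny (telnet blocked)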
12.8.1 Logical Design
12.8.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database. It uses fast key-value stores and
document-oriented databases with XML to give interoperability between different
implementations.
The logical database structure follows the definition in section 14 (Appendix –
Database Scheme) and is extended with an escape sequence in section 14 (Appendix –
Database Scheme). The schema is maintained with DDL, used with DML and QL
(especially for entities and services), and cleaned up with PDL. DDL, QL, DML and PDL
are implemented as services.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
• loading and executing firewall from libraries – implemented as services
12.8.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete
objects and continuous fields as raster images and vector data.
Entity data includes positional and non-positional attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
12.8.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet the
requirements. The database system should be standardised, simple, specialised,
logically organised and concise, with minimum ambiguity, minimum error cases and
partitioning facilities. The facilities for systems should be adaptable to the experience
of the users and the environment. If no element is found then the error is reported and,
after review, the element is added to the system.
The utilities and services should be well suited to the specialised field of the
application system. They should be well implemented: accurate, efficient, well guided,
helpful, precise, fast to adjust, and able to start execution quickly and continue for
long periods. They should use previous data or automatic controls rather than human
intervention.
The input system should be standardised, simple, specialised, logically organised and
concise, with minimum ambiguity, minimum error cases and partitioning facilities. The
facilities for input should be adaptable to the experience of the users and the
environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages
with simple content which is adjustable to the experience of the user. The
documentation should be standardised, with a minimum number of pages and facts.
Facts should be small, logically placed and reachable with a minimum number of
reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
concentrate for long periods, relying on memory rather than documentation. If forced
to use documentation, the user should have supple joints and long, light fingers which
allow pages to slip through them when making a reference; finger motion should be
gentle, within the range of movement and concentrated in the fingers. The user should
have natural dexterity, aptitude and fast recall.
12.8.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
The validation cases are: well-structured, consistent, complete, completing its
processes, optimally structured for minimum processing time, and optimally structured
for maximum ease of look-up.
a. The system is well structured when an element is connected to other elements (for
entity v entity, entity v service, service v technique, service v standard, technique v
standard).
b. It is consistent when an element is not accessed from two other different elements
(for entity v entity, entity v service, service v technique, service v standard, technique v
standard).
c. It is complete when there are no elements defined but unused. Unused elements are a
waste and would cause confusion if they became known; the completeness prerequisite
eliminates this difficulty (for entity v entity, entity v service, service v technique, service v
standard, technique v standard).
d. It has a way of completing its processes – every entity has an input or an output
service
e. There is an optimal way of structuring the system to minimise the time of processing,
using a CPM (critical path method) process on the entities, services, techniques and
standards respectively.
f. There is an optimal way of structuring the system to maximise the ease of look-up, by
minimising the height / depth of the network for the entities, services, techniques and
standards respectively.
12.8.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges,
and hence the unused nodes and edges (those with zero flow). It also finds the flow
between the nodes and identifies which nodes form single-entry or single-exit blocks
(groupings).
By introducing a node as an error sink, with extra edges from the remaining nodes, the
probability of error at different parts of the network system can be calculated, along
with the size of error at each point of the Markov process; the error node gives an
estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. One class is taken as the nodes and the others as the edges. The
implementation is based on services for the schema and the database contents
separately, on a timed basis.
Any one class may supply the nodes, with any combination of the remaining classes
supplying the edges for the Markov analysis:
• entities as nodes, with services, standards, techniques, or combinations of them, as
edges;
• services as nodes, with entities, standards, techniques, or combinations of them, as
edges;
• standards as nodes, with entities, services, techniques, or combinations of them, as
edges;
• techniques as nodes, with entities, services, standards, or combinations of them, as
edges.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
12.8.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be a basis of a system.
Individual services can be validated using algebraic theory. Restrictions on valid basic
elements are defined. Combinations of the elements forming larger elements are
classified as systems or subsystems, with rules determining whether they are correct
or erroneous.
Iterations on the combination process for more complex elements are validated against
rules giving meaning to elements and combinations of them. Relations are derived from
another set of functions which give links such as generalisation, and specification
based on properties of the elements. Other parts are ways of defining properties of
elements or functions whilst some apply to the scope of elements and functions.
When entities are considered as elements and services as functions, standards and
techniques define the combinations, validation rules and valuations. By duality,
services can be taken as elements and entities as functions, with standards and
techniques validating the combinations and valuations.
Communications consists of a dialogue between a source and a destination over a
transmission medium. Protocols (rules) govern the process which may be standards or
techniques to give entities and services.
If an element or function is not found then the error is reported as a stack dump and
after review the rule structure is adjusted.
12.8.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.8.1.8 Programming Language Theory
Programming language theory gives rules for formalising standards and techniques for
entities, graphics, sound and media technologies, services, database schemas and
database models. The IoT system schema and data are covered by nesting, scope for
meaning, objects, properties, methods with arguments, the "this" operator, and the
concepts of synonyms, generalisation and specification, overloaded definitions
changed by context, replication, iteration, conditional meaning, libraries, events and
priority, as set out in section 14 (Appendix - Database Schema).
12.8.1.9 Compiler Technology Theory
A compiler translates high-level language source programs into target code. It takes
the formal definition in section 14 (Appendix - Database Scheme) to the physical
database representation, starting with a set of base elements (the entities, services,
techniques and standards of the system) and a sequence for extending the entity,
service, technique and standard sets.

Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process, based on standard /
technique definitions for each part of the system, defined in formal specifications as
described above.

12.8.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by a parallel check. If the entity or service is not
recovered, the entity / service stack dump is reviewed to determine whether there is an
error or the entity / service should be added to the entity / service set with an escape
sequence. Similarly, standards and techniques can be updated by experience.
12.8.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.8.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with
respect to time, device, device type and position, and after review the data and
processing structures are adjusted.
12.8.1.13 Probability Theory
Probability is driven by an initial heuristic of equal probability. After that it is driven by
statistics collected into the database schema and the database for items, with the
network theory and Markov theory sections above.
12.8.1.14 Curve Fitting
Curve fitting constructs a curve or mathematical function best fitting a series of given
data points, subject to constraints. It uses two main methods, namely interpolation, for
an exact fit of the data, or smoothing, for a "smooth" curve function approximating the
data. Regression analysis gives a measure of the uncertainty of the curve due to
random data errors. The fitted curves help picture the data, estimate values of a
function where data values are missing, and summarize the relations of the variables.
Extrapolation uses the fitted curve to calculate values beyond the range of the
observed data, and carries extra uncertainty depending on which particular curve has
been determined. Curve fitting relies on various types of constraints such as a specific
point, angle, curvature or other higher-order constraints, especially at the ends of the
range of points being considered. The number of constraints sets a limit on the number
of combined functions defining the fitted curve; even then there is no guarantee that all
constraints are met or that the exact curve is found. Curves are assessed by various
measures, a popular procedure being the least squares method, which measures the
deviations from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
12.8.1.15 Configuration Management
Configuration management  follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.8.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.8.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section
7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.8.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It
works with other technology such as remote communication, artificial intelligence and
spatial data to assist the technology. Errors for entities, services, standards,
techniques and communications are reported through this interface so that corrective
actions can be made remotely. The reported error is displayed as a device stack and
position, evaluated with respect to time, device, device type and position, and after
review the system structure is modified appropriately.
12.8.2 Physical System
12.8.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL
multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time, loading and executing firewall from libraries and
temporal information. It is a virtual store.
12.8.2.2 Geographic Information Systems
Entities use GIS data, while communications protocols, services, standards and
techniques collect, process and report the GIS information for visualization and
analysis.
12.8.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method
of assessing how well the process (physical database) and the documentation meet the
requirements. The database system should be standardised, simple, specialised,
logically organised and concise, with minimum ambiguity, minimum error cases and
partitioning facilities. The facilities for systems should be adaptable to the experience
of the users and the environment. If no element is found then the error is reported and,
after review, the element is added to the system.
The utilities and services should be well suited to the specialised field of the
application system. They should be well implemented: accurate, efficient, well guided,
helpful, precise, fast to adjust, and able to start execution quickly and continue for
long periods. They should use previous data or automatic controls rather than human
intervention.
The input system should be standardised, simple, specialised, logically organised and
concise, with minimum ambiguity, minimum error cases and partitioning facilities. The
facilities for input should be adaptable to the experience of the users and the
environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages
with simple content which is adjustable to the experience of the user. The
documentation should be standardised, with a minimum number of pages and facts.
Facts should be small, logically placed and reachable with a minimum number of
reference strategies.
The user should be experienced, particularly in the specialised field of the system and
its reference documentation. The user should be a good worker (accurate, efficient,
good memory, careful, precise, fast learner) who is able to settle to work quickly and
concentrate for long periods, relying on memory rather than documentation. If forced
to use documentation, the user should have supple joints and long, light fingers which
allow pages to slip through them when making a reference; finger motion should be
gentle, within the range of movement and concentrated in the fingers. The user should
have natural dexterity, aptitude and fast recall.
12.8.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the
database schema and the data for good structure, consistency, completeness,
completion of processes, and optimal structure for minimum processing time and
maximum ease of look-up, applied to entities, services, techniques, standards and
communications.
12.8.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise nodes
and edges in the schema structure and database structure for entities, services,
techniques, standards and communications respectively, and to review any problems
reported in the database.
12.8.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints
on input using compiler technology. Entities are analysed with set theory to verify
constraints on them. The system is combined with logic flows to verify that the outputs
and following inputs are consistent. Techniques and standards follow the integration
process. Communications follow the same integration actions.
12.8.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.8.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.8.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.8.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.8.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.8.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.8.2.13 Probability Theory
Probability is driven by an initial heuristic of equal probability. After that it is driven by
statistics collected into the database schema and the database for items, with the
network theory and Markov theory sections above.
12.8.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service, and then the correct connection from the group of routes to apply to a
communications service.
12.8.2.15 Configuration Management
Configuration management has the database record system construction and item
identity and status as entities. Services provide baselining, configuration control with
approval stages and baselines, configuration status accounting, and audits against
revisions and defect corrections.
12.8.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for the update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable versioned clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect or regression) tests, together
with static and dynamic tests and performance measurement and profiling, confirm
that the system behaves as it should. The updated repository triggers another build
process and set of tests. New updates are committed to the repository and delivered to
stakeholders and testers when all the tests have passed; otherwise the updates are
rolled back. The build / test process is repeated periodically to ensure no corruption of
the system.
12.8.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation
and application life-cycle management.
12.8.2.18 Virtual Reality
Virtual reality is the user interface, with services, for monitoring and control of the IoT
system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported through this interface so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position, evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
12.8.2.19 Commentary
The firewall service definition set above is created once when the service is added to
the system and changed and removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as must the escape sequences.
12.9 APIDS Processing
APIDS consists of:
• Artificial immune system
• Bypass switch
• Denial-of-service attack
• DNS analytics
• Intrusion Detection Message Exchange Format
• Protocol-based intrusion detection system (PIDS)
• Real-time adaptive security
• Security management
• Software-defined protection
• Defensive programming
• Secure input and output handling
• Security bug
APIDS detects intrusions by monitoring and analysing a specific application protocol or protocols in a computing system. An APIDS monitors the dynamic behavior and state of the protocol and consists of a system / agent placed between a process and its servers.
It monitors and analyzes the protocol between connected devices. A profile of the application protocol can be formed from the data log, either automatically with statistical analysis or by a manual process, to give a fingerprinted application. Anything that happens outside the bounds of the fingerprinted profile can be escalated as an alert if the application has been subverted; if the application has legitimately changed, the fingerprint is changed with it. APIDS services processing is the subfunction of extracting APIDS services from a set of information.
The APIDS watches the application protocols in use by the system for the dynamic behavior and state of the protocol, using a system / agent positioned between processes at strategic points to detect malicious activity or policy violations. The APIDS logs and constrains the valid use of a protocol for security information and event management alarms, and can be extended to learn and reduce large sets to a practical, understandable subset used for fingerprinting and deducing any change, whether valid or invalid.
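A minimal sketch of this fingerprinting step, assuming a log of one protocol command per record (the threshold and commands are invented for illustration):

from collections import Counter

def fingerprint(log, min_share=0.01):
    """Profile: keep commands carrying at least min_share of observed traffic."""
    counts = Counter(log)
    total = sum(counts.values())
    return {cmd for cmd, n in counts.items() if n / total >= min_share}

def check(command, profile):
    """Escalate anything outside the fingerprinted profile."""
    return "ok" if command in profile else "ALERT: outside fingerprinted profile"

profile = fingerprint(["GET", "GET", "POST", "GET", "HEAD", "GET"])
print(check("DELETE", profile))   # escalated as an alert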
Detection methods are signature-based, statistical anomaly-based, and stateful protocol analysis. Signature-based IDS monitors packets in the network and compares them with pre-configured and pre-determined attack patterns known as signatures (recognizing bad patterns, such as malware). Statistical anomaly-based detection (subject to false positive alarms) checks network traffic and compares it against an established normal baseline for the network re bandwidth, protocols, etc. (detecting deviations from a model of good traffic, often relying on machine learning), including unusual traffic flows, distributed denial of service attacks, some malware and policy violations. Stateful protocol analysis detection shows protocol deviation states by comparing observed events with pre-determined sets of activity. Other tasks take a snapshot of system files, match it to the previous snapshot and generate an alert if any are modified or deleted. If an attack is identified, or there is abnormal behavior, an alert is generated and harmful detected packets are dropped.
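The first two methods can be sketched as follows (the signatures, baseline and tolerance are invented; a real deployment would draw them from a signature library and learned traffic statistics):

SIGNATURES = [b"\x90\x90\x90\x90", b"' OR '1'='1"]   # known bad patterns

def signature_match(packet: bytes) -> bool:
    """Signature-based: compare the packet with pre-determined patterns."""
    return any(sig in packet for sig in SIGNATURES)

def bandwidth_anomaly(observed_bps: float, baseline_bps: float,
                      tolerance: float = 3.0) -> bool:
    """Anomaly-based: flag traffic well above the normal baseline."""
    return observed_bps > tolerance * baseline_bps

print(signature_match(b"GET /?q=' OR '1'='1"))    # True: drop and alert
print(bandwidth_anomaly(9_000_000, 1_000_000))    # True: possible DDoS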
Limitations are derived from line noise, false positives shrouding real attacks, lack of updated signature libraries, weak identification and authentication access mechanisms / protocols, encrypted packets, faked or scrambled IP packets, invalid data and TCP/IP stack attacks.
Evasion techniques are derived from fragmentation of packets, avoiding defaults,
coordinated, low-bandwidth attacks, address spoofing/proxying and pattern change
evasion.
12.9.1 Logical Design
12.9.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database. It uses fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
The logical database structure follows the definition in section 14 (Appendix – Database Scheme) and is extended with an escape sequence in section 14 (Appendix – Database Scheme). The schema is maintained with DDL, used with DML and QL (especially for entities and services) and cleaned up with PDL.
DDL, QL, DML and PDL are implemented as services; a minimal sketch follows the list below.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
• loading and executing APIDS from libraries – implemented as services
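A minimal sketch of the four languages exposed as services, backed here by an in-memory store purely for illustration (a real implementation would sit on the hybrid object-relational database described above):

class DatabaseService:
    """DDL / QL / DML / PDL as callable services over a toy store."""
    def __init__(self):
        self.schema, self.data = {}, {}

    def ddl(self, name, fields):            # define the organization of data
        self.schema[name] = fields
        self.data.setdefault(name, [])

    def dml(self, name, record):            # insert actual data
        self.data[name].append(record)

    def ql(self, name, predicate):          # retrieve usable information
        return [r for r in self.data.get(name, []) if predicate(r)]

    def pdl(self, name):                    # administer: drop dead records
        self.data[name] = [r for r in self.data[name] if r is not None]

db = DatabaseService()
db.ddl("entity", ["name", "status"])
db.dml("entity", {"name": "sensor-1", "status": "up"})
up = db.ql("entity", lambda r: r["status"] == "up")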
12.9.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
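A small sketch of an entity carrying position and non-position attributes, with a great-circle distance to support the positioning analysis (the entity fields are assumptions for illustration):

import math
from dataclasses import dataclass

@dataclass
class GisEntity:
    name: str
    lat: float
    lon: float
    attributes: dict        # non-position attributes

def distance_km(a: GisEntity, b: GisEntity) -> float:
    """Great-circle distance between two entities (haversine formula)."""
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))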
12.9.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method of assessing how well the process (physical database) and the documentation come up to the requirements. The database system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for systems should be modifiable to the experience of the users and environment. If no element is found then the error is reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of the application system. They should be well implemented: accurate, efficient, well guided, helpful, precise, fast to adjust, and able to start execution quickly and continue for long periods. They should use previous data or automatic controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for input should be modifiable to the experience of the users and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. He should use his memory rather than documentation. If he is forced to use documentation, he should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.9.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
Validation cases are well-structuredness, consistency, completeness, completion of processes, and optimal structure for minimum time of processing and maximum ease of look up (a sketch of the first three checks follows the list).
a. The system is well structured when every element is connected to other elements (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
b. It is consistent when an element is not accessed from two other different elements (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
c. It is complete when there are no elements defined but unused. Unused elements are a waste and would cause confusion if they were known; the completeness prerequisite eliminates this difficulty (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
d. It has a way of completing its processes: every entity has an input or an output service.
e. There is an optimal way of structuring the system to minimise the time of processing, using a CPM process on the entities, services, techniques and standards respectively.
f. There is an optimal way of structuring the system to maximise the ease of look up, by minimising the height / depth of the network for entities, services, techniques and standards respectively.
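A minimal sketch of checks a–c over a simple edge list (the element and edge names are illustrative):

def validate(elements, edges):
    """An empty result means every element is connected (a), nothing is
    accessed from two different elements (b) and nothing is unused (c)."""
    touched = {n for e in edges for n in e}
    parents, issues = {}, []
    for src, dst in edges:
        if dst in parents and parents[dst] != src:      # case (b)
            issues.append(f"inconsistent: {dst} accessed from {parents[dst]} and {src}")
        parents[dst] = src
    for el in elements:
        if el not in touched:                           # case (c)
            issues.append(f"incomplete: {el} defined but unused")
    return issues

print(validate(["a", "b", "c", "d"], [("a", "b"), ("b", "c")]))
# ['incomplete: d defined but unused']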
12.9.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges, and hence the unused nodes and edges with zero flow through them. It also finds the flow between the nodes and which are single-entry or single-exit blocks (groupings) of nodes.
By introducing a node as an error sink, with extra edges from the remaining nodes, the probability of error at different parts of the network system can be calculated, along with the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges. The implementation is based on services for the schema and
the database contents separately on a timed basis.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
When any one of entities, services, standards or techniques forms the nodes of the network system, the remaining classes can be considered, singly or in combination, as the edges for the Markov analysis.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
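A minimal sketch of the error-sink analysis described above, assuming a small transition matrix with an absorbing error node (the probabilities are invented):

import numpy as np

# Transition matrix over [node0, node1, error-sink]; each row sums to 1.
P = np.array([[0.00, 0.95, 0.05],     # node0 -> node1, or error
              [0.93, 0.00, 0.07],     # node1 -> node0, or error
              [0.00, 0.00, 1.00]])    # the error sink is absorbing

state = np.array([1.0, 0.0, 0.0])     # start all flow at node0
for _ in range(5):                    # iterate the chain a few steps
    state = state @ P
print(f"P(error within 5 steps) = {state[2]:.3f}")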
12.9.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be the basis of a system. Individual services can be validated using algebraic theory. Restrictions on valid basic elements are defined. Combinations of the elements forming larger elements are classified as systems or subsystems, with rules for correct and erroneous combinations. Iterations on the combination process for more complex elements are validated against rules giving meaning to elements and combinations of them. Relations are derived from another set of functions which give links such as generalisation and specification based on properties of the elements. Other parts are ways of defining properties of elements or functions, whilst some apply to the scope of elements and functions.
When entities are considered as elements and functions are services, standards and techniques define the combinations, validation rules and valuations. Similarly the duality gives services as elements, entities as functions, and standards and techniques validate combinations and valuations.
Communications consist of a dialogue between a source and a destination over a transmission medium. Protocols (rules), which may be standards or techniques, govern the process to give entities and services.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
12.9.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.9.1.8 Programming Language Theory
Programming language theory gives rules for formalised standard and technique for
entities, graphics, sound and media technologies, services, database schema,
techniques and standards and database models. The IoT system schema and data are covered by nesting, scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification, overloaded definitions changed by context, replication, iteration, conditional meaning, libraries, events and priority, as set out in Section 14 (Appendix – Database Schema).
12.9.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.
Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process, based on the standard / technique definitions for each part of the system, defined in formal specifications as described above.
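A minimal sketch of this recognise-or-query step; the token set and the "\new:" escape syntax are assumptions for illustration, not the schema's actual escape sequence:

KNOWN = {"sensor", "gateway", "mqtt", "tls"}   # invented base definitions

def translate(tokens):
    """Translate known tokens; extend the set on escape; query the rest."""
    out, errors = [], []
    for tok in tokens:
        if tok.startswith("\\new:"):            # escape sequence: extend set
            KNOWN.add(tok[len("\\new:"):])
        elif tok in KNOWN:
            out.append(tok.upper())             # stand-in for real translation
        else:
            errors.append(f"query: '{tok}' - error or new definition?")
    return out, errors

out, errors = translate(["sensor", "relay", "\\new:relay", "relay"])
# out == ['SENSOR', 'RELAY']; errors flags the first, unrecognised 'relay'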
12.9.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by some parallel check. If the entity or service is
not recovered, the entity / service stack dump is reviewed to determine if there is an error or the entity / service should be added to the entity / service set with an escape sequence. Similarly standards and techniques can be updated by experience.
12.9.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.9.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
12.9.1.13 Probability Theory
Probability is driven initially by heuristics of equal probability. After that it is driven by statistics collected into the database schema and the database for items, with the network and Markov theory sections above.
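A minimal sketch of this scheme: equal initial probabilities, shifted by collected usage counts (Laplace smoothing keeps unseen items non-zero; the item names are invented):

def probabilities(items, counts):
    """Uniform prior updated by observed counts (Laplace smoothing)."""
    total = sum(counts.get(i, 0) for i in items)
    k = len(items)
    return {i: (counts.get(i, 0) + 1) / (total + k) for i in items}

print(probabilities(["route-a", "route-b"], {}))              # equal: 0.5 each
print(probabilities(["route-a", "route-b"], {"route-a": 8}))  # shifts to route-a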
12.9.1.14 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given data points, subject to constraints. It uses two main methods: interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of the function for missing data values. They also summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, with added uncertainty depending on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher-order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or that the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations of the fitted curve from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
12.9.1.15 Configuration Management
Configuration management follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.9.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.9.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section 7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.9.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It works with other technology such as remote communication, artificial intelligence and spatial data to assist the technology. Errors for entities, services, standards, techniques and communications are reported using this method so that corrective actions can be made remotely. The reported error is displayed as a device stack and position, then evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
12.9.2 Physical System
12.9.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time, loading and executing APIDS from libraries and
temporal information. It is a virtual store.
12.9.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and techniques collect, process and report the GIS information for visualization and analysis.
12.9.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method of assessing how well the process (physical database) and the documentation come up to the requirements. The database system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for systems should be modifiable to the experience of the users and environment. If no element is found then the error is reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of the application system. They should be well implemented: accurate, efficient, well guided, helpful, precise, fast to adjust, and able to start execution quickly and continue for long periods. They should use previous data or automatic controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for input should be modifiable to the experience of the users and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. He should use his memory rather than documentation. If he is forced to use documentation, he should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.9.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the database schema and the data for well-structuredness, consistency, completeness, completion of processes, and optimal structure for minimum time of processing and maximum ease of look up, applied to entities, services, techniques, standards and communications.
12.9.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise the nodes and edges of the schema structure and database structure for entities, services, techniques, standards and communications respectively, and to review any problems reported in the database.
12.9.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints on input using compiler technology. Entities are analysed with set theory to verify constraints on them. The system is combined with logic flows to verify that the outputs and following inputs are consistent. Techniques and standards follow the integration process. Communications follow the same integration actions.
12.9.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.9.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.9.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for validation processes of database and its schema. It
is also used to optimise the system entities, services, techniques, standards and
communications through learning, probability, network analysis and Markov theory.
12.9.2.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis and improved through the learning,
probability, network analysis and Markov theory.
12.9.2.11 Learning theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.9.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
12.9.2.13 Probability Theory
Probability is driven initially by heuristics of equal probability. After that it is driven by statistics collected into the database schema and the database for items, with the network and Markov theory sections above.
12.9.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve-fitting process. It uses Chebyshev polynomials and splines for interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
12.9.2.15 Configuration Management
Configuration management has the database include system construction and item identity and status as entities. Services provide base-lining, configuration control with approval stages and baselines, configuration status accounting, and audits against revisions and defect corrections.
12.9.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a copy of the system from a repository and perform a build and a set of automated tests to ensure that the environment is valid for update. A build server builds the system, documentation, statistics and distribution media, then integrates and deploys into a scalable clone of the production environment, using service virtualization for dependencies. Automated unit and integration (defect or regression) tests, together with static and dynamic tests and performance measurement and profiling, confirm that the system behaves as it should. The updated repository triggers another build process and tests. New updates are committed to the repository and delivered to stakeholders and testers when all the tests have been verified; otherwise the updates are rolled back. The build / test process is repeated periodically to ensure no corruption of the system.
12.9.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It includes continuous integration, application release automation, build automation and application life cycle management.
12.9.2.18 Virtual Reality
Virtual reality is the user interface with services for monitoring and control of the IoT system. It works with other technology such as remote communication, artificial intelligence and spatial data to assist the technology. Errors for entities, services, standards, techniques and communications are reported using this method so that corrective actions can be made remotely. The reported error is displayed as a device stack and position, then evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
12.9.2.19 Commentary
The APIDS service definition set above is created once when the service is added to the system, and is changed or removed infrequently as the service set is extended. It is
queried frequently for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified, and deleted) infrequently. The
administration (maintain users, data security, performance, data integrity, concurrency
and data recovery using utilities - services) of the database will be done on a regular
basis.
The logical database structure must follow the object-oriented type, with the XML tags and escape sequences as given in the appendix.
12.10 Cipher Processing
Cryptography is the methodology for secure communication in the presence of third parties (adversaries), together with the protocols supporting the process and information security for data confidentiality, data integrity, authentication and non-repudiation.
Encryption converts ordinary information (plain-text) into unintelligible text (cipher-text) and decryption does the reverse, using a cipher key known only to the sender and receiver.
Cipher services processing is the subfunction of extracting Cipher services from a set
of information.
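A minimal sketch of the encrypt / decrypt round trip, using the third-party cryptography package as one possible authenticated symmetric cipher (an assumption; any cipher service with a shared key fills the same role):

from cryptography.fernet import Fernet

key = Fernet.generate_key()           # shared secret: sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"meter reading 42")   # plain-text -> cipher-text
plain = cipher.decrypt(token)                  # cipher-text -> plain-text
assert plain == b"meter reading 42"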
12.10.1 Logical Design
12.10.1.1 Database Processing
The database supports IoT activities with applications reflecting bibliographic,
document-text, statistical and multimedia objects. The database management system
supports users and other applications to collect and analyse the data for IoT
processes. The system allows the definition (create, change and remove definitions of
the organization of the data using a data definition language - schema definition (DDL)),
querying (retrieve information usable for the user or other applications using a query
language (QL)), update (insert, modify, and delete of actual data using a data
manipulation language (DML)), and administration (maintain users, data security,
performance, data integrity, concurrency and data recovery using utilities - physical
definition (PDL)) of the database. The most suitable database model for the
applications is a hybrid object-relational database. It uses fast key-value stores and document-oriented databases with XML to give interoperability between different implementations.
The logical database structure follows the definition in section 14 (Appendix – Database Scheme) and is extended with an escape sequence in section 14 (Appendix – Database Scheme). The schema is maintained with DDL, used with DML and QL (especially for entities and services) and cleaned up with PDL.
DDL, QL, DML and PDL are implemented as services.
Other application requirements are:
• event-driven architecture database - implemented as services
• deductive database - implemented as services
• multi-database - implemented as services
• graph database - implemented as services
• hypertext hypermedia database - implemented as services
• knowledge base - implemented as services
• probabilistic database - implemented as services
• real-time database - implemented as services
• temporal database - implemented as services
• loading and executing cipher from libraries – implemented as services
12.10.1.2 Geographic Information Systems
The database system holds geographic data for collecting, processing and reporting
spatial information for maps, visualization and intelligence. GIS supports discrete objects and continuous fields as raster images and vector data.
Entity data includes position and non-position attributes. Entities can be positioned,
monitored, analysed and displayed for visualization, understanding and intelligence
when combined with other information. The entities are processed with services,
standards and techniques.
Communications use dialogues between source and destination over a transmission
medium with protocols governing the process. The protocol is defined by a standard or
technique controlling entities and services which may use GIS data.
12.10.1.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method of assessing how well the process (physical database) and the documentation come up to the requirements. The database system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for systems should be modifiable to the experience of the users and environment. If no element is found then the error is reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of the application system. They should be well implemented: accurate, efficient, well guided, helpful, precise, fast to adjust, and able to start execution quickly and continue for long periods. They should use previous data or automatic controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for input should be modifiable to the experience of the users and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. He should use his memory rather than documentation. If he is forced to use documentation, he should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.10.1.4 Network Theory
Network theory considers ways to validate connections within a graph structure (nodes
and edges). Algorithms can resolve ordered elements, single root trees and flows in
networks to validate consistency, completeness and use with and without dangles or
loops as described in section 7.1.3.2.
Validation cases are well-structuredness, consistency, completeness, completion of processes, and optimal structure for minimum time of processing and maximum ease of look up.
a. The system is well structured when every element is connected to other elements (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
b. It is consistent when an element is not accessed from two other different elements (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
c. It is complete when there are no elements defined but unused. Unused elements are a waste and would cause confusion if they were known; the completeness prerequisite eliminates this difficulty (for entity v entity, entity v service, service v technique, service v standard, technique v standard).
d. It has a way of completing its processes: every entity has an input or an output service.
e. There is an optimal way of structuring the system to minimise the time of processing, using a CPM process on the entities, services, techniques and standards respectively.
f. There is an optimal way of structuring the system to maximise the ease of look up, by minimising the height / depth of the network for entities, services, techniques and standards respectively.
12.10.1.5 Markov Theory
Markov theory extends network theory to determine the flow through nodes and edges, and hence the unused nodes and edges with zero flow through them. It also finds the flow between the nodes and which are single-entry or single-exit blocks (groupings) of nodes.
By introducing a node as an error sink, with extra edges from the remaining nodes, the probability of error at different parts of the network system can be calculated, along with the size of error at each point of the Markov process; the error node gives an estimate of the total error rate of the network.
The network system is based on entities, services, standards, techniques and
communications. In this case the network system is based on one classified as nodes
and the others as edges. The implementation is based on services for the schema and
the database contents separately on a timed basis.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
When any one of entities, services, standards or techniques forms the nodes of the network system, the remaining classes can be considered, singly or in combination, as the edges for the Markov analysis.
Communications are based on protocols which are standards or techniques so the
analyses taken above can be applied. Additionally the protocols can be taken as nodes
with entities, services, standards and techniques as edges.
If a node or edge is not found then the error is reported as a stack dump and after review
the matrix structure is adjusted as appropriate.
12.10.1.6 Algebraic Theory
Algebraic theory gives a set with elements and functions to be the basis of a system. Individual services can be validated using algebraic theory. Restrictions on valid basic elements are defined. Combinations of the elements forming larger elements are classified as systems or subsystems, with rules for correct and erroneous combinations. Iterations on the combination process for more complex elements are validated against rules giving meaning to elements and combinations of them. Relations are derived from another set of functions which give links such as generalisation and specification based on properties of the elements. Other parts are ways of defining properties of elements or functions, whilst some apply to the scope of elements and functions.
When entities are considered as elements and functions are services, standards and techniques define the combinations, validation rules and valuations. Similarly the duality gives services as elements, entities as functions, and standards and techniques validate combinations and valuations.
Communications consist of a dialogue between a source and a destination over a transmission medium. Protocols (rules), which may be standards or techniques, govern the process to give entities and services.
If an element or function is not found then the error is reported as a stack dump and after review the rule structure is adjusted.
12.10.1.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.10.1.8 Programming Language Theory
Programming language theory gives rules for formalised standard and technique for
entities, graphics, sound and media technologies, services, database schema,
techniques and standards and database models. The IoT system schema and data are covered by nesting, scope for meaning, objects, properties, methods with arguments, the "this" operator and the concepts of synonyms, generalisation and specification, overloaded definitions changed by context, replication, iteration, conditional meaning, libraries, events and priority, as set out in Section 14 (Appendix – Database Schema).
12.10.1.9 Compiler Technology Theory
A compiler translates high-level language source programs to the target code. It takes
the formal definition in section 14 (Appendix – Database Scheme) to the physical
database representation starting with a set of base elements as the entities, services,
techniques and standards of the system and a sequence for extending the entity set,
services set, techniques set and standards set.
Services are translated from techniques and standards like macro processing along
with the priority processes of learning, probability, network analysis and Markov theory
for the entities and services sections. If an entity, service, standard or technique is not
recognised then the input entity / service / standard / technique is queried to see if
there is an error or the entity / service / standard / technique should be added to the
database set. An escape sequence can be used to extend the entity / service / standard
/ technique set as in section 14 (Appendix – Database Scheme).
Communications use protocols (rules) to govern the process, based on the standard / technique definitions for each part of the system, defined in formal specifications as described above.
12.10.1.10 Communications Theory
Communications are controlled by protocols which consist of standards and techniques
applied to services and entities. Standards’ protocols are expressed in terms of a
formal language. Services are given priorities on how the entities are processed based
on the protocol. The protocol can be improved through the learning, probability,
network analysis and Markov theory for the entities, services, standards and
techniques sections. The definitions are translated using compiler technology.
If an entity or a service is not recognised then it is passed to a recovery process based
on repeated analysis of the situation by some parallel check. If the entity or service is
not recovered, the entity / service stack dump is reviewed to determine if there is an error or the entity / service should be added to the entity / service set with an escape sequence. Similarly standards and techniques can be updated by experience.
12.10.1.11 Learning theory
Learning theory affords a set of methods for adding data, relations and modifications to
the knowledge database of the IoT system using the procedure for learning described
in section 7.1.9.2.4.
12.10.1.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
If any error is found then it is reported as a device stack and position, evaluated with respect to time, device, device type and position, and after review the data and processing structures are adjusted.
12.10.1.13 Probability Theory
Probability is driven initially by heuristics of equal probability. After that it is driven by statistics collected into the database schema and the database for items, with the network and Markov theory sections above.
12.10.1.14 Curve Fitting
Curve fitting constructs a curve / mathematical function best fitting a series of given data points, subject to constraints. It uses two main methods: interpolation, for an exact fit of the data, or smoothing, for a "smooth" curve function approximating the data. Regression analysis gives a measure of the uncertainty of the curve due to random data errors. The fitted curves help picture the data and estimate values of the function for missing data values. They also summarize the relations of the variables. Extrapolation takes the fitted curve to calculate values beyond the range of the observed data, with added uncertainty depending on which particular curve has been determined. Curve fitting relies on various types of constraints such as a specific point, angle, curvature or other higher-order constraints, especially at the ends of the points being considered. The number of constraints sets a limit on the number of combined functions defining the fitted curve; even then there is no guarantee that all constraints are met or that the exact curve is found. Curves are assessed by various measures, a popular procedure being the least squares method, which measures the deviations of the fitted curve from the given data points.
Curve fitting can select the correct source / destination from the group of nodes to
apply to a communications service and then to select the correct connection from the
group of routes to apply to the service. Curve fitting can check the entity, service,
technique, standard and communications from the components that make up the
system.
12.10.1.15 Configuration Management
Configuration management follows the process and database described in section
7.1.17.2. Each entity, service, standard, technique and communications protocol is
subject to a configuration management life cycle and is supported by the appropriate
services and database. If an element or relation is not found then the error is reported
as a stack dump and after review the database structure is adjusted.
12.10.1.16 Continuous Integration
Continuous integration uses an extension of the configuration management as
described in section 7.1.18.2. It applies to entities, services, standards and techniques.
If an element or relation is not found then the error is reported as a stack dump and
after review the database structure is adjusted.
12.10.1.17 Continuous Delivery
Continuous delivery extends the processes and databases along the lines of section 7.1.19.2. It is supported by services as appropriate and applies to developments of
entities, services, standards and techniques. If an element or relation is not found then
the error is reported as a stack dump and after review the database structure is
adjusted.
12.10.1.18 Virtual Reality
Virtual reality is the user interface for monitoring and control of the IoT system. It works with other technology such as remote communication, artificial intelligence and spatial data to assist the technology. Errors for entities, services, standards, techniques and communications are reported using this method so that corrective actions can be made remotely. The reported error is displayed as a device stack and position, then evaluated with respect to time, device, device type and position, and after review the system structure is modified appropriately.
12.10.2 Physical System
12.10.2.1 Database Processing
The database supports IoT activities with a multimedia hybrid object-relational NoSQL multi-database with appropriate DDL, QL, DML and PDL. It supports an XML schema
defined in section 14 (Appendix – Database Scheme) with services giving facilities for
event-driven architecture, deduction, graph structures, hypertext hypermedia,
knowledge base, probability, real-time, loading and executing cipher from libraries and
temporal information. It is a virtual store.
12.10.2.2 Geographic Information Systems
Entities use GIS data, and communications protocols, services, standards and techniques collect, process and report the GIS information for visualization and analysis.
12.10.2.3 Search theory
Search theory gives a measurable set of requirements (logical database) and a method of assessing how well the process (physical database) and the documentation come up to the requirements. The database system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for systems should be modifiable to the experience of the users and environment. If no element is found then the error is reported and after review the element is added to the system.
The utilities and services should be well versed, particularly in the specialised field of the application system. They should be well implemented: accurate, efficient, well guided, helpful, precise, fast to adjust, and able to start execution quickly and continue for long periods. They should use previous data or automatic controls rather than human intervention.
The input system should be standardised, simple, specialised, logically organised and concise, with minimum ambiguity, minimum error cases and partitioning facilities. The facilities for input should be modifiable to the experience of the users and environment.
Reference documentation should have stiff spines and small, thin, stiff, light pages with simple content adjustable to the experience of the user. The documentation should be standardised and have a minimum number of pages and facts. Facts should be small, logically placed and have a minimum number of reference strategies.
The user should be experienced, particularly in the specialised field of the system and its reference documentation. The user should be a good worker (accurate, efficient, good memory, careful, precise, fast learner) who is able to settle to work quickly and continue to concentrate for long periods. He should use his memory rather than documentation. If he is forced to use documentation, he should have supple joints and long light fingers which allow pages to slip through them when making a reference. Finger motion should be kept gentle, within the range of movement and concentrated in the fingers only. The user should have natural dexterity, aptitude and fast recall.
12.10.2.4 Network Theory
Network theory gives algorithms for services to validate and optimise links in the database schema and the data for well-structuredness, consistency, completeness, completion of processes, and optimal structure for minimum time of processing and maximum ease of look up, applied to entities, services, techniques, standards and communications.
12.10.2.5 Markov Theory
Markov theory extends network theory with services to validate and optimise the nodes and edges of the schema structure and database structure for entities, services, techniques, standards and communications respectively, and to review any problems reported in the database.
12.10.2.6 Algebraic Theory
Algebraic theory transforms the logic of services into assertions and forms constraints on input using compiler technology. Entities are analysed with set theory to verify constraints on them. The system is combined with logic flows to verify that the outputs and following inputs are consistent. Techniques and standards follow the integration process. Communications follow the same integration actions.
12.10.2.7 Logic Theory
Logic theory follows the same processes as algebraic theory with the exception that
values are derived from functions.
12.10.2.8 Programming Language Theory
Programming language theory gives formalised ways for defining entities, services,
techniques, standards and communications. It gives a basis for compiler technology to
process entities, services, techniques, standards and communications for network,
Markov, algebraic and logical validation of the schema and database.
12.10.2.9 Compiler Technology Theory
Compiler technology translates the definitions of entities, services, techniques,
standards and communications for the validation processes of the database and its
schema. It is also used to optimise the system's entities, services, techniques, standards
and communications through learning, probability, network analysis and Markov theory.
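A toy illustration of this and the previous subsection together, assuming definitions arrive as lines of the form "entity <name> : <type>" (an invented grammar, not the schema's real syntax):

import re

# One grammar rule: DEF -> "entity" NAME ":" TYPE
DEF_RULE = re.compile(r"^entity\s+(\w+)\s*:\s*(\w+)$")

def compile_definitions(lines, known_types):
    # Translate definition lines into records, reporting syntax errors and
    # semantic (unknown type) errors as a compiler front end would.
    records, errors = [], []
    for n, line in enumerate(lines, 1):
        m = DEF_RULE.match(line.strip())
        if not m:
            errors.append((n, "syntax error"))
            continue
        name, typ = m.groups()
        if typ not in known_types:
            errors.append((n, "unknown type: " + typ))
        else:
            records.append({"name": name, "type": typ})
    return records, errors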
12.10.2.10 Communications Theory
Communications are controlled by protocols, which consist of standards and techniques
applied to services and entities and are best expressed in a formal language. The
representation is translated with compiler technology to give validation through
network, Markov, algebraic and logical analysis, and is improved through learning,
probability, network analysis and Markov theory.
12.10.2.11 Learning Theory
Learning theory uses the procedure for learning described in section 7.1.9.2.4 to
optimise the database schema and data into a knowledge database for the IoT system.
12.10.2.12 Quantitative Theory
Quantitative theory in section 7.1.8.2 leads to metrics, illustrated by the sketch after this list, relating
• entities and services, development time, number of errors, number of tests, ideal
relation of entities and services
• services and techniques, development time, number of errors, number of tests,
ideal relation of services and techniques
• techniques and standards, development time, number of errors, number of tests,
ideal relation of techniques and standards
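A toy computation of such metrics (the argument names and the coupling measure are hypothetical):

def relation_metrics(n_entities, n_services, dev_hours, n_errors, n_tests):
    # Crude coupling measure: one potential relation per entity-service pair.
    size = n_entities * n_services
    return {
        "hours_per_relation": dev_hours / size,
        "errors_per_relation": n_errors / size,
        "tests_per_relation": n_tests / size,
    }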
12.10.2.13 Probability Theory
Probability is driven initially by heuristics of equal probability. After that it is driven by
statistics collected into the database schema and the database for items, together with
the network and Markov theory sections above.
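A minimal sketch of that update, starting from equal (uniform) priors and refining them with the collected counts, in the style of Laplace smoothing:

def update_probabilities(counts, prior=1.0):
    # counts: dict of item -> observed frequency; prior > 0 keeps every item
    # possible, and with no observations all items stay equally likely.
    total = sum(counts.values()) + prior * len(counts)
    return {item: (c + prior) / total for item, c in counts.items()}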
12.10.2.14 Curve Fitting
Curve fitting uses extended Pearson coefficient analysis to assess trust in the curve
fitting process. The curve fitting uses Chebyshev polynomials and splines for
interpolation and extrapolation in multi-dimensional analysis. It is particularly useful
for selecting the correct source from the group of nodes to apply to a communications
service, the correct destination from the group of nodes to apply to a communications
service and then the correct connection from the group of routes to apply to a
communications service.
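A small sketch using NumPy's Chebyshev fit, with the Pearson coefficient between observed and fitted values as the trust score (the degree and one-dimensional layout are illustrative):

import numpy as np

def fit_and_trust(x, y, degree=3):
    # Fit a Chebyshev polynomial of the given degree to the samples.
    cheb = np.polynomial.Chebyshev.fit(x, y, degree)
    y_hat = cheb(x)
    # Pearson correlation between observed and fitted values; values near 1
    # indicate the interpolation/extrapolation can be trusted.
    r = np.corrcoef(y, y_hat)[0, 1]
    return cheb, r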
12.10.2.15 Configuration Management
Configuration management adds system construction and item identity and status to the
database as entities. Services provide base-lining, configuration control with approval
stages and baselines, configuration status accounting, and audits against revision and
defect correction.
12.10.2.16 Continuous Integration
Continuous integration extends configuration management with services to extract a
copy of the system from a repository and perform a build and a set of automated tests
to ensure that the environment is valid for update. A build server builds the system,
documentation, statistics and distribution media, then integrates and deploys into a
scalable clone of the production environment, using service virtualization for
dependencies. Automated unit and integration (defect and regression) tests, with static
and dynamic analysis, measure and profile performance to confirm that the system
behaves as it should. Each update to the repository triggers another build and test
cycle. New updates are committed to the repository and delivered to stakeholders and
testers only when all the tests have passed; otherwise the updates are rolled back. The
build / test process is repeated periodically to ensure no corruption of the system.
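A minimal sketch of such a pipeline (the repository URL and build commands are hypothetical):

import subprocess
import sys

STEPS = [
    ["git", "clone", "https://example.com/iot-system.git", "build"],
    ["make", "-C", "build", "all"],       # build system, docs, media
    ["make", "-C", "build", "test"],      # unit and integration tests
]

def ci_run():
    # Stop at the first failing step: the update is not committed (rollback).
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            sys.exit("CI failed at: " + " ".join(step))
    print("all tests passed - deliver to stakeholders and testers")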
12.10.2.17 Continuous Delivery
Continuous delivery automates source control all the way through to production. It
includes continuous integration, application release automation, build automation and
application life-cycle management.
12.10.2.18 Virtual Reality
Virtual reality is the user interface, with services for monitoring and control of the IoT
system. It works with other technology such as remote communication, artificial
intelligence and spatial data to assist the technology. Errors for entities, services,
standards, techniques and communications are reported using this method so that
corrective actions can be made remotely. The reported error is displayed as a device
stack and position, evaluated with respect to time, device, device type and position,
and after review the system structure is modified appropriately.
12.10.2.19 Commentary
The cipher service definition set above is created once when the service is added to
the system and is changed or removed infrequently as the service set is extended. It is
queried frequently, for every entity, standard and technique rule that is read. The
service definition set is updated (inserted, modified and deleted) infrequently. The
administration of the database (maintaining users, data security, performance, data
integrity, concurrency and data recovery using utilities and services) will be done on a
regular basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as do the escape sequences.
The named entity recognition process has a basic flow of syntax analysis and
semantics look-up (compiler technology) to analyse the input, with database technology
used to look up the syntax rules, semantics rules, words and classification information
for the activity. Logic theory is used to validate the language input; in the case of error
the human user of the system is queried to correct the error or to extend the language
using the learning algorithms. The tuning of the database is driven by the statistics
data that is collected over the use of the database and language data.
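A toy sketch of the look-up step only (the lexicon structure is assumed; a real implementation would use the compiler machinery described above for syntax analysis):

def recognise_entities(text, lexicon):
    # lexicon: dict of word -> classification, playing the role of the
    # semantics rules and classification information in the database.
    recognised, unknown = [], []
    for token in text.lower().split():
        word = token.strip(".,;:!?")
        if word in lexicon:
            recognised.append((word, lexicon[word]))
        else:
            unknown.append(word)   # queried to the user / learning algorithms
    return recognised, unknown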

12.11 Summary
IoT security processing is the function of extracting entities, services, standards,
techniques, communications, antivirus, firewall, APIDS and ciphers from a set of
information. The contributions of IoT technology are defined below:
12.11.1 IoT Technology
IoT processing is the function of extracting entities, services, standards, techniques
and communications from a set of information. The contributions of IoT technology are
defined below:
t) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation meet the requirements. If no
target is found then the error is reported and, after review, the target is added to the
system.
u) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
v) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing, and optimises the structure of the
system to minimise the time of processing and maximise the ease of look-up.
w) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
x) Markov theory can determine usage and errors of the structure of the IoT system.
y) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
z) Programming language theory grants a basis for holding the structure of the
knowledge held by the IoT and the processing so far.
aa) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
ab) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
ac) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
ad) Database technology bestows a method for easy access to the knowledge that
has accumulated about the IoT system being processed.
ae) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
af) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
ag) Geographical information systems hold position-dependent data to position,
monitor, analyse and display it for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
ah) Curve fitting is used for the interpolation and extrapolation of the IoT system data.
ai) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
aj) Continuous integration automates updates, builds and tests, measuring and
profiling performance to ensure that the environment is valid.
ak) Continuous delivery extends continuous integration by automating the process
from start to production.
al) Virtual reality simulates the user's presence, environment and interaction with
entities.
The entity, service, standard, technique and communications definition set above is
created once when the entity, service, standard, technique or communications is added
to the system and is changed or removed infrequently as the set is extended. It is
queried frequently, for every entity, service, standard, technique and communications
rule that is read. The definition set is updated (inserted, modified and deleted)
infrequently. The administration of the database (maintaining users, data security,
performance, data integrity, concurrency and data recovery using utilities and services)
will be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as do the escape sequences.
12.11.2 IoT Security Technology
IoT security processing is the function of extracting entities, services, standards,
techniques and communications from a set of information for antivirus, firewall, APIDS
and ciphers. The contributions of IoT security technology are defined below:
a) Search theory gives a measurable set of requirements and a method of
assessing how well the process and the documentation meet the requirements. If no
target is found then the error is reported and, after review, the target is added to the
system.
b) Quantitative theory provides the opportunity for giving estimates of the size and
errors of the IoT processing parts and relations between them.
c) Network theory ensures that the system is well structured, consistent and
complete, defines a way of completing processing, and optimises the structure of the
system to minimise the time of processing and maximise the ease of look-up.
d) Communications theory offers a basis for the collection of data for the IoT
system knowledge database.
e) Markov theory can determine usage and errors of the structure of the IoT system.
f) Probability theory is a method of predicting the changes that occur from the
processing of the IoT system experience over time.
g) Programming language theory grants a basis for holding the structure of the
knowledge held by the IoT and the processing so far.
h) Algebraic theory allows the processing and validation of entities, groups,
modification, substitution and valuation.
i) Logic theory endows processing and validation of entities, groups, modification,
substitution and valuation.
j) Compiler technology theory supplies a basis for analysing the input which is met
in the processing of the IoT system data.
k) Database technology bestows a method for easy access to the knowledge that
has accumulated about the IoT system being processed.
l) Learning theory affords a set of methods for adding data, relations and
modifications to the knowledge database of the IoT system.
m) Statistics theory provides ways of analysing the changes that occur from the
processing of the IoT system experience over time.
n) Geographical information systems hold position-dependent data to position,
monitor, analyse and display it for visualization, understanding and intelligence when
combined with other technologies, processes and methods.
o) Curve fitting is used for the interpolation and extrapolation of the IoT system data.
p) Configuration management identifies entity attributes for control, recording and
reporting on the system status.
q) Continuous integration automates updates, builds and tests, measuring and
profiling performance to ensure that the environment is valid.
r) Continuous delivery extends continuous integration by automating the process
from start to production.
s) Virtual reality simulates the user's presence, environment and interaction with
entities.
The entity, service, standard, technique and communications definition set above for
antivirus, firewall, APIDS and ciphers is created once when the entity, service,
standard, technique or communications for antivirus, firewall, APIDS and ciphers is
added to the system and is changed or removed infrequently as the set is extended. It
is queried frequently, for every entity, service, standard, technique and communications
rule that is read. The definition set is updated (inserted, modified and deleted)
infrequently. The administration of the database (maintaining users, data security,
performance, data integrity, concurrency and data recovery using utilities and services)
will be done on a regular basis.
The logical database structure must follow the object-oriented type with the XML tags
in the appendix, as do the escape sequences.

13 Conclusions
This paper reviews how some other technologies can contribute to IoT security and its
processes. It consists of 12 further sections. The next gives a summary of IoT. Section
4 considers the Intel Active Management Technology whilst the fifth part describes IoT
security, followed by a component on IoT security solutions. The seventh section is
devoted to methodologies that can be added to the techniques for IoT processing. There
are 19 theories that are helpful. They are search theory, network theory, Markov theory,
algebraic theory, logic theory, programming language theory, quantitative theory,
learning theory, statistics theory, probability theory, communications theory, compiler
technology theory, database technology, geographic information systems, curve fitting,
configuration management, continuous integration, continuous delivery and virtual
reality. These techniques are applied in turn to IoT studies in part eight: to entity
(address, database, entities, firmware, functions, hardware, languages, network
hardware, network media), services (access management, accounting management,
address management, application management, communications management, content
management, continuous delivery, data analysis, data transfer, network management,
protocol management, reliability and fault tolerance, resource management, search
management, security management, service engineering management, statistics
management, status management, test facility management, development facility
management, virtualisation), standards and techniques (cognitive networks,
cooperative networks, machine learning, neural networks, reinforcement learning, self-
organizing distributed networks, surrogate models, time series analysis) and
communications. In the ninth section we study IoT security processing from the view of
its different activities, with the requirements described in part 10. Part 11 gives a
specification of the IoT security tools, with the implementation specified in section 12.
The penultimate part presents the conclusions of the paper whilst the final section is a
set of references to help the reader.

14 Appendix - Database Scheme for IoT


14.1 Entities
a. <entity set>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<entity> <iteration control>
<entity name><string></entity name>
<entity type><string></entity type>
<entity name sound> <entity name sound file> </entity name sound>
<entity hardware representation><string></entity hardware representation>
<entity picture> <entity picture file> </entity picture>
<entity sound> <entity sound file> </entity sound>
<entity meaning> <string> </entity meaning>
<entity version> <string> </entity version>
<entity timestamp> <timestamp> </entity timestamp>
<geographic position><coordinates></geographic position>
<properties>
<property> <property name> <property value>
<entity property system statistic> <number> </entity property system statistic>
<entity property personal statistic> <number> </entity property personal statistic>
<entity subset values>
</property>
.........
</properties>
<entity system statistic> <number> </entity system statistic>
<entity personal statistic> <number> </entity personal statistic>
<service set name><string></service set name>
</entity></iteration control>
.........
….......
</entity set>

b. <entity subset values>


<standard set name><string></standard set name>
<technique set name><string></technique set name>
<entity><iteration control>
<entity name><string></entity name>
<entity type><string></entity type>
<entity name sound> <entity name sound file> </entity name sound>
<entity hardware representation><string></entity hardware representation>
<entity picture> <entity picture file> </entity picture>
<entity sound> <entity sound file> </entity sound>
<entity meaning> <string> </entity meaning>
<entity version> <string> </entity version>
<entity timestamp> <timestamp> </entity timestamp>
<properties>
<property> <property name> <property value>
<entity property system statistic> <number> </entity property system statistic>
<entity property personal statistic> <number> </entity property personal statistic>
<entity subset values>
</property>
.........
</properties>
<entity system statistic> <number> </entity system statistic>
<entity personal statistic> <number> </entity personal statistic>
</entity></iteration control>
.........
….......
</entity subset values>

c. <escape entity set>


<standard set name><string></standard set name>
<technique set name><string></technique set name>
<escape entity><iteration control>
<escape entity name><string></escape entity name>
<escape entity type><string></escape entity type>
<escape entity name sound> <escape entity name sound file> </escape entity name
sound>
<escape entity hardware representation><string></escape entity hardware representation>
<escape entity picture> <escape entity picture file> </escape entity picture>
<escape entity sound> <escape entity sound file> </escape entity sound>
<escape entity meaning> <string> </escape entity meaning>
<escape entity version> <string> </escape entity version>
<escape entity timestamp> <timestamp> </escape entity timestamp>
<escape entity properties>
<escape entity property> <escape entity value>
<escape entity property system statistic> <number> </escape entity property system
statistic>
<escape entity property personal statistic> <number> </escape entity property personal
statistic>
<escape entity subset>
</escape entity property>
..........
</escape entity properties>
<escape entity system statistic> <number> </escape entity system statistic>
<escape entity personal statistic> <number> </escape entity personal statistic>
</escape entity></iteration control>
.........
….......
</escape entity set>

d. <escape entity subset>


<standard set name><string></standard set name>
<technique set name><string></technique set name>
<escape entity><iteration control>
<escape entity name><string></escape entity name>
<escape entity type><string></escape entity type>
<escape entity name sound> <escape entity name sound file> </escape entity name
sound>
<escape entity hardware representation><string></escape entity hardware representation>
<escape entity picture> <escape entity picture file> </escape entity picture>
<escape entity sound> <escape entity sound file> </escape entity sound>
<escape entity meaning> <string> </escape entity meaning>
<escape entity version> <string> </escape entity version>
<escape entity timestamp> <timestamp> </escape entity timestamp>
<escape entity properties>
<escape entity property> <escape entity value>
<escape entity property system statistic> <number> </escape entity property system
statistic>
<escape entity property personal statistic> <number> </escape entity property personal
statistic>
<escape entity subset>
</escape entity property>
..........
</escape entity properties>
<escape entity system statistic> <number> </escape entity system statistic>
<escape entity personal statistic> <number> </escape entity personal statistic>
</escape entity></iteration control>
.........
….......
</escape entity subset>

14.2 Services
a. <service set>
<service set name><string></service set name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<service><iteration control>
<service name><string></service name>
<service name sound> <service name sound file> </service name sound>
<service hardware representation><string></service hardware representation>
<service picture> <service picture file> </service picture>
<service sound> <service sound file> </service sound>
<service meaning> <string> </service meaning>
<service version> <string> </service version>
<service timestamp> <timestamp> </service timestamp>
<geographic position><coordinates></geographic position>
<properties>
<property> <property name> <property value>
<service property system statistic> <number> </service property system statistic>
<service property personal statistic> <number> </service property personal statistic>
</property>
.........
<service subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<service priority> <string> </service priority>
...........
<service relations>
<service relation>
<relation name><string></relation name>
<service name>
</service relation>
...........
</service relations>
<service language statistic> <number> </service language statistic>
<service personal statistic> <number> </service personal statistic>
</service></iteration control>
.........
</service set>

b. <service subset>
<service subset name><string></service subset name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<service><iteration control>
<service name><string></service name>
<service name sound> <service name sound file> </service name sound>
<service hardware representation><string></service hardware representation>
<service picture> <service picture file> </service picture>
<service sound> <service sound file> </service sound>
<service meaning> <string> </service meaning>
<service version> <string> </service version>
<service timestamp> <timestamp> </service timestamp>
<properties>
<property> <property name> <property value>
<service property system statistic> <number> </service property system statistic>
<service property personal statistic> <number> </service property personal statistic>
</property>
.........
<service subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<service priority> <string> </service priority>
...........
<service relations>
<service relation>
<relation name><string></relation name>
<service name>
</service relation>
...........
</service relations>
<service language statistic> <number> </service language statistic>
<service personal statistic> <number> </service personal statistic>
</service></iteration control>
.........
</service subset>

c. <escape service set>


<escape service set name><string></escape service set name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<escape service><iteration control>
<escape service name><string></escape service name>
<escape service name sound> <escape service name sound file> </escape service
name sound>
<escape service hardware representation><string></escape service hardware representation>
<escape service picture> <escape service picture file> </escape service picture>
<escape service sound> <escape service sound file> </escape service sound>
<escape service meaning> <string> </escape service meaning>
<escape service version> <string> </escape service version>
<escape service timestamp> <timestamp> </escape service timestamp>
<properties>
<property> <property name> <property value>
<service property system statistic> <number> </service property system statistic>
<service property personal statistic> <number> </service property personal statistic>
</property>
.........
<service subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<escape service subset>
<escape service priority> <string> </escape service priority>
...........
<escape service relations>
<escape service relation>
<relation name><string></relation name>
<escape service name>
</escape service relation>
...........
</escape service relations>
<escape service system statistic> <number> </escape service system statistic>
<escape service personal statistic> <number> </escape service personal statistic>
</escape service></iteration control>
.........
</escape service set>

d. <escape service subset>


<escape service subset name><string></escape service subset name>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<escape service><iteration control>
<escape service name><string></escape service name>
<escape service name sound> <escape service name sound file> </escape service
name sound>
<escape service hardware representation><string></escape service hardware representation>
<escape service picture> <escape service picture file> </escape service picture>
<escape service sound> <escape service sound file> </escape service sound>
<escape service meaning> <string> </escape service meaning>
<escape service version> <string> </escape service version>
<escape service timestamp> <timestamp> </escape service timestamp>
<properties>
<property> <property name> <property value>
<escape service property system statistic> <number> </escape service property system statistic>
<escape service property personal statistic> <number> </escape service property personal statistic>
</property>
.........
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<escape service subset>
<escape service priority> <string> </escape service priority>
...........
<escape service relations>
<escape service relation>
<relation name><string></relation name>
<escape service name>
</escape service relation>
...........
</escape service relations>
<escape service system statistic> <number> </escape service system statistic>
<escape service personal statistic> <number> </escape service personal statistic>
</escape service></iteration control>
.........
</escape service subset>

14.3 Standards

a. <standard set>
<standard set name><string></standard set name>
<standard>
<standard name><string></standard name>
<standard hardware representation><string></standard hardware representation>
<standard rule>
<standard name>.............<standard name>
<standard version> <string> </standard version>
<standard timestamp> <timestamp> </standard timestamp>
<standard system rule statistic> <number> </standard system rule statistic>
<standard personal rule statistic> <number> </standard personal rule statistic>
</standard rule>
<standard entity>
<entity name>.............<entity name>
<standard system entity statistic> <number> </standard system entity statistic>
<standard personal entity statistic> <number> </standard personal entity statistic>
</standard entity>
<standard service>
<service name>.............<service name>
<standard system service statistic> <number> </standard system service statistic>
<standard personal service statistic> <number> </standard personal service statistic>
</standard service>
<standard technique>
<technique name>.............<technique name>
<standard system technique statistic> <number> </standard system technique
statistic>
<standard personal technique statistic> <number> </standard personal technique
statistic>
</standard technique>
...........
<standard system statistic> <number> </standard system statistic>
<standard personal statistic> <number> </standard personal statistic>
</standard>
.........
</standard set>

b. <escape standard set>


<escape standard set name><string></escape standard set name>
<escape standard>
<escape standard name><string></escape standard name>
<escape standard hardware representation><string></escape standard hardware representation>
<escape standard rule>
<escape standard name>.............<escape standard name>
<escape standard version> <string> </escape standard version>
<escape standard timestamp> <timestamp> </escape standard timestamp>
<escape standard system rule statistic> <number> </escape standard system rule
statistic>
<escape standard personal rule statistic> <number> </escape standard personal rule
statistic>
</escape standard rule>
<escape standard entity>
<escape entity name>.............<escape entity name>
<escape standard system entity statistic> <number> </escape standard system entity
statistic>
<escape standard personal entity statistic> <number> </escape standard personal
entity statistic>
</escape standard entity>
<escape standard service>
<escape standard service name>.............<escape standard service name>
<escape standard system service statistic> <number> </escape standard system
service statistic>
<escape standard personal service statistic> <number> </escape standard personal
service statistic>
</escape standard service>
<escape standard technique>
<escape standard technique name>.............<escape standard technique name>
<escape standard system technique statistic> <number> </escape standard system
technique statistic>
<escape standard personal technique statistic> <number> </escape standard personal
technique statistic>
</escape standard technique>
...........
<escape standard system statistic> <number> </escape standard system statistic>
<escape standard personal statistic> <number> </escape standard personal statistic>
</escape standard>
.........
</escape standard set>

14.4 Techniques

a. <technique set>
<technique set name><string></technique set name>
<technique><iteration control>
<technique name><string></technique name>
<technique name sound> <technique name sound file> </technique name sound>
<technique hardware representation><string></technique hardware representation>
<technique picture> <technique picture file> </technique picture>
<technique sound> <technique sound file> </technique sound>
<technique meaning> <string> </technique meaning>
<technique version> <string> </technique version>
<technique timestamp> <timestamp> </technique timestamp>
<properties>
<property> <property name> <property value>
<technique property system statistic> <number> </technique property system statistic>
<technique property personal statistic> <number> </technique property personal statistic>
</property>
.........
<technique subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<technique priority> <string> </technique priority>
...........
<technique relations>
<technique relation>
<relation name><string></relation name>
<technique name>
</technique relation>
...........
</technique relations>
<technique language statistic> <number> </technique language statistic>
<technique personal statistic> <number> </technique personal statistic>
</technique></iteration control>
.........
</technique set>
b. <technique subset>
<technique subset name><string></technique subset name>
<technique><iteration control>
<technique name><string></technique name>
<technique name sound> <technique name sound file> </technique name sound>
<technique hardware representation><string></technique hardware representation>
<technique picture> <technique picture file> </technique picture>
<technique sound> <technique sound file> </technique sound>
<technique meaning> <string> </technique meaning>
<technique version> <string> </technique version>
<technique timestamp> <timestamp> </technique timestamp>
<properties>
<property> <property name> <property value>
<technique property system statistic> <number> </technique property system statistic>
<technique property personal statistic> <number> </technique property personal statistic>
</property>
.........
<technique subset>
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<technique priority> <string> </technique priority>
...........
<technique relations>
<technique relation>
<relation name><string></relation name>
<technique name>
</technique relation>
...........
</technique relations>
<technique language statistic> <number> </technique language statistic>
<technique personal statistic> <number> </technique personal statistic>
</technique></iteration control>
.........
</technique subset>

c. <escape technique set>


<escape technique set name><string></escape technique set name>
<escape technique><iteration control>
<escape technique name><string></escape technique name>
<escape technique name sound> <escape technique name sound file> </escape
technique name sound>
<escape technique hardware representation><string></escape technique hardware representation>
<escape technique picture> <escape technique picture file> </escape technique
picture>
<escape technique sound> <escape technique sound file> </escape technique sound>
<escape technique meaning> <string> </escape technique meaning>
<escape technique version> <string> </escape technique version>
<escape technique timestamp> <timestamp> </escape technique timestamp>
<properties>
<property> <property name> <property value>
<escape technique property system statistic> <number> </escape technique property system statistic>
<escape technique property personal statistic> <number> </escape technique property personal statistic>
</property>
.........
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<escape technique subset>
<escape technique priority> <string> </escape technique priority>
...........
<escape technique relations>
<escape technique relation>
<relation name><string></relation name>
<escape technique name>
</escape technique relation>
...........
</escape technique relations>
<escape technique system statistic> <number> </escape technique system statistic>
<escape technique personal statistic> <number> </escape technique personal
statistic>
</escape technique></iteration control>
.........
</escape technique set>

d. <escape technique subset>


<escape technique subset name><string></escape technique subset name>
<escape technique><iteration control>
<escape technique name><string></escape technique name>
<escape technique name sound> <escape technique name sound file> </escape
technique name sound>
<escape technique hardware representation><string></escape technique hardware representation>
<escape technique picture> <escape technique picture file> </escape technique
picture>
<escape technique sound> <escape technique sound file> </escape technique sound>
<escape technique meaning> <string> </escape technique meaning>
<escape technique version> <string> </escape technique version>
<escape technique timestamp> <timestamp> </escape technique timestamp>
<properties>
<property> <property name> <property value>
<escape technique property system statistic> <number> </escape technique property
system statistic>
<escape technique property personal statistic> <number> </escape technique property personal statistic>
</property>
.........
</properties>
<events>
<event> <event name> <event value>
<when> <service> <property name> …...</when>
<service event system statistic> <number> </service event system statistic>
<service event personal statistic> <number> </service event personal statistic>
</event>
.........
<service subset>
</events>
<escape technique subset>
<escape technique priority> <string> </escape technique priority>
...........
<escape technique relations>
<escape technique relation>
<relation name><string></relation name>
<escape technique name>
</escape technique relation>
...........
</escape technique relations>
<escape technique system statistic> <number> </escape technique system statistic>
<escape technique personal statistic> <number> </escape technique personal
statistic>
</escape technique></iteration control>
.........
</escape technique subset>

14.5 Communications

a. <communication set>
<communication set name><string></communication set name>
<communication>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<communication name><string></communication name>
<communication name sound> <communication name sound file> </communication
name sound>
<communication hardware representation><string></communication hardware representation>
<communication picture> <communication picture file> </communication picture>
<communication sound> <communication sound file> </communication sound>
<communication version> <string> </communication version>
<communication timestamp> <timestamp> </communication timestamp>
<communication language statistic> <number> </communication language statistic>
<communication personal statistic> <number> </communication personal statistic>
<entity name> .........
<service name> .........
<technique name> .........
<standard name> .........
</communication>
........
</communication set>

b. <escape communication set>


<escape communication set name><string></escape communication set name>
<escape communication>
<standard set name><string></standard set name>
<technique set name><string></technique set name>
<escape communication name><string></escape communication name>
<escape communication name sound> <escape communication name sound file>
</escape communication name sound>
<escape communication hardware representation><string></escape communication hardware representation>
<escape communication picture> <escape communication picture file> </escape
communication picture>
<escape communication sound> <escape communication sound file> </escape
communication sound>
<escape communication version> <string> </escape communication version>
<escape communication timestamp> <timestamp> </escape communication
timestamp>
<escape communication language statistic> <number> </escape communication
language statistic>
<escape communication personal statistic> <number> </escape communication
personal statistic>
<escape entity name> .........
<escape service name> .........
<escape technique name> .........
<escape standard name> .........
</escape communication>
........
</escape communication set>

15 Appendix – Example Norton Database Scheme for IoT


15.1 Entities
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be Aggressive, Normal, Off </Boot
Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be On, Off
<Removable Media Scan>Removable Media Scan Can Be On, Off </Removable Media
Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be On, Off
<Network Drive Protection>Network Drive Protection Can Be On, Off </Network Drive
Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be Off, Aggressive, Automatic
<Remove Risks Automatically> Remove Risks Automatically Can Be Ask-Me, Always,
Highly-Certainty-Only </Remove Risks Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away Can Be Highly-Certainty-Only,
Ignore, Always </Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Shows Sonar Block Notifications>Shows Sonar Block Notifications Are Show-All Or
Log-Only </Shows Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
On Or Off </Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders>Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be On Or
Off</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be Automatic, Off Or Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be Remove, Ignore Or Ask-Me</Tracking
Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be Do Not Schedule This Scan, Run At A Specific
Time Interval (set The Number Of Days, Hours), Daily, Weekly, Monthly
<Run Only At Idle Time> Run Only At Idle Time Can Be On Or Off </Run Only At Idle
Time>
<On Ac Power> On Ac Power Can Be On Or Off </On Ac Power>
<Prevent Standby> Prevent Standby Can Be On Or Off </Prevent Standby>
<After Scan Completion> After Scan Completion Can Be Stay On, Turn Off, Sleep Or
Hibernate </After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders> Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be On
Or Off </Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be On Or Off </Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be On Or Off </Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be On Or Off
<How To Respond When The Output Threat> How To Respond When The Output Threat
Is Found Can Be Automatically Removed Or Ask-Me </How To Respond When The
Output Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do When Scanning Email Messages> What To Do When Scanning Email
Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be On Or Off </Protect
Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be On Or Off </Display
Process Indicator>
</What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections>
Signatures To Exclude From All Detections Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be On Or Off </Automatic Live
Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be On Or Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>
<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be On Or Off </Smart Firewall
Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be Enable, Disabled </Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be On Or Off </Stealth Blocked
Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be On Or Off </Stateful Protocol
Filter>
<Public Network Exceptions> Public Network Exceptions Can Be Configured To Allow A
Service
<Network Discovery> Network Discovery Can Be Allow Or Not Allow </Network
Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be Allow Or Not Allow </File
And Printer Sharing>
<Remote Desktop Connection> Remote Desktop Connection Can Be Allow Or Not Allow
</Remote Desktop Connection>
<Remote Assistance> Remote Assistance Can Be Allow Or Not Allow </Remote
Assistance>
<Windows Media Player> Windows Media Player Can Be Allow Or Not Allow </Windows
Media Player>
<Windows Media Player Extended> Windows Media Player Extended Can Be Allow Or
Not Allow </Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be Allow Or Not Allow </Windows
Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be Allow Or Not Allow </Remote
Procedure Call>
<Internet Connection Sharing> Internet Connection Sharing Can Be Allow Or Not Allow
</Internet Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be On Or Off
</Network Cost Awareness Status>
Network Cost Awareness Can Be Configured To
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be Auto, No Limit, Economy, No Traffic </Policy>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured In The Same Way With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Trust Level> Trust Level Can Be Full Trust, Private, Public Or Restricted </Trust Level>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be Full Trust Or Restricted</Trust Level>
<Address> Address Can Be Ip Or Physical Address </Address>
<Ips Extrusion> Ips Extrusion </Ips Extrusion>
The Operations Are Add And Remove
</Device Trust>
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be Allowed, Block Or Custom </Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be
<Traffic Rule>
<Active> Active Can Be On Or Off </Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be Allow, Block Or Monitor </Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers, Or Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be On Or Off </Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
On, If Explicitly Requested Or Off </Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>

<Intrusion And Browser Protection>


<Intrusion Protection Prevention> Intrusion Protection Prevention Can Be On Or Off
</Intrusion Protection Prevention>
<Intrusion Auto Block> Intrusion Auto Block Can Be Configured To On Or Off </Intrusion
Auto Block>
<Autoblock Attacking Computers> Autoblock Attacking Computers For A Time That Can
Be 30 Minutes Or 1, 2, 4, 8, 16, 24 Or 48 Hours </Autoblock Attacking Computers>
<Computers Currently Blocked By Auto Block> Computers Currently Blocked By Auto
Block
<Computer Currently Blocked By Auto Block> Computer Currently Blocked By Auto
Block
<Address> Address </Address>
<Action> Action </Action>
</Computer Currently Blocked By Auto Block>*
</Computers Currently Blocked By Auto Block>
<Intrusion Signatures> Intrusion Signatures
<Intrusion Signature> Intrusion Signature Can Be Configured To
<Active> Active Can Be On Or Off </Active>
<Signature Id> Signature Id </Signature Id>
<Signature Name> Signature Name </Signature Name>
<Notify Me> Notify Me Can Be On Or Off </Notify Me></Intrusion Signature>*
Operations Can Be Search, Activate All And Deactivate All
</Intrusion Signatures>
<Notification> Notification Can Be On Or Off </Notification>
<Exclusion List> Exclusion List Can Operate Purge </Exclusion List>
<Intrusion Protection Prevention>
<Exploit Prevention> Exploit Prevention Can Be On Or Off </Exploit Prevention>
<Browser Protection> Browser Protection Can Be On Or Off </Browser Protection>
<Download Intelligence> Download Intelligence Can Be On Or Off
<Download Insight Notifications> Download Insight Notifications Can Be On Or Off
</Download Insight Notifications>
<Show Report On Launch Of Files> Show Report On Launch Of Files Can Be Unproven
Only, Always And Never </Show Report On Launch Of Files>
</Download Intelligence>
</Intrusion Protection Prevention>
</Intrusion And Browser Protection>
<Advanced Program Control Section>
<Advanced Program Control> Advanced Program Control Can Be On Or Off </Advanced
Program Control>
<Blocked Traffic For Malicious Applications> Blocked Traffic For Malicious Applications
Can Be Normal, Aggressive Or High Certainty Only </Blocked Traffic For Malicious
Applications>
<Low Risk Applications> Low Risk Applications Can Be Alert If Suspicious, Alert
Always Or Allow </Low Risk Applications>
<Apply Program Control For Ipv6 Nat Traversal Traffic> Apply Program Control For Ipv6
Nat Traversal Traffic Can Be Set To Off, On Or If Explicitly Requested
</Apply Program Control For Ipv6 Nat Traversal Traffic>
<Show Firewall Block Notification> Show Firewall Block Notification Can Be On Or Off
</Show Firewall Block Notification>
</Advanced Program Control Section>
</Firewall>

<Anti Spam>
<Filter>
<Norton Antispam> Norton Antispam Can Be On Or Off </Norton Antispam>
<Address Book Exclusions> Address Book Exclusions Can Be Configured To
<Name> Name </Name>
<Email Address> Email Address </Email Address>
Operations Can Be Add, Edit Or Remove
</Address Book Exclusions>
<Allow List> Allow List Can Be Configured With
<Allow Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</Allow Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Allow List>
<Block List> Block List Can Be Configured With
<Block Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</Block Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Block List>
<Web Query> Web Query Can Be On Or Off </Web Query>
<Protected Ports> Protected Ports Can Be Configured To
<Protected Port>
<Name> Name </Name>
<Number> Number </Number>
</Protected Port>*
Operations Can Be Add And Remove
</Protected Ports>
</Filter>
<Client Integration>
<Email Clients>
<Outlook> Outlook Can Be On Or Off
Operations Can Register Add-Ins
</Outlook>
<Address Books>
<Integrate With Outlook Contact List> Integrate With Outlook Contact List Can Be On
Or Off </Integrate With Outlook Contact List>
<Integrate With Windows Address Book> Integrate With Windows Address Book Can Be
On Or Off </Integrate With Windows Address Book>
</Address Books>
</Email Clients>
<Miscellaneous>
<Welcome Screen> Welcome Screen Can Be On Or Off </Welcome Screen>
<Feedback> Feedback Can Be On, Ask-Me Or Off </Feedback>
</Miscellaneous>
</Client Integration>
</Anti Spam>

<Identity Safe>
<Identity Safety> Identity Safety Can Be On Or Off
It Can Be Configured But It Has To Be Signed In
</Identity Safety>
<Safe Surfing>
<Anti Phishing> Anti Phishing Can Be On Or Off
<Submit Full Site Information> Submit Full Site Information Can Be On Or Off </Submit
Full Site Information>
</Anti Phishing>
<Norton Safe Web> Norton Safe Web Can Be On Or Off
<Block Malicious Pages> Block Malicious Pages Can Be On Or Off </Block Malicious
Pages>
<Site Rating Icons In Search Results> Site Rating Icons In Search Results Can Be On Or
Off </Site Rating Icons In Search Results>
<Scan In Sight> Scan In Sight Can Be On Or Off </Scan In Sight>
</Norton Safe Web>
</Safe Surfing>
</Identity Safe>
<Task Scheduling>
<Automatic Tasks>
<Automatic Task>
Automatic Task Can Configure
<Task Name> Task Name </Task Name>
<Description> Description </Description>
</Automatic Task>*
</Automatic Tasks>
<Scheduling>
<Schedule> Schedule Can Be Automatic, Weekly, Monthly Or Manual Schedule
</Schedule>
</Scheduling>
</Task Scheduling>
<Administrative Settings>
<Background Tasks> Background Tasks Can Be Configured
<Background Task>
<Norton Task> Norton Task Can Be One Of A List </Norton Task>
<Last Run> Last Run </Last Run>
<Duration> Duration </Duration>
<Run During Idle> Run During Idle Can Be Yes Or No</Run During Idle>
<Status> Status Can Be Complete Or Not Run</Status>
</Background Task>*
</Background Tasks>
<Idle Time Optimiser> Idle Time Optimiser Can Be On Or Off </Idle Time Optimiser>
<Report Card> Report Card Can Be On Or Off </Report Card>
<Automatic Download Of New Version> Automatic Download Of New Version Can Be On
Or Off </Automatic Download Of New Version>
<Search Short Key> Search Short Key Can Be On Or Off
<Global> Global Can Be Set On Or Off </Global>
<Function Key> Function Key Can Be Control, Alt Or Win </Function Key>
<Key> Key Can Be A To Z Or F1 To F12 </Key>
</Search Short Key>
<Network Proxy Settings> Network Proxy Settings Can Be Configured With
<Automatic Configuration>
<Configuration> Configuration With
<Automatic Detect Settings> Automatic Detect Settings Can Be On Or Off</Automatic
Detect Settings>
<Use An Automatic Configurations Script> Use An Automatic Configurations Script
</Use An Automatic Configurations Script>
<Url> Url </Url>
</Configuration>
<Proxy Settings> Proxy Settings
<Use Proxy Setting> Use Proxy Setting </Use Proxy Setting>
<Address> Address </Address>
<Port> Port </Port>
</Proxy Settings>
Authentication Can Be Set With I Need Authentication To Connect Through My Firewall
Or Proxy, With Username And Password
</Automatic Configuration>
</Network Proxy Settings>
<Norton Community Watch> Norton Community Watch Can Be On Or Off
<Detailed Error Data Collection> Detailed Error Data Collection Can Be Ask-Me, Never
Or Always </Detailed Error Data Collection>
</Norton Community Watch>
<Remote Management> Remote Management Can Be On Or Off </Remote Management>
<Family> Family Is Not Installed And It Can Be Installed </Family>
<Norton Task Notification> Norton Task Notification Can Be On Or Off </Norton Task
Notification>
<Performance Monitoring> Performance Monitoring Can Be On Or Off
<Performance Alerting> Performance Alerting Can Be On, Off And Log-Only
</Performance Alerting>
</Performance Monitoring>
</Administrative Settings>
</Norton>

15.2 Services
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be
set Aggressive
set Normal
set Off
</Boot Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be
set On
set Off
<Removable Media Scan>Removable Media Scan Can Be
set On
set Off
</Removable Media Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be
set On
set Off
<Network Drive Protection>Network Drive Protection Can Be
set On
set Off
</Network Drive Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be
set Off
set Aggressive
set Automatic
<Removable Discs Automatically> Removable Discs Automatically Can Be
set Ask-Me
set Always
set Highly-Certainty-Only
</Removable Discs Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away Can Be
set Highly-Certainty-Only
set Ignore
set Always
</Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Shows Sonar Block Notifications>Shows Sonar Block Notifications Are
set Show-All
set Log-Only
</Shows Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
set On
set Off
</Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be
set On
set Off
<Remove Infected Folders>Remove Infected Folders Can Be
set Automatic
set Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be
set On
set Off
</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be
set On
set Off
</Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be
set Automatic
set Off
set Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be
set Remove
set Ignore
set Ask-Me
</Tracking Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be
set Do Not Schedule This Scan
set Run At A Specific Time Interval (Set The Number Of Days, Hours)
set Daily
set Weekly
set Monthly
<Only Time Idle Time> Only Time Idle Time Can Be
set On
set Off
</Only Time Idle Time>
<On Ac Power> On Ac Power Can Be
set On
set Off
</On Ac Power>
<Prevent Standby> Prevent Standby Can Be
set On
set Off
</Prevent Standby>
<After Scan Completion> After Scan Completion Can Be
set Stay On
set Turn Off
set Sleep
set Hibernate
</After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be
set On
set Off
<Remove Infected Folders> Remove Infected Folders Can Be
set Automatic
set Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be
set On
set Off
</Network Drive Scan>
<Low Risks> Low Risks Can Be
set Remove
set Ignore
set Ask-Me
</Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be
set On
set Off
</Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be
set On
set Off
</Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be
set On
set Off
</Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be
set On
set Off
<How To Respond When The Output Threat> How To Respond When The Output Threat
Is Found Can Be
set Automatically Removed
set Ask-Me
</How To Respond When The Output Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do When Scanning Email Messages> What To Do When Scanning Email
Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be
set On
set Off
</Protect Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be
set On
set Off
</Display Process Indicator>
</What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be
set Remove
set Ignore
set Ask-Me
</Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections> Signatures To Exclude From All Detections
Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be
set On
set Off
</Automatic Live Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be
set On
set Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>

<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be
set On
set Off
</Smart Firewall Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be
set Enabled
set Disabled
</Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be
set On
set Off
</Stealth Blocked Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be
set On
set Off
</Stateful Protocol Filter>
<Public Network Exceptions> Public Network Exceptions Can Be Configured To Allow
And Service
<Network Discovery> Network Discovery Can Be
set Allow
set Not Allow
</Network Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be
set Allow
set Not Allow
</File And Printer Sharing>
<Remote Desktop Connection> Remote Desktop Connection Can Be
set Allow
set Not Allow
</Remote Desktop Connection>
<Remote Assistance> Remote Assistance Can Be
set Allow
set Not Allow
</Remote Assistance>
<Windows Media Play> Windows Media Play Can Be
set Allow
set Not Allow
</Windows Media Play>
<Windows Media Player Extended> Windows Media Player Extended Can Be
set Allow
set Not Allow
</Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be
set Allow
set Not Allow
</Windows Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be
set Allow
set Not Allow
</Remote Procedure Call>
<Internet Connection Sharing> Internet Connection Sharing Can Be
set Allow
set Not Allow
</Internet Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be
set On
set Off
</Network Cost Awareness Status>
Network Cost Awareness Can Be Configured To
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be
set Auto
set No Limit
set Economy
set No Traffic
</Policy>
<In Use> In Use Can Be
set In Use
set Not Use
</In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<trust Level>
Trust Level Can Be
set Full Trust
set Private
set Public
set Restricted
</trust Level>
<In Use> In Use Can Be
set In Use
set Not Use
</In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be
set Full Trust
set Restricted
</Trust Level>
<Address> Address Can Be
set Ip
set Physical Address
</Address>
<Ips Extrusion> Ips Extrusion </Ips Extrusion>
The Operations Are Add And Remove
</Device Trust>
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be
set Allowed
set Block
set Custom
</Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be
<Traffic Rule>
<Active> Active Can Be
set On
set Off
</Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be
set Allow
set Block
set Monitor
</Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be
set On
set Off
</Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
set On
set If Explicitly Requested
set Off
</Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>
<Intrusion And Browser Protection>
<Intrusion Protection Prevention> Intrusion Protection Prevention Can Be
set On
set Off
</Intrusion Protection Prevention>

<Intrusion Auto Block> Intrusion Auto Block Can Be Configured To
set On
set Off
<Autoblock Attacking Computers> Autoblock Attacking Computers For Time Can Be
set 30min
set 1h
set 2h
set 4h
set 8h
set 16h
set 24h
set 48h
</Autoblock Attacking Computers>
</Intrusion Auto Block>
<Computers Currently Blocked By Auto Block> Computers Currently Blocked By Auto
Block
<Computer Currently Blocked By Auto Block> Computer Currently Blocked By Auto
Block
<Address> Address </Address>
<Action> Action </Action>
</Computer Currently Blocked By Auto Block>*
</Computers Currently Blocked By Auto Block>
<Intrusion Signatures> Intrusion Signatures
<Intrusion Signature> Intrusion Signature Can Be Configured To
<Active> Active Can Be
set On
set Off
</Active>
<Signature Id> Signature Id </Signature Id>
<Signature Name> Signature Name </Signature Name>
<Notify Me> Notify Me Can Be
set On
set Off
</Notify Me>
</Intrusion Signature>*
Operations Can Be Search, Activate All And Deactivate All
</Intrusion Signatures>
<Notification> Notification Can Be
set On
set Off
</Notification>
<Exclusion List> Exclusion List Can Operate Purge </Exclusion List>
<Exploit Prevention> Exploit Prevention Can Be
set On
set Off
</Exploit Prevention>
<Browser Protection> Browser Protection Can Be
set On
set Off
</Browser Protection>
<Download Intelligence> Download Intelligence Can Be
set On
set Off
<Download Insight Notifications> Download Insight Notifications Can Be
set On
set Off
</Download Insight Notifications>
<Show Report On Launch Of Files> Show Report On Launch Of Files Can Be
set Unproven Only
set Always
set Never
</Show Report On Launch Of Files>
</Download Intelligence>

</Intrusion And Browser Protection>


<Advanced Program Control Section>
<Advanced Program Control> Advanced Program Control Can Be
set On
set Off
</Advanced Program Control>
<Blocked Traffic For Malicious Applications> Blocked Traffic For Malicious Applications
Can Be
set Normal
set Aggressive
set High Certainty Only
</Blocked Traffic For Malicious Applications>
<Low Risk Applications> Low Risk Applications Can Be
set Alert If Suspicious
set Alert Always
set Allow
</Low Risk Applications>
<Apply Program Control For Ipv6 Nat Traversal Traffic> Apply Program Control For Ipv6
Nat Traversal Traffic Can Be Set To
set Off
set On
set If Explicitly Requested
</Apply Program Control For Ipv6 Nat Traversal Traffic>
<Show Firewall Block Notification> Show Firewall Block Notification Can Be
set On
set Off
</Show Firewall Block Notification>
</Advanced Program Control Section>
</Firewall>

<Anti Spam>
<Filter>
<Norton Antispam> Norton Antispam Can Be
set On
set Off
</Norton Antispam>
<Address Book Exclusions> Address Book Exclusions Can Be Configured To
<Name> Name </Name>
<Email Address> Email Address </Email Address>
Operations Can Be Add, Edit Or Removed
</Address Book Exclusions>
<Allow List> Allow List Can Be Configured With
<allow Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</allow Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Allow List>
<block List> Block List Can Be Configured With
<block Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</block Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</block List>
<Web Query> Web Query Can Be
set On
set Off
</Web Query>
<Protected Ports> Protected Ports Can Be Configured To
<Protected Port>
<Name> Name </Name>
<Number> Number </Number>
</Protected Port>*
Operations Can Be Add And Remove
</Protected Ports>
</Filter>
<Client Integration>
<Email Clients>
<Outlook> Outlook Can Be
set On
set Off
Operations Can Register Add-Ins
</Outlook>
<Address Books>
<Integrate With Outlook Contact List> Integrate With Outlook Contact List Can Be
set On
set Off
</Integrate With Outlook Contact List>
<Integrate With Windows Address Book> Integrate With Windows Address Book Can Be
set On
set Off
</Integrate With Windows Address Book>
</Address Books>
</Email Clients>
<Miscellaneous>
<Welcome Screen> Welcome Screen Can Be
set On
set Off
</Welcome Screen>
<Feedback> Feedback Can Be
set On
set Off
set Ask-Me
</Feedback>
</Miscellaneous>
</Client Integration>
</Anti Spam>
<Identity Safe>
<Identity Safety> Identity Safety Can Be
set On
set Off
It Can Be Configured But It Has To Be Signed In
</Identity Safety>
<Safe Surfing>
<Anti Phishing> Anti Phishing Can Be
set On
set Off
<Submit Full Site Information> Submit Full Site Information Can Be
set On
set Off
</Submit Full Site Information>
</Anti Phishing>
<Norton Safe Web> Norton Safe Web Can Be
set On
set Off
<Block Malicious Pages> Block Malicious Pages Can Be
set On
set Off
</Block Malicious Pages>
<Site Rating Icons In Search Results> Site Rating Icons In Search Results Can Be
set On
set Off
</Site Rating Icons In Search Results>
<Scan In Sight> Scan In Sight Can Be
set On
set Off
</Scan In Sight>
</Norton Safe Web>
</Safe Surfing>
</Identity Safe>
<Task Scheduling>
<Automatic Tasks>
<Automatic Task>
Automatic Task Can Configure
<Task Name> Task Name </Task Name>
<Description> Description </Description>
</Automatic Task>*
</Automatic Tasks>
<Scheduling>
<Schedule> Schedule Can Be
set Automatic
set Weekly
set Monthly
set Manual Schedule
</Schedule>
</Scheduling>
</Task Scheduling>
<Administrative Settings>
<Background Tasks> Background Tasks Can Be Configured
<Background Task>
<Norton Task> Norton Task Can Be One Of A List </Norton Task>
<Last Run> Last Run </Last Run>
<Duration> Duration </Duration>
<Run During Idle> Run During Idle Can Be
set Yes
set No
</Run During Idle>
<Status> Status Can Be
set Complete
set Not Run
</Status>
</Background Task>*
</Background Tasks>
<Idle Time Optimiser> Idle Time Optimiser Can Be
set On
set Off
</Idle Time Optimiser>
<Report Card> Report Card Can Be
set On
set Off
</Report Card>
<Automatic Download Of New Version> Automatic Download Of New Version Can Be
set On
set Off
</Automatic Download Of New Version>
<Search Short Key> Search Short Key Can Be
set On
set Off
<global> Global Can Be
set On
set Off
</global>
<function Key> Function Key Can Be Control, Alt Or Win </function Key>
<key> Key Can Be A To Z Or F1 To F12 </key>
</Search Short Key>
<Network Proxy Settings> Network Proxy Settings Can Be Configured With
<Automatic Configuration>
<Configuration> Configuration With
<Automatic Detect Settings> Automatic Detect Settings Can Be
set On
set Off
</Automatic Detect Settings>
<Use An Automatic Configurations Script> Use An Automatic Configurations Script
</Use An Automatic Configurations Script>
<Url> Url </Url>
</Configuration>
<Proxy Settings> Proxy Settings
<Use Proxy Setting> Use Proxy Setting </Use Proxy Setting>
<Address> Address </Address>
<Port> Port </Port>
</Proxy Settings>
Authentication Can Be Set With I Need Authentication To Connect Through My Firewall
Or Proxy With Username And Password
</Automatic Configuration>
</Network Proxy Settings>
<Norton Community Watch> Norton Community Watch Can Be
set On
set Off
<Detailed Error Data Collection> Detailed Error Data Collection Can Be
set Ask-Me
set Never
set Always
</Detailed Error Data Collection>
</Norton Community Watch>
<Remote Management> Remote Management Can Be
set On
set Off
</Remote Management>
<Family> Family Is Not Installed And It Can Be Installed </Family>
<Norton Task Notification> Norton Task Notification Can Be
set On
set Off
</Norton Task Notification>
<Performance Monitoring> Performance Monitoring Can Be
set On
set Off
<Performance Alerting> Performance Alerting Can Be
set On
set Off
set Log-Only
</Performance Alerting>
</Performance Monitoring>
</Administrative Settings>
</Norton>
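
The service specification above is, in effect, a tree of named settings, each with a
finite set of permitted values and a set operation. The short Python sketch below is
one possible reading of that model; the path strings are taken from the specification,
but the dictionary representation and the function itself are assumptions, not part of
any Norton interface.

# Minimal sketch of the settings model used in the service specification
# above: each setting has a fixed set of permitted values, and a setter
# applies a 'set' operation while rejecting anything else. Illustrative only.

SETTINGS = {
    "Antivirus/Automatic Protection/Boot Time Protection": {"Aggressive", "Normal", "Off"},
    "Antivirus/Automatic Protection/Real-Time Protection/Auto Protect": {"On", "Off"},
    "Firewall/General Settings/Smart Firewall/Smart Firewall Setting": {"On", "Off"},
}

state: dict[str, str] = {}

def set_setting(path: str, value: str) -> None:
    """Apply a 'set' operation, enforcing the permitted values for the setting."""
    allowed = SETTINGS.get(path)
    if allowed is None:
        raise KeyError(f"unknown setting: {path}")
    if value not in allowed:
        raise ValueError(f"{path} cannot be {value}; allowed: {sorted(allowed)}")
    state[path] = value

set_setting("Antivirus/Automatic Protection/Boot Time Protection", "Normal")
print(state)

The same table-driven approach extends naturally to the list-valued settings and
their Add, Edit and Remove operations.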

15.3 Standards
15.4 Techniques
15.5 Communications
<Norton>
<Antivirus>
<Automatic Protection>
<Boot Time Protection> Boot Time Protection Can Be Aggressive, Normal, Off </Boot
Time Protection>
<Real-Time Protection>
<Auto Protect>Auto Protect Can Be On, Off
<Removable Media Scan>Removable Media Scan Can Be On, Off </Removable Media Scan>
</Auto Protect>
<Sonar Protection> Sonar Protection Can Be On, Off
<Network Drive Protection>Network Drive Protection Can Be On, Off </Network Drive
Protection>
<Sonar Advanced Mode> Sonar Advanced Mode Can Be Off, Aggressive, Automatic
<Removable Discs Automatically> Removable Discs Automatically Can Be Ask-Me,
Always, Highly-Certainty-Only </Removable Discs Automatically>
<Remove Risks If I Am Away> Remove Risks If I Am Away Can Be Highly-Certainty-Only,
Ignore, Always </Remove Risks If I Am Away>
</Sonar Advanced Mode>
<Shows Sonar Block Notifications>Shows Sonar Block Notifications Are Show-All Or
Log-Only </Shows Sonar Block Notifications>
<Early Launch Anti Malware Protection> Early Launch Anti Malware Protection Can Be
On Or Off </Early Launch Anti Malware Protection>
</Sonar Protection>
</Real-Time Protection>
</Automatic Protection>
<Scan And Risks>
<Computer Scans>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders>Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Rootkits And Stealth Items Scan>Rootkits And Stealth Items Scan Can Be On Or
Off</Rootkits And Stealth Items Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Heuristic Protection> Heuristic Protection Can Be Automatic, Off Or Aggressive
</Heuristic Protection>
<Tracking Cookies> Tracking Cookies Can Be Remove, Ignore Or Ask-Me</Tracking
Cookies>
<Full System Scan> Full System Scan Can Be Configured
<Scan Items> Scan Items Entire Computer </Scan Items>
<Scan Schedule> Scan Schedule Can Be Do Not Schedule This Scan, Run At A Specific
Time Interval (set The Number Of Days, Hours), Daily, Weekly, Monthly
<Only Time Idle Time> Only Time Idle Time Can Be On Or Off </Only Time Idle
Time>
<On Ac Power> On Ac Power Can Be On Or Off </On Ac Power>
<Prevent Standby> Prevent Standby Can Be On Or Off </Prevent Standby>
<After Scan Completion> After Scan Completion Can Be Stay On, Turn Off, Sleep Or
Hibernate </After Scan Completion>
</Scan Schedule>
<Scan Options>
<Compressed File Scan> Compressed File Scan Can Be On Or Off
<Remove Infected Folders> Remove Infected Folders Can Be Automatic Or Ask-Me
</Remove Infected Folders>
</Compressed File Scan>
<Network Drive Scan> Network Drive Scan Can Be On Or Off </Network Drive Scan>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
</Scan Options>
</Full System Scan>
</Computer Scans>
<Protected Ports> Protected Ports Can Be Configured
<Full Name> Full Name </Full Name>
<Port Number> Port Number </Port Number>
With Add Or Remove Operations
</Protected Ports>
<Email Antivirus Scan> <Email Antivirus Scan Control> Email Antivirus Scan Can Be On
Or Off </Email Antivirus Scan Control>
Email Antivirus Scan Can Be Configured
<Scan Incoming Email> Scan Incoming Email Can Be On Or Off </Scan Incoming Email>
<Scan Outgoing Email> Scan Outgoing Email Can Be On Or Off </Scan Outgoing Email>
<Scan Outgoing Messages For Suspect Worms> Scan Outgoing Messages For Suspect
Worms Can Be On Or Off
<How To Respond When The Output Threat> How To Respond When The Output Threat
Is Found Can Be Automatically Removed Or Ask-Me </How To Respond When The
Output Threat>
</Scan Outgoing Messages For Suspect Worms>
<What To Do When Scanning Email Messages> What To Do When Scanning Email
Messages Can Be
<Protect Against Timeouts> Protect Against Timeouts Can Be On Or Off </Protect
Against Timeouts>
<Display Process Indicator> Display Process Indicator Can Be On Or Off </Display
Process Indicator>
</What To Do When Scanning Email Messages>
</Email Antivirus Scan>
<Exclusions And Low Risk>
<Low Risks> Low Risks Can Be Remove, Ignore Or Ask-Me </Low Risks>
<Items To Exclude From Scans> Items To Exclude From Scans Can Be Configured To
<Item To Exclude From Scans>System Volume Information </Item To Exclude From
Scans>*
With Operations Add Folders, Add Files, Edit And Remove
<Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Item To Exclude From Auto Protect Sonar And Download Intelligence Detection>
Items To Exclude From Auto Protect Sonar And Download Intelligence Detection Can
Be Configured To </Item To Exclude From Auto Protect Sonar And Download
Intelligence Detection>*
Operation Can Be Add Folder, Add Files, Edit And Remove
</Items To Exclude From Auto Protect Sonar And Download Intelligence Detection>
<Signatures To Exclude From All Detections>
<Signature To Exclude From All Detections>
Signatures To Exclude From All Detections Can Be Configured To
Operation Can Be Add, Remove And Risk Details
</Signature To Exclude From All Detections>*
</Signatures To Exclude From All Detections>
<Clear File Id Excluded During Scans> Clear File Id Excluded During Scans Can Be Set
With The Operation Clear All </Clear File Id Excluded During Scans>
</Items To Exclude From Scans>
</Exclusions And Low Risk>
</Scan And Risks>
<Updates>
Updates
<Automatic Live Update> Automatic Live Update Can Be On Or Off </Automatic Live
Update>
<Apply Updates Only On Reboot> Apply Updates Only On Reboot Can Be On Or Off
</Apply Updates Only On Reboot>
</Updates>
</Antivirus>
<Firewall>
<General Settings>
<Smart Firewall>
<Smart Firewall Setting> Smart Firewall Setting Can Be On Or Off </Smart Firewall
Setting>
<Uncommon Protocols> Uncommon Protocols Can Be Configured With
<Protocol Entry>
<Protocol Number> Protocol Number </Protocol Number>
<Protocol Name> Protocol Name </Protocol Name>
<Protocol Status> Protocol Status Can Be Enabled, Disabled </Protocol Status>
</Protocol Entry>*
</Uncommon Protocols>
<Firewall Reset> Firewall Reset Operation To Reset Or Not </Firewall Reset>
<Stealth Blocked Ports> Stealth Blocked Ports Can Be On Or Off </Stealth Blocked
Ports>
<Stateful Protocol Filter> Stateful Protocol Filter Can Be On Or Off </Stateful Protocol
Filter>
<Public Network Exceptions> Public Network Exceptions Can Be Configured To Allow
And Service
<Network Discovery> Network Discovery Can Be Allow Or Not Allow </Network
Discovery>
<File And Printer Sharing> File And Printer Sharing Can Be Allow Or Not Allow </File
And Printer Sharing>
<Remote Desktop Connection> Remote Desktop Connection Can Be Allow Or Not
Allow </Remote Desktop Connection>
<Remote Assistance> Remote Assistance Can Be Allow Or Not Allow </Remote
Assistance>
<Windows Media Play> Windows Media Play Can Be Allow Or Not Allow </Windows
Media Play>
<Windows Media Player Extended> Windows Media Player Extended Can Be Allow Or
Not Allow </Windows Media Player Extended>
<Windows Web Services> Windows Web Services Can Be Allow Or Not Allow </Windows
Web Services>
<Remote Procedure Call> Remote Procedure Call Can Be Allow Or Not Allow </Remote
Procedure Call>
<Internet Connection Sharing> Internet Connection Sharing Can Be Allow Or Not
Allow </Internet Connection Sharing>
</Public Network Exceptions>
</Smart Firewall>
<Network Settings>
<Network Cost Awareness>
<Network Cost Awareness Status> Network Cost Awareness Status Can Be On Or Off
</Network Cost Awareness Status>
Network Cost Awareness Can Be Configured To
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<Policy> Policy Can Be Auto, No Limit, Economy, No Traffic </Policy>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Cost Awareness>
<Network Trust> Network Trust Can Be Configured With
<Network Connection> Network Connection Usb To Asics At Fast Ethernet Adapter
</Network Connection>
<trust Level>
Trust Level Can Be Full Trust, Private, Public Or Restricted
</trust Level>
<In Use> In Use Can Be In Use Or Not Use </In Use>
</Network Trust>
<Device Trust>
Device Trust Can Be Configured To
<Name>Name</Name>
<Type>Type</Type>
<Trust Level>Trust Level Can Be Full Trust Or Restricted</Trust Level>
<Address> Address Can Be Ip Or Physical Address </Address>
<Ips Extrusion> Ips Extrusion </Ips Extrusion>
The Operations Are Add And Remove
</Device Trust>
</Network Settings>
</General Settings>
<Program Control>
Program Control Can Be
<Owner> Owner </Owner>
<Trust> Trust </Trust>
<Program> Program </Program>
<Access> Access Can Be Allowed, Block Or Custom </Access>
Operations Can Be Program Search, Add, Modify, Remove, Rename
</Program Control>
<Traffic Rules> Traffic Rules Can Be
<Traffic Rule>
<Active> Active Can Be On Or Off </Active>
<Direction> Direction </Direction>
<Description> Description </Description>
<Action> Actions Can Be Allow, Block Or Monitor </Action>
<Connections> Connections Can Be Connection To Other Computers, Connection From
Other Computers Connections To And From Other Computers </Connections>
<Computers> Computers Can Be Any Computer, Computer In The Local Subnet Or Only
The Computers And Sites Listed Below </Computers>
Operations Can Be Add And Remove
<Communications> Communications Can Be Protocol, All Types Of Communications Or
Listed Below </Communications>
Operation Can Be Add And Remove Ports
<Advanced> Advanced
<Create The Security History To Log Entry> Create The Security History To Log Entry
Can Be On Or Off </Create The Security History To Log Entry>
<Apply Rule For Nat Traversal Traffic> Apply Rule For Nat Traversal Traffic Can Be
On, If Explicitly Requested Or Off </Apply Rule For Nat Traversal Traffic>
</Advanced>
<Description Is The Name For The Rule> Description Is The Name For The Rule
</Description Is The Name For The Rule>
</Traffic Rule>*
Operations Can Be Add, View, Remove, Move Up, Move Down
</Traffic Rules>

<Intrusion And Browser Protection>


<Intrusion Protection Prevention> Intrusion Protection Prevention Can Be On Or Off
<Intrusion Auto Block> Intrusion Auto Block Can Be Configured To On Or Off
<Autoblock Attacking Computers> Autoblock Attacking Computers For Time Can Be
30 Min, 1h, 2h, 4h, 8h, 16h, 24h Or 48h </Autoblock Attacking Computers>
</Intrusion Auto Block>
<Computers Currently Blocked By Auto Block> Computers Currently Blocked By Auto
Block
<Computer Currently Blocked By Auto Block> Computer Currently Blocked By Auto
Block
<Address> Address </Address>
<Action> Action </Action>
</Computer Currently Blocked By Auto Block>*
</Computers Currently Blocked By Auto Block>
<Intrusion Signatures> Intrusion Signatures
<Intrusion Signature> Intrusion Signature Can Be Configured To
<Active> Active Can Be On Or Off </Active>
<Signature Id> Signature Id </Signature Id>
<Signature Name> Signature Name </Signature Name>
<Notify Me> Notify Me Can Be On Or Off </Notify Me>
</Intrusion Signature>*
Operations Can Be Search, Activate All And Deactivate All
</Intrusion Signatures>
<Notification> Notification Can Be On Or Off </Notification>
<Exclusion List> Exclusion List Can Operate Purge </Exclusion List>
</Intrusion Protection Prevention>
<Exploit Prevention> Exploit Prevention Can Be On Or Off </Exploit Prevention>
<Browser Protection> Browser Protection Can Be On Or Off </Browser Protection>
<Download Intelligence> Download Intelligence Can Be On Or Off
<Download Insight Notifications> Download Insight Notifications Can Be On Or Off
</Download Insight Notifications>
<Show Report On Launch Of Files> Show Report On Launch Of Files Can Be Unproven
Only, Always And Never </Show Report On Launch Of Files>
</Download Intelligence>
</Intrusion And Browser Protection>
<Advanced Program Control Section>
<Advanced Program Control> Advanced Program Control Can Be On Or Off
</Advanced Program Control>
<Blocked Traffic For Malicious Applications> Blocked Traffic For Malicious Applications
Can Be Normal, Aggressive Or High Certainty Only </Blocked Traffic For Malicious
Applications>
<Low Risk Applications> Low Risk Applications Can Be Alert If Suspicious, Alert
Always Or Allow </Low Risk Applications>
<Apply Program Control For Ipv6 Nat Traversal Traffic> Apply Program Control For Ipv6
Nat Traversal Traffic Can Be Set To Off, On Or If Explicitly Requested
</Apply Program Control For Ipv6 Nat Traversal Traffic>
<Show Firewall Block Notification> Show Firewall Block Notification Can Be On Or Off
</Show Firewall Block Notification>
</Advanced Program Control Section>
</Firewall>
<Anti Spam>
<Filter>
<Norton Antispam> Norton Antispam Can Be On Or Off </Norton Antispam>
<Address Book Exclusions> Address Book Exclusions Can Be Configured To
<Name> Name </Name>
<Email Address> Email Address </Email Address>
Operations Can Be Add, Edit Or Removed
</Address Book Exclusions>
<Allow List> Allow List Can Be Configured With
<allow Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</allow Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</Allow List>
<block List> Block List Can Be Configured With
<block Item>
<Name> Name </Name>
<Type> Type </Type>
<Address> Address </Address>
Operations Can Be Add, Edit, Remove Or Import
</block Item>*
<Option> Option Is Ask-Me, Always Or Never </Option>
</block List>
<Web Query> Web Query Can Be On Or Off </Web Query>
<Protected Ports> Protected Ports Can Be Configured To
<Protected Port>
<Name> Name </Name>
<Number> Number </Number>
</Protected Port>*
Operations Can Be Add And Remove
</Protected Ports>
</Filter>
<Client Integration>
<Email Clients>
<Outlook> Outlook Can Be On Or Off
Operations Can Register Add-Ins
</Outlook>
<Address Books>
<Integrate With Outlook Contact List> Integrate With Outlook Contact List Can Be On
Or Off </Integrate With Outlook Contact List>
<Integrate With Windows Address Book> Integrate With Windows Address Book Can Be
On Or Off </Integrate With Windows Address Book>
</Address Books>
<Miscellaneous>
<Welcome Screen> Welcome Screen Can Be On Or Off </Welcome Screen>
<Feedback> Feedback Can Be On, Ask-Me Or Off </Feedback>
</Miscellaneous>
</Client Integration>
</Anti Spam>
<Identity Safe>
<Identity Safety> Identity Safety Can Be On Or Off
It Can Be Configured But It Has To Be Signed In
</Identity Safety>
<Safe Surfing>
<Anti Phishing> Anti Phishing Can Be On Or Off
<Submit Full Site Information> Submit Full Site Information Can Be On Or Off </Submit
Full Site Information>
</Anti Phishing>
<Norton Safe Web> Norton Safe Web Can Be On Or Off
<Block Malicious Pages> Block Malicious Pages Can Be On Or Off </Block Malicious
Pages>
<Site Rating Icons In Search Results> Site Rating Icons In Search Results Can Be On Or
Off </Site Rating Icons In Search Results>
<Scan In Sight> Scan In Sight Can Be On Or Off </Scan In Sight>
</Norton Safe Web>
</Safe Surfing>
</Identity Safe>
<Task Scheduling>
<Automatic Tasks>
<Automatic Task>
Automatic Task Can Configure
<Task Name> Task Name </Task Name>
<Description> Description </Description>
</Automatic Task>*
</Automatic Tasks>
<Scheduling>
<Schedule> Schedule Can Be Automatic, Weekly, Monthly Or Manual Schedule
</Schedule>
</Scheduling>
</Task Scheduling>
<Administrative Settings>
<Background Tasks> Background Tasks Can Be Configured
<Background Task>
<Norton Task> Norton Task Can Be One Of A List </Norton Task>
<Last Run> Last Run </Last Run>
<Duration> Duration </Duration>
<Run During Idle> Run During Idle Can Be Yes Or No</Run During Idle>
<Status> Status Can Be Complete Or Not Run</Status>
</Background Task>*
</Background Tasks>
<Idle Time Optimiser> Idle Time Optimiser Can Be On Or Off </Idle Time Optimiser>
<Report Card> Report Card Can Be On Or Off </Report Card>
<Automatic Download Of New Version> Automatic Download Of New Version Can Be On
Or Off </Automatic Download Of New Version>
<Search Short Key> Search Short Key Can Be On Or Off
<global> Global Can Be Set On Or Off </global>
<function Key> Function Key Can Be Control, Alt Or Win </function Key>
<key> Key Can Be A To Z Or F1 To F12 </key>
</Search Short Key>
<Network Proxy Settings> Network Proxy Settings Can Be Configured With
<Automatic Configuration>
<Configuration> Configuration With
<Automatic Detect Settings> Automatic Detect Settings Can Be On Or Off</Automatic
Detect Settings>
<Use An Automatic Configurations Script> Use An Automatic Configurations Script
</Use An Automatic Configurations Script>
<Url> Url </Url>
</Configuration>
<Proxy Settings> Proxy Settings
<Use Proxy Setting> Use Proxy Setting </Use Proxy Setting>
<Address> Address </Address>
<Port> Port </Port>
</Proxy Settings>
Authentication Can Be Set With I Need Authentication To Connect Through My Firewall
Or Proxy With Username And Password
</Network Proxy Settings>
<Norton Community Watch> Norton Community Watch Can Be On Or Off
<Detailed Error Data Collection> Detailed Error Data Collection Can Be Ask-Me, Never
Or Always </Detailed Error Data Collection>
</Norton Community Watch>
<Remote Management> Remote Management Can Be On Or Off </Remote Management>
<Family> Family Is Not Installed And It Can Be Installed </Family>
<Norton Task Notification> Norton Task Notification Can Be On Or Off </Norton Task
Notification>
<Performance Monitoring> Performance Monitoring Can Be On Or Off
<Performance Alerting> Performance Alerting Can Be On, Off And Log-Only
</Performance Alerting>
</Performance Monitoring>
</Administrative Settings>
</Norton>

16 Appendix - SONAR Protection


Symantec Online Network for Advanced Response (SONAR) identifies emerging threats
based on the behavior of files. It detects malicious code before virus definitions are
available through LiveUpdate and protects you from advanced threats.
You must be connected to the Internet to get real-time SONAR protection.
You can modify SONAR settings, including Network Drive Protection and Advanced
Mode options.
16.1 How do I turn on Network Drive Protection?
Symantec recommends that you keep Network Drive Protection turned on to keep your
network drives secure.
16.2 Turn on Network Drive Protection
• In the Automatic Protection tab, under SONAR Protection, in the Network Drive
Protection row, move the switch to On.
16.3 How do I configure SONAR Advanced Mode?
SONAR categorizes threats as high-certainty or low-certainty based on their behaviors.
By default, SONAR blocks high-certainty threats. You can choose to block all low-
certainty threats or receive notifications that let you allow or block detected threats. If
you allow a threat, SONAR does not notify you if it detects the same threat again.
16.4 Configure SONAR Advanced Mode
• In the Automatic Protection tab, under SONAR Protection, in the SONAR Advanced
Mode row:
• To block high-certainty threats and allow low-certainty threats, move the switch
to Off.
• To block high-certainty threats and receive notifications for low-certainty threats,
move the switch to Automatic.
This is the recommended setting.
• To block high-certainty threats, and receive notifications for low-certainty threats
with few suspicious characteristics, move the switch to Aggressive.
This setting is highly sensitive and might cause legitimate files to be identified as
threats. It is recommended for advanced users only.
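The behaviour of the three Advanced Mode settings can be read as a small decision
table. The Python sketch below is illustrative only: the mode names and the
high/low certainty classes come from the text above, while the function shape and
its return values are assumptions.

# Illustrative decision table for SONAR Advanced Mode as described above.
# High-certainty threats are blocked in every mode; the modes differ only
# in how low-certainty threats are handled.

def sonar_action(mode: str, certainty: str) -> str:
    """Return the action taken for a detected threat: block, allow or notify."""
    if certainty == "high":
        return "block"    # high-certainty threats are blocked in every mode
    if mode == "Off":
        return "allow"    # low-certainty threats are allowed without notification
    if mode in ("Automatic", "Aggressive"):
        # the user is notified and decides; Aggressive extends this to threats
        # with only a few suspicious characteristics
        return "notify"
    raise ValueError(f"unknown mode: {mode}")

assert sonar_action("Off", "low") == "allow"
assert sonar_action("Automatic", "low") == "notify"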
16.5 How do I configure Remove Risks Automatically?
By default, SONAR blocks only high-certainty threats. You can change the settings to
have SONAR block all threats or ask you to block or allow low-certainty threats when it
detects them.
16.6 Configure Remove Risks Automatically
• In the Automatic Protection tab, under SONAR Advanced Mode, in the Remove Risks
Automatically row:
• To block all threats, move the switch to Always.
• To block only high-certainty threats, move the switch to High-Certainty Only.
This is the recommended setting.
• To ask for your decision when a risk is detected, move the button to Ask Me.
16.7 How do I configure Remove Risks if I Am Away?
By default, SONAR only blocks high-certainty threats when you are not using your
computer. You can change the settings to have SONAR remove all threats or ask you to
block or allow low-certainty threats when it detects them.
16.8 Configure Remove Risks if I Am Away:
• In the Automatic Protection tab, under SONAR Advanced Mode, in the Remove Risks
if I Am Away row:
• To block all threats, move the switch to Always.
• To block only high-certainty threats, move the switch to High-Certainty Only.
This is the recommended setting.
• To ignore threats when you are away, move the switch to Ignore.
16.9 How do I configure Show SONAR Block Notifications?
Use the Show SONAR Block Notifications option to turn on or turn off notifications
when SONAR blocks a threat. For example, you can suppress notifications when you
watch a movie or play a video game in full screen mode.
16.10 Configure Show SONAR Block Notifications
• In the Automatic Protection tab, under SONAR Advanced Mode, in the Show SONAR
Block Notifications row:
• Move the switch to Show All to notify you every time SONAR blocks a threat.
• Move the switch to Log Only to suppress notifications while still letting you see
details of blocked threats in the Security History window.
To access the Security History window, in the Norton main window, double-
click Security and then click History.
17 Appendix - Control for Nodes
17.1 Initial Loading Of Hardware
When an item of hardware is added to the IoT network, it is given an initial
configuration comprising its item type, the item types it may connect to, and its
owner. The functions of the item include:
• A method of connecting to the nearest node holding a utility or the named owner,
by polling the nearest nodes at the physical, data link, network, transport and
session layers using the appropriate protocol.
• A standard profile dependent on node type.
• The functions supported by the item's normal operations.
• A method of accepting updates for control ware.
• A GPS collection support module whose position is appended to the item type to
form the IoT identification; if the identification duplicates an existing one, a
number suffix is added, and the identification is then distributed to the proper
nodes (a sketch of this step follows the list).
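
As an illustration only, the following minimal Python sketch shows how this
identification step might work. The read_gps helper, the identifier format and the
known_ids catalog are assumptions made for this sketch, not part of any particular
IoT stack.

    # Hypothetical sketch of initial IoT identification assignment.
    # read_gps() and known_ids stand in for a real GPS module and the
    # network's distributed identifier catalog.

    def read_gps():
        # Placeholder: a real implementation would query the GPS module.
        return (51.5074, -0.1278)

    def assign_identification(item_type, known_ids):
        """Build an IoT identification from item type plus GPS position,
        adding a number suffix if the identification already exists."""
        lat, lon = read_gps()
        base_id = f"{item_type}@{lat:.4f},{lon:.4f}"
        candidate, suffix = base_id, 1
        while candidate in known_ids:        # duplicate: add a number suffix
            candidate = f"{base_id}-{suffix}"
            suffix += 1
        known_ids.add(candidate)             # distribute to the proper nodes
        return candidate

    known_ids = set()
    print(assign_identification("temperature-sensor", known_ids))
    print(assign_identification("temperature-sensor", known_ids))  # gets "-1" suffix
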
17.2 Update Loading Of Software
When an item of software is updated on the IoT network, it uses the existing
configuration comprising the item type, the item types it may connect to, and its
owner. The functions of the item include:
• A method of connecting to the nearest node holding a utility or the named owner,
by polling the nearest nodes at the physical, data link, network, transport and
session layers using the appropriate protocol.
• An updated profile dependent on node type.
• The updated functions supported by the item’s normal operations.
• A method of accepting updates for control ware and reporting the configuration
to a catalog (a sketch follows this list).
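
A minimal sketch of the update step, assuming a simple in-memory catalog; the field
names and the catalog interface are illustrative placeholders.

    # Hypothetical sketch: accept a control-ware update and report the
    # resulting configuration to a catalog. The catalog is modeled as a
    # plain dictionary; a real deployment would use a networked registry.

    catalog = {}

    def apply_update(node_id, item_type, connects_to, owner, version):
        """Record the updated profile and report it to the catalog."""
        config = {
            "item_type": item_type,
            "connects_to": connects_to,   # item types this node may connect to
            "owner": owner,
            "controlware_version": version,
        }
        catalog[node_id] = config         # report configuration to the catalog
        return config

    apply_update("temperature-sensor@51.5074,-0.1278",
                 "temperature-sensor", ["gateway"], "utility-A", "2.1.0")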

17.3 Profile
The profile is built from a core profile, to which is added a set of specifications
that make services ready for the IoT (two of these are sketched after the list),
including:
• Configuration Management
• Fault Tolerance
• Security
• Metrics
• Health Checks
• JWT Authorization
• Type-safe REST Client
• OpenAPI
• OpenTracing
• Recovery
• Fallback
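
To make these capabilities concrete, the sketch below illustrates two of them, a
fallback and a health check, in plain Python; the service names and probe functions
are assumptions, not part of any profile specification.

    # Illustrative sketch of two profile capabilities: a fallback and a
    # health check. The primary/backup services are stand-ins.

    def primary_service():
        raise ConnectionError("primary service unavailable")

    def backup_service():
        return {"status": "degraded", "source": "backup"}

    def with_fallback(primary, fallback):
        """Fallback: if the primary call fails, serve the backup result."""
        try:
            return primary()
        except Exception:
            return fallback()

    def health_check(services):
        """Health check: report UP only if every dependency responds."""
        results = {name: "UP" if probe() else "DOWN"
                   for name, probe in services.items()}
        overall = "UP" if all(v == "UP" for v in results.values()) else "DOWN"
        return {"status": overall, "checks": results}

    print(with_fallback(primary_service, backup_service))
    print(health_check({"database": lambda: True, "queue": lambda: True}))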

18 Appendix - IoT Configurations


18.1 Introduction
Organizations that rely heavily on data are increasingly likely to use cloud, fog, and
edge computing infrastructures. These architectures allow organizations to take
advantage of a variety of computing and data storage resources, including the
Industrial Internet of Things (IIoT). Cloud, fog and edge computing may appear similar,
but they are different layers of the IIoT. Edge computing for the IIoT allows processing
to be performed locally at multiple decision points for the purpose of reducing network
traffic.

18.2 Cloud Computing


Most enterprises are familiar with cloud computing since it’s now a de facto standard
in many industries. Fog and edge computing are both extensions of cloud networks,
which are a collection of servers comprising a distributed network. Such a network can
allow an organization to greatly exceed the resources that would otherwise be
available to it, freeing organizations from the requirement to keep infrastructure on
site. The primary advantage of cloud-based systems is that they allow data to be
collected from multiple sites and devices and made accessible anywhere in the world.
Embedded hardware obtains data from on-site IIoT devices and passes it to the fog
layer. Pertinent data is then passed to the cloud layer, which is typically in a different
geographical location. The cloud layer is thus able to benefit from IIoT devices by
receiving their data through the other layers. Organizations often achieve superior
results by integrating a cloud platform with on-site fog networks or edge devices. Most
enterprises are now migrating towards a fog or edge infrastructure to increase the
utilization of their end-user and IIoT devices.
The use of embedded systems and other specialized devices allows these
organizations to better leverage the processing capability available to them, resulting
in improved network performance. The increased distribution of data processing and
storage made possible by these systems reduces network traffic, thus improving
operational efficiency. The cloud also performs high-order computations such as
predictive analysis and business control, which involves the processing of large
amounts of data from multiple sources. The results are then passed back down the
computation stack so that they can be used by human operators and to facilitate
machine-to-machine (M2M) communications and machine learning.
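
The layered flow just described might be sketched as follows; the layer functions,
threshold and data format are assumptions made purely for illustration.

    # Hypothetical sketch of the edge -> fog -> cloud data flow: the fog
    # layer filters raw readings so only pertinent data reaches the cloud.

    def edge_readings():
        # Stand-in for data collected from on-site IIoT devices.
        return [{"sensor": "temp-1", "value": v} for v in (21.5, 22.0, 95.3)]

    def fog_filter(readings, threshold=90.0):
        """Fog layer: keep only readings worth forwarding to the cloud."""
        return [r for r in readings if r["value"] >= threshold]

    def cloud_ingest(pertinent):
        """Cloud layer: high-order computation over the aggregated data."""
        return {"alerts": len(pertinent), "payload": pertinent}

    print(cloud_ingest(fog_filter(edge_readings())))
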
18.3 Fog Computing
Fog computing and edge computing appear similar since they both involve bringing
intelligence and processing closer to the creation of data. However, the key
difference between the two lies in where the intelligence and compute power are
placed. A fog environment places intelligence at the local area network (LAN): data
is transmitted from endpoints to a gateway, which forwards it to LAN resources for
processing and return transmission. Edge computing places intelligence
and processing power in devices such as embedded automation controllers.
For example, a jet engine test produces a large amount of data about the engine’s
performance and condition very quickly. Industrial gateways are often used in this
application to collect data from edge devices, which is then sent to the LAN for
processing.
Fog computing uses edge devices and gateways, with the LAN providing the processing
capability. These devices need to be efficient, meaning they require little power and
produce little heat. Single-board computers (SBCs) can be used in a fog environment
to meet real-time requirements such as response time (latency), security and data
volume, and the processing load can be distributed across multiple nodes in a network.
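
A toy sketch of the gateway role in the jet-engine example: the gateway batches
high-rate readings and forwards only compact summaries to the LAN for processing.
The batch size and telemetry values are invented.

    # Hypothetical fog-gateway sketch: batch high-rate engine readings
    # and forward only summaries to the LAN for processing.

    from statistics import mean

    def gateway_summarize(readings, batch_size=100):
        """Collapse each batch of raw readings into one summary record."""
        summaries = []
        for i in range(0, len(readings), batch_size):
            batch = readings[i:i + batch_size]
            summaries.append({"n": len(batch),
                              "mean": mean(batch),
                              "max": max(batch)})
        return summaries

    raw = [1000 + (i % 7) for i in range(250)]   # stand-in engine telemetry
    print(gateway_summarize(raw))                # 3 summaries instead of 250 points
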
18.4 Edge Computing
The IoT has introduced a virtually infinite number of endpoints to commercial networks.
This trend has made it more challenging to consolidate data and processing in a single
data center, giving rise to the use of “edge computing.” This architecture performs
computations near the edge of the network, which is closer to the data source.
Edge computing is an extension of older technologies such as peer-to-peer networking,
distributed data, self-healing network technology and remote cloud services. It’s
powered by small form factor hardware with flash-storage arrays that provide highly
optimized performance. The processors used in edge computing devices offer improved
hardware security with a low power requirement. Industrial embedded SBCs and data
acquisition modules provide gateways for the data flow to and from an organization’s
computing environments.
The IIoT is composed of edge, fog and cloud architectural layers, such that the edge
and fog layers complement each other. Fog computing uses a centralized system that
interacts with industrial gateways and embedded computer systems on a local area
network, whereas edge computing performs much of the processing on embedded
computing platforms directly interfacing to sensors and controllers. However, this
distinction isn’t always clear, since organizations can be highly variable in their
approach to data processing.
Edge computing offers many advantages over traditional architectures such as
optimizing resource usage in a cloud-computing system. Performing computations at
the edge of the network reduces network traffic, which reduces the risk of a data
bottleneck. Edge computing also improves security by encrypting data closer to the
network core, while optimizing data that’s further from the core for performance.
Control is very important for edge computing in industrial environments because it
requires a bidirectional process for handling data. Embedded systems can collect data
at a network’s edge in real time and process that data before handing it off to the
higher-level computing environments.
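
The bidirectional handling described above might look like the following sketch; the
command format, setpoint and actions are placeholders rather than any standard
protocol.

    # Hypothetical sketch of bidirectional edge control: process sensor
    # data locally, hand summaries upward, and act on commands sent back.

    def edge_step(sensor_value, setpoint):
        """Process one reading at the edge and derive a local action."""
        action = "throttle-down" if sensor_value > setpoint else "steady"
        upstream = {"value": sensor_value, "action": action}
        return action, upstream          # act locally, report upward

    def handle_command(command, state):
        """Apply a control command arriving from the higher layers."""
        if command["type"] == "set-setpoint":
            state["setpoint"] = command["value"]
        return state

    state = {"setpoint": 80.0}
    state = handle_command({"type": "set-setpoint", "value": 75.0}, state)
    print(edge_step(82.1, state["setpoint"]))
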
18.5 Summary
The growth of the IIoT has increased the need for edge, fog, and cloud platforms.
High-performance embedded systems can be utilized in industrial environments to
enable solutions for edge computing requirements and to act as gateways within the
fog platforms. Embedded systems thus allow organizations to leverage IIoT hardware
and network infrastructure, demonstrating the advantages of distributed computing.
