
Jason Rommel S. Mendaros
2/20/2021
IT – 31

EMERGING TECHNOLOGY ASSIGNMENT NO. 1

1. What is meant by disruptive technology? Give at least five (5) examples of disruptive technologies and explain the impact of each.

 Disruptive technology is an innovation that significantly changes the way consumers, industries and companies operate. A disruptive innovation sweeps away established processes and habits because it has recognizably superior qualities.

 Web-based video - Netflix, now hugely popular, has revolutionized the way TV shows and films are viewed and upended the conventional broadcast model. Netflix and similar services let viewers avoid fixed broadcast schedules and watch programs on their own time.

 3-D Printers - A 3D printer does not merely build one kind of copy, widget or thing; its output ranges from buildings and body parts to buttons. At first glance it may not seem a revolutionary technology. In contrast to a subtractive process such as sculpting, however, 3D printing is an additive process: it starts from a digital design file or a 3D scan of a model and then renders the object layer by layer. The printer uses resins or fluidized materials that can be shaped and hardened into the appropriate form as the structure is printed.
 Li-Fi, 100X Faster Than Wi-Fi - At a recent TED conference, Harald Haas argued that household LED light bulbs could easily be turned into Li-Fi transmitters, offering Internet users improved connectivity. "We all need to fit a small microchip into every potential lighting device, and that will then combine two key functions: lighting and wireless data transmission," he said. The LEDs flicker to transmit data at speeds far too fast for the human eye to perceive, so users need not worry about distracting flashes in their environment. Thanks to the work of the Estonian start-up Velmenni, which has trialled Li-Fi in offices and other industrial sites in Tallinn, the technology has already been applied in real-life settings, reaching link speeds of approximately one gigabit per second. As Internet use continues to grow worldwide, Li-Fi has been proposed as an answer to rising spectrum congestion.
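Conceptually, a Li-Fi link encodes bits as rapid changes in LED light intensity. The toy sketch below is an illustration only, not the modulation actually used by Haas's or Velmenni's systems (which employ far more sophisticated schemes); it shows the simplest possible form, on-off keying, where each bit maps to the light being on or off for one symbol period:

```python
def ook_encode(data):
    # On-off keying: each bit of each byte maps to LED on (1) or off (0)
    # for one symbol period, most significant bit first.
    bits = []
    for byte in data:
        for i in range(7, -1, -1):
            bits.append((byte >> i) & 1)
    return bits

def ook_decode(bits):
    # Regroup the sampled light levels back into bytes.
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# A receiver sampling the light level recovers the original bytes.
signal = ook_encode(b"Li-Fi")
```

At gigabit rates the symbol period is about a nanosecond, which is why the flicker is imperceptible to the eye.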
 Blockchain - Blockchain is a modern, state-of-the-art technology platform that uses complex cryptography and a decentralized network to build applications, record data transparently, and carry out financial transactions through a chain of blocks. Although blockchain technology is traditionally equated with Bitcoin, it now serves as a framework for a range of decentralized and open ventures, such as smart contracts and even a decentralized Internet. Blockchain technology has the potential to fully transform the Internet and application-development industries.
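The chaining idea behind this description can be shown with a minimal sketch: each block stores the hash of its predecessor, so tampering with any earlier block invalidates every later link. This is a simplified illustration only (no proof-of-work, networking, or signatures), not the implementation of any real blockchain:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically (sorted keys).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    # Each block records its data plus the hash of the previous block,
    # so altering any earlier block breaks every later hash link.
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
chain = [genesis, b1, b2]

def verify(chain):
    # Each stored prev_hash must match a freshly recomputed hash.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True
```

Changing the data in any block makes `verify` fail for the whole chain, which is what makes the record tamper-evident.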

 Cloud Computing - Cloud storage and cloud computing, backed by large, robust off-site data centers and servers, have dramatically changed how companies operate, enabling unparalleled data processing, storage and computation for organizations of all sizes. Together with higher Internet bandwidth and the evolution of streaming services, cloud systems have also allowed companies to stream full games and programs in real time to users connected to their servers. Cloud storage and computing effectively let users draw on large, efficient computer systems over the Internet to improve their performance and complete complex computing tasks, without owning high-specification hardware themselves. In addition, new offerings such as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) have become standard models for deploying enterprise products and services over the cloud.
2. Discuss the theories, principles, evolution, emerging trends and applications of
each of the following technologies:
a. Mobile technology - Mobile technology is the technology used for mobile communication, and it has changed rapidly over the last few years. Since the start of the millennium, a standard mobile device has grown from a simple phone into a GPS navigator, an embedded web browser, an instant-messaging client and a handheld gaming console. Many experts believe that the future of computer technology lies in mobile computing with wireless networking. Mobile computing is becoming increasingly popular via tablet computers, which are available on 3G and 4G networks. Mobile technology takes several forms: chiefly mobile information-technology devices and, mainly, wireless technology, i.e. the integration of wireless capability into information-technology appliances (including laptops, tablets, mobile phones, etc.).

In the early stages of wireless engineering, a portable mobile radio telephone service was envisioned. In 1917, Finnish inventor Eric Tigerstedt filed a patent for a "pocket-size folding telephone with a very thin carbon microphone." Early predecessors of mobile phones included analog radio communications from ships and trains. Following World War II, efforts were made in several countries to build truly portable telephones. The development of mobile telephony is traced in successive generations, beginning with the early zero-generation (0G) systems, such as the Bell System's Mobile Telephone Service and its successor, the Improved Mobile Telephone Service. These 0G systems were not cellular, were very costly, and supported only a few simultaneous calls. Advances in metal-oxide-semiconductor (MOS) large-scale integration (LSI) technology, information theory and cellular networking led to affordable mobile communications and devices such as car phones. In 1973, John F. Mitchell and Martin Cooper of Motorola demonstrated the first handheld cell phone, with a handset weighing 2 kg (4.4 lb). In 1979, Nippon Telegraph and Telephone introduced the first commercial automated cellular (1G, analog) network in Japan. In 1981, the Nordic Mobile Telephone (NMT) system was launched simultaneously in Denmark, Finland, Norway and Sweden, and many other countries followed in the early to mid-1980s. These 1G systems could support many more simultaneous calls but still used analog cellular technology.
The DynaTAC 8000x, released in 1983, was the first commercially available handheld mobile phone. The 1990s saw the emergence of modern cellular networks, in which the widespread use of MOSFET-based power amplifiers and RF circuits (RF CMOS) enabled digital signal processing in wireless communications. In 1991, Radiolinja launched second-generation (2G) digital cellular technology in Finland using the GSM standard; the new entrants challenged the incumbent 1G network operators, creating competition in the market. The GSM standard is a European initiative articulated at CEPT (the "Conférence Européenne des Postes et Télécommunications"). In 1987, thirteen European countries signed a Memorandum of Understanding agreeing to launch a commercial service by 1991, and Franco-German R&D cooperation demonstrated its technical feasibility. The first edition of the GSM (2G) standard ran to 6,000 pages. The IEEE/RSE awarded Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell Medal for their contributions to the first digital mobile telephone standard. In 2018, GSM was used by more than 5 billion people in more than 220 countries, and GSM (2G) has since evolved into 3G, 4G and 5G. In 1991, Sony and Asahi Kasei commercialized the lithium-ion battery, an essential power source for modern cell phones. In 2001, NTT DoCoMo launched the third generation (3G) in Japan using the WCDMA standard. This was followed by 3.5G, 3G+ and turbo 3G enhancements based on the high-speed packet access (HSPA) family, allowing UMTS networks higher data-transfer speeds and capacity. By 2009, it had become clear that the surge in bandwidth-intensive applications such as streaming media would at some point overwhelm 3G networks. The industry therefore began searching for data-optimized fourth-generation technologies, promising speed improvements of up to ten times over existing 3G technologies. The first two commercially deployed technologies billed as 4G were Sprint's WiMAX standard and TeliaSonera's LTE, first offered in Scandinavia. 5G denotes the research concepts and initiatives that represent the next major step in mobile telecommunication standards beyond 4G/IMT-Advanced. The term 5G has not yet been used officially in any public specification or official document by telecommunication companies or standardization bodies such as 3GPP, the WiMAX Forum or ITU-R. Standardization bodies are actively developing standards beyond 4G, but at this point they regard them as falling under 4G, not a new mobile generation.
b. Biotechnology - Biotechnology is, in the most direct sense, technology based on biology: it harnesses cellular and biomolecular processes to develop technologies and products that improve our lives and the health of our planet. We have used the biological processes of microorganisms for more than 6,000 years to make useful food products, such as bread and cheese, and to preserve dairy products.

Biotechnology has applications in four major industrial areas: health care (medical); crop production and agriculture; non-food (industrial) uses of crops and other products (for example biodegradable plastics, vegetable oil, biofuels); and environmental uses.
For example, one specific application of biotechnology is the directed use of microorganisms to manufacture organic products (examples include beer and milk products). Another example is the use of naturally occurring bacteria by the mining industry in bioleaching. Biotechnology is also used for recycling, waste treatment, and the clean-up of sites contaminated by industrial activities (bioremediation), as well as in biological weapons.
A number of derived terms have been coined to describe the various branches of biotechnology, for example:
Bioinformatics is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."
Blue biotechnology is based on the exploitation of marine resources to create products and industrial applications. This branch of biotechnology is most used in the refining and combustion industries, primarily for the production of bio-oils from photosynthesizing micro-algae.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the design of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce solutions that are more environmentally friendly than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, eliminating the need for the external application of pesticides; Bt corn is one example. Whether or not green biotechnology products such as these are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is also regarded as the next phase of the Green Revolution, a platform for eradicating world hunger through technologies that produce plants that are more fertile and more resistant to biotic and abiotic stresses, and that ensure the use of environmentally friendly fertilizers and biopesticides.
Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries and in the protection of public health. This branch involves the production of vaccines and antibiotics, regenerative medicine, the creation of artificial organs and new diagnostics of diseases, whether through the development of hormones, stem cells, antibodies, siRNA or diagnostic tests.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the design of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts, either to produce valuable chemicals or to destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the conventional processes used to produce industrial goods.
"Yellow biotechnology" means the use of biotechnology, for example by means of
fermentation in the manufacture of wine, cheese and beer. The biotechnology used in
insects was also used. These include biotechnology-based approaches to harmful insect
controls, characterization and use for study and application in agriculture and medicine
and numerous other approaches of active components or genes of insects.
Grey biotechnology is dedicated to environmental applications: the maintenance of ecosystems and the removal of pollutants.
Brown biotechnology is related to the management of arid lands and deserts. One application is the creation, through innovation, agricultural techniques and resource management, of enhanced seeds that resist the extreme environmental conditions of arid regions. Violet biotechnology is concerned with law, ethics and the philosophical issues surrounding biotechnology.
Dark biotechnology is the color associated with bioterrorism and biological and chemical warfare: the use of microorganisms and toxins to cause disease and death in humans, livestock and crops.
Although they do not usually come to mind first, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products." Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.

Agriculture has been the prevailing method of producing food since the Neolithic Revolution. The earliest farmers selected and bred the crops best suited for food production, those with the highest yields, to feed a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize soils, restore nitrogen and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants; this was one of the first forms of biotechnology.
These processes were also at work in the early fermentation of beer, which was practiced in early Mesopotamia, Egypt, China and India using the same basic biological methods. In brewing, malted grains (which contain enzymes) convert starch from the grain into sugar, and specific yeasts are then added to produce beer. In this process, carbohydrates in the grain break down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which produced other preserved foods such as soy sauce. Fermentation was also used in this period to make leavened bread. Although the fermentation process was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form.
Before the time of Charles Darwin's work and life, animal and plant scientists had already employed selective breeding. Darwin added to that body of work with his scientific observations about the ability of selection to change species, and these accounts contributed to his theory of natural selection.
For thousands of years, humans have used selective breeding to improve crop yields and livestock production for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.
In the early twentieth century, scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, manufacturing acetone from corn starch using Clostridium acetobutylicum; the United Kingdom urgently needed the acetone to manufacture explosives during World War I.
Biotechnology has also driven the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. The work of Howard Florey, Ernst Boris Chain and Norman Heatley led to the purification of the antibiotic compound the mold produced, which we now call penicillin. In 1940, penicillin became available as a medicine for the treatment of bacterial infections in humans.
The field of modern biotechnology is generally considered to have been born in 1971, when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium so that the imported material would be reproduced. The commercial viability of a biotechnology industry grew significantly on 16 June 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented; Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil. (Chakrabarty's work involved the transfer of entire organelles between strains of Pseudomonas bacteria rather than gene manipulation.)
In 1959, Mohamed M. Atalla and Dawon Kahng invented the MOSFET (metal-oxide-semiconductor field-effect transistor). Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. The ISFET is a special type of MOSFET in which the metal gate is replaced by an ion-sensitive membrane, an electrolyte solution and a reference electrode. It was developed further in the following decades and is widely used to measure physical, chemical, biological and environmental parameters. The ISFET is commonly used in biomedical applications such as DNA hybridization detection, blood biomarker detection, antibody detection, glucose measurement, pH sensing and genetic technology.
Other BioFETs were developed by the early 1980s, including the gas-sensor FET (GASFET) and the pressure-sensor FET (PRESSFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), the gene-modified FET (GenFET) and the cell-potential BioFET (CPFET) had been developed.
Factors affecting the performance of the biotechnology industry include stronger intellectual-property-rights enforcement and regulation worldwide, as well as increased demand for medical and pharmaceutical products to cope with an aging and ailing U.S. population.
Rising demand for biofuels is also expected to be good news for the biotechnology sector, with the Department of Energy estimating that biofuels could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans, the main inputs into biofuels, by developing genetically modified seeds that resist pests and drought. By boosting farm productivity, biotechnology improves biofuel production.
c. RAID technology – RAID ("Redundant Array of Inexpensive Disks" or "Redundant Array of Independent Disks") is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the earlier concept of highly reliable mainframe disk drives, referred to as "single large expensive disk" (SLED). Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, such as RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals of reliability, availability, performance and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors as well as against failures of whole physical drives.

The term "RAID" was coined in 1987 by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", presented at the SIGMOD Conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive.
Although this terminology was not yet in use, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following:
Mirroring (RAID 1) was well established in the 1970s, for example in Tandem NonStop Systems.
In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.
By 1983, DEC began shipping subsystem-mirrored RA8X disk drives (now known as RAID 1) as part of its HSC50 subsystem.
In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5.
Around 1988, the Thinking Machines' DataVault used error correction codes (now known as RAID 2) in an array of disk drives. A similar approach was used in the early 1960s on the IBM 353.
Industry manufacturers later redefined the RAID acronym to stand for "Redundant Array of Independent Disks".
Although there were originally five standard RAID levels, many variations have since evolved, including several nested levels and many non-standard levels (mostly proprietary). The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the Common RAID Disk Drive Format (DDF) standard:
RAID 0 consists of striping, but no mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the drives in the set, the same as with a spanned volume. However, because striping distributes the contents of each file across all the drives in the set, the failure of any drive causes the entire RAID 0 volume to be lost; a spanned volume, by comparison, preserves the files on the unfailing drives. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of drives because, unlike with spanned volumes, reads and writes are performed concurrently. The cost is increased vulnerability to drive failures: since the failure of any drive in a RAID 0 setup causes the loss of the entire volume, the average failure rate of the volume rises with the number of attached drives.
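The striping rule just described, consecutive blocks rotating across all drives, can be sketched as a simple address mapping (an illustrative model, not any specific controller's implementation):

```python
def raid0_locate(logical_block, num_drives):
    # In RAID 0, consecutive logical blocks rotate across the drives:
    # block i lives on drive (i mod N), at stripe index (i div N).
    drive = logical_block % num_drives
    offset = logical_block // num_drives
    return drive, offset

# With 3 drives, logical blocks 0..5 land on drives 0,1,2,0,1,2:
layout = [raid0_locate(i, 3) for i in range(6)]
```

Because adjacent blocks sit on different drives, a large sequential read can be serviced by all drives at once, which is the source of the throughput multiplication described above.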
RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two or more drives, thereby producing a "mirrored set". Any read request can therefore be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first, depending on its seek time and rotational latency, improving performance. If the controller or software is optimized for sustained reading, the aggregate read throughput of the set approaches that of RAID 0. The actual read performance of most RAID 1 implementations is slower than that of the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits write performance. The array continues to operate as long as at least one drive is functioning.
RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. Although it was used on some early machines (for example, the Thinking Machines CM-2), this level is now of historical significance only; as of 2014 it is not used by any commercially available system.
RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.

RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all the data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.
RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, and all drives but one must be present to operate. Upon the failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. Like all single-parity schemes, large RAID 5 implementations are susceptible to system failure because of trends in array rebuild time and the chance of a further drive failure during a rebuild.
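The single-parity recovery just described rests on byte-wise XOR: the parity block of a stripe is the XOR of its data blocks, so any one missing block can be rebuilt by XOR-ing the surviving blocks with the parity. A minimal sketch follows (illustrative only; real RAID 5 controllers also rotate which drive holds the parity from stripe to stripe):

```python
def xor_parity(blocks):
    # Parity is the byte-wise XOR of all blocks in a stripe.
    # XOR is its own inverse, so the same function also rebuilds
    # a missing block from the survivors plus the parity.
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# A stripe across three data drives, plus its parity block.
d0, d1, d2 = b"\x0f\x0f", b"\xf0\x00", b"\x33\x3c"
p = xor_parity([d0, d1, d2])

# If the drive holding d1 fails, XOR the survivors with the parity:
recovered_d1 = xor_parity([d0, d2, p])
```

This is also why RAID 5 tolerates only a single failure: with two blocks missing from a stripe, one XOR equation cannot recover both, which is what motivates the double parity of RAID 6.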
RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance for up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. A RAID 6 array, using drives from multiple sources and manufacturers, can mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems.
d. NFC technology - Near-Field Communication (NFC) is a set of communication protocols that enables communication between two electronic devices over a distance of 4 cm or less. NFC offers a low-speed connection with a simple setup that can be used to bootstrap more capable wireless connections. NFC devices can act as electronic identity documents and keycards. They are used in contactless payment systems and allow mobile payment to replace or supplement systems such as credit cards and electronic ticket smart cards; this is often referred to as NFC/CTLS or CTLS NFC, where CTLS stands for "contactless". NFC can be used to share small files such as contacts, and to bootstrap fast connections for sharing larger media such as photographs, videos and other files.
Similar ideas in advertising and industrial applications were not generally commercially successful, having been outpaced by technologies such as QR codes, barcodes and UHF RFID tags. NFC protocols became a generally accepted standard. When one of the connected devices has Internet access, the other can exchange data with online services.
NFC defines a technique that can be used for contactless data exchange over short distances. Across a gap of 0 to 2 cm, a point-to-point connection links two NFC-capable devices. This connection can be used to exchange data between the devices (for example, process data, maintenance and service information). The interface can also be used for parameterizing components.
NFC-enabled portable devices can also be supplied with application software, for example to read electronic tags or make payments when connected to an NFC-compliant terminal. Earlier close-range communication systems used proprietary, manufacturer-specific hardware for applications such as stock vouchers, access control and payment readers.
As with other "proximity card" technologies, NFC is based on inductive coupling between two so-called antennas present on NFC-enabled devices, for example a smartphone and a printer, communicating in one or both directions. It uses a frequency of 13.56 MHz in the globally available unlicensed ISM band, with the ISO/IEC 18000-3 air interface at data rates ranging from 106 to 424 kbit/s.
Every full NFC device can work in one or more of three modes:
NFC card emulation - enables NFC-enabled devices, such as smartphones, to act like smart cards, allowing users to perform transactions such as payment or ticketing.
NFC reader/writer - enables NFC-enabled devices to read information stored on inexpensive NFC tags embedded in labels and smart posters.
NFC peer-to-peer - enables two NFC-enabled devices to communicate with each other to exchange information ad hoc.
NFC tags are passive data stores that can be read, and under some circumstances written to, by an NFC device. They typically contain data (as of 2015, between 96 and 8,192 bytes) and are read-only in normal use, but may be rewritable. Applications include the secure storage of personal data (e.g. debit or credit card information, loyalty program data, PINs and contacts). NFC tags can be custom-encoded by their manufacturers or use the industry specifications.
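Data on NFC tags is typically laid out as NDEF records defined by the NFC Forum. The sketch below parses a single short-format NDEF Text record, following the published NDEF and Text RTD layouts; it is a simplified illustration that ignores multi-record messages, long records, ID fields and other record types:

```python
def parse_ndef_text_record(data):
    # NDEF short record: [header][type length][payload length][type][payload]
    header = data[0]
    assert header & 0x10, "only short records (SR flag set) handled here"
    type_len = data[1]
    payload_len = data[2]
    rec_type = data[3:3 + type_len]
    payload = data[3 + type_len:3 + type_len + payload_len]
    assert rec_type == b"T", "not a well-known Text record"
    # Text RTD payload: [status byte][language code][text].
    lang_len = payload[0] & 0x3F          # low 6 bits: language-code length
    lang = payload[1:1 + lang_len].decode("ascii")
    text = payload[1 + lang_len:].decode("utf-8")
    return lang, text

# 0xD1 header: MB, ME and SR flags set, TNF=0x01 (well-known type).
tag = bytes([0xD1, 0x01, 0x08]) + b"T" + b"\x02en" + b"Hello"
lang, text = parse_ndef_text_record(tag)
```

A few dozen bytes suffice for a record like this, which is why even the smallest tags (96 bytes) are useful for smart posters and contact sharing.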

The NFC Forum sets the specifications: it promotes the technology, defines the requirements and certifies devices for compliance. Secure communication is achieved by applying encryption algorithms, as is done for credit cards, and by meeting the criteria set for personal area networks.
The NFC specifications are based on existing RFID standards, including ISO/IEC 14443 and FeliCa, and cover communication protocols and data exchange formats. The standards include ISO/IEC 18092[8] and those defined by the NFC Forum. In addition to the NFC Forum, the GSMA group of mobile operators has defined a platform for deploying GSMA NFC standards in mobile handsets. GSMA's efforts include the Single Wire Protocol, the Trusted Services Manager, testing/certification and the secure element.
France Brevets, a patent fund created in 2011, has been deploying an NFC patent licensing program. The program was developed by Via Licensing Corporation, an independent subsidiary of Dolby Laboratories, and was concluded in May 2012. A platform-independent, free and open-source NFC library, libnfc, is available under the GNU Lesser General Public License.
Present and anticipated applications include contactless transactions, data exchange and the simplified setup of more complex communications such as Wi-Fi. NFC is rooted in radio-frequency identification (RFID) technology, which allows compatible hardware both to supply power to and to communicate with an otherwise unpowered and passive electronic tag using radio waves. This is used for identification, authentication and tracking.
17 May 1983 – Charles Walton was awarded the first patent associated with the abbreviation "RFID".
1997 – An early form was patented and first used in Star Wars character toys for Hasbro. The patent was originally held by Andrew White and Marc Borrett at Innovision Research and Technology (Patent WO9723060). The system allowed data communication between two units in close proximity.
25 March 2002 – Philips and Sony agreed to establish a technology specification and created a technical outline. Philips Semiconductors applied for the six fundamental NFC patents, invented by Franz Amtmann and Philippe Maugars of Austria and France, who received the European Inventor Award in 2015.
8 December 2003 – NFC was approved as an ISO/IEC standard and later as an ECMA standard.
2004 – Nokia, Philips and Sony established the NFC Forum.
2004 – Nokia introduced the NFC shell add-on for Nokia 5140 and later Nokia 3220 models, released in 2005.
2005 - Trials of NFC mobile phones in transport and payment: payment in May in Hanau (Nokia), and in October with Orange in Nice, covering on-board validation and payment in shops in Caen (Samsung), with first receipt of information via "Fly Tag".
2006 - Initial specifications for NFC tags.
2006 - Specification for "Smart Poster" records.
2007 – NFC tags were used in the Nokia 6131 for the first market trial in the UK.
2008 - Air Tag introduced the first NFC SDK.
2009 – In January the NFC Forum released peer-to-peer standards to transfer contacts and URLs, initiate Bluetooth, and so on.
2009 – On 19 January, NFC was first used in transport by China Unicom and Yucheng Transportation Cards on the Chongqing tram and bus network, and was then introduced by China Unicom in Beijing on 31 December 2010.
2010 - Innovision released a series of designs and patents for low-cost, mass-market mobile phones and other devices.
2010 - Nokia C7: the first smartphone shipped with NFC hardware; the NFC functionality was enabled by a software update in early 2011.
2010 - Samsung Nexus S: the first NFC phone shown running Android.
21 May 2010 – The "Cityzi" project was launched in Nice, the first in Europe to supply people with NFC bank cards and mobile phones (such as the Samsung Player One S5230) together with a "bouquet of services" covering transport (trams and buses), tourism and student services.
2011 – At Google I/O, "How to NFC" demonstrated NFC initiating a connection and sharing a contact, URL, app or video.
2011 – NFC support was added to the Symbian mobile operating system with the release of Symbian Anna.
2011 – Research In Motion devices were the first to be approved by MasterCard Worldwide for PayPass.
2012 - The British restaurant chain EAT. and Everything Everywhere (the mobile network operator Orange) partnered on the UK's first nationwide NFC-enabled smart-poster campaign. When an NFC-enabled mobile phone comes into contact with a smart poster, a specially built mobile app is activated.
2012 – Sony introduced NFC Smart Tags to change modes and profiles on a Sony smartphone, bundled with the Sony Xperia P launched the same year.
2013 - Samsung and Visa announced a partnership on mobile payments.
2013 - IBM scientists, seeking to curb fraud and security breaches, developed an NFC-based mobile authentication security technology that operates on principles similar to two-factor authentication.
2014 – AT&T, Verizon and T-Mobile released Softcard (formerly ISIS Mobile Wallet). It ran on NFC-enabled Android phones and, with an external NFC case attached, on the iPhone 4 and iPhone 5. Google purchased the technology and shut the service down on 31 March 2015.
November 2015 – Swatch and Visa Inc. announced a collaboration to enable NFC financial transactions with the Swatch Bellamy watch. The system was already live in Asia through a collaboration between China UnionPay and Bank of Communications; the new partnership brings it to the United States, Brazil and Switzerland.
October 2015 — Google launched Android Pay, a direct rival to Apple Pay, and began deploying it across the US.
e. Laser technology - A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The word "laser" originated as an acronym for "light amplification by stimulated emission of radiation". The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow. A laser differs from other sources of light in that it emits light which is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers and lidar. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum. Temporal coherence can also be used to produce pulses of light as short as a femtosecond ("ultrashort pulses"). Lasers are used in optical disc drives, laser printers, barcode scanners, DNA sequencing instruments, fiber-optic and free-space optical communication, semiconductor chip manufacturing (photolithography), cutting and welding materials, military and law-enforcement devices for marking targets and measuring range and speed, and in laser lighting displays for entertainment. They have also been used, with a blue laser and a phosphor, to produce highly directional white light for the headlamps of luxury cars.
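The collimation property can be made quantitative. For an idealized Gaussian beam, diffraction sets the far-field half-angle divergence from the wavelength and the beam-waist radius (a textbook result; the numbers below are illustrative assumptions, not figures from this text):

```latex
\theta \approx \frac{\lambda}{\pi w_0}
```

For the 694 nm ruby-laser wavelength mentioned later and an assumed 1 mm waist radius, this gives a half-angle of roughly 2.2 x 10^-4 radians, so the beam radius grows by only about 22 cm per kilometer of travel, which is why laser pointers and lidar are practical.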
Fundamentals
In 1917, Albert Einstein established the theoretical foundations for the laser and the maser in the paper "Zur Quantentheorie der Strahlung" ("On the Quantum Theory of Radiation") via a re-derivation of Max Planck's law of radiation, conceptually based upon probability coefficients (Einstein coefficients) for the absorption, spontaneous emission and stimulated emission of electromagnetic radiation. In 1928, Rudolf W. Ladenburg confirmed the existence of the phenomena of stimulated emission and negative absorption. In 1939, Valentin A. Fabrikant predicted the use of stimulated emission to amplify "short" waves. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission. In 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, experimentally confirmed two years later by Brossel, Kastler and Winter.
Maser
In 1951, Joseph Weber submitted a paper on using stimulated emission to make a microwave amplifier to the June 1952 Institute of Radio Engineers Vacuum Tube Research Conference in Ottawa, Ontario. After this presentation, RCA asked Weber to give a seminar on the idea, and Charles Hard Townes asked him for a copy of the paper.
In 1953, Charles Hard Townes and graduate students James P. Gordon and Herbert J. Zeiger produced the first microwave amplifier, a device operating on principles similar to those of the laser but amplifying microwave radiation rather than infrared or visible radiation. Townes's maser was incapable of continuous output. Meanwhile, in the Soviet Union, Nikolay Basov and Aleksandr Prokhorov were independently working on the quantum oscillator and solved the problem of continuous-output systems by using more than two energy levels. Such media can release stimulated emission between an excited state and a lower excited state, rather than the ground state, which makes it easier to maintain a population inversion. In 1955, Prokhorov and Basov suggested optical pumping of a multi-level system as a method for obtaining the population inversion, later a main method of laser pumping.
Townes reported that several eminent physicists – among them Niels Bohr, John von Neumann and Llewellyn Thomas – argued that the maser violated the uncertainty principle, while others, such as Isidor Rabi and Polykarp Kusch, expected the effort to be impractical and not worth pursuing. In 1964, Charles H. Townes, Nikolay Basov and Aleksandr Prokhorov shared the Nobel Prize in Physics "for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser-laser principle".
Laser
In 1957, Charles Hard Townes and Arthur Leonard Schawlow began a serious study of the infrared laser at Bell Labs. As their ideas developed, infrared radiation was abandoned in favor of visible light; the concept was originally called an "optical maser". In 1958, Bell Labs filed a patent application for their proposed optical maser, and Schawlow and Townes submitted their theoretical calculations to the Physical Review, which published them in Volume 112, Issue 6.
LASER notebook: the first page of the notebook in which Gordon Gould coined the LASER acronym and described the elements for constructing the device.
At the same time, the graduate student Gordon Gould was working at Columbia University on a doctoral thesis about the energy levels of excited thallium. When Gould and Townes met, they spoke of radiation emission as a general subject; afterwards, in November 1957, Gould noted his ideas for a "laser", including the use of an open resonator (later an essential component of laser devices). Moreover, in 1958, Prokhorov independently proposed using an open resonator, the first published appearance of the idea. Elsewhere, in the US, Schawlow and Townes had agreed on an open-resonator laser design, apparently unaware of Prokhorov's publications and Gould's unpublished laser work.
At a conference in 1959, Gordon Gould first published the acronym "LASER" in the paper "The LASER, Light Amplification by Stimulated Emission of Radiation". Gould's linguistic intention was to use the "-aser" word particle as a suffix accurately denoting the spectrum of light emitted by the device; thus x-rays: xaser, ultraviolet: uvaser, and so on. None of these terms caught on, although "raser" was briefly popular for denoting radio-frequency-emitting devices.
Gould's notes included possible applications for a laser, such as spectrometry, interferometry, radar and nuclear fusion. He continued developing the idea and filed a patent application in April 1959. The US Patent Office denied his application and awarded a patent to Bell Labs in 1960, which provoked a twenty-eight-year lawsuit featuring scientific prestige and money as the stakes. Gould won his first minor patent in 1977, but it was not until 1987 that he won his first significant patent case, when a federal judge ordered the US Patent Office to issue patents to Gould for the optically pumped and the gas-discharge laser devices. The question of how to assign credit for inventing the laser remains unresolved among historians.
On 16 May 1960, Theodore H. Maiman operated the first functioning laser at Hughes Research Laboratories in Malibu, California, ahead of several research teams, including those of Townes at Columbia University, Arthur Schawlow at Bell Labs and Gould at the company TRG (Technical Research Group). Maiman's functional laser used a flashlamp-pumped synthetic ruby crystal to produce red laser light at a wavelength of 694 nanometers. The device was capable only of pulsed operation because of its three-level pumping design scheme. Later in 1960, the Iranian physicist Ali Javan, together with William R. Bennett and Donald Herriott, made the first gas laser, using helium and neon, capable of continuous operation in the infrared; Javan later received the Albert Einstein Award in 1993. The concept of the semiconductor laser diode was proposed by Basov and Javan. The first laser diode device, made of gallium arsenide and emitting in the near-infrared band of the spectrum, was demonstrated by Robert N. Hall in 1962. Later that year, Nick Holonyak, Jr. demonstrated the first semiconductor laser with a visible emission. This first semiconductor laser could be used only in pulsed-beam operation, and only when cooled to liquid-nitrogen temperature (77 K). In 1970, Zhores Alferov in the USSR, and Izuo Hayashi and Morton Panish of Bell Telephone Laboratories, independently developed room-temperature, continuous-operation diode lasers, using the heterojunction structure.
Latest developments
Graph showing the history of maximum laser pulse intensity over the past 40 years.
Since the early period of laser history, laser research has produced a variety of improved and specialized laser types, optimized for different performance goals, including:

New wavelength bands
Maximum average output power
Maximum peak pulse energy
Maximum peak pulse power
Minimum output pulse duration
Minimum linewidth
Maximum power efficiency
Minimum cost

This research continues to this day.
In 2015, researchers made a white laser whose light is modulated by a synthetic nanosheet made of zinc, cadmium, sulfur and selenium, capable of emitting red, green and blue light in varying proportions, with each wavelength spanning 191 nm.
In 2017, researchers at TU Delft demonstrated an AC Josephson junction microwave laser. Because the laser operates in the superconducting regime, it is more stable than other semiconductor-based lasers, and the device has potential applications in quantum computing. Also in 2017, researchers at the Technical University of Munich demonstrated the smallest mode-locking laser device, capable of emitting pairs of phase-locked picosecond laser pulses with a repetition frequency up to 200 GHz. In the same year, researchers from the Physikalisch-Technische Bundesanstalt (PTB), together with US researchers from JILA, a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder, set a new world record by developing an erbium-doped fiber laser with a linewidth of only 10 millihertz.
f. RFID technology - Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder, a radio receiver and a radio transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, typically an identifying inventory number, back to the reader. This number can be used to track inventory goods.
Two types of RFID tags are available:
Passive tags are powered by energy from the RFID reader's interrogating radio waves.
Active tags are powered by a battery and can therefore be read at greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line, RFID-tagged goods can be tracked through warehouses, and implanting RFID microchips in livestock and pets enables positive identification of the animals.
Since RFID tags can be attached to physical money, clothing and possessions, or implanted in animals and people, the possibility of reading personally linked information without consent has raised serious privacy concerns. These concerns resulted in the development of standard specifications addressing privacy and security. ISO/IEC 18000 and ISO/IEC 29167 use on-chip cryptographic methods for untraceability, tag and reader authentication, and over-the-air privacy, while ISO/IEC 20248 specifies a digital signature data structure for RFID and barcodes that provides data, source and read-method authenticity. This work is carried out within ISO/IEC JTC 1/SC 31 Automatic identification and data capture techniques. Tags can also be used in shops to expedite checkout and to prevent theft by customers and employees. In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers and software/services for RFID cards, labels, fobs and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.
In 1945, Léon Theremin invented a listening device for the Soviet Union which retransmitted incident radio waves with added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, modulating the reflected radio frequency. Although this device was a covert listening device rather than an identification tag, it is considered a predecessor of passive RFID because it was likewise powered and activated by waves from an outside source. Similar technology, such as the Identification Friend or Foe transponder, was routinely used by the Allies and Germany in World War II to mark aircraft as friendly or hostile. An early work exploring RFID is Harry Stockman's landmark 1948 paper, which predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored." Mario Cardullo's device, patented on 23 January 1973 as a passive radio transponder with memory, was the first true ancestor of modern RFID.
The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16-bit memory for use as a toll device. The basic Cardullo patent covers the use of RF, sound and light as transmission carriers. The original business plan, presented in 1968, envisioned applications in transport (automotive vehicle identification, automatic toll systems, electronic license plates, electronic manifests, vehicle routing, vehicle performance monitoring), banking (electronic checkbook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medicine (identification, patient history).
An early demonstration of reflected-power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Frayman at the Los Alamos National Laboratory in 1973. The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHF and microwave RFID tags. In 1983, Charles Walton was awarded the first patent to use the abbreviation "RFID".
g. Nanotechnology - Nanotechnology is the use of matter on an atomic, molecular and supramolecular scale for industrial purposes. The earliest and most widely used definition of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules to fabricate macroscale products, now also known as molecular nanotechnology. The National Nanotechnology Initiative subsequently established a more generalized definition of nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the importance of quantum mechanical effects at this quantum-realm scale, and shifts the term from a particular technological goal to a research category encompassing all types of research and technology dealing with the special properties of matter below the given size threshold. The plural forms "nanotechnologies" and "nanoscale technologies" refer to the broad range of research and applications whose common trait is size. Nanotechnology as defined by size is naturally broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering and molecular engineering. The associated research and applications range from extensions of conventional device physics to completely new approaches based on molecular self-assembly, and from developing new materials with nanoscale dimensions to the direct control of matter on the atomic scale. Scientists are currently exploring the future implications of nanotechnology, which may create many new materials and devices for a wide range of applications in nanomedicine, nanoelectronics, biomaterials, energy production and consumer products. At the same time, nanotechnology raises many of the same issues as any emerging technology, including concerns about the toxicity and environmental impact of nanomaterials, their possible effects on the global economy, and speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments over whether special regulation of nanotechnology is warranted.
The ideas that seeded nanotechnology were first discussed in 1959 by the renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms.
Comparison of nanomaterial sizes.
The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded the Foresight Institute to help increase public awareness and understanding of nanotechnology concepts and implications; he is no longer affiliated with the institute. Through a combination of theoretical and public work, Drexler developed and popularized a conceptual framework for nanotechnology, while high-visibility experimental advances drew additional wide-scale attention to the prospects of atomic control of matter. Since the spike in popularity in the 1980s, most of nanotechnology has involved investigation of several approaches to making mechanical devices out of a small number of atoms.
The rise of nanotechnology in the modern era was propelled by two major breakthroughs in the 1980s. The first was the invention of the scanning tunneling microscope in 1981, which provided unprecedented visualization of individual atoms and bonds and was used to manipulate individual atoms successfully in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at the IBM Zurich Research Laboratory, received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
Buckminsterfullerene C60 is a representative member of the carbon structures known as fullerenes; the fullerene family is a major subject of research falling under the nanotechnology umbrella. The discovery was not initially described as nanotechnology; the term came into use in connection with subsequent work on the related tubes of graphene (called carbon nanotubes, and sometimes Bucky tubes), which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC, for which Iijima was awarded the inaugural 2008 Kavli Prize in Nanoscience.
A nanolayer-base metal-semiconductor junction (M-S junction) transistor was initially proposed by A. Rose in 1960 and fabricated by L. Geppert and Mohamed Atalla in 1962. Decades later, advances in multi-gate technology enabled the scaling of metal-oxide-semiconductor field-effect transistors (MOSFETs) down to nanoscale gate lengths smaller than 20 nm, beginning with the FinFET (fin field-effect transistor), a three-dimensional, non-planar MOSFET. At UC Berkeley, a team including Digh Hisamoto, Chenming Hu, Tsu-Jae King Liu and others fabricated FinFET devices down to 17 nm in 1998, then 15 nm in 2001 and 10 nm in 2002.
In the early 2000s, the field attracted growing attention, leading to both controversy and progress. Conflicts arose over the definitions and potential implications of nanotechnology, as exemplified by the Royal Society's report on nanotechnology. Public debates between Drexler and Smalley during 2001 and 2003 centered on the feasibility of the applications envisioned by proponents of molecular nanotechnology. Meanwhile, the marketing of products based on advances in nanoscale technologies began to emerge. These products do not involve atomic-level control of matter but are limited to bulk applications of nanomaterials. Some examples include the Silver Nano platform for the antibacterial use of silver nanoparticles, transparent sunscreens based on nanoparticles, carbon-fiber strengthening using silica nanoparticles, and carbon nanotubes in stain-resistant textiles.
Governments moved to promote and fund research into nanotechnology, for example in the US through the National Nanotechnology Initiative (NNI), which formalized a size-based definition of nanotechnology and established funding for research, and in Europe via the European Framework Programmes for Research and Technological Development. Serious scientific attention began to flourish in the mid-2000s. Projects emerged to produce nanotechnology roadmaps centered on atomically precise manipulation of matter, discussing existing and projected capabilities, goals and applications.
In 2006, a team of Korean researchers from the Korea Advanced Institute of Science and Technology (KAIST) and the National Nano Fab Center developed a 3 nm MOSFET, the world's smallest nanoelectronic device, based on gate-all-around (GAA) FinFET technology.
Between 2001 and 2004, over sixty countries created government programs for nanotechnology research and development (R&D). Government funding was exceeded by corporate R&D spending on nanotechnology, most of which came from corporations in the United States, Japan and Germany. The top five organizations that filed the most intellectual property patents on nanotechnology R&D between 1970 and 2011 were Samsung Electronics (2,578 first patents), Nippon Steel (1,490 first patents), IBM (1,360 first patents), Toshiba (1,298 first patents) and Canon (1,162 first patents). The top five institutions that published the most scientific papers on nanotechnology research between 1970 and 2012 were the Chinese Academy of Sciences, the Russian Academy of Sciences, the Centre national de la recherche scientifique, the University of Tokyo and Osaka University.
h. QR Code technology - A QR code (abbreviated from Quick Response code) is a type of matrix barcode (or two-dimensional barcode) first designed in 1994 for the automotive industry in Japan. A barcode is a machine-readable optical label that contains information about the item to which it is attached. In practice, QR codes often contain data for a locator, identifier or tracker that points to a website or application. A QR code uses four standardized encoding modes (numeric, alphanumeric, byte/binary and kanji) to store data efficiently; extensions may also be used. The Quick Response system became popular outside the automotive industry thanks to its fast readability and greater storage capacity compared with standard UPC barcodes. Applications include product tracking, item identification, time tracking, document management and general marketing. A QR code consists of black squares arranged in a square grid on a white background, which can be read by an imaging device such as a camera. The image is processed using Reed-Solomon error correction until it can be appropriately interpreted; the required data is then extracted from the patterns present in both the horizontal and the vertical components of the image.
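The alphanumeric mode mentioned above is simple enough to sketch: each character in a 45-symbol set maps to a value, characters are taken in pairs, each pair is packed into 11 bits as 45 times the first value plus the second, and a lone trailing character takes 6 bits. The sketch below covers only this data-bit stage; the mode indicator, character count, padding and Reed-Solomon codewords of a complete symbol are omitted:

```python
# The 45-character alphanumeric set: index in this string = character value.
ALNUM = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def qr_alphanumeric_bits(text: str) -> str:
    """Return the data bits for QR alphanumeric mode as a '0'/'1' string.

    Pairs of characters become 11-bit groups (45*first + second); a
    trailing odd character becomes a 6-bit group.
    """
    vals = [ALNUM.index(c) for c in text]
    bits = ""
    for i in range(0, len(vals) - 1, 2):
        bits += format(45 * vals[i] + vals[i + 1], "011b")
    if len(vals) % 2:                      # leftover single character
        bits += format(vals[-1], "06b")
    return bits

bits = qr_alphanumeric_bits("AC-42")
```

Encoding "AC-42" gives 28 bits: two 11-bit pair groups plus one 6-bit remainder, which illustrates why this mode is denser than storing one byte per character.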
QR codes have become common in consumer advertising. Typically, a smartphone is used as a QR code scanner: it reads the code and converts it to some useful form (such as a standard URL for a website, obviating the need for the user to type it into a web browser). QR codes have become a focal point of advertising strategy because they offer a faster way to reach a brand's website than manually entering a URL. The value of this capability lies not only in convenience for the customer but also in raising the conversion rate: the likelihood that contact with the advertisement converts into a sale. QR codes coax interested prospects further down the conversion funnel with little delay or effort, bringing the viewer to the advertiser's website immediately, whereas a longer, more targeted sales pitch may lose the viewer's interest.
Although initially used to track parts in vehicle manufacturing, QR codes are now used in much broader applications, including commercial tracking, entertainment and transport ticketing, product and loyalty marketing, and in-store labeling of goods. Marketing examples include capturing a company's discount offers and coupon percentages with a QR code decoder app, or storing a business's address and related details, along with alphanumeric text information, in a Yellow Pages directory entry.
QR codes may also be used to store personal information for use by organizations. An example is the National Bureau of Investigation (NBI) in the Philippines, whose NBI clearances now carry a QR code. Many of these applications target smartphone users (via mobile tagging). After scanning a QR code, users may receive text, add a vCard contact to their device, open a URL, or compose an e-mail or text message. Users can also create their own QR codes and print them for other people to scan, using one of several paid or free QR code generating sites or apps. Google offered an API, now discontinued, for generating QR codes, and apps for scanning QR codes are available for almost every smartphone platform.
QR codes have been used and printed on Chinese railway tickets since 2010. QR codes storing addresses and URLs may appear in magazines, on posters, on buses, on business cards, or on almost any object about which users might want information. A user with a camera phone equipped with the correct reader application can scan the image of the QR code to display text and contact information, connect to a wireless network, or open a web page in the browser. This act of linking from physical objects is known as hardlinking or object hyperlinking. QR codes may also be linked to a location to track where a code has been scanned: either the application scanning the code retrieves the geo-information using GPS and cell-tower triangulation (aGPS), or the URL encoded in the QR code itself is associated with a location. In 2008, a Japanese stonemason announced plans to engrave QR codes on gravestones, allowing visitors to view information about the deceased and letting family members keep track of visits. The psychologist Richard Wiseman was one of the first authors to use QR codes in a book, in Paranormality: Why We See What Isn't There (2011).
QR codes have also appeared on currency. In June 2011, the Royal Dutch Mint (Koninklijke Nederlandse Munt) issued the world's first official coin with a QR code, marking the centenary of the Mint's current building and premises. The coin can be scanned by a smartphone and originally linked to a special website with content about the historical event and the design of the coin. In 2014, the Central Bank of Nigeria issued a 100-naira banknote commemorating the country's centennial, the first banknote to incorporate a QR code in its design. When scanned with an internet-enabled mobile device, the code leads to a website that tells the story of Nigeria's centenary. In 2015, the Central Bank of Russia issued a 100-ruble note commemorating the annexation of Crimea by the Russian Federation (not legally recognized by most countries of the world); it contains a QR code in its design which, when scanned with an internet-enabled mobile device, leads to a website detailing the history and technical characteristics of the banknote. In 2017, the Bank of Ghana issued a 5-cedi banknote commemorating 60 years of central banking in Ghana; it contains a QR code in its design which, when scanned with an internet-enabled mobile device, leads to the official Bank of Ghana website.
Credit card functionality is under development. In September 2016, the Reserve Bank of India (RBI) launched a joint QR code for all four major payment card companies - the National Payments Corporation of India, which runs RuPay payment cards, along with MasterCard, Visa and American Express. It will also be able to accept payments on the UPI platform.
Augmented reality
QR codes are used in some augmented reality systems to determine the position of objects in three-dimensional space, and are often used to deliver augmented-reality experiences.
Multimedia content
QR codes are useful for directing users to multimedia content (such as videos, audio, images, documents, etc.).
Mobile operating systems
QR codes can be used on a number of mobile device operating systems. iPhones running iOS 11 or newer and some Android devices can natively scan QR codes without downloading an external app. The camera app is able to scan and display the kind of QR code (iPhone only) along with the link (both Android and iPhone). These devices support URL redirection, which allows QR codes to send metadata to existing applications on the device. Many paid or free apps are available with the ability to scan the codes and hard-link to an external URL.
URLs
Adding URLs to advertisements was common well before the smartphone era, but it had
several drawbacks: viewers typically had to type the URL by hand, and they often did not
have a browser in front of them at the time. They might forget to visit the site later,
decide not to bother entering a URL, or forget which URL to type. Semantic URLs
reduced these risks but did not eliminate them. With the rise of smartphones, viewers
lacking instant access to a website became less of a problem, but the problems with
typing URLs remained, which led to the use of QR codes for instant redirection to URLs.
Some QR code generators offer a dynamic QR code function. A dynamic QR code can be
edited again and again: the printed code itself never changes, but it links to a placeholder
URL that redirects the scanner to the website it currently points at. In contrast to a
standard QR code, which links directly to a site and cannot be modified, this placeholder
redirect can be customized (hence the name "dynamic"). The placeholder can also
provide additional functions such as scan analytics.
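The placeholder-redirect mechanism can be sketched in a few lines. This is a minimal illustration, not a real service: the domains, URLs, and the `redirects` table are all hypothetical placeholders standing in for a redirect provider's database.

```python
# Minimal sketch of a dynamic QR redirect service. The printed QR code
# always encodes the same short URL; only the mapping behind it changes.
redirects = {"qr.example/abc123": "https://spring-sale.example/landing"}

def resolve(short_url):
    # A real service would answer the scan with an HTTP 302 redirect
    # to the current target; here we simply return it.
    return redirects[short_url]

print(resolve("qr.example/abc123"))  # https://spring-sale.example/landing

# Later, the owner retargets the campaign without reprinting anything:
redirects["qr.example/abc123"] = "https://autumn-sale.example/landing"
print(resolve("qr.example/abc123"))  # https://autumn-sale.example/landing
```

The key design point is that the code on paper is immutable, so all the flexibility has to live server-side in the lookup table.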
Virtual shopping
QR codes have been used to set up "virtual stores", where consumers are shown a gallery
of product pictures and QR codes, e.g., on the wall of a subway station. Customers scan
the QR codes to order the products, which are then delivered to their homes. This use
began in South Korea and Argentina and is now spreading globally. Walmart, Procter &
Gamble, and Woolworths have all adopted the virtual-store model.

QR code payment
QR codes can be used to store credit card or bank account information, or to work with
dedicated payment provider applications. In developing countries such as China, India,
and Bangladesh, QR code payment is a widespread and convenient way to make
payments. There are numerous trial applications of QR code payment worldwide. In
China, mobile payment has been adopted rapidly since Alipay introduced a QR code
payment method in 2011; by 2018, around 83 percent of all payments there were made
via mobile payment.
In November 2012, QR code payments were introduced in the Czech Republic and
endorsed by the Czech Banking Association as the official local standard for sharing
payment data, the Short Payment Descriptor. In 2013, the European Payments Council
issued the EPC QR Code guidelines for initiating SEPA Credit Transfers (SCT) within
the Eurozone.
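An EPC QR code encodes a short, newline-separated text payload. The sketch below follows the field order of the EPC guidelines as commonly described (version 002, UTF-8); the function name, example beneficiary, and IBAN are made up for illustration, and the official EPC document is the authority on the exact field rules.

```python
def epc_qr_payload(name, iban, amount_eur, text="", bic=""):
    """Sketch of an EPC "SCT" QR payload: newline-separated fields.

    Field order follows the EPC QR guidelines (version 002, UTF-8);
    consult the official EPC document before relying on this layout.
    """
    return "\n".join([
        "BCD",                   # service tag
        "002",                   # version
        "1",                     # character set: UTF-8
        "SCT",                   # SEPA Credit Transfer
        bic,                     # BIC (optional since version 002)
        name,                    # beneficiary name
        iban,                    # beneficiary IBAN
        f"EUR{amount_eur:.2f}",  # amount in euros
        "",                      # purpose code (optional)
        "",                      # structured remittance reference
        text,                    # unstructured remittance text
    ])

# Hypothetical beneficiary and IBAN, for illustration only:
print(epc_qr_payload("ACME GmbH", "DE71110220330123456789", 12.5,
                     text="Invoice 42"))
```

A banking app that scans this payload can pre-fill a credit transfer form, so the user only has to confirm the payment.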
Website login
QR codes can be used to log in to websites: a QR code is displayed on the login page on
a computer screen, and when a registered user scans it with a verified smartphone, they
are automatically logged in. Authentication is performed by the smartphone, which
contacts the server. Google tested this kind of login method in January 2012.
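The flow above can be sketched with a one-time token: the login page embeds a random token in the QR code, and the already-authenticated phone posts that token back, binding the browser session to the phone's account. This is a simplified hypothetical sketch (the function names and in-memory dictionaries stand in for real endpoints and a database), not Google's actual protocol.

```python
import secrets

pending = {}   # one-time token -> browser session id
sessions = {}  # browser session id -> logged-in user (or None)

def show_login_qr(session_id):
    """Generate a one-time token and remember which browser asked."""
    token = secrets.token_urlsafe(16)
    pending[token] = session_id
    sessions[session_id] = None   # not logged in yet
    return token  # in reality, rendered as a QR code on the login page

def phone_scans(token, user):
    """Called when an authenticated phone scans and posts the token."""
    session_id = pending.pop(token)  # token is single-use
    sessions[session_id] = user      # the browser is now logged in

tok = show_login_qr("browser-1")
phone_scans(tok, "alice")
print(sessions["browser-1"])  # alice
```

Because the token is random, single-use, and never typed by the user, the desktop browser never sees the user's password at all.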
Restaurant ordering
Restaurants can place a QR code near the front door that lets customers view an online
menu, or even directs them to a website or app where they can order and possibly pay for
their meal without queuing or dealing with a cashier. QR codes can also link to daily or
weekly specials that do not appear on the standard menu. In table-service restaurants, QR
codes let guests order food without a waiter: the QR code encodes a table number so that
staff know where to deliver the order.[43][44] This application expanded in particular
during the COVID-19 pandemic in 2020, when contact between service staff and
customers had to be reduced.
Joining a Wi-Fi network
A QR code can let a device join a Wi-Fi network automatically: the code specifies the
SSID, password, encryption type, and whether the SSID is hidden.
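The text such a QR code encodes follows the widely used `WIFI:` scheme (popularized by the ZXing project). A small sketch of building that payload, including the required escaping of special characters:

```python
def wifi_qr_payload(ssid, password, security="WPA", hidden=False):
    """Build the text payload that a Wi-Fi QR code encodes.

    Uses the de-facto WIFI: scheme; the characters \\ ; , " :
    must be backslash-escaped inside the SSID and password.
    """
    def esc(s):
        for ch in ('\\', ';', ',', '"', ':'):  # escape backslash first
            s = s.replace(ch, '\\' + ch)
        return s

    payload = f"WIFI:T:{security};S:{esc(ssid)};P:{esc(password)};"
    if hidden:
        payload += "H:true;"
    return payload + ";"

print(wifi_qr_payload("CoffeeShop", "p;ss:word"))
# WIFI:T:WPA;S:CoffeeShop;P:p\;ss\:word;;
```

Feeding this string to any QR encoder produces a code that phones recognize as network credentials and offer to join with one tap.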

3. What is technological innovation? Why is it important? Discuss the lessons to be
learned from the following companies with regard to innovation:
Technological Innovation - An organization (or a group of people working outside a
formal organization) may embark on a technological innovation journey to find new
ways of sharpening its competitiveness in the market by emphasizing technology as a
source of innovation. Some prefer the wording "technology innovation" over
"technological innovation", because the latter can give the impression of applying
technology merely for its own sake. "Technological innovation" is best defined as the
business activity of improving business value by working on the technical aspects of a
product or service. Many goods and services do not have a single core technology at their
heart; the performance of the product or service relies on the coexistence, integration,
and interaction of multiple technologies.
Innovation is important because it helps society solve its problems and spurs it to act.
Inspiration can be found in the forms, shapes, structures, and practices we see, both real
and imagined. By introducing emerging technologies, goods, and services, businesses
fulfill a social need while also helping to motivate staff and attract capital.
a. Apple - A systematic approach to innovation

According to industry experts, Apple is the most innovative organization in the world. It
achieves innovation by developing new and creative products and business models. The
company rewards its customers with a succession of gifts: terrific apps bundled in
fantastic hardware and appealing packaging. Apple breaks into new markets and ventures
into new business spaces; with ground-breaking products such as the iPod, iTunes, the
iPhone, and the iPad, it made a real impact on the industry. The leaders of Apple's
innovation strategy conceive of the company's future as a series of networks and
pipelines and push for ever faster innovation, so that competitors rushing to catch up with
the latest Apple release risk being left behind. Apple focuses on finding people who have
something important to give: highly creative and talented people who want to contribute
as much as they can to the world.
Original and ground-breaking business models
Apple's success rests not just on building creative and beautiful products, but on creating
innovative business models around them. Without iTunes and the App Store, the iPod
and iPhone would certainly not have been nearly as successful as they were.
Apple's culture of innovation is much more than a set of procedures; it is rooted in the
company's values. Some of the most popular products in history began at Apple with a
small group of people with no hierarchy, no formal structure, and no corporate oversight.
There is, however, an interesting and oft-repeated quip about entrepreneurship: "It takes a
lot of discipline to turn fresh ideas and nascent technologies into a business that can
continue to innovate for years." Steve Jobs discussed this frequently. Apple's method of
using innovation to harness new ideas, quicken the design process, and bring in profits
has been successful.

b. Microsoft - Microsoft Research conducts a worldwide hunt for scientific breakthroughs.


These ventures include machine learning and artificial intelligence, in line with
Microsoft's goal of embedding AI smarts into all of its products. The organization is on a
quest to be more innovative, and it is putting systems in place to help it achieve that goal.
Microsoft has gone to great lengths to make this approach publicly visible, and it is an
important point of focus at Ignite, the company's annual conference, which usually takes
a more technical approach to IT management. Microsoft presents such projects to the
public for the first time at a keynote devoted to them at Ignite. This is reminiscent of
Google's use of its ATAP (Advanced Technology and Projects) group at its I/O
conferences; the presentation clearly showcases cutting-edge innovation within Microsoft
(beyond making Excel smarter).
Microsoft has formed the Microsoft AI group, headed by VP Mitra Azizirad, who is
responsible for thought-leadership growth both internally and externally and for helping
the business itself innovate faster. I had a long talk with Azizirad to learn more about her
team's work and how she approaches helping businesses use AI and inspiring them to
launch R&D projects in the field. Azizirad shared: "With a backdrop of what it means to
be in an AI-driven world, and with a fresh perspective, we're putting together a narrative
for the company of what we mean by having a differentiated approach. Today, we're
including AI, but we're moving beyond it to a vast variety of future paradigms, like
human-machine interaction, the future of computation, and digital responsibility."
Currently, Microsoft is addressing horizons one and three fairly well, she said. "Horizon
two, we're making progress by getting better at that."
c. Nokia - We have been at the forefront of innovation for more than 150 years. We
innovate to continuously reinvent ourselves and the world around us. The planet is about
to embark on a whole new digital age that will radically change everything. Technological
advancement will be instrumental in improving life for all humanity and in saving the
planet. In the next great industrial revolution, technology will change every facet of our
lives and our world. We keep discovering new ways to improve how people
communicate, connect with, and engage with their environment. Our world-renowned
industrial research and development (R&D) arm, Nokia Bell Labs, encompasses 17
research facilities around the globe. There, we undertake initiatives that have the
potential to shape the next stage of human development. Several of the basic technologies
that form the backbone of information and communications networks, and of all digital
devices and systems, were developed over the company's century of innovation.

d. Sony - The first thing that pops into your head when you think of Sony might still be a
big, beautiful TV. The Nomura Research Institute estimates that even though the tech
giant has endured a tough era, it has grown stronger over time thanks to its performance
in product categories beyond its mainstays. Today's Sony is a pioneer in camera
technology, something often overlooked because much of the underlying technology is
used in smartphones produced by other companies. The company is also challenging the
long-standing leaders in the professional and serious-amateur camera market, which has
endured even as point-and-shoot models have fallen out of fashion. At CES 2018, Sony
unveiled the latest iteration of Aibo, its $1,800 robotic dog, with a much different design
than previous versions. The cybernetic pooch delighted everyone who saw it: it has a
quad-core processor, Wi-Fi, LTE, and 22 actuators.
e. Kodak - According to Rogers (1983), an innovation is something new that individuals
or units perceive as worth adopting. New innovations are ideas that are more successful
or better accepted by society, governments, and, most importantly, the market. In the
business world, innovative ideas are seen as potent growth stimulants. Industries need to
be creative and to experiment with new products and processes to keep an economy
dynamic. Companies must look for new ways to delight their consumers by offering
superior quality, lower prices, quicker service, and longer product lifespans.
Innovation in this context emerges as an outgrowth of corporate methods and state-of-
the-art technologies, which combine to create new goods. These advances not only bring
value to consumers but also boost the company's financial health (Tushman & Anderson,
2004, p.42).
This section looks at Eastman Kodak's developments in the film and photography
industry and their role in the company's early leadership there, as well as the effect that
other companies' innovations ultimately had: innovation by competitors turned Kodak
into a money-loser.

f. Huawei - In the Innovation 1.0 era, we concentrated on innovative goods, technical
advancements, and solutions that meet the needs and challenges of customers. This is
innovation that goes from 1 to N. Our main aim is to help our clients and partners
enhance their productivity, raise their revenue, or lower their costs, supporting them in
reaching their highest level of business performance. Historically, Huawei has worked in
numerous technical fields, including optical networks, smart devices, and wireless
communications, creating a large amount of commercial value for our customers, with
tremendous social value to boot.
Our goal in Innovation 2.0 is to tackle the theoretical and basic technology roadblocks
that have held back ICT growth thus far. This stage rests on fundamental theoretical
advances and technological inventions, which means moving from 0 to 1.
Open innovation and inclusive growth
New concepts, hypotheses, and technologies involve many unknowns, which means
innovation should not be limited to one individual or organization. The innovation
concept behind Huawei's Innovation 2.0 is to build collaborations with universities and
research institutes in order to link all kinds of global scientific research capital and
expertise.
The approach combines a clear vision with cutting-edge technology. Guided by our
shared vision for the future, we will explore the possibilities of intelligent life. Faced
with enormous obstacles, we will explore market opportunities and emerging
technologies in order to build new industries. On these assumptions, we expect to make
bold bets and shape our technological future.

Scope: moving quickly through the information lifecycle with disruptive technologies.
This involves evaluating and leveraging technologies that are helpful across the
information lifecycle, including production, storage, computation, transmission,
presentation, and consumption. We have also ramped up our investments in disruptive
technology to gain a competitive edge.
Strategic initiative: academic and corporate collaborations, with an emphasis on
technology investment. Businesses and universities working together will advance the
commercial application of research findings. Because scientists will have a clearer
understanding of where their future research should go, based on the problems, real-
world situations, and demands posed by companies, the field will become even more
creative. Huawei will employ numerous methods to improve the efficacy of Innovation
2.0, including funding research projects at universities and research institutes,
establishing its own laboratories, and making substantial investments in several technical
routes.
Innovation has been the very basis of Huawei's survival and development over the last
three decades. We will keep devoting an additional US$3 billion to US$5 billion per year
to future-oriented cutting-edge technology and basic research. Huawei employs about
15,000 staff, of whom over 700 are active in basic research, including approximately 200
Ph.D.s specializing in mathematics, over 200 Ph.D.s specializing in physics and
chemistry, and approximately 5,000 Ph.D.s in engineering.
At the same time, we have collaborated with over 300 universities and over 900 research
institutions and companies on innovation and research. We will harness global innovation
capital and draw individuals from around the world to collaborate on research,
embodying the concept of open innovation. We will pay attention to the market and
academic obstacles that companies face, as well as to the confidence of venture
capitalists. Sharing our innovation achievements with industry and the community will
light the way forward, for both Huawei and the planet.
Key innovations can create a connected and intelligent world
To capture the opportunities and solve the problems of the future intelligent world,
Huawei continues to concentrate on research and innovation. We will widen the scope of
our innovation, allowing us to accelerate further. On top of that, we will hire experts from
around the world, promote creativity, and begin developing cutting-edge technology that
we will release to the industry. As a team, we will contribute useful knowledge to the
information society.
We are investing more heavily in fundamental science in order to create theoretical
breakthroughs, with the goal of helping build an intelligent world.
As technology advances rapidly, communications capacity moves gradually closer to the
Shannon limit. Many parts of the communications industry are searching for advances in
underlying theory in order to keep pace with the intelligent world. In 2019, we
investigated advanced networking theory in the hope of reinventing wireless network
infrastructure for the era when capacity gains near the Shannon limit are exhausted.
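The Shannon limit referred to here is the Shannon-Hartley capacity, C = B · log2(1 + S/N): the maximum error-free data rate of a channel with bandwidth B and signal-to-noise ratio S/N. A quick sketch (the channel figures are illustrative, not Huawei's) makes it concrete:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# An illustrative 20 MHz channel at 30 dB SNR (snr_linear = 1000):
print(round(shannon_capacity(20e6, 1000) / 1e6, 1), "Mbit/s")  # 199.3 Mbit/s
```

Because capacity grows only logarithmically with SNR, once systems operate close to this bound, further gains must come from more bandwidth, more antennas, or new theory rather than from better signal processing alone, which is the motivation the text describes.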
Specifically, we are concentrating on:
We were among the first in the industry to identify the most urgent research issues in
basic science on the threshold of a more technologically advanced future. Besides privacy
and information models, these obstacles to innovation include problems with channel
capacity, approximate computation, and optimization. We have been collaborating with
some of the world's top scientists to find new ways to deal with these issues.
We developed one of the first theoretical models for trust in networks in the era of AI.
Network robustness, predictability, and utilization have greatly increased, and active
control performance has risen by 56%.
We achieved breakthroughs in key theoretical results and scheduling algorithms for
future wireless networks, increasing user data rates at the network edge by at least 30
percent.
The wide use of artificial intelligence has had a profound influence on computer
architecture. In this area, we held fundamental research involving both theoretical
breakthroughs and technical progress in high regard. We published more than 80 papers
on artificial intelligence, one of which was the only paper to receive the ACL 2019 Best
Long Paper Award.
AdderNets was the first approach in deep learning to replace the massive multiplications
of convolution with additions. With this new approach and software-hardware co-design,
computer vision tasks can be handled with an order-of-magnitude saving in chip area and
energy relative to current requirements. We have also made the AdderNets research code
public, hoping to collaborate with academia to discover the next generation of AI
accelerators.
We built the first known causal-structure learning framework to replace the previously
existing local search framework. This innovative framework integrates reinforcement
learning and matches the performance of the best academic frameworks on industry
benchmarks. Our paper on this system was accepted at ICLR 2020, ranking first among
submissions (tied with others).
Invented novel AI concepts from both data-driven and knowledge-driven angles in order
to design industry-grade explainable AI methods.
Aided the implementation of design space exploration (DSE) technology by introducing
Pareto optimization, a first of its kind.
A relentless effort to push forward the industry is founded on creativity and ingenuity.

High-speed multimedia broadband networks
With 5G now entering commercial use, we have started our quest for the sixth generation
of technology. Huawei is exploring new ideas in the realm of 6G. As a consequence, we:
Investigated 6G network design problems, such as air interface advances, network
architectures, and key enabling technologies.
Led industry-wide joint efforts and encouraged the industry to embrace 6G.
Optical networks
We will continue to sustain our position as a leader in this area. We:
Advanced super-core transmission sites through the continuing growth of fiber capacity
and key interconnection hubs across the industry, which would allow the industry to
double fiber capacity every three years, even in the era of ultra-broadband connectivity.
Solved problems around crucial technologies, such as high-performance optical
algorithms for long-haul transmission and optical amplifiers operating at 400 Gbit/s per
wavelength, both of which will make future long-haul transmission much more effective.
Proposed an optical cross-connect (OXC) architecture with flexible wavelength-based
scalability, capable of supporting ten petabits per second of total capacity.
Intelligent operations and maintenance
In this area, we:
Used intelligent detection and more precise fault location to ensure that only a single
work order is placed per fault, reducing repetitive work order assignments.
Improved wireless network efficiency with spatial-temporal algorithms and dynamic
equipment adjustments, yielding a 15% energy saving and a 9% to 17% reduction in
power usage effectiveness (PUE) in data centers with a high PUE.
Defined five stages of the Autonomous Driving Network (ADN) with many other
industry players and published a white paper, Autonomous Networks: Empowering
Digital Transformation for the Telecoms Industry.
Demonstrated new findings in intelligent, automated, self-configuring network
capabilities that allow ADN technology to advance even further.

g. Research in Motion - The Research In Motion (RIM) Innovation Challenge was first
introduced in South Africa on Friday, February 24, 2012, at the University of
Johannesburg's Intellilab. The gathering, sponsored by RIM, brought together 40 Grade
11 learners from McCauley House, Dawnview High School, Sunward Park High School,
and Ponelopele Oracle Secondary School. Participants were trained to do things
differently by collaborating with the University of Johannesburg's Intellilab staff on a
series of demanding challenges.
Teams of four were formed, each with one learner from every school. Using the clues
they were given, the teams located the five Market Points around the University of
Johannesburg's Kingsway campus. The challenge required teams to visit each Market
Point, watch the exchange rate of their BlackBerry Bucks, and buy a box of robot parts at
the best moment. Volunteers from RIM South Africa manning the Market Points made it
easier for the teams to find the best time to buy their parts. The large contingent of RIM
volunteers was led by Karina Gibson, RIM's Community Relations Manager for Europe,
based in the United Kingdom, who also served as a judge.

h. Walt Disney - Popular for its classic feature films and hit television shows, as well as
its theme parks, toys, and characters, the Disney Corporation is an American organization
recognized all over the world.
In 2019, the market shifted away from impersonal broadcast entertainment toward
customized, on-demand entertainment, and Disney began to remake itself before any
other "traditional" media business. Disney's wide range of premium global brands -
Disney, Pixar, Marvel, Lucasfilm, ESPN, and ABC - helps its service retain customers by
providing additional content that motivates users to sign up for its streaming offerings.

Disney had been planning for this eventuality for a few years, having acquired BamTech,
a company whose streaming services had been thoroughly tested for a decade. Disney
Direct-to-Consumer and International was founded in 2018 to act as a central distribution
and technology hub for the company's streaming services, which launched with ESPN+
in April 2018 and have since signed up over one million subscribers. ESPN+ runs on the
new streaming platform, supported by BamTech's video streaming, commerce engine,
dynamic ad insertion technology, and the data collection required to personalize user
experiences.
Both original content and films and TV shows from Disney's vast catalog will be made
available on the upcoming Disney streaming service, which is scheduled to launch by the
end of the year. It will also have rights to titles like Avatar, X-Men, and National
Geographic as a result of the acquisition of 21st Century Fox. Additionally, Disney will
have a controlling stake in Hulu, which has over 25 million subscribers for their
streaming and live TV services.
Bob Iger's plan of expanding the Disney empire by purchasing companies like Pixar,
Lucasfilm, and Marvel paid off handsomely in 2016, when Disney became the first movie
studio ever to reach $7 billion at the box office. Releases from these studios - Finding
Dory, Captain America: Civil War, Doctor Strange, and Rogue One: A Star Wars Story -
delivered blockbuster box office results for Disney. The company consistently makes
smart decisions, such as hiring an indie director like Gareth Edwards for Rogue One: A
Star Wars Story, or an experienced director like Jon Favreau to reinterpret The Jungle
Book, another hit film. Meanwhile, Disney Animation continued its winning streak of the
previous few years, releasing both the original title Zootopia and Moana. The $5.5 billion
Shanghai Disney resort opened in 2016, allowing the Disney brand to grow in China. To
be sure, ratings growth at Disney's cable network ESPN has decelerated in recent years,
as it has at many other media companies, but the organization is seeking new and
creative ways to bring young, male audiences back into the fold. The company invested
an additional $400 million in VICE in 2016 and now distributes VICE's content through
digital, web, and TV for ESPN.
