
LEGAL IMPLICATIONS OF AI IN THE BEAUTY AND HEALTH SECTORS

PART 1- INTRODUCTION & USE CASES

1. INTRODUCTION-

 The use of artificial intelligence (“AI”) and machine learning is growing at a significant pace
and spreading across many industry sectors, including beauty and healthcare.

 According to the latest market intelligence research report by InsightAce Analytic, the global
artificial intelligence in beauty and cosmetics market was valued at US$ 2.70 billion in
2021, and it is expected to reach US$ 13.34 billion by 2030¹.

 Similarly, according to CB Insights, funding in the AI sector was up 108% in 2021, with
healthcare accounting for about a fifth of overall funding².

 These figures clearly indicate that, with consumers seeking more personalised and speedy
solutions, brands are expanding their product line-ups to include AI-powered platforms.

 For example, while Boots’ AI-powered Digital Beauty Advisor analyses skin selfies and
offers relevant product recommendations³, Actlyzer is a hand-wash movement recognition
technology that identifies complex hand-washing movements from video data captured by a
camera⁴.

2. USE CASES OF AI
2.1. Use Cases of AI in the Beauty Industry:

The use cases of AI in the beauty industry can be grouped into four categories, namely
virtual product trials, personalisation, product development and virtual influencers,
depending upon the type of service offered. These categories are elaborated with examples
below:

1 For additional information, you may visit https://www.prnewswire.com/news-releases/global-artificial-intelligence-ai-in-beauty-and-cosmetics-market-worth-us-13-34-billion-by-2030---exclusive-report-by-insightace-analytic-301470507.html.
2 The complete report of CB Insights may be accessed at https://www.cbinsights.com/reports/CB-Insights_Artificial-Intelligence-Report-2021.pdf?utm_campaign=marketing_state-ai_2022-02&utm_medium=email&_hsmi=201947916&_hsenc=p2ANqtz-931DshGf9eoiolwHAvLFrYTDz8n2QmgJjHCJkycRv-LAKtWykU4kKgF4B4PUQog3YNqQjL86Xov0EWHdsbQ8c7l-SUOw&utm_content=201947916&utm_source=hs_automation.
3 For more details, you may refer to https://storebrands.com/boots-brings-personalized-approach-no7-online.
4 For more details, you may refer to https://www.fujitsu.com/global/about/resources/news/press-releases/2020/0526-01.html#:~:text=Kawasaki%2C%20Japan%2C%20May%2026%2C%202020&text=(FRDC)%20today%20announced%20the%20development,video%20data%20captured%20by%20camera.
2.1.1. Virtual Product Trials

 AR-powered ‘virtual mirrors’ allow a consumer to try on cosmetic products in real time.
One of the first to put this technology to use was Perfect Corp, which has been offering
beauty SaaS solutions since 2014.

 ModiFace by L’Oréal: Acquired by L’Oréal in 2018, ModiFace is a leading provider of
augmented reality technology for the beauty industry. Recent real-world examples linked
with ModiFace include Facebook and L’Oréal’s venture to bring AR-powered makeup try-ons
to Instagram shopping⁵ and NYKAA’s virtual try-on technology⁶.
 We may also consider a few other examples within this category, which are as follows:

i. GlamAR (developed by an Indian company, Fynd): A virtual try-on platform that
lets its users try on makeup, hair and eyewear products before they actually buy
them⁷.

ii. Lip Hue’s smart mirror: Custom lipstick studio Lip Hue, in partnership with Morph
Digital Solutions, has created a smart mirror that allows customers to watch their
reflection, try on different lipstick shades, and customize those shades according to
their preferences⁸.

2.1.2. Personalisation-

 AI also enhances the personalisation of products in the beauty industry. For a closer
understanding, we may refer to the following examples:

i. Let’s Get Ready: In 2018, Coty entered into a partnership with Amazon to launch
‘Let’s Get Ready’, a digital personal beauty assistant designed for the Echo Show.
This beauty assistant brings on-demand, occasion-based look planning fine-tuned
by personal attributes such as hair, eye and skin colour⁹.

ii. The Skin Genome Project: Founded in 2017, Proven Skincare was recently awarded
Best Use of Technology at the 2022 Glossy Beauty Awards for its proprietary
Skin Genome Project, the largest beauty database, which utilises customer reviews,
skincare ingredients and academic journals to tailor skincare products to
customer-specific needs¹⁰.
5 For more details, you may refer to https://www.loreal.com/en/press-release/research-and-innovation/loreal-and-facebook-bring-virtual-tryons-to-instagram-shopping-with-modiface/.
6 For more details, you may refer to https://www.livemint.com/companies/news/nykaa-launches-ai-powered-virtual-try-on-tech-modiface-for-beauty-shoppers-11639539986118.html.
7 For more details, you may refer to https://www.glamar.io/.
8 For more details, you may refer to https://www.getmorph.com/solutions.html.
9 For more details, you may refer to https://www.businesswire.com/news/home/20180117006021/en/Coty-to-Launch-%E2%80%98Let%E2%80%99s-Get-Ready%E2%80%99-Skill-for-Amazon-Echo-Show.
10 For more details, you may refer to https://www.businesswire.com/news/home/20220617005071/en/PROVEN-Skincare-Awarded-Best-Use-of-Technology-at-the-2022-Glossy-Beauty-Awards.
2.1.3. Product Development-

 Like the above two use cases, AI also plays a significant role in the beauty industry at the
product development stage.

 For instance, Avon’s True 5-in-1 Lash Genius Mascara was developed by analysing the top
needs that customers expressed through social media. Avon’s machine learning and AI-
powered genius algorithm was used to read, filter, process and rank thousands of online
consumer comments to determine the top features customers crave in a mascara, which
ultimately shaped the product¹¹.

2.1.4. Virtual Influencers:

 Lil Miquela: Lil Miquela is a perpetual 19-year-old virtual influencer with reportedly 3 million
followers on Instagram, who made headlines for taking over Prada’s Instagram for Milan
Fashion Week¹².

2.2. USE CASES OF AI IN HEALTH AND HYGIENE-

Artificial intelligence is reshaping healthcare, and its use is becoming a reality in many
medical fields and specialties. Some of the uses of AI in healthcare are set out below:

2.2.1. Vita-Cam: Developed by students at UAE-based Ajman University, Vita-Cam is a
smartphone AI application that detects vitamin deficiencies from photos users take
of themselves, without the need for blood samples, and then suggests a
compensational diet¹³.

2.2.2. Harvard University researchers use satellite data and AI to predict food security
problems: Computer scientist Elizabeth Bondi and her colleagues at Harvard
University used publicly available satellite data and artificial intelligence to reliably
pinpoint geographic areas where populations are at high risk of micronutrient
deficiencies. This analysis could potentially pave the way for early public health
interventions¹⁴.
2.2.3. Atolla’s curated face serums: Launched in 2019, the brand works by giving users testing
kits to measure the exact characteristics of their skin – specifically its hydration, oil, pH
and absorption. It then sends them a face serum calibrated precisely to their skin's
needs, with the formula updated monthly¹⁵.
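The measurement-to-formula step can be pictured as a simple mapping from the four metrics the article names to ingredient choices. The thresholds and ingredient names below are entirely hypothetical; Atolla's actual calibration model is proprietary.

```python
# Hypothetical mapping from measured skin metrics to serum parameters.
# The metric names (hydration, oil, pH, absorption) follow the article;
# all thresholds and ingredients are invented for illustration.
def calibrate_serum(hydration, oil, ph, absorption):
    formula = {}
    formula["humectant_pct"] = 8.0 if hydration < 40 else 4.0   # boost dry skin
    formula["oil_control_pct"] = 3.0 if oil > 60 else 1.0       # temper oily skin
    formula["ph_buffer"] = "acidic" if ph > 5.5 else "neutral"  # nudge toward ~5
    formula["carrier"] = "light" if absorption > 50 else "rich"
    return formula

print(calibrate_serum(hydration=35, oil=70, ph=6.0, absorption=55))
```

Because the kit is re-run periodically, the same function applied to new readings yields the monthly formula update the article describes.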

11 For more details, you may refer to https://www.avonworldwide.com/news/lash-genius-five-benefits-in-one-mascara.
12 For more details, you may refer to https://thecurrentdaily.com/2018/02/27/prada-enlists-computer-generated-influencer-ss18-show/.
13 For more details, you may refer to https://www.jamesdysonaward.org/en-AE/2019/project/vita-cam/.
14 For more details, you may refer to https://www.scientificamerican.com/article/ai-can-predict-potential-nutrient-deficiencies-from-space/.
15 For more details, you may refer to https://www.fastcompany.com/90469026/this-company-uses-ai-to-formulate-your-perfect-skin-serum-and-it-works.
2.2.4. Face oil and cleanser by Atypical: The company uses an online survey to learn a
customer's allergies, skin type, lifestyle and skin goals, and an algorithm identifies the
best ingredients and formulation to fit them. The products are then made to order,
which, as Atypical points out, means not only that they are tailored to the customer but
also that the ingredients are at their freshest and most active¹⁶.
2.2.5. Service Platform- HelloAva: In contrast to the other companies on this list, HelloAva is a
curation service rather than a product maker. To get personalized product
recommendations, you start by creating an account. Once that’s set, a chatbot named
Ava gathers basic information about your skin-care concerns and the product categories
you’re interested in. Uploading a selfie is recommended, but you can choose to skip that
step. Next, Ava walks you through 12 more in-depth questions designed to zero in on
your skin type and specific issues. After the evaluation, the app spits out a list of
shoppable product suggestions – two for every category you selected during the quiz¹⁷.
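The quiz-to-recommendation flow described above reduces to selecting a fixed number of products per chosen category. The catalogue and product names below are invented for the sketch and are not HelloAva's actual data.

```python
# Illustrative catalogue; categories mirror the kind a skin-care quiz offers.
CATALOGUE = {
    "cleanser": ["Gentle Gel Cleanser", "Foaming Cleanser", "Oil Cleanser"],
    "moisturizer": ["Light Lotion", "Rich Cream", "Gel Moisturizer"],
    "serum": ["Vitamin C Serum", "Hyaluronic Serum"],
}

def recommend(selected_categories, per_category=2):
    """Return a shoppable list: two products for every category selected."""
    return {cat: CATALOGUE[cat][:per_category] for cat in selected_categories}

print(recommend(["cleanser", "serum"]))
```

In the real service the per-category ranking would come from the chatbot's evaluation of the quiz answers and optional selfie, rather than catalogue order.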
2.2.6. Curology – connecting your concerns to dermatologists: Curology embraces technology in
a more distant fashion. It uses a certain level of artificial intelligence in its analysis tool,
which incorporates an app, photographs and quiz questions, but calls on human skill for
the solution. Customers are thereafter matched to a dermatologist who customises
their formula and provides ongoing advice.
2.2.7. Function of Beauty: It creates customised shampoos and conditioners using big data and
machine learning. Similar to Proven, you enter your hair type, hair structure, and hair
goals on their website. All that information is then put through an algorithm that spits
out your perfect haircare¹⁸.

2.2.8. Buddy Nutrition: A personalised nutrition platform offering a range of personalised
products as recommended by the nutrition expert ‘Buddy’, which uses a user's body
type, goals and lifestyle to suggest a daily wellness shot that is custom-manufactured¹⁹.

2.2.9. Netmeds: Unlike the aforesaid examples, Netmeds leverages AI for operational
efficiency. It uses artificial intelligence to forecast demand and procurement
requirements by pincode and ties this to the warehouse that services the respective
pincode. Forecasts are based on historical sales data, product type, season, marketing
efforts, etc.²⁰
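A toy version of per-pincode demand forecasting can be sketched as a moving average of historical sales scaled by a seasonal factor. Netmeds' actual models are not public; the pincodes, sales figures and factor below are invented for illustration.

```python
# Forecast next-period demand per pincode: average of the most recent sales,
# scaled by a seasonal factor (e.g. >1 during a high-demand season).
def forecast(history, seasonal_factor=1.0, window=3):
    recent = history[-window:]
    return round(sum(recent) / len(recent) * seasonal_factor, 1)

sales_by_pincode = {"400001": [120, 130, 150, 160], "560001": [80, 75, 90, 95]}
demand = {pin: forecast(h, seasonal_factor=1.2) for pin, h in sales_by_pincode.items()}
print(demand)  # procurement per pincode feeds the warehouse servicing it
```

Real systems would layer in product type and marketing signals as regression features, but the pincode-keyed output is what ties the forecast to the servicing warehouse, as the article describes.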

2.2.10. Vitmedics Vitcheck Assessment: In addition to offering personalised nutrition based
on users' body parameters, this assessment seeks to understand the medicines you take
and how these may affect nutrient absorption. It then recommends supplements
that help to address any imbalances²¹.

16 For more details, you may refer to https://www.atypicalcosmetics.com/skincare/.
17 For more details, you may refer to https://helloava.co/.
18 For more details, you may refer to https://www.functionofbeauty.com/.
19 For more details, you may refer to https://www.latechwatch.com/2019/09/buddy-nutrition-personalized-dan-obegi/.
20 https://www.expresscomputer.in/artificial-intelligence-ai/how-netmeds-is-leveraging-ai-ml-for-operational-efficiency-customer-experience/34536/#:~:text=In%20addition%2C%20Netmeds%20uses%20artificial,season%2C%20marketing%20efforts%2C%20etc.
21 For more details, you may refer to https://www.vitmedics.com/.
2.2.11. Oral-B: The Oral-B GENIUS X electric toothbrush uses artificial intelligence and motion
sensors to recognise brushing style, track where a customer brushes, and give real-time
feedback for best results²².
2.2.12. Toilet seat by Rochester Institute of Technology: It contains devices that measure blood
oxygenation levels, heart rate, and blood pressure to signal when someone is at risk of
congestive heart failure. The device was part of a study to determine ways to reduce
hospital readmission rates.

PART 2- POSITION OF LAWS IN US, EU AND INDIA ALONG WITH COUNTRY-SPECIFIC USE CASES

1. INTRODUCTION-

 Although AI offers unique opportunities to improve health care and patient outcomes, it also
comes with potential challenges. AI-enabled products, for example, have sometimes resulted
in inaccurate, even potentially harmful, recommendations for treatment. Another challenge
is insensitivity to potential impact: AI systems may not be trained, as humans are, to ‘err on
the side of caution’. While erring on the side of caution can result in more false positives,
that approach may be appropriate when the alternative is a serious safety outcome for the
patient.

 For example, Google’s medical AI for diagnosing diabetic retinopathy, which provided
accurate, speedy results in the lab, turned out to be a failure in a real-world setting²³. This
medical AI used clinical photos of the interior of a patient’s eye to screen for the disease.
While it did speed up the process, it sometimes simply failed to give a result at all. This was
because, like most image recognition systems, the model had been trained on high-quality
scans to ensure accuracy and was designed to reject images that fell below a certain
threshold of quality. With nurses scanning dozens of patients an hour and often taking the
photos in poor lighting conditions, more than a fifth of the images were ultimately rejected
by the system, leaving patients without results.

 Another, more serious mishap was reported in July 2021, when Epic Systems’ artificial
intelligence algorithms were found to deliver inaccurate or irrelevant information to hospitals
about the care of seriously ill patients²⁴.

 Such mishaps warrant specific regulatory oversight to mitigate potential injuries.

22 For more details, you may refer to https://www.oralb.co.uk/en-gb/product-collections/genius-x.
23 For more details, you may refer to https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/.
24 For more details, you may refer to https://mindmatters.ai/2021/08/an-epic-failure-overstated-ai-claims-in-medicine/.
2. POSITION OF LAWS IN THE US-

 Guidance for Regulation of Artificial Intelligence Applications, 2020²⁵: The Guidance consists
of 10 principles that agencies should consider when formulating approaches to AI
applications. These principles include: (1) public trust in AI, (2) public participation, (3)
scientific integrity and information quality, (4) risk assessment and management, (5) benefits
and costs, (6) flexibility, (7) fairness and non-discrimination, (8) disclosure and transparency,
(9) safety and security, and (10) interagency coordination.

 An executive AI initiative: AI.gov26: The White House has also launched a new website
(“AI.gov”) that focuses on AI for the American people and aims to provide a platform for
those who wish to learn more about AI and its opportunities.

 Proposed FUTURE of Artificial Intelligence Act of 2020 27: This Bill puts forth a regulatory
requirement for the creation of a federal advisory committee by the Secretary of Commerce
to study and assess certain aspects of AI. One such covered aspect is the development of AI
to save costs in healthcare28.

 FDA Clearance: Pursuant to a mandatory FDA requirement, all medical device companies
and products that meet the prescribed threshold must be registered. FDA approval is a
predominant requirement for marketing and selling medical devices of all risk classes in the
United States. Under this requirement, medical devices are categorized as “FDA Registered”,
“FDA Cleared”, or “FDA Approved”, depending on their associated risk class (class I, II, or III,
respectively).

2.1.Use Cases of AI in Health Industry in USA-

 In total, the Food and Drug Administration (FDA) has already cleared or approved around 40
AI-based medical devices.

 Apple Watch Series 4: This smart wearable offers FDA-cleared EKG technology that uses
electrodes to capture heart rhythm irregularities. It is the first consumer-available product
that allows users to take an EKG from their own wrist, which can later provide critical data
to physicians²⁹.

3. POSITION OF LAWS IN THE EU-

25 The Guidance Note may be accessed at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.
26 The website may be accessed at https://www.ai.gov/.
27 The Bill may be accessed at https://www.congress.gov/bill/116th-congress/senate-bill/3771/text.
28 Section 3(b)(2)(M), The FUTURE of Artificial Intelligence Act of 2020.
29 For more details, you may refer to https://www.theverge.com/2018/9/13/17855006/apple-watch-series-4-ekg-fda-approved-vs-cleared-meaning-safe.
 European Commission’s AI Strategy, 2018: The European Commission adopted its AI strategy
for Europe in April 2018. This strategy aims to ensure an appropriate ethical and legal
framework by creating a European AI Alliance³⁰ and developing AI Ethics Guidelines³¹.

 AI Ethics Guidelines, 2019: The European Commission’s High-Level Expert Group on AI (AI
HLEG), which was appointed by the European Commission in June 2018 and which is also the
steering group for the European AI Alliance, published these Guidelines.

The Guidelines promote the slogan “Trustworthy AI” and contain seven key requirements
that AI systems need to fulfil in order to be trustworthy: “(1) human agency and oversight,
(2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5)
diversity, non-discrimination and fairness, (6) environmental and societal well-being, and (7)
accountability”.

 UK’s New Code of Conduct: The European Union encourages all its member states to
develop a national AI strategy. One such strategy was developed by the United Kingdom
through the UK’s New Code of Conduct for AI. This Code of Conduct ensures that only the
best and safest data-driven technologies are used by the NHS and that patient data is
protected³².

 CE Marking System: The CE marking represents a manufacturer’s declaration that products
comply with the EU’s New Approach Directives. It is pertinent to note here that the CE mark
is not intended for AI-specific products but applies to certain categories of products,
independent of the technology used in manufacturing or creating them. ‘Medical devices’
are one such category.

 Annexure XVI of the European Medical Device Regulations (“MDR”): These regulations
prescribe a number of products that must mandatorily comply with the CE marking
requirement. Annexure XVI of the Regulations includes cosmetic and aesthetic products like
dermal fillers and tattoo removal equipment. Therefore, the use of AI in such prescribed
products would also entail mandatory CE compliance³³.

 Proposed Regulation on the European Health Data Space 34: This regulation establishes the
European Health Data Space (EHDS), a common space for health data where natural persons
can control their electronic health data (primary use) and where researchers, innovators and
policy makers have access to these electronic health data in a trusted and secure way that
preserves the individual’s personal data (secondary use). Herein, the data holders (such as
health care providers, including private and public hospitals, and research institutions) may
be subject to new, burdensome obligations to make their data available for secondary use
through the EHDS.

30 A detailed policy on the European AI Alliance may be accessed at https://digital-strategy.ec.europa.eu/en/policies/european-ai-alliance.
31 The Ethics Guidelines for Trustworthy AI may be accessed at https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
32 For more details on the UK’s Code of Conduct, you may refer to https://www.gov.uk/government/news/new-code-of-conduct-for-artificial-intelligence-ai-systems-used-by-the-nhs.
33 Full text of the said regulation may be accessed at https://www.medical-device-regulation.eu/2019/08/14/annex-xvi/.
34 The Proposal for a Regulation on the European Health Data Space may be accessed at https://ec.europa.eu/health/publications/proposal-regulation-european-health-data-space_en.

Further, the EHDS Proposal aims to reconcile regulation of the primary use of health data by
individuals and health professionals with its secondary use by researchers, innovators and
policymakers. The proposal also introduces a voluntary label for wellness applications to
ensure transparency for users regarding interoperability and security requirements. This
development is expected to reduce cross-border barriers for manufacturers of smart devices
that process health data, such as Fitbit.

 The Proposed Artificial Intelligence Act (“AI Act”)³⁵: The proposed legislation puts forth a
provision requiring potentially risky artificial intelligence (AI) systems to bear a CE mark.
This would include all products developed through the application of AI, including
innovations in the healthcare and beauty industries.

The AI Act is applicable to providers of AI systems established within the EU, or in a third
country, who place AI systems on the EU market or put them into service in the EU, as well
as to users of AI systems located within the EU.

In the aforesaid draft, the Commission follows a graduated approach based on possible
threats to EU values and fundamental rights, as follows:

1. systems with unacceptable risk are prohibited;

2. systems with high risk are subject to stringent regulatory requirements;

3. low-risk systems are subject to special transparency requirements;

4. other systems are permitted – subject to compliance with general laws.
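The four-tier logic above can be expressed as a simple classifier. The risk level here is supplied by the caller; determining a system's actual risk level under the AI Act requires legal analysis, which this sketch does not attempt.

```python
# Regulatory outcome per risk tier, following the graduated approach above.
OUTCOMES = {
    "unacceptable": "prohibited",
    "high": "stringent regulatory requirements",
    "low": "special transparency requirements",
}

def ai_act_outcome(risk_level):
    """Map a risk tier to its treatment; anything else falls to general laws."""
    return OUTCOMES.get(risk_level, "permitted subject to general laws")

print(ai_act_outcome("high"))
```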

 It is pertinent to note here that, unlike the MDR, which is product-oriented and requires
CE marking only for products falling into a specified category, the proposed AI Act is
technology-oriented. In other words, if a product, irrespective of its category (be it a
medical device or a cosmetic device), is an output of AI, it will trigger compliance under the
AI Act.

4. POSITION OF LAWS IN INDIA-

 While, in general, the Government of India is taking active steps to promote AI across
sectors, the beauty industry does not seem to be a priority. While the below-mentioned
governmental actions cover various sectors, including healthcare, regulatory or policy
oversight is negligible with respect to the beauty industry.

35 Full text of the proposal may be accessed at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.
 Nonetheless, it is evident from the following that the Government of India is working
towards creating an AI-friendly technological ecosystem in India:

4.1. AI Task Force, 2017³⁶: In 2017, the Ministry of Commerce and Industry set up an AI task
force that highlighted sixteen sectors of importance, including healthcare and
retail/customer engagement³⁷. It is pertinent to note here that while healthcare is covered
directly under this report, AI innovations in the beauty industry may to some extent be
covered under retail/customer engagement; for example, Vedix, an Indian brand, offers
personalised hair care regimens.

4.2. New Industrial Policy, 2018³⁸: The Ministry of Commerce and Industry framed a new
Industrial Policy in consultation with all central ministries, state governments, industries
and stakeholders to align India with the latest technologies such as drones, AI, and
blockchain.

4.3. Invest India–UAE MoU, 2018³⁹: In July 2018, Invest India (a non-profit venture under the
Department for Promotion of Industry and Internal Trade, Ministry of Commerce and
Industry) and the UAE Ministry for Artificial Intelligence signed an MoU for an India–UAE
Artificial Intelligence Bridge. The Government of India is launching multiple initiatives to
create an environment for digital growth through which the potential of AI can be realized
in the areas of agriculture supply, healthcare, and disaster management services.

4.4. Ministry of Health and Family Welfare’s MoU with Wadhwani Institute for Artificial
Intelligence, 2019⁴⁰: This MoU was signed to explore the application of artificial
intelligence technology in the fight against tuberculosis.

4.5. AI-specific cloud computing platform AIRAWAT, 2020⁴¹: The government has an ambitious
plan of setting up AIRAWAT, a cloud platform for Big Data analytics with advanced AI
processing capabilities. The infrastructure will be capable of high-performance
supercomputing and of supporting new applications, especially for the development of the
healthcare and agriculture sectors, weather forecasting, and financial inclusion, to name a
few.

4.6. Responsible AI #AIFORALL: NITI Aayog’s Publication, 2021⁴²: The Approach Document
aims to establish broad ethics principles for the design, development, and deployment
of AI in India. The second part of this document, released in August 2021, puts forth
seven principles for the responsible management of AI systems.

4.7. Contract Guidelines on Utilization of AI and Data:

36 For additional information, you may visit https://www.aitf.org.in/.
37 For additional information, you may visit https://indiaai.gov.in/government/ministry-of-commerce-and-industry.
38 For additional information, you may visit https://pib.gov.in/Pressreleaseshare.aspx?PRID=1549516.
39 For additional information, you may visit https://pib.gov.in/newsite/PrintRelease.aspx?relid=181145.
40 For additional information, you may visit https://pib.gov.in/Pressreleaseshare.aspx?PRID=1583584.
41 For additional information, you may visit https://indiaai.gov.in/research-reports/airawat-establishing-an-ai-specific-cloud-computing-infrastructure-in-india.
42 For additional information, you may visit https://analyticsindiamag.com/top-ai-based-initiatives-launched-by-niti-aayog-in 2021/#:~:text=Responsible%20AI%20%23AIFORALL%3A%20Approach%20Documents,for%20the%20Fourth%20Industrial%20Revolution.
These Guidelines (AI Section) present a fundamental approach to considerations, means of
avoiding trouble, and other aspects that take into account the characteristics of AI-based
software when preparing development and utilization agreements related to that software.
In doing so, the purpose of these Guidelines (AI Section) is to promote the development and
utilization of AI-based software by providing information for the execution of reasonable
agreements that are agreeable to all parties and by serving as an aid in the establishment of
contract practices.

It is also noted for clarity that these Guidelines (AI Section) present nothing more than the
aforementioned fundamental approach to contracts. In other words, these Guidelines
(AI Section) have no binding legal force and do not constitute any restriction on the
freedoms enjoyed by parties in relation to executing contracts.

(a) Issue 1: Lack of understanding among parties about the characteristics of AI technology

Shared understanding and awareness of what AI technology is and what characteristics it
possesses have yet to take shape; as a result, differences in opinion and misunderstandings
between parties arise and problems readily materialize.

Solution: The Guidelines summarize and discuss fundamental concepts of AI technology

These Guidelines (AI Section) explain problems caused by characteristics of AI-based
software, specifically that it is difficult at the initial phase of development to predict what
deliverables will be produced and it is difficult to make performance assurances with respect
to unknown inputs. By explaining these matters, it is hoped that parties will approach
contract negotiations based on the premise of shared awareness.

The characteristics of AI technology do not directly determine the allocation of risk among
parties to an agreement; rather, they ultimately constitute nothing more than one factor in
risk evaluation. On this point, contracts related to AI technology are in no way different from
pre-existing contracts.

As an example, from a business perspective, it would be conceivable to use a method
whereby a balance between a Vendor and a User is struck in relation to payment of
consideration for the generation of trained models, or for a service using such a trained
model, by making payment dependent on certain results, achievement of KPIs, or any other
similar variation of payment terms and conditions.

(b) Issue 2: Lack of clarity on legal relationships related to AI-based software

Legislation has not kept up with the rapid expansion and dissemination of AI technology, so
a number of matters related to the relationship of rights, attribution of liability, and the like
remain unclarified by law.
Solution: The Guidelines provide model contracts

With respect to the relationship of rights with regard to AI-based software, not only is
ownership of rights provided for in the contracts, but a flexible framework that enables
parties to achieve their respective purposes is also presented by establishing detailed terms
of use for deliverables and data.

(c) Issue 3: High economic value of the data involved

Generally, large volumes of high-quality training data are required to develop highly
accurate and competitive AI-based software. Time and effort are often committed to
acquiring and processing data in the first half of the development period. Data must
therefore be perceived as intrinsically linked to the development of AI-based software, and,
in general, the data required for development is provided to a Vendor by a User.

Solution: The model contracts provide detailed terms of use for deliverables and data

To deal with User concerns and rights claims arising from the fact that data provided to a
Vendor by a User is in some cases economically valuable or confidential, these Guidelines
(AI Section) and the Model Contracts present, as described above, a framework that
establishes detailed terms of use for deliverables and data. By establishing terms and
conditions in the terms of use that reflect the circumstances of the parties and the
characteristics of the provided data, they present an approach that seeks harmony between
the data-handling needs of Users and the needs of Vendors in relation to making effective
use of deliverables.

(d) Issue 4: Lack of contract practices for the development and utilization of AI-based
software

Understanding and awareness of contracts related to the development and utilization of AI-
based software have not adequately taken shape among contracting parties, and it is not
uncommon that contracts are negotiated without an adequate understanding of the
characteristics of AI-based software, the value of data and know-how, and the other party’s
perspective.

Solution: Along with model contracts, an “exploratory multi-phased” AI development
process is proposed.

The Guidelines (AI Section) propose a process (referred to as an “exploratory multi-phased”
AI development process) in which the following phases are established: (i) an assessment
phase for reviewing the feasibility of a trained model; (ii) a Proof of Concept phase; (iii) a
development phase; and (iv) a retraining phase. Development progresses on a step-by-step
basis: at each phase the parties explore whether each of them is able to achieve its purposes
through AI technology and whether they will progress to subsequent phases, and verification
of those matters and confirmation between the parties is attained before moving on. This
“exploratory multi-phased” AI development process permits a trial-and-error model of
development, which differs from the waterfall model where requirement definitions are
fixed at the start.

4.8. Use Cases in India-

(a) Use cases of AI in the Beauty Industry in India-

i. SkinKraft Laboratories: It is India’s first AI-enabled customized beauty and personal
care brand. As per the company’s CEO & Co-founder, they have built an in-house data-
tracking dashboard that pulls inventory information from all warehouses and maps it against
sales to give an accurate estimate of days of inventory across all SKUs and across all
platforms. This information feeds directly into their procurement dashboard and also helps
the marketing team create the right sales strategy 43.

ii. Virtual Try-On by Lakme: The Indian brand Lakme has made ‘virtual try-on’ possible by
creating a smart mirror on its official website that allows customers to view their reflection,
try on different shades, and customize those shades according to their preferences.

iii.  Olay’s Skin Advisor: Olay launched an online skin advisor app based on a deep-learning
algorithm that analyses a consumer’s skin using a simple selfie and recommends products
accordingly.

PART 3: AI AND IPRs

1. POTENTIAL IP ISSUES IN AI:

There are three major intellectual property-related issues that AI faces with respect to its
protection, ownership, and liability. These issues are discussed below.

1.1. Is the intellectual property (IP) behind AR technology properly protected with
patents, trademarks, service marks and trade secrets, making these start-ups more
valuable and more attractive for potential acquisition or partnership from a large
company's point of view?

 Loosely executed agreements could lead to potential challenges in the future. Usually,
the companies working in AI are not the end companies that use it in their business.
Rather, a software company develops the AI, and its collaboration with another player in
the beauty or health industry brings the innovation to market. However, such
collaboration requires clear terms on IP ownership and similar matters.
43
For more details, you may visit- https://skinkraft.com/.

 For example, ModiFace—and its IP—was acquired by L’Oréal in 2018, guaranteeing that
consumers will see more AR technology not only in apps but in stores. Another related
announcement by ModiFace was its partnership with Pinterest to build a machine
learning system to decode skin tone in images. As a result, when consumers now search
"makeup ideas” on Pinterest, they are invited to pick a skin tone range to narrow
their search. 

1.2. Can AI be the owner?

 Currently, AI is not recognised as a self-determining entity under intellectual property law
in general. As changing the fundamentals of IP law is time-consuming, authorities have
instead sought ways to protect such inventions through the avenues already available within
the law.

 In India, the law on this subject is very narrow: under section 2(d) of the Copyright Act,
1957, an 'author' refers to a human or a legal person, while sections 2(p) and 2(t) of The
Patents Act, 1970 specifically contemplate a human patentee, making the scope for AI
ownership in India very restrictive.

 As far as other jurisdictions are concerned, a single judge of the Australian Federal Court,
in Thaler v. Commissioner of Patents (2021) FCA 879, ruled that Australian law contains no
prohibition against an artificial intelligence being named as an inventor in a patent
application, and that such an application should therefore be allowed. However, this
decision was overturned by a full bench of the Federal Court in 2022.

 While the argument on the recognition of AI creations is not yet settled, the topic has
continually raised other consequential issues. For example, even if AI were able to receive
IP recognition, who would be able to commercialize the exclusive rights? Also, if
ownership is given to the AI developer as a reward for effort and investment, why would
the developer—involved only during the input stage—be rewarded for the final output
stage as well? Finally, if the last option is for works produced by AI to fall into the public
domain, why would developers put forth the mental and financial efforts to develop AI
with vigor?

1.3. Who owns the data that the AI uses to “learn”, and who bears the liability associated
with such ownership?

 If AIs are able to create, it is worth considering that they might also be liable in certain
circumstances.
 Consider an AI that analyses a company’s investment strategies or personalises big data
into tailor-made marketing advertisements: by automatically copying information, it might
be subject to claims of infringement of copyright, trade secrets, or even data privacy.

 In the same manner, a computer that produces poetry or artwork or generates 3D printing
could be accused of copyright or trademark infringement if it uses others’ IP without
requesting authorization.

 A relevant example to understand this issue better is the AI-powered search algorithm
application, My Beauty Matches. This website offers a quiz to new website visitors. Through
this quiz, the AI algorithm collects information about skin type and condition, hair structure
and preferences. The AI then analyses the customer data, finds relevant products in the
database and displays personalised recommendations. This setup raises IP concerns on
multiple fronts: who owns the data collected by the quiz? Who owns the compilation of data
belonging to different individual brands? And would such a compilation of data lead to IP
infringement?

 While discussing another related aspect of this issue, i.e. who bears the liability associated
with such ownership, a relevant example is Beauty.AI’s first international beauty contest
judged by robots44. Organized in 2016, the contest was criticised for its bizarre results,
which revealed a glaring factor linking the winners: the robots did not like people with dark
skin. While this controversy did not reach the courts, it raises a critical concern: who bears
the liability for such mishaps?

2. IP CHECKLIST FOR TECH STARTUPS-

The surest way a start-up can succeed against larger competitors is by protecting its
innovations and inventions. These protections not only level the playing field between start-
ups and incumbents but also give investors confidence that their investment will be protected.

 Identification of IP Asset

 Assess Potential Infringements

 Check confidentiality clauses in collaboration agreements

44
For more details, you may refer to- https://www.theguardian.com/technology/2016/sep/08/artificial-
intelligence-beauty-contest-doesnt-like-black-people .

PART 4- AI in Health & Beauty Industry- Data Privacy and Ethics Concerns
AI’s processes often require enormous amounts of data. As a result, it is inevitable that using AI may
implicate various privacy and security laws and regulations with respect to such data, which may
need to be de-identified. Additionally, with respect to health data, authorization from the patient
may be required before the data is disclosed via, or to, the AI.
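The de-identification step mentioned above can be sketched in a few lines. This is a minimal illustration only: the field names and the salted-hash approach are assumptions for the example, not a compliance-grade de-identification method.

```python
import hashlib

# Fields that directly identify a person and should not reach the AI pipeline as-is.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted hashes before AI processing."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[field] = digest[:16]  # stable pseudonym; not reversible without the salt
        else:
            out[field] = value  # non-identifying attributes pass through unchanged
    return out

record = {"name": "A. Patel", "email": "a@example.com", "skin_type": "oily"}
safe = pseudonymize(record, salt="per-deployment-secret")
```

The same record can still be linked across datasets (the pseudonym is stable for a given salt), which is exactly why regulators treat pseudonymized data as still personal data.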

Further, AI poses unique challenges and risks with respect to privacy breaches and cybersecurity
threats, which have an obvious negative impact on patients and providers. Some of these challenges
are elaborated below.

1) Issues and Concerns

1.1.Data Privacy-

 The recent cases of high-profile corporations abusing consumer data, even outside the
beauty and healthcare industry, continue to raise concerns over how beauty brands and
retailers can strike the right balance between personalization and customer privacy
protection.

 Data associated with payment methods or basic personal information such as gender
and age can be considered the number one issue. However, it's even more
complicated in the context of AI-powered beauty technologies. AI/AR-enabled
applications require detailed information about consumers' skin concerns and even
their selfie images.

 Therefore, a robust and advanced customer-data protection infrastructure is a must for
any beauty brand or retailer operating in an era of tighter control over personal data,
especially where legislation like the General Data Protection Regulation (GDPR) applies.

1.2. Data Breach-


 Data breaches can significantly damage brand reputation, erode customers' trust, and
make customers reluctant to share information about themselves.
 According to Salesforce, 65% of customers have stopped purchasing from brands that
acted suspiciously with their data45.
 To cope with these matters, beauty industry players are increasing their focus on
protecting customer data and carefully choosing trusted partners for collaboration.

1.3. Data Security-

 Even in the absence of bias, errors, and other confounders, health systems must
remain vigilant for signs of cyber intrusion. Malicious actors are increasingly holding
data hostage in exchange for ransom, often to the tune of millions of dollars.
45
For additional information, you may visit- https://www.salesforce.com/news/stories/state-of-the-
connected-customer-report-outlines-changing-standards-for-customer-engagement/
 In September 2020, a ransomware attack at a Dusseldorf university hospital in
Germany resulted in emergency-room diversions to other hospitals 46.

2) Legislative Framework-

 Organizations using personal information for AI may face tensions when attempting
to comply with global data protection laws. Data protection laws exist throughout the
world and generally apply to the collection, use, processing, disclosure, and security
of personal information. These laws may also restrict cross-border personal
information transfers. Certain countries have comprehensive data protection laws
that restrict AI and automated decision-making involving personal information.
Other countries, such as the USA, as mentioned above, do not have a single,
comprehensive federal law regulating privacy and automated decision-making but
rather sectoral laws.

 Further, it is pertinent to note that while no legislation currently in force contains any
provision specific to data collected in the beauty industry, health data is given special
importance over general data. Thus, across jurisdictions, data collected in the beauty
industry falls within the ambit of the general data protection regime, while special
provisions (as discussed below) also apply to health data.

2.1. EU- General Data Protection Regulations (“GDPR”)

 Before going into the technicalities of the GDPR, it is important to understand that
the GDPR applies regardless of the means by which personal data is processed, and
therefore applies when an AI system is used to process personal data (irrespective of
whether the system is produced for the beauty or healthcare industry). However, it does
place data concerning health in a special category 47.

2.2. Informed Consent-

 The GDPR stipulates that data subjects must always be informed about the use of
algorithmic decision-making.48 Autonomous decision-making which affects humans
is prohibited under the GDPR unless this is necessary for the performance of a
contract, permitted by law or is based on the explicit consent of the data subject 49.

46
For more details, you may refer to- https://www.wired.co.uk/article/ransomware-hospital-death-germany .
47
Art 9(1), GDPR.
48
Art. 13(2)(f) and 14(2)(g), GDPR.
49
Art. 22(1) and 22(2), GDPR.
 Therefore, by application of the GDPR, where a patient’s data is concerned, the
patient should be well informed of the logic involved in the algorithmic decision. 50
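In engineering terms, the rule above amounts to a gate in front of any solely automated decision. The sketch below is purely illustrative (the enum names and gate logic are the author's assumptions for the example, not legal advice or an exhaustive reading of Article 22):

```python
from enum import Enum, auto
from typing import Optional

class LawfulBasis(Enum):
    CONTRACT_NECESSITY = auto()   # necessary for the performance of a contract
    AUTHORISED_BY_LAW = auto()    # permitted by applicable law
    EXPLICIT_CONSENT = auto()     # explicit consent of the data subject

def may_automate_decision(basis: Optional[LawfulBasis], subject_informed: bool) -> bool:
    """Allow a solely automated decision only when one of the permitted bases
    exists AND the data subject has been informed of the logic involved."""
    return basis is not None and subject_informed

allowed = may_automate_decision(LawfulBasis.EXPLICIT_CONSENT, subject_informed=True)
blocked = may_automate_decision(None, subject_informed=True)
```

The point of the sketch is that the consent/lawful-basis check and the transparency obligation are conjunctive: failing either one should block the automated decision path.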

2.3. Data Protection- in general-


 Pursuant to Article 4(1), GDPR personal data means “any information relating to an
identified or identifiable natural person (‘data subject’); an identifiable natural person
is one who can be identified, directly or indirectly, in particular by reference to an
identifier such as a name, an identification number, location data, an online identifier
or to one or more factors specific to the physical, physiological, genetic, mental,
economic, cultural or social identity of that natural person”.

2.4. Data Protection- Health Data

 AI applications process, collect and analyse personal data, which in the healthcare
sector often includes sensitive information about patients’ health, such as medical
records and medical images of the body.

 Further, data concerning health is defined as “personal data related to the physical
or mental health of a natural person, including the provision of health care services,
which reveal information about that person's health status” 51.

 Processing of data concerning health is permitted if any one of the requirements
prescribed under Article 9(2) of the GDPR is met. These requirements include the
following:
i. For the purposes of preventative or occupational medicine, for the
assessment of the working capacity of an employee, medical diagnosis, the
provision of health or social care or treatment or the management of health
and social care systems and services52.
ii. In the area of public health, such as protecting against serious cross-border
threats to health or ensuring high standards of quality and safety of health
care and of medicinal products or medical devices 53

3. US- Health Insurance Portability and Accountability Act (“HIPAA”)-

 The Act aims to protect the privacy and security of health data in the USA. In
order to implement the requirements of HIPAA, three Rules were enacted by the US
Department of Health and Human Services (HHS) – the Privacy Rule, the Security
Rule, and the Breach Notification Rule 54.
50
Art. 13(2)(f) and 14(2)(g), GDPR.
51
Art. 4(15), GDPR.
52
Article 9(2)(h), GDPR
53
Article 9(2)(i), GDPR
54
Full text of the Regulation may be accessed at https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

 The Privacy Rule lays down standards for the use and disclosure of Protected Health
Information (PHI) by entities subject to the Rule. The Security Rule lays down
standards for the protection of electronic Protected Health Information (e-PHI). The
Breach Notification Rule sets out the guidelines that ‘covered entities’ must follow in
the event of a breach of the Privacy Rule.

 Under HIPAA, PHI stands for Protected Health Information, which is any information
related to the health status of an individual. This can include the provision of
health care, medical records and/or payment for the treatment of a particular patient,
where such information can be linked to him or her.

 Further, as per the Act, a ‘covered entity’ refers to “a health plan; a healthcare
clearinghouse; or any healthcare provider who transmits any health information in
electronic form in connection with a transaction” as under the Act 55.

4. Position in India-

 There are currently no dedicated regulations in India for the protection of data used
in artificial intelligence and machine learning in the beauty and cosmetics sector.
However, there is a general framework that protects the usage of data in these
sectors, governed by the IT Rules released in 2011, which address the issue of data
protection and misuse by corporate entities. Beyond these, a few draft data-specific
legislations were recently shelved; they are discussed below.

4.1. Information Technology (Reasonable security practices and procedures and sensitive
personal data or information) Rules, 2011.

 These Rules are framed under Section 43A of the IT Act. 56 The legal framework
mandates that a “body corporate” must protect “sensitive personal data or
information” when providing any service or performing under a contract, adhere to
certain standards, and pay compensation to the affected person in the event of an
“intentional personal data breach” under Section 72A of the IT Act. 57 Such
information includes “medical records and history”.58

55
Full text of the Regulation may be accessed at https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf
56
Section 43A, Information Technology Act 2000.
57
§ 72A, Information Technology Act 2000.
58
Rule 3, IT Rules 2011.
 The body corporate is obligated to provide a privacy policy and make it available to
the provider of the data,59 who must give informed consent to the purpose of such
collection and may withdraw that consent later. 60

4.2. Compliance in using data for the purpose of AI or Machine learning

Reasonable Security Practices and Procedures 61 — under Rule 8 of G.S.R. 313(E), issued in
exercise of the powers conferred by clause (ob) of sub-section (2) of section 87 read with
section 43A of the Information Technology Act, 2000 (21 of 2000).

 Under these Rules, a body corporate or a person on its behalf shall be considered to have
complied with reasonable security practices and procedures if they have implemented such
security practices and standards and have a comprehensively documented information
security programme and information security policies containing managerial, technical,
operational and physical security control measures commensurate with the information
assets being protected and the nature of the business.

 The International Standard IS/ISO/IEC 27001 on "Information Technology - Security
Techniques - Information Security Management System - Requirements" is one such
standard referred to in sub-rule (1).

 Any industry association or an entity formed by such an association, whose members are
self-regulating by following other than IS/ISO/IEC codes of best practices for data protection
as per sub-rule(1), shall get its codes of best practices duly approved and notified by the
Central Government for effective implementation.

 The body corporate or a person on its behalf who has implemented either the IS/ISO/IEC
27001 standard or the codes of best practices for data protection as approved and notified
under sub-rule (3) shall be deemed to have complied with reasonable security practices and
procedures, provided that such standard or codes have been certified or audited on a
regular basis by an independent auditor duly approved by the Central Government. The
audit of reasonable security practices and procedures shall be carried out by such an
auditor at least once a year, or whenever the body corporate or a person on its behalf
undertakes a significant upgrade of its processes and computer resources.

4.3. Preventive protection from data theft for AI & ML

Direction No. 20(3)/2022-CERT62, Government of India, Ministry of Electronics and
Information Technology (MeitY), Indian Computer Emergency Response Team (CERT-In)

 CERT-In has issued directions under sub-section (6) of section 70B of the Information
Technology Act, 2000 relating to information security practices, procedure, prevention,
response and reporting of cyber incidents for a Safe & Trusted Internet, to augment and
strengthen the cyber security in the country.

59
Rule 4, IT Rules 2011.
60
Rule 5, IT Rules 2011.
61
Full text may be accessed at-
https://www.meity.gov.in/writereaddata/files/GSR313E_10511%281%29_0.pdf.
62
Full text of the directions may be accessed at- https://privacyblogfullservice.huntonwilliamsblogs.com/wp-
content/uploads/sites/28/2022/05/CERT-In_Directions_70B_28.04.2022-2.pdf.
 The directions are obligatory and must be complied with by service providers,
intermediaries, data centres, body corporates, Government organisations, Virtual Private
Server (VPS) providers, Cloud Service providers, Virtual Private Network services, virtual
asset service providers, virtual asset exchange providers and custodian wallet providers,
as non-compliance attracts the applicable punishment/penalty provisions.

 All service providers, intermediaries, data centres, body corporate and Government
organisations shall connect to the Network Time Protocol (NTP) Server of the National
Informatics Centre (NIC) or National Physical Laboratory (NPL) or with NTP servers traceable
to these NTP servers, for synchronization of all their ICT systems clocks. 
 Any service provider, intermediary, data centre, body corporate or Government
organisation shall mandatorily report the below-mentioned cyber incidents within 6 hours
of noticing such incidents or being notified about them:
i. Targeted scanning/probing of critical networks/systems 
ii. Compromise of critical systems/information 
iii. Unauthorised access of IT systems/data
iv. Defacement of website or intrusion into a website and unauthorised changes such
as inserting malicious code, links to external websites etc. 
v. Malicious code attacks such as the spreading of virus/worm/Trojan/Bots/
Spyware/Ransomware/Cryptominers 
vi. Attack on servers such as Database, Mail and DNS and network devices such as
Routers 
vii. Identity Theft, spoofing and phishing attacks 
viii. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks 
ix. Attacks on Critical infrastructure, SCADA and operational technology systems and
Wireless networks 
x. Attacks on Application such as E-Governance, E-Commerce etc. 
xi. Data Breach
xii. Data Leak 
xiii. Attacks on Internet of Things (IoT) devices and associated systems, networks,
software, servers 
xiv. Attacks or incident affecting Digital Payment systems 
xv. Attacks through Malicious mobile Apps
xvi. Fake mobile Apps 
xvii. Unauthorised access to social media accounts 
xviii. Attacks or malicious/ suspicious activities affecting Cloud computing
systems/servers/software/applications 
xix. Attacks or malicious/suspicious activities affecting systems/ servers/ networks/
software/ applications related to Big Data, Blockchain, virtual assets, virtual asset
exchanges, custodian wallets, Robotics, 3D and 4D Printing, additive manufacturing,
Drones 
 When required by order/direction of CERT-In, for the purposes of cyber incident response,
protective and preventive actions related to cyber incidents, the service
provider/intermediary/data centre/body corporate is mandated to take action or provide
information or any such assistance to CERT-In, which may contribute towards cyber security
mitigation actions and enhanced cyber security situational awareness. 
 Therefore, preventive measures regarding data theft in the AI and machine learning
domain for companies working in beauty and cosmetics include constant reporting to
CERT-In, which is empowered to take action against such misconduct.

4.4.Pending Legislative Measures for AI & ML- Data Protection Perspective

 As per the PDP Bill, “personal data” refers to information about an individual’s features,
qualities, or attributes of identity that can be used to identify them.
 The PDP Bill allows for a data fiduciary, whether in the case of a natural or legal individual, to
process personal data under certain conditions, including purpose, processing, and storage
limitations. Personal data processing, for example, should not be permitted unless there is a
specific, clear, and lawful reason for doing so. Accordingly, the data fiduciaries are obligated
towards ensuring transparency and accountability, together with instituting grievance
redressal mechanisms dealing with individual complaints 63
 It further requires substantial data fiduciaries that handle sensitive personal data to
complete a data security review before proceeding with any procedure involving the use of
emerging technology, extensive profiling, or the use of sensitive personal data. 64
 The PDP Bill envisions the data principal having a number of rights, including the right to
seek assurance from the fiduciary regarding collection of their personal data, the right to
restrict continued disclosure of such data by a fiduciary, data correction and erasure, data
portability, and so on.65 It also aims to clarify different aspects of consent that are relevant
for the processing of personal data. 66 The PDP Bill does, however, list the grounds for
collecting personal data without consent, which include responding to any medical
emergency involving a threat to the life, or a severe health compromise, of the data
principal or any other individual.67

4.5. Draft DISHA and PDP Bill-

 The proposed Digital Information Security in Healthcare Act governs Electronic
Health Data collected in India and places data protection obligations on medical
establishments and other entities collecting such data 68.

 The Act is focused on health data collected at clinical establishments but widens its
scope to additionally cover other forms of health data generated.

63
Personal Data Protection Bill, 2019, chap. II.
64
Personal Data Protection Bill, 2019, cl. 27.
65
Personal Data Protection Bill, 2019, chap IV.
66
cl. 11, Personal Data Protection Bill, 2019.
67
cl. 12, Personal Data Protection Bill, 2019.
68
For additional information, you may visit- https://www.mondaq.com/india/healthcare/1059266/disha-
india39s-probable-response-to-the-law-on-protection-of-digital-health-data
4.5.1. Analysis of overlapping provisions under DISHA and PDP Bill:

 The Personal Data Protection Bill includes “data related to the state of mental and
physical health of the data principal” as well as data collected in the course of the
provision of health services in its definition of the term “health data”.

 DISHA covers the two above-mentioned forms of data as well as “information derived
from the testing or examination of a body part” in its definition of Electronic Health
Data.

 Moreover, the DISHA regulations prevent health data being collected on fitness
trackers from being used for medical research. It allows only for electronic health
data captured by clinical establishments to be used for any academic research and
does not allow data captured by other entities to be used for this purpose under
Section 29.

PART 5- OTHER LEGAL CONSIDERATIONS

1. CONTRACTUAL EXPOSURE

The parties to a complex contractual setup may fall prey to risks under any of the following
clauses due to an underlying loosely drafted contract:

i. expectations regarding services

ii. representation and warranties

iii. indemnification

iv. insurance

v. dynamic nature of laws


 These risks make the assessment of contractual exposure relevant for businesses
entering into contracts relating to the use of AI in their products or services.

2. PRODUCTS LIABILITY

 As companies increasingly integrate AI into their products and systems, the potential
for AI to cause injury and property damage is growing. AI's ability to act
autonomously raises novel legal issues. An overarching question is how to assign
fault when the product at issue incorporates AI.
 Since no laws currently address injuries caused by AI, litigants and courts have begun
to test the application of traditional legal theories to injuries involving AI products,
such as self-driving vehicles and workplace robots.
 Product liability law determines who is at fault when a product injures someone. It is
generally based on state common and statutory law, including theories of Breach of
Warranty and Strict Liability.
 A recently reported failure is that of IBM Watson which, though initially meant to help
fight cancer, produced bizarre treatment recommendations, including one case where it
suggested giving a cancer patient with severe bleeding a drug that could worsen the
bleeding. While the real-world outcome in this case was not unfortunate, such a
mishap raises serious questions of liability, specifically when AI is used in
healthcare.

3. ANTI-TRUST CONCERNS-

 The general concerns applicable to big data and AI apply here.

 An issue-specific example: global cosmetics manufacturer L’Oréal has developed a
system called TrendSpotter that uses artificial intelligence to analyse millions of
comments, images and videos posted online. The aim is to spot trends before rivals do,
so that L’Oréal can develop on-trend products and incorporate into its online marketing
the terms and topics that are engaging consumers now 69.

 Businesses are increasingly using AI to respond to market conditions faster, innovate
their product offerings, set prices, and more. For example, AI pricing algorithms can:

i. Assimilate and almost instantly process significant amounts of information relating
to competitors' prices, demand, the price and availability of substitutes, and even
customer personal data.

ii. Respond almost immediately to changes in the market or competitor pricing.

iii. Set prices to achieve a business objective consistently across all sales.
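A toy sketch of such a pricing algorithm shows how quickly these capabilities combine. The rule, function name and parameters below are purely illustrative assumptions, not any vendor's actual system:

```python
def reprice(our_price: float, competitor_prices: list, floor: float,
            undercut: float = 0.01) -> float:
    """Illustrative dynamic pricing rule: undercut the cheapest rival by 1%,
    but never drop below a cost-based floor price."""
    if not competitor_prices:
        return our_price  # no market signal: hold the current price
    target = min(competitor_prices) * (1 - undercut)
    return round(max(target, floor), 2)

# Reacting almost instantly to a rival's price change:
price = reprice(our_price=10.00, competitor_prices=[9.80, 10.50], floor=8.00)
```

The same few lines, run continuously by two competitors against each other's published prices, illustrate how algorithmic pricing can drift toward the coordinated outcomes that the antitrust concerns below describe, even without any human agreement.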

 Therefore, while the benefits of AI from a commercial perspective are clear, its use
raises potential antitrust risks, specifically relating to unlawful, anticompetitive
agreements. Artificial intelligence could thus be used in any of the following anti-
competitive ways:

i. Be used to facilitate price-fixing agreements among competitors (see AI Aiding in
Collusion).

ii. Reach anticompetitive agreements with other AI systems (see Independent AI
Collusion).

69
For additional information, you may visit- https://www.digitalcommerce360.com/2022/02/24/how-loreal-
uses-ai-to-stay-ahead-of-its-competition/
PART 6: AI AND BLOCKCHAIN

1. What is blockchain?
Blockchain may be defined as a shared, immutable ledger that provides an immediate,
shared and transparent exchange of encrypted data simultaneously to multiple parties as
they initiate and complete transactions. A blockchain network may be used by businesses in
multiple ways, as it can track orders, payments, accounts, production, and much more 70.
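The “immutable ledger” idea can be illustrated with a minimal hash chain. This is a toy sketch of the underlying principle, not a production blockchain (no consensus, networking or signatures):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Each new block commits to the previous one via its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    """Re-check every link; tampering with earlier data breaks a later 'prev'."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"order": "serum-01", "qty": 5})
append_block(chain, {"payment": 49.99})
ok_before = verify(chain)        # chain is intact
chain[0]["data"]["qty"] = 500    # tamper with history
ok_after = verify(chain)         # verification now fails
```

Because each block's hash depends on everything before it, editing a past entry invalidates every subsequent link, which is the property that makes the ledger effectively immutable.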

2. Combined Values of AI and Blockchain

2.1.Authenticity: Blockchain’s digital record offers insight into the framework behind AI and
the provenance of the data it is using, addressing the challenge of explainable AI. This
helps improve trust in data integrity and, by extension, in the recommendations that AI
provides. Using blockchain to store and distribute AI models provides an audit trail, and
pairing blockchain and AI can enhance data security. 
2.2. Augmentation: AI can rapidly and comprehensively read, understand and correlate data
at incredible speed, bringing a new level of intelligence to blockchain-based business
networks. By providing access to large volumes of data from within and outside of the
organization, blockchain helps AI scale to provide more actionable insights, manage data
usage and model sharing, and create a trustworthy and transparent data economy.
2.3. Automation: AI, automation and blockchain can bring new value to business processes
that span multiple parties — removing friction, adding speed and increasing efficiency.
For example, AI models embedded in smart contracts executed on a blockchain can
recommend expired products to recall, execute transactions — such as re-orders,
payments, or stock purchases based on set thresholds and events — resolve disputes,
and select the most sustainable shipping method. 
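The threshold-triggered automation described above can be sketched as follows. The class, function names and thresholds are hypothetical, standing in for logic that would run in, or alongside, a smart contract:

```python
from dataclasses import dataclass

@dataclass
class StockItem:
    sku: str
    quantity: int
    reorder_level: int
    expired: bool = False

def next_actions(items: list) -> list:
    """Emit the actions a contract would execute: recalls for expired
    products, re-orders when stock falls below the agreed threshold."""
    actions = []
    for item in items:
        if item.expired:
            actions.append(f"recall:{item.sku}")
        elif item.quantity < item.reorder_level:
            actions.append(f"reorder:{item.sku}")
    return actions

inventory = [StockItem("serum-01", 3, 10),
             StockItem("mask-02", 50, 10, expired=True)]
actions = next_actions(inventory)
```

On a real blockchain deployment, each emitted action would be a transaction recorded on the shared ledger, so every party sees the same trigger and the same outcome.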

3. Few use cases:


3.1. BurstIQ: It is a provider of blockchain-enabled data solutions for the healthcare
industry71.

3.2. Vytalyx: Vytalyx is a health technology company that plans to use AI to provide health
professionals with access to intelligence and insights in context across multiple big data
sources – all through decentralization, cryptography and utilization of blockchain 72.
3.3. Finalize: It is a software platform that uses blockchain and machine learning to build
applications aimed at improving civil infrastructure. The company’s tools automate
and speed up construction industry workflow, management, and verification
processes, and its technology also integrates with wearables to meet safety
regulations73.
70
For a detailed discussion, you may refer to- https://www.ibm.com/in-en/topics/blockchain-ai#:~:text=Next%20Steps-,Defining%20blockchain%20and%20AI,%2C%20production%2C%20and%20much%20more
71
The website of the company may be accessed at- https://burstiq.com/ .
72
The website of this company may be accessed at- https://vytalyx.io/.
73
For more details, you may refer to- https://www.geeksforgeeks.org/integration-of-blockchain-and-ai/ .
