
vera.ai: VERification Assisted by Artificial Intelligence

Akis Papadopoulos (@sympap)


vera.ai coordinator, Senior Researcher @ CERTH/ITI, Head of MeVer Group (@meverteam)

TITAN kick-off meeting, 21 Sep 2022


vera.ai consortium

Participant No. | Participant organisation name | Country
1 (Coordinator) | Centre for Research and Technology Hellas | Greece
2 | The University of Sheffield | UK
3 | Università di Urbino Carlo Bo | Italy
4 | Fraunhofer Institute for Digital Media Technology | Germany
5 | University of Amsterdam | The Netherlands
6 | Kempelen Institute of Intelligent Technologies | Slovakia
7 | Università degli Studi di Napoli Federico II | Italy
8 | Borelli Center, ENS Paris-Saclay | France
9 | Athens Technology Centre | Greece
10 | Sirma AI EAD (trading as Ontotext) | Bulgaria
11 | AFP news agency | France
12 | Deutsche Welle | Germany
13 | EU DisinfoLab | Belgium
14 | European Broadcasting Union | Switzerland

In short:
● 8 research partners
● 2 commercial organisations with relevant products
● 2 global news and fact-checking providers
● an NGO focusing on tackling disinformation at EU level
● the world’s biggest union of public service broadcasters
overall goal
Develop novel AI- and network science-based methods that assist verification professionals
throughout the complete content verification workflow.
vera.ai specific objectives
CONTENT
SO1: AI methods for content analysis, enhancement, and evidence retrieval.
SO2: AI tools for the detection of deepfake, synthetic media and manipulated content.

DISINFORMATION SPREAD
SO3: Discovery, tracking, and impact measurement of disinformation narratives and campaigns across social
platforms, modalities, and languages, through integrated AI and network science methods.
HUMAN-AI
SO4: Intelligent verification and debunk authoring assistant, based on chatbot NLP technology, to
explain the AI outputs and guide its professional users through the verification process.
SO5: Fact-checker-in-the-loop approach to gather new feedback as a side effect of verification.

SUSTAINABILITY
SO6: Adoption and sustainability of the new AI tools in real-world applications through integration in
leading verification tools and AI platforms with established large stakeholder communities.
SO1: Multilingual and multimodal trustworthy AI methods
for content analysis, enhancement, and evidence retrieval

Expected Measurable Outcomes
● Content and source credibility extraction methods
● Deep keyframe selection and enhancement
● Audio analysis tools
● Text extraction from social media images and screenshots
● Visual location estimation
● Cross-modal decontextualization detection
● Perceptual audio matching
● Multi-modal cross-lingual matching and retrieval

Beyond the State of the Art
● Explore content analysis beyond English
● Explore automatic detection of more subtle signals (instead of the studied credibility indicators), such as high subjectivity or the presence of argumentation fallacies
● Speed up time-consuming verification of social media posts (retrieve, analyse, summarize and show contextually relevant ‘evidence snippets’)
● Explainability (inform users about models’ decisions)
● Decontextualised content: detection of text-image, audio-text and text-video misuse
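To illustrate the multi-modal cross-lingual matching and retrieval outcome, here is a minimal sketch of embedding-based retrieval. The embeddings and item names are toy stand-ins: in a real system they would come from a multilingual vision-language encoder (e.g. a CLIP-style model), which the deck does not specify.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_emb, index):
    """Return item ids ranked by similarity to the query embedding.

    `index` maps an item id to its embedding. With a shared multilingual
    embedding space, an image query can match captions in any language.
    """
    scores = {item_id: cosine_sim(query_emb, emb) for item_id, emb in index.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy embeddings standing in for encoder outputs.
index = {
    "caption_de": np.array([0.9, 0.1, 0.0]),
    "caption_el": np.array([0.1, 0.9, 0.1]),
    "caption_fr": np.array([0.0, 0.2, 0.95]),
}
query = np.array([0.85, 0.15, 0.05])  # embedding of an image query

print(retrieve(query, index)[0])  # closest caption to the query
```

In practice the index would hold millions of items and use an approximate nearest-neighbour structure rather than a full scan, but the ranking principle is the same.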
SO2: Multimodal trustworthy AI tools for the detection of
deepfake, synthetic media and manipulated content

Expected Measurable Outcomes
● Deepfake detection on visual content
● Deepfake audio detection
● Synthetic text detection
● Image manipulation detection
● Multimodal deepfake detection

Beyond the State of the Art
● Visual deepfake detection
○ Address poor generalisation to unseen/novel synthesis methods
○ Combine video and audio
○ Focus on robustness of models to adversarial attacks
● Audio content
○ Extend SotA forensics approaches based on footprint analysis
○ Develop advanced speech synthesis detection approaches
○ Improve audio phylogeny analysis with respect to efficiency
○ Audio search within the verification process
● AI-generated text detection
● Multimodal deepfake detection methods
SO3: Enable the discovery, tracking, and impact measurement of
disinformation narratives and campaigns across social platforms, modalities,
and languages, through integrated AI and network science methods

Expected Measurable Outcomes
● Spatio-temporal analysis of disinformation narratives and campaigns
● Network-based coordinated sharing behaviour analysis
● Multidimensional disinformation impact assessment methodology
● Disinformation alerting mechanism

Beyond the State of the Art
● Hierarchical semantic model of disinformation that will provide a uniform representation of disinformation narratives and campaigns, leveraging data from fact-checking organizations and debunks in the DBKF / EDMO
● AI methods for cross-lingual and cross-border analysis of disinformation narratives/campaigns and their evolution over time
● Investigate and develop new methods to surface coordinated sharing behaviour with a content-agnostic approach, incl. sophisticated graph analysis using Graph Neural Networks
● Understanding of disinformation impact and user alerting
○ Tackle the absence of labelled data
○ Take advantage of weak crowdsourced signals
○ Utilize auditing studies
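The content-agnostic idea behind coordinated sharing behaviour analysis can be sketched very simply: link accounts that repeatedly share the same URL within a short time window, without looking at what the content says. The thresholds and data below are illustrative only, not the project's actual method; a real pipeline would tune them and could feed the resulting co-share graph into further analysis (e.g. Graph Neural Networks).

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(shares, window=60.0, min_co_shares=2):
    """Surface account pairs with repeated rapid co-shares of the same URL.

    `shares` is a list of (account, url, timestamp_seconds) tuples.
    Returns the set of account pairs with at least `min_co_shares`
    co-shares occurring within `window` seconds of each other.
    """
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for url, posts in by_url.items():
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return {pair for pair, n in pair_counts.items() if n >= min_co_shares}

# Toy share log: acct_a and acct_b post the same links seconds apart.
shares = [
    ("acct_a", "http://example.org/1", 0), ("acct_b", "http://example.org/1", 10),
    ("acct_a", "http://example.org/2", 500), ("acct_b", "http://example.org/2", 540),
    ("acct_c", "http://example.org/1", 9000),  # shares much later: not flagged
]
print(coordinated_pairs(shares))  # {('acct_a', 'acct_b')}
```

Because only URLs and timestamps are used, the same analysis works across languages and modalities, which is what makes the approach content-agnostic.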
SO4: Intelligent verification and debunk authoring assistant, based on chatbot
NLP technology, to explain the AI outputs and guide its professional users
through the verification process
Expected Measurable Outcomes
● Intelligent verification and debunk authoring assistant
● Assist professionals with AI tool adoption
● Alerting
● Debunk authoring
● Continuous collection of new data

Beyond the State of the Art
● End-user platform guidance
● Automated question-answering
● Explaining AI model outputs
● Assist the professionals with debunk authoring

SO5: Fact-checker-in-the-loop approach to seamlessly gather new actionable
feedback as a side effect of verification workflows

Expected Measurable Outcomes
● Experiments on AI approaches for continuous learning under limited, biased and noisy training data
● User feedback on mistakes made by the AI models

Beyond the State of the Art
● Automatically extract new training data
● Automatic re-training or fine-tuning of our models
SO6: Adoption and sustainability of the new AI tools in real-world
applications through integration in leading verification tools and
AI platforms with established large stakeholder communities

Expected Measurable Outcomes
● Co-created AI requirements and UI designs
● New release of the vera.ai-boosted Truly Media / EDMO
● New release of the vera.ai-boosted verification plugin
● Contributions of AI components to the AI on Demand platform (AI4EU) and the European Language Grid
● Enriched Database of Known Fakes
● User training, engagement, and support

Beyond the State of the Art
● Addition of cutting-edge explainable, fair, and robust AI methods
● User-centric approach to UX design
● AI-based chatbot
● Tools will be made available to other relevant research initiatives for free reuse, experimentation, and low-overhead integration
vera.ai workplan
Analysis of needs through surveys and co-creation workshops; evaluation of results (WP2)
Researching and building AI technologies against disinformation (WP3, WP4)
Integration into user-facing tools (WP5)
vera.ai assets

InVID-WeVerify Verification plugin (53,000+ weekly users)
Truly Media (EDMO technical platform)
Database of Known Fakes (DBKF)
some existing verification tools by vera.ai partners
open science approach of vera.ai

● openness of research and outcomes is a fundamental principle of vera.ai
● different levels of openness:
○ datasets
○ methods
○ tools
○ code
● more controlled access to results where there is a risk of disclosing sensitive information to adversaries
● even though our research is tailored to professionals, citizens might benefit (depending on media literacy and motivation) → possibilities for collaboration with TITAN
Thank you for your attention!

Dr. Symeon Papadopoulos (@sympap, papadop@iti.gr)


Senior Researcher @ CERTH/ITI, Head of @meverteam
vera.ai Project Coordinator

Keep up to date with vera.ai on Twitter (@veraai_eu) and
on the Web (www.veraai.eu)!
