Finding subjectivity clues; sentence and clause-level
Sentiment Retrieval Using Generative Models
authors: Eguchi, K. & Lavrenko, V.
read by: John Knox
Sentiment retrieval: combined IR and
sentiment classification
Novel idea
Does it work? No recall reported and low precision… but competitive with similar systems
Automatic Identification Of Sentiment
read by: Michael Lipschultz
authors: Wilson, T., Wiebe, J., Hoffmann, P.

Two-step approach: neutral vs. polar classification, then determine polarity
Discussion of the role of “not” and “will”
Reduce some errors by allowing neutral
terms in stage 2
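The two-step idea above can be sketched as follows (a toy illustration; the lexicon, negation handling, and fallback-to-neutral rule are assumptions, not the authors' actual features):

```python
# Toy sketch of a two-stage classifier in the spirit of the
# neutral-polar / polarity split: stage 1 separates neutral from
# polar instances, stage 2 resolves contextual polarity. The lexicon
# and negation handling are illustrative assumptions.

PRIOR_POLARITY = {"great": "positive", "awful": "negative",
                  "good": "positive", "bad": "negative"}

def stage1_neutral_vs_polar(tokens):
    """Stage 1: mark as 'polar' if any token has a prior polarity."""
    return "polar" if any(t in PRIOR_POLARITY for t in tokens) else "neutral"

def stage2_polarity(tokens):
    """Stage 2: resolve polarity; 'not' flips it, and instances with
    no surviving clue fall back to 'neutral' (the error-reduction idea)."""
    label, negated = None, False
    for t in tokens:
        if t == "not":
            negated = True
        elif t in PRIOR_POLARITY:
            label = PRIOR_POLARITY[t]
            if negated:
                label = "negative" if label == "positive" else "positive"
    return label or "neutral"  # allowing neutral terms in stage 2

def classify(sentence):
    tokens = sentence.lower().split()
    if stage1_neutral_vs_polar(tokens) == "neutral":
        return "neutral"
    return stage2_polarity(tokens)

print(classify("the plot was not good"))    # negative
print(classify("the movie ran two hours"))  # neutral
```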
Relation to “Identifying Subjective Adjectives
through Web-based Mutual Information,”
Baroni, M. & Vegnaduzzo, S.
This work is concerned with finding opinions
Wilson et al. is concerned with the next step
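The web-based mutual information idea can be sketched roughly like this (the hit counts and seed word are invented; a real system would query a web search engine for page counts):

```python
import math

# Sketch of (pointwise) mutual information for spotting subjective
# adjectives: score a candidate by its association with a known
# subjective seed word. All counts here are made-up assumptions.

N = 1_000_000          # assumed total page/corpus count
hits = {"stunning": 900, "yellow": 1200, "opinionated": 150}
cooc = {("stunning", "opinionated"): 40, ("yellow", "opinionated"): 1}

def pmi(word, seed):
    """PMI(word, seed) = log2( P(word, seed) / (P(word) * P(seed)) )."""
    joint = cooc.get((word, seed), 0)
    if joint == 0:
        return float("-inf")
    return math.log2(joint * N / (hits[word] * hits[seed]))

# A subjective adjective should score higher against a subjective seed.
print(pmi("stunning", "opinionated") > pmi("yellow", "opinionated"))  # True
```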
Using Emoticons to Reduce Dependency in Machine Learning Techniques for Sentiment Classification
authors: Read J.
read by: Yaw Gyamfi

Lack of rigor in analysis
Rare work in that it looks at temporal dependency… but is it persuasive?
Title is misleading – emoticons are used more for reducing annotation costs
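The annotation-cost point can be illustrated with a minimal distant-supervision sketch (the emoticon sets and the stripping step are assumptions for illustration):

```python
# Sketch of emoticons as noisy labels: harvest a labeled training set
# without human annotators by treating emoticons as sentiment tags,
# then stripping them so a classifier cannot simply memorize them.

POSITIVE = {":)", ":-)", ":D"}
NEGATIVE = {":(", ":-("}

def auto_label(text):
    """Return (cleaned_text, label), or None if no emoticon is present."""
    tokens = text.split()
    if any(t in POSITIVE for t in tokens):
        label = "positive"
    elif any(t in NEGATIVE for t in tokens):
        label = "negative"
    else:
        return None
    # remove the emoticon itself from the training text
    cleaned = " ".join(t for t in tokens if t not in POSITIVE | NEGATIVE)
    return cleaned, label

print(auto_label("loved every minute :)"))  # ('loved every minute', 'positive')
print(auto_label("no emoticon here"))       # None
```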
Identifying Expressions of Opinion in Context
authors: Breck, E., Choi, Y. & Cardie, C.
read by: Matt McGettigan
Reviewer impressed with performance (close to human annotators)… but…
Concern with evaluation standards
Subjective phrases as a natural extension
from subjective adjectives
Feature Subsumption for Opinion Analysis
authors: Riloff, E., Patwardhan, S. & Wiebe, J.
read by: Mahesh
Considering dependencies among features
can add considerable performance
Considering POS as subsuming unigrams
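The subsumption idea can be sketched as a set-containment check over the texts each feature matches (the example features, data, and containment test are invented for illustration):

```python
# Sketch of feature subsumption: a more general feature (e.g. the
# unigram "hate") subsumes a more specific one (e.g. the bigram
# "hate this") when it matches a superset of the texts; the specific
# feature is then redundant unless it is sufficiently more precise.

def texts_matching(feature, texts):
    """Indices of texts that contain the feature (substring match here)."""
    return {i for i, t in enumerate(texts) if feature in t}

def subsumes(general, specific, texts):
    """True if every text matched by `specific` is matched by `general`."""
    return texts_matching(specific, texts) <= texts_matching(general, texts)

texts = ["i hate this film", "i hate mondays", "this film is fine"]
print(subsumes("hate", "hate this", texts))  # True: "hate this" is redundant
print(subsumes("film", "hate", texts))       # False
```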
Mining the Peanut Gallery: Opinion
Extraction and Semantic Classification of
Product Reviews
authors: Dave, K., Lawrence, S. & Pennock, D.
Supposedly comparing IR vs. machine learning techniques, but the IR approach skews heavily towards machine learning
Comparison to emoticon paper: explicit rating (self-tagging) vs. automatic labeling
Granularity: some features that help at e.g.
sentence level are less useful at the
document level
Extracting Appraisal Expressions
authors: Bloom, K., Garg, N. & Argamon, S.
read by: Danielle Mowery

Significantly different annotation schema
compared to MPQA
Author evaluated system output post facto
Can’t evaluate precision
Ample opportunity for bias
Major themes
IR for sentiment analysis
Problems with evaluation standards
  Weak standards
  Lack of rigor in analysis
  Not enough data supplied (e.g. accuracy only)
Increasing sophistication of features
  Multi-stage approaches
  Dependencies among features
  Levels of tagging (phrase/sentence/document)