
German Perception Verbs: Automatic Classification of Prototypical and Multiple Non-literal Meanings

Benjamin David, Sylvia Springorum, Sabine Schulte im Walde
Institut für Maschinelle Sprachverarbeitung
Universität Stuttgart
{benjamin.david,sylvia.springorum,schulte}@ims.uni-stuttgart.de

Abstract

This paper presents a token-based automatic classification of German perception verbs into literal vs. multiple non-literal senses. Based on a corpus-based dataset of German perception verbs and their systematic meaning shifts, we identify one verb of each of the four perception classes optical, acoustic, olfactory and haptic, and use Decision Trees relying on syntactic and semantic corpus-based features to classify the verb uses into 3-4 senses each. Our classifier reaches accuracies between 45.5% and 69.4%, in comparison to baselines between 27.5% and 39.0%. In three out of four cases analyzed, our classifier's accuracy is significantly higher than the corresponding baseline.

1 Introduction

In contrast to Word Sense Disambiguation in general (cf. Agirre and Edmonds (2006); Navigli (2009)), most computational approaches to modelling literal vs. non-literal meaning are still restricted to a binary distinction between two sense categories (literal vs. non-literal), rather than between multiple literal and non-literal senses. For example,1 Bannard (2007) and Fazly et al. (2009) identified light verb constructions as non-literal verb uses; Birke and Sarkar (2006), Birke and Sarkar (2007), Sporleder and Li (2009), and Li and Sporleder (2009) distinguished literal vs. idiomatic meaning. Concerning metonymic language, most approaches address various senses, which are, however, restricted to two domains, locations and organizations (Markert and Nissim, 2002; Nastase and Strube, 2009; Nastase et al., 2012). One of the few studies going beyond a binary classification is Shutova et al. (2013), who classified literal vs. metaphorical verb senses on a large scale and for multiple non-literal meanings. Cook and Stevenson (2006) also took multiple sense distinctions into account, focusing on English "up" particle verbs.

In this paper,2 we address the automatic classification of German perception verbs into literal vs. non-literal meanings. Our research goes beyond a binary classification and distinguishes between multiple non-literal senses. Taking the PhD thesis by Ibarretxe-Antuñano (1999) as a starting point, a preparatory step places German perception verbs into four classes: optical, acoustic, olfactory and haptic. In the main part, we then choose one perception verb from each class (betrachten, hören, wittern, spüren),3 each of which has multiple literal/non-literal senses, and rely on syntactic and semantic corpus-based features and a Decision Tree classifier to perform a token-based assignment to senses. We address both a binary (literal vs. non-literal) and a multiple sense discrimination.

The paper describes related work in Section 2, specifies the perception verbs and their features in Sections 3 and 4, and performs automatic token-based word sense classification in Section 5.

1 See Section 2 for details on related work.
2 This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Page numbers and proceedings footer are added by the organizers. License details: http://creativecommons.org/licenses/by/4.0/.
3 Since the verbs have multiple meanings, we do not translate them here but in Section 3.
2 Related Work

Computational work on non-literal meaning comprises research from various sub-fields. Approaches to light verb constructions (Bannard, 2007; Fazly et al., 2007; Fazly et al., 2009) relied on measures of syntactic variation of phrases, in combination with standard association measures, to perform a type-based classification. Approaches to literal vs. non-literal/figurative/idiomatic meaning performed binary classifications (Birke and Sarkar, 2006; Birke and Sarkar, 2007; Sporleder and Li, 2009; Li and Sporleder, 2009), relying on various contextual indicators: Birke and Sarkar exploited seed sets of literal vs. non-literal sentences, and used distributional similarity to classify English verbs. Li and Sporleder defined two models of text cohesion to classify V+NP and V+PP combinations. All four approaches were token-based.

Approaches to metaphoric language predominantly focus on binary classification. The most prominent research has been carried out by Shutova, best summarized in Shutova et al. (2013). Shutova performed both metaphor identification and interpretation, focusing on English verbs. She relied on a seed set of annotated metaphors and standard verb and noun clustering to classify literal vs. metaphorical verb senses. Gedigian et al. (2006) also predicted metaphorical meanings of English verb tokens, relying on manual rather than unsupervised data, and a maximum entropy classifier. Turney et al. (2011) assume that metaphorical word usage is correlated with the abstractness of a word's context, and classified word senses in a given context as either literal or metaphorical. Their targets were adjective-noun combinations and verbs.

Approaches to metonymic language represent a considerable development regarding features and classification approaches since 2002: Markert and Hahn (2002) proposed a rule-based ranking system exploring the contribution of selectional preferences vs. discourse and anaphoric information; Markert and Nissim (2002) presented the first supervised classifier for location names and compared window co-occurrences, collocations and grammatical features; Nissim and Markert (2005) extended the framework towards organization names and focused on grammatical features; Nastase and Strube (2009) enriched the feature set from Markert and Nissim by lexical features from the Sketch Engine (Kilgarriff et al., 2004) fed into WordNet supersenses, and by encyclopedic knowledge from Wikipedia relations; Nastase et al. (2012) used an unsupervised classifier and global context relying on the Wikipedia concept network.

3 Dataset of German Perception Verbs

In this section, we describe the creation of our dataset of German perception verbs in three steps: (i) sampling the perception verbs (Section 3.1), (ii) identification of literal and non-prototypical meanings (Section 3.2), and (iii) corpus annotation with perception senses (Section 3.3).

3.1 Sampling of Perception Verbs

As there is no available resource providing a complete list of German perception verbs, we combined the information of several dictionaries and thesauri to create such a dataset. As a starting point, we defined a base verb for each type of perception: sehen 'see' for optical verbs, hören 'hear' for acoustic verbs, riechen 'smell' for olfactory verbs, tasten 'touch' for haptic verbs, and schmecken 'taste' for gustatory verbs. Using these verbs as starting points, all their synonyms or closely related words were determined, relying on Ballmer and Brennenstuhl (1986) and Schumacher (1986). Using the enlarged set of verbs, we again added all their synonyms and closely related words. We repeated this cycle, at the same time making sure that each additional verb belongs exclusively to the desired perception class, until no further changes occurred (this closure procedure is sketched below). The sampling process determined 54 optical, 15 acoustic, 9 olfactory, 12 haptic and one gustatory verb.

For the classification experiments, we selected one verb from each perception class, disregarding the sole gustatory verb. The selected olfactory and haptic verbs only undergo passive perception meanings, the optical verb only undergoes active perception meanings, and the acoustic verb holds both active and passive perception meanings.4

4 Active perception is controlled perception (as in "listens to the music"); passive perception is non-controlled perception (as in "hears faint barking").
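The sampling procedure above amounts to computing the closure of a synonym relation. The following minimal sketch illustrates the idea; the TOY_SYNONYMS table and the class filter are invented stand-ins for the (manual) lookups in Ballmer and Brennenstuhl (1986) and Schumacher (1986) and for the manual class check, not part of the original study's tooling.

```python
# Sketch of the synonym-closure sampling from Section 3.1. The toy
# synonym table and the class filter are hypothetical stand-ins for the
# dictionary/thesaurus lookups and the manual class check.

TOY_SYNONYMS = {
    "sehen": {"betrachten", "blicken"},
    "betrachten": {"sehen", "anschauen"},
    "blicken": {"sehen"},
    "anschauen": {"betrachten"},
}

def sample_class(base_verb, lexicon, belongs_to_class):
    """Expand a base verb to the fixpoint of its synonym relation."""
    verbs = {base_verb}
    while True:  # repeat the cycle until no further changes occur
        new = set()
        for verb in verbs:
            for candidate in lexicon.get(verb, set()):
                if candidate not in verbs and belongs_to_class(candidate):
                    new.add(candidate)
        if not new:      # fixpoint reached
            return verbs
        verbs |= new

# Toy run for the optical class; in the study the filter was manual.
optical = sample_class("sehen", TOY_SYNONYMS, lambda v: True)
print(sorted(optical))  # ['anschauen', 'betrachten', 'blicken', 'sehen']
```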
3.2 Non-Prototypical Meanings

Analyzing the senses of the perception verbs in our dataset was carried out in accordance with Polysemy and Metaphor in Perception Verbs: A Cross-Linguistic Study (Ibarretxe-Antuñano, 1999), which systematically determined non-prototypical meanings of perception verbs cross-linguistically for English, Spanish and Basque. For example, Ibarretxe-Antuñano (1999) identified three major groups of shifted meanings for vision verbs: (i) the Intellection group, including to understand, to foresee, to visualize, to consider, to revise; (ii) the Social group, including to meet, to visit, to receive, to go out with, to get on badly; (iii) the Assurance group, including to ascertain, to make sure, to take care. We applied her cross-lingual meaning shifts to all German perception verbs in our dataset, if possible, to identify the meanings of the perception verbs. As in Ibarretxe-Antuñano (1999), the applicability was determined by corpus evidence (see below).

The following lists specify the main senses of the perception verbs that were selected for the classification experiments, with the first category in each list referring to the literal meaning.

Optical: betrachten
  to look at (lit.)
  to define/name/interpret
  to analyze objectively
  to analyze subjectively

Acoustic: hören
  to hear (lit.)
  to (dis-)like/ignore
  to obey
  to be informed

Olfactory: wittern
  to sense (by smell, lit.)
  to advance towards a goal/event
  to predict

Haptic: spüren
  to feel (lit.)
  to realize
  to feel (emotions)
  to suspect

Taking the acoustic verb hören as an example, we illustrate the corpus uses of the verb by one sentence for each sense.

to hear (lit.):
  Er hörte die Wölfe heulen.
  He heard (lit.) the wolves howl.
to (dis-)like/ignore:
  Sie können es nicht mehr hören.
  They don't want to hear about it anymore.
to obey:
  Wenn er nicht hört, gibt's kein Futter.
  If he doesn't obey/listen, he doesn't get food.
to be informed:
  Davon habe ich noch nie gehört.
  I have never heard/read/etc. about that.

3.3 Annotation of Verb Senses

Based on the sense definitions, we performed a manual annotation to create a gold standard for our classification experiments: A random selection of 200 sentences for each of the four selected perception verbs was carried out, gathering 50 sentences for each meaning. As an exception, wittern (olfactory) only has three prominent meanings, resulting in 150 annotated sentences. The random selection was based on a subcategorization database (Scheible et al., 2013) extracted from a parsed version (Bohnet, 2010) of the SdeWaC corpus (Faaß and Eckart, 2013), a web corpus containing 880 million words.

These randomly selected sentences were annotated by two native speakers of German with a linguistic background (doctoral candidates in computational linguistics). The annotators were asked to label each sentence with one of the specified meanings of the respective verb. In cases where the annotators disagreed, the first author of this paper took the final decision. Agreement and majority class baselines are shown in Table 1.

Verb         Perception  Baseline  Agreement
betrachten   optical        33.5%      63.0%
hören        acoustic       35.5%      64.5%
spüren       haptic         27.5%      75.0%
wittern      olfactory      39.0%      69.4%

Table 1: Baseline and inter-annotator agreement.
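Both columns of Table 1 are simple counts over the two annotators' label sequences: the majority class baseline is the relative frequency of the most frequent sense, and agreement is the proportion of identically labelled instances. A minimal sketch on invented toy labels:

```python
# Sketch: majority-class baseline and raw inter-annotator agreement as
# reported in Table 1. The label lists are invented toy data, not the
# actual annotations.
from collections import Counter

annotator_a = ["lit", "obey", "lit", "informed", "obey", "lit"]
annotator_b = ["lit", "obey", "informed", "informed", "obey", "obey"]

def majority_baseline(gold):
    """Relative frequency of the most frequent sense."""
    return Counter(gold).most_common(1)[0][1] / len(gold)

def raw_agreement(a, b):
    """Proportion of instances with identical labels."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(f"baseline:  {majority_baseline(annotator_a):.1%}")
print(f"agreement: {raw_agreement(annotator_a, annotator_b):.1%}")
```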
4 Syntax-Semantic Verb Features

The feature vector used to classify verb instances is split into three subsets of features: syntactic, verb-modifying and semantic features. The subsets are described in the following subsections.

4.1 Syntactic and Verb-Modifying Features

The syntactic and the verb-modifying features rely on the subcategorization database by Scheible et al. (2013). This resource is a compact but linguistically detailed database for German verb subcategorization, containing verbs extracted from the SdeWaC along with the following information:

(1) verb information: dependency relation of the target verb according to the TIGER annotation scheme (Brants et al., 2004; Seeker and Kuhn, 2012); verb position in the sentence; part-of-speech tag and lemma of the verb;
(2) subcategorization information: list of all verb complements;
(3) the linguistic rule that was applied to extract the verb and subcategorization information from the dependency parses;
(4) the whole sentence.

Based on the database information, we defined the following features:

Syntactic features:
- Sentence Rule: Rule to extract the verb and subcategorization information; relies on the verb form and the dependency constellation of the verb.
- Sentence Form: Dependency relations of the verb complex according to TIGER.
- Adjective: Presence of an adjective, represented by a Boolean value.
- Accusative Object: Presence of an accusative object, represented by a Boolean value.
- Subjunction: Either none or the lemma of the subjunction, if available.
- Modal Verb: Either none or the lemma of the modal verb, if available.
- Negation: Presence of a negation, represented by a Boolean value.

Verb-modifying features:
- Verb Form: Part-of-speech tag.
- Adverb: Presence of an adverb, represented by a Boolean value.
- Adverbial or Prepositional Object: A Boolean value for each preposition introducing a prepositional object.
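To make the feature definitions concrete, the sketch below derives the syntactic and verb-modifying features from a single database entry. The entry layout and field names are invented for illustration and do not reflect the actual format of the Scheible et al. (2013) resource.

```python
# Sketch: deriving the Section 4.1 features from one (invented) database
# entry. Field names do not reflect the actual Scheible et al. (2013)
# format; TIGER-style tags are used purely as examples.

entry = {
    "rule": "V-fin-obj",             # extraction rule (Sentence Rule)
    "dep_form": "SB-HD-OA",          # dependency relations (Sentence Form)
    "verb_pos": "VVFIN",             # part-of-speech tag of the verb
    "complements": ["OA", "PP:auf"], # list of verb complements
    "tokens_pos": ["ART", "NN", "VVFIN", "ADV", "ADJA", "NN"],
    "negated": False,
    "subjunction": None,
    "modal": None,
}

features = {
    # syntactic features
    "sentence_rule": entry["rule"],
    "sentence_form": entry["dep_form"],
    "adjective": "ADJA" in entry["tokens_pos"],
    "accusative_object": "OA" in entry["complements"],
    "subjunction": entry["subjunction"] or "none",
    "modal_verb": entry["modal"] or "none",
    "negation": entry["negated"],
    # verb-modifying features
    "verb_form": entry["verb_pos"],
    "adverb": "ADV" in entry["tokens_pos"],
    "prep_object_auf": "PP:auf" in entry["complements"],
}
print(features)
```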

4.2 Semantic Features

The semantic features rely on two different resources, GermaNet and German Polarity Clues.

(1) Information on hypernymy is extracted from GermaNet, which has been modelled along the lines of the Princeton WordNet for English (Miller et al., 1990; Fellbaum, 1998) and shares its general design principles (Hamp and Feldweg, 1997; Kunze and Wagner, 1999). Lexical units denoting the same concept are grouped into synonym sets (synsets), which are interlinked via conceptual-semantic relations (such as hypernymy) and lexical relations (such as antonymy). GermaNet provides up to 20 hypernym levels. We used the most common concepts from the 3rd level (counted down from the unique top level):

- Texture
- Situation
- Quality
- Cognitive Object
- Common Object
- Pronouns (added to the original net)
- None available (added to the original net)

(2) Information on adverb and adjective sentiment is extracted from the German Polarity Clues (Waltinger, 2010), which labels adjectives and adverbs as positive, negative or neutral.

We extracted the following semantic features:

Semantic features:
- Subject Hypernym: Hypernym of the subject.
- Object Hypernym: Hypernym of the direct accusative object.
- Adverb/Adjective Sentiment: Either none, if no adverbs or adjectives are available, or the adverb/adjective sentiment label.
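A similar sketch for the semantic features: the hypernym table and polarity lexicon below are toy stand-ins for GermaNet and German Polarity Clues, and the level-3 lookup simply indexes the hypernym path counted down from the top node.

```python
# Sketch: semantic features of Section 4.2. HYPERNYM_PATHS and POLARITY
# are toy stand-ins for GermaNet and German Polarity Clues lookups.

HYPERNYM_PATHS = {  # hypernym chain from the unique top level downwards
    "Wolf": ["GNROOT", "Entität", "Objekt", "Common Object", "Tier"],
    "Idee": ["GNROOT", "Entität", "Kognitives", "Cognitive Object"],
}
POLARITY = {"laut": "neutral", "schrecklich": "negative"}

def level3_hypernym(noun):
    """Concept at the 3rd level below the top, or a residual label."""
    path = HYPERNYM_PATHS.get(noun)
    if path is None or len(path) < 4:
        return "None available"   # residual class added to the net
    return path[3]

def sentiment(modifiers):
    """Polarity of the first covered adverb/adjective, else 'none'."""
    for word in modifiers:
        if word in POLARITY:
            return POLARITY[word]
    return "none"

print(level3_hypernym("Wolf"))      # Common Object
print(sentiment(["schrecklich"]))   # negative
```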
5 Classification

Our classification experiments were performed with WEKA. The classifier algorithm used is J48, a Java reimplementation of the C4.5 algorithm (Quinlan, 1993). For training and testing, ten-fold cross-validation was applied.

The classification experiments were done separately for each perception type, i.e. for each verb. Table 2 lists the classification results for the verb hören,5 distinguishing between the subsets of syntactic, verb-modifying and semantic features as well as the results for the combined vector. Instances refers to the number of sentences for the respective meaning. Fraction shows the proportion of instances of one meaning in relation to all classified instances for the respective verb. Classifier accuracy shows the proportion of instances which have been correctly classified by our classifier; significance according to chi-square is marked, if applicable. Annotator agreement is the proportion of instances in which the two annotators chose the same meaning.

5 The results for the three other verbs are omitted for space reasons. We nevertheless include them in our discussion below. See David (2013) for further results.

                  Instances  Fraction   Accuracy  Agreement
syntactic               200    100.0%      46.0%      68.5%
  prototypical           71     35.5%      46.5%      59.2%
  to (dis)like           11      5.5%      36.4%      81.8%
  to obey                47     23.6%      70.2%      95.7%
  to be informed         71     35.5%      31.0%      57.7%
verb-modifying          200    100.0%   ***53.0%      68.5%
  prototypical           71     35.5%      50.7%      59.2%
  to (dis)like           11      5.5%       0.0%      81.8%
  to obey                47     23.6%      51.1%      95.7%
  to be informed         71     35.5%      64.8%      57.7%
semantic                200    100.0%   ***55.5%      68.5%
  prototypical           71     35.5%      40.8%      59.2%
  to (dis)like           11      5.5%       0.0%      81.8%
  to obey                47     23.6%      70.2%      95.7%
  to be informed         71     35.5%      70.4%      57.7%
overall                 200    100.0%   ***57.0%      68.5%
  prototypical           71     35.5%      39.4%      59.2%
  to (dis)like           11      5.5%      18.2%      81.8%
  to obey                47     23.6%      72.3%      95.7%
  to be informed         71     35.5%      70.4%      57.7%

Table 2: Classification results for the acoustic perception verb hören (baseline: 35.5%; *** marks significance according to chi-square).
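For illustration, the sketch below reproduces this setup under stated assumptions: scikit-learn's DecisionTreeClassifier (a CART learner) stands in for WEKA's J48 (C4.5), the data is randomly generated toy data rather than our annotated sentences, and significance against the majority baseline is checked with a hand-rolled one-degree-of-freedom chi-square, mirroring the marks in Table 2.

```python
# Sketch of the Section 5 setup. The study used WEKA's J48 (C4.5);
# scikit-learn's DecisionTreeClassifier (CART) is only a rough analogue.
# Features and labels below are random toy data, not the actual dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 12))   # Boolean-ish feature vectors
y = rng.integers(0, 4, size=200)         # four senses

scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                         X, y, cv=10, scoring="accuracy")
accuracy = scores.mean()

# Chi-square (1 df) of correct/incorrect counts against the majority
# baseline, mirroring the significance marks in Table 2.
n = len(y)
baseline = np.bincount(y).max() / n
obs = np.array([accuracy * n, (1 - accuracy) * n])
exp = np.array([baseline * n, (1 - baseline) * n])
chi2 = ((obs - exp) ** 2 / exp).sum()

print(f"10-fold CV accuracy: {accuracy:.1%} (baseline {baseline:.1%})")
print(f"chi-square vs. baseline: {chi2:.2f} (crit. 3.84 at p<0.05, 1 df)")
```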

6 Discussion

In the following, we provide qualitative analyses and discussions regarding our classifications.

6.1 Features

For the optical perception verb betrachten and the acoustic perception verb hören, the verb-modifying and the semantic subset of features, as well as the combined set of all features, significantly beat the baselines (33.5% and 35.5%, respectively). The two subsets are equally successful at classifying optical and acoustic verb instances, reaching between 52.5% and 55.5%.

For the haptic perception verb spüren, each of the subset vectors and the overall feature vector provides results significantly better than the baseline (27.5%). The best subset vector for this verb is the syntactic one, with an accuracy of 43.0%.

The olfactory perception verb wittern is not classified significantly better than the baseline (39.0%) by any subset or the combined set. The best subset vector for classification is the syntactic one, with 43.9% accuracy.

The semantic subset vector turns out to be the overall best, with an average of 47.2%. For all but the olfactory verb classification, any subset of features returns higher accuracy than the baseline.

6.2 Ambiguity

The classification results and confusion matrices (see an example in Table 3) show that ambiguity is the biggest source of misclassifications. In the confusion matrices one can observe that often meaning "A" is misinterpreted as meaning "B", which is in turn often misinterpreted as meaning "A". Interestingly, the meanings confused by the classification algorithm are very similar to those confused by the human annotators.

                          0    1    2
0: Prototypical          28    0   26
1: Adv. towards Goal     15    1   30
2: Predict               21    0   43

Table 3: Confusion matrix for olfactory/syntactic.

6.3 Lack of Detailed Semantic Data

The hypernym data covers a very high level of abstraction. This data distinguishes between, for example, texture and objects, but it does not distinguish between, for example, animals and plants, which might have been more desirable. High levels of abstraction had to be chosen for this research project, as the lower levels of abstraction would have resulted in several hundred feature values and thus most probably have run into severe sparse data problems. Future work will nevertheless address an improved identification of semantic levels of abstraction.

6.4 Literal Meaning as Residual Class

The varying results by feature subsets for a verb's prototypical instances suggest a closer look at their classification. The correctly classified instances increase and decrease in proportion to the correctly classified instances of all other meanings. Looking into the decision trees which result in classification as prototypical instances, it turns out that the prototypical meaning shows residual class characteristics, cf. Table 4: It seems that only the inability to determine a non-prototypical meaning through the use of distinct features results in a classification as prototypical.

             optic  acoust  olfac  haptic   avg
baseline      19.0    35.5   32.9    25.0  28.1
annotation    84.2    59.2   92.4    92.0  82.0
syntactic      5.3    46.5   51.8    54.0  50.8
verb-mod       2.6    50.7   37.0    56.0  47.9
semantic       2.6    40.8   72.2    20.0  44.3
overall       42.1    39.4   44.4    46.0  43.0

Table 4: Prototypical meaning by feature subsets.
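One way to observe this residual-class behaviour is to print the learned tree and inspect which decision paths end in the prototypical label. A hedged sketch on toy data follows, again with scikit-learn's CART standing in for J48.

```python
# Sketch: inspecting which decision paths end in the prototypical sense
# (Section 6.4). Toy data; scikit-learn's CART stands in for WEKA's J48.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(150, 6))
y = rng.choice(["prototypical", "goal", "predict"], size=150)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Leaves predicting "prototypical" tend to be the branches where no
# distinct feature fires, i.e. the residual paths of the tree.
print(export_text(clf, feature_names=[f"f{i}" for i in range(6)]))
```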

6.5 Choice of Non-literal Meanings

The classification results also depend on whether fine-grained or coarse-grained senses are used. A fine-grained sense definition would lead to less variation within a sense class, but to a higher number of meanings. This in turn would require more manually annotated data to cover all meanings with enough corpus examples; we therefore decided to only use the reduced, coarse-grained sense selection. However, it is not clear where to draw the line, as there are cases where a verb can have two meanings at once in one context.

7 Conclusion

This paper presented a token-based automatic classification of German perception verbs into literal vs. multiple non-literal senses. Based on a corpus-based dataset of German perception verbs and their systematic meaning shifts, following Ibarretxe-Antuñano (1999), we identified one verb of each of the four perception classes optical, acoustic, olfactory and haptic, and used Decision Trees relying on syntactic and semantic corpus-based features to classify the verb uses into 3 to 4 senses each. Our classifier reached accuracies between 45.5% and 69.4%, in comparison to baselines between 27.5% and 39.0%. In three out of four cases analysed, our classifier's accuracy was significantly higher than the corresponding baseline.

References

Eneko Agirre and Philip Edmonds, editors. 2006. Word Sense Disambiguation: Algorithms and Applications. Springer-Verlag. URL: http://www.wsdbook.org/.

Thomas T. Ballmer and Waltraud Brennenstuhl. 1986. Deutsche Verben. Gunter Narr Verlag, Tübingen.

Colin Bannard. 2007. A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora. In Proceedings of the ACL Workshop on A Broader Perspective on Multiword Expressions, pages 1-8, Prague, Czech Republic.
Julia Birke and Anoop Sarkar. 2006. A Clustering Approach for the Nearly Unsupervised Recognition of Nonliteral Language. In Proceedings of the 11th Conference of the European Chapter of the ACL, pages 329-336, Trento, Italy.

Julia Birke and Anoop Sarkar. 2007. Active Learning for the Identification of Nonliteral Language. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 21-28, Rochester, NY.

Bernd Bohnet. 2010. Top Accuracy and Fast Dependency Parsing is not a Contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89-97, Beijing, China.

Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen-Schirra, Esther König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic Interpretation of a German Corpus. Research on Language and Computation, 2(4):597-620.

Paul Cook and Suzanne Stevenson. 2006. Classifying Particle Semantics in English Verb-Particle Constructions. In Proceedings of the ACL/COLING Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties, pages 45-53, Sydney, Australia.

Benjamin David. 2013. Deutsche Wahrnehmungsverben: Bedeutungsverschiebungen und deren manuelle und automatische Klassifikation. Studienarbeit. Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.

Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC - a Corpus of Parsable Sentences from the Web. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology, pages 61-68, Darmstadt, Germany.

Afsaneh Fazly, Suzanne Stevenson, and Ryan North. 2007. Automatically Learning Semantic Knowledge about Multiword Predicates. Language Resources and Evaluation, 41:61-89.

Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised Type and Token Identification of Idiomatic Expressions. Computational Linguistics, 35(1):61-103.

Christiane Fellbaum, editor. 1998. WordNet - An Electronic Lexical Database. Language, Speech, and Communication. MIT Press, Cambridge, MA.

Matt Gedigian, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching Metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41-48, New York City, NY.

Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a Lexical-Semantic Net for German. In Proceedings of the ACL Workshop on Automatic Information Extraction and Building Lexical Semantic Resources for NLP Applications, pages 9-15, Madrid, Spain.

B. Iraide Ibarretxe-Antuñano. 1999. Polysemy and Metaphor in Perception Verbs. Ph.D. thesis, University of Edinburgh.

Adam Kilgarriff, Pavel Rychlý, Pavel Smrž, and David Tugwell. 2004. The Sketch Engine. In Proceedings of the 11th EURALEX International Congress, pages 105-111, Lorient, France.

Claudia Kunze and Andreas Wagner. 1999. Integrating GermaNet into EuroWordNet, a Multilingual Lexical-Semantic Database. Sprache und Datenverarbeitung, 23(2):5-19.

Linlin Li and Caroline Sporleder. 2009. Classifier Combination for Contextual Idiom Detection Without Labelled Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 315-323, Singapore.

Katja Markert and Udo Hahn. 2002. Understanding Metonymies in Discourse. Artificial Intelligence, 136:145-198.

Katja Markert and Malvina Nissim. 2002. Metonymy Resolution as a Classification Task. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 204-213.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography, 3(4):235-244.

Vivi Nastase and Michael Strube. 2009. Combining Collocations, Lexical and Encyclopedic Knowledge for Metonymy Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 910-918, Singapore.

Vivi Nastase, Alex Judea, Katja Markert, and Michael Strube. 2012. Local and Global Context for Supervised and Unsupervised Metonymy Resolution. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 183-193, Jeju Island, Korea.

Roberto Navigli. 2009. Word Sense Disambiguation: A Survey. ACM Computing Surveys, 41(2).

Malvina Nissim and Katja Markert. 2005. Learning to buy a Renault and talk to BMW: A Supervised Approach to Conventional Metonymy. In Proceedings of the 6th International Workshop on Computational Semantics, pages 225-235, Tilburg, The Netherlands.

Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.

Silke Scheible, Sabine Schulte im Walde, Marion Weller, and Max Kisselew. 2013. A Compact but Linguistically Detailed Database for German Verb Subcategorisation relying on Dependency Parses from a Web Corpus: Tool, Guidelines and Resource. In Proceedings of the 8th Web as Corpus Workshop, pages 63-72, Lancaster, UK.

Helmut Schumacher. 1986. Verben in Feldern. de Gruyter, Berlin.

Wolfgang Seeker and Jonas Kuhn. 2012. Making Ellipses Explicit in Dependency Conversion for a German Treebank. In Proceedings of the 8th International Conference on Language Resources and Evaluation, pages 3132-3139, Istanbul, Turkey.

Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical Metaphor Processing. Computational Linguistics, 39(2):301-353.

Caroline Sporleder and Linlin Li. 2009. Unsupervised Recognition of Literal and Non-Literal Use of Idiomatic Expressions. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 754-762, Athens, Greece.

Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and Metaphorical Sense Identification through Concrete and Abstract Context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680-690, Edinburgh, UK.

Ulli Waltinger. 2010. German Polarity Clues: A Lexical Resource for German Sentiment Analysis. In Proceedings of the 7th International Conference on Language Resources and Evaluation, Valletta, Malta.
