Shichao Zhang · Tie-Yan Liu
Xianxian Li · Jiafeng Guo
Chenliang Li (Eds.)
LNCS 11168
Information Retrieval
24th China Conference, CCIR 2018
Guilin, China, September 27–29, 2018
Proceedings
Lecture Notes in Computer Science 11168
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7407
Editors

Shichao Zhang
School of Computer Science and Information Technology, Guangxi Normal University, Guilin, China

Tie-Yan Liu
Microsoft Research Asia, Beijing, China

Xianxian Li
School of Computer Science, Guangxi Normal University, Guilin, China

Jiafeng Guo
Chinese Academy of Sciences, Beijing, China

Chenliang Li
Wuhan University, Wuhan, China
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Steering Committee
Shuo Bai Shanghai Stock Exchange, China
Xueqi Cheng Institute of Computing Technology, Chinese Academy of Sciences, China
Shoubin Dong East China University of Science and Technology, China
Xiaoming Li Beijing University, China
Hongfei Lin Dalian University of Technology, China
Ting Liu Harbin Institute of Technology, China
Jun Ma Shandong University, China
Shaoping Ma Tsinghua University, China
Shuicai Shi Beijing TRS Information Technology Co., Ltd., China
Mingwen Wang Jiangxi Normal University, China
Publication Chair
Chenliang Li Wuhan University, China
Publicity Chair
Chenliang Li Wuhan University, China
Organizing Chairs
Zhenjun Tang Guangxi Normal University, China
Zhixin Li Guangxi Normal University, China
Jingli Wu Guangxi Normal University, China
Organizing Committee
Jinlu Liu Guangxi Normal University, China
Yonggang Li Guangxi Normal University, China
Jinyan Wang Guangxi Normal University, China
Peng Liu Guangxi Normal University, China
Lie Wang Guangxi Normal University, China
Yangding Li Guangxi Normal University, China
Shenglian Lu Guangxi Normal University, China
Program Committee
Shuo Bai Shanghai Stock Exchange, China
Xueqi Cheng Institute of Computing Technology, Chinese Academy of Sciences, China
Fei Cai National University of Defense Technology, China
Dongfeng Cai Shenyang Aerospace University, China
Zhumin Chen Shandong University, China
Yi Chang Huawei Technologies Co., Ltd., China
Yajun Du Xihua University, China
Shoubin Dong South China University of Technology, China
Zhicheng Dou Renmin University of China, China
Shicong Feng Beijing Siming Software System Co., Ltd., China
Jiafeng Guo Institute of Computing Technology, Chinese Academy of Sciences, China
Xuanxuan Huang Fudan University, China
Yu Hong Suzhou University, China
Minlie Huang Tsinghua University, China
Zhongyuan Han Heilongjiang Institute of Technology, China
Donghong Ji Wuhan University, China
Tie-Yan Liu Microsoft Research Asia, China
Yiqun Liu Tsinghua University, China
Ting Liu Harbin Institute of Technology, China
Hang Li Toutiao AI Lab, China
Hongfei Lin Dalian University of Technology, China
Classification
NLP for IR
Social Computing
Recommendation
Sequence Modeling
1 Introduction
While performing complex search tasks, users usually need to submit more than one query to a search system. Understanding how users reformulate queries is as important for IR research as studying users’ examination behavior [1]. Existing work shows that query reformulation behaviors contain important signals about users’ perceived satisfaction with search results [8] and their actual information needs [9]. Uncovering the semantic relationship behind reformulated queries may also help search engines rank results better. Prior work has made much progress in interpreting query reformulation actions. For
example, Huang et al. create a taxonomy of query refinement strategies and build a rule-based classifier to detect each type of reformulation [2].

R. Cai—This work is supported by the Natural Science Foundation of China (Grants No. 61622208, 61732008, 61532011) and the National Key Basic Research Program (2015CB358700).

© Springer Nature Switzerland AG 2018
S. Zhang et al. (Eds.): CCIR 2018, LNCS 11168, pp. 3–15, 2018.
https://doi.org/10.1007/978-3-030-01012-6_1

Sloan et al.
present a term-based methodology and provide valuable insight into understanding query reformulation [3]. However, most of these efforts rely on the textual content of queries. Since queries are usually no more than a few words long, such efforts suffer from data sparsity problems.
Recently, many works have attempted to embed a knowledge graph into a continuous vector space, called knowledge graph embedding, to aggregate global knowledge over a graph. For example, TransE [10] interprets relations as translations operating on low-dimensional embeddings from head entities to tail entities. TransH [11] and TransR [12] improve the embedding algorithm by modeling relations as translations on a hyperplane and in a relation space, respectively. Inspired by these recent efforts, we add a translation module to our method, which helps the model better understand query reformulation intentions.
We use an example to illustrate how translation embedding methods work for understanding query reformulation intentions. Consider the following two users’ query logs.
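The translation principle behind TransE can be sketched in miniature. This is an illustrative toy example, not the authors’ implementation: the scoring function follows Bordes et al. [10], but the dimensions, random vectors, and variable names are assumed for demonstration.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE dissimilarity: low when head + relation ≈ tail [10]."""
    return np.linalg.norm(h + r - t, ord=norm)

# Toy setup: the next query should be reachable from the current query
# by translating along the relation vector.
rng = np.random.default_rng(0)
dim = 8
current = rng.normal(size=dim)
relation = rng.normal(size=dim)
next_q = current + relation + rng.normal(scale=0.01, size=dim)  # near-perfect fit
unrelated = rng.normal(size=dim)                                # a random query

# A query pair linked by the relation scores much lower (better)
# than an unrelated pair.
assert transe_score(current, relation, next_q) < transe_score(current, relation, unrelated)
```

Training would adjust the embeddings so that reformulation pairs sharing a relation satisfy this inequality, which is the property the translation module exploits.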
2 Related Work
2.1 Query Reformulation
Query reformulation is the process by which queries issued to search engines are modified after the results are examined; it is an important topic in web search. Accurately understanding users’ reformulation behaviors helps search engines improve their performance. Jiang et al. propose a supervised approach to query auto-completion based on an analysis of users’ reformulation behaviors [4]. Santos et al. create a probabilistic framework for web search result diversification and exploit query reformulation as a means to uncover different query aspects [7]. Hassan et al. study additional implicit signals based on the relationship of adjacent queries and predict the satisfaction of non-clicking users [8]. Prior work shows that the analysis of query reformulation plays an important role in many IR tasks.
Since search queries usually contain only a few words, addressing data sparsity is particularly important for query reformulation understanding. Herdagdelen et al. present a novel approach utilizing syntactic and semantic information, with operating costs based on probabilistic term rewrite functions [6]. Dang et al. treat anchor text as a substitute for query logs and study the effectiveness of a range of query reformulation techniques [5]. Sloan et al.
3 Methodology
3.1 Overall Framework
The framework of our method contains two modules, as shown in Fig. 2. We utilize translation embedding methods to train feature sets of relations and use classifiers to predict the best relation.

In the translation module, the labeled data is split into two datasets, one per relation level; each contains all query pairs but only the relations of a single level. These two datasets are fed into the translation embedding models separately. After training, we obtain vector representations of all queries at both relation levels.

In the classification module, we obtain the vector of each relation as the difference between the next-query vector and the current-query vector. Each dimension of the relation vector is a feature for the classifiers. Finally, we predict the best relation by combining the prediction results of the two classifiers.

It is worth noting that the training sets of the translation embedding methods and the classifiers are the same. Besides, we use 10-fold cross-validation to avoid data bias during training and testing.
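The classification step above can be sketched as follows. This is a hypothetical illustration, not the paper’s code: relation vectors are taken as next-query minus current-query embeddings, and a simple k-nearest-neighbor rule (one plausible choice for the ‘KNN’ classifier mentioned later) predicts the relation; the 2-D toy vectors and relation names are assumed.

```python
import numpy as np

def relation_vector(v_current, v_next):
    # classifier feature: the translation from the current query to the next query
    return v_next - v_current

def knn_predict(train_X, train_y, x, k=1):
    # majority vote among the k nearest training relation vectors
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy training data: two relation types, each a noisy translation direction.
rng = np.random.default_rng(1)
r_increase = np.array([1.0, 0.0])
r_replace = np.array([0.0, 1.0])
train_X = np.vstack(
    [r_increase + rng.normal(scale=0.05, size=2) for _ in range(5)]
    + [r_replace + rng.normal(scale=0.05, size=2) for _ in range(5)]
)
train_y = np.array(["Increase"] * 5 + ["Replace"] * 5)

# A query pair whose translation matches the 'Increase' direction.
v_cur = rng.normal(size=2)
v_next = v_cur + r_increase
assert knn_predict(train_X, train_y, relation_vector(v_cur, v_next), k=3) == "Increase"
```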
Translating Embeddings for Modeling Query Reformulation 7
Relation   Design  Item    Style   Brand   Model   Function  Material  Channel  Cost    Size
Increase   34.0%   8.97%   7.47%   1.08%   0.77%   2.51%     1.74%     1.14%    0.17%   0.14%
Decrease   3.45%   0.70%   0.36%   0.31%   0.30%   0.30%     0.21%     0.34%    0.05%   0.04%
Replace    8.50%   20.6%   0.37%   4.23%   1.93%   0.08%     0.16%     0.01%    0.02%   0.02%
Sum        46.0%   30.3%   8.20%   5.62%   3.00%   2.88%     2.11%     1.49%    0.23%   0.20%
3.3 Classification
The input data of the classifiers consist of relation vectors and relation labels. Relation vectors are calculated as the difference between next-query vectors and current-query vectors produced by the translation module, while relation labels are annotated by human annotators. We use classifier accuracy to evaluate the performance of our model, which corresponds to the evaluation metric for intention prediction.
4 Experiments
4.1 Dataset
4.2 Baselines
Since we use TransE, TransH, and TransR to implement the translation module, we select these three models as baselines; they show outstanding performance in analyzing various knowledge graphs. The baselines contain no classification module, and their training and testing data are not divided into two-level relations. Following conventional knowledge graph embedding methods, we adopt Hit@1 as the metric to evaluate the performance of relation prediction, which is equivalent to the accuracy of our model mentioned above.
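Hit@1 as used here reduces to checking whether the top-ranked relation equals the true one. A minimal sketch (the relation names are assumed for illustration):

```python
def hit_at_k(ranked_candidates, true_label, k=1):
    # Hit@k: 1 if the true relation appears among the top-k ranked predictions
    return int(true_label in ranked_candidates[:k])

# Toy ranking produced by some model for one test query pair.
ranked = ["Increase-Item", "Replace-Brand", "Decrease-Style"]
assert hit_at_k(ranked, "Increase-Item", k=1) == 1   # Hit@1 succeeds
assert hit_at_k(ranked, "Replace-Brand", k=1) == 0   # Hit@1 fails...
assert hit_at_k(ranked, "Replace-Brand", k=2) == 1   # ...but Hit@2 succeeds
```

Averaging this indicator over the test set gives the accuracy reported in the tables below.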
Table 3. The overall accuracy of our model in relation prediction. The first row shows the results of the three baselines without a classification module, while the other rows show the results of our model with different combinations of modules. The best result is highlighted in boldface.

Table 4. The prediction accuracy of first-level relations. The first row shows the results of the three baselines without a classification module, while the other rows show the results of our model with different combinations of modules. The best result for each relation is highlighted in boldface.
(1) For the results of the baselines, there is a great difference in effectiveness across relations. The accuracy of ‘Model’ is over 50%, while the accuracy of ‘Item’ is less than 10%, and the prediction results are not directly related to the test sample size. We judge that this phenomenon is closely related to the attributes of the second-level relations: some relations are clearly defined, while others are ambiguous.

Table 5. The prediction accuracy of second-level relations. The first column shows the test sample size of the 7 second-level relations, while the other columns show the results of the three baselines. The improvement in accuracy from adding the ‘KNN’ module to the baselines is shown in parentheses.
(2) Our model outperforms the baselines on most second-level relations. For ‘Model’, ‘Function’, and ‘Material’, the performance of our model and the baselines is very close. For ‘Design’, ‘Item’, and ‘Brand’, our model improves significantly over the baselines, with the accuracy of all three relations reaching 40%. It is worth noting that the accuracy of ‘Item’ is raised from 7% to 46% on average, the most obvious improvement. Moreover, ‘Item’ is a top relation according to the quantity distribution of the dataset across relations, so this improvement has important practical significance for shopping query reformulation understanding.
(3) Our model is deficient in predicting the relation ‘Style’, where the accuracy drops from 29% to 5% on average. This means our model struggles to predict this relation, which often appears in query logs about clothing. To understand the cause of this issue, we give a detailed analysis in the case study section.
Some relations occur frequently in shopping search, so their prediction accuracy is especially important. We select the 7 top relations, whose sample sizes each exceed 3%, and calculate the prediction accuracy of the baselines and our model. The results are shown in Table 6. The first column shows the test sample size of the 7 top relations, while the other columns show the accuracy of the three baselines and the improvement from the ‘KNN’ module. From the results, we see that:
Table 6. The prediction accuracy of top relations. The first column shows the test sample size of the 7 top relations, while the other columns show the results of the three baselines. The improvement in accuracy from adding the ‘KNN’ module to the baselines is shown in parentheses.
(1) The test samples of the 7 top relations together make up 90% of the whole test set, which means the accuracy on these top relations directly affects the overall performance. As mentioned above, we do not equalize the sample sizes of different relations by data sampling, because the uneven distribution of the dataset is characteristic of shopping search. For the same reason, we do not use the average accuracy across relations to evaluate the overall performance of our model and the baselines. We believe the performance improvement under this evaluation metric better reflects the actual needs of shopping search.
(2) Our model consistently achieves improvements over all baselines on all top relations except ‘Increase-Style’ and ‘Decrease-Design’. Moreover, the accuracy of the top 3 relations all reaches 40%, and their combined test sample size is over 65% of the whole test set. Unlike conventional classifiers, whose results tend toward the categories with larger training sets, ‘KNN’ performs well even on categories with small-scale training data, such as ‘Replace-Brand’. Overall, our model improves effectively on the baselines and can accurately understand users’ query reformulation intentions.
(3) As in the previous section, our model suffers on the prediction of ‘Increase-Style’. Because the test sample sizes of ‘Style’ and ‘Increase-Style’ are almost equal, we believe the reasons for the low accuracy of these two relations are the same. We carry out a detailed study of this issue in the following section.
Our model suffers from low accuracy in predicting ‘Style’, as mentioned above. To explain the cause of this issue, we show the distribution of the prediction results for ‘Style’ in Table 7. Note that, for a test sample whose true category is ‘Style’,
the prediction result may be any of the 7 second-level relations. We show only the relations with the top 3 proportions, because the proportions of the other relations are almost zero.
From the table, we find that about 90% of the test samples of ‘Style’ are predicted as ‘Design’, which results in the very low prediction accuracy of ‘Style’. ‘Design’ describes the shape of a product, while ‘Style’ represents the series of a product; these two relations overlap when describing products. Moreover, 966 test samples, 83% of the test samples of ‘Style’, contain both ‘Design’ and ‘Style’ at the same time, which prevents our model from distinguishing between these two relations. To solve this issue, we will redefine these two relations and better handle multiple relations in future work.
References
1. Liu, Y., Wang, C., Zhou, K., Nie, J., Zhang, M., Ma, S.: From skimming to reading:
a two-stage examination model for web search. In: ACM International Conference
on Information and Knowledge Management, pp. 849–858. ACM (2014)
2. Huang, J., Efthimiadis, E.N.: Analyzing and evaluating query reformulation strate-
gies in web search logs. In: Proceedings of the 18th ACM Conference on Information
and Knowledge Management, pp. 77–86. ACM, November 2009
3. Sloan, M., Yang, H., Wang, J.: A term-based methodology for query reformulation
understanding. Inf. Retr. J. 18(2), 145–165 (2015)
4. Jiang, J.Y., Ke, Y.Y., Chien, P.Y., Cheng, P.J.: Learning user reformulation behav-
ior for query auto-completion. In: Proceedings of the 37th International ACM
SIGIR Conference on Research Development in Information Retrieval, pp. 445–
454. ACM, July 2014
5. Dang, V., Croft, B.W.: Query reformulation using anchor text. In: Proceedings of
the Third ACM International Conference on Web Search and Data Mining, pp.
41–50. ACM, February 2010
6. Herdagdelen, A., et al.: Generalized syntactic and semantic models of query refor-
mulation. In: Proceedings of the 33rd International ACM SIGIR Conference on
Research and Development in Information Retrieval, pp. 283–290. ACM, July 2010
7. Santos, R.L., Macdonald, C., Ounis, I.: Exploiting query reformulations for web
search result diversification. In: Proceedings of the 19th International Conference
on World Wide Web, pp. 881–890. ACM, April 2010
8. Hassan, A., Shi, X., Craswell, N., Ramsey, B.: Beyond clicks: query reformulation as
a predictor of search satisfaction. In: Proceedings of the 22nd ACM International
Conference on Information and Knowledge Management, pp. 2019–2028. ACM,
October 2013
9. Bouramoul, A., Kholladi, M.K., Doan, B.L.: PRESY: a context based query refor-
mulation tool for information retrieval on the web. arXiv preprint arXiv:1106.2289
(2011)
10. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating
embeddings for modeling multi-relational data. In: Advances in Neural Information
Processing Systems, pp. 2787–2795 (2013)
11. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: AAAI, vol. 14, pp. 1112–1119, July 2014
12. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings
for knowledge graph completion. In: AAAI, vol. 15, pp. 2181–2187, January 2015
13. Xiao, H., Huang, M., Hao, Y., Zhu, X.: TransG: a generative mixture model for
knowledge graph embedding. arXiv preprint arXiv:1509.05488 (2015)
14. Ji, G., He, S., Xu, L., Liu, K., Zhao, J.: Knowledge graph embedding via dynamic
mapping matrix. In: Proceedings of the 53rd Annual Meeting of the Association for
Computational Linguistics and the 7th International Joint Conference on Natural
Language Processing (Volume 1: Long Papers), vol. 1, pp. 687–696 (2015)
15. Xiao, H., Huang, M., Hao, Y., Zhu, X.: TransA: an adaptive approach for knowledge
graph embedding. arXiv preprint arXiv:1509.05490 (2015)
16. Jia, Y., Wang, Y., Lin, H., Jin, X., Cheng, X.: Locally adaptive translation for
knowledge graph embedding. In: AAAI, pp. 992–998, February 2016
17. Ji, G., Liu, K., He, S., Zhao, J.: Knowledge graph completion with adaptive sparse
transfer matrix. In: AAAI, pp. 985–991, February 2016
A Deep Top-K Relevance Matching
Model for Ad-hoc Retrieval
Zhou Yang1, Qingfeng Lan2, Jiafeng Guo3, Yixing Fan3, Xiaofei Zhu1(B), Yanyan Lan3, Yue Wang1, and Xueqi Cheng3

1 School of Computer Science and Engineering, Chongqing University of Technology, Chongqing, China
yangzhou@software.ict.ac.cn, {zxf,wangyue}@cqut.edu.cn
2 University of Chinese Academy of Sciences, Beijing, China
lanqingfeng14@mails.ucas.ac.cn
3 CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
{guojiafeng,lanyanyan}@ict.ac.cn, fanyixing@software.ict.ac.cn
1 Introduction
In traditional information retrieval models, the relevance of a document is measured according to exact matching signals; that is, the relevance score is determined by the frequencies of query words in the document. These models often face the typical term mismatch problem [1], since semantic matching signals are ignored. Recently, deep neural networks have achieved great success in many natural language processing tasks. At the same time, these deep neural networks have also been applied to information retrieval, a direction called neural information retrieval (NeuIR). Neural information retrieval models, which measure relevance based on continuous embeddings [6], have achieved significant progress.

© Springer Nature Switzerland AG 2018
S. Zhang et al. (Eds.): CCIR 2018, LNCS 11168, pp. 16–27, 2018.
https://doi.org/10.1007/978-3-030-01012-6_2
It is of great importance to model word importance in retrieval models. Traditional retrieval models measure word importance by inverse document frequency (IDF), which has long been a standard component of such models [2]. Since these models consider only document words identical to the query words, it is sufficient to take only the IDF of query words into consideration. Recently, neural retrieval models have employed deep neural networks to model the semantic matching between query words and document words; in this way, words related to the query words are also used to measure relevance. However, existing neural retrieval models ignore the importance of these non-query words, which is also critical for relevance judgment. Take the following case as an example:
Query: Introduction of animals living in water, such as sharks
A fragment of document A: Dolphins swimming in the water are looking for
food.
A fragment of document B: A yellow puppy fell into the water.
From the above example we can see that, alongside the exact matching signal water, dolphins and puppy appear as similar matching signals in documents A and B, respectively. Given the semantic context provided by water and sharks in the query, the importance of dolphins should be greater than that of puppy [7]. Matching errors can therefore easily occur if the importance of document words is not emphasized; emphasizing word importance benefits correct matching.
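For contrast with the neural approach, the IDF weighting that traditional models rely on can be sketched as follows. This is a toy example with assumed documents, and the smoothed formula is one common variant, not necessarily the exact one in [2]:

```python
import math

def idf(term, docs):
    # smoothed inverse document frequency: rare terms get higher weight
    df = sum(1 for d in docs if term in d)
    return math.log((len(docs) + 1) / (df + 1)) + 1

# Toy corpus built from the example fragments above.
docs = [
    {"dolphins", "swimming", "water", "food"},
    {"yellow", "puppy", "fell", "water"},
    {"sharks", "living", "water"},
]
# 'water' appears in every document, so it is weighted less than 'sharks'.
assert idf("water", docs) < idf("sharks", docs)
```

Note that such a scheme weights only exact term occurrences; it has no way to say that dolphins matters more than puppy, which is precisely the gap the model below addresses.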
In this work, we take document word importance into consideration while modeling the relevance between query and document. Specifically, we propose a deep top-k relevance matching model (DTMM) for ad-hoc retrieval. DTMM is a deep retrieval model that takes the raw text of query and document as input and automatically extracts relevance signals through a deep neural network to produce the final relevance score. DTMM first builds an interaction matrix, where each element denotes the interaction between the corresponding query word and document word. At the same time, we compress the embedding of each document word into a small number of dimensions and concatenate it to the interaction matrix. In this way, the interaction matrix captures not only the matching signals but also document word importance. Then, we apply a top-k pooling layer on the interaction matrix to obtain the essential top relevance signals. The top relevance signals associated with each query term are projected into a multi-layer perceptron to get a query-term-level matching score. Finally, the query-term-level matching scores are aggregated with a term gating network to produce the final relevance score.
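The top-k pooling step can be sketched in isolation. This is an illustrative toy example with assumed similarity values and dimensions, not the authors’ implementation:

```python
import numpy as np

def topk_pooling(interaction_row, k):
    # keep only the k strongest matching signals for one query term,
    # in descending order, discarding the weak tail
    return np.sort(interaction_row)[::-1][:k]

# Toy interaction matrix: 2 query terms x 5 document terms
# (entries stand in for word-word similarity scores).
M = np.array([[0.9, 0.1, 0.4, 0.0, 0.7],
              [0.2, 0.8, 0.3, 0.6, 0.1]])
pooled = np.vstack([topk_pooling(row, k=3) for row in M])
assert pooled.shape == (2, 3)
assert np.allclose(pooled[0], [0.9, 0.7, 0.4])
```

Each pooled row would then be fed, per query term, into the multi-layer perceptron described above.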
We have conducted extensive experiments to verify the effectiveness of our model on two benchmark datasets (MQ2007 and Robust04). The experimental results show that DTMM can significantly
NO. 18. FORCEMEAT FOR RAISED AND OTHER COLD PIES.
This is the name given to the soaked bread which is mixed with
the French forcemeats, and which renders them so peculiarly
delicate. Pour on the crumb of two or three rolls, or on that of any
other very light bread, as much good boiling broth, milk, or cream, as
will cover and moisten it well; put a plate over to keep in the steam,
and let it remain for half an hour, or more; then drain off the
superfluous liquid, and squeeze the panada dry by wringing it in a
thin cloth into a ball; put it into a small stewpan or enamelled
saucepan, and pour to it as much only of rich white sauce or of gravy
as it can easily absorb, and stir it constantly with a wooden spoon
over a clear and gentle fire, until it forms a very dry paste and
adheres in a mass to the spoon; when it is in this state, mix with it
thoroughly the unbeaten yolks of two fresh eggs, which will give it
firmness, and set it aside to become quite cold before it is put into
the mortar. The best French cooks give the highest degree of savour
that they can to this panada, and add no other seasoning to the
forcemeats of which it forms a part: it is used in an equal proportion
with the meat, and with the calf’s udder or butter of which they are
composed, as we have shown in the preceding receipt for quenelles.
They stew slowly for the purpose, a small bit of lean ham, two or
three minced eschalots, a bay-leaf, a few mushrooms, a little
parsley, a clove or two, and a small blade of mace in a little good
butter, and when they are sufficiently browned, pour to them as
much broth or gravy as will be needed for the panada; and when this
has simmered from twenty to thirty minutes, so as to have acquired
the proper flavour without being much reduced, they strain it over,
and boil it into the bread. The common course of cookery in an
English kitchen does not often require the practice of the greater
niceties and refinements of the art: and trouble (of which the French
appear to be perfectly regardless when the excellence of their
preparations is concerned) is there in general so much thought of,
and exclaimed against, that a more summary process would
probably meet with a better chance of success.
A quicker and rougher mode of making the panada, and indeed
the forcemeat altogether, is to pour strong veal broth or gravy upon
it, and after it has soaked, to boil it dry, without any addition except
that of a little fine spice, lemon-grate, or any other favourite English
seasoning. Minced herbs, salt, cayenne, and mace, may be beaten
with the meat, to which a small portion of well-pounded ham may
likewise be added at pleasure.
CHAPTER IX.
Boiling, Roasting, etc.
A thorough practical knowledge of the processes described in the
present chapter will form a really good cook far sooner and more
completely than any array of mere receipts can do, however minutely
they may be explained; they should, therefore, be well studied and
comprehended, before any attempt is made to compound difficult
dishes; and the principles of roasting, boiling, stewing, and baking, at
least, ought to be clearly understood by every servant who
undertakes the duties of what is called plain cookery, which is, in
fact, of more importance than any other, because it is in almost
universal request in this country for families of moderate fortune; and
any person who excels in it will easily become expert in what are
considered the higher branches of the art.
In a vast number of English kitchens the cookery fails from the
hurried manner in which it is conducted, and from the excess of heat
produced by the enormous coal-fires constantly kept burning there at
all seasons, without which ignorant servants imagine no dinner can
be properly cooked; a mistake which cannot fail quickly to become
apparent to the most inexperienced reader who will give a patient
trial to the slow methods of cooking recommended in the following
pages. These will be found to combine exceeding economy in the
consumption of fuel, with a degree of superiority in the food prepared
by them, which would scarcely be credited unless it were put to the
test. In stewing, and baking in closely covered vessels, this
superiority is more particularly remarkable; and we would willingly
give a far larger space to so useful a subject than our limits will
permit: we are, however, compelled, though with regret, to restrict
ourselves to such details as we have now supplied in various parts
of this volume.
TO BOIL MEAT.
[Illustration: Iron Boiler.]
Cut into large dice two pounds of lean veal, and two pounds of fat
bacon cured without saltpetre, two large carrots, and two onions; to
these add half a pound of fresh butter, put the whole into a stewpan,
and stir it with a wooden spoon over a gentle fire until the veal is very
white, and the bacon is partially melted; then pour to them three
pints of clear boiling broth or water, throw in four cloves, a small
bunch or two of thyme and parsley, a bay-leaf, and a few corns of
white pepper; boil these gently for an hour and a half, then strain the
poêlée through a fine sieve, and set it by in a cool place. Use it
instead of water for boiling the various articles we have already
named: it will answer for several in succession, and will remain good
for many days. Some cooks order a pound of butter in addition to the
bacon, and others substitute beef-suet in part for this last.
A BLANC.
Put into a stewpan one pound of fat bacon rasped, one pound of
beef-suet cut small, and one pound of butter; the strained juice of
two lemons, a couple of bay-leaves, three cloves, three carrots, and
three onions divided into dice, and less than half a pint of water.
Simmer these gently, keeping them often stirred, until the fat is well
melted, and the water has evaporated; then pour in rather more than
will be required for the dish which is to be cooked in the blanc; boil it
softly until all the ingredients have given out their full flavour, skim it
well, add salt if needed, and strain it off for use. A calf’s head is often
boiled in this.
ROASTING.
[Illustration: Saucepan with Steamer.]

The application of steam to culinary purposes is becoming very general in our kitchens at the present day, especially in those of large establishments, many of which are furnished with apparatus for its use, so admirably constructed and so complete, that the process may be conducted on an extensive scale with very slight trouble to the cook; and with the further advantage of being at a distance from the fire, the steam being conveyed by pipes to the vessels intended to receive it. Fish, butcher’s meat, poultry,
vegetables, puddings, maccaroni, and rice, are all subjected to its
action, instead of being immersed in water, as in simple boiling: and
the result is to many persons perfectly satisfactory; though, as there
is a difference of opinion amongst first-rate cooks, with regard to the
comparative merits of the two modes of dressing meat and fish, a
trial should be given to the steaming on a small scale, before any
great expenses are incurred for it, which may be done easily with a
common saucepan or boiler, fitted like the one shown above, with a
simple tin steamer. Servants not accustomed to the use of these,
should be warned against boiling in the vessel itself any thing of
coarse or strong flavour, when the article steamed is of a delicate
nature. The vapour from soup containing onions, for example, would
have a very bad effect on a sweet pudding, and on many other
dishes. Care and discretion, therefore, must be exercised on this
point. By means of a kettle fixed over it, the steam of the boiler in the
kitchen range may be made available for cooking, in the way shown
by the engraving, which exhibits fish, potatoes, and their sauces, all
in progress of steaming at the same time.[69] The limits of our work
do not permit us to enter at much length upon this subject, but the
reader who may wish to understand the nature of steam, and the
various modes in which its agency may be applied to domestic
purposes, will do well to consult Mr. Webster’s excellent work,[70] of
which we have more particularly spoken in another chapter. The
quite inexperienced cook may require to be told, that any article of
food which is to be cooked by steam in a saucepan of the form
exhibited in the first of the engravings of this section, must be
prepared exactly as for boiling, and laid into the sort of strainer
affixed to the top of the saucepan; and that water, or some other kind
of liquid, must be put into the saucepan itself, and kept boiling in it,
the lid being first closely fixed into the steamer.
69. Invented and sold by Mr. Evans, Fish-street Hill.
71. Securely closed with a band of paste passed round the edges, and pressed
tightly over them. The lute or luting used for chemical apparatus is of a kind
of clay.
STEWING.