
Slide 9:

I will discuss this study's method. First, let's review the study
participants. For this study, 86 American English-speaking adults
between 18 and 49 years old were recruited, with varied backgrounds in
first language, education, and professional history. Additionally, two
experienced English-as-a-second-language (ESL) teachers were present to
grade the participants' writing.

Slide 10:
First, let me explain the experiment procedure. Participants initially
completed the Oxford Online Placement Test (OOPT) to determine their
assigned assessment condition, and then completed three writing tasks in
Microsoft Word. The tasks were to email a group-project partner
apologizing and suggesting a solution, to write an online review of a
food service, and to give an opinion on an online-education topic. The
study was conducted with two groups of participants: one had access to
Microsoft Word's spelling tools, while the control group had no access
to any such tools. All sessions were recorded using QuickTime Player's
screen-recording function.

Slide 11:

How did the two evaluators rate each participant? The score for lexical
form and meaning at the sentence level depends on the participant's
ability to write words correctly, while their use of form and syntax to
express meaning determines the score for morphosyntactic form and
meaning. Cohesion in forms and meanings at the discourse level is graded
on participants' expressive ability, with consistent use of cohesive
devices being a crucial factor in scoring. At the discourse level,
participants are evaluated on their use of referential forms and logical
connectives to achieve cohesion in both sentences and discourse.

Slide 12:

The evaluation also considers how well these cohesive forms are
connected to their intended referential meanings. Functional meaning is
evaluated by how effectively the author conveys their intended message
in writing. Implied meaning is assessed for its sociolinguistic,
psychological, and rhetorical implications.

Slide 13:

How was the data analyzed? The study created a coding scheme to
document participants' writing processes and analyze the use of spelling
and reference tools by L2 learners. Following coding, the two assessors
received grading details, written responses, and grading sheets for each
of the three written tasks produced by the participants. The grading
process involved both raters grading all three tasks. For each response,
raters assigned scores between 0 and 5 based on six components of the
scoring scale. The raters also calculated the average scores for each of
the six components.
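As a rough sketch of the averaging step described above, the snippet below averages the two raters' scores per rubric component. The component names and numbers are illustrative placeholders, not the study's data:

```python
# Hypothetical per-rater scores for one participant on one task.
# Each component maps to the two raters' scores on the 0-5 scale.
scores = {
    "component_1": [4, 5],
    "component_2": [3, 4],
}

def average_scores(scores):
    """Average the raters' scores for each rubric component."""
    return {comp: sum(vals) / len(vals) for comp, vals in scores.items()}

print(average_scores(scores))  # {'component_1': 4.5, 'component_2': 3.5}
```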

Slide 14:

Here are the study results. Tables 3 to 5 display the average scores per
task and the ANOVA (analysis of variance) outcomes. Based on
Tables 3-5, there is no performance difference between participants with
access to spelling or reference tools and those without. Thus,
participants who used vocabulary tools did not exhibit any advantage in
these ratings compared to those who did not, regardless of the task.
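For readers unfamiliar with the test, a one-way ANOVA compares group means via an F statistic: between-group variance divided by within-group variance. A minimal from-scratch sketch, with made-up score lists rather than the study's data:

```python
def one_way_anova_f(*groups):
    """Compute the one-way ANOVA F statistic for two or more groups."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Identical score distributions give F = 0 (no between-group variance),
# which is the "no performance difference" pattern reported above.
print(one_way_anova_f([3, 4, 5], [3, 4, 5]))  # 0.0
```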

Slide 15:

Based on Table 6, the number of spelling errors detected by the tool is
similar across the three tasks when controlling for writing length. For
instance, the average number of errors identified per 100 words in Task
1, Task 2, and Task 3 is 4.74, 4.59, and 4.59, respectively. These
similar outcomes suggest that the task variations do not affect the
number of spelling errors detected by the tool.
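The per-100-words figure above is a simple length normalization. A one-line sketch (the example counts are made up, not taken from Table 6):

```python
def errors_per_100_words(error_count, word_count):
    """Normalize a raw error count by text length, per 100 words."""
    return round(error_count / word_count * 100, 2)

# e.g. 12 detected errors in a 250-word response:
print(errors_per_100_words(12, 250))  # 4.8
```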

Slide 16:

The third finding concerns the detected spelling errors, which primarily
consisted of misspelled words with one or more missing letters or an
incorrect vowel or consonant. Table 7 indicates that the frequency of
these errors was similar across the different forms of detected errors.
For instance, the frequency of typo errors was 76.67%, 86.76%, and
85.84% in Tasks 1, 2, and 3, respectively. Overall, there was only
slight variation in the types and frequency of errors across the three
tasks.

Slide 17:

Table 8 shows that when participants are informed of a word's
form-related errors, some attempt to modify not only the form, including
spelling, spaces, punctuation, and capitalization, but also swap the
word for another option. Consequently, some reconsider the word's
meaning and may ultimately select a different, better choice.

Discussion: Spelling tools often employ contextual analysis to determine
the most likely word a user intended to type, based on the surrounding
words in a sentence.
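One common way to implement such contextual analysis is with n-gram frequencies: rank candidate corrections by how often they follow the preceding word. The toy bigram table below is invented for illustration, not how any particular product works:

```python
# Toy bigram counts: how often word B follows word A in some corpus.
# These counts are invented for illustration.
BIGRAMS = {
    ("drink", "water"): 120,
    ("drink", "waiter"): 1,
}

def rank_by_context(prev_word, candidates):
    """Order candidate corrections by how often they follow prev_word."""
    return sorted(candidates, key=lambda w: BIGRAMS.get((prev_word, w), 0),
                  reverse=True)

print(rank_by_context("drink", ["waiter", "water"]))  # ['water', 'waiter']
```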

Spelling tools are programmed with a database of common words and their
frequency of use in a given language. If a misspelled word doesn't match
any common word but is similar to a more frequently used word, the tool
may suggest the more common word.
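A minimal sketch of that frequency-based lookup, using single-character edits as the similarity measure. The word list is a made-up stand-in for a real frequency database:

```python
import string

# Made-up frequency table standing in for a corpus-derived database.
WORD_FREQ = {"their": 5000, "there": 4800}

def one_edit_variants(word):
    """All strings one deletion, transposition, substitution, or insertion away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    substitutions = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + substitutions + inserts)

def suggest(word):
    """Suggest the most frequent known word within one edit, if any."""
    candidates = [w for w in one_edit_variants(word) if w in WORD_FREQ]
    return max(candidates, key=WORD_FREQ.get, default=None)

print(suggest("thier"))  # their
```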

Sometimes, users may not pay close attention to the suggestions provided
by spelling tools and may accept them without verifying that the
suggestion is the correct word. This can lead to an incorrect word being
substituted for the intended word.

Typology:

If I want to write about classifiers in Japanese: classifiers can be categorized into eight
types, for example the first type, "numerical classifiers". But when Japanese describes
numerals, it often uses no in all cases, for example "three books" is misu no hon. So how do I
describe this?

As for serial verb constructions, I do not know how to write about them.

It has four categories:

Symmetrical and asymmetrical

Contiguity

Wordhood of components

Marking of grammatical categories in SVCs

"昼ごはんを食べて、水を飲みます。"
(Romaji: "Hirugohan o tabete, mizu o nomimasu."
