
Problem solving in computer games

Marcussen, D.B.
IT University of Copenhagen, Rued Langgaards Vej 7, DK-2300 Copenhagen S, Denmark
dbma@itu.dk

Hvoslef, H.H.
IT University of Copenhagen, Rued Langgaards Vej 7, DK-2300 Copenhagen S, Denmark
hhhv@itu.dk

ABSTRACT
This paper studies whether game literacy affects efficiency in problem solving in computer games. This was tested with 30 participants, divided between people with extensive computer gaming experience and people inexperienced in playing computer games. The participants were given the same level of information about a previously unknown game and were asked to play through 10 levels, in order to study how completion time and Death-count vary between the groups. The experimental data suggest a tendency for Avid gamers to be more efficient in learning game mechanics, but no difference in applying these mechanics to problems in games. The subjective data suggest that there is a fine line between challenge and frustration, and that in order to engage the player this balance needs to be integrated into the gameplay.

General Terms
Measurement, Performance, Experimentation, Human Factors

Keywords
Flash game, games, learning, experimental design, HCI, gaming, between-groups design.

1. INTRODUCTION
Playing a game is a learning process. A game has to teach you a set of rules and a set of mechanics just to be able to interact with the system. A game mechanic is, in this context, a method invoked by agents, designed for interaction with the game state (Sicart, 2008); in other words, the tools and methods the player has at their disposal to interact with the game. After the player has learned these basic tools of interaction, they must learn how to use them to solve the problems and challenges in the game. A good game can therefore be seen as a learning machine (Gee, 2005): it should be designed to teach the player how to play. The learning curve of a game is vitally important, so that the player feels challenged while interacting with the system but not overwhelmed by the difficulty or complexity of the problem at hand. This has been discussed as the phenomenon of flow (Csikszentmihalyi, 1990), which is crucial for an optimal experience in any activity with a clear goal, including games. Flow is the balance between the skill of the participant and the challenge level of the task at hand. If the task is too difficult in relation to the player's skill level, the player will be frustrated. If the player's skill is much higher than the challenge level of the task, the player will be bored, as he will not find a suitable challenge. The balance lies in steadily increasing the difficulty of the challenges while constantly minding the buildup of knowledge and skill in the player. A crucial point is that the player has to know about the tools before the game can ask the player to use them in problem solving. This paper aims at investigating this specific area of HCI.

The ability to understand and acquire this knowledge can be seen as a type of literacy (Gee, 2003), a mode of understanding that Avid computer gamers will have built up over time. In this view, every game can be seen as a semiotic domain, or a family of related semiotic domains per game genre. When we play games we gain resources that prepare us for future learning and problem solving in the domain and, perhaps more importantly, in related domains (Gee, 2003). The hypothesis tested in this paper is therefore whether playing games yields abilities for solving abstract problems more efficiently in other games. Games are often characterized by a trial-and-error approach, and it has been postulated that playing games can therefore support the development of problem-solving skills (Inkpen et al., 1995; Higgins, 2000; Whitebread, 1997). The player learns and tests their ability to solve logical problems, something that is applicable to problems outside the game-related sphere. To investigate this, two groups of people, Avid gamers and Inexperienced players, were tested on how efficiently they would solve the problems in a previously unknown puzzle game, given the same level of information and training. Efficiency is in this case defined as completion time and Death-count on each of the game's 10 problems. This study expects that game literacy affects players' efficiency in problem solving in games.

2. METHOD

2.1 Experimental design


This study is a group difference study.

Independent variable, with two levels:
- Inexperienced players
- Avid gamers

Dependent variables:
- Completion Time in Seconds (CTS)
- Death-count

The two groups are the result of a pre-task survey in which the participants estimated how frequently they play computer games and at what level they consider themselves gamers. The experiments took place in a laboratory setting. All participants were asked to play the same 10 levels of the game Kill Me, while recording the number of times their avatar died per level (Death-count) and the time taken to complete each level (Completion Time in Seconds, CTS), as well as answers to subjective questions concerning each level in the game. These results were then compared between the two groups.

The game used is a puzzle platformer developed at the ITU called Kill Me. It consists of 16 increasingly difficult levels where the player needs to navigate from A to B using the game's mechanics to solve each challenge. The game's main mechanic is that the player can die multiple times and use the subsequent bodies as tools to solve the navigational challenges in each level. The player can die at strategic places and use these bodies to:
- traverse spikes,
- act as a trampoline to reach high ledges,
- act as weights for triggers that change the game state.

2.2 Participants
30 participants, drawn from the available students at the IT University of Copenhagen, were asked to take part in the experiment. All participants were male, between 19 and 39 years old; to keep the sample homogeneous, no female participants were included. On the basis of the pre-task survey the participants were divided into two groups: Avid gamers, defined as people who play computer games frequently and have a lot of gaming experience, thereby having acquired game literacy; and Inexperienced players, defined as people who aren't accustomed to playing computer games and who aren't game literate.

The first few levels introduce the player to the mechanics of the game, and can therefore be seen as a tutorial showing the player the tools she can use to solve the challenges in the game. The mechanics taught are:
- Level 1: Spiked corpses work as a platform (see Figure 1).
- Level 2: Multiple corpses can be used as platforms at the same time.
- Level 3: Corpses on the ground can be used to increase jumping height.
- Level 4: The character dies if he falls more than 5 tiles.
- Level 5: Two corpses must be used to jump on spikes.
- Level 6: Colored buttons remove same-colored blocks when pressed.

Further on in the game the complexity increases, and multiple mechanics must be used simultaneously to solve the problems. The game prints out two metrics between each level: a total tally of the number of times the avatar died during the level, and the total completion time (Completion Time in Seconds, CTS). Both were used as metrics of how well the player progressed in the game. Kill Me was selected as a benchmark since it has a well-developed learning curve and prints out relevant information to measure. Kill Me also subverts genre conventions, since death is a way to progress rather than a punishment, as it commonly is in games, especially puzzle games. This removes knowledge of genre conventions as a confounding factor in the study: all of the participants had to learn a completely new set of mechanics.

2.3 Materials

2.3.1 Hardware
- 1 HP Z400 Workstation computer in the ITU 4E GameLab.

2.3.2 Software
- Windows XP.
- Kill Me, a computer game written in Flash, in a special edition made for this study with restart disabled (the original game is available online at http://www.gamepirate.com/game/kill-me.html).
- Surveys for the participants to fill in during the experiment, hosted online by Google Docs (see appendix 1).

2.4 Procedure
2.4.1 Pilot study
Before beginning the actual experiment, a couple of pilot studies were conducted. The first was a quick test with a male ITU student with previous knowledge of the game. He went through the first 13 levels while thinking out loud about the things he encountered. On the basis of this first pilot, level 8 of the game was eliminated, the restart function was removed and a cheat-sheet was developed. In level 8 it was possible to get stuck, making restart the only solution. Since the restart option in the game resets both Death-counts and Completion times, the option was removed from our local copy of the game to avoid having the participants corrupt the data, and the eighth level was taken out of the experiment.

The second pilot was conducted as if it were the actual experiment. On its basis a couple of questions in the surveys were rephrased, a participant ID was added to keep track of the participants across the three surveys without taking away their anonymity, and more instructional meta-text was added to the cheat-sheet and surveys to make the experiment run smoother. It proved difficult to register Death-count and Completion Time in Seconds while making sure the participant didn't feel subjected to surveillance; therefore, data registration was incorporated into the task surveys, so that the participants recorded the data themselves.

Figure 1: A screenshot of level 1 of Kill Me.

2.4.2 Cheat-sheet
The cheat-sheet was created so that people with little or no prior gaming experience could complete all 10 levels. It provided an overview of all the game mechanics, and on the basis of the cheat-sheet alone it was possible to solve all levels. This was done because the information given to the player by the game was insufficient. For example, the players were not informed about the ability to carry bodies until level 5, despite the fact that the player needs to move bodies as early as level 3. Without a cheat-sheet to refer to, there would be a risk of the player getting stuck.

2.4.3 Execution
The experiment was executed in a laboratory environment. The game was installed on a computer, and the three electronic surveys were available in an open browser with a tab for each. A cheat-sheet was placed by the computer, explaining the controls in detail as well as the rules and goals of the game. The experiment was accompanied by three surveys:
1. A pre-task survey, to determine which of the two groups the participant belonged to.
2. A task survey, one page per level, with a chance to give subjective feedback on each level.
3. A post-task survey, to get an overall review of the participant's gaming experience.

Before beginning, the participants were thanked for their compliance and briefly instructed about the experiment. They were then asked to sit down, and an attempt was made to make them feel at ease by providing water and snacks. They were informed that they would be testing a game, not their personal abilities, and were made aware of the cheat-sheet. They were instructed to fill in the pre-task survey and then proceed to the game. The participants were asked to complete 10 levels of the game, levels 1 through 7 and 9 through 11, and to fill in the appropriate page of the task survey after each level, noting down completion times, Death-counts and answers to subjective questions concerning each level. After completing all 10 levels, the participant was asked to fill in a post-task survey reflecting on the process. The experimenters monitored the participants discreetly, checked that the instructions were followed correctly, and answered any questions that came up. Any requests for hints on particular levels were answered with the following statement: "It is possible to complete all levels with help from the cheat-sheet only." After all the levels had been played and all the surveys filled in, the participant was thanked again for their time and cooperation.

RESULTS

2.5 Description & Analysis
Combining all three surveys, 102 individual data points were collected from each of the 30 participants, resulting in over 3,000 separate data points, most of which were subjective data. The most important part of the data was the Completion time and Death-count for each level, 60 data points in all. One of the participants did not write down the Completion time on the last level and was therefore removed from the results.

2.5.1 Group division
The independent variable of this study, the two groups (Inexperienced players and Avid gamers), was determined by two questions in the pre-task survey: "How often do you play computer games?" and "I consider myself?" The first question refers to the frequency of the participant's playing habits, while the second refers to their relationship to computer games (see appendix 1).

Figure 2: Group division according to the participants' computer gaming habits.

The groups were divided as shown in Figure 2. The participants on the left side of the red line are the Inexperienced players in our definition; they play computer games less frequently than the participants on the right side, the Avid gamers. This division is based solely on the first question, but there is a significant correlation between the two questions which supports it (Spearman ρ = .833, N = 30, p < .001).
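The Spearman correlation used to validate the group division works on ranks rather than raw values, which suits ordinal survey answers. The sketch below is a minimal stdlib implementation (rank-transform, then Pearson on the ranks) run on hypothetical coded answers; the study's own analysis was presumably done in a statistics package, and the response codings here are illustrative assumptions, not the study's data.

```python
from statistics import mean

def ranks(xs):
    """Assign 1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical coded answers: 1 = lowest frequency/self-rating, 5 = highest.
frequency   = [1, 2, 2, 3, 4, 4, 5, 5, 3, 1]
self_rating = [1, 1, 2, 3, 3, 4, 5, 4, 2, 2]
print(f"rho = {spearman(frequency, self_rating):.3f}")
```

A rho close to 1 between the two questions is what justifies collapsing them into a single grouping criterion.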

2.5.2 Time data


The time data is visualized per level in Figure 3, which shows the mean values and spread of the Completion Time in Seconds (CTS), grouped by Inexperienced players (blue) and Avid gamers (green). Just by looking at the graph it is apparent that the Inexperienced players consistently have a higher mean time and a larger spread. This is most obvious on levels 1, 4, 6 and 7, while there seems to be little difference on the later levels 9 through 11.

In order to test the hypothesis that Avid gamers are significantly faster than Inexperienced players, a statistical test for assessing the significance of the differences between the two samples is needed. When working with two groups of independent samples the independent t-test is useful, but since the t-test is based on mean values it requires both samples to be normally distributed. Because the data is not normally distributed, the Mann-Whitney U test, a non-parametric test based on rank order, was used to determine significance. Using the Mann-Whitney U test, one can see a significant difference in the distribution of the time values on level 1 (p = .006), level 4 (p = .048) and level 6 (p = .028), at a significance level of .05. This is not enough to show a significant difference between the overall times of the Inexperienced players and the Avid gamers (p = .136).
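For intuition, the U statistic behind the Mann-Whitney test counts, over all cross-group pairs, how often one group's value exceeds the other's. The sketch below computes U and a two-sided p-value via the large-sample normal approximation (no tie or continuity correction); the study's analysis was presumably run in a statistics package, and the sample times below are invented for illustration.

```python
from math import erf, sqrt

def mann_whitney(a, b):
    """Mann-Whitney U with a two-sided normal-approximation p-value.
    U counts cross-group pairs where a-value > b-value (ties count 0.5)."""
    n1, n2 = len(a), len(b)
    u1 = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma            # z <= 0 because u is the smaller U
    p = 2 * 0.5 * (1 + erf(z / sqrt(2)))
    return u, min(p, 1.0)

# Invented per-level completion times (seconds) for the two groups
inexperienced = [95, 120, 88, 140, 210, 75, 160]
avid          = [60, 72, 55, 90, 110, 48, 80]
u, p = mann_whitney(inexperienced, avid)
print(f"U = {u}, p = {p:.3f}")
```

For real analyses an implementation with tie correction and exact small-sample tables (such as the one in standard statistics packages) is preferable to this approximation.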

Figure 3: Mean completion time in seconds for each group, with spread, per level.

Looking at the sum of Completion Times in Seconds (CTS), the Inexperienced players have a much higher overall mean time (934.54 s) than the Avid gamers (641.49 s). The Inexperienced players also have a much larger spread (std. deviation 611.00 versus 239.98), which indicates that outliers might be muddying the result.

Table 1: Mean and std. deviation of Sum CTS

  Group                   Mean       Std. Deviation
  Inexperienced players   934.5392   610.99596
  Avid gamers             641.4850   239.98261

Because of the small sample size, the Shapiro-Wilk normality test is useful for determining whether the distribution of the data is normal. The time data of the Inexperienced players is not normally distributed (Shapiro-Wilk, p = .001), while the time data of the Avid gamers is normally distributed (Shapiro-Wilk, p = .223), at a normality level of .05. The lack of normal distribution can be compensated for by log-transforming the data using the natural logarithm. Via log-transformation it was possible to make the data normally distributed, but since the data are time values, the log-time values would have been illogical to work with in the end, and the transformed values were dropped.

2.5.3 Death-count
The mean Death-count over all 10 levels of the game, grouped by Inexperienced players and Avid gamers, is shown in Table 2 along with standard deviations. Here the two groups have fairly close mean values and std. deviations: the Inexperienced players have a slightly higher mean, while the Avid gamers have a larger spread.

Table 2: Mean and std. deviation of Sum Death-count

  Group                   Mean      Std. Deviation
  Inexperienced players   57.3846   20.83482
  Avid gamers             54.3750   22.26170

Figure 4 visualizes the data per level. The Inexperienced players have a higher Death-count on the very first intro level and on levels 5 through 9, while the Avid gamers have a higher Death-count on levels 2 through 4 and level 10. The Death-count spread varies greatly from level to level.
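The log-transformation considered above for the time data can be illustrated without a full Shapiro-Wilk implementation: completion times are typically right-skewed (a few very slow participants), and taking natural logs compresses the long tail. The sketch below measures skewness before and after the transform on invented times; the study itself used the Shapiro-Wilk test, available in standard statistics packages.

```python
from math import log
from statistics import mean, pstdev

def skewness(xs):
    """Standardized third moment; roughly 0 for symmetric data,
    positive for a long right tail."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# Invented right-skewed completion times in seconds (note the slow outlier)
times = [35, 40, 42, 48, 55, 60, 70, 90, 140, 300]
log_times = [log(t) for t in times]

print(f"skewness of raw times: {skewness(times):.2f}")
print(f"skewness of log times: {skewness(log_times):.2f}")
```

The transform helps normality, but as noted above, log-seconds are awkward to interpret, which is a common reason to fall back on non-parametric tests instead.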

Figure 4: Mean Death-count and spread per level for the two groups.

Again there is no significant difference between the Inexperienced players' and the Avid gamers' sums of Death-count (Mann-Whitney, p = .624, at a significance level of .05). As Figure 4 reveals, some levels have very high and widely spread Death-counts, especially levels 5, 7 and 10, while some levels have a very low Death-count, especially levels 2, 3, 4 and 6. This spread can to some degree be explained by differences in the design of the levels.

Figure 5: Level 4 visual walkthrough.

Level 4 has the lowest Death-count overall and between groups, with 29 participants dying only once and one participant dying twice. In order to complete this level the player has to die once, using the first body as a trampoline up to the platform marked with the exit. But killing the avatar isn't done for you in this level: the player has to climb higher than the entry level in order to jump down and kill himself by falling more than 6 tiles, as illustrated by the red arrows in Figure 5.

Figure 6: Level 5 visual walkthrough.

Level 5, on the other hand, can be solved with a minimum of two deaths. Here, however, the level design lets the player walk directly out onto spikes, making it very easy to get killed more than necessary. At this level the mean (std. deviation) of the Inexperienced players' Death-count is 14 (3.8), with one participant dying as many as 47 times and only two making it with the minimum of 2 deaths. The Avid gamers' mean Death-count is slightly lower at 11 (3.6), but with one participant dying 52 times to complete the level and again two making it with the bare minimum.

Because Death-count is both a game mechanic and a variable affected by the player's ability and/or strategy, it is a tricky variable to use. Despite this, there is a positive correlation between the overall Sum Death-count and Sum Completion Time in Seconds (Spearman ρ = .818, N = 29, p < .001; the correlation is significant at the .01 level). In other words, the more a player dies, the longer he will spend on the level.

2.6 Subjective findings

After each level was completed, the participants were asked to answer questions connected to the level. The questions were related to:
- How frustrating it was to solve the level.
- Whether they felt the level gave them enough information to solve it.
- How entertaining they felt the level was.
- Whether they wanted to continue playing after this level.
- How challenging they felt the level was.

All these questions were summated ratings set up on a Likert-type scale (Coolican, 2009) from 1 to 5. After completing all 10 levels the participants were asked whether they had enjoyed playing the game, and whether they had had a certain goal when solving the problems in the game.

The relationships between the answers connected to each level were investigated with Pearson's product-moment correlation coefficient. There were no violations of normality or homoscedasticity. There was a strong positive correlation between reported frustration and reported difficulty (r = .688, N = 300, p < .001), and between reported entertainment and how much the participants wanted to continue playing (r = .629, N = 300, p < .001). These findings are not surprising: people who found a level frustrating also found it difficult, while if they enjoyed playing a certain level they also wanted to continue playing. Difficulty and entertainment had a moderate positive correlation (r = .313, N = 300, p < .001), indicating that, to a certain extent, solving a challenging problem in Kill Me is an enjoyable experience. This is supported by the subjective comments, where multiple participants responded positively to the levels they thought the most difficult. Example comments are "It was good!" and "A good level where you had to think a bit", where both participants rated the level highly on both enjoyment and difficulty. This finding supports flow theory, where a difficult and challenging problem is pleasurable to solve, giving a positive reaction.

Correlations for each of the two groups separately showed a few interesting differences. Where there was no correlation between frustration and wanting to continue for the Avid gamers (r = -.078, N = 160, p = .325), there was a small to moderate negative correlation between the two values for the Inexperienced players (r = -.223, N = 140, p = .008). This indicates that the Inexperienced players tended to be less interested in the game the more frustrating the experience was, while the Avid gamers did not have that tendency. The perceived difficulty of each level increased along with the levels (see Figure 7), showing that the further the players progressed in the game, the more challenged they felt in solving the problems. Since our intention was to use a game with a steady learning curve, this supports our assumption that Kill Me has a steady difficulty increase from beginning to end.
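The Pearson correlations reported above can be reproduced with a few lines of stdlib code. The sketch below hand-rolls the product-moment formula on hypothetical 1-5 Likert ratings; the study's real response data is not reproduced here.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 Likert ratings for one batch of level responses
frustration = [1, 2, 2, 3, 4, 5, 4, 3]
difficulty  = [1, 1, 2, 3, 5, 5, 4, 4]
print(f"r = {pearson(frustration, difficulty):.3f}")
```

Strictly speaking, Pearson's r assumes interval-scale data; applying it to summated Likert ratings, as the paper does, is a common and usually defensible approximation.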

Figure 7: Average perceived difficulty per level.

When one looks closer at this data, a few interesting anomalies appear. The death and time data from level 7 show a large difference between Avid gamers and Inexperienced players, with the Avid gamers having a much smaller spread and mean on both accounts. When one takes a closer look at the perceived difficulty of that specific level, one sees the opposite trend: the Avid gamers rate it higher than the Inexperienced players. This seems contradictory, as objectively they were much faster and more efficient in solving it. One can also see a trend where the Avid gamers rate the later levels as more difficult than the Inexperienced players do, while the opposite can be detected on the earlier levels. This can be explained by the time data, where the Avid gamers solve the earlier levels quicker than the Inexperienced players but this lead shrinks in the later levels. Since the Avid gamers felt less challenged on the earlier levels, they experience a larger contrast in relative challenge, compared to the Inexperienced players, who experience a gentler difficulty curve. Thus even if the Inexperienced players use more time on the challenges, they feel a steadier increase in difficulty along with an increasing understanding of the mechanics at hand. For the Avid gamers the experience is somewhat different: they cruise through the first levels efficiently, facing little challenge relative to their skill set. When they finally meet levels that challenge them, these feel much harder than what they experienced earlier, making the relative difficulty curve much steeper and leading them to rate difficulty higher than the Inexperienced players.

Figure 8: Average level of frustration per level.

Perceived frustration peaks at levels 5 and 10 for both Avid gamers and Inexperienced players, which seems related to Death-count, as Death-count peaks at the same levels. This indicates that dying repeatedly is perceived as more frustrating than spending a long time on a problem.

2.7 Summarizing the findings


To summarize the results: even though the Inexperienced players generally have a higher completion time than the Avid gamers, the difference between the scores is not significant enough to conclude that gamers are indeed more efficient in problem solving. This is due to the fact that the scores of the Inexperienced players vary a lot. Death-count scores are also not significantly different, even though the Inexperienced players generally die more than the Avid gamers. Death-count has proven to be a tricky variable, mainly because it is a game mechanic and not just a metric: it is influenced by the player's ability and strategy as well as by the design of each level. Despite this, Completion time and Death-count per level are related; the longer it takes to complete a level, the more times the player is likely to die. The subjective findings indicate that there is a relationship between how difficult a player perceives a level to be and how frustrating it is. Also, if the player is entertained by a level, he or she is more inclined to continue playing. What is interesting is that, to a certain extent, being challenged also entertains the player, something that supports flow theory. This was more apparent with the Avid gamers, while the Inexperienced players tended to become less interested in the game if it was too frustrating. The findings also indicate that the more a player dies the more frustrated he feels, which coincides well with the other subjective findings. The overall perceived difficulty of the game increased along with the levels, thereby keeping the player challenged and engaged. It is interesting to note that the Avid gamers solve the earlier levels a fair deal quicker than their inexperienced counterparts, but that this lead diminishes later in the game. This could indicate that their gaming experience makes it easier to acquire a new set of game mechanics, but that they are not actually better at overall problem solving, questioning the validity of the findings mentioned earlier (Inkpen et al., 1995; Higgins, 2000; Whitebread, 1997).

2.8 Evaluation of methods

2.8.1 Design
The type of study described in this report is defined by Hugh Coolican as a group difference study, a method about which he is very critical. In a group difference study a group with certain characteristics is compared to a group of non-members, just as we have compared gamers (Avid gamers) with non-gamers (Inexperienced players). According to Coolican this type of study is not an experiment, or even a quasi-experiment, because the experimenter can't control the independent variable (Coolican, 2009). In a true experiment the experimenter maintains complete control over all the variables, holds most of them constant, and changes one or more independent variables to measure changes in the dependent variable. This study lacks control of the independent variable, because it is also a measured variable. Without control of the independent variable, it is an observational study, with no possibility of establishing causality, due to too many confounding variables. In fact, on the basis of this study, one can't be sure whether the reason Avid gamers are more efficient in their problem solving than Inexperienced players is that the Avid gamers have more game experience and are game literate. But we can see whether they are more efficient.

2.8.2 Surveys
The three electronic surveys accompanied the game to provide far more in-depth results than the performance metrics alone, though the subjective results are not all clear. The surveys were only tested once, in the second pilot (see 2.4.1), and changes were made on that basis. To really test whether these surveys provided results that could be used for analysis, and that actually answered the hypothesis, a pilot of 4-5 people plus an initial analysis might have proved fruitful. As it was, a lot of the data gathered by the surveys was not usable.

2.8.3 Materials
The game Kill Me proved to be an entertaining game, a factor that helped make sure that all participants completed the 10 levels. That being said, the game is not flawless. The information a player needs to play the game is not evenly distributed over the introductory levels. There is a fine line between introducing the needed game mechanics and solving the puzzles for people, thereby eliminating the element of challenge. To even out this problem the cheat-sheet was developed.

2.8.4 Participants
The group division of this study is in itself problematic. As mentioned before, the participants were divided into two groups based on two questions in the pre-task survey. In hindsight it has become clear that the second question ("I consider myself") was problematic, because the possible responses pointed in different directions, towards either ability or frequency:

I consider myself:
- A non-computer game player.
- A novice computer game player.
- An occasional computer game player.
- A frequent computer game player.
- An expert computer game player.

"Novice" and "expert" point towards ability of playing, while "non-computer", "occasional" and "frequent" point towards frequency. On top of that, "novice" has negative connotations, which might make participants avoid that option. This resulted in a third of our participants choosing this option. So despite the fact that the data of the two questions correlated, the division into the two groups was made solely on the frequency of present gaming habits. In order to do a between-groups experiment, it has proven crucial that the basis of the group allocation is thoroughly thought through and the questions sharply formulated. There might also be a sampling bias in the way participants were chosen in the first place. As mentioned in 2.2, the participants were chosen from the available students at the IT University of Copenhagen. Whether this sample is even representative of the male population of the ITU is hard to say; it is entirely possible that the people who agreed to participate over-represent a certain group, since not all classes are at the ITU every day and are therefore not available. The fact that the participants of this study were not randomly allocated also makes this a non-experiment according to Coolican.

2.8.5 Future research
To make this a true experiment, two randomly allocated groups with no prior experience with games would test a game. The game should be made especially for the experiment, with a specific number of learning levels and increasingly difficult puzzle levels. This could then be tested as an independent-samples or repeated-measures experiment. All the participants should playtest the game. After the initial test, one group would be asked to practice playing computer games several times a week for a period of time, while the other group, the control group, would not. The two groups would then be tested again on the same, or at least a similar, game also made especially for the experiment, and the differences in the dependent variable, the efficiency of problem solving in a game, measured. This would show whether the training and acquired game literacy made a difference, by making the trained group significantly more efficient than the control group. Such a true experiment would also be more time-consuming and possibly more expensive. This study might not be very scientific, but it may serve as a building block in designing a way to test games thoroughly.

3. CONCLUSION
In conclusion, this study has not been able to show a statistically significant difference in problem-solving ability between Avid gamers and Inexperienced players. There were systematic differences and a trend towards shorter completion times on behalf of the Avid gamers, but not enough to establish statistical significance. What this study does show is an outline for usability testing of video games. One can use traits like genre interests and preferences to differentiate between multiple target groups, and use subjective questions along with statistical measurements to get an understanding of the mental state of the player during play. By putting the different players into predefined groups, one can try to tailor factors like:
- the difficulty curve,
- the amount of information the player needs and wants,
- the amount of frustration,
- the amount of entertainment
to specific user groups and target groups, using both metrics and player statements as tools in the design process. Hopefully this method can help designers create video games that are more specifically tailored towards specific audiences.

4. REFERENCES
Coolican, H. (2009). Research Methods and Statistics in Psychology. Oxford: Oxford University Press.
Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. New York: Harper and Row.
Gee, J. (2003). What Video Games Have to Teach Us About Learning and Literacy. Basingstoke: Palgrave Macmillan.
Gee, J. (2005). Learning by design: good video games as learning machines. E-Learning and Digital Media, 2(1), 5-16. Retrieved from http://dx.doi.org/10.2304/elea.2005.2.1.5 on 09.12.10.
Higgins, S. (2000). The logical zoombinis. Teaching Thinking, 1(1).
Inkpen, K.M., Booth, K.S., Gribble, S.D. and Klawe, M.M. (1995). Give and take: children collaborating on one computer. In J.M. Bowers and S.D. Benford (eds), CHI '95: Human Factors in Computing Systems, Denver, CO, ACM Conference Companion, pp. 258-259.
Sicart, M. (2008). Defining game mechanics. Game Studies, 8(2). Retrieved from http://gamestudies.org/0802/articles/sicart on 09.12.10.
Whitebread, D. (1997). Developing children's problem-solving: the educational uses of adventure games. In A. McFarlane (ed.), Information Technology and Authentic Learning. London: Routledge.

5. APPENDIX
Surveys can be found at http://www.itu.dk/people/hhhv/appendix_1
