
THE ASSESSMENT OF

MOBILE COMPUTATIONAL THINKING

Mark Sherman and Fred Martin


Department of Computer Science
University of Massachusetts Lowell
Lowell, MA 01854
(978) 934-1964
msherman@cs.uml.edu, fredm@cs.uml.edu

ABSTRACT
This paper introduces a rubric for analyzing “mobile computational thinking”
(MCT) as represented in App Inventor work products. To demonstrate its
efficacy, this rubric was used to evaluate and compare student work from the
CS and non-CS student cohorts in a mixed-major app design undergraduate
course. Our analysis showed some significant differences between the cohorts,
which were expected, as well as more subtle differences. The rubric
demonstrated that it was sensitive to significant and subtle variations of MCT.
The instrument is available for download and use.

INTRODUCTION
Background
Computational Thinking, the set of creative skills involved in computing [8], has
informed a variety of new introductory computing courses at the undergraduate level that
focus not on specific programming skills for a computing major, but on developing
general CT skills for non-majors, or cross-discipline connections to computing. One such
example is Harvey Mudd College, which explicitly co-teaches computing and biology,
engaging students interested in either [4].
Mobile Computational Thinking, or MCT, is a superset of CT, as mobile platforms
(phones and tablets) provide an additional situatedness of computing, where the device
changes location and context with its user, and is present for much of the user’s
interactions in daily life. Existing CT assessment tools did not cover these new ideas.
This paper presents an instrument, designed for MIT App Inventor, that assesses these
new areas of mobile computational thinking, as expressed in App Inventor projects.
Similar instruments were shown recently to be effective at measuring CT in other
languages and contexts [9].
Testing of the rubric was done in a mixed-major course teaching app design with
MIT App Inventor, where MCT was introduced both to students with advanced computer
science preparation and to students without that expertise.
This course was based on previous courses that taught traditional CS0 and CS1
topics entirely in the context of mobile app design [1,2,10]. App Inventor [6] was the
technical vehicle for instruction and projects, as it implemented the “low floor and high
ceiling” philosophy [5], which was necessary for this deeply constructivist course [3].

Project Goals
The goal was to design a rubric to assess growth in sophistication of MCT patterns
in student work artifacts, as part of a larger project that developed a suite of pedagogical
materials for App Inventor and MCT. After a year of iterative design, the instrument was
tested in a new course, as described in this paper. We succeeded in these goals, and the
rubric is now available for public use.

METHODOLOGY
The Rubric
The rubric characterizes concepts normally involved in the programming aspects of
computational thinking and adds related concepts that are present in mobile
computing—e.g., screen design, event-based programming, location-awareness, and
persistent and shared data.
Table 1 lists all 14 properties of an app that the rubric measures. The table presents
the properties in two categories—general CT and mobile-specific concepts, along with
their corresponding dimension number on the rubric. Each property is described along a
2- to 4-point scale, with increasing points representing more sophistication within the
concept being measured.
General CT (6 items)          Mobile CT (8 items)
(2) Naming                    (1) Screen Interface
(4) Procedural abstraction    (3) Events
(5) Variables                 (6) Component abstraction
(7) Loops                     (10) Data persistence
(8) Conditionals              (11) Data sharing
(9) Lists                     (12) Public web services
                              (13) Accelerometer & orientation sensors
                              (14) Location awareness


Table 1: The 14 dimensions for characterizing Mobile CT in the rubric.


Showing a portion of the rubric in greater detail, Table 2 presents a selection of its
14 distinct properties, referred to as Dimensions. Screen Interface characterizes the
complexity of the app’s visual presentation, including programmatic control of on-screen
elements. Events describes the complexity of the app’s use of event-based programming,
which is how all actions and interactivity in App Inventor are realized. Location
Awareness assesses the ability of the app to capture its real-world location, and the degree
to which that data is processed and used as a feature. The entire rubric is available at [7].
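
App Inventor expresses event handlers as visual blocks rather than text, but the
“interacting event handlers” idea scored at 3 points on the Events dimension can be
illustrated with a short sketch in ordinary code. The handler names and the shared
game_over flag below are hypothetical and do not correspond to actual App Inventor
components; this is a conceptual analogue only.

    # Illustrative sketch (not App Inventor code): two event handlers that
    # interact through shared state. One handler changes state in a way that
    # alters whether the other handler can take effect -- the "interacting
    # event handlers" pattern.

    game_over = False   # shared state (hypothetical)
    score = 0

    def on_reset_button_click():
        """Button handler: ends the round and blocks further scoring."""
        global game_over
        game_over = True

    def on_sprite_touched():
        """Sprite handler: only awards points while the round is active."""
        global score
        if not game_over:   # this handler's opportunity depends on the other
            score += 1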
To develop the rubric, we initially analyzed the technical capabilities of App
Inventor Classic, and then engaged in an iterative process of coding a set of student work
from an App Inventor course taught by a research collaborator. We successively
expanded or collapsed properties until we had a set that spanned most student work, and
yet provided sufficient detail to distinguish different types of projects at various degrees of
computational sophistication.
Since it can represent both the prior understandings of computational thinking as
well as styles particularly relevant for mobile computing, we call it the “Mobile
Computational Thinking” (or Mobile CT) rubric.
1. Screen Interface
   1 point: Single screen with five or fewer visual components that do not
            programmatically change state.
   2 points: Single screen with more than five visual components that do not
             programmatically change state.
   3 points: Single screen, where some components programmatically change state
             based on user interaction with the app.
   4 points: Two or more screens; screens may be implemented as screen components,
             or by programmatically changing visibility of groups of visual components.

3. Events
   1 point: Fewer than two types of event handlers. (Multiple buttons, all with
            “buttonX.onClick,” are of the same type.)
   2 points: Two or more types of event handlers. If an event handler modifies label
             state or sprite position, it’s still in this category.
   3 points: One event handler modifies state in a way that will change the opportunity
             for other event handlers to begin (“interacting event handlers”).
   4 points: n/a

14. Location Awareness
   1 point: No location used.
   2 points: Accesses location and immediately passes it to built-in features
             (such as maps).
   3 points: Accesses location and stores it for later retrieval and use.
   4 points: Inspects location data numerically, processes this data as a feature.
Table 2: A selection of three Dimensions from the Mobile CT Rubric.

The Assignments Studied


The course had eight weekly assignments (A1 to A8). New material was introduced
in one week, and then students had the opportunity to create their own apps based on it
the following week.


The assignments chosen for study were the second group, in which the students
exercised concepts with creative freedom. The chosen assignments were as follows.
(A3) Video game: students implemented an original game design, making use of the canvas
(a 2D animation frame), sprites (images that move and interact on the canvas), and text
labels for scorekeeping and display. (A5) Location-awareness and persistent-data app:
students leveraged GPS location readings, and stored and processed data between user
sessions or shared it with other users. (A7) Refactored-with-lists app: students revisited
one of their prior apps and refactored it to use lists. Students chose different assignments
to re-work, allowing for some extra diversity in the observations.

Procedure
We assembled a total of 45 apps from the 18 students who consented to the study,
with nine students in the non-CS cohort and nine in the CS cohort. Of these apps, 21
were from the non-CS cohort and 24 from the CS cohort.
Each app was rated along the 14 dimensions from the rubric, mostly by inspecting
code and sometimes by running the app. In the first analysis presented, each app received
a total score by summing its ratings across the dimensions. The minimum possible total
score is 14 and the maximum is 45. Every project was rated by two of the paper’s authors, and
discrepancies were discussed, with final ratings agreed upon by both. This procedure is
similar to other CT rubric studies [9].
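
As a rough illustration of the scoring procedure described above, the following sketch
sums one app’s 14 dimension ratings into a raw total and checks it against the 14-to-45
range. The data layout and function name are our own assumptions for illustration, not
part of the published rubric or its tooling.

    # Minimal sketch of the total-score computation, assuming an app's ratings
    # are stored as a dict keyed by dimension number (1-14).

    def total_mct_score(ratings):
        """Sum the 14 per-dimension ratings into one raw MCT score."""
        assert len(ratings) == 14, "every dimension must be rated"
        total = sum(ratings.values())
        # Per the paper, totals fall between 14 (all 1s) and 45 (all maximums).
        assert 14 <= total <= 45, "total outside the possible range"
        return total

    # Example: an app rated 1 point on every dimension scores the minimum, 14.
    example = {d: 1 for d in range(1, 15)}
    print(total_mct_score(example))  # -> 14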

Figure 1: Average of MCT in students’ assignments, by cohort. Error bars are standard deviations.

RESULTS
The overall scoring of student work, per cohort, is presented in Figure 1. The diagram
shows average MCT scores for each of the two cohorts on a per-assignment basis. The
raw scores are simply the sum of the points from rating the app along the 14 dimensions.


Each of the six points in the graph represents the average rating of 7 or 8 apps. The
average of the students’ work is shown to be approximately in the range of 22 to 30.
This result shows that the CS cohort created apps with more computational
sophistication than did the non-CS cohort. This is not surprising; this result primarily
demonstrates that the rubric is capable of making this discrimination.
The results also show that the apps of the CS cohort increased in computational
sophistication over the span of the course, while the apps of the non-CS cohort mostly did
not.
Figure 1 also includes error bars showing the standard deviation of each average.
The work of the CS cohort showed more variation than did the non-CS cohort.
We further analyzed the student work on a per-assignment basis, showing the
strength of the 14 individual CT and M-CT rubric components exercised by each
assignment. This analysis is shown in Figure 2. Here, each dimension of the rubric is
represented by a column, and is normalized along a scale from 0 to 1.
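
The paper does not spell out the normalization formula; one plausible reading is that each
column is the cohort’s average rating on that dimension, rescaled to [0, 1] by the
dimension’s own point range. The sketch below follows that assumption; the function name
and the 4-point maximum in the example are illustrative only.

    # Hedged sketch of one possible column normalization for Figure 2:
    # average a cohort's ratings on one dimension, then map the average
    # from the dimension's [1, max_points] scale onto [0, 1].

    def normalized_column(ratings, max_points):
        """Map an average dimension rating onto a 0-1 scale."""
        avg = sum(ratings) / len(ratings)
        return (avg - 1) / (max_points - 1)

    # Example: ratings of 2, 3, and 4 on a 4-point dimension average 3.0,
    # which normalizes to about 0.67.
    print(normalized_column([2, 3, 4], max_points=4))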

Figure 2: Strength of M-CT concepts in students’ work. Columns are rubric dimensions.
Rows are assignments, comparing both Non-CS and CS cohorts. Darker represents a stronger
presence of a particular property. Note the similarity in general shape between the two
cohorts (top and bottom), but with some interesting differences.
This figure shows a generally similar pattern of exploration through the MCT
concepts between the Non-CS and CS cohorts. For each cohort (separated into top and
bottom groups of rows), moving downward in the table represents moving forward in
time across successive assignments: A3, A5, and A7. These projects were selected because
they allowed for creative freedom and exploration of the concepts. The columns represent
dimensions of the rubric as measured for each project. One cursory observation is that
Dimensions 2 (Naming) and 8 (Conditionals) showed growth over time for both cohorts,
as shown in the progression from light to dark down their respective columns.
The differences between the two cohorts are visually apparent in Figure 2, and we
confirmed them with an ANOVA analysis. The ANOVA yielded statistically significant
differences between the cohorts as follows: Dimension 4 (procedural abstraction), p <
0.0005; Dimension 6 (component abstraction), p < 0.001; Dimension 7 (loops), p < 0.001.
These dimensions represent three of the most sophisticated MCT ideas. Unsurprisingly,
the non-CS cohort engaged with only one of these three. This is further
demonstration of the rubric’s sensitivity, and shows promise that it can be used to isolate
differences of MCT among large cohorts.
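
Readers who want to reproduce this style of per-dimension comparison could run a one-way
ANOVA on each dimension’s ratings for the two cohorts, for example with SciPy as sketched
below. The rating values shown are invented for illustration and are not the study’s data.

    # Hedged sketch: one-way ANOVA comparing the two cohorts on a single
    # rubric dimension, as in the per-dimension analysis described above.
    from scipy.stats import f_oneway

    # Hypothetical per-app ratings on one dimension (not the study's data).
    cs_cohort = [3, 4, 3, 2, 4, 3, 4, 3]
    non_cs_cohort = [1, 2, 1, 1, 2, 1, 2]

    f_stat, p_value = f_oneway(cs_cohort, non_cs_cohort)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")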

DISCUSSION
The lack of continued MCT growth over time for the non-CS cohort in Figure 1 has
an explanation: that cohort invested early in the course in learning the tools to convey
their ideas, and then progressed artistically within those tools for the remainder of the
course. This instrument does not capture artistic progress, so that shift in attention is not
visible in these data.
Several observations can be drawn from Figure 2. First, the three assignments
differentially caused students to exercise specific types of CT and M-CT. For instance,
A5 involved students in exploring data persistence (Dimension 10), while A7 involved
them in lists (Dimension 9). This was true for both the non-CS and CS cohorts.
The CS cohort clearly had stronger presence of more sophisticated MCT overall.
This was an expected result, as they entered the course having already completed much
of the traditional CS curriculum, and it demonstrates the rubric’s efficacy for detecting
such differences.
There were three rubric dimensions, however, where the non-CS cohort was
stronger. These were 5 (global variables), 11 (data sharing), and 14 (location awareness).
This indicates that the non-CS cohort favored apps that relied on these features,
developing similar ideas in greater depth rather than exploring more broadly. This is not
necessarily a drawback.
Non-CS students never utilized Dimensions 6 or 7, which represent the most
advanced features of App Inventor: component abstraction and loops. (The event-driven
nature of App Inventor makes traditional looping structures unnecessary for most uses,
and canonical loops are somewhat rare.) None of this material was emphasized in course
instruction, so the students who did use it pursued that learning independently.

CONCLUSIONS
The rubric showed that two cohorts with different backgrounds created apps with
different degrees of MCT depth and breadth, including different growth rates over time.
The findings were expected, and validated the rubric’s efficacy for such aggregate
discrimination.
The breadth of the rubric’s dimensions allowed for more subtle variations to be
isolated, including the presence of specific concepts. Graphical analysis of those data,
as in Figure 2, can reveal patterns in the sample, which may be used by
teachers and students alike to find strengths and weaknesses in curriculum, learning, and
teaching.
This rubric is useful for assessment of mobile computational thinking (MCT) in any
course that uses App Inventor. We hope this paper helps to grow the conversation on
MCT and informs future instruments and pedagogy concerning MCT in other languages. As a
result of those conversations, this rubric may in the future be generalized into a
language-agnostic MCT instrument.

ACKNOWLEDGMENTS
The authors would like to thank collaborators Franklyn Turbak, David Wolber,
Ralph Morelli, Josh Sheldon, Shaileen Pokress, James DeFilippo, and Hal Abelson, and
project evaluator Lawrence Baldwin. We also would like to thank the anonymous
reviewers for their support and criticism, which resulted in improved focus of the paper.
This material is based upon work supported by the National Science Foundation under
Grant Number 1225719. All course materials are available at
appdesignf13.wiki.uml.edu. The rubric itself is available at [7].

REFERENCES
[1] H. Abelson, R. Morelli, S. Kakavouli, E. Mustafaraj, and F. Turbak. Teaching
CS0 with mobile apps using App Inventor for Android. J. Comput. Sci. Coll.,
27(6):16–18, June 2012.
[2] H. Abelson, E. Mustafaraj, F. Turbak, R. Morelli, and C. Uche. Lessons learned
from teaching App Inventor. J. Comput. Sci. Coll., 27(6):39–41, June 2012.
[3] M. Ben-Ari. Constructivism in computer science education. Journal of
Computers in Mathematics and Science Teaching, 20(1):45–73, 2001.
[4] Z. Dodds, R. Libeskind-Hadas, and E. Bush. When CS 1 is Biology 1:
Crossdisciplinary collaboration as CS context. In Proceedings of the Fifteenth
Annual Conference on Innovation and Technology in Computer Science
Education, ITiCSE ’10, pages 219–223, New York, NY, USA, 2010. ACM.
[5] S. Grover and R. Pea. Computational thinking in K-12: A review of the state of
the field. Educational Researcher, 42(1):38–43, 2013.
[6] S. C. Pokress and J. J. D. Veiga. MIT App Inventor: Enabling personal mobile
computing. arXiv:1310.2830v2, 2013.
[7] M. Sherman, F. Martin, L. Baldwin, and J. DeFilippo. Mobile CT Rubric for App
Inventor. http://nsfmobilect.wordpress.com/evaluation/. Accessed: 2014-09-01.
[8] J. M. Wing. Computational thinking. Communications of the ACM, 49(3):33–35,
2006.
[9] L. Werner, J. Denner, S. Campe, and D. C. Kawamoto. The Fairy performance
assessment: Measuring computational thinking in middle school. In Proceedings of
the 43rd ACM Technical Symposium on Computer Science Education, SIGCSE
’12, pages 215–220, 2012.
[10] D. Wolber. App Inventor Course-in-a-Box. http://www.appinventor.org/course-
in-a-box. Accessed: 2014-08-31.
