DETC2004-57478
INTRODUCTION
There have been vast improvements in rapid product development since the 1980s, mostly focused on searching for ways to shorten the development process. Among these, the advancement in design software is very significant. Accordingly, when preparing engineering students for similar responsibilities, integrating a solid modeler into design teaching is necessary. However, this is not a trivial task, and several questions associated with the integration need to be answered. For example: (1) Does the software have educational materials? (2) Are the educational materials adequate? (3) Is it easy and quick for students to learn? (4) Can the faculty gain the necessary knowledge and expertise to teach it in a short time? And (5) does learning the software help students learn another solid modeler?

Despite the apparent trend of adopting solid modelers in industrial and educational institutions, selecting the solid modeler best suited to the task at hand is not an easy decision. One needs to consider several issues when making this decision.

LITERATURE REVIEW

Solid modeling was developed in the mid-1970s as a response to the need for informational completeness in geometric modeling, and can be defined as a consistent set of principles for computer modeling of three-dimensional solids (Shapiro, 2001). It uses the mathematical abstraction of real artifacts, which are transferred into computer representations (Requicha, 1980; Requicha and Voelcker, 1981). Solid modeling was intended to be a universal technology for developing engineering languages that must maintain integrity in (1) validity of the object, (2) unambiguous representation, and (3) support for any and all geometric queries that may be asked of the corresponding physical object (Shapiro, 2001).

In the 1980s, the introduction of icons and small pictures, which incorporated the desktop mouse as an input mechanism (Rheingold, 1991), changed human-computer interaction (HCI). The implementation of this GUI takes advantage of the
both quiz problems correctly. This is reflected in the performance data for both samples mostly being 1.00 out of 1.00. However, while the differences in sample means were not statistically significant, the time it took users to complete the quizzes had a spread for both solid modelers. Therefore, variance tests on quiz completion times were conducted using Minitab™.

When the hypothesis test for equality of variances between the two samples for quiz 1 was completed using an F-test, the sample variances were found to be significantly different. The test was conducted at the 95% confidence level. Figure 3 shows the Minitab output, with a p-value of 0.031.

[Figure 3. Variance Test Results for TimeQ1: 95% confidence intervals for sigmas and boxplots of raw data for TimeQ1S1 and TimeQ1S2; F-test statistic 2.573, p-value 0.031.]

[Figure 4. Normality Test for TimeQ1S1 Data: normal probability plot; average 28.9130, StDev 15.4093, N = 23; Anderson-Darling A-squared 0.817, p-value 0.029.]

[Figure 5. Normality Test for TimeQ1S2 Data: normal probability plot; average 27.1304, StDev 9.60731, N = 23; Anderson-Darling A-squared 0.632, p-value 0.087.]

As can be seen in Figures 4 and 5, at a 90% confidence interval both data sets follow normal distributions (p-value for TimeQ1S1 = 0.029, p-value for TimeQ1S2 = 0.087). Thus the significant difference in variances cannot be dismissed.
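The quiz-1 variance comparison can be reproduced from the summary statistics the paper reports. The following is a minimal sketch, not part of the original study; it assumes Minitab's two-sided F-test convention (doubling the smaller tail) and uses SciPy:

```python
from scipy import stats

# Summary statistics reported for quiz 1 (23 users per solid modeler)
s1, n1 = 15.4093, 23   # TimeQ1S1 standard deviation (Figure 4)
s2, n2 = 9.60731, 23   # TimeQ1S2 standard deviation (Figure 5)

# F statistic: ratio of the two sample variances
F = s1**2 / s2**2      # ~2.573, matching the test statistic in Figure 3

# Two-sided p-value under an F(n1-1, n2-1) distribution,
# assuming Minitab's doubled-smaller-tail convention
p = 2 * min(stats.f.sf(F, n1 - 1, n2 - 1),
            stats.f.cdf(F, n1 - 1, n2 - 1))

print(f"F = {F:.3f}, p = {p:.3f}")   # F = 2.573, p = 0.031, as in Figure 3
```

Since F > 1 here, the doubled upper tail drives the p-value; with p below 0.05, equality of variances is rejected at the 95% confidence level, matching the conclusion above.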
The significant difference in sample variances indicates more homogeneous user performance data (similar completion times) in one case (software 2), in comparison to a more heterogeneous set (software 1). This might be a sign of users' differing interpretations of GUI elements and hence related performance differences. For example, in this
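The Anderson-Darling normality screening behind Figures 4 and 5 can be illustrated in code as well. The raw completion times are not reproduced in the paper, so the sketch below uses synthetic samples whose size and scale loosely mirror the reported N = 23 and averages; note that SciPy's `anderson` reports critical values rather than the p-values Minitab prints:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for the completion-time samples (raw data are not in
# the paper): one roughly normal sample and one clearly skewed sample.
normal_times = rng.normal(loc=29.0, scale=9.6, size=23)
skewed_times = rng.exponential(scale=29.0, size=23)

for name, sample in [("normal-like", normal_times), ("skewed", skewed_times)]:
    result = stats.anderson(sample, dist="norm")
    # significance_level is [15, 10, 5, 2.5, 1] (%); index 2 is the 5% level
    crit_5 = result.critical_values[2]
    print(f"{name}: A-squared = {result.statistic:.3f}, "
          f"5% critical value = {crit_5:.3f}")
```

A sample is flagged as non-normal at a given level when the A-squared statistic exceeds the corresponding critical value; Minitab instead converts A-squared into the p-values quoted above.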
When a hypothesis test for equality of variances between the two samples was completed for the second quiz problem using an F-test, the sample variances were not found to be significantly different. The test was conducted at the 95% confidence level. Figure 6 shows the Minitab output, with a p-value of 0.430.

[Figure 6. Variance Test Results for TimeQ2: 95% confidence intervals for sigmas and boxplots of raw data for TimeQ2S1 and TimeQ2S2; F-test statistic 0.711, p-value 0.430.]

Based on the statistical analysis presented above, no significant difference in sample means for time to complete the problems arose for either quiz, and no significant difference in variances was found for quiz 2; however, due to the significant difference in variance in time to complete the quiz 1 problem, software 2 is recommended for integration into the engineering design curriculum. This is because, based on the multi-user experimental study presented above, software 2 was deemed to yield more uniform user performance in terms of completion time when compared to software 1.

Step 6) Repeat steps 1-5 at regular intervals. Because the application took approximately two years, the interval for repeating the SMECC cycle was determined to be two years. The new cycle was started in the Spring 2004 semester.

DISCUSSION: ARE PERFORMANCE DATA ENOUGH TO MAKE THE FINAL JUDGMENT?

SMECC primarily takes into account user performance data for predetermined functions and criteria when comparing solid modelers, and it overlooks any preference issues that may arise. Building on the study above, additional experimentation was done to address preference issues. Preference in this context was defined as the conscious, final buy and/or use judgment related to solid modeler selection.

The additional experimentation, which included two mechanical engineering students (junior standing) and two engineering design faculty, involved the two solid modelers of the previous experiment. The students had no solid modeling experience and only limited 2-D CAD experience. The faculty had experience in teaching engineering design using both software packages.

An extensive search in Compendex resulted in direct hits on keywords in connection with icons that included: design software, 3-D design, 2-D design, and solid modeling. Articles on topics such as interface evaluation based on

The set of outcomes of this study, such as the compiled set of solid modeler comparison criteria and the methodology proposed and applied for solid modeling software comparison (SMECC), are expected to aid companies and design educators