The overall scores for the evaluated WBLEs fell in the medium-high range on average, with a few outliers. Using the total score from the table, only two of the 12 evaluated websites earned the high score of 3.8 out of 5. Although these two websites earned high scores on the usability, content, and vividness criteria, they had low scores on the educational criterion, which dropped their overall averages below 4. It is interesting that all of the evaluated websites, except sites #9 and #10, earned very low scores in the educational value category. This one category decreased the overall score significantly in almost every case. This means that even sites with well-structured design and content did not carry an acceptable amount of educational value for users. In other words, the majority of the evaluated websites had significant weaknesses in educational content and effectiveness. According to the results, none of the 12 evaluated websites appeared to meet all of the educational criteria, and all needed improvement in delivering what they were supposed to provide as web-based learning environments for education and instruction.

The two evaluators generally scored the WBLEs within a reasonably similar range, with one evaluator giving consistently lower scores than the other. Using only two evaluators with a wider range of scoring variability would decrease the reliability of the composite scores; however, the scores given by our two evaluators provided similar assessments of each evaluated website. To increase reliability, additional evaluators could be added, and the scores on the high and low ends could be disregarded. For an initial assessment, our rating criteria appear to produce a reliable and accurate assessment.

After evaluating each of the sites identified by the group, it was clear that some of the examples were exemplary and some were average or below.
There were several reasons that accounted for these differing assessments. The superior sites possessed either a large range of learning activities that would fit many different needs, or a very well-rounded core concept with many interactive teaching tools to reinforce it. The sites that scored in the average or lower ranges were good enough to catch the evaluators' attention in the gathering stage of this project, but as they dug deeper they discovered these sites to be simplistic teaching tools with little to no learning support.
Use of the Evaluation Rubric
Evan - The evaluation rubric was very helpful in evaluating the WBLE sites that we explored in this project. The format was well set up and allowed me to break down each site and look for key components. This was helpful because as I progressed through my site evaluations, they became more systematic and I completed them more quickly.

The only issues that I encountered with the rubric during my evaluations dealt mainly with the Google Docs form. Many of the questions on the rubric were similar, and occasionally I would skip a question; the form could not be submitted until I completed it. If Google allowed more flexibility with the form, it would have been nice to add color to every other question to break it up or separate it a bit.

Looking back at the rubric and the evaluation process, I do not think that any aspects were missed. Each of the core areas that we should investigate was included, and the scale questions allowed us to rate each aspect based on our observations.

Traci - The evaluation rubric helped to focus my attention on various aspects of a website that I wouldn't normally look for. In the past, I have judged websites based on their looks and appeal. If there were too