Sunday, May 10, 2009

Introduction



TECH 4102: Evaluation in Educational Technology
Fall 2009
My Portfolio
Student IDs: u062133 & u061820
Email: u062133@squ.edu.om, u061820@squ.edu.om
Course instructor: Dr. Alaa Sadik, alaasadik@squ.edu.om
Department of Instructional & Learning Technologies, College of Education,
Sultan Qaboos University

Welcome to our e-portfolio for the Evaluation in Educational Technology course, created by Issa AL-Sumri and Ahmed AL-Moqbali. Here we present all the activities, assignments and tasks we completed during the semester. We kindly invite you to view these few pages and hope you find them beneficial and interesting.

Comparative and non-comparative evaluation in educational technology

1- Comparative study:

The title of this study is Blended Learning vs. Traditional Classroom Settings: Assessing Effectiveness and Student Perceptions in an MBA Accounting Course, authored by Chen and Jones. The type of this study is learners' perception and performance. It compares students' assessments of course effectiveness and overall satisfaction in an accounting class. The participants are Master of Business Administration (MBA) students in an accounting class at a university in the northern United States. A survey was used to compare students enrolled in a traditional in-class section with another group in a "blended-learning" section, in which the primary course delivery method was online but students met in class on a limited number of occasions. The results suggest that the two delivery methods were similar in terms of final learning outcomes, but that each may be improved by incorporating aspects of the other. The study makes meaningful comparisons between two delivery methods. However, the survey was administered at only one school and involved only one MBA accounting course, so inferences cannot necessarily be made about other courses, institutions and instructors.

2- Non-comparative study:

The title of this study is Learning Strategies and Other Factors Influencing Achievement via Web Courses, authored by Shin, Ching-Chun, Tom and John. It examines how students with different learning styles functioned in World Wide Web-based courses and what factors influenced their learning. The study has three objectives. The first is to examine how students with different learning styles function and to identify the demographic characteristics of the students by learning style. The second is to identify how students' learning strategies, patterns of learning, and achievement differed in relation to their learning style. The third is to identify relationships among student learning style, learning strategies, patterns of learning, achievement and selected variables in Web-based courses, and to determine what factors influenced their learning. The study aims to answer these questions: What do we know about the way students learn through the WWW? What are the important learning factors in Web-based courses? Do student learning style, learning strategies and patterns of learning influence learning achievement? The participants were 99 students at Iowa State University taking two non-major introductory courses. An online questionnaire that included a learning strategies scale, a pattern of learning scale, and demographic questions was designed and posted on the Web. A total of 78 students also took the Group Embedded Figures Test (GEFT), a learning styles test. The results indicated that learning styles, patterns of learning toward Web-based instruction, and student characteristics did not have an effect on Web-based learning achievement. The study used two evaluation instruments, but some students did not respond to them.

PowerPoint presentation link:
http://www.slideshare.net/61820_62133/comparative-and-noncomparative-1412738

Evaluation of online learning




Evaluation of students' participation in discussion forums in a Programming course

We evaluated students' participation in a Programming course offered by the ILT department at SQU. In the Programming course, seven discussion topics were suggested by the students. Records from the course showed that students responded to the discussion boards while studying, sending a total of 115 messages. Assuming that every student in the course (26 students) should participate by sending at least one message to each discussion board, this number (115) represents about 63.2% of the predicted total number of messages (182) that should be sent to the discussion boards. In other words, the average number of messages sent by each student throughout the course was 4.42, compared with the ideal total of 7. You will find more detail in this link.
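As a quick check on these figures, here is a minimal Python sketch of the participation calculation; the message count, class size and number of boards come from the course records described above, while the variable names are ours.

```python
# Participation in the Programming course discussion boards (figures from the course records above).
messages_sent = 115   # total messages posted by the class
students = 26         # students enrolled in the Programming course
boards = 7            # discussion topics suggested by the students

expected_messages = students * boards           # 182 if every student posts once per board
participation_rate = messages_sent / expected_messages
avg_per_student = messages_sent / students      # compared with the ideal of 7 per student

print(f"Expected messages: {expected_messages}")                  # 182
print(f"Participation rate: {participation_rate:.1%}")            # ~63.2%
print(f"Average messages per student: {avg_per_student:.2f}")     # ~4.42
```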

Evaluation of CAI

Description of the survey:
This survey was designed to evaluate the technical and pedagogical features of a CAI program through an online questionnaire. Experts respond to each item on a five-point scale: strongly disagree, disagree, agree, strongly agree, and does not apply.

Survey link:
http://www.surveyconsole.com/console/TakeSurvey?id=569738
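To illustrate how responses on such a scale might be summarized, here is a minimal Python sketch that tallies expert ratings per item; the item names and responses are invented for the example and are not taken from the actual survey.

```python
from collections import Counter

# The five response options used in the CAI evaluation survey described above.
SCALE = ["strongly disagree", "disagree", "agree", "strongly agree", "does not apply"]

# Hypothetical expert responses per item (item wording invented for illustration).
responses = {
    "Screen design is clear and consistent": ["agree", "strongly agree", "agree"],
    "Feedback is immediate and informative": ["disagree", "agree", "does not apply"],
}

for item, answers in responses.items():
    counts = Counter(answers)
    # Report how many experts chose each option, in scale order.
    summary = ", ".join(f"{option}: {counts.get(option, 0)}" for option in SCALE)
    print(f"{item} -> {summary}")
```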

Evaluation strategies

Evaluation Strategies:
Peer Observation

We will use observation as an instrument and an evaluation strategy to evaluate the educational project, which is a Photoshop course. We chose observation because it offers critical insights into an instructor's performance, complementing student ratings and other forms of evaluation to contribute to a fuller and more accurate representation of overall teaching quality. It is appropriate for judging specific dimensions of the quality and effectiveness of the program, including achievement of the goals, the content, design and organization of the course, the methods and materials used in delivery, and the evaluation of student work. The observation will be carried out for both summative and formative purposes. For summative evaluation, prior consensus will be reached about what constitutes quality teaching within the discipline, what the observers will be looking for, and the process for carrying out and recording the observations. A post-observation meeting with students and teachers provides an opportunity for constructive feedback and assistance in developing a plan for improvement. We will use a checklist to evaluate the project; it will cover and measure different aspects of the project as users navigate through it.
We searched for a ready-made observation form; follow the link below to see the observation checklist:
http://www.slideshare.net/61820_62133/peer-observation-form
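For illustration only, here is a minimal sketch of how an observation checklist like the one linked above could be recorded and summarized; the criteria and their marks are hypothetical, not items taken from the actual form.

```python
# Hypothetical peer-observation checklist for the Photoshop course project.
# Each criterion is marked as observed (True) or not observed (False) during the session.
checklist = {
    "Course goals are stated clearly": True,
    "Content is organized logically": True,
    "Delivery methods and materials suit the learners": False,
    "Student work is evaluated against the stated goals": True,
}

observed = sum(checklist.values())
print(f"Criteria observed: {observed}/{len(checklist)}")
for criterion, met in checklist.items():
    print(f"[{'x' if met else ' '}] {criterion}")
```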

Models of evaluation in educational technology

We applied the ACTIONS model of evaluation, which we explored in this course, to evaluate an educational technology project. We selected a project (a tourism Web site) and evaluated it by applying the ACTIONS model. We uploaded a PowerPoint presentation that shows how we evaluated the project. To see the presentation, please follow this link:
http://www.slideshare.net/61820_62133/models-of-evaluation-in-educational-technology-1412327
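As a rough illustration of what applying the ACTIONS model can look like in practice, the sketch below records a rating and a short note against each of the model's dimensions (Access, Costs, Teaching and learning, Interactivity and user-friendliness, Organizational issues, Novelty, Speed); the ratings and comments are placeholders, not our actual evaluation of the tourism Web site.

```python
# ACTIONS model dimensions: one rating (1-5) and a short note per dimension.
# The ratings and notes below are placeholders for illustration only.
actions_rubric = {
    "Access": (4, "Site is reachable from any browser without registration."),
    "Costs": (5, "Free to use; no licensing required."),
    "Teaching and learning": (3, "Content supports the stated learning goals."),
    "Interactivity and user-friendliness": (3, "Navigation is simple but feedback is limited."),
    "Organizational issues": (4, "Little institutional support needed to adopt it."),
    "Novelty": (2, "Uses familiar Web technologies only."),
    "Speed": (4, "Pages and media load quickly."),
}

for dimension, (rating, note) in actions_rubric.items():
    print(f"{dimension}: {rating}/5 - {note}")
```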

Levels and techniques of evaluation in educational technology

Educational Technology at Omani Higher Education Institutions
Dr Ali Sharaf Al Musawi
Dr Hamoud Nasser Al Hashmi


Purpose:
The purpose of this research was to address the current and prospective views on educational technology (ET) in order to discover the difficulties and develop its utilization in Omani higher education.

Levels of evaluation:
The level of this evaluation study is the project level, because it was conducted at the level of higher education institutions.

Techniques of evaluation:
- There were two questionnaires used to carry out this study: the faculty members' questionnaire, and the technical/administrative staff questionnaire. They were developed by the researchers by generating a list of potential issues of ET derived from the literature and national level standardized surveys.
- In-depth interviews were conducted to verify some areas of the effectiveness of instructional software/equipment use brought up by faculty members in the questionnaire.
Reference link:

Evaluation in educational technology

(1) Improving pre-service teachers' visual literacy through flickr
Dr. Alaa Sadik

This summary of the study focuses on the evaluation methodology used, in terms of purpose and instruments. The title of this study is Improving Pre-service Teachers' Visual Literacy through Flickr. The author is Dr. Alaa Sadik, and it was published in the World Conference on Educational Sciences. The purpose of the study was to investigate the influence of Flickr, an online photo management and sharing application, on pre-service teachers' visual literacy skills when implemented in a technology course. Four instruments were used in this study. The first is online tracking and observation, to assess the extent to which pre-service teachers engaged in real decoding and encoding activities through participation in Flickr. The second is a visual literacy test, to assess whether pre-service teachers' visual literacy skills were enhanced through the interpretation, development and sharing of photographs. Finally, interviews were conducted to solicit pre-service teachers' concerns and views regarding the use of Flickr in enhancing their visual literacy skills.


(2) WebCT as an E-Learning Tool: A Study of Technology Students’ Perceptions
Lesta A. Burgess

This summary of the study focuses on the evaluation of specific technology features. The title of this study is WebCT as an E-Learning Tool: A Study of Technology Students' Perceptions, authored by Lesta A. Burgess. The study evaluated WebCT as a communication tool (bulletin board, chat room, private e-mail), an instructional tool (glossary, references, self-test) and a management tool (grading, tracking student interaction, monitoring class progress). The questions sought to determine whether the students perceived that they had used WebCT effectively, what elements of WebCT they elected to use, what difficulties they might have encountered, and their overall opinions regarding this e-learning tool. The evaluation instrument has three sections: student use of and familiarity with WebCT, interest in using e-learning tools in the future, and demographic information. The validity and reliability of the instrument were ensured by experts in related fields as well as through a pilot test: the questionnaire was sent to five university faculty for validation, and before the survey was implemented a pilot test was administered to fifteen students. The results indicated that for a majority of students (94.7%), this was the first time they had used WebCT or any courseware tool. A majority of students (52.6%) reported no technical problems with the software, and about 78.6% found WebCT useful for their courses.

Presentation link:

http://www.slideshare.net/61820_62133/evaluation-in-educational-technology-1412343