Sunday, July 25, 2004

1. Introduction to VLEs in General:

Here we explore:

  • What is a VLE?
  • Pedagogical aspects of VLEs
  • Brief VLE Overview



What is a Virtual Learning Environment (VLE)?

Although there is some confusion in the UK about the definition of Virtual Learning Environments, BECTA[1] states that VLEs are generally a combination of some or all of the following features: 

1. Communication tools such as email, bulletin boards and chat rooms 

2. Collaboration tools such as online forums, intranets, electronic diaries and calendars 

3. Tools to create online content and courses 

4. Online assessment and marking 

5. Integration with school management information systems 

6. Controlled access to curriculum resources

7. Student access to content and communications beyond the school
 
According to Dillenbourg (2000)[2], a VLE is not synonymous with any educational web site, 3D/virtual reality technology, or notion of a virtual campus. At a minimum, “A virtual learning environment is a designed information space” and can consist of very simple text-based tools.

My own definition would describe a VLE as a learning management software system that allows access to computer-based course content. This is not in conflict with either BECTA’s or Dillenbourg’s definitions. I would add that VLEs can be used to manage learning content and/or launch computer-assisted assessment. Ideally, the learner’s PC will be connected to an intranet or the Internet to access media-rich content and to provide online support via email, chat, and bulletin boards. This concurs with BECTA. 

Most VLEs are not intended to entirely reproduce or replace the classroom environment but exploit technology to provide learners with new tools to facilitate learning. VLEs aim to accommodate a wider range of learning styles and goals, to encourage collaborative and resource-based learning and to allow greater sharing and re-use of resources.

[1] BECTA (2004) http://www.becta.org.uk/research/research.cfm?section=1&id=545
 
[2] Dillenbourg, P. (2000) EUN Conference 2000, ‘Learning In the New Millennium: Building New Education Strategies for Schools’, Workshop on Virtual Learning Environments.
http://tecfa.unige.ch/tecfa/publicat/dil-papers-2/Dil.7.5.18.pdf  


Pedagogical aspects of VLEs

Although later versions of VLEs are usually designed with a significant number of tools and functions, including student-tracking and, in some cases, timed content delivery and content-tracking, the pedagogical aspects are not usually made explicit. A range of evaluative strategies has been developed by educators to tease out these implicit system characteristics; these strategies can therefore be employed to select systems that reflect a preferred pedagogical model (Britain & Liber, 1999). In our research, it is not the VLE that is the focus of enquiry but a course called Ikarus, which uses a Moodle-based VLE as its platform.

Globally, an enormous number of VLEs have emerged over the last few years, and their relative functionality is tabulated and compared by Edutools[1]. As authors, we cannot guarantee the completeness of this list, nor indeed its accuracy; it can, however, be used to get a feel for how many different VLEs currently exist. To help choose between VLEs, educators have conducted a number of evaluations of their effectiveness. One of the dangers with this approach is that by focussing upon the VLE as an entity rather than as a ‘place for learning’, researchers may be predisposed to look for the overall benefits from the HEI’s perspective; for example, functions such as class management do not necessarily have any direct bearing upon learning.


A checklist approach which focuses upon the VLE as an entity tends to overlook the specific pedagogical needs of learners according to Stiles (2000)[2]: “The author contends that over-attention on the ‘features’ provided by VLEs can lead to a ‘check-list’ approach to VLE selection, which, coupled with inattention to the educational issues, can result in mere transposition of traditional teaching approaches to the computer, and result in a poor learning experience which is ineffective.”


Quite early on in our project discussions, we considered that one possible alternative would be to use Chickering & Gamson’s 7 Principles in conjunction with checklists that were readily available. By shifting the locus of enquiry from the HEI to the learner, we felt the benefits should be demonstrable through learner-performance, learner-confidence or learner-satisfaction. Some of these facets are clearly much more difficult for us as researchers to measure remotely; without access to all possible sources of data, we decided that the only way we could appreciate the learner’s perspective was by putting ourselves in the learner’s position as we trawled through their transcripts, logs and questionnaires.


We sought to avoid evaluating a course by putting a tick in a box. Instead, we wished to assimilate the rich advice within the literature with regard to VLEs and use this to help determine qualitative ways of evaluating a course. To achieve this, we developed criteria that focus upon aspects that are learner-enabling rather than feature-laden. In order to develop suitable criteria, we looked closely at embedding the precepts of Chickering & Gamson’s 7 Principles into our enquiry.

 
Firstly however, we looked at the literature concerning VLEs to see what could be learnt from any existing evaluation models. A very brief overview of VLEs themselves is considered useful to put this into context.

[1] EduTools (2004) http://www.edutools.info/course/compare/all.jsp EduTools is an open resource created to help educators/administrators research and evaluate a wide range of e-learning products, services, and policies. Funded by the Hewlett Foundation in the USA to provide an independent, objective source of reviewed information, EduTools provides comparisons, reviews, analyses, and automated decision-making tools covering Course Management Systems, Student Services and e-Learning Policies.


[2] Stiles, M. (2000) Effective Learning & the Virtual Learning Environment, Conference Paper to UNIS 2000. http://www.staffs.ac.uk/COSE/cose10/posnan.html

Brief VLE Overview

Despite being provided with a range of ‘options’, the inherent design of many proprietary VLEs inevitably limits the choice of approaches for teaching and learning.

“It is also argued that this approach promotes a degree of pedagogical inflexibility. In this situation, it is argued that the evolution of international specifications such as IMS[1] and SCORM[2] are likely to reflect market power rather than educational needs” (Konrad, 2003)[3].

In contrast, a Moodle[4] VLE can prove useful when faculty wish to have greater influence over the particular pedagogical model employed. Notwithstanding the implications for staff development and the greater demands upon staff time, a Moodle installation can be modified to a greater extent than a proprietary VLE, making it suitable for a particular educational context.
 
There are also arguments for assuming that proprietary VLEs are inherently more programme- and/or content-centric whereas Moodle is more student-centred. This is especially noticeable in VLEs that have remnants of corporate training (predominant in the USA) in their pedigree, which represents a largely content-transmission approach to training delivery.

The IMS and SCORM attributes of content-sharing are ultimately expected to be embodied in Moodle.
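To make the footnoted standards concrete: a SCORM package declares its sharable content objects in an imsmanifest.xml file. The sketch below is a minimal, hypothetical SCORM 1.2 manifest; all identifiers, titles and filenames are invented for illustration and are not taken from Ikarus or Moodle:

```xml
<!-- Minimal SCORM 1.2 manifest sketch (hypothetical example) -->
<manifest identifier="sample-course" version="1.1"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="org1">
    <!-- The organization defines the course structure presented to the learner -->
    <organization identifier="org1">
      <title>Sample Course</title>
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- scormtype="sco" marks a launchable, trackable content object -->
    <resource identifier="res1" type="webcontent"
              adlcp:scormtype="sco" href="lesson1.html">
      <file href="lesson1.html"/>
    </resource>
  </resources>
</manifest>
```

Because the manifest separates course structure (organizations) from content files (resources), the same content objects can, in principle, be re-packaged into different course structures and delivered by any compliant system.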


[1] Instructional Management Systems (IMS) develops specifications that apply to course management systems, learning servers, CBT systems and integrated learning systems. IMS is concerned with standards for learning servers, learning content and the enterprise integration of these capabilities. http://www.imsglobal.org

[2] The Sharable Content Object Reference Model (SCORM) provides a technical architecture for small, reusable learning objects to be shared across multiple learning delivery environments. The SCORM specification describes an architecture built on specifications designed to share modular course content between compliant distance learning courses using different learning delivery systems. http://www.cetis.ac.uk/encyclopedia/entries/20011129121727/view 
 
[3] Konrad, J. (2003) Review of Educational Research on VLEs – Implications for Improvement of Teaching & Learning & Access to Formal Learning in Europe, EDUCOL http://www.leeds.ac.uk/educol/documents/00003192.htm
 
[4] Moodle was originally an acronym for Modular Object-Oriented Dynamic Learning Environment http://moodle.org
 

2. Background on evaluating courses designed on VLEs:

We explore several different models for evaluating VLEs:

  • Britain & Liber
  • Mason
  • Laurillard
  • Stafford Beer
  • Helen Beetham

and we decide to use the following models for evaluation:

  • Chickering & Gamson’s ‘Seven Principles for Good Practice in Undergraduate Education’
  • Gilly Salmon's 5-step model for eLearning

Britain & Liber – evaluation framework

Having stressed their “overriding concern” to match pedagogical orientation of a system to its intended use in teaching and learning, Britain & Liber (1999)[1] suggest one way of constructing an evaluation framework is to determine how well learning resources are incorporated to facilitate student-centred approaches to learning.

Britain & Liber (1999) acknowledged that there are various ways VLEs might facilitate student-centred approaches, but observed that there was, as yet, no established evaluation methodology from a pedagogical perspective with which to assess different systems, adding that “most discussions and evaluative reviews of VLEs to date have tended to concentrate on the features, technical details and pricing of different systems”.


[1] Britain, S. & Liber, O. (1999) A Framework for Pedagogical Evaluation of VLEs, JTAP. http://www.jisc.ac.uk/uploaded_documents/jtap-041.doc

Mason – evaluation framework

With regard to learning resources, Mason (1998) suggested three basic models for differentiating existing online courses:

Content + Support Model. Akin to early eLearning (20% of resources online).

Wrap-around Model. Similar to blended learning (50% of resources online).

Integrated Model. This is a resource-based model where the course is defined by collaborative activities, discussions and joint assignments. Course contents are dynamic and are determined largely by individual needs and group activities. Resources are contributed by participants or tutors as the course develops.

Laurillard – evaluation framework

Various evaluative frameworks have been suggested for VLEs. A Conversational Framework was developed as an educational model and applied to the use of learning technology in higher education by Laurillard (1993).
 
The Conversational Model has its roots in Gordon Pask’s Conversation Theory and was chosen by Britain & Liber (1999) because Laurillard’s seminal work, ‘Rethinking University Teaching’, led the way in exploring and clarifying ways in which learning technology could be employed to improve elements of teaching and learning in HE. According to Laurillard, conversations are “the conscious processes accessible to the learner to consider and modify” (Laurillard, 1993, p. 102). She also considered ways in which ICT supports and enhances teaching and learning associated with the four ‘thinking domains’ below:

  • Discursive
  • Adaptive
  • Interactive
  • Reflective

Stafford Beer – evaluation framework

An alternative to the Conversational Framework, known as the cybernetic or organisational model, is drawn from the Viable Systems Model for modelling organisational systems proposed by Stafford Beer (1981). This model is useful to understand because, from a change management perspective, when one decides to change one element in a system (such as the teaching and learning process, for example by introducing new software), it is necessary to consider the impact on the other elements of the system.
 
Liber (1998) suggested that this organisational systems approach may be applicable in a pedagogical context; hence the Viable Systems Model is used by Liber as a way of stratifying the different communication channels into distinct spheres known as:

  • Resource negotiation
  • Coordination
  • Monitoring
  • Individualisation
  • Self-organisation
  • Adaptation


Comparing evaluation models

One of the primary differences between the two models (Conversational vs. the organisational model) described is that the conversational model deals primarily with student-teacher interactions. This form of interaction encompasses a large part of the VLE functionality, but ignores peer-group functionality and excludes tools for allowing the teacher to manage a number of students.

These aspects of VLE functionality were therefore evaluated by Britain & Liber (1999) using the Viable Systems Model.

Evaluation models evolve over time; models that were predominant at the birth of the first VLE centred on software evaluation. As time passes, researchers change their methods in the light of further research in order to advance their own discovery. Hence, whilst Britain & Liber (1999) presented separate sets of questions based upon Laurillard’s Conversational Framework and the cybernetic Viable System Model, by 2004 they had decided that this was “too cumbersome”. In their second VLE report they combined the two models into one set of questions (Britain & Liber, 2004[1]) under three main headings:

The module
The student level
The programme level

[1] Britain, S. & Liber, O. (2004) A Framework for Pedagogical Evaluation of eLearning Environments, JISC. http://www.jisc.ac.uk/uploaded_documents/VLEFullReport08.doc  


Deciding Research Questions

As novice researchers, we wanted to investigate how any previous focus of VLE enquiry might best be transferred to evaluating courses. Having turned to the literature, we embarked upon an approach of building our research criteria from first principles. Dempster (2004)[1] suggests that, for individual innovators, a pragmatic approach is to identify what you want to know about your use of e-learning, its impact on students’ learning and their experience of learning. The next step is to formulate the key questions that need answering. Questions should relate back to your teaching goals and the overall context of learning while taking account of others who might learn from your experience. “If you don't have a question, you don't know what to do (to observe or measure); if you do, then that tells you how to design the study” (Draper, 1996).

This then was how our basic questions for the evaluation of Ikarus were developed. We were interested to know to what extent Ikarus provided a learning place:

a.    as a meeting place to communicate,
b.    to share new ideas,
c.    to experience learning.

It could be argued that the above questions are too broad. To narrow and deepen the focus, we decided to explore how well Ikarus might meet the key tenets of Chickering & Gamson’s ‘Seven Principles for Good Practice in Undergraduate Education’[2].

The 7 Principles were developed as a means of evaluating undergraduate education and, although they were first published almost twenty years ago, our rationale was that learners’ needs in HEIs remain consistent today. Chickering & Gamson’s 7 Principles are below:

1. Encourages contacts between students and faculty.
2. Develops reciprocity and cooperation among students.
3. Uses active learning techniques.
4. Gives prompt feedback.
5. Emphasizes time on task.
6. Communicates high expectations.
7. Respects diverse talents and ways of learning.
 
Our wish to conduct deep and highly-focussed qualitative research had been a recurring theme throughout WS3. For example, ways of framing a question that would meet the requirements of Principle 2 above could be:

“In what ways did the design of Ikarus succeed in constructing a shared understanding among individual learners within the learning community?”

“In what aspects did learners feel that their learning was being facilitated by their co-learners?”

“In what aspects did learners feel that expectations were being met?”

“In what ways did the design fail to meet the aspirations of the participant learner?”

“To what extent can supportive messages within bulletin boards encourage further discussion & deepen reflective learning?”
 
Further questions were drawn from current thinking on evaluating LAMS[3]. Whilst LAMS is not directly relevant to VLEs, some cross-fertilisation was a possibility; hence current discussion about evaluating LAMS was considered a viable spring-board for generating questions of enquiry that could be used for Ikarus. As a consultant to the JISC e-Learning and Pedagogies Programme, Beetham (2004)[4] suggests the following questions:

Activity - we know this fits with the current theoretical literature on how people learn, but how far is the current focus on 'activity' rhetorical? i.e. what real differences do the new tools and affordances make to learners, as compared with tools that appear to focus on learning 'objects'? What notion of 'activity' do these tools and specifications encode? Does this fit with the notion of 'activity' held by learners, teachers, and educational theorists?

Sequence - to what extent does sequence or workflow uniquely define a learning activity? What other factors need to be taken into account? When and why is sequence highly significant to learning outcomes?

Approach/subject - what, if any, is the relationship between the types of activity and sequence chosen and the pedagogical approach (and/or discipline/subject context)? Are particular sequences or activities more effective in certain pedagogical and/or subject contexts?

Interactivity - does the focus on activity lead to more peer interaction? More equitable contributions from learners in groups? Higher levels of participation and motivation?

Sharing and re-use - How re-usable are 're-usable' activities and/or sequences in practice? What contextualising information is necessary for effective re-use? What is it that is actually re-used (i.e. what is a 'learning design')? What evidence is there for re-use across subject and sector boundaries?

Designing for learning - How do practitioners currently plan, design and orchestrate learning activities? How much time is allocated for this? What skills do they need? How much discussion do they have with colleagues? What are their constraints? What software, paper-based and other tools support this activity at present? What tools could support this activity more effectively? What tools and approaches might 'disrupt' and develop this activity?

The learning design specification - What 'fit' does this offer with existing practitioner conceptions of activity, sequence, approach and design as identified above? What potential does it have for transformation of practitioner conceptions? Could this transformation be justified in terms of (a) learner outcomes (b) ease of sharing and re-use (c) development of more effective tools and teaching systems?


[1] Dempster, J. (2004) http://www2.warwick.ac.uk/services/cap/resources/eguides/evaluation/elearning/questions
 
[2] Chickering, A.W. & Gamson, Z.F. (1987) ‘Seven Principles for Good Practice in Undergraduate Education’  http://honolulu.hawaii.edu/intranet/committees/FacDevCom/guidebk/teachtip/7princip.htm   
[3] Learning Activity Management System (LAMS) is a revolutionary new tool for designing, managing and delivering online collaborative learning activities. It provides teachers with a highly intuitive visual authoring environment for creating sequences of learning activities including individual tasks, small group work and whole class activities based on both content and collaboration. http://www.lamsinternational.com/index.html
 
[4] Beetham, H. (2004) LAMS Research Questions published via LT-THEORY@jiscmail.ac.uk (21/06/04)


Evaluation framework used for Ikarus

As researchers, we were also interested to discover to what extent culture played a part, and to what extent the inherent design of the Ikarus course mitigated any overt/explicit (or less obvious/implicit) misunderstandings between students or between students and tutors.

Thinking back to our own MEd course, we were pretty quick to explore and deal with any misunderstandings, but we considered what it would be like if we were using English as a second language. We hoped that any implicit lack of rapport, evidenced by the lack of a reply (or an unsupportive reply), could lead us, as researchers, to acknowledge that the meaning conveyed within the threads was not all it should or could be. In order to discover any cultural barriers, we were interested to know if our question could be rephrased thus: “how can I discover an implicit lack of understanding, or how do I judge ineffective collaboration?”

 
All these considerations went into the melting pot that the two researchers dipped into when framing their enquiry.
 
To see how they managed to tackle their evaluations of Ikarus, you will have to read their accounts. This should be particularly revealing since one is an ex-student from the course and the other is not. Maha will research as an ‘insider’ while Ged will adopt the ‘outsider’ perspective.

When the two evaluations are completed, it will be interesting not only to see how well the Ikarus course meets learners’ needs but also whether it is possible to compare the two approaches by observing how closely any conclusions correlate.
