
LEARNWARE QUALITY
BACKGROUND PAPER


Prepared by Dr. Kathryn Barker
President, FuturEd
for:

Learnware Quality Working Group
Of the "Network of Networks" initiative of the 
Education and Training Provider Network Project

October 27, 1997


TABLE OF CONTENTS

1. Purpose, Scope and Context
2. Criteria for Evaluating Learnware Quality
3. Who is Doing What… Resources for the Learnware Quality Initiative
4. Possible Next Steps

APPENDICES

Appendix 1: Working Definitions
Appendix 2: Related Topics that are Not Included in the Scope of the Study 
Appendix 3: Context for the Study 
Appendix 4: Print Evaluation Criteria 
Appendix 5: Criteria for Evaluating Internet Information 
Appendix 6: Library Selection Criteria for WWW Resources 
Appendix 7: Internet Information Resources Evaluation Sites 


1. Purpose, Scope and Context

This paper – a literature review of the current status of learnware quality assurance – has been prepared by Dr.
Kathryn Barker under contract to the Education and Training Provider Network. For purposes of this study,
learnware is computer software by which interactive education or training programs may be delivered to
individuals by electronic means, such as CD-ROM, Internet, or Intranet. Terminology is discussed more fully in
Appendix 1.0.

The purpose of the background paper was, as per the terms of reference of the Consultant’s Study, to identify
already-established initiatives which support quality assurance in electronically-provided education and training
courses. Accordingly, Dr. Barker of FuturEd, within the budget and timeline parameters, undertook to:

Scope of the Study

This background paper was intended to provide an inventory of quality assurance initiatives specific to learnware.
The focus of the inquiry is the unique interface between format (technical standards), process (utility standards)
and content (information standards) for a particular type of software.

This concept is situated at the intersection of many related concepts, for example:

All of these concepts have some relevance; however, all can be examined separately and at great length.
Quality assurance in learnware touches on all of these, but the scope of this study must be limited. Therefore, a lengthy discussion of topics that are directly related but not included in the scope of this study is found in Appendix 2.0.

Context for the Study

The environment in which this study takes place is one of growing concern for quality assurance in education and training, in educational software, in the uses of technology in education, and in distance education / distance learning, combined with the potential of the learnware market. A more complete discussion of this context is found in Appendix 3.0.


2. Criteria for Evaluating Learnware Quality

In a nutshell, few complete lists of criteria exist for evaluating the quality of learnware, particularly from the consumer's point of view. Here is what was found in the time-limited on-line search that was conducted.

Toward the notion of quality standards for learnware 

Although these criteria refer to educational technology, which very broadly includes learnware, presumably the negative labels are to be avoided and the positive labels serve as informal quality criteria.

 

Technology and the State Adoption Process 

The goal of this recommended process, which is made available on the Internet by the Software Publishers Association, is "to provide recommendations to publishers, state textbook administrators, and state technology officers to serve as guidelines for lowering the barriers to the adoption of technology-based products." The recommended step-by-step process for adopting technology, including software, includes such pre-adoption issues as: 

 

Recommendations from the US Secretary of Education to schools 

 

Valuable Viable Software in Education 

In order to provide examples of "successful software", a study was carried out by Educom’s Educational Uses of Information Technology Program (EUIT) in partnership with the Annenberg/CPB Project. Successful software is that which has value and is viable. 

Value: Software is valuable if it has the capability of being used to help improve teaching and learning. Indicators of value can include evaluation results, awards won, testimonials from users, and the like. A software product's value is independent of the extent to which it is used.

Viability: Software is viable if it is used by enough people for a long period of time that all its investors (original developers, funders, publishers, institutional support staff, faculty and students) can justly feel that they each have received an adequate return on their own investments in developing, acquiring, and/or learning to use the software. 

The major findings of the study were that: 

 

The key elements of the recommended National Strategy for Lifelong Learning on the Information Highway (from the Working Group on Learning and Training of the Information Highway Advisory Council, 1995) are:

 

Existing quality standards for learnware 

At this time, a small number of "sets of standards" have been developed, typically by an organization or agency with a particular perspective, e.g.:

Oracle Corporation has produced the Oracle Learning Architecture with which it wants to establish the standard for education and training over the Internet. Oracle is drafting guidelines for learning objects that:

Microsoft, Apple and many other companies are aware of the value of being the leader in educational technology standards. The standards leader can determine core technology upon which courseware is based. This will benefit society by stimulating the development of more and better courseware. 

Standards development organizations that have standards architectures and ontologies, such as:

The Institute of Electrical and Electronics Engineers (IEEE), through its P1484 project, has a mission to develop standards, guidelines, and recommended practices for the area of computer-based learning, with the goal of enabling tools, courseware, information, and services to be provided on a component basis. IEEE P1484 has five task groups:

 

IEEE P1484 has developed a "computer-based learning system" model (attached). IEEE is attempting to be a forum for companies such as Oracle and Apple.

The American National Standards Institute (ANSI), which, through its ETG (Education Task Group), is:

The Industry Research Educational Multimedia Task Force of the Directorate General of the European Union has as its goal the standardization of educational technology. The purpose of the IMS Project is to develop standards for instructional use of the WWW and to address the integration of information technology into teaching and learning; the goals of the IMS Project are to:

 

Educational organizations

Educom – a nonprofit consortium of American higher education institutions that facilitates the introduction, use, and management of information resources in education

Educom’s National Learning Infrastructure Initiative which addresses the technological prerequisites needed to create an environment in which technology-mediated learning can flourish, such as open standards for networked learning applications 

Here are the standards, then, that appear to exist to date. 

The Industry Canada Quality Guidelines for Selecting Computer-based and Multimedia Programs for Training 

A set of quality guidelines was prepared by Dr. Lynette Gillis in a 1996 project managed by the Knowledge Connection Corporation and supported by Industry Canada and several major corporate training organizations. In 1997 these guidelines were pilot tested and improved, in a project led by Dr. Gillis, with further support from Industry Canada, Knowledge Connection Corporation and major corporate trainers. Industry Canada plans to produce a digitized version for electronic distribution, and will present a progress report at the October 27th meeting of the E/TPN Working Group on Learnware Quality. Unlike other quality guidelines listed in this report, the Gillis/Industry Canada tool is not simply a "grocery list" of criteria, but rather an organized process for decisions concerning the selection of quality learnware. Criteria are presented in the context of a structured decision process.

Critical Success Factors 

According to a publication of the Office of Learning Technology, the six critical success factors in the development of learnware, which are de facto criteria for quality, are: 

The four key characteristics of effective software are:

The Code of Ethics for professionals who use educational technology, developed by the Association for Educational Communications and Technology (AECT), which includes:

Internet-related criteria lists 

In the development of criteria for evaluating the quality of learnware, the evaluation criteria for websites and Internet information may make an important contribution. The criteria for evaluating Internet information range from the simplistic (e.g., Coolium) to the highly complex (e.g., the University of Georgia list).

To achieve "Coolium", the criteria for what makes the Cool Site for the Day, the webpage designer is directed to: 

 

From the consumer’s point of view, there might be something to this! 

At the other end of the spectrum, Wilkinson and others at the University of Georgia have developed a list of 11 criteria and 125 indicators. The criteria found in Evaluating the Quality of Internet Information Sources: Consolidated Listing of Evaluation Criteria and Quality Indicators could be the same as those used to evaluate learnware. They are:

In a subsequent study, experienced Internet users ranked the indicators of (1) information quality and (2) site quality in order of importance. These rankings, too, could serve as criteria for evaluating learnware quality.
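As an illustration only, the following minimal sketch in Python shows how a ranked list of indicators of this kind might be operationalized as a weighted checklist for a single learnware product. The criteria names and weights below are hypothetical placeholders, not the actual Wilkinson indicators or rankings.

    # Minimal sketch: hypothetical criteria and weights, not the Wilkinson lists.
    WEIGHTS = {
        "accuracy of information": 5,   # higher weight = ranked more important
        "authority of source": 4,
        "currency of content": 3,
        "ease of navigation": 2,
        "visual design": 1,
    }

    def weighted_score(ratings):
        """ratings maps a criterion to a score from 0 (absent) to 1 (fully met)."""
        total = sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS)
        return total / sum(WEIGHTS.values())   # normalized to the range 0..1

    # Example: one reviewer's ratings for a single learnware product.
    example = {
        "accuracy of information": 1.0,
        "authority of source": 0.5,
        "currency of content": 1.0,
        "ease of navigation": 0.5,
        "visual design": 0.0,
    }
    print("Overall quality score: %.2f" % weighted_score(example))   # prints 0.73

An instrument of this kind would, of course, need to be populated with the actual ranked indicators and an agreed weighting scheme before it could be used by consumers.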

In the middle of the spectrum, the Internet Public Library uses the following selection policy for quality information sources, i.e., products / services which:

Resources that are selected / approved by the IPL receive the IPL Ready Reference Seal. 

 

According to J. Jakevicius at Idaho State University, the following criteria recur when Internet resource evaluation is considered: content, authority, publisher-source, reference/awards, facts, documentation, bias, links and stability.

J. Alexander and M. Tate, reference librarians at Widener University, have adapted five traditional print evaluation criteria to web resources. Reproduced in Appendix 4.0, this process could be further adapted to learnware quality.

Possibly the most useful list of evaluation criteria, reproduced completely in Appendix 5.0, was developed by A. Smith in New Zealand. 

A different kind of Internet resource evaluation guide, reproduced in Appendix 6.0, covers the same concepts. 

An example of the various Internet rating styles can be found in the annotated list of Evaluation Sites reproduced in Appendix 7.0.

 

Quality Indicators (of Internet Information Sources) as Ranked by Experienced Internet Users: both Information Quality and Site Quality

Actual rating sheets for evaluating Internet sites have been produced by Teacher’s CyberGuide, and others. 


3. Who is doing what… Resources for the Learnware Quality Initiative

A directory of Canadian learnware developers, a service of the Training Technology Monitor http://www.traintec.com/LW_directory.html which includes company profiles: the people, tools and technologies, custom-developed products and off-the-shelf products

Recent studies of the Training Technology Monitor http://traintec.com/studies.html including:

Learnware Evaluations, a service of the Knowledge Connection Corporation http://www.kcc.ca/project/learn/home.html that includes formative or alpha testing, impact studies of developed programs, and product evaluations against KCC criteria for learnware quality to ensure that programs are instructionally sound, easy to use and technically reliable, and a good fit with the learners, the task and instructional setting 

An evaluation of the quality of information products and services on the Internet by H. Tillman, Evaluating Quality on the Net, at http://www.tiac.net/users/hope/findqual.html, against the following generic criteria for evaluation:

Web evaluation forms for use by primary, intermediate and secondary grades (users) that include indicators for design, content, technical elements, and credibility http://www.siec.k12.in.us/~west/edu/rubric1.html

Lucent Technologies’ Center for Excellence in Distance Learning (CEDL) which:

The products and services of the National Council for Educational Technology (NCET) in the United Kingdom, specifically: 

The study of quality in distance education in Australia, e.g., 

Organizations concerned with learning technologies in Canada: 

WICHE (Western Interstate Commission for Higher Education), at http://www.wiche.edu/ has undertaken a number of projects, among them:

An International Buyers Guide to Technology in Education is available from ISTE (International Society for Technology in Education); information is at http://www.kenpubs.co.uk/iste

The Software Publishers Association http://www.spa.org/ provides, e.g.,

Educom – a nonprofit consortium of higher education institutions that facilitates the introduction, use, access to, and management of information resources in teaching, learning, scholarship, and research – has a large number of related projects, including, e.g.:

The TeleLearning Network, a national Center of Excellence project at http://www.telelearn.ca/ is conducting a variety of research projects that are related to this study, e.g.,

NCRTEC (North Central Regional Technology in Education Consortium) http://www.ncrtec.org/ has a downloadable computer program called "The Learning with Technology Profile Tool." It is intended to help educators think carefully about their practice in the areas of engaged learning and technology. 

A list of Sources for Reviews of Educational CD-ROMs and Software, dated 1995, is located at http://www.iat.unc.edu/guides/irg-31.html

A Course Developer’s Standards Guide (Draft July 1997), developed by the East-West Project, is available at http://teleeducation.nb.ca/eastwest/standards/

From/at the American Association for Higher Learning, among other things:

"The only" scholarly journal for the educational technology field focusing entirely on R&D: Educational Technology Research and Development http://206.67.154.75/Pubs/etr_d.html

Grants to schools, via the NFIE (National Foundation for the Improvement of Education, at http://www.hfie.org/ ), grants from the Microsoft Corporation (proceeds from Bill Gates’ book) to use technology to improve student learning 

An Education Technology Promotion Guide, from the ISTE (International Society for Technology in Education) Private Sector Council and the Software Publishers Association, intended to help educators increase awareness, gain support and raise money for the integration of technology into their schools and districts (contact: cust_svc@ccmail.uoregon.edu )

THE Online (Technological Horizons in Education) publishes courseware assessments and evaluations, but not online (?) http://www.thejournal.com/

In 1995, the Information Highway Advisory Council claimed to be "examining the pros and cons of establishing national standards – both pedagogical and technical – for the delivery of learning and training on the information highway" and asked "What are the main considerations to be taken into account with respect to this issue?" (from a Working Consultation Document entitled Lifelong Learning on the Information Highway) 

A framework for thinking about standards specific to Intelligent Educational Systems is provided by Task
Ontology Design for Intelligent Educational/Training Systems, available as a position paper for the 1996 ITS
Workshop on Architectures and Methods for Designing Cost-Effective and Reusable ITSs, at
http://advlearn.lrdc.pitt.edu/its-arch/papers/mizuguchi.html 


4. Possible Next Steps 

Having reviewed this document, Jim McPherson has concluded that:

The initiatives identified are, for the most part, regional or institutional initiatives. Increasingly, learnware crosses national and sectoral borders (between countries, and from academia to industry). While there are many guidelines, principles and standards, there is no internationally coordinated set of guidelines representing a consensus of professionals and consumers in many countries. If Canada wants to export learnware, it will need to do so within the context of Quality Guidelines that are developed and accepted internationally…not just as a Canadian initiative. Consumer input would help to ensure general acceptance by all parties.

The recently-established Global Alliance for Transnational Education (GATE) could provide a logical framework or launching pad for an initiative which will need to be launched with stature and credibility. Canadians could negotiate a collaborative relationship with GATE to achieve this, and could have the work co-funded by GATE and Canadian sponsors. 

To that end, possible next steps are to:

Clearly, from the amount of information that exists, there is no need to reinvent the wheel.

Additional References

Collins, A. (September 1991). The Role of Computer Technology in Restructuring Schools. Phi Delta Kappan, pp. 28-36.

Commonwealth of Learning. (1994). Quality Assurance in Higher Education. Vancouver: author.

Dunning, P. (1997). Education in Canada: An Overview. Toronto: Canadian Education Association.

Educom Review Staff. (1995). Roger Schank: End Run to the Goal Line, in Educom Review
http://www.educom.edu/web/pubs/review/reviewArticles/30114.html

Harasim, L. (undated). On-line Education: A New Domain.
http://www.icdl.open.ac.uk/mindweave/chap4.html

Jones, G. (1997). Cyberschools: An Education Renaissance. Englewood, CO: Jones Digital Century Inc.

Kennedy, M. and Kettle, B. (Summer 1995). Using a Transactionist Model in Evaluating Distance Education Programs. Canadian Journal of Educational Communication, pp. 159-170.

Office of Learning Technologies. (1997). Critical Success Factors.
http://olt-bta.hrdc-crhc.gc.ca/info/online/green/factors.html

Oppenheimer, T. (July 1997). The Computer Delusion. The Atlantic Monthly at http://www.theatlantic.com/issues/97jul/computer.html

Ravitch, D. (1995). National Standards in American Education: A Citizen’s Guide. Washington: Brookings Institution.

Reibel, J. (1994). The Institute for Learning Technologies: Pedagogy for the 21st Century. Columbia University: Institute for Learning Technologies. At http://www.ilt.columbia.edu/ilt/papers/ILTpedagogy.html

Smith, Alastair G. (1997). Testing the Surf: Criteria for Evaluating Internet Information Resources. The Public-Access Computer Systems Review 8, no. 3. http://info.lib.uh.edu/pr/v8/n3/smit8n3.html

Supply and Services Canada. (1996) Highlights of Departmental S&T Action Plans in Response to Science and Technology for the New Century. Ottawa: Government of Canada.

Twigg, C. (1996). Academic Productivity: The Case for Instructional Software. Report from the Broadmoor Roundtable at http://www.educom.edu/program/nlii/keydocs/broadmoor.html

Wellburn, E. (1996). The Status of Technology in the Education System: A Literature Review. Victoria: BC Ministry of Education, Skills, and Training; Community Learning Network http://www.etc.bc.ca/lists/nuggets/EdTech_report.html

Western Virtual University. (1996). Enhancing the Marketplace for Instructional Materials: Best Practices in Implementation of Advanced Educational Technologies at http://www.wiche.edu/

Wilkinson, G. (1997). Evaluating the Quality of Internet Information Sources. http://itech1.coe.uga.edu/faculty/gwilkinson/webeval.html


Appendix 1.0 - Working Definitions

Operationalizing "learnware" 

This may be a useful classification for the types of learnware.

Related terms

Distance learning / distance education 

Instructional technology and educational technology 

Multimedia

Multiple mode or multi-mode 

Open learning is particularly characterized by:

User tools

Instructional agents


Appendix 2.0 - RELATED TOPICS THAT ARE NOT INCLUDED IN THE SCOPE OF THE STUDY

Topics that are directly related, but NOT included in this study of quality assurance and learnware, are the following.

For more information on software standards:

The Software Engineering Institute http://www.sei.cmu.edu/ provides a detailed list of SEI Products and Services that includes, e.g.:

Rutkowski, A. (1994) Today’s Cooperative Competitive Standards Environment for Open Information and Telecommunication Networks and the Internet Standards-Making Model at http://www.isoc.org/papers/standards/amr-on-standards.html

 

Internet standards at the technical and developmental level 

For more information on internet standards:

 

Internet standards and evaluation criteria for information sources 

For more information on evaluation of information sources on the Internet:

Library selection criteria for WWW resources (covering access, design, and content criteria) and other evaluation websites at http://www6.pilot.infi.net/~carolyn/criteria.html

 

Standards and quality assurance for information technology (IT) 

For more information on IT policies and practices in education:

 

Specific applications of learnware, e.g., on-line education or computer conferencing 

For more information on applications of technology in education :

 

Standards and quality assurance in distance education / distance learning other than that provided by learnware 

For more information on standards, quality and evaluation of distance education and/or distance learning:

 

 Standards in education and training 

For examples of standards in education / training:

 

Quality assurance in education / learning technology in general 

For more information:

 

Technical aspects of learnware 

The task of understanding, defining and describing quality in learnware is linked to, e.g., 

 

 

Finally, this study has been delimited in the following ways; that is, it rests on the following premises.

Learnware is considered, in this study, as a general or generic concept, although individual products may be: 

 

The use of technology in education and training is assumed to have advantages and disadvantages. This
study is not an endorsement of either position.

The development and implementation of standards (i.e., who should develop standards, how, and why or why not) is controversial, to say the least, and is assumed to be outside the realm of this paper.

Quality and quality assurance in education / training, via, e.g., performance indicators and measures, are assumed to be desirable goals. They are, however, not completely defined or described in this paper.


Appendix 3.0 - CONTEXT FOR THIS STUDY

The environment in which this study takes place is one of growing concern for quality assurance in education and training, in educational software, in the uses of technology in education, and in distance education / distance learning, combined with the potential of the learnware market.

The interest in quality assurance in education

1. encourages contacts between students and faculty
2. develops reciprocity and cooperation among students
3. uses active learning techniques
4. gives prompt feedback
5. emphasizes time on task
6. communicates high expectations
7. respects diverse talents and ways of learning

The interest in quality assurance in educational software

The current situation is characterized by philosophical discussions of intelligent educational systems, on the one hand,
and ad hoc implementation on the other, i.e., a situation in which there is

In this context, there have been calls for standards, e.g.: 

According to Rada and Schoening (undated), digital educational technology has been non-standard for most of its 50-year history. The WWW, however, provides a standard platform for educational technology, one that encourages the decomposition of educational technology tools into exchangeable components. Such technical standards are critical if computers are to contribute to student-centered learning, enabling students to learn at their own pace, in their own style, and according to their own interests and strategies; hence the creation of the PLS (Personal Learning Systems) Initiative.

The value of standards has been acknowledged. The existence of standards in many other technology areas has proven to accelerate the adoption and diffusion of technologies, e.g., in the microcomputer industry where competition around a common set of standards has brought the research and development capacity of the entire industry to focus on a single set of problems.

Quality assurance in the uses of educational technology 

Learnware is but a subset of the larger field of educational technology. It is conceivable, therefore, that the criteria for evaluating the effectiveness or quality of educational / instructional technology could be applied to learnware. Quality in the use of educational technologies is viewed from many different perspectives, as per the following:

If these are, in fact, the goals to be achieved, then quality learnware could be measured against them, in whole or in part.

Similarly, Frayer and West (1997) identify the following ways in which instructional technology can support learning:

If there are benefits of learning technologies vis-à-vis student learning, then those benefits could serve as quality criteria. Clearly, there is a need here to elaborate on these criteria rather extensively.

NCREL (North Central Regional Educational Laboratory – funded by the US government) has developed a "technology effectiveness framework" which posits that the intersection of two continua – learning and technology performance – defines the effectiveness of a particular technology in student learning. The framework’s horizontal axis is learning, which progresses from passive at the low end to engaged and sustained at the high end. The vertical axis is technology performance, which progresses from low to high. This framework, found at the end of this appendix, could be used to evaluate the effectiveness of particular technologies, such as learnware.
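To make the framework more concrete, here is a minimal sketch in Python, written as an illustration for this paper rather than as part of the NCREL materials. It classifies a technology use by its position on the two continua; the quadrant labels D and C follow the NCREL description reproduced at the end of this appendix, and the remaining quadrants are left without letter labels because the reproduced text does not name them.

    # Minimal sketch, not part of the NCREL materials. Quadrants D and C are
    # named in the NCREL text reproduced at the end of this appendix; the
    # engaged-learning quadrants are described but not labelled there, so no
    # letter labels are invented for them here.

    def classify(learning_engaged, tech_performance_high):
        """Place a technology use on the two NCREL continua."""
        if not learning_engaged and not tech_performance_high:
            return "Quadrant D: passive learning, low technology performance"
        if not learning_engaged and tech_performance_high:
            return "Quadrant C: passive learning, high technology performance"
        if learning_engaged and not tech_performance_high:
            return "Engaged learning with low technology performance"
        return "Engaged learning with high technology performance"

    # Example: a multimedia drill-and-practice title uses high-performance
    # technology but keeps the learner passive, so it falls into quadrant C.
    print(classify(learning_engaged=False, tech_performance_high=True))

Applied to learnware, a classification of this kind simply makes explicit the judgment that more functional technology is not, by itself, evidence of better learning.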

Quality assurance in the appropriate uses of technology 

Technology has the capacity to deliver better forms of student assessment, i.e., what the International Society for Technology in Education calls "authentic testing" which involves the following factors: 

From a different perspective, the following list of criteria, developed by the Open University in the UK to differentiate between different media, helps in understanding the various uses and appropriate uses of technology.

The categories for comparison used are: 

Quality assurance in distance education / distance learning 

Learnware is often used in the context of distance education / distance learning, where the impact of technology on learning effectiveness is an ongoing concern. Advocates for distance learning claim that it makes learning and training more accessible, more convenient, more effective and more cost-efficient for the learners and for the education provider. Distance learning, and learnware, can be used for formal education, continuing education, advanced professional education and management/employee development.

The environment for distance learning is characterized as one in which remote students have special needs including advising needs, access needs, communication needs, and administrative needs. In the traditional context – distance education delivered by traditional learning organizations for course / program credit – these needs are met through appropriate institutional support structures. However, in the distance delivery context – self-directed learners who may or may not want credit from traditional learning institutions – the learnware must assist distance learners to:

To develop independent and self-reliant distance learners (possibly, learnware learners), research indicates that the following three approaches are commonly advocated:

According to Lucent Technologies, the parent company of Bell Labs, the benefits of distance learning to organizations include:

According to Seligman (1992), the five elements of quality, specifically for the improvement of quality in distance education / distance learning, are:

All of these statements of values can be construed as criteria for evaluation or quality assessment. Criteria for evaluating distance learning, then, have some application in this study.

The nature of the learnware market 

Educational Learnware

The market for learnware is studied both by the industry and by educational bureaucracies within government. In short, both have concluded that the market for learnware is "a mess."

For example, a 1995 study of the educational software market, undertaken by the RAND Corporation, reached the following conclusions concerning the K-12 area.

In summary, three factors combine to create an uncertain school market for an increasingly sophisticated educational software industry: the structure of school budgets, which requires that educational technology be acquired from the vanishingly small fraction of the budget that remains after all other requirements are met; a current emphasis in the schools on Internet service, which tends to emphasize hardware acquisition; and an expanding home market for "edutainment" imagery in CD-ROM format.

One reason that the market is considered "a mess" is that the quality of educational software is uneven at best. For example, the California Software Clearinghouse evaluated 528 educational software packages between 1991 and 1994, and found that 376 were acceptable, 201 could be recommended for school use, only 63 were exemplary, and, in the end, only 8 were adopted as classroom materials.

That having been said, there is an untapped market for quality learnware. The Software Publishers Association 1997 Education Market Report, including both Canada and the US, concludes, e.g., that:

Trends from the report indicate that: 

The full report is available from SPA via the SPA website.

 

Educom also assembled leading educators to study the learnware market, and to specifically consider the implications of learning technologies vis-à-vis educational productivity in post-secondary education. They concluded that: 

 

Learnware for Corporate Training 

The corporate training market is judged by many to be the most lucrative, in terms of accessibility and size, to
pursue. A 1996 Market Assessment Study of New Media Learning Materials was produced by Industry Canada
with support from Human Resources Development Canada. This report identified major training requirements in
key industrial sectors, recognized a significant training role for new media learning materials, and outlined
significant opportunities for Canada's fragile but growing learnware industry.

 

The NCREL Technology Effectiveness Framework

Reproduced from http://www.ncrel.org/sdrs/edtalk/tef.htm

Now that we have meaningful and appropriate indicators for engaged learning and for high technology performance, we can use them to measure the extent to which individual technologies and technology-enhanced programs are effective - that is, the extent to which they support engaged learning. 

To this end, we (NCREL) have developed the technology effectiveness framework. This framework posits that the intersection of two continua - learning and technology performance - defines the effectiveness of a particular technology in student learning. The framework's horizontal axis is learning, which progresses from passive at the low end of the continuum to engaged and sustained at the high end. The vertical axis is technology performance, which progresses from low to high. (This is illustrated in Table 3 which was not electronically reproducible).

When we cross the two continua, four major learning and technology patterns emerge:

How to use the framework

The framework gives educators, researchers, and policymakers a way to evaluate technology and technology-enhanced programs and curricula against the learning goals they have for their students. Before doing so, however, these decision makers need to define their learning goals. That's where the trajectories for change come in.

Directions for Change

The framework encompasses four positive (desirable) directions for change: 

It is obviously counterproductive to move from D (passive learning with the least functional technologies) to C (passive learning with more functional, and more costly, technologies). If a school or group is not using technology to enhance engaged learning, there is little reason to pay the higher cost for greater functionality.

Once the school or school district establishes its curricular goals, the trajectories can guide it in determining what technologies can move learners toward these goals.

This framework provides a powerful matrix for analyzing particular technologies and programs in broad terms. Decision makers can use it as they select and work toward specific curricular goals to promote engaged learning. Researchers, curriculum developers, and staff developers can use the framework to design technologies and technology-enhanced programs. And schools can use the framework to evaluate technology and its costs. In doing so, the critical questions are:


Appendix 4.0 - PRINT EVALUATION CRITERIA

The material on this page was created by Jan Alexander and Marsha Tate, Reference Librarians at Wolfgram Memorial Library, Widener University, Chester, PA. We would like to thank them for making their work available to us.

Review of the Five Traditional Print Evaluation Criteria 

Adapting Five Traditional Print Evaluation Criteria to Web Resources


Appendix 5.0 - Criteria for Evaluating Internet Information

Reproduced from http://info.lib.uh.edu/pr/v8/n3/smit8n3.html

The following set of criteria for evaluating Internet information resources was developed by A. Smith (1997) in the context of the librarian's task of evaluating, selecting and recommending information resources. Smith's "toolbox of criteria," listed below, may be most relevant to the content side of learnware (as opposed to the process side). According to Smith, not all criteria apply to all resources, and librarians would choose criteria from the toolbox as needed. The criteria for evaluating Internet information resources are the following.

1. Scope

2. Content

3. Graphic and Multimedia Design

4. Purpose and Audience

5. Reviews

6. Workability 

7. User Friendliness

8. Required Computing Environment 

9. Searching

10. Browsability and Organization 

11. Interactivity 

12. Connectivity 

13. Cost 


Appendix 6.0 - Library Selection Criteria for WWW Resources

Reproduced from http://www6.pilot.infi.net/~carolyn/criteria.html

Carolyn Caywood © 1995. (An early version appeared on page 169 of the May/June, 1996 issue of Public Libraries.)

Libraries are beginning to select Internet resources to be linked on their websites. Just as book selection has been, this
process needs to be guided by a policy with stated criteria. The following are my suggested criteria for assessing the
value of a Web site to a library's users. Few sites meet all criteria, so the benefits must be weighed against the lacks.
Given the speed of change on the Internet, resources need to be re-evaluated on a regular schedule to determine if
they still meet these criteria.

ACCESS

DESIGN

CONTENT


Appendix 7.0 - Internet Information Resources Evaluation Sites

Reproduced from http://info.lib.uh.edu/pr/v8/n3/smit8n3.html

1.0 The Argus Clearinghouse
http://www.clearinghouse.net/
The Clearinghouse provides clearly laid out criteria that are used for evaluating the resource guides it includes. The
criteria are based on level of resource description, level of resource evaluation, design, organizational schemes, and
metainformation. These are useful criteria for evaluation of resources, although they are specifically intended for the
evaluation of resource guides. The criteria are listed at http://www.clearinghouse.net/ratings.html

2.0 Best of 1996 Social Sciences, Humanities & Asian-Pacific Studies WWW Resources
http://coombs.anu.edu.au/SpecialProj/QLTY/BEST/Method96.html
This is an example of a "best of" competition, but with an academic bent. Entries are to be rated under the criteria of
quality, structure, and presentation. Brief definitions of these criteria are listed under "Rating Procedure."

3.0 CyberHound
http://www.thomson.com/cyberhound/
Cyberhound is a service of Gale Research, well known for their print reference works. In addition to being a search and
directory service, Cyberhound offers reviews according to criteria under the headings content, design, technical merit,
and entertainment. The criteria are found at http://www.thomson.com/cyberhound/frames/content.html#rating

4.0 CyberStacks
http://www.public.iastate.edu/~CYBERSTACKS/
This experimental site arranges selected Internet information resources in science and technology by Library of
Congress classification. Its criteria (authority, accuracy, clarity, uniqueness, recency, reviews, and community needs)
are stated to be the same as those laid down for conventional resources in the American Library Association's
Reference Collection Development: a Manual, published in 1992. This does not address issues such as workability
which are more specific to Internet information resources. CyberStacks' criteria are at URL:
http://www.public.iastate.edu/~CYBERSTACKS/signif.htm

 5.0 Infofilter
http://www.usc.edu/users/help/flick/Infofilter/
As of July 1997, this project has ceased operation. It lists criteria of authority, content, organization, currency, search
engine, graphic design, and innovative use of the medium. The criteria are listed at
http://www.usc.edu/users/help/flick/Infofilter/template.html under "Review." 

6.0 The Internet Public Library
http://www.ipl.org/
The IPL states that its collection policy for selecting ready reference resources is based on content, updating, the
graphics being complementary rather than distracting, availability of text interfaces, evidence of proofreading, and
whether the document is a primary one. The selection policy is at http://www.ipl.org/ref/RR/Rabt.html#policy

7.0 Magellan Internet Guide
http://www.mckinley.com/
For ratings in the Magellan directory, McKinley Group uses the criteria of depth, ease of exploration, and net appeal.
The latter is assessed by asking, "Is it innovative? Does it appeal to the eye or the ear? Is it funny? Is it hot, hip, or
cool? Is it thought-provoking?" The ratings explanation is reproduced at http://www.lib.ua.edu/maghelp.htm#howdoes

8.0 SiteGrade 
http://www.sitegrade.com/
This site assigns "letter grades to websites in order to encourage responsible use of the World Wide Web medium so
that the widest possible audience can enjoy it." Fairly detailed criteria are stated at http://www.sitegrade.com/criteria/

9.0 Stevie's Web Site Ratings
http://www.steview.com/cgi-bin/STEVIE/rat_home
This is quite a complex voting system. Users can rate a site on access speed, applicability to different age groups,
ease of navigation, educational/informative quality, entertainment quality, appearance, timeliness, and usefulness.
Ratings are continually averaged, and the latest figures and top ten sites are available from the Web site.

10.0 World Wide Web Virtual Library Maintainers: 
(Criteria Used to Select Links for Resources' Catalogues)
In 1995 an email poll of WWWVL site maintainers was used to accumulate a range of criteria used by the maintainers
in selecting material for their sites. This provides a wide range of criteria, available at
http://www.ciolek.com/WWWVLPages/QltyPages/QltyLinks.html
The author maintains a page, which is part of the World Wide Web Virtual Library, with links to a number of resources relating to evaluation criteria for Internet information resources. See: http://www.vuw.ac.nz/~agsmith/evaln/evaln.htm