BUILDING AN ONLINE COURSE BY DEVELOPING AND SEQUENCING SHARABLE CONTENT OBJECTS (SCOs)

Scott A. Tanner

Brian Caudill

Cheryl J. Hamel

University of Central Florida Institute for Simulation and Training, Orlando, Florida

Elizabeth Blickensderfer

Naval Air Systems Command, Training Systems Division, Orlando, Florida

ABSTRACT

Instructional designers and developers summarize the process they used to build a Web-based course by creating sharable content objects that follow the specifications of the Sharable Content Object Reference Model (SCORM). Many examples of SCOs are provided, including detailed descriptions of course content, screen shots from the course, and actual metadata. The course is a sequence of SCOs containing generic content for training supervisors of civilian personnel, and most of the SCOs are stand-alone and reusable. The difference between reusable SCOs and SCOs developed as transition pages is discussed. The rewards and pitfalls experienced during design and development of the course are presented, and the need for development tools is emphasized.

AUTHORS

Scott A. Tanner, M.A., Instructional Designer, UCF Institute for Simulation and Training. Mr. Tanner has participated in the analysis, design, development, implementation, and evaluation of paper-based, computer-based, and Web-based training systems. His most recent effort was completing a Web-based course on civilian supervisory training, which included designing the course to comply with the Advanced Distributed Learning (ADL) guidelines and the Sharable Content Object Reference Model (SCORM). His previous experience in supervisor training includes designing ready-to-lead seminars and training manuals for the hospitality industry. Mr. Tanner holds a master's degree in instructional systems design. He may be contacted via electronic mail at stanner@ist.ucf.edu.

Brian Caudill. Mr. Brian Caudill is a computer programmer at the University of Central Florida Institute for Simulation and Training. He has participated in the design and implementation of Advanced Distributed Learning (ADL) prototypes for military and non-military applications that complied with the Sharable Content Object Reference Model (SCORM), versions 1.0, 1.1, and 1.2. He has experience in Web integration of training content and learning systems analysis. He is a certified auditor for SCORM 1.1 and has expertise in XML, Java, JavaScript, C, C++, Visual Basic, database management, and network administration. He may be contacted via electronic mail at bcaudill@ist.ucf.edu.

Cheryl J. Hamel, Ph.D. Adjunct Faculty Member, Department of Psychology, University of Central Florida (UCF), and Research Associate, UCF Institute for Simulation and Training. Dr. Hamel’s most recent research efforts include the design and evaluation of Advanced Distributed Learning (ADL) prototypes that comply with the Sharable Content Object Reference Model (SCORM). She has a number of publications and presentations related to guidelines for design and evaluation of computer-based training and Web-based instruction. Dr. Hamel holds a doctoral degree in experimental psychology and additional degrees in psychology and mathematics. She may be contacted via electronic mail at chamel@ist.ucf.edu.

Elizabeth Blickensderfer, Ph.D. Elizabeth Blickensderfer is a research psychologist at the Naval Air Warfare Center Training Systems Division. She received her M.S. (1996) in Industrial/Organizational Psychology and Ph.D. (2000) in Human Factors Psychology from the University of Central Florida. Her research interests include distance learning, joint training, team performance, mental models, and training effectiveness. Dr. Blickensderfer can be contacted via electronic mail at blickensdeel@navair.navy.mil.

The University of Central Florida’s Institute for Simulation and Training (IST), in conjunction with the Advanced Distributed Learning (ADL) initiative and the Naval Air Warfare Center Training Systems Division (NAWCTSD), built a prototype online course by developing and sequencing Sharable Content Objects (SCOs) according to specifications of the Sharable Content Object Reference Model (SCORM), Version 1.1 (http://www.adlnet.org). A SCO is an independent chunk of instructional content that can stand alone to teach a skill or concept or can be combined with other SCOs to create a larger course of instruction. Following SCORM specifications, a SCO represents the lowest level of granularity of content that can be tracked by a Learning Management System (LMS) using the SCORM Run-Time Environment. SCOs with accompanying descriptors (metadata) can be stored in repositories for reuse. The SCORM is a collection of specifications adapted from many sources to provide a set of capabilities that enable the interoperability, accessibility, and reusability of Web-based learning content. The ADL initiative’s work on the SCORM has also brought together many different groups in government and the training industry that share similar interests. This reference model aims to coordinate emerging technologies with commercial and/or public implementations (http://www.adlnet.org).
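To make the run-time relationship between a SCO and an LMS more concrete, the JavaScript sketch below shows roughly how a SCO locates the API adapter object that a SCORM 1.1 conformant LMS exposes in a parent frame or opener window, and how it opens and closes the communication session with the SCORM-defined calls LMSInitialize and LMSFinish. This is a minimal sketch for illustration only, not the script used in our course; the findAPI and unloadSCO function names are our own.

    // Walk up the frame hierarchy looking for the "API" object that a
    // SCORM 1.1 conformant LMS makes available to the SCO it launches.
    function findAPI(win) {
      var attempts = 0;
      while (win.API == null && win.parent != null && win.parent != win && attempts < 100) {
        attempts++;
        win = win.parent;
      }
      return win.API;
    }

    var API = findAPI(window) || (window.opener ? findAPI(window.opener) : null);

    // Begin the communication session when the SCO is launched...
    if (API != null) {
      API.LMSInitialize("");
    }

    // ...and end it when the learner leaves the SCO.
    function unloadSCO() {
      if (API != null) {
        API.LMSFinish("");
      }
    }
    window.onunload = unloadSCO;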

Based on the ADL initiative and the direction of our sponsors at NAWCTSD, our goal was to develop a multimedia-enriched prototype Web-based course for Department of Defense (DoD) civilian supervisory training using the SCORM. We were also required to follow the ADL design guidelines, a collection of recommendations based on instructional design and Web design principles (http://www.jointadlcolab.org/guidelines). Due to time constraints, the content of the course was confined to a specific area within DoD civilian supervisory training: how to conduct performance review sessions. We set out to create a Web-based training course, approximately two hours in length, by developing and sequencing Sharable Content Objects (SCOs) containing content related to this specific area. We achieved many of our goals for Web-based training and came away with many lessons learned in the process.

CREATING SHARABLE CONTENT OBJECTS

Creating SCOs is not a simple task. When creating Web-based training, it is important to follow a sound instructional design model for the training to succeed. Because building courses from SCOs was a relatively new concept, we based our instructional design process on ADDIE, a simple instructional design model whose acronym stands for Analyze, Design, Develop, Implement, and Evaluate. Our project took us through the first three phases of the process; implementation and summative evaluation of the course were not required. However, formative evaluation did occur: subject matter experts (SMEs) and students evaluated the course during and after the design and development process. Our lessons learned are presented using the ADDIE framework.

Analysis: Putting a Design Team Together

We began the process of creating SCOs and designing the civilian supervisory training course by identifying and assembling the members of our design team. Our original design team consisted of two Technical Points of Contact (TPOCs) at NAWCTSD, several SMEs from different areas of government service, and a project director and lead instructional designer from IST. The intent was to start with these initial members and later add more members to the project team, such as a Web developer, graphic artist, multimedia developer, programmer, and technical writer. We believe that this decision contributed to the slow start of the project. Without a full design team from the very beginning, the instructional designer and project director had to play multiple roles, several of which were outside their areas of expertise.

Once our initial design team was in place, the next step in the analysis stage was to convene a focus group to narrow the content focus of the course. It would have been helpful to have all of the intended project team members attend the focus group sessions; this would have given them an opportunity to listen to the subject matter experts and develop an immediate understanding of the content. Even without all members of the project team present, however, we narrowed the focus of civilian supervisory training to “Conducting a Performance Review for Civilian Supervisors.”

The final stage of the analysis consisted of assigning roles to each design team member, researching SCORM issues, and gathering content. The only design team member added at this time was the Web course developer. The research and content gathering consisted of studying SCORM and the use of SCOs, reviewing the ADL guidelines, researching civilian supervisor training content sources, and gathering content from the subject matter experts. There were many difficulties in this process. First, there was no tutorial on SCORM because the model was in its infancy and constantly changing. SCORM Version 1.1 was also very technical and written for programmers; grasping its specifications was very difficult for anyone without a programming background. Additionally, the ADL guidelines were constantly being changed or expanded, so it was difficult to keep up with all of the concepts. Fortunately, many of the important basic guidelines for Web-based training were available and easily understood. Finally, an additional NAWCTSD point of contact who could serve as both a TPOC and a SME was added to the project team. The original subject matter experts were not always easy to contact, and having an additional SME was helpful.

Design: Decomposing Content Into SCOs

One of the greatest challenges of creating SCOs was taking existing content and breaking it down into reusable chunks of information. What made our course even more challenging was that no existing course was available to draw from. All of the SCOs were newly created rather than taken from existing training; the content came from the SMEs’ previous experience and from paper-based training materials from many different sources.

Once all of the content had been gathered, the instructional designer’s first step in the design process was to consult the SMEs and create learning objectives based on their expertise. The learning objectives were designed to be precise and measurable.

After the learning objectives were created, the content was broken down into topics of related subject matter that could be categorized under each learning objective. Based on this categorizing of content, the instructional designer was able to create a first draft of a topic outline. The topic outline was reviewed by the design team, adjusted, and then finalized.

Upon completion of the topic outline, the instructional designer began creating more detailed outlines in which specific content was added under each topic. At this point the instructional designer had to consider an important issue: how to break down the content into SCOs. These SCOs had to be created as “stand alone” chunks of content that could be put in a repository or database for reuse by another party. If a chunk of content could not stand by itself, then someone else would not be able to reuse it without significant modification. For our course, “Conducting a Performance Review for Civilian Supervisors,” we identified the nine SCOs shown in Table 1.

Table 1. Definition of SCOs for “Conducting a Performance Review for Civilian Supervisors”

SCO1: Government Performance Review Regulations

SCO2: The Performance Management Cycle

SCO3: Performance Plans, Standards, and Elements

SCO4: Performance Review Preparation

SCO5: Setting the Proper Tone of the Performance Review

SCO6: Delivering the Feedback for the Performance Review

SCO7: Post Performance Review

SCO8: Body Language

SCO9: Listening Skills

Each SCO in the table above could be stored in a repository and reused for other purposes. We designed the level of detail of the SCOs based on the amount of detail needed for the course. Had we been creating a course solely on body language (SCO8 above), we would have added more detailed content, and it would have been logical to create additional SCOs on types of body language such as eye contact, posture, gestures, facial expressions, and physical appearance. However, our goal was to build a course on conducting performance reviews; therefore, we had to design SCOs that stayed within the focus of our particular prototype.

Design and Development: Storyboarding

Decomposing content to form SCOs was an effective way to break down the content for the learners. Once this process was completed and approved by the design team, the instructional designer was instructed to begin storyboarding the content for development into Web-based training. At this point several more members were added to the project team: a graphic artist, multimedia developer, programmer, and technical writer. During the storyboarding process, it was important for the team to work together to determine which development tasks would capitalize on each member’s strengths. Most importantly, it was crucial that each team member understood the concepts behind SCORM and SCOs.

To begin the development process, we had to identify which authoring tools we would use to turn our storyboards into working HTML pages that included many forms of media. We decided to use Macromedia Dreamweaver to develop the course because our developer was most comfortable with that program. We also enriched the course with graphics, audio narration, video clips, Director pieces, and Flash animations to make it engaging to the learner. Beyond engagement, these forms of multimedia helped the learners develop concrete representations of the concepts being taught.

Each SCO that we developed consisted of a series of HTML pages, and each SCO followed an identical structure: it began with the learning objectives for that particular SCO, continued with the content, and ended with a short knowledge check with feedback. Each HTML page contained at least one type of media object, such as a video clip, Director piece, Flash animation, audio clip, or graphic. The knowledge checks (also called quick quizzes) were created using Director or Flash and were not designed to test the learners but to reinforce what they had already learned; they provided repetitive exposure to the content and gave the learners feedback. If the course prototype were implemented on a SCORM-conformant LMS, each SCO would contain the script needed to implement the SCORM Run-Time Environment, allowing the SCOs to be launched by the LMS and student scores to be tracked.
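If the prototype were run under a SCORM-conformant LMS as described above, a knowledge check could report its result through the run-time API. The sketch below, again in JavaScript and again illustrative rather than taken from the course, uses the SCORM 1.1 calls LMSSetValue and LMSCommit with the cmi.core data model to record a score and a lesson status. The reportQuizResult function name and the 80 percent passing threshold are our own assumptions, and API is the adapter object located when the SCO was launched.

    // Report a quick-quiz result to the LMS using the SCORM 1.1 data model.
    function reportQuizResult(correct, total) {
      if (API == null) {
        return; // running outside an LMS (e.g., a local preview); nothing to report
      }
      var percent = Math.round((correct / total) * 100);

      // cmi.core.score.raw holds the numeric score for this SCO.
      API.LMSSetValue("cmi.core.score.raw", String(percent));

      // cmi.core.lesson_status records passed, failed, completed, and so on.
      // The 80 percent mastery threshold here is assumed for illustration.
      API.LMSSetValue("cmi.core.lesson_status", percent >= 80 ? "passed" : "failed");

      // Ask the LMS to persist the data it has received so far.
      API.LMSCommit("");
    }

    // Example: a five-question quick quiz answered with four correct responses.
    reportQuizResult(4, 5);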

Navigation through each SCO was straightforward. The learner could move through a SCO linearly, by way of back and forward buttons, or non-linearly, by way of links at the top of the page to the different SCO topics. A navigation bar on the left side of the page, linking to the different SCOs, gave learners the opportunity to choose their own path through the course.

Lessons Learned. Since the concepts of SCORM and SCOs were new to the project team, we learned many lessons during the storyboarding and development stage of the project. We learned early that SCOs are a useful way to break content into reusable chunks. However, we also became aware of various restrictions that designing storyboards from “stand alone” SCOs places on the design process. Unfortunately, we learned many of these lessons toward the end of the design process.

Naming. One of our more serious restrictions involved the titles of the SCOs that appeared on the screen at the beginning of each SCO. We learned that titles could not include lesson numbers (or module numbers, etc.) because that would require the SCO to be modified before it could be reused. For example, if we titled a SCO “Lesson 9: Listening Skills,” someone wanting to reuse the lesson might want to use it as “Lesson 7” or might want it to be only part of a lesson. Thus, when the course was formed, the SCOs were sequenced without reference to particular lesson numbers, even though, from an educational standpoint, this made it more difficult for learners to note where they were in the course. It seemed best to title each SCO according to the content held within it and to use the identical title in the <title> tag of the HTML file. For instance, the title of SCO1 was “Government Performance Review Regulations.” Another naming issue came to our attention while developing the storyboards into HTML. Originally, we assumed that naming the files with a numbering system would make it easy for our team to understand the flow of the course; for example, some of our files had names such as 8_1.htm and 9_1.htm. However, in the spirit of creating “stand alone” SCOs, we realized it would be more helpful to name the files according to their content, for instance:

ipskills_bodylanguage.htm and ipskills_listening.htm.

This goes against the programming convention of using three-letter file name extensions and 7-10 characters in a file name, yet such descriptive file names are very useful to the designers and programmers working with SCOs.
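To make these naming conventions concrete, the fragment below sketches how the listening-skills SCO might open: the on-screen heading, the <title> tag, and the file name all describe the content rather than a lesson number. The markup is illustrative and is not copied from the course files.

    <!-- File: ipskills_listening.htm (named for its content, not its position in a course) -->
    <html>
      <head>
        <title>Listening Skills</title>
      </head>
      <body>
        <h1>Listening Skills</h1>
        <!-- learning objectives, content pages, and the knowledge check follow -->
      </body>
    </html>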

Course Context. The next design issue complicated by SCORM was our recognition that, when the SCOs were sequenced in a linear fashion to form the course, the course did not flow smoothly. From an instructional perspective, the course needed transitions between SCOs and an instructional context. Transitions typically provide objectives, introduce the content of each lesson in the larger course context, or link the topic to previous topics in the course. To solve this instructional dilemma, the team came up with the idea of “transition SCOs”: transition pages that link a set of SCOs together to form a course. Although the content of a transition SCO was not reusable, the graphic files or other raw media files that were part of the SCO could be placed in a repository for reuse. Figure 1 shows how we used a transition SCO to link SCO8 (Body Language) and SCO9 (Listening Skills). In the larger course context, both of these SCOs could be categorized under the general topic “Interpersonal Communication Skills,” so we added that title above the two SCOs in the course navigation bar.

Interpersonal Communication Skills

Body Language

Listening Skills

Figure 1. A Portion of the Course Navigation Bar Showing Content on Interpersonal Communication Skills as two SCOs

The learners could click on the Interpersonal Communication Skills title, which linked to a page that introduced the concept of interpersonal communication skills and stated the objectives for the two topics contained in that chunk of the course. However, this page alone (the transition SCO) would not likely be reused and is not a good candidate for a repository. If other instructional designers created a course, or a section of a course, on interpersonal communication skills, their content and SCO structure would likely be different, making our transition SCO irrelevant. For example, see Figure 2, which shows an alternative section of content on interpersonal communication skills from a hypothetical course.

Interpersonal Communication Skills

Verbal Communication

Non-Verbal Communication

Listening Skills

Figure 2. Hypothetical Navigation Bar Showing Section of Content on Interpersonal Communication Skills as three SCOs

In general, a transition SCO helps to create a smooth-flowing course for the learner, yet it usually cannot be reused in different contexts.

Independent Chunks of Content. Another design issue brought to our attention by SCORM concerned the design of “stand alone” pieces of content. Due to our sponsors’ requirements, we had incorporated many Department of Defense (DoD) service references and acronyms into the course. One of the worst things a designer can do is assume that the learner will understand acronyms. Good instructional design practice tells us that even defining an acronym at the beginning of a course is unacceptable when creating Web-based training: if learners control the sequencing options, they may decide to skip the introduction section where acronyms are defined. In the case of reusable SCOs, each SCO must stand alone, so all acronyms must be defined within the SCO. For example, a supervisor from the government or military taking the “Government Performance Review Regulations” section of the course will understand what “DoD” means, but a supervisor from a commercial company taking the same lesson may not know that “DoD” means “Department of Defense.” In other words, we had to make the SCOs generic enough to serve a general audience.

Our DoD target audience also called for references to the various services. To make the SCOs stand alone, all service references were removed from every SCO except the one on government performance review regulations. If a supervisor from the United States Navy were taking the course and a particular section referred to a United States Army regulation, the Navy supervisor would have no need for that information. The intention was to provide a course reusable across all services rather than one tied to a particular service. Additionally, service-specific references would probably discourage a supervisor from a commercial company from taking the course.

Creating Metadata Files. In order for our SCOs to be discoverable and ultimately reusable, they were given accompanying metadata that provide important descriptive information about each object. We used the standard metadata scheme described in SCORM 1.1. Although SCORM lists both required and optional data elements, we reduced the time and effort devoted to this task by writing metadata for only the mandatory elements. Metadata files were created for each SCO and for each of the approximately 400 raw media files. The programmer developed a SCO metadata template listing all mandatory elements, which was given to the instructional designer to complete. The template, shown in Table 2, was used to provide descriptive information such as file format, file size, authorship, version number, and instructional characteristics.

Table 2. SCO Metadata Template Completed for SCO8 (Body Language)

Title: Body Language

Catalog: —

Entry: —

Description: Definition, examples, and a basic overview on the different types of body language and how it can be used between a supervisor and employee during a performance review session

Keywords: nonverbal, communication, body language, eye contact, posture, gestures, facial expressions, physical appearance

Version: 1.0

Status: Draft

Metadata Scheme: SCORM 1.1

Format: text/html

Location: http://www.jointadlcolab.org/civ_sup_repository/bodylanguage.htm

Cost: No

Copyright: Yes

Purpose: Educational Objective

Description: Entry Level Introductory Tutorial

Keywords: novice, introductory, beginner, basic

The template for writing data elements for the raw media files contained similar elements, but fewer of them were mandatory. The completed templates were returned to the programmer, who then created the metadata files. Writing the metadata and programming the metadata files were extremely labor-intensive tasks due to the large number of files. Later in the process, a metadata generator tool was used to generate the metadata files for the raw media; this tool can be downloaded at http://www.cnet.com.
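For readers who have not seen the metadata files themselves, the fragment below sketches how a few rows of the completed template in Table 2 might be expressed in the XML metadata binding described in SCORM 1.1, which is based on the IMS Learning Resource Meta-data specification. The element names and namespace shown here only approximate that binding and should be checked against the SCORM 1.1 documentation; the fragment illustrates the mapping from template to file and is not a copy of one of our metadata files.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative only: approximates the IMS/LOM-style binding described in SCORM 1.1 -->
    <lom xmlns="http://www.imsproject.org/metadata/">
      <general>
        <title>
          <langstring xml:lang="en">Body Language</langstring>
        </title>
        <description>
          <langstring xml:lang="en">Definition, examples, and a basic overview of the
            different types of body language and how they can be used between a supervisor
            and an employee during a performance review session.</langstring>
        </description>
        <keywords>
          <langstring xml:lang="en">nonverbal, communication, body language, eye contact</langstring>
        </keywords>
      </general>
      <lifecycle>
        <version>
          <langstring xml:lang="en">1.0</langstring>
        </version>
        <status>
          <langstring xml:lang="en">Draft</langstring>
        </status>
      </lifecycle>
      <metametadata>
        <metadatascheme>SCORM 1.1</metadatascheme>
      </metametadata>
      <technical>
        <format>text/html</format>
        <location type="URI">http://www.jointadlcolab.org/civ_sup_repository/bodylanguage.htm</location>
      </technical>
    </lom>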

Formative Evaluation

During the entire course design process, our design team routinely conducted formative evaluations. We felt that formative evaluation would keep the design team focused and allow mistakes to be corrected early in the design and development process.

Due to the complex nature of SCORM, we needed to consult constantly with our programmer (and SCORM expert) to determine whether certain types of content and multimedia could be included in the course. Our programmer also supervised the addition of metadata for the <metadata> tag in each file.

Other members of the design team also participated in the formative evaluation process. Our sponsors and the SMEs evaluated the course content repeatedly; the sponsors made recommendations based on the course requirements, while the SMEs made recommendations based on their expertise. The instructional designer recommended sound training techniques, and the SCORM expert determined how those ideas could be implemented in the spirit of SCORM. The sponsors, SMEs, and instructional designer also evaluated the look, accuracy, and effectiveness of the graphics and multimedia created by the graphic artist and multimedia developer. Lastly, the entire team used checklists developed from the ADL Guidelines to evaluate the course’s instructional design, Web design features, and accessibility.

In addition, graduate students and SMEs outside of our project team evaluated the course during its development. The SMEs were asked to assess content accuracy, and the students were asked to determine whether the course was user-friendly and understandable. Students were instructed to write down their opinions based on their experience taking the course: they noted where further explanation was needed or where navigation was difficult, and they explained why they liked or disliked certain types of graphics or multimedia. This feedback helped us revise different areas of the course and make it more understandable and user-friendly.

While formative evaluation played an important role in the course development, summative evaluation was not conducted because the course was not implemented during the time of our project. Had the course been implemented, we would have conducted a summative evaluation using objective measures of course effectiveness. We would have also collected subjective measures on user interface dimensions such as readability, ease of navigation, and aesthetic appeal.

USING SHARABLE CONTENT OBJECTS

Designing online courses by developing and sequencing Sharable Content Objects (SCOs) has been a unique, challenging experience for our design team. We discovered many advantages and disadvantages of using SCOs. Designing a prototype course gave us the opportunity to take chances and see the areas where we were right on track or needed to improve.

Advantages of Using SCOs: A Commentary

One of the greatest advantages of using SCOs is that they could decrease development time and costs once an effective repository is created. Such a repository would store the SCOs along with other objects, such as raw media files. Unfortunately, large-scale repositories that meet SCORM specifications do not yet exist, but we see the future potential of this approach. For example, performance reviews are extremely important throughout the government and commercial sectors. By storing our SCOs related to conducting performance reviews, many organizations would have access to performance review training, either free or for a fee. These organizations would not have to create their own training from scratch; they could download the SCOs and raw media needed for their particular course, make minor adjustments, and save the expense of creating their own training. Repositories containing SCOs and other objects spanning the entire subject domain of “supervisory skills” would give organizations many opportunities for creating training and performance support.

Another advantage of using SCOs is that trainers and instructional designers could concentrate on instructional design rather than content development. Designers could draw on available training content in a repository and then make the adjustments necessary to gear it toward their target audience. Instructional designers would no longer have to spend as much time developing content from scratch, conducting research, and spending long hours with SMEs.

Disadvantages of Using SCOs and Suggestions for Improvement

There seem to be many opportunities and advantages for using SCOs. However, there are hurdles we must leap before the use of SCOs becomes a well-accepted aspect of instructional design.

One of the greatest disadvantages of using a repository for SCOs is that the content in the repository would require constant review and updating, a process that would demand considerable effort and expense. The individual or organization responsible for updating content, and the criteria for keeping content current, would have to be defined. Logically, the best content for a repository is content that is unlikely to change over a relatively long period and that appeals to a broad spectrum of potential users.

Another disadvantage of building SCOs for a repository is that the metadata currently available are not descriptive enough to support an effective repository. Designers need a standard metadata scheme that uses a vocabulary recognized by instructional designers. An appropriate and complete metadata scheme will give designers meaningful descriptions of SCOs when they are making selections from a repository.

What Will the Future Hold?

Developing online courses using SCOs has a great deal of future potential. The ability to access a repository for anytime, anywhere, reusable training can save government and industry money. However, many of the problems our design team experienced must be resolved before SCOs can be used successfully. Those problems include building SCOs that are independent and stand alone, developing and sequencing SCOs within an instructional context, and generating metadata so that SCOs and media files can be found through searches. Based on our experience, we believe these efforts are achievable with improved Web-based authoring tools, improvements to SCORM, and practice. Future issues will center on building a working repository: we will need a mechanism to keep the repository current and a standard metadata scheme that serves instructional purposes. We also expect that SCORM will become more inclusive of all types of instruction.

AUTHORS' NOTE

The views expressed herein are those of the authors and do not reflect the official positions of the organizations with which they are affiliated.

REFERENCES

http://www.adlnet.org. Information retrieved June 2002.

http://www.jointadlcolab.org/guidelines. Information retrieved June 2002.

http://www.cnet.com. Information retrieved June 2002.