Why Open Rubrics?

In my past few posts, I tried to shed a little light on my interest in an open data model for educational rubrics. If you’re new to the general concept of a rubric, there’s a fine summary on Wikipedia. So what do I mean by an “open data model”? Let’s break that down.

Again from our friend Wikipedia:

A data model in software engineering is an abstract model that describes how data are represented and accessed. Data models formally define data elements and relationships among data elements for a domain of interest.

The gist is that we need a way to describe rubrics, whole or in part, for use in a software system.

Most online rubric generator tools produce a rubric document – usually in HTML, possibly PDF or Excel – that lends itself well to printing and other pre-Internet use cases. But document rubrics are not easily integrated into any sort of information system: they are merely presentational forms of a rubric, and contain little or no semantic information about the meaning of the various parts of the document. The world of computerized rubrics is thus similar to the state of Web development in 1999 – lots of non-semantic, presentation-laden documents that are hard for any sort of software to process.
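To make that concrete, here is the kind of markup a typical generator might emit – a hypothetical fragment, not taken from any particular tool. Criteria, performance levels, and point values are all just table cells; nothing in the markup tells software which is which.

```html
<!-- Hypothetical generator output: pure presentation, no semantics.
     A criterion, its levels, and its scores are indistinguishable cells. -->
<table border="1">
  <tr>
    <td><b>Thesis statement</b></td>
    <td>Clear and arguable (4 pts)</td>
    <td>Present but vague (2 pts)</td>
    <td>Missing (0 pts)</td>
  </tr>
</table>
```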

So why an open data model? My thoughts on this tend to group into two arguments:

  1. transportability – a rubric is a document that should be able to move from one technological system to another. There are a few existing rubric tools that do create a computer-readable rubric document, but the file format is proprietary – rubrics created in such a system can only be used in that system, and can’t be exchanged with other systems that might be able to use them, except in some presentational form like PDF.
  2. continuity – relying on any sort of proprietary system as the sole means of reading and storing important data is no longer acceptable. Even de facto standard formats like Microsoft’s Word DOC and the other Office file formats are deemed too risky by many governments, leading to the creation of the OpenDocument Format Alliance.

So what type of format should we use? HTML and XML are great at describing the structure and content of documents, but far weaker at capturing the meaning of the information they contain.

The Semantic Web provides some exciting possibilities for open data in all forms. So why not rubrics?
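As a sketch of what that could look like, here is one rubric criterion described in RDF (Turtle syntax). The vocabulary is entirely hypothetical, invented for illustration only; the point is that each statement carries explicit, machine-actionable meaning rather than layout.

```turtle
@prefix ex: <http://example.org/rubric#> .   # hypothetical vocabulary

ex:thesisCriterion a ex:Criterion ;
    ex:description "Thesis statement" ;
    ex:hasLevel [ a ex:PerformanceLevel ;
                  ex:label "Clear and arguable" ;
                  ex:points 4 ] .
```

Any system that understood such a vocabulary could merge, compare, or remix rubrics from different sources – exactly the kind of exchange a presentational document can’t support.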

Next: Semantic Rubrics

Towards an Open Rubric – Part Two

In part one, I related the shambling development project to build an online generalized rubric builder/application tool, codenamed “Rubricator,” at the IST Solutions Institute at Penn State from 2007-2008. The official project met an untimely demise as a result of a college reorganization. While this certainly wasn’t the first technology project to be offed by a surprise reorg, we had a more troubling problem: we had promised the tool to a colleague to help execute her research!

Carol McQuiggan, a friend of teammate Stevie Rocco, is a member of Penn State’s instructional design community. Carol had provided the first rubric we marked up and used for early testing and development: a self-assessment rubric to help faculty members measure their own preparedness for online teaching. We had signed Carol on as the first pilot user of the rubric system, and she had been accepted to the upcoming Sloan-C International Conference on Online Learning to present her research.

Stevie had since moved on to a new position with Penn State Online, and I was in charge of building the new Extreme Events Lab at Penn State. Stevie and I resolved not to hang Carol out to dry. Through some long evenings, work sessions at the local Panera, and the assistance of the local Adobe Flex Study Group, we managed to finish a limited version of the rubric tool. This version was enough for Carol to complete her research and presentation. Stevie and I were also able to parlay our experiences into a presentation at OpenEd 2009.

Stevie was also able to find a permanent home for the rubric tool as the Faculty Self-Assessment: Preparing for Online Teaching with Penn State Online.

In our rush to finish the “Rubricator”, we unfortunately had to compromise on our initial design in a few severe ways. We were still no closer to an open model for rubrics, one independent of the application that displays them. In fact, we were even left without a clear path to release what we created as open source: it remains the property of Penn State due to institutional intellectual property policies. Perhaps someone still at PSU will take up the charge.

Next: Part Three – Liberating the Rubric

Towards an Open Rubric – Part One

Though it seems like just a short time ago, almost three years have passed since my old workgroup at Penn State set out to do something crazy: help our faculty deal with an overnight tripling of class sizes in our college.

The College of Information Sciences and Technology had been created by University President Graham Spanier in 1999 under a protectionist model: class sizes were capped and admission to the major was restricted in order to create something different, a program built from the ground up around Problem-Based Learning. At the same time, the administration recognized that the college couldn’t be self-sufficient under these restrictions, and provided the startup funding necessary to allow it to prosper.

When this additional funding came to an end, the college administration discovered a sobering fact: class sizes would have to nearly triple for the college to become self-sufficient. The artificial environment under which the college had prospered was coming to an end.

At the time of this inflection point, I was the Senior Technologist of the now-defunct IST Solutions Institute. SI was the education R&D arm of the college: an eLearning and technology “Skunk Works” composed of Instructional and Multimedia Designers and Software Developers. A few months earlier, Stevie Rocco, one of our Instructional Designers and my partner in crime at SI, had come across an interesting project: a JavaScript-based rubric tool for evaluating student course work [I'm trying to find a reference to this project - BP]. The prototype had a number of technical limitations, but the idea was sound: make the rubric the UI metaphor through which faculty could interact with a system that facilitated higher-quality, higher-speed grading by simultaneously:

  1. handling the accounting operations behind grading and giving feedback
  2. fostering the sharing of grading standards across a diverse faculty

We set about designing and developing a rubric-centric application, one that would complement Penn State’s ANGEL LMS and SI’s existing Edison Services suite of eLearning tools.

In my mind, an absolute imperative in developing such an application was the separation of the definition of rubric documents (or data objects) from the application code of the system. Many of the existing rubric tools (including that first JavaScript implementation) had no clear separation of data from behavior; at best, this makes them inseparable from their single, embedded rubric. In any case, the result is effectively a closed system with little hope of sharing data with open systems in the education enterprise.
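A minimal sketch of that separation, in Python with a made-up XML format for brevity (the real tool used Flex and ColdFusion, and this is not its actual schema): the application logic below knows only the generic shape of a rubric, so any rubric document can be swapped in without touching the code.

```python
import xml.etree.ElementTree as ET

# A hypothetical rubric document. In a real system this would be loaded
# from a file or service at runtime, never embedded in the application.
RUBRIC_XML = """
<rubric title="Essay">
  <criterion name="Thesis">
    <level label="Clear and arguable" points="4"/>
    <level label="Present but vague" points="2"/>
  </criterion>
  <criterion name="Evidence">
    <level label="Well sourced" points="4"/>
    <level label="Anecdotal" points="1"/>
  </criterion>
</rubric>
"""

def max_score(rubric_xml: str) -> int:
    """Generic behavior: sums the best level of each criterion.
    Works for any rubric document, not one embedded rubric."""
    root = ET.fromstring(rubric_xml)
    return sum(
        max(int(level.get("points")) for level in criterion.findall("level"))
        for criterion in root.findall("criterion")
    )

print(max_score(RUBRIC_XML))  # 8
```

Swapping in a different rubric document changes the result but not the code – the property the closed, single-rubric tools lacked.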

Still other rubric-based systems decomposed a rubric into multiple Relational Database tables, shattering the coherence of the rubric as a first-class part of the system. One can hardly fault such projects: this was the prime application design pattern of Web 1.0 and even Web 2.0 applications then coming into common use.

As we developed our prototype rubric tool (which we jokingly called “The Rubricator”), I made sure the design was built around the rubric as a document, at the time marked up in XML, that could be separated from the application, shared, remixed, etc. The UI was built in Adobe Flex with a server layer in ColdFusion, two technologies the SI gang was already very familiar with from previous projects. “The Rubricator” would load the rubric document payloads at runtime, ensuring a strong separation between logic and data representation.
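For flavor, a payload of that era might have looked something like this. The element names and descriptors here are hypothetical reconstructions for illustration, not the actual Rubricator schema:

```xml
<!-- Hypothetical sketch of an XML rubric payload, loaded at runtime.
     Not the actual Rubricator schema. -->
<rubric title="Online Teaching Self-Assessment">
  <criterion name="Technical readiness">
    <level points="3" label="Expert">
      <descriptor>Comfortable troubleshooting independently</descriptor>
    </level>
    <level points="1" label="Novice">
      <descriptor>Needs support for basic tasks</descriptor>
    </level>
  </criterion>
</rubric>
```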

The whole process of design at SI was one we took very seriously. To date, this project remains the best example of team collaboration and iterative design and development I have experienced in my professional career. After two iterations of prototyping and design meetings, we had a clear design and application flow:

[Figure: Rubricator Application States]

After the ensuing six months of back-burner and after-hours hacking, we approached the end of our third iteration and a magical “1.0” release. Then the unthinkable happened: SI was dissolved and the team was scattered across other units in the college. While that was disappointing to all of us personally and professionally, it also left a big stakeholder in a really awkward position.

Next: Part Two – Finishing what we started