Towards an Open Rubric – Part Two

In part one, I related the shambling development project to build an online generalized rubric builder/application tool, codename “Rubricator”, at the IST Solutions Institute at Penn State from 2007 to 2008. The official project met an untimely demise as a result of a college reorganization. While this certainly wasn’t the first technology project to be offed by a surprise reorg, we had a more troubling problem: we had promised the tool to a colleague to help execute her research!

Carol McQuiggan, a friend of teammate Stevie Rocco, is a member of Penn State’s instructional design community. Carol had provided the first rubric we marked up and used for early testing and development: a self-assessment rubric to help faculty members measure their own preparedness for online teaching. We had signed Carol on as the first pilot user of the rubric system, and she had been accepted to the upcoming Sloan-C International Conference on Online Learning to present her research.

Stevie had since moved on to a new position with Penn State Online, and I was in charge of building the new Extreme Events Lab at Penn State. Stevie and I resolved not to hang Carol out to dry. Through some long evenings, work sessions at the local Panera, and the assistance of the local Adobe Flex Study Group, we managed to finish a limited version of the rubric tool, enough for Carol to complete her research and presentation. Stevie and I were also able to parlay our experience into a presentation at OpenEd 2009.

Stevie was also able to find a permanent home for the rubric tool as the Faculty Self-Assessment: Preparing for Online Teaching with Penn State Online.

In our rush to finish the “Rubricator”, we unfortunately had to make some severe compromises to our initial design. We were still no closer to an open model for rubrics, one independent of the application that displays them. In fact, we were left without even a clear path to release what we had created as open source: it remains the property of Penn State due to institutional intellectual property policies. Perhaps someone still at PSU will take up the charge.

Next: Part Three – Liberating the Rubric

Towards an Open Rubric – Part One

Though it seems like only a short time ago, it was almost three years ago that my old workgroup at Penn State set out to do something crazy: help our faculty deal with an overnight tripling of class sizes in our college.

The College of Information Sciences and Technology had been created by University President Graham Spanier in 1999 under a protectionist model: class sizes were capped and admission to the major was restricted in an effort to create something different, a program built from the ground up around Problem-Based Learning. At the same time, the administration recognized that the college couldn’t be self-sufficient under these restrictions, and it provided the startup funding necessary to allow the college to prosper.

When this additional funding came to an end, the college administration discovered a sobering fact: class sizes would have to nearly triple for the college to become self-sufficient. The artificial environment under which the college had prospered was coming to an end.

At the time of this inflection point, I was the Senior Technologist of the now-defunct IST Solutions Institute (SI), the education R&D arm of the college: an eLearning and technology “Skunk Works” composed of Instructional and Multimedia Designers and Software Developers. A few months earlier, Stevie Rocco, one of our Instructional Designers and my partner in crime at SI, had come across an interesting project: a JavaScript-based rubric tool for evaluating student coursework [I'm trying to find a reference to this project - BP]. The prototype had a number of technical limitations, but the idea was sound: make the rubric the UI metaphor through which faculty could interact with a system that facilitated higher-quality, higher-speed grading by simultaneously:

  1. handling the accounting operations behind grading and giving feedback
  2. fostering the sharing of grading standards across a diverse faculty

We set about designing and developing a rubric-centric application, one that would complement Penn State’s ANGEL LMS and SI’s existing Edison Services suite of eLearning tools.

In my mind, an absolute imperative in developing such an application was the separation of the definition of rubric documents (or data objects) from the application code of the system. Many existing rubric tools (including that first JavaScript implementation) had no clear separation of data from behavior; at best, this made them inseparable from their single, embedded rubric. In any case, the result was effectively a closed system with little hope of sharing data with open systems across the education enterprise.

Still other rubric-based systems decomposed a rubric into multiple relational database tables, shattering the coherence of the rubric as a first-class part of the system. One can hardly fault such projects: this was the prime application design pattern of the Web 1.0 and even Web 2.0 applications then coming into common use.

As we developed our prototype rubric tool (which we jokingly called “The Rubricator”), I made sure the design was built around the rubric as a document, at the time marked up in XML, that could be separated from the application, shared, remixed, and so on. The UI was built in Adobe Flex with a server layer in ColdFusion, two technologies the SI gang already knew well from previous projects. “The Rubricator” would load rubric document payloads at runtime, ensuring a strong separation between logic and data representation.
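To make that concrete, here is a minimal sketch of what such a standalone rubric document might have looked like. The element names, attributes, and criteria below are hypothetical reconstructions for illustration, not the actual Rubricator schema:

    <!-- Hypothetical rubric markup; element names and content are
         illustrative, not the actual Rubricator schema. -->
    <rubric title="Preparing to Teach Online">
      <criterion name="Course Organization">
        <level score="1">Materials are posted with no consistent structure.</level>
        <level score="2">Materials follow a consistent weekly structure.</level>
        <level score="3">Structure, pacing, and expectations are explicit throughout.</level>
      </criterion>
      <criterion name="Feedback to Learners">
        <level score="1">Feedback is limited to final grades.</level>
        <level score="2">Feedback is timely but generic.</level>
        <level score="3">Feedback is timely, specific, and actionable.</level>
      </criterion>
    </rubric>

Whatever the exact schema, the point was that a document like this could travel: because the application loaded rubric documents at runtime, the same client could render any rubric, and any rubric could be shared or remixed without touching a line of application code.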

The design process at SI was one we took very seriously. To date, this project remains the best example of team collaboration and iterative design and development I have experienced in my professional career. After two iterations of prototyping and design meetings, we had a clear design and application flow:

[Figure: Rubricator Application States]

After the ensuing six months of back-burner and after-hours hacking, we approached the end of our third iteration and a magical “1.0” release. Then the unthinkable happened: SI was dissolved and the team was scattered across other units in the college. While that was disappointing to all of us personally and professionally, we were also leaving a big stakeholder in a really awkward position.

Next: Part Two – Finishing what we started