A Generic Reference System Allowing Data-Fusion Within Continuous Improvement Processes of Engineering Education Programs

Guy Cloutier and Daniel Spooner

National accreditation standards and international agreements are redirecting the thrust of engineering programs towards “competencies”-based education. The standard of the Canadian Engineering Accreditation Board calls for a continuous improvement process (CIP) that “demonstrates that program outcomes are being assessed in the context of the graduate attributes, and that the results are applied to the further development of the program.” CIPs are closed-loop systems and require data-fusion from many sources of evaluation, which stretches many current evaluation practices beyond their capabilities. How can a professor construct a rubric specific enough to give students feedback on the subject matter, yet general enough for its results to be merged with those of other courses evaluating the same competencies within the curriculum? How can such a CIP measure improvement without a reference system that is stable over time and correlates well with other national standards? This paper addresses both questions and provides a proof of concept for the implementation of a generic system.

A reference system is proposed. It merges the criteria of the CDIO levels of proficiency with those of the European Qualifications Framework (EQF), relying on the conclusions of the DOCET report. Echelons are defined within a five-dimensional reference system: knowledge, cognitive process, complexity, autonomy, and commitment. Wherever possible, the components borrow heavily from known taxonomies (Bloom-Anderson, Krathwohl) as well as from international committees and agreements (International Engineering Alliance, Washington Accord). A seven-echelon scale is defined to promote and track student progress from beginner to professional.
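
As a minimal sketch, the reference system can be modelled as five components crossed with a seven-echelon scale. The component names and the E1–E7 scale come from the paper; the Expectation type, its describe method, and the rank labels in the example are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

class Echelon(IntEnum):
    """Seven-echelon proficiency scale, from beginner (E1) to experienced engineer (E7)."""
    E1 = 1; E2 = 2; E3 = 3; E4 = 4; E5 = 5; E6 = 6; E7 = 7

# The five components (dimensions) of the reference system.
COMPONENTS = ("knowledge", "cognitive_process", "complexity",
              "autonomy", "commitment")

@dataclass
class Expectation:
    """One point in the reference system: an echelon plus one ranked
    keyword or key phrase chosen per component."""
    echelon: Echelon
    ranks: dict  # component name -> chosen rank keyword/key phrase

    def describe(self) -> str:
        """Combine the chosen keywords/key phrases into a descriptive text."""
        parts = [f"{c.replace('_', ' ')}: {self.ranks[c]}" for c in COMPONENTS]
        return f"[{self.echelon.name}] " + "; ".join(parts)

# Example: an entry-level expectation built from hypothetical rank labels.
entry = Expectation(Echelon.E1, {
    "knowledge": "factual knowledge",          # Bloom-Anderson style (illustrative)
    "cognitive_process": "understand",         # Bloom-Anderson style (illustrative)
    "complexity": "single well-defined task",  # hypothetical key phrase
    "autonomy": "under close supervision",     # hypothetical key phrase
    "commitment": "receiving and responding",  # Krathwohl style (illustrative)
})
print(entry.describe())
```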

To better structure the gathered data, all five components use keywords and key phrases grouped into ranks, and the seven-echelon scale spans 1350 rank combinations. Each rank is expressed by keywords associated with the existing taxonomies, and by generic key phrases for the remaining components. These keywords and key phrases combine into descriptive texts that express expectations. More than 3 million combinations are possible over the seven proficiency echelons: from over 1 million entry-level statements at echelon E1 down to over 160 thousand statements at graduate-level echelon E5 (echelons E6 and E7 pertain to the new and experienced engineer). The expectations of courses covering different subject matters can then be compared, as they are built from ranked arguments for each component. Twenty rules are used to highlight polarities in the expectations that could affect student outcomes.
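
The combination counts follow from simple products over the five components. The sketch below illustrates the arithmetic under hypothetical rank and keyword counts (RANKS and PHRASES_PER_RANK are invented placeholders); the paper's actual tables are what yield the 1350 rank combinations and the 3-million-plus statements.

```python
from math import prod

# Admissible ranks per component at two sample echelons (hypothetical counts).
RANKS = {
    "E1": {"knowledge": 4, "cognitive_process": 4, "complexity": 3,
           "autonomy": 3, "commitment": 2},
    "E5": {"knowledge": 2, "cognitive_process": 2, "complexity": 2,
           "autonomy": 2, "commitment": 2},
}

# Keywords/key phrases available per rank, per component (hypothetical).
PHRASES_PER_RANK = {"knowledge": 5, "cognitive_process": 5, "complexity": 4,
                    "autonomy": 3, "commitment": 3}

def rank_combinations(echelon: str) -> int:
    """Rank combinations at one echelon: the product, over the five
    components, of the number of admissible ranks."""
    return prod(RANKS[echelon].values())

def statement_combinations(echelon: str) -> int:
    """Descriptive statements at one echelon: each rank choice expands
    further into its associated keywords/key phrases."""
    return prod(n * PHRASES_PER_RANK[c] for c, n in RANKS[echelon].items())

for e in RANKS:
    print(e, rank_combinations(e), statement_combinations(e))
```

With these toy numbers, the entry echelon admits far more statements than the graduate one, mirroring the paper's narrowing from over 1 million statements at E1 to over 160 thousand at E5.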

Professors keep their existing evaluation habits: they are asked to construct the generic statements they feel are appropriate, given their current practice. A natural linkage ensues, hopefully with increased collaboration to implement the change.

A proof-of-concept expectations generator was created in Excel and is being tested on ten undergraduate courses with the help of professors and teaching assistants. Successes, difficulties, and suggestions about the management of change, as well as future developments to combine and manage the generated data, are addressed. Some professors spontaneously proposed to present the generic statements as evaluation parameters to their students, arguing that they conveyed their true expectations beyond the subject matter. Others felt uneasy about the change and wanted to develop their own “private-generic” statements.

Proceedings of the 10th International CDIO Conference, Barcelona, Spain, June 15-19, 2014
