Measuring What Matters
Performance measures can boost IT efficiency, generate buy-in from stakeholders, and improve technology's impact on teaching and learning. But you have to do it right.
A hot air balloonist calls down to a man he sees on the ground. “Do you know where I am?” The man replies: “41° 28’ north latitude, 81° 37’ west longitude, and about 150 feet off the ground.” The balloonist calls: “Thanks. You must work in IT.” “Yes I do—how did you know?” “Because you answered my question, and I still don’t know where I am.” The man on the ground calls back: “You must be a superintendent.” “Yes I am—how did you know?” “Because when you got here you were lost, and you’re still lost, but now it’s my fault.”
As a district technology leader, you’re the key player in helping your superintendent and other senior leaders measure performance and provide accountability across all district functions. But if you don’t understand what exactly needs to be measured and why, or how to communicate this information to your non-IT colleagues in a language they understand, the best technology in the world won’t help improve your organization. What follows is a brief primer on how school CIOs and their staffs can begin to build, report, and act on performance measures.
STEP 1: UNDERSTAND THE METRIC SYSTEM
What do we mean by performance measures? In a nutshell, critical information that, once organized and shared, will spur action and provide accountability. There are three categories of measures:
Input Measures = what you have to work with. Example: the number of networked computers in a school, their age, RAM, and/or processor speed.
Output Measures = what’s been done with inputs. Example: improved graduation rates and test scores. Output measures in one context (end-of-year test scores) can be input measures in another (baseline data for the next school year).
Work Measures = how inputs become outputs. The most important work measures are “lead indicators”—data you can gather and report today that tells you something important about how your output measures will look in the weeks, months, and years to come. Example: student attendance describes current work—how much of the available time teachers and students actually work together—and also helps schools predict output measures such as grades, promotion, and test scores.
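To make the lead-indicator idea concrete, here is a minimal sketch in Python, assuming hypothetical attendance figures and an arbitrary alert threshold; none of the school names or numbers come from an actual district.

```python
# Illustrative sketch: attendance rate as a lead indicator.
# All schools, figures, and thresholds below are hypothetical.

attendance = {
    # school: (student-days attended, student-days possible), year to date
    "School A": (41_300, 44_800),
    "School B": (41_200, 43_400),
    "School C": (88_200, 97_000),
}

ALERT_THRESHOLD = 0.93  # below this, output measures (grades, scores) are at risk


def attendance_rate(attended: int, possible: int) -> float:
    """Work measure: share of available instructional time actually used."""
    return attended / possible


for school, (attended, possible) in attendance.items():
    rate = attendance_rate(attended, possible)
    flag = "  <-- lead indicator: at risk" if rate < ALERT_THRESHOLD else ""
    print(f"{school}: {rate:.1%}{flag}")
```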
Measures best drive performance improvement and provide accountability when there’s a clear and shared understanding of cause and effect. For example, driving increased student use of computers will be easier if everyone involved believes that the action will improve student test scores or grades. Direct cause-and-effect relationships between measures are rare, however, so IT-related output measures need to be linked to school or district outputs through a “theory of action.” For instance, if you believe that student test results improve when students spend more time with particular software, then computer up-time is an output measure for IT activity that helps create a condition (software use) that enables the goal of higher test scores.
STEP 2: DEFINE YOUR GOALS
Now that you have some working definitions, come up with a set of four to eight IT output goals. Your goals should stem from district plans and focus on technology architecture and its impact on the district’s ongoing operations. Whatever goals you choose, they should address both efficiency and effectiveness. Efficiency goals might include:
- Reducing the annual overhead of noninstructional areas such as operation and maintenance of schools, transportation, food services, and central office administration.
- Realizing a positive return on investment for new technology projects.
- Reducing the total cost of ownership of end-user equipment, especially PCs and peripherals. (A back-of-the-envelope sketch of ROI and per-device TCO calculations follows this list.)
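As promised above, here is a rough sketch of those two efficiency calculations. It assumes deliberately simplified formulas and made-up dollar figures; a real ROI or TCO model would include many more cost categories.

```python
# Back-of-the-envelope sketch of two efficiency measures: ROI for a project
# and annualized total cost of ownership (TCO) per PC.
# Every dollar figure is a hypothetical placeholder, not a district number.


def roi(annual_benefit: float, annual_cost: float) -> float:
    """Simple return on investment: net benefit as a share of cost."""
    return (annual_benefit - annual_cost) / annual_cost


def tco_per_pc(purchase_price: float, useful_life_years: float,
               annual_support: float, annual_software: float) -> float:
    """Annualized cost of owning one PC: hardware, support, and software."""
    return purchase_price / useful_life_years + annual_support + annual_software


# Hypothetical project: automating a paper-based records workflow.
print(f"Project ROI: {roi(annual_benefit=180_000, annual_cost=120_000):.0%}")

# Hypothetical fleet assumptions: $900 PC, 4-year life, $350 support, $120 software.
print(f"TCO per PC:  ${tco_per_pc(900, 4, 350, 120):,.0f} per year")
```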
Effectiveness goals for IT are harder to define because they should be tied to overall district goals. Some possible IT effectiveness areas that can be tied to district goals include:
- Customer satisfaction and service quality
- Technology availability and response time
- Technology usage rates
STEP 3: ORGANIZE YOUR WORK MEASURES
Work measures can be grouped in at least two ways (a small illustrative sketch follows these items):
- Use: IT in schools is either for management (used to help run the school or district as a whole, like a student information system), teaching (used by teachers to support their work, like an electronic grade book), or learning (used by students in the learning process, like a biology dissection simulation).
- Architecture: IT capacity can be classified as infrastructure (Internet access, wide-area and local-area networks, servers, routers, and wiring); applications (finance, human resources, student information systems); data (operational data stores and data warehouses or stored data sets); portal (Web presence, content management, and software and systems interface issues); and end-user devices (PCs, printers, handhelds, and other devices).
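For illustration only, the sketch below tags a few generic systems by use and by architecture layer and then groups them; the systems and tags are assumptions, not an actual inventory.

```python
# Illustrative tagging of IT systems by "use" and "architecture" layer.
# The systems listed are generic examples, not an actual district inventory.

USES = {"management", "teaching", "learning"}
LAYERS = {"infrastructure", "applications", "data", "portal", "end-user devices"}

systems = [
    {"name": "Student information system", "use": "management", "layer": "applications"},
    {"name": "Electronic grade book",      "use": "teaching",   "layer": "applications"},
    {"name": "Dissection simulation",      "use": "learning",   "layer": "end-user devices"},
    {"name": "District data warehouse",    "use": "management", "layer": "data"},
]

# Group systems by architecture layer so work measures can roll up either way.
by_layer = {}
for s in systems:
    assert s["use"] in USES and s["layer"] in LAYERS  # catch typos in tags
    by_layer.setdefault(s["layer"], []).append(s["name"])

for layer, names in sorted(by_layer.items()):
    print(f"{layer}: {', '.join(names)}")
```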
Regardless of how IT work measures are organized, the appetite of non-IT people for them is limited, so reporting should focus on key IT outputs, their relationship to district goals, and any noteworthy trends in inputs or leading indicators. Be sure to revisit these measures, because the ones that matter may change over time, and new data sources are always coming to light.
Of course, no amount of measurement reporting can overcome harsh budget realities. Performance measures have not translated into more technology funding in my district, where a revenue crisis drove IT spending and staffing down more than 50 percent over a three-year period.
Nevertheless, measures are useful during such crises, as they help prioritize use of limited resources and guide daily management and continuous improvement efforts to squeeze maximum efficiency and effectiveness out of every dollar and every minute. And the discipline of making and using measures holds the hope that future investment of resources in K–12 IT will be better focused on improving efficiency and effectiveness. Our children deserve no less.
Peter Robertson is the chief information officer for the Cleveland Municipal School District. He’s on leave for the 2004-2005 school year to complete his doctorate in educational leadership at Columbia University.
A Trial-and-Error Example
Here’s how one district tweaked its performance measures to better match its objectives.
BEFORE:
Pre-2003, Cleveland’s IT department had no formal role in PC procurement. Years of uncoordinated PC buying in reaction to grant funds had created a huge unfunded “total cost of ownership” liability. Efforts to reduce that liability generated mountains of data about help desk, field support, and server and bandwidth usage. Because such measures were relatively easy to create, IT reporting consisted of long reports of field support ticket statistics sorted by category. Summaries of that reporting showed a roughly 40 percent decline in average problem resolution time even while ticket volume doubled. Such reporting was used to explain that further service improvement required additional resources, but the explanations lacked a clear connection to district goals.
AFTER:
PC inventory and software usage hours were more difficult to measure because the data had to be collected at network servers, aggregated across the district, and associated with the PCs and software that generated it. Once automated, though, the collection was relatively simple to run and easy to maintain. The resulting measures of input (PC inventory) and output (software usage) were more useful in tying IT activity to district goals. Showing a 30 percent decline in software usage hours per computer was compelling to non-IT people trying to understand technology support problems. The same was true for the inventory data, which revealed that 15 percent of the classroom PCs were too old to be connected to the Internet and about half of the district’s PCs were out of warranty.
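A rough sketch of the kind of aggregation described above might look like the following; the records, field names, and as-of date are invented for illustration and do not reflect the district’s actual data model.

```python
# Rough sketch of the "after" measures: software usage hours per PC and
# inventory health flags. All records, field names, and dates are invented.

from collections import defaultdict
from datetime import date

AS_OF = date(2004, 9, 1)  # hypothetical reporting date

usage_log = [  # (pc_id, software_title, hours), collected at network servers
    ("PC-0101", "Reading Tutor", 3.5),
    ("PC-0101", "Reading Tutor", 2.0),
    ("PC-0217", "Algebra Lab", 4.0),
]

inventory = [  # (pc_id, purchase_year, warranty_end, internet_capable)
    ("PC-0101", 1998, date(2001, 6, 30), False),
    ("PC-0217", 2002, date(2005, 6, 30), True),
]

hours_per_pc = defaultdict(float)
for pc_id, _title, hours in usage_log:
    hours_per_pc[pc_id] += hours

too_old = sum(1 for _pc, _yr, _war, net_ok in inventory if not net_ok)
out_of_warranty = sum(1 for _pc, _yr, war, _net in inventory if war < AS_OF)

print(f"Average usage hours per PC: {sum(hours_per_pc.values()) / len(inventory):.1f}")
print(f"Not Internet-capable:       {too_old / len(inventory):.0%}")
print(f"Out of warranty:            {out_of_warranty / len(inventory):.0%}")
```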
Sample Measures
Seven measures the Cleveland Municipal School District IT department reports to stakeholders, with their disaggregation dimensions in parentheses (a minimal aggregation sketch follows the list):
- Tech support cost (disaggregated by month)
- Ratio of PCs to students (classroom, month, and PC status)
- Network up-time percentage (device, connection, and hour)
- Help desk call volume (building, day, and reason)
- Problem ticket resolution time (building, day, and reason)
- Software and Internet usage hours per PC (building, day, and software or Web site title)
- Number of data warehouse users and events (user, day, and report)
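As noted above, here is a minimal sketch of how one of these measures, help desk call volume, might be disaggregated by building, day, and reason; the ticket records are invented.

```python
# Minimal sketch: disaggregating help desk call volume by building, day,
# and reason. The ticket records below are invented for illustration.

from collections import Counter

tickets = [  # (building, day, reason)
    ("Building A", "2004-10-04", "password reset"),
    ("Building A", "2004-10-04", "printer"),
    ("Building B", "2004-10-05", "network drop"),
    ("Building A", "2004-10-05", "printer"),
]

volume = Counter((building, day) for building, day, _reason in tickets)
reasons = Counter(reason for _building, _day, reason in tickets)

for (building, day), count in sorted(volume.items()):
    print(f"{building} on {day}: {count} calls")
print("Top reasons:", reasons.most_common(2))
```

The same pattern extends to the other dimensions in the list, such as resolution time by reason or usage hours by software title.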
Resources
- Establishing an Integrated Performance Measurement System, the second volume in a six-part handbook offered by the Performance-Based Management Special Interest Group: www.orau.gov/pbm/pbmhandbook/pbmhandbook.html
- Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results by Paul R. Niven (Wiley, 2002)
- Results: The Key to Continuous School Improvement by Mike Schmoker (ASCD, 1999)
- The Six Sigma Way by Peter S. Pande, Robert P. Neuman, and Roland R. Cavanagh (McGraw-Hill, 2000): www.sixsigmaway.com