2. Rationale for Data-Driven Decision-Making: it
attempts to answer two basic questions:
What is it that we do?
Versus what do we think we do?
How well do we do it?
Versus how well do our end users believe
that we do it?
3. Data-Driven Decision-Making is a massive
culture change, not an activity change.
The high degree of culture change is also the
biggest reason Data-Driven Decision-
Making fails.
Data-driven changes often go unimplemented
because there is no training for all involved.
Change is implemented for the wrong reason.
4. Create a hypothesis.
Data (converted to knowledge) must lead to change,
which leads to greater accomplishments.
Involves a pre-test (establishing a baseline) and a post-
test (measuring the effects of change); a minimal sketch
of this comparison follows below.
Transformation issue.
Create a culture where change becomes part of the job.
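A minimal sketch of the pre-test/post-test idea above, assuming paired
scores for the same students before and after a change; the numbers are
hypothetical, and the paired t-test (via scipy) is just one reasonable
way to ask whether the measured gain is more than noise.

# Minimal pre-test/post-test sketch (hypothetical scores).
# Establish a baseline, implement the change, then measure the same group again.
from scipy import stats

pre_scores = [62, 70, 55, 81, 67, 74, 59, 66]    # baseline, before the change
post_scores = [68, 75, 61, 84, 70, 79, 63, 72]   # same students, after the change

avg_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

# A paired t-test asks whether the gain is larger than chance variation.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"Average gain: {avg_gain:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")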
5. When we create curriculum, our initial goal is
often the following:
“Add VALUE to the process of education.”
6. Create a value-added evaluation.
New classroom activity: Does it add value to the
course or will it be viewed as busy work?
Presently, Data-Driven Decision-Making
tends to be done for justification purposes
rather than for knowledge and insight.
7. Define “VALUE” and “BY WHOM.”
Identify all stakeholders.
Valued by the Students.
Valued by the Administration.
Valued by the Faculty.
Valued by the Business Community.
Valued by the Tax-Paying Community.
Valued by the Legislature.
Valued by the THECB.
Valued by SACS.
8. We tend to focus all of our data collection and
decision-making on satisfying the requirements
of our administration, the legislature, the
THECB, and SACS because those areas control
the rules and our funding.
9. Identify what activity is specifically valued by
each stakeholder group.
Ask “Do I have control over the activity?”
If no (the activity is required by the THECB, the
Legislature, or some outside source and is driven by
funding, laws, or rules), ignore it: either you do the
required activity or you don’t.
If yes, measure the activity repeatedly to
determine the value/worth of including it.
10. This is the fundamental question we need to
answer.
Ex: We teach courses well, but what is the correct
course to teach? (If English, why English, and
why that particular English course?)
Question: Do we keep certain courses in the program
curriculum for the right reasons (or are they sacred
cows)?
11. Why do the stakeholders value that activity?
Identify universally understood components (determined
by Root Cause Analysis).
Root Cause Analysis to the fifth degree (asking “why”
five times in succession).
Purpose of Root Cause Analysis – Determine whether we
are collecting the right data for the right reasons.
Create, maintain, and update benchmarks.
12. Beware of moving parts/skewed data.
Ex: Different groups of students taking classes for
different agendas.
Root Cause Analysis is the equivalent of
Statistical Process Control 101.
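Since the slide likens Root Cause Analysis to Statistical Process Control
101, here is a minimal SPC-style sketch, assuming hypothetical per-semester
pass rates; the three-sigma limits are the textbook SPC convention, not
something prescribed by the presentation.

# Minimal Statistical Process Control sketch (hypothetical pass rates).
import statistics

baseline = [0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75]  # stable prior semesters
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
upper, lower = mean + 3 * sd, mean - 3 * sd  # conventional three-sigma limits

new_rate = 0.58  # latest semester
if lower <= new_rate <= upper:
    print(f"Pass rate {new_rate:.2f} is within the expected range "
          f"({lower:.2f} to {upper:.2f}).")
else:
    print(f"Pass rate {new_rate:.2f} falls outside {lower:.2f} to {upper:.2f}: "
          "look for a root cause (e.g., a different group of students "
          "taking the class for a different agenda).")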
13. Identify what activities are necessary/valued
in the learning process.
Instructor’s Perspective: Eliminate those
activities that do not contribute to the
learning process.
14. Is part of the curriculum simply busy work?
Is part of the curriculum done solely to cover the
rules/requirements?
There must be a continuous post-course
evaluation (6 months, 1 year, etc.) to study the
impact of the course/degree (a longitudinal study).
We need to study the results of the course more than the
presentation skills.
Community colleges pride themselves on Applied
Learning. We need to evaluate the long-term
effectiveness of our Applied Learning techniques.
15. Implement the needed change and run post-
tests to determine the effectiveness of the change.
Leader’s Perspective: How do I mentor that
learning process to help it become more
effective?
16. Follow-up questions:
Did the degree/certificate/course matter to
them?
Did it have value?
Did the student go to work in a field matching
his/her major?
This information is already required by Perkins
Grant criteria, but it is also significant in determining
the worth/value of the related degree/certificate
curriculum (a minimal placement-rate sketch follows
below).
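A minimal sketch of turning those follow-up questions into a placement
metric, assuming a hypothetical list of completer follow-up records; the
field names are illustrative and not the actual Perkins reporting format.

# Hypothetical completer follow-up records (illustrative only).
completers = [
    {"major": "Welding", "employed": True,  "job_field": "Welding"},
    {"major": "Welding", "employed": True,  "job_field": "Retail"},
    {"major": "Nursing", "employed": True,  "job_field": "Nursing"},
    {"major": "Nursing", "employed": False, "job_field": None},
]

employed = [c for c in completers if c["employed"]]
in_field = [c for c in employed if c["job_field"] == c["major"]]

print(f"Employed after completion: {len(employed)} of {len(completers)}")
print(f"Working in the field of their major: {len(in_field)} of {len(completers)}")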
17. Performance Driven Institution:
Leadership defines “Performance.”
Determine how we measure that
performance.
Data Collection – Develop a regular, predictable
system that drives change.
Convert data to knowledge and apply to
strategic planning to drive changes.
18. Performance Driven Institutions
Implement changes.
Administer a post-test to determine the effectiveness
of the changes.
Note that people generally resist change
(sabotage), but also note the reality (the
performance gap) that leads to the need for change.
19. Results of a Performance Driven Institution:
Less ego.
Less subjective evaluation.
More analytic thought and input.
Less emotion.
More predictable actions.
20. Results of a Performance Driven Institution:
Data-driven change.
Must supply adequate resources to drive
change.
Need to act on change (implementation).
Courage to change.
Documentation of where the organization is versus
where it needs to be (Performance Gap).
21. National Education Initiatives directed at
Community Colleges and requiring Data-
Driven Decision-Making:
Achieving the Dream
Completion by Design
Completion Matters
American Graduation Initiative
Foundations of Excellence
22. If you hold people accountable without
changing the culture, then people will do the
following:
Manipulate the Data
Ex: the school district in Atlanta that changed test scores.
Cherry-pick the Data
If you hold people accountable, but do not
change the system, then people will ignore
you.
23. How to Evaluate Programs (systematic process
and schedule - usually 5-7 years).
1. Complete Program Discrepancy Analysis
Initial Discussion or Collaboration:
“This is what we do now” versus “this is what we want
the program to be,” based on best practices, survey
results, research results, and more.
Look at the discrepancy between what is and what is
desired (a minimal gap summary is sketched below).
Resources:
Financial statements
Staffing
Technology
Professional Development
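A minimal sketch of the “what is versus what is desired” comparison in
step 1, assuming a few hypothetical resource measures; the categories and
numbers are illustrative only.

# Hypothetical "current versus desired" measures for a program discrepancy analysis.
current = {"full_time_faculty": 3, "lab_stations": 12, "prof_dev_hours_per_year": 8}
desired = {"full_time_faculty": 5, "lab_stations": 20, "prof_dev_hours_per_year": 24}

for area in desired:
    gap = desired[area] - current[area]  # how far the program is from where it wants to be
    print(f"{area}: now {current[area]}, want {desired[area]}, gap {gap}")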
24. How to Evaluate Programs
2. Determine extent and/or dimensions
(scope) of program evaluations.
3. Revisit, revise, or draft a philosophy
statement for the program to be evaluated.
What do we want the evaluation process to achieve?
Hide problems or maintain the status quo.
Discover problems and pinpoint possible causes.
Explore possibilities for real growth.
Use the evaluation instrument to justify changes
(including hiring additional staff).
25. How to Evaluate Programs:
4. Determine the questions to be asked
(attributes) and answered by the program.
Should include the essential attributes of the program so that
you know what you are evaluating.
Know the degree to which those essential attributes are
implemented (needed to compare control and
experimental groups).
Know the levels of satisfaction of the users of the program and
of the consumers of the program.
Examine results (test results, satisfaction levels, accomplishment
of objectives, and more).
Look for unintended results.
26. How to Evaluate Programs:
5. Determine the types of data and number/types
of data collection instruments to be used.
Include the following:
Classroom observation (degree of implementation)
Survey work (levels of satisfaction, implementation)
Focus groups (can help interpret confusing survey
results)
Data (what else is going on, what types of task forces are
involved, what types of study teams are being used, how
levels of rigor are determined)
Data that includes job placement %, transfer %, number of
completers per year, how courses were chosen, number of faculty,
growth of the program, program/course assessments, professional
development, program demographics, etc.
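A minimal sketch of pulling a few of these step-5 measures together for one
program over three years, assuming hypothetical counts; in practice the
figures would come from institutional research, placement, and survey sources.

# Hypothetical yearly figures for one program (illustrative numbers only).
program = {
    "completers":            [28, 31, 35],
    "placed_or_transferred": [22, 26, 30],
    "enrollment":            [140, 155, 170],
}

for year in range(len(program["completers"])):
    rate = program["placed_or_transferred"][year] / program["completers"][year]
    print(f"Year {year + 1}: completers {program['completers'][year]}, "
          f"placement/transfer rate {rate:.0%}, "
          f"enrollment {program['enrollment'][year]}")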
27. How to Evaluate Programs:
6. Decide who will be involved in the program
evaluation process.
Program Faculty
Other Faculty Members
Associate Faculty Members
Administrators
Students
Affected Business/Industry members
Community
28. How to Evaluate Programs:
7. Develop the program evaluation timeline
(planning process, questions, data collection,
analysis, and reporting).
Can be a very daunting process.
Can take several years to complete.