Connecting the Dots and Data to Improve Student Success

May 31, 2022
  • Enrollment Management
  • Retention
  • Corporate

Sponsored by Salesforce

There is broad agreement in higher education that persistence and completion rates need to improve at institutions of nearly every type, size, and control. While there have been slight gains in recent years, only half of students complete degrees at their starting institutions. Mobility studies have helped us understand that many students eventually complete degrees at other institutions, increasing overall completion to about 62%. Unfortunately, student departure remains an enigma for most institutional leaders as they seek to understand and remedy the causes of attrition.

There is no shortage of literature or advice on what works in student retention. Wes Habley at ACT led decades of studies and publications on the topic, beginning in 1980. Internet searches are likely to reveal any number of lists, toolkits, or “key strategies” to improve retention. Given all this knowledge, it is surprising that rates have improved only gradually and stubbornly. Two reasons may explain it. First, institutions are swimming in data points and may not be able to identify the most important data to track or the data that explain the specific persistence patterns of their student populations. Second, we may lack visibility into the connections between institutional supports and the students who need them, so we cannot see whether we are genuinely “moving the needle.”

The four steps below help institutions organize, frame, analyze, and act upon their data. With this method, an overwhelming array of potential data points becomes manageable. Most importantly, it can lead to significant actions that improve student outcomes.

Step 1: Choose a Student Success Model to Organize the Possible Data Points

A myriad of student success models exists in the literature. Pick one that is fairly comprehensive and addresses the issues that your students face. One of my favorite models is Dr. Vincent Tinto’s “Theory of Student Departure.” It organizes the issues along five main lines:

  • Academic Integration

  • Social Integration

  • Financial Support

  • Goal Clarity

  • Support of Family and Friends

Whether it is Tinto’s or another model isn’t important. What is important is that the model chosen spans a wide array of issues that impact student persistence and completion.

Step 2: Create a Framework of Leading and Lagging Indicators

Once you have chosen a model, the next step is to assess what is known about students and their likelihood to persist and succeed. The most frequently available data are lagging indicators: those that reflect something that has already taken place. These are often used to analyze student persistence and completion patterns. The analyses may be simple in their relationships, such as a univariate analysis of high school GPA against first-year persistence rates, or complex, such as a non-linear regression model that seeks to understand the relative contributions of many factors in explaining a student’s decision to leave or persist. These analyses are most helpful when used to develop an array of supports, services, opportunities, and experiences designed to increase the likelihood of success for some or all students.
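As a minimal sketch of what such a lagging-indicator analysis might look like, the Python snippet below fits a logistic regression (one common choice; the article’s non-linear models would be a richer variant) relating admission-file factors to first-year persistence. The file name students.csv and its columns are hypothetical placeholders for an institutional extract.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical extract of lagging indicators; file and column names
# are illustrative, not an actual institutional schema.
df = pd.read_csv("students.csv")  # hs_gpa, credits_attempted, persisted (0/1)

X = df[["hs_gpa", "credits_attempted"]]
y = df["persisted"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Coefficients hint at each factor's relative contribution to persistence.
print(dict(zip(X.columns, model.coef_[0])))
print("holdout accuracy:", model.score(X_test, y_test))
```

The point of a sketch like this is not the model itself but the output: relative contributions of factors that can then shape which supports the institution builds.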

Equally important are leading indicators, which tell us what is happening right now in the student experience. Examples include early warning flags set by faculty, lapses in LMS logins, absences from class sessions, and a stretch of days without a swipe into a dining hall, library, or rec center. These indicators allow us to see changes in engagement patterns in real time. They are far less frequently used in higher education today, which makes them an opportunity for impact.
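To make one such indicator concrete, here is a minimal sketch in Python that flags students who have not logged into the LMS in the past week. The lms_logins.csv file, its columns, and the seven-day threshold are assumptions for illustration.

```python
from datetime import datetime, timedelta

import pandas as pd

# Hypothetical LMS activity log: one row per login (student_id, login_at).
logins = pd.read_csv("lms_logins.csv", parse_dates=["login_at"])

# Flag students whose most recent login is more than 7 days old.
last_login = logins.groupby("student_id")["login_at"].max()
stale = last_login[last_login < datetime.now() - timedelta(days=7)]

for student_id, when in stale.items():
    print(f"ALERT: student {student_id} last logged in {when:%Y-%m-%d}")
```

A rule this simple, run daily, turns raw login records into a signal an advisor can act on the same week the engagement pattern changes.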

Step 3: Identify and Address Gaps

Analyses need robust and comprehensive data, yet those data tend to live in separate systems and data silos. Institutions with data warehouses are one step ahead, though identifying the important data and extracting them is still challenging even with a warehouse in place. Many other institutions lack these tools altogether and wish for better ways to collect, assemble, and analyze their data.

Certain areas of the framework are best measured with polls or surveys. While several strong third-party instruments are available today, they have limitations. One is the lag between administering a survey to new or continuing students and getting the data into the hands of those who need it (especially advisors); by then, the opportunity for meaningful intervention has often passed. Some instruments allow comparisons of student responses at your institution to those at others, which can be helpful, but that benefit is outweighed by the timing: the data become lagging, not leading, indicators. Short polls, such as those that ask students to rate how they are doing that day with facial emojis, may capture those feeling overwhelmed. Longer polls that tease out the nuances of family and friend support shouldn’t exceed ten questions, and five would likely increase response rates. Survey burnout can be an issue, but a well-coordinated plan for the timing and number of short polls and surveys can mitigate it.

Step 4: Connect the Dots (and the Data)

Collecting, assembling, analyzing, and acting upon a robust dataset is evolutionary. Institutions should start with what they have, then expand and improve both the data and their use of it as a continuous quality improvement process. Knowing what the desired state looks like is important when creating a solid road map toward it.

Assembling a robust set of lagging indicators means getting data from multiple systems, modules, and tables into one dataset. Once assembled, analyses and data visualizations need to be appropriately complex to reveal the interactions among the data categories in the framework. Human behavior is complex, and the decision to stay in or leave an educational program is often a confluence of factors from different parts of that experience, as well as pressures from outside the institution itself (child care, work demands, family dynamics and dysfunctions, homesickness, etc.). While those external factors may be harder to capture through these data collection mechanisms, identifying the internal factors, those within the institution’s scope of control, will enrich the analyses and guard against simplistic assumptions about why students stay or leave.
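As a minimal sketch of that assembly step, the snippet below joins hypothetical extracts from a student information system, an LMS, and a billing system into one student-level dataset. All file and column names are illustrative, not a prescribed schema.

```python
import pandas as pd

# Hypothetical extracts from three separate campus systems.
sis = pd.read_csv("sis_enrollment.csv")    # student_id, program, term_gpa
lms = pd.read_csv("lms_activity.csv")      # student_id, weekly_logins
fin = pd.read_csv("billing_balances.csv")  # student_id, balance_due

# One row per student, spanning academic, engagement, and financial data.
dataset = (
    sis.merge(lms, on="student_id", how="left")
       .merge(fin, on="student_id", how="left")
)

# Gaps left by the joins are themselves worth inspecting: a student
# missing from the LMS extract may be a disengagement signal.
print(dataset.isna().sum())
```

Left joins anchored on the enrollment record keep every enrolled student in view even when another system has no data for them, which is exactly the gap the analysis should surface rather than hide.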

Assembling and using a rigorous set of leading indicators is a tremendous challenge even at the most forward-thinking institutions. If a student’s balance is unpaid after the due date, who is following up to see if help is needed? Does the financial aid office, often best equipped to discuss solutions, even know that this has occurred? Is financial stress causing the student to take on additional outside work, leaving her less time to read and complete assignments? Is the early alert system cumbersome for faculty? Do residence hall advisors have a similar alert system for issues they may see? Who is following up on these, and what data do they have to help them create an empathetic and informed conversation with the student before she decides to leave?
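One minimal sketch of answering the “who is following up” question is a routing rule that sends each triggered indicator to the office best placed to act. The record fields and office names below are assumptions for illustration, not a real alerting product.

```python
from datetime import date


def route_alert(student: dict) -> list[tuple[str, str]]:
    """Return (office, message) pairs for any triggered alerts."""
    alerts = []
    # Past-due balance goes to financial aid, not a generic inbox.
    if student["balance_due"] > 0 and student["due_date"] < date.today():
        alerts.append(("financial_aid",
                       f"Past-due balance for {student['name']}"))
    # Faculty early-alert flags go to the student's advisor.
    if student["early_alert_flags"]:
        alerts.append(("advising",
                       f"Faculty flags for {student['name']}: "
                       + ", ".join(student["early_alert_flags"])))
    return alerts


student = {"name": "J. Doe", "balance_due": 850.0,
           "due_date": date(2022, 5, 1),
           "early_alert_flags": ["missed class"]}
print(route_alert(student))
```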

A case management approach to student persistence and success has been used by institutions in first-year programs, care teams, and advising. Some of the more remarkable results have come from linking students to institutional resources and community services in a “wrap-around” approach to student success. These approaches can only work when cases surface and those empowered to act have sufficient information to interact meaningfully with the student, and that requires integrating information from disparate systems.

Improving student outcomes also requires that care/case managers and teams see the relationship between the services and resources they suggest and the student’s engagement, or lack thereof, with them. Student support has long relied on a “build it, and they will come” model and an invitational approach to referrals. For example, faculty note that students are struggling with writing, as evidenced by their assignments and papers. In response, the English department creates a writing center staffed with graduate assistants and undergraduate peer tutors. Faculty meet with some struggling students during office hours and refer them to the center. Did they go? Is there a relationship between time at the center and improved writing performance?
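Those two questions can be answered with data many campuses already hold. As a minimal sketch, assuming hypothetical referral, visit, and scoring extracts (all file and column names are illustrative), the snippet below computes the referral follow-through rate and compares writing-score changes for visitors versus non-visitors.

```python
import pandas as pd

# Hypothetical referral and swipe-in extracts; names are illustrative.
referrals = pd.read_csv("writing_referrals.csv")  # student_id, referred_on
visits = pd.read_csv("center_visits.csv")         # student_id, visited_on
grades = pd.read_csv("paper_scores.csv")          # student_id, score_change

# Question 1: did referred students actually go?
followed_up = referrals["student_id"].isin(visits["student_id"])
print(f"{followed_up.mean():.0%} of referred students visited the center")

# Question 2: compare writing-score change for visitors vs. non-visitors.
merged = referrals.assign(visited=followed_up).merge(grades, on="student_id")
print(merged.groupby("visited")["score_change"].mean())
```

A comparison this simple does not prove the center caused the improvement, but it moves the conversation from “we built it” to “here is who came and what changed.”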

Institutions are starting to realize the critical importance of connecting data to the student experience and making sense of that data to improve it. Actionable intelligence that equips staff and faculty to focus their expertise on interventions, services, and supports, especially when those resources are scarce, begins the journey toward a digital-first, student-focused approach to improving outcomes and success. Higher education must leverage existing and emerging data tools to “move the needle” on student success, persistence, and completion.

To read the entire e-book, download a free copy from Salesforce.org.
