Hayley Blackburn, PhD

Beyond Surveys: Using Project Checks for Meaningful Student Feedback

Go beyond standard course evaluations. Learn how educators use "Project Check" weeks to gather meaningful student feedback, support project completion & improve course design.

As educators, we constantly seek ways to refine our courses and ensure our students feel confident applying the material to projects and coursework. While institutional course evaluations provide one layer of feedback, I wanted more specific, actionable insights, especially during those crucial final stages of a unit. I also wanted to structure end-of-unit time to maximize student success without overwhelming them (or myself).

One effective strategy is implementing "Project Check" weeks combined with targeted feedback collection. This approach, successfully used in a hybrid Intro to Technical Writing course (university gen ed), offers an efficient way to support students, review key concepts, and gather valuable data for course iteration.  

What is a "Project Check" Week?

A Project Check week introduces no new material; instead, it is dedicated time for my students to focus entirely on completing their unit projects. The class meeting during this week transforms into a hands-on lab session. This structure involves:

  1. Review: Briefly revisit key concepts and materials from the unit.  

  2. Focused Work Time: Allow students dedicated time to work on their projects.  

  3. Individual Check-ins: Circulate the room, checking in with each student to discuss progress, address blockers, and ensure their projects are on track.

This model empowers my students by giving them structured time to apply their learning and finalize complex assignments, while providing me with invaluable one-on-one interaction. It's an efficient way to ensure everyone is heading towards the finish line successfully.  

Weaving in Targeted Feedback

Project Check week also presents a unique opportunity to gather specific feedback, moving beyond generic evaluation questions. I modeled the unit's concepts (like analytical frameworks and primary data collection) by applying them to the course itself.  

Students brainstormed data types (grades, LMS metrics, student perceptions) that could inform decisions about keeping or changing course elements. This exercise not only reinforced unit concepts but also naturally led to a discussion about course feedback.  

Using a simple polling tool like Slido integrated into the presentation, I asked specific questions about elements they were considering changing. This generated:  

  • Quantitative Data: Anonymous poll responses provided quick insights.  

  • Qualitative Data: The polls sparked open discussion, allowing students to elaborate on the "why" behind their answers.  

This method provided more helpful feedback than my institution's standard evaluations. It yielded actionable insights, such as the realization that project pages needed a clearer layout to make examples more accessible.

I found that “Templates” were rated as the top resource for projects in the survey, yet the Canvas data and project performance showed that too few students were actually using them. I’ve decided to move the download button for the Templates to a more prominent spot in the page design to encourage more usage.
— Data from Spring 2025


Building Rapport Through Transparency

An unexpected benefit? My students asked thoughtful questions about how my contract and annual performance review work. They engaged in critical thinking about the pitfalls of institutional evaluations, noting on their own that a low response rate isn't reliable because typically only students at the extremes (very positive or very negative) are motivated to submit. We had a candid conversation about iterative processes and the importance of using multiple channels and metrics to make performance decisions. My students appreciated the willingness to openly discuss course design decisions, limitations, and the value placed on their direct input. Sharing this "behind-the-scenes" look validated their perspectives, reinforcing the idea that the course evolves with them, not just for them.

Combining Project Check weeks with targeted, instructor-led feedback collection offers a practical, user-friendly way for educators to enhance efficiency, personalize the learning environment, and gain insights that truly empower course improvements. It respects student time, fosters engagement, and provides a model for the iterative, data-informed decision-making we often teach.

How do you collect meaningful feedback to supplement formal evaluations? Share your strategies!
