Model-Eliciting Activities (MEAs) are a class of interdisciplinary problems designed to simulate authentic, client-driven situations in classroom settings. Because MEAs require students to make their models explicit through design-test-revise cycles, they allow teachers and researchers to observe how students develop conceptual models. Here, we present a method for assessing both the design of MEAs and the learning that occurs by applying a task model to student work products. We examine the relationship between the number of deep strategies employed and the usefulness of the mathematical model produced while solving an MEA in an undergraduate engineering course. A task model was created to represent the areas in which strategies are deployed and to specify the shallow and deep strategies that student teams used in those areas. Student work products were coded according to this model, and the data were analyzed using non-parametric statistical tests. Explicitly modeling the problem-solving strategies highlighted optimal pathways to task success, giving instructors a basis for valuable feedback to students engaging in the activity and validating holistic assessments of student work. This analysis also has implications for identifying the specific learning that occurs during a complex problem-solving activity.
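To make the analysis step concrete, the sketch below shows one way the coded data could be examined: relating each team's count of deep strategies to a holistic usefulness rating of its mathematical model via Spearman rank correlation, a common non-parametric measure suited to ordinal ratings. This is an illustrative assumption, not the paper's reported procedure; the team counts and ratings are invented, and the paper does not specify which non-parametric test was used.

```python
def ranks(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions in the tie run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical coded data: one entry per student team.
deep_strategy_counts = [1, 3, 2, 5, 4, 2, 6]  # invented counts from coding
usefulness_scores = [2, 3, 2, 5, 4, 3, 5]     # invented holistic ratings (1-5)
rho = spearman_rho(deep_strategy_counts, usefulness_scores)
```

A strongly positive rho on data like these would indicate that teams deploying more deep strategies tended to produce more useful models, which is the kind of relationship the study examines.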