When all of MIT’s lectures and problem sets are online and freely available to the world, what added value will a residential experience at MIT provide?
My PhD advisor and I envision a future in which enriching interactions with MIT faculty and instructors are the difference between an education at MIT and an education that simply uses the same online materials. A "flipped" classroom, where problem-solving is done in class with the instructor and lectures are consumed outside of class time, will be part of this future. In a flipped classroom, students can use their time with professors to learn to solve complex problems, pursue creative projects, and understand and extend current research.
A major barrier to an effective flipped classroom is student preparedness. Education practitioners have noted that, when students do not complete assigned pre-class material, the in-class experience is more difficult and presumably less rewarding.
We aimed to create a tool to help instructors in our department implement a flipped classroom by assessing student preparedness. We called it PAL, for Pedagogical Adaptive Learning, because we planned to implement the spaced repetition algorithms found in popular software and apps like SuperMemo, Anki, and Skritter.
This post is an adaptation of a software specification I wrote. PAL is still in early phases of development. If you are interested in being a part of this mission, do contact one of us!
PAL will encourage and assess out-of-class learning of a course's core knowledge, including definitions, equations, core concepts, and problem-solving algorithms. Instructors will enter a database of questions that reflect the core knowledge their students should have before they attend an interactive, flipped-classroom class. Students will access PAL through a web interface, and PAL will use adaptive learning methods to present each student with the questions that will best encourage and assess their mastery of the course's core knowledge. Instructors will be able to use PAL to monitor students' progress and identify course content that needs in-class explanation.
PAL - this software
course - the combined content and membership associated with an instance of PAL
developer - software engineer who builds PAL
instructor - person who creates the course content, administrates the course, and uses the student performance as a guide to in-class instruction
modifier - person who modifies PAL's codebase after the developer's initial design
performance evaluation - computation and display of a student’s progress in a course
problem - an object that encodes a problem or problem template, the way to generate problems on that template, and the ways to evaluate students’ responses
problem picker - algorithm that determines what problems to present to a student during a session
prompt - the text or visual displayed to the student that asks for a response
prompt generator - the method of the problem object that produces the prompt
prompt template - the property of the problem object that determines the format for the prompts generated by that problem
reprompt - a helpful message or hint given to a student after they submit a known incorrect response
response - the student's input after a prompt
response evaluator - the property of the problem object that determines if the student’s input is correct, incorrect, or one of a set of known incorrect responses
root instructor - the instructor who instantiates a course and has control over adding other instructors
session - the series of problems presented to a student
student - person who logs in to PAL and responds to prompts
Terms with special definitions are shown in italics when first used.
The second section of this document contains a description of PAL, including its function, user characteristics, and implementation requirements. The third section contains a detailed description of the specific functions PAL should have.
PAL is an independent web application that can be hosted on MIT's servers and can be updated and extended easily. Some of PAL's functions will be initially basic but should have the appropriate interfaces so that those functions can be upgraded later by modifiers. Python is the preferred language since it is commonly used by scientists, should allow for the appropriate database interfaces, and supports object-oriented programming.
Product functions and user characteristics
PAL has three classes of users: instructors, students, and modifiers.

Students can:

- log into PAL via a web interface
- view their progress in a course
- answer problems for a course

Instructors can:

- log into PAL via a web interface
- open up specific sets of problems at different times
- view students' progress
- export all data about student use of PAL
Every course has one root instructor, the person who began an instance of PAL, who can add other instructors to the course. Every instructor can add students. Every course can have multiple instructors and students.
Modifiers are people, probably also instructors, who will update and expand PAL’s codebase after the developer’s initial design.
The interactions between students, instructors, and the modules in PAL are summarized in the figure. When a student starts a session, the problem picker examines the student's past performance and the available problems and decides which problem to present next. The problem generates a prompt using its prompt template and any values that need to be selected from lists or generated randomly. The student sees the prompt and gives a response, which the problem's response evaluator parses and compares to the problem's correct answer. The problem places the student's raw response and its evaluation of that response in a response log database, which in turn informs the problem picker's choice of the next problem.
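As a rough sketch of this loop, the session flow could look like the following Python. The class names (`ResponseLog`, `ProblemPicker`), the dictionary fields, and the naive picking strategy are placeholders for illustration, not PAL's actual design:

```python
class ResponseLog:
    """Stores raw responses and their evaluations."""
    def __init__(self):
        self.entries = []

    def record(self, problem_id, response, correct):
        self.entries.append({"problem": problem_id,
                             "response": response,
                             "correct": correct})


class ProblemPicker:
    """Chooses the next problem based on the response log."""
    def pick(self, problems, log):
        # Naive placeholder strategy: prefer problems not yet answered
        # correctly; a real picker would be adaptive (see below).
        answered = {e["problem"] for e in log.entries if e["correct"]}
        for p in problems:
            if p["id"] not in answered:
                return p
        return problems[0]  # everything answered: cycle back


def run_session(problems, picker, log, get_response, n_prompts):
    """One student session: pick, prompt, evaluate, log, repeat."""
    for _ in range(n_prompts):
        problem = picker.pick(problems, log)
        prompt = problem["template"]                # prompt generation
        response = get_response(prompt)             # student's input
        correct = (response == problem["answer"])   # response evaluation
        log.record(problem["id"], response, correct)
```

The log sits between the evaluator and the picker, which is what lets the picker adapt to past performance.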
Students and instructors can view a student’s progress using a performance evaluator, which uses the response log to compute metrics of a student’s progress. Instructors can choose the performance evaluation algorithm and the question picking algorithm. They can export all the data about their students’ use of PAL from the response logs. They can create, remove, and modify questions.
Each course has its own problems database, problem picker, response log database, and performance evaluator. Problem objects contain methods for generating prompts and evaluating responses.
After logging in, students can view their performance or begin a session. In a session, they respond to prompts. On incorrect responses they can try again or move on.
Preferably, PAL can be deployed on MIT’s Athena server using a scripted installation system that specifies the creator as root instructor and provides a URL for the web interface. No further command-line interaction is necessary to create the course.
To modify PAL’s mechanics (e.g., by changing the types of problems, the problem picker, or the performance evaluator), PAL users will need to modify PAL’s code themselves.
PAL is compatible with desktop computers. A mobile web interface is desirable but not essential. The web interface is compatible with Chrome (version 39 or later) and Firefox (version 35 or later) when accessed from Mac OS X (Mavericks or later), Windows 7 or 8, or Ubuntu (version 14 or later).
Preferably, access to PAL will be controlled by MIT certificates. The person who creates the PAL instance is the root instructor. The root instructor can add other instructors. Other instructors cannot add or remove instructors. Any instructor can add or remove students.
There will be an option for username/password access. When adding a student, the instructor specifies a username and a password is generated for that student. Students can modify their own passwords.
When a student indicates that they wish to be presented with a prompt, the prompt should appear in less than 1 second. When a student submits a response, the response should be evaluated and PAL’s reaction displayed in less than 1 second.
Database user interface
Instructors can add, modify, or remove problems, whose characteristics are described below. These modifications can be made while the course is in progress.
Every course has a set of problems associated with it. Problems have two important methods: a prompt generator and a response evaluator.
The properties and methods will be illustrated in the context of two example questions. The first problem (P1) asks the student to name the color produced by combining two primary colors. The second problem (P2) asks the student to write out the number of possible hands from a deck of cards (allowing deck size and hand size to vary) using the binomial or "choose" function.
Prompt generation relies on a prompt template.
P1: If the two colors are “red” and “blue”, the prompt will be “What color is the combination of red and blue?”
P2: If the two numbers chosen are 52 and 7, the prompt will be “How many possible hands of 7 cards are there in a deck of 52?”
A problem may have exactly the same prompt every time (“fixed”), have random numbers inserted into the prompt (“numeric random”), or have random elements from a set of options inserted (“set random”).
P1 is set random, since the two values are chosen from the set of red, blue, and yellow.
P2 is numeric random, since the two values are chosen from a range of floats or integers.
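One way to sketch the three prompt types in Python is a single generator that fills a template from a list of value sources. The function name and the convention (a list means "set random", a `(low, high)` tuple means "numeric random", no sources means "fixed") are assumptions for illustration only:

```python
import random

def generate_prompt(template, value_sources=(), rng=random):
    """Fill a prompt template.

    Each element of value_sources is either a list of options
    ("set random") or a (low, high) integer range ("numeric random").
    An empty value_sources means a "fixed" prompt. A real implementation
    would also need constraints, e.g. that P1's two colors are distinct.
    """
    values = []
    for source in value_sources:
        if isinstance(source, list):
            values.append(rng.choice(source))        # set random
        else:
            low, high = source
            values.append(rng.randint(low, high))    # numeric random
    return template.format(*values)
```

For P2 this might be called as `generate_prompt("How many possible hands of {} cards are there in a deck of {}?", [(2, 10), (20, 52)])`.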
Problems have different response formats. Multiple-choice questions will have radio button selection. Most questions will have a text box. Problems have a correct answer, e.g., “purple” for P1 and “52 choose 7” for P2.
For numerical answers, a rounding error is allowed. For example, “What is 1.0 + 1.0?” would require 100% accuracy, but “What is pi?” would only need to be, say, within 0.01 of 3.14.
Instructors may anticipate certain incorrect responses. For example, the instructor may anticipate that a student will respond with "yellow" to P1. To give that student useful feedback, a message, the reprompt, will appear. For P1, answering "yellow" might trigger the reprompt "Yellow is a primary color, like red and blue. When primary colors are combined, a secondary color comes out." The problem's answer doesn't change; the reprompt merely provides a pedagogical aid.
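A minimal sketch of this behavior, assuming reprompts are stored as a mapping from anticipated incorrect responses to messages (the function and parameter names are hypothetical):

```python
def check_with_reprompt(response, answer, reprompts):
    """Return (correct, reprompt_message).

    `reprompts` maps anticipated incorrect responses to helpful
    messages, as in the "yellow" example above. Unanticipated
    incorrect responses get no reprompt.
    """
    normalized = response.strip().lower()
    if normalized == answer.strip().lower():
        return True, None
    return False, reprompts.get(normalized)
```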
Evaluation may involve:

- sloppy string comparison (so that "yellow" and " Yellow " might be considered identical responses)
- smart string-to-number conversion (so that "1e0", "1", and "1.000" might be considered identical responses)
- regular expression matching (so that "52 choose 7" and "binomial(52, 7)" might be considered identical)
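The three strategies above, plus the rounding tolerance for numerical answers, can be combined in one evaluator. This is an illustrative sketch, not PAL's actual interface; the parameter names are assumptions:

```python
import re

def evaluate_response(response, answer, tolerance=None, patterns=()):
    """Return True if the response is accepted as correct."""
    # Sloppy string comparison: ignore case and surrounding whitespace.
    if response.strip().lower() == str(answer).strip().lower():
        return True
    # Smart string-to-number conversion, with an optional rounding
    # tolerance for numerical answers.
    try:
        value = float(response)
        target = float(answer)
        tol = tolerance if tolerance is not None else 0.0
        if abs(value - target) <= tol:
            return True
    except ValueError:
        pass
    # Regular-expression matching for equivalent symbolic forms.
    return any(re.fullmatch(p, response.strip()) for p in patterns)
```

For P2, the instructor might supply patterns such as `[r"52 choose 7", r"binomial\(52,\s*7\)"]` so that either form is accepted.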
A problem should only be presented to a student when the course has arrived at that concept. Every problem has an integer assignment order. When the course begins, only problems with assignment order 1 are accessible. When the instructor increases the course’s assignment order to 2, problems with order 1 and 2 are accessible, and so forth.
The problem picker can use assignment orders to determine problem selection. For example, an instructor might want to specify that students have answered 90% of order 1 problems correctly before being presented with order 2 problems.
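Assignment-order gating and the 90% rule above could be sketched as follows, assuming problems carry an integer `order` field and log entries record correctness (field names are illustrative):

```python
def accessible_problems(problems, course_order):
    """Problems whose assignment order has been reached."""
    return [p for p in problems if p["order"] <= course_order]

def order_unlocked(log_entries, problems, order, threshold=0.9):
    """True if at least `threshold` of the problems at the given
    assignment order have been answered correctly, as in the 90%
    example above."""
    at_order = {p["id"] for p in problems if p["order"] == order}
    solved = {e["problem"] for e in log_entries
              if e["correct"] and e["problem"] in at_order}
    return len(solved) >= threshold * len(at_order)
```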
Export and import
The instructor’s UI allows the problems database to be downloaded. A downloaded problems database can be uploaded to populate or repopulate a problems database.
PAL is adaptive, meaning that it selects problems based on a student's past performance on those problems. The Leitner method places problems into *N* boxes. If a question is answered correctly, it moves into a lower-priority box. If answered incorrectly, it goes into the highest-priority box. During a session, problems are drawn from the high-priority boxes first.
More sophisticated systems like SuperMemo determine the amount of delay between repeated presentations of the same problem with more complicated algorithms.
PAL should be programmed with the Leitner method but with a straightforward object structure so that more complicated problem pickers could be implemented later.
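A minimal Leitner-box picker in this spirit might look like the following, with box 1 as the highest priority. The class and method names are placeholders:

```python
import random

class LeitnerPicker:
    """Leitner scheme: correct answers move a problem one box down in
    priority; incorrect answers send it back to box 1. Problems are
    drawn from the highest-priority non-empty box first."""

    def __init__(self, problem_ids, n_boxes=3, rng=random):
        self.n_boxes = n_boxes
        self.rng = rng
        self.box = {pid: 1 for pid in problem_ids}  # everything starts urgent

    def pick(self):
        for b in range(1, self.n_boxes + 1):
            candidates = [pid for pid, box in self.box.items() if box == b]
            if candidates:
                return self.rng.choice(candidates)
        return None  # no problems registered

    def update(self, problem_id, correct):
        if correct:
            self.box[problem_id] = min(self.box[problem_id] + 1, self.n_boxes)
        else:
            self.box[problem_id] = 1
```

Swapping in a SuperMemo-style scheduler would then mean replacing this class behind the same `pick`/`update` interface.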
Students and instructors can both view the student’s performance in a course. Like the problem picker, the performance evaluator should be initially programmed with simple evaluation metrics that can be altered later.
The initial performance evaluator will show the number of correct and incorrect responses made by the student for problems in each assignment order.
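This initial metric could be computed from the response log as below; the field names mirror the sketches above and are not PAL's actual schema:

```python
from collections import defaultdict

def performance_by_order(log_entries, problems):
    """Tally correct and incorrect responses per assignment order."""
    order_of = {p["id"]: p["order"] for p in problems}
    tally = defaultdict(lambda: {"correct": 0, "incorrect": 0})
    for entry in log_entries:
        order = order_of[entry["problem"]]
        key = "correct" if entry["correct"] else "incorrect"
        tally[order][key] += 1
    return dict(tally)
```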
Response log database
Entries in the log will record:

- the type of event:
  - student is prompted
  - student submits a response
  - student chooses to retry the question or move on
- in the case of a prompt or response:
  - a unique ID that identifies the problem in the problems database
  - the values inserted into the prompt template
  - the correct answer
  - the student's response
  - the evaluation of the response
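As a sketch, a log entry could be a plain dictionary with these fields, serialized to JSON for export. The function names, field names, and the choice of JSON are illustrative assumptions:

```python
import json

def log_event(log, event_type, *, problem_id=None, template_values=None,
              correct_answer=None, response=None, evaluation=None):
    """Append one event to the response log. The prompt/response
    fields are None for events that don't involve a prompt or
    response (e.g. a retry/move-on choice)."""
    log.append({
        "event": event_type,   # e.g. "prompt", "response", "retry"
        "problem_id": problem_id,
        "template_values": template_values,
        "correct_answer": correct_answer,
        "response": response,
        "evaluation": evaluation,
    })

def export_log(log):
    """Serialize the log for download (JSON, as one possible format)."""
    return json.dumps(log)
```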
Export and import
The instructor’s UI allows the response log database to be downloaded. A downloaded response log database can be uploaded to populate or repopulate another database.