A Mathematician’s Quest to Shape AI into the Ideal Calculus Student
By Lew Ludwig
I am a B+ \LaTeX{} user. I use it to write all my weekly homework assignments and tests. I know to use \displaystyle to make my limits look nice, and I can kludge an $\underline{\hspace{1in}}$ to make a blank space for an answer on a test. But things like tables give me pause: I always have to look them up to get exactly what I want. And diagrams? Forget it. \usepackage{tikz} is not in the top matter of my .tex files.
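For readers who want the two tricks just mentioned, here is a minimal sketch (the function and width are my own illustrative choices):

```latex
% \displaystyle renders the limit at full display size inside running text
The derivative is $\displaystyle \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$.

% A kludged fill-in blank for a test answer
$f'(2) = \underline{\hspace{1in}}$
```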
Recently, a colleague (okay, my son) shared that he could upload a picture of a printed calculus test to ChatGPT 4, and it would convert the static page into usable \LaTeX{}. I wondered if the same could be done with hand drawings.
Wow! It even understood my poor spelling of “laex.” Okay, but what about tables?
Here I was intentionally pushing it. Notice the use of just two horizontal lines, a nontrivial thing for a \LaTeX{} table. I also wanted to see what it would do with the blank entries. And the checkmark? Why not. Here is what it gave me:
Not perfect, but with my B+ skills, I can quickly edit this to suit my needs.
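The exact table from my drawing is not reproduced here, but a \LaTeX{} table of the kind described, with only two horizontal rules, blank entries, and a checkmark, might look like this (the headers and values are hypothetical, not from my test):

```latex
% \checkmark requires \usepackage{amssymb} in the preamble
\begin{tabular}{l c c c}
\hline
$x$ & $f(x)$ & $f'(x)$ & Continuous? \\
$0$ & $1$    &         & \checkmark  \\
$2$ &        & $-3$    &             \\
\hline
\end{tabular}
```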
Now for the big test: could I give it a hand drawing of a graph and have it produce something useful? For this example, I went with a variation of the old standby for introducing Kuratowski's theorem about planar graphs, the three utilities problem. My drawing is not the exact standard depiction, but it is complicated enough.
As I noted, I had no experience with TikZ before starting this. Here was ChatGPT's first attempt.
Okay, this was a start, but the nodes were misplaced and the edges were wrong. I looked at the \LaTeX{} code that ChatGPT created. It was actually well organized and easy to interpret: there was a section defining the nodes and another defining the edges. Two minutes of editing, and I got this.
Not exactly my original drawing, but probably close enough for a handout or assignment. I would definitely consider my initial foray into converting hand drawings to \LaTeX{} via ChatGPT a success!
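For readers who want to try the same structure themselves, here is a sketch of a three-utilities graph ($K_{3,3}$) in TikZ, organized the way ChatGPT's output was, with one section defining the nodes and another the edges. The coordinates and labels are my own, not ChatGPT's:

```latex
% Requires \usepackage{tikz} in the preamble
\begin{tikzpicture}
  % Define the nodes: three houses on top, three utilities below
  \foreach \i in {1,2,3} {
    \node[draw, circle] (H\i) at (2*\i, 2) {$H_\i$};
    \node[draw, circle] (U\i) at (2*\i, 0) {$U_\i$};
  }
  % Define the edges: every house connects to every utility
  \foreach \i in {1,2,3}
    \foreach \j in {1,2,3}
      \draw (H\i) -- (U\j);
\end{tikzpicture}
```

Because the nodes are named, repositioning one (as I had to do after ChatGPT's first attempt) only means changing its coordinates; the edges follow automatically.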
This got me thinking - could this approach help my students with their take-home tests in calculus? I use a take-home test approach explained here where students create their own calc test based on my specifications. I grade each test by trying to complete it and comparing it to the student’s submitted solutions. Here is an example.
Students not only have to create questions that meet specifications, such as a definite integral equal to zero, but they also have to generate a piecewise function in a graphing app such as Desmos. By the end of the semester, the test grows to 15 or more questions, so the graphs can get quite complicated. The more successful students graph their functions by hand to meet their specifications before converting to Desmos. While this may take some erasing and retries, it is less frustrating than blindly typing equations into Desmos.
With this assessment in mind, I wondered if the above scanning trick with ChatGPT could save my students some time. I gave ChatGPT the following hand-drawn graph. To give it a head start, I dug up some graph paper.
In my first prompt, I asked, “Can you describe the mathematical graph I just gave you?”
A great start! Next, it identified two parts of the graph: a parabola on the left and a “polyline” on the right. Okay, not bad. But then it stated that the parabola passes through $(0,1)$ and the polyline through $(2,6)$, $(3,5)$, and $(5,9)$. Oops!
Maybe that was a fluke. Let’s press on: “Can you give me the equation for this piecewise-defined function on the closed interval $[-4,6]$?” To this prompt, it started with a reasonable divide-and-conquer approach.
A great start, but it tried to use the erroneous points from above. This produced the parabola $x^2+1$, which was not horrible, but the “polyline” gave it real problems. After about 45 seconds, a long time for most generative AIs, it gave me this piecewise-defined function:
ChatGPT had clearly missed the mark, but perhaps I had as well. Even though I knew it had made a mistake, I tried to press through for the easy answer. This reminded me of a recent piece by Ethan Mollick of the Wharton School, where he argues that teachers, not IT professionals, are ideally positioned to guide generative AI and coax from the technology what we want. After all, that is what we do with our students. A student gives a wrong answer; you reflect, try to see where the error in thinking occurred, then redirect the student with a new question in hopes of leading them to the desired outcome. This process is our bread and butter! In my example above, I had clearly failed my student.
Armed with this insight, I returned to my “student.” After reflecting, I realized the initial error with the parabola’s vertex was what had led us astray. I reloaded the hand-drawn image and repeated the “tell me what you see” prompt. Again, it got stuck on $(0,1)$ as the vertex. Before continuing, I asked, “Are you sure about the vertex? Look closer at the origin.” (Yes, I am anthropomorphizing, which is discussed in Mollick’s piece as well.) ChatGPT quickly replied with an apology, ever so polite, and the correct vertex, followed by “the parabola must be $y=x^2$.” We were getting somewhere!
I excitedly responded, “Yes, you are correct. This is a parabola from $-3$ to $3$, and a line from $3$ to $5$ with a slope of $-1$. With this in mind, can you give me the two-part piecewise-defined function for this graph of a parabola and a line?” (I was actually rooting for the thing!) It responded:
Oh, so close! It clearly had trouble at $x=3$. To be fair, it used the value of the function at $x=3$, but that was not the value needed to determine the equation of the line. A little nudge from me, and it got that right too.
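For typesetting, the kind of two-part piecewise-defined function we were aiming for can be written with the cases environment. The first branch follows from the corrected vertex and interval above; in the second branch, the intercept $b$ depends on where the line sits in my drawing, which is not specified here, so take it as a placeholder:

```latex
% Requires \usepackage{amsmath} for the cases environment
f(x) =
\begin{cases}
  x^2    & -3 \le x \le 3, \\
  -x + b & 3 < x \le 5,
\end{cases}
```

The slope $-1$ appears directly as the coefficient of $x$ in the second branch; only the point the line passes through is needed to pin down $b$.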
This exercise has offered me several insights. First, I was intrigued by how ChatGPT attempted to "solve" the problem. The strategy it employed was remarkably human-like, incorporating techniques such as divide and conquer and applying known information. Second, I firmly believe that Mollick is onto something when it comes to teachers teasing out results from AI. Adopting a teaching mindset, I found myself keen to help my "student" succeed, despite part of me hoping the technology wouldn't crack my "cheat-proof exam." Finally, it's evident that there are no magic phrases or incantations (to borrow a term from Mollick) that will instantly produce the correct solution. Achieving this requires time and patience, a reality that is both reassuring (my exam remains secure from AI) and somewhat disheartening (I wanted to see my student succeed).
What does all this mean looking forward? My prediction is that by the fall, advancements in technology will enable me to effortlessly guide generative AI to construct this piecewise-defined function. Will it be a simple incantation? Likely not. If the process proves overly laborious, it may well preserve the integrity of my test for another semester. However, the educator within me will take joy in guiding my "student" to success.
In a recent presentation, Manolis Kellis of MIT argued that AI will soon surpass medical doctors in knowledge. He believes that the best physicians of the future will be those who are the most compassionate and with the best bedside manner, rather than those who are the most knowledgeable. I think the same will be true of teachers. The future's successful teachers will be those who prioritize their students' well-being, focusing on nurturing rapport and fostering community—principles that lie at the heart of the MAA's core values. Remember, your ability to connect with and care for your students is irreplaceable. You've got this!
Lew Ludwig is a professor of mathematics and the Director of the Center for Learning and Teaching at Denison University. An active member of the MAA, he recently served on the project team for the MAA Instructional Practices Guide and was the creator and senior editor of the MAA’s former Teaching Tidbits blog.