Tom’s Guide
Amanda Caswell

I tested ChatGPT-5 Study mode vs Claude Learning mode with 7 prompts — and there's a clear winner


As a lifelong learner who is constantly challenging myself, I have found that ChatGPT’s Study mode and Claude’s Learning mode are ideal companions for students of all levels and abilities. Current students and anyone looking to continue their education can benefit from these features, which help build skills by letting you lean on AI as a tutor.

Here’s what happened when I put the latest study features from OpenAI and Anthropic to the test with 7 prompts. I kept them fairly easy (high school level) to avoid dusting off the old textbooks in the attic. One thing is clear: these learning modes are very different.

1. Math concept breakdown


Prompt: “I’m learning how to calculate the standard deviation of a dataset. Teach me step-by-step, ask me questions along the way, and only reveal the final answer when I’m ready.”

GPT-5 understood the prompt fully and immediately engaged me in the first calculation step (finding the mean), posing a specific question about a dataset it provided. This perfectly set up the sequential, interactive learning experience I had requested.

Claude set out to build conceptual understanding first, focusing on preliminary discussion and abstract questions before starting any calculation.

Winner: GPT-5 wins for the better overall answer to this specific prompt. It started teaching the calculation method step-by-step immediately, asked a relevant question during that step, and withheld the final answer (the standard deviation) as required. Claude's approach, though instructionally sound in a broader sense, didn't prioritize the step-by-step calculation process the prompt requested.
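For anyone following along, the calculation both tutors were building toward is simple enough to sketch in a few lines. Below is a minimal TypeScript version using a made-up dataset (my own numbers, not the ones from either chat session):

```typescript
// Made-up dataset for illustration (not from either chat session).
const data = [4, 8, 6, 5, 3, 7];

// Step 1: find the mean (the step GPT-5 opened with).
const mean = data.reduce((sum, x) => sum + x, 0) / data.length;

// Step 2: square each value's deviation from the mean.
const squaredDeviations = data.map((x) => (x - mean) ** 2);

// Step 3: average the squared deviations to get the (population) variance.
const variance = squaredDeviations.reduce((sum, d) => sum + d, 0) / data.length;

// Step 4: the standard deviation is the square root of the variance.
const stdDev = Math.sqrt(variance);

console.log({ mean, variance, stdDev }); // mean: 5.5, stdDev: ~1.71
```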

2. Historical analysis


Prompt: “Walk me through the key causes of the Great Depression, asking me to connect each cause to its economic impact before moving to the next step.”

GPT-5 dove right into the first cause and forced me to connect it to its impact, just as the prompt requested.

Claude acknowledged right away that we were switching subjects, but its follow-up questions might be better suited to a broader tutoring context. It ignored the prompt’s specific directive to walk through the causes immediately and demand connections before proceeding. For me, this interrupted the flow compared with GPT-5’s action-oriented, structured response.

Winner: GPT-5 wins for an action-oriented and structured response that executed the prompt’s instructions precisely.

3. Scientific method application


Prompt: “I have an idea for a science fair project testing if music affects plant growth. Guide me through designing the experiment, asking me questions about controls, variables, and how I’d collect data.”

GPT-5 broke down the prompt by asking just one primary question, and it let me know we would build the project together piece by piece.

Claude asked several questions to help move the idea along. However, getting all the questions at once felt a little overwhelming.

Winner: Claude wins for focusing on preliminaries, with upfront questions that help illuminate the bigger picture. While the number of questions caught me off guard, answering them before moving forward could help direct where the project goes without the need to backtrack. By contrast, GPT-5 addressed the prompt directly, starting the experimental design process immediately and asking precise, necessary questions one at a time.

4. Foreign language practice


Prompt: “Help me learn 10 essential travel phrases in French. Introduce them one by one, ask me to repeat them, quiz me, and correct my pronunciation.”

GPT-5 assumed I was a beginner and told me we would take it slow.

Claude was overly verbose, praising me for learning a practical and rewarding skill. It then asked several questions before getting started. I appreciated the initial setup, as the AI wanted to gauge my skills (or lack thereof) before beginning.

Winner: GPT-5 wins for diving into the task without excess commentary. It understood the context, assuming that because I asked for 10 essential travel phrases, I was a beginner. Claude didn’t assume and instead overloaded me with questions. For me, GPT-5’s approach was better because I just wanted to get started; others may prefer Claude’s extra hand-holding when learning a language.

5. Code debugging and explanation


Prompt: “Here’s a short JavaScript function that isn’t returning the correct output. Teach me how to debug it step-by-step without giving me the fix right away.”

GPT-5 treated me like a developer needing action. As someone who learns by doing, I prefer this method.

Claude assumed I was a student who needed theory, basically asking me to tell it about myself before we began debugging.

Winner: Claude wins for meeting me at my starting point rather than assuming I knew much about coding. Claude’s response was helpful for explaining debugging concepts I could carry into future work.
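Since the actual function from my test isn’t reproduced here, consider a hypothetical stand-in for the kind of bug this exercise targets: a short TypeScript function that compiles fine but returns the wrong output.

```typescript
// Hypothetical buggy function for illustration (not the one from my test).
// Intended behavior: return the average of an array of numbers.
function average(values: number[]): number {
  let total = 0;
  // Bug: `<=` runs one index past the end of the array, adding
  // `undefined` to the total and turning it into NaN.
  for (let i = 0; i <= values.length; i++) {
    total += values[i];
  }
  return total / values.length;
}

console.log(average([2, 4, 6])); // NaN (expected 4)
```

A step-by-step tutor would have you log `i` and `total` on each pass through the loop until the off-by-one jumps out, rather than simply handing over the `<` fix.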

6. Exam-style problem solving


Prompt: “I’m studying for a high school physics exam. Give me one question on Newton’s Second Law, let me attempt an answer, then guide me through the correct solution.”

GPT-5 understood the assignment, acting like a practice test and starting to drill me immediately.

Claude acted like a first-day tutor, prioritizing diagnostics over action.

Winner: GPT-5 wins for following the prompt, which demands practice, not customization. Claude’s approach would be ideal for a prompt like “Help me understand Newton’s Second Law from scratch,” but for exam prep, GPT-5’s structure is objectively superior.
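For a sense of the level involved, a typical drill question here (my own invented example, not the one GPT-5 generated) might ask for the acceleration of a 10 kg cart pushed with a 25 N net force:

```latex
F = ma \quad\Rightarrow\quad a = \frac{F}{m} = \frac{25\ \mathrm{N}}{10\ \mathrm{kg}} = 2.5\ \mathrm{m/s^2}
```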

7. Practical skill coaching


Prompt: “Coach me through creating a monthly household budget. Ask me about my expenses, income, and goals, then guide me in building a spreadsheet without just handing me a finished template.”

GPT-5 started gathering essential budget data in less than 15 words.

Claude consumed 150+ words without collecting a single budget figure.

Winner: GPT-5 wins for delivering actionable, prompt-aligned coaching. Claude's approach suits "Discuss budgeting mindsets," but fails this prompt’s call for immediate, concrete budget construction.
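To make that gap concrete, the data GPT-5 was gathering boils down to arithmetic like the following. This is a TypeScript sketch with invented placeholder figures, standing in for what the finished spreadsheet would calculate:

```typescript
// Invented placeholder figures; swap in your own numbers.
const monthlyIncome = 4200;

const expenses: Record<string, number> = {
  rent: 1500,
  groceries: 450,
  utilities: 180,
  transport: 220,
  subscriptions: 60,
};

// Total spending, then what's left over for savings and goals.
const totalExpenses = Object.values(expenses).reduce((sum, x) => sum + x, 0);
const leftover = monthlyIncome - totalExpenses;

console.log(`Expenses: $${totalExpenses}, left for goals: $${leftover}`);
// Expenses: $2410, left for goals: $1790
```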

Bottom line: I preferred GPT-5’s teaching style

After testing the same seven prompts with the two chatbots, one thing is clear: these tutors are not the same. And that’s okay. No two teachers are the same, and students learn in different ways. While I can declare a winner based on which one followed the prompts most closely, it’s ultimately up to each user or student to try the free chatbots and determine which teaching style they prefer.

As I mentioned, I prefer active learning. The hands-on approach has always worked better for me, which is why I prefer GPT-5’s teaching style. For someone who likes to spend more time on theory and learning through concepts, Claude might be better.

My recommendation is to give both of these capable bots a try and experience them for yourself. The right study partner truly comes down to your learning style and how you prefer to be taught.
