
When Answers Are Free with AI, What Is Education For?

AI reprices knowledge. The education systems built on knowledge accumulation are breaking. The question is whether we have the imagination to rebuild them.

Picture Credits: Bernard Leong - I took the picture of this painting at The Azure.

When I first showed ChatGPT to my son, he was six. I typed in a question he had been curious about, and the screen filled with a confident, well-structured answer. His first response was not amazement. It was suspicion. He looked at me and asked, “Where did the answer come from?”


That instinct — to question the source of an answer that arrived too easily — is the precise capability our education system should cultivate. Instead, we spend twelve to sixteen years training students to do the opposite: to accumulate answers and reproduce them under exam conditions. In a world where AI generates answers on demand, that model is not just outdated. It is broken.

The reason it is broken lies not in technology but in economics. AI is repricing the value of knowledge itself. When intelligence becomes abundant — when a reasoning model can draft legal memos, debug code, and summarise research papers — the market value of possessing that knowledge collapses. What remains scarce, and therefore valuable, is judgment: the ability to ask the right question, to verify an answer against reality, and to make decisions under uncertainty. Our education systems are optimised to produce the commodity. They need to be redesigned to produce the scarcity.

The Bottom Rungs Are Disappearing

This is not a theoretical argument. The labour market data is already moving. A 2025 Harvard study by Hosseini and Lichtinger tracked 62 million workers across 285,000 U.S. firms and found that junior employment in companies actively adopting generative AI declined by 7.7% relative to non-adopters within six quarters. Senior employment in those same firms continued to rise. The researchers call this “seniority-biased technological change.” I call it reverse-ageism — and it should alarm every parent and educator.

A parallel study from Stanford’s Digital Economy Lab, led by Erik Brynjolfsson, analysed ADP payroll data and found that employment for software developers aged 22–25 dropped nearly 20% from its late-2022 peak, while developers over 26 remained stable or grew. The National Association of Colleges and Employers projects 2026 as the worst graduate job market since the pandemic.

The mechanism is not mass layoffs. Companies are not firing junior employees. They are not posting junior positions in the first place. The bottom rungs of the career ladder are being quietly removed. For graduates trained to believe that knowledge accumulation leads to career entry, the pathway is fracturing.

Jensen Huang, CEO of NVIDIA, framed this dynamic with precision:

“Because you're out of imagination. For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do, they have no reason to imagine greater than they are. Then, when they have more capability, they don't do more.”

The same logic applies to education. Schools that view AI as a threat to be banned are the institutions “out of imagination.” Schools that redesign assessment around judgment, verification, and reasoning are doing more with more.

The Ban Question Is the Wrong Question

In my academic circles (I am a trained academic, now working as an AI practitioner), the debate over AI tools follows a familiar pattern. One side argues for banning AI from coursework entirely. The other advocates unrestricted access. Both miss the point. Banning AI tools that future employers will require students to use is not protection — it is a disservice. Unrestricted access without structure produces students who outsource their thinking entirely. I have observed this first-hand: students who do not read the assigned reports but instead ask an AI to summarise the material into five to ten key takeaways. The tool becomes a shortcut past the thinking, not a catalyst for deeper engagement.

The right question is not whether students should use AI. It is how we assess them when they do.

I tested this in my Digital Transformation course at NUS over the last year. For the individual assignment, I required students to submit a two-page essay on an assigned use case, along with every AI tool they used and every prompt they entered (a University requirement whenever AI tools are used). Across my first two cohorts, only one student earned a distinction on that assignment. The pattern in the data was clear. Weaker students used fewer than five prompts, produced essays overloaded with information, and failed to discriminate between what mattered and what did not. They treated the AI as a vending machine: insert question, receive answer, submit. The stronger students used iterative prompts, refined their reasoning, and engaged with the source material before prompting. Prompt quality correlated directly with essay quality.

The student who earned the distinction did something none of the others attempted. I had deliberately embedded hallucinated and incorrect data into the assignment brief itself — a trap I had warned students about repeatedly in class. Trust no output. Verify everything. The student cross-checked the brief, identified the planted errors, and built an essay that demonstrated genuine critical judgment, supported by more than eight pages of prompts. That is the skill that matters in an AI-augmented world.

A parallel concern: children below 12 should not use AI tools without adult supervision. Supervised use can begin around age nine, when a child's capability for structuring questions is still in its early stages. Current AI systems are shaped by reinforcement learning dynamics that push users toward their existing biases and preferences. For adults, this is manageable. For children whose critical reasoning is still developing, it creates a feedback loop that substitutes confirmation for inquiry. The tragic teenage suicide cases linked to AI chatbot interactions in the United States (source: NY Times) are the extreme consequence of this dynamic. For teenagers, structured controls — analogous to screen time limits — are a minimum safeguard.

From Walking Encyclopedias to Working Minds

The transition in the job market is real, but it is a transition, not an apocalypse. AI removes the boring and repetitive parts of professional work. The doctor trapped typing up reports recalled from patient consultations rather than deepening her specialisation and focusing on patient outcomes; the lawyer grinding through mundane research for years before reaching partnership-level work — AI automates the drudgery. What remains is the judgment layer: diagnosis, strategy, creative problem-solving, ethical reasoning. The education system must shift to producing graduates who are ready for that layer, not the one below it.

This means moving from an answer-based system to a question-based system. We do not need graduates who are walking encyclopedias, entering examinations to unload memorised content and forget it within a week. We need graduates who can interrogate a dataset, verify a claim, frame a problem, and exercise judgment under uncertainty.

I am reminded of a physics course I took as an undergraduate at the National University of Singapore. Dr. Alfred Huan taught electromagnetism to second-year students, and his examination was open-book. Students could bring every textbook, every problem set, every solution manual into the exam hall. That was decades before AI, and it was an innovative way to test students' capabilities in physics rather than having them memorise every worked case.

Today, a reasoning model could solve those physics problems. But the ability to know which equation applies to which physical situation — to see the problem clearly before reaching for the tool — remains irreducibly human. Alfred understood something that our education systems are still catching up to: the value was never in possessing the answers. It was in knowing what to do with them. In a world where intelligence is abundant, the question is whether we have the imagination to educate for judgment — or whether we are, in Jensen Huang’s framing, the institutions that have run out of ideas.

References:

1. Hosseini, S.M. and Lichtinger, G. (2025). “Generative AI as Seniority-Biased Technological Change: Evidence from U.S. Résumé and Job Posting Data.” Harvard University. SSRN Working Paper.

2. Brynjolfsson, E., Chandar, B., and Chen, R. (2025). “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.” Stanford Digital Economy Lab.

3. National Association of Colleges and Employers (2026). “NACE Job Outlook 2026 Survey.”

4. Huang, J. (2025). Remarks on CNBC's Mad Money at the NVIDIA GTC 2025 keynote. NVIDIA Corporation.