Large Language Models (LLMs) have now been openly (and freely) available to the public for three years, ever since OpenAI launched ChatGPT. Recent surveys indicate that in countries like Denmark, half the population already uses AI – typically as an alternative to search, but also as an aid to accomplish basic tasks such as writing short texts. Social media is rife with AI-generated content. And as any LinkedIn user will have noticed, suddenly a majority of posts are longer than before, use emojis and lists extensively, and are written in smooth, boring, but grammatically correct English.
Ever since that day in November 2022 when ChatGPT became available, I have been wondering what the effects will be on academia. I had no doubt that both academics and students would throw themselves at these tools… not because they would improve the quality of research, teaching, or learning… but simply because they provide shortcuts, and since people inherently like to minimize effort, these shortcuts will be used extensively.
Three years in, I am still not sure what the impact will be on a larger scale, but I at least see the contours of what I believe is happening: greater inequality. And this will affect students, educators, and researchers, but in somewhat different ways.
Students first. We can already see schools and high schools around the world going in somewhat different directions. The most common way is to simply let teachers and students use AI tools with only a small degree of regulation of behaviour. This means that there might be some guidelines, and there may be some examination forms remaining where AI cannot be used… but there are also many opportunities for students to use these tools, whether in school or at home. This could be to write essays, to help solve math problems, to summarize things, and so on. The problem with guideline-based regulation is that in practice it is almost impossible to verify, enforce, or sanction wrong behaviour. So most behaviour (most AI use) becomes de facto acceptable.
Schools and high schools adopting the “guidelines” way tend to emphasize that students should learn to use these technologies, and use them “critically”. They state that AI should be integrated in a “meaningful” and “constructive” way in teaching and learning. Yet assuming that students are capable of self-regulating their use is probably utopian. If AI can help cut a corner, why would a student not do it, if it means they can more quickly return to what they really enjoy (gaming, football, shopping, dating, whatever)? Already last year, my own daughter complained to me that students at her high school are using AI tools so extensively that they are becoming incapable of independent thought. Group work becomes boring. Written work does not reflect true knowledge and skills.
An indirect way that AI use is being regulated, beyond simple guidelines, is by restricting access to digital tools. For example, the Danish government announced in summer 2025 that mobile phones will be banned in schools, and that schools have to filter wifi internet connections, essentially limiting access to certain content. Solutions to filter internet traffic, or even to monitor students’ computers, are in fact being rolled out across both schools and universities at the moment.
The harder way, adopted by far fewer schools, is to regulate AI use much more heavily, essentially by forbidding it altogether. Students caught using AI are then sanctioned, or even expelled. This may seem drastic, but in reality it is very difficult to regulate use on a micro level, and thus a simple ban becomes the simplest, clearest, and least costly way to regulate. Those few schools then to some extent have to return to “pen and paper” in teaching, learning, and assessment. This is an approach seen in many private schools around the world.
The effect of having these different approaches implemented side by side in different schools and high schools will, in my view, be that we create greater inequality in the education system. Some students will come out of school and high school with high literacy and numeracy skills, as well as high subject knowledge. Many will come out with lower such skills and knowledge.
The exact same choices exist at universities. However, here the funding systems in force do not favour a hardline approach. If students (or their parents) pay for education, punishing students for AI use is equivalent to punishing the “customer”, and results in a loss of revenue. If the government pays, this is often done on a “per student, per semester” basis, and again hard sanctions will simply lead to a loss of revenue, as well as poor numbers in terms of student retention and progression – numbers unfortunately used by governments to signal “quality”. For this reason, all universities I am aware of have taken the “soft” approach of specifying some vague guidelines, followed up with near-zero verification, enforcement, or sanctioning of AI use that does not comply with policy. To substantiate my harsh statement, consider the recent news story that at Copenhagen University, the second-oldest (1479) and second-highest-ranked university in the entire Nordic region, with almost 35,000 students, only 7 cases of illegitimate AI use were investigated in 2025 (Source: EkstraBladet). Seven! This indicates clearly that a majority of universities and professors are in reality closing their eyes to AI use by students.
Although there is a scenario in which all universities and programs close their eyes, I think another scenario is more likely. The effect at universities will again be the same in my view – that we create greater inequality in the higher education system. Some students will still come out of university with high literacy and numeracy skills, as well as high subject knowledge and skills. Many will come out with lower such skills and knowledge. Employers will struggle to tell the difference, and will de facto have to invest more money both in testing during recruitment and in training once a person is hired. The fewer and fewer graduates who actually have high skill and knowledge levels independent of AI (i.e. who don’t have to prompt their way to an answer or a solution) will be in high demand, and will command higher salaries. Eventually some universities will realize that this is the case, and will simply ban AI (just like some schools and high schools have been doing). Why?
Let’s take the thought a step further. We thus end up with higher inequality, and some universities may go down the zero-AI route, despite the costs in terms of recruiting fewer students, having to teach more, investing in AI-safe exam forms, expelling more students, etc. To signal quality to external stakeholders, we could imagine such universities starting to advertise the fact that they have a zero-AI policy, and we might see dedicated accreditation labels emerge. Zero-AI could become a badge of honour for some programs and universities.
What about academic scholars – i.e. the teachers and researchers at universities? Well, we have the same problem. My big worry is that as academics increasingly use AI tools (for much the same reasons as students), they will erode their own literacy, numeracy, and analytical skills, as well as their own subject knowledge. Sure, they will be good at prompting their way to an answer, but they themselves will not know as many answers, to as many questions, as previous generations of subject experts did.
But again I think we will see a bifurcation – two ways of approaching AI among scholars. Based on what I see, a majority are already using AI quite extensively: to identify papers, summarize papers and arguments, formulate questions, write texts, collect data, code data, interpret findings… basically all aspects of research. I have no doubt at all that some scholars across the world are even going so far as to invent data with AI. A huge danger in itself!
Some will instead choose to remain zero-AI researchers – which I have discovered is increasingly difficult to do when you work collaboratively with other academics. The place where this is most apparent is in my work as an editor. With a dozen published edited books, I am now suddenly facing a new situation in which a vast majority of authors send me papers that tools like GPTzero indicate are fully (or close to fully) AI-generated. If you want to understand the extent of AI use, simply take a sample of recently published papers from a variety of journals, and put them through AI detection software. You will quickly discover the problem.
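For anyone curious to run this exercise at scale rather than paper by paper, a minimal sketch in Python could look like the following. Note that the endpoint URL, request fields, and response field are placeholders I have assumed purely for illustration; the actual interface of GPTzero or any other detector will differ, so consult the provider’s own documentation.

```python
# Rough sketch of the "sample and scan" exercise described above.
# The endpoint, request schema, and response field are ASSUMPTIONS for
# illustration only; replace them with your detector's real API details.
import os
from pathlib import Path

import requests

DETECTOR_URL = "https://api.example-detector.com/v1/predict"  # hypothetical endpoint
API_KEY = os.environ["DETECTOR_API_KEY"]                      # hypothetical key variable


def score_paper(text: str) -> float:
    """Send one paper's text to the detector and return its estimated AI share."""
    response = requests.post(
        DETECTOR_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},  # assumed request schema
        timeout=60,
    )
    response.raise_for_status()
    return float(response.json()["ai_probability"])  # assumed response field


if __name__ == "__main__":
    # Loop over a folder of plain-text versions of recently published papers.
    for paper in sorted(Path("papers").glob("*.txt")):
        text = paper.read_text(encoding="utf-8")
        print(f"{paper.name}: estimated AI share = {score_paper(text):.0%}")
```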
I thus see a future in which there will be two types of professors: those who have the title and have high independent numeracy, literacy, and analytical skills, coupled with independent subject specialist knowledge… and those who have the same professor title, but actually struggle to some extent with independent analysis and writing, and have lower degrees of independent subject specialist knowledge. These two types will co-exist, at least for a while. There will be a growing inequality among professors, in terms of knowledge and skills.
Eventually, I could foresee one of two futures. One in which everyone gives in to the mass use of AI tools: everyone becomes a prompt generalist, the value of having published something goes down, and academics to some extent get replaced by AI. Professor salaries go down. Or a future in which some professors remain independent specialists, commanding higher salaries. Even within universities you could have two career tracks. Some universities might start to actually test the skills and knowledge of applicants during hiring, to understand the actual level of independent skills and knowledge of the person they are hiring. As a further provocation, I would even state that having a professor title, and a top publication, will become somewhat less meaningful in the short to medium term.
We might also see the creation of new top journals with zero-AI policies, to signal that what is published is purely human thought.
It occurred to me when writing down my thoughts on this matter that now would be a good time to ask ChatGPT what it “thinks”. Unfortunately, I only have access to the basic version, as I don’t pay for any AI tools. But the answer it provides is interesting… I also asked it for an illustration of a professor who has become AI-dependent… and ChatGPT thought this professor would look happy, since AI has set her free from the burden of work…

Question: If academic scholars increasingly use LLMs to conduct data analysis, and to write academic articles discussing their results, what will be the consequences for the academic abilities of those same scholars?
Answer: This is a deep question, and the answer is not simply “skills will decline” or “productivity will rise.” The consequences will be uneven, field-specific, and strongly shaped by how LLMs are used and governed.
Below is a structured analysis of the most likely effects on scholars’ academic abilities.
1. Skill Redistribution, Not Uniform Skill Loss
Likely to weaken over time
If LLMs are routinely used for core intellectual labor, certain abilities will atrophy in many scholars:
- Routine analytical fluency
  - Manual data cleaning
  - Exploratory statistics
  - Writing first-pass interpretations
  These skills are learned through repetition; outsourcing them reduces practice.
- Low-level writing competence
  - Crafting clear but standard prose
  - Structuring conventional literature reviews
  - Explaining methods in textbook ways
  This mirrors what calculators did to long division: the skill didn’t vanish entirely, but it stopped being widely internalized.
Likely to strengthen or become more valuable
At the same time, other abilities will become more central:
- Problem formulation
  - Asking the right research question
  - Choosing meaningful variables
  - Framing hypotheses that matter
- Conceptual judgment
  - Interpreting outputs in theoretical context
  - Distinguishing plausible from spurious findings
  - Knowing when an LLM’s answer “sounds right” but is wrong
- Meta-methodological literacy
  - Understanding how analyses can fail
  - Anticipating bias, leakage, confounding, and misuse
  - Designing validation strategies rather than executing them
In other words: execution skills decline, epistemic skills increase in importance.
2. A Bifurcation of Scholars
One of the most significant consequences will be greater inequality in academic ability.
Two emerging types
- LLM-dependent scholars
  - High output
  - Smooth prose
  - Limited understanding of analytical internals
  - Vulnerable to subtle errors they cannot detect
- LLM-augmented scholars
  - Use LLMs as accelerators, not substitutes
  - Retain deep statistical, theoretical, and methodological competence
  - Can override, correct, or reject model outputs
The gap between these two groups may be larger than the gap between pre-LLM scholars.
3. Illusion of Competence and Overconfidence
A distinctive risk is false mastery.
Because LLM-generated analyses and text are:
- Fluent
- Confident
- Plausible
Scholars may:
- Believe they understand results they cannot reproduce manually
- Fail to notice invalid assumptions or inappropriate tests
- Overestimate robustness because the presentation looks professional
This is more dangerous than ignorance, because it feels like knowledge.
4. Changes in Training and Cognitive Development
Early-career scholars are most affected
Graduate students and postdocs are still forming:
- Intuition for data
- Taste for good arguments
- Sensitivity to methodological failure
If LLMs are introduced before these intuitions are internalized, scholars may:
- Never fully develop them
- Become excellent “editors” of reasoning they do not own
This mirrors concerns in mathematics education: tools are most harmful when introduced before conceptual grounding.
5. Narrowing of Intellectual Diversity
LLMs are trained on dominant patterns of:
- Argumentation
- Methodology
- Interpretation
Widespread use may lead to:
- Convergence on standard framings
- Fewer idiosyncratic approaches
- Reduced methodological creativity
This doesn’t eliminate originality, but it raises the cost of deviating from the norm.
6. Institutional Consequences Feed Back Into Ability
As LLM use becomes normalized:
- Journals may implicitly expect higher polish
- Reviewers may become less tolerant of exploratory or messy work
- Speed may be rewarded over depth
This pressures scholars to rely even more on automation, reinforcing skill displacement.
7. Net Effect: A Shift From “Doing” to “Supervising”
The dominant cognitive role of the scholar will shift from:
performing analysis and writing
to:
supervising, validating, and contextualizing automated intellectual labor
This is not necessarily a loss—but it is a change in what academic ability means.
8. Summary Table
| Domain | Expected Effect |
|---|---|
| Technical execution | Declines for many |
| Conceptual reasoning | More important, unevenly distributed |
| Writing fluency | Increases, but less diagnostic of understanding |
| Error detection | Critical bottleneck skill |
| Training depth | At risk without deliberate safeguards |
| Intellectual diversity | Some narrowing |
| Productivity | Strong increase |
Final Assessment
The use of LLMs will not make scholars “less intelligent,” but it will change which intelligences are cultivated.
The central risk is not laziness—it is epistemic hollowing, where scholars can produce convincing research without fully understanding it.
Whether academic ability degrades or evolves depends less on the technology itself and more on:
- Training norms
- Transparency expectations
- Whether scholars are required to demonstrate understanding beyond fluent output
If you’d like, I can also explore:
- Policy or training interventions to preserve core abilities
- Historical parallels (calculators, statistical software)
- Field-specific effects (e.g., humanities vs. empirical sciences)
