Why Does Everyone Want to Be an Expert These Days?
A long-form rant on the state of expertise on the internet
It seems like everyone wants to be seen as an expert on the internet these days, and I understand why. Of course, there’s social validation in being seen as knowledgeable, and that feels good. But with generative AI on the rise, becoming an "expert" has never been easier or more profitable. It's a new kind of faux expertise, one that is more about curating ideas than producing original thought.
In 2012, the book "The Internet as Playground and Factory" captured the dual nature of social media. It was seen as a place for connection, creativity, and self-expression, but also as a platform where our creative work, thoughts, and data were exploited for profit. At that time, the concept of a creator economy was not fully realised, and users had little control over the value they were creating. Platforms benefited the most, while users gained very little in return. Slowly but surely, the internet came to accept the idea that platforms exploit our digital labour, no matter how non-labour-like or fun it may seem.
The rise of the creator economy has shifted some power back to users. Times have changed: you can now own your own social media factory. With the rise of generative AI, this shift has only accelerated. Individuals can build their personal brands and pose as intellectuals, thought leaders, or cultivated thinkers without doing the real work. AI has become a shortcut, allowing people to boost their intellectual capital with minimal effort, and to monetise it through personal branding, sponsorships, and an ever-growing online following. Expertise has become a form of social currency that opens doors, making it more desirable than ever.
The Subversion of Expertise and Pursuit of Influence
The expert community is quite unhappy with how generative AI has given rise to a new type of pseudo-expert: individuals who lack genuine deep expertise but are skilled at using AI to inflate their intellectual reputation. These individuals are neither experienced professionals nor complete beginners. They fall somewhere in between and use AI to generate ideas, add a touch of cleverness, and create the appearance of depth and polish.
We are in an era where expertise is no longer protected by an elite few. Expertise is being subverted on social media, which, honestly, could have been the start of something exciting. Why should the elites control everything? But unfortunately, those subverting expertise aren't the ones furthering knowledge. Instead, they are diluting credibility and homogenising thoughts and opinions across the internet with their lazy, low-effort, self-satisfied posts.
Can AI-generated content pass as genuine content without people noticing? In today's post-truth world, where misinformation and fake news are rampant, the answer is, perhaps surprisingly, yes. Ordinary people mistake these AI-masked experts for real experts, and real experts may not bother calling them out.
To some people's annoyance, pseudo-experts flood social media with hollow content to game the system, hoping that "frequency + superficial expertise" will grow their influence. And in many ways, it works. Social media runs on supply and demand. Not everyone is looking for deep, well-researched insights. Sometimes, people just want surface-level wisdom—quick, relatable bites that allow them to switch off their brains but still feel the satisfaction of intellectual engagement. These quick takes play directly into System 1 thinking, where ideas feel instinctively right because they confirm pre-existing beliefs.
Meanwhile, System 2 thinking, which requires deeper, slower, and more reflective analysis, takes more work to cultivate. It's uncomfortable and takes time, so it's often neglected. Pseudo-experts thrive in this environment because they offer ideas that seem right and feel right without demanding the kind of intellectual rigour that deeper insights require.
The pursuit of influence online is also about gaining social and intellectual capital. Social capital involves building networks and relationships—being liked, followed, and included in important conversations. Intellectual capital is about establishing credibility and authority—being respected for what you know and contribute. When people use GenAI for their content, they're not just trying to keep up; they're striving to position themselves as thought leaders whose ideas are worth following. It's about strategic self-presentation. People are leveraging AI to climb social and intellectual hierarchies and fast-track a level of influence that would typically take years to build. And while this might seem harmless, like everything else on the internet, it changes the very nature of authenticity online. When people use AI to inflate their intellectual capital, they manipulate and shape perceptions in ways that don't reflect reality.
Consent, Control, and Incentives in the Age of AI
The question of consent is crucial when it comes to training AI. In the past decade or so, awareness around data protection has risen, especially with GDPR protecting users' interests. Users don't tick the "I agree" box on terms and conditions as readily as before. So what could possibly incentivise anyone to allow their ideas and creative labour to be fed into these massive AI systems?
For individuals still trying to build a following, GenAI is an opportunity and a great tool. It provides an easy way to keep up with the constant demand for content creation and to engage more frequently with audiences. However, AI poses a threat to individuals with unique voices and intellectual capital that sets them apart. When AI can absorb and replicate your ideas, style, and thought process, protecting what makes you unique becomes difficult. In the past, plagiarism was a known risk, but there was always the possibility of fighting back—calling out those who copied your work and demanding credit. Now, with AI systems quietly training on your content, there's no clear recourse. Your intellectual capital is fed into the system, and there's no way to retrieve it or claim back the originality that makes you, you.
And that brings us to a tough question: how do you put a price on someone’s intellectual identity? It feels weird to think you could slap a dollar amount on something so personal. But, at the end of the day, it’s better to have creators paid for their contributions than expect them to just let their data be used for free. Especially for individuals with years of cultivated interests, ideas, and expertise, as well as creatives, some form of recognition and meaningful incentive to contribute is key. It’s hard to put a price on originality, but it’s even harder to justify using someone’s ideas without giving anything back.
Gatekeeping Ideas
As GenAI evolves, we may see more creators begin to gatekeep their ideas. For those who have built their influence on originality, it's no longer all that safe to share freely. The risk isn't just that someone might steal their ideas—AI will learn from them, replicate them, and dilute their value.
This could lead to a future where the most valuable ideas are kept behind paywalls or inside private communities. Ironically, the platforms that once promised to democratize creativity may look towards intellectual exclusivity, where only a privileged few have access to the best ideas.
For creators, this means navigating a tricky balance between visibility and protection. Sharing too freely may cost them their originality, while keeping their ideas locked away could limit their reach. In a world where AI constantly learns, the question becomes: How do you maintain control over your intellectual capital?
The line between authenticity and performance is more blurred than ever. Some will continue to thrive on their originality, while others will ride the wave of AI-generated content, posing as experts and thinkers. Fake it till you make it. Maybe those faking it will eventually become real experts too, if the tools are put to good use. But most aren't in it for knowledge, so likely not. For now, the algorithms reward bite-sized, frequently posted, superficial wisdom. But in the long run, who knows? Maybe the ones who stand the test of time will be those who offer more than surface-level engagement, those who bring depth, originality, and something real. Or maybe not. Algorithms don’t seem to care about that for now.
The challenge for experts is clear: How do you protect your intellectual capital, and cut through the noise of AI-generated pseudo-expertise?
Disclaimer: The views and opinions expressed in this content are my own and do not represent those of my employer or any affiliated organisations. I used AI for assistance with grammar and editing to ensure clarity, but the ideas presented here are entirely mine.