“Only intuition can protect you from the most dangerous individual of all, the articulate incompetent”
Robert Bernstein
One of my interviewee red flags is someone who is extremely articulate but fails on the details. It’s the candidate who can talk a big game but falls flat when it comes to actual thinking ability.
‘How did you achieve this 1% growth you claim in the resume?’
This articulate person typically tells an engaging story. It’s full of fun characters, an organisational malaise, a history of self-serving employees, and it concludes with how he, the individual, pulled off a mini revolution. You will enjoy this story. But every so often, you realise that this articulate candidate hasn’t really told you how he achieved the 1% growth. The story feels hollow. The details are lacking.
Uncharitably, let’s call this type of candidate the articulate fool. The articulate fool is at the opposite end of the spectrum from the mumbling savant. The articulate fool gets a great reception in the world (until he proves that he is a fool). This is how the world works.
As an aside, this is why I like case studies in interviews: they are a better filter for articulate fools than personality- or resume-based questions.
Increasingly though, we are all becoming articulate fools.
The internet is making us all articulate fools
Elon Musk is very ‘articulate’ on Twitter. A Twitter meme lord / shitposter with little knowledge of complex issues can still sway public opinion on those issues. Is he a ‘fool’? On many issues, he most definitely is. Note that both the words ‘articulate’ and ‘fool’ are highly contextual to the platform.
Being articulate on WhatsApp is very different from being articulate on Twitter, which is very different from being articulate on TikTok. Every day, thousands of people with large followings take over a little bit of your brain. You see an engaging Twitter thread or a viral YouTube video and you are captured by how well articulated the whole idea is.
Does it matter whether the person has expertise or whether they got their facts from a reliable source? Does it matter whether the basement-dwelling galaxy brain with no job is creating well-crafted Twitter threads on our economy, climate change or even our future? Increasingly, it appears that it does not matter.
We all crave the articulate fools. And want to be one.
Remember Web 3? There were quite a few articulate fools there. For some, it’s articulating wild theories on how Web 3 will shape-shift our world into some kind of futuristic, Mortal Engines-like world of anarchist city states. Most of those who talk about Web 3 have no first-principles view of what they are talking about (or hawking). NFT bros regurgitate NFT language. Crypto bros HODL and repeat pop-culture economics. It’s all jargon and memes now. Doge all the way to the bottom.
But I shouldn’t just be picking on Web 3 bros. There are many, many types of articulate fools on the internet doing epic shit. We repeat memes without even realising we are doing so. The real danger is those who sound very grounded and articulate but offer little way of establishing their actual smarts in the topic.
I have been guilty of writing on topics about which I know nothing more than the words I typed on Twitter. Being an articulate fool is seductive. It’s easy if you can be articulate on that specific medium. It would be impossible for me to be a good articulate fool on TikTok. Unfortunately, it pays to be an articulate fool on the internet.
The Dunning-Kruger effect on steroids
In tests, candidates in the bottom 25% often thought they were above average. The articulate fool on the internet is one manifestation of the Dunning-Kruger effect: the cognitive bias where the less competent you are, the more likely you are to overestimate your own knowledge or competence.
Voices on the internet are flattened into very few dimensions along which we can judge whether they are smart or whether they make sense. This manifests in many ways. A couple of examples:
Folks who are poorer at telling facts from fake news are more likely to believe they can tell the difference, and more willing to share wrong information with the world. This becomes a vicious cycle of fake news getting peddled more than real news.
#Doyourownresearch calls on the internet, for everything from vaccines to economic numbers, often result in people skimming a few pages and arriving at conclusions on topics that need years of study and (let’s face it) a certain level of intelligence. The result is a teenager playing Call of Duty arguing with a biochemist about the supposed lack of efficacy of vaccines, and winning that argument because he has mastered the articulation of the internet.
The world is collectively telling us that the articulate fools get the juice. They get the attention. They get the rewards that social media has cooked up.
But all of this is a mere precursor - a little appetizer, to a future that’s coming. We are entering the era of the ultimate articulate fool. The master of articulating an answer to any question. The oracle of banality and the wizard of nonsense.
I am, of course, talking of ChatGPT.
ChatGPT
If you haven’t tried using ChatGPT, get out from the beautiful rock you are living under and go give it a try.
I am, unfortunately, addicted. When presented with my life’s problems, ChatGPT has been fluent and authoritative in giving me platitudes. I’ve been using it to plan my day, draft a six-month travel plan, summarise long articles, edit emails, generate short-story prompts and blurbs, and many other things, none of which have fundamentally altered my life yet but all of which still feel incredible.
Sometimes, I cannot really tell whether there is a human on the other end, albeit an extremely quick one, typing responses to my stupid questions.
To dwell more on some of the things you could do with it:
Ask it to explain things. A prompt like ‘explain quantum entanglement like I am a five year old’ can produce a mildly creepy doll-pair story that I think could be the basis of a good horror movie.
Ask it to write code
Ask it to simulate a Linux terminal
Write hilarious short-story blurbs from a few prompts. I’d totally read that India sci-fi, mango time-travel story.
Write poetry (who can really tell what good poetry is anyway?)
Solve math equations
Ask it to plan a travel itinerary for you
Write an essay for your college admission
The possibilities are endless. I bet that if you play around with it for an hour, you will discover a whole new use case that you never thought possible. And also conclude that this is the ultimate sentience that will take over.
But is it?
Given the authoritative tone of its responses, it is easy to mistake ChatGPT for some kind of fact-generating machine, or for the ultimate compendium of knowledge, a kind of oracle. But it can generate any assemblage of words and content, true or not.
Examples
ChatGPT’s ‘funny’ solution for strengthening democracy around the world is to have voting happen on a made-up ‘National Pizza Day.’ Or to make the contest some kind of Dancing with the Stars, with the populace picking the best dancer to be their leader. To be fair, I asked it to be funny here. It turned out to be a little eerie.
Sometimes, the facts are simply awry. The second largest country in Africa isn’t Sudan; it’s the DRC (Democratic Republic of the Congo).
Here is a more flagrant example of ChatGPT getting basic math wrong. What’s the value of 10 lakhs compounding at 7.1 percent per annum over 10 years? The answer is about 19.9 lakhs. ChatGPT, however, decided to bump up the answer like an overzealous insurance salesperson and spat out 22.5 lakhs. It presents the correct formula and a breakdown of the calculation right up until it spews out the wrong number. The danger is not that the answer looks completely wrong, but that it’s close enough to the right number to be mistaken for the correct one.
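The correct figure takes only a few lines to verify. A minimal sketch in Python (amounts in lakhs, assuming annual compounding, which is what the standard formula implies):

```python
# Verify the compound interest claim: 10 lakhs at 7.1% p.a. over 10 years.
principal = 10.0   # starting amount, in lakhs
rate = 0.071       # 7.1% per annum
years = 10

# Standard compound interest formula: A = P * (1 + r)^n
amount = principal * (1 + rate) ** years

print(round(amount, 1))  # 19.9 lakhs, not the 22.5 ChatGPT produced
```

The point is exactly the one above: the formula ChatGPT printed was right, and only the final number was wrong, which is why the error is so easy to miss.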
ChatGPT is an incredible toy. Or is it a weapon? The truest analogy is the looking glass in Through the Looking-Glass, Lewis Carroll’s sequel to Alice in Wonderland. Alice steps through the mirror into an alternative world where everything is flipped. That world gives Alice confusing and contradictory answers; it reflects a distorted version of reality back at her.
But ChatGPT has no malice. Or intent.
It takes in the world as we know it in words, processes it, and spits it back out in the most plausible way it knows. Most of the time it is sane and reflects back reality, but it has no obligation to do so. It isn’t your neighbourhood algorithm with clear, deterministic instructions. It is literally just trying to piece together words that ‘sound’ mostly correct.
It’s the ultimate articulate fool.
Our manifest destiny to sound smart
As much as I love pondering on the fall of humanity, I am a technology optimist. I cannot get enough of ChatGPT. I cannot wait to see how it is going to change the creator economy. Or even the boring real economy.
Microsoft is investing billions (reportedly $10 billion) into OpenAI. I am not sure it will revive Bing, but it will create a definite shift in the Microsoft-suite-enabled office worker complex. With a little puff of magic, you will create email responses, PowerPoints that somehow make sense all on their own, chats that automatically talk on your behalf, and 6-pagers on your product, all with the help of a little AI elf.
The other day, a developer in my team shared a snippet of code containing a complex weighting and ranking logic we wanted to use. It was, however, written by someone a long time ago and never used. My goal was to decipher what it did and whether it aligned with what I wanted for the feature I was trying to build. Off I went plugging it into ChatGPT. The results were incredibly impressive. Not only did it give me a simple natural-language breakdown of what the snippet did, but I could also make it simulate the function by feeding it different inputs to see what emerged. What would have been a two-day exploration for the team turned into a ten-minute one.
Google increasingly sucks for search. Searching for anything brings up a page full of sponsored links, after which I spend enormous amounts of time trawling through the results to find what I am looking for. It feels so early-twenty-first-century. But I am certain Google isn’t going to sit back and watch. We will have a war of the models, all competing to articulate for us and to expand our ability to sound smart and create.
Soon, GPT-4 (ChatGPT is based on GPT-3.5) will arrive, presumably more advanced despite the hype cycle.
But how do we know if ChatGPT is giving us the right answer?
Everyone will now be a Wordcel, with the power to weave words that sound moderately smart. Perhaps this will expose some of the original articulate fools in the process: those who clung on due to their ability to play with words more than their ability to think or form a point of view.
But assume that ChatGPT gets things right most of the time; that’s the real humdinger.
Essays will be published by authors who don’t believe the words they’ve generated. Code will be written (perhaps with subtle mistakes that add up). Art will be produced that surprises its own creator. Ultimately it will be an eternal cycle of rehashing, reproducing and recombining all the content that ever existed in the black hole of content that is the internet.
The path toward being articulate fools feels inevitable. In fact, it feels like our ultimate destiny.
Is that all that we’ll ever be from now on? Is that all we ever were?
Sounds a bit dreary.
Could be worse,
Tyag