We’ve not gone past our first contact with #AI. Web2 was centralised and brought to us by Big Tech – the FAANGs, remember? That was an era of #Trust erosion at its worst, much like climate change. Then everything popped with the pandemic: humans had never been so connected, yet never lonelier. The algorithms (directed by a few human overlords) solved for attention – to maximise and exploit human attention spans like batteries. It sounds like a scene from The Matrix, when Neo first wakes from his pod after taking the red pill. Yes, social media was our first encounter with #AI. We never really survived it – we just swept things under the carpet, gnawing away at #Trust levels amongst humanity. Then in 2022 came the ChatGPT moment, and this #Trust tsunami has been depleting those levels exponentially, alarmingly. To the point where today we #Trust machines more than humans – more than ourselves, even. It’s a lost cause if you think about it. Silicon Valley has plonked billions of dollars into developing algorithms aimed squarely at your children’s brains. After all, that first-gen brush with AI solved for their attention.
History: I’m stoked that Computer Science lecturers from my early-90s student days (giving away my age), like Hinton, have won the Nobel Prize. It’s a significant milestone – demarcating the renaissance of AI, a far cry from the wintry conditions of the 80s and 90s. AI itself is not new – we’ve all heard about the Dartmouth summer project, proposed back in 1956 as a two-month study. How far we’ve come. How far humanity has come.
My early years as a practitioner (Computer Science wasn’t as fancy a subject to read at uni then as it is now) threw me into the deep end, working with statistical and predictive models to maximise profit and reduce cost. The most opportune areas were in marketing and advertising, giving hubris to algorithmic pioneers like myself to affront John Wanamaker’s now-famous remark: “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.” Oh, those were the times a’ight. We built predictive models, trained and retrained them, prepared data sets and made sure they didn’t skew. In fact, we made drag-and-drop GUI interfaces (archaic much?) for marketers to do segmentation and audience selection, and birthed the first generation of marketing automation platforms. Prehistoric-sounding names like Unica, SAS and Teradata reigned supreme back then.
Then in the early 2000s came the dawn of the FAANG fiefdoms – search was reinvented, and we became the product. Our data – rather, everything about us – was productised and sold to advertisers. Humanity was inundated with noise, and the search for truth became a whole lot harder. The algorithms of old (pre-2000s) solved for attention; the objective function they maximised was something called ‘engagement’. If not for the teen suicides and the glacial destruction of democracy, it would all have been passed over, swept under the socioeconomic rugs and fabrics of society. Australia, Down Under, is leading the way, passing regulation that bans social media for under-16s. Kudos to that! I can’t stress this enough: we need global AI regulation. We need East and West to collaborate authentically. Right now it’s akin to a prisoner’s dilemma – the new arms race of our times (the nuclear arms race of the Cold War is the obvious analogue). Not pretty. Not pretty at all.
And now we’re grappling with the most powerful technology humans have ever created – our last invention. As East and West race for #SuperIntelligence – the One Ring to rule them all – our fate, the fate of humanity, lies in the hands of a handful.
In previous tech evolutions, Big Tech (Silicon Valley) led the charge, and it’s well understood how tech adds trillions to the global economy while changing lives (for the better). There’s always duality in any technology – but we can’t afford a single mistake with #AI. That’s for sure. There’s finality in that. I argue in my book (Genesis: Human Experience in the Age of Artificial Intelligence) that we’ve only 5–7 years left, tops, to make good on the future of abundance we’ve been promising each other. In these precious few years we’ve got to put in place, to the best of our ability, all the regulations, safeguards, ethical boundaries and collaborative frameworks that increase humanity’s probability of a positive future. We’re running out of time! History repeats itself; we’re just too greedy. See how EVs supplanted the internal combustion engine in the auto industry? The German giants saw it coming from a mile (decades) away. Yet our present-forward inertia – maybe just human nature – was to maximise profits till the very ‘end’, the critical red line at which the industry was ceded to EV icons such as Tesla. We saw the same in Prof Christensen’s illustration of how the low end of the steel industry was displaced by cheaper entrants, who eventually worked their way up and usurped the entire market.
The analogies are aplenty. With AI, the promise of profits is too hard to resist – Silicon Valley is finding it irresistible. OpenAI’s pivot to for-profit is glaring. For every ten engineers working on getting to AGI, there’s one thinking about safety and regulation. Not pretty. Alarming.
Humans must continue to be in the loop. We mustn’t get lazy and take our hands off the proverbial wheel.