AI’s Next Chapter: From Hype to Daily Reality
AI isn’t just a technical breakthrough. It’s a current running through every corner of life, reshaping how people work, how companies compete, and how we make choices. At Newcastle, we study how these shifts ripple into behavior, markets, and culture.
That’s why we tapped Alex Bertha, a recent addition to Newcastle’s Collaborator Network, for his expert perspective. Alex is a technical advisor at Gates Ventures, Bill Gates’ family office, where he focuses on emerging technologies.
Alex has a front-row seat to how AI is really evolving and offers a refreshingly candid and pragmatic perspective. We sat down with him to pick his brain.
What’s the biggest misconception you see about AI?
There’s an all-or-nothing dynamic that drives a lot of dialogue around AI. It’s either “so over” or we’re “so back.” You can even see it in the framing – at some point we cross an imaginary intelligence threshold (or don’t) and then, suddenly, computers take over.
The reality is likely closer to Moore’s Law – steady, incremental progress built on multiple vectors. When “scaling” slows down, reasoning picks up. New datasets emerge. Tools improve.
It feels borderline banal as we live through it, but in hindsight AI’s progress is hard to overstate. Five years ago, GPT-3 was judged on its ability to multiply two numbers together. In August, researchers put together a benchmark on condensed matter physics. We blew past the Turing test (the old standard for whether a computer can convincingly imitate a human) and no one seemed to care.
The other big misconception is energy consumption. Several studies – both from the labs and independent analyses – show that answering a typical prompt uses about the same energy as streaming Netflix to your device for 8–10 seconds. That’s not to say new data centers won’t impact the environment, but models today are vastly more efficient than they were even a year ago.
Ethan Mollick, a Wharton professor who writes about AI, summed it up well in a recent article:
“Powerful AI is cheap enough to give away, easy enough that you don’t need a manual, and capable enough to outperform humans at a range of intellectual tasks. A flood of opportunities and problems are about to show up in classrooms, courtrooms, and boardrooms around the world.”
He’s not wrong.
What are some of those opportunities and problems from your perspective?
The obvious one is jobs. Recent economic data is starting to show the labor market cooling – and a few papers link a decline in junior-level employment to AI. That’s debated.
One thing’s for sure: AI will change a lot of jobs. As a former product manager, I can attest to how different it would be to build for an AI agent as my primary user, or to collaborate with engineers and designers who can now prototype in hours what used to take a week.
What’s interesting is that based on the “AI exposure” of specific tasks (writing, designing, coding), you’d think tech jobs would be most at risk. But the data shows software and data science still top the lists of fastest-growing jobs through 2030.
Essentially, more powerful (and complicated) AI needs more technical people to wrangle it, integrate it, and guide it. Technical know-how and product sense are arguably more important than ever.
What emerging technologies do you think are most underhyped right now?
Funny enough, the answer that jumps out is actually… ChatGPT – specifically GPT-5 Pro. That’s the version that “thinks” for an extended period to give better, more accurate answers.
The same goes for Google’s Gemini 2.5 Pro Deep Think (yes, that’s really the name). Both are only available on the most expensive subscription tiers ($200/month for OpenAI, $100/month for Google), which limits their reach. And AI labs themselves still seem to be figuring out the market for a slower, “deep-thinking” model.
Most consumers don’t have questions that need that much GPU horsepower. But engineers do. Data scientists do. Mathematicians do.
As these advanced capabilities make their way into coding workflows, new “GPT-wrapper” companies, and enterprise tools, we’re going to see big shifts in how people work. Imagine kicking off data analysis, prototyping, or drafting a pitch deck – then walking away and coming back to find it nearly finished.
It’s wild when you stop to think about it.
What’s one way consumer brands could use AI more effectively?
I like Sarah Tavel’s recent framework, which highlights two categories of consumer AI:
1. Products that go after consumer attention (AI companions, coaches, and tutors).
2. Products that use AI to make consumers’ lives easier (like how mobile and cloud led to Google Maps and Uber).
I’m rooting for the latter, if for no other reason than I’m terrified of the idea of my kids growing up with AI “friends.”
One interesting example of #2 in consumer healthcare is General Medicine. It’s a marketplace that does something the industry has been trying to do for decades – give a real, accurate price for a healthcare service before the transaction. AI can now “read” unstructured data, e.g., insurance paperwork, to determine the specific out-of-pocket price for each consumer. That wouldn’t have been possible before large language models.
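To make the idea concrete, here is a minimal sketch of that kind of structured extraction. This is my own illustration, not General Medicine’s actual system: the toy benefits text, field names, and regex “parser” standing in for a language model are all assumptions, chosen so the arithmetic behind an out-of-pocket quote is easy to follow.

```python
import re

# Toy benefits snippet - the kind of unstructured paperwork an LLM would parse.
BENEFITS_TEXT = """
Plan summary: Your annual deductible is $500, of which $350 has been met.
After the deductible, you pay 20% coinsurance.
Negotiated rate for MRI, lower back (CPT 72148): $900.
"""

def extract_dollars(label_pattern: str, text: str) -> float:
    """Pull the first dollar amount that follows a labeled phrase."""
    match = re.search(label_pattern + r".*?\$([\d,]+(?:\.\d+)?)", text,
                      re.IGNORECASE)
    if match is None:
        raise ValueError(f"no amount found for pattern: {label_pattern}")
    return float(match.group(1).replace(",", ""))

def out_of_pocket(text: str) -> float:
    """Estimate the member's out-of-pocket cost for the quoted service."""
    deductible = extract_dollars(r"annual deductible is", text)
    already_met = extract_dollars(r"of which", text)
    coinsurance = float(re.search(r"(\d+)% coinsurance", text).group(1)) / 100
    rate = extract_dollars(r"Negotiated rate", text)

    # Member pays the remaining deductible first, then coinsurance on the rest.
    remaining_deductible = max(deductible - already_met, 0.0)
    deductible_portion = min(rate, remaining_deductible)
    coinsurance_portion = (rate - deductible_portion) * coinsurance
    return round(deductible_portion + coinsurance_portion, 2)
```

Here `out_of_pocket(BENEFITS_TEXT)` yields 300.0: $150 of remaining deductible plus 20% coinsurance on the other $750. In practice the regex stand-in is the weak link – a real pipeline would have an LLM emit these fields as structured output from far messier documents.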
Consumers have accepted that the internet can’t do certain jobs well. AI is breaking those assumptions, proving it can handle the complexity the internet couldn’t.
Alex Bertha, Collaborator
Alex is a technical advisor at Gates Ventures. He previously held senior product roles at Amazon and Providence Health, and also founded a healthcare startup. Alex is married to a Microsoft product designer, father of two (small) children and lifelong lover of (small) dogs.