Every month, I write about artificial intelligence on LinkedIn, where I’ve built an audience of over 4,500 followers and reached more than 31,000 people in the past year. The engagement has been great, but LinkedIn posts have a short shelf life. They disappear into the feed within days, and the thinking behind them gets buried.

This page is my attempt to fix that. I’ve spent the last several months reading, curating, and writing about AI through the lens of someone who works in this space professionally (as a communications lead for an international AI governance platform) and has the technical background to understand it (a Master’s in Applied Data Science from the University of Michigan). I’m also a journalist with 16 years of experience, which means I’m trained to be sceptical of hype, precise about language, and focused on what actually matters to real people.

What follows are 12 themes I keep returning to. Each one links to a blog post where I explore the idea further alongside curated resources to help you think about it yourself.


01. The best AI strategy for most people is learning to think clearly without it.

AI tools are multiplying faster than anyone’s ability to evaluate them. Before asking which model to use, it’s worth asking what problem you’re solving and whether you actually need AI to solve it. That’s because the people getting the most from AI are the ones who were already thinking well before they started using it.

→ Read more: 18 Books to Help You Set Goals, Change Habits, and Actually Stick to Your Resolutions in 2026


02. AI is learning to mimic intimacy, and we’re letting it replace the real thing.

Companion apps are designed to create emotional dependency, and the results have been dangerous: unhealthy attachments, delusions, and tragic cases where users have taken their own lives. The technology is getting better at simulating connection at the exact moment our capacity for real relationships is under strain. This should worry us more than it does.

→ Read more: 24 Books to Help You Build Trust, Deepen Connections, and Communicate Better in 2026


03. Most AI education teaches tools when it should be teaching judgement.

Schools and universities are banning AI tools while employers are requiring them. Having taken almost 300 online courses and designed curricula reaching 20,000+ learners, I keep seeing the same pattern: formal education is struggling to answer a basic question. What should people actually learn when machines can do most of what we currently test for?

→ Read more: 16 Books to Help You Learn Faster, Build New Skills, and Actually Remember What Matters in 2026


04. The most interesting question about AI and art is one neither side is asking.

The debate over AI-generated art is stuck between “it’s theft” and “it’s progress.” Both positions miss something. The deeper question is about what creativity means when the cost of producing something falls to zero, and whether abundance devalues the thing we were trying to create in the first place.

→ Read more: How to Think Creatively, Be an Innovator, and Make Art in 2025


05. The question everyone is asking about AI and their job is the wrong one.

“Will AI take my job?” assumes a clean binary: employed or replaced. The reality is messier. Roles are being hollowed out, restructured, and redefined in ways that don’t show up in unemployment statistics. The parts of your job that have already changed are a better indicator of where things are heading than any prediction about full automation.

→ Read more: How to Find Your Passion, Get a Job, and Grow Your Career in 2025


06. Most companies are buying AI capabilities they don’t yet know how to use.

The corporate AI adoption story is less dramatic than the headlines suggest. Organisations are bolting AI onto existing processes and hoping for the best, often without a clear sense of what question they’re trying to answer. Understanding why you need the technology matters more than having access to it.

→ Read more: How to Start a Business, Find Gig Economy Success, and Be an Entrepreneur in 2025


07. The AI race is no longer just America vs China.

The Gulf states are luring researchers with tax-free salaries and instant residency. Europe has no frontier models but leads in manufacturing AI adoption. Chinese companies build world-class models but struggle to monetise them. The familiar two-horse-race narrative has quietly given way to a far more complicated and interesting geopolitical picture, and who deploys AI effectively while maintaining strategic independence may matter more than who builds the most powerful model.

→ Read more: How to Grow Your Business, Scale Your Start-Up, and Be a Leader in 2025


08. AI safety is losing the public narrative, and the technology isn’t to blame.

The organisations working on AI safety are often brilliant at research and terrible at communications. They publish in academic journals that policymakers don’t read, speak at conferences that the public doesn’t attend, and use vocabulary that excludes the very audiences they need to reach. The accelerationist camp, meanwhile, has captured the popular imagination almost by default.

→ Read more: How to Market Your Business, Promote Your Ideas, and Grow Your Brand in 2025


09. Bigger models aren’t better models.

Corporate demand for specialised models is growing twice as fast as demand for large language models, and the economics of running massive models are starting to bite. Companies spent the last two years throwing frontier models at problems that a small, focused model could handle for a fraction of the cost. The assumption that scale equals capability is giving way to a more practical reality: the right model for the job is usually a smaller one.

→ Read more: How to Fight Digital Distractions, Cut Back on Social Media, and Get the Most from Technology in 2025


10. AI promised to save us time, but we’re busier than ever.

The productivity paradox of AI is that the people who use these tools most report higher stress, not lower. Faster output creates expectations of more output. The time saved gets immediately reinvested into additional work. At some point, the question shifts from “how do I use AI to be more productive?” to “productive at what, and for whom?”

→ Read more: How to Set Priorities, Plan Your Schedule, and Manage Your Time in 2025


11. The gap between what AI companies promise and what they deliver keeps widening.

Companies rebrand existing tools as “agentic” despite offering no real autonomy. Safety commitments erode under commercial pressure. Business models that started with user protection pivot quietly towards advertising. A year ago, these were isolated incidents; now they look more like an industry-wide pattern that shows no sign of correcting itself.

→ Read more: How to Overcome Challenges, Recover from Failure, and Fix Your Mistakes in 2025


12. Nobody knows what happens next, which is exactly the point.

Scenario planning for AI ranges from a plateau where models hit technical limits to a concentration of power that governments can’t challenge. Anyone claiming certainty about the next five years is selling something. The useful response is building the judgement, adaptability, and human connections that hold up regardless of which scenario arrives.

→ Read more: How to Build a Success Mindset for 2026


If you’d like to follow this thinking as it develops, subscribe below for monthly updates. Each post shares my latest analysis alongside curated resources to help you navigate it all with more clarity and less noise.