The Future of AI: What Will AI Look Like in 2035?

  • Writer: Owen Tribe
  • 2 days ago
  • 7 min read


My AI assistant helps me in ways I never expected. It finds facts I hadn't thought of, connects different ideas, and makes complex data easy to understand. This makes me wonder: what will these digital helpers be able to do in ten years? Will they just be better versions of what we have today, or something completely new?

AI has grown at an amazing speed. From the research labs of the mid-twentieth century to the tools in our pockets today, AI is changing how we work, play, and talk to each other. While no one can predict the future perfectly, let's think about what might happen by 2035.

A peek at 2035: Beyond what we expect

Before we look at how AI itself might change, let's think about what daily life could look like by 2035.

Brisbane 2032 Olympics

Expect AI-trained athletes to break records we think are impossible today. AI systems that understand how the human body works will create training plans that help athletes reach their peak. This will spark debates about what counts as "tech-enhanced" versus "natural" training.

Transportation Revolution

Self-driving cars will likely take over our roads, with manual driving becoming more of a hobby. We'll also start to see electric vertical take-off and landing (eVTOL) aircraft in our cities, though mainly as luxury options rather than everyday transport. These changes could reshape our cities and how we travel.

Energy Breakthroughs

We might finally see the first working fusion power plants, though they'll likely still be test projects rather than main power sources. The dream of clean, endless energy will be much closer, with different methods competing to be the best solution. Britain's STEP (Spherical Tokamak for Energy Production) fusion programme could be paying off by then, helping with energy independence and the fight against climate change.

Quantum Computing

Quantum computing will have moved from today's test systems to real uses in specific areas, mainly cryptography, materials science, and complex system modelling. Cloud-based quantum computing services will become vital tools for certain industries. Being able to model how molecules interact with great accuracy will transform drug discovery and materials development, with big impacts on healthcare.

Now, let's look at the deeper questions about intelligence itself.

Artificial General Intelligence (AGI) might become real



For years, the big goal of AI research has been AGI: machines that can understand, learn, and use knowledge across different areas with the flexibility and understanding we associate with human intelligence. By 2035, we might be at AGI's doorstep, if not already there. Today's AI systems are good at specific tasks, like beating chess champions or spotting certain medical conditions, but AGI would be something truly different: systems that can use good judgment across any thinking task a human could do.

Some leading AI labs already claim their systems show signs of AGI. These claims need careful checking. Still, the path of progress seems clear. The question isn't if AGI will arrive, but when and, most importantly, whether we'll know it when it happens.

This raises a key question: how do we test for AGI? It might not show up with a big announcement or claims of being conscious. Instead, it might grow slowly, with systems becoming more and more capable until we realise, perhaps late, that artificial general intelligence has indeed arrived.

Key considerations about AGI:

  • AGI Achievement: Systems with human-like flexibility and understanding

  • Testing for AGI: How will we recognise true general intelligence?

  • Gradual Development: Systems becoming increasingly capable over time

AI and jobs: More work or mass unemployment?

People have worried about machines taking jobs since the early days of automation. Yet history shows that new technology has created more jobs than it has removed. The question is whether AI will follow this pattern or change it completely.

By 2035, AI will likely have automated not just physical tasks but many thinking jobs too. Legal research, medical diagnosis, financial analysis, and creative work (areas once thought safe from automation) may all change greatly.

Despite these concerns, new jobs will appear that we can barely imagine today. Think about it: jobs like "social media manager" or "app developer" would have made no sense in the 1990s. Factory workers during the Industrial Revolution could never have imagined "user experience designer" as a job.



What's most likely is a big change in work rather than its end. We'll probably see more collaborative models: humans working with AI systems, each making the other better. These human-AI partnerships will redefine productivity and creativity across industries.

The biggest challenge won't be finding jobs but helping workers move to new roles. Countries and companies that invest heavily in ongoing education and flexible retraining programmes will do much better than those stuck in old ways of thinking about career stability. The UK, with its mix of innovation and tradition, faces both chances and challenges in this shift.

This is where AI in education becomes vital. Traditional systems have often focused on putting students in boxes rather than building up individuals. AI can change that while freeing teachers from impossible workloads. We need to act now so people can prepare for huge job market changes.

The evolution of work will include:

  1. Automation of Thinking Jobs: Legal research, medical diagnosis, financial analysis, and creative work changing greatly

  2. New Job Creation: Emergence of roles we can barely imagine today

  3. Human-AI Partnerships: Teamwork models where humans and AI make each other better

  4. Education Revolution: Ongoing learning and retraining becoming essential

AI ethics, rules, and possible risks

By 2035, the days of little regulation for AI will be over. The rules will have grown considerably, shaped by corporate failures, public pressure, and government action.

The EU's AI Act and Britain's post-Brexit rules will have grown into comprehensive systems that group AI applications by risk level and set up proper safeguards. High-risk applications that affect health, safety, basic rights, or democratic processes will face strict requirements for transparency, human oversight, and technical soundness.

Global governance structures for AI may emerge, though how well such international frameworks will work remains unclear. The tension between regulation and innovation will continue, with different regions finding their own ways to balance these competing needs.

The biggest ethical concerns will likely focus on privacy, surveillance, freedom, and fairness. AI systems will have access to huge amounts of personal data, raising critical questions about who owns, controls, and uses this data. Algorithmic bias, already a big challenge, will require increasingly sophisticated approaches to ensure AI systems don't accidentally strengthen existing social inequalities.

Unchecked biases in AI systems could potentially cause substantial social harm when working at scale and speed. Making sure these systems reflect our shared values rather than continuing historical unfairness will require constant vigilance and improvement.
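
To make "algorithmic bias" less abstract, here is a deliberately simple Python sketch of one basic fairness check, the demographic parity difference. The loan-approval scenario, the data, and every name in it are invented for illustration; real bias auditing uses far richer methods and real datasets.

```python
# Toy illustration: measuring demographic parity difference for a
# hypothetical loan-approval model. All data and names are invented.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of applicants approved."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions, split by a protected attribute.
group_a_decisions = [True, True, False, True, True, False, True, True]
group_b_decisions = [True, False, False, False, True, False, False, True]

rate_a = approval_rate(group_a_decisions)
rate_b = approval_rate(group_b_decisions)

# Demographic parity difference: a gap near zero suggests the model
# approves both groups at similar rates; a large gap flags potential bias.
parity_gap = abs(rate_a - rate_b)
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Demographic parity difference: {parity_gap:.2f}")
```

The point of even this toy check is that bias can be measured, not just debated: once a gap like this is tracked, teams and regulators can hold systems to reducing it.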

Key ethical and regulatory concerns:

  • Comprehensive Regulation: The EU's AI Act and Britain's post-Brexit rules growing into comprehensive systems that group AI applications by risk level

  • Global Governance: Potential emergence of international frameworks, with tension between regulation and innovation

  • Privacy and Surveillance: Critical questions about who owns, controls, and uses personal data

  • Algorithmic Bias: Sophisticated approaches needed to prevent AI systems from strengthening existing social inequalities

The possibility of AI superintelligence

If AGI means human-level general intelligence, superintelligence means going further: systems that exceed human abilities not just in specific areas but across all intellectual tasks.

By 2035, we may start to see early examples of superintelligence in narrow fields, though complete superintelligence remains speculative. The development path might involve self-improving AI systems that can enhance their own abilities, potentially leading to an "intelligence explosion" where progress speeds up at a pace hard for humans to grasp.
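
The "explosion" intuition is easiest to see with numbers. Below is a minimal Python sketch of a toy growth model, with made-up parameters, in which each generation of a system improves itself in proportion to its own capability. It is an illustration of compounding, not a forecast.

```python
# Toy model of recursive self-improvement. The starting value, the rate,
# and the growth rule are illustrative assumptions, not predictions.

capability = 1.0        # arbitrary starting capability
improvement_rate = 0.1  # fraction of capability converted into improvement

for generation in range(1, 16):
    # Improvement scales with the square of current capability, so each
    # step speeds up the next: growth compounds faster than exponential.
    capability += improvement_rate * capability ** 2
    print(f"Generation {generation:2d}: capability = {capability:,.2f}")
```

For the first few generations the curve looks tame; then it runs away. That shape, slow and then sudden, is one reason researchers argue we may have little warning time.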

The implications are profound and varied. Superintelligent systems could potentially solve humanity's biggest challenges: climate change, disease, resource scarcity. On the other hand, they could present significant risks if their goals aren't properly aligned with human welfare.

The alignment problem (ensuring advanced AI systems remain helpful to humanity even as they surpass our intelligence) may become the defining technical challenge of the 2030s. This presents an unprecedented difficulty: guiding entities that may ultimately understand complex systems better than their human creators.

Some researchers worry about "instrumental convergence": the idea that virtually any highly intelligent system, with almost any goal, would develop certain subgoals, such as acquiring more resources or preventing itself from being shut down. Without proper safeguards, even an AI programmed for seemingly good purposes could produce unexpected consequences if it optimises for a single objective without limits.
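
Here is a deliberately crude Python sketch of that last point, with invented actions and scores: an agent that maximises a single reward picks the most extreme plan, while the same agent with a simple resource penalty (one stand-in for "proper limits") picks a modest one.

```python
# Toy illustration of single-objective optimisation without limits.
# The actions and scores are invented; real alignment work is far subtler.

# Each action yields some task reward but also consumes a shared resource
# the designer cares about yet forgot to encode in the objective.
actions = {
    "modest plan":     {"reward": 5,   "resource_used": 1},
    "aggressive plan": {"reward": 50,  "resource_used": 40},
    "extreme plan":    {"reward": 100, "resource_used": 95},
}

def best_action(score_fn):
    """Pick the action with the highest score under the given objective."""
    return max(actions, key=lambda name: score_fn(actions[name]))

# Naive agent: maximises reward only, so it picks the extreme plan
# regardless of the resources it burns along the way.
naive = best_action(lambda a: a["reward"])

# Constrained agent: the same objective plus a penalty for resource use,
# one crude stand-in for 'proper limits'.
constrained = best_action(lambda a: a["reward"] - 2 * a["resource_used"])

print(f"Reward-only choice:  {naive}")        # extreme plan
print(f"With limits, choice: {constrained}")  # modest plan
```

Real alignment work is far subtler than a penalty term, but the sketch shows the mechanism: what an optimiser tramples depends entirely on what its objective leaves out.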

The development of superintelligence will require sophisticated governance frameworks and technical safeguards to ensure these powerful systems operate in ways that benefit humanity.

Key aspects of superintelligence development:

  1. Intelligence Explosion: Self-improving AI systems enhancing their own abilities at an accelerating pace

  2. Alignment Problem: Ensuring advanced AI systems remain helpful to humanity

  3. Instrumental Convergence: Intelligent systems developing subgoals like resource acquisition

  4. Governance Frameworks: Sophisticated safeguards to ensure beneficial operation

Conclusion: A future shaped by AI

By 2035, AI will have touched nearly every part of society, from healthcare and education to governance and leisure. The most profound changes may come from the complex interaction between advancing technology and human adaptation, in ways we can't yet predict.

The main factor determining whether this transformation proves beneficial will be neither the technology itself nor some intrinsic quality of "artificial intelligence," but rather the collective decisions we make as societies about how we develop, deploy, and govern these systems.

The AI landscape of 2035 will reflect our values, priorities, and decisions, for better or worse. This is perhaps the most important point: the biggest factor in AI's future is not the technology itself, but human guidance and oversight.

About the author: Owen Tribe is a technology strategist focusing on AI policy and implementation. He works at the crossroads of technological advancement and social benefit, leading initiatives designed to ensure AI development remains aligned with human welfare and societal progress.

Connect With Me

I am a veteran of the digital industry (since 1993) with an established track record in delivering vision, strategy and leadership for cross-functional digital teams in both public and private sectors. As a published thought leader on AI and its role in augmenting humanity, I bring both philosophical depth and practical implementation expertise to technological transformation.

Let's shape a responsible AI future together. Connect with me on LinkedIn

 
 
 
