Economic AGI vs. Conscious AGI: The Race To Artificial Intelligence
AGI’s timeline depends on its definition. If it means full human-like intelligence, it’s likely far off. But if defined as automating all human economic tasks, it’s much closer: Peter Morgan, AI Expert.
My guest in this edition of “Minds And Machines” is a man who has been working in the field of artificial intelligence (AI) for years, well before gen-AI was introduced commercially in late 2022.
Peter Morgan is a multi-faceted personality - in addition to his expertise in IT, data, and AI, he is also a businessman, an educator, a trainer, and an adviser. He is the Founder and CEO of Deep Learning Partnership, an AI-first boutique consultancy based in London, UK. He is also Head Tutor of the 'AI For Business Leaders' course at the Saïd Business School, University of Oxford.
Starting with a PhD in Physics, Peter worked as a solutions architect in cloud infrastructure across a variety of business domains. Cross-training to become a data scientist in 2013, he quickly saw machine learning as the most general purpose approach to solving business and scientific problems. With this insight, he founded his AI consultancy “Deep Learning Partnership” in 2016, which helps clients leverage AI within their businesses, including trialing PoCs and deploying AI solutions into production.
Excerpts from the email interview:
Q1) You have been in the business/company building exercise for many years, even before AI went commercial. How would you say this process has changed after the advent of gen-AI?
Peter: Yes, I am the CEO of an AI consulting company, Deep Learning Partnership, which I founded in 2016. I saw a remarkable uptick in public awareness after the release of ChatGPT by OpenAI. A hundred million users in a couple of months, whereas before that, AI was largely a side project confined to the AI research community. Suddenly most people around the world were aware of generative AI; many had used it and seen how good it was at generating text, at least. It wowed a lot of people and really was the first indication that a new revolution was perhaps underway, one potentially as transformative as the industrial revolution of the 1800s or the computer revolution which began around the 1950s. We have only seen adoption and AI capabilities grow since the release of ChatGPT, with hundreds of new generative AI models released by many companies and countries over the past two or so years. The pace of development will only continue to increase as we are on an exponential curve in terms of data, model size, and hardware.
Q2) For now, where do you think the impact of gen-AI is being felt the most? Which sectors - content, health, finance.... and your reasons?
Peter: As an AI consultant, I see gen-AI being trialed across all industry sectors, whether that be healthcare, medicine, finance, transportation, manufacturing, law, education, government, construction, and so on. Because AI is a general purpose technology, this should not be so surprising, and we will see adoption continue to increase in the months and years ahead. The goal is the automation of all work, both white and blue-collar. Clearly, white-collar office work is “easier” to automate as it mostly involves the use of digital computers, whereas blue-collar manual labor requires physical action taking place within the physical world – think plumbers, surgeons, waiters, and construction workers. This means robots: AI controlling dexterous mechanical bodies. Rapid progress is also being made in robotics, along with a healthy dose of investment.
Q3) What are your views on regulations for AI? Should AI be regulated only by govts, not regulated at all, or self-regulated by Big Tech itself?
Peter: AI needs the right amount of regulation, not too little or too much. As with any regulation, this ultimately needs to come from government for it to be passed into law. Currently, we see GDPR, the EU AI Act, and various state and country regulations starting to appear. Of course, regulation often takes years to formulate and pass into law, and AI development is occurring very rapidly, possibly faster than any previous technology. This keeps the regulatory space very dynamic and fast-moving. Obviously, it is critical that we get regulation right. Ultimately, we would seek harmonization across all countries and regions, but this is a big ask.
Q4) What are your views on AGI? Will it ever come? If yes, how far off is it?
Peter: This depends on the definition of AGI. If it means that AI can do everything human intelligence does, then it may be some time away. This would include self-awareness and subjective experience, because these are both forms of human intelligence, often referred to under the umbrella of consciousness. If one adopts a more pragmatic definition, such as a system capable of performing all the economic activity of humans, then we are much closer to achieving this definition of AGI.
If we look at white-collar work, it could be less than five years; blue-collar, maybe a decade or so. I think it’s actually more meaningful to speak of automating parts of workflows, eventually automating entire jobs end to end. In this respect, automation (and therefore AGI) lies on a continuum and is not a discrete moment in time. AI is already much better (faster, cheaper, and more accurate) than many humans at many tasks – for example, summarizing large volumes of text, generating art, coding, mathematics, and certain types of reasoning. Maybe not better than the best domain experts, but with every new release, these AI systems keep getting better. We have also seen no limit to this progress as yet, i.e., we haven’t hit any walls, so perhaps scale is all we’ll need to get to AGI, at least the economic type.
Q5) Will AI eventually turn humans into sloths, and will we become less intelligent?
Peter: Only if we let them, and yes, this is a real risk. If we have access to systems that can do our work for us just as accurately, at the same cost or cheaper, in one-hundredth of the time, should we not use them? A lot of human labor has already been replaced by mechanical or digital systems. Economically, companies will have no choice but to use AI in order to remain competitive, just as they use robotic arms and digital computers today. So, even though these systems can write stories as well as we can and produce high-quality art and music, perhaps we should keep producing our own art, music, and literature, keep playing chess and go, doing crosswords, etc., so that our brains and minds don’t atrophy.
I think most work will eventually be completely automated by AI/AGI. Maybe (getting a little science fiction here), we will have two worlds, the digital AI world that does most of our work, and the human biological world, where, freed from menial labor, we can pursue more creative and personally meaningful endeavors. Just to emphasize, this AI work replacement will be a transition over a period of time, but one that has already begun. I like to stay optimistic, so I look at the automation of work as a good thing, freeing humans to become more creative. Not everyone will agree, but this is the way I like to look at it. Of course, the issues of technological unemployment and universal basic income come up at this point. No one said revolutions were going to be easy or straightforward.
Q6) 2025 is being touted as the year of agentic AI. Your views? (True or false.) Also, in your opinion, if and how will agentic AI change the world of business?
Peter: 2025 is definitely going to be the year of agentic AI, although we’re just getting started with a lot of work to do from here to get robust agentic AI systems ready to deploy into production. Imagine hundreds, or even thousands, of separate AI agents all needing to be coordinated to complete an end-to-end task, and to fulfill a particular goal. The goal could be to solve a complex scientific problem, discover a new drug, automate a complex business task, replace an entire job, or automate a government department.
Currently, an LLM like ChatGPT or Gemini could be considered a single agent. Now we need to chain several of these agents together (they could be closed or open source agents) using an orchestration framework such as LangChain or AutoGen. We are at the early proof of concept (PoC) stage, but significant progress is currently being made. Obviously, if we are successful in our agentic rollout, businesses will change significantly, ultimately becoming fully automated. Brave new world indeed!
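The chaining idea can be sketched in a few lines of plain Python. The agents below are hypothetical stubs standing in for LLM calls (in a real deployment each would be an API call coordinated by a framework such as LangChain or AutoGen; the agent names and task are invented for illustration):

```python
# Hypothetical illustration of agent chaining: each "agent" is a stub
# standing in for an LLM call. The orchestrator feeds each agent's
# output into the next to complete an end-to-end task.

def research_agent(task: str) -> str:
    # Stand-in for an LLM gathering background material on the task.
    return f"notes on: {task}"

def drafting_agent(notes: str) -> str:
    # Stand-in for an LLM turning the notes into a draft.
    return f"draft based on [{notes}]"

def review_agent(draft: str) -> str:
    # Stand-in for an LLM reviewing and approving the draft.
    return f"approved: {draft}"

def orchestrate(task: str) -> str:
    """Chain the agents so each one's output feeds the next."""
    result = task
    for agent in (research_agent, drafting_agent, review_agent):
        result = agent(result)
    return result

print(orchestrate("summarize Q3 sales"))
```

The orchestrator here is a simple linear pipeline; real frameworks add branching, retries, tool use, and shared memory on top of this basic pattern.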
Q7) If you could explain the kind of role professionals like you play in guiding/transforming businesses toward AI adoption.... what is the process like? Are companies receptive to this transformation today as compared to 2022 or not, in your opinion? Also, are they willing to spend the big bucks required for the change? Or are they still in a wait-and-watch mode?
Peter: Companies, in my experience, are in PoC mode. They are certainly trying these LLMs out in limited sandbox environments. For example, some companies are allowing their developers to use GitHub Copilot in order to make them more productive, and are already seeing positive outcomes as a result – a 30% increase in productivity, say. As these models become more capable (more accurate, faster, and cheaper), as well as agentic and multimodal, companies will have no choice but to look at deploying these systems into production, starting with limited PoC trials. Parts of workflows are definitely starting to become more automated, with this trend only expected to increase over the coming months and years.
As an AI consultant, it is my job to help educate companies about the technology and how it can help them to become more productive. Then it is to help them design and deploy PoCs, and ultimately put safe and robust AI solutions into production. Companies should pay particular attention to ensure their proprietary data remains private and is not leaked into or stored by the model providers. This is where attention to Terms and Conditions is very important. AI cloud platforms, such as Microsoft Azure, Google Vertex AI, and AWS all come with privacy guarantees and provide popular and convenient paths to testing and deployment of AI solutions for many companies today.
In fact, any business that would like to know how AI can help it through discovery or PoC can reach out to us through our company website.
Q8) In your opinion, should any change in an organization because of AI start from the top or the bottom? Whatever your reply, please explain why.
Peter: Change can occur in both directions, bottom up or top down. Often it is the more technical people within a company who will have experimented with these models (ChatGPT, o1, Gemini, Claude, DeepSeek, Mistral, etc.) at home or in their spare time, and tested them extensively in a variety of situations and use cases. After such hands-on experimentation, they would likely tell their managers or CEOs about the somewhat remarkable capabilities they have discovered for themselves. Often the C-suite and other business executives will have read a lot of the news about these AI systems and have naturally become curious and may want to run some PoCs within their companies too. So it is a combined effort, top down and bottom up, but the ultimate sign-off needs to come from the C-suite. This is no different from any major project where budgets, ROI, and risk-reward trade-offs all need to be considered.
Q9) Which components of business do you see AI bringing an immediate transformation to - marketing, operations, finance, etc?
Peter: Large language models (LLMs) and large multimodal models (LMMs) are already very good at content creation. This can include text, images, audio, video, time series data, and even scientific data. Given this, AI is already good at generating content across all workflows (again, it is a general purpose technology – our final invention?) independent of industry vertical. Companies are training custom models using foundation models and their own proprietary data sets to get a competitive advantage. So while the open source Llama 3 model, for example, is trained on a vast amount of data from the Internet, companies can then train it further on private data, creating a bespoke model specialized for their particular workflows. As these models get more capable, agentic, and multimodal, more workflows can and will be automated.
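The further-training idea can be shown in miniature: start from parameters learned on broad data, then continue gradient updates on a small proprietary set. The toy one-parameter model and the data below are purely illustrative (real fine-tuning would run on an actual foundation model with a training framework), but the mechanism – continued gradient descent on private data – is the same:

```python
# Toy illustration of continued training ("fine-tuning") on proprietary data.
# The starting weight stands in for a foundation model's pretrained
# parameters; a few gradient steps nudge it toward the private distribution.

def sgd_step(w: float, x: float, y: float, lr: float) -> float:
    # One gradient step for the 1-parameter model y_hat = w * x
    # with squared-error loss (w*x - y)^2.
    grad = 2 * (w * x - y) * x
    return w - lr * grad

# "Pretrained" weight (stands in for foundation-model parameters).
w = 1.0

# Hypothetical private data where the true relationship is y = 2x.
private_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(200):            # a few epochs of fine-tuning
    for x, y in private_data:
        w = sgd_step(w, x, y, lr=0.01)

print(round(w, 2))  # converges toward 2.0, the private-data relationship
```

The design point is that pretraining and fine-tuning are the same optimization, just on different data; the bespoke model inherits the general capability and specializes it.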
Q10) You are also the Head Tutor on the 'AI For Business Leaders' course at the Saïd Business School, Oxford. Do you see any change in the profile of students enrolling for this course or such courses today compared with the earlier batches?
Peter: I think the participant profiles are more or less the same now as they were in the early days, but there has been a major increase in the number of students enrolling in the course. For example, when we started the course around six years ago, about forty students enrolled in each course, whereas after the release of ChatGPT and the incredible momentum we have seen ever since, we are seeing between 500 and 1,000 enrolments for each course, which is quite remarkable, to say the least. The course is offered five times a year. The profile tends to be business people looking to get a foundational understanding of artificial intelligence, and I think the course does a good job of providing this, although I may be a little biased. It includes faculty from both the Oxford computer science department and the business school. I include a link above, and I encourage people to sign up, as it covers the history of AI, a technical overview, business use cases, the ethics and regulation of AI, along with our possible future.
(If you, too, want to be featured in “Minds And Machines, get in touch by writing in the “Comments” section.)





