Riding the Waves of AI Disruption: Uncovering the Transformative Impact on Global Employment
Yann André LeCun, the Turing Award-winning French computer scientist, has dismissed apprehensions about AI dominance, asserting that AI will neither take over the world nor permanently eradicate jobs.
There is a saying, often attributed to the Chinese: "May you live in interesting times". It's actually a curse. The speaker is really wishing you times of uncertainty and disorder rather than, say, peace.
We are going through such a time where artificial intelligence (AI) and global employment are concerned. It's an interesting period in the true sense of the word, yet AI has also injected a degree of uncertainty into the global jobs market.
While this newsletter tracks developments on the employment front week after week, I've decided to introduce the "AI Job Impact Indicator" starting with this edition. This simple graphic will tell readers at a glance how much AI displaced jobs in the week gone by.
There were not many reports this week from around the world of jobs actually lost to AI. Yet, new reports spoke of impending losses.
What made this week interesting was the fact that even as a worried world debated AI's potential job takeover, one of the earliest experts in AI, Professor Yann LeCun, came out with a different point of view. He believes that AI will not dominate the world or permanently eliminate jobs. This differs from the perspectives of influential figures in the field of AI, like Geoffrey Hinton and Yoshua Bengio, who see AI as a threat. You can scroll down to find that story in our “Research Corner”.
Another intriguing development was the start of a debate on labeling content according to its "dependency" on AI, so that humans can clearly understand whether a human or a machine was the architect. All of this in the interest of transparency. That story is in our "AI Ethics and Morals" section, if you want to scroll straight down to it.
Recruitment Firm Predicts Major Job Losses In Hong Kong
The Development: A report by IT recruitment firm Venturenix says Hong Kong's workforce is projected to face significant disruption from AI technologies.
The Details: By 2028, it is estimated that around 800,000 jobs, equivalent to 25% of the total workforce, will be displaced.
The report identifies several roles that are particularly vulnerable to AI, including data entry clerks, administration staff, and customer service representatives. However, the impact of AI is anticipated to extend beyond these sectors, affecting professionals in fields such as law, translation, illustration, and content creation.
Researchers from Alibaba Group's Damo Academy and Singapore's Nanyang Technological University have discovered that employing large language models (LLMs) like GPT-4, the technology powering ChatGPT, for data analysis costs less than 1% of hiring a human analyst while achieving comparable performance.
What Does It Really Mean: This study underscores the potential threat to job security due to the growing adoption of generative AI.
As AI technology continues to advance, the growing popularity of models like ChatGPT has raised concerns about substantial job losses, and many companies in Hong Kong are now urging employees in positions that traditionally did not require IT expertise to learn how to use ChatGPT.
To navigate these transitions and mitigate the impact on job security, efforts should be made to upskill and reskill employees in emerging technologies and to emphasize the augmentation of human capabilities alongside AI. Companies should invest in AI technologies while simultaneously investing in employee training and development, so that their people remain relevant and valuable in the face of increasingly sophisticated AI.
Source: India TV
Thanks for reading "AI For Real". This is a paid newsletter with the first month free. Pressing the "Subscribe" tab earns you a month's free subscription, after which you can either move to one of our paid plans or opt out.
Forrester Report Predicts Change in Advertising Job Dynamics
The Development: A recent prediction by research firm Forrester suggests that the advertising industry will experience significant changes in job dynamics due to automation by 2030.
The Details: About one-third of jobs in the industry are expected to be at risk of automation, but not all roles will be displaced by technology. Process-oriented or physical jobs that follow strict instructions are more susceptible to automation. Among the roles that are likely to face the highest risk of automation are clerical, secretarial, administrative, sales, and market research positions. Conversely, jobs that involve creative problem-solving, such as writers, authors, editors, prompt engineers, trainers, data scientists, and digital marketing and strategy specialists, are anticipated to thrive.
The Forrester report emphasizes that originality plays a crucial role in reducing a job's automation potential. Highly creative jobs and roles that guide AI outputs are predicted to dominate the job market. Generative AI technologies like ChatGPT, Google Bard, DALL-E, Midjourney, and Stable Diffusion will be leveraged to enhance productivity in senior and creative positions without resulting in job losses.
However, Forrester anticipates only modest job losses from generative AI over the next couple of years, owing to the numerous legal and ethical concerns associated with these technologies. Issues related to intellectual property rights, bias, and accuracy are among the factors behind this cautious outlook. Additionally, production workers, printing press operators, and set and exhibit designers are expected to be among the advertising roles least impacted by generative AI.
What It Means: Forrester joins a host of competing research firms that have made similar predictions. The report highlights that generative AI will reshape the composition of the agency workforce, shifting from an abundance of junior talent toward higher-salaried creative roles that collaborate with generative AI assistants, alongside an increased share of management positions.
Source: Campaign Asia
Generative AI Challenges Conventional Automation: High-Wage Knowledge Workers at Higher Risk Than Blue-Collar Workers
The Development: McKinsey Global Institute has come out with yet another report on AI.
The Details: It says generative AI has the potential to contribute approximately $4.4 trillion annually to the global economy, equivalent to around 4.4% of the total economic output.
To put this in perspective, the entire GDP of the United Kingdom in 2021 was about US$3.1 trillion. The study forecasts that generative AI, including chatbots like ChatGPT, could automate tasks that currently absorb 60% to 70% of workers' time, with half of all work potentially automated between 2030 and 2060.
McKinsey estimates that generative AI could generate between $2.6 trillion and $4.4 trillion in value across various industries, and that wider adoption of the technology could lift annual productivity growth by 0.1 to 0.6 percentage points over the next two decades.
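As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not McKinsey's), the $4.4 trillion and 4.4% numbers imply total global output of roughly $100 trillion:

```python
# Back-of-the-envelope check of the figures above (my arithmetic, not McKinsey's).
annual_value_usd_tn = 4.4          # upper end of the $2.6-4.4 trillion estimate
share_of_output = 0.044            # roughly 4.4% of total economic output
implied_world_output = annual_value_usd_tn / share_of_output   # ~100 ($ trillion)

uk_gdp_2021_usd_tn = 3.1           # the UK's 2021 GDP, cited above for scale
print(round(implied_world_output), round(annual_value_usd_tn / uk_gdp_2021_usd_tn, 1))
# -> 100 1.4, i.e., the upper estimate exceeds the size of the entire UK economy
```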
Industries such as banking, high tech, and life sciences are expected to feel the biggest impact from generative AI, measured as a share of their overall revenues.
What It Really Means: The significance of these numbers lies in the fact that generative AI's impact on the labor force challenges conventional notions of automation, especially for high-wage knowledge workers.
What it really means is that, unlike previous technological advances, which primarily affected lower-wage jobs, generative AI targets cognitive tasks, making higher-wage roles more susceptible to automation, something I've been saying in earlier newsletters, too.
Developed economies are likely to experience faster automation, particularly impacting white-collar work.
Source: McKinsey
Subscribe to our YouTube channel: youtube.com/@aiforreal
Some jobs will go, even as AI creates new jobs: OpenAI executive
At least Hollywood Actors and Technicians Are Safe
The Recording Academy, the institution behind the Grammys, recently revised its guidelines to establish that works created exclusively through the use of AI will not be eligible for awards. It has made clear that, across all categories, it will not recognize works that lack human authorship.
Nevertheless, the academy has made an exception for songs that incorporate elements of AI. Such songs will be considered for recognition as long as they contain "meaningful contributions" from a human.
Source: Reuters, BlueMountain Eagle
Computer Vision Summit, Berlin. Bridge the gap between state-of-the-art research and value-driving practical application.
Collaborate with pioneering engineers & data scientists at the leading end-user-led summit dedicated to helping you deploy & optimize intelligent vision systems. Explore the best of computer vision technologies as you equip your technical teams with state-of-the-art knowledge & full-stack solutions.
(If you want your AI webinar/event to be featured here*, email us: marketing (at) newagecontentservices.com)
*There’s a small fee.
How AI is Transforming Education in India
Edtech companies in India are embracing the power of artificial intelligence to transform education.
By leveraging AI, these companies are tailoring content to individual students' needs and offering personalized guidance and feedback. AI not only personalizes learning but also enhances student engagement, helping edtech firms retain students on their platforms. Industry experts believe generative AI has the potential to reshape the entire edtech sector.
One standout example is BYJU'S, an edtech firm that has introduced AI models such as BADRI and TeacherGPT. BADRI identifies students' strengths and weaknesses, delivering customized questions and learning videos to address their areas of improvement. TeacherGPT provides personalized guidance, evaluating student responses and gently guiding them towards correct answers, fostering independent problem-solving. BYJU'S aims to educate students not just on what to learn but also on how to learn using the power of AI.
Vedantu, another edtech firm, is developing a teaching assistant powered by generative large language models (LLMs) trained on its extensive content repository. This AI assistant curates content tailored to each student's specific needs, moving away from rigid rule-based approaches. Similarly, the Google-backed Unacademy uses AI for content creation and for streamlining doubt-clearing, including in regional languages.
Source: Your Story
Majority of Developer Respondents in US Study Say They're Already Using AI at Work
The News: A recent survey conducted by GitHub and Wakefield Research revealed that an overwhelming 92% of developers in the United States, working for companies with at least 1,000 employees, were already utilizing coding tools powered by AI.
The Details: The survey aimed to explore several important aspects of developers' professional lives, including productivity, team collaboration, and the role of AI in enterprise settings.
Over 80% of the developers surveyed expressed their belief that AI-powered coding tools had the potential to enhance team collaboration, elevate code quality, expedite project completion, and improve incident resolution.
Developers reported that acquiring new skills and creating innovative solutions had the most positive influence on their work. A substantial majority (92%) of the developers surveyed confirmed their usage of AI-powered coding tools, with 70% believing that these tools provide them with a competitive advantage in their work.
What Does It Mean: Clearly, this and other surveys have shown time and again that the developer community, at least, perceives AI as an opportunity to focus on solution design and skill development.
AI-powered coding tools not only boost individual developer productivity but also foster greater collaboration within teams. Perhaps such organizations should prioritize developers as the key to empowering customers.
Source: VentureBeat
AI Will Neither Dominate Nor Eradicate Jobs, Claims Expert
The launch of OpenAI's ChatGPT chatbot has intensified concerns about the potential takeover of human jobs by AI. But Professor Yann LeCun, one of the most prominent figures in AI, has dismissed these apprehensions, asserting that AI will neither dominate the world nor permanently eradicate jobs. This viewpoint diverges from the perspectives of other influential figures in AI, Geoffrey Hinton and Yoshua Bengio, who consider AI a threat to humanity.
LeCun contends that the fears surrounding AI are a manifestation of human nature projected onto machines, and he cautions against restricting AI research. Drawing an analogy with the invention of turbojets, he argues that it is futile to try to make a technology safe before it has even been created.
LeCun acknowledges that computers will eventually surpass human intelligence, but he believes it will take considerable time to reach that point. He suggests that individuals who feel uneasy about AI's safety should abstain from developing it. According to LeCun, concerns about the risks associated with AI stem from a lack of imagination regarding how the technology can be made safer. He emphasizes the importance of maintaining open AI research to avoid impeding innovation and progress. LeCun's remarks offer reassurance to those who harbor anxieties about the impact of AI on the job market.
What Does It Mean: The rapid advancements in AI technologies have prompted worries about the displacement of human jobs by AI. But an AI veteran like LeCun asserts that such concerns are unwarranted. He contends that the apprehensions surrounding AI stem from human tendencies projected onto machines. LeCun, in fact, wants to keep AI research open to foster innovation and progress without stifling its potential. His comments may alleviate, to a degree, the concerns of those who fear the repercussions of AI on the job market.
Source: Indian Express
EU Enacts Legislation on Undisclosed AI Content
The News: The European Union (EU) has taken a significant stance against undisclosed artificial intelligence content by enacting legislation to make it illegal.
The Details: This move is crucial to ensuring transparency in AI development, which has previously lacked proper legal oversight. Those who once believed AI would hand them an early retirement are realizing the reality is different. The EU's decision marks a pivotal shift in global AI regulation.
What It Means: The EU's action is a necessary step in promoting ethical and responsible AI development and usage. The absence of legal oversight in AI development has raised numerous ethical concerns, including the utilization of biased algorithms and the potential for discrimination. By outlawing undisclosed AI content, the EU sends a clear message that transparency and accountability are imperative in shaping this technology's future. As AI continues to gain prominence, the EU's decision is likely to have a substantial impact on the global development and regulation of AI.
Source: Medium
US Govt's Proactive Stand on AI
The News: In a concerted effort to address the emergence of generative AI systems and their potential implications for the economy and job market, President Biden recently engaged in discussions with artificial intelligence (AI) experts. Notable attendees included Tristan Harris, executive director of the Center for Humane Technology; Fei-Fei Li, Co-Director of Stanford's Human-Centered AI Institute; and Jennifer Doudna, a Professor of Chemistry at UC Berkeley, among others. Recognizing the significance of the matter, the White House also announced a $140 million investment to establish seven new National AI Research Institutes.
The White House chief of staff's office has reported regular meetings among top White House staff members to delve into this subject, and President Biden himself has engaged with numerous subject matter experts and technical advisors. These engagements aim to emphasize the importance of safeguarding rights and safety, ensuring responsible innovation, and implementing appropriate safeguards.
Source: Engadget
I am looking for sponsors for this newsletter. Contact me, and we can work out the best deal for you. Click on the tab or write to marketing (at) newagecontentservices.com
I came across an extremely interesting (and worrying) video by "SpaceWind" in which American theoretical physicist Michio Kaku talks about the world of quantum computers and their combination with AI, a kind of revolutionary software-plus-hardware mix.
The US government has already asked Google and NASA to stop their quantum computer development efforts, Kaku says in the video, because they noticed something terrifying happening.
For those of you who are unaware, quantum computers use qubits, which can represent information as both zero and one simultaneously. Quantum computing emerged as an idea around 1980 and has since changed our understanding of what computation can do. Quantum computers have the potential to solve complex problems faster, aid in analyzing data for signs of extraterrestrial life, revolutionize cryptography, improve space exploration, and threaten data privacy and encryption.
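To make the "zero and one simultaneously" idea a little more concrete, here is a minimal Python sketch (my own illustration, not something from the video) that represents a single qubit as two complex amplitudes and simulates measuring it:

```python
import numpy as np

# A qubit's state is a vector of two complex amplitudes; here, an equal superposition.
zero = np.array([1, 0], dtype=complex)     # the |0> basis state
one = np.array([0, 1], dtype=complex)      # the |1> basis state
qubit = (zero + one) / np.sqrt(2)          # "zero and one simultaneously"

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(qubit) ** 2         # -> [0.5, 0.5]

# Each measurement collapses the qubit to 0 or 1 with those probabilities.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities, samples.mean())       # roughly half the readings come out 1
```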
But the combination of quantum computers and AI does pose risks, even today, in terms of misinformation, bias, and manipulation, both in international conflicts and in law enforcement.
Source: YouTube
An Interesting Debate Emerges: Labeling Content to Discern Human or AI Influence
The Debate: A new development has been unfolding over the last few days: a debate on labeling content and other works based on their "dependency" on AI.
The idea behind such labels is to give the end user a clear understanding of whether a human or a machine was behind a work's creation.
This discussion has gained momentum in the pursuit of transparency, as labeling would enable individuals to discern the role of AI in shaping the content they consume.
Artefact's Three Principles for Building Trust in Generative AI: Transparency, Integrity, and Agency
The Details: Designing more approachable interactions with AI requires trust-building, and US agency “Artefact” believes this can be achieved through three principles: transparency, integrity, and agency.
While transparency and integrity draw from familiar concepts in human relationships and institutions, agency becomes crucial due to AI's non-human nature.
Artefact has envisioned three scenarios that offer users more control, aiming to foster trust in AI-generated content. Inspired by TV content ratings and nutrition labels, Artefact proposes a standardized badge system: H (100% human-generated), AI (100% machine-generated), and AI-H (a blend of human and machine-generated content). The AI-H badge visually represents the human-machine content ratio, with a comforting blue hue signifying trust as AI influence increases.
However, the successful implementation of these designs hinges upon the development of necessary technical and regulatory infrastructure.
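For illustration only, a badge-assignment rule along the lines described above might look like the sketch below; the function name and the decision to treat anything between the two endpoints as AI-H are my own assumptions, since the proposal itself only defines 100% human, 100% machine, and a blended middle category.

```python
from dataclasses import dataclass

@dataclass
class Badge:
    label: str        # "H", "AI", or "AI-H"
    ai_share: float   # fraction of the work generated by AI, from 0.0 to 1.0

def assign_badge(ai_share: float) -> Badge:
    """Hypothetical mapping from an AI-content ratio to one of the three badges."""
    if not 0.0 <= ai_share <= 1.0:
        raise ValueError("ai_share must be between 0 and 1")
    if ai_share == 0.0:
        return Badge("H", ai_share)       # 100% human-generated
    if ai_share == 1.0:
        return Badge("AI", ai_share)      # 100% machine-generated
    return Badge("AI-H", ai_share)        # blend; the badge would display this ratio

print(assign_badge(0.35))   # Badge(label='AI-H', ai_share=0.35)
```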
Source: Fast Company
There's another article in a similar vein by author Brandeis Marshall on Medium. Marshall says that while she recognizes the power and problems AI can bring, efforts to gatekeep AI's power and reduce its adverse impact aren't happening fast enough.
What are your views on this? Please do write in the ‘Comments’ section.
Heads up: A little help was taken from a machine to write this newsletter.












