AI's Long-Term Impact on Jobs: From Replacement to Transformation
While fears of robots stealing our jobs have dominated headlines for over a year, recent studies paint a more nuanced picture. Artificial intelligence (AI), it turns out, is unlikely to cause mass unemployment in the near future. Well, at least that’s what some new research shows. Instead, AI is poised to become a powerful partner, augmenting human capabilities and transforming the nature of work over the coming decades.
Several factors contribute to this slower-than-expected AI takeover. One is the inherent complexity of many jobs. Tasks requiring creativity, adaptability, and social intelligence remain firmly in the human domain. Additionally, the development and implementation of AI technology itself takes time and significant resources.
Also, the skills gap between AI and humans is vast. AI excels at crunching numbers and following pre-defined rules but struggles with ambiguity, empathy, and critical thinking. These are precisely the qualities that make humans irreplaceable in many workplaces.
The rise of AI will undoubtedly reshape the job landscape. However, instead of widespread job losses, experts believe we can expect to see a shift towards roles that leverage both human and AI capabilities. Jobs requiring collaboration, innovation, and ethical decision-making will become increasingly important.
Now, before going ahead, let’s first look at the AI Job Indicator.
NEW: You can search this newsletter’s website using our AI-powered chatbot. Search for answers to questions on AI and employment. Go ahead. Try it!
In Today’s Newsletter:
How Digital Marketing is Being Influenced by AI
Slower Pace of AI Job Displacement Than Projected: MIT Study
Experts Say Engage With AI
SAP to Restructure 8,000 Jobs
OpenAI Ties Up With Arizona State University
UK Leads in Adoption of AI
BMW to Deploy General Purpose Humanoid Robots
How Employees Have Started Posting Layoff Videos
AI Can Help Understand Human Imagination
Google DeepMind Achieves Milestone with AlphaGeometry
Facebook bootstraps LLaMa 2
Machine Translation Use Leads to Decline in Quality of Translated Content: Study
MIT and Google Introduce Health-LLM
UK's National Cyber Security Centre Raises Alarm on AI's Impact on Cyber Threat Detection
Three Nations Form Alliance On AI
New Standard for LLMs Compliance With Copyright Laws
Someone Made Deep Fake of POTUS
Mexico’s Efforts to Regulate AI
UAE Gets New Body to Oversee AI
Well-known Japanese Writer Acknowledges Use of Gen-AI
Plus….Top Picks, Commentary and more.
Announcements
Get In Touch to Unlock the Power of AI for Your Business and Personal Growth
Are you a brand, a business, or an individual looking to get insights into artificial intelligence?
As an expert AI communicator, I specialize in bridging the gap between complex AI technologies and practical applications for businesses and individuals. Whether you're looking to enhance your company's tech capabilities or simply curious about how AI can streamline your daily life, I offer tailored solutions and insights.
Stay ahead of the curve with my services - your guide to navigating the ever-evolving world of artificial intelligence.
Contact me today to transform your AI understanding into real-world success!
Copycats and Critics: The Copyright Conundrum of LLMs
Large language models (LLMs) are revolutionizing our interactions with technology, crafting poems, translating languages, and even generating code. But their impressive feats raise a thorny question: who owns the creativity when the training data is a vast ocean of copyrighted material?
Proponents of LLM development often invoke "fair use," arguing that copying snippets for training falls under this legal protection. They contend that LLMs don't directly reproduce copyrighted works, but rather learn patterns and statistical relationships, akin to a student absorbing knowledge from various sources. Additionally, they highlight the transformative nature of LLMs, where the output is often something entirely new and unexpected.
Critics, however, point to the sheer scale and nature of LLM training. Scraping millions of books, articles, and even code repositories raises concerns about fair use limitations being stretched beyond recognition. They argue that LLMs essentially build their capabilities on the backs of creators, without permission or compensation. This can devalue original works and potentially stifle future content creation.
The debate boils down to balancing innovation with creator rights. Some propose stricter "opt-in" systems for copyrighted material inclusion in LLM training data. Others suggest exploring compensation models or even granting creators partial ownership over generations based on their works.
Ultimately, navigating this legal and ethical quagmire will require collaboration. A nuanced framework needs to be established, recognizing the potential of LLMs while safeguarding the rights and livelihoods of the creative workforce. Only then can the next generation of language models truly flourish, built on a foundation of respect and shared value.
What are your views on this? Write in the ‘Comments’ section.
How AI is Impacting Digital Marketing
In a private meeting at Davos, top marketing and communications executives gathered to discuss the impact of AI on their profession. They acknowledged the potential of AI but also highlighted challenges such as accountability, transparency, privacy, and regulatory compliance.
Many executives have already deployed AI solutions to increase productivity, with examples of AI enabling tasks that were previously impossible.
Key points that emerged included the need for change management and adoption of AI, its impact on efficiency, balancing opportunities and risks, transparency and ethical considerations, accountability and liability, and the importance of the human element in marketing and communications.
The overall tone of the discussion was cautious optimism, recognizing the inevitability of AI's influence in their field.
Source: yahoo.com
MIT Study Suggests Slower Pace of AI Job Displacement Than Projected
New findings by MIT's Computer Science and Artificial Intelligence Laboratory indicate that the rapid replacement of human jobs by AI may not occur as swiftly as some predictions suggest.
The research reveals that only 23% of worker wages paid for vision-related tasks would currently be economically attractive to automate with AI.
The data highlights that the cost of constructing custom AI systems remains prohibitively high when compared to human labor expenses across various occupations.
Even with the availability of affordable pre-trained models, the study suggests that it would take decades for AI to effectively replace human workers in numerous low-wage, multi-tasking roles.
The study, funded by the MIT-IBM Watson AI Lab, used online surveys to collect data on visually-assisted tasks across occupations. It concluded that only 3% of tasks can be cost-effectively automated today, with a potential increase to 40% by 2030 if data costs fall and accuracy improves. The study also discusses the concerns about AI's impact on the job market, with some experts cautioning about the potential negative fallout.
The study underscores the significance of the pace at which AI costs decrease and performance improves, as these factors will significantly influence the extent of disruption in job markets.
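The study's core test, whether the annualized cost of building and running an AI system undercuts the wages paid for the task it would replace, can be illustrated with a toy calculation. The numbers and function below are illustrative assumptions for this newsletter, not figures or code from the study:

```python
def automation_is_economical(annual_task_wages, system_build_cost,
                             annual_running_cost, amortization_years=5):
    """Return True if automating the task costs less per year than
    paying humans for it (a simplified version of the study's test)."""
    annualized_cost = system_build_cost / amortization_years + annual_running_cost
    return annualized_cost < annual_task_wages

# A bespoke vision system is rarely worth building for a low-wage task...
print(automation_is_economical(annual_task_wages=15_000,
                               system_build_cost=200_000,
                               annual_running_cost=10_000))  # False: too costly
# ...but the same system can pay off where the wage bill is large.
print(automation_is_economical(annual_task_wages=120_000,
                               system_build_cost=200_000,
                               annual_running_cost=10_000))  # True
```

This is why the study's conclusion hinges on falling system costs: as `system_build_cost` drops, more low-wage tasks cross the break-even line.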
What It Means For Us
Contrary to initial forecasts, AI's influence on the labor market may unfold more gradually than anticipated, challenging some of the more pessimistic predictions of widespread job loss.
Source: TIME
Engage With AI to Overcome Its Fear
Engaging with AI and learning about it is crucial to overcoming the fear of job loss and harnessing its potential for positive outcomes.
In a talk on AI and robotics at the UK Pavilion at the Kolkata Book Fair in India, Robert Potts, an educator and researcher, said that the potential negative uses of AI, such as deep fake videos, can be countered with proper decoding technology and vigilance to distinguish real from fake content.
Potts covered the origin and concepts of AI, its uses, and potential misuse, including deep fakes and AI-generated voice samples. He emphasized the importance of engaging with AI to overcome fear and adapt to its impact on jobs.
Potts also highlighted the need for understanding and evolving with technology to prevent AI from functioning without human input. He emphasized the value of self-education and adaptability in working with AI.
Source: telegraphindia.com
SAP to Restructure 8,000 Jobs
SAP, a major enterprise software company, is restructuring 8,000 jobs as it focuses on AI and plans to spend €2 billion on this transformation.
The company aims to prepare for future revenue growth and expects that most of the affected positions will be covered by voluntary leave programs and internal re-skilling measures.
SAP's CEO stated that the company is intensifying its investments in strategic growth areas, particularly in Business AI. This move comes as many companies, including Wipro and Huawei, are prioritizing AI and making large investments in the technology.
Source: cnn.com
OpenAI Forges Landmark Partnership with Arizona State University for ChatGPT Integration
OpenAI announced its inaugural partnership with a higher education institution on Thursday. Starting from February, Arizona State University (ASU) will have comprehensive access to ChatGPT Enterprise, intending to seamlessly incorporate it into coursework, tutoring sessions, research endeavors, and various other applications.
The collaboration took shape over at least six months, beginning with a visit by ASU Chief Information Officer Lev Gonick to OpenAI's headquarters. Notably, even before the partnership was formalized, the university's faculty and staff had already been actively using ChatGPT and various other artificial intelligence tools, as Gonick revealed in an interview with CNBC.
Launched in August, ChatGPT Enterprise represents the business-oriented tier of ChatGPT, providing unbounded access to GPT-4, performance enhancements of up to twice the speed of its predecessors, and allocated API credits.
ASU strategically plans to capitalize on this collaboration by developing a personalized AI tutor tailored explicitly for students. This tutor aims not only to provide assistance in specific courses but also to offer support across a diverse array of study topics. Gonick emphasized the focus on STEM subjects, acknowledging their pivotal role in higher education. The university's ambitious objective involves deploying the tool in its largest course, Freshman Composition, to deliver valuable writing assistance to students.
Source: cnbc.com
UK Government Leads in Embracing Artificial Intelligence
The UK Government leads the world in acknowledging the transformative potential of AI, backing that recognition with the heaviest government investment in the technology.
Demonstrating a strong commitment to AI innovation, Britain has made substantial investments aimed at positioning the nation at the forefront of this rapidly advancing field.
Notably, the UK government has pledged significant financial support for research and development in the AI sector, spanning diverse domains such as healthcare, education, finance, and transportation. The allocated funding is designed to propel cutting-edge projects, fostering a vibrant ecosystem of AI innovation to address real-world challenges.
Source: bbntimes.com
BMW to Deploy General Purpose Humanoid Robots
Figure has signed its first commercial deal with BMW to deploy its general-purpose humanoid robots in real-world manufacturing tasks, marking a significant milestone for the company.
The humanoid robot, Figure 01, has demonstrated its ability to autonomously perform tasks, such as making coffee, and is being trained for specific use cases at the BMW plant, such as body shop work and warehouse logistics.
The CEO of Figure, Brett Adcock, anticipates a rapid advancement in the field of humanoid robotics, expressing confidence in the potential for significant progress and growth in the near future.
Source: newatlas.com
How Employees Have Started Posting Layoff Videos
The article discusses the trend of employees recording and posting videos of their layoffs on social media. While some see it as a way to expose company failings and seek support, others caution against potential negative effects on future job prospects and legal implications. Experts advise laid-off individuals to consider the purpose and potential consequences before posting such videos. The article highlights a shift in the traditional professional image and the impact of individuals like Brittany Pietsch, who recorded her layoff and received support from others.
Source: bbc.com
How AI Can Help Understand Human Imagination
A study by the University College London's Institute of Cognitive Neuroscience explores how AI can help understand human memory and imagination. The study, published in Nature Human Behaviour journal, highlights the role of AI in extracting and simulating experiences, similar to human recollection and imagination.
It focuses on the collaboration between the hippocampus and neocortex in memory, imagination, and planning. The AI model was trained using images to mimic the brain's encoding of scenes and conceptual representation. The study sheds light on how the brain reconstructs events and generates new experiences, offering insights into memory distortions and the potential of AI in understanding cognitive processes.
Source: wionews.com
Exciting Opportunities for Newsletter Sponsorship!
Discover available ad and sponsorship slots for this newsletter by clicking on this link. Explore the various packages to find the perfect fit for your brand.
If you would like further information, please don't hesitate to contact me. Let's collaborate and customize the best deal tailored to your specific needs. Click on the tab above. Or, for individual requirements, write to marketing (at) newagecontentservices.com
Google DeepMind Achieves Milestone with AlphaGeometry: AI Outperforms Humans in Complex Geometry Problem Solving
In a significant breakthrough, researchers at Google DeepMind have developed AlphaGeometry, an artificial intelligence (AI) system that surpasses the capabilities of most humans in tackling challenging geometry problems. AlphaGeometry demonstrated proficiency comparable to an International Mathematical Olympiad (IMO) gold medallist, solving 25 olympiad-level mathematical geometry problems, a notable improvement over the previous best method that could only solve ten.
AlphaGeometry stands as the first computer program to outperform the average IMO contestant in proving Euclidean plane geometry theorems. This advancement not only marks a triumph in the field of AI but also highlights the potential for automated invention in complex problem-solving scenarios.
Source: google.com
Get Our GPT - AI For Real Bulletin. The ultimate source for the latest AI news! 🤖 Dedicated to delivering up-to-date and accurate information on AI advancements and their impacts across various sectors. Whether it's the newest tech breakthroughs or historical AI context, “AI For Real Bulletin” makes understanding AI simple and reliable.
Stay ahead with “AI For Real Bulletin”, where AI news is always clear and credible. 🌐
Facebook bootstraps LLaMa 2 so it competes with GPT-4, Claude 2, and Gemini Pro
Facebook researchers have introduced a method known as "Self-Rewarding Language Models," employing language models to autonomously generate datasets for bootstrapping, resulting in enhanced performance. This innovative approach has proven successful, enabling them to take a LLaMa 2 70B model and fine-tune it to achieve competitive performance, as demonstrated through evaluations, comparable to more costly models such as GPT-4, Claude 2, and Gemini Pro.
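The loop described above has the model generate its own candidate responses, judge them itself, and keep the best and worst as preference pairs for further fine-tuning. Here is a heavily simplified sketch of one such iteration; the function and the toy stand-ins are placeholders invented for illustration, not the paper's actual code, and the DPO training step that consumes the pairs is omitted:

```python
import random

def self_rewarding_iteration(generate, score, prompts, num_candidates=4):
    """One iteration of a self-rewarding loop: sample several candidate
    responses per prompt, rank them with the model's own judge, and keep
    the best and worst as a (prompt, chosen, rejected) preference pair."""
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(num_candidates)]
        ranked = sorted(candidates, key=score, reverse=True)
        pairs.append((prompt, ranked[0], ranked[-1]))
    return pairs

# Toy stand-ins: "generation" appends random emphasis; the "judge" prefers longer replies.
toy_generate = lambda p: p + "!" * random.randint(1, 5)
toy_score = len
pairs = self_rewarding_iteration(toy_generate, toy_score, ["hello", "goodbye"])
for prompt, chosen, rejected in pairs:
    assert toy_score(chosen) >= toy_score(rejected)
print(len(pairs))  # one preference pair per prompt -> 2
```

The appeal of the approach is that the same model plays both roles, so the training signal improves as the model does, without new human-labeled data.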
Source: arxiv.org
Amazon Researchers Uncover Detrimental Impact of Cheap Machine Translation on Low-Resource Languages
Amazon researchers have revealed that the widespread availability of inexpensive machine translation has led to a decline in the quality of translated content, particularly for low-resource languages.
The researchers highlight that machine-generated, multi-way parallel translations not only dominate the overall volume of translated content in lower resource languages on the web but also constitute a substantial portion of the total content in those languages.
What It Means For Us
The study raises concerns about a potential 'rich get richer, poor get poorer' effect in the evolving landscape of AI tools worldwide. While well-supported languages benefit from increasingly accurate translations and abundant data, low-resource languages face the risk of deteriorating digital representation. Automated translation engines, relying on sparse data, may contribute to an expanding cloud of poor-quality translations, further amplifying the existing disparities between languages.
Source: arxiv.org
Now, a Health LLM
MIT and Google researchers introduced Health-LLM, which evaluates LLMs for health prediction tasks using data from wearable sensors. The Health-Alpaca model, a fine-tuned version of the Alpaca model, emerged as a standout performer, demonstrating the feasibility of smaller, more efficient models in health prediction tasks. The study emphasizes the importance of context in enhancing model performance, opening up new possibilities for personalized healthcare.
Source: marketpost.com
UK's National Cyber Security Centre Raises Alarm on AI's Impact on Cyber Threat Detection
The National Cyber Security Centre (NCSC) in the UK has issued a warning about the growing complexity introduced by artificial intelligence (AI) in identifying cyber threats, including phishing emails. The sophistication of AI tools, such as generative AI and large language models, is anticipated to pose challenges in recognizing and defending against various attacks, notably ransomware incidents.
The NCSC also expressed concerns about the potential utilization of AI by state actors in advanced cyber operations. While the government has released new guidelines to bolster businesses' resilience against ransomware attacks, cybersecurity experts are advocating for more robust measures to tackle the evolving threat landscape.
Source: theguardian.com
Three Nations Form Alliance On AI
Germany, France, and Poland are forming an alliance to push for EU-wide coordination in developing and deploying AI to catch up with the US and China.
The EU is focusing on attracting top AI talent, ensuring responsible use of AI technologies, and fast-tracking access for AI start-ups to high-performance computers.
EU institutions and member states are urged to work closely together to shape the impact of AI on the bloc and take advantage of the opportunity to actively shape the technology breakthrough in Europe.
Source: sciencebusiness.net
New Standard for LLMs Compliance With Copyright Laws
There have been objections to the use of copyrighted material by tech companies to train their LLMs. This article discusses the emergence of certification programs, such as “Fairly Trained”, that provide labels to AI companies to demonstrate their compliance with copyright laws when training their models.
Fairly Trained's first accreditation, the Licensed Model certification, is awarded to companies that obtain permission to use copyrighted training data and do not rely on fair use arguments. The program has already certified nine generative AI companies in image, music, and voice generation. The issue of training AI models with copyrighted data has led to legal disputes, and proposed legislation may require AI companies to disclose their training data sources to address copyright concerns.
Source: theverge.com
Someone Made Deep Fake of POTUS
The New Hampshire attorney general's office is investigating an illegal robocall that used artificial intelligence to mimic President Joe Biden's voice and discourage voters from participating in the state's primary election.
The call falsely claimed that voting in the primary would prevent voters from participating in the general election. The robocall falsely appeared to come from the personal cellphone number of a former state Democratic Party chair. Experts warn that the use of generative AI technology for voter suppression is a growing concern, and there are calls for regulation.
The Biden campaign is considering additional actions to address the issue. The Trump campaign denied involvement in the incident.
Source: pbs.org
- Your story matters. Your innovation matters -
If you are an AI entrepreneur, startup founder, etc., and are eager to say your piece in under a minute, send an email with the subject line “Yes, to 60 secs” to marketing (at) newagecontentservices.com. We promise to respond ASAP.
Mexico’s Efforts to Regulate AI
The article discusses Mexico's efforts to regulate the use of artificial intelligence (AI) in the country, aiming to balance innovation with controlling potential harms. Senator Alejandra Lagunes leads the National Artificial Intelligence Alliance and emphasizes the need for regulations to address issues such as disinformation and digital harassment.
The article highlights the importance of developing AI tools to address local needs and the potential impact on job opportunities. It also addresses concerns about AI's influence on elections and emphasizes the need for media literacy.
Lagunes plans to provide an AI policy paper to the winner of the upcoming Mexican election, outlining the groundwork for regulations and policies to harness AI's potential, including education and training initiatives for the population. The article underscores the potential for AI to benefit Mexico while also emphasizing the importance of responsible regulation.
Source: context.news
UAE Gets New Body to Oversee AI
The president of the United Arab Emirates and ruler of Abu Dhabi, Sheikh Mohamed bin Zayed Al Nahyan, has established the Artificial Intelligence and Advanced Technology Council (AIATC) to oversee the development and implementation of AI policies and strategies.
The council aims to enhance Abu Dhabi's status in the AI field and collaborate with local and global partners. The UAE government has been focusing on technological advancements, including blockchain, crypto regulation, metaverse strategy, and offering commercial licenses for AI and Web3 businesses. Additionally, the country has established a dedicated free economic zone for Web3 and AI service providers.
Source: cointelegraph.com
Well-known Japanese Writer Acknowledges Use of Gen-AI
The recipient of Japan's highly esteemed literary award has openly acknowledged that approximately "five percent" of her futuristic novel was composed with the assistance of ChatGPT. She expressed gratitude for how generative AI played a role in unlocking her creative potential.
Ever since the 2022 introduction of ChatGPT, a user-friendly AI chatbot capable of instantly producing essays on request, concerns have been rising about its implications across various sectors, including the realm of literature.
Source: economictimes.com
…where every week, I shortlist interesting articles, posts, podcasts, and videos on AI.
Visual Plagiarism by LLMs
Copyright infringement in the training of LLMs has become a contentious issue, and for those opposed to the practice, this article is a good read. It discusses the potential for large language models (LLMs) and image-generating models to produce outputs that closely resemble copyrighted materials, raising concerns about copyright infringement.
The authors present evidence from experiments with Midjourney V6 and DALL-E 3, showing that these AI systems can create images resembling trademarked characters and movie scenes. They also highlight the lack of transparency regarding the training data used by these systems and the potential ethical and legal implications.
The authors call for greater transparency, proper licensing of training data, and compensation for artists. They also emphasize the need for technical solutions to accurately report the sources of generated images and to filter out potential copyright violations.
Source: spectrum.ieee.org
Adobe’s Approach to Training Gen-AI Models
Since we are on the subject of copyright, I find Adobe's approach to training its generative AI model, Firefly, quite refreshing. In this interview, Dana Rao, general counsel and chief trust officer at Adobe, explains how Firefly was trained on licensed content rather than scraped web data, addressing potential copyright issues and giving the company a competitive advantage.
Rao also delves into the implications of generative AI for copyright law, including the potential economic harm to creators and the need for new legislation to address style protection and impersonation of artistic works. Adobe's push for a federal anti-impersonation right to protect artists from economic harm due to style appropriation underscores its proactive approach to the consequences of generative AI. Additionally, Rao highlights Adobe's efforts in the Content Authenticity Initiative to tackle deepfake issues.
The discussion around the intersection of AI and copyright law in this interview highlights the complexity and evolving nature of these legal and ethical challenges, which require a careful balancing of innovation and legal compliance.
Source: theverge.com
Exploring the Path for AI to Attain the Trustworthiness of Wikipedia as a Source of Information
We all fear it: the volume of machine-generated content being churned out daily may eventually leave the web awash in garbage, putting a question mark over the trustworthiness of such content. I found this podcast in which Katherine Maher, former Wikimedia Foundation CEO and newly appointed chief executive of NPR, delves into the essential elements of trust that underlie Wikipedia.
The integration of AI is set to reshape the landscape of internet-distributed information and its mechanisms. Aria Finger, Katherine, and the host explore the potential of creating and scaling positive online spaces and community-driven ideas within the realm of AI.
The conversation delves into the possibility of in-text citation as a viable approach for generative AI. Additionally, they discuss Wikipedia's response to its use as training data for AI models. Considering the global perspective, there is an examination of the current varying levels of optimism about AI, with the West appearing less optimistic than the rest of the world. The discussion also touches on factors such as cost and more.
Here's the full episode with Katherine: