It's Time for All to Be AI Literate
Artificial intelligence is the new language for workplace (and all life) transformation.
Let me begin this newsletter with some observations. They come from analysts, industry experts, artificial intelligence (AI) and human resources professionals, enterprise leaders, C-suite executives… you get the drift.
Here’s the first one:
Generative AI will be integrated into the digital core of enterprises to optimize tasks, enhance human capabilities, and unlock new growth opportunities. As a result, these technologies will establish a new language for the transformation of enterprises.
But to fully realize the potential of these (AI) technologies, it will be crucial to reimagine how work is performed and to assist individuals in keeping pace with technology-driven changes.
– Accenture report, “A new era of generative AI for everyone”
To me, this means two things: Companies are actively contemplating the use of generative AI and foundation models to improve how they work and grow their businesses. And these technologies will create new ways of doing things, so companies will need to adapt, and help their employees keep up with the changes, to benefit from them fully.
The same March 2023 report further underlines the importance of AI by pointing out that:
98% of global executives (surveyed) agree that AI foundation models will play an important role in their organization’s strategies in the next 3 to 5 years. And, 40% of all working hours can be impacted by large language models (LLMs) like GPT-4.
Meaning: Within the next 3-5 years, the repercussions of generative AI will be felt in 40% of ALL working hours. And I infer (hopefully correctly) that this 40% will primarily comprise manual, repetitive tasks such as answering customer queries or keying figures into an Excel sheet. Jobs gone to AI.
Some more…
UNESCO strives to bring technology literacy to students throughout the world by ensuring educators are using technology in every aspect of their teaching. The more students are familiar not only with learning about technology but learning with technology, the more they will be prepared to use technology to improve their lives.
Meaning: Now, more than ever, with the commercial advent of AI, it is evident that literacy – the way we read and write – must quickly be followed by digital literacy and then technology literacy if we are to better our lives.
Digital literacy is not the same as technology literacy; there’s a line dividing the two. Remember the old cassette tapes? Technology literacy is Side A and digital literacy is Side B. Different songs, but broadly the same genre.
Here’s my simple interpretation:
Technology literacy is the ability to be aware of and understand a technology, along with its potential risks and pitfalls. E.g.: AI, 3D printing, the Internet of Things.
Digital literacy means having the skills needed to live, learn, and work in a world where communication and information flow mostly through digital technology, such as social media, the internet, and smartphones. E.g.: Facebook, phone apps.
I am looking out for sponsors for this newsletter. Contact me and we can work out the best deal for you. Click on the tab or write in to marketing (at) newagecontentservices.com
So where am I going with all this?
AI is the new language for workplace transformation. All of you reading this article are digitally literate, but not all of you are AI literate. Like any other tech, AI has to be learned.
Here’s a quote by American futurist and writer Alvin Toffler:
The illiterate of the 21st century will not be those who cannot read and write but those who cannot learn, unlearn, and relearn.
Let me paraphrase Toffler:
The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn tech (in this case, AI).
Yes, unlike other technologies or inventions in the history of humans, AI IS DIFFERENT.
Here are some reasons why:
Adaptability and learning ability: AI can learn and adjust to new situations without being told what to do. This means it can improve over time and become more effective.
Autonomy: AI can make decisions and act independently, without human intervention. This can be both positive and negative.
Complexity: AI is very complicated, with many layers of algorithms and millions of lines of code. It can be hard for humans to understand how it works or predict how it will behave in certain situations.
Scale: AI has the potential to be used everywhere, affecting many parts of society. This means that the consequences of AI can be far-reaching and significant.
Speed: AI can process large amounts of data quickly and make decisions much faster than humans can. This allows it to perform tasks that would be impossible for humans.
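The first point – that an AI system improves from feedback rather than from explicit rules – can be made concrete with a toy example. The sketch below is purely illustrative (a tiny perceptron, the simplest trainable model; all numbers and the AND task are my own choices, not from any source above): it learns the logical AND rule from labeled examples alone, without the rule ever being written down.

```python
# A minimal sketch of "learning from data": a tiny perceptron that
# improves as it sees labeled examples, without being given explicit
# rules. Real AI systems are vastly larger, but the principle --
# adjust parameters from feedback -- is the same.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear rule from (x1, x2) -> 0/1 examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred              # feedback: how wrong were we?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Learn the AND rule purely from examples -- no rule is ever coded.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
print([predict(w, b, x1, x2) for x1, x2 in data])  # -> [0, 0, 0, 1]
```

The program is never told what AND means; it infers the rule by nudging its weights whenever it gets an example wrong – the same feedback loop, at a vastly smaller scale, that drives modern machine learning.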
What all of this means is that AI WILL massively impact ALL employment. If not now, then eventually. That’s a given.
Armed with this and associated knowledge, we need to prepare ourselves for the change that’s coming, if it hasn’t arrived already. For that, you need to do three things:
1) Track the progress made by AI
2) Be aware of its implications on various forms of employment
3) Subscribe to this newsletter (if you haven’t already) and make it a crucial co-pilot of your AI journey. This newsletter will cover important developments in AI related to employment, ethics and morality, and research.
Are You a Professional in Any of These Fields? Catch Up With the Progress AI Has Made
AI in commuting: AI has started helping with traffic management by analyzing data from sensors and GPS devices.
AI in smartphones: AI is making smartphones easier to use and more helpful with features such as face recognition, voice assistants, image editing, spam filtering, and gesture control.
AI in media streaming services: AI is suggesting personalized recommendations for videos and podcasts based on user preferences. AI is also creating captions, subtitles, and summaries for media.
AI in healthcare management: AI is helping doctors and nurses diagnose diseases, monitor patients, and perform surgeries. It is also helping researchers find new drugs and vaccines.
AI in banking and finance: AI is automating financial transactions, fraud detection, customer service, and financial advice. It’s also helping investors make better decisions by analyzing market trends.
AI in navigation and travel: AI is helping travelers plan their trips and is providing information on traffic, weather, and public transport.
AI in social media: AI is helping users find relevant content, moderate online communities, and prevent cyberbullying.
AI in manufacturing robots: AI is helping robots perform complex tasks that require precision and adaptability.
AI in education: AI is providing personalized learning experiences for students based on their abilities, interests, and goals. It’s also helping teachers grade assignments and design curricula.
AI in cybersecurity: AI is helping protect systems and networks from cyberattacks by detecting threats and responding to incidents. It is also helping users encrypt data and create strong passwords.
(This is not an exhaustive list.)
Sources:
(1) 16 AI Application Examples In Your Everyday Life – USM. https://usmsystems.com/ai-examples-in-our-daily-life/
(2) 36 Artificial Intelligence Examples to Know for 2023 – Built In. https://builtin.com/artificial-intelligence/examples-ai-in-industry
(3) AI Applications: Top 18 Artificial Intelligence Applications in 2023 – Simplilearn. https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications
(4) 8 Examples of Artificial Intelligence in our Everyday Lives – EDGY Labs. https://edgy.app/examples-of-artificial-intelligence
(5) Top 10 Applications of Artificial Intelligence (AI) in 2023 – Intellipaat. https://intellipaat.com/blog/applications-of-artificial-intelligence/
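To make one item from the list above concrete – spam filtering – here is a hypothetical toy sketch of a naive Bayes classifier, the classic technique behind early spam filters. The messages and words below are invented for illustration; real filters use far larger corpora and richer features, but the core idea (estimate how "spammy" each word is, then combine the evidence) is the same.

```python
# Toy naive Bayes spam filter: count words in known spam and ham,
# then classify a new message by which class makes its words more
# likely. All data is made up for the sketch.
import math
from collections import Counter

spam = ["win cash now", "free prize win", "claim free cash"]
ham = ["meeting at noon", "see the report", "lunch at noon"]

def word_counts(messages):
    c = Counter()
    for m in messages:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(counts, message):
    """Log-likelihood of the message's words, with add-one smoothing."""
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

def classify(message):
    """Pick the class whose word statistics best explain the message."""
    spam_score = log_prob(spam_counts, message)
    ham_score = log_prob(ham_counts, message)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free cash"))       # -> spam
print(classify("report at noon"))  # -> ham
```

Each word contributes evidence toward "spam" or "ham", and the class with the higher combined log-likelihood wins; production filters add class priors, huge training sets, and many more features, but this is the skeleton.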
Is It Too Late to Rein in AI? If Not, Then How Do We Do It?
The first salvo against the march of machines was fired by humans in an open letter, signed by over a thousand AI experts and funders, calling for a six-month pause in the training of new AI models. OpenAI has also said it is not working on GPT-5.
Which raises the question: which is the best option, if any, to “control” AI and its advancement?
For now, there seem to be five options:
Temporarily halt all training till suitable mechanisms are put in place
Regulate it
Self-regulate it
Let the lawyers loose
Let things continue as they are
Option 1 is temporary by nature, so in reality we are left with four options.
My take on new tech has always been: embrace it. It’s the same for AI.
AI is a powerful and rapidly evolving technology with many benefits and risks for society. But isn’t it the same with humans? We all have a good and a bad side, plus a grey area.
Some people think (and they are not entirely wrong) that AI could spiral out of control and threaten human dignity, autonomy, and democracy. Some go further and claim it could become sentient (capable of sensing and becoming aware of itself and its surroundings) and eventually destroy the human race.
The way things are today, I would think regulation, by introducing laws across countries, is the best way forward – at least until circumstances force us to consider other options.
AI is too revolutionary to be simply halted in its tracks. Responsible AI systems could bring humanity enormous benefits, but only if we first address their potential consequences.
“(But) for these systems to reach their full potential, companies and consumers need to be able to trust them,” Alan Davidson, US Assistant Secretary of Commerce for Communications and Information, said in a National Telecommunications and Information Administration news release about seeking input on AI accountability.
Which then brings us to the question - should the regulation start at the development stage itself or be limited only to AI’s deployment? I have no answer to this one. Not yet.
What do you say? I am putting it out here as:
Q1) Should there be rules governing AI’s development, or should the rules only kick in at the implementation stage? Write in with your answer.
Regulation of whatever kind, though, will ensure some measure of ethical and responsible application of AI in human lives. Some of these measures could go a long way toward letting us have our cake and eat it too:
· Defining unambiguous and transparent standards and principles to guide AI development and implementation
· Establishing independent oversight bodies and mechanisms to oversee and audit AI systems
· Enforcing accountability and liability for any negative consequences and errors caused by AI systems
· Raising public awareness and educating individuals on the advantages and risks of AI technology
· Encouraging collaboration and communication between various stakeholders, such as governments, industry, academia, civil society, and end-users
Self-regulation WILL NOT WORK.
Very quickly, here are some recent developments on the regulatory front:
The White House is considering limits on how AI can be used, and could place rules on the development of AI systems such as ChatGPT
The European Union is also debating regulation of AI, and is considering laws to bolster rules on its development and use
Some experts feel the only way to rein in the AI threat is to bring in lawyers. I find that a rather “western” or American way of handling things. After all, not all legal systems around the world are built equal.
But here is a summary of their proposal: to ensure that AI does not perpetuate falsehoods, misinformation, or hate, or even sabotage companies’ rivals, impose legal liability on developers and companies to “create a potent incentive for them to invest in refining the technology to avoid such outcomes.”
Ref: https://www.foxnews.com/opinion/rein-ai-threat-let-lawyers-loose
What do you say? I am putting it out here:
Q2) Do you think that implementing a legal liability framework for AI developers would compel companies to prioritize ethical considerations and ensure that their AI products adhere to social norms and legal regulations? Write in with your answers.