Push for AI Regulation by Governments and Industry Gains Momentum
There's suddenly a rush to put guardrails around this new tech.
In the last few days, there have been remarkable endeavors by both the public and private sectors to establish regulatory frameworks for artificial intelligence (AI) technology. The United States, the United Kingdom, the European Union, the United Nations, and intergovernmental organizations such as the G7 have all shown a newfound and urgent awareness of the potential hazards posed by unregulated AI. As a result, there is a collective push to introduce safeguard measures. I’ve recorded some of them in this newsletter.
But before I go deeper into this, let’s look at the AI Job Indicator’s movement in the week gone by.
Humans were ahead of AI this week, where employment was concerned.
In Today’s Newsletter:
San Francisco is World AI Capital Where Jobs are Concerned
AI Unicorn Fails
AI Job Openings in UK Fewer Than Two Years Ago
New Buzzword Hits AI Ecosystem - “AI Exposure”
Now, Gen-AI Assistant on the Shop Floor
There May Soon Be An AI Super Pilot in Airplane Cockpit
Back to School: Canva Announces “Classroom Magic”
AI “Godfathers” Face-off Over Existential Risks
Facebook Unveils “Habitat 3.0”
AGI Within Five Years?
Here’s One Study on How Americans Are Approaching AI
AGI Is a Hollow, Meaningless Dream That's Never Going to Come: Expert
UK’s AI Safety Summit: We Are All In It Together, Say Countries
Biden Signs Directive on AI
G7 Nations Set to Establish Voluntary Code of Conduct
OpenAI Unveils 'Preparedness' Team
Chinese, Too, Want Strong AI Regulation
UN Prepares Interim AI Report and Advisory Body
Four AI Majors Set Up US $10 Million Research Safety Fund
Major Ruling in Copyright Violation Case in US
Don’t Move Too Fast and Break Things, Says Google DeepMind Co-founder
Are Women Not Taking to Gen-AI as Much as Men?
Plus “Top Picks” and much more.
The Great AI Confusion
Artificial Intelligence has been a subject of research and conversation for many decades, dating back to the early '50s. However, despite its long history, there continues to be a considerable amount of ambiguity surrounding this field.
In my observations, readings, and experiments, it's evident that even those deeply involved in AI can find themselves puzzled about certain aspects of the technology. Understanding the inner workings of AI can sometimes be a challenge, giving rise to ongoing debates concerning the transparency of AI systems, often characterized as "black box" versus open source approaches.
While the concept of AI is not new, recent advancements in technology, such as the release of ChatGPT, have brought it to the forefront of public attention. The ball was set rolling last November, and now, like in a bowling game, it's threatening to knock the pins (us humans) down.
“AI” is often used to describe various technologies, from simple chatbots to complex machine learning algorithms. Perhaps this lack of clarity is adding to the confusion and misunderstanding not only among the general public but also among those within the AI ecosystem itself, such as AI scientists and researchers.
Take, for instance, the ongoing public debate between the so-called “godfathers” of AI over developments like AGI (Artificial General Intelligence), which could revolutionize our world in ways we can't even imagine. AGI refers to a hypothetical machine that can perform any intellectual task that a human can. While some experts believe that AGI is inevitable, others are more skeptical and warn of the potential dangers associated with such technology.
The debate over AGI is just one example of how AI is still a relatively new and rapidly evolving field. As such, there are many unanswered questions and uncertainties surrounding it.
Some of the questions (in no particular order) over which there’s intense debate in society:
Will it benefit society at large or be detrimental to it?
Will we see AGI, and if so, when?
Will humanity be eradicated eventually?
Will humans play second fiddle to a superior intelligence created ironically by Man?
Are we all going to lose our jobs?
Some experts argue that AI carries the same level of threat as pandemics and nuclear warfare, with a few even hinting at the possibility of human extinction.
Concerns such as these aren't unfounded. For example, some of us are witnessing apparent issues with generative AI systems, which produce inaccurate and inexplicable content, often used to spread false information and deceive people. It's troubling to see how rapidly and extensively digital technology is advancing.
The worry is that it could lead to widespread surveillance and potentially disrupt our information ecosystem, undermining our democratic systems with deepfakes, false news, and online harassment.
The fear of a bleak job market, the rise of global crime, and the increasing consolidation of wealth and power in the hands of a select few corporate giants are also on the radar.
Furthermore, there's apprehension about the harmful effects of weaponized social media platforms, which could contribute to widespread stress, anxiety, depression, and feelings of isolation among the population.
One thing is clear, though: AI will transform our world in profound ways. The general public needs to stay abreast of developments around this tech to make informed decisions about how they want to interact with it. (That's why this newsletter exists.)
NEW
You Can Search This Newsletter Website For Answers To Questions On AI and Employment. Try it!
San Francisco Cements Position as Global AI Capital
Data provided by Comprehensive.io shows that San Francisco in the US has solidified its status as the world's leading hub for AI, primarily evidenced by the abundance of AI job listings.
San Francisco outpaces other cities, notably Cupertino and Los Angeles, by a substantial margin, with approximately 22% of AI job postings originating from companies headquartered in San Francisco. What's intriguing is the absence of a single dominant AI giant in the city, implying that job prospects are dispersed among various firms. Nevertheless, the influence of the AI sector extends beyond the city limits into the wider Bay Area, encompassing Silicon Valley.
When considering businesses situated throughout the Bay Area, the share of AI job postings swells to around 59%, surpassing the cumulative total of every other city in the nation. Corporations based in Cupertino and Mountain View, like Apple and Google, contribute nearly 18% of AI job postings. Despite a slowdown in hiring within the tech industry, AI job listings have surged by an impressive 22% in the last three months.
The AI boom has given rise to what is known as an "AI premium," where engineers with AI expertise are demanding an approximately 21% higher salary compared to their typical software engineering counterparts. Local professionals in San Francisco and the Bay Area who venture into the AI realm are being handsomely remunerated for their valuable skills. The demand for AI talent and the excitement surrounding AI technology have both played pivotal roles in elevating salaries and career prospects in the region.
In the grand scheme of things, San Francisco city, together with the broader Bay Area, has ascended to a global epicenter for AI. The area boasts a remarkable concentration of AI job listings, drawing talent and investments from all corners of the world. The growth of the AI industry in this region has been nothing short of remarkable, outperforming other cities and regions in terms of job opportunities and overall industry activity.
Source: sfstandard.com
AI Startups Are Already Failing
Olive AI, the Columbus, Ohio-based company, has announced that it is selling off its remaining assets and winding down its operations.
The company recently divested its clearinghouse and patient access business units to Waystar, and its prior authorization business unit to Humata Health. Olive, a healthcare startup, experienced rapid growth in 2020 and 2021, driven by the surge in digital health funding and the increased demand for automation during the COVID-19 pandemic.
In December 2020, Olive secured US $225 million in funding at a valuation of $1.5 billion, followed by a substantial $400 million funding round just seven months later, which raised its valuation to an estimated $4 billion. The company amassed a total of $832 million in funding since March 2020, with support from prominent investors like General Catalyst, Ascension Ventures, Oak HC/FT, and SVB Capital.
In 2021, Olive's enterprise AI was deployed in more than 900 hospitals across over 40 U.S. states, including more than 20 of the top 100 U.S. health systems. Since then, however, the business has been contracting steadily, culminating in this wind-down.
Source: fiercehealthcare.com
Despite the Brouhaha, Not Too Many AI Job Openings in England
The artificial intelligence sector in England currently has approximately half the number of job openings as it did two years ago, indicating a cautious approach by employers toward this technology despite government efforts to establish it as a world-class industry.
Analysis of data from Reed Recruitment by Bloomberg reveals that job postings containing AI-related keywords, such as "machine learning" and "neural networks," have declined faster than the broader job market in England in recent years despite growing interest in AI technology.
In England, the number of AI job postings in the three months leading to September decreased by nearly 40% compared to a year ago. They have fallen by 61% since their peak in 2019, declining more rapidly than the broader labor market over the last 18 months.
What It Means For Us
If this is a trend, it indicates that the integration of AI into the labor market may take time, with AI skills gradually becoming an essential part of existing job roles and augmenting people's capabilities and productivity rather than leading to a rapid creation of entirely new "AI jobs." It also indicates that employers are exercising a prudent approach to maximize the benefits of this emerging technology.
Source: bloomberg.com
“AI Exposure” is New Buzzword Related to Job Loss Due to AI
As employment and technology specialists endeavor to predict the employment implications of gen-AI, the term "AI exposure" has become the preferred way to characterize potential job-related risks.
A recent study by the Rand Corporation discovered that jobs necessitating higher education qualifications exhibit the most significant levels of "AI exposure." For instance, a study by AI researchers from NYU, Princeton, and Wharton revealed that numerous teaching positions rank among the most susceptible to AI impact.
Source: cnbc.com
Now, Gen-AI Assistant on the Shop Floor
Siemens and Microsoft are partnering to drive artificial intelligence (AI) adoption across industries. Microsoft boss Satya Nadella said in a press release, “With this next generation of AI, we have a unique opportunity to accelerate innovation across the entire industrial sector.”
The Siemens Industrial Co-pilot is a generative AI-powered assistant designed to enhance human-machine collaboration and boost productivity.
Soon, both companies will work together to build additional copilots for manufacturing, infrastructure, transportation, and healthcare industries.
Source: reuters.com
Thanks to MIT, There May Soon Be An AI Super Pilot in Airplane Cockpit
The move is on to make human pilots work in tandem with AI in the cockpit. To keep the two perfectly in sync, MIT researchers have proposed "Air-Guardian," a deep learning system created to enhance flight safety by working alongside human pilots.
This AI super pilot or "commander" can identify critical situations that human pilots may overlook and can intervene to prevent potential incidents.
Without getting too much into the specifics, Air-Guardian uses a revolutionary new deep learning technology called Liquid Neural Networks (LNN), developed by MIT's Computer Science and Artificial Intelligence Lab (CSAIL).
So get this: Air-Guardian takes a distinctive approach to elevating flight safety. It observes both the human pilot's attentiveness and the AI's concentration, identifying situations where the focus of either diverges. If the human pilot misses a crucial element, the AI system intervenes and takes control of that specific aspect of the flight, all while ensuring the pilot continues to have overall flight control.
Ramin Hasani, an AI scientist at MIT CSAIL and co-author of the Air-Guardian project, has explained that the system can collaborate with humans. In cases where humans face difficulties, the AI can provide support, while humans can continue handling tasks they are skilled at.
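For the technically curious, here is a toy sketch in Python of the arbitration idea described above: compare where the pilot is looking with where the AI's attention sits, and hand over a single control channel only when the two diverge on a critical region. This is my own illustrative reconstruction; the function names, the cosine-distance measure, and the threshold are all assumptions, not MIT's actual implementation, which is built on Liquid Neural Networks and is far more sophisticated.

```python
import numpy as np

def attention_divergence(human_map: np.ndarray, ai_map: np.ndarray) -> float:
    """Cosine distance between two attention heatmaps (0 = identical focus)."""
    h = human_map.ravel() / (np.linalg.norm(human_map) + 1e-9)
    a = ai_map.ravel() / (np.linalg.norm(ai_map) + 1e-9)
    return 1.0 - float(h @ a)

def arbitrate(human_map, ai_map, critical_mask, threshold=0.5):
    """Decide who controls the flagged aspect of the flight (illustrative only)."""
    # Compare attention only inside the region flagged as critical.
    div = attention_divergence(human_map * critical_mask, ai_map * critical_mask)
    return "ai" if div > threshold else "human"

# Toy example: the AI attends to a hazard (bottom-right) the pilot misses.
human = np.zeros((8, 8)); human[1, 1] = 1.0      # pilot looking top-left
ai = np.zeros((8, 8)); ai[6, 6] = 1.0            # AI focused bottom-right
critical = np.zeros((8, 8)); critical[5:, 5:] = 1.0

print(arbitrate(human, ai, critical))  # -> "ai": the AI takes that channel
```

The point of the sketch is the division of labor: the AI takes over only the one aspect where attention diverged, while the pilot keeps overall command, exactly as the researchers describe.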
Source: VentureBeat
Canva’s New AI-powered Multimedia Design Features for Teachers and Students
With the introduction of gen-AI, the rules of formal classroom teaching are changing. Assignments, for instance, where students can use AI to research and write their answers, are now a challenge.
Canva recently surveyed 1,000 US teachers to understand how educators utilized and perceived AI in the classroom. Teachers said they were interested in using AI to simplify language (67%), visualize data (66%), generate art (63%), and edit writing, assignments, and lesson plans.
Perhaps with all this in mind, Canva has announced its new “Classroom Magic,” a version of its Magic Studio designed specifically for teachers and students.
The new Classroom Magic update could be the biggest introduction of AI into classrooms anywhere to date.
Canva counts more than 600,000 different schools among its existing users of Canva for Education.
Among the AI tools is “Magic Write,” a generative AI tool that allows students and teachers to access quick actions from a dropdown menu. For those parents and educators worried about AI discouraging students from learning how to write on their own, Canva advises in a press release that Magic Write allows students to develop comprehension skills.
Source: venturebeat.com
AI “Godfathers” in Heated Debate Over Existential Risks
Artificial intelligence trailblazers Geoffrey Hinton, Andrew Ng, Yann LeCun and Yoshua Bengio are currently engaged in a spirited discussion surrounding the potential risks tied to AI. These pioneers are renowned for their unwavering enthusiasm for AI, particularly following their groundbreaking deep learning work.
Now, all of them are involved in an online debate.
Andrew recently accused major tech companies of exaggerating certain AI risks in order to bring about strict regulations that would stifle competition. In contrast, Geoffrey dismissed these allegations as a conspiracy theory, arguing that some tech leaders genuinely worry about AI risks, even if they display an unwarranted sense of superiority, assuming they can push boundaries the general public cannot.
Just a year ago, Yann and Geoffrey had ardently defended deep learning against critics who asserted it had reached a stagnation point. They firmly believed in AI's potential to revolutionize multiple industries. Nevertheless, their viewpoints have since evolved as they delve into a discourse about the existential threats posed by AI.
Yann contributed to the current debate by expressing his belief that tech leaders were exaggerating the existential perils of AI. He suggested that the superhuman AI some fear is imminent would possess characteristics similar to current AI models. These divergent opinions underscore the intricacies of the discussion on existential risks within the AI community.
What It Means For Us
These AI pioneers have ignited a new debate regarding the risks associated with AI. It seems they have climbed down a notch, moving from their previous united front of unwavering AI optimism to a more nuanced conversation. This debate also underscores the need for a comprehensive understanding of both the potential dangers and benefits of AI as it continues to advance.
Source: venturebeat.com
“AI For Real” Is On LinkedIn. Join, Now.
“AI For Real” Is Also On Facebook Now. Join here.
Here’s Our Twitter Account.
Subscribe to Our YouTube Channel: youtube.com/@aiforreal
Facebook Unveils “Habitat 3.0”: A Simulator for Human-Robot Collaboration Training
Facebook has introduced Habitat 3.0, the latest iteration of its software designed to simulate interactions between humans and robots in immersive 3D environments. This upgraded version showcases three key enhancements, providing a glimpse into a future where humans and robots collaborate seamlessly: improved simulated humans, tools for real-time human involvement within the simulator, and benchmark tasks for evaluating human-robot interaction.
Key Features:
Humanoid Simulation: Facebook has developed a diverse range of realistic 3D human avatars. These avatars feature articulated skeletons, high-fidelity surface rendering, motion and behavior generation policies, and a library of male and female avatars for selection.
Habitat Matrix - Human-in-the-Loop Interaction: This feature allows humans to collaborate with autonomous robots using mouse and keyboard inputs or a virtual reality (VR) interface. It enables users to seamlessly step into the simulation and take control of either the human or the robot, offering a first- or third-person perspective. This interactive setup facilitates the evaluation of AI agents' performance in scenarios closely resembling real-world interactions.
Two Human-Robot Interaction Tasks: Habitat 3.0 includes two tasks for training and assessing agents in human-robot collaboration. These tasks focus on social navigation, assessing how well robot agents can find and follow human avatars while maintaining a safe distance, and social rearrangement, simulating cooperative efforts between a robot and a human avatar in repositioning objects from their initial locations to desired positions through pick-and-place actions, emulating household cleaning.
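To make the social-navigation task above concrete, here is a heavily simplified sketch of the kind of reward signal such a benchmark might use: the robot earns reward for staying near the human it follows and is penalized for closing inside a safety radius. This is a generic reconstruction, not Habitat 3.0's actual reward function or API, and the distance values are assumed for illustration.

```python
import math

SAFE_DIST = 1.0    # meters: never come closer than this (assumed value)
FOLLOW_DIST = 2.0  # meters: ideal following distance (assumed value)

def social_nav_reward(robot_xy, human_xy):
    """Reward following at a comfortable distance; penalize unsafe proximity."""
    d = math.dist(robot_xy, human_xy)
    if d < SAFE_DIST:
        return -1.0  # too close: unsafe for the human
    # Peak reward at FOLLOW_DIST, decaying to zero as the robot lags behind.
    return max(0.0, 1.0 - abs(d - FOLLOW_DIST) / FOLLOW_DIST)

print(social_nav_reward((0, 0), (0, 0.5)))  # too close -> -1.0
print(social_nav_reward((0, 0), (0, 2.0)))  # ideal     ->  1.0
print(social_nav_reward((0, 0), (0, 5.0)))  # too far   ->  0.0
```

A reward shaped like this is what lets researchers score how well a robot agent "finds and follows" a human avatar while keeping a safe distance.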
What It Means For Us
Over the past few years, various companies have introduced low-cost quadruped robots, while advancements in language models have enhanced the intelligence of robotic controllers. These developments are driving the creation of smarter and more capable robotic systems. Habitat 3.0 and similar software will serve as crucial training platforms for the future of robotics, facilitating the development of rich simulations and games for robot-human virtual reality environments.
Source: AIHabitat.org, markettechpost.com, arxiv.org
AGI Within the Next Five Years?
Shane Legg, one of the co-founders of Google's DeepMind AI Lab, has reiterated that there is a high chance artificial general intelligence (AGI) will be achieved within the next five years. AGI represents a state where a machine's intelligence matches or surpasses human intelligence.
Way back in 2001, Shane had projected AGI's arrival by 2028. He now believes there's a 50% chance it could happen even sooner.
Incidentally, Sam Altman, CEO of OpenAI, has also been a vocal advocate for AGI development, emphasizing its potential to greatly benefit humanity.
Source: firstpost.com
Exciting Opportunities for Newsletter Sponsorship!
Discover available ad and sponsorship slots for this newsletter by clicking on this link. Explore the various packages to find the perfect fit for your brand.
If you would like further information, please don't hesitate to contact me. Let's collaborate and customize the best deal tailored to your specific needs. Click on the tab above. Or, for individual requirements, write to marketing (at) newagecontentservices.com
Here’s One Study on How Americans Are Approaching AI
Data from the Ipsos Consumer Tracker reveals that Americans of all age groups exhibit a strong familiarity with AI. But it is the younger population that tends to engage with AI more actively than their older counterparts. The Ipsos Consumer Tracker conducts occasional surveys covering a range of topics, including cultural aspects, economic factors, and influences on people's lives.
The primary goal of the current survey was to gauge people's self-assessment of their AI skills. The results showed that only a tiny percentage (3%) considered themselves experts in generative AI. In comparison, a significant portion (45%) had limited knowledge, and 23% admitted to knowing nothing about it at all. These findings indicate a notable lack of expertise in AI within the general population.
Moreover, the survey delved into how those who claimed to possess some knowledge of generative AI acquired their understanding. The majority (52%) are self-taught, while 46% rely on social media and online tutorials, which can also be considered forms of self-education. Additionally, half of this group acquired AI insights from friends and family.
Source: ipsos.com
“Top Picks”
…where every week, I shortlist interesting articles, posts, podcasts, and videos on AI.
AGI Is a Hollow, Meaningless Dream That's Never Going to Come: Expert
Devansh, an AI researcher and machine learning engineer, is well known in AI circles for his newsletters and articles on AI, primarily on Substack.
In a recent article, he makes an interesting case for why artificial systems will never replicate biological intelligence.
He writes that intelligence primarily arises as an evolutionary adaptation, with humans developing it in response to evolutionary pressures. Societal influences refine this intelligence, leading to distinct groups excelling in various domains.
According to him, human intelligence is a product of evolution, tailored to “our unique needs and the human context.” It doesn't extend well to our primate relatives, let alone other biological entities. Our intelligence is intricately linked to our bodies and their requirements. Without this context, "human intelligence" wouldn't exist. The notion that humans could create a truly human-like entity purely through binary code is implausible, he says.
In his article, Devansh points to research that corroborates this perspective. Studies comparing how humans and large language models (LLMs) learn language highlight significant distinctions. Even setting aside the idea of General Intelligence, AI systems have yet to fully grasp the intricacies of human linguistic intelligence, which varies based on language. AI programs may appear more human-like as they engage with us using language unassisted, but this should not be misconstrued as genuine human intelligence.
AI systems excel at mimicking human intelligence, but they inherently lack quintessential human qualities like sentience, agency, true meaning, or the capacity to appreciate human intentions, Devansh says.
UK’s AI Safety Summit: We Are All In It Together, Say Countries
China has agreed to collaborate with the United States, the European Union (EU), and other nations to jointly address the risks associated with artificial intelligence.
Over 25 countries, including the United States, China, and the EU, signed the "Bletchley Declaration," emphasizing the importance of international cooperation and a common oversight approach. This agreement was reached at the AI Safety Summit in the United Kingdom, where the focus was on finding a safe path forward for this rapidly advancing technology.
Some leaders in the tech industry and politics have raised concerns about the potential dangers of AI's rapid development, suggesting that it could threaten humanity if not correctly managed. This has prompted governments and international organizations to race towards developing safeguards and regulations.
Worries about AI's impact on economies and society gained momentum when Microsoft-supported OpenAI made ChatGPT accessible to the public last November.
Even England’s King is Worried
In a recorded message to participants at the UK's AI Safety Summit, King Charles III emphasized the imperative of addressing the risks associated with AI with a "sense of urgency, unity, and collective strength." He also likened AI to the discovery of electricity in importance.
Elon Musk, who attended the Summit, expressed the need to establish a foundation of knowledge before implementing oversight, suggesting the use of a "third-party referee" to raise alarms when risks emerge.
Using natural language processing tools to create human-like conversations has sparked concerns, even among AI pioneers, that machines could eventually surpass human intelligence, leading to unforeseen consequences.
Now, governments and officials are attempting to find a way forward in collaboration with AI companies. These companies are worried about being burdened by regulations before AI reaches its full potential.
While the EU has been primarily focused on overseeing AI in terms of data privacy, surveillance, and implications for human rights, the British summit is specifically addressing existential risks associated with highly capable general-purpose AI models known as "frontier AI."
Source: UK Govt
Biden Signs Directive on AI
A few days before the Summit, US President Biden made a surprise move and issued an executive directive to set safety and security benchmarks for AI.
Here's what it means: Developers of the most potent AI systems must disclose their safety testing outcomes and pertinent data to the US government.
The primary objective is to safeguard American citizens from potential AI-related hazards. The directive also advocates for rigorous testing to ensure the safety, security, and reliability of AI systems prior to their public release.
Various government agencies, including the Departments of Energy and Homeland Security, will tackle the challenges tied to AI and critical infrastructure. Nonetheless, the feasibility of enforcing this directive without further legislative adjustments remains uncertain, necessitating the involvement of Congress to pass data privacy laws in order to shield Americans' data.
Seems Like Watermarking is the Way Forward
The new Biden order also proposes that the Commerce Department create guidance for AI watermarking, which would help curb the malicious use of AI by identifying the source of AI-generated content.
Simply put: guidelines on watermarking AI-generated content are on the way, including recommendations on how to develop and use watermarks to trace such content back to its source.
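For a sense of what AI watermarking can look like in practice, here is a miniature sketch of one idea from the research literature, in the spirit of the "green list" statistical watermark proposed by Kirchenbauer et al. (2023): the text generator secretly favors a keyed subset of the vocabulary, and a detector counts how often that subset appears. Everything here (the key, the word-level granularity, the 50/50 split) is a simplification for illustration; the executive order itself does not mandate any particular scheme.

```python
import hashlib

SECRET_KEY = "demo-key"  # hypothetical key shared by generator and detector

def is_green(word: str) -> bool:
    """Deterministically assign roughly half the vocabulary to the green list."""
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words on the green list; watermarked text scores high."""
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

# Ordinary human text should hover near 0.5; text from a generator that
# was biased toward green words would score noticeably higher.
sample = "the quick brown fox jumps over the lazy dog"
print(f"green fraction: {green_fraction(sample):.2f}")
```

The hard part, which the Commerce Department's guidance will presumably have to grapple with, is making such marks robust to paraphrasing and editing.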
Source: techcrunch.com
G7 Nations Set to Establish Voluntary Code of Conduct for Advanced AI Development
The Group of Seven (G7) industrial nations is poised to establish a voluntary code of conduct for companies engaged in the development of advanced artificial intelligence systems, according to a G7 document. Governments are taking this step to address the potential risks and misuse of AI technology. This code of conduct, once agreed upon, will mark a significant milestone in how major countries regulate AI, addressing concerns related to privacy and security risks, as revealed in the document obtained by Reuters.
Source: reuters.com
OpenAI Unveils 'Preparedness' Team to Tackle Advanced AI Model Risks
OpenAI of ChatGPT fame has established a 'Preparedness' team led by Aleksander Madry, formerly the director of MIT's Center for Deployable Machine Learning. The team's core responsibilities encompass the identification, prediction, and mitigation of potential hazards posed by forthcoming AI systems.
In a bid to involve the wider community, OpenAI is soliciting proposals for risk assessments and is offering a $25,000 prize, along with the opportunity to join the Preparedness team, to the top ten submissions. The initiative is consistent with OpenAI's stated commitment to community engagement.
The 'Preparedness' team at OpenAI will also formulate a "risk-informed development policy" to guide the company in evaluating, monitoring, and governing AI model development.
Furthermore, the team will undertake a comprehensive examination of "chemical, biological, radiological, and nuclear" threats in conjunction with AI models. This thorough approach underscores their commitment to addressing potential risks comprehensively.
Source: OpenAI
Chinese, Too, Want Strong AI Regulation
Just before the start of the AI Safety Summit in the UK, a group of AI scientists and researchers from China called for strong regulations to govern AI.
Source: ft.com
UN Prepares Interim AI Report and Advisory Body to Address Global AI Governance and Risks
The United Nations (UN) is preparing an interim report on artificial intelligence, which seeks to encompass governance, risks, and opportunities within the AI domain.
This report is spearheaded by a newly formed advisory body of 39 members and is expected to be submitted by year-end, with a final report scheduled for the following year. The overarching objective is to identify deficiencies in existing cross-border governance approaches and to establish links to ensure comprehensive coverage. Amandeep Singh Gill, the UN's tech envoy, underscores the significance of assessing the current landscape of governance responses and orchestrating efforts to fill any voids.
The AI advisory body, co-chaired by Spanish Digital Minister Carme Artigas and Alphabet's James Manyika, will convene both in-person and virtual meetings to discuss AI governance. The primary aim is to foster cooperation and coordination among governments, the private sector, and other stakeholders to ensure the responsible and ethical development and utilization of AI. The report and the dialogues during the UN summit will contribute to shaping the future international governance of AI and addressing potential risks associated with its swift advancement.
What It Means For Us
The UN's interim AI report and the establishment of the advisory body underscore the growing recognition of the imperative for comprehensive AI governance and regulation. This report will serve as a catalyst for governments and the private sector to prioritize AI governance and consider the associated risks and opportunities inherent to this burgeoning technology. By identifying existing response gaps and promoting collaboration, the UN aims to guarantee a unified and responsible approach to global AI development and deployment.
Source: reuters.com
Four AI Majors Set Up US $10 Million Research Safety Fund
Anthropic, Google, Microsoft, and OpenAI have set up a new AI Safety Fund. This is a $10 million initiative to promote research in the field of AI safety. They also announced the appointment of Chris Meserole as the first Executive Director of the Frontier Model Forum. The latter is an industry body focused on ensuring the safe and responsible development of frontier AI models.
Source: Google Blog
WHAT ALL OF THE ABOVE DEVELOPMENTS MEAN FOR US
Almost a year ago, OpenAI introduced its initial version of gen-AI, known as ChatGPT. Just as that marked a pivotal moment, the final week of October to the first week of November 2023 will go down in history as the time when not only governments but also the United Nations and private entities, including major tech companies, took the first concrete steps to confront the challenges presented by what is arguably one of humanity's most significant inventions: artificial intelligence.
- Your story matters. Your innovation matters -
If you are an AI entrepreneur, startup founder, etc., and are eager to say your piece in under a minute, drop an email with the subject line “Yes, to 60 secs” to marketing (at) newagecontentservices.com. We promise to revert ASAP.
Midjourney, Stability AI, and DeviantArt Secure Initial Victory in Artists' Copyright Case
The long-debated issue of whether AI art generators violate copyright, particularly as they often rely on the work of human artists without their explicit consent, compensation, or knowledge, has seen a significant development in the United States.
In a recent decision, US District Court Judge William H. Orrick from the Northern District of California allowed the dismissal of a copyright infringement class action lawsuit brought against three entities: Stability AI, Midjourney, and DeviantArt (a popular image-sharing service and social network).
The lawsuit was initiated by three artists: Sarah Anderson, Kelly McKernan, and Karla Ortiz. The three AI image generator companies had sought to have the copyright infringement case against them dismissed.
Judge Orrick, in his recent ruling, essentially granted their motion. He stated that "the Complaint is defective in numerous respects," and went on to explain the reasons for his decision. One significant factor was that two of the artists, McKernan and Ortiz, had not registered their art with the U.S. Copyright Office.
Source: venturebeat.com
Don’t Move Too Fast and Break Things, Says Google DeepMind Co-founder
Demis Hassabis, co-founder of Google DeepMind, has emphasized in an interview the importance of caution in the AI industry.
He cautioned against adopting the "move fast and break things" approach commonly associated with Silicon Valley.
Hassabis suggests that AI development should prioritize safety and ethical considerations.
Are Women Not Taking to AI As Much As Men?
This is an interesting article on bbc.com that is anchored around a certain perception that women are not using gen-AI as much as men.
It quotes a report from earlier this year that says while 54% of men now use AI in either their professional or personal lives, this falls to just 35% of women.
The article then explores the plausible reasons for this apparent AI gender gap and whether it should be a concern.
Source: bbc.com