California Governor Gavin Newsom announced over the weekend that he was vetoing Senate Bill 1047, aka the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have created regulations and safeguards around the development of frontier AI models, stating that the bill "establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology."
Provisions in the bill included requiring developers to "implement reasonable administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications" before beginning to train their AI models, as well as to implement the capability to promptly enact a full shutdown.
The bill would also require developers to retain an unredacted copy of their safety and security protocol, while prohibiting them from using an AI model for a purpose not exclusively related to its training. Developers would be required to retain a third-party auditor every year to perform an independent audit of compliance with those provisions, to which the Attorney General would have access.
In addition, the bill would create the Board of Frontier Models within the Government Operations Agency, which would issue new regulations annually. It would also establish a consortium in the Government Operations Agency to develop a framework for the creation of a public cloud computing cluster, to be known as “CalCompute,” that advances the development and deployment of artificial intelligence that is "safe, ethical, equitable, and sustainable by, among other things, fostering research and innovation that benefits the public, as prescribed."
In his veto announcement, Newsom wrote that "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data."
"Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology," he said.
In response, Democratic state Senator Scott Wiener, who authored the bill, called the veto "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet."
"The Governor’s veto message lists a range of criticisms of SB 1047: that the bill doesn’t go far enough, yet goes too far; that the risks are urgent but we must move with caution. SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd," he wrote.
Senate Bill 1047 was opposed by a number of major tech companies deploying AI models, including OpenAI, Google, and Meta, as well as by The AI Alliance, which calls itself "a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation," with members that include IBM, Meta, Cornell University, Databricks, Dell, Intel, NYU, Sony, Uber, and Yale University.
Opposition to the bill also came from Speaker Emerita Nancy Pelosi, who stated, "While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit."
The bill was not without its supporters, however, including X owner Elon Musk.
This is all happening as more money is poured into startups creating AI models: generative AI companies raised $25.9 billion in 2023, an increase of more than 200% from 2022. In the first half of 2024, investors had already surpassed that figure, putting $26.8 billion into 498 generative AI deals.
New York came in second, with 650 average monthly searches for detecting AI propaganda per 100,000 people, 41% above the U.S. average. ‘AI detector’ was also New York’s most searched term, with 71,500 average monthly searches; it was the top term nationwide as well, with 829,533 average monthly searches across America.
California had the third highest number of searches about discerning AI fake news, with an average of 647 monthly searches per 100,000 people, 40% above the U.S. average.
Earlier this month, Governor Newsom signed legislation aimed at cracking down on deepfake election content, including fake images, videos, and audio.
(Image source: georgetown.edu)