Deepfakes, videos or audio in which a person's face, voice, or body has been digitally altered to make them appear to be someone else, are becoming a big problem: they are typically used to spread false information, which can be especially dangerous during an election year. In June, for example, a company was caught sending out deepfake robocalls pretending to be President Biden in New Hampshire.
This type of AI-generated content is only becoming more prevalent: 43% of people aged 16 or older have seen a deepfake online in the last six months, as have 50% of children aged 8 to 15. Meanwhile, the number of deepfakes online is doubling every six months; in 2023, roughly 500,000 video and voice deepfakes were shared on social media.
With less than two months left in office, the outgoing Biden administration is taking action: this week it announced that it has established an interagency task force, led by the Department of State, to foster digital content transparency globally.
The goal of the task force is to make it easier for people to determine how and when digital content, be it videos, images, or audio, has been altered, generated, or manipulated using artificial intelligence tools. To do this, the task force will work with foreign governments, as well as other partners, to drive technical transparency standards, build capacity, and increase public awareness about AI-enabled digital content.
In addition, it will implement digital content transparency measures for official, government-produced digital content.
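To make the idea of "digital content transparency" concrete, the sketch below shows one simple way provenance can be attached to and verified against a piece of media: a publisher signs a small manifest describing how the content was produced, and anyone with the manifest can check that the file has not been altered since. This is a hypothetical illustration of the general concept only; the key, field names, and signing scheme are assumptions and do not reflect the task force's actual mechanism or any specific standard.

```python
"""Minimal sketch of content-provenance signing and verification (illustrative only)."""
import hashlib
import hmac
import json

# Hypothetical signing key; real provenance standards rely on public-key
# certificates rather than a shared secret like this.
SIGNING_KEY = b"example-publisher-key"


def create_manifest(media_bytes: bytes, tool: str, ai_generated: bool) -> dict:
    """Build a signed manifest describing how the content was produced."""
    claims = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tool": tool,
        "ai_generated": ai_generated,
    }
    signature = hmac.new(
        SIGNING_KEY, json.dumps(claims, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and still matches the content."""
    claims = manifest["claims"]
    expected_sig = hmac.new(
        SIGNING_KEY, json.dumps(claims, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    untampered = hmac.compare_digest(expected_sig, manifest["signature"])
    unchanged = claims["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and unchanged


if __name__ == "__main__":
    video = b"...raw bytes of an official video..."
    manifest = create_manifest(video, tool="video-editor", ai_generated=False)
    print(verify_manifest(video, manifest))         # True: content unchanged
    print(verify_manifest(video + b"x", manifest))  # False: content was altered
```

In this toy setup, any edit to the media invalidates the recorded hash, and any edit to the claims invalidates the signature, which is the basic property transparency standards aim to provide at scale.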
"The U.S. Government is committed to seizing the promise and managing the risks of AI. Improving digital content transparency measures at home and abroad is a critical component of that effort. Digital content transparency is vital to strengthening public confidence in the integrity of official government digital content, reducing the risks and harms posed by AI-generated or manipulated media, and countering digital information manipulation," the State Department wrote in the announcement.
"This initiative demonstrates American leadership in advancing AI governance and cultivating transparency, public trust, and online safety in the global information environment."
The Biden administration has made ethics around artificial intelligence a priority: a year ago, it issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, directing action to strengthen AI safety and security, while also protecting the privacy of Americans.
Earlier this month, the government issued new guidelines for agencies buying AI, in a memo called Advancing the Responsible Acquisition of Artificial Intelligence in Government, aka M-24-18, detailing how agencies should appropriately manage risks and performance, promote a competitive marketplace, and implement structures to govern and manage their business processes related to acquiring AI.
M-24-18 builds on OMB M-24-10, issued in March 2024, which introduced government-wide binding requirements for agencies to strengthen governance, innovation, and risk management for use of AI.
(Image source: news.cnrs.fr)