WASHINGTON, May 25 (Reuters) - Microsoft President Brad Smith said Thursday that his biggest concern around artificial intelligence was deepfakes: realistic-looking but false content.

In a speech in Washington aimed at addressing the issue of how best to regulate AI, which went from wonky to widespread with the arrival of OpenAI's ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.

"We're going to have to address the issues around deep fakes. We're going to have to address in particular what we worry about most, foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians," he said. "We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI."

Smith also called for licensing for the most critical forms of AI, with "obligations to protect security, physical security, cybersecurity, national security."

"We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country's export control requirements," he said.

President of Microsoft Brad Smith reacts during an interview with Reuters at the Web Summit, Europe's largest technology conference, in Lisbon, Portugal, November 3, 2021. REUTERS/Pedro Nunes

For weeks, lawmakers in Washington have struggled with what laws to pass to control AI, even as companies large and small have raced to bring increasingly versatile AI to market.

Last week, Sam Altman, CEO of OpenAI, the startup behind ChatGPT, told a Senate panel in his first appearance before Congress that the use of AI to interfere with election integrity is a "significant area of concern," adding that it needs regulation.

Altman, whose OpenAI is backed by Microsoft, also called for global cooperation on AI and incentives for safety compliance.

Smith also argued in the speech, and in a blog post issued on Thursday, that people needed to be held accountable for any problems caused by AI, and he urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure, so that humans remain in control.

He urged use of a "Know Your Customer"-style system for developers of powerful AI models, to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.

Some proposals being considered on Capitol Hill would focus on AI that may put people's lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.

A UN adviser says the world needs to be "vigilant" as artificial intelligence technology improves, allowing for more realistic-looking deepfakes.

Deepfakes refer to media, typically video or audio, manipulated with AI to falsely depict a person saying or doing something that never happened in real life.

"A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody," California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, told CTVNews.ca over the phone on Friday.

Deepfakes have been used to produce a wide variety of fake news content, such as one supposedly showing Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia.

Scammers have also used deepfakes to produce false celebrity endorsements. In one instance, an Ontario woman lost $750,000 after seeing a deepfake video of Elon Musk appearing to promote an investment scam.

On top of scams and fake news, Sahota notes that deepfakes have also been widely used to create non-consensual pornography.