
AI fears are reaching the highest levels of finance and law


Silicon Valley figures have long warned about the dangers of artificial intelligence. Now their anxiety has migrated to other spheres of power: the legal system, global gatherings of business leaders and Wall Street’s top regulators.

Last week, the Financial Industry Regulatory Authority (FINRA), the securities industry’s self-regulator, classified AI as an “emerging risk,” and the World Economic Forum in Davos, Switzerland, released research concluding that AI-powered misinformation poses the biggest near-term threat to the global economy.

These reports came just weeks after the Financial Stability Oversight Council in Washington said AI could result in “direct consumer harm” and Gary Gensler, chairman of the Securities and Exchange Commission, publicly warned about the threat to financial stability posed by numerous investment companies relying on similar AI models to make buying and selling decisions.

“AI can play a central role in after-action reporting of a future financial crisis,” he said in a speech in December.

At the World Economic Forum’s annual conference for top CEOs, politicians and billionaires, held in a chic Swiss ski town, AI is one of the central themes and a topic of many panels and events.


In a report released last week, the forum said its survey of 1,500 policymakers and industry leaders found that fake news and propaganda written and spread by AI chatbots pose the biggest short-term risk to the global economy. About half of the world’s population is taking part in this year’s elections in countries including the United States, Mexico, Indonesia and Pakistan, and disinformation researchers are concerned that AI will make it easier for people to spread false information and increase social conflict.

Chinese propagandists are already using generative AI to try to influence policy in Taiwan, The Washington Post reported on Friday. AI-generated content is appearing in fake news videos in Taiwan, government officials said.

The forum’s report came a day after FINRA, in its annual report, said AI raised “concerns about accuracy, privacy, bias and intellectual property” while offering potential cost and efficiency gains.

And in December, the Treasury Department’s FSOC, which monitors the financial system for risky behavior, said that undetected flaws in AI design could produce biased decisions, such as denying loans to applicants who would otherwise be qualified.

Generative AI, which is trained on huge data sets, can also produce wildly incorrect conclusions that appear convincing, the council added. The FSOC, chaired by Treasury Secretary Janet L. Yellen, recommended that regulators and the financial sector devote more attention to monitoring potential risks emerging from the development of AI.

The SEC’s Gensler is among AI’s most outspoken critics. In December, his agency requested information about the use of AI from several investment advisers, according to Karen Barr, head of the Investment Adviser Association, an industry group. The request for information, known as a “sweep,” came five months after the commission proposed new rules to prevent conflicts of interest between advisers who use a type of AI known as predictive data analytics and their clients.

“Any resulting conflicts of interest could cause harm to investors in a more pronounced way and on a broader scale than previously possible,” the SEC said in its proposed rulemaking.

Investment advisers are already required under existing regulations to prioritize their clients’ needs and avoid such conflicts, Barr said. Her group wants the SEC to withdraw the proposed rule and base any future actions on what it learns from its informational sweep. “SEC regulation misses the mark,” she said.

Financial services companies see opportunities to improve customer communications, back-office operations and portfolio management. But AI also carries risks. Algorithms that make financial decisions could produce biased lending decisions that deny minorities access to credit, or even cause a global market collapse if dozens of institutions that rely on the same AI system sell at the same time.

“This is something different from what we have seen before. AI has the ability to do things without human hands,” said attorney Jeremiah Williams, a former SEC official who now works at Ropes & Gray in Washington.

Even the Supreme Court sees cause for concern.

“AI obviously has great potential to dramatically increase access to important information for both lawyers and non-lawyers. But it is equally obvious that it risks invading privacy interests and dehumanizing the law,” wrote Chief Justice John G. Roberts Jr. in his year-end report on the U.S. judicial system.

Like drivers who follow GPS instructions that lead them into a dead end, humans may rely too much on AI when managing money, said Hilary Allen, associate dean at American University Washington College of Law. “There is a great mystique around the idea that AI is smarter than us,” she said.

AI also may not be better than humans at detecting unlikely hazards, or “tail risks,” Allen said. Before 2008, few people on Wall Street predicted the end of the housing bubble. One reason was that, because housing prices had never fallen nationally, Wall Street models assumed such a uniform decline would never occur. Even the best AI systems are only as good as the data they’re based on, Allen said.

As AI becomes more complex and capable, some experts worry about “black box” automation, which is unable to explain how it arrived at a decision, leaving humans uncertain about its soundness. Poorly designed or managed systems can undermine the trust between buyer and seller necessary for any financial transaction, said Richard Berner, clinical professor of finance at New York University’s Stern School of Business.

“Nobody did a stress scenario with machines running out of control,” added Berner, the first director of the Treasury’s Office of Financial Research.

In Silicon Valley, the debate over the potential dangers surrounding AI is not new. But it gained traction in the months following the launch of OpenAI’s ChatGPT in late 2022, which showed the world the capabilities of the next-generation technology.


Amid an artificial intelligence boom that has fueled the rejuvenation of the tech industry, some company executives have warned that AI’s potential to unleash social chaos rivals that of nuclear weapons and lethal pandemics. Many researchers say these concerns distract from the real-world impacts of AI. Other experts and entrepreneurs say concerns about the technology are overblown and risk putting pressure on regulators to block innovations that could help people and boost profits for tech companies.

Over the past year, politicians and policymakers around the world have also struggled to understand how AI will fit into society. Congress held several hearings. President Biden issued an executive order saying AI was “the most important technology of our time.” The UK convened a global forum on AI, where Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely”. Concerns include the risk that “generative” AI – which can create text, video, images and audio – could be used to create disinformation, displace jobs or even help people create dangerous biological weapons.


Tech critics have pointed out that some of the leaders who have sounded the alarm, such as OpenAI CEO Sam Altman, are nevertheless driving the development and commercialization of the technology. Smaller companies have accused AI heavyweights OpenAI, Google and Microsoft of exaggerating the risks of AI to trigger regulations that would make it harder for new entrants to compete.

“The problem with hype is that there is a disconnect between what is said and what is actually possible,” said Margaret Mitchell, chief ethics scientist at Hugging Face, an open-source AI start-up based in New York. “We had a honeymoon period where generative AI was very new to the public, and they could only see the good; as people started using it, they could see all the problems with it.”
