A growing number of companies are using artificial intelligence (AI) for everyday tasks. Much of the technology boosts productivity and helps ensure public safety, but some industries oppose certain aspects of AI, and some industry leaders are working to balance the good with the bad.
“We are looking at critical infrastructure owners and operators, companies in the water, healthcare, transportation and communications sectors, some of whom are starting to integrate some of these AI capabilities,” said Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency. “We want to make sure they integrate them in a way that doesn’t introduce a lot of new risks.”
US AGRICULTURAL INDUSTRY TESTING ARTIFICIAL INTELLIGENCE: “LOTS OF POTENTIAL”
Consulting firm Deloitte recently surveyed leaders of business organizations around the world. The results showed that uncertainty about government regulations was a bigger problem than the actual implementation of AI technology. When asked about the biggest barrier to deploying AI tools, 36% cited regulatory compliance, 30% cited difficulty managing risk, and 29% cited the lack of a governance model.
Easterly says that despite some of the risks AI can pose, she’s not surprised the government hasn’t taken more action to regulate the technology.
“These will be the most powerful technologies of our century, probably beyond,” Easterly said. “Most of these technologies are developed by private companies that are incentivized to provide returns to their shareholders. So we need to ensure that government plays a role in establishing guardrails to ensure that these technologies are built in a way that prioritizes security. And that’s where I think Congress can play a role in ensuring that these technologies are as safe and secure for use and implementation by the American people.”
Congress has considered comprehensive AI protections, but it has been primarily state governments that have adopted rules.
“There are certainly a lot of positive things about what AI is doing. But when it gets into the hands of bad actors, it can destroy (the music industry),” said Gov. Bill Lee, R-Tenn., upon signing state legislation in March to protect musicians from AI.
The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, classifies an artist’s voice and likeness as a property right. Lee signed the legislation this year, making Tennessee the first state to enact such protections for singers. Illinois and California have since passed similar laws. Other states, including Tennessee, already have laws establishing that names, photographs, and likenesses are property rights.
“Our voices and our likenesses are indelible parts of us that have allowed us to showcase our talents and grow our audiences, not simple digital sketches that a machine can reproduce without consent,” said country artist Lainey Wilson during a congressional hearing on AI and intellectual property.
AI HORROR FLICK STAR KATHERINE WATERSTON ADMITS NEW TECHNOLOGY IS “TERRIFYING”
Wilson argued that her image and likeness were being used via AI to sell products she had not approved.
“For decades, we’ve leveraged technology that, frankly, wasn’t created to be secure. It was created to speed time to market or for cool features. And frankly, that’s why we have cybersecurity,” Easterly said.
The Federal Trade Commission (FTC) has cracked down on some deceptive AI marketing techniques. It launched “Operation AI Comply” in September, which tackles unfair and deceptive business practices using AI, such as fake reviews written by chatbots.
“I’m a technologist at heart and an optimist at heart. So I’m incredibly excited about some of these capabilities. And I’m not concerned about the Skynet scenarios. I want to make sure that this technology is designed, developed, tested and delivered in a way that ensures safety is a priority,” Easterly said.
Chatbots have also drawn positive attention. Hawaii approved legislation this year to invest more in research using AI tools in health care. One study found that OpenAI’s chatbot outperformed doctors in diagnosing medical conditions: the experiment compared doctors using ChatGPT with those using conventional resources. Both groups of doctors achieved an accuracy rate of around 75%, while the chatbot alone scored over 90%.
AI is not only used to detect diseases; it also helps emergency teams detect catastrophic events. After deadly wildfires devastated Maui, Hawaii state lawmakers allocated funds to the University of Hawaii to map statewide wildfire risks and improve forecasting technology, including $1 million for an AI-based platform. Hawaiian Electric is also deploying high-resolution cameras throughout the state.
AI DETECTS WOMEN’S BREAST CANCER AFTER ROUTINE SCREENING MISSED IT: ‘DEEPLY GRATEFUL’
“It will learn over the months to be more sensitive to what is a fire and what is not,” said Dmitry Kusnezov, the Department of Energy’s undersecretary for AI and technology.
California and Colorado have similar technology. In just a few minutes, AI can detect when a fire has started and where it may spread.
AI is also being used to keep students safe. Several school districts across the country now have gun detection systems. In Utah, one system notifies authorities within seconds when a gun may be on campus.
“We want to create a welcoming and safe educational environment. But we don’t want safety to impact education,” said Michael Tanner, CEO of the Park City, Utah, school district.
Maryland and Massachusetts are also considering funding state implementation of similar technology. Both states voted to create commissions to study emerging gun technologies. The Maryland commission will determine whether to use school construction funding to build the systems. Massachusetts members will examine the risks associated with the new technology.
“We want to use these capabilities to ensure we can better defend the critical infrastructure that Americans rely on every hour of every day,” Easterly said.
The European Union adopted AI regulations this year. They classify risks from minimal, which is not regulated, to unacceptable, which is prohibited. Chatbots fall under a specific transparency category and are required to inform users that they are interacting with a machine. Software intended for critical infrastructure is considered high risk and must meet strict requirements. Most technologies that profile individuals or use public images to build databases are deemed unacceptable.
CLICK HERE TO GET THE FOX NEWS APP
The United States has established some guidelines for the use and implementation of AI, but experts believe they do not go as far as the EU’s risk classifications.
“We need to stay ahead of the curve in America to ensure that we win this artificial intelligence race. And so it takes investment, it takes innovation,” Easterly said. “We must be an engine of innovation that will make America the greatest economy on the planet.”