
Most people distrust AI and want regulation, says new survey


Most American adults do not trust artificial intelligence (AI) tools like ChatGPT and worry about their potential misuse, a new survey has found. It suggests that the frequent scandals surrounding AI-created malware and disinformation are taking their toll and that the public might be increasingly receptive to ideas of AI regulation.

The survey from the MITRE Corporation and the Harris Poll found that just 39% of 2,063 U.S. adults polled believe that today’s AI tech is “safe and secure,” down nine percentage points from the previous survey the two organizations conducted in November 2022.


When it came to specific concerns, 82% of people were worried about deepfakes and “other artificial engineered content,” while 80% feared how this technology might be used in malware attacks. A majority of respondents worried about AI’s use in identity theft, harvesting personal data, replacing humans in the workplace, and more.


In fact, the survey indicates that wariness of AI’s impact is growing across demographic groups: 90% of boomers are worried about the impact of deepfakes, and 72% of Gen Z respondents share that concern.

Although younger people are less suspicious of AI — and are more likely to use it in their everyday lives — concerns remain high in a number of areas, including whether the industry should do more to protect the public and whether AI should be regulated.

Strong support for regulation


The declining support for AI tools has likely been prompted by months of negative news stories about generative AI and the controversies surrounding ChatGPT, Bing Chat, and other products. As tales of misinformation, data breaches, and malware mount, the public appears to be growing less amenable to the looming AI future.

When asked in the MITRE-Harris poll whether the government should step in to regulate AI, 85% of respondents were in favor of the idea, up three percentage points from the previous survey. The same 85% agreed with the statement that “Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia,” while 72% felt that “The federal government should focus more time and funding on AI security research and development.”

The widespread anxiety over AI being used to improve malware attacks is interesting. We recently spoke to a group of cybersecurity experts on this very topic, and the consensus seemed to be that while AI could be used in malware, it is not a particularly strong tool at the moment. Some experts felt that its ability to write effective malware code was poor, while others explained that hackers were likely to find better exploits in public repositories than by asking AI for help.

Still, the increasing skepticism toward all things AI could end up shaping the industry’s efforts, and it might prompt companies like OpenAI to invest more money in safeguarding the public from the products they release. And with such overwhelming support, don’t be surprised if governments start enacting AI regulation sooner rather than later.
