48% of Safety Professionals Imagine AI Is Dangerous

48% of Safety Professionals Imagine AI Is Dangerous

A latest HackerOne survey make clear the rising issues AI brings to the cybersecurity panorama. The report drew insights from 500 security specialists, a neighborhood survey of two,000 members, suggestions from 50 prospects, and anonymized platform knowledge.

Their most important issues associated to AI have been:

  • Leaked coaching knowledge (35%).
  • Unauthorized utilization (33%).
  • The hacking of AI fashions by outsiders (32%)

The survey additionally discovered that 48% consider AI poses essentially the most important safety threat to their group. These fears spotlight the pressing want for corporations to reassess their AI safety methods earlier than vulnerabilities grow to be actual threats.

How the safety analysis neighborhood modified within the age of AI

The HackerOne report indicated that AI can pose a menace — and the safety neighborhood has been aiming to counter that menace. Amongst these surveyed, 10% of safety researchers focus on AI. In reality, 45% of safety leaders take into account AI amongst their organizations’ best dangers. Information integrity, specifically, was a priority.

“AI is even hacking different AI fashions,” mentioned Jasmin Landry, a safety researcher, and HackerOne pentester, also referred to as @jr0ch17, within the report.

Of these surveyed, 51% say fundamental safety practices are being ignored as corporations hurry to incorporate generative AI. Solely 38% of HackerOne prospects felt assured in defending in opposition to AI threats.

Mostly reported AI vulnerabilities embody logic errors and LLM immediate injection

As a safety platform, HackerOne has seen the variety of AI belongings included in its applications develop by 171% over the previous yr.

Probably the most generally reported vulnerabilities in AI belongings are:

  • Basic AI security (comparable to stopping AI from producing dangerous content material) (55%).
  • Enterprise logic errors (30%).
  • LLM immediate injection (11%).
  • LLM coaching knowledge poisoning (3%).
  • LLM delicate data disclosure (3%).

HackerOne emphasised the significance of the human ingredient in defending programs from AI and preserving these tools protected.

“Even essentially the most refined automation can’t match the ingenuity of human intelligence,” mentioned Chris Evans, HackerOne CISO and chief hacking officer, in a press release. “The 2024 Hacker-Powered Safety Report proves how important human experience is in addressing the distinctive challenges posed by AI and different rising applied sciences.”

SEE: For the third quarter in a row, executives are extra involved about AI-assisted attacks than another menace, Gartner reported.

Exterior AI, cross-site scripting issues happen essentially the most

Some issues haven’t modified: Cross-site scripting (XSS) and misconfigurations are the weaknesses most reported by the HackerOne neighborhood. The respondents take into account penetration assessments and bug bounties the most effective methods to establish points.

AI tends to generate false positives for safety groups

Additional analysis from a HackerOne-sponsored SANS Institute report in September revealed that 58% of safety professionals consider that safety groups and menace actors may discover themselves in an “arms race” to leverage generative AI techniques and strategies of their work.

Safety professionals within the SANS survey mentioned they’ve efficiently used AI to automate tedious duties (71%). Nonetheless, the identical individuals acknowledged that menace actors may exploit AI to make their operations extra environment friendly. Specifically, respondents “have been most involved with AI-powered phishing campaigns (79%) and automatic vulnerability exploitation (74%).”

SEE: Safety leaders are getting frustrated with AI-generated code.

“Safety groups should discover the most effective functions for AI to maintain up with adversaries whereas additionally contemplating its present limitations — or threat creating extra work for themselves,” Matt Bromiley, an analyst on the SANS Institute, mentioned in a press release.

The answer? AI implementations ought to bear an exterior overview. Over two-thirds of these surveyed (68%) selected “exterior overview” as the best strategy to establish AI security and safety points.

“Groups at the moment are extra lifelike about AI’s present limitations” than they have been final yr, mentioned HackerOne Senior Options Architect Dane Sherrets in an electronic mail to TechRepublic. “People deliver numerous vital context to each defensive and offensive safety that AI can’t replicate fairly but. Issues like hallucinations have additionally made groups hesitant to deploy the know-how in crucial programs. Nonetheless, AI remains to be nice for growing productiveness and performing duties that don’t require deep context.”

Additional findings from the SANS 2024 AI Survey, launched this month, embody:

  • 38% plan to undertake AI inside their safety technique sooner or later.
  • 38.6% of respondents mentioned they’ve confronted shortcomings when utilizing AI to detect or reply to cyber threats.
  • 40% cite authorized and moral implications as a problem to AI adoption.
  • 41.8% of corporations have confronted pushback from workers who don’t belief AI choices, which SANS speculates is “attributable to lack of transparency.”
  • 43% of organizations presently use AI inside their safety technique.
  • AI know-how inside safety operations is most frequently utilized in anomaly detection programs (56.9%), malware detection (50.5%), and automatic incident response (48.9%).
  • 58% of respondents mentioned AI programs wrestle to detect new threats or reply to outlier indicators, which SANS attributes to a scarcity of coaching knowledge.
  • Of those that reported shortcomings with utilizing AI to detect or reply to cyber threats, 71% mentioned AI generated false positives.

HackerOne’s ideas for enhancing AI safety

HackerOne recommends:

  • Recurrently testing, validation, verification, and analysis all through an AI mannequin’s life cycle — from coaching to deployment and use.
  • Researching whether or not authorities or industry-specific AI compliance necessities are related to your group and establishing an AI governance framework.

HackerOne additionally strongly really useful that organizations talk about generative AI overtly and supply coaching on related safety and moral points.

HackerOne launched some survey knowledge in September and the complete report in November. This up to date article considers each.


Source link

May Super-Offer Beat the A.I Revolution with us  The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

June Super-Offer Beat the A.I Revolution with us The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

July Super-Offer Beat the A.I Revolution with us The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

August Super-Offer Beat the A.I Revolution with The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

September Super-Offer Beat the A.I Revolution with The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

Christmas Super-Offer Beat the A.I Revolution with The Yearly Plan - Biggest Saving EVER

October Super-Offer Beat the A.I Revolution with us - The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

November Super-Offer Beat the A.I Revolution with us - The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools " 

January Super-Offer Beat the A.I Revolution with us - The Yearly Membership Plan

Biggest Saving EVER - " Unlimited Themes, Plugins and SEO Tools "