
7 ways AI will both help and complicate cybersecurity in 2024

source link: https://itwire.com/business-it-news/security/7-ways-ai-will-both-help-and-complicate-cybersecurity-in-2024.html

Wednesday, 06 March 2024 11:31


By Jeff Broth

GUEST OPINION: For some years now, cybersecurity professionals have been excited about the impact that artificial intelligence (AI) can have on improving IT security. Each new AI technology, from machine learning (ML) to natural language processing (NLP) to generative AI such as ChatGPT, brings a fresh wave of enthusiasm and new ways to bolster network defences.

Amongst all this enthusiasm, it’s worth remembering that AI isn’t purely positive. Like all new technology, the recent democratization of AI raises additional challenges and difficulties for cybersecurity, while at the same time delivering innovative solutions to complex issues.

To help cybersecurity teams be prepared for whatever AI brings their way, we’ve gathered seven ways that AI can both assist and obstruct cybersecurity in 2024.

AI helps by closing security gaps faster

Knowledge gaps are serious handicaps for cyber GRC teams, who need to gather and present data showing their GRC profile. NLP-powered conversational interfaces help close these gaps: teams can use natural language to ask AI for guidance on evidence collection, and receive suggestions about steps to take for remediation.

Cypago conducts AI-powered User Access Reviews on behalf of cyber GRC teams. It collates information about users and their permission levels, saving a lot of time and effort for IT security professionals. The solution also enables admins to change access permissions for any tech tool within the Cypago app, using natural language commands.

AI hurts by improving phishing personalization

Phishing attacks are already the scourge of cybersecurity teams, and the best defence includes thorough, frequent employee education. It only takes one person falling for one well-worded phishing message to bring an entire corporation to its knees.

Generative AI can produce highly convincing text, personalized with details drawn from publicly available information. These phishing messages can be just as persuasive as those written by humans, and can be prepared in bulk within minutes using minimal prompts. Attackers exploit these capabilities to deploy dangerously effective phishing campaigns, often running numerous variations until they find one that works.

AI helps by delivering better continuous monitoring

IT teams always need to be on top of emerging risks and aware of new vectors of attack, but it’s increasingly difficult to keep up with rising threats in the ever-expanding web. AI monitoring tools can run ceaselessly in the background to analyze far more data than any human team could manage and highlight potential compliance gaps and cyber risks.
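As a toy illustration of the idea (not any vendor's actual technology), an always-on monitor can score each incoming event against a set of rules and surface only high-risk items for human review; real AI monitoring replaces the hand-written keywords and weights below with learned models:

```python
# Illustrative rules and weights; a production system would learn these
# from data rather than hard-code them.
RULES = {
    "failed login": 3,
    "privilege escalation": 8,
    "data exfiltration": 10,
}

def triage(events, alert_threshold=5):
    """Return (event, score) pairs whose risk score meets the alert threshold."""
    alerts = []
    for event in events:
        score = sum(w for kw, w in RULES.items() if kw in event.lower())
        if score >= alert_threshold:
            alerts.append((event, score))
    return alerts

log = [
    "User alice: failed login",
    "Host web-01: privilege escalation attempt detected",
    "Routine backup completed",
]
print(triage(log))  # only the privilege-escalation event crosses the threshold
```

The value of automating this triage step is volume: a background process can apply scoring like this to every event, around the clock, and hand analysts only the handful that matter.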

This is the role carried out by Flare’s AI assistant to enable prompt and effective responses. The platform automatically and continually scans both the surface web and the dark web to detect security exposures, and instantly carries out autonomous takedowns. The system notifies relevant security professionals using real-time alerts with automated event contextualization that draws on AI language models.

AI hurts by increasing code vulnerabilities

Like all powerful tools, AI can be exploited by malicious actors. Large language models (LLMs), which underpin generative AI tools, can be used to produce problematic code that wreaks havoc on organizations and individuals. Some 31% of organizations are already using generative AI to write code, according to one recent study.

Such code might expose sensitive information from online databases, delete critical data, disrupt cloud services, or cause other damaging effects. When using generative AI to create apps, it’s always best to perform rigorous quality assurance checks before deployment.
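One sketch of what such a pre-deployment check might look like, using a few illustrative regex patterns of my own choosing; a real review pipeline would rely on a purpose-built static analyser rather than regexes:

```python
import re

# Illustrative risky constructs only; real tooling covers far more cases.
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() can execute arbitrary code",
    r"\bexec\s*\(": "exec() can execute arbitrary code",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def review_generated_code(source: str) -> list[str]:
    """Return warnings for risky constructs found in a code snippet."""
    findings = []
    for pattern, reason in RISKY_PATTERNS.items():
        if re.search(pattern, source):
            findings.append(reason)
    return findings

snippet = "result = eval(user_input)"
print(review_generated_code(snippet))
```

Running every AI-generated snippet through checks of this kind before it ships is a cheap first line of defence, even if it catches only the most obvious problems.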

AI helps by identifying patterns and anomalies

Traditional anomaly detection methods, like manual inspection and threshold-based techniques, are limited, presenting high rates of false alarms and human error. AI overcomes these drawbacks by incorporating machine learning algorithms that can digest large amounts of data very quickly and adapt to evolving trends, resulting in more accurate detection of anomalies. When the system finds outlier data, it alerts administrators and can even take predefined automatic actions, like suspending a user session or shutting down a server.
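A minimal sketch of the statistical core of this idea, using a simple z-score outlier test in plain Python; production systems use far richer learned models, but the principle of flagging data points that deviate sharply from the baseline is the same:

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=2.5):
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical: nothing stands out
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hourly login counts; the spike to 500 might signal credential stuffing.
logins = [12, 15, 11, 14, 13, 12, 500, 14, 13, 12]
print(detect_anomalies(logins))  # → [500]
```

The advantage of learned models over a fixed rule like this is adaptability: as "normal" traffic drifts over time, the model's baseline drifts with it, keeping false-alarm rates down.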

For example, Darktrace’s “Detect” feature analyzes every user and server in a company to learn the organization’s unique version of “normal” operations. AI is employed to monitor thousands of metrics to pick up on patterns – and thus, also subtle deviations from them, which may signal evolving threats. It can connect the dots between numerous singular events and reduce them to a handful of high-priority incidents for security teams to review.

AI hurts by obfuscating data protection

Chatbots and custom GPTs have proven revolutionary in enabling organizations to deliver timely support to employees and customers around the clock. However, the huge uptake has its downside. Chatbots gather and store data from millions of people, much of it potentially sensitive, subsequently using that data to inform AI models. Once gathered, this data can be leaked or hacked.

Additionally, it’s not always clear how AI technology processes data. Users might not realize that information that they share solely for internal reference can also be extracted by nefarious actors and used to cause harm.

AI helps by improving pen testing

AI can add turbo power to penetration testing by collating information and determining the best course of action, like identifying which host to attack first or what method to use. The results can be fed back into the AI model to generate new and more effective alternatives. AI can also analyze pen test results to generate actionable insights for improving defences.

IT teams can use PentestGPT to identify security vulnerabilities that may otherwise go unnoticed. It uses AI and NLP to deliver an automated security report that directs professionals to possible security issues. However, these types of capabilities can be used for good or for ill, as cybercriminals can also use AI-assisted pen testing techniques to successfully carry out real data breaches.

Using AI to boost defences, sometimes against AI

In cybersecurity, as in every field, being forewarned about emerging challenges makes a big difference. Cybersecurity teams can and should plan to leverage AI for continuous monitoring, closing security gaps, and more, while also preparing strategies to overcome the obstacles that AI places in their path.
