prompt hacking

AI Large Language Models Security

Can GPT-4 Find 0-day Exploits?

This video from Low Level Learning explains why AI may be a massive security threat. THE AI HACKERS ARE COMING!… maybe… well… thats what I’m trying to figure out. I wanted to see if ChatGPT was able to hack servers. And I’m not talking about script kiddie stuff where you run Kali Linux scripts and […]

Read More
AI Large Language Models Security

Accidental LLM Backdoor – Prompt Tricks

In this video LiveOverflow explores various prompt tricks to manipulate the AI to respond in ways we want, even when the system instructions want something else. This can help us better understand the limitations of LLMs.

Read More
Large Language Models Security

What is Prompt Injection?

This video is from LiveOverflow. How will the easy access to powerful APIs like GPT-4 affect the future of IT security? Keep in mind LLMs are new to this world and things will change fast. But I don’t want to fall behind, so let’s start exploring some thoughts on the security of LLMs.

Read More