Mitigating Prompt Injection and Prompt Hacking
Ray Villalobos
06:03
Description
As large language models like Chat GPT, Bard, Claude and others have penetrated the culture, hackers are busy attempting to manipulate the models they are based on like GPT, Palm2 and others in order to change how they respond. In this course, Ray Villalobos discusses the mechanisms behind prompt hacking and some of the mitigation techniques. In a world where companies are rushing to develop their own implementations of these popular models, it’s important to understand the concepts behind prompt hacking and some of the defenses that are used to address the potential consequences of its use.
More details
User Reviews
Rating
Ray Villalobos
Instructor's Courses
Linkedin Learning
View courses Linkedin Learning- language english
- Training sessions 3
- duration 06:03
- English subtitles has
- Release Date 2023/12/23