llm-guard

LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance…

Installation

In a virtualenv (see these instructions if you need to create one):

pip3 install llm-guard

Releases

Version Released Bullseye
Python 3.9
Bookworm
Python 3.11
Files
0.3.15 2024-08-22    
0.3.14 2024-06-17    
0.3.13 2024-05-10    
0.3.12 2024-04-23    
0.3.11 yanked 2024-04-23    
0.3.10 2024-03-14    
0.3.9 2024-02-08    
0.3.7 2024-01-15    
0.3.5 yanked 2024-01-14    
0.3.4 2023-12-21    
0.3.3 2023-11-25    
0.3.2 2023-11-15    
0.3.1 2023-11-09    
0.3.0 2023-10-14    
0.2.4 2023-10-07    
0.2.3 2023-09-23    
0.2.2 2023-09-21    
0.2.1 2023-09-21    
0.2.0 2023-09-15    
0.1.3 2023-09-02    
0.1.2 2023-08-26    
0.1.1 2023-08-20    
0.1.0 2023-08-12    
0.0.3 2023-08-10    
0.0.2 2023-08-08    

Issues with this package?

Page last updated 2024-08-22 20:37:39 UTC