llmlingua-promptflow
Speeds up LLM inference and sharpens the model's perception of key information by compressing the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Installation
In a virtualenv (see these instructions if you need to create one):
pip3 install llmlingua-promptflow
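The package's compression idea can be illustrated with a toy sketch: rank prompt tokens by an importance score and keep only the top fraction. This is not the library's API — LLMLingua uses a small language model's perplexity signals to score tokens; the frequency-based scorer and the `compress_prompt` helper below are hypothetical stand-ins for illustration only.

```python
# Toy sketch of LLMLingua-style prompt compression. The real library
# scores tokens with a small LM; here we use a crude frequency-based
# importance score (rarer token -> assumed more informative).
from collections import Counter

def compress_prompt(prompt: str, rate: float = 0.5) -> str:
    """Keep roughly `rate` of the tokens, dropping the most common first."""
    tokens = prompt.split()
    counts = Counter(t.lower() for t in tokens)
    keep = max(1, int(len(tokens) * rate))
    # Sort positions so the rarest tokens come first, then keep the top `keep`
    # positions in their original order to preserve readability.
    by_rarity = sorted(range(len(tokens)), key=lambda i: counts[tokens[i].lower()])
    kept_positions = sorted(by_rarity[:keep])
    return " ".join(tokens[i] for i in kept_positions)

prompt = "the quick brown fox jumps over the lazy dog the the the"
print(compress_prompt(prompt, rate=0.5))
# -> "quick brown fox jumps over lazy"
```

In the real package the scoring model, not token frequency, decides what survives, which is how it can reach much higher compression rates without losing key information.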
Releases
Version | Released | Buster Python 3.7 | Bullseye Python 3.9 | Bookworm Python 3.11 | Files
---|---|---|---|---|---
0.0.1 | 2024-05-08 | | | |
Issues with this package?
- Search issues for this package
- Package or version missing? Open a new issue
- Something else? Open a new issue