
TensorRT-LLM
github.com/NVIDIA/TensorRT-LLM

NVIDIA's library for optimized LLM inference on GPUs, with quantization and kernel fusion.
Learn TensorRT-LLM
Recommended resources to get started
Let's Connect
Interested in this technology?
Feel free to reach out if you would like to discuss this technology or explore how it could be applied to your projects.