🐼 model-serving

👇 1 item

vllm

⭐ 46.4k · Python · Apache-2.0

A high-throughput and memory-efficient inference and serving engine for LLMs

Created 2 years ago · updated 1 month ago