TGI
Note: text-generation-inference is now in maintenance mode. Going forward, pull requests will be accepted only for minor bug fixes, documentation improvements, and lightweight maintenance tasks.

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and T5. It implements many optimizations and features for efficient serving, and it is used in production by multiple projects.
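Once a TGI server is running, text generation is exposed over a simple HTTP API. A minimal sketch of calling the `/generate` endpoint, assuming the server is reachable at `http://localhost:8080` (the host and port depend on how you launched the container):

```python
import json
from urllib import request

# Assumed local endpoint; adjust to wherever your TGI server is listening.
TGI_URL = "http://localhost:8080/generate"


def build_payload(prompt: str, max_new_tokens: int = 20) -> dict:
    """Build the JSON body for TGI's /generate endpoint."""
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }


def generate(prompt: str, max_new_tokens: int = 20) -> str:
    """POST a prompt to the TGI server and return the generated text."""
    body = json.dumps(build_payload(prompt, max_new_tokens)).encode("utf-8")
    req = request.Request(
        TGI_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```

The same payload shape works with `curl` or any HTTP client; TGI also ships a dedicated Python client and an OpenAI-compatible `/v1/chat/completions` route for chat-style models.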