C Transformers

This page covers how to use the C Transformers library within LangChain. It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.

Installation and Setup
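
The underlying Python package can be installed with pip:

pip install ctransformers

The examples below also assume you have a GGML model file available, either locally or on the Hugging Face Hub.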

Wrappers

LLM

There exists a CTransformers LLM wrapper, which you can access with:

from langchain_community.llms import CTransformers
API Reference: CTransformers

It provides a unified interface for all models:

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm.invoke('AI is going to'))
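
Token streaming can be wired up through LangChain's standard callback handlers. A minimal sketch, assuming the same placeholder model path as above:

from langchain_community.llms import CTransformers
from langchain_core.callbacks import StreamingStdOutCallbackHandler

# Print tokens to stdout as they are generated
llm = CTransformers(
    model='/path/to/ggml-gpt-2.bin',
    model_type='gpt2',
    callbacks=[StreamingStdOutCallbackHandler()],
)

llm.invoke('AI is going to')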

If you are getting an illegal instruction error, try using lib='avx' or lib='basic':

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx')

It can be used with models hosted on the Hugging Face Hub:

llm = CTransformers(model='marella/gpt-2-ggml')

If a model repo has multiple model files (.bin files), specify a model file using:

llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin')

Additional parameters can be passed using the config parameter:

config = {'max_new_tokens': 256, 'repetition_penalty': 1.1}

llm = CTransformers(model='marella/gpt-2-ggml', config=config)

See the C Transformers documentation for a list of available parameters.
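
As with any LangChain LLM, the wrapper composes with prompt templates. A minimal sketch, where the template text and question are illustrative:

from langchain_community.llms import CTransformers
from langchain_core.prompts import PromptTemplate

llm = CTransformers(model='marella/gpt-2-ggml')

# Build a simple prompt -> LLM chain
prompt = PromptTemplate.from_template('Question: {question}\n\nAnswer:')
chain = prompt | llm

print(chain.invoke({'question': 'What is AI?'}))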

For a more detailed walkthrough, see the C Transformers notebook.
