Quick Start

x2vec, Towhee is all you need!


Towhee makes it easy to build neural data processing pipelines for AI applications. We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks. You can use Towhee’s Pythonic API to build a prototype of your pipeline and automatically optimize it for production-ready environments.

Various Modalities: Towhee supports data processing on a variety of modalities, including images, videos, text, audio, molecular structures, etc.

SOTA Models: Towhee provides SOTA models across 5 fields (CV, NLP, Multimodal, Audio, Medical), 15 tasks, and 140+ model architectures. These include BERT, CLIP, ViT, SwinTransformer, MAE, and data2vec, all pretrained and ready to use.

Data Processing: Towhee also provides traditional methods alongside neural network models to help you build practical data processing pipelines. We have a rich pool of operators available, such as video decoding, audio slicing, frame sampling, feature vector dimension reduction, ensembling, and database operations.
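Many of these operators wrap familiar numerical routines. For instance, the vector normalization used later in this guide (as ops.towhee.np_normalize) amounts to scaling an embedding to unit L2 norm before inner-product search; a minimal sketch in plain NumPy, with a hypothetical helper name:

```python
import numpy as np

def l2_normalize(vec: np.ndarray) -> np.ndarray:
    """Scale a vector to unit L2 norm, as is typically done before ANN search."""
    norm = np.linalg.norm(vec)
    # Guard against the zero vector, which has no well-defined direction.
    return vec / norm if norm > 0 else vec

v = np.array([3.0, 4.0])
print(l2_normalize(v))  # [0.6 0.8]
```

After normalization, the inner product of two vectors equals their cosine similarity, which is why pipelines normalize embeddings before inserting them into a similarity index.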

Pythonic API: Towhee includes a Pythonic method-chaining API for describing custom data processing pipelines. We also support schemas, which makes processing unstructured data as easy as handling tabular data.

Install

Towhee requires Python 3.6+. You can install Towhee via pip:

pip install towhee towhee.models

If you run into any pip-related install problems, please try to upgrade pip with pip install -U pip.

Let’s try your first Towhee pipeline. Below is an example of how to create a CLIP-based cross-modal retrieval pipeline in only 15 lines of code.

from towhee import ops, pipe, DataCollection


# create image embeddings and build index
p = (
    pipe.input('file_name')
    .map('file_name', 'img', ops.image_decode.cv2())
    .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='image'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    .map(('vec', 'file_name'), (), ops.ann_insert.faiss_index('./faiss', 512))
    .output()
)

for f_name in ['https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog1.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog2.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog3.png']:

    p(f_name)

# Delete the pipeline object to make sure the faiss index data is flushed to disk.
del p


# search image by text
decode = ops.image_decode.cv2('rgb')
p = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='text'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    # faiss op result format: [[id, score, [file_name]], ...]
    .map('vec', 'row', ops.ann_search.faiss_index('./faiss', 3))
    .map('row', 'images', lambda x: [decode(item[2][0]) for item in x])
    .output('text', 'images')
)

DataCollection(p('a cat')).show()

You can find more examples in the Towhee examples repository.

Contributing

Writing code is not the only way to contribute! Submitting issues, answering questions, and improving documentation are just some of the many ways you can help our growing community. Check out our contributing page for more information.

Special thanks go to everyone who has contributed to Towhee, whether on GitHub, our Towhee Hub, or elsewhere.


Looking for a database to store and index your embedding vectors? Check out Milvus.