api¶
- towhee.pipe¶
alias of
Pipeline
- class towhee.AutoConfig[source]¶
Bases:
object
Auto configuration.
- static LocalCPUConfig()[source]¶
Auto configuration to run with local CPU.
Examples
>>> from towhee import pipe, AutoConfig
>>> p = (pipe.input('a')
...         .flat_map('a', 'b', lambda x: [y for y in x], config=AutoConfig.LocalCPUConfig())
...         .output('b'))
- static LocalGPUConfig(device: int = 0)[source]¶
Auto configuration to run with local GPU.
- Parameters:
device (int) – the GPU device ID to run on, defaults to 0.
Examples
>>> from towhee import pipe, ops, AutoConfig
>>> p = (pipe.input('url')
...         .map('url', 'image', ops.image_decode.cv2())
...         .map('image', 'vec', ops.image_embedding.timm(model_name='resnet50'), config=AutoConfig.LocalGPUConfig())
...         .output('vec')
...     )
- static TritonCPUConfig(num_instances_per_device: int = 1, max_batch_size: Optional[int] = None, batch_latency_micros: Optional[int] = None, preferred_batch_size: Optional[list] = None)[source]¶
Auto configuration to run with Triton server (CPU).
- Parameters:
num_instances_per_device (int) – the number of model instances per device, defaults to 1.
max_batch_size (int) – maximum batch size, defaults to None; auto-generated by Triton.
batch_latency_micros (int) – maximum time a request may wait for batching, in microseconds, defaults to None; auto-generated by Triton.
preferred_batch_size (list) – preferred batch sizes for dynamic batching, defaults to None; auto-generated by Triton.
Examples
>>> from towhee import pipe, ops, AutoConfig
>>> p = (pipe.input('url')
...         .map('url', 'image', ops.image_decode.cv2())
...         .map('image', 'vec', ops.image_embedding.timm(model_name='resnet50'), config=AutoConfig.TritonCPUConfig())
...         .output('vec')
...     )
You can also set the configuration manually:
>>> from towhee import pipe, ops, AutoConfig
>>> config = AutoConfig.TritonCPUConfig(num_instances_per_device=3,
...                                     max_batch_size=128,
...                                     batch_latency_micros=100000,
...                                     preferred_batch_size=[8, 16])
>>> p = (pipe.input('url')
...         .map('url', 'image', ops.image_decode.cv2())
...         .map('image', 'vec', ops.image_embedding.timm(model_name='resnet50'), config=config)
...         .output('vec')
...     )
- static TritonGPUConfig(device_ids: Optional[list] = None, num_instances_per_device: int = 1, max_batch_size: Optional[int] = None, batch_latency_micros: Optional[int] = None, preferred_batch_size: Optional[list] = None)[source]¶
Auto configuration to run with Triton server (GPUs).
- Parameters:
device_ids (list) – list of GPU device IDs to use, defaults to [0].
num_instances_per_device (int) – the number of model instances per device, defaults to 1.
max_batch_size (int) – maximum batch size, defaults to None; auto-generated by Triton.
batch_latency_micros (int) – maximum time a request may wait for batching, in microseconds, defaults to None; auto-generated by Triton.
preferred_batch_size (list) – preferred batch sizes for dynamic batching, defaults to None; auto-generated by Triton.
Examples
>>> from towhee import pipe, ops, AutoConfig
>>> p = (pipe.input('url')
...         .map('url', 'image', ops.image_decode.cv2())
...         .map('image', 'vec', ops.image_embedding.timm(model_name='resnet50'), config=AutoConfig.TritonGPUConfig())
...         .output('vec')
...     )
You can also set the configuration manually:
>>> from towhee import pipe, ops, AutoConfig
>>> config = AutoConfig.TritonGPUConfig(device_ids=[0, 1],
...                                     num_instances_per_device=3,
...                                     max_batch_size=128,
...                                     batch_latency_micros=100000,
...                                     preferred_batch_size=[8, 16])
>>> p = (pipe.input('url')
...         .map('url', 'image', ops.image_decode.cv2())
...         .map('image', 'vec', ops.image_embedding.timm(model_name='resnet50'), config=config)
...         .output('vec')
...     )
- class towhee.AutoPipes[source]¶
Bases:
object
Load predefined pipelines.
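Examples
A minimal sketch of loading a predefined pipeline by name (the pipeline name 'sentence_embedding' is an assumption taken from Towhee's built-in pipelines; requires towhee to be installed):
>>> from towhee import AutoPipes
>>> p = AutoPipes.pipeline('sentence_embedding')
>>> res = p('hello world')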