
Why is Pinferencia different?

Different?

Actually, it is not something different. It is something more intuitive, more straightforward, or simply easier to use.

How did you serve a model yesterday?

Write some script, save a model file, or do something else according to the tools' requirements.

And you spend a lot of time understanding those requirements, and even more time getting everything right.

Once finished, you're so relieved.

However, half a year later, you have new and more complicated models, and you need to serve them again with your previous tool.

What's on your mind now?

No way!

You have your model. You trained it in Python, and you predict with it in Python. You even write complicated Python code to perform more difficult tasks.

How many changes do you need to make, and how much extra code do you need to write, to get your model served by those tools or platforms?

The answer is

A lot.

With Pinferencia

You don't need to do any of that.

You just use the model in your own Python code.

It doesn't matter whether the model is:

  • a PyTorch model, or
  • a TensorFlow model, or
  • any other machine learning model, or
  • simply your own code, or
  • just your own functions.

Register the model or function, and Pinferencia will use it to predict, just as you'd expect.
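
For example, here is a minimal sketch of serving a plain Python function. The exact `Server` and `register` arguments shown are assumptions based on Pinferencia's documented usage; check the official docs for the precise API.

```python
# Minimal sketch: serve a plain Python function with Pinferencia.
# The Server/register names and arguments here are assumptions;
# refer to the official Pinferencia docs for the exact API.
from pinferencia import Server


def add(data: list) -> int:
    """Your own function: nothing needs to be rewritten to serve it."""
    return sum(data)


service = Server()

# Register the function under a model name; Pinferencia exposes it
# as a prediction endpoint of the service.
service.register(model_name="add", model=add)
```

If this file were saved as `app.py`, you could then start the service with an ASGI server, for example `uvicorn app:service`, and send data to the prediction endpoint.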

Simple, and Powerful

Pinferencia aims to be the simplest AI model inference server!

Serving a model has never been so easy.

If you want to

  • find a simple but robust way to serve your model
  • write minimal code while maintaining control over your service
  • avoid those heavy tools or platforms

You're in the right place.