Image Generation
Basic Model Information
This model is a wrapper around [the PaddlePaddle version of the photo2cartoon project by Xiaoshi Technology (Minivision)](https://github.com/minivision-ai/photo2cartoon-paddle).
Example
Example input and output images (image source: https://www.pexels.com)
Let's try it out now!
Prerequisites
1. Environment Dependencies
Please see the dependencies page.
2. Photo2Cartoon Dependencies
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
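A quick way to confirm that both requirements are met is to print the installed versions (a minimal check, assuming both packages are importable in the current environment):

```python
import paddle
import paddlehub

# Both should report a version >= 2.0.0.
print("paddlepaddle:", paddle.__version__)
print("paddlehub:", paddlehub.__version__)
```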
3. Download the Model
hub install Photo2Cartoon
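To verify that the module was installed correctly, an optional sanity check is to load it by name in Python:

```python
import paddlehub as hub

# Raises an error if the Photo2Cartoon module is not installed.
model = hub.Module(name="Photo2Cartoon")
print(type(model))
```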
Serve the Model
Install Pinferencia
First, let's install Pinferencia.
pip install "pinferencia[streamlit]"
Create app.py
Let's save our predict function into a file app.py and add a few lines to register it.
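A minimal sketch of what app.py can look like is shown below. It assumes the PaddleHub Photo2Cartoon module accepts OpenCV-style BGR arrays through a Cartoon_GEN method and returns BGR arrays, and that images travel as base64-encoded strings to match the request and response shown later; the helper names are illustrative.

```python
import base64
from io import BytesIO

import numpy as np
import paddlehub as hub
from PIL import Image

from pinferencia import Server

# Load the PaddleHub module installed with `hub install Photo2Cartoon`.
model = hub.Module(name="Photo2Cartoon")


def b64_to_bgr(b64_str: str) -> np.ndarray:
    """Decode a base64 string into an OpenCV-style BGR numpy array."""
    image = Image.open(BytesIO(base64.b64decode(b64_str))).convert("RGB")
    return np.array(image)[:, :, ::-1]


def predict(data: str) -> str:
    """Cartoonize a base64-encoded photo and return a base64-encoded JPEG."""
    # Assumption: Cartoon_GEN takes a list of BGR arrays and returns a list
    # of BGR arrays, one per input image.
    results = model.Cartoon_GEN(images=[b64_to_bgr(data)], visualization=False)
    cartoon = np.ascontiguousarray(results[0][:, :, ::-1]).astype("uint8")
    buffer = BytesIO()
    Image.fromarray(cartoon).save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")


service = Server()
service.register(model_name="image_generation", model=predict)
```

With this in place, Pinferencia exposes the function under the model name image_generation at /v1/models/image_generation/predict.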
Run the service, and wait for it to load the model and start the server. To start the REST backend only:
$ uvicorn app:service --reload
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [xxxxx] using statreload
INFO: Started server process [xxxxx]
INFO: Waiting for application startup.
INFO: Application startup complete.
To start both the Streamlit frontend and the REST backend, use the pinfer command instead:
$ pinfer app:service --reload
Pinferencia: Frontend component streamlit is starting...
Pinferencia: Backend component uvicorn is starting...
Test the Service
Open http://127.0.0.1:8501, and the Url Image To Image template will be selected automatically.
Request
curl --location --request POST \
'http://127.0.0.1:8000/v1/models/image_generation/predict' \
--header 'Content-Type: application/json' \
--data-raw '{
"data": "base64 image string"
}'
Response
{
"model_name": "image_generation",
"model_version": "default",
"data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a..."
}
Or create a test.py script:
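A minimal sketch of such a client using the requests library is shown below; the file names photo.jpg and cartoon.jpg are illustrative assumptions.

```python
import base64
import pathlib

import requests

# Assumption: the input photo sits next to this script as "photo.jpg".
image_b64 = base64.b64encode(pathlib.Path("photo.jpg").read_bytes()).decode("utf-8")

response = requests.post(
    url="http://127.0.0.1:8000/v1/models/image_generation/predict",
    json={"data": image_b64},
)
print(response.json())

# Decode the returned base64 string and save the cartoonized image.
with open("cartoon.jpg", "wb") as f:
    f.write(base64.b64decode(response.json()["data"]))
```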
$ python test.py
{
"model_name": "image_generation",
"model_version": "default",
"data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a..."
}
Even cooler, go to http://127.0.0.1:8000, and you will find the full documentation of your APIs.
You can also send predict requests right there!