Semantic Segmentation
Basic model information¶
A lightweight portrait segmentation model based on ExtremeC3. For more details, please refer to the ExtremeC3_Portrait_Segmentation project.
Sample result example¶
Enter the image file path, and the model will return its prediction:
Let's try it out now!
Prerequisites¶
1. Environment Dependencies¶
Please visit the dependencies page.
2. ExtremeC3_Portrait_Segmentation Dependencies¶
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
3. Download the Model¶
hub install ExtremeC3_Portrait_Segmentation
Serve the Model¶
Install Pinferencia¶
First, let's install Pinferencia.
pip install "pinferencia[streamlit]"
Create app.py¶
Let's save our predict function into a file app.py and add some lines to register it.
(app.py listing, 38 lines; the code was not preserved in this copy.)
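Below is a minimal sketch of what app.py needs to contain. It assumes the PaddleHub module's `Segmentation` API; the helper logic and variable names are illustrative, not the original listing. The `service` object and the registered model name `semantic_segmentation` match the uvicorn command and the request URL used in the rest of this tutorial.

```python
# app.py -- a sketch; the Segmentation() call is assumed from the
# PaddleHub module docs, and the helper logic is illustrative.
import base64

import cv2
import numpy as np
import paddlehub as hub

from pinferencia import Server

model = hub.Module(name="ExtremeC3_Portrait_Segmentation")


def predict(base64_img_str: str) -> str:
    # Decode the base64 "data" payload into an OpenCV image.
    raw = base64.b64decode(base64_img_str)
    img = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)

    # Run portrait segmentation on the decoded image.
    result = model.Segmentation(images=[img], visualization=False)[0]["result"]

    # Re-encode the segmented image as base64 for the JSON response.
    _, buf = cv2.imencode(".jpg", result)
    return base64.b64encode(buf.tobytes()).decode("utf-8")


service = Server()
service.register(model_name="semantic_segmentation", model=predict)
```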
Run the service, and wait for it to load the model and start the server:
$ uvicorn app:service --reload
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [xxxxx] using statreload
INFO: Started server process [xxxxx]
INFO: Waiting for application startup.
INFO: Application startup complete.
Alternatively, run both the Streamlit frontend and the backend with pinfer:

$ pinfer app:service --reload
Pinferencia: Frontend component streamlit is starting...
Pinferencia: Backend component uvicorn is starting...
Test the service¶
Open http://127.0.0.1:8501, and the template Url Image To Image will be selected automatically.
Request
curl --location --request POST \
'http://127.0.0.1:8000/v1/models/semantic_segmentation/predict' \
--header 'Content-Type: application/json' \
--data-raw '{
"data": "/9j/4AAQSkZJRgABAQEA/..."
}'
Response
{
"model_name": "semantic_segmentation",
"model_version": "default",
"data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRo..."
}
Create the test.py:
(test.py listing, 9 lines; the code was not preserved in this copy.)
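A minimal sketch of what test.py needs to do, matching the curl request above. It uses only the standard library (the original listing may use the requests library instead), and "portrait.jpg" is an illustrative file name:

```python
# test.py -- a sketch; the endpoint URL matches the curl example above.
import base64
import json
from urllib import request

URL = "http://127.0.0.1:8000/v1/models/semantic_segmentation/predict"


def build_payload(raw: bytes) -> dict:
    """Wrap raw image bytes into the {"data": <base64>} request body."""
    return {"data": base64.b64encode(raw).decode("utf-8")}


def predict_image(path: str) -> dict:
    """POST an image file to the service and return the parsed JSON response."""
    with open(path, "rb") as f:
        payload = build_payload(f.read())
    req = request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    try:
        # "portrait.jpg" is an illustrative file name.
        print(predict_image("portrait.jpg"))
    except OSError as exc:
        print("Request failed (is the server running?):", exc)
```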
$ python test.py
{
"model_name": "semantic_segmentation",
"model_version": "default",
"data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRo..."
}
Even cooler, go to http://127.0.0.1:8000, and you will find complete documentation of your APIs.
You can even send prediction requests right there!