Gradio
Gradio allows you to build and share machine learning model demos entirely in Python. It's perfect for prototyping and deploying models as a web application in minutes, and for easily sharing them with other users.
Hello World
This is a simple example of a Gradio app that takes a name as input and returns a greeting as output:
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```
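The function passed to `gr.Interface` is ordinary Python, so you can exercise it directly before launching the UI. A quick sketch using the same `greet` function:

```python
# The same greet function used in the Gradio app above
def greet(name):
    return "Hello " + name + "!"

# It can be called directly, with no web UI involved
print(greet("World"))  # prints: Hello World!
```

This makes it easy to unit-test your prediction logic separately from the Gradio interface.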
Gradio automatically generates a web UI for this app.
You can learn more about Gradio in the official documentation.
You can create your own Gradio app using the Build section in this guide, or deploy one of our pre-built Gradio apps to ModelZ.
Use templates
We provide several templates for Gradio apps, listed below. You can deploy them to ModelZ directly.
Besides these, you can also import Gradio apps from a Hugging Face Space. You can find more details on the Hugging Face Space page.
Build from scratch
Building a Gradio app is straightforward. You can use our template modelz-template-gradio to bootstrap your project.
You will need to provide three key components:
- A `main.py` file: contains the code for making predictions.
- A `requirements.txt` file: lists all the dependencies required for the server code to run.
- A `Dockerfile` or a simpler `build.envd`: contains instructions for building a Docker image that encapsulates the server code and its dependencies.
In the `Dockerfile`, you define the instructions for building a Docker image that encapsulates the server code and its dependencies.
In most cases, you can use the template provided in the repository.
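If you do write the `Dockerfile` yourself, a minimal sketch might look like this; the base image, port, and file names are assumptions for illustration, not necessarily the template's actual contents:

```dockerfile
# Minimal sketch; the official template may differ
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Gradio's default port
EXPOSE 7860

CMD ["python", "main.py"]
```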
```shell
docker build -t docker.io/USER/IMAGE .
docker push docker.io/USER/IMAGE

# GPU
docker build -t docker.io/USER/IMAGE -f Dockerfile.gpu .
docker push docker.io/USER/IMAGE
```
On the other hand, a `build.envd` is a simplified alternative to a Dockerfile. It provides a Python-based interface containing the configuration for building an image.
It is easier to use than a Dockerfile because you specify only the dependencies of your machine learning model, not the instructions for CUDA, conda, and other system-level dependencies.
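A `build.envd` file uses Python syntax. A minimal sketch might look like the following; the exact function names and arguments should be checked against the envd documentation:

```python
# build.envd -- a sketch; consult the envd documentation for the exact API
def build():
    # Choose the base OS and language runtime
    base(os="ubuntu20.04", language="python3")
    # Install the Python dependencies of the Gradio app
    install.python_packages(name=["gradio"])
```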
```shell
envd build --output type=image,name=docker.io/USER/IMAGE,push=true

# GPU
envd build --output type=image,name=docker.io/USER/IMAGE,push=true -f :build_gpu
```
Deploy Gradio apps to ModelZ
You can deploy your Gradio app to ModelZ by following the Deploy section in the Getting Started guide.