What is Multiport Support for Custom Image Deployments?

Multiport support for custom image deployments enables users to expose multiple ports for their applications, making it easier to manage multi-service architectures. Here’s how to use multiport support on MonsterAPI.

What is Multiport Support for Custom Image Deployments?

Multiport support for custom image deployments is a feature that allows users to configure and expose multiple ports for their Docker images. This capability provides significant flexibility for deploying complex applications that require multiple services or components to interact seamlessly.

Why is Multiport Support Important?

Multiport support is crucial for a variety of reasons:

  1. Applications Requiring Multiple Services: Many modern applications need to expose different services through separate ports, such as HTTP for web services and gRPC for backend communication. Multiport support simplifies this process.
  2. Microservice Architectures: In a microservices setup, different services often need dedicated ports to communicate effectively. Multiport support ensures that these components can interact without interference.
  3. Enhanced Debugging and Monitoring: By using separate ports for metrics, logs, and admin interfaces, developers can monitor and debug applications more efficiently without impacting the core application functionality.

Key Benefits of Multiport Support

  1. Simplified Configuration: Users can configure multiple ports directly during the deployment process, making it easier and more efficient to set up complex applications.
  2. Broader Compatibility: Multiport support allows for the deployment of images that are optimized for advanced workflows and protocols, enhancing the overall compatibility of the platform.
  3. Improved Scalability: With multiport support, applications can expand their capabilities without needing to redeploy or modify the base image, providing greater scalability and flexibility.

How to Use Multiport Support on MonsterAPI

With the introduction of multiport support on MonsterAPI, deploying custom Docker images with multiple ports has become more straightforward. Here's how you can leverage this feature:

  1. Configuring Ports

During the deployment process, you can specify multiple ports to be exposed. This is done through the deployment interface, where you list the ports required for your application.

  2. Applications Requiring Multiple Services

Multiport support is designed for protocols like HTTP and gRPC, enabling seamless operation across different endpoints.

  3. Microservice Architectures

Use multiport support to connect various system components through dedicated ports. This facilitates seamless communication between different services within a microservices architecture.

  4. Enhanced Debugging and Monitoring

Allocate separate ports for metrics, logs, or admin interfaces. This setup allows for detailed monitoring and debugging without disrupting the primary application services.
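
To make this concrete, here is a minimal sketch of a container entrypoint that serves the main application on one port and a separate metrics endpoint on another, using only Python's standard library. The port numbers (8000 and 9090) and the handlers are illustrative examples, not MonsterAPI requirements.

# two_port_app.py - illustrative entrypoint exposing two separate ports
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import threading

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Main application traffic is served on this port
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the main service\n")

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Metrics and health checks live on their own port, so probing them
        # never interferes with application traffic
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"requests_total 42\n")

def serve(port, handler):
    ThreadingHTTPServer(("0.0.0.0", port), handler).serve_forever()

if __name__ == "__main__":
    # Run the metrics server in the background and the app server in the foreground
    threading.Thread(target=serve, args=(9090, MetricsHandler), daemon=True).start()
    serve(8000, AppHandler)

When deploying an image built around an entrypoint like this, you would list both ports (8000 and 9090) as service ports so each endpoint is reachable independently.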

Important Notes

Multiport configurations are currently supported only for non-HTTPS deployments. Ensure that your deployment setup adheres to this limitation.

Step-by-Step Guide to Deploy Your Docker Image on MonsterAPI 🐳

Step 1: Log in to Monsterapi.ai

Sign in to your account and navigate to the Deploy section.

Step 2: Navigate to Docker Deployment

Under the Deploy tab, select Docker Deployment to begin setting up your containerized application.

Step 3: Fill in Deployment Details

Once you click on Docker Deployment, you’ll need to provide the following details:

Deployment Name – Give your deployment a unique name.
GPU Count & GPU RAM – Select the number of GPUs and allocate GPU RAM if required.
Registry Name – Enter your Docker image name.
Registry Credentials – Provide your Registry Username & Password to pull private images.
Service Protocol – Choose between HTTPS and Others.
Service Port – Specify the port(s) on which your service runs.
Environment Variables – Optionally pass environment variables for your container.

Step 4: Multi-Port Support

If your Docker container requires multiple ports:

Select "Others" under Service Protocol.

You will then be able to specify multiple ports under Service Port.

Note: HTTPS protocol does not support multiple ports. You can only pass one port when choosing HTTPS as the service protocol.
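
For reference, the same choice shows up in the deployment API payload used later in this guide. The fragment below is a minimal sketch of just the protocol- and port-related fields of that payload (the rest is omitted here); the port numbers are placeholders for whatever ports your container actually exposes.

# Fragment of the deployment payload that controls multi-port behaviour
# (illustrative port numbers; list every port your container exposes)
multi_port_fields = {
    "service_protocol": "others",        # "others" is required for more than one port
    "port_numbers": [8000, 8001, 9090],
}

With HTTPS selected instead, only a single entry can be passed in the port list.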

Step 5: Access Your Deployed Service

Once your deployment goes live, you will receive an HTTP endpoint to access your service.

To find your service URL and port mapping, go to the Job Info section of your deployment.

Since we map your provided ports to random ports on our host for security reasons, make sure to check the port mapping list to correctly access your application.

Code Implementation

Code to launch a multi-port deployment


url = "https://api.monsterapi.ai/v1/deploy/custom_image"  

payload = {
    "serving_params": {
        "deployment_name": "charming_galileo",
        "per_gpu_vram": 80,
        "gpu_count": 1
    },
    "env_params": {
        "mode": "testing"
    },
    "image_registry": { "registryName": "<registry_name>", "username": "username", "password": "p*a*s*w*r*" },
    "port_numbers": [8000, 8001, 8002, 5001],
    "service_protocol": "others"
}

headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer <token>"
}  

response = requests.post(url, json=payload, headers=headers)  

print(response.text)


Here we launch a Docker container on MonsterAPI by sending a request to its deployment API. We specify details like the deployment name, GPU settings, and the registry credentials needed to pull the Docker image, and define multiple ports so the service can communicate on different endpoints. Once the request is processed, the API returns a response confirming whether the deployment was launched successfully.

You will get output like the following after execution:

{"message":"Deployment Launched","servingParams":{"mode":"testing","deployment_name":"\"charming_galileo\"","port_numbers":[8000,8001,8002,5001]},"deployment_id":"2982fb35-28v8-41ma-bedb-2z5790605dc7"}


Code to fetch the status of the deployment (it will provide you the port mapping once the deployment is live, i.e. launched successfully)


import requests

# ID returned by the launch request above
deployment_id = "<deployment_id>"

# Endpoint for checking the status of a deployment
url = f"https://api.monsterapi.ai/v1/deploy/status/{deployment_id}"

headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer <token>"    # your MonsterAPI bearer token
}

response = requests.get(url, headers=headers)

print(response.text)


We are checking the status of a deployed Docker container on MonsterAPI. We provide the deployment ID to identify the specific deployment we want to track. The request is sent to MonsterAPI’s status API endpoint, and once processed, it returns a response with details like deployment status, service URL, and port mappings. This helps us verify if the deployment is live and accessible.
The port mapping field in the response is crucial, as it maps the user-provided ports to randomly assigned external ports for security reasons. Ensure that you refer to the correct external port from this mapping to access your application successfully.

Output after code execution

{"status":"live","message":"Server has started !!!","URL":"http://32.117.119.111:59713","qblocks_url":"http://32.117.119.111:59713","api_auth_token":null,"credits_consumed":100,"created_at":"2025-02-22T03:13:36.740066","portMapping":["59713:8000","58772:8001","55902:8002","53283:5001"]}


In the output, you can find the URL (the HTTP endpoint to access your service) and the port mapping.
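
Because the external ports are assigned at random, a small helper that turns each portMapping entry back into a per-port URL can save some confusion. The sketch below parses the status response shown above; it assumes the response keeps the same shape (a URL field plus "external:internal" strings in portMapping).

# Build a URL for each container port from the status response shown above
status = response.json()

# The host comes from the URL field (strip the scheme and the default external port)
host = status["URL"].split("//")[1].split(":")[0]

# Each portMapping entry appears to be "external:internal"
for mapping in status["portMapping"]:
    external, internal = mapping.split(":")
    print(f"container port {internal} -> http://{host}:{external}")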

What’s Next?

MonsterAPI is continually working to enhance your deployment experience. Stay tuned for more exciting features and updates that will further simplify and improve the deployment process.

Start using multiport support today to maximize the potential of your custom image deployments! If you need assistance, our support team is always ready to help.