Platform for AI: Generate titles for Chinese text

Last Updated: Mar 21, 2025

Model Gallery provides the easynlp_pai_mt5_title_generation_zh model, which generates titles for Chinese text. You can deploy the model directly, or fine-tune it on your own dataset for specific scenarios. This topic describes how to deploy the easynlp_pai_mt5_title_generation_zh model in Model Gallery to generate titles for Chinese text.

Prerequisites

An Object Storage Service (OSS) bucket is created. For more information, see Create buckets.

Go to the details page of the model

  1. Go to the Model Gallery page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, find the workspace that you want to manage and click the name of the workspace. The Workspace Details page appears.

    3. In the left-side navigation pane of the Workspace Details page, click Model Gallery.

  2. On the Model Gallery page, click text-generation in the NLP section. In the model list on the right side, find and click the easynlp_pai_mt5_title_generation_zh model to go to the details page of the model.


Directly deploy and debug the model

Deploy the model as a model service

  1. On the details page of the model, click Deploy in the upper-right corner.

  2. In the Deploy panel, verify the configurations and click Deploy.

  3. In the Billing Notification message, click OK.

    The details page of the service appears. On the Service details tab, you can view the service status in the Basic Information section. If the value of the Status parameter changes to In operation, the model service is deployed.

Debug the model online

Debug the model online in the PAI console

  1. On the Service details tab, enter the request data in the Online Prediction field. Sample request data:

    {
        "data": ["在广州第一人民医院,一个上午6名患者做支气管镜检查,5人查出肺癌,且4人是老烟民! 专家称,吸烟和被动吸烟是肺癌的主要元凶。"]
    }


  2. Click Send Request.

    You can view the response in the lower part of the page.

Debug the model online by running Python code

  1. View the call information of the service.

    1. In the Resource Information section of the Service details tab, click View Call Information.


    2. In the Call Information dialog box, view the Access address and Token parameters on the Public network address call tab, and record the values of the parameters.

  2. Run the following sample code to send a request to call the service:

    import requests
    
    # Replace the placeholders with the Access address and Token values that you
    # obtained in the Call Information dialog box.
    url = "<PredictionServiceEndpoint>"
    token = "<PredictionServiceAccessToken>"
    
    # Sample request body. The data field is a list that contains the Chinese text
    # for which a title is generated.
    request_body = '{"data": ["在广州第一人民医院,一个上午6名患者做支气管镜检查,5人查出肺癌,且4人是老烟民! 专家称,吸烟和被动吸烟是肺癌的主要元凶。"]}'
    request_body = request_body.encode('utf-8')
    
    # Pass the token in the Authorization header.
    headers = {"Authorization": token}
    resp = requests.post(url=url, headers=headers, data=request_body)
    
    # Print the response body and the HTTP status code.
    print(resp.content.decode())
    print("status code:", resp.status_code)
    

    Replace url and token in the preceding code with the values of the Access address and Token parameters that you obtained in the preceding step.
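
    If you call the service repeatedly, you can wrap the request in a small helper function. The following sketch is only an illustration and is not part of the official sample code: the generate_title name is introduced here for convenience, and the response is returned as a raw string because this topic does not document the response schema.

    import json
    import requests
    
    def generate_title(text, url, token, timeout=30):
        # Build the same request body format as the sample above: a JSON object
        # whose "data" field is a list that contains the input text.
        payload = json.dumps({"data": [text]}, ensure_ascii=False).encode("utf-8")
        resp = requests.post(url=url, headers={"Authorization": token}, data=payload, timeout=timeout)
        # Fail fast on non-2xx status codes.
        resp.raise_for_status()
        # Return the raw response body; parse it according to the format that
        # your deployed service actually returns.
        return resp.content.decode("utf-8")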


Fine-tune the model

  1. Optional. Prepare a dataset.

    Note

    To use your own data to fine-tune the model, perform the following steps to prepare a training dataset.

    1. Model Gallery provides a default dataset for model fine-tuning. You can use the default dataset or prepare your own dataset in the following format:

      {"text": "<text>", "summary": "summary"}
      {"text": "<text>", "summary": "summary"}
      {"text": "<text>", "summary": "summary"}
      ......
      {"text": "<text>", "summary": "summary"}
      
    2. Upload the prepared dataset to an OSS bucket. For more information, see Upload objects.
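
      The following sketch shows one way to write a training file in this format and upload it to your OSS bucket by using the oss2 SDK for Python. The sample records, credentials, endpoint, bucket name, and object key are placeholders that you must replace with your own values.

      import json
      import oss2
      
      # A few sample records. Each line of the training file is one JSON object
      # with a "text" field and a "summary" field.
      records = [
          {"text": "<Chinese text 1>", "summary": "<title 1>"},
          {"text": "<Chinese text 2>", "summary": "<title 2>"},
      ]
      
      # Write the records as one JSON object per line.
      with open("title_train.jsonl", "w", encoding="utf-8") as f:
          for record in records:
              f.write(json.dumps(record, ensure_ascii=False) + "\n")
      
      # Upload the file to your OSS bucket. Replace the credentials, endpoint,
      # bucket name, and object key with your own values.
      auth = oss2.Auth("<AccessKeyId>", "<AccessKeySecret>")
      bucket = oss2.Bucket(auth, "<Endpoint>", "<BucketName>")
      bucket.put_object_from_file("datasets/title_train.jsonl", "title_train.jsonl")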

  2. Submit a training job.

    1. Return to the details page of the model. For more information, see the Go to the details page of the model section of this topic.

    2. Click Fine-tune in the upper-right corner. In the Fine-tune panel, set the Output Path parameter in the Job Configuration section to the path of an OSS bucket and click Fine-tune. In this example, the default dataset is used for model fine-tuning.

      Note

      If you prepared a training dataset, specify the training dataset in the Fine-tune panel and click Fine-tune. For more information, see the "Deploy and train models" topic.

      The details page of the job appears. You can click the Task log tab to view the training process.

Deploy and debug the fine-tuned model

  1. The trained model is automatically registered in AI Asset Management > Models. You can view or deploy the model. For more information, see Register and manage models.

  2. Debug the model online. For more information, see the Debug the model online section of this topic.
