What is Prompt Engineering?

by Alex G., CTO

AI PWRD is committed to keeping you ahead of the curve in the fast-moving world of AI. In this blog we’re focusing on the foundational principles of Prompt Engineering and how it has changed as AI models improve and grow in complexity.

Prompt Engineering 1.0

Prompt Engineering is the process of writing and refining AI prompts, model settings, and inputs in order to achieve the desired response from the AI model. It’s not only writing the text of AI prompts, but also creating a formula in your preferred coding language (Python, Node.js, SQL, etc.) for static and variable inputs that can be applied across a large database of inputs and generate consistent, desired results. For legacy AI models, let’s review the three simplest parameterized prompts commonly used to achieve specific and reliable responses from an AI model on a large dataset.

1. Single Parameter Prompt

Description: Ideal for simple data retrieval tasks or single-variable analysis, such as pulling up specific user information or conducting sentiment analysis on customer feedback.

Formula:

User Prompt = (text) or (data1)

Example:

  • Prompt: Write a brief description about Microsoft
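To illustrate, here is a minimal Python sketch of the same single parameter pattern applied across a list of inputs; the call_model() helper is a hypothetical stand-in for whichever model API you use.

  # Single parameter prompt: the entire prompt is the variable input itself.
  def build_single_parameter_prompt(data1: str) -> str:
      return f"Write a brief description about {data1}"

  # Apply the same template across a larger dataset of inputs.
  companies = ["Microsoft", "Shopify", "Nvidia"]
  prompts = [build_single_parameter_prompt(name) for name in companies]
  # responses = [call_model(p) for p in prompts]  # hypothetical model call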

2. Dual Parameter Prompt

Description: Useful for tasks combining static text and variable inputs, such as summarizing text from a single variable or a data range.

Formula:

User instructions (data1)

Example:

  • Prompt: You are a minute taker; your task is to write detailed minutes of the call transcript: (call_transcript)
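As a minimal sketch (again assuming a hypothetical call_model() helper), the static minute-taker instructions can be combined with the variable transcript like this:

  # Dual parameter prompt: static instructions plus one variable input.
  INSTRUCTIONS = ("You are a minute taker; your task is to write detailed "
                  "minutes of the call transcript:")

  def build_dual_parameter_prompt(call_transcript: str) -> str:
      return f"{INSTRUCTIONS} {call_transcript}"

  # minutes = call_model(build_dual_parameter_prompt(call_transcript))  # hypothetical call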

3. Multi Parameter & Static Text Prompt

Description: For creating complex queries involving both multiple variables and static text, ideal for tasks like customer segmentation based on multiple criteria.

Formula:

User Command (data1) (data2) ... (data_n)

Example:

  • Prompt: Generate an email response to client (clientID) on behalf of (userID) based on their summary (AI_summary) and these email instructions (email_instructions)
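Here is a minimal Python sketch of this pattern; the field names mirror the example above, and the record dictionary is illustrative rather than a real schema.

  # Multi parameter prompt: several variables substituted into static text.
  TEMPLATE = ("Generate an email response to client {clientID} on behalf of "
              "{userID} based on their summary {AI_summary} and these email "
              "instructions {email_instructions}")

  def build_multi_parameter_prompt(record: dict) -> str:
      return TEMPLATE.format(**record)

  # prompt = build_multi_parameter_prompt(crm_record)  # hypothetical CRM record
  # reply = call_model(prompt)                          # hypothetical model call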

These basic prompt types form the foundation for more complex prompt structures, and experimentation is required to achieve the best results.

Prompt Engineering 2.0

The AI industry does not stand still for long, and legacy models such as text-davinci-003 were soon deprecated as additional parameters in the request body of the API call enabled enterprise customers to structure API calls that better fit their use cases. One of the most significant improvements is the ability to create a system message whose content gives the AI model additional instructions for how to respond to a user message.

  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]

This enables prompt engineers to distinguish which part of the API message is the ‘system’ message that provides instructions to the AI and which part is the user content for chat completion. The system message content can also accept variable parameter inputs as context, generating even higher quality results.
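As a sketch of how that looks in practice, using the legacy openai Python client style shown above (the company_profile variable is illustrative):

  import openai

  # Illustrative variable context injected into the system message.
  company_profile = "AI PWRD builds custom AI integrations for enterprise clients."

  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[
          # System message: instructions plus variable context for the model.
          {"role": "system",
           "content": f"You are a helpful assistant for this company: {company_profile}"},
          # User message: the content the model should respond to.
          {"role": "user", "content": "Draft a one-line tagline for our homepage."},
      ],
  )
  print(response["choices"][0]["message"]["content"])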

Most recently, additional parameters have been introduced to enable function calling, which lets clients create their own automations and have the model intelligently choose to call the relevant functions. Let’s review how this works:

  1. The app calls the AI model with the user query and a set of functions in the functions parameter.
  2. The model intelligently chooses to call one of the functions depending on the user input.
  3. Another call to the AI model is made with the function response, enabling the AI model to integrate the function response into its reply.

The models gpt-3.5-turbo and gpt-4 have been fine-tuned to both detect when a function should be called and respond accordingly.
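Below is a sketch of that three-step flow using the legacy functions parameter of the openai Python client; get_weather() and its schema are illustrative stand-ins for your own automation.

  import json
  import openai

  # Illustrative function the model may choose to call.
  def get_weather(city: str) -> str:
      return json.dumps({"city": city, "forecast": "sunny", "high_c": 24})

  functions = [{
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"],
      },
  }]

  messages = [{"role": "user", "content": "What's the weather in Toronto?"}]

  # 1. The app calls the model with the user query and the set of functions.
  first = openai.ChatCompletion.create(
      model="gpt-3.5-turbo", messages=messages, functions=functions
  )
  reply = first["choices"][0]["message"]

  # 2. The model may choose to call one of the functions based on the input.
  if reply.get("function_call"):
      args = json.loads(reply["function_call"]["arguments"])
      result = get_weather(**args)

      # 3. A second call passes the function response back so the model can
      #    integrate it into its final answer.
      messages.append(reply)
      messages.append({"role": "function", "name": "get_weather", "content": result})
      final = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
      print(final["choices"][0]["message"]["content"])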

The combination of these tools offers prompt engineers countless ways to structure their prompt messages and trigger custom functions in order to meet the needs of our clients.

If you have any further questions about what prompt engineering is or are interested in a prompt engineering consultant, please contact us for a complimentary consultation.
