Zero-Shot Prompting
In this tutorial, we will learn about the zero-shot prompting technique. In zero-shot prompting, we give the model a task it has not been explicitly trained to perform and include no examples in the prompt, relying on the model to predict the desired outcome on its own.
Zero-shot prompting involves providing a prompt to a language model without any task-specific fine-tuning and without any demonstrations of the task in the prompt.
Example
Try this with the ChatGPT model: launch the chat, issue the following zero-shot prompt, and observe the language model’s response.
Prompt:
Classify the sentiment of the following text as “positive,” “neutral,” or “negative”:
Text: “This new coffee shop has amazing lattes and a cozy atmosphere, but the service was a bit slow.”
Sentiment:

A typical model response would be:
Sentiment: Neutral
The text has both positive and negative elements, making it overall neutral. It highlights the positives of the coffee shop (amazing lattes, cozy atmosphere) but also mentions a drawback (slow service).
Zero-shot learning means the model performs a task it hasn’t explicitly been trained on, with no prior examples provided in the prompt.
In this case, the model must infer sentiment based on its general understanding of language, even though no labeled examples of “positive,” “neutral,” or “negative” are given.
The model weighs both positive (“amazing lattes,” “cozy atmosphere”) and slightly negative (“service was a bit slow”) aspects to conclude “neutral.”
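If you want to reproduce this example programmatically rather than in the chat interface, the following is a minimal sketch using the openai Python package. The model name, the client setup, and the expected output shown in the comment are illustrative assumptions, not part of this tutorial.

```python
# Minimal zero-shot sentiment classification sketch using the OpenAI Python SDK.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name "gpt-4o-mini" is an example choice, not prescribed
# by this tutorial.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The zero-shot prompt: a task description plus the input text, with no labeled examples.
prompt = (
    'Classify the sentiment of the following text as "positive," "neutral," or "negative":\n'
    'Text: "This new coffee shop has amazing lattes and a cozy atmosphere, '
    'but the service was a bit slow."\n'
    "Sentiment:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Expected output along the lines of: "Sentiment: Neutral"
```

Note that the entire task description and input text go into a single user message; no labeled examples are supplied, which is what makes this a zero-shot prompt.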
LLMs do not always perform well with zero-shot prompting, especially for complex or specialized tasks. For example, zero-shot summarization means giving the model an article and asking for a summary without supplying any example summaries; the model must rely entirely on what it learned during pre-training, with no prior knowledge of that specific article or of the desired summary style.
This contrasts with few-shot prompts, where you would include examples (e.g., “Text: ‘I love this movie!’ → Sentiment: Positive”). Zero-shot prompting relies entirely on the model’s pre-existing knowledge. In the following tutorial, we will discuss few-shot prompting, in which a few examples are provided along with the prompt.
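To make the contrast concrete, here is a small sketch comparing the zero-shot prompt used above with a hypothetical few-shot version of the same task. The extra labeled examples in the few-shot string are illustrative assumptions and are covered in detail in the next tutorial.

```python
# Zero-shot: task description + input only, no labeled examples.
zero_shot_prompt = (
    'Classify the sentiment of the following text as "positive," "neutral," or "negative":\n'
    'Text: "This new coffee shop has amazing lattes and a cozy atmosphere, '
    'but the service was a bit slow."\n'
    "Sentiment:"
)

# Few-shot (preview of the next tutorial): the same task, but with a few
# labeled examples placed before the input. These examples are illustrative
# assumptions, not prescribed by this tutorial.
few_shot_prompt = (
    'Classify the sentiment of the following text as "positive," "neutral," or "negative":\n'
    'Text: "I love this movie!" -> Sentiment: Positive\n'
    'Text: "The checkout process kept failing." -> Sentiment: Negative\n'
    'Text: "This new coffee shop has amazing lattes and a cozy atmosphere, '
    'but the service was a bit slow."\n'
    "Sentiment:"
)

print(zero_shot_prompt)
print(few_shot_prompt)
```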