
Google's new prompt engineering playbook: 10 key points on mastering Gemini and other AI tools

Posted by TechnoDG 8 days ago.

Google's latest whitepaper focuses on guiding users in writing effective prompts for LLMs such as Gemini.

 

After the runaway success of OpenAI's ChatGPT in 2022, coming up with a good prompt for a generative AI tool has become a specialized skill. This has led to the establishment of an entirely new discipline known as prompt engineering.

As the technology advances and grows in popularity, some experts suggest that the quality of AI-generated outputs will be determined by how well and how precisely users can give their instructions to large language models (LLMs).

LLMs are adapted to follow instructions and are trained on vast amounts of data, so they can understand a prompt and generate an answer. But LLMs are not perfect: the clearer your prompt text, the better the LLM can predict the next likely text, Google stated in its recently published whitepaper on prompt engineering.

The 68-page document, written by Lee Boonstra, a software engineer and technical lead at Google, aims to help users write better prompts for its flagship Gemini chatbot within the Vertex AI sandbox or via the Gemini developer API. The document focuses on these because, when you prompt the model directly, you have access to configuration settings such as temperature.

Here are the key highlights of Google's whitepaper on prompt engineering, dated February 2025.

What is prompt engineering?

In simple terms, a text prompt is the input an AI model uses to predict an output, as per Google. Many factors determine a prompt's efficacy: the model you are using, the model's training data, the model's configuration, your word choice, style and tone, structure, context, and more, it asserted.

When a user submits a text prompt to an LLM, the model analyzes the sequential text as input, then predicts what the next token should be based on the data the model was trained on.

The LLM operates iteratively in this way, appending each previously predicted token to the sequence before predicting the next one. Each new token prediction is based on its relationship with the previous tokens and on what the LLM saw during training, the whitepaper states.
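The loop described above can be sketched with a deliberately tiny toy model. This is an illustrative assumption, not Gemini's actual mechanism: the "training data" is a one-sentence corpus and the "model" is just bigram counts, but the generation loop — predict a token, append it, predict again — mirrors what the whitepaper describes.

```python
from collections import Counter, defaultdict

# Toy "training data": which token tends to follow which.
corpus = "the clearer the prompt the better the model predicts the next token".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt_tokens, max_new_tokens=5):
    """Greedy autoregressive generation: each step conditions on the
    sequence so far, appends the most likely next token, and repeats."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break  # no continuation ever seen in the training data
        tokens.append(candidates.most_common(1)[0][0])
    return tokens

print(generate(["model"], max_new_tokens=2))  # → ['model', 'predicts', 'the']
```

A real LLM conditions on the entire sequence (not just the last token) and samples from a probability distribution shaped by settings like temperature, but the append-and-repeat structure is the same.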
Prompt engineering is the activity of designing high-quality prompts that guide LLMs to produce correct output. It is a highly iterative process that involves tinkering with a prompt's formulation to find the best version, which depends on its length, writing style, structure, and more, according to the document.

It elaborates on some key prompting techniques, such as general (zero-shot) prompting, one-shot and few-shot prompting, system prompting, contextual prompting, role prompting, step-back prompting, chain of thought (CoT), tree of thoughts (ToT), and ReAct (reason and act), among others.
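The difference between several of these techniques lives entirely in the prompt text. The wording below is an illustrative assumption, not taken verbatim from the whitepaper: a zero-shot prompt gives only the task, a few-shot prompt adds worked examples, and a chain-of-thought prompt asks the model to reason step by step.

```python
# Zero-shot: the task alone, no examples.
zero_shot = (
    "Classify the sentiment of this review as POSITIVE or NEGATIVE.\n"
    "Review: 'The battery life is fantastic.'\n"
    "Sentiment:"
)

# Few-shot: the same task preceded by worked examples to imitate.
few_shot = (
    "Classify the sentiment of each review.\n"
    "Review: 'Terrible screen.'  Sentiment: NEGATIVE\n"
    "Review: 'Love the camera.'  Sentiment: POSITIVE\n"
    "Review: 'The battery life is fantastic.'  Sentiment:"
)

# Chain of thought: invite intermediate reasoning before the answer.
chain_of_thought = (
    "When I was 3, my partner was 3 times my age. Now I am 20. "
    "How old is my partner? Let's think step by step."
)
```

The same underlying question can therefore yield very different answers depending purely on which skeleton wraps it.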

Ten key guidelines from Google:
Google offers the following 10 guidelines for mastering prompt engineering:
1. Give prompts with examples: Google recommends providing at least one example, or several, in the prompt so that the AI model can follow the example or detect the patterns needed to complete the task. It's like giving the model a reference point or target to aim for, improving the accuracy, style, and tone of its response to better match your expectations, says the whitepaper.
2. Keep it simple: Google warns against using complex language and providing unnecessary information to LLMs in the text prompt. Instead, use verbs that clearly describe the action.
3. Be specific: Providing specific details in the prompt (through system or context prompting) helps the model focus on what is relevant, improving overall accuracy, says Google. While a system prompt gives the LLM an understanding of the big picture, a context prompt provides specific details or background information relevant to the current conversation or task.
4. Instructions over constraints: Instead of telling the model what not to do, tell it what to do instead. This can avoid confusion and improve the precision of the output.
5. Control the max token length: This means configuring a maximum token limit for the AI-generated output, or requesting a specific desired length in the prompt itself. For example: "Explain quantum physics in a tweet-length message."
6. Use variables in prompts: If you need to use the same piece of information in various prompts, store it in a variable and reference that variable in each prompt, according to Google. This saves time and effort by letting you avoid repeating yourself.
7. Experiment with different writing styles: AI-generated outputs depend on many factors such as model configuration, prompt format, word choice, etc. Trying out various prompt attributes such as style, word choice, and type of prompt can sometimes yield radically different results.
8. Mix response classes: If you expect the AI model to classify your data, mix up the possible response classes in the examples provided within the prompt, Google recommends. "A good rule of thumb is to start with 6 few-shot examples and start testing the accuracy from there," the company suggested.
9. Adapt to model updates: The document recommends that users stay on top of model architecture changes and newly announced features and capabilities. Try out newer model versions and adjust your prompts to better leverage new model features, it states.
10. Experiment with output format: Google suggests engineering your prompt so the LLM returns its output in JSON format. JavaScript Object Notation (JSON) is a structured data format that can be used in prompt engineering for tasks like extracting, selecting, parsing, ordering, ranking, or categorising data.
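Several of the guidelines above are purely mechanical and can be sketched in a few lines. The snippet below is an illustrative assumption (the product name, ticket labels, and JSON shape are invented, not from Google's whitepaper): it stores shared information in a variable (guideline 6), shuffles the response classes across few-shot examples (guideline 8), and requests then parses a JSON reply (guideline 10).

```python
import json
import random

# Guideline 6: keep reusable information in a variable and reference it
# in every prompt instead of retyping it.
PRODUCT = "Gemini"
TEMPLATE = "You are a support agent for {product}. Classify this ticket: {ticket}"

def build_prompt(ticket):
    return TEMPLATE.format(product=PRODUCT, ticket=ticket)

# Guideline 8: mix up the order of response classes in the few-shot
# examples so the model doesn't latch onto an incidental pattern.
examples = [
    ("App crashes on launch", "BUG"),
    ("Please add dark mode", "FEATURE_REQUEST"),
    ("How do I reset my password?", "QUESTION"),
    ("Crashes when I upload a file", "BUG"),
]
random.shuffle(examples)

few_shot_block = "\n".join(f"Ticket: {t}\nLabel: {label}" for t, label in examples)

# Guideline 10: ask for structured JSON so the reply is easy to parse.
prompt = (
    few_shot_block
    + "\n\n" + build_prompt("Exports fail with error 500")
    + '\nRespond as JSON: {"label": "...", "confidence": 0.0}'
)

# A well-formed model reply (simulated here) can then be parsed directly:
simulated_reply = '{"label": "BUG", "confidence": 0.9}'
parsed = json.loads(simulated_reply)
print(parsed["label"])  # prints "BUG"
```

Asking for JSON pays off at exactly this last step: a free-text reply would need brittle string matching, while `json.loads` either returns structured data or fails loudly.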

 

For more information on IT Services, Web Applications & Support, kindly call or WhatsApp us at +91-9733733000, or visit https://www.technodg.com