In this article, we will dive deeper into the vast world of prompts and how they work with Microsoft 365 Copilot. Learning the correct prompting practices means carefully choosing the words you use for your requests to better communicate with your digital assistant. Whether working alone or with a team, improving your prompt skills means you can achieve more, faster, and in more creative ways. Let’s dive in and discover how to talk to our technology in a way that it understands what we need, making every task simpler, faster, and more effective.
Copilot Prompts: an introduction
Imagine being able to easily write an email that resonates with your clients, summarize a complex report in minutes, or brainstorm ideas that lead to the decisive move that could kickstart your strategy—all thanks to the power of effective prompts.
This isn’t the future; it’s our current reality. As we continue to learn and master the art of creating prompts, we open doors to a world where technology amplifies our productivity and creativity, making every interaction with our digital assistant more intuitive and impactful.
According to Microsoft, a prompt serves as a way to give instructions or interact with various AI tools. It’s like having a conversation with an assistant, using simple and clear language to provide context and request specific actions. For example, when using Microsoft 365 Copilot, prompts are how you ask it to create, summarize, edit, or transform content.
An AI prompt is a command, a question, a request, or a statement that guides the AI tool to generate the desired response or output. The way the prompt is formulated significantly affects the quality and relevance of the results you receive.
There are many natural language-based prompts available to help you perform various tasks and serve different purposes, such as gaining insights into projects and concepts, summarizing information, editing texts, creating engaging content, transforming documents, or retrieving lost items. Curious? Let’s explore this further in the next sections.
Copilot Prompts: How do they work?
Prompting, essentially, refers to the correct formulation of requests made to the Copilot virtual assistant. Nothing more simple, but as we know, sometimes the simplest things hide a level of sophistication that can only be reached with intelligence and practice.
Effective prompts provide Microsoft 365 Copilot with the right and useful parameters to generate a valuable response, avoiding wasting time perfecting your requests through additional prompts and running the risk of confusing the chatbot, leading it, at best, to give you the same response with a couple of tweaks, or at worst, to suffer from what are now called “hallucinations.”
Let’s explain with an example: imagine you need to prepare for a meeting with an important client. You need to structure the meeting in a certain way, so you ask your Copilot assistant:
Generate 5 main points to prepare me for a meeting with my client X to discuss phase 3 of their branding campaign, focusing specifically on emails and Teams chats from June onward. Please use simple language so I can get up to speed quickly.
Seemingly very simple, but as we’ve already said, simplicity can be deceiving, so let’s look at the four elements that characterize the sentence’s structure to better understand the specific intent behind the request formulation:
- Objective ("Generate 5 main points"): What specific response do we want from Copilot?
- Context ("to prepare me for a meeting with my client X to discuss phase 3 of their branding campaign"): Why do we need this information, and how will we use it? Who else is involved?
- Source ("focusing specifically on emails and Teams chats from June onward"): What known plugins or sources of information should Copilot use to formulate its response?
- Expectations ("Please use simple language so I can get up to speed quickly"): How should Copilot respond to meet our expectations? In what format or for which audience should the response be tailored?
Copilot Prompts: What are they for?
Copilot, using Large Language Models (LLM) connected to Microsoft 365 apps and internal data, extends capabilities beyond typical LLM-powered chatbots. It integrates seamlessly with Microsoft 365 apps such as Word, Excel, PowerPoint, Outlook, and Teams, extracting data from articles, reports, emails, and presentations. Copilot excels in scenarios like content creation, editing, querying, summarizing, and updating information.
The versatility of Copilot as a tool is evident in various scenarios where it helps accelerate your company’s productivity and creativity. Specific work scenarios in an office environment include:
- Staying updated: Stay effortlessly updated and never miss a moment during crucial meetings or decision-making processes, thanks to carefully crafted summaries or highlights that capture the most recent developments. To quickly retrieve essential elements from a meeting, you can ask Copilot in Teams: "What were the main questions raised during the meeting?" or "What ideas were presented?"
- Creating content: Let’s embrace creativity and let Copilot help us create original content, brainstorm ideas, and outline effective strategies. We can say goodbye to wasting time and precious energy. Need an engaging presentation on time management? Try Copilot in PowerPoint with this prompt: "Create a concise and engaging presentation on effective time management." Need to respond to an email announcing a project launch? Use Copilot in Outlook with this prompt: "Write an email congratulating the project manager and the team on their successful launch."
- Seeking information and clarifications: You can interact with Copilot to ask questions or seek clarifications on complex topics. Its ability to process and simplify information makes it an ideal learning partner. Planning a trip to strengthen team relationships? You can ask Copilot: "Give me ideas for a 3-day trip to X" or "Give me ideas for a team-building activity in X."
- Editing documents: Enhance your documents with Copilot’s editing functions, using its ability to refine text, correct grammatical errors, and polish your content to a professional standard. In Word, you can ask Copilot to edit a paragraph by selecting it and choosing the Copilot icon to "Rewrite with Copilot." You can refine a PowerPoint slide with a prompt like: "Add an image of a target with arrows."
In every scenario, Copilot leverages its technological prowess with prompts to enhance the user’s abilities, personalize responses according to their needs, and offer dynamic, contextually aware assistance.
Copilot Prompts: best practices and most common mistakes
Working with Microsoft 365 Copilot is similar to collaborating with a tech-savvy colleague who is always ready to assist when needed. The secret to using it best lies in mastering the art of prompt creation.
Understanding what to do and what to avoid when creating prompts is crucial to unlocking Copilot’s full potential. Here are some essential tips to get the most out of your prompts:
- Clarity and specificity: Make your goal and expected outcome as clear as possible. The more specific the prompt, the better the productive response generated by Copilot. Always provide detailed instructions such as the topic, purpose, tone, and required length.
- Maintain a conversational tone: When interacting with Copilot, imagine engaging in a conversation with a collaborative colleague and structure your request as a question or instruction. Provide constructive feedback to Copilot based on the quality of its responses, helping the AI learn and adapt to your preferences.
- Emulate good prompting examples: Use proven successful prompt examples to guide Copilot’s response. This can save time and ensure you consistently receive the desired output. Use clear, specific keywords or phrases when asking Copilot to write text for you. This helps it generate more relevant and creative copies.
- Ask for feedback: Receiving feedback from Copilot allows the AI to better understand your needs and preferences, leading to more relevant and tailored responses to your specific needs and requests.
- Write clearly: Write clearly and understandably to ensure Copilot can easily comprehend the prompt, even if it’s complex. Always try to use correct grammar and a comprehensible sentence structure. Use proper punctuation, capitalization, and grammar when writing prompts to help the AI produce higher-quality texts and responses.
- Check the accuracy of responses: After receiving a response from Copilot, review it to check for accuracy and make any necessary adjustments. This not only helps spot errors or discrepancies but also assists Copilot in improving its future responses. Occasionally, Copilot may make mistakes. Therefore, it’s a good practice to always check its responses for accuracy, grammar, and style, and to look for irrelevant or inappropriate content.
- Provide details: When requesting a response from Copilot, provide a relevant amount of detail to help the AI understand the context and deliver the most detailed and appropriate answer possible. Emphasize that the details should be relevant and that it’s unnecessary to overload the prompt with superfluous details. As long as the request is clear and has key points, that will be enough to get the desired response.
- Maintain a polite tone: It should go without saying, but try to be as courteous and kind as possible when addressing Copilot. It may be artificial intelligence, but using polite language and expressing gratitude can help foster a positive working relationship with the AI.
Now that we have a clearer idea of the best practices to refine your prompts, let’s take a look at what to avoid during your interactions with Copilot:
- Being vague: Avoid using vague or generalized prompts, as this can lead to unsatisfactory responses from Copilot. Always provide specific instructions, context, and parameters for the AI to work with.
- Requesting inappropriate or unethical content: This should come as common sense, but not everyone who uses these types of technologies follows it. Needless to say, Copilot is not responsible for the content or results of its writing, and it’s important to adhere to local laws, regulations, and the rights of others. If someone is to get into trouble for something generated through AI use, that person will be the user.
- Using slang expressions: Again, common sense should apply here, but using slang and jargon might not be easily understood by Copilot, which could lead to lower-quality output and, in the case of frequent use, inappropriate or unprofessional responses. Copilot reflects the professional behavior of the user, so make it a "good" reflection.
- Giving conflicting instructions: Providing conflicting instructions can confuse Copilot and result in a lower-quality or inadequate response. Always try to be as consistent and concise as possible when making your requests.
- Changing topics abruptly: Copilot may struggle to generate a response if the prompt suddenly changes topic or direction. Always finish one task or close it before starting a new one. When starting a new task, use "New Chat."
Copilot Prompt Hacking: what it is and how to defend against it
Speaking of bad prompting practices, it’s time to talk about security. A topic that gets a lot of attention in today’s digital landscape but one you can never really talk about too much.
Named the best productivity tool in the AI era, Microsoft Copilot is a powerful ally for today’s businesses. But Uncle Ben always reminds us, "with great power comes great responsibility."
If your organization has poor visibility into its data security posture, Copilot and other generative AI tools have the potential to expose sensitive information to employees who shouldn’t have access, or worse, to malicious actors.
The term "prompt hacking" is used to describe attacks that exploit LLM vulnerabilities by manipulating their inputs or prompts. Unlike traditional hacking, which typically exploits software vulnerabilities, prompt hacking relies on carefully crafted prompts to trick the LLM into performing unintended actions.
Copilot’s security model bases its responses on existing user permissions within Microsoft. Users can ask Copilot to summarize meeting notes, find files for sales resources, and identify actions to take, saving an enormous amount of time.
However, if your organization’s permissions aren’t set up correctly and Copilot is enabled, users can easily view sensitive data.
Why is this a problem?
Simple: people have access to too much data. On average, an employee can access 17 million files on their first day of work. When you can’t see and control who has access to sensitive data, a compromised user or malicious insider can cause unimaginable damage.
Moreover, most granted permissions go unused and are considered high risk, meaning sensitive data is exposed to people who don’t need it.
Examples of prompt hacking techniques
There are several prompt hacking techniques, with numerous new variations emerging every day. However, the three most common ones currently are listed below:
- Prompt injection: it involves overwriting the original instructions in the prompt with special user inputs. It often occurs when untrusted input is used as part of the prompt.
- Prompt leaking: it is a form of prompt injection where the model is asked to reveal its prompt. You might wonder why someone would care about prompt leakage? Well, simple: sometimes people and organizations want to keep their prompts secret, much like they would with their marketing strategies or personal data of their users and customers. If the prompt is disclosed, anyone can use it without going through the person or company, potentially leading to significant losses.
- Jailbreaking: the most well-known prompt hacking technique by the general public (given the early exploits of ChatGPT, which were sensationalized by mainstream media) and involves tricking a generative AI model into performing or producing unintended outputs through specific prompts. It may be due to an architectural or training issue, aggravated by the difficulty in preventing prompts that can be deemed "hostile."
Prompt Hacking: How to defend against it?
To protect against prompt hacking, it’s necessary to adopt defensive measures. These include implementing prompt-based defenses, regularly monitoring LLM behavior and outputs to detect unusual activities, and using techniques such as fine-tuning. In general, prompt hacking is a growing concern for LLM security, and it’s essential to remain vigilant and take proactive steps to protect against these types of attacks.
The best defense is a good offense. While prompt hacking techniques like prompt injection and prompt leaking can still be highly effective, there are several easy-to-implement defenses we can use to protect ourselves. Let’s take a look at some in the list below:
- Filtering: it is one of the simplest techniques to prevent prompt hacking. It involves creating a list of words or phrases to block, known as a blocklist. This represents a good first line of defense. However, as new harmful inputs are discovered, your blocklist will continue to grow, and keeping up will feel like a game of whack-a-mole. Useful for establishing an initial defense line.
- Instruction Defense: it involves adding specific instructions in the system message to guide the model in handling user inputs.
- Post-Prompting: LLMs tend to follow the last instruction they receive. Post-prompting takes advantage of this tendency by placing model instructions after the user’s input.
- Random Sequence Enclosure: it involves enclosing the user’s input between two random sequences of characters. Enclosing the user’s input helps establish which part of the prompt comes from the user.
- Sandwich Defense: the "sandwich" method involves placing the user’s input between two prompts. The first prompt serves as an instruction, while the second reiterates the same instruction. It also takes advantage of the model’s tendency to remember the last instruction it received.
- XML Defense: similar to random sequence enclosure, wrapping user inputs in XML tags can help the model understand which part of the prompt comes from the user.
- Separate LLM Evaluation: it involves having a secondary LLM evaluate the user’s input before passing it to the main model.
These defenses give the model a better chance of functioning as intended and form a good "first line" of protection for your digital assistant against malicious actors and prompt hacking attacks. However, it’s important to continually review the prompts you use, as these things keep evolving, and security threats to your infrastructure and digital assistants grow day by day.
Conclusions
Microsoft Copilot is a tool with great potential, and it was named the best productivity tool in the AI era for a reason. Redmond’s digital assistant is a powerful ally for today’s businesses, but like any tool with great potential, users who want to take full advantage of it must learn to use it intelligently.
To get the most out of Copilot’s features, it’s essential to master prompt creation. By following the tips provided above on what to do and what to avoid, you’ll quickly and consistently obtain accurate responses tailored to your needs and requirements.
FAQ on Copilot prompts
What are Copilot prompts?
Copilot prompts are instructions given to AI tools (like Microsoft 365 Copilot) to generate specific responses or actions, such as creating, summarizing, or editing content.
How do prompts work with Copilot?
Effective prompts require clarity, context, and specific parameters, like the desired output format or content sources, for the best results.
What tasks can Copilot assist with?
Copilot excels in content creation, editing, summarization, querying, and task automation in apps like Word, Excel, PowerPoint, and Teams.
What are some best practices for prompt creation?
Use clear, specific instructions, avoid vague language, and provide feedback to refine Copilot's outputs.
What is prompt hacking, and how can it be prevented?
Prompt hacking exploits AI vulnerabilities through malicious inputs. Defensive measures include filtering, instruction defense, and proper permission setups.