AI Essentials: What you need to know when you’re getting started with this technology
You want to start using AI because you’ve heard from co-workers, podcasts, or the news that it can increase your productivity and make your life easier. At the same time, you’ve heard about many possible options (ChatGPT, Claude, Gemini, etc.), but you don’t know where to begin. If this describes you, this article will help you take the first steps.
We’ll discuss the following topics:
- The difference between AI and Gen AI, and why the two terms differ.
- What Gen AI can’t do by itself.
- How to build a good prompt.
- The prompt building cycle.
- How secure we are when using Gen AI.
Difference between AI and Gen AI
Within the broad field of AI, it’s crucial to distinguish between traditional Artificial Intelligence (AI) and the emerging field of Generative AI (Gen AI). Despite shared roots, their core functions differ significantly. To clarify these distinctions, the following table compares their focus, capabilities, and output, which will help us understand how each contributes uniquely to problem-solving and content creation.
| Feature | AI | Generative AI |
| --- | --- | --- |
| Focus | Analyzes data to solve problems. | Creates new content based on data. |
| Capabilities | Pattern recognition, predictive analytics, data analysis. | Image synthesis, text generation, music composition. |
| Output | Limited to predefined outputs and actions. | New and diverse outputs beyond training data. |
Limitations of Gen AI

Andrew Ng (image from Wikipedia)
According to lectures by Andrew Ng, co-founder of DeepLearning.AI, current Gen AI has several limitations. These are particularly noticeable when interacting directly with a base model such as OpenAI’s GPT-3.5, without the aid of additional interfaces, retrieval mechanisms, or fine-tuned models.
Keep in mind that these limitations apply only when you interact with the model directly, without an interface.
- Memory Retention: GPT-3.5 cannot retain memory of previous tasks or interactions across different sessions. Each prompt is processed independently, meaning it cannot recall past exchanges unless explicitly provided in the input.
- Frozen Knowledge: The knowledge in GPT-3.5 is static and limited to the data available at the time of its last training. It does not update itself with new events, discoveries, or emerging trends unless retrained with new data.
- Hallucinations: When direct information is unavailable, the model generates responses that merely sound plausible, drawing on patterns in its training data rather than verified facts. This can result in fabricated or misleading information delivered with apparent confidence.
- Input / Output Length Limits: GPT-3.5 processes text using a token-based system, where both input and output tokens contribute to a total limit. This constraint affects how much context can be provided in a single prompt and can truncate responses when the token limit is exceeded.
- Bias and Toxicity: The model may inherit biases from its training data. Moreover, when asked repeatedly about specific topics, particularly controversial ones, hallucinations may compound, resulting in responses that deviate from factual accuracy or contain unintended biases.
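The token-limit point can be made concrete with a toy sketch. Real models count subword tokens (via tokenizers such as BPE), not whitespace-separated words, and the 4,096 figure below is just a common example limit, so treat this purely as an illustration of how input and output share one budget:

```python
# Toy illustration of a shared input/output token budget.
# Real models use subword tokenizers, not whitespace splitting;
# the total limit here is an example figure, not a real model constant.

def count_tokens(text: str) -> int:
    """Very rough proxy: one whitespace-separated word per token."""
    return len(text.split())

def remaining_output_budget(prompt: str, total_limit: int = 4096) -> int:
    """Tokens left for the model's reply after the prompt is counted."""
    return max(0, total_limit - count_tokens(prompt))

prompt = "Summarize the following policy document: " + "word " * 4000
print(remaining_output_budget(prompt))  # prints 91: little room left for the reply
```

The design point is simply that a very long prompt eats into the space available for the answer, which is why overly long inputs get truncated replies.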
How can we build a good prompt?
First, let’s start by defining what a good prompt is: A good prompt is a clear, specific, and well-structured instruction that guides a large language model (LLM) to produce the desired output. It’s the key to effectively communicating with the AI and getting useful results. Here’s a breakdown of what makes a prompt ‘good’:
Key Qualities of a Good Prompt:
- Clarity
- Specificity
- Context
- Structure
- Format
- Tone
- Examples (This can be optional)
- Iteration
We recommend starting with a basic prompt and following these steps:
- Be detailed and specific: Give the LLM sufficient context to complete the task, and describe the desired task in detail.
- Guide the model in thinking through its answer: Divide the instructions into multiple steps to help the model build on its reasoning from one step to the next.
- Experiment and iterate: You may not achieve the desired outcome with the first prompt, but you’ll gain insight into what might be lacking.
- For your first prompt, try something simple. Don’t try to cover all the scenarios in the first run.
- Avoid having the model perform multiple tasks in one prompt, as this may lead to confusion. The same applies to conditional statements; if you wish to use them, we recommend creating a separate prompt for each case instead.
Let’s now apply the recommendations
Let’s assume we receive some text discussing an organizational decision (after confirming it contains no sensitive information that could affect our company), along with the steps we need to take to decide whether to implement the new policy. However, the text is difficult to read, so we want to use Gen AI tools to clarify it.
The first thing that comes to mind is that we can achieve this with a simple prompt: “Hey, Gen AI, I’ll send you plain text, and I want you to provide a more readable structure,” or even more simply, “Make this plain text readable.”
What happens in this case? Gen AI will apply its own definition of a readable structure, or fall back on a generic one, because it lacks sufficient context.
Let’s use our steps to modify the prompt: “Hey Gen AI, I’m sending you plain text that contains information about the steps we need to consider for our company. We want you to follow these steps:”
- Read the plain text.
- Based on your reading, separate the text into paragraphs.
- Return all the information you were provided, without summarizing or modifying it.
If we compare the two prompts, we can see an improvement in how we ask the question. We also give more context about what we consider a readable structure.
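As a minimal sketch of this pattern, a structured prompt like the improved one can be assembled from context, a task, and numbered steps. The helper and wording below are illustrative, not a required template:

```python
# Illustrative helper: assemble a prompt from context, task, and numbered
# steps, mirroring the "context + detailed task + step-by-step" advice.

def build_prompt(context: str, task: str, steps: list[str]) -> str:
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Follow these steps:\n{numbered}"
    )

print(build_prompt(
    context="Plain text describing a new company policy.",
    task="Reformat the text so it is easier to read.",
    steps=[
        "Read the plain text.",
        "Separate the text into paragraphs.",
        "Return all the information unchanged, without summarizing.",
    ],
))
```

Keeping the pieces separate like this makes it easy to iterate: you can change the context or one step without rewriting the whole prompt.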
Prompt building cycle
To summarize the building process, it can be divided into four steps:
- Build: Clear and specific start. Formulate your initial prompt with detailed and precise instructions regarding format, tone, length, and constraints to guide the model effectively from the outset.
- Interact and evaluate: Understanding the mismatch. Analyze the model’s output to pinpoint why it didn’t meet expectations. Identify misunderstandings in keywords, format deviations, or incorrect tone to inform the next prompt refinement.
- Apply new changes: Targeted refinement. Adjust your original prompt based on the evaluation. Rephrase instructions, add constraints, provide examples, or specify what not to do for clearer guidance.
- Repeat: Continuous improvement. Engage in multiple cycles of interaction and adjustment. Iteratively refine your prompt based on the model’s output until the desired result is achieved through persistent experimentation.
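The four steps above can be sketched as a loop. Here `ask_model`, `meets_expectations`, and `refine` are stand-ins: in practice you send the prompt to your Gen AI tool, judge the output yourself, and adjust the prompt by hand.

```python
# A sketch of the build / evaluate / refine / repeat cycle as a loop.
# The three callables are placeholders for the human-in-the-loop steps.

def refine_until_good(prompt, ask_model, meets_expectations, refine, max_rounds=5):
    output = None
    for _ in range(max_rounds):          # Repeat: a bounded number of cycles
        output = ask_model(prompt)       # Build / Interact
        if meets_expectations(output):   # Evaluate
            break
        prompt = refine(prompt, output)  # Apply new changes
    return prompt, output
```

The bounded `max_rounds` reflects the practical reality that you stop iterating once the output is good enough, or once further refinement stops helping.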
How secure are we using Gen AI?
Regulations protecting our privacy and information when using Gen AI are still at an early stage. Therefore, we must be careful when working with these tools. Our recommendations are the following:
- Read the provider’s privacy terms. Look for the section describing what information they collect.
- Use local solutions. You can try out Ollama with Continue. This will require some technical background knowledge.
- Check whether the provider offers a privacy feature, i.e., a dedicated instance that only you consume and that is not shared with others.
- Don’t add sensitive information to your prompt.
- Don’t send any sensitive documents to the AI.
- Don’t fully trust the output from Gen AI. Always double-check.
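To make the local-solution recommendation concrete, here is a minimal sketch of what talking to a locally running Ollama server looks like, so prompts never leave your machine. The snippet only builds the JSON body for Ollama’s `/api/generate` endpoint; the model name `llama3` and the commented curl command assume you have Ollama installed, serving on its default port 11434, with that model already pulled.

```python
# Sketch: build the request body for a local Ollama server's
# /api/generate endpoint. Nothing is sent over the network here.
import json

def ollama_request(prompt: str, model: str = "llama3") -> dict:
    """JSON body for Ollama's /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

body = ollama_request("Rewrite this text so it is easier to read: ...")
print(json.dumps(body))
# To actually send it (requires a running local server), something like:
#   curl http://localhost:11434/api/generate -d '{"model": "llama3", ...}'
```

Because the server runs on localhost, your text stays on your own hardware, which sidesteps most of the privacy concerns above at the cost of some setup effort.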
Further reading:
- OpenAI Prompting Guide: https://platform.openai.com/docs/guides/prompt-engineering
- DataCamp: https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication
- DeepLearning.AI course: https://www.deeplearning.ai/courses/generative-ai-for-everyone/
- How to create prompts: https://www.linkedin.com/pulse/why-do-we-need-learn-how-create-prompts-alexandra-serrano-spwof/?trackingId=cDtquqIoR8Cx4cFJ8DyOZg%3D%3D
Tools
- AI searcher: https://theresanaiforthat.com
- Prompt templates: https://ignacio-velasquez.notion.site/2-500-ChatGPT-Prompt-Templates-d9541e901b2b4e8f800e819bdc0256da
General:
- ChatGPT
- Claude
- Gemini
Newsletter:
- Superhuman AI: https://www.superhuman.ai/
Tutorial Builder:
- Guidde: https://www.guidde.com/
Meetings:
- Fathom: https://fathom.video/
- ReadAI: https://www.read.ai/
Devs:
- Cursor: https://www.cursor.com/
- Component helper: https://tools.webcrumbs.org/frontend-ai