WEM AI Agent
The AI Agent tab in the resource pane is where Agents are created and managed within the WEM Modeler. This interface provides an intuitive way to configure AI Agents, define their properties, and integrate them into your application workflows.
Below is an image illustrating the location of the AI Agent tab in the resource pane. This tab serves as the central hub for managing agents, allowing developers to orchestrate application agents efficiently.
A new AI Agent can be added by right-clicking and selecting the New AI Agent option from the context menu. This will open the "New AI Agent" Wizard, where you can configure the following options:
Name: This serves as the identifier for the AI Agent. Choose a meaningful name to easily recognize the Agent's role within your application.
Instruction: This defines the behaviour and purpose of the Agent. It is essentially a description of what the Agent should do, guiding its decision-making and responses within the application. It is very important to set this instruction up correctly - much like prompt engineering for Generative AI, writing effective instructions is an emerging skill that strongly affects the quality of the results.
Provider: Currently limited to the OpenAI Assistant API, as it supports tool calling (functions). Other providers may be added in the future.
Model: The AI model assigned to the Agent determines how it processes input and generates output. This option allows you to select the most appropriate model based on the Agent's role and required tasks.
Temperature: This setting controls the randomness of the Agent's responses. A lower temperature will generate more deterministic responses, while a higher temperature introduces more variability and creativity in its output.
Top P: Top P affects the diversity of responses by influencing the probability of selecting particular words or phrases. It helps shape the response by controlling which alternatives are considered most likely in the output.
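To make Temperature and Top P concrete, the sketch below shows how these wizard settings typically map onto a chat-completion request body for a provider such as OpenAI. This is an illustrative example, not WEM's internal implementation; the function name and the instruction text are hypothetical.

```python
# Illustrative sketch (not WEM internals): how the wizard's Temperature and
# Top P settings end up in a typical chat-completion request payload.

def build_request(instruction: str, user_message: str,
                  model: str = "gpt-4o",
                  temperature: float = 0.2,
                  top_p: float = 1.0) -> dict:
    """Assemble the JSON body a chat-completion provider expects."""
    return {
        "model": model,
        "temperature": temperature,  # lower = more deterministic responses
        "top_p": top_p,              # nucleus sampling: limits which tokens are considered
        "messages": [
            {"role": "system", "content": instruction},  # the Agent's Instruction
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("You are a helpful order-tracking agent.",
                        "Where is my order?")
```

A low temperature such as 0.2 suits Agents that must answer consistently (for example, support flows), while a higher value is better for brainstorming-style tasks.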
Please be aware that not all models in this list may be fully supported - support is a work in progress and may depend on the license used.
o1: Best for simpler tasks with limited complexity, offering quick and efficient processing for basic AI needs.
o1 mini: Ideal for smaller applications with less intensive AI processing. Offers a lightweight alternative with quicker response times for low-demand tasks.
o3: Suited for tasks requiring moderate complexity and improved performance over earlier models. It balances efficiency and capability.
GPT-4o: A powerful choice for complex applications requiring advanced reasoning, nuanced understanding, and context retention. Suitable for high-level AI interactions.
GPT-4o mini: A more efficient version of GPT-4o with lower resource usage, maintaining much of its capability while offering faster processing times, ideal for real-time applications.
GPT-4 Turbo: Provides enhanced speed and efficiency compared to GPT-4, offering high performance for applications where both speed and complexity are important.
GPT-4: Designed for sophisticated AI tasks, such as deep natural language understanding and context-heavy responses, offering a high level of versatility and power.
GPT-3.5 Turbo: Ideal for less complex applications, or where response speed is critical and the highest level of sophistication is not needed. A more cost-effective choice for routine tasks.
After creating an AI Agent, its properties can be configured to define its behaviour and enhance its capabilities through functions. These properties determine how the Agent interacts with the application, which information it can process and what functionalities it can execute.
Conversation Context
The conversation context acts as a "session field" where relevant information is stored throughout an interaction. This data is not directly processed by the AI, but is part of the WEM Agent's context, allowing the application runtime to utilize it for continuity between exchanges. This enables the Agent to maintain context within a session without increasing token usage.
File Sources
Agents can be assigned files, which are stored in a vectorised format for efficient retrieval. These files function as an additional knowledge base, providing structured information that the Agent can reference. While the AI does not treat the entire file content as tokens, any quoted sections may count towards response tokens. This allows for efficient, context-aware responses without excessive computational costs.
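The retrieval idea behind vectorised file storage can be sketched as follows. This is a conceptual illustration only, not WEM's actual storage layer: document chunks are embedded as vectors, a query vector is compared against them, and only the best-matching excerpt is quoted into the prompt (which is why quoted sections count towards tokens while the full file does not). The chunks and embeddings below are toy values; a real system would use a provider's embedding model.

```python
# Conceptual sketch of vectorised retrieval (not WEM's actual implementation):
# each chunk is stored as a vector, and a query retrieves the closest chunk.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy chunk embeddings; real embeddings have hundreds of dimensions.
chunks = {
    "Returns are accepted within 30 days.": [0.9, 0.1, 0.0],
    "Shipping takes 3-5 business days.":    [0.1, 0.9, 0.1],
}

def retrieve(query_vec):
    """Return the chunk whose embedding is most similar to the query."""
    return max(chunks, key=lambda text: cosine(query_vec, chunks[text]))

best = retrieve([0.85, 0.15, 0.05])  # a query about returns
```

Only `best` (the retrieved excerpt) would be passed into the model's context, keeping token usage proportional to what is actually quoted rather than to the full file size.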
Functions
Functions extend an Agent's capabilities by enabling it to execute predefined actions. These functions operate similarly to flowcharts in the Modeler but are specifically designed for the Agent to follow. By defining structured flows, the Agent can perform tasks beyond text generation, such as triggering processes, interacting with data sources, or modifying system states based on a user's input.
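Under the hood, tool calling with the OpenAI API works by describing each function to the model as a JSON schema; when the model decides a function is needed, it returns the function name and arguments, and the application runtime executes the matching flow. The sketch below illustrates that round trip. The function name, its parameters, and the returned status are hypothetical examples, not part of WEM or any specific application.

```python
# Hedged sketch of OpenAI-style tool calling. In WEM, the schema would be
# derived from the function flow; "get_order_status" is a made-up example.
order_status_tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical function name
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "Identifier of the order to look up.",
                },
            },
            "required": ["order_id"],
        },
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Route a tool call from the model to the matching application flow."""
    if name == "get_order_status":
        # A real flow would query a data source; this returns a stub result.
        return {"order_id": arguments["order_id"], "status": "shipped"}
    raise ValueError(f"Unknown tool: {name}")

# Simulate the model requesting the tool with extracted arguments.
result = handle_tool_call("get_order_status", {"order_id": "123"})
```

The result is then passed back to the model, which weaves it into a natural-language reply - this is how an Agent moves from pure text generation to actually acting on application data.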