This page provides practical tips to help you optimize your WEM applications. Improving performance is key to delivering a responsive user experience, reducing delays, and ensuring scalability. By following these guidelines, you can address bottlenecks, prevent slowdowns, and design more efficient applications.
Issues related to template content often show up as slow responsiveness during page refreshes triggered by user interactions, or as long load times when visiting the page. This can occur when a refresh button is used or when a field with a refresh action is filled in. These issues are often caused by too many or inefficient calculations within the template itself, either through calculated fields or by calculating values in labels. Finding the culprit can be challenging, especially in pages created by others. The Template Performance Profiler widget can assist in locating the source of the problem, which can then be resolved using the techniques discussed later in this article.
Application logic issues, often caused by nodes in the flowchart, can lead to significant slowdowns. These slowdowns may manifest as long load times when visiting a page for the first time, response time errors, or pages becoming unresponsive after using a button, for example.
To identify and analyze these issues, you can use the Performance Profiler in the DevOps portal or review the logs available there. The Performance Profiler provides detailed information about server actions, such as query execution times, helping you pinpoint the exact cause of the slowdown.
Another way to gain more insight into where processes cause slowdowns is by recording timestamps at different points in an application flow. This can be done by creating a list called "Performance Timer," for example, with at least two Date-time fields and optionally a field to record which process is being timed. For each flowchart segment you want to measure, add a row to this list with the start time recorded at the beginning of the process and the end time when the timed segment is complete.
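As a minimal sketch of this pattern in Python (in WEM the row would be added with flowchart nodes; the list and field names here are only illustrative):

```python
from datetime import datetime

performance_timer = []  # stands in for the "Performance Timer" list

def time_segment(process_name, segment):
    """Record start and end timestamps around one flowchart segment."""
    row = {"process": process_name, "start": datetime.now()}
    segment()                        # the part of the flow being measured
    row["end"] = datetime.now()
    performance_timer.append(row)
    return row["end"] - row["start"]

# Example: time a stand-in for a heavy step.
print(time_segment("import customers", lambda: sum(range(1_000_000))))
```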
Web service issues typically result in slow web-service responses or timeout errors. These problems are often caused by doing too much processing within the web service flowchart, rather than preparing the information in the application and only sending the data in the web-service flow.
To analyze and diagnose web service issues, you will almost always need the logs available in the DevOps portal, or a custom log of these service calls that you maintain in your project. By reviewing these logs, you can identify where the bottlenecks occur and understand the sequence of events leading to the performance problems.
Once you've identified and analyzed the issues, it's time to implement strategies to resolve them and enhance overall performance. Here are some practical tips and tricks to help you achieve this:
When a user edits, adds, or removes application data, this information is stored in memory until a save or discard node is executed. This requires subsequent actions to merge in-memory changes with database records, adding complexity and processing overhead. Committing changes to the database (or discarding changes) before interactions ensures that following interactions can be evaluated on the database server directly with up-to-date information, improving performance and reducing processing time.
When working with expressions in WEM that interact with data, parts of those expressions can be compatible with the SQL database. When they are compatible, they can be directly converted into SQL statements or treated as fixed values that the database can process efficiently. If expressions are not SQL compatible, more data must be retrieved and processed in memory, which has fewer resources and can require additional queries, impacting performance. Understanding which parts of an expression are SQL compatible helps you design faster and more efficient data processes.
More information about this can be found in the articles dedicated to SQL compatibility.
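WEM expressions are not SQL themselves, but the underlying difference can be illustrated with a small Python/sqlite3 sketch (the table and field names are made up): a condition the database can evaluate returns only the matching rows, while a condition that must be evaluated in application memory forces every row to be fetched first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("open", i * 1.5) for i in range(100_000)] + [("closed", 10.0)] * 5,
)

# "SQL compatible": the condition is pushed to the database,
# which returns only the few matching rows.
closed = conn.execute(
    "SELECT id, total FROM orders WHERE status = ?", ("closed",)
).fetchall()

# Not SQL compatible: every row is pulled into application memory
# and the condition is evaluated there - far more data transfer and work.
closed_in_memory = [
    (row[0], row[2])
    for row in conn.execute("SELECT id, status, total FROM orders")
    if row[1] == "closed"
]

print(len(closed), len(closed_in_memory))  # same result, very different cost
```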
Performing a `count` on large lists can be a source of performance issues. This is often not noticed when starting to use an application, but as the database grows over time, it can become a significant problem. Although the `count` expression itself is SQL compatible, it always requires a full scan of the database table. This becomes even more noticeable when the `count` is combined with a filter or expression. When the database gets large enough, even counting an entire list without filters can cause delays.

If a `count` expression is compared to 0 (`Count([List]) = 0` or `Count([List]) > 0`) to check if there are no rows or if rows do exist, this can be regarded as an `IsEmpty([List])` or a `HasValue([List])` situation. The WEM Runtime will automatically change this into an `IsEmpty` or `HasValue` approach, making it much more efficient: the expensive table scan on SQL can be avoided.
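WEM applies this rewrite automatically, but to illustrate why it matters at the database level, here is a plain-SQL sketch using Python's sqlite3 (the table and data are made up): a count has to visit every row just to produce a yes/no answer, while an existence check can stop at the first match.

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1_000_000)])

start = time.perf_counter()
# Roughly what Count([List]) > 0 asks for: scan the whole table.
via_count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0] > 0
print("count :", via_count, f"{time.perf_counter() - start:.4f}s")

start = time.perf_counter()
# Roughly what HasValue([List]) asks for: stop at the first matching row.
via_exists = conn.execute("SELECT EXISTS(SELECT 1 FROM items)").fetchone()[0] == 1
print("exists:", via_exists, f"{time.perf_counter() - start:.4f}s")
```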
Processes that handle large amounts of data or perform heavy operations on large lists or sub-lists can benefit from batch processing to prevent timeouts and improve system stability. By keeping long-running processes between 10 and 20 seconds, you ensure that other requests can still be handled efficiently. Batch processing can be implemented using a row counter or time-based approach, allowing you to insert progress checkpoints and commit changes in manageable chunks. This reduces strain on the system and enhances overall performance.
Batch processing not only helps the current user but also ensures that other users' requests are not blocked for too long. By breaking heavy tasks into smaller batches, the system can process other requests in between, preventing the server from being stalled by long-running operations.
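In WEM this pattern is built with loop, counter, and commit nodes, but its shape can be sketched in Python (all names here are hypothetical): process rows until a time budget is spent, commit that chunk, record a checkpoint, and let the next run resume where this one stopped.

```python
import time

BATCH_SECONDS = 15  # keep each run well below typical request timeouts

def run_batch(rows, process_row, commit, save_checkpoint, start_at=0):
    """Process rows from start_at until the time budget runs out,
    then commit the chunk and record where the next run should resume."""
    started = time.monotonic()
    position = start_at
    for row in rows[start_at:]:
        process_row(row)               # the heavy per-row work
        position += 1
        if time.monotonic() - started > BATCH_SECONDS:
            break
    commit()                           # flush this chunk to the database
    save_checkpoint(position)          # progress marker for the next run
    return position >= len(rows)       # True once everything is processed
```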
Using the `GoToRow` node inside a loop on another list is very inefficient and should be avoided, as it forces the system to fetch rows repeatedly, causing significant slowdowns. A better approach is to use a double loop, especially when comparing or merging data or handling imports. This technique allows you to iterate through lists more efficiently, reducing unnecessary row fetches and improving performance.
You can find more information about this in the video dedicated to the double loop.
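In WEM the double loop is built with nested loop nodes on sorted lists rather than code, but the underlying idea can be sketched in Python (assuming both lists share a key field; the names are illustrative): walk two sorted lists side by side instead of doing a lookup for every row.

```python
def merge_sorted(source, target, key=lambda r: r["id"]):
    """Compare two lists in a single pass each instead of looking up
    the matching target row for every source row."""
    source = sorted(source, key=key)
    target = sorted(target, key=key)
    matches, missing = [], []
    i = 0
    for s in source:
        # Advance the target pointer until it reaches the current source key.
        while i < len(target) and key(target[i]) < key(s):
            i += 1
        if i < len(target) and key(target[i]) == key(s):
            matches.append((s, target[i]))  # present in both lists
        else:
            missing.append(s)               # only present in the source list
    return matches, missing

# Example: which imported rows already exist in the application data?
imported = [{"id": 1}, {"id": 3}, {"id": 4}]
existing = [{"id": 1}, {"id": 2}, {"id": 4}]
both, new_rows = merge_sorted(imported, existing)
print(len(both), len(new_rows))  # 2 1
```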
Working with large amounts of data or look-ups in large nested lists inside a web service flow is a common cause of performance issues. Often, this data doesn't need to be calculated in real time and on the spot, but can be gathered and prepared beforehand. By pre-processing the data, the web service flow only needs to package and send the result, making responses faster and preventing timeouts. The image shows an example of this: the web service ran into a timeout before a response could be sent. Instead of preparing this data as part of the web-service flow, the data should already be gathered and calculated, so the web-service flow only consists of packaging and sending the data.
The copying of this data can also be done with an export node followed by an import node, preferably using the JSON format, which allows for nested lists with little overhead. The prepared data is exported to JSON, which is then imported into the web-service list. This is much easier than using a loop node and adding rows to the output list.
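As a rough analogy in Python (not WEM's actual export and import nodes), JSON keeps nested structures intact with very little overhead, so the prepared data can be moved in one step instead of row by row:

```python
import json

# Prepared data containing a nested list - something flat formats handle poorly.
orders = [
    {"id": 1, "customer": "Acme", "lines": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"id": 2, "customer": "Globex", "lines": [{"sku": "C", "qty": 5}]},
]

exported = json.dumps(orders)    # "export node": one serialized payload
imported = json.loads(exported)  # "import node": restored into the target list
assert imported == orders        # nothing lost, no row-by-row copying needed
```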
Like web services, interaction nodes can benefit from preparing data instead of using calculated fields. Using calculated fields in labels or as part of a data grid can heavily impact performance, especially when a calculated field uses other calculated fields that in turn use list expressions on other lists; such chains quickly become hard to follow and difficult to analyze. The Template Performance Profiler widget can help pinpoint the culprit.
Every time a calculated field is used or shown, its value is calculated on the spot, even though the resulting value may not change that often. When this is the case, it is often preferable to calculate the value once a day or every few hours and store it temporarily. This temporarily stored field can then be used in your interaction template, greatly reducing the calculations that need to be done when loading the page. Alternatively, you can use the calculated field to update another persistent field within the list whenever a specific row is added or updated. Doing the complex calculation once per row and storing the result helps avoid performance hits when the value is needed in overviews of large collections. This idea may conflict with pure data-normalization strategies, but it will make users happier.
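A minimal Python sketch of the idea (field and function names are hypothetical): do the expensive calculation once when the row changes or when the stored value has gone stale, store the result, and let pages read the stored value instead of recalculating it on every view.

```python
from datetime import datetime, timedelta

CACHE_TTL = timedelta(hours=4)

def expensive_calculation(row):
    # Stands in for a heavy calculated-field expression over other lists.
    return sum(line["qty"] * line["price"] for line in row["lines"])

def cached_total(row):
    """Return the stored value, recalculating only when it is stale."""
    now = datetime.now()
    if row.get("total_cached_at") is None or now - row["total_cached_at"] > CACHE_TTL:
        row["total"] = expensive_calculation(row)  # done once, not per page view
        row["total_cached_at"] = now
    return row["total"]

order = {"lines": [{"qty": 2, "price": 9.5}, {"qty": 1, "price": 4.0}]}
print(cached_total(order))  # calculated and stored
print(cached_total(order))  # served from the stored value
```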
Pre-calculations like these can be performed using scheduled tasks or asynchronous tasks (available in version 4.2+ for private cloud environments).
In summary:

- Use the Template Performance Profiler to find template issues.
- Leverage the Performance Profiler and log analysis for flowchart and web-service issues.
- Optimize data processing and avoid calculated fields when they are not necessary.
- Use batch processing, GoToRow optimizations, and SQL-compatible expressions for improved performance.
- Restrict the memory usage of the application.
WEM No-Code can provide dedicated performance support through professional services, or you can use the public forum to start a more general performance discussion.