Our Maximo system may slow down several years after it is installed or upgraded. Start Centers may take a little longer to load. Creating and retrieving Asset records, Service Requests, Incidents or Work Orders may take a little longer. As we continue to use Maximo and ask more of it, Maximo continues to provide, but must spread its underlying resources more thinly. There are some quick wins to be had here, and the same checks will surface any deeper performance issues that need to be addressed. A slow Maximo does not necessarily need more hardware.
We must keep in mind that Maximo is a family of enterprise-level software solutions built on top of an intricate underlying database and server infrastructure. One of Maximo's strengths is that it includes best-practice applications for managing many business processes and use cases. As we add more business, more users, more transactions, and more data, Maximo will allow the expansion, but will queue our requests against whatever storage and processing power we last gave it.
Looking only at data, and depending on our use, Maximo can store thousands or even millions of records, limited only by the amount of storage provided and how we have arranged it. Any large-scale enterprise database needs to be monitored for health and tuned regularly, just like our fleet vehicles, HVAC systems or roads. If you don't clean the filter on your truck or furnace, you will soon have performance and reliability issues. Most of us have had older computers that at some point became slow, often because the hard drive was full or fragmented. Our enterprise systems are likewise susceptible: after years of operation, unmanaged storage will slow down the system. Maximo needs regular health checks of its storage and database, with tuning for the best operation; this is a performance quick win.
Server File Storage health
Our Maximo servers are susceptible to running low on disk space. This can lead to fragmentation of server data, and eventually to failure. Once the storage is full, continued use will still demand more space for things such as system logs. As with our home computers, we must consider not only free disk space but also fragmentation of the data, and we should note that one symptom of low free space is more frequent fragmentation.
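The kind of monitoring involved here is easy to script. The Python sketch below checks the free-space fraction of a filesystem; the `check_free_space` name and the 20% threshold are illustrative assumptions, not Maximo settings, and the right threshold should come from your infrastructure specialist.

```python
import shutil

def check_free_space(path, min_free_fraction=0.20):
    """Return (ok, free_fraction) for the filesystem holding `path`.
    The 20% threshold is illustrative; the right figure should come
    from your infrastructure specialist and be maintained over time."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return free_fraction >= min_free_fraction, free_fraction

ok, frac = check_free_space(".")
print(f"free space: {frac:.1%} (above threshold: {ok})")
```

A check like this, run on a schedule and wired to an alert, keeps the free-space conversation ahead of the failure rather than behind it.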
What does Fragmented Storage mean?
Imagine our storage as a chain of bins, some empty (U) and some full (O). In our example there are 6 empty bins and 26 full ones: the first two bins are empty, and the remaining four empty bins are scattered through the chain. If we then need to store new data that consumes 4 bins in total, we must put the first 2 parts in the first two bins, and then store the rest in 2 more isolated bins.
This new data would be discontinuous, or fragmented. In this case, fragmented storage means individual files that are stored in disconnected chains, which makes storage, retrieval, and updates slower.
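The bin-chain picture above can be sketched in a few lines of Python. This is purely illustrative: a first-fit allocator places a 4-bin file into the 6 free bins of an example chain matching the description, and we count how many discontinuous fragments result.

```python
def allocate(bins, size):
    """First-fit allocation: write one file of `size` units into the
    free bins ('U'), marking them 'N'. Returns the bin indices used;
    the file is fragmented if those indices are not consecutive."""
    used = []
    for i, b in enumerate(bins):
        if b == 'U':
            bins[i] = 'N'
            used.append(i)
            if len(used) == size:
                break
    return used

# The example chain: 32 bins, 6 empty (U) and 26 full (O)
chain = list("UU" + "O" * 10 + "U" + "O" * 10 + "U" + "O" * 6 + "UU")

used = allocate(chain, 4)
fragments = 1 + sum(1 for a, b in zip(used, used[1:]) if b != a + 1)
print("bins used:", used, "-> fragments:", fragments)  # 3 fragments
```

The 4-bin file ends up in three separate pieces, which is exactly the extra seek-and-reassemble work that makes fragmented storage slower.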
Some fragmentation is a normal consequence of system usage. The rate of fragmentation depends on how much free space is available and on how the storage is allocated and used. There are many rules of thumb for how much free space should be kept, but it varies by installation and should be set by the appropriate infrastructure specialist. After that, the determined amount of free space should be maintained going forward, according to use.
Defragging the server storage can return much wasted space on its own. Even if space is not an issue, fragmented storage can quickly slow down data access, so regular defragging of the server hard drive is good practice in any case.
The amount of space can also be an issue. The server should always have free space to grow. Some files will remain limited by their file definition, but others can grow until there is no space available. It is wise to clean up old log files once they have outlived their value.
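A log housekeeping job along these lines is easy to script. The sketch below (the `purge_old_logs` name and 90-day retention are illustrative assumptions) deletes `.log` files past a given age; the actual retention period should follow your own audit and support requirements.

```python
import os
import tempfile
import time

def purge_old_logs(log_dir, max_age_days=90, suffix=".log"):
    """Delete log files older than max_age_days; returns the names
    removed. The 90-day retention is illustrative, to be matched to
    your own audit and support requirements."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        if name.endswith(suffix) and os.path.isfile(path) \
                and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed

# Demonstration on a throwaway directory: one stale log, one fresh log
demo_dir = tempfile.mkdtemp()
for name, age_days in (("old.log", 200), ("new.log", 1)):
    path = os.path.join(demo_dir, name)
    open(path, "w").close()
    os.utime(path, (time.time() - age_days * 86400,) * 2)

removed = purge_old_logs(demo_dir)
print(removed)  # ['old.log']
```

Run on a schedule, a job like this keeps log growth from quietly consuming the free space the server needs.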
Database Storage health
Database Management Systems (DBMS, or simply DB) store data as a complex set of data constructs within a set of files on your database server. From outside the database we can talk about the files themselves. From the inside we have to consider how space is allocated to the individual tables and other constructs that make up the database. From both points of view, databases are susceptible to running low on space and to fragmentation, depending on which version of which DBMS product you have.
What does a Fragmented Database mean?
Within the files that make up the DBMS, storage is partitioned and allocated to the tables of the database schema. Within these allocations, the table data can become fragmented in the same way, and increasingly so when space is low. Individual table rows or records can span multiple discontinuous bins. Again, this slows down the creation, retrieval, and update of data in those tables.
Defragging the tables and table spaces can return much wasted space on its own. Even if space is not an issue, fragmented tables can quickly slow down data access, so regular defragging of the database across its structure is good in any case.
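The exact defrag commands are DBMS-specific (for example, `REORG TABLE` in DB2, or rebuilding a table and its indexes in Oracle), so the sketch below uses SQLite, which ships with Python, only to illustrate the principle: after deleting rows, rebuilding the storage reclaims the space the deletions left behind. The table name and sizes are invented for the demonstration.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(db)
con.execute("CREATE TABLE workorder (id INTEGER PRIMARY KEY, descr TEXT)")
con.executemany("INSERT INTO workorder (descr) VALUES (?)",
                [("x" * 500,) for _ in range(5000)])
con.commit()
before = os.path.getsize(db)

# Delete every other row, leaving holes scattered through the pages
con.execute("DELETE FROM workorder WHERE id % 2 = 0")
con.commit()

# VACUUM rebuilds the database file, returning the freed pages
con.execute("VACUUM")
con.close()
after = os.path.getsize(db)
print(f"before: {before} bytes, after VACUUM: {after} bytes")
```

The same idea, under different command names, is what a DBA's regular reorg jobs do for the Maximo schema.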
The amount of space can also be an issue. The table spaces and tables themselves should always have free space to grow with use. It is wise to confirm table usage with the business so that unintended DB growth is not ignored. If you only use certain Maximo functions, the space can become skewed: full in some areas and empty in others. We should balance table free space across the DB according to expected business usage, reallocating unused space from idle tables to those with intentionally high use.
If we have any individual tables with expected high data volume, we may consider adding a database index to the table, to improve its individual performance.
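To illustrate what an index buys, the SQLite sketch below (the table and column names echo Maximo conventions but are invented for the demonstration) shows the query plan switching from a full table scan to an index search once an index exists on the filtered column.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE asset (assetnum TEXT, siteid TEXT, descr TEXT)")
con.executemany("INSERT INTO asset VALUES (?, ?, ?)",
                [(f"A{i:06d}", "SITE1", "pump") for i in range(10000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM asset WHERE assetnum = 'A004242'"

# Without an index, the planner must scan all 10,000 rows
plan_before = con.execute(query).fetchall()
print(plan_before[0][-1])  # e.g. "SCAN asset"

con.execute("CREATE INDEX idx_asset_assetnum ON asset (assetnum)")

# With the index, the planner goes straight to the matching row
plan_after = con.execute(query).fetchall()
print(plan_after[0][-1])  # e.g. "SEARCH asset USING INDEX idx_asset_assetnum ..."
```

On a table with millions of rows, the difference between the two plans is the difference between a sub-second lookup and a lengthy scan.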
These actions are best done by a DBA, or by someone comfortable with the DBMS product in question (DB2, Oracle, MySQL, etc.).
More quick wins in Maximo Queries
If you have your database server and storage tuned regularly, performance may still be an issue. Are there queries within the Start Center, application List tab, reports, or elsewhere that do not account for current usage or current data volumes? Possibly you have some application with extremely large data volumes, or queries that are not optimized. In either case, analysis of the underlying queries may suggest that data filters, default queries or other simple Maximo configuration be used to reduce large table scans or complex joins. Whether in Maximo Database Configuration or directly in your DBMS, the use of indexes (unique where possible) specific to the exact search being done will save a query from lengthy table scans and return results with much less effort. Even the queries delivered out of the box can have issues when an individual use of Maximo pushes data volumes outside best-practice tendencies to meet a specific business need, and those queries too may benefit from tuning.
As well, Maximo includes the ability to easily create global data filters. Data that is many years old and closed may not be needed by most of the business, and can be hidden to reduce search times for the majority of users.
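The effect of such a filter can be sketched with SQLite (the table, statuses, and row counts are invented for the demonstration): a default WHERE clause that hides long-closed records means most users' queries touch only a fraction of the rows.

```python
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE workorder (wonum TEXT, status TEXT, statusdate TEXT)")
today = date.today()

# 900 work orders closed ~4 years ago, 100 still in progress today
old_closed = [(f"W{i:04d}", "CLOSE",
               (today - timedelta(days=1500)).isoformat()) for i in range(900)]
active = [(f"W{i:04d}", "INPRG", today.isoformat()) for i in range(900, 1000)]
con.executemany("INSERT INTO workorder VALUES (?, ?, ?)", old_closed + active)

# Illustrative global filter: hide records closed more than two years ago
cutoff = (today - timedelta(days=730)).isoformat()
visible = con.execute(
    "SELECT COUNT(*) FROM workorder "
    "WHERE NOT (status = 'CLOSE' AND statusdate < ?)", (cutoff,)
).fetchone()[0]
print(visible)  # 100 of the 1000 rows remain visible
```

Here nine-tenths of the history drops out of everyday searches while remaining in the database for the few who need it.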
A DBA or Maximo DB specialist can look at individual cases and suggest tuning options, drawing upon these and other possible adjustments.
What if the quick wins still leave performance issues?
If the quick-win solutions still leave us with performance issues, at some point we will have to ask the big questions. Do I have enough application servers for my user community? Depending on the age of your infrastructure, your Maximo version, and your usage, IBM recommends roughly 1 JVM per 50 concurrent users, with varying amounts of RAM and processing power per JVM, according to the install. In general, as a business grows in its IT consumption, it is only natural to add power to the install over time. If we have installed Maximo on virtual machines, this may not be as complex as it sounds.
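Using the 1-per-50 rule of thumb mentioned above, the sizing arithmetic is simple; `jvms_needed` is just an illustrative helper, and the answer is a starting point for discussion, not a substitute for load testing your own install.

```python
import math

def jvms_needed(concurrent_users, users_per_jvm=50):
    """Rough JVM count from the 1-JVM-per-50-concurrent-users rule
    of thumb. A starting point, not a substitute for load testing."""
    return math.ceil(concurrent_users / users_per_jvm)

for users in (50, 120, 400):
    print(users, "users ->", jvms_needed(users), "JVM(s)")
```

So a site growing from 120 to 400 concurrent users would, by this rule alone, go from 3 JVMs to 8, which is exactly the kind of growth planning the quick wins buy time for.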
It is also possible the issue is not the number of JVMs for the number of users. In some cases we may have background tasks running during business hours that affect online access. The Maximo architecture allows us to separate JVMs by function and isolate processing power by use; we can dedicate JVMs to specific workloads to reduce processing contention. Easier yet, maybe we can reschedule the background task to run outside of peak business hours (which sounds like another quick win).
Again, there are options that can be had without buying hardware, through infrastructure tuning instead. We recommend starting with the quick wins, as they generally deliver performance improvement for low effort and cost. If performance issues persist, we also recommend investigating the application server configuration. Regular due diligence will keep your system running well, with good response times, and will keep you aware well in advance of any larger potential growth needs.