Friday, April 23, 2010

The power of a single Service Management platform

When IBM acquired MRO and decided to build many of its Service Management products on top of the Maximo platform, it certainly benefited from a shared GUI and a lot of common functionality (Integration Framework, Escalations, and the security model, to name a few).

However, the main benefit of putting all these products (CCMDB, TSRM, TAMIT, TPM) on top of the Tivoli Process Automation Engine (TPAE) platform is that the data is shared among all the products. So instead of having to push CMDB data into a Service Desk tool, or push Incidents into a CMDB, the TPAE platform holds the data once for all these applications to use.

I can't think of a better way to build a CMDB-centric ITIL implementation than having all ITIL processes share the same data. The alternative is to push CMDB data into a Service Desk tool, an Asset Management tool, and so on, turning the project into an exercise in data shuffling.

The benefit is that, with an IBM-based solution, the ITIL implementation can evolve from one ITIL process to the next, and the task becomes installing and configuring each ITIL Process Manager, not worrying about how to move data around.

Thoughts?

How Tivoli Directory Integrator can be used in the federation of a CMDB

Tivoli Directory Integrator (TDI) is a fantastic and flexible data integration tool that can be used very effectively to implement data federation in a CMDB.

IBM's approach to CMDB federation is based on an XML file that conforms to an XML schema called the Identity Markup Language (IDML). TDI is a great and simple way to generate these XML files, as it abstracts all the nuances of the XML format into CI types and attributes. This video shows how easy it is to create an IDML file.
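
To make the idea concrete, here is a minimal Python sketch of what rendering a handful of CI records as an IDML-style XML file could look like. The element and attribute names (idmlBook, ci, attribute) are illustrative placeholders, not the real IDML schema; the point of TDI's connector is precisely that you never have to hand-craft this structure yourself.

    # Minimal sketch: CI records rendered as an IDML-style XML file.
    # Element/attribute names are illustrative, not the actual IDML schema.
    import xml.etree.ElementTree as ET

    cis = [
        {"type": "ComputerSystem", "serial": "ABC123", "ip": "10.0.0.5"},
        {"type": "ComputerSystem", "serial": "XYZ789", "ip": "10.0.0.9"},
    ]

    book = ET.Element("idmlBook", source="hr-inventory")  # hypothetical root element
    for ci in cis:
        node = ET.SubElement(book, "ci", type=ci["type"])
        ET.SubElement(node, "attribute", name="serialNumber").text = ci["serial"]
        ET.SubElement(node, "attribute", name="ipAddress").text = ci["ip"]

    ET.ElementTree(book).write("inventory.idml.xml", xml_declaration=True, encoding="utf-8")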

The power of TDI, though, doesn't come from the IDML Connector alone; it comes from the plethora of available connectors (JDBC, JMS, Notes DB, Web Service, HTTP, to name a few) and the flexibility it provides to connect many data sources without writing a single line of code. Want to create an IDML file with just the data that changed since the last run? You got it: in TDI it's just a few clicks away, using the Delta capability.
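
To illustrate what the Delta capability does conceptually, here is a small Python sketch that emits only the records that are new or changed since the previous run. The snapshot file name and record layout are hypothetical, and in TDI itself this is configuration rather than code.

    # Conceptual illustration of delta processing: emit only records that are
    # new or changed since the last run. File name and record layout are made up.
    import json, os

    SNAPSHOT = "last_run.json"  # hypothetical file holding the previous run's data

    def compute_delta(current):
        previous = json.load(open(SNAPSHOT)) if os.path.exists(SNAPSHOT) else {}
        delta = {key: attrs for key, attrs in current.items() if previous.get(key) != attrs}
        json.dump(current, open(SNAPSHOT, "w"))  # remember this run for next time
        return delta

    current = {"ABC123": {"ip": "10.0.0.5"}, "XYZ789": {"ip": "10.0.0.9"}}
    print(compute_delta(current))  # first run: everything; later runs: only the changes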

Another interesting aspect is that there are other TDI connectors available in the CMDB / Service Management space: the Maximo Generic Connector lets us interface with CCMDB without having to generate an IDML file, going instead through the Tivoli Integration Framework, and the TADDM Connector lets you create (and retrieve) CIs in TADDM, from where they can be imported into CCMDB.

Questions? Comments?

Federation and reconciliation go hand-in-hand in a CMDB implementation

When we talk about federation in a CMDB implementation, it's because there are multiple data sources that need to be combined into a single repository (even if the data is not physically combined into one single database).

Having multiple data sources implies that they will often refer to the same Configuration Item, raising the question of how to reconcile these records. Traditionally, CMDB products choose a primary key that is used to merge the data; however, my experience shows it's difficult to identify a single attribute that can be used universally to identify a CI type. As an example, although a computer's serial number might seem a good candidate, we have seen that many products don't provide that attribute and rely on the IP address as the identifier instead. The same argument can be made against the IP address and any other single attribute.

At IBM, we decided to take a different approach: instead of relying on one attribute, or a fixed collection of attributes, to uniquely identify a CI, we have defined several naming rules for each CI type and require that each data source provide the attributes of at least one rule. So if one product uses the serial number to identify a CI, another uses the serial number and the IP address, and a third one just the IP address, this approach is still able to determine when they represent the same CI.
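
A simplified Python sketch of the idea is below. The rules and attribute names are illustrative only; the actual naming rules in CCMDB/TADDM are prioritized and defined per CI class, but the principle of matching records whenever some rule produces the same name for both is the same.

    # Simplified sketch of reconciliation by naming rules: two records are
    # treated as the same CI when any naming rule yields the same name for both.
    # Rules and attribute names are illustrative, not the real CCMDB/TADDM rules.
    NAMING_RULES = {
        "ComputerSystem": [("serialNumber",), ("serialNumber", "ipAddress"), ("ipAddress",)],
    }

    def names_for(record):
        """All names a record can produce, one per rule whose attributes it has."""
        names = set()
        for rule in NAMING_RULES[record["type"]]:
            if all(attr in record for attr in rule):
                names.add(tuple((attr, record[attr]) for attr in rule))
        return names

    def same_ci(a, b):
        return bool(names_for(a) & names_for(b))

    asset_tool = {"type": "ComputerSystem", "serialNumber": "ABC123"}
    discovery  = {"type": "ComputerSystem", "serialNumber": "ABC123", "ipAddress": "10.0.0.5"}
    print(same_ci(asset_tool, discovery))  # True: they share the serialNumber-based name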

So the integration with IBM CCMDB is based on products providing data according to a common data model, through a component called a Discovery Library Adapter (DLA).

The alternative to this federation / reconciliation approach is to defer the reconciliation of the data until after it is imported into the CMDB, instead of doing it beforehand. Some products define the concept of multiple buckets of data that are later combined through a reconciliation mechanism. Although this approach simplifies the data loading process, it pushes the responsibility of federating onto the user and creates another hop in the data chain.

Next we'll talk about how we can use Tivoli products to assist in federating the data into a CMDB.

Federation capability on CMDB

Since the inception of the Configuration Management Database concept, there has been a notion that a CMDB needs to be a federated solution. But what qualifies as federation? The term is certainly overused and stretched in many directions.

Here is wikipedia.org's definition of federation in IT:

Federation is multiple computing and/or network providers agreeing upon standards of operation in a collective fashion. The term may be used when describing the inter-operation of two distinct, formally disconnected, telecommunications networks that may have different internal structures. The term may also be used when groups attempt to delegate collective authority of development to prevent fragmentation.


Well, it doesn't tell us a lot, other than that it's about multiple systems working together. I see two distinct ways to achieve such interoperability:



  • data integration, traditionally called ETL (extract, transform and load), where data from one data source is physically moved to another. Data integration has the benefit that all the data ends up concentrated in one single repository, which addresses an intriguing question about a CMDB: without bringing the data together, how can we build a CMDB if the data sits in an external data source without any kind of Change Management process to update it?
  • using tools like IBM InfoSphere Federation Server to portray the data as a single repository, even though it physically resides in many data sources. The main advantage of this approach is that the data doesn't need to be synchronized and is retrieved only when needed. It does raise the issue of how to keep track of CMDB changes when there are many data sources and the associated products update them at will (a toy contrast of the two approaches follows this list).
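
Here is that toy Python contrast, using SQLite stand-ins for the real data sources; every table, column, and value is made up for illustration.

    # Toy contrast: (1) ETL copies the rows into the CMDB store;
    # (2) federation leaves the data in place and queries it on demand.
    import sqlite3

    source = sqlite3.connect(":memory:")  # stands in for an external data source
    source.execute("CREATE TABLE assets (serial TEXT, ip TEXT)")
    source.execute("INSERT INTO assets VALUES ('ABC123', '10.0.0.5')")

    # (1) Data integration / ETL: physically move the data into the CMDB.
    cmdb = sqlite3.connect(":memory:")
    cmdb.execute("CREATE TABLE ci (serial TEXT, ip TEXT)")
    cmdb.executemany("INSERT INTO ci VALUES (?, ?)",
                     source.execute("SELECT serial, ip FROM assets"))

    # (2) Federation: query the owning source only when the data is needed,
    #     the way a federation server presents remote tables as local views.
    def federated_lookup(serial):
        return source.execute("SELECT ip FROM assets WHERE serial = ?", (serial,)).fetchone()

    print(cmdb.execute("SELECT * FROM ci").fetchall())  # the copied data
    print(federated_lookup("ABC123"))                   # fetched live from the source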

My recommendation for a CMDB is to strike a balance between data integration and real-time federation, where key attributes are brought into a centralized database and unsupervised or volatile attributes are kept in their original data sources.
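
As a rough sketch of that balance (with hypothetical tables, columns, and a stubbed-out discovery call), the stable, change-controlled attributes live in the central CMDB while a volatile attribute is fetched from its owning source only when the CI is requested:

    # Sketch of the hybrid: key attributes in the central CMDB, volatile
    # attributes fetched live from their owning source. All names are made up.
    import sqlite3

    cmdb = sqlite3.connect(":memory:")
    cmdb.execute("CREATE TABLE ci (serial TEXT PRIMARY KEY, hostname TEXT)")
    cmdb.execute("INSERT INTO ci VALUES ('ABC123', 'web01')")

    def live_ip(serial):
        """Stand-in for a real-time call to the discovery/monitoring source."""
        return {"ABC123": "10.0.0.5"}.get(serial)

    def get_ci(serial):
        serial, hostname = cmdb.execute(
            "SELECT serial, hostname FROM ci WHERE serial = ?", (serial,)).fetchone()
        return {"serial": serial, "hostname": hostname, "ip": live_ip(serial)}

    print(get_ci("ABC123"))  # stable attributes from the CMDB, the IP fetched live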

Next I will talk about how reconciliation relates to federation and the CMDB.