
Choosing Service Mapping Technology

Critical aspects to be carefully evaluated

For companies that want to understand the internal relationships between their assets and services, here is a guide, based on our experience, to the critical aspects of a service mapping technology that could jeopardise the success of a mapping project.

Removing guesswork is the task of the Configuration Management process, which relies on populating the CMDB with the relationships between the Configuration Items (CIs) that support a service. Service Mapping technology is a key element in ensuring that this population happens completely and automatically.

It answers the question "what has changed, or what will change?" when a service interruption occurs or a change has to be planned. By mapping the relationships between assets and services, it becomes easy to identify stakeholders at multiple levels and to understand the causes of incidents, so that service availability can be restored quickly or the potential risks of any change to the IT infrastructure can be analysed.
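To make this concrete, here is a minimal sketch in Python of how the dependency relationships stored in a CMDB can be traversed to answer "which CIs and services are impacted if this component fails or changes?". The CI names and the dictionary-based data model are invented for illustration and do not represent any specific product's schema.

```python
from collections import deque

# Hypothetical "depends on" edges between Configuration Items (CIs):
# service -> application -> hosts -> network device.
DEPENDS_ON = {
    "crm-service":  ["crm-app"],
    "crm-app":      ["db-cluster", "web-frontend"],
    "web-frontend": ["vm-web-01", "vm-web-02"],
    "db-cluster":   ["vm-db-01"],
    "vm-web-01":    ["switch-07"],
    "vm-web-02":    ["switch-07"],
    "vm-db-01":     ["switch-07"],
}

def impacted_by(failed_ci: str) -> set[str]:
    """Return every CI (and ultimately every service) that depends,
    directly or transitively, on the failed CI."""
    # Invert the edges: who depends on whom.
    dependents: dict[str, list[str]] = {}
    for ci, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(ci)

    impacted, queue = set(), deque([failed_ci])
    while queue:
        current = queue.popleft()
        for parent in dependents.get(current, []):
            if parent not in impacted:
                impacted.add(parent)
                queue.append(parent)
    return impacted

if __name__ == "__main__":
    # A change planned on switch-07 touches every VM, application and service above it.
    print(impacted_by("switch-07"))
```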

The availability of data provided by Service Mapping technology allows security teams and IT Operations to improve service management: on the one hand by reducing response times, and on the other hand by ensuring that new services and configurations are put into production without impacting operations.

If it is true that the flap of a butterfly's wings can cause a hurricane on the other side of the world, then in much the same way small variations in initial conditions - sometimes 'peripheral' or not directly correlated - can produce large variations in the long-term behaviour of the entire IT system.

Discovering these invisible threads, besides enriching the CMDB and the ticket and change management tools with useful information, can also be the starting point for reviewing one's own service mapping: starting from the problems encountered, the corporate Service Catalog can be structured more consistently with business needs, deciding which new services to put into production and which to withdraw or modify.

Service Mapping: 8 critical elements

Let’s see them together:

  • Discovery not designed for a hybrid IT environment

Let us start from a principle: it is possible to have discovery without service mapping, but it is not possible to have service mapping without discovery. Typical discovery software (e.g. Lansweeper, OCS Inventory, GLPI) provides an inventory of assets and some high-level relationships. True service mapping software goes further, discovering in-depth relationships between assets, including hardware, applications and websites. These relationships are then represented in dynamic maps that should adapt automatically as the environment changes.

Today we operate in hybrid IT environments, so the service mapping technology used should be based on complete discovery data for data centre, edge and cloud, without wasting time and money integrating inventories from multiple systems. Where probes and the CMDB fall short in the extent or quality of data, it must be possible to integrate via APIs without having to maintain expensive SQL databases, especially if cloud dependencies need to be known, since many services now rely on both physical and virtual resources. We have discussed this in detail in another article.
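As a sketch of what such an API-based integration can look like, discovered cloud resources could be pushed straight into the mapping tool over REST instead of being staged in a separate SQL database. The endpoint, token handling and field names below are hypothetical, not a specific vendor's API.

```python
import requests

# Hypothetical ingestion endpoint of the service mapping tool.
MAPPING_API = "https://mapping.example.com/api/v1/cis"
API_TOKEN = "***"  # in practice, read from a secret store

def push_discovered_ci(ci: dict) -> None:
    """Send a single discovered resource to the mapping tool via its REST API."""
    response = requests.post(
        MAPPING_API,
        json=ci,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

# Example: a cloud VM discovered through the provider's own inventory API.
push_discovered_ci({
    "name": "vm-web-01",
    "type": "virtual_machine",
    "environment": "cloud",
    "relations": [{"type": "runs_on", "target": "aws-eu-west-1"}],
})
```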

  • Inflexible and time-consuming implementations


Cloud and on-prem: every organisation has different environments, so the same Service Mapping technology should be able to adapt easily to the servers available. It should also be fast to deploy: a discovery data collector that can be activated in agentless mode is already an added value, avoiding the struggle of downloading and deploying agents that in the long run need maintenance and updates.

Then one must also evaluate the number of discovery servers required to support the technology: a web-based solution simplifies the deployment process because it does not require additional servers (one is sufficient!). Some technologies require the installation of additional components (e.g. management servers) to support the discovery of IT resources, a process that may take a long time to implement.

  • Use of tools with manual processes

As much as service mapping technologies provide automated tools for mapping IT services and relationships, they may still require some degree of manual input and validation, such as the definition of relationship criteria, to ensure the accuracy and completeness of the relationship maps.

Entrusting the mapping and recording of service topologies to manual processes exposes them to errors, inconsistencies and longer times to reach a quality output, because they have to be continually revised; service mapping technologies that use machine learning techniques can instead provide the accuracy needed to generate reliable maps.

Machine learning algorithms can be used to identify patterns and correlations between different IT components and business services, to create predictive models, and to exploit continuous feedback from real-time data and user interactions to keep improving the learning models employed. When new resources are discovered, AI can classify them automatically and add them to the CMDB.
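As a toy illustration of this idea, assuming a classifier is trained on fingerprints of resources already present in the CMDB, a newly discovered host could receive a proposed CI class before it is written to the CMDB. The fingerprints, class labels and model choice below are invented for the example; real products learn from far richer telemetry.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: fingerprints of already-classified resources
# (open ports, running processes) and their CI class in the CMDB.
fingerprints = [
    "port:3306 process:mysqld",
    "port:5432 process:postgres",
    "port:80 port:443 process:nginx",
    "port:8080 process:java process:catalina",
    "port:443 process:httpd",
]
ci_classes = ["database", "database", "web_server", "app_server", "web_server"]

model = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"), LogisticRegression())
model.fit(fingerprints, ci_classes)

# A newly discovered resource gets a proposed class before being written to the CMDB.
new_resource = "port:443 process:nginx"
print(model.predict([new_resource])[0])   # e.g. "web_server"
```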

  • Algorithms not trained on reliable sources

Here we connect to the previous point: if we use service mapping technologies that incorporate machine learning algorithms, these must be trained on reliable sources. The technology for discovering assets and services must therefore be based on a reputable application dictionary, one that ensures the right depth of detail and a constant update frequency covering all the evolutions affecting IT configurations.

In this regard, being able to see directly in the maps which assets have known vulnerabilities and what their impact on the service level is, is an added value. Training the algorithms on application dictionaries that incorporate and use data from the National Vulnerability Database (NVD), the repository of information on IT vulnerabilities managed by the US government, therefore provides actionable insights on how to prioritise interventions.
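For example, assuming the mapped inventory exposes the CPE identifiers of discovered software, a minimal sketch of cross-referencing them against the public NVD REST API (v2.0) might look like this; how the results are then attached to the maps depends on the specific product.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves_for(cpe_name: str, limit: int = 5) -> list[str]:
    """Query the NVD for CVEs affecting a product identified by its CPE name."""
    response = requests.get(
        NVD_API,
        params={"cpeName": cpe_name, "resultsPerPage": limit},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

# Example: CVEs known for a specific OpenSSL build discovered on a mapped host.
print(known_cves_for("cpe:2.3:a:openssl:openssl:1.1.1k:*:*:*:*:*:*:*"))
```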

  • Data cannot be brought out

Another aspect that everyone is aware of, but about which too little is said, is data lock-in. There are conditions under which a company finds it difficult to move its data from the Service Mapping system to platforms from other vendors: because of the proprietary structure used to store and manage the data, because of the contractual conditions or costs that govern extractions, or because the vendor's services and products are so tightly integrated that migrating to different platforms means losing functionality or interoperability.

Strict constraints on data portability limit the freedom of choice, should one tomorrow wish to engage in projects involving other technologies. With some vendors, one must already budget for training and adaptation costs, as they invest in creating a strong user base and internal integrations.
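One simple thing worth checking before committing is whether the platform can export nodes and relationships in neutral, open formats. A minimal sketch of what such an export could look like; the file names and fields are invented for the example.

```python
import csv
import json

# Hypothetical neutral export of CI data, so it can follow you to another platform:
# one JSON file for the nodes, one CSV file for the relationships.
nodes = [
    {"id": "crm-app", "type": "application"},
    {"id": "vm-db-01", "type": "virtual_machine"},
]
edges = [{"source": "crm-app", "relation": "runs_on", "target": "vm-db-01"}]

with open("cis.json", "w") as f:
    json.dump(nodes, f, indent=2)

with open("relations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source", "relation", "target"])
    writer.writeheader()
    writer.writerows(edges)
```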

  • Overlaps are created

With Service Mapping and the CMDB (but also any ITSM tool) operating independently, data may be duplicated or overwritten across the two systems. IT components such as servers, applications or network devices may have been identified and registered separately even though they refer to the same CI. If the systems are not synchronised in real time or updated regularly, discrepancies and overlaps can appear whenever the IT infrastructure changes. For this reason, it is better to rely on technologies that can automatically reconcile overlapping records between the service maps, the ITSM products and the CMDB, so that there is a single, aligned source of truth; otherwise, data maintenance and cleaning only increase the workload of IT staff.
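A minimal sketch of the kind of reconciliation logic involved, with hypothetical matching rules (serial number first, hostname as a fallback), just to illustrate why it is better handled by the platform than by hand:

```python
def reconcile(discovered: list[dict], cmdb_records: list[dict]) -> list[dict]:
    """Merge freshly discovered CIs into existing CMDB records so the same
    device is never registered twice (illustrative matching rules only)."""
    by_serial = {r["serial"]: r for r in cmdb_records if r.get("serial")}
    by_name = {r["hostname"].lower(): r for r in cmdb_records if r.get("hostname")}

    for ci in discovered:
        match = by_serial.get(ci.get("serial")) or by_name.get(ci.get("hostname", "").lower())
        if match:
            match.update(ci)           # refresh the existing record in place
        else:
            cmdb_records.append(ci)    # genuinely new CI
    return cmdb_records

existing = [{"serial": "SN-123", "hostname": "vm-web-01", "os": "ubuntu"}]
found = [
    {"serial": "SN-123", "hostname": "vm-web-01", "ip": "10.0.0.5"},
    {"serial": "SN-999", "hostname": "vm-db-02"},
]
print(reconcile(found, existing))   # one updated record, one new record
```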

  • There is a lack of clarity in the presentation of the maps

The two dimensions of the information displayed in the maps, SCOPE (which entities one wishes to manage) and DETAIL (the depth of detail about the managed entities), must be manageable flexibly: the possibility of creating filters to focus only on the resource types of interest, or of organising the level of detail and the nature of the relationships with labels, colour coding and grouping, makes the relationships themselves much clearer to understand.
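A minimal sketch of how SCOPE and DETAIL filters might be applied to a flat list of mapped CIs; the node attributes and filter criteria are invented for the example.

```python
# Hypothetical flat list of mapped CIs with type and criticality metadata.
MAP_NODES = [
    {"name": "crm-service", "type": "service", "critical": True},
    {"name": "vm-web-01", "type": "virtual_machine", "critical": False},
    {"name": "switch-07", "type": "network_device", "critical": True},
    {"name": "printer-3f", "type": "peripheral", "critical": False},
]

def filter_map(nodes, scope=None, only_critical=False):
    """Restrict the displayed map by SCOPE (CI types of interest) and DETAIL
    (here simplified to a 'critical only' flag)."""
    result = nodes
    if scope:
        result = [n for n in result if n["type"] in scope]
    if only_critical:
        result = [n for n in result if n["critical"]]
    return result

# Show only services and network devices, and only the critical ones.
print(filter_map(MAP_NODES, scope={"service", "network_device"}, only_critical=True))
```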

In addition to maps, a valuable aid to understanding could be dashboards and charts that provide a 'glimpse' into key metrics and data related to IT infrastructure and business services. These could include information on performance, availability, security, compliance and more.

  • High licence costs

Most service mapping solutions require a licence for each discovered device: for companies with a large IT infrastructure or a large number of devices to be monitored, the final bill could be quite high.

This is why it is preferable to opt for vendors that, for the same functionality offered, charge only for the operating systems of the discovered servers (or of the virtual machines on which the software can be installed), with advantages in terms of cost transparency: licence costs then reflect only the resources actually used.
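A back-of-the-envelope comparison, with purely illustrative prices and volumes, shows why the licensing model matters as the infrastructure grows:

```python
# Illustrative figures only: compare per-device licensing with per-OS licensing.
devices_discovered = 2500    # every discovered device (switches, printers, IoT, ...)
server_os_instances = 180    # servers / VMs where the discovery software actually runs
price_per_device = 8         # hypothetical yearly price per discovered device
price_per_os = 60            # hypothetical yearly price per server OS instance

per_device_bill = devices_discovered * price_per_device   # 20,000 / year
per_os_bill = server_os_instances * price_per_os          # 10,800 / year

print(f"per-device licensing: {per_device_bill}, per-OS licensing: {per_os_bill}")
```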

We will go into all these aspects in more detail in the webinar "Service Mapping: invisible technology at the IT services dance" (here, in Italian only) scheduled for 28 February, where we will use concrete use cases to look at a Service Mapping technology that, on the one hand, has the functionality needed to uncover the relationships between assets and services and, on the other, can be implemented without requiring costly investments or integrations.

Article by Francesco Clabot, WEGG's CTO and ITSM lecturer at the University of Padua.


Would you like to evaluate the best Service Mapping solutions?

CONTACT US TO LEARN MORE!