ITIL’s Last Gasp? It may be more relevant than ever

Recently Charles Araujo penned an article raising the question of whether we were seeing ITIL’s last gasp.

I thought it was a very good and thought-provoking post, and my immediate conclusion, expressed in a tweet, was that what ITIL prescribes is still a prerequisite to positioning an organization to adopt concepts such as DevOps and Digital Transformation.

Troy DuMoulin responded to Araujo’s article with a spot-on reminder of the ‘Lean components of value’ – Quality, Speed and Cost – and ITIL’s role in ensuring those parts that might seem less sexy in light of the current focus on DevOps and Digital Transformation. He emphasizes that ITIL covers the entire set of capabilities for creating value and reminds us that approaches such as DevOps and Agile do not do this, nor do they aspire to.

This is what we continue to find: many organizations still have to work on their blocking and tackling before they can seriously consider approaches such as DevOps. In our experience, the precepts laid out in ITIL remain the best source for laying this groundwork. The ability to deliver consistent Quality at an acceptable Cost is a requirement that must be met before focusing on Speed.

There are some data from the Service Management software market that back this up. ServiceNow estimates the current ITSM software market as about $1.5 Billion, and has set goals to grow their market to $4 Billion by 2020. I think a good part of these sales, present and future, represent IT organizations investing in a Service Management platform that will allow them to take care of the fundamentals.

In a few recent posts, I have been waxing nostalgic about the old ITIL v2 approach to these ‘fundamentals.’ One of the areas I continue to emphasize is the old ‘Blue Book’ approach to ‘Release and Control’ – the triad of the Change, Release and Configuration processes. Having solid control over your environment is an absolute requirement before you can consider automating and accelerating your delivery through the service lifecycle.

In fact, one of the books I still frequently recommend is the old Visible Ops book. Even in SaaS and cloud environments, the message of that volume is critical to setting the stage for considering how to automate the value chain:

  • Stabilize and Control your environment using Change Mgt.
  • Identify your CIs – particularly your ‘fragile’ CIs
  • Build a Repeatable Build Library

Once this level of control is in place, we have defined a design and transition method that we call Service Onboarding. While not specifically covered in the ITIL volumes, this is a unified approach to the Service Design and Service Transition stages of the ITIL lifecycle. It expands on the concept of the ‘repeatable build library’ to standardize the components and steps for building and transitioning new IT Services. If you will, it standardizes ITIL’s Service Design Package and defines the steps to actualize Service Design in a consistent manner throughout the organization.

The point is, if these principles that cover the entire lifecycle are not in place, you will not be ready to realize the benefits of automating the Development through Change and Release processes using DevOps. These fundamentals are absolute requirements, and ITIL is still the best source for them.

In our experience at Service Catalyst, DevOps and Digital Transformation have made ITIL more necessary and relevant than ever. The sales figures from ServiceNow cited above would seem to support that.

We would love to discuss with you how you might lay the groundwork to realize the benefits of DevOps automation and make the turn toward Digital Transformation. You can contact us or call us at +1.888.718.1708.

Introduction to Asset Management

Asset management is the fiscal and physical tracking of purchases made by an organization.  This includes the requisition, deployment, maintenance and disposal of products.  In its simplest form, it means tracking what has been bought, where it is in use and how much it has cost.  As the process matures, asset management will include the tracking of depreciation as well as the governing contracts and agreements that facilitate relationships with vendors.  The importance of asset management starts with its most basic functions: knowing what is in stock and where to find what has been deployed.

Physical knowledge of equipment enables an IT organization to set reorder points as well as identify potential sources of theft.  Armed with these two abilities, an IT organization can make sure that it is not over- or under-procuring items such as laptops and desktops.  This simple measure will keep the customer base happy when it comes time to onboard a new employee or upgrade an existing one’s equipment.  The organization will also be able to track whether someone is requesting new devices too frequently.

While asset management is concerned with what is on hand, the sister process of configuration management is more concerned with the technical build of a device.  For example, configuration management tracks how much memory or what the CPU power is of a device.  As such, it is easy to think of asset management as the process that ties in to incident and request management while configuration management ties in to the change management process.

One of the best places to start with asset management is deciding what needs to be managed.  The next question, then, is what should be considered an asset versus a consumable.  To start, let’s define a consumable.  A consumable is anything purchased for which depreciation does not need to be tracked; another way to think of it is that the purchased item will NOT be asset tagged.  Any guiding principle about the dollar value at which this definition applies really needs to be addressed internally with the financial and audit teams at a company.
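The asset-versus-consumable decision can be sketched as a simple rule. This is a minimal illustration only: the $500 threshold and the field names are hypothetical placeholders, since the real cutoff must come from your own finance and audit teams.

```python
# Hypothetical sketch: classify a purchase as an asset or a consumable.
# The threshold value is illustrative, not a recommendation.
ASSET_THRESHOLD_USD = 500

def classify_purchase(cost: float, depreciation_tracked: bool) -> str:
    """Return 'asset' if the item will be tagged and depreciated, else 'consumable'."""
    if depreciation_tracked or cost >= ASSET_THRESHOLD_USD:
        return "asset"       # receives an asset tag; depreciation is tracked
    return "consumable"      # tracked only by count; no asset tag

print(classify_purchase(1200.00, True))   # a laptop
print(classify_purchase(25.00, False))    # a wireless mouse
```

The point of encoding the rule, even informally, is that the threshold becomes an explicit, auditable decision rather than tribal knowledge.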

With these decisions in hand, the next point of identification is which classes will be tracked.  These selections also need to be guided by internal processes, but a simple way to start is to ask how the data needs to be reported.  Is it at the device level?  We have 200 computers and 100 servers.  Or is it classified further?  We have 150 laptops, 50 desktops, 75 Windows servers and 25 Unix boxes.  If there is not a strong sentiment toward one of these approaches, but there is a feeling that in the future you’d like to be able to do the latter, it is best to design and set up with the latter structure.  For example, the ServiceNow CMDB has a computer class.  This is a good time to create desktop, laptop, and tablet classes as children of the computer class; they will be sibling classes to one another.
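The parent/child structure described above can be modeled in a few lines. This is a conceptual sketch, not the actual ServiceNow table hierarchy; the class and count values are illustrative.

```python
# Minimal model of a CMDB-style class hierarchy: "Computer" is the parent,
# and Desktop/Laptop/Tablet are sibling child classes.
class CIClass:
    """A CMDB class node with an optional parent class."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

computer = CIClass("Computer")
desktop = CIClass("Desktop", parent=computer)
laptop = CIClass("Laptop", parent=computer)
tablet = CIClass("Tablet", parent=computer)

# Reporting can roll up at either level: detailed counts per child class,
# or a total at the parent "Computer" level.
counts = {"Desktop": 50, "Laptop": 150, "Tablet": 0}
total_computers = sum(counts[c.name] for c in computer.children)
print(total_computers)  # 200 computers overall
```

Designing the child classes up front costs little, while flattening a device-level-only structure into classes later is far more painful.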

As data is brought into ServiceNow, these organizational units will help with reporting.  Unlike physical asset models, which are tracked individually, consumables will be class agnostic.  The models for consumables are tracked with a counter that goes up and down as requests are fulfilled and reclamation processes occur.  It is still incredibly important to have these models populated.  If it has been decided that mice are consumables, a typical choice, the counters will allow an IT department to see whether a large amount of money is quietly disappearing into wireless mice.
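The counter behavior for a consumable model can be sketched as follows; the class and method names are illustrative, not ServiceNow’s actual consumable tables.

```python
# Illustrative consumable counter: stock moves down as requests are
# fulfilled and back up on restock or reclamation.
class ConsumableModel:
    def __init__(self, name, in_stock=0):
        self.name = name
        self.in_stock = in_stock

    def restock(self, qty):
        self.in_stock += qty

    def fulfill_request(self, qty=1):
        if qty > self.in_stock:
            raise ValueError(f"Only {self.in_stock} of {self.name} in stock")
        self.in_stock -= qty

    def reclaim(self, qty=1):
        # A returned item goes back into available stock.
        self.in_stock += qty

mice = ConsumableModel("wireless mouse", in_stock=40)
mice.fulfill_request(5)   # five requests fulfilled
mice.reclaim(1)           # one mouse returned
print(mice.in_stock)      # 36
```

Comparing this running counter against actual spend is exactly how the “disappearing mice” problem described above becomes visible.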

These are great starting points for asset management.  Simply knowing what you have, where it is, and where it is going will be eye opening.  The organization will begin to be able to track down discrepancies, such as a device that, according to requests, never left the stockroom but that the monitoring system finds on the network.  It will enable the organization to see which consumables are being over-provisioned and help support new processes to prevent that from occurring.  Once these principles are implemented, the organization can begin to develop full fiscal management, including depreciation of goods, as well as further mature the asset management process.


Service Side Up!


Last week in “Intake Management – A (non)-ITIL Process to ‘Center’ Your ITSM Initiative” Bill Cunningham talked about Intake Management and how implementing such functionality allows for a more efficient means of processing incidents, service requests and more.  This week we’ll dive into the auxiliary components of Intake and how each piece works together to support this process.

Like most technical solutions in ServiceNow, the data behind the scenes plays a pivotal role in the successful implementation of Intake.  Service Catalyst created a custom application to help manage these data components.  The process of Service Data Management captures the Services, Applications, and Symptoms that make up intake routing.

The Dish on Services

While there are many terms for a Service (business service, technical service, etc.), anything that delivers value to customers by facilitating desirable outcomes is considered a service.  A consumer avoids the ownership risks and costs of a service by leaving it in the hands of the provider.  Services are often made up of a combination of resources, processes, and capabilities.  A service owner works alongside appropriate personnel to manage and deliver it to consumers.  Additionally, a service may have primary and secondary support groups that assist in providing ancillary functions.

With an App Please!

Applications are the technical components that comprise a service.  Applications are individually “owned” or managed and may also indicate specific support groups.  Understanding this hierarchical relationship between services and applications is the foundation of Service Data Management.

Symptom Spices

Symptoms are indications of an issue or need.  Some key types include incidents (something is broken), requests (something is needed), inquiries (clarification on something), and password resets.  Ideally, the list of symptoms should be modest and generic.  Creating symptoms overly specific to a single application generates additional, and often unnecessary, effort in data management.  Instead, it is recommended that a single symptom be used across a variety of services and applications.


The Full Plate

The combination of Services/Applications with Symptoms is what determines the routing piece of the intake process.  Maintaining this data is important to a successful implementation of Intake, as it ensures the correct record is generated based on pre-determined backend logic.
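Conceptually, the routing data amounts to a lookup table. The sketch below is a hedged illustration of that idea; the entries, group names, and default route are hypothetical, not the actual Service Data Management schema.

```python
# Hypothetical routing table: a (service, symptom) pair determines which
# record type is generated and which group it is assigned to.
ROUTING = {
    ("Email", "Something is broken"): ("incident", "Messaging Support"),
    ("Email", "Something is needed"): ("request", "Service Desk"),
    ("Payroll", "Something is broken"): ("incident", "HR Apps Support"),
}

# Fallback when no specific pairing exists.
DEFAULT_ROUTE = ("incident", "Service Desk")

def route_intake(service: str, symptom: str):
    """Return the (record type, assignment group) for an intake submission."""
    return ROUTING.get((service, symptom), DEFAULT_ROUTE)

print(route_intake("Email", "Something is broken"))
```

Because the routing lives in data rather than code, service and application owners can adjust it without a development cycle, which is the whole point of Service Data Management.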

Who’s at the Table?

Service and Application owners are paramount to the integrity of Service Management data.  These individuals are responsible for ensuring the information about their Services and Applications is accurate and up to date.  They indicate operational status, who the primary and secondary support groups are, and what symptoms can potentially be associated with a service/application.  Specified primary and secondary support groups ensure that the intake process routes appropriately when it’s apparent the help desk should not be handling a particular report.

Just Desserts

Data management is always an undertaking.  However, if you eat all your dinner you’ll be sure to get dessert: an operational and functional Intake process with a cherry on top!

Coming Up…

Service Catalyst’s Ben Ramseyer will dish out Business Relationship Management, so be sure to visit us again soon!

Want to hear more? Listen to our podcast on this topic.

Is the CMDB Promise Achievable?

Let’s face it: the configuration management database is really the Holy Grail of IT Service Management.  Business services are defined that support one or more business processes.  These business services connect to various software and hardware elements (or infrastructure services) that represent the connectivity, processing and storage capabilities used to support the business service.  Ideally, through an extension of the CMDB referred to as the Configuration Management System (CMS), you might also connect supplier contracts (underpinning contracts), OLAs, and SLAs.  Additionally, you would include links to incidents, problems and changes.  The end goal would be optimal visibility into what services you are supporting, along with all of the past, present and future activity regarding those services.  It is the IT data warehouse that transforms data from multiple IT management operational data stores so that key IT management decisions can be made.

The vision for a CMDB/CMS strategy is spot on as a critical underpinning for holistic service management.  The execution piece is very tricky.  And, in the case of the CMDB, this is a consummate example of the importance of ITIL’s guidance on breaking vision down into manageable, achievable interim goals.

For organizations that have substantial infrastructure and no current tracking mechanism, be realistic about the results you hope to achieve.  Auto-discovery tools can be helpful but are also very complex and require access to all points in the network to give you comprehensive results.  A structured, slow but reliable approach to getting your arms around the relationship models is to target a handful of services to begin with and do one service at a time.  Once each service is validated in the CMDB, ensuring that you are managing it under your change management process is key.

Identifying business-critical services and prioritizing them within this strategy will allow you to gain better control and visibility into the areas that are most important to your enterprise as the first phase of this process.  Once you’ve got these critical services captured, you can tackle others.  In a large organization, this discover-and-control method will be a multi-year process, but the approach makes the CMDB promise achievable.
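The “one service at a time” discipline can be summarized in a short sketch: each business service is onboarded with its supporting CIs, validated, and placed under change control before the next service is tackled. The service and CI names here are invented for illustration.

```python
# Illustrative model of incremental CMDB onboarding: validate each
# business service and bring it under change control, one at a time.
class BusinessService:
    def __init__(self, name, cis):
        self.name = name
        self.cis = cis                    # supporting configuration items
        self.validated = False
        self.under_change_control = False

def onboard(service: BusinessService) -> BusinessService:
    """Validate a service's relationship model, then enforce change management."""
    service.validated = True              # relationships confirmed in the CMDB
    service.under_change_control = True   # changes now go through Change Mgt.
    return service

# Business-critical services are prioritized and handled first.
critical_first = [
    BusinessService("Order Entry", ["web-farm", "order-db", "mq-cluster"]),
    BusinessService("Payroll", ["payroll-app", "hr-db"]),
]
for svc in critical_first:
    onboard(svc)
```

The flags are trivially set here, but in practice each one represents weeks of validation work; the value of the model is that no service is considered “in” the CMDB until both steps are complete.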