A Chatbot as Your Website’s Receptionist: 3 Concepts From Practice

There’s the dream of a chatbot that functions as a full-scale automated service desk: one that answers questions even when they are asked in atypical ways with the wrong terms, recognizes the exact circumstances of a customer’s problem, and delivers solutions at exactly the level of detail the customer needs. We can get there, but it’s a lot of work. For the first step, we need a more modest objective.

Instead of mimicking a service desk employee who has all the knowledge and handles a wide range of requests, let’s start with a bot working as a receptionist with a single task: telling users where to look for a solution. Why is this a good starting point? Because we already know how to navigate our own website. Once the bot can tell users where to find the answer, you can concentrate on enabling it to extract that information and provide it directly.

There are three approaches to implementing this functionality: full-text search, guided search, and intents. They require different levels of development effort and data preparation, but that’s not a downside: you can move from one to the next, starting small and building on what you already have. Let’s start with the easiest one.

 

Concept 1: Search Engine Interface

You probably already have a search engine on your website. If you think about it, this engine does exactly what you want the chatbot to do: it takes free-text input and returns a list of places to look for information on the terms the user entered. So think of your first chatbot as an enhanced interface for classic search. It asks the user for some keywords and returns a list of pages, perhaps combined with a short summary stored in each page’s metadata.
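
To make this concrete, here is a minimal sketch of such a receptionist turn in Python. It assumes a hypothetical site-search endpoint ("/api/search") that returns JSON; the URL and the "title"/"url"/"summary" result fields are stand-ins for whatever your search engine actually exposes.

```python
import requests

SEARCH_URL = "https://www.example.com/api/search"  # hypothetical site-search endpoint

def receptionist_reply(user_message: str, max_results: int = 3) -> str:
    """Forward the user's free-text input to the existing site search
    and format the top hits as a chat reply."""
    response = requests.get(SEARCH_URL, params={"q": user_message}, timeout=5)
    response.raise_for_status()
    hits = response.json().get("results", [])[:max_results]

    if not hits:
        return ("I could not find anything for that. "
                "Could you try different keywords, or contact support@example.com?")

    lines = ["Here is where you can look for an answer:"]
    for hit in hits:
        # 'title', 'url' and 'summary' stand in for whatever metadata
        # your search engine actually returns.
        lines.append(f"- {hit['title']}: {hit['url']}")
        if hit.get("summary"):
            lines.append(f"  {hit['summary']}")
    lines.append("Did one of these pages answer your question?")
    return "\n".join(lines)
```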

One could argue that this adds no value because there is no new functionality. But functionality is not the only thing that adds value. You can use this first bot to test the performance of your chosen tool. Your developers can gain initial experience working with this kind of technology and integrating it into your existing environment. Your conversation flow designer can experiment with ways to map the concept to a set of conversation nodes. And of course you can collect early feedback from your users without investing too much.

And to be clear: there is added value even for users. Providing an alternative interface may help some of them or enrich the user experience. Moreover, while the search engine is done once the results page is displayed, the bot can continue supporting the user, e.g. by asking whether the results answer the question and suggesting additional contact information in case they don’t.

 

Concept 2: Guided Search

Once the bot is up and running this basic task, you can increase its helpfulness. A good search engine provides some kind of filtering, right? How do you implement this in the chatbot? The chatbot can ask for specific information and offer options to select from. This is where the bot can start to show something that at least looks like intelligence. For example, if a request returns many results, it could ask the user to set exactly the filter that reduces the number of results the most (e.g. “Which operating system do you use?”, followed by a list of all OSs in the result). Instead of being overwhelmed by a huge range of options, the user only has to make the decisions that really help.
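
To illustrate the filter-selection idea, here is a small Python sketch. The records and facet names ("os", "product") are invented; the heuristic simply prefers the facet whose worst-case remaining group is smallest, so even an unhelpful answer still narrows the list.

```python
from collections import Counter

def best_facet_to_ask(results: list[dict], facets: list[str]) -> str | None:
    """Pick the metadata facet whose answer narrows the result set the most.

    Heuristic: for each facet, look at how results distribute over its
    values and prefer the facet whose *largest* group is smallest --
    i.e. even the worst answer still cuts the list down well.
    """
    best_facet, best_worst_case = None, len(results)
    for facet in facets:
        groups = Counter(r.get(facet, "unknown") for r in results)
        if len(groups) < 2:          # asking would not split anything
            continue
        worst_case = max(groups.values())
        if worst_case < best_worst_case:
            best_facet, best_worst_case = facet, worst_case
    return best_facet

# Illustrative data -- the facet names are assumptions, not a standard.
results = [
    {"title": "Install on Windows", "os": "Windows", "product": "App A"},
    {"title": "Install on Linux",   "os": "Linux",   "product": "App A"},
    {"title": "Update on Windows",  "os": "Windows", "product": "App B"},
    {"title": "Update on macOS",    "os": "macOS",   "product": "App B"},
]
facet = best_facet_to_ask(results, ["os", "product"])
print(f"Which {facet} do you use?")  # -> "Which os do you use?"
```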

This concept requires your pages to be enriched with some additional metadata, and the bot needs direct access to this information, without the search engine acting as a broker in between. But this is only a small adaptation, and since your developers already know how the bot processes data, they probably won’t run into big issues.

If your data is accurately structured, you can even remove the free-text input and rely solely on a set of questions with pre-set answer options for setting filters. This prevents users from getting wrong results because they used the wrong terms in their query. To some users, however, this might feel like a step backwards.

 

Concept 3: Understanding the Intent

Your bot is already capable of having a conversation – without even trying to understand the user. By now your developers know how to modify the bot, your conversation designer is experienced in developing flows, and the bot is well integrated into your website. Time to tackle the last missing step towards a real chatbot, with all the AI machinery running in the background.

For those new to the topic: chatbots use machine learning to match a text input to one of several pre-defined intents. The more intents there are, the harder the task. It is therefore best to start with a small set of things users might be interested in. For a first attempt, in order to gain experience working with intents, you might train the bot on intents related to the conversation itself, such as “explain one of the options you asked me to choose from” or “I want to change a filter I set earlier”. This is much easier than having the bot recognize what information the user is looking for, since there is less variety.
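
For illustration, a minimal sketch of this matching step with scikit-learn; a chatbot platform normally hides these details, and the training phrases, intent names and the 0.6 confidence threshold are all invented for this example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few invented training phrases per conversational intent.
training_phrases = [
    ("What does this option mean?",        "explain_option"),
    ("Can you explain the second choice?", "explain_option"),
    ("I don't understand that option",     "explain_option"),
    ("I want to change my filter",         "change_filter"),
    ("Go back, I picked the wrong OS",     "change_filter"),
    ("Let me choose a different product",  "change_filter"),
]
texts, intents = zip(*training_phrases)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, intents)

user_input = "please explain that choice again"
probabilities = model.predict_proba([user_input])[0]
best = probabilities.argmax()
# Fall back to plain search if the model is not confident enough.
if probabilities[best] > 0.6:
    print("Matched intent:", model.classes_[best])
else:
    print("No confident match - falling back to search")
```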

Later, you can try to replace the search engine integration by connecting pages to intents. Even then, it is a good idea to keep search as a fallback for when the bot fails to recognize the intent.

 

 

You started out with a search engine interface and arrived at a navigation assistant. With some additional training, the bot will be able to point the user to the correct location with high accuracy. From there, it is only a small step to a bot that answers questions by itself. This is how you get the service desk bot you dreamed of at the beginning.

Do you have any questions? Simply send an email to: marketing@avato.net

Imprint: 
Date: January 2020
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2020 avato consulting ag
All Rights Reserved.

The 7 Biggest Mistakes of Knowledge Management and ECM

Why Information Management? A Plea for an Innovative Approach.

Why have Knowledge Management (KM) or Enterprise Content Management (ECM) failed for decades? Why do they not bring any added value to the organisation?

The simple answer is: if you keep using the same approach, you shouldn’t be surprised when the result is always the same!

The unpopular topic of “documentation” is generally pushed onto an organisation from outside. From the perspective of those affected, however, it is at most a secondary priority, and it demands something of the organisation that is not among its core competencies. A lot of effort goes into producing results that merely comply with formal and regulatory requirements but provide no practical added value to the organisation, or are only selectively or temporarily useful.

The 7 Biggest Mistakes

What might seem like a law of nature is actually a home-made problem. Examples from industry (technical documentation/communication) show that innovative ideas and methods, together with the right technologies, can achieve astonishing results very quickly and with comparatively little effort. Below, we concentrate on the 7 biggest mistakes of ECM and KM projects. Information Management overcomes them by using methodical approaches and innovative technologies.

1. Lack of a comprehensive approach

Information Management (IM) is complex. “Think big!” Every organisation needs to see IM as comprehensive and integrated. Many different groups of people, diverging information needs and IM technologies all need to be brought into an overall picture. This begins with the objectives of all relevant stakeholders, integrates the requirements of information users and identifies the relevant SMEs (Information Providers).

2. Focus on IM technology

In classic projects the focus is neither on the user nor the information. Projects are quickly shaped by discussions about IM technology, especially when company IT plays an important role in the setup of the projects.

3. Inadequate know-how

Describing processes, technologies, standards or services – to name just a few aspects – well and understandably is no easy endeavour. One needs to understand the content, cooperate efficiently with the specialists, recognise who uses the information, and then prepare it in a way that is appropriate to the target audience, interesting and understandable. Specialists who manage all of this are rare and expensive. Their importance is often underestimated.

4. Methodology-free work

Methodology in this space likes to focus on versions and approval procedures. These are, at best, secondary methods. Information must be broken down into small units (Information Units) and assigned to a taxonomy using metadata. This avoids redundancies and keeps information maintainable. Content preparation must be prioritized, content must be evaluated, and content governance must be structured. User feedback should always be integrated. In practice, these central methods almost never dominate ECM or KM projects, which are generally driven by IT.
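
As an illustration of what “small units assigned to a taxonomy via metadata” can look like, here is a minimal Python sketch; the field names (topic, audience, owner) are assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class InformationUnit:
    """One small, self-contained piece of content, maintained in exactly
    one place and classified against a shared taxonomy via metadata."""
    uid: str
    body: str
    topic: str                      # node in the shared taxonomy
    audience: str                   # e.g. "operations", "end-user"
    owner: str                      # the responsible SME
    tags: list[str] = field(default_factory=list)

unit = InformationUnit(
    uid="IU-0042",
    body="Restores must be tested quarterly on the staging system.",
    topic="backup/restore",
    audience="operations",
    owner="jane.doe",
    tags=["policy"],
)
# Metadata makes maintenance queryable, e.g. all units owned by one SME.
print(unit.uid if unit.owner == "jane.doe" else None)   # IU-0042
```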

5. Cumbersome, inflexible processes with little content and a lot of formality

Traditional KM and ECM are neither lean nor agile. In principle they follow the waterfall method, are set up for the long term and focus on formal processes. They generally operate without any evaluation of content-related results. This makes them often simply a continuation of traditional documentation with new tools and comprehensive, but unfortunately often superficial, requirements.

6. Cultural deficits

Organisations tend to view the current state as a law of nature: relevant, well-prepared, easily accessible information simply doesn’t exist. As a result there is no culture of communication, feedback or structured information sharing. Everyone looks after their own small area. All sorts of information is collected and hoarded. This creates a digital dumping ground that soon becomes unmanageable.

7. Managerial failure

Half-heartedness expresses itself in many ways: unclear responsibilities, insufficient prioritization and a lack of performance measurement are only the most visible signs. Management doesn’t seem to “like” the topic. Moreover, managers often don’t understand its importance to their own organisation and cannot estimate the complexity of Information Management. So the organisation permanently drags the topic around, everyone addresses it half-heartedly, and as a result it offers hardly any added value.

Instead

If it is approached correctly and the 7 biggest mistakes are avoided, documentation can be quick and easy. The best part: Information Management works in every organisation. Organisations become better, faster, more agile, and more business- and customer-oriented.

How to do it right: Information Management takes an integrated approach to people, information and IM technology. Reliable and useful information is always a snapshot in time. That’s why IM is a fast, agile process that uses innovative approaches and methodically learns from established procedures in other industries (technical communication).

Do you have any questions? Simply send an email to: marketing@avato.net

Imprint: 
Date: January 2020
Author: Gregor Bister
Contact: marketing@avato.net
www.avato-consulting.com
© 2020 avato consulting ag
All Rights Reserved.

5 Basic Rules for a Good Meta Data Schema

Automation is today’s hot topic. The never-ending flood of information makes it impossible to maintain each file and each dataset individually – never mind manually. Meta data is the key to solving this problem. It allows information to be grouped and batch-processed according to specified properties. To ensure such processes run smoothly, meta data must be captured in a structured form. This article explains why a structure, i.e. a meta data schema, is so important and what needs to be considered when developing one.

 

Why do I Need a Meta Data Schema?

Machines are not designed to process unstructured data – whether simple, short scripts or AIs – because they lack the ability to interpret logically. They need a specific, fixed structure. The more context there is for a particular piece of information, and the more precisely its structure and meaning are defined, the lower the effort for automated processing and the more reliable and meaningful the results. A meta data schema is essentially nothing more than a definition whose purpose is to make such context available for machine processing.

However, a schema isn’t just good for using meta data – it also benefits data capture. Since a meta data schema defines what the data must look like, many errors can be detected at input time, whether input happens manually or (partially) automatically. In addition to avoiding errors, a good schema also reduces the amount of work: when the meaning and relationships of the meta data are clearly defined, much of it can be captured automatically or generated from other (meta) data.
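
A minimal Python sketch of such error detection at input time; the fields, types and the sample record are invented for illustration.

```python
from datetime import date

# Minimal illustration: every field the schema defines must be present
# and carry the right type, so errors surface at capture time.
REQUIRED_FIELDS = {"title": str, "created": date, "language": str}

def validate(record: dict) -> list[str]:
    errors = [f"missing: {name}" for name in REQUIRED_FIELDS if name not in record]
    errors += [
        f"wrong type for {name}: expected {typ.__name__}"
        for name, typ in REQUIRED_FIELDS.items()
        if name in record and not isinstance(record[name], typ)
    ]
    return errors

print(validate({"title": "Backup Howto", "created": "2019-06-01"}))
# ['missing: language', 'wrong type for created: expected date']
```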

The bottom line: A meta data schema…

  • …facilitates effective, automated data processing and maintenance;
  • …increases the quality of the meta data and with it their value;
  • …reduces costs for capturing the meta data.

 

What Makes a Good Meta Data Schema?

The best schema is the one that best supports data input and data processing and makes these steps easiest. A few basic rules will help you develop a schema that optimally matches your data and its purpose.

 

1. Defining the Area of Use

What type of data should the meta data schema be applied to? A schema that covers all available data also allows all of it to be processed by the same automatisms. Very heterogeneous data, on the other hand, has very few properties in common. Think about which data you want to process (manage, search) together: that data set should share one schema. The schema then doesn’t have to account for other types of data and formats. There is, of course, no reason not to reuse parts of the schema for other data.

 

2. Selecting the Right Fields

A meta data schema consists of so-called ‘fields’, where each field contains exactly one defined piece of information. It is well worth thinking about which fields you will need and where you want the data to come from. The key question is: what will the meta data be used for? It is a waste of time to define a field that isn’t needed at all. The same goes for fields that can’t be filled for a large portion of the datasets, because mining that information would be too costly or outright impossible.

The data should be split into its smallest possible components, because it is much easier and less error-prone to join two clearly defined fields than to break down the content of one field. You should therefore check each field you plan to use for whether it combines two or more independent pieces of information. You can always add another field for a combination that is frequently needed in that form – but that field should then be populated automatically to prevent contradictions.
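
A tiny Python sketch of this rule, with invented field names: store the atomic parts and generate the frequently needed combination automatically, never the other way round.

```python
# Store the atomic parts ...
record = {"given_name": "Isabell", "family_name": "Bachmann"}

# ... and generate the frequently needed combination instead of typing it,
# so the two can never contradict each other.
record["display_name"] = f"{record['given_name']} {record['family_name']}"

# The reverse direction -- splitting "Isabell Bachmann" back into its
# parts -- would need fragile parsing, which is exactly why the atomic
# fields are the ones that get stored.
print(record["display_name"])  # Isabell Bachmann
```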

 

3. Don’t Re-Invent the Wheel

Meta data has been in use for quite some time and in many areas. The necessity of data exchange has led to robust, well-documented meta data schemas and exchange formats that cover most requirements of a given sector. Using a standard schema has many advantages. Data provided by external sources can be used immediately and without modification, provided it was captured with the same standard schema. For commonly used schemas there are various tools and input masks available that further simplify data maintenance. And of course you save the time and effort of creating your own schema. So if you find that iiRDS, Dublin Core or MODS offers everything you need, choosing one of them will in all likelihood be a better idea than developing your own schema tailored specifically to your data.

 

4. As Tight and Exact as Possible

The fewer selection options and freedoms a schema offers, the better. Every choice is an opportunity for human error. Specify exactly what information must be entered in a field and how. Data types, drop-down lists and regular expressions (a language for describing character strings) are a great help here. They prevent typos and ensure that identical information always appears in the same format. Even simpler measures offer plenty of benefit: in a “Ranking” field, for example, allow only whole numbers from 1 to 6. A short explanation of exactly what information the field refers to can also be very helpful.
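
The same three mechanisms in a short Python sketch; the language list, the ID pattern and the ranking range are invented examples.

```python
import re

# Three ways to keep a field tight, mirroring the examples above:
LANGUAGES = {"de", "en", "fr"}                      # drop-down list
DOC_ID = re.compile(r"DOC-\d{4}")                   # regular expression
def valid_ranking(v) -> bool:                       # numeric range 1..6
    return isinstance(v, int) and 1 <= v <= 6

assert "en" in LANGUAGES
assert DOC_ID.fullmatch("DOC-0042")
assert valid_ranking(3) and not valid_ranking(7)
```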

 

5. Optional or Mandatory

If you plan to capture meta data automatically or through experts, then every field that you know applies to all instances must be mandatory. Every person has a name, every file a format and every digital text an encoding. If such a field remains empty, the dataset cannot be processed by all the processes accessing it, or will at least require special treatment. That significantly reduces the benefit of the schema.

There is, however, one exception where making as many fields as possible mandatory can become a drawback: when the meta data is entered manually by people whose main responsibility is not the maintenance of that data. Too many mandatory fields cost a lot of time, which can lead to a drop in motivation and, with it, to careless or outright faulty input. In that case, it is worth weighing how much input time is reasonable against the best possible data quality.

Optional fields are, of course, also useful in automated data capture processes. A “Most recent renovation” field is a good idea in a meta dataset about a house – but does not apply to a new construction. Optional fields make sense wherever a missing input itself represents a statement.
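
A small Python sketch of the distinction, reusing the house example above; the field names are illustrative.

```python
# Mandatory fields: known to apply to every instance.
# Optional fields: absence itself carries meaning.
house = {
    "address": "Example Street 1",   # mandatory -- every house has one
    "built":   1998,                 # mandatory
    # "most_recent_renovation" is optional: leaving it out *states*
    # that the house has not been renovated (e.g. a new construction).
}

renovated = "most_recent_renovation" in house
print("Renovated before" if renovated else "No renovation on record")
```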

 

In addition to all these basic rules, the rule of implementability applies. If the cost of creating and maintaining a drop-down list is simply too high, or the technical implementation of the perfect schema would take too much time, some compromise on specificity will be unavoidable. But anyone who doesn’t know from the start what the perfect meta data schema would look like will find it difficult to implement even the best possible one.

Done with your meta data schema? Then it is time for the next step: Capturing! Or should we rather say: Creating?


For further questions send an email to: marketing@avato.net

Imprint

Date: June 2019
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2019.
All Rights Reserved.

(The Right) Information Makes Service Management Successful!

3 Components for Successful (Customer) Service Management

Whether IT service desk or customer service, the challenges faced when trying to achieve high customer satisfaction are the same. If customer satisfaction drops, the costs of customer service typically increase in turn. If the service is self-explanatory, easy to use and functions trouble-free, customer satisfaction is high and customer service costs are reduced.

Well-functioning services form the foundation of customer satisfaction. What are other success factors?

Success is always based on 3 components:

1. Information for every user
It needs to be easy to understand, self-explanatory, reliable and complete.

2. Technology
It must provide necessary information to users (Information Users) in a direct and easily understandable manner. Further, it must be suitable for managing information without great effort.

3. Information Management
Information and technology without Information Management remain fragmentary. Both must be coordinated and aligned with the goals of the key stakeholders.

 

1) Information

What information are we talking about, and who are the Information Users? Simply put: all the information a user needs in every phase of use. Users are customers and employees, especially service desk agents. What specific information do users need? Much of it is shared; some is needed only by individuals. All users employ a variety of communication channels, applications and systems. And this is the root of all evil: information is not managed; it is typically tool- or user-specific. It is therefore not integrated, and often contradictory, redundant, erroneous and incomplete. Moreover, it is hard to find and difficult to use (language, style, no uniform terminology …).

 

2) Technology

The quality of information and information systems depends on numerous factors. At its core, it is determined by a few basic rules for the technologies used:

1. No redundant information. This is why an overarching model for information units and metadata (taxonomies) is needed.

2. Consistent separation of content management, content delivery and analytics.

3. Centralized analysis of all data from service usage.

 


Content Delivery

Even in smaller companies, content is made available to internal users and customers through several content delivery systems. Typical systems include web portals (wikis), applications, FAQs, chat and chatbots, audio or video systems, and printed materials.

The most important ground rule is: do not manage any content in the delivery system. As soon as you start to create and manage content (information) in the content delivery system, that content becomes barely usable anywhere else. It turns into single-purpose content. In addition, delivery systems are rarely suitable for content management; at best they are the second choice. So if you really want to manage content professionally, do not do it in the delivery system, except in justified exceptional cases.

 

Content Management

Content should be created in as decentralized a way as possible and managed as centrally as possible. In practice it is hardly possible to manage everything in one system; different types of content simply require different CM systems. This includes text, audio, video, graphics, code and tables (structured data). But across the board, the following matters:

1. Content must always be broken down into small units (Information Units). This ensures usability in different delivery systems (application, FAQ, chatbot, web) and enables effective and efficient content management in the long term.

2. Content must be created in such a way that it can be used in different delivery systems (see the sketch after these rules). Content management systems should therefore have appropriate interfaces and be able to provide the required formats.

3. Metadata models (taxonomies) must always be cross-system. This ensures that information remains maintainable, manageable and thus usable.
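
Here is the sketch announced in rule 2 – a minimal Python illustration of single-source content: one information unit is maintained once and rendered on demand for each delivery system. The unit’s fields and the render targets are invented.

```python
# One information unit, maintained once ...
unit = {
    "id": "IU-007",
    "question": "How do I reset my password?",
    "answer": "Open the login page and choose 'Forgot password'.",
}

# ... rendered on demand for each delivery system instead of being
# copied into them. The render targets are illustrative.
def as_faq_entry(u):      # static FAQ page
    return f"Q: {u['question']}\nA: {u['answer']}"

def as_chatbot_reply(u):  # conversational channel
    return u["answer"] + " Did that solve your problem?"

def as_html(u):           # web portal
    return f"<h3>{u['question']}</h3><p>{u['answer']}</p>"

for render in (as_faq_entry, as_chatbot_reply, as_html):
    print(render(unit), "\n")
```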

 

Analytics

Here we want to stick to Information Management and leave out analyses of services and customer satisfaction.

Content delivery systems permanently generate valuable data for use and evaluation. This data is generated through page views, ratings, comments and questions (recorded on pages or in CRM systems as well as ticketing tools), searches or input in chatbots. This data offers a high added value when it is combined and evaluated.

There is no objection to using the out-of-the-box analysis options of individual systems. But avoid making time-consuming adjustments to individual tools to improve or extend their built-in analytics engines: the number of your systems for content delivery and user communication tends to increase rather than decrease, and especially in communication with customers, ever larger amounts of data from ever more data sources will accumulate in the long term.

A good example is the handling of text-based information and user reviews. Text-based information is created for different delivery systems (web, wiki, chatbot, FAQ …). At the same time, a lot of text-based or text-related data (comments, comprehensibility ratings, click rates and visit duration, search entries, chatbot inputs …) is generated during use. Analysing this data not only provides a basis for better (more comprehensible) content, but also yields valuable information on gaps, superfluous content and findability.
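
As a sketch of such a combined evaluation in Python – the event format and signal names are assumptions, not an export format of any particular tool:

```python
from collections import defaultdict

# Usage events exported from different delivery systems, normalised to a
# common shape (the field names are an assumption, not a standard).
events = [
    {"page": "/backup-howto", "source": "web",     "signal": "view"},
    {"page": "/backup-howto", "source": "web",     "signal": "thumbs_down"},
    {"page": "/backup-howto", "source": "chatbot", "signal": "no_answer_found"},
    {"page": "/restore-faq",  "source": "search",  "signal": "query_without_click"},
]

# Combined evaluation across systems: which content draws negative
# signals, regardless of the channel they came in through?
NEGATIVE = {"thumbs_down", "no_answer_found", "query_without_click"}
complaints = defaultdict(int)
for e in events:
    if e["signal"] in NEGATIVE:
        complaints[e["page"]] += 1

print(dict(complaints))  # {'/backup-howto': 2, '/restore-faq': 1}
```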

 

3) Information Management

It sounds like a platitude: nothing really works without management, architecture and design. So why should information, of all things, function reliably and in an integrated fashion in complex environments without management? Information Managers are the key to success. They replace the traditional Technical Writer.

These are the most important differences resulting from the use of an Information Manager:

1. Firstly, Information Managers manage the process. They know what to do and how to get it right.

2. Secondly, all Information Managers use the same methods and follow the same procedure models, and they know how to employ them quickly and with high-quality output.

3. Lastly, the most important part: the taxonomy, the concept of Information Units and the metadata. Every Information Manager knows the model, works according to the same rules and concepts, and uses the same set of metadata.

 

Summary

You don’t have to follow a lot of rules to achieve successful customer service management. Think big, develop your vision and then start with individual sub-topics. The vital thing is to recognize the importance of information for users and the enormous value of usage data. And then what counts is understanding Information Management as a process and adhering to a few rules on the use of technology.

Do you have any questions? Simply send an email to: marketing@avato.net

Imprint: 
Date: December 2019
Author: Gregor Bister
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.

The Problem in AI no one Talks About

With Artificial Intelligence, everything is possible today, isn’t it? We have Machine Learning and Neural Networks and all that stuff. Machines can help your customers online via chatbot (if they need help with the right things), categorize text by topic (sometimes) and tell what’s depicted in an image (okay, that works quite well… usually).

Yes, of course there is still a lot of work to do. We need to improve accuracy, we need a bit more computing power, and we have to talk about the social implications. But besides that, we can use AI to solve all our business problems!

Well, no.

Why? There are many answers to that, from legal issues to result interpretation to infrastructure to cost. But the one problem that affects every single AI is rarely mentioned: data. The reason: the leaders in AI technology – universities, Google, Apple and so on – don’t have that problem. But if you start using AI, you will run into it.

You see, every AI must be trained first, and you need training data for that. Then you must test the quality of the models produced during training, and you need separate testing data for that. And to ensure it is really working, you must feed validation data to the model. Don’t think you are done with a few small datasets for each of these tasks: especially the latest generation of AI technologies needs lots and lots of data. Depending on the variance within your data, you may want to start with several hundred records, preferably thousands.
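
For readers who want to see the three datasets in code: a minimal sketch with scikit-learn and dummy records. The 70/15/15 split is one common choice, not a rule.

```python
from sklearn.model_selection import train_test_split

# records, labels = ...  (thousands of examples, as argued above)
records = [[i] for i in range(1000)]
labels = [i % 2 for i in range(1000)]

# First carve out a training set, then split the rest into test and
# validation -- three disjoint datasets, all taken from your data budget.
train_x, rest_x, train_y, rest_y = train_test_split(
    records, labels, test_size=0.3, random_state=42)
test_x, val_x, test_y, val_y = train_test_split(
    rest_x, rest_y, test_size=0.5, random_state=42)

print(len(train_x), len(test_x), len(val_x))  # 700 150 150
```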

 

But there is Big Data! Or we could use our company’s databases, data shares and SharePoints!

 

Well, no.

How will you gather this data? It is often hard to find a single source that provides enough of the data you need. Combining several sources is even harder, since all records must end up in the same format. If you are planning to build an AI, be aware that you might invest more time in preprocessing than in actually building and testing your models. You will also need specialists for the data you are processing to tell you how to structure it.

 

Oh, okay, so we will do all that. May take a while, but at some point we will have enough data and then we can feed it to the algorithm.

 

Well, no.

You will get a result, and at first glance it might even look marvelous: 97% of records classified correctly by your AI for categorizing the documents on your intranet into “useful” and “garbage”. That’s great! Until you recognize that the AI simply decided to label all documents as garbage, regardless of their quality.

That is what happens when your dataset is unbalanced. In the example, only 3% of the records were high-quality documents, so by putting all documents into one class the AI achieved high accuracy and an even higher degree of uselessness. Balancing your dataset is a difficult task. You need enough examples for every class and a representative amount of variance within the data. If all training records for one class are from the same author, the same time period, the same topic – or share whatever connections might be hidden within your data – the model will work in testing but fail completely in operation.

There are techniques that can, to some degree, handle the bias resulting from unbalanced data, but you should not rely on them too heavily. So you don’t just need thousands of records – you need thousands of records for every single class your AI should identify. And you need a subject matter expert to help you uncover the hidden dependencies.
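
The accuracy trap from the example above can be reproduced in a few lines of scikit-learn; the 3%/97% split mirrors the numbers in the text, and class weighting is mentioned at the end as one common counter-measure, not a cure.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, f1_score

# 3% "useful" (1), 97% "garbage" (0), as in the example above.
y_true = np.array([1] * 30 + [0] * 970)
X = np.zeros((1000, 1))                      # features don't matter here

# A model that labels *everything* garbage reaches 97% accuracy ...
majority = DummyClassifier(strategy="most_frequent").fit(X, y_true)
y_pred = majority.predict(X)
print(accuracy_score(y_true, y_pred))        # 0.97
# ... and is completely useless for finding useful documents:
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0

# One common counter-measure (no silver bullet): weight classes by rarity,
# e.g. class_weight="balanced" in many scikit-learn estimators.
```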

 

*Sigh*… okay. We collected the data, we processed it and we ensured the set is balanced. But now, finally, AI can solve the problem we initially set out to solve?

 

Well, congratulations – you have achieved something that can take years and, in many cases, is outright impossible. Besides that… no. But you have finally reached the point where you can start coping with the issues of AI itself: find the best algorithm for your use case, set up the necessary infrastructure, solve the model selection problem, find an efficient way to interpret the results, and learn how to take advantage of your findings. Then AI will help you solve this one single business problem.

Do you have any questions? Simply send an email to: marketing@avato.net

Imprint: 
Date: November 2019
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.

Business Case: IT Information Management

Save up to 75%

IT Information Management (ITIM) is a departure from familiar IT documentation and traditional document management. Such a paradigm shift raises the question: will it pay off in the short term, or at least in the long term? Or will it simply create more expenditure? Is there a positive business case?

In the following we will consider the ITIM Business Case on the basis of two different examples:

  1. Application documentation
  2. Documentation for a service transition

… and with a comparison of two action scenarios:

  1. Traditional documentation (uncoordinated, downstream creation of Single Purpose Documents by technical experts)
  2. Use of IT Information Management

Spoiler: Use of Information Management leads to enormous savings. This article will focus on how these savings arise.

Business cases are generally calculated over long periods and based on extensive assumptions. In addition to direct, measurable financial parameters, they often also take strategic effects into account, which frequently results in a positive business case. That is deliberately not the aim here: only directly measurable factors are included. Strategic improvements and long-term cost optimisations are mentioned only where they represent an actual long-term benefit of IT Information Management.

 

1.1 Time Spent on Documentation is Reduced by 75%

But how? Let’s look at what consistently takes a lot of time; this is where reductions need to occur. Documentation is carried out by IT experts, and the reduction essentially comes from two components:

  • They document only and exactly what is needed. This alone saves at least 50% of the effort. Information Managers control the process: nothing superfluous, no more overlapping documentation.
  • Everyone does what they do best. This saves another 50% of the remaining effort. Technical experts focus their expertise on small units rather than on creating large documents, and not on formulations, formats or graphics. This is what Information Managers do better.


Overall, the total effort for IT Information Management – IT experts plus Information Managers – is at least 75% lower than the time spent on traditional processes: the first 50% is saved by documenting only what is needed, and another 50% of the remainder through specialisation (0.5 × 0.5 = 0.25 of the original effort remains).

What do Information Managers do differently?

Information Managers specialise in editorial work and the creation of graphics, videos, audio and process overviews. They are very familiar with tools that increase productivity, which is boosted further by templates and practised procedure models and methods. At avato, Information Managers also undergo additional training, including the ITIL Foundation and certification as a Certified Information Professional (CIP).

 

1.2 What Does Traditional Documentation Cost?

Up to 75% of the ongoing effort for traditional documentation can be saved. But what does that amount to? Depending on the area and scope, experience shows that documentation accounts for between 4% and 8% of the total working time of IT staff. This might sound harmless, but it adds up. On top of that there can be special documentation requirements, such as an upcoming audit or a service transition. In those phases the proportion of working time spent on documentation is no longer around 6% but can at times easily reach 25% or more.

 

1.3 Initial and Ongoing Costs – ITIM vs. Traditional Documentation

How high are the initial and ongoing costs for ITIM? The quick answer: it depends! The more detailed response: Do you already know where you want to start and where the need is most pressing? Do you know your goals, and can you name Accountables and Responsibles*? Do Information Managers have adequate access to them?

If you can already answer these questions, then setting up ITIM will take a few days to a few weeks, depending on the amount of documentation.

The following examples show what the business case could look like in concrete terms.

 

Example 1: Application Documentation

Regulatory requirements mean that technical documentation must be created for an application with up to 10 environments. The current development and operations team comprises 26 people, half of whom are external employees.

Prerequisites: the Accountable for the application is on board and can name all the Responsibles (those with technical responsibility). All of them are adequately available and can each invest a total of 1 working day. In this case, ITIM can be set up for the application within one week, and the reduction in effort takes effect in the second week.

The use of ITIM will save you EUR 7,000 per month (details of the calculation can be found in the appendix).

 

Example 2: Documentation for a Service Transition

A service is being outsourced to a service provider. To prepare for the service transition, your ITSM processes and technical environment need to be documented. A number of Work Instructions and How-Tos also need to be prepared. Your current team comprises 26 people, half of whom are external employees.

The basic principle applies again: the Accountable for the service area is included and can name all the Responsibles. All of them are adequately available and can each invest a total of 2 working days. In this case, ITIM can be set up after three weeks, and the reduction in effort takes effect in the fourth week.

With ITIM you save EUR 42,200 by investing in documentation that not only makes the transition possible but is also usable and maintainable in the long term (details of the calculation can be found in the appendix).

 

⚠  Assumptions were used in the examples. The expenditure for ITIM and the potential savings naturally depend not only on the scope of IT and the areas included but also on the current operating expenditure for traditional documentation.

 

When is the Best Time? When is ITIM a Must?

In these 4 scenarios there is no alternative to Information Management on the path to success:

1. Your external partners document in the conventional way.
Start IT Information Management to reduce the number of required external partners!

2. You have a backlog of your IT projects.
Start IT Information Management to stop “wasting too much time” on the documentation part.

3. The support for your experts is external and too expensive.
Minimize the need for external support – with a comprehensive, high-quality documentation.

4. You have a backlog of documentation.
Start IT Information management to utilize the power of specialization. The backlog will be eliminated in no time.

 

Summary

Information Management offers IT great savings potential when it comes to documentation effort. There are two reasons. Firstly, only what is really needed is documented: nobody documents just for themselves or documents the same thing as a colleague, because the documentation process is comprehensively controlled. Secondly, the high savings come from everybody doing what they do best: IT experts contribute information, while Information Managers control the process and take care of templates, methods, language and formats. All in all, IT experts save 75% of the time they would spend on documentation without ITIM. In certain situations there is no alternative to IT Information Management.

 

Appendix

Example 1: Application Documentation

Regulatory requirements mean that technical documentation should be created for one of your applications with up to 10 environments.

Your team comprises 26 people, half of whom are external employees.

Your Costs for Traditional Documentation

The monthly cost for traditional documentation is:

  • 6% of the project/work time: a total of 28 man-days
  • Monthly costs: EUR 18,252 (26 * EUR 650 * 18 * 6%)
    (average of 18 man-days per person per month; EUR 800 external day rate and EUR 500 costs for internal employees)

ITIM Saves you EUR 7,000 per Month

Every team member will invest one working day (= 26 days) in the transition from traditional documentation to Information Management.

Your one-time investment costs: 26 * EUR 650 = EUR 16,900

After successful transition to ITIM, your team will save up to 75% of the time they spent on traditional documentation. This is a reduction of 21 man-days.

The outlay for Information Management is around EUR 5,900 per month.

Your one-time investment for documentation that is usable and maintainable in the long term is EUR 16,900.

After the transition to ITIM you will save EUR 7,000 per month.

 

Example 2: Documentation for Service Transition

An IT service is being outsourced to a service provider. To prepare for the service transition, your ITSM processes and technical environment need to be documented. In addition to this, there is a need to prepare a number of Work Instructions and How-Tos.

Your team comprises 26 people, half of whom are external employees.

Your Costs for Traditional Documentation

In normal daily operations, each of your team members spends 6% of their time on documentation. However, when more extensive documentation is needed, the effort increases for a certain period (here 1 month) to 25% of working hours.

Your costs for traditional documentation are:

  • 25% of the project/work time: a total of 117 man-days
  • Costs: EUR 76,050 (26 * EUR 650 * 18 * 25%; average of 18 man-days per person per month; EUR 800 external day rate and EUR 500 costs for internal employees)

Your one-time documentation effort without ITIM is 117 man-days. This corresponds to around EUR 76,000.

ITIM Saves you EUR 42,200

Each member of your team invests 2 days, making a total of 52 man-days. With ITIM your one-time documentation effort is therefore EUR 33,800 (EUR 800 external day rate and EUR 500 costs for internal employees).

Your investment for documentation that not only makes the transition possible but is also usable and maintainable in the long term is EUR 33,800. You save EUR 42,200.
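
For transparency, a small Python sketch that recomputes the appendix figures from the stated assumptions (26 people, a blended day rate of EUR 650, 18 man-days per person per month). The article rounds some of the intermediate results.

```python
# Shared assumptions from the appendix.
TEAM = 26
RATE = (800 + 500) / 2        # blended day rate: EUR 650
DAYS = 18                     # man-days per person per month

# Example 1: ongoing documentation at 6% of working time.
monthly_traditional = TEAM * RATE * DAYS * 0.06      # EUR 18,252 per month
reduced_man_days = round(TEAM * DAYS * 0.06 * 0.75)  # 21 man-days saved
im_outlay = 5_900                                    # stated monthly IM cost
net_monthly_saving = reduced_man_days * RATE - im_outlay
print(round(monthly_traditional), net_monthly_saving)
# -> 18252 7750.0  (the article rounds this to about EUR 7,000 per month)

# Example 2: one-off transition effort at 25% of one month.
one_time_traditional = TEAM * RATE * DAYS * 0.25     # EUR 76,050 (~76,000)
one_time_itim = TEAM * 2 * RATE                      # 52 man-days: EUR 33,800
print(one_time_traditional - one_time_itim)
# -> 42250.0  (stated as EUR 42,200 after rounding 76,050 down to 76,000)
```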

 

⚠  Assumptions were used in the examples. The expenditure for ITIM and the potential savings naturally depend not only on the scope of IT and the areas included but also on the current operating expenditure for traditional documentation.

Do you have any questions? Simply send an email to: marketing@avato.net

Imprint: 
Date: November 2019
Author: Jennifer Gitt
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.