Slimming Cure for Confluence Wikis


A corporate Confluence wiki is a great thing. Everyone can feed it with their knowledge. Over time, this creates an extensive knowledge base, providing information on any corporate issue.

As different authors add more and more information, corporate Confluence wikis can end up with thousands (or even tens of thousands) of pages. While all of it might have been important when it was created, much of it becomes outdated as time passes. At that point, wrong information stands next to still relevant knowledge. This increases the time needed to find what you need, causes errors due to wrong information, and breeds general distrust of the wiki. In the long run, users might even abandon the wiki because they find it unhelpful.

To prevent this, Confluence wikis should be put on a diet from time to time. Duplicates and other content that is no longer needed must be removed. What is kept should be brought into a structure that is easy to navigate and maintain. But how do you do that? The effort of manually checking every page created in corporate history is too high. The process must be at least partially automated.

And this is how you achieve that:

Step 1: Know where you are

First you need a way to access the data. An export of, or a connection to, the wiki's database would be perfect, but a simple page export is enough for a basic evaluation. Any space admin can do this using the Confluence UI.

Next, get an overview. This can be achieved with the simplest technology. Who created and edited which pages? Are there experts for the different parts of the wiki who could estimate what is relevant? Look for outdated team names and the like; they are good indicators of how well a page is maintained. The date of the last edit is also worth a look, but as different topics change at a different pace, interpret it with care. Consider the wiki's menu structure: menus that are too deep, or too many pages under one menu entry, make navigation hard and increase the risk of duplicates.

While doing this analysis, do not focus on the data of individual pages, but rather on the range of values. From this you will be able to estimate how far the wiki as a whole is from what you want to achieve.
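As a rough sketch of what such an overview can look like, the following Python snippet tabulates editors, the range of last-edit dates, and menu depth from a handful of page records. The field names and data are invented for illustration; a real space export would provide this metadata in CSV or XML form.

```python
from collections import Counter
from datetime import date

# Hypothetical page metadata as it might come out of a wiki export:
# (title, creator, last_editor, last_edited, depth in the menu tree)
pages = [
    ("Team Phoenix Onboarding", "alice", "alice", date(2015, 3, 1), 4),
    ("VPN Setup", "bob", "carol", date(2021, 1, 10), 2),
    ("Release Checklist", "carol", "carol", date(2020, 11, 5), 6),
]

# Who edits where? Frequent editors are candidates for topic experts.
editors = Counter(last_editor for _, _, last_editor, _, _ in pages)
print("edits per person:", dict(editors))

# Look at the range of last-edit dates, not at single pages.
dates = [edited for _, _, _, edited, _ in pages]
print("oldest edit:", min(dates), "newest edit:", max(dates))

# Deeply nested pages are hard to find and prone to duplication.
deep_pages = [title for title, _, _, _, depth in pages if depth > 5]
print("suspiciously deep:", deep_pages)
```

Even such a simple tabulation answers the questions above: who the likely experts are, how stale the oldest content is, and where the menu structure gets unwieldy.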

Step 2: Analyze

Time to extract deeper insights from the pages. There is a wide variety of aspects you can analyze automatically. The structure of a page, its legibility, and its use of abbreviations indicate how hard the page's content is to understand. Overly complex pages should be removed or at least reworked. You can let the system check whether the content matches the page's intention, and it can search for similarities between pages to detect duplicates or topics spread across different branches of the menu.
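Duplicate detection does not have to start with heavy machinery. Here is a minimal sketch using only Python's standard library that compares page texts pairwise and flags candidate duplicates; the page contents are invented, and a real run would read them from the export.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical page bodies; in practice these come from the space export.
pages = {
    "VPN Setup": "Install the client, then connect to vpn.example.com.",
    "Remote Access": "Install the client, then connect to vpn.example.com via SSO.",
    "Lunch Menu": "The canteen serves pasta on Fridays.",
}

def duplicate_candidates(pages, threshold=0.8):
    """Flag page pairs whose text overlaps strongly."""
    hits = []
    for (a, text_a), (b, text_b) in combinations(pages.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            hits.append((a, b, round(ratio, 2)))
    return hits

print(duplicate_candidates(pages))
```

Pairwise comparison scales quadratically, so for thousands of pages you would first narrow the candidates down (e.g. by shared keywords) before comparing full texts.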

The number of graphics gives another hint, as graphics help to understand an explanation. Creating them required some effort, so the page's creator evidently cared about the content and its quality. In addition, the age of the graphics can tell a lot about how well the page is maintained.

There are many more methods you can apply. Which ones to use depends on your quality goals, what you want to optimize the wiki for and, of course, the technologies you have access to. Information Managers can help you decide what is needed based on your goals and the overview you gained in the first step. They are professionals in applying information management methods and technologies and in interpreting the outcomes.

Step 3: Clean up

Now there is only one last step left: deciding what to keep, what to rework and what to remove or archive. This can be done rule-based, using the collected insights with respect to the demands of the different areas, or with a combination of machine learning and input from experts. The one thing you will most likely need to rework is the wiki's structure. Once again, Information Managers can assist you with that. They are trained to arrange information in the optimal way to achieve the stakeholders' goals.

In the end, your Confluence wiki has lost some excess weight – probably up to 70%. It is faster and simpler to use, leading to better decisions and performance. Maintaining it is also easier now that you have an overview of what is where. If you keep an eye on the wiki, you will be able to sustain what you have accomplished: a lean, healthy knowledge base which increases efficiency, minimizes errors, and which your users like to work with.

Do you have further questions? We will be happy to advise you: marketing@avato.net

Imprint: 
Date: April 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2021
All Rights Reserved.

How to Automate your Glossary in Content Creation


Knowledge Management (KM) is becoming increasingly important. It leverages sales, efficiency, and employee satisfaction. KM is an interdisciplinary task, requiring the cooperation of subject matter experts, IT, writers, and management. Together they create a shared knowledge base that is the foundation of cooperation, communication, and daily business.

Having a shared understanding of important terms is key when creating the content for a knowledge base. Otherwise, information users will get confused, which removes most of the information's value. Glossaries help to avoid such issues by providing shared definitions, allowed and forbidden synonyms, and abbreviations. But do you really want all your authors to look up every single term? How would they even know whether there is a definition for a term?

Your Glossary should tell them proactively. You need to automate your Glossary. The following steps tell you how to do it on a basic, advanced, and complex technological level.

 

1. Declare the relevant scope

This is the most difficult step. Luckily, it is only necessary for large glossaries that provide different definitions for a term in different contexts. If your glossary differentiates between contexts, you need to tell it which ones apply to the text you want to check. The simple way is to let the author select the relevant scopes.

If you have the resources, you can create a small UI. An even simpler way is to automatically create a CSV file listing all available scopes and let the author put an X next to each applying scope. The glossary can read this information and thus create a list of relevant terms.

A more elaborate method would be to use text analytics and AI to derive the topic and the relevant scopes from the text. Setting this up will take some time, so you will want to make sure it is worth the effort first.

 

2. Prepare the text

Glossaries store terms in their basic form. In texts, there is grammar. For languages where words do not change much (e.g. English), you can simply ignore that for a basic automation (especially if your Glossary mainly contains nouns). For languages with more inflection, and for a more precise result, you need to bring the words back into their basic form. You can do this by just removing the most common suffixes or by applying more sophisticated techniques like stemming or lemmatization.
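A minimal, hand-rolled normalizer for English illustrates the basic idea. It is only a stand-in for real stemming or lemmatization, and the suffix rules and irregular-form table are deliberately tiny.

```python
# Minimal rule-based word normalizer -- a stand-in for proper
# stemming or lemmatization. Suffixes are checked longest-first.
SUFFIXES = ("ies", "sses", "es", "s", "ing", "ed")

# Irregular forms need an explicit lookup table.
IRREGULAR = {"indices": "index", "went": "go"}

def normalize(word):
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]
    for suffix in SUFFIXES:
        # Keep at least 3 characters of stem to avoid over-stripping.
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            if suffix == "ies":
                return w[: -len(suffix)] + "y"
            return w[: -len(suffix)]
    return w

print([normalize(w) for w in ["Glossaries", "terms", "indices", "checking"]])
# → ['glossary', 'term', 'index', 'check']
```

For production use, an established stemmer or lemmatizer will handle far more cases, but even this crude version lets a glossary match "terms" in a text against the entry "term".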

 

3. Find the used terms

With a simple search script, find the terms, abbreviations and synonyms used in the text. In addition, search for any word that contains multiple capitalized letters, as it is probably an abbreviation. To make the results more useful, let the script categorize the matches into at least 3 groups:

  1. Matches that are almost certainly a misuse of a term; this includes:
  • forbidden synonyms that are not listed as an allowed term or allowed synonym of another term
  • abbreviations not listed in the glossary
  2. Matches that might be a misuse; these are mainly phrases that are forbidden synonyms in one context but allowed in another one.
  3. Matches that are probably used correctly; this includes:
  • phrases listed only as preferred term or allowed synonym
  • abbreviations listed in the glossary

This categorization helps to decide which matches must be checked and for which ones you are willing to take the risk. Matches of category 1 mean that the author either used a wrong term or the glossary is incomplete. They must be checked. Matches of category 2 should be checked, but if the text is of low relevance or needed urgently, you might skip that. Matches of category 3 only need to be verified for the most important texts, as it is possible that the author used them in a way not covered by the definition in the glossary. But assuming the glossary is mostly complete and the author knows the topic well, it is unlikely that you will find mistakes among these matches.

If you want to, you can split up the third category so that it differentiates between the use of allowed synonyms and the preferred term.
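The search-and-categorize step can be sketched in a short Python script. The glossary entries and the example text are hypothetical, and the naive substring matching would need proper word-boundary handling in a real setup.

```python
import re

# Hypothetical glossary: preferred terms with allowed and forbidden synonyms.
GLOSSARY = {
    "knowledge base": {"allowed": ["kb"], "forbidden": ["wiki dump"]},
    "single sign-on": {"allowed": ["sso"], "forbidden": ["auto-login"]},
}

def categorize_matches(text):
    """Split matches into certain misuses and probably-correct uses."""
    text_lower = text.lower()
    certain_misuse, probably_ok = [], []
    for term, syns in GLOSSARY.items():
        for phrase in syns["forbidden"]:
            if phrase in text_lower:
                certain_misuse.append((phrase, term))
        for phrase in [term] + syns["allowed"]:
            if phrase in text_lower:
                probably_ok.append((phrase, term))
    # Any all-caps word not in the glossary is a suspicious abbreviation.
    known_abbr = {a.upper() for s in GLOSSARY.values() for a in s["allowed"]}
    for abbr in re.findall(r"\b[A-Z]{2,}\b", text):
        if abbr not in known_abbr:
            certain_misuse.append((abbr, "unknown abbreviation"))
    return certain_misuse, probably_ok

text = "Use SSO for the wiki dump. The KM guide is in the knowledge base."
print(categorize_matches(text))
```

The middle category (forbidden in one context, allowed in another) would drop out naturally once the scope information from step 1 restricts which glossary entries apply.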

 

4. Do the correction

Once again, there is a fancy and a fast way to do this. The fast one is: Let the search script create a list of all matches, containing some additional information:

  • the position of the matching phrase in the text (e.g. the paragraph or sentence number)
  • some words before and after the phrase
  • the term that produced the match (might be different from the phrase in the text, e.g. if a synonym was used)
  • the term’s definition
  • the match’s category

With that information, the author can replace misused terms with the preferred one and check if the definition corresponds to what they wanted to say. If the context provided in the list is not enough to decide, they can go to the text and check it there.
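The fast way can be sketched as a small report generator. The match tuple format (phrase, term, definition, category) and the example data are invented for illustration.

```python
def build_match_report(text, matches, context_words=3):
    """For each matched phrase, collect its position and surrounding
    words so the author can review it without opening the full text."""
    words = text.split()
    report = []
    for phrase, term, definition, category in matches:
        first_word = phrase.split()[0]
        for i, w in enumerate(words):
            if w.strip(".,").lower() == first_word.lower():
                start = max(0, i - context_words)
                end = min(len(words), i + context_words + 1)
                report.append({
                    "position": i,
                    "context": " ".join(words[start:end]),
                    "term": term,
                    "definition": definition,
                    "category": category,
                })
    return report

text = "Users log in via SSO before opening the portal."
matches = [("SSO", "single sign-on", "One login for all systems.", 3)]
print(build_match_report(text, matches))
```

Each report entry carries exactly the fields from the list above, so the same data structure can later feed the UI or editor plugin described next.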

The fancy way is to build a UI or an editor plugin, using the list as input. Let the UI jump from match to match, providing the information listed above and buttons for ignoring the match, replacing it, or requesting a change to the glossary.

With this technique, you can take a considerable amount of work off your authors and proofreaders while making the use of language in your content more consistent. All it takes is a simple script and a glossary in a format the script can read. Once you have that first version, you can improve it by adding more precise or user-friendly technologies. Your authors will thank you for the support. Your terminologist will thank you for the improved visibility of their work and the fresh input they get from authors. Your management will thank you for the increased efficiency in content creation. And most importantly, your information users will thank you for easily understandable information.

Do you have further questions? We will be happy to advise you: marketing@avato.net

Imprint: 
Date: February 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2021
All Rights Reserved.

From “Customer Service” to “Customer Experience”:


Why are KCS and Knowledge Management so Important?

 

Customer service is increasingly important for businesses wishing to achieve a competitive edge. Quite a few analysts see customer service as a decisive competitive factor in many areas of business. This means that the list of publications on customer service is almost endless. In recent years, the focus has broadened and research has emerged that increasingly focuses on customer experience.

A good overview of many aspects of customer experience can be found on the Gartner website.

The approach “Running the Business through Your Customer’s Eyes” is increasingly important for companies, according to research by Bain & Company. According to a 2014 survey by the market research company Gartner, 89% of companies expect to compete mostly on the basis of customer experience.

Companies that have tried to improve their customer experience often fail because of the discrepancy between what customers expect and experience, and those metrics actually measured and reported internally. In the fog of the internal competitive battle, managers receive conflicting reports. Internal data show that the processes are working fine, but negative testimonies are piling up in forums. One thing is clear to everyone involved: Customer Service matters! So what does a possible approach to a solution look like?

(A good summary of how customer service impacts the bottom line is provided in the article 8 Ways Customer Service Affects Your Business’s Bottom Line.)

 

Meeting Customer Expectations

So what really is the key issue? It’s about the expectations of your customers. And here it becomes clear that the best service is “no service”. By the time a customer contacts Customer Service, their expectations have already gone unmet. According to research by the Consortium for Service Innovation, customer expectations are undercut very quickly, and contacting Customer Service is usually preceded by numerous attempts at self-help.

The following graph illustrates the progress of customer value with increasing efforts to solve the problem:

(Source: Consortium for Service Innovation)

Gartner has also pointed out the importance of self-service in a number of publications. Here you can also download the study “Does Your Digital Customer Service Strategy Deliver?” free of charge.

3 Rules for an Optimal Customer Experience

In summary, 3 essential rules can be formulated for an optimal customer experience:

  1. The best service is no service
  2. The customer needs optimal self-service support
  3. Customer Service must provide optimal support

Customer Service is Always About Information

If you look at these rules, you will quickly see the one element that unites them. Besides excellent products and services, information is the biggest success factor for an optimal customer experience.

The challenges are considerable for most companies. Information is often available only in an insufficient form; it is outdated, incomplete and incorrect. Moreover, it is found only in poorly integrated information silos. This results in major disadvantages when it comes to gaining a competitive edge.

We will now consider this in more detail, addressing the rules formulated above in reverse order.

Rule 3: Customer Service Must Provide Optimal Support

What do most companies already succeed in doing today? Well-implemented CRM systems support Customer Service in case processing with information about the customer and also offer good statistical evaluations of customer behaviour.

The situation is quite different with information about products and services, processes and technical documentation. Even important information on organisation and responsibilities is often not available to agents in case processing, especially when service partners are involved.

The solution: Do not abandon the agent to information chaos. Trustworthy, understandable, complete and integrated information that is immediately available during case processing is the key to success here. This is especially true for the integration of service partners.

Rule 2: Optimal Self-Service Support

Most companies leave self-service largely to their customers. Information that is important for customers in the self-service process is scattered or can only be found as testimonials from other customers in various forums. Customers must search for themselves, and have to separate the useful information from the erroneous. Their questions remain unresolved, information is incomplete, and advice is opinionated. With every minute that passes in this search process, your product or service loses value (value erosion).

The solution: Do not leave the customer’s information needs (only) to a community. Easily understandable and complete information that can be found immediately by the customer during research is crucial for customer satisfaction. In case of doubt, a company can manage this better than a community.

Rule 1: The Best Service is No Service

Increasing your no-service rate is certainly the ideal solution. But to achieve this, you have to do more than just analyse data. It is not enough to collect statistical data on customer transactions or to control your partners through case numbers – in case of doubt, anyone can do that. Using information is the key. This means that information must be collected from all areas and as many process steps as possible, it must be brought together and it must be analysed.

The solution: Communicate, expand, analyse and proactively improve feedback. Communication should take place at all levels, with all stakeholders and at all times. “Information providers” include customers, Service agents, partners and the entire company organisation. Integrating this information and bringing it together with data from Customer Service is the best way to ensure improvements and an increase in the no-service rate. A simple example: product features described in a misleading way can lead to false customer expectations. As a result, negative posts in forums and contacts to the service desk pile up. Active feedback then leads to a more comprehensible version of the described features.

 

Information / Knowledge Management

Intelligent information is the key to success – but it is not easy to find, it is not automatically good and it is not integrated. And it is never really easy to use. Intelligent information needs information management – “There is no Operational Excellence Without a Correctly Working Knowledge / Information Management“.

Think beyond Customer Service. In the end, information is needed by everyone involved and everyone contributes to the documentation process. Everyone contributes to creating and revising information, and thus ensuring quality. This includes not only Customer Service, but the entire company organisation, customers, partners and suppliers.

5 Machine Learning Applications in Knowledge Management


Most knowledge subject to a formal management process is available in written form. Those text collections contain more information than what is written in each of the documents. Analyzing the collections with AI methods like Machine Learning (ML) can support the knowledge management process in multiple ways and makes it possible to gain additional insights and advantages. This article provides some ideas on how to gain additional benefits from Knowledge Management (KM) using ML.

 

Example 1: Problem Management

Let’s start with knowledge in its second most common form: informal text with little to no structure in some document that is not subject to any KM process. (The most common form is within people’s heads.) Examples are error reports, maintenance reports or repair work protocols. They may come from technicians trying to solve the issue or users contacting the support. There is a lot of valuable information hidden in a collection of such documents. Which are the most common errors? What is the solution with the highest success rate? Are there areas where additional employee or user training might be needed? This is how you get that information:

First, we need to tag each report with the error described in it. This can be done either with supervised or unsupervised ML.

In supervised ML, you first manually tag a large enough number of documents and then train the system to guess the error from the description. If the error is identified by a code, this part is trivial. If the description is a free text listing symptoms, it is more complicated. If the number of possible errors is high and symptoms vary a lot, unsupervised learning might be the better choice. The system will group the descriptions by similarity of symptoms. Afterwards you manually tag each group with the corresponding error. The drawback is that you might not get one group per error. There might be cases where the system can only narrow down the number of possible errors, but not provide a clear decision.

Now the system can tell which issue a document is about. Thus, you can monitor how often each issue occurs. Here are some examples of the advantages you gain from this:

  • You can find the weaknesses of a product by looking at its most common errors.
  • You know there is an issue with a new update or a special product batch if numbers rise right after the release.
  • You can optimize your training programs to focus on the common issues.
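The unsupervised variant can be illustrated without any ML library: a greedy grouping by word overlap (Jaccard similarity) stands in for proper clustering, and the report texts are invented. A real system would use trained embeddings or a clustering algorithm, but the principle is the same.

```python
def jaccard(a, b):
    """Similarity of two symptom descriptions by shared words."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

def group_reports(reports, threshold=0.5):
    """Greedy unsupervised grouping: a report joins the first group
    whose representative it resembles, else it starts a new group."""
    groups = []  # each group is a list; its first report is the representative
    for report in reports:
        for group in groups:
            if jaccard(report, group[0]) >= threshold:
                group.append(report)
                break
        else:
            groups.append([report])
    return groups

reports = [
    "printer shows error 34 paper jam",
    "paper jam error 34 on printer",
    "screen stays black after boot",
]
print(group_reports(reports))
```

After grouping, you tag each group once with the corresponding error, and counting group sizes over time gives you the monitoring described above.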

 

Example 2: Suggesting Solutions

Guessing the error was only step one. Step two is to train the system not only to recognize the error, but also to suggest a solution. Take the repair protocols for an error and train the system to detect similar procedures. (If you have information on whether the solution worked, use only the successful procedures.) For each group you write an instruction. You can do this either manually, especially for common or high impact issues, or use a text generation algorithm to create an instruction based on the repair description you already have.

The system can now suggest the most common solution to an error (and, if you want, also the second most common in case the first one did not work or is not applicable for some reason). It can tell the error from the description. This makes the investigation process much more efficient. And as a bonus, your technicians no longer need to write the repair protocols themselves, as the system can pre-fill them in most cases.

How well a system like this works depends on several factors. Most important are the number of available documents, the number of errors and the variety among the solution descriptions. The more documents per error and the less variety, the better the system will perform. Even with a great AI model, you should not blindly trust the suggestions. But having them is a definite advantage.

 

Example 3: Efficient Search

The next level is having the information in some sort of central system like a wiki, SharePoint or a knowledge module of a ticketing system. That system most likely has some search implemented to allow users to quickly find what they need. Search engines are very sophisticated these days and sometimes even employ AI technologies for purposes like ranking or spellchecking. A good result ranking is especially important: if the result you are looking for sits at position 24 in the result list, it might as well not be included at all.

The number of times your search terms appear in a document does not necessarily determine its usefulness in your situation, and neither does its click rate. What you need are the pages most used in cases like yours. While ranking results, the search engine should consider which pages users with a similar interest read, which results they skipped or closed after a short look, and which document they finally used. Web analytics and user tracking can provide such data.

To find out which users are looking for the same information, several techniques can be used. Comparing the search terms is straightforward, but it might suffer from the use of synonyms, different languages, or even misuse of terms. Defining and training intents is an alternative. This technique is primarily used in chatbots to extract the information need from free text input, but as the use case in search is similar, it can easily be transferred. Collect search queries that aim for the same information, use them to train the system to recognize the intent, and then let the search check whether a new query matches one of the intents. If so, rank the results usually used by users with this intent higher.

The drawback of this method is that defining intents is not that easy. However, there are other ML techniques that can suggest new intents to add based on the search requests.
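A toy version of intent-based re-ranking can make the idea concrete. The intents, training queries and page titles below are invented, and the plain word-overlap scoring is a stand-in for a trained intent classifier.

```python
# Hypothetical intents, each "trained" from a handful of past queries.
INTENTS = {
    "reset_password": ["forgot password", "reset my password", "password change"],
    "vpn_setup": ["vpn not connecting", "install vpn client", "vpn setup guide"],
}

# Pages that users with each intent actually ended up using.
PREFERRED_PAGES = {
    "reset_password": ["How to Reset Your Password"],
    "vpn_setup": ["VPN Client Installation"],
}

def detect_intent(query):
    """Pick the intent whose training queries share the most words."""
    query_words = set(query.lower().split())
    best, best_score = None, 0
    for intent, examples in INTENTS.items():
        example_words = set(" ".join(examples).split())
        score = len(query_words & example_words)
        if score > best_score:
            best, best_score = intent, score
    return best

def rerank(results, query):
    """Move pages preferred by users with the same intent to the top."""
    boosted = set(PREFERRED_PAGES.get(detect_intent(query), []))
    return sorted(results, key=lambda page: page not in boosted)

results = ["General IT FAQ", "VPN Client Installation", "Password Policy"]
print(rerank(results, "my vpn is not connecting"))
```

The stable sort keeps the engine's original ordering within the boosted and non-boosted groups, so the intent signal refines rather than replaces the base ranking.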

 

Example 4: Personalization

For KM systems with a wide user range, the challenge is to provide everyone with what they need and keep them updated on changes – without making them search for it among content not relevant to them or burying them in notifications. You need to personalize your content delivery. Content should know which users it is relevant to.

To get there, again we collect information via web analytics and user tracking. This time we are interested in who uses which content. Then we use ML to build user groups based on user behavior. In most scenarios, every user will be a member of multiple groups. Once learned, the system can add the users to groups automatically. However, assigning them manually should be possible in addition to that.

For the content, you do the same. You train the system to predict which user groups might be interested in it by looking at the groups interested in similar documents. Now when adding a new document, the system can notify the users it is relevant for, add it at a prominent spot in their view of the interface, and hide it for other users.
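A very small sketch of the grouping idea, using invented reading histories and simple set overlap in place of a learned model:

```python
# Hypothetical usage data: which documents each user has read.
history = {
    "ana":  {"Backup Guide", "Restore Guide", "Tape Rotation"},
    "ben":  {"Backup Guide", "Restore Guide"},
    "cleo": {"Style Guide", "Glossary Rules"},
}

def similar_users(user, min_overlap=2):
    """Users who read enough of the same documents form one group."""
    mine = history[user]
    return {
        other for other, docs in history.items()
        if other != user and len(mine & docs) >= min_overlap
    }

def notify_candidates(similar_docs):
    """A new document is pushed to every user who read similar ones."""
    return {user for user, docs in history.items() if docs & similar_docs}

print(similar_users("ana"))
print(notify_candidates({"Backup Guide"}))
```

In a real system, clustering over much richer behavior data replaces the overlap counts, but the output is the same: user groups that drive both notifications and the personalized interface.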

 

Example 5: Quality Monitoring

User feedback is vital to KM. Without it, you are missing out on an important stakeholder group and risk the acceptance of the KM program. There are many ways to gather feedback: ratings, surveys, user tracking… The best way is enabling comments. Comments allow users to give a detailed opinion. They can ask questions, point out minor weaknesses, and engage directly in the improvement process, as they can give input. And in contrast to a survey, comments do not need much preparation and, on a small scale, little interpretation.

However, when the number of comments on your content grows large, moderating discussions can become time-intensive. In addition, it becomes nearly impossible to grasp the overall mood of all comments on a document. Luckily, both issues can be addressed with the same method: tag a set of comments with the information you need for your comment monitoring, then train the system to recognize these categories from the text. In a marketing context, this is called sentiment analysis, since the desired information is whether customers like or dislike a brand or product. In KM, however, other categories are important, e.g. whether a comment is a question, critique, a suggestion, or praise. Questions and critique should be addressed by the moderator or content owner within a short period of time, while a suggestion might only become relevant with the next content review. Praise, while being the reaction you hope for, might not require any reaction at all. By sorting comments that way using ML, the workload for moderators and content owners decreases.
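To illustrate the sorting, a keyword heuristic can stand in for the trained classifier. The cue words and comments below are invented; a real system would learn these categories from manually tagged examples.

```python
# Keyword heuristics as a stand-in for a trained ML classifier.
# First matching rule wins; order encodes priority.
RULES = [
    ("question", ("?", "how do", "where is")),
    ("critique", ("wrong", "outdated", "broken")),
    ("suggestion", ("could you", "please add", "would be nice")),
    ("praise", ("thanks", "great", "helpful")),
]

def classify(comment):
    """Assign one KM category to a comment."""
    text = comment.lower()
    for label, cues in RULES:
        if any(cue in text for cue in cues):
            return label
    return "other"

comments = [
    "Where is the config file stored?",
    "The screenshot is outdated.",
    "Great page, thanks!",
]
counts = {}
for comment in comments:
    label = classify(comment)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

The per-document counts are exactly the ratio signal used for quality monitoring: many questions and suggestions flag important but refinable content, while critique flags content that needs attention now.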

The same information can be used for quality monitoring. While a high number of comments tells you that a piece of content is noticed, it does not tell you whether that is because the content is useful and important or because it is insufficient. The ratio of the different kinds of comments tells you a lot more. The meaning of praise and critique is obvious. High numbers of questions and suggestions mean the content is important and used (thus has some quality) but might need refinement. This way, you can use the comments to monitor the quality of the content, improving where the need is greatest and noticing early if quality drops.

 

These were only 5 examples of the benefits of combining KM and ML. The implementation of AI often fails because of missing data – but KM can provide data in the form of text. And with the data available, all you need to start is knowing what you want to know. There is so much more possible if you keep in mind that in KM the whole provides more information than its parts.

Experimenting as a team – avato and Würzburg University


The first networking meeting of the Centre for Digital Experimentation (ZDEX) of Würzburg University was held in late October. The objective of the centre: Knowledge exchange and collaborations on concrete issues of digital transformation. The program is funded by the European Social Fund and the state of Bavaria, and will run until the end of 2022.

Innovation, digitisation and automation are central components of avato Information Management. That is why there are many interface points with the thematic networks of the ZDEX programme: The scope of planned projects spans from search optimisation, analytics and AI to virtual assistants. The implementation of some of the project ideas that came up when the university invited us to participate has already begun.

In close collaboration with the Network Digital Media and Social Agents (organised by the Chair of Media Informatics), we will work on optimally adapting our digital assistant Liibot to user needs. Intuitive conversation threads, a pleasant personality, and close alignment with the context of use are essential for a successful chatbot. Within the ZDEX we will analyse and optimise these aspects of Liibot based on current scientific standards.

Several topics are planned simultaneously for our collaboration with the Network for Data Analysis and Natural Language Processing of the Chair of Digital Humanities. Initial trials are already underway for AI-supported analysis of user feedback. The basic idea here is to automatically detect whether a comment from an information user is criticism, praise, a suggestion or a question. This information can then be used to evaluate the quality of the commented content. That will allow targeted optimisation where it is needed most.

The quality of information and documentation can also be measured with other means. An AI, for example, can assess data reliability based on formal criteria. This would be particularly useful where large volumes of historically grown documentation are to be migrated to a modern information management system. An automated presorting of the information allows a more accurate estimate of how much effort will be required for a migration. We also collaborate with the Network for Data Analysis on this issue.

More projects are in the pipeline. Assessment of text comprehensibility, long-term quality assurance and process optimisation are just some of the topics in which the combination of AI and information management offers great potential. The option of involving our partners and customers allows more people and companies to benefit from ZDEX directly. We are very much looking forward to our joint experimentations and the innovative solutions that will emerge from this collaborative effort.

Regional Innovation Drives Global Digitisation


The following article provides insights into a project in which companies have successfully implemented Information Management (IM) together with avato consulting ag and the Munich University of Applied Sciences. The overall objective of the project was to provide user information intelligently using innovative methods and technologies.

2 projects. Moving away from old filing systems, towards modern information management. We report on the parallel projects of avato consulting ag for two of its partner companies in cooperation with the Munich University of Applied Sciences.

You will gain insights into methods used as well as challenges, such as metadata modelling, information creation, modularisation of content, cultural change in dealing with information, virtual, global teams and company-wide information management, in spite of (or because of) Corona.

The two field reports show how an IM project can be planned and carried out with a virtual team from different organisations (university, corporation, consulting company). This also includes the selection of suitable methods and solutions for emerging challenges.

How did the cooperation come about? It all started at the tekom Annual Congress 2019, during an exchange of avato with Prof. Dr. Ley, Professor for Information Management at the Munich University of Applied Sciences. It quickly became clear that there was great interest on both sides in a collaborative student project. The planned project will deal with information management in companies. The students will be accompanied during the project by Prof. Dr. Ley and avato. The implementation is to take place in two of avato’s partner companies, by the students from the 7th semester of the course of studies “Technical Writing and Technical Communication” at the Munich University of Applied Sciences.

The requirements. The task had to fit the subject matter of the degree programme and be realisable within the project. A current task had to be solved for the partner companies, with added value in terms of both methodology and technology. Since the project in the partner companies ran in parallel with everyday business, the time required from the partners had to be kept to a minimum.

Results. The results are a classification and metadata model developed according to the latest scientific standards and implemented with the help of innovative technology, as well as an information portal filled with content for both partner companies.

Goals of Information Management

  1. Become faster: Reduce information searches to a minimum so that each individual becomes more productive
  2. Become better: Provide the correct and necessary information at all times, keep information up to date with minimal effort, and actively record user feedback and use it for improvements
  3. Become more flexible: Familiarise employees with different topics more quickly and enable them to perform more varied tasks
  4. Reduce costs

1. Initial Situation in the Partner Companies

1.1 Global IT Provider

An outsourced Data Centre Service for a global Cloud Service has moved from one Service Provider to another, “Partner1”. In the course of this, an existing Wiki was also handed over to “Partner1”. This Wiki contains information about the Data Centres, Service Providers, technologies used, technology manufacturers and service processes. This information is used to coordinate Service Requests in a complex network of the various parties involved, in the event of Incidents or for maintenance work.

The quality of the “inherited” Wiki is not ideal. Content is incorrect (or no longer correct), missing, or repeated multiple times. In addition, the information is scattered and does not support the complex service process. As a result, employees cannot process the tickets efficiently:

  • In one third of Service Requests, errors are caused by insufficient information (deficiencies in completeness and quality).
  • For the remaining two thirds of Service Requests, time problems arise during processing. Tickets could be processed faster if better-structured information were available.

This results in numerous complaints from the service recipient.

Therefore, the existing documentation is being revised and will in future be made available in a new information system, with the following project objectives:

  • A new structure should make information easier to find
  • A revision of the information should improve the quality of the content
  • The modular structure of the documentation should make it easier to reuse individual pieces of information later for other purposes, for example in the Service Management tool ServiceNow
  • Control by Information Managers is intended to ensure that the information system does not become outdated and is regularly maintained in terms of content and technology
  • The technology should support a continuous improvement process, e.g. through gamification and commentary functions

1.2 Global Air Conditioning and Heating Technology Company

A section of an existing Wiki is to be transferred to the new information system iPortal. This information has evolved historically and is contained in a system that has become outdated. It was, and still is, produced by 3rd Level Support and by Research and Development, and is mainly used by Aftersales (1st and 2nd Level Support). The legacy system contains information about troubleshooting and maintenance of the products.

In future, the documentation will be made available in a new information system, with the following project objectives:

  • Through better preparation and presentation, higher first-resolution rates are to be achieved in Aftersales, in both 1st and 2nd Level Support
  • Control by Information Managers is intended to ensure that the information system does not become outdated but is regularly maintained in terms of content and technology
  • Information Managers should ensure that content is easier to understand by using standardised templates and language
  • Multiple languages and rapid translation by Information Managers will provide better support for the European national companies
  • The possibility of assigning authorisations at the lowest level (even within an article) should make the articles accessible to service/distribution partners as well
  • The modular structure of the documentation should make it easier to reuse individual pieces of information later for other outputs, for example for a chatbot/voicebot or in other systems such as Salesforce or SAP
  • The technology should support a continuous improvement process, e.g. through gamification and commentary functions


2. Planning and Selection of Methods

Preparations by the partner companies and avato began before the start of the semester. The project MagIM – Magic of Information Management – was officially launched during kick-off meetings.

The avato process model was intended to serve as a basis for both projects. For the target/actual comparison, an inventory of the existing documentation and information was then carried out. Usually, as in these two projects, an existing stock of information is available. In this case, the Evaluate method is recommended to assess whether adoption is worthwhile (Capture method) or whether the information must be recreated (Create method).

Fortunately, the content in the legacy systems of the two partner companies did not have to be completely recreated, but could be brought to the desired quality level through a thorough review, application of templates and implementation of the Style Guide. The content was then published by the Information Managers in the new iPortal information system (Publish method), where it will be subject to regular reviews in future using the Governance & Maintenance method.

Details of the individual methods used are described here (accessible after free registration).

Infobox avato Procedure Model

The avato process model for information management comprises six steps. In the first step, the stakeholders are defined; in the second, the stakeholders’ objectives. Step three breaks the broad objectives down to the level of the information system and defines a new, rough target structure. In the fourth step, the broad objectives are broken down into detailed objectives and the target structure is defined in detail. In the fifth step, suitable methods can be chosen. Finally, suitable tools and technologies are selected.

The students’ task was to develop a classification/metadata model for documentation in the partner companies. Parallel to this, an analysis and evaluation of the technology, the iPortal, was to take place. The students divided the project into three sprints: the analysis, the conception and the development of the concepts. At the beginning the teams formed the following subgroups:

  • The customer team dealt with the customer’s point of view and recorded possible problems and questions
  • The Alfresco team worked on the technical implementation of the iPortal CMS in order to identify possible adjustments
  • The iPortal team analysed the iPortal CDS in order to find possible problems and weaknesses, and also dealt with the conception of a possible future iPortal

The students prepared a project plan that included all three sprints. The selection of methods is explained in the following sections.


3. Requirements of the Partners

In the interviews with the partner companies by the students and avato, the requirements quickly became clear.

When designing an interview guideline, the students were guided by the Value Proposition Canvas. The daily tasks of the employees in the partner companies were to be analysed, as well as their Pains and Gains. From this, the Gain Creators, Pain Relievers and the requirements for the information system were to be derived. The interviews with the partner companies made it clear how the employees work and what problems they encounter. The partner companies also expressed their wishes as to what a helpful information system should look like. The students derived the requirements and prioritised them using the MoSCoW method (Must, Should, Could, Won’t).
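As a minimal sketch of the MoSCoW grouping, assuming invented example requirements (not the project's actual list), the prioritisation could look like this:

```python
from collections import defaultdict

# MoSCoW priorities in descending order of importance.
PRIORITIES = ["Must", "Should", "Could", "Won't"]

def group_by_moscow(requirements):
    """Group (priority, description) pairs into the four MoSCoW buckets."""
    buckets = defaultdict(list)
    for priority, description in requirements:
        if priority not in PRIORITIES:
            raise ValueError(f"Unknown MoSCoW priority: {priority}")
        buckets[priority].append(description)
    # Return all buckets in canonical MoSCoW order, even if empty.
    return {p: buckets[p] for p in PRIORITIES}

# Hypothetical requirements, for illustration only.
reqs = [
    ("Must", "Full-text search across all articles"),
    ("Should", "German and English user interface"),
    ("Could", "Dark-mode design theme"),
    ("Won't", "Offline editing in the first release"),
]

for priority, items in group_by_moscow(reqs).items():
    print(priority, "->", items)
```

The fixed bucket order mirrors how such a prioritisation is typically presented: "Must" items first, "Won't" items documented explicitly rather than silently dropped.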

The students then developed a proposal for a roadmap for the implementation of the requirements. Many of the requirements were even implemented during the ongoing project, such as improved search, additional languages and design adaptations.

4. Metadata Model Development

Metadata

  • are the prerequisite for automation
  • are used by text analytics and by the chatbot
  • are needed to search content better and to find content faster
  • are needed to manage versions and initiate workflows
  • are the basis for dashboards

You can read more about metadata modelling here.

In principle, there are content-independent metadata, which should be available for each information unit; examples are the version number and the author. There are also content-dependent metadata. One example is the “topic”, which looks quite different in the documentation of an IT provider than in that of an air conditioning and heating technology company.

For content-independent metadata, it is worthwhile to compare the chosen technology (the iPortal) with other information systems and the usual standards. Apart from isolated differences in the naming of metadata, there were no major differences, so this part of the metadata model could be created without detailed knowledge of the documentation.

The situation was different for content-dependent metadata. In order to develop a good model for this, the students analysed the existing documentation. They also conducted interviews with the partner companies. In this way, the comprehensive metadata models for the partner companies were created. These were based on established standards such as PI-Mod or iiRDS.
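The split between content-independent and content-dependent metadata can be sketched as two layers, where the domain-specific layer extends the common one. All field names below are illustrative assumptions, not the project's actual PI-Mod/iiRDS-based models:

```python
from dataclasses import dataclass, field

@dataclass
class CoreMetadata:
    """Content-independent metadata: present on every information unit."""
    title: str
    author: str
    version: str
    language: str = "en"

@dataclass
class HVACMetadata(CoreMetadata):
    """Content-dependent metadata for the heating/air-conditioning domain.

    Field names are hypothetical examples, not the real model."""
    product_series: str = ""
    information_type: str = ""  # e.g. "troubleshooting" or "maintenance"
    topics: list = field(default_factory=list)

# A troubleshooting article carries both metadata layers at once.
article = HVACMetadata(
    title="Resetting error code E12",
    author="3rd Level Support",
    version="1.2",
    product_series="EcoHeat 300",
    information_type="troubleshooting",
    topics=["error codes", "heat pump"],
)
print(article.language)  # content-independent default, inherited: "en"
```

Modelling the common layer once means every domain model automatically carries version, author and language, which is exactly what comparison against standards made possible without reading the documentation in detail.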

The metadata models were visualised in mind maps, presented to the partner companies and subsequently refined with the help of feedback.


5. Analysis of CDS and CMS

In the course of the projects the students dealt intensively with the Content Delivery System and the Content Management System.

Compared to other systems, the iPortal performed well. Although the initial familiarisation with the iPortal is not (yet) trivial, it offers many functions that reflect the methods of successful information management.

Another major advantage of the iPortal is the comparatively low licensing costs.

From a technical point of view, the iPortal is particularly characterised by the fact that it can be accessed via the browser anytime and anywhere. To avoid network problems, however, the content is also available for download, and an improved offline version is planned. Technically, the CDS and CMS are separate, which brings enormous advantages: the information system can include and link any number of CMSs and CDSs. Additional CDSs include the chatbot and the app. In addition to the CMS for text-, image- and sound-based information, external databases or ticketing systems can also serve as a CMS.

Most important, however, is that the tool supports users optimally in their daily work and that the users are satisfied: a tool that is not accepted by its users is useless. Interviews with the partner companies helped the students find out what requirements future users would have of the iPortal. The collected requirements were classified using the MoSCoW method, resulting in a clear prioritisation (Must, Should, Could, Won’t).

User-friendliness and design were also closely examined. The students gave free rein to their thoughts to determine which design they would like to see for themselves. The groups thus developed very different design ideas. The strengths and weaknesses of the iPortal could be deduced from this.


6. Migration and Information Production

After analysis of the existing documentation and customer requirements, templates were developed and agreed with the partner companies. These templates were used consistently in the new information system – both for newly created information and for the migration of existing information.

An information structure was also developed. This supports the work processes of the employees. The partner company can now see information at a glance that was previously scattered.

During the development of the information structure, it was decided which units should be displayed where. Because units are reusable, each piece of information exists only once, and redundancies do not occur.
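The single-sourcing idea behind this can be sketched in a few lines. The unit names and page names below are invented for illustration; they are not taken from the partner companies' content:

```python
# Single-source store: each information unit exists exactly once.
units = {
    "u-restart": "How to restart the service safely.",
    "u-contacts": "Escalation contacts for the data centre.",
}

# Pages only reference unit ids; the same unit may appear on many pages.
pages = {
    "Incident Handling": ["u-restart", "u-contacts"],
    "Maintenance Work": ["u-restart"],
}

def render(page_name):
    """Assemble a page from the units it references."""
    return "\n".join(units[uid] for uid in pages[page_name])

# Updating a unit once updates every page that references it,
# so no redundant copies can drift out of sync.
units["u-restart"] = "How to restart the service safely (rev. 2)."
assert "rev. 2" in render("Incident Handling")
assert "rev. 2" in render("Maintenance Work")
```

This is also what enables later reuse of the same units in other outputs, such as a ticketing tool or a chatbot: those outputs are simply further consumers of the unit store.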

Before the existing content was adopted, it was subjected to a thorough review by avato Information Managers. The Information Managers remove redundancies, apply templates and create the structures for the new pages. They also make sure that the language and naming are clear and standardised and that the Style Guide is applied consistently. This helps the user to find their way around and thus to absorb information.

The global IT provider took over 130 pages from the existing Wiki; the global air conditioning and heating technology company migrated 30 articles in up to four languages.


7. Results

The results elaborated by the students were presented to the partners in a final presentation and subsequently discussed. In addition to the templates, content migrations and metadata models, results have been achieved in the areas of design and features.

The project plan provided for a comprehensive implementation of the results by the end of the PoC. The creation of templates and taxonomies was thus completed, as were quickly implementable ideas and suggestions in the area of design and features, including the introduction of a continuous improvement process (use of gamification). The content was also migrated to the new information systems.


8. Final Statements and Conclusion

Our experience: as the number of people and parties involved grows, a project becomes more complex, but in the end everyone benefits from the experience and results. The students got to know information management from a new angle. It was also new for them to participate in a global project whose participants are spread all over the world: time zones had to be taken into account, and communication with the other countries was of course in English.

In this project, the partner companies and avato each had the chance to come into contact with over 20 students. The partners and avato benefited from the students’ structured, scientific approach and can use the results for their own benefit in the future.

The project started at a time when almost no one could estimate how Covid-19 would develop. Personal meetings were replaced by virtual ones; the students’ group work and its evaluation also took place remotely. These circumstances demanded all the more discipline and coordination. Right from the start, the Munich University of Applied Sciences, and in particular Prof. Dr. Ley, stood out as extremely flexible and committed. Thus, despite Corona, the project started without significant delay and was implemented very successfully. The interest and personal engagement of those involved was also very high among the partner companies and was an important factor in the success of the project.