Make More Simple: Simpler language for faster reading

Admittedly, it was a struggle not to make the subtitle of this article “Simplifying language for increased readability”. A lot of writers will agree that complex phrases and words come naturally while writing – they make a text more sophisticated, entertaining, diverse, … a lot of good things. But there are cases when you do your readers a favor by stepping away from complexity and turning towards simpler language: when you’re writing knowledge articles. Let’s take a look at why that is and how to get there.

(You’ll find throughout this article that we’re not sticking very closely to the described rules, as this article isn’t itself a knowledge article. We invite you to treat it as an exercise in spotting where to make improvements, if this were to qualify as a simplified knowledge article.)

Why?

When creating knowledge articles, you are likely writing for an audience of experts. Why would that require simpler language if your readers have a high level of expertise? There are several reasons you are helping your readers by simplifying:

  • Easier to understand: Even for an expert audience, simpler language can be read and understood even more easily than more complex words and phrases. Less concentration is needed for reading specific steps or an entire article, leaving more focus for the work at hand.
  • Shorter: Generally, simpler terms and a more straightforward sentence structure are shorter than their more complex and elaborate counterparts. The extra word-length and word-count you save over an entire article can make a big difference. And of course shorter means faster to skim and read, saving your readers a lot of time.
  • Higher consistency: If you stick to simpler language and a limited vocabulary, it makes your content more consistent. The same words will be used to refer to the same things across the entire content. This makes it easier for readers to switch between different articles and know exactly when the same topics are addressed.
  • Better searchability: This consistency of terms also means that your readers can rely on search functions more, as they will deliver more matching results for a given term.

How?

There are several ways you can make the language in your knowledge articles simpler. Of course there are complex topics you are writing about, which will make it hard to stick to all these suggestions. But especially when you are already forced to include some highly specialized terms, consider these tips to balance out the rest of your article:

Simpler words

This one seems obvious, but it can be hard to keep in mind while writing. Simpler words are faster to read and easier to understand. They also limit variety, which, as mentioned, is a good thing: You introduce fewer synonyms for the user to keep in mind and search for.

A few examples from this very section could be made shorter in a knowledge article:

Instead of “obvious”, use “clear”.

Instead of “introduce”, use “add”.

Instead of “keep in mind”, use “remember”.

Using simpler words can make your text feel repetitive, as if you were belittling your readers’ comprehension. But remember that you are not writing knowledge articles for their entertainment, but for them to quickly take in specific information. They’ll be grateful for a text that is easier to read.

Controlled vocabulary

Even when using simpler words, you can still end up with a variety of synonyms for the same thing. Try to limit this. With a more limited vocabulary, you gain clarity, consistency and searchability.

For example, picture yourself describing how to close an app:

Different words: You could use “X symbol”, “X icon”, “X button”, “X control” or simply “X” to describe what the user should click on.

Controlled vocabulary: If you use a consistent approach to what you refer to as a symbol, icon or button, e.g. always using “X control” in this case, it becomes easier for your reader to search for a specific element and recognize it in a different article.

You can use a style guide to help yourself and other writers always use the same terms for the same things.
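A style guide can even be checked mechanically. As a rough illustration, here is a minimal Python sketch that scans a draft for discouraged terms; the STYLE_GUIDE mapping and the example terms are hypothetical:

```python
# Hypothetical style-guide rules: discouraged term -> preferred term.
STYLE_GUIDE = {
    "x symbol": "X control",
    "x icon": "X control",
    "x button": "X control",
}

def check_terms(text):
    """Report style-guide violations found in a draft article."""
    findings = []
    lower = text.lower()
    for term, preferred in STYLE_GUIDE.items():
        if term in lower:
            findings.append(f'use "{preferred}" instead of "{term}"')
    return findings

print(check_terms("Click the X icon to close the app."))
```

A check like this can run on every draft before publishing, so writers get consistent-terminology feedback without a manual review.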

Active language

Going for more active instead of passive language is a good writing tip in a lot of areas. But it is especially helpful in knowledge articles. Using active language for instructions makes it clearer who is supposed to do what.

Here’s a comparison:

Passive language: “After the alert mail is sent and a call dispatched to the hotline, the system can be restarted.”

Active language: “After the helpdesk team has sent the alert mail and has made the call to the hotline, you can restart the system.”

In the first statement, it is unclear which steps are things the reader should do and which ones are triggers they need to wait for and that should be performed by someone else. In the second statement, the reader knows exactly who should do what.

Short sentences

Another clear candidate for simpler language: Keep your sentences as short as possible. Longer sentences can lead to complex sentence structures, where you start making a point in the beginning and finish it in the end of a very long sentence. That can be hard to follow for your reader who is trying to find specific information quickly and is likely already dealing with a relatively complicated subject. Another advantage of shorter sentences is that you save on connecting words you would put into longer sentences, leading to a shorter text overall.

Consider the difference in understandability and length in this example:

Longer sentence: “It’s advisable that after entering the code 1234 and toggling the settings A and B to be active, you save.”

Shorter sentence: “Enter code 1234. Set settings A and B to active. Save.”

Not only does the shorter version save space, it also makes the three actions it describes easier to understand.

Especially with sequential steps, this can also help break down the instruction into smaller steps. (We’d suggest using a numbered list to make things even clearer, but that is a point for an article on structuring tips.)

Leading sentence structure

What is a leading sentence structure? By this we mean arranging a sentence so the most important parts, or the steps that come first, are at the beginning of the sentence. The less important parts, or the steps that come last, are at the end. While this rule does not make sense for every type of sentence, it is especially useful for describing things that follow a specific condition or steps that happen in order.

Two examples:

Highlight conditions: If you want to inform your reader that they should do something in case of another event, you could write:

“Restart the application if you get error code 123.”

But putting the condition first makes it easier for the reader to know when this applies – so instead, write:

“If you get error code 123, restart the application.”

Clarify sequences: When describing some sequential steps, you are not wrong to write:

“Before submitting the form (by clicking the ‘done’ button), be sure to tick the ‘save for later’ box at the bottom of the pop-up.”

But it will help the reader to orient themselves while they follow the instruction if you instead write:

“At the bottom of the pop-up, tick the box ‘save for later’. Then click the ‘done’ button to submit the form.”

This way you start with the biggest orientation help (“at the bottom of the pop-up”) and the first step (“tick the box ‘save for later’”), giving the reader a good idea where to start, both visually and in the order of steps.

Paying attention to this rule will help your readers know the most important part right away, allowing them to follow the sentence more easily or letting them know immediately that the condition the sentence starts with might not apply to them.

How simple is simple enough?

Trying to stick to these points already means you’re doing a lot to help your readers consume your content faster. But if you are curious just how readable your content is, there are several ways to measure this.

  • Web tools for copy-pasting and evaluating your texts: There are plenty of websites that allow you to copy-paste the text you want evaluated and then see how it performs against different measures of readability. This is a great way to gauge how readable your typical articles are or how certain changes affect them. (For example, the free options from readable and WebFX are a good starting point.)
  • Customized analytics: If you want to frequently measure the readability of content, perhaps not only yours but bigger amounts from a knowledge base, setting up your own analytics measurement might be the way to go. You can tweak it to the conditions of your content, accounting for necessary specialist vocabulary and have it scan your content automatically at publishing or in big batches. Of course, this requires a bit of expertise – ideally in general programming as well as text processing and natural language processing areas. If you are curious to learn more, stay tuned for our article on content analytics, coming soon.
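For a sense of what such tools measure: many of them build on formulas like the Flesch reading ease score, which rewards short sentences and short words. Here is a minimal Python sketch of the idea; note that the syllable counter is a naive heuristic (real tools use pronunciation dictionaries), so the scores are only approximate:

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of vowels. Real tools use dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch formula: higher score means easier to read (roughly 0-100).
    return 206.835 - 1.015 * (len(words) / len(sentences)) \
                   - 84.6 * (syllables / len(words))

simple = "Enter code 1234. Set settings A and B to active. Save."
complex_ = ("It is advisable that after entering the code 1234 and "
            "toggling the settings A and B to be active, you save.")
print(flesch_reading_ease(simple))
print(flesch_reading_ease(complex_))
```

Run on the two example sentences from the “Short sentences” section above, the shorter version scores noticeably higher, i.e. easier to read.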

By now, you are probably painfully aware of all the ways in which this very article could be simplified if it had to fulfil the standards of a quickly readable knowledge article. Take that mindset along to your next knowledge article, being critical every time you could use a simpler word or a shorter sentence, and continue enabling your readers to work faster and with less frustration.

Curious to learn about more ways, aside from simpler language, to make your knowledge articles even more reader friendly? We’ve got an overview for you in our article The Art of Simple Information. Also, keep an eye out for more articles coming soon.

Did we miss ways of simplifying language you always use in your knowledge articles?

Any ideas, comments, feedback? Tell us: marketing@avato.net

Imprint: 
Date: August 2021
Author: Kris Schmidt
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

Webinar: Knowledge Management & Analytics – Automatically Evaluating Knowledge Bases

In the modern world, knowledge has become the dominant production factor. A major fraction of the knowledge a company possesses is locked away in texts – documentation, reports, mails and so on.
Text analytics and knowledge management enable you to unlock this potential by identifying, sorting and contextualizing the underlying information.
 
Join our webinar to learn how text analytics can help you in getting all the value out of your texts and in setting up efficient knowledge management.
 
Where: Online, free webinar
Date: 15 September 2021
Time: 10 am EDT / UTC -4 (4 pm CET)
Duration: 1 hour
 
The Art of Simple Information: Optimizing Knowledge Articles for Your Readers

If you write knowledge articles and instructions, you know there’s plenty to pay attention to: Are all necessary steps described, are all details correct, is everything up to date, do you need to include code snippets or flow diagrams…? With so many aspects to juggle, it’s normal that focus can drift away from the essential factor: your reader. To help set the focus back on them, here’s a checklist of easy ways you can make your knowledge articles and instructions easier and faster to read. Help your readers get through their workday with more success and less frustration with simpler knowledge articles.

Simpler language

Typically, readers of documentation are in a hurry to find information or instructional steps. Don’t drag things out. Keep it short and simple. That includes:

  • Simple language: Short sentences and simple language make text easier and faster to skim and understand. If you can break a sentence into several, do so. If you can replace a complicated word with a simple one, that’s good. Don’t be afraid you’re coming off as belittling or boring to your readers when using simple language. They’re reading your knowledge article not for entertainment, but to quickly find what they’re looking for.
  • Limited vocabulary: Try to use the same word for the same thing throughout the whole article or instruction. It might seem repetitive, but it helps follow the instruction more easily and it’s also great for readers using the search function.
  • Leading sentence structure: This means structuring the words in your sentences so that the parts that help readers most to orient themselves or that happen first appear in the beginning of the sentence.

Guide the eye

Language is not the only thing that can make your knowledge article simpler: making it visually easy to segment the content makes both reading and skimming much faster.

  • Give an overview: Especially for longer pages, offer an interactive table of contents with links that can take the readers straight to the parts they need.
  • Break up text with structure: Short paragraphs, headings and structuring elements like lists help with skimming content and quickly finding what you’re looking for.
  • Highlighting: Use things like bold text or colors to help the eye find the important parts. But try to keep it balanced. If every second word is an eye-catcher, it won’t be helpful to the reader anymore.

Stick to conventions

Try to stick to common conventions to make reading your content more intuitive, such as:

  • Common terminology to refer to things rather than customized names
  • Familiar color schemes like red for things that are negative and green for things that are positive

Reader-friendly illustrations

First of all: You do not have to use illustrations. If the content doesn’t need them or they might even make things more confusing, there’s no need to come up with a visualization or to add stock pictures. You want a simpler knowledge article, not a more cluttered one.

But if your content can benefit from an illustration, consider these tips:

  • Which type of illustration best matches the content: If you’re displaying a process, you likely want to use a flow diagram; if you have data to show, use a chart; if you show how to operate an application, use a screenshot.
  • Declutter: Only include what is necessary, so the illustration can be understood quickly and without further explanation.
  • Give a key: If the illustration does need an explanation, be sure to give one. Let the reader know what different colors, symbols and columns stand for.
  • Make your illustration accessible: You can help not only users with visual impairments but all your readers by making your illustrations more accessible. Make sure all details are clearly visible and explained. Put further explanations in the text, so they’re searchable.

Ask your readers what they need in a simpler knowledge article!

Sometimes it’s hard to decide on the best way of describing a specific piece of information or an instruction, or how to optimize your knowledge article further. If you struggle, focus on the readers’ perspective.

  • Note down a reader persona: Noting down some typical attributes your readers have can help you focus on them and their needs.
  • Ask a colleague: Getting a second perspective can always help make improvements. Your colleagues might not be your readers, but they can still help you find parts that could be explained more clearly or have good ideas for simplifying an illustration.
  • Ask a reader: Lastly, of course, ask your readers what they think. This doesn’t have to be before publishing. Feedback can also be collected via reviews, comment or contact forms. All of these offer valuable input, not just for the content you just published, but for future knowledge articles, too.

There are a lot of details to branch out to from this general overview: tips for limiting vocabularies, highlighting rules, a bigger look at web conventions or accessibility guidelines. Stick around for more articles coming soon, helping you make the perfect content for your readers’ needs.

Did we miss something you always do to help your readers? Send us an email to marketing@avato.net and let us know!

Do you have questions? We will be happy to advise you: marketing@avato.net

Imprint: 
Date: August 2021
Author: Kris Schmidt
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

AI meets KM: The Synergy of Artificial Intelligence and Knowledge Management

AI makes it possible to examine large amounts of data quickly and thus derive the greatest possible benefit from it. The more complex the problem, the more data is needed for the AI to work at all. For this reason, data acquisition is often one of the biggest obstacles to the introduction of AI, especially for SMEs.

On the other hand, knowledge management is often confronted with the opposite problem. The large amount of data, including knowledge articles, templates, metadata, access data, etc., makes it difficult to keep track of it all and to fully exploit the value of the available information.

That is exactly why AI and KM are made for each other. The strengths of one are the solution to the problems of the other. This article presents some use cases where the synergy of KM and AI opens up new opportunities, increases efficiency and improves quality.

Find knowledge: Search and navigation

The larger the knowledge base, the more difficult it is to find the information you are looking for. AI can support this and thus minimise time expenditure and increase efficiency. Here are some examples:

Recommendations

A frequently used method, which shows how AI can provide support, is recommendations in the form of “Other users were also interested in”. These recommendations are mostly based on which articles users have viewed before or after the current article. Articles that come up particularly often are recommended. The prediction can be even more accurate if the users are divided into groups. Then the interests of the group the respective user is in can be taken into account to an even greater degree.

The same mechanism can also influence the ranking of search results. If a page is used more frequently by a particular user group, it appears further up in the results list.
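The counting behind “Other users were also interested in” can be sketched in a few lines of Python, using hypothetical session data (article IDs viewed in order):

```python
from collections import Counter

# Hypothetical view histories: ordered lists of article IDs per session.
sessions = [
    ["setup-guide", "error-123", "restart-howto"],
    ["error-123", "restart-howto"],
    ["setup-guide", "faq", "error-123"],
    ["error-123", "restart-howto", "faq"],
]

def recommendations(article, sessions, top_n=2):
    """Count which articles were viewed directly before or after `article`."""
    neighbours = Counter()
    for views in sessions:
        for i, a in enumerate(views):
            if a == article:
                if i > 0:
                    neighbours[views[i - 1]] += 1
                if i + 1 < len(views):
                    neighbours[views[i + 1]] += 1
    return [a for a, _ in neighbours.most_common(top_n)]

print(recommendations("error-123", sessions))
```

In this toy data, “restart-howto” is viewed next to “error-123” most often, so it tops the recommendation list. A real system would also weight by user group and recency.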

Identifying user groups

User groups can be derived, for example, from the user’s job description, department, or rights. AI systems are also able to form groups automatically and assign new users to a group. The basic concept is that users with similar user behaviour form a group.
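As a rough illustration of behaviour-based grouping, here is a Python sketch that assigns a new user to the existing group whose members have the most similar page-view profile (cosine similarity). The users, pages and groups are hypothetical; a real system would cluster behaviour automatically, e.g. with k-means:

```python
import math

# Hypothetical page-view counts per user: {page: views}.
users = {
    "alice": {"db-guide": 8, "sql-faq": 5},
    "bob":   {"db-guide": 6, "sql-faq": 7},
    "carol": {"ui-howto": 9, "css-tips": 4},
}
groups = {"database": ["alice", "bob"], "frontend": ["carol"]}

def cosine(a, b):
    """Cosine similarity between two sparse view-count vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def assign_group(new_user_views):
    """Assign a user to the group whose members behave most similarly."""
    def group_score(members):
        return sum(cosine(new_user_views, users[m]) for m in members) / len(members)
    return max(groups, key=lambda g: group_score(groups[g]))

print(assign_group({"db-guide": 3, "sql-faq": 2}))
```

A user who mostly reads database pages lands in the “database” group, so their searches and menus can be tuned accordingly.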

Optimising menus

If the knowledge base uses a hierarchical menu for navigation, this can also benefit from the findings about the user groups. The system can suggest which pages should be next to each other in the menu because they are often used together. A separate menu structure can even be created for each user group, adapted to their respective needs.

The timing of the search

In addition to the question of who is searching, AI can also consider the question of when. If a user enters several search queries in a row without thoroughly looking at any of the results, it may be worth proactively suggesting another communication channel.

Calendar information can also be included. Certain information is only needed at the end of the month or year, or at certain time intervals. AI recognises these regular patterns and makes it possible to offer the respective information at the right time, without the user having to search for it first.

KM processes: Monitoring and moderation

Information becomes outdated. Even the requirements for a knowledge article can change over time. That’s why the quality of the content must be constantly monitored. Manual monitoring would be too inefficient, so it makes sense to use AI here as well. A whole range of KPIs are conceivable and can be collected automatically.

To take one example: Many knowledge bases offer a comment function. AI can use sentiment analysis to determine whether comments are broadly positive, broadly critical, or generally asking questions. In combination with other information, such as the frequency of access and the number of likes, this can be used to determine how well the article is rated from the user’s point of view. The system automatically places articles with poor or rapidly declining ratings on the list of articles that need to be revised.

(More details on how to monitor the quality of a knowledge base can be found in the article “Slimming Cure for Confluence Wikis“.)
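Production systems would use a trained sentiment model, but the principle can be sketched with a tiny hand-made word list. The lexicon, example comments and revision threshold below are all hypothetical:

```python
# Hypothetical sentiment lexicon for knowledge-base comments.
POSITIVE = {"great", "helpful", "clear", "thanks"}
NEGATIVE = {"outdated", "wrong", "confusing", "broken"}

def comment_sentiment(comment):
    """Crude lexicon-based polarity: positive, critical, or neutral."""
    words = comment.lower().split()
    score = sum(w.strip(".,!?") in POSITIVE for w in words) \
          - sum(w.strip(".,!?") in NEGATIVE for w in words)
    return "positive" if score > 0 else "critical" if score < 0 else "neutral"

def needs_revision(comments, threshold=0.5):
    """Flag an article when the share of critical comments passes a threshold."""
    critical = sum(comment_sentiment(c) == "critical" for c in comments)
    return critical / len(comments) >= threshold

comments = ["This guide is outdated and wrong.", "Very helpful, thanks!",
            "Step 3 is confusing."]
print(needs_revision(comments))
```

Combined with access counts and likes, a flag like this feeds the automatic “needs revision” list described above.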

Wherever there is a comment function, there must also be a certain amount of moderation. Again, it would be inefficient to have content experts regularly scan the comments for questions and suggestions. However, if an AI system assesses the comments according to their urgency and only asks the person responsible to react if necessary, the best possible result is achieved with the least possible effort.

Training: Identifying and eliminating knowledge gaps

Knowledge management does not only include the administration of written information. The knowledge in the minds of customers and employees can and should also be managed. This is done through targeted training. Personalised training is more effective than a standardised plan that is based on the average rather than the individual’s needs. AI can identify the needs of the individual and suggest appropriate refresher courses and in-depth training. If a user often looks for information on a certain topic or makes similar mistakes again and again, a training course on this topic will be suggested to that user.

Training courses are difficult to generate automatically due to their complexity. Tests to check the level of knowledge, on the other hand, can be generated automatically, at least in cases where a query about factual knowledge is sufficient. Modern methods for knowledge extraction can generate a question including the corresponding answer from a text. After editorial review, these tests can then also be used to determine individual training needs.

Conclusion

Without AI, only a small percentage of the data generated in and around knowledge management can be used. AI, in turn, needs this data and can generate real added value on the basis of this. This helps to improve knowledge management. The result: high-quality content, efficient use and, in the end, better results and fewer errors. AI and KM: a perfect team.

Do you have questions? We will be happy to advise you: marketing@avato.net

Imprint: 
Date: July 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

FAQ Automation: Taking advantage of the “F”

Frequently Asked Questions (FAQs) are among the first places users go when they need information. The advantage of FAQs is that they are confined to the most important points and most common problems. This means it is much easier to monitor your FAQs than your entire documentation. But to facilitate this, FAQs must be well sorted, easy to navigate and, above all, up-to-date. In other words, the success of FAQs depends on the capabilities of the editorial team. A large number of questions results in a great deal of work for the FAQ editorial team. However, a high volume and frequency of individual questions can also be advantageous. The “F” (“Frequently”) in FAQ enables automation that minimises manual and/or repetitive editorial work. Here are a few ideas on where to start.

Automatic Recognition of Similar Questions

A first step towards FAQ automation is recognising similar questions. If users cannot resolve their question (quickly enough) via FAQs or navigation, they will switch to other channels. Most of the time, this is either a search or a service centre. If the service centre is contacted via ticket, email, or chat, or if requests are logged, the collection of user questions expands. Depending on the format and source, this collection can be automatically and continuously searched for patterns.

Identifying common terms in search queries is quite simple to do. More complex methods are needed to identify common themes and similarities, e.g. in the problem descriptions in a ticket. These methods come from the field of Natural Language Processing (NLP). However, using topic modelling, distance measurement and dendrograms, it is still possible to recognise groups and patterns.

If a topic is identified as occurring particularly frequently, the system can flag this for the FAQ editorial team. The system will list the questions and searches that are associated with the topic. Instead of going through all the questions themselves, the editorial team only have to decide whether the detected pattern is meaningful and relevant.
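The simplest version of this, counting common terms across incoming questions, can be sketched in a few lines of Python. The tickets and stopword list are hypothetical; real pattern detection would use the NLP methods mentioned above:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "to", "my", "i", "is", "how", "do", "can", "in", "not"}

def keywords(text):
    """Lowercased content words of a ticket, minus trivial stopwords."""
    return {w.strip("'") for w in re.findall(r"[a-z']+", text.lower())
            if w.strip("'") not in STOPWORDS}

tickets = [
    "I can't reset my password",
    "Password reset link not working",
    "How do I change my password?",
    "Invoice PDF won't download",
]

def frequent_topics(tickets, min_count=2):
    """Terms that recur across tickets, most frequent first."""
    counts = Counter()
    for t in tickets:
        counts.update(keywords(t))
    return [w for w, c in counts.most_common() if c >= min_count]

print(frequent_topics(tickets))
```

Terms that recur across many tickets ("password", "reset" in this toy data) are flagged for the editorial team as candidate FAQ topics.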

Automatic Detection of Duplicates

Another step towards more automation of FAQs is the detection of duplicates. If the same question has two different answers, this is not helpful for users in cases of doubt. On the contrary, it leads to confusion and frustration. That’s why duplicates must be detected and eliminated. Before adding a question to the FAQs, it is important to check whether it already exists.

With good sorting and few questions, this should not take too much effort. But for complex areas with different subdivisions and perspectives, it’s a different story. And if several FAQs need to be merged or different departments handle their own FAQs, this can also cause complications.

Recognising duplicates works similarly to recognising similar new questions. In this case, the difficulty is that the texts are very short, which is a problem for many NLP techniques. In addition, there is little data with which to train an intelligent system. This is because in most cases duplicates are simply removed, rather than being marked as such. Listing questions with similar keywords and categories is therefore the more efficient approach. Then the editorial team can make comparisons with the new question with minimal effort.
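One lightweight way to list candidate duplicates is word-overlap (Jaccard) similarity, sketched below with hypothetical FAQ questions and an arbitrary threshold; punctuation handling is deliberately naive:

```python
def jaccard(a, b):
    """Share of words two questions have in common (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

faq = ["How do I reset my password?", "Where can I download my invoice?"]

def possible_duplicates(new_question, faq, threshold=0.4):
    """Existing questions that overlap enough to be possible duplicates."""
    return [q for q in faq if jaccard(new_question, q) >= threshold]

print(possible_duplicates("How do I reset a password?", faq))
```

The editorial team then only compares the new question against this short candidate list instead of the entire FAQ.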

Automatic Sorting

Another possibility for FAQ automation is in the area of sorting. In order to keep the FAQs clear or make them more accessible for a search, the questions are almost always divided into categories or even arranged in a topic hierarchy. This sorting can also take place automatically. To do this, the AI learns the common features of the questions in a category or branch of the hierarchy. It then checks which group the new question is most likely to fit into. If a group becomes too large, the system can also examine which questions within a group are most thematically similar and thus suggest a subdivision.
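As a toy illustration of this idea, the Python sketch below assigns a new question to the category whose existing questions share the most words with it. The categories and example questions are hypothetical; a real system would use a trained classifier:

```python
# Hypothetical FAQ categories with their existing questions.
CATEGORY_EXAMPLES = {
    "account": ["How do I reset my password?",
                "How can I change my email address?"],
    "billing": ["Where can I find my invoice?",
                "How do I update my payment method?"],
}

def words(text):
    return {w.strip("?,.!") for w in text.lower().split()}

def suggest_category(question):
    """Pick the category whose existing questions share the most words."""
    def overlap(examples):
        return sum(len(words(question) & words(q)) for q in examples)
    return max(CATEGORY_EXAMPLES, key=lambda c: overlap(CATEGORY_EXAMPLES[c]))

print(suggest_category("I forgot my password, how do I reset it?"))
```

The suggestion is only a default; the editorial team can still move the question if the guess is wrong.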

Furthermore, if subject areas and organisations change over time, the old sorting criteria may no longer fit. A new structure can be created based on similarities between questions. However, an editorial follow-up is strongly recommended: while the results have the advantage of being aligned to real data, independent of personal preferences, the subdivisions are not necessarily comprehensible to a human reader.

Bottom Line

AI and NLP can support the automation of FAQs in many places. They reduce effort and help with uniform structuring. This gives the editorial team the opportunity to concentrate on content and design work, continuously increasing the usefulness of the FAQs and, with it, user satisfaction.

Do you have questions or feedback? Please get in touch: marketing@avato.net

Imprint: 
Date: July 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

5 Ways to Use Machine Learning in Knowledge Management

Most knowledge subject to a formal management process is available in written form. Those text collections contain more information than what is written in each of the documents. Analyzing the collections with AI methods like Machine Learning (ML) can support the knowledge management process in multiple ways and makes it possible to gain additional insights and advantages. This article provides some ideas on how to gain additional benefits from Knowledge Management (KM) using ML.

 

Example 1: Problem Management

Let’s start with knowledge in its second most common form: informal text with little to no structure in some document that is not subject to any KM process. (The most common form is within people’s heads.) Examples are error reports, maintenance reports or repair work protocols. They may come from technicians trying to solve the issue or users contacting the support. There is a lot of valuable information hidden in a collection of such documents. Which are the most common errors? What is the solution with the highest success rate? Are there areas where additional employee or user training might be needed? This is how you get that information:

First, we need to tag each report with the error described in it. This can be done either with supervised or unsupervised ML.

In supervised ML, you first manually tag a large enough number of documents and then train the system to guess the error from the description. In case the error is described with a code, this part is trivial. If the description is a free text listing symptoms, it is more complicated. If the number of possible errors is high and symptoms vary a lot, unsupervised learning might be the better choice. The system will group the descriptions by similarity of symptoms. Afterwards you manually tag each group with the corresponding error. The drawback is that you might not get one group per error. There might be cases where the system can only limit the number of possible errors, but not provide a clear decision.
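A bare-bones supervised version of this can be sketched in Python: count which words appear in the reports for each error, then score a new description against those counts. The training reports and error tags below are hypothetical; a production system would use a proper classifier:

```python
from collections import Counter, defaultdict

# Hypothetical labelled reports: (free-text symptom description, error tag).
training = [
    ("printer shows paper jam in tray two", "PAPER_JAM"),
    ("paper stuck in the tray again", "PAPER_JAM"),
    ("toner low warning on display", "TONER_LOW"),
    ("print is faint, toner almost empty", "TONER_LOW"),
]

# Per-error word frequencies learned from the training reports.
word_counts = defaultdict(Counter)
for text, error in training:
    for w in text.lower().split():
        word_counts[error][w] += 1

def guess_error(description):
    """Score each known error by how often its training words appear."""
    words = description.lower().split()
    scores = {e: sum(c[w] for w in words) for e, c in word_counts.items()}
    return max(scores, key=scores.get)

print(guess_error("there is paper jammed in the tray"))
```

Even this crude word-voting picks the right tag for clear-cut symptom descriptions; ambiguous ones are exactly where the unsupervised grouping described above helps.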

Now the system can tell what issue the document is about. Thus, you can monitor how often which issue occurs. Here are some examples for the advantages you get from this:

  • You can find the weaknesses of a product by looking at its most common errors.
  • You know there is an issue with a new update or a special product batch if numbers rise right after the release.
  • You can optimize your training programs to focus on the common issues.

 

Example 2: Suggesting Solutions

Guessing the error was only step one. Step two is to train the system not only to recognize the error, but also to suggest a solution. Take the repair protocols for an error and train the system to detect similar procedures. (If you have information on whether the solution worked, use only the successful procedures.) For each group you write an instruction. You can do this either manually, especially for common or high impact issues, or use a text generation algorithm to create an instruction based on the repair description you already have.

The system can now tell the error from the description and suggest the most common solution for it (and, if you want, also the second most common in case the first one did not work or is not applicable for some reason). This makes the investigation process much more efficient. And as a bonus, your technicians no longer need to write the repair protocols, as the system can pre-fill them in most cases.

How well a system like this works depends on several factors. Most important are the number of available documents, the number of errors and the variety among the solution descriptions. The more documents per error and the less variety, the better the system will perform. Even with a great AI model, you should not blindly trust the suggestions. But having them is definitely a big advantage.
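The core bookkeeping behind such suggestions is simple to sketch. Below, hypothetical repair protocols are reduced to a per-error counter of solutions that actually worked:

```python
from collections import Counter, defaultdict

# Hypothetical repair protocols: (error tag, solution applied, success flag).
protocols = [
    ("PAPER_JAM", "open tray and remove sheet", True),
    ("PAPER_JAM", "open tray and remove sheet", True),
    ("PAPER_JAM", "restart printer", False),
    ("TONER_LOW", "replace toner cartridge", True),
]

solutions = defaultdict(Counter)
for error, solution, success in protocols:
    if success:  # only learn from solutions that actually worked
        solutions[error][solution] += 1

def suggest_solutions(error, top_n=2):
    """Most common successful solutions for an error, best first."""
    return [s for s, _ in solutions[error].most_common(top_n)]

print(suggest_solutions("PAPER_JAM"))
```

The failed "restart printer" attempt is filtered out, so only solutions with a track record are suggested.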

 

Example 3: Efficient Search

The next level is having the information in some sort of central system like a wiki, SharePoint or a knowledge module of a ticketing system. In that system you most likely have some search implemented to allow users to quickly find what they need. Search engines are very sophisticated these days and sometimes even employ AI technologies for various purposes like ranking or spellchecks. A good result ranking is especially important for search: if the result you are looking for is at position 24 in the result list, it might as well not be included at all.

The number of times your search terms appear in a document does not necessarily determine its usefulness in your situation, nor does its click rate. What you need are the pages most used in cases like yours. While ranking results, the search engine should consider which pages users with a similar interest read, which results they skipped or closed after a short look, and which document they finally used. Web analytics and user tracking can provide such data.

To find out which users are looking for the same information, several techniques can be used. Comparing the search terms is straightforward, but can suffer from synonyms, different languages, or even misuse of terms. Defining and training intents is an alternative. The technique is primarily used in chatbots to extract the information need from free-text input, but as the use case in search is similar, it transfers easily. Collect search queries that aim for the same information, use them to train the system to recognize the intent, and then let the search check whether a new query matches one of the intents. If so, rank the results usually used by users with this intent higher.
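As a minimal sketch of the matching step, the following compares a new query against collected example queries per intent using token overlap (Jaccard similarity). All intents, queries, and the threshold are hypothetical; a production system would use a trained intent classifier rather than raw overlap.

```python
# Hypothetical intents, each backed by collected search queries.
intents = {
    "printer_setup": ["install printer driver", "printer not found", "add new printer"],
    "vpn_access": ["vpn connection fails", "setup vpn client", "vpn login error"],
}

def tokens(text):
    return set(text.lower().split())

def match_intent(query, threshold=0.2):
    """Return the intent whose example query overlaps most with the new query,
    measured as Jaccard similarity; None if nothing clears the threshold."""
    best, best_score = None, threshold
    q = tokens(query)
    for intent, examples in intents.items():
        for example in examples:
            e = tokens(example)
            score = len(q & e) / len(q | e)
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("my vpn login keeps failing"))  # vpn_access
```

Once the intent is known, the search engine would boost the rank of documents that users with the same intent ended up using, which is exactly the re-ranking described above.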

The drawback of this method is that defining intents is not easy. However, other ML techniques can suggest new intents to add, based on the incoming search queries.

 

Example 4: Personalization

For KM systems with a wide range of users, the challenge is to provide everyone with what they need and keep them updated on changes – without making them search for it among content not relevant to them, or burying them in notifications. You need to personalize your content delivery: content should know which users it is relevant to.

To get there, we again collect information via web analytics and user tracking. This time we are interested in who uses which content. Then we use ML to build user groups based on that behavior. In most scenarios, every user will be a member of multiple groups. Once the groups are learned, the system can assign users to them automatically. However, assigning them manually should remain possible as well.
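A toy version of the grouping step might look like this. The usage data, users, and threshold are all hypothetical, and the greedy single-link grouping shown here assigns each user to one group for brevity; real clustering (and the multi-group membership mentioned above) would use a proper ML method.

```python
# Hypothetical usage data from web analytics: which documents each user opened.
usage = {
    "alice": {"vpn_howto", "fw_rules", "net_faq"},
    "bob":   {"vpn_howto", "fw_rules"},
    "carol": {"crm_guide", "sales_faq"},
    "dave":  {"crm_guide", "sales_faq", "net_faq"},
}

def similarity(a, b):
    """Jaccard similarity of two users' document sets."""
    return len(usage[a] & usage[b]) / len(usage[a] | usage[b])

def build_groups(threshold=0.5):
    """Greedy single-link grouping: a user joins the first group
    containing someone with sufficiently similar reading behavior."""
    groups = []
    for user in usage:
        for group in groups:
            if any(similarity(user, member) >= threshold for member in group):
                group.add(user)
                break
        else:
            groups.append({user})
    return groups

print(build_groups())
```

Users with overlapping reading behavior end up together ({alice, bob} and {carol, dave} here), which is the raw material for the content targeting described next.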

For the content, you do the same: train the system to predict which user groups might be interested in a document by looking at the groups interested in similar documents. When a new document is added, the system can then notify the users it is relevant for, place it at a prominent spot in their view of the interface, and hide it from other users.
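The prediction for new content can be sketched as follows, again with entirely hypothetical documents, tags, and groups: a new document inherits the interested groups of existing documents it shares tags with (a crude stand-in for the similarity model the text describes).

```python
# Hypothetical existing documents: (tags, user groups known to read them).
documents = {
    "vpn_howto": ({"vpn", "remote", "network"}, {"it_support", "sales_field"}),
    "crm_guide": ({"crm", "sales", "contacts"}, {"sales_field", "sales_office"}),
    "fw_rules":  ({"network", "firewall", "security"}, {"it_support"}),
}

def predict_groups(new_tags, min_overlap=1):
    """Collect the groups of all documents sharing at least min_overlap tags
    with the new document."""
    groups = set()
    for tags, doc_groups in documents.values():
        if len(new_tags & tags) >= min_overlap:
            groups |= doc_groups
    return groups

print(sorted(predict_groups({"vpn", "security"})))
```

A new VPN security article would be surfaced for the IT support and field sales groups and hidden from everyone else, matching the delivery behavior described above.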

 

Example 5: Quality Monitoring

User feedback is vital to KM. Without it, you are missing out on an important stakeholder group and risk losing acceptance of the KM program. There are many ways to gather feedback: ratings, surveys, user tracking… The best way to gather feedback is enabling comments. Comments allow users to give a detailed opinion: they can ask questions, point out minor weaknesses, and contribute input directly to the improvement process. And in contrast to a survey, comments need little preparation and, on a small scale, little interpretation.

However, when the number of comments on your content grows large, moderating discussions can become time-intensive. In addition, it becomes nearly impossible to grasp the overall mood of all comments on a document. Luckily, both issues can be addressed with the same method: tag a set of comments with the information you need for your comment monitoring, then train the system to recognize these categories from the text. In a marketing context this is called sentiment analysis, since the desired information is whether customers like or dislike a brand or product. In KM, however, other categories matter, e.g. whether a comment is a question, a critique, a suggestion, or praise. Questions and critique should be addressed by the moderator or content owner within a short period of time, while a suggestion might become relevant only with the next content review. Praise, while being the reaction you hope for, might not require any reaction at all. By sorting comments this way using ML, the workload for moderators and content owners decreases.
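To make the sorting step concrete, here is a deliberately simple stand-in for the trained classifier: hand-picked keyword lists per category (all hypothetical). In practice the categories would be learned from the tagged comments, but the input and output of the component are the same.

```python
# Stand-in for a trained comment classifier: hypothetical keyword lists.
CATEGORIES = {
    "question":   {"how", "why", "what", "?"},
    "critique":   {"wrong", "outdated", "missing", "unclear"},
    "suggestion": {"should", "could", "suggest", "maybe"},
    "praise":     {"great", "thanks", "helpful", "perfect"},
}

def classify(comment):
    """Assign a comment to the category with the most keyword hits."""
    words = set(comment.lower().replace("?", " ? ").split())
    scores = {cat: len(words & kw) for cat, kw in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify("Why is step 3 missing?"))  # question
```

A moderation queue would then route "question" and "critique" comments to the content owner promptly, park "suggestion" comments for the next review, and leave "praise" alone, exactly as described above.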

The same information can be used for quality monitoring. While a high number of comments tells you that a piece of content gets noticed, it does not tell you whether that is because it is useful and important or because it is insufficient. The ratio of the different kinds of comments tells you a lot more. The meaning of praise and critique is obvious. High numbers of questions and suggestions mean the content is important and used (so it has some quality) but might need refinement. This way, you can use comments to monitor the quality of your content, improve it where the need is greatest, and notice early when quality drops.
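Once comments carry a category (however that categorization is produced), the ratio described above is a simple count-and-divide. The sample categories here are hypothetical.

```python
from collections import Counter

def comment_profile(categories):
    """Share of each comment category for one document."""
    counts = Counter(categories)
    total = sum(counts.values())
    return {cat: count / total for cat, count in counts.items()}

profile = comment_profile(
    ["question", "question", "suggestion", "praise", "question", "critique"]
)
print(profile)
```

Here half of the comments are questions, hinting that the document is used but needs refinement; a profile dominated by critique would instead flag a quality problem early.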

 

These were only five examples of the benefits of combining KM and ML. The implementation of AI often fails because of missing data – but KM can provide data in the form of text. And with the data available, all you need to start is knowing what you want to know. There is so much more possible if you keep in mind that in KM, the whole provides more information than its parts.