What do you learn?
- Current developments that increase the relevance of knowledge management
- A methodical approach to automatically analyze knowledge bases
- Actual results from practice
Today it’s all about text meeting visuals. Not illustrations – we’ll have another article on that topic soon – but how to use structuring and highlighting to guide the eye, to make text easier to skim, read, understand, and remember.
To start, consider an overview, like the one you’ve just read. For some subjects and content, an overview can seem like wasted space, but think twice about whether you might need one:
So, for your knowledge articles and instructions, consider this:
Next, let’s move on to the body, the bulk of your content. This, too, can be tweaked in many ways to help guide the eye. Eye-tracking studies show how users’ gazes and attention skip along different elements of a page (typically following an F pattern). And you can help direct these skipping patterns in more purposeful ways along your content, so that readers have an easier time skimming it and keeping track of where they are.
Try to keep your paragraphs short.
Long paragraphs require more attention to read thoroughly, completely and without accidentally skipping anything. As a result, readers get tired out with longer instructions, which can lead to errors when information is missed.
Instead, try to stick to short paragraphs. This makes them easier to skim and read. There’s no single ideal length, but for anything with more than 5-8 lines on the target device, consider if you can break it up to help better guide the eye.
Relatedly, keep in mind your screen might be wider than that of some of your readers. Therefore, for you, a paragraph might not look too long; but on some readers’ slimmer devices, it might fill the entire screen height.
You can further break down long texts by not skimping on headings. They’re doubly great: They break up text for better orientation and they act as a mini-summary.
Additionally, they can be helpful as an anchor to link to, not just from the table of contents.
Lists and tables are also great for breaking down longer texts, as their very format is about splitting information into smaller bits. Of course, not every piece of information is suited to being a list or table; but if you find you’re writing a paragraph with things such as…
…try whether a list or table works instead. They’re much faster to skim than paragraphs.
Not only the way a text is broken up can help guide the eye, but also the way the text itself looks. Not every writer can influence how their knowledge articles or other content will be displayed on various mediums. But if you do have control over the styling, consider these tips:
Make sure the contrast between colors, e.g., between font and background, is high enough. That keeps the text readable without tiring the eye or costing the reader unnecessary concentration.
You can use an online tool like Contrast Checker by WebAIM to verify. Using a tool is especially helpful as not every reader has the best possible eyesight. So don’t just judge by your own measure but use some objective input.
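The WebAIM checker implements the WCAG 2.x contrast formula, which you can also compute yourself if you want to verify colors in bulk. A minimal sketch (the 4.5:1 threshold in the comment is the WCAG AA minimum for normal body text):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (per WCAG 2.x)."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an (R, G, B) color."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA asks for at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Running this over all color pairs in your stylesheet gives you the same objective input as the online tool, without judging by your own eyesight.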
Next, to make your content look less cluttered and easy to read, pay attention to these points when it comes to fonts:
Also, make sure the text and line spacing are big enough for most readers to easily read. Current recommendations for size and spacing are:
Further, you can work with spacing between paragraphs and other elements to help guide the eye better. It’s hard to give a single recommendation for the ideal space between any sort of elements. But if a page looks a little full and cluttered, try increasing the spacing.
And don’t limit yourself to vertical spacing. It can also be very helpful to create larger margins on the sides. The whitespace makes the page less overloaded, and you are slimming down the line width of your text. As a result, you make it easier for the eye to skip from line to line fluently. The ideal value is 50-60 characters per line.
At times you also want to make important or special information stand out and thus help your readers to find it more quickly. What types of highlighting to use depends on your content, of course. Here’s a few suggestions to get you started:
| When to use | Highlight suggestions |
| --- | --- |
| The most important key words | Bold text |
| Code | E.g., typewriter font, separate paragraphs and optionally a different color |
| Right and wrong / recommended and not recommended ways | Green and red font color or small icons (like green check mark and red cross) |
| Warnings of possible errors | A warning icon or colored background (like yellow or red) |
However, make sure you and your colleagues use highlighting consistently to build trust. If different articles in a knowledge base use different highlighting, it can confuse readers and keep them from getting used to your highlighting system.
And finally, consider giving a summary of the most important points. Like the overview, this isn’t suitable for every type of information. But it’s great as a final checklist and for quick reference.
So, to help guide the eye in your knowledge articles, keep in mind:
Is there something you’d like to add to the list? Let us know in the comments!
We are living in a knowledge society where knowledge is usually just a click away. But what happens when the amount of information becomes too great? The relevant information gets lost! As a result, trying to find the information we need is time-consuming and exhausting. In the end, we may even end up with the wrong information. Automatic knowledge base analysis can help.
Companies are faced with the task of managing information overload and making knowledge available in the right places. However, the editorial effort required often exceeds the available staff resources. The result is outdated, incorrect and redundant information in the company’s knowledge base. This increases costs, for example because searches take longer and errors occur more frequently.
It is therefore advantageous for companies to increase the quality of their knowledge base. On the one hand, this reduces follow-up costs. On the other hand, using knowledge effectively can become an important competitive advantage.
Most of the time, those responsible for the knowledge base are themselves not aware of where the problems lie. It is rarely clear which information is important and up-to-date and which is outdated, incorrect or redundant.
Manual evaluation is scarcely an efficient use of time. Let’s say it takes 10 minutes to get an overview of one page. For a knowledge base with 1,000 articles, this already results in a workload of more than 166 hours.
Clearly, manual analysis will lead to very high costs – if it is even possible at all. In order to get a handle on the current situation quickly and inexpensively, it is a good idea to analyse the contents of the knowledge base automatically.
Even though the content is written as text in human language, Natural Language Processing (NLP) enables computers to evaluate it. This allows the content of a knowledge base to be analysed quickly and cost-effectively in an automated manner.
The development of a model that estimates the quality of information automatically proceeds in 5 steps.
Once you have analysed the company’s knowledge base with the help of the model, it is time to draw some conclusions. The model generates a quality score for each article. For the entire knowledge base of a large company, the distribution could look like this, for example:
The majority of the articles are in the score range from -5 to 4. Some articles also achieve very good or very poor scores. In addition to this complete overview, the analysis provides details on how the rating of an article is arrived at.
These results provide a high level of transparency and a good basis for planning knowledge management projects. They allow the current quality of the knowledge base to be identified. It also becomes clear where there is potential for improvement. The time that will be required for a project can also be determined more precisely.
The model can furthermore be used to evaluate the success of actions.
Companies are often faced with the challenge of measuring and evaluating the effectiveness of knowledge management actions. The model can be applied to the knowledge base first before the project starts and then after the project ends. The results can then be compared. In this way, the actual improvement in quality can be determined.
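As an illustration, suppose the model returns one numeric score per article, as in the distribution described above. A before/after comparison then reduces to comparing summary statistics. This is only a sketch under that assumption; the scores below are invented:

```python
from statistics import mean

def quality_summary(scores: list[float]) -> dict:
    """Summarize the per-article quality scores produced by the model."""
    return {
        "articles": len(scores),
        "mean": round(mean(scores), 2),
        "below_zero": sum(1 for s in scores if s < 0),
    }

# Invented example scores: before and after a clean-up project.
before = [-6, -3, -1, 0, 2, 2, 3, 4, 4, 7]
after = [-2, 0, 1, 2, 3, 3, 4, 5, 6, 8]

improvement = round(quality_summary(after)["mean"] - quality_summary(before)["mean"], 2)
print(improvement)  # 1.8 – the mean score rose after the project
```

Comparing the mean score and the number of below-zero articles before and after a project makes the actual improvement in quality tangible.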
Due to the continuing flood of information, the topic of computer-aided analysis of information will become even more relevant in the future and will provide a wide variety of use cases.
Information Management can help. IM makes it possible to establish a company-wide knowledge management system. This is the basis for successful customer service, cloud migrations or IT operations, for example.
“Keep it simple” is a well-known principle in documentation. A manual or description that is written to be simple has a far lower risk of causing errors. “Simple” refers not only to the structure of the text, but above all to the wording. How well a text reads can be measured using simple methods. Here’s a quick overview of how to do it and why you should measure readability.
… to be able to recognise and revise documentation that is difficult to read. Good readability is often not the focus, especially for internal documentation. After all, subject matter experts are the target audience. Here are a few reasons why easy readability is still important:
So there are many reasons why we should measure readability – even if the primary audience for the docs are subject matter experts.
Besides the content, 3 aspects determine how well a text can be understood:
The quality of the layout is difficult to determine. A few tips on what to look out for are given by Kris Schmidt in the article “The Art of Simple Information: Optimizing Knowledge Articles for Your Readers“.
But structure and formulation can be measured easily. In fact, this is common in marketing. The same measurement methods can be used for documentation, even if you may be aiming for different values.
To evaluate the structure, check the word count. MS Word and most other text editors can do this. So there is no need for complicated tools. A paragraph should have no more than 100 words. Try to have a new subheading after 300 words. It does not matter if the odd paragraph or section is a little longer. Good highlighting can make up for that. On average, however, these are the target values you should follow.
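These target values are also easy to check automatically across many documents. A minimal sketch in Python, assuming paragraphs are separated by blank lines (the 100-word limit is the one recommended above):

```python
def long_paragraphs(text: str, limit: int = 100) -> list[tuple[int, int]]:
    """Return (paragraph_number, word_count) for each paragraph over the limit.

    Paragraphs are assumed to be separated by blank lines.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [
        (number, len(paragraph.split()))
        for number, paragraph in enumerate(paragraphs, start=1)
        if len(paragraph.split()) > limit
    ]

doc = "Short intro.\n\n" + ("word " * 120).strip()
print(long_paragraphs(doc))  # [(2, 120)] – the second paragraph needs splitting
```

Remember that the odd paragraph over the limit is fine; what matters is the average.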
The level of complexity in the formulation of a text can be measured with the Flesch Reading Ease Score (FRE score). There are many variants and adaptations of this metric. The FRE score takes into account 3 properties:
The score indicates how well a reader must be able to read in order to understand the text quickly. High values mean that the text is easier. To understand difficult texts, you need either more concentration or more prior knowledge.
In marketing, the target is an FRE score of 60-70. Only academics can fluently read texts with a score below 30. For documentation, the FRE score should therefore not be below 30. Values of 40-50 are usually easy to achieve and still sufficiently understandable for most users.
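The classic English FRE formula is 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). Here is a rough sketch using a naive syllable count (counting vowel groups, which is only an approximation; other languages use adapted constants):

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Classic English FRE: 206.835 - 1.015 * ASL - 84.6 * ASW."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    asl = len(words) / len(sentences)  # average sentence length in words
    asw = sum(count_syllables(w) for w in words) / len(words)  # syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

simple = "The cat sat. The dog ran. We go home."
formal = "Organisational documentation necessitates considerable deliberation."
print(flesch_reading_ease(simple) > flesch_reading_ease(formal))  # True
```

Even this crude version reliably ranks a plain instruction above a jargon-heavy one, which is all you need to spot docs that should be revised.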
Measuring readability is easy. If you take care to write texts in an understandable way, you will create better, more useful docs. Tasks will be performed with greater speed and reliability. Moreover, the same documentation can be reused in many different areas because it can be understood even by people who are not experts in the subject matter. This saves time when creating and maintaining the documentation. Measuring readability is not just a gimmick, but a must for efficient knowledge management.
When we meet a person, we form an initial opinion within a few moments. Posture and facial expression give us an impression, which is then complemented by behaviour and expression. Whether we consider someone likeable, competent, or trustworthy often depends on the little things.
And the same is true for chatbots. Since they communicate in a human way – namely through language – we judge them by the same characteristics. That’s why it is important to pay attention not only to what a chatbot says (content) but also to how it says it (personality). But what kind of chatbot personality is the best? What are the most important factors?
We investigated these questions in cooperation with the University of Würzburg. Liibot, a chatbot for knowledge bases in the IT environment, served as the test object. The surprising findings are summarised in this article.
Many thanks to Fiona Wiederer for the design, implementation and evaluation of the study.
In the same way as with people: through clothing, behaviour, and manner of expression.
The “clothing” of a chatbot is the chat window. Bright and colourful or subtle and understated? Square or rounded? Times New Roman or Calibri? The chatbot icon is also important. A neutral speech bubble evokes different associations than a grinning mascot.
The behaviour of a chatbot mainly includes its timing. A bot that addresses the user of its own accord may seem more helpful – or, if the timing is bad, more intrusive – than a bot that waits until it is addressed. Timing is also important during the conversation. How long does it take to get the answer? When are additional questions asked, suggestions made, and tips given? When is the inquirer passed on to a human colleague?
Manner of expression is the third aspect of the chatbot personality. This was the focus of our study. Detailed or concise, casual or formal, personal or mechanical?
As an illustration, one variant of the Liibot uses phrases such as “Let’s have a look…”, incorporates smileys into the answers and speaks in the 1st person (“I can find pages and answer FAQs.”). This variant expresses views such as: “I suggest writing an email to the service desk.”
We contrasted this with a serious, goal-oriented bot. This bot always chooses the shortest possible wording and refers to itself as “This bot”. The answers consist only of facts and instructions, instead of recommendations and suggestions.
This has been investigated in a whole series of studies (Jain et al 2018, Chaves & Gerosa 2020, Ruane et al 2021, …). In general, the effect of a personality trait depends on the domain. This means that, depending on the purpose of the bot and the user group, the same personality can create different impressions. For example, different qualities are desired for financial advice compared to fashion advice.
Users evaluate chatbots (like all media) based on certain characteristics. Above all, these include:
These factors can also be combined into one question: Does the chatbot increase my productivity? This is the most important consideration when it comes to user satisfaction. Other influencing factors are entertainment value and general social intentions (e.g. to experience affirmation).
In other words, a bot that is too serious may not achieve its full potential because it comes across as boring. A bot that utters too many motivational phrases and jokes runs the risk of being used only as a toy. There is no universally optimal ratio between friendly small talk and task-oriented work ethic. Different users have different preferences. For this reason, it is important to adapt the bot not only to the task but also to the user group.
To find out, we gave the Liibot chatbot 2 different personalities. Liibot helps users from different IT sectors (DevOps, Service Desk, Management, …) to find information in an internal knowledge base. Technology-wise, the two versions of the bot are identical. They use the same language model, access the same data and use the same interface. The course of the conversation is also the same. Apart from the differences in the manner of expression already described, there are also two small differences in behaviour. Firstly, the avatar of the socially oriented bot changes its facial expression depending on the situation. The task-oriented bot, on the other hand, always maintains a neutral facial expression. Secondly, the social bot sometimes answers with a delay of up to 1 second, while the task-oriented bot always answers as quickly as possible.
80 test subjects (40 per bot variant) were then asked to solve a series of tasks with the help of the Liibot. Beforehand, they were asked about their attitude towards chatbots and afterwards about their evaluation of the Liibot.
The initial assumption was that
Productivity is generally considered the most important quality of a medium. For IT companies in particular, efficiency plays a major role. Emotional aspects are considered less relevant. That’s why we also assumed that the task-oriented bot would perform slightly better overall than the social one.
The first assumption turned out to be true. The social bot was rated as significantly more entertaining than the task-oriented bot. The difference in the evaluation of social competence was also significantly in favour of the social bot.
In terms of professional competence and usefulness, the two bot variants were rated equally. Since the bots conveyed the same content, this may not be too surprising. While it is true that personality can also influence perceived efficiency, this seems to require major adjustments, e.g. to the course of conversation or to the interface.
The other results were surprising. In terms of usability, there was a tendency – contrary to expectations – for the social bot to come out on top here as well. Users of the social variant even rated the overall quality of the Liibot service significantly better. They tended to be more likely to indicate that they would use the Liibot in the future and were generally more satisfied.
The fact that the social bot was rated as equally good or even better in every respect could have various causes. First, there is the test group. For the most part, the group did not consist of “real” IT workers, but of students who had to put themselves in the shoes of an IT worker by means of a scenario. This kind of methodology is often used. Nevertheless, it can distort the results. Students may have different requirements as a user group than IT staff.
The second reason could be that the social variant is the original personality of the bot. The course of the conversation, the interface and the avatar are chosen to suit this variant. This perhaps overshadows some of the strengths of the task-oriented variant.
And finally, there is of course a third explanation. It is quite possible that a serious, results-focussed bot might actually be less well received, even in the IT sector.
However the results of this study are interpreted, one thing is indisputable. The chatbot personality can be noticeably influenced by small changes in the manner of expression and behaviour. In addition, personality has a significant impact on how users evaluate the chatbot. This also means that the success of a chatbot does not depend only on the technology. The content and design are also important. Consequently, when planning a chatbot project, it is important to give as much consideration to the selection of the personality and the formulation of the answers as to the selection of the tools. Ensure your bot fits the company and creates the right impression with customers and partners. Just as you would in the case of a human employee.
(You’ll find throughout this article that we’re not sticking very closely to the described rules, as this article isn’t itself a knowledge article. We invite you to treat it as an exercise in spotting where to make improvements, if this were to qualify as a simplified knowledge article.)
When creating knowledge articles, you are likely writing for an audience of experts. Why would that require simpler language if your readers have a high level of expertise? There are several reasons you are helping your readers by simplifying:
There are several ways you can make the language in your knowledge articles simpler. Of course there are complex topics you are writing about, which will make it hard to stick to all these suggestions. But especially when you are already forced to include some highly specialized terms, consider these tips to balance out the rest of your article:
This one seems obvious, but it can be hard to keep in mind while writing. Simpler words are faster to read and easier to understand. They also limit variety, which, as mentioned, is a good thing: You introduce fewer synonyms for the user to keep in mind and search for.
A few examples from this very section could be written more concisely in a knowledge article:
Instead of “obvious”, use “clear”.
Instead of “introduce”, use “add”.
Instead of “keep in mind”, use “remember”.
Using simpler words can make you feel like your text is repetitive and underestimates your readers. But remember that you are not writing knowledge articles for their entertainment, but for them to quickly take in specific information. They’ll be grateful for a text that is easier to read.
Even when using simpler words, you can still end up with a variety of synonyms for the same thing. Try to limit this. With a more limited vocabulary, you get more clarity, consistency and greater searchability.
For example, picture yourself describing how to close an app:
Different words: You could use the words “X symbol”, “X icon”, “X button”, “X control” or simply “X” to describe what the user should click on.
Controlled vocabulary: If you use a consistent approach to what you refer to as a symbol, icon or button, e.g. always using “X control” in this case, it becomes easier for your reader to search for a specific element and recognize it in a different article.
You can use a style guide to help yourself and other writers always use the same terms for the same things.
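Such a style guide can even be checked mechanically before publishing. A minimal sketch, with an invented mapping from discouraged terms to their preferred replacements:

```python
# Hypothetical style-guide mapping: discouraged term -> preferred term.
STYLE_GUIDE = {
    "x symbol": "X control",
    "x icon": "X control",
    "x button": "X control",
}

def check_terminology(text: str, guide: dict[str, str]) -> list[str]:
    """Flag discouraged terms so every article uses the same vocabulary."""
    lowered = text.lower()
    return [
        f'Replace "{term}" with "{preferred}"'
        for term, preferred in guide.items()
        if term in lowered
    ]

draft = "Click the X icon to close the app."
print(check_terminology(draft, STYLE_GUIDE))
# ['Replace "x icon" with "X control"']
```

Running such a check across the whole knowledge base keeps the controlled vocabulary consistent even when many writers contribute.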
Going for more active instead of passive language is a good writing tip in a lot of areas. But it is especially helpful in knowledge articles. Using active language for instructions makes it clearer who is supposed to do what.
Here’s a comparison:
Passive language: “After the alert mail is sent and a call dispatched to the hotline, the system can be restarted.”
Active language: “After the helpdesk team has sent the alert mail and has made the call to the hotline, you can restart the system.”
In the first statement, it is unclear which steps are things the reader should do and which ones are triggers they need to wait for and that should be performed by someone else. In the second statement, the reader knows exactly who should do what.
Another clear candidate for simpler language: Keep your sentences as short as possible. Longer sentences can lead to complex sentence structures, where a point started at the beginning is only completed at the end of a very long sentence. That can be hard to follow for a reader who is trying to find specific information quickly and is likely already dealing with a relatively complicated subject. Shorter sentences also save on the connecting words longer sentences need, leading to a shorter text overall.
Consider the difference in understandability and length in this example:
Longer sentence: “It’s advisable that after entering the code 1234 and toggling the settings A and B to be active, you save.”
Shorter sentence: “Enter code 1234. Set settings A and B to active. Save.”
Not only does the shorter version save space, it also makes the 3 actions it describes easier to understand.
Especially with sequential steps, this can also help break down the instruction into smaller steps. (We’d suggest using a numbered list to make things even clearer, but that is a point for an article on structuring tips.)
What is a leading sentence structure? By this we mean arranging a sentence so that the most important parts, or the steps that come first, are at the beginning. The less important parts, or the steps that come last, are at the end. While this rule does not make sense for every type of sentence, it is especially useful for describing things that follow a specific condition or steps that happen in order.
Highlight conditions: If you want to inform your reader that they should do something in case of another event, you could write:
“Restart the application if you get error code 123.”
But putting the condition first makes it easier for the reader to know when this applies – so instead, write:
“If you get error code 123, restart the application.”
Clarify sequences: When describing some sequential steps, you are not wrong to write:
“Before submitting the form (by clicking the ‘done’ button), be sure to tick the ‘save for later’ box at the bottom of the pop-up.”
But it will help the reader to orient themselves while they follow the instruction if you instead write:
“At the bottom of the pop-up, tick the box ‘save for later’. Then click the ‘done’ button to submit the form.”
This way you start with the biggest orientation help (“at the bottom of the pop-up”) and the first step (“tick the box ‘save for later’”), giving the reader a good idea of where to start, both visually and in the order of steps.
Paying attention to this rule will help your readers know the most important part right away, allowing them to follow the sentence more easily or letting them know immediately that the condition the sentence starts with might not apply to them.
Trying to stick to these points already means you’re doing a lot to help your readers consume your content faster. But if you are curious just how readable your content is, there are several ways to measure this.
By now, you are probably painfully aware of all the ways in which this very article could be simplified if it had to fulfil the standards of a quickly readable knowledge article. Take that mindset along to your next knowledge article: be critical of every opportunity to use a simpler word or a shorter sentence, and keep enabling your readers to work faster and with less frustration.
Are you curious about more ways, aside from simpler language, to make your knowledge articles even more reader-friendly? We’ve got an overview for you in our article The Art of Simple Information. Also, keep an eye out for more articles coming soon.
Did we miss ways of simplifying language you always use in your knowledge articles?