Webinar: How good is your wiki?

In her Bachelor’s thesis, Anna Busch researched how one can measure the quality of knowledge bases in an automated way. Are you interested in the methodology, results and practical examples of how your company can benefit from it? Register now!

What will you learn?

  • Current developments that increase the relevance of knowledge management
  • A methodical approach to automatically analyze knowledge bases
  • Actual results from practice
Where: Online, free webinar
Date: 25 November 2021
Time: 10am EST (4pm CET / 3pm UTC)
Duration: 1 h
Guide the Eye

Welcome back to our series on optimizing knowledge articles to your readers’ needs – we’re glad you’re here and ready to learn more about how to help make your readers’ daily work faster and less frustrating.

Today it’s all about text meeting visuals. Not illustrations – we’ll have another article on that topic soon – but how to use structuring and highlighting to guide the eye, to make text easier to skim, read, understand, and remember.

Give an overview

To start, consider an overview, like the one you’ve read just now. For some subjects and content, an overview can seem like wasted space, but think twice about whether you might need one:

  • It helps your readers find out whether the page or section even holds the information they are looking for and, if yes, where.
  • It also helps them prepare for the upcoming information. E.g., if they know that instructions for a specific app are coming up, they can start up the app and re-familiarize themselves with the interface before moving on.

So, for your knowledge articles and instructions, consider this:

  • If your article is a little longer, include a table of contents. Ideally, this is created automatically and links to the respective headings.
  • Judge whether a summary is helpful. Sometimes summaries don’t yield helpful information, e.g., on pages with a lot of data. Other times, a brief summary in the beginning of each longer group of steps can help greatly to understand what you’re about to do before following the next part of the instruction.

Break up text

Next, let’s move on to the body, the bulk of your content. This, too, can be tweaked in many ways to help guide the eye. Eye-tracking studies show how users’ gazes and attention skip along different elements of a page (typically following an F pattern). And you can help direct these skipping patterns in more purposeful ways along your content, so that readers have an easier time skimming it and keeping track of where they are.

Short paragraphs

Try to keep your paragraphs short.

Long paragraphs require more attention to read thoroughly, completely and without accidentally skipping anything. As a result, readers get tired out with longer instructions, which can lead to errors when information is missed.

Instead, try to stick to short paragraphs. This makes them easier to skim and read. There’s no single ideal length, but for anything with more than 5-8 lines on the target device, consider if you can break it up to help better guide the eye.

Relatedly, keep in mind your screen might be wider than that of some of your readers. Therefore, for you, a paragraph might not look too long; but on some readers’ slimmer devices, it might fill the entire screen height.

Headings

You can further break down long texts by not skimping on headings. They’re doubly great: They break up text for better orientation and they act as a mini-summary.

Additionally, they can be helpful as an anchor to link to, not just from the table of contents.

Lists and tables

Lists and tables are also great for breaking down longer texts, as their very format is about splitting information into smaller bits. Of course, not every piece of information is suited to a list or table; but if you find you’re writing a paragraph with things such as…

  • Multiple conditions
  • Steps
  • Comparable data

…try whether a list or table works instead. They’re much faster to skim than paragraphs.

Visibility

It’s not just the way a text is broken up that can guide the eye; the way the text itself looks matters as well. Not every writer can influence how their knowledge articles or other content will be displayed in various media. But if you do have control over the styling, consider these tips:

Contrast

Make sure the contrast between colors, e.g., for font and background, is big enough. That ensures the text is readable without tiring out the eye and costing the reader unnecessary concentration.

You can use an online tool like Contrast Checker by WebAIM to verify. Using a tool is especially helpful as not every reader has the best possible eyesight. So don’t just judge by your own measure but use some objective input.
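If you want to check contrast programmatically rather than via a website, the WCAG contrast ratio can be computed directly. Here’s a minimal sketch in Python (the formula follows the standard WCAG definition; the colors at the end are just example values):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color1, color2):
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(color1), relative_luminance(color2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# black text on a white background yields the maximum ratio of 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

WCAG AA asks for at least 4.5:1 for body text; tools like the one mentioned above check against this same formula.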

Fonts

Next, to make your content look less cluttered and easy to read, pay attention to these points when it comes to fonts:

  • Make sure fonts are easy to read in long text blocks (try not to get too wild with cursives etc. 😉 ).
  • Stick to a small number of fonts overall (e.g., one for headings, one for text, one for special text such as code blocks).

Also, make sure the text and line spacing are big enough for most readers to easily read. Current recommendations for size and spacing are:

  • Size: At least 16px for text-heavy pages
  • Line spacing: 130% – 150%

Spacing

Further, you can work with spacing between paragraphs and other elements to help guide the eye better. It’s hard to give a single recommendation for the ideal space between any sort of elements. But if a page looks a little full and cluttered, try increasing the spacing.

And don’t limit yourself to vertical spacing. It can also be very helpful to create larger margins on the sides. The whitespace makes the page less overloaded, and you are slimming down the line width of your text. As a result, you make it easier for the eye to skip from line to line fluently. The ideal value is 50-60 characters per line.

Highlight

At times you also want to make important or special information stand out and thus help your readers to find it more quickly. What types of highlighting to use depends on your content, of course. Here are a few suggestions to get you started:

When to use | Highlight suggestions
The most important key words | Bold text
Code | E.g., typewriter font, separate paragraphs and optionally a different color
Right and wrong / recommended and not recommended ways | Green and red font color or small icons (like a green check mark and red cross)
Warnings of possible errors | A warning icon or colored background (like yellow or red)

However, make sure you and your colleagues use consistent highlighting to build trust. If different articles in a knowledge base use different highlighting, it can confuse readers and prevent them from learning to rely on your highlighting system.

Final summary

And finally, consider giving a summary of the most important points. Like the overview, this isn’t suitable for every type of information. But it’s great as a final checklist and as a quick reference to return to later.

So, to help guide the eye in your knowledge articles, keep in mind:

  • Consider a table of contents and overview
  • Break up text with shorter paragraphs, headings, lists and tables
  • Increase visibility with contrast, readable fonts and spacing
  • Highlight important points consistently
  • And consider a final summary

Is there something you’d like to add to the list? Let us know in the comments!

Do you have any ideas or feedback? Tell us via mail to: marketing@avato.net

Imprint
Date: October 2021
Author: Kris Schmidt
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

Managing Knowledge Intelligently – How Companies Can do a Knowledge Base Analysis in a Time-Saving Way

We are living in a knowledge society where knowledge is usually just a click away. But what happens when the amount of information becomes too great? The relevant information gets lost! As a result, trying to find the information we need is time-consuming and exhausting. In the end, we may even end up with the wrong information. Automatic knowledge base analysis can help.

Companies are faced with the task of managing information overload and making knowledge available in the right places. However, the editorial effort required often exceeds the available staff resources. The result is outdated, incorrect and redundant information in the company’s knowledge base. This increases costs, for example because searches take longer and errors occur more frequently.

It is therefore advantageous for companies to increase the quality of their knowledge base. On the one hand, this reduces follow-up costs. On the other hand, using knowledge effectively can become an important competitive advantage.

The Question is: Where to Start?

Most of the time, those responsible for the knowledge base are themselves not aware of where the problems lie. It is rarely clear which information is important and up-to-date and which is outdated, incorrect or redundant.

Manual evaluation is scarcely an efficient use of time. Let’s say it takes 10 minutes to get an overview of one page. For a knowledge base with 1,000 articles, this already results in a workload of more than 166 hours.
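The arithmetic behind that estimate is simple enough to sketch (the numbers are taken from the example above):

```python
minutes_per_article = 10
article_count = 1_000

total_hours = minutes_per_article * article_count / 60
print(round(total_hours, 1))  # roughly 166.7 hours of pure review time
```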

Clearly, manual analysis will lead to very high costs – if it is even possible at all. In order to get a handle on the current situation quickly and inexpensively, it is a good idea to analyse the contents of the knowledge base automatically.

Even though the content is presented as text in human language, Natural Language Processing means that computers can evaluate this text. This allows the content of a knowledge base to be analysed quickly and cost-effectively in an automated manner.

Developing the Model – How to Get Meaningful Results

The development of a model that estimates the quality of information automatically proceeds in 5 steps.

  1. In the first step, you need to identify all relevant data and make it available. Most knowledge bases provide the option of exporting the individual articles as XML or HTML files.
  2. Next, you should prepare the data and investigate which key metrics can be calculated from the data. For example, comprehensibility can be determined by using the FRE score or the content can be checked for outdated terms. Natural Language Processing can also be used to identify duplicates within the knowledge base. Based on the findings, you can calculate the key metrics.
  3. In the third step, you can evaluate the key metrics calculated in step two. What specific FRE score does my content need to achieve to be rated as “good”? In this way, values can be given for each key metric, which lead to plus or minus points in the evaluation.
  4. Then you can translate the findings into a Python script that calculates a score for each article in the knowledge base.
  5. Finally, apply the script to the knowledge base under investigation and derive further steps from the results.
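As an illustration of step 4, the scoring script combines the individual key metrics into plus and minus points per article. The thresholds, term list and weights below are invented for this sketch; a real model would use the values derived in step 3:

```python
# invented example list of outdated terms; a real project would derive its own
OUTDATED_TERMS = {"windows xp", "lotus notes"}

def score_article(text: str, fre_score: float) -> int:
    """Toy quality score: plus/minus points from a few key metrics."""
    score = 0

    # comprehensibility: reward an easy FRE score, punish a very hard one
    if fre_score >= 50:
        score += 2
    elif fre_score < 30:
        score -= 2

    # outdated terminology found in the article text
    lowered = text.lower()
    if any(term in lowered for term in OUTDATED_TERMS):
        score -= 3

    # very short articles are often stubs
    if len(text.split()) < 30:
        score -= 1

    return score

print(score_article("A short note about Windows XP.", fre_score=25.0))  # → -6
```

Applied to every exported article, such a function yields the per-article scores whose distribution is discussed below.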

Knowledge Base Analysis: What Does the Result Tell Me?

Once you have analysed the company’s knowledge base with the help of the model, it is time to draw some conclusions. The model generates a quality score for each article. For the entire knowledge base of a large company, the distribution could look like this, for example:

[Figure: distribution of quality scores across the articles of the knowledge base]

The majority of the articles are in the score range from -5 to 4. Some articles also achieve very good or very poor scores. In addition to this complete overview, the analysis provides details on how the rating of an article is arrived at.

These results provide a high level of transparency and a good basis for planning knowledge management projects. They allow the current quality of the knowledge base to be identified. It also becomes clear where there is potential for improvement. The time that will be required for a project can also be determined more precisely.

The model can furthermore be used to evaluate the success of actions.

Companies are often faced with the challenge of measuring and evaluating the effectiveness of knowledge management actions. The model can be applied to the knowledge base first before the project starts and then after the project ends. The results can then be compared. In this way, the actual improvement in quality can be determined.

Conclusion

Due to the continuing flood of information, the topic of computer-aided analysis of information will become even more relevant in the future and will provide a wide variety of use cases.

Whenever you

  • want to gain knowledge from texts,
  • need very good documentation, or
  • want to optimise your business knowledge,

Information Management (IM) can help. IM makes it possible to establish a company-wide knowledge management system. This is the basis for successful customer service, cloud migrations or IT operations, for example.

Do you have feedback, or ideas? Tell us via email: marketing@avato.net

Imprint 
Date: October 2021
Author: Anna Busch
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved

As simple as possible: Measure readability for better documentation

“Keep it simple” is a well-known principle in documentation. A manual or description that is written to be simple has a far lower risk of causing errors. “Simple” refers not only to the structure of the text, but above all to the wording. How well a text reads can be measured using simple methods. Here’s a quick overview of how to do it and why you should measure readability.

Why should I measure readability in docs?

… to be able to recognise and revise documentation that is difficult to read. Good readability is often not the focus, especially for internal documentation. After all, subject matter experts are the target audience. Here are a few reasons why easy readability is still important:

  1. Misunderstandings, i.e. mistakes, occur less frequently.
  2. The text can be read more quickly. This means less time is lost.
  3. Readers can be more certain that they have understood everything correctly. This means that they trust the documentation more.
  4. Even non-native speakers can easily understand the text.
  5. New employees, unskilled temporary workers and trainees can all use the text.
  6. The text can still be understood correctly even when the reader is concentrating less (e.g., just before closing time, under time pressure, or while having a mild cold; or all of that at once).
  7. Other departments can use the documentation, even if they are not experts on the subject (e.g., the customer hotline).

So there are many reasons why we should measure readability – even if subject matter experts are the primary audience for the docs.

How can you measure readability?

Besides the content, 3 aspects determine how well a text can be understood:

  • Layout: Images, highlights, and other visually prominent elements provide orientation for the eye, making it easier to consume the information.
  • Structure: Short paragraphs and subheadings help the reader to grasp the topic.
  • Formulation: Short sentences are easier than long ones. (Short) everyday words are easier than (long) technical terms.

The quality of the layout is difficult to determine. A few tips on what to look out for are given by Kris Schmidt in the article “The Art of Simple Information: Optimizing Knowledge Articles for Your Readers”.

But structure and formulation can be measured easily. In fact, this is common in marketing. The same measurement methods can be used for documentation, even if you may be aiming for different values.

To evaluate the structure, check the word count. MS Word and most other text editors can do this. So there is no need for complicated tools. A paragraph should have no more than 100 words. Try to have a new subheading after 300 words. It does not matter if the odd paragraph or section is a little longer. Good highlighting can make up for that. On average, however, these are the target values you should follow.
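Checking these structure targets can also be automated in a few lines. A minimal sketch (splitting paragraphs on blank lines is an assumption about how the text is formatted):

```python
def long_paragraphs(article: str, limit: int = 100):
    """Return (paragraph number, word count) for paragraphs over the word limit."""
    paragraphs = [p for p in article.split("\n\n") if p.strip()]
    return [
        (number, len(p.split()))
        for number, p in enumerate(paragraphs, start=1)
        if len(p.split()) > limit
    ]

article = "Short paragraph.\n\n" + "word " * 120
print(long_paragraphs(article))  # flags the second paragraph with 120 words
```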

The level of complexity in the formulation of a text can be measured with the Flesch Reading Ease Score (FRE Score). There are many variants of and additions to this metric. The FRE score takes into account 3 properties:

  • the language (German, English,…)
  • the average number of syllables per word
  • the average number of words per sentence

The score indicates how well a reader must be able to read in order to understand the text quickly. High values mean that the text is easier. To understand difficult texts, you need either more concentration or more prior knowledge.

In marketing, the target is an FRE score of 60-70. Only academics can fluently read texts with a score below 30. For documentation, the FRE score should therefore not be below 30. Values of 40-50 are usually easy to achieve and still sufficiently understandable for most users.
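For English text, the original Flesch formula is 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). A rough Python sketch follows; the syllable counter is a crude vowel-group heuristic rather than a dictionary-based one, so treat the result as an estimate:

```python
import re

def count_syllables(word: str) -> int:
    """Crude estimate: one syllable per group of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fre_score(text: str) -> float:
    """Flesch Reading Ease for English text (higher = easier to read)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (
        206.835
        - 1.015 * (len(words) / len(sentences))
        - 84.6 * (syllables / len(words))
    )

print(round(fre_score("The cat sat on the mat."), 1))  # very easy text scores high
```

Established readability libraries use the same formula with more careful syllable counting, so their absolute numbers may differ slightly from this sketch.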

Conclusion

Measuring readability is easy. If you take care to write texts in an understandable way, you will create better, more useful docs. Tasks will be performed with greater speed and reliability. Moreover, the same documentation can be reused in many different areas because it can be understood even by people who are not experts in the subject matter. This saves time when creating and maintaining the documentation. Measuring readability is not just a gimmick, but a must for efficient knowledge management.

Do you have any ideas or feedback? Tell us via mail to: marketing@avato.net

Imprint: 
Date: September 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

Chatbot personality: Friendly small talk vs. goal-oriented assistant

When we meet a person, we form an initial opinion within a few moments. Posture and facial expression give us an impression, which is then complemented by behaviour and expression. Whether we consider someone likeable, competent, or trustworthy often depends on the little things.

And the same is true for chatbots. Since they communicate in a human way – namely through language – we judge them by the same characteristics. That’s why it is important to pay attention not only to what a chatbot says (content) but also to how it says it (personality). But what kind of chatbot personality is the best? What are the most important factors?

We investigated these questions in cooperation with the University of Würzburg. Liibot, a chatbot for knowledge bases in the IT environment, served as the test object. The surprising findings are summarised in this article.

Many thanks to Fiona Wiederer for the design, implementation and evaluation of the study.

How does the chatbot personality make itself felt?

In the same way as with people: through clothing, behaviour, and manner of expression.

The “clothing” of a chatbot is the chat window. Bright and colourful or subtle and understated? Square or rounded? Times New Roman or Calibri? The chatbot icon is also important. A neutral speech bubble evokes different associations than a grinning mascot.

The behaviour of a chatbot mainly includes its timing. A bot that addresses the user of its own accord may seem more helpful – or, if the timing is bad, more intrusive – than a bot that waits until it is addressed. Timing is also important during the conversation. How long does it take to get the answer? When are additional questions asked, suggestions made, and tips given? When is the inquirer passed on to a human colleague?

Manner of expression is the third aspect of the chatbot personality. This was the focus of our study. Detailed or concise, casual or formal, personal or mechanical?

As an illustration, one variant of the Liibot uses phrases such as “Let’s have a look…”, incorporates smileys into the answers and speaks in the 1st person (“I can find pages and answer FAQs.”). This variant expresses views such as: “I suggest writing an email to the service desk.”

We contrasted this with a serious, goal-oriented bot. This bot always chooses the shortest possible wording and refers to itself as “This bot”. The answers consist only of facts and instructions, instead of recommendations and suggestions.

What effect does a different chatbot personality have?

This has been investigated in a whole series of studies (Jain et al 2018, Chaves & Gerosa 2020, Ruane et al 2021, …). In general, the effect of a personality trait depends on the domain. This means that, depending on the purpose of the bot and the user group, the same personality can create different impressions. For example, different qualities are desired for financial advice compared to fashion advice.

Users evaluate chatbots (like all media) based on certain characteristics. Above all, these include:

  • Usefulness (To what extent am I achieving my goal?)
  • Usability (How difficult is it to use the medium?)

These factors can also be combined into one question: Does the chatbot increase my productivity? This is the most important consideration when it comes to user satisfaction. Other influencing factors are entertainment value and general social intentions (e.g. to experience affirmation).

In other words, a bot that is too serious may not achieve its full potential because it comes across as boring. A bot that utters too many motivational phrases and jokes runs the risk of being used only as a toy. There is no universally optimal ratio between friendly small talk and task-oriented work ethic. Different users have different preferences. For this reason, it is important to adapt the bot not only to the task but also to the user group.

What personality should an IT chatbot have?

To find out, we gave the Liibot chatbot 2 different personalities. Liibot helps users from different IT sectors (DevOps, Service Desk, Management, …) to find information in an internal knowledge base. Technology-wise, the two versions of the bot are identical. They use the same language model, access the same data and use the same interface. The course of the conversation is also the same. Apart from the differences in the manner of expression already described, there are also two small differences in behaviour. Firstly, the avatar of the socially oriented bot changes its facial expression depending on the situation. The task-oriented bot, on the other hand, always maintains a neutral facial expression. Secondly, the social bot sometimes answers with a delay of up to 1 second, while the task-oriented bot always answers as quickly as possible.

80 test subjects (40 per bot variant) were then asked to solve a series of tasks with the help of the Liibot. Beforehand, they were asked about their attitude towards chatbots and afterwards about their evaluation of the Liibot.

The supposition

The initial assumption was that

  • the social bot would be rated as more entertaining and socially competent;
  • the task-oriented bot would be perceived as more competent, useful and easier to use.

Productivity is generally considered the most important quality of a medium. For IT companies in particular, efficiency plays a major role. Emotional aspects are considered less relevant. That’s why we also assumed that the task-oriented bot would perform slightly better overall than the social one.

The result

The first assumption turned out to be true. The social bot was rated as significantly more entertaining than the task-oriented bot. The difference in the evaluation of social competence was also significantly in favour of the social bot.

In terms of professional competence and usefulness, the two bot variants were rated equally. Since the bots conveyed the same content, this may not be too surprising. While it is true that personality can also influence perceived efficiency, this seems to require major adjustments, e.g. to the course of conversation or to the interface.

The other results were surprising. In terms of usability, there was a tendency – contrary to expectations – for the social bot to come out on top here as well. Users of the social variant even rated the overall quality of the Liibot service significantly better. They tended to be more likely to indicate that they would use the Liibot in the future and were generally more satisfied.

Why was this?

The fact that the social bot was rated as equally good or even better in every respect could have various causes. First, there is the test group. For the most part, the group did not consist of “real” IT workers, but of students who had to put themselves in the shoes of an IT worker by means of a scenario. This kind of methodology is often used. Nevertheless, it can distort the results. Students may have different requirements as a user group than IT staff.

The second reason could be that the social variant is the original personality of the bot. The course of the conversation, the interface and the avatar are chosen to suit this variant. This perhaps overshadows some of the strengths of the task-oriented variant.

And finally, there is of course a third explanation. It is quite possible that a serious, results-focussed bot might actually be less well received, even in the IT sector.

Conclusion

However the results of this study are interpreted, one thing is indisputable. The chatbot personality can be noticeably influenced by small changes in the manner of expression and behaviour. In addition, personality has a significant impact on how users evaluate the chatbot. This also means that the success of a chatbot does not depend only on the technology. The content and design are also important. Consequently, when planning a chatbot project, it is important to give as much consideration to the selection of the personality and the formulation of the answers as to the selection of the tools. Ensure your bot fits the company and creates the right impression with customers and partners. Just as you would in the case of a human employee.

Do you have any ideas, or feedback? Let us know: marketing@avato.net

Imprint: 
Date: September 2021
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.

Make More Simple: Simpler language for faster reading

Admittedly, it was a struggle not to make the subtitle of this article “Simplifying language for increased readability”. A lot of writers will agree that complex phrases and words come naturally while writing – they make a text more sophisticated, entertaining, diverse, … a lot of good things. But there are cases when you do your readers a favor by stepping away from complexity and turning towards simpler language: when you’re writing knowledge articles. Let’s take a look at why that is and how to get there.

(You’ll find throughout this article that we’re not sticking very closely to the described rules, as this article isn’t itself a knowledge article. We invite you to treat it as an exercise in spotting where to make improvements, if this were to qualify as a simplified knowledge article.)

Why?

When creating knowledge articles, you are likely writing for an audience of experts. Why would that require simpler language if your readers have a high level of expertise? There are several reasons you are helping your readers by simplifying:

  • Easier to understand: Even for an expert audience, simpler language can be read and understood even more easily than more complex words and phrases. Less concentration is needed for reading specific steps or an entire article, leaving more focus for the work at hand.
  • Shorter: Generally, simpler terms and a more straightforward sentence structure are shorter than their more complex and elaborate counterparts. The extra word-length and word-count you save over an entire article can make a big difference. And of course shorter means faster to skim and read, saving your readers a lot of time.
  • Higher consistency: If you stick to simpler language and a limited vocabulary, it makes your content more consistent. The same words will be used to refer to the same things across the entire content. This makes it easier for readers to switch between different articles and know exactly when the same topics are addressed.
  • Better searchability: This consistency of terms also means that your readers can rely on search functions more, as they will deliver more matching results for a given term.

How?

There are several ways you can make the language in your knowledge articles simpler. Of course there are complex topics you are writing about, which will make it hard to stick to all these suggestions. But especially when you are already forced to include some highly specialized terms, consider these tips to balance out the rest of your article:

Simpler words

This one seems obvious, but it can be hard to keep in mind while writing. Simpler words are faster to read and easier to understand. They also limit variety, which, as mentioned, is a good thing: You introduce fewer synonyms for the user to keep in mind and search for.

A few examples in this very section could be written shorter in a knowledge article:

Instead of “obvious”, use “clear”.

Instead of “introduce”, use “add”.

Instead of “keep in mind” use “remember”.

Using simpler words can make you feel like your text is repetitive and belittling your readers’ skills of understanding. But remember that you are not writing knowledge articles for their entertainment, but for them to quickly take in specific information. They’ll be grateful for a text that is easier to read.

Controlled vocabulary

Even when using simpler words, you can still end up with a variety of synonyms for the same thing. Try to limit this. With a more limited vocabulary, you get more clarity, consistency and greater searchability.

For example, picture yourself describing how to close an app:

Different words: You could use the word “X symbol”, “X icon”, “X button”, “x control” or simply “X” to describe what the user should click on.

Controlled vocabulary: If you use a consistent approach to what you refer to as a symbol, icon or button, e.g. always using “X control” in this case, it becomes easier for your reader to search for a specific element and recognize it in a different article.

You can use a style guide to help yourself and other writers always use the same terms for the same things.
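Parts of such a style guide can even be enforced automatically. Here’s a minimal sketch with an invented term mapping (both the discouraged terms and the preferred replacement are hypothetical examples, not a real style guide):

```python
# hypothetical style-guide mapping: discouraged term -> preferred term
PREFERRED_TERMS = {
    "x symbol": "X control",
    "x icon": "X control",
    "x button": "X control",
}

def flag_terms(text: str):
    """List (found term, preferred term) pairs for discouraged wording."""
    lowered = text.lower()
    return [
        (term, preferred)
        for term, preferred in PREFERRED_TERMS.items()
        if term in lowered
    ]

print(flag_terms("Click the X icon to close the app."))
```

Run over a whole knowledge base, a check like this points writers to the exact articles where terminology drifts from the style guide.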

Active language

Going for more active instead of passive language is a good writing tip in a lot of areas. But it is especially helpful in knowledge articles. Using active language for instructions makes it clearer who is supposed to do what.

Here’s a comparison:

Passive language: “After the alert mail is sent and a call dispatched to the hotline, the system can be restarted.”

Active language: “After the helpdesk team has sent the alert mail and has made the call to the hotline, you can restart the system.”

In the first statement, it is unclear which steps are things the reader should do and which ones are triggers they need to wait for and that should be performed by someone else. In the second statement, the reader knows exactly who should do what.

Short sentences

Another clear candidate for simpler language: Keep your sentences as short as possible. Longer sentences can lead to complex sentence structures, where you start making a point in the beginning and finish it in the end of a very long sentence. That can be hard to follow for your reader who is trying to find specific information quickly and is likely already dealing with a relatively complicated subject. Another advantage of shorter sentences is that you save on connecting words you would put into longer sentences, leading to a shorter text overall.

Consider the difference in understandability and length in this example:

Longer sentence: “It’s advisable that after entering the code 1234 and toggling the settings A and B to be active, you save.”

Shorter sentence: “Enter code 1234. Set settings A and B to active. Save.”

Not only does the shorter version save space, it also makes the three actions described in it easier to understand.

Especially with sequential steps, this can also help break down the instruction into smaller steps. (We’d suggest using a numbered list to make things even clearer, but that is a point for an article on structuring tips.)
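If you want a quick, rough check of how long your sentences are, a few lines of Python are enough. This is a naive sketch that splits on sentence-ending punctuation only (it will mishandle abbreviations), not a full tokenizer:

```python
import re

def average_sentence_length(text):
    """Split on sentence-ending punctuation and average the word counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(sentences)

longer = ("It's advisable that after entering the code 1234 and toggling "
          "the settings A and B to be active, you save.")
shorter = "Enter code 1234. Set settings A and B to active. Save."
print(average_sentence_length(longer))   # one long sentence
print(average_sentence_length(shorter))  # three short sentences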

Leading sentence structure

What is a leading sentence structure? By this we mean arranging a sentence so that the most important parts, or the steps that come first, appear at the beginning. The less important parts, or the steps that come last, go at the end. While this rule does not make sense for every type of sentence, it is especially useful for describing things that follow a specific condition or steps that happen in a fixed order.

Two examples:

Highlight conditions: If you want to inform your reader that they should do something in case of another event, you could write:

“Restart the application if you get error code 123.”

But putting the condition first makes it easier for the reader to know when this applies – so instead, write:

“If you get error code 123, restart the application.”

Clarify sequences: When describing some sequential steps, you are not wrong to write:

“Before submitting the form (by clicking the ‘done’ button), be sure to tick the ‘save for later’ box at the bottom of the pop-up.”

But it will help the reader to orient themselves while they follow the instruction if you instead write:

“At the bottom of the pop-up, tick the box ‘save for later’. Then click the ‘done’ button to submit the form.”

This way you start with the biggest orientation help (“at the bottom of the pop-up”) and the first step (“tick the box ‘save for later’”), giving the reader a good idea where to start, both visually and in the order of steps.

Paying attention to this rule will help your readers know the most important part right away, allowing them to follow the sentence more easily or letting them know immediately that the condition the sentence starts with might not apply to them.

How simple is simple enough?

Trying to stick to these points already means you’re doing a lot to help your readers consume your content faster. But if you are curious just how readable your content is, there are several ways to measure this.

  • Web tools for copy-pasting and evaluating your texts: There are plenty of websites that let you paste the text you want evaluated and see how it performs against different measures of readability. This is a great way to get a gauge of how readable your typical articles are or how certain changes affect them. (For example, the free options from readable and WebFX are a good starting point.)
  • Customized analytics: If you want to measure the readability of content frequently, perhaps not just your own articles but larger amounts of content from a knowledge base, setting up your own analytics measurement might be the way to go. You can tweak it to the conditions of your content, account for necessary specialist vocabulary, and have it scan your content automatically at publishing or in big batches. Of course, this requires a bit of expertise – ideally in general programming as well as in text processing and natural language processing. If you are curious to learn more, stay tuned for our article on content analytics, coming soon.
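As a taste of what such a custom check might look like, here is a minimal Python sketch that computes the classic Flesch Reading Ease score with a very naive vowel-group syllable counter. A real pipeline would use a proper NLP library and account for specialist vocabulary:

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    Formula: 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("Enter code 1234. Set settings A and B to active."))
```

The syllable counter here is deliberately crude; swapping it for a dictionary-based one (or using an existing readability library) is the first refinement you would make in practice.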

By now, you are probably painfully aware of all the ways in which this very article could be simplified if it had to fulfil the standards of a quickly readable knowledge article. Take that mindset along to your next knowledge article, being critical of every time you could use a simpler word or a shorter sentence, and continue enabling your readers to work faster and with less frustration.

Are you curious to learn about more ways, aside from simpler language, to make your knowledge articles even more reader-friendly? We’ve got an overview for you in our article The Art of Simple Information. Also, keep an eye out for more articles coming soon.

Did we miss ways of simplifying language you always use in your knowledge articles?

Any ideas, comments, feedback? Tell us: marketing@avato.net

Impressum: 
Date: August 2021
Author: Kris Schmidt
Contact: marketing@avato.net
www.avato-consulting.com
© 2021 avato consulting ag
All Rights Reserved.