A Chatbot as Your Website’s Receptionist: 3 Concepts From Practice

There’s the dream of a chatbot functioning as an automated, full-scale service desk: answering questions even when they are phrased unusually or use the wrong terms, recognizing the exact circumstances of a customer’s problem, and delivering solutions at exactly the level of detail the customer needs. We can get there, but it’s a lot of work. For the first step, we need a more modest objective.

Instead of mimicking a service desk employee who has all the knowledge and handles a wide range of requests, let’s start out with a bot working as a receptionist with a single task: telling users where to look for a solution. Why is this a good starting point? Because we already know how to navigate our website. Once the bot can tell where to find the answer, you can concentrate on enabling it to extract the information and provide it to the user directly.

There are three approaches to implementing this functionality: full-text search, guided search and intents. They require different levels of development effort and data preparation, but that’s not a downside: you can move from one to the next, starting small and building on what you already have. Let’s start with the easiest one.

 

Concept 1: Search Engine Interface

You probably already have a search engine implemented on your website. If you think about it, this engine does exactly what you want the chatbot to do: it takes free text input and returns a list of places to look for information on the terms the user entered. So think of your first chatbot as an enhanced interface to classic search. It asks the user for some keywords and produces a list of pages, perhaps combined with a short summary stored in each page’s metadata.

One could argue that this won’t add any value because there is no new functionality. But functionality is not the only thing that adds value. You can use this first bot to test the performance of the tool you use. Your developers can gain first experience working with this kind of technology and integrating it into your existing environment. Your conversation flow designer can experiment with ways to map the concept to a set of conversation nodes. And of course you can collect first feedback from your users without investing too much.

And to be clear: there is added value for the users as well. Providing an alternative interface may help some of them or enrich the user experience. Moreover, while the search engine is done once the result page is displayed, the bot can continue supporting the user, e.g. by asking whether the results answer the question and suggesting additional contact information in case they don’t.
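To make this concrete, here is a minimal sketch in Python of what such a bot handler could look like. The search endpoint and the response fields ("results", "title", "url", "summary") are assumptions for the example; your own site search API will differ.

import requests

# Hypothetical endpoint of the existing site search (assumption for this sketch).
SEARCH_API = "https://www.example.com/api/search"

def handle_user_message(message: str) -> str:
    """Forward the user's free-text input to the site search and format the hits as a bot reply."""
    response = requests.get(SEARCH_API, params={"q": message, "limit": 5}, timeout=5)
    hits = response.json().get("results", [])

    if not hits:
        return ("I couldn't find anything on that. "
                "Would you like me to show you how to contact our support team?")

    reply = ["Here is where you might find an answer:"]
    for hit in hits:
        # "title", "url" and "summary" are assumed fields of the search response.
        reply.append(f"- {hit['title']} ({hit['url']}): {hit.get('summary', '')}")
    reply.append("Did one of these pages answer your question? (yes/no)")
    return "\n".join(reply)

The closing question is what distinguishes the bot from the plain result page: the conversation can continue from the user’s answer.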

 

Concept 2: Guided Search

Once the bot is up and running and handling at least this basic task, you can increase its helpfulness. A good search engine provides some kind of filtering, right? How do you implement this in the chatbot? The chatbot can ask for specific information and offer options to select from. This is where the bot can start to show something that at least looks like intelligence. For example, if a request returns many results, it could ask the user to set exactly the filter that reduces the number of results the most (e.g. “Which operating system do you use?”, followed by a list of all operating systems appearing in the results). Instead of being overwhelmed by a huge range of options, the user then only has to make the decisions that really help.
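One simple way to find “exactly the filter that reduces the number of results the most” is to estimate, for each metadata facet, how many results would remain on average after the user answers, and to ask about the facet with the lowest estimate. The following Python sketch illustrates the idea; the facet names and the hit structure are made up for the example.

from collections import Counter

def best_filter_question(results: list[dict], facets: list[str]):
    """Pick the metadata facet whose answer narrows the current result list the most.

    Each hit in 'results' carries metadata fields, e.g. {"title": "...", "os": "Linux"}.
    'facets' lists the metadata fields the bot is allowed to ask about.
    """
    best_facet, best_expected, best_options = None, float("inf"), []
    total = len(results)

    for facet in facets:
        counts = Counter(hit[facet] for hit in results if facet in hit)
        if len(counts) < 2:
            continue  # asking about this facet would not split the results at all
        # Expected number of results remaining after the user answers this question.
        expected = sum(n * n for n in counts.values()) / total
        if expected < best_expected:
            best_facet, best_expected, best_options = facet, expected, list(counts)

    return best_facet, best_options

# Example: the bot asks about the operating system first, because that facet
# splits the current hits into the smallest remaining groups.
hits = [
    {"title": "Setup on Windows", "os": "Windows", "topic": "setup"},
    {"title": "Setup on Linux", "os": "Linux", "topic": "setup"},
    {"title": "Setup on macOS", "os": "macOS", "topic": "setup"},
]
facet, options = best_filter_question(hits, ["os", "topic"])
print(f"Which {facet} do you use? Options: {options}")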

This concept requires your pages to be enriched with some additional metadata, and the bot needs direct access to this information without the search engine acting as a broker in between. But this is only a small adaptation, and since your developers already know how the bot processes data, they probably won’t run into big issues.

If your data is accurately structured, you can even remove the free text input and use only a set of questions with pre-set answer options for setting filters. This prevents users from getting wrong results because of wrong terms in the query. However, to some users this might feel like a step backwards.

 

Concept 3: Understanding the Intent

Your bot is already capable of holding a conversation – without even trying to understand the user. By now your developers know how to modify the bot, your conversation designer is experienced in developing flows, and the bot is well integrated into your website. Time to tackle the last missing step towards a real chatbot with AI running in the background.

For those new to the topic: chatbots use Machine Learning to match a text input to one of several pre-defined intents. The more intents there are, the harder the task. It is therefore best to start with a small set of things the users might be interested in. For a first attempt, and to gain experience in working with intents, you might want to train the bot on intents related to the conversation itself, like “explain one of the options you asked me to choose from” or “I want to change a filter I set earlier”. This is a lot easier than letting the bot recognize what information the user is looking for, since there is less variety.

Later you can try to replace the search engine integration by connecting pages to intents. Nevertheless, it is a good idea to keep search as a fallback for the cases where the bot fails to recognize the intent.
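The sketch below illustrates this combination in Python. The string-similarity “classifier” only stands in for a real NLU model, and the example intents and the 0.7 confidence threshold are illustrative assumptions.

from difflib import SequenceMatcher

# Training phrases per intent. In a real bot these would feed an ML model;
# here a simple string-similarity lookup stands in for it (illustration only).
INTENT_EXAMPLES = {
    "explain_option": ["what does this option mean", "explain the choices"],
    "change_filter": ["I want to change a filter", "undo my last selection"],
}

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off below which the bot does not trust its guess

def classify_intent(message: str):
    """Return the best matching intent and a rough confidence score between 0 and 1."""
    best_intent, best_score = None, 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = SequenceMatcher(None, message.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent, best_score

def answer(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Recognized intent: {intent}"  # hand off to the matching conversation flow
    # Fallback: the bot is unsure, so it falls back to the Concept 1 search integration.
    return "I'm not sure I understood. Let me search the site for that instead."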

 

You started out with a search engine interface and got to a navigation assistant. With some additional training, the bot will be able to predict the correct location to point the user to with high accuracy. From that point on, it is only a small step to the bot answering questions by itself. This is how you get the service desk bot you dreamed of in the beginning.

Do you have any questions? Simply email: marketing@avato.net

Imprint: 
Date: January 2020
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2020 avato consulting ag
All Rights Reserved.

The 7 Biggest Mistakes of Knowledge Management and ECM

Why Information Management? A Plea for an Innovative Approach.

Why have Knowledge Management (KM) and Enterprise Content Management (ECM) failed for decades? Why do they not bring any added value to the organisation?

The simple answer is: if you keep using the same approach, you shouldn’t be surprised that the result is always the same!

The unpopular topic of “documentation” is generally pushed onto an organisation from outside. From the perspective of those affected, however, it is at most a secondary priority, and it demands something from the organisation that is not part of its core competencies. A lot of effort goes into producing results that merely comply with the formal and regulatory requirements but don’t provide the organisation with any practical added value, or are only selectively or temporarily useful.

The 7 Biggest Mistakes

What might seem like a law of nature is actually more of a home-made problem. Examples from the industry (technical documentation/communication) show that innovative ideas and methods as well as the right technologies can very quickly achieve astonishing results with comparatively little effort. We’ll concentrate on the 7 biggest mistakes of ECM and KM projects. Information Management overcomes these by using methodical approaches and innovative technologies.

1. Lack of a comprehensive approach

Information Management (IM) is complex. “Think big!” Every organisation needs to see IM as comprehensive and integrated. Many different groups of people, diverging information needs and IM technologies all need to be brought into an overall picture. This begins with the objectives of all relevant stakeholders, integrates the requirements of information users and identifies the relevant SMEs (Information Providers).

2. Focus on IM technology

In classic projects the focus is neither on the user nor the information. Projects are quickly shaped by discussions about IM technology, especially when company IT plays an important role in the setup of the projects.

3. Inadequate know-how

Describing processes, technologies, standards or services – to name just a few aspects – well and in an understandable manner is not an easy endeavour. One needs to understand content, efficiently cooperate with the specialists, recognise who uses information and then prepare information in a way that is appropriate to the target audience, interesting and understandable. Specialists who manage to do this are rare and expensive. Their importance is often underestimated.

4. Methodology-free work

Methodology discussions often focus on versions and approval procedures. However, these are at best secondary methods. Information must be broken down into small units (Information Units) and assigned to a taxonomy using metadata. This avoids redundancies and keeps the information maintainable. The preparation of content must be prioritized, content must be evaluated, and the governance of content must be structured. The feedback of users should always be integrated. In practice, these central methods almost never dominate ECM or KM projects, which are generally driven by IT.

5. Cumbersome, inflexible processes with little content and a lot of formality

Traditional KM and ECM are neither lean nor agile processes. In principle they follow the waterfall method, are set up for the long term and are focused on formal processes. They generally work without any evaluation of content-related results. As a result, they are often simply a continuation of traditional documentation, combined with new tools and comprehensive but unfortunately often only superficial requirements.

6. Cultural deficits

Organisations tend to view the current state as a law of nature: relevant, well-prepared and easily accessible information simply does not exist. As a result there is no culture of communication, feedback or structured sharing of information. Everyone simply looks after their own small area. All sorts of information is collected and hoarded. This creates a digital dumping ground that soon becomes unmanageable.

7. Managerial failure

Half-heartedness expresses itself in many ways: unclear responsibilities, insufficient prioritization and a lack of performance measurement are only the most visible signs. Management doesn’t seem to “like” the topic. In addition, managers often don’t understand its importance to their own organisation and are unable to estimate the complexity of Information Management. As a result, the organisation permanently drags the topic around, everyone addresses it half-heartedly, and it ends up offering hardly any added value.

Instead

If it is approached correctly and the 7 biggest mistakes are avoided, documentation can become quick and easy. The best part is: Information Management works in all organisations. They become better, faster, more agile, and more business- and customer-oriented.

How to do it right: Information Management has an integrated approach to people, information and IM technology. Reliable and useful information is always a snapshot in time. That’s why IM is a fast and agile process, uses innovative approaches and learns methodically from established procedures in other industries (technical communication).

Do you have any questions? Simply email: marketing@avato.net

Imprint: 
Date: January 2020
Author: Gregor Bister
Contact: marketing@avato.net
www.avato-consulting.com
© 2020 avato consulting ag
All Rights Reserved.

Production Monitoring 4.0 in the Paper Industry

Initial Situation:

How many people are necessary to operate a paper machine optimally? Thanks to the high degree of automation, day-to-day operation is possible with a very small production team. Over the last 10 years, some paper manufacturers have increased the production volume per employee by a factor of 10! At the same time, paper production is and remains a complex, dynamic process with many possible settings and influencing options in a complex production plant. Because of the high and still increasing number of sensors, fully manual monitoring of the production process by only a few people is impossible in practice. As a result, problems in the system or in operating settings often go undetected. The consequences are unplanned downtimes and quality deterioration in the end product. In many cases, only time-consuming ex-post analyses are possible. Even though process control systems offer alarm functionality, the checks are rule-based and use static limits that take neither operating mode, grades nor changes in settings into account. As a result, end users are flooded with alarms, which is why these alarm functions are usually used only to a very limited extent.

Smart Data Approach:

Fully automated, dynamic monitoring of thousands of process signals, with alarms raised on unusual patterns in the sensor data, allows problems in production to be identified early. With this new insight derived from data, downtimes can be prevented and product quality improved. In the Smart Data alarming system, the normal behaviour of the machine is continuously and dynamically derived from historical data, taking grades and operating modes into account. Dependent alarms are summarized and prioritized according to importance. In addition to sensor data, the monitoring can also be flexibly applied to other data such as quality parameters or calculated indicators like raw material consumption. The resulting alarms are presented in a user-friendly interface, where end users can investigate and process them further with extended analysis functions.
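As an illustration of the underlying idea, the following Python sketch derives a per-grade, per-sensor band of normal behaviour from historical data and flags current readings that leave it. The column names and the mean ± 3σ rule are assumptions made for the example, not the actual product logic.

import pandas as pd

def dynamic_alarms(history: pd.DataFrame, current: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """Flag current readings outside the band of normal behaviour.

    Both frames are assumed to have the columns ['grade', 'sensor', 'value'];
    the band is derived per grade and sensor as mean +/- k standard deviations.
    """
    bands = (history.groupby(["grade", "sensor"])["value"]
                    .agg(["mean", "std"])
                    .reset_index())
    bands["lower"] = bands["mean"] - k * bands["std"]
    bands["upper"] = bands["mean"] + k * bands["std"]

    merged = current.merge(bands[["grade", "sensor", "lower", "upper"]],
                           on=["grade", "sensor"], how="left")
    return merged[(merged["value"] < merged["lower"]) | (merged["value"] > merged["upper"])]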

Advantages:

  • Increase of OEE – potentially saving several hundred thousand euros per year
  • Prevention of Downtimes
  • Improved quality of the final product
  • Predictive maintenance

Features:

  • Real-time Monitoring
  • Dynamic calculation of threshold values
  • Consideration of grades and production modes
  • Prioritization of alarms
  • Automated monitoring of raw material and energy consumption

Production Monitoring 4.0 in the Paper Industry – Reduced downtime, improved quality, predictive maintenance

Do you have any questions? Simply email: marketing@avato.net

Imprint: 
Date: January 2020
Author: Leon Müller
Contact: marketing@avato.net
www.avato-consulting.com
© 2020 avato consulting ag
All Rights Reserved.

5 Basic Rules for a Good Meta Data Schema

Automation is today’s hot topic. The never-ending flood of information makes it impossible to maintain each file and each dataset individually – let alone manually. Meta data is the key to solving this problem: it allows information to be grouped and batch-processed according to specified properties. To ensure the smooth operation of such processes, meta data must be captured in a structured form. This article explains why a structure, i.e. a meta data schema, is so important and what needs to be considered when developing such a schema.

 

Why do I Need a Meta Data Schema?

Machines are not designed to process unstructured data – be it simple, short scripts or AIs – because they lack the ability to interpret it logically. A specific, fixed structure is needed for their use. The more context there is for a particular piece of information, and the more precisely its structure and meaning are defined, the lower the effort for automated processing and the more reliable and meaningful the results will be. A meta data schema is basically nothing more than a definition whose purpose is to make such context available for machine processing.

However, a schema isn’t just good for the use of meta data – it also benefits data capture. Since a meta data schema defines what the data must look like, many errors can be detected at the time of input, regardless of whether that input happens manually or (partially) automatically. In addition to avoiding errors, a good schema also reduces the amount of work you have to put in, because when the meaning and the relationships of the meta data are clearly defined, much of that data can be captured automatically or generated from other (meta) data.
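As a small illustration: metadata that is already contained in the file itself can be filled in automatically instead of being typed by hand. The field names in this Python sketch are made up for the example.

from pathlib import Path
from datetime import datetime, timezone

def derive_metadata(file_path: str) -> dict:
    """Fill schema fields that can be read from the file itself rather than entered manually."""
    path = Path(file_path)
    stat = path.stat()
    return {
        "file_name": path.name,
        "format": path.suffix.lstrip(".").lower(),  # e.g. "pdf", "docx"
        "size_bytes": stat.st_size,
        "last_modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }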

The bottom line: A meta data schema…

  • …facilitates effective, automated data processing and maintenance;
  • …increases the quality of the meta data and with it their value;
  • …reduces costs for capturing the meta data.

 

What Makes a Good Meta Data Schema?

The best schema is one that supports data input and data processing the most, and makes these steps easiest. A few basic rules will help you to develop a schema that optimally matches your data and its purpose.

 

1. Defining the Area of Use

What type of data should the meta data schema be applied to? A schema that matches all available data will allow all of that data to be processed with the same automated routines. Very varied data, on the other hand, will have very few properties in common. Think about what kind of data you want to process (manage, search) together; that data set should share one schema. The schema then does not have to account for other types of data and formats. There is, of course, no reason not to reuse parts of the schema for other data.

 

2. Selecting the Right Fields

A meta data schema consists of so-called ‘fields’, where each field contains exactly one defined piece of information. It is well worth your while to think about which fields you will need and where you want the data to come from. The key question here is: what will the meta data be used for? It is a complete waste of time to define a field that isn’t needed at all. The same goes for fields that can’t be filled out for a large portion of the datasets because mining that information would be too costly or not possible at all.

The data should be split into its smallest possible components, because it is much easier and less error-prone to join together two clearly defined fields than it is to break down the content of a single field. You should therefore check for each individual field you want to use whether it combines two or more independent pieces of information. You can always add another field for a combination of data that is frequently needed in that form – but that field should then be populated automatically to prevent contradictions.
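A short sketch of this principle: the smallest components are stored separately, and the combined field is generated from them so it can never contradict its parts. The field names are only an example.

def build_record(first_name: str, last_name: str) -> dict:
    """Store the smallest components and derive the combined field automatically."""
    return {
        "first_name": first_name,
        "last_name": last_name,
        # The combined field is generated, never entered by hand,
        # so it cannot contradict the components it is built from.
        "display_name": f"{last_name}, {first_name}",
    }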

 

3. Don’t Re-Invent the Wheel

Meta data has been in use for quite some time and in many areas. The need for data exchange has led to the development of robust, well-documented meta data schemas and exchange formats, which cover most of the requirements of a given sector. Using a standard schema has a lot of advantages. Data provided by external sources can be used immediately and without any modifications, provided the same standard schema was used to capture it. There are various tools and input masks available for commonly used schemas, which further simplify data maintenance. And of course you save all the time and effort of creating your own schema. So if you find that iiRDS, Dublin Core or MODS offers everything you need, choosing one of these will in all likelihood be a better idea than developing your own schema tailored specifically to your data.

 

4. As Tight and Exact as Possible

The fewer selection options and freedoms a schema offers, the better. Every free choice is an opportunity for human error. Specify exactly what information must be entered in a field and how. Data types, drop-down lists and regular expressions (a notation for describing character strings) are a great help here: they avoid typos and make sure that identical information always appears in the same format. But even simpler measures offer plenty of benefit – in a “Ranking” field, for example, allow only a numerical input from 1 to 6. A short explanation of the exact type of information the field refers to can also be very helpful.
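To illustrate, such rules can be enforced by a validation routine at input time. The field definitions below (a drop-down list, a regular expression and the 1-to-6 ranking range) are made-up examples of how this could look in Python.

import re

# Illustrative field definitions: data type, allowed values, pattern or numerical range.
SCHEMA = {
    "language": {"type": str, "allowed": ["de", "en", "fr"]},  # drop-down list
    "doc_id":   {"type": str, "pattern": r"^DOC-\d{6}$"},      # regular expression
    "ranking":  {"type": int, "min": 1, "max": 6},             # numerical range
}

def validate(field: str, value) -> list:
    """Return a list of violations for one field value; an empty list means the value is valid."""
    rules, errors = SCHEMA[field], []
    if not isinstance(value, rules["type"]):
        return [f"{field}: expected {rules['type'].__name__}"]
    if "allowed" in rules and value not in rules["allowed"]:
        errors.append(f"{field}: '{value}' is not one of {rules['allowed']}")
    if "pattern" in rules and not re.match(rules["pattern"], value):
        errors.append(f"{field}: '{value}' does not match {rules['pattern']}")
    if "min" in rules and not rules["min"] <= value <= rules["max"]:
        errors.append(f"{field}: must be between {rules['min']} and {rules['max']}")
    return errors

print(validate("ranking", 7))  # -> ['ranking: must be between 1 and 6']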

 

5. Optional or Mandatory

If you plan to capture meta data automatically or have experts enter it, then every field that you know applies to all instances should be mandatory. Every person has a name, every file a format and every digital text an encoding. If such a field remains empty, the dataset cannot be handled by all the processes accessing it, or will at least require special treatment – which significantly reduces the benefit of the schema.

There is, however, one exception where making as many fields as possible mandatory can be a drawback: when the meta data is entered manually by people whose main responsibility is not the maintenance of that data. Too many mandatory fields mean a lot of time spent, which can lead to a drop in motivation and with it to careless and faulty input. Where that is the case, it may be necessary to consider how much time spent on data input is reasonable to still ensure the best possible data quality.

Optional fields are, of course, also useful in automated data capture processes. A “Most recent renovation” field is a good idea in a meta dataset about a house – but is not applicable to a new construction. Optional fields make sense where a missing input itself represents a statement.

 

In addition to all these basic rules, the rule of implementability must also be applied. If the cost of creating and maintaining a drop-down list is simply too high, or the technical implementation of the perfect schema would take too much time, then some compromise in terms of specificity will be unavoidable. But anyone who isn’t sure from the start what the perfect meta data schema would look like will find it difficult to implement the best possible one anyway.

Done with your meta data schema? Then it is time for the next step: capturing the meta data – or should we rather say creating it?

5 Basic Rules for a Good Meta Data Schema (pdf)

For further questions send an email to: marketing@avato.net

Imprint

Date: June 2019
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2019.
All Rights Reserved.

avato customer Steinbeis wins SAP Quality Award

avato customer Steinbeis Papier GmbH wins the SAP Quality Award 2019 in the Innovation category for its Industrie 4.0 project.

Industrie 4.0, IoT, Big Data, AI – just buzzwords, or ingredients for successful digitalization in medium-sized businesses? The focus on relevant business results and an excellent team, combined with the intelligent use of modern technologies, led to convincing results for the medium-sized manufacturer of sustainably produced recycled paper. The jury of the SAP Quality Award 2019 therefore awarded the project Industrie 4.0@Steinbeis, supported by avato, gold in the Innovation category.

If you would like to learn more about the project, Steinbeis Papier and the services of avato consulting, read on…