The Path From Technical Writer to Information Manager

This article aims to draw attention to the increasing importance of Technical Writers / Information Managers (m/f/d) and to highlight the features of these job descriptions. It also offers an overview of possible development paths and recommended qualifications. To improve readability, the supplement “m/f/d” is omitted in the following. Use the LinkedIn comments to let the community know which certifications you think are important for an Information Manager.

 

The Technical Writer. The professional title “Technical Writer” was coined by Tekom (Gesellschaft für Technische Kommunikation) in collaboration with the Federal Labour Office. A Technical Writer is responsible for the conceptualisation, creation and updating of technical documentation such as user guides, operating manuals, installation and assembly instructions, as well as training material. Technical Writers are increasingly working in-house and write, for example, system and application documentation as well as requirement specifications. They also manage terminology and user interfaces alongside the development process.

Did you know? There were around 85,000 full-time Technical Writers in Germany in 2016. A large portion of documentation is, however, authored by people who actually hold a different role, which is why the profession “Technical Writer” remains largely unknown. (Wikipedia)

Reading suggestion: Why Your Role of Technical Writer is Becoming Increasingly Important

The Information Manager role is less clearly defined. Information management in general means the management of information; however, the term has various definitions in the technical literature. The reason for this is the dynamic environment of IT development as well as the various academic disciplines involved (in particular information systems, which deals with information and communication management). […] In general, “strategic information management” is described by various authors as the planning, conceptualisation, monitoring and managing of information and communication within organisations with the aim of achieving strategic goals. (Wikipedia)

Information Manager – a profession? The Information Manager role has no official job description and is not listed as a skilled occupation in Germany (see Planit, berufe.eu). Despite this, “Information Manager” is included in the occupations list of the employment agency as a “field of study” and an “occupation after studies”.

A comparison between the activities of a Technical Writer and those of an Information Manager reveals similarities. Both deal with documentation and information. The Information Manager, however, operates at a higher level: unlike a Technical Writer, they are not occupied full-time with the actual creation of documentation but are also responsible for the information management process and for the quality of both the process and the documentation. The tasks and responsibilities of a Technical Writer are therefore a good basis for professional development into an Information Manager. The experience gathered helps to master growing challenges and expanded responsibilities.

 

Additional tasks for an Information Manager include:

  • Management of the documentation process
  • Recording of information needs
  • Communication with stakeholders (regular meetings with Responsibles & Accountables)
  • Drafting of programmes/projects
  • Management of technical implementation
  • Management of the (teams of) Responsibles
  • Adaptation of available modules, templates and models as required
  • Management and support of information creation, review and updating
  • Making reviewed information available and accessible for authorised persons
  • Reporting about the status, progress and obstacles / risks
  • Holding training/education sessions
  • Interest in and promotion of continuous improvement

 

An Information Manager should have the following knowledge and skills:

  • Understanding of the overall objective and the defined scope
  • Ability to work across departments and understand connections
  • Adequate experience
  • Knowledge of the necessary methods and practices as well as how to apply them
  • Handling of information management technologies and knowledge of their advantages/disadvantages
  • Basic user knowledge of content management systems (e.g. SharePoint, Alfresco)
  • Basic user knowledge of content delivery systems (e.g. WordPress, TYPO3, Confluence)
  • Ideally some industry expertise

 

Which qualifications can aid success in the additional areas of responsibility?

  • Studies, for example in information management / multimedia communication and documentation / digital humanities
  • and/or several years of professional experience as a Technical Writer
  • Certified Information Professional – CIP (AIIM)
  • (agile) project management
  • Technical Writer (Tekom)
  • ITIL Foundation (especially in IT)

 

CIP – what is this? Certified Information Professional is a certification offered by the American organisation AIIM (Association for Information and Image Management). Details can be found in the following article: Why Should Technical Writers get CIP Certification?

 

Summary

The need for Technical Writers will increase (Why Your Role of Technical Writer is Becoming Increasingly Important). At the same time, the demands on Technical Writers are becoming ever more complex. Experienced Technical Writers who take on additional responsibilities are referred to as Information Managers. Anyone interested in advancing to Information Manager can gain an overview of the tasks and skills involved and obtain the available certifications.

Which certifications do you think are relevant for an Information Manager?
Which knowledge/certification do you see as must-have, and which as nice-to-have?
Write a LinkedIn comment!

The Path From Technical Writer to Information Manager (PDF)
Possible developments and recommended qualifications.

Do you have any questions? Simply email: marketing@avato.net

Imprint: 
Date: September 2019
Author: Jennifer Gitt
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.

Why Should Technical Writers get CIP Certification?

The market for training and further education in the area of Information Management is developing rapidly in Germany: 15 German universities currently offer Information Management in 24 different study courses (Studis.online). But what is on offer in the further education market? As a pioneer in technical documentation, Tekom offers an answer complete with relevant certifications.

On the international stage, the American organisation AIIM (Association for Information and Image Management) may be a more familiar name. Their certification, which reflects years of practical experience and know-how, is called CIP – Certified Information Professional.

Did You Know?

The professional title “Technical Writer” was coined by Tekom (Gesellschaft für Technische Kommunikation) in collaboration with the Federal Labour Office.

There were around 85,000 full-time technical writers in Germany in 2016. A large portion of documentation is, however, authored by people who actually hold a different role, which is why the profession ‘Technical Writer’ remains largely unknown. (Wikipedia)

 

Why Should You Get Certification?

According to PayScale.com, organizations are willing to pay 27% more for a certified professional than an uncertified comparable specialist. But the majority of CIPs report that the true value is what it does for your reputation.

Hemaben Patel, ECM Lead for a large international airline, explains, “Having the CIP gives my internal customers and partners a certain level of confidence that whatever strategy or solution I propose is based on best practices and experience.” (AIIM)

The CIP course and the CIP study guide offer a wealth of valuable learning content; read more on the subject below. The CIP certificate is proof of what the certification has made you: an expert in intelligent information management – or in other words, an Information Manager.

The Certified Information Professional certification could be your first step towards becoming an Information Manager if you have previously been employed as a Technical Writer. The responsibilities of an Information Manager are much broader in scope, and the CIP certificate will help you find the right role on the international stage.

 

For Whom Will CIP be Interesting?

CIP is an interesting proposition for

  • Information Management Consultants
  • Technical Writers
  • Project leads, project managers and team members in IM projects
  • Anyone working in the areas of Records Management, Document Management, Electronic Archiving and Enterprise Content Management
  • IT Management professionals and technical IT staff

Intelligent IM can also be an exciting topic for groups in the following areas:

  • Risk Management
  • Business Analysis
  • Process Design
  • IT Coordination
  • Change Management

Test yourself!

Are you ready for your certification?

There are three freely accessible tests that you can use to assess your knowledge: Sample Test 

 

Even More Practically Oriented Since May 2019

The CIP certification has been around for a few years now, but some content changes were made in May 2019. It now includes numerous practical scenarios. It also recognises that organisations no longer need ECM as such (“ECM is now dead”, Gartner, cited by AIIM). More to the point: it acknowledges that the challenge is only partly a technical one. The authors nevertheless left one chapter on technology untouched. Technology is, after all, an important tool and very much needed when facing the challenges of creating intelligent information:

  • Modernizing the information toolkit
  • Digitalizing core organizational processes
  • Automating compliance & governance
  • Leveraging analytics & machine learning

This realisation is also reflected in the new structure of the various learning topics, which focus on the following content:

  1. Creating and Capturing Information
  2. Extracting Intelligence from Information
  3. Digitalizing Core Business Processes
  4. Automating Governance and Compliance
  5. Implementing an Information Management Solution

Are you already a CIP?

What has changed in comparison with the previous version? The first and last chapters are similar in terms of content. Some topics in the middle part were consolidated and new focal points were set. These were the former contents:

  1. Creating and Capturing Information
  2. Organizing and Categorizing Information
  3. Governing Information
  4. Automating Information-Intensive Processes
  5. Managing the Information Lifecycle
  6. Implementing an Information Management Solution

 

Cost & Scope

The certification exam consists of 100 multiple-choice questions. The various topic areas are weighted differently; the weightings can be found in the study guide. A score of 60% is a pass.

Non-AIIM members will have to shell out USD 1785 – but in my experience, that investment is not required for technical writers. In most cases, practical work experience and in-depth familiarisation with the study guide should guarantee a pass.

The CIP Study Guide is available free of charge for professional members. Everyone else will have to pay an affordable USD 60.

The test fee is around USD 385 (members pay USD 349).

(The membership fee is USD 169 per year. I personally opted in favour of a membership before I did my certification. It gave me access to the community and it got me an overall discount of USD 100 for the study guide and test.)

 

Summary

Certifications are always career boosters. Specifically in terms of international job opportunities, the CIP certification is a great place to start if you want to step up the ladder from Technical Writer to Information Manager. The certification is based on many years of valuable practical experience and was completely revised in May 2019 to reflect the latest developments. The time needed, as well as the financial cost, can be minimised with practical experience and autonomous preparation.

Why Should Technical Writers get CIP Certification? (PDF)

Send an email for questions and feedback: marketing@avato.net

Imprint: 
Date: August 2019
Author: Jennifer Gitt
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.

IT Integration or Separation: Information is the Key to Success

Post-merger IT integration and IT separations have become commonplace in the IT operations of most companies. Many of these undertakings take a long time or are never truly completed – often with consequences that resurface again and again. How can you make improvements in this area, minimize risk and prevent pesky long-term consequences?

A reliable basis of information is the key to becoming faster and more efficient. How high is the cost for the integration of two badly documented IT organizations? How quickly can you carry out a spin-off in your IT? Do you have a good basis of information, with which you can support a spin-off?

Integration or Separation: It’s all a Matter of Perspective

Most IT separations are followed up by an IT integration, many IT integrations are preceded by an IT separation. The issues of both perspectives are the same. Much of it is a question of documentation and organization. Everything gets a lot easier if you know what you’ve got and what you really need.

It is, however, not an easy challenge: shared data centres must be separated, while separate data centres should be joined. Shared applications must be broken apart while complying with data protection requirements, and separate applications should be merged together with their databases, etc.

“We’ll Just Follow the Blueprints!”

One might think that IT integrations or separations are done so frequently in some companies that there would be actual procedural blueprints and skilled experts, and that one’s own IT department would be well prepared in terms of documentation. After a number of iterations and professionally executed projects, all the information needed for integration or separation would be on hand and of high quality…

That is – unfortunately – not the case. Hardly any IT organisation has the wherewithal to carry out a truly structured integration or separation. No preparation is done for the integration of one’s own IT, and nobody can separate parts and hand them over in a structured manner. What can be done?

 

Divide and Rule: Documentation in 7 Workstreams for Success

Separate the task into manageable subtasks early – i.e. right from the start. A good approach – and not just for separation or integration – is a subdivision into individual workstreams that align with cohesive areas.

#1 IT Infrastructure Services

That includes shared services, as well as IT assets from the areas of network, data centres and computing (CPU, storage, backup, etc.). These areas are well documented in most companies. In separations, documentation in the area of security is key.

#2 Applications & Data

Shared applications generally pose a special challenge. Application integration, as well as the interfaces between applications, is often neglected at the start. Shared data is generally a huge issue and more often than not a source of risk. It is never too early to develop and document data extraction and transformation concepts!

#3 Identity & Access Management (IAM)

While the separation of an active directory or an LDAP directory might still seem a relatively manageable challenge, the integration of two separate directory services may have already become an insurmountable stumbling block. And we’re not even touching on user management on application level or a company-wide SSO (Single Sign-On).

#4 Services for the End User

End user services are often a test of patience and dedication. Separation may be time-critical here, while integration is all about minimising costs.

#5 ITSM, Supplier Management & IT Governance

The importance of IT service management, supplier management and IT governance is often underestimated. While questions about the IT organisation are generally handled well and information is easily accessible, important data about suppliers and ITSM processes is much harder to come by. In separations, that poses a number of risks; in integrations, it drives up costs.

#6 License Management

Once the separation process begins, you must know which licenses are available and which are still needed. Which licenses will you be able to continue using after a separation? What other licenses will be needed?

#7 IT Information Management

IT information management is the glue that holds it all together and the basis for all decisions. The earlier you begin collecting IT information, the more comprehensive and reliable that information basis becomes – and the more effective and efficient your IT integration or separation will be.

 

Project Phases, Start of the IT Project & Information Management

Are you one of those who have neither comprehensive background knowledge nor established IT information management? You can’t afford to procrastinate – start on the very first day and take advantage of the pre-signing phase!

Implement the necessary workstreams as early as possible, assign key roles to your best personnel and build up knowledge about your IT. Formulate goals: Start your IT information management and develop a high-level solution design.

The IT project phases can in part be run in parallel. The ‘as is’ analysis, for example, can start immediately and continue right into the transition phase. The rule of thumb in IT is: perfect is the enemy of good.

 

Reliable Information Minimises Risks in Separations

Separation opens its very own can of worms. At the latest after signing, buyers and sellers have very different points of view and follow their own objectives. The seller will very quickly start thinking about compliance and regulations regarding buyer access to systems and data. The buyer is suddenly confronted by risks to service availability caused by insufficient information and a lack of qualified staff. Only a good information basis can help here: the earlier qualified and reliable information is on hand, the faster and better good decisions can be made.

 

Integration Shortfalls

While the main concern in separations is risk, in integrations it is mainly shortfalls that cause problems after the fact. Incomplete IT integrations create a whole zoo of applications and systems. Anything that wasn’t structured before it was merged will cause long-term problems.

“Temporary solutions” are often the go-to procedure when there is not enough time or resources for a comprehensive integration. Document these meticulously: nothing is more permanent than a temporary solution, and afterwards nobody can reconstruct the complexities. That doesn’t just mean higher costs. Important adjustments to business areas and processes can only be made to a limited extent – they will be virtually paralysed.

 

Information is Everything

IT separations and integrations basically have one thing in common: the scope and quality of the information basis decide success or failure. The premise is the same as in other areas like IT operations, cloud migration or an IT audit: the better you know your own IT and the more information is at hand, the greater your chances of success. Knowledge – in this case about the IT – is a key prerequisite for effectiveness and efficiency.

Be proactive and start preparing early. A subdivision into so-called workstreams will be invaluable on your way to success.

 

IT Integration and Separation (PDF)
… and how documentation prevents pain.

Please send us an e-mail for questions or feedback: marketing@avato.net

Imprint
Date: July 2019
Author: Gregor Bister
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2019.
All Rights Reserved.

5 Basic Rules for a Good Meta Data Schema

Automation is today’s hot topic. The never-ending flood of information makes it impossible to maintain each file and each dataset individually – never mind manually. Meta data is the key to solving this problem: it allows information to be grouped and batch-processed according to specified properties. To ensure the smooth operation of such processes, meta data must be captured in a structured form. This article explains why a structure, i.e. a meta data schema, is so important and what needs to be considered when developing such a schema.

 

Why do I Need a Meta Data Schema?

Machines are not designed to process unstructured data – whether for simple, short scripts or for AIs – because they lack the ability to interpret logically. A specific, fixed structure is needed before the data can be used. The more context there is for a particular piece of information, and the more precisely its structure and meaning are defined, the lower the effort for automated processing and the more reliable and meaningful the results. A meta data schema is basically nothing more than a definition whose purpose is to make such context available for machine processing.

However, a schema isn’t just good for the use of meta data – it is also beneficial for data capture. Since a meta data schema defines what the data must look like, many errors can be detected at input time, whether input happens manually or (partially) automatically. In addition to avoiding errors, a good schema also reduces the amount of work you have to put in: when the meaning and relationships of the meta data are clearly defined, much of that data can be captured automatically or generated from other (meta) data.

The bottom line: A meta data schema…

  • …facilitates effective, automated data processing and maintenance;
  • …increases the quality of the meta data and with it their value;
  • …reduces costs for capturing the meta data.
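To make the last point concrete, here is a minimal sketch of automatic capture; the field names and derivation rules are invented for illustration, not part of any standard:

```python
from datetime import date
from pathlib import Path

def derive_metadata(file_path):
    """Generate meta data fields that follow mechanically from others."""
    p = Path(file_path)
    return {
        "file_name":   p.name,
        "file_format": p.suffix.lstrip(".").lower(),  # derived from the file name
        "captured_on": date.today().isoformat(),      # supplied by the system
    }

# Only 'title' still has to be entered manually; the rest is generated,
# which reduces both the capture cost and the chance of typos.
record = {"title": "Backup concept", **derive_metadata("docs/backup_concept.pdf")}
print(record)
```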

 

What Makes a Good Meta Data Schema?

The best schema is one that supports data input and data processing the most, and makes these steps easiest. A few basic rules will help you to develop a schema that optimally matches your data and its purpose.

 

1. Defining the Area of Use

What type of data should the meta data schema be applied to? A schema that matches all available data will allow all of it to be processed with the same automated routines. Highly varied data, on the other hand, will have very few properties in common. Think about what kind of data you want to process (manage, search) together; that data set should share one schema. The schema then does not have to accommodate other types of data and formats. There is, of course, no reason not to reuse parts of the schema for other data.

 

2. Selecting the Right Fields

A meta data schema consists of so-called ‘fields’, whereby each field contains exactly one defined piece of information. It is well worth your while to think about which fields you will need and where you want the data to come from. The key question here is: what will the meta data be used for? It is a complete waste of time to define a field that isn’t needed at all. The same goes for fields that can’t be filled out for a large portion of the datasets because mining that information would be too costly or not possible at all.

The data should be split into its smallest possible components, because it is much easier and less error-prone to join together two clearly defined fields than it is to break down the content of one field. You should therefore check, for each individual field you want to use, whether it combines two or more independent pieces of information. For a combination that is frequently needed in exactly this form, you can always add another field – but that field should then be populated automatically to prevent contradictions, as in the sketch below.
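A minimal sketch of such an automatically populated combined field, with hypothetical field names: the atomic parts are stored, and the frequently needed combination is generated from them, so the two can never contradict each other.

```python
def with_display_name(record):
    """Populate the combined field from its atomic components."""
    record = dict(record)
    record["display_name"] = record["given_name"] + " " + record["family_name"]
    return record

author = with_display_name({"given_name": "Ada", "family_name": "Lovelace"})
print(author["display_name"])  # "Ada Lovelace"
```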

 

3. Don’t Re-Invent the Wheel

Meta data has been in use for quite some time and in many areas. The necessity for data exchange has resulted in the development of robust, well-documented meta data schemas and exchange formats, which cover most of the requirements of a specific sector. Using a standard schema has a lot of advantages. Data provided by external sources can be used immediately and without any modifications, provided the same standard schema was used for its capture. There are various tools and input masks available for commonly used schemas, which further simplify data maintenance. And of course you save all the time and effort you would have spent creating your own schema. If you therefore find that iiRDS, Dublin Core or MODS offers everything you need, choosing one of these will in all likelihood be a better idea than developing your own schema tailored specifically to your data.

 

4. As Tight and Exact as Possible

The fewer selection options and freedoms a schema offers, the better. Every selectable option represents an opportunity for human error. Specify exactly what information must be entered in a field and how. Data types, drop-down lists and regular expressions (a language for describing character strings) are a great help here: you avoid typos and make sure that identical information always appears in the same format. Even simpler measures offer plenty of benefits – in a “Ranking” field, for example, you only allow a numerical input of 1 to 6. A short explanation of the exact type of information a field refers to can also be very helpful.
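A minimal sketch of such constraints; the fields and rules are illustrative only:

```python
import re

def check_ranking(value):
    """Only allow an integer from 1 to 6, as in the 'Ranking' example."""
    return isinstance(value, int) and 1 <= value <= 6

def check_language(value):
    """Drop-down list: only predefined values are accepted."""
    return value in {"de", "en", "fr"}

def check_version(value):
    """Regular expression: version strings must look like '2.1'."""
    return bool(re.fullmatch(r"\d+\.\d+", value))

print(check_ranking(4), check_language("en"), check_version("2.1"))      # True True True
print(check_ranking(7), check_language("English"), check_version("v2"))  # False False False
```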

 

5. Optional or Mandatory

If you are planning to capture meta data automatically or with the help of experts, then every field that you know applies to all instances should be mandatory. Every person has a name, every file a format and every digital text an encoding. Should such a field remain empty, the dataset cannot be processed by all the processes accessing it, or will at least require special treatment. That significantly reduces the benefit of the schema.

There is, however, an exception in which keeping the number of mandatory fields as high as possible can be a drawback: when the meta data is entered manually by people whose main responsibility is not the maintenance of that data. Too many mandatory fields mean a lot of time spent, which can lead to a drop in motivation and with it to careless, faulty and even inadvertent input. Where that is the case, it may be necessary to weigh the time spent on data input against the best possible data quality.

Optional fields will, of course, also be useful in automated data capture processes. A “Most recent renovation” field will be a good idea in a meta dataset about a house – but will not be applicable for a new construction. Optional fields make sense where the fact that an input is missing itself represents a statement.

 

In addition to all these basic rules, the rule of implementability must also be applied. Should the cost of creating and maintaining a drop-down list simply be too high, or should the technical implementation of the perfect schema take too much time, then some compromise in terms of specificity will be unavoidable. But anyone who isn’t clear from the start about what the perfect meta data schema would look like will find it difficult to implement even the best possible one.

Done with your meta data schema? Then it is time for the next step: Capture! Or would you rather stick with Create?

5 Basic Rules for a Good Meta Data Schema (pdf)

For further questions send an email to: marketing@avato.net

Imprint

Date: June 2019
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© avato consulting ag – Copyright 2019.
All Rights Reserved.

Innovation Creates Intelligent Information

Successful Information Management Projects in IT

Intelligent Information: the Right Approach is Key

IT documentation is up to date, reliable and of good quality, and any information needed is available at the click of a button. Intelligent information is essential in the decision-making process for strategic issues, for example for decisions regarding cloud transformations or the integration/separation of IT organisations.

You think this is unrealistic? But what if intelligent information in IT did work?

Let’s say your company acquires another and you are in charge of integrating the IT. How do you set up projects/programs to make intelligent information available quickly across the as yet unknown IT environment? How do you create a good basis for management decisions and a sustainably resilient integration within a short space of time?

One thing is certain: the traditional approach via document management or ECM projects will not work. These would be too lengthy and unwieldy, and the high cost would in no way be justified by the meagre results.

You need a new approach. Information Management must be modular and built piece by piece from smaller projects. These projects must be agile and you will need innovative methods and modern technology approaches like audio and video integration, intelligent search functions, chatbots and text analytics.

Information Management in IT must quickly offer returns, and a business case must be in place within a few weeks at the latest. A strategic added value must be apparent right away. Intelligent information must be maintainable to preserve these benefits long-term, meaning it should cost very little to keep things current.

Key Terminology in IT Information Management

Accountable: The person in charge of the correct and thorough completion of a service or task, who delegates work to Responsibles. There must only be one Accountable for each service or task.

Capture: According to AIIM (Association for Information and Image Management), “Capture” is the method for collecting information at its source and forwarding it to a formal information management process.

Create: In the “Create” method we assume that no reliable information exists and that all information must therefore be created.

Information Governance: Ensures the long-term efficiency and quality of information.

Information Unit: Information Units (IUs) are the smallest meaningful units of information, which cannot be broken down any further. They may consist of a combination of file formats (e.g. html, mp3, mp4) and are enriched with metadata (see the sketch after this terminology list).

Methods: Methods of IT information management, e.g. “Capture” or “Create”.

Responsible: The person doing the work to fulfil a task. There will always be at least one Responsible. Even where tasks are delegated to others, an individual can remain the only Responsible.

Stakeholder: A Stakeholder is a person involved in an activity or who has an interest in an organisation or activity. These include members of the actual Corporate IT, but also members of IT Governance and IT Audit.
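To make the Information Unit concept concrete, here is a minimal sketch of what an IU enriched with metadata might look like; the field names are hypothetical, not a fixed standard:

```python
# A hypothetical Information Unit: one self-contained piece of content
# plus the metadata that makes it findable and maintainable.
information_unit = {
    "id": "iu-0042",
    "content": {
        "html": "backup-restore-procedure.html",  # main body
        "mp4": "backup-restore-walkthrough.mp4",  # optional video variant
    },
    "metadata": {
        "topic": "backup & restore",
        "audience": "operations",
        "accountable": "Head of IT Operations",   # exactly one Accountable
        "responsibles": ["dba-team"],             # at least one Responsible
        "last_review": "2019-02-15",
    },
}
```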

Innovative Approaches in IT Information Management

Completely new and innovative approaches are needed, both in information management methods and in the technologies used. IT information management must be simplified – not just in terms of usage but, very importantly, also in terms of the generation (“Create”) and processing of information and of information governance.

Learning Methods From Others: AIIM, Tekom

IT information management methods can combine the best of two worlds: the worlds of business information management and technical documentation. Key elements here are taxonomies and the use of IUs instead of traditional documents.

 

New Technology Approaches Create New Perspectives

Technology components support the provision of information via a portal with web/mobile access, the use of text analytics and intelligent search functions, gamification and interaction, text-to-speech and speech-to-text, chatbots, video and audio.

A very important aspect is the separation of the actual content management (text, audio, video, structured data…) and the content processing and visualisation, i.e. content delivery.

Find out more about analytics here: https://intelligent-information.blog/en/analytics-offers-these-3-optimisation-scenarios/

 

Information Management Program / Project

Think Big, Start Small!

Unwieldy and costly large-scale projects with unforeseeable benefits have long since been discredited in IT documentation. A similar approach applies for Information Management: the objective is to produce quick wins. And that requires agile methods. Quick, visible benefits increase acceptance and engender far-reaching support and cooperation. And that is an important factor for success: the success of Information Management depends on cooperation and management buy-in.

Project Approach and Organisation

A small project or a program for the entire Enterprise IT – the basic approach is always the same. There is always at least one stakeholder and there are defined objectives, as well as at least one information manager (possibly part-time for smaller projects) who controls the project/program. A quick win, i.e. usable results within a short time, requires agile methods.

From time to time, Information Management projects must involve large numbers of IT employees. And in order to improve engagement, the project reporting is not exclusively aimed at management. Dashboards, which offer transparency and visibility regarding project success and progress, ensure broad support. Gamification and success stories are other building blocks to create support across as much of the IT organisation as possible.

Information Management is rarely something dear to the heart of IT management. That is why the buy-in needs to be maintained at all times. A good tool to keep the objective in view could be a maturity index. There should also be a procedural model for stakeholder management, which can be used to keep general interest at a continuously high level.

Project/Program Phases: the avato Standard

Information management is not a traditional project with a project start, project phases and a defined project end. IT information management is instead an ongoing program, similar to CSI (Continual Service Improvement).

 

Planning

Where mandatory, traditional documentation can continue during the planning phase. New areas, however, should not be started. The planning phase for smaller projects can be completed after two weeks and should not take longer than 2–3 months for comprehensive programs. Key stakeholders are identified in this phase and their objectives (goals) are coordinated and prioritised. Reporting and the ongoing stakeholder management process are defined.

Setup
The project organisation is specified, methods are defined, and information managers take over control of the processes as well as the coordination of Accountables and Responsibles. The scope is defined and initial areas are identified in which traditional documentation will be replaced by IT information management.

A technology concept is developed concurrently. Once again, the following maxim applies: think big, start small. The concept should always account for an extensive expansion of the IT information management, but also offer options to become productive immediately and deliver results.

Team and stakeholder management is an ongoing process and should be set up in this phase. A changing set of stakeholders must be included and goals are continuously re-coordinated. An IT information management team encompasses all areas of IT and usually a whole lot of employees in each one of those. Keeping everyone informed, gamification and information managers actively seeking feedback are essential for success.

Implementation
Procedure models and methods are defined, the information structure is in place and IT information management is now fully integrated in the organisation and its workflows. The entire process of evaluating the existing information, generating new information as well as governance and publication is controlled by information managers.

The next step is to define a detailed structure for areas and required content. That also includes the definition of information sources and the selection of responsibles for all content.

Initial parts of the technology solution can now be implemented, while metadata and templates for the content are defined.

ITIM Governance: Ongoing IM
The final phase is less a project phase and more ongoing IT Information Management. New content is created and published, existing content is updated and maintained (create, maintain & publish).

Technology innovation is continuously added to the implementation and existing technologies are continuously adapted to changing requirements (technology maintenance).

Just like in CSI, Information Management is also subject to an ongoing review and improvement process (review & improve). New ideas and approaches are integrated, and the project/program organisation is adapted to changing requirements.

What are the important factors for effective and efficient information governance?

Essential cornerstones are project reporting and dashboards as well as intensive communication between information users and information providers. Triggers are put in place to support efficient information updates. These can be new stakeholders, changes in stakeholder goals, adjustments in supplier contracts, organisational changes or the implementation of IT changes.

Other important tools are click rates or reviews on websites or blogs and user comments.

Summary

And how did that integration project we described above turn out? The IT integration was a long-term success. You were quick and made excellent decisions based on intelligent information. You also used the project to replace traditional IT document management with IT Information Management. Your IT teams are now more agile, the new and innovative approaches create enthusiasm in the teams, and more and more IT areas are now being actively optimised on the basis of intelligent information.

Successful Information Management Projects in IT (PDF)
Intelligent Information: the Right Approach is Key

If you have any questions or ideas, simply write us an e-mail: marketing@avato.net

Imprint: 
Date: March 2019
Authors: Gregor Bister / Jennifer Gitt
Contact: marketing@avato.net
www.avato-consulting.com
© 2019 avato consulting ag
All Rights Reserved.

Analytics Offers These 3 Optimisation Scenarios

IT Information Management Optimisation

In many companies, the condition of their IT documentation can be summarised like this: Lots of information is missing or simply inaccessible. Any information that does exist is often ambiguous and unreliable and anyone dealing with IR (Information Retrieval) soon becomes lost in a quagmire of unsorted information consisting of all sorts of documents. A completely new approach is needed to get on top of this “digital landfill”.

There are two basic methods available to reduce the masses of information to a usable quantity: “create” and “capture”. The company covers its need for information by either “creating” it again from scratch or by sifting through the digital landfill to find and “capture” what is needed. Analytics primarily supports the “capture” approach, which we will focus on here.

Evaluation in Capturing

Even if sufficient documents/information exist and are made available, an analysis via sequential manual processing is rarely feasible. Can analytics be the answer here? Yes – specifically by way of evaluation, which is an essential component of capturing. In the following, we will present the strengths and weaknesses of three model approaches for using analytics for the purpose of evaluation:

The strengths and weaknesses of the models will be determined according to how well they answer the following question: What makes a document highly relevant, relevant or irrelevant in terms of a specific element of information? In our analysis, a document is relevant only if it contains relevant information.

In the first step, the effort involved in converting the document into elements of information is ignored. Analytics should therefore initially be limited to “evaluation” and should reduce the number of documents that then flow completely or partially into an information portal in a second, automatic/semi-automatic/manual step.

Results of the Automated Evaluation

What does the first step of automated evaluation entail? Analytics is used to automatically differentiate between potentially relevant and irrelevant information. In environments with a “historically grown” high number of documents in particular, we can expect significant results.

The final result will be a corpus of documents whose content will at least in part be relevant and should be transferred completely or partially into the information portal. 

In the subsequent automated/manual step, the information from the relevant documents is captured, adapted to the desired IT Information Management (ITIM) structure and reallocated. The next step entails the documents being checked by subject matter experts (SMEs) for correctness of content, and then forwarded to the responsible information manager for publication.

Analytics Methods

Three analytics methods can be used to check and evaluate information for specific criteria:

  1. Information Retrieval
  2. Supervised Machine Learning
  3. Unsupervised Machine Learning

Information Retrieval

Information Retrieval (IR) involves the creation of an index against which queries with various criteria can be run. A query is not just a collection of search terms but a set of values from the criteria catalogue. Various priorities can be set and criteria combined, so that a document may be irrelevant in one context but relevant in another. The index only has to be partially updated if the criteria catalogue changes. After the check, the information elements/documents are automatically sorted by hit probability.
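A minimal sketch of the idea in Python; the criteria and weights are invented for illustration: each document is scored against a weighted criteria query, and the corpus is then sorted by that score.

```python
# Hypothetical criteria catalogue: criterion -> weight in this query.
QUERY = {"mentions_backup": 3.0, "recently_reviewed": 2.0, "has_owner": 1.0}

documents = [
    {"name": "backup_concept.doc", "mentions_backup": True,  "recently_reviewed": True,  "has_owner": True},
    {"name": "canteen_menu.doc",   "mentions_backup": False, "recently_reviewed": False, "has_owner": False},
    {"name": "old_runbook.doc",    "mentions_backup": True,  "recently_reviewed": False, "has_owner": False},
]

def score(doc, query):
    """Weighted sum over the criteria the document fulfils."""
    return sum(weight for criterion, weight in query.items() if doc.get(criterion))

# Rank by score; where to draw the relevance line is left to the SME,
# which is exactly the weakness noted below.
for doc in sorted(documents, key=lambda d: score(d, QUERY), reverse=True):
    print(f"{score(doc, QUERY):4.1f}  {doc['name']}")
```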

Strengths:

  • Flexibility: This method can easily be adapted if documents are added, requirements change or new findings are made in terms of criteria
  • Scalability: Queries can be defined for all documents or tailored to individual subsets
  • Ease of creation: No training necessary, and transferring the criteria into appropriate representations is comparatively easy
  • Transferability: Engine and ranking algorithms can be applied to various indices, provided the criteria match

Weaknesses:

  • No clear response: There is no classification into “relevant” or “irrelevant”. In other words: a line must be drawn based on defined guidelines or at the discretion of the SME.
  • Application workload: The appropriate query must be developed – depending not only on content, but also structure in some cases; this requires expert knowledge for an appropriate weighting of the criteria.

Supervised Machine Learning

For the evaluation of documents with the help of “Supervised Machine Learning”, some of the documents are manually assigned to predefined categories. A threshold between the categories is then calculated and all other documents are automatically assigned. A significance level can be calculated for this assignment. 

Should the catalogue of criteria or the definition of a true positive change, the threshold must be recalculated. Rule of thumb: high variance in the analysed documents will impact negatively on precision. A subdivision into subcorpora can counteract that effect – this subdivision, however, requires a lot of effort and increases the risk of “overfitting”.
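As a minimal sketch of this approach, assuming scikit-learn is available (the tiny corpus is invented): a handful of manually labelled documents train a classifier, which then assigns all remaining documents along with a probability that can serve as a significance indicator.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Manually labelled training corpus (1 = relevant, 0 = irrelevant).
train_texts = [
    "backup and restore procedure for the production database",
    "disaster recovery runbook for the data centre",
    "invitation to the summer party",
    "canteen menu for calendar week 23",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

# All other documents are categorised automatically, with a probability.
new_texts = ["nightly backup job configuration", "parking rules for visitors"]
for text, prob in zip(new_texts, classifier.predict_proba(vectorizer.transform(new_texts))[:, 1]):
    print(f"P(relevant) = {prob:.2f}  {text}")
```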

Strengths:

  • Clear classifications: Exact limits and significance levels
  • No application workload: The entire corpus is categorised automatically
  • Approach optimisation: Irrelevant criteria are quite easily detected
  • Overview of the inventory: Criteria can also be viewed individually, e.g. “70% of the documents are checked too rarely to be reliable.”

Weaknesses:

  • Creation workload: A training corpus must be collated (expert knowledge required!) and then categorised manually by SMEs. 
  • Corpus-specific: The threshold must be recalculated individually for each corpus and after any major change to the corpus/definition of true positive; only individual elements may be transferable
  • CPU-intensive: ML (Machine Learning) requires a lot of processing power

Unsupervised Machine Learning

For analysis with the help of “Unsupervised Machine Learning”, a way must be found to teach the computer to recognise the difference between relevant and irrelevant documents itself. The criteria from the criteria catalogue should serve as points of reference. A representation that enables ML must therefore be found for each criterion. Examples: What do relevant documents have in common? Are they reviewed with similar frequency? Do they contain no personal contact information?…
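A minimal sketch of this approach, again with invented feature representations: each document is reduced to numeric criteria (here: review frequency and the presence of personal contact data) and then clustered; interpreting the resulting groups remains a manual step.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per document: [reviews per year, contains contact data (0/1)].
features = np.array([
    [4.0, 0.0],  # frequently reviewed, no personal data
    [3.0, 0.0],
    [0.0, 1.0],  # never reviewed, full of contact details
    [0.5, 1.0],
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)  # e.g. [0 0 1 1] – which cluster is “relevant” must be interpreted manually
```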

Strengths:

  • No training corpus: no documents have to be manually categorised in advance
  • Precise classifications: Differentiation according to level/type/reason/area of relevance…

Weaknesses:

  • High creation workload: The development of an appropriate representation of the relevance criteria is complex, time-consuming and requires experience & expert knowledge
  • High interpretation workload: The recognised groups must be interpreted manually and will change with each run
  • No transferability: Categorisation thresholds and interpretation cannot be transferred to other corpora
  • CPU usage: Requires even more computing power than Supervised Machine Learning

Conclusion

The use of analytics for the evaluation of documents in capturing is recommended whenever there is a high number of documents to review – and particularly when documentation has “grown historically” without maintenance and has therefore become unmanageable. The prerequisite is always that sufficient documents are available at the time of analysis. In order to avoid nasty surprises, companies should always check whether one of the following cases applies before implementing analytics:  

  • Low number of documents
  • Low number of documents and the ones available are in multiple languages
  • Mix of various document types: text, table, diagram, audio, video

If one of these cases applies, thresholds for computer-aided analysis cannot be calculated and the appropriate analytics method cannot be selected.

The conclusion after comparing the three approaches: Supervised Machine Learning is in many cases the most suitable approach. Clear classifications and the low levels of expert knowledge needed for project implementation mean that this method is preferable to IR (Information Retrieval). The effort needed to collate a training corpus is still significantly lower than checking all documents manually. 

Publications on IT Information Management:

“Cloud Transformation und Operation”: Cloud transformation and operation – an immense challenge without detailed knowledge of IT

“Intelligent Information: 3 Conditions”: Where does the concept come from, why do we need intelligent information, and what does it really mean?

“Business Case ITIM (IT Information Management)”: Faster, better, cost-effective – this is why ITIM always pays off

“Simplify IT Information Management”: IT Information Management challenges & methods

“(Un)informed about IT Risks?”: IT Information Management as a basis for Enterprise Risk Management

“What can Business IT Learn From Wikipedia?”: Wikipedia’s approach and methods as the secret to success & conclusions for ITIM

“Making the Right Decisions Fast”: A concept for building up knowledge about corporate IT

Analytics Offers These 3 Optimisation Scenarios (PDF)
IT Information Management Optimisation

Please send us an e-mail for questions or feedback: marketing@avato.net

Imprint

Date: May 2018
Author: Isabell Bachmann
Contact: marketing@avato.net
www.avato-consulting.com
© 2018 avato consulting ag
All Rights Reserved.