Is this Microsoft’s worst mistake ever?

by Frank 30. November 2013 06:00

I run a software company called the Knowledgeone Corporation that has been developing application solutions for the Microsoft Windows platform since the very first release of Windows. As always, our latest product offering, RecFind 6 version 2.6, has to be tested and certified against the latest release of Windows; in this case that means Windows 8.1.

Like most organizations, we waited for the Windows 8.1 release before upgrading our workstations from Windows 7. The only exceptions were our developers’ workstations, because we bought them new PCs with Windows 8 pre-installed.

We are now testing the final builds of RecFind 6 version 2.6 and have found a major problem. The problem is that Microsoft in its infinite wisdom has decided that you can’t install Windows 8.1 over a Windows 7 system and retain your already installed applications.

The only solution is to install Windows 8 first and then upgrade Windows 8 to Windows 8.1. However, if you are running Windows 7 Enterprise this won’t work either and you will be told that you will have to reinstall all of your applications.

I am struggling to understand Microsoft’s logic.

Surely Microsoft wants all its customers to upgrade to Windows 8.1? If so, why has it ‘engineered’ the Windows 8.1 upgrade so customers will be discouraged from using it? Does anyone at Microsoft understand how much work and pain is involved in re-installing all your applications?

No, I am not kidding. If you have a PC or many PCs with Windows 7 installed, you are going to have to install Windows 8 first in order to keep all of your currently installed applications. Then, after spending many hours installing Windows 8 (it is not a trivial process), you must spend more precious time upgrading to Windows 8.1. Microsoft has ensured that you cannot go directly from Windows 7 to Windows 8.1.

Of course, if you are unlucky, you could be living in a country where Microsoft has blocked the downloading of Windows 8, like Australia. Now you are between a rock and a hard place. Microsoft won’t let you install Windows 8 and if you install Windows 8.1 you face days or weeks of frustrating effort trying to re-install all of your existing applications.

 

Here are some quotes from Microsoft:

“You can decide what you want to keep on your PC. You won't be able to keep programs and settings when you upgrade. Be sure to locate your original program installation discs or purchase confirmation emails if you bought programs online. You'll need these to reinstall your programs after you upgrade to Windows 8.1—this includes, for example, Microsoft Office, Apache OpenOffice, and Adobe programs. It's also a good idea to back up your files at this time, too.”

“If you're running Windows 7, Windows Vista, or Windows XP, all of your apps will need to be reinstalled using the original installation discs, or purchase confirmation emails if you bought the apps online.”

If the management at Microsoft wanted to ensure the failure of Windows 8.1 they couldn’t have come up with a better plan than the one they have implemented. By making Windows 8.1 so difficult to install they have ensured that its customers will stick with the tried and proven Windows 7 for as long as possible.

Can anyone at Microsoft explain why they thought this was a good idea?

Do you really need a Taxonomy/Classification Scheme with a Records Management System?

by Frank 26. October 2013 06:00

Background

Classification schemes are a way to group or order data; the objective being to group ‘like’ objects together. Classification schemes have been in use for tens of thousands of years, probably beginning when man first realized that there were different types of animals and plants.

We use classification schemes both to make things easier to find and to add value to a group of objects. By adding value I mean that a classification (describing a group) may provide more information about the members of that group than is obvious from an analysis of a single member; this could be referred to as semantics.

Classification schemes are used in all walks of life, for example in business, in science, in academia and in politics. Are you a liberal or a conservative? Is it a mammal? If it is, is it a marsupial, a monotreme or a placental mammal? This last example illustrates the usual hierarchical arrangement of classification schemes.

In business, we have long used classification schemes to order business documents, that is, records of business transactions. We are all familiar with file folders and filing cabinets; these things are tools of a classification scheme. They make implementing a classification scheme easier as do numbering systems, colors, barcodes and Lektrievers.

With the first commercial availability of mainframe computers in the early 1960s came our first attempts to computerize filing systems. It was also in the 1960s that we saw the first text indexing systems and the first sophisticated search algorithms.

The advent of text indexing and search algorithms allowed us to do a much better job of classifying data but more importantly, they allowed us to do a much better job of finding data.

Let’s not get in a debate about terminology and acronyms

Our industry (information management, to use an all-encompassing term) is often its own worst enemy. It creates terms and acronyms at will, with both confusing and overlapping definitions, and then wonders why normal end-users exhibit first bewilderment and then disinterest. Let’s look at a few examples: RIMS, RMS, DMS, EDRMS, IAMS, CMS, ECM and KMS.

Do you realize that the process of records management is part of each of the preceding acronyms?

For my part I will stick with my old friend the world records management standard, ISO 15489. It tells us that records are evidence of a business transaction and that records are in any form including paper, electronic documents and emails (I know emails are electronic documents but the world generally differentiates them because emails are ‘different’).

So, as far as I am concerned, the term Records Management System (RMS) includes everything we do and is easily recognized and understood, so this is the term and acronym I will use in this paper.

Browsing versus searching

Classification systems are very good at making it easier for us to find information by browsing but not very helpful when we are searching.

Most classification systems require you to first ‘browse’ before finding the exact information you want; you usually have to examine multiple objects before you find the one you want. But this is what classification systems are very good at: because they organize data in a logical (to a human being) way, we usually know where to begin looking. This is why a classification scheme works so well with a manual filing system (multiple cabinets or multiple shelves of file folders).

Classification schemes are great for physical data and, I would say, absolutely necessary for physical data; how else would you organize fifty-thousand file folders (containing seven and a half million pages) in a huge filing room with hundreds of shelves?

However, with computers I don’t need to browse through multiple objects to find the one I want. By using techniques more appropriate to the computer than the filing room, I can search for and find exactly what I want almost instantly. I do not need to leaf through the file folder, I can go directly to the page or directly to the word. I can use the power of the computer.

The following statement will probably be seen as heresy by most practicing records managers, but we actually don’t need a classification system (taxonomy) when computerizing records. We just need a way to index and then search for information.

We need to organize our data so an ordinary end-user can easily find what they need without having to be a trained, professional records manager.

Indexing versus classifying

Now I know my interpretation of these two terms will not thrill everyone but the differentiation is an important part of my hypothesis.

Let’s start by looking at two kinds of books: a reference book and a work of fiction. Both have tables of contents (a classification system usually called a TOC) but only one (the reference book) usually has an index.

The TOC for the reference book is both useful and often used. The TOC for the work of fiction is neither useful nor much used (readers rarely need more than a bookmark).

The TOC for the reference book is a way to organize information into a logical form, grouping ‘like’ information together in chapters and sections. A TOC for the work of fiction is just a list of chapters; it serves little or no purpose for the typical ‘end-user’, the reader.

All the reader of a fiction book really needs is two things: a bookmark and a ‘memory’ of the author, title and cover combination so he/she doesn’t accidentally buy it again at the airport bookshop before that dreaded long and boring flight.

The reader of the reference book actually needs both the TOC and the index for browsing (the TOC) and searching (the index).

A work of fiction doesn’t usually have nor need an index because the end-user doesn’t require it. A reference book usually has an index and it is often used to go direct to a page (or pages) and locate something very specific.

Drawing parallels with our broader topic, some information needs both a classification system and an index, some information needs just an index and some doesn’t require either (e.g., works of fiction).

Generally speaking, scientific collections require a classification system (a scientific taxonomy); for example, the study of plant species and the study of animal species (e.g., using a phylogenetic classification system). Scientists simply could not communicate with each other without having a detailed and exact classification system in place. But, most end-users are not scientists; they are just people trying to find the best place to store something and want to find it again with the least amount of effort and pain.

My contention is that we can solve all ‘content management’ and records management needs with a solution based on the application of a sensible, simple and self-evident (read that as easy to use or human-oriented) indexing system plus the required searching capabilities (i.e., covering both Metadata and full text). There is a better way.

What indexing system?

Whenever I consult with customers who are contemplating the capture and organization of data (hopefully into information) I always give the same advice. That is, “When you are thinking about how to index data first think about how you will find it later.” Ask this key question of your end-users, “When you are about to search for information what do you usually know about it?” For example:

  • Do you know the last name?
  • Do you know the first name?
  • Do you know the date of birth?

A good indexing scheme reflects real life usage of the system; it reflects how ordinary humans work and ‘see’ information. Put simply, it indexes the information people will later need to search on. It indexes the information people understand and are comfortable with because it is self-evident.
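As a minimal sketch of this principle (the field names and records here are invented for illustration), an index keyed on exactly what users know at search time lets them find a record from whichever fragment they remember:

```python
# A sketch of "index what people will later search on".
# Field names (last_name, first_name, date_of_birth) are illustrative only.

records = [
    {"last_name": "smith", "first_name": "anne", "date_of_birth": "1970-03-12", "file": "F-1001"},
    {"last_name": "jones", "first_name": "bob", "date_of_birth": "1965-11-02", "file": "F-1002"},
]

def find(**known):
    """Return every record matching whatever the user happens to know."""
    return [r for r in records
            if all(r.get(field) == value for field, value in known.items())]

# A user who only remembers the last name still gets a hit:
print(find(last_name="smith"))
```

The point of the sketch is that the search interface mirrors the questions above; no knowledge of a classification hierarchy is required.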

Indexing Emails

An email is usually described as an unstructured document (the same way a Word or Excel document is described as being ‘unstructured’) but in fact it does have structure. Even better, everyone is familiar with an email’s structure so we have very little to teach end-users; that is, we have a simple and self-evident ‘natural’ set of Metadata items to index.

  1. Date of email
  2. Sender
  3. Recipient
  4. CC
  5. BCC
  6. Subject
  7. Text of the body of the email
  8. Text of any attachments

For any normal end-user trying to find an email, this is how they would envision an appropriate search. They wouldn’t care that the email has been classified down to six levels of hierarchy using the world’s most sophisticated Business Classification Scheme (BCS).

Understanding what end-users typically ‘know’ before they do a search determines what elements you have to index. This is the key to implementing a successful indexing system.

The above 8 elements of an email are self-evident insomuch as, “Of course I need to be able to search on the sender or recipient or subject….”
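To show how ‘natural’ this set of Metadata really is, here is a sketch using only Python’s standard library; the sample message is invented, a real capture system would pull mail from the server, and extracting attachment text is stubbed out:

```python
# A sketch of capturing the eight 'natural' index elements of an email
# with Python's standard library email parser. The message is a made-up
# example; attachment text extraction is out of scope here.
from email import message_from_string
from email.utils import parsedate_to_datetime

raw = """\
Date: Mon, 25 Nov 2013 09:30:00 +1000
From: f.mckenna@k1corp.com
To: support@example.com
Cc: sales@example.com
Subject: Licence renewal

Please find the renewal details attached.
"""

msg = message_from_string(raw)
index_entry = {
    "date":      parsedate_to_datetime(msg["Date"]),
    "sender":    msg["From"],
    "recipient": msg["To"],
    "cc":        msg["Cc"],
    "bcc":       msg["Bcc"],        # usually absent on received mail
    "subject":   msg["Subject"],
    "body":      msg.get_payload(),
    # attachment text would be extracted per MIME part in a real system
}
print(index_entry["subject"])
```

Every element is already sitting in the message headers; the indexing system just has to capture them.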

Indexing Electronic Documents

Now let’s look at ordinary electronic documents (i.e., not emails) because they are much less structured. We all know there are ways to add a common structure using features of MS Office like the document information dialog box (asking for keywords, etc.), templates and smart tags, but these things are rarely and inconsistently used.

With shared drives we usually find some form of ‘evolved’ classification system because managing electronic documents in shared drives is akin to managing millions of pieces of paper in tens of thousands of file folders in hundreds of filing cabinets. Unfortunately, the good intentions and purity of design of the original architects of the shared drives folder/sub folder naming conventions (a classification system) are soon corrupted as users make uncoordinated changes and the structure soon becomes unwieldy and incomprehensible.

In my opinion shared drives are OK for the creation of documents (i.e., a work area) but not OK for the management of documents. In fact I would say shared drives are absolutely hopeless for the management of documents as history and practice will attest.

Once again we need an appropriate indexing system and once again we need to ask, “What do people know at the time of the search?” For example:

  1. Original filename
  2. Original path/filename
  3. Type/suffix – e.g., .DOC, .XLS, .PDF, etc
  4. Author
  5. *Subject
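Most of these elements can be harvested straight from the file system, as this sketch shows (the path is invented; the Author element usually has to come from application metadata or the capture operator, so it is left as a placeholder):

```python
# A sketch of harvesting the self-evident index elements of an electronic
# document from its path. PureWindowsPath is used so Windows-style paths
# parse the same way on any platform; the example path is made up.
from pathlib import PureWindowsPath

def file_index_entry(path_str):
    p = PureWindowsPath(path_str)
    return {
        "original_filename": p.name,            # 1. Original filename
        "original_path":     str(p),            # 2. Original path/filename
        "type":              p.suffix.upper(),  # 3. Type/suffix, e.g. .XLS
        "author":            None,              # 4. Not in the file system
        "subject":           None,              # 5. Supplied at capture time
    }

print(file_index_entry(r"C:\reports\business_plan_2010.xls"))
```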

Metadata and the Dublin Core

Let me quote from the Dublin Core website:

http://dublincore.org/

“The Dublin Core Metadata Element Set is a vocabulary of fifteen properties for use in resource description. The name "Dublin" is due to its origin at a 1995 invitational workshop in Dublin, Ohio; "core" because its elements are broad and generic, usable for describing a wide range of resources.”

To quote Wikipedia:

http://en.wikipedia.org/wiki/Dublin_Core

“It provides a simple and standardized set of conventions for describing things online in ways that make them easier to find. Dublin Core is widely used to describe digital materials such as video, sound, image, text, and composite media like web pages.”

The Simple Dublin Core Metadata Element Set (DCMES) consists of 15 elements.

  1. Title
  2. Creator
  3. Subject
  4. Description
  5. Publisher
  6. Contributor
  7. Date
  8. Type
  9. Format
  10. Identifier
  11. Source
  12. Language
  13. Relation
  14. Coverage
  15. Rights

To my mind the Dublin Core is an excellent set of elements for describing almost any ‘record’ because it is simple and appropriate to both computers and ‘normal’ end-users. As a professional, I like the elegance of the Dublin Core.

I also like the basic principle because it fits in with my hypothesis. That is, there is a better way to store, index and find records than a complex and unwieldy Taxonomy.

The Full Solution?

  • We need an application that stores documents of all types, i.e., all types of content.
  • We need an application that indexes both Metadata and full text.
  • We need an application with a customer configurable Metadata model.
  • We need an application that allows you to search on both Metadata and full text in a single search.
  • We need a search that combines Boolean and numeric operators, e.g., AND, OR, NOT, =, <, >, etc.
  • We need a ‘standard’ Metadata definition (a Class if you will) that includes a simple (not more than 20 in my estimation) set of data elements that includes all of the elements necessary to index all of the types of documents (including file folders and paper) that you manage.
  • We need an application that includes all types of data capture, e.g., from the file system, from the native application, from a scanner, etc.
  • We need an application with a comprehensive security system.
  • We need an application with all reporting options, e.g., both standard reports and ad hoc reports.
  • We need an application with a configurable audit trail.
  • We need an application with comprehensive import and export capabilities.

 

The standard Metadata definition (Master Metadata Class)

I have come up with a limited set of elements that I believe can be used to index and find any type of record, paper or electronic. I have borrowed heavily from the Dublin Core because it makes good sense to do so; there is no need to reinvent the wheel.

  1. Title: A name given to the record. Typically, a Title will be a name by which the record is formally known. Text, e.g., "Business Plan for 2010"
  2. Author(s): The sender or author, e.g., Mark Twain or f.mckenna@k1corp.com
  3. Dated: The original date of the document or its published date
  4. Date Received: Date received by the recipient or the recipient’s organization, whichever is the earlier
  5. Original Name: e.g., the filename or path\filename for electronic documents, such as C:\franks stuff\sample.xls
  6. Primary Identifier: An unambiguous reference to the record within a given context, e.g., the file number
  7. Secondary Identifier: An unambiguous reference to the record within a given secondary context, e.g., the case number, contract number or employee number
  8. Barcode: Barcode number or RFID tag
  9. Subject: The topic of the record. Typically, the subject will be represented using keywords or key phrases. Recommended best practice is to use a controlled vocabulary.
  10. Description: An account of the record. Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the record.
  11. Content: Words or phrases from the text content of the main document and attached documents
  12. Contents: Description of contents if the document is a container, e.g., an archive box
  13. Recipient(s): Addressed to, sent to, etc. People or organizations.
  14. CC recipient(s): CC and BCC recipients
  15. Publisher: An entity responsible for making the record available; the company or organization that either published the document or employs the author
  16. Type: The nature or genre of the record, usually from a controlled list, e.g., complaint, quotation, submission, application, etc.
  17. Format: The file format, physical medium, or dimensions of the record, e.g., Word, Excel, PDF
  18. Language: e.g., English, French, Spanish
  19. Retention: The retention code determining the record’s lifecycle
  20. Security: Access rights, security code, etc.

 

My contention is that by using an ‘index set’ like the above 20 Metadata elements you can index, manage and retrieve any ‘record’ regardless of form and content.
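As a sketch only (the field names are my own renderings of the 20 elements; this is not RecFind’s actual schema), the Master Metadata Class maps naturally onto a single record structure:

```python
# A sketch of the 20-element Master Metadata Class as one Python
# dataclass. All fields are optional because no single record type uses
# all of them: a file folder has a barcode, an email has recipients, etc.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MasterMetadata:
    title: Optional[str] = None
    authors: List[str] = field(default_factory=list)
    dated: Optional[str] = None
    date_received: Optional[str] = None
    original_name: Optional[str] = None
    primary_identifier: Optional[str] = None
    secondary_identifier: Optional[str] = None
    barcode: Optional[str] = None
    subject: Optional[str] = None
    description: Optional[str] = None
    content: Optional[str] = None
    contents: Optional[str] = None
    recipients: List[str] = field(default_factory=list)
    cc_recipients: List[str] = field(default_factory=list)
    publisher: Optional[str] = None
    type: Optional[str] = None
    format: Optional[str] = None
    language: Optional[str] = None
    retention: Optional[str] = None
    security: Optional[str] = None

# The same class describes an email, a Word document or an archive box:
email_record = MasterMetadata(title="Licence renewal",
                              authors=["f.mckenna@k1corp.com"],
                              type="email", format="MSG", language="English")
print(email_record.title)
```

One structure, twenty elements, any form of record; that is the whole argument in code.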

What about all the standards ‘out there’?

There is a plethora of local, state, federal, industry and international standards pertaining to the management of records. Examples are DoD 5015, MoReq2, Dublin Core, ISO 15489, VERS etc and literally thousands of standards for Metadata.

The problem with most of these standards is that they are extraordinarily difficult to read and understand (even the Dublin Core documentation can be heavy going). I would draw a parallel with the times when the Bible was in Latin but Christians were supposed to order their lives by its teachings; the problem was that only about 0.025% of Christians spoke Latin. Ergo, how do you order your life by a book you can’t read?

My assertion is that most records managers do not fully understand the standards they are charged with enforcing.

The problem isn’t with the records managers; it is with the people who write the standards. The standards are not written for records managers, they are written for academics and technical people (i.e., systems engineers who are experts in XML).  Just like the Latin Bible, they are not written in the language of the intended user.

And even when you do think you have a grasp of the fundamentals there are always multiple points to be clarified (as to the exact meaning) with the standards authority.

What about Retention/Disposal schedules?

This should probably be the subject of another paper because retention schedules have also become way too complex, unwieldy and difficult to understand and apply.

The question will be, “How can I do away with my classification system when my retention codes are linked to it?”

I have looked at hundreds of retention schedules and every single one has been way too complicated for the organization trying to use it. Another problem is that very few of the authorities that compile retention schedules do so with computers in mind. This means that we end up with lots of very vague conditional statements that are almost impossible to computerize.

Most retention schedules are written for archivists to read, not for computers to process. This is the heritage of retention schedules; they assumed an appraisal process by a trained and expert archivist.

The Continuum model or ‘Whole of Life’ model or File Plan model all assume we will allocate a retention code at the time the record is created, not during a later appraisal process. This made much more sense and allowed us to better manage the record throughout its life cycle. However, many such schemes also linked the retention code to a classification term or embedded the retention codes within the classification system. This of course made the classification system even more complex and difficult to understand and apply.

To my mind no organization needs more than ten retention codes (shortest period, longest period and eight in between) and three life cycles (e.g., active, inactive, destroyed). This is also probably heresy to a lot of the records management profession but, I would ask them to think about the proposition that something that was entirely appropriate to the manual world is not necessarily entirely appropriate to the computerized world. There is an easier and simpler way to manage retention and there is no need to embed retention codes into the classification system just as there is no need for a classification system in any modern, computerized records management system.
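To make the simpler model concrete, here is a sketch of ten retention codes and three life cycles; the codes, periods and the one-year inactive grace period are invented for illustration, not taken from any real retention authority:

```python
# A sketch of a ten-code, three-state retention model. The codes and
# periods below are made up; a real schedule would supply its own.
from datetime import date

RETENTION_YEARS = {"R1": 1, "R2": 2, "R3": 3, "R4": 5, "R5": 7,
                   "R6": 10, "R7": 15, "R8": 25, "R9": 50, "R10": 100}
LIFE_CYCLE = ("active", "inactive", "destroyed")

def state(created, code, today):
    """Active for the retention period, then inactive, then destroyed."""
    keep_until = created.replace(year=created.year + RETENTION_YEARS[code])
    destroy_after = keep_until.replace(year=keep_until.year + 1)  # grace year
    if today < keep_until:
        return "active"
    if today < destroy_after:
        return "inactive"
    return "destroyed"

print(state(date(2010, 1, 1), "R3", date(2013, 6, 1)))
```

Because the whole schedule is a ten-row lookup table, a computer can apply it unconditionally; there are no vague conditional statements for an archivist to appraise.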

What about File Folders and Archive Boxes?

This is the classic stumbling block. This is when the records manager tells you that all the standards require you to use the same taxonomy for emails and electronic documents that he/she uses for traditional file folders and archive boxes.

You need to explain that the classification from the manual paper handling world is inappropriate to the computerized world, that it is an anachronism. You need to explain that all it will add is complexity, massive cost, confusion and a seriously negative attitude to end-users. You should say it is time to discard techniques and tools from the eighteenth century and adopt techniques from the twenty-first century. You should say you have a much better way. Then you should probably duck and run. Failing all else, blame me and give them my email address.

 

 

What is happening with the Tablet market?

by Frank 18. August 2013 06:00

I run a software company called the Knowledgeone Corporation and our main job is to provide the tools to capture, manage and find content. As such, we need to be on top of the hardware and software systems used by our customers so that we can constantly review and update our enterprise content management products like RecFind 6 so that they are appropriate to the times and devices in use.

I have spoken in previous Blogs about tablets and form factors and what is needed for business so other than providing the following links, I won’t go over old ground.

Will the Microsoft surface tablet unseat the iPad?

The PC is dead, or is it?

What will be the next big thing in IT?

Could you manage all of your records with a mobile device?

Why aren’t tablets the single solution yet?

The real impact of mobilization – How will it affect the way we work?

Mobile and the Web – The real future of applications?

Form factor – The real problem with mobile devices doing real work

Since my last Blog on the subject we have all seen Windows RT tablets come and go (there will be a big landfill of RT tablets somewhere) and we are now all watching the slow and painful demise of BlackBerry. In both cases we have to ask how big, super-clever companies like Microsoft and BlackBerry could get it so wrong. Just thinking about the number of well-educated and highly experienced marketing and product people they have, it is inconceivable that they couldn’t work out what the average Joe in the street could have told them for free.

Then let’s also think about HP’s disastrous experiment with its TouchPad tablet (another e-waste landfill) and it becomes apparent that some of the largest, richest and best credentialed companies in the world can’t forecast what will happen in the tablet market.

In my opinion the problem all along, apart from operating system selection (iOS or Android?), has been matching needs to form factor and processing power. For example, no one wants a 12 inch phone and no one wants to write and read large documents on a 3 inch screen. This is why most of us still carry around three devices instead of one; a phone, a tablet and a laptop. This is just plain silly, what is the point of a small form factor device if I have to supplement it with a large form factor device? Like most other users, I really just want to carry around one device and I want it to have the capabilities and processing power for all the work I do.

It is for this reason that I believe the next big thing in the tablet market will be based on phones, not tablets. I envision slightly larger and much more powerful phones with universal connectors (are you listening Apple?) and docking capability. I would also like it to have a minimum of 4G and preferably 5G when available.

I want to be able to use it as a phone and when I get to my office I want to connect it to my keyboard, screen and network. I want to be able to connect it to a projector when visiting customers and prospects and I want a dynamically sizing desktop that knows when to automatically adjust the display to the form factor being viewed. That is, I want a different desktop for my screen at work than I want on the phone screen when travelling.

This brings up an interesting issue about choice of operating system as Windows owns about 95% of all business PCs and servers. I have previously never thought about buying a Windows Phone (I had one once a few years ago with Windows CE and it was awful) but my ideal device is going to have to run on the Windows operating system to be really usable in my new one-device paradigm.

I wonder why Microsoft didn’t think of this?

Outsourcing does not save anything, it always costs

by Frank 27. July 2013 06:00

I run a software company called Knowledgeone Corporation and I receive many unsolicited contacts a week from mainly Indian companies wanting me to outsource RecFind 6 development and support to them. They must live in a different world to me (which of course they do) because I hate dealing with outsourced anything, particularly help desks.

I have promised my customers I will NEVER outsource support (because they deserve better) and I have promised myself that I will NEVER outsource development (because I deserve better).

I regularly converse with my peer group in the industry and hear the horror stories from those that have had their development and/or support outsourced to an Indian or worse, Filipino location. I hear the same horror stories time after time.

The senior executive who instigated the outsourcing may well be getting bigger bonuses (or even secret commissions) and basking in his/her ‘success’, but down deep in the organization, where the real work gets done, no one is happy. The customers are also not happy, and most would happily change suppliers if they could find an organization that hadn’t outsourced its development and support; but that is a tough ask in today’s world, where greedy and corrupt executives care much more about themselves than their customers and staff.

I have been managing software developers for over 30 years and I know for a fact that you cannot run a development team from thousands of miles away. My programmers sit outside my office and I talk to them multiple times every day and also hold regular formal review meetings. I also have them do peer reviews of work in progress and I regularly amend the specifications as we ‘discover’ roadblocks on the way to completion. They come and talk to me throughout the day and ask clarification questions or suggest better ways to do something.

The point is software development is an interactive, ‘living’ process that relies on the open exchange of ideas and a healthy interaction between team members. As the guy who writes most of the specifications I regard myself as a team member and most importantly, do not believe I am infallible. I need the interaction and so do the programmers. What we do is very, very complex and no one, even someone as experienced as me, can get the specification one-hundred percent correct on day one. Ego has no part in software development because this process is always one of cooperation, shared intelligence and open dialogue; the team produces the result, not the leader.

My peer group tells me that outsourced software development does not work even when you go to ridiculous lengths to make sure the specification is as clear and as unambiguous as possible. By ridiculous lengths I mean doing things like actually coding the solution in process diagrams of pseudo code and creating all the algorithms and decision tables as addendums. The code that comes back is always immature, unfinished and lacking in core logic and architecture. It always has to be massaged by the local guys, and even then it often goes to the customer in an unacceptable state. The ‘cost’ to the long-suffering local guys is high, as is the degree of frustration. The monetary cost is rarely calculated because it involves additional time, lost time, delayed releases and much snarling and grinding of teeth, most of which is invisible to the person on top who orchestrated the outsourcing, just as it is invisible to other board members and shareholders. However, internal disgruntlement like this is a cancer, and like all malignant cancers it will eventually grow, spread and destroy.

It is a similar situation with outsourced support when the poor service level erodes customer loyalty over time and ensures a high customer churn rate. Again however, these ‘costs’ are rarely visible to board members and shareholders until it is too late.

I am pleased to see a new trend (in the USA at least) whereby companies are bringing back outsourced work (is it called insourcing?), employing locals and making their customers, staff and local city/town a lot happier. It is a great and welcome trend but so far it is just a trickle and there are still far more companies outsourcing than insourcing.

In my business at least (software development), outsourcing will NEVER produce the desired result if that desired result is focussed on quality rather than cost. Nor will you ever really save money because of the hidden and ignored costs that always accompany outsourced software development. I guarantee however that you will save money by insourcing because as long as you select your local team carefully you will be able to accomplish your work with just one quarter the number of programmers you had in India; hire quality, not make up numbers. Five experienced Americans or Australians or Canadians will easily do the work of twenty Indian interns and produce infinitely better code in the process. Never has that old adage been truer, “You get what you pay for”.

You will lose money, you will lose great staff and you will eventually lose customers if you put cost ahead of quality.

Bring it back in-house and retain your reputation, your best staff and your customers; it is a no-brainer if you really do have the best interests of your company, your staff and your customers at heart.

Why product training is so important

by Frank 23. June 2013 06:00

I run a company called the Knowledgeone Corporation that produces a software application called RecFind 6 that is used to provide records management, document management, workflow, document imaging, email management and general business process management functionality. Every installation is different because we configure RecFind 6 to the exact requirements of each customer. All installations include some form of business process management and many include a reasonable degree of complexity, especially, when integrating to other systems.

We are always proposing to new and existing customers and negotiating contracts, and the one item in the pricing summary that is always under attack is training. As well as questioning the need for face-to-face training, many customers also try to reduce the cost by training just a core group that will then train the rest of the staff who will use the new system.

I try to explain that effective and complete training is fundamental to the success of the project; that training isn’t a cost, it is an investment in success. I rarely win.

I also try to convince my customers of the importance of ongoing training for new releases and new employees but I am rarely successful.

I try to explain that cutting costs on training is a sure fire way to ensure that the project will never be as successful as it could be. I rarely win this argument either.

And finally, I always recommend that an IT person attend the training course because his/her services will be needed by the application administrator throughout the year. This rarely happens.

Yet, time after time and in example after example, projects (not just ours) are significantly less successful than they should be because someone in management decided to cut costs by skimping on training; by not training operational staff in how to use the product in the most cost-effective and productive way possible.

If you skimp on training you are almost certainly dooming your project to failure.

Lack of knowledge on how to best use a product is an insidious cancer. The project may begin with a big bang and lots of congratulations but deep within your organization the cancer has already started to grow. “I don’t like this product.” “This product is too hard to use.” “I can’t find anything with this product.” “My staff don’t understand this product.”

By year two, many people and departments simply don’t use the product any more. By year three there is a concerted push to find a replacement for this product that “is too hard to use. No one understands it.” The replacement project manager or application owner, who hasn’t been trained, is unable to address the complaints and soon also decides that the problem is with the product. It would be a bad career move to decide anything else.

In year four the organization begins looking for a replacement product. In year five, at great expense they select a replacement product and then lower costs by skimping on training. The cycle starts again.

If you skimp on training and re-training your project is doomed to failure.

How many expensive failures does it take before we learn the lesson?

Training is an investment in productivity, not a cost.

Why don’t you make it easy for end users to find what they need?

by Frank 8. June 2013 06:00

Many records managers and librarians still hold on to the old paradigm that says if a user wants something they should come through the information management professional. They either believe that the end user can’t be trusted to locate the information or that the task is so complex that only an information professional can do it in a proper and professional manner.

This approach to tightly controlled access to information has been with us for a very long time; unfortunately, not always to the benefit of the end user. It is often interpreted as a vice-grip on power rather than a service by the end users.

In my experience (many years designing information and knowledge management solutions), most end users would like the option of searching for themselves and then deciding whether or not to request assistance.

Of course it may also be true that the system in use is so complex or so awkward to use that most end users (usually bereft of training) find it too hard to use and so have to fall back on asking the information professional. However, if this is the case then there will invariably be a backlog of requests and end users will be frustrated because they have to wait days or weeks for a response. In this environment, end users begin to feel like victims rather than valued customers or ‘clients’.

The obvious answer is to make it easy for end users to find what they are looking for but this obvious answer seems to escape most of us as we continue to struggle with the obscure vagaries of the existing system and an often impenetrable wall of mandated policies, processes and official procedures.

If we really want a solution, it’s time to step outside of the old and accepted model and provide a service to end users that end users actually want, can use and appreciate. If we don’t take a wholly new approach and adopt a very different attitude and set of procedures then nothing will improve and end user dissatisfaction (and anger) will grow until it reaches the point where they simply refuse to use the system.

End users are not stupid; end users are dissatisfied.

One of the core problems, in my experience, is a failure to accept that the requirements of the core, professional users are very different to the requirements of the end users. At the risk of oversimplifying, end users only need to know what they need to know. End users need a ‘fast-path’ into the system that allows them to find out what they need to know (and nothing more) in the shortest possible time and with the absolute minimum number of keystrokes, mouse-clicks or swipes.

End users need a different interface to a system than professional users.

This isn’t because they are less smart; it is because the ‘system’ is just one of the many things they have to contend with during a working day, not their core focus. They don’t have the time (or the interest) to become experts, and nor should they have to become experts.

If end users can’t find stuff it isn’t their fault; it is the system’s fault.

The system, of course, is more than just the software. It is the way menus and options are configured and made available; it is the policies and procedures that govern access and rights to information; it is the attitude of those ‘in power’ to those who are not empowered.

If you want happy and satisfied end users, give them what they need.

Make sure that the choices available to an end user are entirely appropriate to each class of end user. Don’t show them more options than they need and don’t give them more information than they are asking for. Don’t ask them to navigate down multiple levels of menus before they can ask the question they want to ask; let them ask the question as the very first thing they do in the system. Then please don’t overwhelm them with information; just provide exactly and precisely what they asked for.

If you want the end users off your back, give them what they need.

I fall back on my original definition of a Knowledge Management system from 1997, “A Knowledge Management system is one that provides the user with the explicit information required, in exactly the form required, at precisely the time the user needs it.”

With hindsight, my simple definition can be applied to any end user’s needs. That is, please provide a system that provides the end user with the explicit information required, in exactly the form required, at precisely the time the end user needs it.

What could be more simple?

More references:

The IDEA – 1995

Knowledge Management, the Next Challenge? – 1997

Whatever happened to the Knowledge Management Revolution? – 2006

A Knowledge Management System – A Discourse – 2008


Records Management in the 21st century; you have computers now, do it differently

by Frank 1. June 2013 06:32

I own and run a computer software company called the Knowledgeone Corporation and we have specialised in what is now known as enterprise content management software since 1984 when we released our first product DocFind. We are now into the 8th iteration of our core and iconic product RecFind and have sold and installed thousands of RecFind sites where we manage corporate records and electronic documents.

I have personally worked with hundreds of customers to ensure that we understand and meet their requirements, and I have also designed and specified every product we have delivered over the last 29 years, so while I have never been a practising records manager, I do know a great deal about records and document management and the vagaries of the practice all around the world.

My major lament is that many records managers today still want to run their ‘business’ in exactly the same way it was run 30 or 50 or even a hundred years ago. That is, as a physical model even when using computers and automated solutions like our product RecFind 6. This means we still see overly complicated classification systems and overcomplicated file numbering systems and overcomplicated manual processes for the capture and classification of paper, document images, electronic documents and emails.

It is a mindset that is locked in the past and can’t see beyond the confines of the file room.

I also still meet records managers who believe each and every employee has a responsibility to ‘become’ a junior records manager and both fully comprehend and religiously follow all of the old-fashioned and hopelessly overcomplicated and time-consuming processes laid out for the orderly capture of corporate documents.

I have news for all those locked-in-the-past records managers. Your approach hasn’t worked in the last 30 years and it certainly will not work in the future.

Smart people don’t buy sophisticated computer hardware and application software and then try to replicate the physical model for little or no benefit. Smart people look at what a computer system can do as opposed to 20,000 linear feet of filing shelves or 40 Compactuses and 30 boxes of filing cards and immediately realize that they have the power to do everything differently: faster, more efficiently and infinitely smarter. They also realize that there is no need to overburden already busy end users by forcing them to become very bad and very inconsistent junior records managers. End users are not hired to be records managers; they are hired to be engineers, sales people, accountants, PAs, etc., and most already have 8 hours of work a day without you imposing more on them.

There is always a better way and the best way is to roll out a records and document and email management system that does not require your end users to become very bad and inconsistent junior records managers. This way it may even have a chance of actually working.

Please throw that old physical model away. It has never worked well when applied to computerised records, document and email management and it never will. Remember that famous adage, “The definition of insanity is to keep doing the same thing and to expect the results to be different”?

I guarantee two things:

1. Your software vendor’s consultant is more than happy to offer advice and guidance; and

2. He/she has probably worked in significantly more records management environments than you have and has a much broader range of experience than you do.

It doesn’t hurt to ask for advice and it doesn’t hurt to listen.

A lifetime of maintenance and support?

by Frank 31. March 2013 06:00

I run a software company manufacturing enterprise content management products that has been offering maintenance on its products for nearly 30 years and that has never failed to produce at least one major update per year during that time. We have also always offered multiple year options for our software maintenance. We call it the ASU, Automatic Software Upgrade. We currently offer 1, 2, 3, 4 and 5 year terms; the longer the term, the lower the cost per year.

I got the idea for a new software maintenance offering from Garmin, the satellite navigation company. Essentially, I bought a Garmin because the manufacturer of a car I bought in 2008 stopped issuing updates to its integrated satellite navigation system and it is now pretty useless as it doesn’t know about all the new and changed road systems.

An attraction of the Garmin was that they offered a ‘lifetime’ supply of updated maps for a single fee that I could download up to four times a year. The end result is that my Garmin is always up to date with all new and changed roads and is one hundred percent useful, while the satellite navigation system in my car is now useless because it is so out of date.

As well as the advantage of always being up to date the Garmin deal was great because it was a single transaction; I don’t have to worry about renewing it every year and I don’t have to worry about future cost increases.

I thought why not offer a similar deal to RecFind 6 customers? They too have to keep up to date and they too don’t want to worry about having to budget and renew the ASU every year and future cost increases.

In our case we chose to re-name the five year ASU option to the ‘Lifetime’ option. If you choose the Lifetime option you automatically receive all updates for as long as you use RecFind 6 and you also receive free support via email and our web form for as long as you use RecFind 6.

The fee is one-time and the price is therefore fixed for life. You no longer have to worry about budgeting and contracting for renewals every year and your RecFind 6 software will continue to be relevant, fully supported and improved with new and enhanced functionality.

If at any time in the future a customer purchases new software or additional licences from us, those can be added to the Lifetime ASU for a single one-time fee.

Frank’s perspective:

For the record, I buy a lot of software for our development team and none offer lifetime maintenance; all only offer annual maintenance and it is very expensive (up to 25% of the value of the software) and the price seems to go up every year. If I could convince my software vendors to offer a lifetime deal I would jump at the offer.

Frank McKenna | Knowledgeone Corporation
CEO & Sales & Marketing Director
f.mckenna@knowledgeonecorp.com

Why aren’t more software vendors offering this same maintenance option?

Ain’t life funny?

by Frank 27. January 2013 06:00

When I wrote at the bottom of my last blog that I would be taking some time off over Xmas I had little idea what that would actually entail. Seven days after writing that Blog about Santa Claus I was enjoying a ride in the country when I crashed my motorcycle into a guard rail (Armco). I don’t remember how the accident happened but I managed to write off the bike and do serious damage to myself.

Today is my first day back at work albeit in part-time mode as I am still receiving treatment for my injuries. As I try to get myself back into gear and get up to speed I realize again what a great family I have and what great staff I have. Thanks guys.

My heartfelt thanks to my family, especially my wife Kay who has carried all the burden of my hospitalization and treatment regime and my daughter Michelle who has taken on many of my duties at work, and my fantastic staff who have kept everything running smoothly and covered for me when required. Having a close call helps one to focus in on what is really important in life and that is without any shadow of doubt, family and friends.

Before the accident I was focussed on technology and spent hours a day researching the latest trends and development tools. Before the accident it was important to me to know which tablet was succeeding and which phone was selling the most and would the Surface RT capture market share, etc., but for now all that technology news seems so unimportant and transient. Basically, most of it is just recycled, repetitive floss.

To be truthful, despite the hundreds of technical papers and blogs published since my accident, nothing in the IT world has really changed; most of the daily news is just noise, and most of the emails are just noise. It literally took me an hour or so this morning to speed-read the latest IT news and get up to date; I can miss five weeks of technical news and emails and not really miss a thing.

I also realize that I can happily live without ninety percent of the emails I receive. This is important because I also believe that most of us waste an enormous amount of time reading and replying to emails that are really of no consequence. My New Year’s resolution is now to unsubscribe from most of the technical emails and blogs I subscribe to; they really are a waste of time when viewed in a wider, whole-of-life, what-is-really-important perspective.

The only emails I want to receive at work are from customers, prospective customers and partners. That is, from ‘real’ people about real issues important to them and my business. Everything else I am going to ignore, spam or unsubscribe from to leave more time for real work, not ‘busy’ work.

My advice to you is to do likewise. You don’t need a serious motorcycle crash to learn the valuable lesson I have just learned; I am happily passing on the lesson to you as my gift for the New Year.

My other New Year’s resolution is to quit riding motorcycles and all other dangerous pursuits. To those of you who are still engaging in dangerous pursuits, my message is that it is the worst kind of selfish behaviour. Please do what I didn’t do and put your family and friends and employees (if you have them) first. Realize how much they depend upon you and realize the impact any accident would have on them.

As my older brother Pat said to me after my accident, “You didn’t have to crash your bike; I could have given you photos of my crashed bike.” He too had a serious motorcycle accident some years ago and I was just too dumb to learn from his experience.

Life is precious, family and friends are precious and none of us know how much time we have so it behoves us to enjoy every single minute of every day and to think about the people that depend upon us before we take those silly risks.

I am lucky because I will fully recover. I am also blessed with a wonderful and supportive family and great employees who stepped up and kept everything running smoothly in my absence. However, I caused the problem through selfish behaviour, and this was neatly summed up by my five-year-old granddaughter, who made me a get-well card addressed to “Silly Grandad”. Now isn’t that just so true?

How does Santa Claus handle all those letters?

by Frank 2. December 2012 06:02

All over the Western world little kids are writing to Santa Claus asking for presents for Christmas and telling Santa what good children they have been.

However, Santa must receive tens or even hundreds of millions of letters and all within the short few months before Christmas. How does he handle this veritable avalanche of mail? How does he process it? How does he even read each and every letter? How does he match the presents to the letters? How does he make sure the child has been good?

As Santa has been doing this for some time we can only assume that he has some pretty slick systems in place so as not to disappoint even a single child. The scale of his operation is awesome and dwarfs anything undertaken by any other organization.  Granted some of the work may well be handled by elves and fairies but surely in this day and age even Santa makes use of computers and software? I mean, there are millions and millions of letters, how many elves and fairies can he employ? How would he house and feed them all? You can imagine the problems if they are unionised; he must be automated in some way.

I don’t know, but I envision that he must use at least a supercomputer or two and some very, very clever auto-reading, auto-matching and auto-ordering software. This isn’t the kind of job you could do manually. He has to read, let’s say, one hundred million letters; at even just one minute a letter, that equates to 1,666,667 hours of reading, or 69,444 days (even reading 24 hours a day), all squeezed into just 60 days. That is a tough ask even for someone as experienced and capable as Santa Claus.
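Just for fun, Santa’s reading workload can be checked with a few lines of Python. The letter count (one hundred million) and the one-minute-per-letter reading speed are the same assumptions used above, not real figures:

```python
# Back-of-the-envelope check of Santa's reading workload.
# Assumptions (from the discussion above, not measured data):
#   100 million letters, one minute of reading per letter.

letters = 100_000_000
minutes_per_letter = 1

total_minutes = letters * minutes_per_letter
total_hours = total_minutes / 60    # hours of reading
total_days = total_hours / 24       # days of round-the-clock reading

# All of this has to fit into the ~60 days before Christmas.
print(f"{total_hours:,.0f} hours, or {total_days:,.0f} days of non-stop reading")
```

Even at a generous 24 hours a day, that is roughly 69,444 days of reading to be done in 60, which is why the elves alone clearly won’t cut it.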

Yet despite these astounding statistics Santa meets his objectives year after year after year.  This is the man I would like to have as Prime Minister or President. Can you just imagine what he could achieve if he was running the country?

But, back to the question; just how does he do it?

Well obviously he must have a central mail room with a very fancy machine that opens every letter and separates the letter from the envelope. Then every letter must pass by a reading station that scans and OCRs it, allowing for multiple languages, fonts and any kind of handwriting, including that of a two-year-old. Santa’s software must be significantly more sophisticated than anything I have seen.

Then the text of each letter has to be intelligently analysed to extract the child’s name and address as well as the list of presents.  This information then feeds into Santa’s purchasing system that issues orders to toy manufacturers world-wide. Then he needs to track millions of orders and deliveries and package and stack everything in the correct order for his deliveries all around the world on Christmas Eve; phew!

Just think about the difficulty of ensuring that the right toys are at Santa’s hand every time he stops over a home. On second thoughts, he must be automating the process by uniquely tagging every toy with Santa’s advanced (and invisible) version of RFID tags. Maybe he even has his elves or fairies go out in advance and RFID tag the homes or even the children to ensure a perfect match? All I know is that he gets it right year after year after year, so whatever hardware and software he uses, it must be pretty special.

Of course, my admiration for Santa’s abilities grows by leaps and bounds when I try to work out just how fast he moves. Even allowing for an extended Christmas Eve because of multiple time zones, he still has to visit hundreds of millions of homes in just 24 hours (1,440 minutes or 86,400 seconds). This means that good old Santa manages to visit a home, deliver a present, eat the cake or biscuit the good child has left for him and drink whatever libation the child or dad or mum has provided, all in around eight ten-thousandths of a second. This man is fast, and I don’t know what he feeds his reindeer, but it must be the ultimate ultra-high-energy food to support this pace; it is no wonder that Rudolph’s nose glows.
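The delivery-speed figure checks out the same way, taking the “hundreds of millions” of homes at a conservative one hundred million (again an assumption, not a census of the good-children list):

```python
# Rough per-home delivery time on Christmas Eve.
# Assumption: 100 million homes visited in a single 24-hour day.

seconds_in_a_day = 24 * 60 * 60    # 86,400 seconds
homes = 100_000_000

seconds_per_home = seconds_in_a_day / homes
print(f"{seconds_per_home:.6f} seconds per home")
```

That works out to 0.000864 seconds per stop, i.e. a little over eight ten-thousandths of a second for the present, the biscuit and the libation combined.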

There may well be some miserable, glass-half-empty, doubting Thomases out there who look at these amazing statistics and say Santa can’t be real because it is just not doable. But that is a dumb thing to say because it is as obvious as the nose on your face (or even the nose on Rudolph) that Santa is real and that it is undeniable that every Christmas Eve for centuries he has left presents for good little children all around the world. His record is impeccable; this man sets a standard for all of us to aspire to and admire. Santa never disappoints any child despite the enormous challenge he sets himself every Christmas Eve.

Santa brought me presents when I was a child (I still remember the Hopalong Cassidy cap-gun and holster), he brought presents to my children when they were little and now he brings presents to my grandchildren. This man is real and he has never let me or my family down; what a wonderful person he must be to do so much to make children happy.

When I read the newspapers, listen to the depressing news on radio or watch it on TV I realize just how much we all need to believe in Santa Claus and what he stands for. A wonderful, loving, caring and unselfish person who does everything in his power to help the most helpless; our beautiful children. Would it be that our leaders were even a fraction as good and as unselfish and as accomplished and as trustworthy as Santa. Can you imagine what a wonderful world it would be?

Santa is my hero and I hope he is yours too; Merry Xmas to you all.


Footnote:

Frank is taking a break over Christmas and New Year and will be back writing his Blog again in late January 2013.

