Integration, what does it really entail?

by Frank 10. June 2012 06:00

Over the last 28 years of running this business I have had hundreds of conversations with customers and partners about integration. In most cases, when I have tried to obtain more details about exactly what they wanted to integrate and how, I have drawn a blank. It is as if the word ‘integration’ has some magical connotation that everyone is supposed to instantly understand. Being a very logical and technical person, I guess I fail the test because to me it is just a general term that covers a multitude of possibilities.

I have also designed and/or programmed many integrations in my life and I can honestly say that no two have ever been the same. So, assuming that you have a requirement for two or more of the application software products you use to ‘integrate’, how should you go about defining what is required so that it is as clear and as unambiguous as possible to your partners, the people you will ask to do the work?

Integration is usually about sharing information and, importantly, not duplicating effort (i.e., having to enter data twice) or duplicating information. Having to enter the same information twice into two different systems is just plain dumb and bad design. Maintaining duplicate copies of information is even dumber, and dangerous, because sooner or later they will get out of step and you will suffer from what we call a loss of data integrity. This is usually why we need integration: to share information and to avoid duplicate effort and duplicate information.

For our purpose let’s assume that we have two application systems, A and B, that we need to be ‘integrated’; both use a SQL database to store data. Applications A and B are produced by different vendors that haven’t worked together before. The first fact to face is that in the normal course of events each vendor is going to want the other vendor to do all the work, and each vendor is going to want the other vendor to utilize its proprietary integration methodology (e.g., its Application Programming Interface (API) or Software Development Kit (SDK)). This is your first big challenge: you need to get the vendors to agree to work together because, no matter how this turns out, both are going to have to do work and contribute; you can’t complete an integration using just one vendor. That is, the most important thing you have to do is to manage the vendors involved. You can’t just leave it up to them; you need to manage the process from beginning to end.

The second most important thing you have to do is to actually define and document the integration processes as clearly as possible. Here is a checklist to guide you:

1.    Will application A need to access (i.e., read) data held by application B?

2.    Will application B need to access data held by application A?

3.    How often and at what times is access to data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

4.    How is the data identified? That is, how does application A know what data it needs to access in the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be accessed?

5.    Will application A need to transfer data to application B (i.e., write data to the B database)?

6.    Will application B need to transfer data to application A?

7.    How often and at what times is a transfer of data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

8.    How is the data identified? That is, how does application A know what data it needs to transfer to the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be transferred? (For one concrete pattern, see the sketch after this checklist.)

9.    Does application A have an API or SDK?

10.  What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time consuming.

11.  Does application B have an API or SDK?

12.  What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time consuming.

13.  Is the vendor of application A happy to assign a suitably qualified technical person (not a sales or pre-sales person) to be the interface?

14.  Is the vendor of application B happy to assign a suitably qualified technical person (not a sales or pre-sales person) to be the interface?

15.  What is your timescale? When does it have to be completed? What is the ‘driver’ event?

16.  What is your budget? Basically, vendors work in a commercial environment and if they do work then they expect to get paid. As a rule, the size of the budget will depend directly upon your management skills and the quality of your integration specification.
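To make items 3, 4, 7 and 8 concrete, here is a minimal sketch of one common answer to ‘what are the rules?’: poll application A’s database for records changed since the last run (identified by a timestamp watermark) and write them to a staging table in application B. The table and column names are hypothetical, and sqlite3 merely stands in for whichever SQL database your applications use.

```python
import sqlite3  # stand-in for any SQL database; schema names below are hypothetical

def sync_new_invoices(conn_a, conn_b, last_sync):
    """Copy invoices created in application A since the last sync
    into application B's staging table (checklist items 5 to 8)."""
    rows = conn_a.execute(
        "SELECT invoice_id, customer_id, amount, created_at "
        "FROM invoices WHERE created_at > ?",   # 'new data only' rule (items 3 and 7)
        (last_sync,),
    ).fetchall()
    conn_b.executemany(
        "INSERT INTO staging_invoices (invoice_id, customer_id, amount, created_at) "
        "VALUES (?, ?, ?, ?)",
        rows,
    )
    conn_b.commit()
    return len(rows)  # keep a count for your sanity checks
```

Whether this runs continuously, nightly or weekly is exactly the scheduling decision items 3 and 7 ask you to pin down.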

Please keep in mind that there are always multiple ways to integrate; there is never just a single solution. The best way is always the simplest way, and it is usually also the quickest and cheapest way to build as well as the cheapest to maintain. Think about the future: the most complex solution is always the most difficult and most expensive to maintain. Think KISS; minimize your pain and expense.

As a guideline, the vendor with the most work to do is usually the best one to be the ‘lead’ in the integration (remember, both have to be involved or it won’t work). So if, for example, vendor A needs to read data from vendor B’s database and then massage it and utilize it within application A, then vendor A is the natural lead. All vendor B has to do is expose its API and provide the required technical assistance so vendor A can successfully program to the API.

However, in the end it will usually be the vendor that is the most cooperative and most helpful that you will choose. If you choose the vendor you most trust and work best with to be the lead then you will maximize your chances of completing a successful integration on time and on budget.

 

Your help desk works, or does it?

by Frank 3. June 2012 06:00

Almost every organization, commercial or government, needs a help desk. Help desks support either internal or external ‘customers’. Generally speaking, the job of a help desk is to support users who have problems or questions about a product or service.

Help desks may run as either profit centres or cost centres. Normally, help desks supporting internal customers run as cost centres (though maybe with an internal accounting function that attempts to allocate costs to all the departments that utilize the service) and help desks that support external customers run as profit centres, charging for their services via an annual service fee or an incident fee.

The only true measure of the worth of a help desk is the level of customer satisfaction, and this is very difficult to measure other than in an anecdotal way because of human nature. Customers who are happy with the service rarely take the time to write to the help desk manager and tell him. Customers who are unhappy with the service rarely write either; most just make a decision not to use that product or service again. A small number of very disgruntled, or even litigious or nuisance, customers will complain repeatedly in the most vociferous and rude manner but will largely be ignored as repeat offenders or the usual suspects.

Trying to get a reading across the customer base by using a survey rarely works either, as most customers won’t respond and the ones that do are usually from the two extremes: the really, really happy customers and the really, really dissatisfied customers. Plus, we all know that a survey is like a poll; if you design the questions in a certain way you can always get the result you first thought of.

Because it is so difficult to obtain enough customer input to be able to rate the help desk we usually fall back on internal metrics: how many calls did we receive last week? What percentage was closed within 1 day, 2 days, 3 days, etc.? How many are still outstanding after 7 days? How many had to be escalated?
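As a rough illustration, these metrics fall straight out of the ticket log; the ticket records below are hypothetical, and any competent help desk package will compute the same figures for you.

```python
from datetime import datetime, timedelta

# Hypothetical tickets: (opened, closed or None if still open, escalated?)
tickets = [
    (datetime(2012, 5, 28, 9), datetime(2012, 5, 28, 16), False),
    (datetime(2012, 5, 29, 10), datetime(2012, 6, 1, 11), True),
    (datetime(2012, 5, 30, 14), None, False),
]

def pct_closed_within(days):
    """Percentage of all tickets closed within the given number of days."""
    closed = [t for t in tickets if t[1] and t[1] - t[0] <= timedelta(days=days)]
    return 100.0 * len(closed) / len(tickets)

now = datetime(2012, 6, 8)
open_after_7 = sum(1 for opened, closed, _ in tickets
                   if closed is None and now - opened > timedelta(days=7))
escalated = sum(1 for _, _, esc in tickets if esc)

print(f"closed within 1 day: {pct_closed_within(1):.0f}%")
print(f"open after 7 days: {open_after_7}, escalated: {escalated}")
```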

The problem with internal metrics, like police reports on crime statistics, is that they can be manipulated to produce the result you first thought of. Remember that old saying about statistics: "Lies, damned lies, and statistics." A smart and politically savvy help desk manager will always find a way to gild the lily and dress up the stats so he looks good.

So, how do you know if your help desk is working and servicing your customers to the highest standard? There is only one sure way I know of and that is to ring the help desk yourself (incognito, I hope; calling up and saying “this is the CEO” won’t really give you a fair reading of how ordinary customers are treated), or organize a team to call the help desk with a list of known issues and test the responses.

This sounds like it should be a business opportunity: a kind of reverse outsourced help desk, an organization that specializes in testing help desk services. All you have to do is provide them with scripts and a way to measure the effectiveness of the responses. However, I don’t know of any organization that provides this service, just as I haven’t lately met a CEO who seems to know or care what is happening with his help desk service, and this is the real problem.

You can always tell the company with the disinterested CEO because there is no way to contact him or her on the website. Companies that aren’t interested in supporting customers always make it almost impossible for a customer to provide feedback. Unfortunately, this ‘we are hiding from you’ approach is becoming the norm as companies remove all contact information from their websites and force customers to endure long waits and rubbish ‘service’ from outsourced support centres.

The executives don’t receive negative feedback because they make it so difficult for customers to reach them. Personally, I think this is a short term and eventually damaging practice as customers tend to have long memories and frustrated, dissatisfied customers will make it their business to tell everyone but the company’s management team (because they aren’t able to contact them) about the rubbish product and the shoddy way they were treated.

Before you ask, let me explain that we do have a support centre but it is not outsourced and we make it as easy as possible for customers to contact us by web form, email, mobile device or toll free number. Please see the links below:

http://www.knowledgeonecorp.com/support/contactinghelpdesk.htm

http://www.knowledgeonecorp.com/contactus/emailus.htm

http://www.knowledgeonecorp.com/contactus/Contact_By_Mobile_App.htm

http://www.knowledgeonecorp.com/support/freeemailsupport.htm

support@knowledgeonecorp.com

Just so you know that we practice what we preach.

Paradoxically, I believe the reason that I get so few complaints (apart from the high standard of our support services) is that I make it so easy for customers to contact me or any other executive in my company.

We also use our own product RecFind 6 as our help desk software so we are able to build in all the alerts, escalations and reporting we need to manage each and every support call to the best of our ability. And finally, my office is just 20 metres or so from the support centre so I make it my business to be in there talking to the support staff at least 4 or 5 times a day.

I am a CEO who is vitally interested in his customers and the quality of support they are receiving, and not just for altruistic reasons but for sound business reasons. Happy customers stay with us and invest in our products and services year after year. It is quite simple really; I invest in my customers so they will invest in my company. It works for us and I wonder why other CEOs don’t understand this very simple message.

The relationship between a vendor and a customer should be a mutually beneficial partnership; it should not be a destructive, adversarial relationship. In my opinion CEOs who do not allow their customers to contact them and deliver either a complaint or a compliment are fools and bad business people with a strictly short term view. It is a formula for more short term profit but fewer long term customers. We opt to spend more money and time on support so we can foster better long term relationships. I think in the ‘old days’ this used to be called service.

What is really involved in converting to a new system?

by Frank 27. May 2012 06:00

Your customer’s old system is now way past its use-by date and they have purchased a new application system to replace it. Now all you have to do is convert all the data from the old system to the new system; how hard can that be?

The answer is that it can be very, very hard to get right and it can take months or years if the IT staff or the contractors don’t know what they are doing. In fact, the worst case is that no one can actually figure out how to do the data conversion, so you end up two years later still running the old, unsupported and now about-to-fail system. The really bad news is that this isn’t just the worst case scenario, it is the most common scenario, and I have seen it happen time and time again.

People who are good at conversions are good because they have done it successfully many times before. So, don’t hire a contractor based on potential and a good sales spiel; hire a contractor based on track record, on experience and on a good many previous references. The time to learn how to do a conversion isn’t on your project.

I will give you guidelines on how to handle a data conversion but as every conversion is different, you are going to have to adapt my guidelines to your project and you should always expect the unexpected. The good news is that if you have a calm, logical and experienced head then any problem is solvable. We have handled hundreds of conversions from every type of system imaginable to our RecFind product and we have never failed even though we have run into every kind of speed bump imaginable. As they say, “expect the best, plan for the worst, and prepare to be surprised.”

1.    Begin by reviewing the application to be converted by looking at the ‘screens’ with someone who uses the system and understands it. Ask the user what fields/data they want to convert. Take screenshots for your documentation. Remember that a field on the screen may or may not be a field in the database; the value may be calculated or generated automatically. Also remember that even though a screen may be called, say, “File Folder”, the fields you can see may not all be part of the file folder table; they may be ‘linked’ fields in other tables in the database.

2.    You need to document and understand the data model, that is, all the tables and fields and relationships you will need to convert. See if someone has a representation of the data model but never assume it is up to date; in fact, always assume it is not. You need to work with an IT specialist (e.g., the database administrator) and utilize standard database tools like SQL Server Management Studio to validate the data model of the old system.

3.    Once you think you understand the data model and data to be converted you need to document your thoughts in a conversion report and ask the customer to review and approve it. You won’t get it right the first time, so expect this to be an iterative process. Remember that the customer will be in ‘discovery’ mode also.

4.    Once you have acceptance of the data to be converted you need to document the data mapping; that is, show where the data will go in the new application (see the sketch after this list). It would be extremely rare that you would be able to duplicate the data model from the old application; it will usually be a case of adapting the data from the old system to the different data model of the new application. Produce a data mapping report and submit it to the customer for sign-off. Again, don’t expect to get this right the first time; it is also an iterative process because both you and the customer are in discovery mode.

5.    Expect that about 20% or more of the data in the old system will be ‘dirty’; that is, bad, duplicate or redundant data. You need to make a decision about the best time to clean up and de-dupe the data. Sometimes it is in the old application before you convert, but often it is in the new application after you have converted because the new application has more and better functionality for this purpose. Whichever method you choose, you must clean up the data before going live in production.

6.    Expect to run multiple trial conversions. The customer may have approved a specification but reading it and seeing the data exposed in the new application are two very different experiences. A picture is worth a thousand words and no one is smart enough to know exactly how they want their data converted until they actually see what it looks like and works like in the new application. Be smart and bring in more users to view and comment on the new application; more heads are better than one and new users will always find ways to improve the conversion. Don’t be afraid of user opinion, actively encourage and solicit it.

7.    Once the data mapping is approved you need to schedule end-user training (as close as possible to the cutover to the new system) and the final conversion prior to cutover.
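To illustrate step 4, here is a minimal sketch of what an approved data mapping can look like once it is turned into code. Every table, field and transformation name here is hypothetical; the real content comes from your signed-off data mapping report.

```python
# Hypothetical mapping: old-system field -> (new-system field, transform)
FIELD_MAP = {
    "FILE_NO":   ("file_number", str.strip),
    "TITLE":     ("title",       lambda s: s.title()),
    "OPEN_DATE": ("date_opened", lambda s: s[:10]),  # keep the YYYY-MM-DD part only
}

def map_record(old_row):
    """Apply the approved mapping to one record extracted from the old system."""
    new_row = {}
    for old_field, (new_field, transform) in FIELD_MAP.items():
        value = old_row.get(old_field)
        new_row[new_field] = transform(value) if value is not None else None
    return new_row

print(map_record({"FILE_NO": " 2012/000189 ",
                  "TITLE": "annual report",
                  "OPEN_DATE": "2012-05-27 00:00:00"}))
```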

Of course, for the above process to work you also need the tools required to extract data from the old system and import it into the new system. If you don’t have standard tools you will have to write a one-off conversion program. The time to write this is after the data mapping is approved and before the first trial conversion. To make our life easy we designed and built a standard tool we call Xchange; it can connect to any data source and then map and write data to our RecFind 6 system. However, this is not an easy program to design and write and you are unlikely to be able to afford to do so unless you are in the conversion business like we are. You are therefore most likely going to have to design and write a one-off conversion program.

One alternative tool you should not ignore is Microsoft’s Excel. If the old system can export data in CSV format and the new system can import data in CSV format then Excel is the ideal tool for cleaning up, re-sequencing and preparing the data for import.
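If you prefer a scriptable alternative to Excel for that job, here is a minimal sketch using Python’s standard csv module; the file names and the particular clean-up rules (trim whitespace, drop rows with no file number, re-sequence by file number) are hypothetical stand-ins for whatever your data actually needs.

```python
import csv

with open("old_system_export.csv", newline="") as src:
    rows = [
        {field: value.strip() for field, value in row.items()}  # trim stray whitespace
        for row in csv.DictReader(src)
        if row.get("file_number", "").strip()                   # drop rows with no file number
    ]

rows.sort(key=lambda r: r["file_number"])                       # re-sequence for import

with open("new_system_import.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```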

And finally, please do not forget to sanity check your conversion. You need to document exactly how many records of each type you exported so you can ensure that exactly the same number of records exist in the new system. I have seen far too many examples of a badly managed conversion resulting in thousands or even millions of records going ‘missing’ during the conversion process. You must have a detailed record count going out and a detailed record count going in. The last thing you want is a phone call from the customer a month or two later saying, “it looks like we are missing some records.”
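A minimal sketch of that sanity check, assuming you recorded per-type counts at export time (all counts shown are hypothetical):

```python
# Counts captured going out of the old system versus counts queried from the new system.
exported = {"file_folders": 48210, "documents": 1203455, "boxes": 3917}
imported = {"file_folders": 48210, "documents": 1203101, "boxes": 3917}

for record_type, out_count in exported.items():
    in_count = imported.get(record_type, 0)
    status = "OK" if in_count == out_count else f"MISSING {out_count - in_count}"
    print(f"{record_type}: exported {out_count}, imported {in_count} -> {status}")
```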

Don’t expect the conversion to be easy and do expect it to be an iterative process. Always involve end-users and always sanity check the results.  Take extra care and you will be successful.

Moving your Records Management application to the Cloud; why would you do it?

by Frank 20. May 2012 06:00

We have all heard and read a lot about the Cloud and why we should all be moving that way. I wrote a little about this in a previous post. However, when we look at specific applications like records management we need to think about the human interaction and how that may be affected if we change from an in-house system to a hosted system. That is, how will the move affect your end-users and records management administrator? Ideally, it will make their job easier and take away some pain. If it makes their job harder and adds pain then you should not be doing it even if it saves you money.

We also need to think about the services we may need when we move to the Cloud. That is, will we need new services we don’t have now and will the Cloud vendor offer to perform services, like application maintenance, we currently do in-house?

In general, normal end-user functions should work the same whether we are running off an internal system or a Cloud-based one. This of course will depend upon the functionality of your records management software. Hopefully, there will be no difference to either the functionality or the user interface when you move to the Cloud. For the sake of this post let’s assume that there is a version of your records management system that can run either internally or in the Cloud and that the normal end-user interface is identical, or near enough that it doesn’t matter. If the end-user interface is massively different then you face extra cost and disruption because of the need to convert and retrain your users, and this would be a reason not to move to the Cloud unless you were planning to change vendors and convert anyway.

Now we need to look at administrator functions, those tasks usually performed by the records management administrator or IT specialist to configure and manage the application.  Either the records management administrator can perform the same tasks using the Cloud version or you need to ask the Cloud vendor to perform some services for you. This will be at a cost so make sure you know what it is beforehand.  There are some administrator functions you will probably be glad to outsource to the Cloud vendor such as maintaining the server and SQL Server and taking and verifying backups.

I would assume that the decision to move a records management application to the Cloud would and should involve the application owner and IT management. The application owner has to be satisfied that the end-user experience will be better or at least equal to that of the in-house installation and IT management needs to be sure that the integrity and security of the Cloud application will at the very least be equal to that of the in-house installation. And finally, the application owner, the records manager, needs to be satisfied that the IT support from the vendor of the Cloud system will be equal to or better than the IT support being received from the in-house or currently out-sourced IT provider.

There is no point in moving to the Cloud if the end-user or administrator experience will deteriorate just as there is no point in moving to the Cloud if the level of IT support falls.

Once you have made the decision to move your records management application to the Cloud you need to plan the cutover in a way that causes minimal disruption to your operation. Ideally, your staff will finish work on the in-house application on Friday evening and begin working on the Cloud version the next Monday morning. You can’t afford to have everyone down for days or weeks while IT specialists struggle to make everything work to your satisfaction. This means you need to test the Cloud system extensively before going live in production. In this business, little or no testing equals little or no success and a great deal of pain and frustration.

If it was me, I would make sure that the move to the Cloud meant improvements in all facets of the operation. I would want to make sure that the Cloud vendor took on the less pleasant, time-consuming and technical tasks like managing and configuring the required IT infrastructure. I would also want them to take on the more bothersome, awkward and technically difficult application administration tasks. Basically, I would want to get rid of all the pain and just enjoy the benefits.

You should plan to ‘outsource’ all the pain to make your life and the life of your staff easier and more pleasant and in doing so, make everyone more productive. It is like paying an expert to do your tax return and getting a bigger refund. The Cloud solution must be presented as a value proposition. It should take away all the non-core activities that suck up your valuable time and allow you and your staff more time to do the core activities in a better and more efficient way; it should allow you to become more productive.

I am a great believer in the Cloud as a means of improving productivity, lowering costs and improving data integrity and security. It is all doable given available facilities and technology but in the end, it is up to you and your negotiations with the Cloud provider.  Stand firm and insist that the end result has to be a better solution in every way; compromise should not be part of the agreement.

Using Terminal Digits to minimize “Squishing”

by Frank 13. May 2012 06:00

Have you ever had to remove files from shelving or cabinets and reallocate them to other spaces because a drawer or shelf is packed tight? Then had to do it again and again?

One of my favourite records managers used to call this the “Squishing” problem.

The squishing problem is inevitable if you start to load files from the beginning of any physical filing system, be it shelving or cabinets, and unload files from random locations as the retention schedule dictates. If you create and file parts (a new folder called part 2, part 3, etc., when the original file folder is full) then the problem is exacerbated. You may well spend a large part of your working life shuffling file folders from location to location; a frustrating, worthless and thankless task. You also get to inhale a lot of toxic paper dust and mites, which is not a good thing.

You may not be aware of it but there is a very simple algorithm you can utilize to make sure the squishing problem never happens to you. It is usually referred to as the ‘Terminal Digit’ file numbering system but you may call it whatever you like. The name isn’t important but the operation is.

Importantly, you don’t need to change your file numbering system other than by adding on additional numbers to the end. These additional numbers are the terminal digits.

The number of terminal digits you need depends upon how many file folders you have to manage. Here is a simple guideline:

·         One Terminal Digit (0 to 9) = one thousand files

·         Two Terminal Digits (00 to 99) = ten thousand files

·         Three Terminal Digits (000 to 999) = greater than ten thousand files

Obviously, you also have to have the filing space and appropriate facilities available (e.g., boxes, bays, etc.) to hold the required number of files for each terminal.

It is called the Terminal Digit system because you first have to separate your available filing space into a number of regular ‘terminals’. Each terminal is identified by a number, e.g., 0, 1, 2, 09, 23, 112, 999, etc.

The new terminal digit is additional and separate from your normal file number. It determines which terminal a file will be stored in. Let’s say your normal file number is of the format YYYY/SSSSSS. That is, the current year plus an automatically incrementing auto number like 2012/000189 then 2012/000190, etc. If we use two terminal digits and divide your available filing space into one hundred terminals (think of it as 100 equally sized filing slots or bays numbered 00 to 99) then your new file number format is YYYY/SSSSSS-99. The two generated file numbers above may now look like 2012/000189-00 and 2012/000190-01.

File folder 2012/000189-00 is filed in terminal number 00 and 2012/000190-01 is filed in terminal number 01. In a nutshell, what we are doing is distributing files evenly across all available filing space. We are not starting at terminal 00 and filling it up and then moving on to terminal 01, then terminal 02 when 01 is full, etc. Finding files is even easier because the first part of the file number you look at is the terminal digit: if a file number ends in 89 it will be in terminal 89, in file number order.
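As a minimal sketch, the allocation can be generated with nothing more than a counter; the round-robin rule below reproduces the example above (consecutive files landing in terminals 00, 01, 02, ...), though your own numbering rules may differ.

```python
from itertools import count

def file_numbers(year, start_seq, terminals=100):
    """Yield file numbers of the form YYYY/SSSSSS-TT, cycling the terminal
    digits so that new files spread evenly across all available terminals."""
    for i, seq in enumerate(count(start_seq)):
        terminal = i % terminals                 # round-robin terminal assignment
        yield f"{year}/{seq:06d}-{terminal:02d}"

gen = file_numbers(2012, start_seq=189)
print(next(gen), next(gen), next(gen))
# 2012/000189-00 2012/000190-01 2012/000191-02
```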

The other good news is that when we unload files from the shelves, say at end of life or at the point in the lifecycle when they need to be sent offsite, we will also unload files evenly across all available filing space. If the terminals are actually big enough and if you have calculated everything correctly you should never again suffer from the ‘squishing’ problem and you should never again have to ingest paper dust and mites while tediously shuffling files from location to location.

Obviously, there is a little more to this than sticking a couple of digits on the end of your file number. I assume you are using a computerised records management system, so it will have to be changed or configured to correctly calculate the now extended file number (including the new terminal digit), and your colour file labels will need to be changed to show the terminal digit in a prominent position.

There is also the question of what to do with your existing squished file store. Ideally you would start from scratch with your new numbering systems and terminals and wait for the old system to disappear as the files age and disappear offsite to Grace or Iron Mountain. That probably won’t be possible so you will have to make decisions based on available resources and budget and come up with the best compromise.

I can’t prove it but I suspect that the terminal digit system has been around since people began filing stuff. It is an elegantly simple solution to an annoying and frustrating problem and involves nothing more complicated than simple arithmetic.

The surprise is that so few organizations actually use it. In twenty-five plus years in this business I don’t think I have seen it in use at more than one to two percent of the customers I have visited. I have talked about it and recommended it often but the solution seems to end up in the too-hard basket; a shame really, especially for the records management staff charged with the constant shuffling of paper files.

It may be that you have a better solution but just in case you don’t, please humour me and have another look at the terminal digit filing solution. It may just save you an enormous amount of wasted time and make your long-suffering records staff a lot happier and a lot healthier.

 

Have you considered Cloud processing? There are significant benefits

by Frank 6. May 2012 06:00

Most of us have probably become more than a little numbed to the onslaught of Cloud advertising and the promotion of the ‘Cloud’ as the salvation for everyone and the panacea for everything. The Cloud is promoted by its aggrandizers as being both omnipotent and omniscient; both qualities I only previously associated with God.

This is not to say that moving business processing to the Cloud is not a good thing; it certainly is. I just wish that the promoters would tone down the ‘sell’ and clearly explain the benefits and advantages without the super-hype.

Those of us with long memories clearly recall the early hype about what was then called ASP, the Application Service Provider (or ‘Application Service Processing’) model. This was the early progenitor of the Cloud and, despite massive hype, it did not fly. The reasons were simple: neither the technology nor the software (application and system) was up to the job. Great idea; pity it was about five years before its time.

Unfortunately, super-hype in our industry is usually associated with immature and unproven technology. Wiser, older people nod sagely and then wait a few years for the technology to catch up with the promises.

As an older (definitely) and wiser (hopefully) person I am now ready to accept that all the technology required for successful and secure Cloud processing is now available and proven, albeit being ‘improved’ all the time, so still take care not to rush in with experimental technology.

As with many new technologies the secret is KISS; Keep It Simple Stupid. If it seems too complex then it is too complex. If the sales person can’t answer all of your questions clearly and unambiguously then walk away.

Most importantly, make sure you know all about all of the parties involved in the transaction. For example:

1.    What is the name of the data centre?

2.    Where is it located?

3.    Who ‘owns’ the rack and equipment and software at the data centre?

4.    What are the redundant features?

5.    What are the backup and recovery options?

6.    Is your vendor the owner of the co-hosted facility or do they subcontract to someone else? If they sub-contract is the company they subcontract to the owner or are they too just part of a chain of ‘hidden’ middle-men? It is critical for you to understand this chain of responsibility because if something goes wrong you need to know who to chase.

There are a lot more questions you need to ask but this blog isn’t the place to list them all. I am sure your IT team and application owners will come up with plenty more. If they don’t, wake them up and demand questions.

Most small to medium organizations today simply do not have the time or expertise to run a computer room and manage and maintain a rack of servers. There is also a dearth of ‘real’ expertise and a plethora of phonies out there so hiring someone who is actually smart enough to manage your critical infrastructure is a very difficult exercise made more so by most business owners and managers simply not understanding the requirements or technology. It often becomes a case of the blind hiring the almost blind.

Most small to medium enterprises also cannot afford the redundancy required to ensure a stable and reliable infrastructure. A fifteen-minute UPS is no substitute for a redundant bank of diesel generators and a guaranteed clean power supply.

Why should small to medium enterprises have to buy servers and networks and IT support? It isn’t part of their core business and this stuff should not be weighing down the balance sheet. Why should they be devoting scarce and expensive management time to activities that are not part of their core business?

In-house computer rooms will soon become as rare as dinosaurs and this is how it should be; they are an anachronism in this day and age, out of time and out of place.

All smart and business savvy small to medium organizations should be planning to progressively move all their processing to the Cloud so as to lower costs, improve service levels and reduce management stress. I say progressively because it is still wise to get wet slowly and to take little steps. Just like with your first two-wheel bicycle, it pays to practice with the training wheels on first. That way, you usually avoid those painful falls.

I like to think I am a little wiser because I still have scars from gravel rash when I was a kid. I am moving my RecFind 6 customers to the Cloud and I am moving my in-house processing to the Cloud but just like you, I am doing it slowly and carefully and triple-checking every aspect. I don’t take risks with my customers or my business and neither should you.

One last thing, I have the advantage of being very IT literate and of having a top IT team working for me so we have the in-house expertise required to correctly evaluate and select the most appropriate technology and options. If you do not have this level of in-house IT expertise then please take extra care and try to find someone to assist who does have the level of IT knowledge required. Once you sign up, it is too late. Buyer’s remorse is not a solution to any problem.

Are you running old and unsupported software? What about the risks?

by Frank 29. April 2012 20:59

Many years ago we released a 16 bit product called RecFind version 3.2 and we made a really big mistake. We gave it so much functionality (much of it way ahead of its time) and we made it so stable that we still have thousands of users.

It is running under operating systems like XP that it was never developed for or certified for and is still ‘doing the job’ for hundreds of our customers. Most frustratingly, when we try to get them to upgrade they usually say, “We can’t justify the expense because it is working fine and doing everything we need it to do.”

However, RecFind 3.2 is decommissioned and unsupported, and the databases it uses (Btrieve, Disam and an early version of SQL Server) are also no longer supported by their vendors.

So our customers are capturing and managing critical business records with totally unsupported software. Most importantly, most of them also do not have any kind of support agreement with us (and this really hurts because they say they don’t need a support agreement because the system doesn’t fail) so when the old system catastrophically fails, which it will, they are on their own.

Being a slow learner, ten years ago I replaced RecFind 3.2 and RecFind 4.0 with RecFind 5.0, a brand new 32 bit product. Once again I gave it too much functionality and made it way too stable. We now have hundreds of customers still using old and unsupported versions of RecFind 5.0 and when we try to convince them to upgrade we get that same response, “It is still working fine and doing everything we need it to do.”

If I was smarter I would have built-in a date-related software time bomb to stop old systems from working when they were well past their use-by date. However, that would have been a breach of faith so it is not something we have or will ever do. It is still a good idea, though probably illegal, because it would have protected our customers’ records far better than our old and unsupported systems do now.

In my experience, most senior executives talk about risk management but very few actually practice it. All over the world I have customers with millions of vital business records stored and managed in systems that are likely to fail the next time IT updates desktop or server operating systems or databases. We have warned them multiple times but to no avail. Senior application owners and senior IT people are ignoring the risk and, I suspect, not making senior management aware of the inevitable disaster. They are not managing risk; they are ignoring risk and just hoping it won’t happen in their reign.

Of course, it isn’t just our products that are still running under IT environments they were never designed or certified for; this is a very common problem. The only worse problem I can think of is the ginormous amount of critical business data being ‘managed’ in poorly designed, totally insecure and teetering-on-failure, unsupportable Access and Excel systems; many of them in the back offices of major banks and financial institutions. One of my customers described the 80 or so Access systems that had been developed across his organization as the world’s greatest virus. None had been properly designed, none had any security and most were impossible to maintain once a key employee or contractor had left.

Before you ask, yes we do produce regular updates for current products and yes we do completely redesign and redevelop our core systems like RecFind about every five years to utilize the very latest technology. We also offer all the tools and services necessary for any customer to upgrade to our new releases; we make it as easy and as low cost as possible for our customers to upgrade to the latest release but we still have hundreds of customers and many thousands of users utilizing old, unsupported and about-to-fail software.

There is an old expression that says you can take a horse to water but you can’t make it drink. I am starting to feel like an old, tired and very frustrated farmer with hundreds of thirsty horses on the edge of expiration. What can I do next to solve the problem?

Luckily for my customers, Microsoft Windows Vista was a failure and very few of them actually rolled it out. Also, luckily for my customers, SQL Server 2005 was a good and stable product and very few found it necessary to upgrade to SQL Server 2008 (soon to be SQL Server 2012). This means that most of my customers using old and unsupported versions of RecFind are utilizing XP and SQL Server 2005, but this will soon change and when it does my old products will become unstable and even just stop working. It is just good luck and good design (programmed tightly to the Microsoft API) that some (e.g., 3.2) still work under XP. RecFind 3.2 and 4.0 were never certified under XP.

So we have a mini-Y2K coming but try as I may I can’t seem to convince my customers of the need to protect their critical and irreplaceable (are they going to rescan all those documents from 10 years ago?) data. And, as I alluded to above, I am absolutely positive that we are only one of thousands of computer software companies in this same position.

In fairness to my customers, the Global Financial Crisis of 2008 was a major factor in the disappearance of upgrade budgets. If the call is to either upgrade software or retain staff then I would also vote to retain staff. Money is as tight as it has ever been and I can understand why upgrade projects have been delayed and shelved. However, none of this changes the facts or averts the coming data-loss disaster.

All over the world government agencies and companies are managing critical business data in old and unsupported systems that will inevitably fail with catastrophic consequences. It is time someone started managing this risk; are you?

 

Managing Emails, how hard can it be?

by Frank 22. April 2012 00:22

We produce a content management system called RecFind 6 that includes several ways to capture, classify and save emails. Like most ECM vendors, we offer a number of alternatives so as to be better able to meet the unique requirements of a variety of clients.

We offer the ‘manual’ version whereby we embed our email client into packages like Outlook and the end user can just click on our RecFind 6 Button from the Outlook toolbar to capture and classify any email.

We also offer a fully automated email management system called GEM that is rule-driven and that automatically analyses, captures and classifies all incoming and outgoing emails.

At the simplest level, an end user can just utilize the standard RecFind 6 client and click on the ‘Add Attachment’ button to capture a saved email from the local file store.

Most of our customers use the RecFind 6 Button because they prefer to have end users decide which emails to capture and because the Button is embedded into Microsoft Office, Adobe Professional, Notes and GroupWise. A much smaller percentage of our customers use GEM even though it is a much better, more complete and less labour intensive solution because there are still many people that just don’t want email to be automatically captured.

This last point is of great interest to me because I find it hard to understand why customers would choose the ‘manual’ RecFind 6 Button, small, smart and fast though it is, over the fully automated and complete solution offered by GEM, especially when GEM is a much lower cost solution for mid-size to large enterprises.

Back in 2005 the Records Management Association of Australia asked me to write a paper on this topic; that is, why don’t organizations do a good job of capturing emails when there is plenty of software out there that can do the job? I came up with six reasons why organizations don’t manage emails effectively and, after re-reading that paper today, they are still valid.

In my experience, the most common protagonists are the records manager and the IT manager. I don’t think I have ever spoken to a senior executive or application owner who didn’t think GEM was a good idea, but I have only ever spoken to a tiny number of records managers who would even contemplate the idea of fully automatic email management. Most IT managers just don’t want all their emails captured.

This is despite the fact that, because GEM is rule-driven, any competent administrator could write rules to include or exclude whichever emails they want.

Another road block is that old red herring, personal emails. In ninety percent or more of the cases where my customer has decided against GEM this is given as the ‘real’ reason. It is of course rubbish because there are many ways to handle personal emails, including an effective email policy and GEM rules written to enforce that policy. This 2004 paper explains why we need to manage emails and also talks about an effective email policy.
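As a minimal sketch of how rule-driven capture can enforce such a policy, consider the include/exclude check below. The rules and addresses are invented for illustration and this is not GEM’s actual rule syntax; a real system would load its rules from an administration console rather than hard-code them.

```python
import fnmatch

EXCLUDE_SENDERS = ["*@personal-mail.example", "newsletter@*"]   # hypothetical patterns
EXCLUDE_SUBJECT_PREFIXES = ["personal:", "private:"]            # per the email policy

def should_capture(sender, subject):
    """Return True if an email should be captured into the records system."""
    if any(fnmatch.fnmatch(sender.lower(), pattern) for pattern in EXCLUDE_SENDERS):
        return False
    if any(subject.lower().startswith(prefix) for prefix in EXCLUDE_SUBJECT_PREFIXES):
        return False
    return True  # default rule: capture everything not explicitly excluded

print(should_capture("supplier@acme.example", "Re: contract renewal"))    # True
print(should_capture("frank@personal-mail.example", "Personal: dinner"))  # False
```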

The absolute worst way to mismanage emails is to mandate that end users must select and print them out for the records staff to file in cardboard file folders. This method would have been entirely appropriate in 1990, except for the fact that we didn’t really have emails in 1990. It is entirely inappropriate and just plain ineffective, wasteful and stupid in 2012 but tens of thousands of records managers all around the world still mandate it as the preferred approach.

Is it because they don’t understand the technology or is it because they stubbornly refuse to even consider the technology?

It can’t be budget because the cost of expensive staff having to be part-time records managers is monumental. You would be hard pressed to find a more expensive and less effective solution. So why are we still doing it?

Back to the title of this post, “How hard can it be?”

The answer is that it is not hard at all; every ECM vendor has at least one flexible and configurable solution for email management, and these solutions have been around for at least the last ten years. So why are we still doing it the hard, ineffective, incomplete and expensive way?

The answer is that it is to do with people and attitudes; with a reluctance to embrace change and a reluctance to embrace a challenge that just might force managers to learn a lot in a short time and extend their capabilities and workload for the period necessary to implement a new generation solution. I guess it comes down to fear and a head-in-the-sand attitude.

I once had a senior records manager tell me he wasn’t going to install any new systems because he was retiring in five years and didn’t want the worry and stress. Is this really why you aren’t managing your emails effectively and completely? Isn’t it time you asked the question of your records and IT managers?

Project Management – just what does it entail?

by Frank 15. April 2012 06:00

In a previous career with mainframes I spent eight years as a large scale project manager and then a further two years as the international operations manager managing a number of project managers at troubled projects around the world. Those ten years taught me a great deal about what it takes to be a successful project manager and conversely, why some project managers fail.

Notice that I said why some project managers fail, not why some projects fail. It is cause and effect; projects only fail when the project manager fails to do the job required. This particular concept separates good project managers from bad project managers. Good project managers take full responsibility for the success or failure of their projects, bad project managers don’t.

Good project managers are ‘glass-half-full’ people, bad project managers are ‘glass-half-empty’ people. Good project managers are leaders, bad project managers are victims.

So the first piece of advice is to choose your project manager carefully. You want a strong-willed, bright and energetic doer, not a facilitator or politician. You want a strong leader, not a careful and political follower; you want Jesus, not the disciples.

The next piece of advice is that you should set quantitative criteria for project success. No ambiguity, motherhood statements or weaselly words; as the Dragnet cop used to say, “Just the facts, Ma’am.” In my day it was easy: we had to install the new hardware and software, convert from the old system, design and program the new applications and then take the whole system through a 30 day acceptance test with 99% uptime. There was always a contract and the conditions of acceptance were always clearly laid out and assiduously referred to by the customer. We knew what we had to achieve and there was no ambiguity.

Unfortunately, one of the problems with a lot of projects is that the conditions for acceptance and success are not clearly articulated or documented. A good project manager will always make sure that the scope, objectives and expected outcomes are clearly defined before accepting the challenge. The bad project manager, on the other hand, is always happy that there isn’t a clear definition of success because the bad project manager wants to make judging his or her performance as difficult as possible.

I once fired a project manager who told me in three meetings in a row that he had not completed the requested project plan because the project was too complex. Obviously, the more complex the project the more it needs a comprehensive project plan, otherwise it will be impossible to manage. My failed project manager didn’t want to document the project plan because he didn’t want deadlines and he didn’t want to be judged on how well he was meeting them.

It sounds like an over-simplification but if you want a successful project then choose a successful project manager, one who accepts full responsibility for all outcomes and one who is committed to success.

As part of the interview process, ask them what their philosophy of responsibility is. As an example, here is one I always used.

“Everything that happens is due to me because everything that happens is either due to something I did or something I didn’t do.”

I have never found a good project manager who had a problem with this credo. Bad project managers on the other hand, see it as anathema to their survival strategies. Good project managers accept full responsibility for success or failure, bad project managers do not.

Good project managers also don’t spend all day in an office playing with Excel and Microsoft Project. Nor do they spend all day in meetings or on conference calls. Good project managers integrate themselves into the very bowels of the project and ‘walk-and-talk’ on a daily basis.

Walk and talk refers to the practice of meeting with real workers at all levels of the project, especially end users. Good project managers make the time to talk to end users every day and because of this they know more about what is happening than any senior manager. They are ‘in-touch’ with the project and are constantly aware of changes, problems and successes. Good project managers who practice the walk and talk technique are never surprised in project or management meetings because they always know more than anyone else at the meeting and they always have the very latest information. This is probably why they are such good project managers. If you aren’t prepared to invest at least one hour of your time every day walking and talking to real users then you shouldn’t be a project manager.

Good project managers also always know how to select and manage their team. Because they are natural leaders, management is a natural and comfortable process for them. There is never any doubt in a good project manager’s team about who the leader is and who will make the final decisions and then take responsibility for them. There is no disseminated responsibility. The opposite is always true in a bad project manager’s team with disseminated responsibility and no clear record of who made what decision.

The calibre of the bad project manager’s team is always significantly lower than that of the good project manager’s team. This is because mediocre people always hire mediocre people, and a bad project manager is afraid of strong, capable staff because he or she finds them threatening. A good project manager, on the other hand, loves working with strong, capable people and revels in the ongoing challenge of managing them. A good project manager is never threatened by strong, capable staff; au contraire, he seeks them out because they make it easier for him (or her) to be successful.

There is no magical formula that will ensure a successful project, completed on time and on budget and with all contracted deliverables accepted and signed off. It also doesn’t matter what project management tool you use as long as you do use a project management tool. I don’t particularly like the latest version of Microsoft Project (and that is an understatement) but if required I could use it to manage any project no matter how big and how complex. It isn’t the tool; it is the person that counts.

This is simple advice like my favourite about how to do well on the stock market, “buy low and sell high.” If you want a successful project, always start with a successful project manager. He or she will take care of everything else.

Workflow – What does it really entail?

by Frank 8. April 2012 06:00

Workflow has been defined as “the glue that binds business processes together.” Depending upon your background and experience that particular definition may or may not be as clear as mud. Despite having been a key factor in business application processing for a very long time, workflow is still very poorly understood by many in business and is more often than not too narrowly defined.

For example, you do not need to pay big bucks for a heavy-duty workflow package and all the services associated with it to implement workflow in your organization. Workflow is really about automating some business process using whatever tool is appropriate. You can automate a business process with Word or Excel or Outlook for that matter, and the most common starting point is to first capture a paper document as a digital document using simple tools like a document scanner. You don’t even need a computer (apart from the human brain, the world’s best computer) to implement workflow.

Designing and implementing workflow is more about the thought processes, about evaluating what you are doing and why you are doing it and then trying to figure out a better and more efficient way to do it. It is about documenting and analysing a current business process and then redesigning it to make it more appropriate and more efficient. It is by making it more efficient that you make productivity gains; ideally, you end up doing more with less and adding more value.
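To make that concrete, here is a minimal sketch of a redesigned business process (a document approval) expressed as a simple state machine. The states, actions and transitions are hypothetical; the point is that once the process is documented this precisely, any tool, from a workflow engine down to a spreadsheet, can enforce it.

```python
# Hypothetical approval workflow: each state lists the actions it allows
# and the state each action leads to.
TRANSITIONS = {
    "draft":        {"submit": "under_review"},
    "under_review": {"approve": "approved", "reject": "draft"},
    "approved":     {"archive": "archived"},
}

def advance(state, action):
    """Move a document to its next state, enforcing the documented process rules."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"action '{action}' is not allowed in state '{state}'")

state = "draft"
for action in ("submit", "approve", "archive"):
    state = advance(state, action)
    print(action, "->", state)
```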

You shouldn’t undertake any investigation of new workflows without first having defined objectives and metrics. You should also always begin with some basic questions of your staff or end-users:

  1. What are you doing now that you think could be done better?
  2. What aren’t you doing now that you think you should be doing?
  3. What are you doing now that you don’t think is necessary?

I call these the three golden questions and they have served me well throughout my consulting career. They are simple enough and specific enough that most end-users can relate to them and produce answers. These three simple questions provide the foundation for any business process re-engineering to come. They are also the catalyst to kick off the required thought processes in your end-users. Out of these three simple questions should come many more questions and answers and the information you need to solve the problem.

In every case in the past I have been able to add value well before using tools and creating workflows just by suggesting changes to current manual business processes. As I said earlier, workflow is really about thought processes, “How can I do this in a better and more efficient way?”

Adding value always begins by saving time and money and usually also entails providing better access to information. Real value in my experience is about ensuring that workers have access to the precise information they need (not more and not less) at the precise time they need it (not earlier and not later). It sounds simple but it is the root of all successful business processes; that is, “please just give me what I need when I need it and then I can get the job done.” Modern ‘just-in-time’ automated production lines only work if this practice is in place; it is fundamental to the low cost, efficient and high quality production of any product or service.

When something ‘just works’ very few of us notice it but when something doesn’t work well it frustrates us and we all notice it. Frustrated workers are not happy or productive workers. If we do our job well we take away the sources of frustration by improving work processes to the point where they ‘just work’ and are entirely appropriate and efficient and allow us to work smoothly and uninterrupted without frustration and delays. This should be our objective when designing new workflows.

Metrics are important and should always be part of the project. You begin by taking measurements at the start and then, after careful analysis, predict what the measurements will be after the project. You must have a way of measuring, using criteria agreed beforehand with your end-users, whether or not you have been successful and to what degree. It is a very bad trades person who leaves without testing his work. We have all had experiences with bad trades people who want to be paid and gone before you test the repaired appliance, roof or door. Please do not be a bad trades person.

Metrics are the way we test our theory. For example, “If we re-engineer this series of processes the way I have recommended you will save two hours of time per staff member per day and will be able to complete the contract review and sign off within two days instead of seven days.” The idea is to have something finite to measure against. We are talking quantitative as opposed to qualitative measurement. An example of a qualitative measurement would be, “If we re-engineer this series of processes the way I have recommended everyone will be happier.” Metrics are a quantitative way to measure results.

In summary, implementing workflow should always be about improving a business process; about making it better, more appropriate and more efficient. Any workflow project should begin with the three golden questions and must include defined objectives and quantitative metrics. The most important tool is the human brain and the thought processes that you will use to analyse current processes and design improved processes. Every new workflow should add value; if it doesn’t you should not be doing it.

Critically, workflow must be about improving the lot of your staff or end users. It is about making a process easier, more natural, less frustrating and even more enjoyable. The staff or end users are the only real judges because, no matter how clever you think your solution is, if they don’t like it it will never work.
