Do you really want that job you are applying for?

by Frank 26. August 2012 06:00

I own and run a software company that builds, sells, installs and supports an enterprise content management solution called RecFind 6. As such, I employ programmers, support specialists, accountants, consultants, trainers, pre-sales people and sales people to name but a few categories. This means I am always hiring and always reviewing applications from candidates.

Basically, most of the applications I receive are rubbish. They are badly written, badly formatted, not ‘selling’ documents and almost never focussed on the position I am advertising. This is very sad but it does make vetting an avalanche of resumes pretty easy. I would probably spend no more than a minute or two reading each resume in the first pass to separate the real candidates from the flotsam. I move the results into two folders, one called ‘Possible’ and the other called ‘No way’.

This may sound a little impersonal but I have no patience with people who waste my time by firstly not reading the advertised job description properly and then by sending in a non-selling document. In fact, most resumes I see are great big red flags saying, “Please don’t hire me, I am a dope who didn’t read your ad properly and then couldn’t be bothered even getting the spelling and grammar correct or trying to sell myself in any way”.

So my first piece of advice is: if you are too lazy to allocate the time and effort required, or simply can’t be bothered to sell yourself in the most professional manner possible, then don’t bother applying, because all you are doing is wasting your time and the time of any prospective employer. Prospective employers also have long memories, so rest assured your next application to the same firm will be instantly relegated to the waste bin.

I only hire professionals and professionals do not send in a non-professional job application.

I only hire people who respect my time and I only hire people who manage to convince me that they really want the job I am advertising and are the best person for that role.

I figure that the effort you are prepared to expend on what should be your most important task at this time (i.e., finding employment) is indicative of the quality of work I can expect from you as an employee. If you send me a poor quality application then I assume everything you would do for me as an employee will be of a similar poor standard. If you are too lazy or too careless to submit a winning application then I can only assume you would also behave in this manner after employment so I have zero interest in you.

This is the bit I struggle to understand. How come the applicant doesn’t understand the obvious correlation any prospective employer makes between the quality of the job application and the quality of the person?

Please allow me to give you some simple common-sense advice that comes from a very experienced employer of people.

Always:

  • Read the job ad very carefully. Note the prerequisites and requirements; the employer put them in for a reason and he/she would really appreciate it if you didn’t waste his/her time by applying for a position you do not qualify for.
  • Include a cover letter personalized for each and every job application. Your objective should be to convince the prospective employer that the job advertised is perfect for you and that you are in turn a perfect fit for the job. If your past experience or skillset isn’t a perfect fit, use the cover letter to explain why it isn’t a problem and why you are still the right person for the job being advertised. All potential employers are impressed by someone who takes the time and trouble to align their skills and experience to the job on offer. Most importantly, use words and phrases from the job ad in your cover letter. This helps convince the potential employer that you have really thought about the position and have put intelligent time into your application.
  • Clean up your resume, spell and grammar check it and convert it to a PDF for a much better and more professional looking presentation effect. All potential employers can’t help but appreciate a well presented and professional looking resume; it sets you apart.

In the end it is all about the initial impression you convey to the prospective employer. You have one shot so make sure it is a good one.

You need to convince your prospective employer that you selected their advertised job to respond to because it really interests and excites you and that you have the attitude, aptitude, character, experience and skillset required to make the most of this position. You have to convince them that you would be an asset to their organization.

It doesn’t take long to write a personalised cover letter, maybe an hour or two at most, and it should never be more than one page long. My final advice is that if you don’t think the advertised position is worth an hour or two of your time then don’t respond, because you will be wasting your time. Don’t ‘shotgun’ job opportunities with multiple low-quality and non-selling applications. Instead focus on just the jobs you really like and then submit a smaller number of high-quality and personalised applications. I guarantee that your success rate will be much higher, that you will be invited to more interviews and that you will eventually get the job of your dreams.

The simple message is that you will get out of the process precisely what you put into the process. It is a tough world but in my experience effort is always rewarded. For your sake, please make the effort.

Are you addressing the symptoms or the problem?

by Frank 19. August 2012 06:00

We are a software company building, selling and supporting our product RecFind 6, an information management and enterprise content management system. We have an in-house support department (we don’t outsource anything) and thousands of customers who contact it with questions and reports of problems they are having.

However, as I suspect happens at most software vendors, it is often very difficult for my support people to diagnose the real problem initially. Obviously, if there is an error message then it is easier to resolve, but in most cases there is no error message, just a user’s explanation of what he or she thinks is the product not working properly.

If we can connect to the user’s workstation using GoToAssist then we can usually ‘see’ firsthand what the problem is and then help the customer. However, this is not always possible, and in a lot of cases my people are working ‘blind’ via phone or email and the only recourse is a question-and-answer dialog until we get to the point where we can define what the user thinks is going wrong and we can get the history of the problem. That is, “When did it start to happen? What changed? Does it happen with everyone or just some users?” Etc., etc.

My people are pretty good at this process but even they get caught occasionally when the customer describes what he/she thinks the solution is rather than what the problem is. This usually takes the form of the customer telling us the ‘fix’ we need to make to the product to solve his/her ‘problem’. The wise support person will always ask, “What were you trying to do?” Once you can determine what the customer was trying to do, you then understand why they are asking for the particular ‘fix’. In most cases, the real problem is that the customer isn’t using the right functionality, and once shown how to use the right functionality the need for a ‘fix’ goes away.

Problems also arise when my support people start mistakenly addressing the symptoms instead of the problem. In all fairness, it is often hard to differentiate the two but you can’t fix a problem by addressing the symptoms; you have to go back further and first define and then fix the root problem. Once the root problem is fixed the symptoms magically disappear.

For example, a customer reports multiple documents being created with the same auto number (i.e., duplicate numbers) as a problem. This isn’t really the problem, though that is how the customer sees it. It is in fact a symptom and a clue to the identification of the real problem. In the above example, the root problem will be either an auto-number algorithm not working properly or an auto-number configuration with a flawed design. The former is what we call a ‘bug’ and the latter is what we call ‘finger trouble’; the auto-number configuration was working precisely as designed but not as the customer intended.

Bugs we fix in code, but finger trouble we fix by first clearly understanding what the customer wants to achieve and then by helping them to configure the functionality so it works as expected.
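To make the distinction concrete, here is a minimal sketch in Python (emphatically not RecFind’s actual code) of how the ‘bug’ variety of duplicate auto numbers typically arises: an unlocked read-then-write counter lets two concurrent users pick up the same value, while the same counter guarded by a lock cannot.

```python
# A minimal sketch, not RecFind's actual code: why an unlocked
# read-then-write counter can hand out duplicate auto numbers.
import threading

class NaiveAutoNumber:
    """Read-then-write with no locking; two concurrent callers can
    read the same current value and both receive the same number."""
    def __init__(self):
        self.current = 0

    def next(self):
        value = self.current + 1   # another caller may read here too...
        self.current = value       # ...so both write and return 'value'
        return value

class SafeAutoNumber:
    """The same counter made safe: only one caller increments at a time."""
    def __init__(self):
        self.current = 0
        self.lock = threading.Lock()

    def next(self):
        with self.lock:
            self.current += 1
            return self.current
```

The ‘finger trouble’ variety needs no code fix at all; an auto number configured, say, to reset every year will happily reissue last year’s numbers, exactly as designed but not as the customer intended.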

All experienced support people get to know the difference between:

  • what the customer thinks is the solution versus the actual problem; and
  • the symptoms versus the problem.

In my experience these are the two most common challenges faced when handling support calls. Recognizing both as early as possible is critical to achieving a speedy resolution and minimizing frustration; failing to recognize them early leads to longer resolution times and unhappy customers.

If we extend our support experience to real life we realize that these same two challenges face us in everyday life and in all of our social interactions. It is why we often argue at cross-purposes, each party seeing the problem differently because of different perceptions of what the real problem is.

The challenges of misunderstanding are also often harder to overcome in real life because, unlike a support call which has form and structure, our social interactions are mostly unstructured and opportunistic. We don’t start with a problem, we start with a casual dialog and don’t realize we are about to enter a conflict zone until it sneaks up on us.

So if you find yourself in an argument, please pause and take the time to ask yourself and the other party, “Just what is it exactly we are arguing about?” Which, upon reflection, is exactly how we should handle each and every support call.

If we take the time to properly define the real problem we would spend far less time arguing and making people unhappy and far more time enjoying the company of our customers and friends. It is a no-brainer really: who wants to go through life in constant conflict?

For my part, I will just continue to ask, “Before I address your request for a change, would you mind explaining what you were actually trying to achieve? Can you please show me?” and “What were you doing when you first saw the problem? Please start from the beginning and walk me through the process.” These two questions have worked for me for a very long time and I certainly hope they work for you.

 

Is Information Management now back in focus?

by Frank 12. August 2012 06:00

When we were all learning about what used to be called Data Processing we also learned about the hierarchy or transformation of information. That is, “data to information to knowledge to wisdom.”

Unfortunately, as information management is part of what we call the Information Technology industry (IT) we as a group are never satisfied with simple self-explanatory terms. Because of this age-old flaw we continue to invent and hype new terms like Knowledge Management and Enterprise Content Management most of which are so vague and ill-defined as to be virtually meaningless but nevertheless, provide great scope for marketing hype and consultants’ income.

Because of the ongoing creation of new terminology and the accompanying acronyms we have managed to confuse almost everyone. Personally I have always favoured the term ‘information management’ because it tells it like it is and it needs little further explanation. In the parlance of the common man it is an “old un, but a good un.”

The thing I most disliked about the muddy knowledge management term was the claim that computers and software could produce knowledge. That may well come in the age of cyborgs and true artificial intelligence but I haven’t seen it yet. At best, computers and software produce information which human beings can convert to knowledge via a unique human cognitive process.

I am fortunate in that I have been designing and programming information management solutions for a very long time so I have witnessed first-hand the enormous improvements in technology and tools that have occurred over time. Basically this means I am able to design and build an infinitely better information management solution today than I could have twenty-nine years ago when I started this business. For example, the current product RecFind 6 is a much better, more flexible, more feature-rich and more scalable product than the previous K1 product, and it in turn was an infinitely better product than its predecessor, RecFind 5.

One of the main factors in them being better products than their predecessors is that each time we started afresh with the latest technology; we didn’t build on the old product, we discarded it completely and started anew. As a general rule of thumb I believe that software developers need to do this on around a five-year cycle. Going past the five-year life cycle inevitably means you end up compromising the design because of the need to support old technology. You are carrying ‘baggage’, and it is like trying to run a marathon with a hundred-pound (45 kg) backpack.

I recently re-read an old 1995 white paper I wrote on the future of information management software which I titled “Document Management, Records Management, Image Management Workflow Management...What? – The I.D.E.A”. I realised after reading this old paper that it is only now that I am getting close to achieving my lofty ambitions as espoused in the early paper. It is only now that I have access to the technology required to achieve my design ambitions. In fact I now believe that despite its 1995 heritage this is a paper every aspiring information management solution creator should reference because we are all still trying to achieve the ideal ‘It Does Everything Application’ (but remember that it was my I.D.E.A. first).

Of course, if you are involved in software development then you realise that your job is never done. There are always new features to add, there are always new releases of products like Windows and SQL Server to test and certify against, and there are always new development tools and standards like Visual Studio and HTML5 to learn and start using.

You also realise that software development is probably the dumbest business in the world to be part of with the exception of drug development, the only other business I can think of which has a longer timeframe between beginning R&D and earning a dollar. We typically spend millions of dollars and two to three years to bring a brand new product to market. Luckily, we still have the existing product to sell and fund the R&D. Start-ups however, don’t have this option and must rely on mortgaging the house or generous friends and relatives or venture capital companies to fund the initial development cycle.

Whatever the source of funding, from my experience it takes a brave man or woman to enter into a process where the first few years are all cost and no revenue. You have to believe in your vision, your dream and you have to be prepared for hard times and compromises and failed partnerships. Software development is not for the faint hearted.

When I wrote that white paper on the I.D.E.A. (the It Does Everything Application; my ‘idea’ or vision at that time) I really thought that I was going to build it in the next few years; I didn’t think it would take another fifteen years. Of course, I am now working on the next release of RecFind so it is actually more than fifteen years.

Happily, I now market RecFind 6 as an information management solution because information management is definitely back in vogue. Hopefully, everyone understands what it means. If they don’t, I guess that I will just have to write more white papers and Blogs.

Are you really managing your emails?

by Frank 5. August 2012 06:00

It was a long time ago that we all realized that emails make up eighty percent or more of business correspondence, and little has changed today. Hopefully, we also realised that most of us weren’t managing emails and that this left a potentially lethal compliance and legal hole to plug.

I wrote some white papers on the need to manage emails back in 2004 and 2005 (“The need to manage emails” and “Six reasons why organizations don’t manage emails effectively”) and when I review them today they are just as relevant as they were eight years ago. That is to say, despite the plethora of email management tools now available most organizations I deal with still do not manage their emails effectively or completely.

As a recent example, we had an inquiry from the records manager at a US law firm who said she needed an email management solution, but it had to be a ‘manual’ one where each worker would decide if, when and how to capture and save important emails into the records management system. She went on to state emphatically that under no circumstances would she consider any kind of automatic email management solution.

This is the most common request we get. Luckily, we have several ways to capture and manage emails including a ‘manual’ one as requested as well as a fully automatic one called GEM that analyses all incoming and outgoing emails according to business rules and then automatically captures and classifies them within our electronic records and document management system RecFind 6.

We have to provide multiple options because that is what the market demands but it is common sense that any manual system cannot be a complete solution. That is, if you leave it up to the discretion of the operator to decide which emails to capture and how to capture them then you will inevitably have an incomplete and inconsistent solution.  Worse still, you will have no safeguards against fraudulent or dishonest behaviour.

Human beings are, by definition, ‘human’ and not perfect. We are by nature inconsistent in our behaviour on a day to day basis. We also forget things and sometimes make mistakes. We are not robots or cyborgs and consistent, perfect behaviour all controlled by Asimov’s three laws of robotics is a long, long way off for most of us.

This means, dear reader, that we cannot be trusted to always analyse, capture and classify emails in a one-hundred-percent consistent manner. Our excuse is that we are, in fact, just human.

The problem is exacerbated when we have hundreds or even thousands of inconsistent humans (your staff) all being relied upon to behave in an entirely uniform and consistent manner. It is in fact ludicrous to expect entirely uniform and consistent behaviour from your staff and it is bad practice and just plain foolish to roll out an email management system based on this false premise. It will never meet expectations. It will never plug all the compliance and legal holes and you will remain exposed no matter how much money you throw at the problem (e.g., training, training and re-training).

The only complete solution is one based on a fully-automatic model whereby all incoming and outgoing emails are analysed according to a set of business rules tailored to your specific needs. This is the only way to ensure that nothing gets missed. It is the only way to ensure that you are in fact plugging all the compliance and legal holes and removing exposure.
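To make the idea of ‘business rules’ concrete, here is a minimal sketch in Python. It is not the actual GEM implementation; the rules, addresses and classification terms are invented for illustration. Every incoming or outgoing message is tested against the rules and captured with a classification when one matches.

```python
# A minimal sketch (not the actual GEM implementation) of rule-based
# email capture: every message is tested against business rules and
# classified when any rule matches.
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    recipients: list
    subject: str
    body: str
    attachments: list = field(default_factory=list)

# Hypothetical rules: (name, predicate, classification to assign).
RULES = [
    ("client correspondence",
     lambda e: e.sender.endswith("@bigclient.com"),
     "Correspondence/Clients"),
    ("contracts",
     lambda e: "contract" in e.subject.lower(),
     "Legal/Contracts"),
    ("anything with attachments",
     lambda e: len(e.attachments) > 0,
     "General/Attachments"),
]

def classify(email):
    """Return the classification of the first matching rule, else None."""
    for name, predicate, classification in RULES:
        if predicate(email):
            return classification
    return None  # no rule matched; the message is not captured

msg = Email("bob@bigclient.com", ["sue@ourfirm.com"], "Contract renewal", "...")
print(classify(msg))  # -> Correspondence/Clients
```

Because the rules run on every single message without exception, the coverage and consistency problems of the manual model simply cannot arise.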

The fully automatic option is also the most cost-effective by a huge margin.

The manual approach requires each and every staff member to spend (waste?) valuable time every single day deciding which emails to capture and then actually going through the capture process time and time again. It also requires some form of licence per employee or per desktop. This licence has a cost and it also has to be maintained, again at a cost.

The automatic approach doesn’t require the employee to do anything. It also doesn’t require a licence per employee or desktop because the software runs in the background talking directly to your email server. It is what we call a low cost, low impact and asynchronous solution.

The automatic model increases productivity and lowers costs; it provides a complete and entirely consistent email management solution at a significantly lower cost than any ‘manual’ model. So, why is it so hard to convince records managers to go with the fully automatic solution? This is the million-dollar question, though in some large organizations it is a multi-million-dollar question.

My response is that you should not be leaving this decision up to the records manager. Emails are the business of all parts of any organization; they don’t just ‘belong’ to the records management department. Emails are an important part of most business processes particularly those involving clients and suppliers and regulators. That is, the most sensitive parts of your business. The duty to manage emails transects all vertical boundaries within any organization. The need is there in accounts and marketing and engineering and in support and in every department.

The decision on how to manage emails should be taken by the CEO or at the very least, the CIO with full cognizance of the risks to the enterprise of not managing emails in a one-hundred percent consistent and complete manner.

In the end email management isn’t in fact about email management, it is about risk management. If you don’t understand that and if you don’t make the necessary decisions at the top of your organization you are bound to suffer the consequences in the future.

Are you going to wait for the first law suit or punitive fine before taking action?

Integration, what does it really entail?

by Frank 10. June 2012 06:00

Over the last 28 years of running this business I have had hundreds of conversations with customers and partners about integration. In most cases, when I have tried to obtain more details about exactly what they wanted to integrate to and how, I have drawn a blank. It is as if the word ‘integration’ has some magical connotation that everyone is supposed to instantly understand. Being a very logical and technical person, I guess I fail the test because to me it is just a general term that covers a multitude of possibilities.

I have also designed and/or programmed many integrations in my life and I can honestly say that no two have ever been the same. So, assuming that you have a requirement for two or more of the application software products you use to ‘integrate’ how should you go about defining what is required so it is as clear and as unambiguous as possible to your partners, the people you will ask to do the work?

Integration is usually about sharing information and importantly, not duplicating effort (i.e., having to enter data twice) or duplicating information. Having to enter the same information twice into two different systems is just plain dumb and bad design. Maintaining duplicate copies of information is even dumber and dangerous because sooner or later they will get out of step and you will suffer from what we call a loss of data integrity. This is usually why we need integration, to share information and to avoid duplicate effort and duplicate information.

For our purpose let’s assume that we have two application systems, A and B, that we need to be ‘integrated’; both use a SQL database to store data. Applications A and B are produced by different vendors that haven’t worked together before. The first fact to face is that in the normal course of events each vendor is going to want the other vendor to do all the work and each vendor is going to want the other vendor to utilize its proprietary integration methodology (e.g., Application Program Interface (API) or Software Development Kit (SDK)). This is your first big challenge; you need to get the vendors to agree to work together because no matter how this turns out both are going to have to do work and contribute; you can’t complete an integration using just one vendor. That is, the most important thing you have to do is to manage the vendors involved. You can’t just leave it up to them; you need to manage the process from beginning to end.

The second most important thing you have to do is to actually define and document the integration processes as clearly as possible. Here is a checklist to guide you:

1.    Will application A need to access (i.e., read) data held by application B?

2.    Will application B need to access data held by application A?

3.    How often and at what times is access to data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

4.    How is the data identified? That is, how does application A know what data it needs to access in the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be accessed?

5.    Will application A need to transfer data to application B (i.e. write data to the B database)?

6.    Will application B need to transfer data to application A?

7.    How often and at what times is a transfer of data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

8.    How is the data identified? That is, how does application A know what data it needs to transfer to the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be transferred? (A minimal sync sketch follows this checklist.)

9.    Does application A have an API or SDK?

10.  What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time-consuming.

11.  Does application B have an API or SDK?

12.  What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time-consuming.

13.  Is the vendor of application A happy to assign a suitably qualified technical person (not a sales person or pre-sales person) to be the interface?

14.  Is the vendor of application B happy to assign a suitably qualified technical person (not a sales person or pre-sales person) to be the interface?

15.  What is your timescale? When does it have to be completed? What is the ‘driver’ event?

16.  What is your budget? Basically, vendors work in a commercial environment and if they do work then they expect to get paid. As a rule, the size of the budget will depend directly upon your management skills and the quality of your integration specification.
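As a concrete illustration of questions 3, 4, 7 and 8, here is a minimal sketch in Python of a one-way delta sync. It is not any particular vendor’s method: sqlite3 stands in for the real database, the documents table and post_to_app_a() are invented for the example, and the ‘rule’ is simply ‘every row modified since the last run’.

```python
# A minimal sketch of a one-way delta sync: application A reads only
# the rows in application B's database that changed since the last run.
import sqlite3

def post_to_app_a(record):
    # In a real integration this would call vendor A's API or SDK.
    print("sending to A:", record)

def sync(db_path, last_sync):
    """Transfer rows modified after last_sync; return the new watermark."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, title, modified FROM documents "
        "WHERE modified > ? ORDER BY modified", (last_sync,)).fetchall()
    watermark = last_sync
    for rec_id, title, modified in rows:
        post_to_app_a({"id": rec_id, "title": title})
        watermark = max(watermark, modified)
    conn.close()
    return watermark  # persist this so the next run sees only new changes
```

Even a sketch this small forces you to answer the checklist: which direction the data flows, how changed rows are identified, and what happens between runs.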

Please keep in mind that there are always multiple ways to integrate; there is never just a single solution. The best way is always the simplest way, and this is usually also the lowest-cost and quickest way as well as the cheapest to maintain. Think about the future: the most complex solution is always the most difficult and most expensive to maintain over time. Think KISS; minimize your pain and expense.

As a guideline, the vendor with the most work to do is usually the best one to be the ‘lead’ in the integration (remember, both have to be involved or it won’t work). So, if for example vendor A needs to read data from Vendor B’s database and then massage it and utilize it within application A then vendor A is the natural lead. All vendor B has to do is expose its API and provide the required technical assistance to vendor A so vendor A can successfully program to the API.

However, in the end it will usually be the vendor that is the most cooperative and most helpful that you will choose. If you choose the vendor you most trust and work best with to be the lead then you will maximize your chances of completing a successful integration on time and on budget.

 

What is really involved in converting to a new system?

by Frank 27. May 2012 06:00

Your customer’s old system is now way past its use-by date and they have purchased a new application system to replace it. Now all you have to do is convert all the data from the old system to the new system; how hard can that be?

The answer is that it can be very, very hard to get right and it can take months or years if the IT staff or the contractors don’t know what they are doing. In fact, the worst case is that no one can actually figure out how to do the data conversion, so you end up two years later still running the old, unsupported and now about-to-fail system. The really bad news is that this isn’t just the worst-case scenario, it is the most common scenario, and I have seen it happen time and time again.

People who are good at conversions are good because they have done it successfully many times before. So, don’t hire a contractor based on potential and a good sales spiel; hire a contractor based on track record, on experience and on a good many previous references. The time to learn how to do a conversion isn’t on your project.

I will give you guidelines on how to handle a data conversion but as every conversion is different, you are going to have to adapt my guidelines to your project and you should always expect the unexpected. The good news is that if you have a calm, logical and experienced head then any problem is solvable. We have handled hundreds of conversions from every type of system imaginable to our RecFind product and we have never failed even though we have run into every kind of speed bump imaginable. As they say, “expect the best, plan for the worst, and prepare to be surprised.”

1.    Begin by reviewing the application to be converted by looking at the ‘screens’ with someone who uses the system and understands it. Ask the user what fields/data they want to convert. Take screenshots for your documentation. Remember that a field on the screen may or may not be a field in the database; the value may be calculated or generated automatically. Also remember that even though a screen may be called, say, “File Folder”, the fields you can see may not all be part of the file folder table; they may be ‘linked’ fields in other tables in the database.

2.    You need to document and understand the data model, that is, all the tables and fields and relationships you will need to convert. See if someone has a representation of the data model but never assume it is up to date; in fact, always assume it is not up to date. You need to work with an IT specialist (e.g., the database administrator) and utilize standard database tools like SQL Server Management Studio to validate the data model of the old system (a small introspection sketch follows this list).

3.    Once you think you understand the data model and data to be converted you need to document your thoughts in a conversion report and ask the customer to review and approve it. You won’t get it right the first time, so expect this to be an iterative process. Remember that the customer will be in ‘discovery’ mode also.

4.    Once you have acceptance of the data to be converted you need to document the data mapping. That is, show where the data will go in the new application. It would be extremely rare that you would be able to duplicate the data model from the old application; it will usually be a case of adapting the data from the old system to the different data model of the new application. Produce a data mapping report and submit it to the customer for sign-off. Again, don’t expect to get this right the first time; it is also an iterative process because both you and the customer are in discovery mode.

5.    Expect that about 20% or more of the data in the old system will be ‘dirty’; that is, bad or duplicate and redundant data. You need to make a decision about the best time to clean up and de-dupe the data. Sometimes it is in the old application before you convert but often it is in the new application after you have converted because the new application has more and better functionality for this purpose.   Whichever method you choose, you must clean up the data before going live in production.

6.    Expect to run multiple trial conversions. The customer may have approved a specification but reading it and seeing the data exposed in the new application are two very different experiences. A picture is worth a thousand words and no one is smart enough to know exactly how they want their data converted until they actually see what it looks like and works like in the new application. Be smart and bring in more users to view and comment on the new application; more heads are better than one and new users will always find ways to improve the conversion. Don’t be afraid of user opinion, actively encourage and solicit it.

7.    Once the data mapping is approved you need to schedule end-user training (as close as possible to the cutover to the new system) and the final conversion prior to cutover.
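As flagged in step 2, here is a small sketch of validating the real data model from the database itself rather than trusting old documentation. The INFORMATION_SCHEMA views are standard SQL; the connection string, server and database names are placeholders, and pyodbc is just one convenient way to run the query.

```python
# A minimal sketch: list every table, column and data type as the old
# database actually defines them, not as the old documentation claims.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=oldserver;DATABASE=legacy;Trusted_Connection=yes")  # placeholders
cursor = conn.cursor()
cursor.execute("""
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_NAME, ORDINAL_POSITION
""")
for table, column, data_type in cursor.fetchall():
    print(f"{table}.{column}: {data_type}")
conn.close()
```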

Of course for the above process to work you also need the tools required to extract data from the old system and import it into the new system. If you don’t have standard tools you will have to write a one-off conversion program. The time to write this is after the data mapping is approved and before the first trial conversion. To make our life easy we designed and built a standard tool we call Xchange; it can connect to any data source and then map and write data to our RecFind 6 system. However, this is not an easy program to design and write and you are unlikely to be able to afford to do this unless you are in the conversion business like we are. You are therefore most likely going to have to design and write a one-off conversion program.

One alternative tool you should not ignore is Microsoft’s Excel. If the old system can export data in CSV format and the new system can import data in CSV format then Excel is the ideal tool for cleaning up, re-sequencing and preparing the data for import.
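If you prefer a script to a spreadsheet, the same CSV clean-up can be done in a few lines of Python; the file names here are hypothetical and the only ‘cleaning’ shown is trimming whitespace and dropping exact duplicate rows.

```python
# A minimal sketch of CSV clean-up between export and import:
# trim every field and skip rows that are exact duplicates.
import csv

seen = set()
with open("export_old.csv", newline="") as src, \
     open("import_new.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        cleaned = tuple(field.strip() for field in row)
        if cleaned in seen:
            continue  # de-dupe: this exact row was already written
        seen.add(cleaned)
        writer.writerow(cleaned)
```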

And finally, please do not forget to sanity check your conversion. You need to document exactly how many records of each type you exported so you can ensure that exactly the same number of records exist in the new system. I have seen far too many examples of a badly managed conversion resulting in thousands or even millions of records going ‘missing’ during the conversion process. You must have a detailed record count going out and a detailed record count going in. The last thing you want is a phone call from the customer a month or two later saying, “it looks like we are missing some records.”
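The sanity check need not be elaborate. A sketch along these lines, with invented record types and counts standing in for your real export log and a query against the new system, is enough to surface missing records immediately.

```python
# A minimal sketch of the going-out versus going-in record count check.
# In practice 'exported' comes from your export log and 'imported' from
# a query against the new database; these numbers are invented.
exported = {"file_folders": 48210, "documents": 1203455, "boxes": 3911}
imported = {"file_folders": 48210, "documents": 1203001, "boxes": 3911}

for record_type, out_count in exported.items():
    in_count = imported.get(record_type, 0)
    if in_count != out_count:
        print(f"MISMATCH {record_type}: exported {out_count}, "
              f"imported {in_count}, missing {out_count - in_count}")
```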

Don’t expect the conversion to be easy and do expect it to be an iterative process. Always involve end-users and always sanity check the results.  Take extra care and you will be successful.

Moving your Records Management application to the Cloud; why would you do it?

by Frank 20. May 2012 06:00

We have all heard and read a lot about the Cloud and why we should all be moving that way. I wrote a little about this in a previous post. However, when we look at specific applications like records management we need to think about the human interaction and how that may be affected if we change from an in-house system to a hosted system. That is, how will the move affect your end-users and records management administrator? Ideally, it will make their job easier and take away some pain. If it makes their job harder and adds pain then you should not be doing it even if it saves you money.

We also need to think about the services we may need when we move to the Cloud. That is, will we need new services we don’t have now and will the Cloud vendor offer to perform services, like application maintenance, we currently do in-house?

In general, normal end-user functions should work the same whether we are running off an internal system or a Cloud-based one. This of course will depend upon the functionality of your records management software. Hopefully, there will be no difference to either the functionality or the user interface when you move to the Cloud. For the sake of this post let’s assume that there is a version of your records management system that can run either internally or in the Cloud and that the normal end-user interface is identical, or near enough that it doesn’t matter. If the end-user interface is massively different then you face extra cost and disruption because of the need to convert and retrain your users, and this would be a reason not to move to the Cloud unless you were planning to change vendors and convert anyway.

Now we need to look at administrator functions, those tasks usually performed by the records management administrator or IT specialist to configure and manage the application.  Either the records management administrator can perform the same tasks using the Cloud version or you need to ask the Cloud vendor to perform some services for you. This will be at a cost so make sure you know what it is beforehand.  There are some administrator functions you will probably be glad to outsource to the Cloud vendor such as maintaining the server and SQL Server and taking and verifying backups.

I would assume that the decision to move a records management application to the Cloud would and should involve the application owner and IT management. The application owner has to be satisfied that the end-user experience will be better or at least equal to that of the in-house installation and IT management needs to be sure that the integrity and security of the Cloud application will at the very least be equal to that of the in-house installation. And finally, the application owner, the records manager, needs to be satisfied that the IT support from the vendor of the Cloud system will be equal to or better than the IT support being received from the in-house or currently out-sourced IT provider.

There is no point in moving to the Cloud if the end-user or administrator experience will deteriorate just as there is no point in moving to the Cloud if the level of IT support falls.

Once you have made the decision to move your records management application to the Cloud you need to plan the cutover in a way that causes minimal disruption to your operation. Ideally, your staff will finish work on the in-house application on Friday evening and begin working on the Cloud version the next Monday morning. You can’t afford to have everyone down for days or weeks while IT specialists struggle to make everything work to your satisfaction. This means you need to test the Cloud system extensively before going live in production. In this business, little or no testing equals little or no success and a great deal of pain and frustration.

If it was me, I would make sure that the move to the Cloud meant improvements in all facets of the operation. I would want to make sure that the Cloud vendor took on the less pleasant, time-consuming and technical tasks like managing and configuring the required IT infrastructure. I would also want them to take on the more bothersome, awkward and technically difficult application administration tasks. Basically, I would want to get rid of all the pain and just enjoy the benefits.

You should plan to ‘outsource’ all the pain to make your life and the life of your staff easier and more pleasant and in doing so, make everyone more productive. It is like paying an expert to do your tax return and getting a bigger refund. The Cloud solution must be presented as a value proposition. It should take away all the non-core activities that suck up your valuable time and allow you and your staff more time to do the core activities in a better and more efficient way; it should allow you to become more productive.

I am a great believer in the Cloud as a means of improving productivity, lowering costs and improving data integrity and security. It is all doable given available facilities and technology but in the end, it is up to you and your negotiations with the Cloud provider.  Stand firm and insist that the end result has to be a better solution in every way; compromise should not be part of the agreement.

Using Terminal Digits to minimize “Squishing”

by Frank 13. May 2012 06:00

Have you ever had to remove files from shelving or cabinets and reallocate them to other spaces because a drawer or shelf is packed tight? Then had to do it again and again?

One of my favourite records managers used to call this the “Squishing” problem.

The squishing problem is inevitable if you start to load files from the beginning of any physical filing system, be it shelving or cabinets, and unload files from random locations as the retention schedule dictates. If you create and file parts (a new folder called part 2, part 3, etc., when the original file folder is full) then the problem is exacerbated. You may well spend a large part of your working life shuffling file folders from location to location; a frustrating, worthless and thankless task. You also get to inhale a lot of toxic paper dust and mites, which is not a good thing.

You may not be aware of it but there is a very simple algorithm you can utilize to make sure the squishing problem never happens to you. It is usually referred to as the ‘Terminal Digit’ file numbering system but you may call it whatever you like. The name isn’t important but the operation is.

Importantly, you don’t need to change your file numbering system other than by adding on additional numbers to the end. These additional numbers are the terminal digits.

The number of terminal digits you need depends upon how many file folders you have to manage. Here is a simple guideline:

  • One terminal digit (0 to 9) = one thousand files
  • Two terminal digits (00 to 99) = ten thousand files
  • Three terminal digits (000 to 999) = greater than ten thousand files

Obviously, you also have to have the filing space and appropriate facilities available (e.g., boxes, bays, etc.) to hold the required number of files for each terminal.

It is called the Terminal Digit system because you first have to separate your available filing space into a number of regular ‘terminals’. Each terminal is identified by a number, e.g., 0, 1, 2, 09, 23, 112, 999, etc.

The new terminal digit is additional and separate from your normal file number. It determines which terminal a file will be stored in. Let’s say your normal file number is of the format YYYY/SSSSSS. That is, the current year plus an automatically incrementing auto number like 2012/000189 then 2012/000190, etc. If we use two terminal digits and divide your available filing space into one hundred terminals (think of it as 100 equally sized filing slots or bays numbered 00 to 99) then your new file number format is YYYY/SSSSSS-99. The two generated file numbers above may now look like 2012/000189-00 and 2012/000190-01.

File folder 2012/000189-00 is filed in terminal number 00 and 2012/000190-01 is filed in terminal number 01. In a nutshell, what we are doing is distributing files evenly across all available filing space. We are not starting at terminal 00 and filling it up and then moving on to terminal 01, then terminal 02 when 01 is full, etc. Finding files is even easier because the first part of the file number you look at is the terminal digit. If a file number ends in 89 it will be in terminal 89, in file number order.
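Here is a minimal sketch of the scheme just described, matching the example numbers above: a round-robin counter hands each new file the next terminal (00 to 99), which is what spreads the files evenly.

```python
# A minimal sketch of two-terminal-digit numbering: a round-robin
# counter assigns each new file to the next terminal (00-99).
import itertools

terminal_cycle = itertools.cycle(range(100))  # 00, 01, ..., 99, 00, ...

def file_number(year, sequence):
    terminal = next(terminal_cycle)
    return f"{year}/{sequence:06d}-{terminal:02d}"

print(file_number(2012, 189))  # 2012/000189-00
print(file_number(2012, 190))  # 2012/000190-01
```

Deriving the terminal as the sequence number modulo 100 works just as well, with the advantage that the terminal can always be recomputed from the file number itself.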

The other good news is that when we unload files from the shelves, say at end of life or at the point in the lifecycle when they need to be sent offsite, we will also unload files evenly across all available filing space. If the terminals are actually big enough and you have calculated everything correctly, you should never again suffer from the ‘squishing’ problem and you should never again have to ingest paper dust and mites while tediously shuffling files from location to location.

Obviously, there is a little more to this than sticking a couple of digits on the end of your file number. I assume you are using a computerised records management system, so it will have to be changed or configured to correctly calculate the now-extended file number (including the new terminal digit), and your colour file labels will need to be changed to show the terminal digit in a prominent position.

There is also the question of what to do with your existing squished file store. Ideally you would start from scratch with your new numbering systems and terminals and wait for the old system to disappear as the files age and disappear offsite to Grace or Iron Mountain. That probably won’t be possible so you will have to make decisions based on available resources and budget and come up with the best compromise.

I can’t prove it but I suspect that the terminal digit system has been around since people began filing stuff. It is an elegantly simple solution to an annoying and frustrating problem and involves nothing more complicated than simple arithmetic.

The surprise is that so few organizations actually use it. In twenty-five-plus years in this business I don’t think I have seen it in use at more than one to two percent of the customers I have visited. I have talked about it and recommended it often, but the solution seems to end up in the too-hard basket; a shame really, especially for the records management staff charged with the constant shuffling of paper files.

It may be that you have a better solution but just in case you don’t, please humour me and have another look at the terminal digit filing solution. It may just save you an enormous amount of wasted time and make your long-suffering records staff a lot happier and a lot healthier.

 

Have you considered Cloud processing? There are significant benefits

by Frank 6. May 2012 06:00

Most of us have probably become more than a little numbed to the onslaught of Cloud advertising and the promotion of the ‘Cloud’ as the salvation for everyone and the panacea for everything. The Cloud is promoted by its aggrandizers as being both omnipotent and omniscient; both qualities I only previously associated with God.

This is not to say that moving business processing to the Cloud is not a good thing; it certainly is. I just wish that the promoters would tone down the ‘sell’ and clearly explain the benefits and advantages without the super-hype.

Those of us with long memories clearly recall the early hype about what was then called ASP, or Application Service Provider. This was the early progenitor of the Cloud and despite massive hype it did not fly. The reasons were simple: neither the technology nor the software (application and system) were up to the job. Great idea; pity it was about five years before its time.

Unfortunately, super-hype in our industry is usually associated with immature and unproven technology. Wiser, older people nod sagely and then wait a few years for the technology to catch up with the promises.

As an older (definitely) and wiser (hopefully) person I am now ready to accept that all the technology required for successful and secure Cloud processing is now available and proven; albeit ‘improved’ all the time, so still take care not to rush in with experimental technology.

As with many new technologies the secret is KISS; Keep It Simple Stupid. If it seems too complex then it is too complex. If the sales person can’t answer all of your questions clearly and unambiguously then walk away.

Most importantly, make sure you know all about all of the parties involved in the transaction. For example:

1.    What is the name of the data centre?

2.    Where is it located?

3.    Who ‘owns’ the rack and equipment and software at the data centre?

4.    What are the redundant features?

5.    What are the backup and recovery options?

6.    Is your vendor the owner of the co-hosted facility or do they subcontract to someone else? If they sub-contract is the company they subcontract to the owner or are they too just part of a chain of ‘hidden’ middle-men? It is critical for you to understand this chain of responsibility because if something goes wrong you need to know who to chase.

There are a lot more questions you need to ask but this Blog isn’t the place to list them all. I am sure your IT team and application owners will come up with plenty more. If they don’t, wake them up and demand questions.

Most small to medium organizations today simply do not have the time or expertise to run a computer room and manage and maintain a rack of servers. There is also a dearth of ‘real’ expertise and a plethora of phonies out there so hiring someone who is actually smart enough to manage your critical infrastructure is a very difficult exercise made more so by most business owners and managers simply not understanding the requirements or technology. It often becomes a case of the blind hiring the almost blind.

Most small to medium enterprises also cannot afford the redundancy required to ensure a stable and reliable infrastructure. A fifteen minute UPS is no substitute for a redundant bank of diesel generators and a guaranteed clean power supply.

Why should small to medium enterprises have to buy servers and networks and IT support? It isn’t part of their core business and this stuff should not be weighing down the balance sheet. Why should they be devoting scarce and expensive management time to activities that are not part of their core business?

In-house computer rooms will soon become as rare as dinosaurs and this is how it should be; they are an anachronism in this day and age, out of time and out of place.

All smart and business savvy small to medium organizations should be planning to progressively move all their processing to the Cloud so as to lower costs, improve service levels and reduce management stress. I say progressively because it is still wise to get wet slowly and to take little steps. Just like with your first two-wheel bicycle, it pays to practice with the training wheels on first. That way, you usually avoid those painful falls.

I like to think I am a little wiser because I still have scars from gravel rash when I was a kid. I am moving my RecFind 6 customers to the Cloud and I am moving my in-house processing to the Cloud but just like you, I am doing it slowly and carefully and triple-checking every aspect. I don’t take risks with my customers or my business and neither should you.

One last thing, I have the advantage of being very IT literate and of having a top IT team working for me so we have the in-house expertise required to correctly evaluate and select the most appropriate technology and options. If you do not have this level of in-house IT expertise then please take extra care and try to find someone to assist who does have the level of IT knowledge required. Once you sign up, it is too late. Buyer’s remorse is not a solution to any problem.

Are you running old and unsupported software? What about the risks?

by Frank 29. April 2012 20:59

Many years ago we released a 16 bit product called RecFind version 3.2 and we made a really big mistake. We gave it so much functionality (much of it way ahead of its time) and we made it so stable that we still have thousands of users.

It is running under operating systems like XP that it was never developed or certified for, and it is still ‘doing the job’ for hundreds of our customers. Most frustratingly, when we try to get them to upgrade they usually say, “We can’t justify the expense because it is working fine and doing everything we need it to do.”

However, RecFind 3.2 is decommissioned and unsupported, and the databases it uses (Btrieve, Disam and an early version of SQL Server) are also no longer supported by their vendors.

So our customers are capturing and managing critical business records with totally unsupported software. Most importantly, most of them also do not have any kind of support agreement with us (and this really hurts because they say they don’t need a support agreement because the system doesn’t fail) so when the old system catastrophically fails, which it will, they are on their own.

Being a slow learner, ten years ago I replaced RecFind 3.2 and RecFind 4.0 with RecFind 5.0, a brand new 32 bit product. Once again I gave it too much functionality and made it way too stable. We now have hundreds of customers still using old and unsupported versions of RecFind 5.0 and when we try to convince them to upgrade we get that same response, “It is still working fine and doing everything we need it to do.”

If I was smarter I would have built-in a date-related software time bomb to stop old systems from working when they were well past their use-by date. However, that would have been a breach of faith so it is not something we have or will ever do. It is still a good idea, though probably illegal, because it would have protected our customers’ records far better than our old and unsupported systems do now.

In my experience, most senior executives talk about risk management but very few actually practice it. All over the world I have customers with millions of vital business records stored and managed in systems that are likely to fail the next time IT updates desktop or server operating systems or databases. We have warned them multiple times but to no avail. Senior application owners and senior IT people are ignoring the risk and, I suspect, not making senior management aware of the inevitable disaster. They are not managing risk; they are ignoring risk and just hoping it won’t happen in their reign.

Of course, it isn’t just our products that are still running under IT environments they were never designed or certified for; this is a very common problem. The only worse problem I can think of is the ginormous amount of critical business data being ‘managed’ in poorly designed, totally insecure and teetering-on-failure, unsupportable Access and Excel systems; many of them in the back offices of major banks and financial institutions. One of my customers described the 80 or so Access systems that had been developed across his organization as the world’s greatest virus. None had been properly designed, none had any security and most were impossible to maintain once a key employee or contractor had left.

Before you ask, yes we do produce regular updates for current products and yes we do completely redesign and redevelop our core systems like RecFind about every five years to utilize the very latest technology. We also offer all the tools and services necessary for any customer to upgrade to our new releases; we make it as easy and as low cost as possible for our customers to upgrade to the latest release but we still have hundreds of customers and many thousands of users utilizing old, unsupported and about-to-fail software.

There is an old expression that says you can take a horse to water but you can’t make it drink. I am starting to feel like an old, tired and very frustrated farmer with hundreds of thirsty horses on the edge of expiration. What can I do next to solve the problem?

Luckily for my customers, Microsoft Windows Vista was a failure and very few of them actually rolled it out. Also, luckily for my customers, SQL Server 2005 was a good and stable product and very few found it necessary to upgrade to SQL Server 2008 (soon to be SQL Server 2012). This means that most of my customers using old and unsupported versions of RecFind are utilizing XP and SQL Server 2005, but this will soon change and when it does my old products will become unstable and even just stop working. It is just good luck and good design (programmed tightly to the Microsoft API) that some (e.g., 3.2) still work under XP. RecFind 3.2 and 4.0 were never certified under XP.

So we have a mini-Y2K coming but try as I may I can’t seem to convince my customers of the need to protect their critical and irreplaceable (are they going to rescan all those documents from 10 years ago?) data. And, as I alluded to above, I am absolutely positive that we are only one of thousands of computer software companies in this same position.

In fairness to my customers, the Global Financial Crisis of 2008 was a major factor in the disappearance of upgrade budgets. If the call is to either upgrade software or retain staff then I would also vote to retain staff. Money is as tight as it has ever been and I can understand why upgrade projects have been delayed and shelved. However, none of this changes the facts or averts the coming data-loss disaster.

All over the world government agencies and companies are managing critical business data in old and unsupported systems that will inevitably fail with catastrophic consequences. It is time someone started managing this risk; are you?

 
