Where have all the (good) applicants gone?

by Frank 24. June 2012 06:00

I am told again and again by the popular press and unpopular politicians (is there any other kind?) that we in Australia have a skills shortage. I agree, but with a strong proviso: we have a skills shortage, but we don’t have an applicant shortage.

We have been advertising for a support specialist (we actually hired one), software sales people and experienced .NET programmers. We are trying to grow and expand and the lack of good quality staff is the major impediment.

I have placed the ads on SEEK and LinkedIn and am also using the services of several recruiting firms, so we at least have wide coverage.

The initial problem is that the majority of candidates either don’t read the ad, don’t understand it, or just plain ignore its requirements. Please note that we are talking about very clear and unambiguous requirements like:

  • Please note that applications without a personalised cover letter articulating why you have the right attributes to be successful in this role will not be considered.
  • Previous applicants need not apply and all applicants must be Australian citizens or legal residents.

We also list skill or experience prerequisites, which most applicants either misread, don’t understand or just plain ignore. Again, we list them very clearly, as follows:

  • You will have 3+ years’ experience programming in .NET (preferably VB)
  • You will have 3+ years’ experience working with SQL Server (2005/2008)
  • Experience with most of the following: .NET 3.5, ASP, AJAX, LINQ, Threading, Web Services, JavaScript, IIS

Of course, as you may guess, the next biggest problem is that the claims in the resume/curriculum vitae simply do not match reality. We now test all programming applicants, and fewer than ten percent of the people we interview come even close to passing a simple programming test. Applicants who claim to be certified experts in topics like SQL, for example, are unable to answer even the most elementary questions about SQL.

The funniest (strangest?) thing is that invariably, when we ask them after the test why they rated themselves as a 9 out of 10 in SQL but don’t seem to know anything about SQL, they still rate themselves as a 9 out of 10. It is at that point that you realise there is no point in continuing the interview.

We have now changed our approach and in order not to waste time we conduct a simple phone interview with applicants before deciding to bring them in. As you would guess, most never get past the simple phone interview.

In a nutshell, the ‘norm’ appears to be that applicants ignore the requirements in the ad and also lie about their experience and skills in their resumes. Sometimes the lies are so obvious it is funny. For example, we always check applicants in social networking sites like LinkedIn. The differences between the public profile on LinkedIn and the resume we receive are often amazing; different companies, different titles, different dates of employment. It reminds me of that old question, “Are you lying now or were you lying then?” As soon as we see big differences between the LinkedIn profile and the resume we lose interest.

Recruiters are also, in the main, simply hopeless. They want a huge fee for placing an ad on SEEK and sending you a resume. Most don’t interview candidates or screen them in any way or even check references, and none take any responsibility. Most beg for an appointment so they can really understand your requirements and then totally ignore them after taking up an hour or two of your valuable time.

However, even after the ‘information-gathering’ appointment and us supplying the recruiter with detailed written requirements, the first few resumes we receive are usually nothing like what we asked for. Invariably, when I summon up enough patience to call them and ask why they wasted my time sending me resumes that are nothing like our requirements, the answer is usually, “Oh, I thought you might be interested in this one.” Luckily I am not in the habit of gnashing my teeth or I would have none left.

Let me translate that response, “I am a recruiter on a low base salary and high commission and I can’t pay the mortgage on my girlfriend’s flat unless you take one of my candidates so I am going to send you whatever I have in the hope I can earn some commission.”

Then there is the question of literacy and professionalism, or the lack thereof. To be fair, a lot of our programming candidates (most, actually) are new to this country and English isn’t their first language, so we expect to see some unusual phrasing and sentence construction in the resume. Most programming candidates, however, despite language difficulties, do a pretty good job on the resume. It is only when we do a phone interview that we discover the candidate’s real grasp of English and unfortunately, for most new arrivals, I can’t employ them in my development environment if they can’t communicate technical matters and nuances at an expert level. It isn’t my job to teach them English.

The real surprise, or shock, is the number of ‘sales professionals’ who can’t spell or construct a sentence or even format a document despite English being their first language. I need these people to be able to construct well-written, cohesive selling proposals for my clients and if the resume is an indicator of their abilities then they fail abysmally.

More importantly, you have to ask: if this is the effort they put into an extremely important document selling themselves, what hope do you have of getting a well-written and totally professional proposal for your customers? We simply reject any sales candidate with a poorly written and formatted resume.

It is strange that most resumes from programming candidates who are also recent arrivals to our country are generally much better written than the resumes of so-called experienced sales professionals who were schooled here. There is obviously something seriously amiss with our education system and the standards of the companies they worked for previously.

The sad bottom line is that out of one hundred applicants we will only want to interview ten, and out of those ten only one will prove to be suitable. I would like to say that this is a one-percent success rate but it isn’t, because the one good candidate always gets several offers and the chance of actually hiring them is no better than one in two. This gives me a success rate of, at best, one in two hundred.

My theory is that there is a major mismatch between available candidates and available positions, with a lot of poorly qualified people in the market and very few highly qualified ones. So we definitely have an ‘available’ skills shortage. It is an awful thing to say but I can only see this situation getting worse in the next few years as our economy slows down, because the few good people are going to stay where they are and wait out the bad times.

Where are those cyborgs I see in movies like Prometheus; how much do I have to pay and how long do we have to wait?

 

Why is the Web-Client a much better solution for applications?

by Frank 17. June 2012 06:00

When we see terms like Web-Client or Thin-Client it means an application that runs in a browser like IE, Firefox or Chrome. The alternative is a Fat-Client usually meaning an application that runs within Windows (XP, Vista, Windows 7) on your desktop or notebook.

Neither the Web-Client nor the Fat-Client is a new concept; both have been around for many years, and both have changed and improved over time as they have employed new technologies. Most Fat-Client applications today, for instance, are based on the 2008 or 2010 Microsoft .NET platform and require the .NET Framework to be installed on the workstation or notebook. Most Web-Client applications today utilize advanced techniques like AJAX plus more advanced toolsets from development systems like Visual Studio 2010, and provide a far better user interface and experience than their ancestors did.

In a nutshell, in the old days, say fifteen years ago, Web-Clients were clunky, two-tier affairs with terrible user interfaces, nowhere near as good as their Fat-Client counterparts. Today, using the advanced development tools available, it is possible to build a Web-Client that looks and works little different from a Fat-Client. Much better development tools have made it much easier for programmers to build nicer looking, more functional and easier to use Web-Client user interfaces for applications.

It still isn’t all roses because not all browsers are equal, and different browsers (e.g., IE versus Safari) and different versions of browsers (e.g., IE 6, 7, 8 and 9) force the programmer to write extra code to handle the differences. The advent of HTML5 will soon introduce another layer of differences and difficulty as vendors deploy different or non-standard versions of the HTML5 standard. However, this has been the case for as long as I can remember (there have always been differences in the HTML deployed by different vendors) and we programmers are used to it by now.

It used to be that, because of the limited development tools available to code Web-Client interfaces, the typical Web-Client had far less functionality than its Fat-Client equivalent. Whereas it is still easier to implement complex functionality in a .NET Fat-Client than in a Web-Client, it is possible to provide equivalent functionality in a Web-Client; it just needs smarter programmers, more work and a few extra tools.

So if your application vendor offers you the choice of a Fat or Web user interface, which should you choose and why?

You first need to ask a couple of very important questions:

  • Does the Web-Client have equivalent functionality to the Fat-Client? and
  • Is the Web-Client operating system and browser independent?

Let’s address the second question first because it is the most important. You can, for example, write a Web-Client user interface that is not operating system independent and that is in fact ‘locked’ into a particular operating system like Windows and a particular browser like IE8. Most Silverlight Web-Client applications, for instance, require that the .NET Framework be installed on the local workstation or notebook. As the .NET Framework only works on Microsoft Windows systems, it means you can’t run your Web-Client on other systems such as Linux, Mac OS, iOS or Android.

It also means that your IT department has to install and maintain software on each and every workstation or notebook that needs to run the application. This is the big problem because the cost of installing and maintaining software on each and every desktop and notebook is enormous.

Ideally, the Web-Client will be operating system independent and will support most of the latest versions of the most popular browsers. Expecting the Web-Client to support old versions of browsers is unreasonable.

If the Web-Client is operating system independent and has the same functionality as the Fat-Client then your decision is a foregone conclusion; go with the Web-Client and save on the installation and ongoing maintenance costs that would apply to the Fat-Client but not to the Web-Client.

If the Web-Client has a subset of the functionality of the Fat-Client you then need to compare the functionality offered to the needs of different classes of your users. It may not suit your systems administrator but will it suit inquiry users who just need to be able to browse, search, view and print? Will it also suit the middle-class of users who need to be able to add, modify and delete records but not perform administrative tasks like designing reports and building workflow templates?

It is important to have as many of your users as possible using the Web-Client because not only will this approach reduce your IT costs, it will provide extra benefits for users who travel and operate from remote and poorly serviced locations, not to mention those who work from home. After all, all that is needed is a computer (or tablet or smart-phone) and a secure Internet connection.

Obviously, as a Web-Client runs from a Web Server your IT people need to ensure that it is secure and for example, operates as a secure and encrypted HTTPS website rather than as an insecure HTTP website. All traffic from public sites needs to be encrypted both ways as a minimum security requirement.

The other major benefit of the Web-Client is that it protects you from differences in operating systems, e.g., Windows XP versus Windows 7 or even Windows 8. A Web-Client runs in the browser, not in Windows so it is much less affected by fundamental changes in an operating system than a Fat-Client application which has to be re-compiled and re-certified against every change. Importantly, you are not locked in to a particular operating system or version of an operating system.

I expect most application vendors to be moving their customers to a Web-Client end-user platform; we certainly will be, and we are investing large amounts of money in our RecFind 6 Web-Client interface. There are enormous cost and convenience benefits to our customers in moving from our Fat-Client to our Web-Client user interface and we will be doing everything in our power to encourage this move.

 

Integration, what does it really entail?

by Frank 10. June 2012 06:00

Over the last 28 years of running this business I have had hundreds of conversations with customers and partners about integration. In most cases, when I have tried to obtain more details about exactly what they wanted to integrate to and how, I have drawn a blank. It is as if the word ‘integration’ has some magical connotation that everyone is supposed to instantly understand. Being a very logical and technical person, I guess I fail the test because to me it is just a general term that covers a multitude of possibilities.

I have also designed and/or programmed many integrations in my life and I can honestly say that no two have ever been the same. So, assuming that you have a requirement for two or more of the application software products you use to ‘integrate’, how should you go about defining what is required so it is as clear and as unambiguous as possible to your partners, the people you will ask to do the work?

Integration is usually about sharing information and, importantly, not duplicating effort (i.e., having to enter data twice) or duplicating information. Having to enter the same information twice into two different systems is just plain dumb and bad design. Maintaining duplicate copies of information is even dumber, and dangerous, because sooner or later they will get out of step and you will suffer from what we call a loss of data integrity. This is usually why we need integration: to share information and to avoid duplicate effort and duplicate information.

For our purpose let’s assume that we have two application systems, A and B, that we need to be ‘integrated’; both use a SQL database to store data. Applications A and B are produced by different vendors that haven’t worked together before. The first fact to face is that in the normal course of events each vendor is going to want the other vendor to do all the work and each vendor is going to want the other vendor to utilize its proprietary integration methodology (e.g., Application Program Interface (API) or Software Development Kit (SDK)). This is your first big challenge; you need to get the vendors to agree to work together because no matter how this turns out both are going to have to do work and contribute; you can’t complete an integration using just one vendor. That is, the most important thing you have to do is to manage the vendors involved. You can’t just leave it up to them; you need to manage the process from beginning to end.

The second most important thing you have to do is to actually define and document the integration processes as clearly as possible. Here is a checklist to guide you (a minimal sketch of the simplest one-way integration follows the checklist):

1.    Will application A need to access (i.e., read) data held by application B?

2.    Will application B need to access data held by application A?

3.    How often and at what times is access to data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

4.    How is the data identified? That is, how does application A know what data it needs to access in the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be accessed?

5.    Will application A need to transfer data to application B (i.e. write data to the B database)?

6.    Will application B need to transfer data to application A?

7.    How often and at what times is a transfer of data required? All the time, once a day, once a week, only when something changes, only when there is new data, every time a new record is added to either application A or B, etc. What are the rules?

8.    How is the data identified? That is, how does application A know what data it needs to transfer to the application B database? Is it by date or special code or some unique identifier? What are the rules that determine the data to be transferred?

9.    Does application A have an API or SDK?

10.   What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time consuming.

11.   Does application B have an API or SDK?

12.   What is the degree of difficulty (expertise required, time and cost) of programming to this API or SDK? Rate it on a scale of 1 to 10, 10 being the most difficult, most expensive and most time consuming.

13.   Is the vendor of application A happy to assign a suitably qualified technical person (not a sales person or pre-sales person) to be the interface?

14.   Is the vendor of application B happy to assign a suitably qualified technical person (not a sales person or pre-sales person) to be the interface?

15.   What is your timescale? When does it have to be completed? What is the ‘driver’ event?

16.   What is your budget? Basically, vendors work in a commercial environment and if they do work then they expect to get paid. As a rule, the size of the budget will depend directly upon your management skills and the quality of your integration specification.
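
To make the checklist concrete, here is a minimal sketch in VB.NET (the language we advertise for) of the simplest case: application A reads yesterday’s new records from application B’s SQL Server database once a day. Every name in it (the connection string, the Records table and the SaveToApplicationA routine) is a hypothetical placeholder; the real details will come from your vendors’ APIs and from your answers to the checklist above.

    Imports System.Data.SqlClient

    Module NightlySync
        Sub Main()
            ' Hypothetical connection to application B's database.
            Dim connB As String = "Server=serverB;Database=AppB;Integrated Security=true"
            Using conn As New SqlConnection(connB)
                conn.Open()
                ' The rules from checklist items 7 and 8: once a day, new records only.
                Dim cmd As New SqlCommand(
                    "SELECT RecordId, Title FROM Records WHERE DateCreated >= @since", conn)
                cmd.Parameters.AddWithValue("@since", Date.Today.AddDays(-1))
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        SaveToApplicationA(reader("RecordId"), reader("Title"))
                    End While
                End Using
            End Using
        End Sub

        Sub SaveToApplicationA(id As Object, title As Object)
            ' Placeholder for a call to vendor A's documented API or SDK.
            Console.WriteLine("Would write record {0}: {1}", id, title)
        End Sub
    End Module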

Please keep in mind that there are always multiple ways to integrate; there is never just a single solution. The best way is always the simplest way, and it is usually also the quickest and lowest cost way to build as well as the lowest cost to maintain. Think about the future: the most complex solution is always the most difficult and most expensive to maintain. Think KISS; minimize your pain and expense.

As a guideline, the vendor with the most work to do is usually the best one to be the ‘lead’ in the integration (remember, both have to be involved or it won’t work). So if, for example, vendor A needs to read data from vendor B’s database and then massage it and utilize it within application A, then vendor A is the natural lead. All vendor B has to do is expose its API and provide the required technical assistance so vendor A can successfully program to the API.

However, in the end it will usually be the vendor that is the most cooperative and most helpful that you will choose. If you choose the vendor you most trust and work best with to be the lead then you will maximize your chances of completing a successful integration on time and on budget.

 

Your help desk works, or does it?

by Frank 3. June 2012 06:00

Almost every organization, commercial or government, needs a help desk. Help desks support either internal or external ‘customers’. Generally speaking the job of a help desk is to support users who have problems or questions about a product or service.

Help desks may run as either a profit centre or a cost centre. Normally, help desks supporting internal customers run as cost centres (though maybe with an internal accounting function that attempts to allocate costs to all the departments that utilize the service) and help desks that support external customers run as profit centres, charging for their services via an annual service fee or incident fee.

The only true measure of the worth of a help desk is the level of customer satisfaction, and this is very difficult to measure other than in an anecdotal way. This is because of human nature; customers who are happy with the service rarely take the time to write to the help desk manager and tell him. The same is true of customers who are unhappy with the service; most just decide never to use that product or service again. A small number of very disgruntled or even litigious or nuisance customers will complain repeatedly in the most vociferous and rudest manner but will largely be ignored as repeat offenders or the usual suspects.

Trying to get a reading across the customer base by using a survey rarely works either, as most customers won’t respond and the ones that do are usually from the two extremes: the really, really happy customers and the really, really dissatisfied customers. Plus, we all know that a survey is like a poll; if you design the questions in a certain way you can always get the result you first thought of.

Because it is so difficult to obtain enough customer input to be able to rate the help desk we usually fall back on internal metrics. Such things as how many calls did we receive last week? What percentage was closed within 1 day, 2 days, 3 days, etc.? How many are still outstanding after 7 days? How many had to be escalated?
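
To illustrate, here is a minimal VB.NET sketch of how such metrics might be computed; the SupportCalls table, its columns and the connection string are invented for the example, and every help desk system will differ.

    Imports System.Data.SqlClient

    Module HelpDeskMetrics
        Sub Main()
            ' Hypothetical schema: SupportCalls(CallId, DateOpened, DateClosed).
            Dim sql As String =
                "SELECT COUNT(*) AS Total, " &
                "SUM(CASE WHEN DATEDIFF(day, DateOpened, DateClosed) <= 1 THEN 1 ELSE 0 END) AS ClosedIn1Day, " &
                "SUM(CASE WHEN DateClosed IS NULL AND DATEDIFF(day, DateOpened, GETDATE()) > 7 THEN 1 ELSE 0 END) AS StillOpenAfter7Days " &
                "FROM SupportCalls WHERE DateOpened >= DATEADD(day, -7, GETDATE())"
            Using conn As New SqlConnection("Server=helpdesk;Database=Support;Integrated Security=true")
                conn.Open()
                Using cmd As New SqlCommand(sql, conn)
                    Using reader As SqlDataReader = cmd.ExecuteReader()
                        If reader.Read() Then
                            Console.WriteLine("Calls last week: {0}", reader("Total"))
                            Console.WriteLine("Closed within 1 day: {0}", reader("ClosedIn1Day"))
                            Console.WriteLine("Still open after 7 days: {0}", reader("StillOpenAfter7Days"))
                        End If
                    End Using
                End Using
            End Using
        End Sub
    End Module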

The problem with internal metrics, like police reports on crime statistics, is that they can be manipulated to produce the result you first thought of. Remember that old saying about statistics: "Lies, damned lies, and statistics." A smart and politically savvy help desk manager will always find a way to gild the lily and dress up the stats so he looks good.

So, how do you know if your help desk is working and servicing your customers to the highest standard? There is only one sure way I know of and that is to ring the help desk yourself (incognito I hope; calling up and saying this is the CEO won’t really give you a fair reading of how ordinary customers are treated), or to organize a team to call the help desk with a list of known issues and test the responses.

This sounds like it should be a business opportunity: a kind of reverse outsourced help desk, an organization that specializes in testing help desk services. All you have to do is provide them with scripts and a way to measure the effectiveness of the responses. However, I don’t know of any organization that provides this service, just as I have never met a CEO lately who seems to know or care what is happening with his help desk service, and this is the real problem.

You can always tell the company with the disinterested CEO because there is no way to contact him or her on the website. Companies that aren’t interested in supporting customers always make it almost impossible for a customer to provide feedback. Unfortunately, this ‘we are hiding from you’ approach is becoming the norm as companies remove all contact information from their websites and force customers to endure long waits and rubbish ‘service’ from outsourced support centres.

The executives don’t receive negative feedback because they make it so difficult for customers to reach them. Personally, I think this is a short term and eventually damaging practice as customers tend to have long memories and frustrated, dissatisfied customers will make it their business to tell everyone but the company’s management team (because they aren’t able to contact them) about the rubbish product and the shoddy way they were treated.

Before you ask, let me explain that we do have a support centre but it is not outsourced and we make it as easy as possible for customers to contact us by web form, email, mobile device or toll free number. Please see the links below:

http://www.knowledgeonecorp.com/support/contactinghelpdesk.htm

http://www.knowledgeonecorp.com/contactus/emailus.htm

http://www.knowledgeonecorp.com/contactus/Contact_By_Mobile_App.htm

http://www.knowledgeonecorp.com/support/freeemailsupport.htm

support@knowledgeonecorp.com

Just so you know that we practice what we preach.

Paradoxically, I believe the reason that I get so few complaints (apart from the high standard of our support services) is that I make it so easy for customers to contact me or any other executive in my company.

We also use our own product RecFind 6 as our help desk software so we are able to build in all the alerts, escalations and reporting we need to manage each and every support call to the best of our ability. And finally, my office is just 20 metres or so from the support centre so I make it my business to be in there talking to the support staff at least 4 or 5 times a day.

I am a CEO who is vitally interested in his customers and the quality of support they are receiving, and not just for altruistic reasons but for sound business reasons. Happy customers stay with us and invest in our products and services year after year. It is quite simple really; I invest in my customers so they will invest in my company. It works for us and I wonder why other CEOs don’t understand this very simple message.

The relationship between a vendor and a customer should be a mutually beneficial partnership; it should not be a destructive, adversarial relationship. In my opinion, CEOs who do not allow their customers to contact them and deliver either a complaint or a compliment are fools and bad business people with a strictly short-term view. It is a formula for more short-term profit but fewer long-term customers. We opt to spend more money and time on support so we can foster better long-term relationships. I think in the ‘old days’ this used to be called service.

What is really involved in converting to a new system?

by Frank 27. May 2012 06:00

Your customer’s old system is now way past its use-by date and they have purchased a new application system to replace it. Now all you have to do is convert all the data from the old system to the new system; how hard can that be?

The answer is that it can be very, very hard to get right and it can take months or years if the IT staff or the contractors don’t know what they are doing. In fact, the worst case is that no one can actually figure out how to do the data conversion, so you end up two years later still running the old, unsupported and now about-to-fail system. The really bad news is that this isn’t just the worst case scenario, it is the most common scenario, and I have seen it happen time and time again.

People who are good at conversions are good because they have done it successfully many times before. So, don’t hire a contractor based on potential and a good sales spiel; hire a contractor based on track record, on experience and on a good many previous references. The time to learn how to do a conversion isn’t on your project.

I will give you guidelines on how to handle a data conversion but as every conversion is different, you are going to have to adapt my guidelines to your project and you should always expect the unexpected. The good news is that if you have a calm, logical and experienced head then any problem is solvable. We have handled hundreds of conversions from every type of system imaginable to our RecFind product and we have never failed even though we have run into every kind of speed bump imaginable. As they say, “expect the best, plan for the worst, and prepare to be surprised.”

1.    Begin by reviewing the application to be converted by looking at the ‘screens’ with someone who uses the system and understands it. Ask the user what fields/data they want to convert. Take screenshots for your documentation. Remember that a field on the screen may or may not be a field in the database; the value may be calculated or generated automatically. Also remember that even though a screen may be called, say, “File Folder”, all the fields you can see may not in fact be part of the file folder table; they may be ‘linked’ fields in other tables in the database.

2.    You need to document and understand the data model, that is, all the tables and fields and relationships you will need to convert. See if someone has a representation of the data model but never assume it is up to date. In fact, always assume it is not up to date. You need to work with an IT specialist (e.g., the database administrator) and utilize standard database tools like SQL Server Management Studio to validate the data model of the old system.

3.    Once you think you understand the data model and the data to be converted, you need to document your thoughts in a conversion report and ask the customer to review and approve it. You won’t get it right the first time, so expect this to be an iterative process. Remember that the customer will be in ‘discovery’ mode also.

4.    Once you have acceptance of the data to be converted you need to document the data mapping. That is, show where the data will go in the new application. It would be extremely rare that you would be able to duplicate the data model from the old application; it will usually be a case of adapting the data from the old system to the different data model of the new application. Produce a data mapping report and submit it to the customer for sign-off. Again, don’t expect to get this right the first time; it is also an iterative process because both you and the customer are in discovery mode.

5.    Expect that about 20% or more of the data in the old system will be ‘dirty’; that is, bad, duplicate or redundant data. You need to make a decision about the best time to clean up and de-dupe the data. Sometimes it is in the old application before you convert, but often it is in the new application after you have converted because the new application has more and better functionality for this purpose. Whichever method you choose, you must clean up the data before going live in production.

6.    Expect to run multiple trial conversions. The customer may have approved a specification but reading it and seeing the data exposed in the new application are two very different experiences. A picture is worth a thousand words and no one is smart enough to know exactly how they want their data converted until they actually see what it looks like and works like in the new application. Be smart and bring in more users to view and comment on the new application; more heads are better than one and new users will always find ways to improve the conversion. Don’t be afraid of user opinion, actively encourage and solicit it.

7.    Once the data mapping is approved you need to schedule end-user training (as close as possible to the cutover to the new system) and the final conversion prior to cutover.

Of course, for the above process to work you also need the tools required to extract data from the old system and import it into the new system. If you don’t have standard tools you will have to write a one-off conversion program. The time to write this is after the data mapping is approved and before the first trial conversion. To make our life easy we designed and built a standard tool we call Xchange that can connect to any data source and then map and write data to our RecFind 6 system. However, this is not an easy program to design and write and you are unlikely to be able to afford to do this unless you are in the conversion business like we are. You are therefore most likely going to have to design and write a one-off conversion program.

One alternative tool you should not ignore is Microsoft’s Excel. If the old system can export data in CSV format and the new system can import data in CSV format, then Excel is the ideal tool for cleaning up, re-sequencing and preparing the data for import.
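
By way of illustration, here is a minimal VB.NET sketch of the kind of clean-up pass you might run between the export and the import. The file names and the trimming rules are assumptions for the example, not a prescription, and real data with embedded commas would need proper re-quoting on the way out.

    Imports System.Linq
    Imports Microsoft.VisualBasic.FileIO

    Module CsvCleanup
        Sub Main()
            ' Hypothetical file names; substitute your own export/import paths.
            Using parser As New TextFieldParser("old_system_export.csv"),
                  writer As New IO.StreamWriter("new_system_import.csv")
                parser.TextFieldType = FieldType.Delimited
                parser.SetDelimiters(",")
                While Not parser.EndOfData
                    Dim fields() As String = parser.ReadFields()
                    ' Trim stray whitespace from every field.
                    For i As Integer = 0 To fields.Length - 1
                        fields(i) = fields(i).Trim()
                    Next
                    ' Drop completely empty rows.
                    If fields.Any(Function(f) f.Length > 0) Then
                        writer.WriteLine(String.Join(",", fields))
                    End If
                End While
            End Using
        End Sub
    End Module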

And finally, please do not forget to sanity check your conversion. You need to document exactly how many records of each type you exported so you can ensure that exactly the same number of records exists in the new system. I have seen far too many examples of a badly managed conversion resulting in thousands or even millions of records going ‘missing’ during the conversion process. You must have a detailed record count going out and a detailed record count going in. The last thing you want is a phone call from the customer a month or two later saying, “It looks like we are missing some records.”
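
A minimal sketch of such a record-count check, again in VB.NET and assuming both the old and new systems sit in SQL Server; the connection strings and table names are invented for the example.

    Imports System.Data.SqlClient

    Module ConversionSanityCheck
        Sub Main()
            ' Count one record type on each side and compare.
            Dim exported As Integer = CountRows(
                "Server=oldbox;Database=OldSystem;Integrated Security=true",
                "SELECT COUNT(*) FROM FileFolder")
            Dim imported As Integer = CountRows(
                "Server=newbox;Database=NewSystem;Integrated Security=true",
                "SELECT COUNT(*) FROM FileFolder")
            Console.WriteLine("Exported {0} records, imported {1} records.", exported, imported)
            If exported <> imported Then
                Console.WriteLine("MISMATCH - investigate before going live in production.")
            End If
        End Sub

        Function CountRows(connString As String, sql As String) As Integer
            Using conn As New SqlConnection(connString)
                conn.Open()
                Using cmd As New SqlCommand(sql, conn)
                    Return CInt(cmd.ExecuteScalar())
                End Using
            End Using
        End Function
    End Module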

Don’t expect the conversion to be easy and do expect it to be an iterative process. Always involve end-users and always sanity check the results.  Take extra care and you will be successful.

Moving your Records Management application to the Cloud; why would you do it?

by Frank 20. May 2012 06:00

We have all heard and read a lot about the Cloud and why we should all be moving that way. I wrote a little about this in a previous post. However, when we look at specific applications like records management we need to think about the human interaction and how that may be affected if we change from an in-house system to a hosted system. That is, how will the move affect your end-users and records management administrator? Ideally, it will make their job easier and take away some pain. If it makes their job harder and adds pain then you should not be doing it even if it saves you money.

We also need to think about the services we may need when we move to the Cloud. That is, will we need new services we don’t have now and will the Cloud vendor offer to perform services, like application maintenance, we currently do in-house?

In general, normal end-user functions should work the same whether we are running off an internal system or a Cloud-based one. This of course will depend upon the functionality of your records management software. Hopefully, there will be no difference to either the functionality or the user interface when you move to the Cloud. For the sake of this post let’s assume that there is a version of your records management system that can run either internally or in the Cloud and that the normal end-user interface is identical, or near enough that it doesn’t matter. If the end-user interface is massively different then you face extra cost and disruption because of the need to convert and retrain your users, and this would be a reason not to move to the Cloud unless you were planning to change vendors and convert anyway.

Now we need to look at administrator functions, those tasks usually performed by the records management administrator or IT specialist to configure and manage the application.  Either the records management administrator can perform the same tasks using the Cloud version or you need to ask the Cloud vendor to perform some services for you. This will be at a cost so make sure you know what it is beforehand.  There are some administrator functions you will probably be glad to outsource to the Cloud vendor such as maintaining the server and SQL Server and taking and verifying backups.

I would assume that the decision to move a records management application to the Cloud would and should involve the application owner and IT management. The application owner has to be satisfied that the end-user experience will be better or at least equal to that of the in-house installation and IT management needs to be sure that the integrity and security of the Cloud application will at the very least be equal to that of the in-house installation. And finally, the application owner, the records manager, needs to be satisfied that the IT support from the vendor of the Cloud system will be equal to or better than the IT support being received from the in-house or currently out-sourced IT provider.

There is no point in moving to the Cloud if the end-user or administrator experience will deteriorate just as there is no point in moving to the Cloud if the level of IT support falls.

Once you have made the decision to move your records management application to the Cloud you need to plan the cutover in a way that causes minimal disruption to your operation. Ideally, your staff will finish work on the in-house application on Friday evening and begin working on the Cloud version the next Monday morning. You can’t afford to have everyone down for days or weeks while IT specialists struggle to make everything work to your satisfaction. This means you need to test the Cloud system extensively before going live in production. In this business, little or no testing equals little or no success and a great deal of pain and frustration.

If it was me, I would make sure that the move to the Cloud meant improvements in all facets of the operation. I would want to make sure that the Cloud vendor took on the less pleasant, time-consuming and technical tasks like managing and configuring the required IT infrastructure. I would also want them to take on the more bothersome, awkward and technically difficult application administration tasks. Basically, I would want to get rid of all the pain and just enjoy the benefits.

You should plan to ‘outsource’ all the pain to make your life and the life of your staff easier and more pleasant and in doing so, make everyone more productive. It is like paying an expert to do your tax return and getting a bigger refund. The Cloud solution must be presented as a value proposition. It should take away all the non-core activities that suck up your valuable time and allow you and your staff more time to do the core activities in a better and more efficient way; it should allow you to become more productive.

I am a great believer in the Cloud as a means of improving productivity, lowering costs and improving data integrity and security. It is all doable given available facilities and technology but in the end, it is up to you and your negotiations with the Cloud provider.  Stand firm and insist that the end result has to be a better solution in every way; compromise should not be part of the agreement.

Using Terminal Digits to minimize “Squishing”

by Frank 13. May 2012 06:00

Have you ever had to remove files from shelving or cabinets and reallocate them to other spaces because a drawer or shelf is packed tight? Then had to do it again and again?

One of my favourite records managers used to call this the “Squishing” problem.

The squishing problem is inevitable if you start to load files from the beginning of any physical filing system, be it shelving or cabinets, and unload files from random locations as the retention schedule dictates. If you create and file parts (a new folder called part 2, part 3, etc., when the original file folder is full) then the problem is exacerbated. You may well spend a large part of your working life shuffling file folders from location to location; a frustrating, worthless, thankless task. You also get to inhale a lot of toxic paper dust and mites, which is not a good thing.

You may not be aware of it but there is a very simple algorithm you can utilize to make sure the squishing problem never happens to you. It is usually referred to as the ‘Terminal Digit’ file numbering system but you may call it whatever you like. The name isn’t important but the operation is.

Importantly, you don’t need to change your file numbering system other than by adding on additional numbers to the end. These additional numbers are the terminal digits.

The number of terminal digits you need depends upon how many file folders you have to manage. Here is a simple guideline:

  • One terminal digit (0 to 9) = one thousand files
  • Two terminal digits (00 to 99) = ten thousand files
  • Three terminal digits (000 to 999) = greater than ten thousand files

Obviously, you also have to have the filing space and appropriate facilities (e.g., boxes, bays, etc.) available to hold the required number of files for each terminal.

It is called the Terminal Digit system because you first have to separate your available filing space into a number of regular ‘terminals’. Each terminal is identified by a number, e.g., 0, 1, 2, 09, 23, 112, 999, etc.

The new terminal digit is additional and separate from your normal file number. It determines which terminal a file will be stored in. Let’s say your normal file number is of the format YYYY/SSSSSS. That is, the current year plus an automatically incrementing auto number like 2012/000189 then 2012/000190, etc. If we use two terminal digits and divide your available filing space into one hundred terminals (think of it as 100 equally sized filing slots or bays numbered 00 to 99) then your new file number format is YYYY/SSSSSS-99. The two generated file numbers above may now look like 2012/000189-00 and 2012/000190-01.

File folder 2012/000189-00 is filed in terminal number 00 and 2012/000190-01 is filed in terminal number 01. In a nutshell, what we are doing is distributing files evenly across all available filing space. We are not starting at terminal 00 and filling it up and then moving on to terminal 01, then terminal 02 when 01 is full, etc. Finding files is even easier because the first part of the file number you look at is the terminal digit. If a file number ends in 89 it will be in terminal 89, in file number order.
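
In code the allocation is trivial. Here is a minimal VB.NET sketch, assuming two terminal digits and a simple cycling counter that hands out terminals 00 to 99 in round-robin order, as in the example above (your records system may derive the digits differently).

    Module TerminalDigitDemo
        ' Independent allocation counter; each new file gets the next
        ' terminal (00-99), spreading files evenly across all filing space.
        Private allocationCounter As Integer = 0

        Function NextFileNumber(year As Integer, sequence As Integer) As String
            Dim terminal As Integer = allocationCounter Mod 100
            allocationCounter += 1
            Return String.Format("{0}/{1:D6}-{2:D2}", year, sequence, terminal)
        End Function

        Sub Main()
            ' Reproduces the example in the text.
            Console.WriteLine(NextFileNumber(2012, 189)) ' 2012/000189-00
            Console.WriteLine(NextFileNumber(2012, 190)) ' 2012/000190-01
        End Sub
    End Module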

The other good news is that when we unload files from the shelves, say at end of life or at the point in the lifecycle when they need to be sent offsite, we will also unload files evenly across all available filing space. If the terminals are actually big enough and if you have calculated everything correctly, you should never again suffer from the ‘squishing’ problem and you should never again have to ingest paper dust and mites when tediously shuffling files from location to location.

Obviously, there is a little more to this than sticking a couple of digits on the end of your file number. I assume you are using a computerised records management system, so it will have to be changed or configured to correctly calculate the now extended file number (including the new terminal digit), and your colour file labels will need to be changed to show the terminal digit in a prominent position.

There is also the question of what to do with your existing squished file store. Ideally you would start from scratch with your new numbering systems and terminals and wait for the old system to disappear as the files age and disappear offsite to Grace or Iron Mountain. That probably won’t be possible so you will have to make decisions based on available resources and budget and come up with the best compromise.

I can’t prove it but I suspect that the terminal digit system has been around since people began filing stuff. It is an elegantly simple solution to an annoying and frustrating problem and involves nothing more complicated than simple arithmetic.

The surprise is that so few organizations actually use it. In twenty-five plus years in this business I don’t think I have seen it in use at more than one or two percent of the customers I have visited. I have talked about it and recommended it often but the solution seems to end up in the too-hard basket; a shame really, especially for the records management staff charged with the constant shuffling of paper files.

It may be that you have a better solution but just in case you don’t, please humour me and have another look at the terminal digit filing solution. It may just save you an enormous amount of wasted time and make your long-suffering records staff a lot happier and a lot healthier.

 

Have you considered Cloud processing? There are significant benefits

by Frank 6. May 2012 06:00

Most of us have probably become more than a little numbed to the onslaught of Cloud advertising and the promotion of the ‘Cloud’ as the salvation for everyone and the panacea for everything. The Cloud is promoted by its aggrandizers as being both omnipotent and omniscient; both qualities I only previously associated with God.

This is not to say that moving business processing to the Cloud is not a good thing; it certainly is. I just wish that the promoters would tone down the ‘sell’ and clearly explain the benefits and advantages without the super-hype.

Those of us with long memories clearly recall the early hype about what was then called ASP, or Application Service Provider, processing. This was the early progenitor of the Cloud and despite massive hype it did not fly. The reasons were simple: neither the technology nor the software (application and system) were up to the job. Great idea; pity it was about five years before its time.

Unfortunately, super-hype in our industry is usually associated with immature and unproven technology. Wiser, older people nod sagely and then wait a few years for the technology to catch up with the promises.

As an older (definitely) and wiser (hopefully) person I am now ready to accept that all the technology required for successful and secure Cloud processing is now available and proven, albeit being ‘improved’ all the time, so still take care not to rush in with experimental technology.

As with many new technologies the secret is KISS; Keep It Simple Stupid. If it seems too complex then it is too complex. If the sales person can’t answer all of your questions clearly and unambiguously then walk away.

Most importantly, make sure you know all about all of the parties involved in the transaction. For example:

1.    What is the name of the data centre?

2.    Where is it located?

3.    Who ‘owns’ the rack and equipment and software at the data centre?

4.    What are the redundant features?

5.    What are the backup and recovery options?

6.    Is your vendor the owner of the co-hosted facility or do they subcontract to someone else? If they sub-contract is the company they subcontract to the owner or are they too just part of a chain of ‘hidden’ middle-men? It is critical for you to understand this chain of responsibility because if something goes wrong you need to know who to chase.

There are a lot more questions you need to ask but this Blog isn’t the place to list them all. I am sure your IT team and application owners will come up with plenty more. If they don’t, wake them up and demand questions.

Most small to medium organizations today simply do not have the time or expertise to run a computer room and manage and maintain a rack of servers. There is also a dearth of ‘real’ expertise and a plethora of phonies out there, so hiring someone who is actually smart enough to manage your critical infrastructure is a very difficult exercise, made more so by most business owners and managers simply not understanding the requirements or technology. It often becomes a case of the blind hiring the almost blind.

Most small to medium enterprises also cannot afford the redundancy required to ensure a stable and reliable infrastructure. A fifteen minute UPS is no substitute for a redundant bank of diesel generators and a guaranteed clean power supply.

Why should small to medium enterprises have to buy servers and networks and IT support? It isn’t part of their core business and this stuff should not be weighing down the balance sheet. Why should they be devoting scarce and expensive management time to activities that are not part of their core business?

In-house computer rooms will soon become as rare as dinosaurs and this is how it should be; they are an anachronism in this day and age, out of time and out of place.

All smart and business savvy small to medium organizations should be planning to progressively move all their processing to the Cloud so as to lower costs, improve service levels and reduce management stress. I say progressively because it is still wise to get wet slowly and to take little steps. Just like with your first two-wheel bicycle, it pays to practice with the training wheels on first. That way, you usually avoid those painful falls.

I like to think I am a little wiser because I still have scars from gravel rash when I was a kid. I am moving my RecFind 6 customers to the Cloud and I am moving my in-house processing to the Cloud but just like you, I am doing it slowly and carefully and triple-checking every aspect. I don’t take risks with my customers or my business and neither should you.

One last thing, I have the advantage of being very IT literate and of having a top IT team working for me so we have the in-house expertise required to correctly evaluate and select the most appropriate technology and options. If you do not have this level of in-house IT expertise then please take extra care and try to find someone to assist who does have the level of IT knowledge required. Once you sign up, it is too late. Buyer’s remorse is not a solution to any problem.

Are you running old and unsupported software? What about the risks?

by Frank 29. April 2012 20:59

Many years ago we released a 16 bit product called RecFind version 3.2 and we made a really big mistake. We gave it so much functionality (much of it way ahead of its time) and we made it so stable that we still have thousands of users.

It is running under operating systems like XP that it was never developed for or certified for, and is still ‘doing the job’ for hundreds of our customers. Most frustratingly, when we try to get them to upgrade they usually say, “We can’t justify the expense because it is working fine and doing everything we need it to do.”

However, RecFind 3.2 is decommissioned and unsupported, and the databases it uses (Btrieve, Disam and an early version of SQL Server) are also no longer supported by their vendors.

So our customers are capturing and managing critical business records with totally unsupported software. Most importantly, most of them also do not have any kind of support agreement with us (and this really hurts because they say they don’t need a support agreement because the system doesn’t fail) so when the old system catastrophically fails, which it will, they are on their own.

Being a slow learner, ten years ago I replaced RecFind 3.2 and RecFind 4.0 with RecFind 5.0, a brand new 32 bit product. Once again I gave it too much functionality and made it way too stable. We now have hundreds of customers still using old and unsupported versions of RecFind 5.0 and when we try to convince them to upgrade we get that same response, “It is still working fine and doing everything we need it to do.”

If I was smarter I would have built-in a date-related software time bomb to stop old systems from working when they were well past their use-by date. However, that would have been a breach of faith so it is not something we have or will ever do. It is still a good idea, though probably illegal, because it would have protected our customers’ records far better than our old and unsupported systems do now.

In my experience, most senior executives talk about risk management but very few actually practice it. All over the world I have customers with millions of vital business records stored and managed in systems that are likely to fail the next time IT updates desktop or server operating systems or databases. We have warned them multiple times but to no avail. Senior application owners and senior IT people are ignoring the risk and, I suspect, not making senior management aware of the inevitable disaster. They are not managing risk; they are ignoring risk and just hoping it won’t happen in their reign.

Of course, it isn’t just our products that are still running under IT environments they were never designed or certified for; this is a very common problem. The only worse problem I can think of is the ginormous amount of critical business data being ‘managed’ in poorly designed, totally insecure and teetering-on-failure, unsupportable Access and Excel systems; many of them in the back offices of major banks and financial institutions. One of my customers described the 80 or so Access systems that had been developed across his organization as the world’s greatest virus. None had been properly designed, none had any security and most were impossible to maintain once a key employee or contractor had left.

Before you ask, yes we do produce regular updates for current products and yes we do completely redesign and redevelop our core systems like RecFind about every five years to utilize the very latest technology. We also offer all the tools and services necessary for any customer to upgrade to our new releases; we make it as easy and as low cost as possible for our customers to upgrade to the latest release but we still have hundreds of customers and many thousands of users utilizing old, unsupported and about-to-fail software.

There is an old expression that says you can take a horse to water but you can’t make it drink. I am starting to feel like an old, tired and very frustrated farmer with hundreds of thirsty horses on the edge of expiration. What can I do next to solve the problem?

Luckily for my customers, Microsoft Windows Vista was a failure and very few of them actually rolled it out. Also, luckily for my customers, SQL Server 2005 was a good and stable product and very few found it necessary to upgrade to SQL Server 2008 (soon to be SQL Server 2012). This means that most of my customers using old and unsupported versions of RecFind are utilizing XP and SQL Server 2005, but this will soon change and when it does my old products will become unstable and even just stop working. It is just good luck and good design (programmed tightly to the Microsoft API) that some (e.g., 3.2) still work under XP. RecFind 3.2 and 4.0 were never certified under XP.

So we have a mini-Y2K coming but try as I may I can’t seem to convince my customers of the need to protect their critical and irreplaceable (are they going to rescan all those documents from 10 years ago?) data. And, as I alluded to above, I am absolutely positive that we are only one of thousands of computer software companies in this same position.

In fairness to my customers, the Global Financial Crisis of 2008 was a major factor in the disappearance of upgrade budgets. If the call is to either upgrade software or retain staff then I would also vote to retain staff. Money is as tight as it has ever been and I can understand why upgrade projects have been delayed and shelved. However, none of this changes the facts or averts the coming data-loss disaster.

All over the world government agencies and companies are managing critical business data in old and unsupported systems that will inevitably fail with catastrophic consequences. It is time someone started managing this risk; are you?

 

Managing Emails, how hard can it be?

by Frank 22. April 2012 00:22

We produce a content management system called RecFind 6 that includes several ways to capture, classify and save emails. Like most ECM vendors, we offer a number of alternatives so as to be better able to meet the unique requirements of a variety of clients.

We offer the ‘manual’ version whereby we embed our email client into packages like Outlook and the end user can just click on our RecFind 6 Button from the Outlook toolbar to capture and classify any email.

We also offer a fully automated email management system called GEM that is rule-driven and that automatically analyses, captures and classifies all incoming and outgoing emails.

At the simplest level, an end user can just utilize the standard RecFind 6 client and click on the ‘Add Attachment’ button to capture a saved email from the local file store.

Most of our customers use the RecFind 6 Button because they prefer to have end users decide which emails to capture and because the Button is embedded into Microsoft Office, Adobe Professional, Notes and GroupWise. A much smaller percentage of our customers use GEM even though it is a much better, more complete and less labour intensive solution because there are still many people that just don’t want email to be automatically captured.

This last point is of great interest to me because I find it hard to understand why customers would choose the ‘manual’ RecFind 6 Button, small, smart and fast though it is, over the fully automated and complete solution offered by GEM, especially when GEM is a much lower cost solution for mid-size to large enterprises.

A few years ago, in 2005, the Records Management Association of Australia asked me to write a paper on this topic; that is, why don’t organizations make a good job of capturing emails when there is plenty of software out there that can do the job? I came up with six reasons why organizations don’t manage emails effectively and, after re-reading that paper today, they are still valid.

In my experience, the most common protagonists are the records manager and the IT manager. I don’t think I have ever spoken to a senior executive or application owner who didn’t think GEM was a good idea, but I have only ever spoken to a tiny number of records managers who would even contemplate the idea of fully automatic email management. Most IT managers just don’t want all their emails captured.

This is despite the fact that, because GEM is rule-driven, any competent administrator could write rules to include or exclude any emails they want included or excluded.

Another roadblock is that old red herring, personal emails. In upwards of ninety percent of cases where my customer has decided against GEM this is given as the ‘real’ reason. It is of course rubbish because there are many ways to handle personal emails, including an effective email policy and GEM rules written to enforce that policy (see the hypothetical sketch below). This 2004 paper explains why we need to manage emails and also talks about an effective email policy.
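
I can’t reproduce GEM’s actual rule syntax in a blog post, but conceptually an include/exclude rule is just a test applied to each message. Here is a hypothetical VB.NET sketch; the addresses, the subject-line convention and the policy choices are invented purely for illustration.

    Module EmailCaptureRules
        ' Hypothetical capture rule; the policy choices below are examples only.
        Function ShouldCapture(fromAddress As String, toAddress As String,
                               subject As String) As Boolean
            ' Policy: staff mark private mail by prefixing the subject.
            If subject.StartsWith("PERSONAL:", StringComparison.OrdinalIgnoreCase) Then
                Return False
            End If
            ' Policy: ignore obvious bulk mail from a known newsletter domain.
            If fromAddress.EndsWith("@newsletter.example.com", StringComparison.OrdinalIgnoreCase) Then
                Return False
            End If
            ' Everything else sent to or from the business domain is captured.
            Return fromAddress.EndsWith("@example.com.au", StringComparison.OrdinalIgnoreCase) _
                OrElse toAddress.EndsWith("@example.com.au", StringComparison.OrdinalIgnoreCase)
        End Function
    End Module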

The absolute worst way to mismanage emails is to mandate that end users must select and print them out for the records staff to file in cardboard file folders. This method was entirely appropriate to 1900, except for the fact that we actually didn’t have emails in 1900. It is entirely inappropriate and just plain ineffective, wasteful and stupid in 2012 but tens of thousands of records managers all around the world still mandate this as the preferred approach.

Is it because they don’t understand the technology or is it because they stubbornly refuse to even consider the technology?

It can’t be budget because the cost of expensive staff having to be part-time records managers is monumental. You would be hard pressed to find a more expensive and less effective solution. So why are we still doing it?

Back to the title of this paper, “How hard can it be?”

The answer is that it is not hard at all and that every ECM vendor has at least one flexible and configurable solution for email management. More so, these solutions have been around for at least the last ten years. So why are we still doing it the hard, ineffective, incomplete and expensive way?

The answer is that it is to do with people and attitudes; with a reluctance to embrace change and a reluctance to embrace a challenge that just might force managers to learn a lot in a short time and extend their capabilities and workload for the period necessary to implement a new generation solution. I guess it comes down to fear and a head in the sand attitude.

I once had a senior records manager tell me he wasn’t going to install any new systems because he was retiring in five years and didn’t want the worry and stress. Is this really why you aren’t managing your emails effectively and completely? Isn’t it time you asked the question of your records and IT managers?
