How to clean up your shared drives, Frank’s approach

by Frank 22. August 2014 06:00

In my time in this business (enterprise content management, records management, document management, etc.) I have been asked to help with a ‘shared drive problem’ more times than I can remember. This particular issue is analogous to the paperless office problem. Thirty years ago, when I started my company, I naively thought that both problems would be long gone by now, but they are not.

I still get requests for purely physical records management solutions and I still get requests to assist customers in sorting out their shared drives problems.

The tools and procedures to solve both problems have been around for a long time but for whatever reason (I suspect lack of management focus) the problems still persist and could be described as systemic across most industry segments.

Yes, I know that you can implement an electronic document and records management system (we have one called RecFind 6) and take away the need for shared drives and physical records management systems completely but most organizations don’t and most organizations still struggle with shared drives and physical records. This post addresses the reality.

Unfortunately, the most important ingredient in any solution is ‘ownership’ and that is as hard to find as it ever was. Someone with authority, or someone who is prepared to assume authority, needs to take ownership of the problem in a benevolent-dictator way and just steam-roll a solution through the enterprise. It isn’t solvable by committees and it requires a committed, driven person to make it happen. These kinds of people are in short supply, so if you don’t have one, bring one in.

In a nutshell, apart from ownership, there are three basic problems:

1. How to delete all redundant information;

2. How to structure the ‘new’ shared drives; and

3. How to make the new system work to most people’s satisfaction.

Deleting redundant Information

Rule number one is don’t ever ask staff to delete the information they regard as redundant. It will never happen. Instead, tell staff that you will delete all documents in your shared drives with a created or last-updated date older than a nominated cutoff (say, one year in the past) unless they tell you specifically which ‘older’ documents they need to retain. Just saying “all of them” is not an acceptable response. Give staff a month’s advance notice and then delete everything that has not been nominated as important enough to retain. Of course, take a backup of everything before you delete, just in case. This is tough love, not stupidity.
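If it helps to picture the sweep, here is a minimal sketch in Python, assuming the shared drive is an ordinary directory tree, the retention nominations arrive as a plain text file of paths, and candidates are moved to a backed-up quarantine area rather than deleted outright. The paths and the file name are hypothetical, not a prescription.

```python
# A minimal sketch of the cleanup sweep described above. Assumptions: the
# share is a local directory tree, nominated files are listed one path per
# line in keep_list.txt, and we quarantine candidates instead of deleting.
import shutil
import time
from pathlib import Path

SHARE = Path("/mnt/shared")              # hypothetical shared drive root
QUARANTINE = Path("/mnt/quarantine")     # backed-up holding area, not a hard delete
NOMINATED = {line.strip() for line in open("keep_list.txt")}
CUTOFF = time.time() - 365 * 24 * 3600   # "older than one year"

for path in SHARE.rglob("*"):
    if not path.is_file():
        continue
    if str(path) in NOMINATED:
        continue                          # explicitly nominated for retention
    if path.stat().st_mtime < CUTOFF:     # not updated in over a year
        target = QUARANTINE / path.relative_to(SHARE)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), target)    # quarantine first; purge later
```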

Structuring the new shared drives

If your records manager insists on using your already overly complex, hierarchical corporate classification scheme or taxonomy as the model for the new shared drive structure, politely ask them to look for another job. Do you want this to work or not?

Records managers and archivists and librarians (and scientists) understand and love complex classification systems. However, end users don’t understand them, don’t like them and won’t use them. End users have no wish to become part-time records managers; they have their own work to do, thank you.

By all means make the new structure a subset of the classification system, major headings only and no more than two levels if possible. If it takes longer than a few seconds to decide where to save something or to find something then it is too complex. If three people save the same document in three different places then it is too complex. If a senior manager can’t find something instantly then it is too complex. The staff aren’t to blame, you are.

I have written about this issue previously and you can reference a white paper at this link, “Do you really need a Taxonomy?”

The shared drives aren’t where we classify documents; they are where we make it as easy and as fast as possible to save, retrieve and work on documents; no more, no less. Proper classification (if I can use that term) happens later when you use intelligent software to automatically capture, analyse and store documents in your document management system.

Please note, shared drives are not a document management system and a document management system should never just be a copy of your shared drives. They have different jobs to do.

Making the new system work

Let’s fall back on one of the oldest acronyms in business, KISS, “Keep It Simple Stupid!” Simple is good and elegant, complex is bad and unfathomable.

Testing is a good example of where the KISS principle must be applied. Asking all staff to participate in the testing process may be diplomatic but it is also suicidal. You need to select your testers. You need to pick a small number of smart people from all levels of your organization. Don’t ask for volunteers, you will get the wrong people applying. Do you want participants who are committed to the system working, or those who are committed to it failing? Do you want this to succeed or not?

If I am pressed for time I use what I call the straight-line method. Imagine all staff in a straight line from the most junior to the most senior. Select from both ends, the most junior and the most senior. Chances are that if the system works for this subset it will also work for all the staff in between.
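As a toy illustration of the straight-line method, the sketch below simply sorts a made-up staff list by seniority and picks testers from both ends; the names and the seniority field are assumptions for the example only.

```python
# A toy illustration of the straight-line method: sort staff by seniority
# and select testers from both ends of the line.
staff = [
    {"name": "Amy", "seniority": 1},    # most junior
    {"name": "Raj", "seniority": 4},
    {"name": "Lee", "seniority": 7},
    {"name": "Dana", "seniority": 12},  # most senior
]

def pick_testers(people, per_end=1):
    """Return testers drawn from the junior and senior ends of the line."""
    line = sorted(people, key=lambda p: p["seniority"])
    return line[:per_end] + line[-per_end:]

print([p["name"] for p in pick_testers(staff)])   # ['Amy', 'Dana']
```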

Make it clear to all that the shared drives are not your document management system. The shared drives are there for ease of access and to work on documents. The document management system has business rules to ensure that you have inviolate copies of important documents plus all relevant contextual information. The document management system is where you apply business rules and workflow. The document management system is all about business process management and compliance. The shared drives and the document management system are related and integrated but they have different jobs to do.

We have shared drives so staff don’t work on documents on ‘private’ drives, inaccessible and invisible to others. We provide a shared drive resource so staff can collaborate and share information and easily work on documents. We have shared drives so that when someone leaves we still have all their documents and work-in-process.

Please do all the complex processes required in your document management system using intelligent software, automate as much as possible. Productivity gains come about when you take work off staff, not when you load them up with more work. Give your staff as much time as possible so they can use their expertise to do the core job they were hired for.

If you don’t force extra work on your staff and if you make it as easy and as fast as possible to use the shared drives then your system will work. Do the opposite and I guarantee it will not work.

What is the future of RecFind? - The Product Road Map

by Frank 19. May 2014 06:00

First a little history. We began in 1984 with our first document management application called DocFind, marketed by the then Burroughs Corporation (now called Unisys). In June 1986 we sold the first version of RecFind, a fully-featured electronic records management system and a vast improvement on the DocFind product. Then we progressively added document imaging, then electronic document management and workflow, and then, with RecFind 6, a brand new paradigm and an amalgam of all previous functionality: an information management system able to run multiple applications concurrently with a complete set of enterprise content management functionality. RecFind 6 is the eighth completely new iteration of the iconic RecFind brand.

RecFind 6 was and is unique in our industry because it was designed as what was previously called a Rapid Application Development (RAD) system but, unlike previous examples, we provided the high-level toolset so new applications could be inexpensively ‘configured’ (by using the DRM) rather than expensively programmed, and new application tables and fields easily populated using Xchange. It immediately provided every customer with the ability to change almost anything they needed changed without needing to deal with the vendor (us). Each customer had the same tools we used to configure multiple applications within a single copy of RecFind 6. RecFind 6 was the first ECM product to truly empower the customer and to release them from the expensive and time-consuming process of having to negotiate with the vendor to “make changes and get things done.”

In essence, the future of the RecFind brand can be summarised as more of the same but as an even easier-to-use and more powerful product. Architecturally, we are moving away from the fat-client model (in our case based on the .NET smart-client paradigm) to the zero-footprint, thin-client model to reduce installation and maintenance costs and to support far more operating system platforms than just Microsoft Windows. The new version 2.6 web-client for instance happily runs on my iPad within the Safari browser and provides me with all the information I need on my customers when I travel or work from home (we use RecFind 6 as our Customer Relationship Management system or CRM). I no longer need a PC at home, nor do I need to carry a heavy laptop through airports.

One of my goals for the remainder of 2014 and into 2015 is to convince my customer base to move to the RecFind 6 web-client from the standard .NET smart-client. This is because the web-client provides tangible, measurable cost benefits and will be the basis for a host of new features as we gradually deprecate the .NET smart-client and expand the functionality of the web-client. We do not believe there is a future for the fat/smart-client paradigm; it has seen its day. Customers are rightfully demanding a zero footprint and the support of an extensive range of operating environments and devices, including mobile devices such as smartphones and tablets. Our web-client provides the functionality, mobile device support and convenience they are demanding.

Of course the back-end of the product, the image and data repository, also comes in for major upgrades and improvements. We are sticking with MS SQL Server as our database but will incorporate a host of new features and improvements to better facilitate the handling of ‘big data’. We will continue to research and make improvements to the way we capture, store and retrieve data and, because our customers’ databases are now so large (measured in hundreds of gigabytes), we are making it easier and faster to both back up and audit the repository. The objectives as always are scalability, speed, security and robustness.

We are also adding new functionality to allow the customer to bypass our standard user interface (e.g., the .NET smart-client or web-client) and create their own user interface or presentation layer. The objective is to make it as easy as possible for the customer to create tailored interfaces for each operating unit within their organization. A simple way to think of this functionality is to imagine a single high level tool that lets you quickly and easily create your own screens and dashboards and program to our SDK.

On the add-on product front we will continue to invest in products such as the Button, the MINI API, the SDK, GEM, RecCapture, the High Speed Scanning Module and the SharePoint Integration Module. Even though the base product RecFind 6 has a full complement of enterprise content management functionality, these add-on products provide options requested by our customers. They are generally a way to do things faster and more automatically.

We will continue to provide two approaches for document management; the end-user paradigm (RecFind 6 plus the Button) and the fully automatic capture and classification paradigm (RecFind 6 plus GEM and RecCapture). As has been the case, we also fully expect a lot of our customers to combine both paradigms in a hybrid solution.

The major architectural change is away from the .NET smart-client (fat-client) paradigm to the browser-based thin-client or web-client paradigm. We see this as the future for all application software, unconstrained by the strictures of proprietary operating systems like Microsoft Windows.

As always, our approach, our credo, is that we do all the hard work so you don’t have to. We provide the feature rich, scalable and robust image and data repository and we also provide all of the high level tools so you can configure your applications that access our repository. We also continue to invest in supporting and enhancing all of our products making sure that they have the feature set you require and run in the operating environments you require them to. We invest in the ongoing development of our products to protect your investment in our products. This is our responsibility and our contribution to our ongoing partnership.

 

Are you also confused by the term Enterprise Content Management?

by Frank 16. September 2012 06:00

I may be wrong but I think it was AIIM that first coined the phrase Enterprise Content Management to describe both our industry and our application solutions.

Whereas the term isn’t as nebulous as Knowledge Management, it is nevertheless about as useful when trying to understand what organizations in this space actually do. At its simplest level it is a collective term for a number of related business applications like records management, document management, imaging, workflow, business process management, email management and archiving, digital asset management, web site content management, etc.

To simple people like me the more appropriate term or label would be Information Management but as I have already covered this in a previous Blog I won’t belabour the point in this one.

When trying to define what enterprise content management actually means or stands for we can discard the words ‘enterprise’ and ‘management’ as superfluous to our needs and just concentrate on the key word ‘content’. That is, we are talking about systems that in some way create and manage content.

So, what exactly is meant by the term ‘content’?

In the early days of content management discussions we classified content into two broad categories, structured and unstructured. Basically, structured content had named sections or labels and unstructured content did not. Generalising even further we can say that an email is an example of structured content because it has commonly named, standardised and accessible sections or labels like ‘Sender’, ‘Recipient’, ‘Subject’ etc., that we can interrogate and rely on to carry a particular class or type of information. The same general approach would regard a Word document as unstructured because the content of a Word document does not have commonly named and standardised sections or labels. Basically a Word document is an irregular collection of characters that you have to parse and examine to determine content.

Like Newtonian physics, the above generalisations do not apply to everything and can be argued until the cows come home. In truth, every document has an accessible structure of some kind. For example, a Word document has an author, a size, a date written, etc. It is just that it is far easier to find out who the recipient of an email was than the recipient of a Word document. This is because there is a common and standard ‘Tag’ that tells us who the recipient is of an email and there is no such common and standard tag for a Word document.

In our business we call ‘information about information’ (e.g., the recipient and date fields on an email) Metadata. If an object has recognizable Metadata then it is far easier to process than an object without recognizable Metadata. We may then say that adding Metadata to an object is the same as adding structure.
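To make the distinction concrete, here is a small Python sketch using the standard library’s email parser; the message itself is invented. The email’s Metadata is addressable through standard, named tags, whereas a Word document’s body offers no equivalent ‘Recipient’ tag to interrogate.

```python
# A minimal sketch of "structured" vs "unstructured" content. The email text
# is made up; the point is that its metadata sits behind standard headers.
from email.parser import Parser

raw_email = (
    "From: frank@example.com\n"
    "To: records@example.com\n"
    "Subject: Contract renewal\n"
    "\n"
    "Please find the renewal terms attached.\n"
)

msg = Parser().parsestr(raw_email)

# Structured: standard tags let us ask directly for the metadata we need.
print(msg["From"], msg["To"], msg["Subject"])

# Unstructured: a Word document has no 'Recipient' tag; to find one we would
# have to parse and interpret the body text itself.
```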

Adding structure is what we do when we create a Word document using a template or when we add tags to a Word document. We are normalizing the standard information we require in our business processes so the objects we deal with have the structure we require to easily and accurately identify and process them.

This is of course one of the long-standing problems in our industry, we spend far too much time and money trying to parse and interpret unstructured objects when we should be going back to the coal face and adding structure when the object is first created. This is of course relatively easy to do if we are creating the objects (e.g., a Word document) but not easy to achieve if we are receiving documents from foreign sources like our customers, our suppliers or the government. Unless you are the eight-hundred pound gorilla (like Walmart) it is very difficult to force your partners to add the structure you require to make processing as fast and as easy and as accurate as possible.

There have been attempts in the past to come up with common ‘standards’ that would have regulated document structure but none have been successful. The last one was when XML was the bright new kid on the block and the XML industry rushed headlong into defining XML standards for every conceivable industry to facilitate common structures and to make data transfer between different organizations as easy and as standard as possible. The various XML standardisation projects sucked up millions or even billions of dollars but did not produce the desired results; we are still spending billions of dollars each year parsing unstructured documents trying to determine content.

So, back to the original question, what exactly is Enterprise Content Management? The simple answer is that it is the business or process of extracting useful information from objects such as emails and PDFs and Word documents and then using that information in a business process. It is all about the process of capturing Metadata and content in the most accurate and expeditious manner possible so we can automate business processes as much as possible.

If done properly, it makes your job more pleasant, saves your organization money and makes your customers and suppliers happier. As such it sounds a lot like motherhood (who is going to argue against it?) but it certainly isn’t like manna from heaven. There is always a cost and it is usually significant. As always, you reap what you sow; effort and cost produce rewards.

Is content management something you should consider? The answer is definitely yes with one proviso; please make sure that the benefits are greater than the cost.

 

Why isn’t Linux the universal desktop operating system?

by Frank 9. September 2012 06:00

I own and run a software company building enterprise content management solutions (RecFind 6) and I have a love/hate relationship with Microsoft Windows.

I love Windows because it is a universal platform I can develop for that provides me access to ninety-percent plus of the business and government organizations in the world.  I only need one set of source code and one set of development skills and I can leverage off this to offer my solutions to virtually any organization in any location. We may say that Microsoft Windows is ubiquitous.

I hate Windows because it is overly complex, unnecessarily difficult to build software for, buggy and causes me to have to spend far more money on software development than I ought to. There are many times each year when all I really want to do is assemble all the Microsoft programmers in one place and then bang their heads together and shout at them, “for heaven’s sake, why don’t you guys just talk to each other!”

Linux on the other hand, even in its many manifestations (one of its main problems), is not ubiquitous and it does not provide me with an entry point to ninety-percent of the world’s businesses and government agencies. This is why I don’t develop software for Linux.

Because I don’t develop application software for Linux I am not an expert in Linux but I have installed and run Ubuntu as a desktop operating system and I really like it. It is simple, clean and easy to use; more ‘Apple-like’ than ‘Windows-like’ to my eyes and all the better for it. It is also a great software development platform for programmers especially using the Eclipse IDE. It is also free and most of the office software you need (like OpenOffice) is also free. It also runs happily on virtually any PC or notebook and seems to be a lot faster than Windows.

So, Ubuntu (a flavour of Linux but a very good one) is free, most of the office software you need is also free, it looks good, runs on your hardware and is easy to use and uncomplicated. So why isn’t it ubiquitous? Why are people and organizations all over the world paying for (and struggling with – who remembers Vista?) inferior Windows when Linux varieties like Ubuntu are both free and better? Why are users and organizations now planning to pay to upgrade to Windows 7 or Windows 8 when alternative operating systems like Ubuntu will do the job and are free?

I read a lot of technical papers and IT blogs and I notice that the Linux community has been having similar discussions for years. As an ‘outsider’ (i.e., not a Linux zealot) it is pretty obvious to me that the Linux community is the main reason Linux is not ubiquitous. Please read the following ZDNet link and then tell me what you think.

http://www.zdnet.com/linus-torvalds-on-the-linux-desktops-popularity-problems-7000003641/

When I read an article like this two terms come immediately to mind: internecine bickering and sibling rivalry. How many versions of Linux do we need? The Linux fraternity calls these distributions, or ‘distros’ to the insiders. At last count there are around 600 distros, of which 300 are actively maintained. Ubuntu is just one of these distros. How would the business world fare if there were 300 versions of Windows? Admittedly, most of the 300 have been built for a specialised use and the real list of general-use versions of Linux is much smaller and includes product names such as Ubuntu, Kubuntu, Fedora, Mint, Debian, Arch, openSUSE, Red Hat and about a dozen more.

But, it gets worse. On Ubuntu alone there are three main desktop environments to choose from: GNOME, KDE and Xfce. Are you confused yet? Is it now obvious why Linux is not the default desktop operating system? It probably isn’t obvious to the squabbling Linux insider community but it is patently obvious to everyone else.

Linux isn’t the default desktop operating system because there is not a single standard and there is never likely to be a single standard. No software developer is going to invest millions of dollars in building commercial applications for Linux because of this. Without a huge library of software applications there is no commercial market for Linux. Windows reigns supreme despite its painful problems because it provides a single platform and because software developers do invest in building millions of commercial applications for the Windows operating system.

Until such time as the Linux community stops its in-fighting and produces a single robust, supported version of Linux (when hell freezes over I hear you say) the situation will not change. The inferior desktop operating system Windows will continue to dominate and Linux will remain the plaything of propeller-heads and techies and old guys like me who really like it (well, the Ubuntu version that is, there are too many distros for me to become an expert in all of them and that is the core of the problem).

Is Information Management now back in focus?

by Frank 12. August 2012 06:00

When we were all learning about what used to be called Data Processing we also learned about the hierarchy or transformation of information. That is, “data to information to knowledge to wisdom.”

Unfortunately, as information management is part of what we call the Information Technology industry (IT) we as a group are never satisfied with simple self-explanatory terms. Because of this age-old flaw we continue to invent and hype new terms like Knowledge Management and Enterprise Content Management most of which are so vague and ill-defined as to be virtually meaningless but nevertheless, provide great scope for marketing hype and consultants’ income.

Because of the ongoing creation of new terminology and the accompanying acronyms we have managed to confuse almost everyone. Personally I have always favoured the term ‘information management’ because it tells it like it is and it needs little further explanation. In the parlance of the common man it is an “old un, but a good un.”

The thing I most disliked about the muddy knowledge management term was the claim that computers and software could produce knowledge. That may well come in the age of cyborgs and true artificial intelligence but I haven’t seen it yet. At best, computers and software produce information which human beings can convert to knowledge via a unique human cognitive process.

I am fortunate in that I have been designing and programming information management solutions for a very long time so I have witnessed first-hand the enormous improvements in technology and tools that have occurred over time. Basically this means I am able to design and build an infinitely better information management solution today than I could have twenty-nine years ago when I started this business. For example, the current product RecFind 6 is a much better, more flexible, more feature-rich and more scalable product than the previous K1 product and it in turn was an infinitely better product than the previous one called RecFind 5.

One of the main factors in them being better products than their predecessors is that each time we started afresh with the latest technology; we didn’t build on the old product, we discarded it completely and started anew. As a general rule of thumb I believe that software developers need to do this on around a five-year cycle. Going past the five-year life cycle inevitably means you end up compromising the design because of the need to support old technology. You are carrying ‘baggage’ and it is like trying to run a marathon with a hundred-pound (45 kg) backpack.

I recently re-read an old 1995 white paper I wrote on the future of information management software which I titled “Document Management, Records Management, Image Management Workflow Management...What? – The I.D.E.A”. I realised after reading this old paper that it is only now that I am getting close to achieving my lofty ambitions as espoused in the early paper. It is only now that I have access to the technology required to achieve my design ambitions. In fact I now believe that despite its 1995 heritage this is a paper every aspiring information management solution creator should reference because we are all still trying to achieve the ideal ‘It Does Everything Application’ (but remember that it was my I.D.E.A. first).

Of course, if you are involved in software development then you realise that your job is never done. There are always new features to add, there are always new releases of products like Windows and SQL Server to test and certify against and there are always new releases of development tools like Visual Studio and HTML5 to learn and start using.

You also realise that software development is probably the dumbest business in the world to be part of with the exception of drug development, the only other business I can think of which has a longer timeframe between beginning R&D and earning a dollar. We typically spend millions of dollars and two to three years to bring a brand new product to market. Luckily, we still have the existing product to sell and fund the R&D. Start-ups however, don’t have this option and must rely on mortgaging the house or generous friends and relatives or venture capital companies to fund the initial development cycle.

Whatever the source of funding, from my experience it takes a brave man or woman to enter into a process where the first few years are all cost and no revenue. You have to believe in your vision, your dream and you have to be prepared for hard times and compromises and failed partnerships. Software development is not for the faint hearted.

When I wrote that white paper on the I.D.E.A. (the It Does Everything Application, or my ‘idea’ or vision at that time) I really thought that I was going to build it in the next few years; I didn’t think it would take another fifteen years. Of course, I am now working on the next release of RecFind so it is actually more than fifteen years.

Happily, I now market RecFind 6 as an information management solution because information management is definitely back in vogue. Hopefully, everyone understands what it means. If they don’t, I guess that I will just have to write more white papers and Blogs.

Have we really thought about disaster recovery?

by Frank 29. July 2012 06:00

The greatest knowledge-loss disaster I can think of was the destruction of the great library of Alexandria by fire around 642 AD. This was the world’s largest and most complete store of knowledge at the time and it was almost totally destroyed. It would take over a thousand years for mankind to rediscover and regain the knowledge that went up in smoke and to this day we still don’t think we have recovered or re-discovered a lot of what was lost. It was an unmitigated disaster for mankind because nearly all of Alexandria’s records were flammable and most were irreplaceable.

By contrast, we still have far older records from ancient peoples like the Egyptians of five-thousand years ago because they carved their records in stone, a far more durable material.

How durable and protected are your vital records?

I mentioned vital records because disaster recovery is really all about protecting your vital records. If you are a business, a vital record is any record without which your business could not run. For the rest of us a vital record is irreplaceable knowledge or memories. I bet the first thing you grab when fire or flood threatens your home is the family photo album or, in this day and age, the home computer or iPad or backup drive.

In 1996 I presented a paper to the records management society titled “Using technology as a surrogate for managing and capturing vital paper based records.” The technology references are now both quaint and out-of-date but the message is still valid. You need to use the most appropriate technology and processes to protect your vital records.

Interestingly, the challenges today are far greater than they were in 1996 because of the ubiquitous ‘Cloud’.  If you are using Google Docs or Office 365 or even Apple iCloud who do you think is protecting your vital records? Have you heard the term ‘outage’? Would you leave your children with a stranger, especially a stranger who doesn’t even tell you the physical location of your children? A stranger who is liable to say, “Sorry, it appears that your children are missing but under our agreement I accept no liability.” Have you ever read the standard terms and conditions of your Cloud provider? What are your rights if your vital records just disappear? Where are your children right now?

Some challenges are surprisingly no different because we are still producing a large proportion of our vital records on paper. Apart from its major flaws of being highly flammable and subject to water damage, paper is in fact an excellent medium for the long-term preservation of vital records because we don’t need technology to read it; we may say paper is technology agnostic.

By contrast, all forms of electronic or optical storage are strictly technology dependent. What good is that ten-year-old DAT tape if you no longer have the Pentium computer, SCSI card, cable and Windows 95 drivers to read it? Have you moved your vital records to new technology lately?

And now to the old bugbear (a persistent problem or source of annoyance), a backup is not disaster recovery. If your IT manager tells you that you are OK because he takes backups you should smack him with your heaviest notebook, (not the iPad, the iPad is too light and definitely not with the Samsung tablet, it is too fragile).

I have written about what disaster recovery really involves and described our disaster recovery services so I won’t repeat it here, I have just provided the link so you can read at your leisure.

Suffice to say, the objective of any disaster recovery process is to ensure that you can keep running your business or life with only a minimal disruption regardless of the type or scale of the disaster.

I am willing to bet that ninety-percent of homes and businesses are unprepared and cannot in any way guarantee that they could continue to run their business or home after a major disaster.

We don’t need to look as far back as 642 AD and the Alexandria Library fire for pertinent examples. How about the tsunami in Japan in 2011? Over 200,000 homes totally destroyed and countless business premises wiped from the face of the earth. Tsunamis, earthquakes, floods, fire and wars are all very real dangers no matter where you live.

However, it isn’t just natural disasters you need to be wary of. A recent study published by EMC Corporation offers a look at how companies in Japan and Asia Pacific deal with disaster recovery. According to the study, the top three causes of data loss and downtime are hardware failure (60%), data corruption (47%), and loss of power (44%).

The study also goes on to analyse how companies are managing backups and concludes, “For all the differences inherent to how countries in the Asia Pacific region deal with their data, there is at least one similarity with the rest of the world: Companies are faced with an increasing amount of data to move within the same backup windows. Many businesses in the region, though, still rely on tape backup systems (38%) or CD-ROMs (38%). On this front, the study found that many businesses (53%) have plans to migrate from tape to a faster medium in order to improve the efficiencies of their data backup and recovery.”

It concludes by estimating where backups are actually stored, “The predominant response is to store offsite data at another company-owned location within the same country (58%), which is followed by at a “third-party site” within the same country.”

I certainly wouldn’t be relying on tape as my only recovery medium and neither would I be relying on data and systems stored at the same site or at an employee’s house. Duplication and separation are the two key principles together with proven and regularly tested processes.

I recently spoke to an IT manager who wasn’t sure what his backup (we didn’t get to disaster recovery) processes were. That was bad enough, but when he found out it turned out that they took a full backup once a month and incremental backups every day, and that he had not tested the recovery process in years. I sincerely hope that he has somewhere to run and hide when and if his company ever suffers a disaster.
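For what it is worth, here is a minimal sketch of the kind of scheduled recovery drill I have in mind: restore the latest backup set to a scratch area and verify a sample of files against stored checksums. The backup locations, manifest format and the restore command are hypothetical placeholders for whatever backup tool you actually use.

```python
# A minimal sketch of a scheduled recovery drill (not a product feature):
# restore the latest backup set to a scratch location and verify restored
# files against a checksum manifest. Paths, manifest format and the
# "restore-tool" command are placeholders for your real backup tooling.
import hashlib
import pathlib
import subprocess
import sys

BACKUP_SET = "/backups/latest"                         # assumed backup location
SCRATCH = "/tmp/restore-test"                          # throwaway restore target
MANIFEST = pathlib.Path("/backups/latest.manifest")    # "relative_path  sha256" per line

# 1. Restore into the scratch area (replace with your tool's real restore command).
subprocess.run(["restore-tool", "--from", BACKUP_SET, "--to", SCRATCH], check=True)

# 2. Verify the restored files against the manifest.
failures = 0
for line in MANIFEST.read_text().splitlines():
    rel_path, expected = line.rsplit(None, 1)
    data = (pathlib.Path(SCRATCH) / rel_path).read_bytes()
    if hashlib.sha256(data).hexdigest() != expected:
        failures += 1
        print(f"MISMATCH: {rel_path}")

sys.exit(1 if failures else 0)   # non-zero exit flags a failed drill to the scheduler
```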

In a nutshell, disaster recovery is all about being able to get up and running again in as short a time as possible even if your building burns to the ground. That in fact is the acid test of any disaster recovery plan. That is, ask your IT manager, “If this building burns down Thursday explain to me how we will be up and operating again on Friday morning.”

If his answer doesn’t fill you with confidence then you do not have a disaster recovery plan.

 

Business Processes Management, BPM, BPO; just what does it entail?

by Frank 15. July 2012 06:00

Like me, I am sure that you have been inundated with ads, articles, white papers and proposals for something called BPM or BPO: Business Process Management, Business Process Outsourcing and Business Process Optimisation.

Do you really understand what it all means?

BPM and BPO certainly aren’t new; there have been many companies offering innovative and often cutting-edge technology solutions for many years. The pioneering days were probably the early 1980s. One early innovator I can recall (and admired) was Tower Technology because their office was just across from our old offices in Lane Cove.

In the early days BPM was all about imaging and workflow and forms. Vendors like Tower Technology used early versions of workflow products like Staffware and a whole assortment of different imaging and forms products to solve customer processing problems. It involved a lot of inventing and a lot of creative genius to make all those disparate products work and actually do what the sales person promised. More often than not the final solution didn’t quite work as promised and it always seemed to cost a lot more than quoted.

Like all new technologies everyone had to go through a learning process and like most new technologies, for many years the promises were far ahead of what was actually delivered.

So, is it any different today? Is BPM a proven, reliable and feature-rich and mature technology?

The answer, dear friends, is yes and no, just as it was twenty-five or more years ago.

There is a wonderful Latin phrase, ‘Caveat Emptor’, which means “Let the buyer beware”. Caveat Emptor applies just as much today as it did in the early days because, despite the enormous technological progress we have all witnessed and experienced, we are still pushing the envelope. We are still being asked to do things the current software and hardware can’t quite yet handle. The behind-the-scenes technicians are still trying to make the product do what the sales person promised in good faith (we hope) because he didn’t really understand his product set.

Caveat Emptor means it is up to the buyer to evaluate the offering and decide if it can do the job. Of course, if the vendor lies or makes blatant false claims then Caveat Emptor no longer applies and you can hit them with a lawsuit. However, in reality it is rarely as black and white as that. The technology is complex and the proposals and explanations are full of proprietary terminology, ambiguities, acronyms and weasel words.

Like most agreements in life you shouldn’t enter into a BPM contract unless you know exactly what you are getting into. This is especially true with BPM or BPO because you are talking about handing over part of your core business processes to someone else to ‘improve’. If you don’t understand what is being proposed then please hire someone who does; I guarantee it will be worth the investment. This is especially true if you are outsourcing customer or supplier facing processes like accounts payable and accounts receivable. Better to spend a little more up front than suffer cost overruns, failed processes and an inbox full of complaints.

My advice is to always begin with some form of a consultancy to ‘examine’ your processes and produce a report containing conclusions and recommendations. The vendor may (should) offer this as part of its sales process and it may be free or it may be chargeable.  Personally, I believe in the old adage that you get what you pay for so I would prefer to pay to have a qualified and experienced professional consultant do the study. The advantage of paying for the study is that you then ‘own’ the report and can then legally provide it to other vendors to obtain competitive quotes.

You should also have a pretty good idea of what the current processing is costing you in both direct and indirect costs (e.g., lost sales, dissatisfied customers, unhappy staff, etc.) before beginning the evaluation exercise. Otherwise, how are you going to be able to judge the added value of the vendor’s proposal?

In my experience the most common set of processes to be ‘outsourced’ are those to do with accounts payable processing. This is the automation of all processes beginning with your purchase order (and its line items), the delivery docket (proof of receipt), invoices (and line items) and statements. The automation should reconcile invoices to delivery dockets and purchase orders and should throw up any discrepancies such as items invoiced but not delivered, variations in price, etc. Vendors will usually propose what is commonly called an automatic matching engine; the software that reads all the documents and does its best to make sure you only pay for delivered goods that are exactly as ordered.
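As a toy illustration only (not how any particular vendor’s matching engine works), the sketch below shows the essence of three-way matching: compare invoice lines against the purchase order and the delivery docket and report the discrepancies. The data structures and the price tolerance are assumptions for the example.

```python
# A toy sketch of three-way matching: invoice lines checked against the
# purchase order and the delivery docket. Structures and tolerance are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Line:
    item: str
    quantity: int
    unit_price: float

def match_invoice(po_lines, delivered, invoice_lines, price_tolerance=0.01):
    """Return a list of discrepancy messages; an empty list means the invoice matches."""
    issues = []
    po = {l.item: l for l in po_lines}
    received = {l.item: l.quantity for l in delivered}
    for inv in invoice_lines:
        ordered = po.get(inv.item)
        if ordered is None:
            issues.append(f"{inv.item}: invoiced but never ordered")
            continue
        if inv.quantity > received.get(inv.item, 0):
            issues.append(f"{inv.item}: invoiced {inv.quantity}, delivered {received.get(inv.item, 0)}")
        if abs(inv.unit_price - ordered.unit_price) > price_tolerance:
            issues.append(f"{inv.item}: price {inv.unit_price} differs from PO price {ordered.unit_price}")
    return issues

# Example: an item short-delivered and invoiced at a different price.
po = [Line("toner", 10, 45.00)]
docket = [Line("toner", 8, 0.0)]
invoice = [Line("toner", 10, 47.50)]
print(match_invoice(po, docket, invoice))
```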

If the vendor’s proposal is to be attractive it must replace your manual processing with an automated model that is faster and more accurate. Ideally, it would also be more cost-effective but even if it is more costly than your manual direct cost estimate it should still solve most of your indirect cost problems like unhappy suppliers and late payment fees.

In essence, there is nothing magical about BPM and BPO; it is all about replacing inefficient manual processes with much more efficient automated ones using clever computer software. The magic, if that is the word to use, is about getting it right. You need to know what the current manual processing is costing you. You need to be absolutely sure that you fully understand the vendor’s proposal and you need to build in metrics so you can accurately evaluate the finished product and clearly determine if it is meeting its stated objectives.

Please don’t enter into negotiations thinking that if it doesn’t work you can just blame the vendor. That would be akin to cutting off your nose to spite your face. Remember Caveat Emptor; success or failure really depends upon how well you do your job as the customer.

Does the customer want to deal with a sales person?

by Frank 8. July 2012 06:00

We are in the enterprise content management business or more explicitly in the information management business and we provide a range of solutions including contract management, records management, document management, asset management, HR management, policy management, etc. We are a software company that designs and develops its own products. We also develop and provide all the services required to make our products work once installed at the customer’s site.

However, we aren’t in the ‘creating innovative software’ business even though that is what we do; we are really in the ‘selling our innovative software’ business because without sales there would be no business and no products and no services (and no employees).

We have been in business for nearly 30 years and have watched and participated as both technology and practices have evolved over that time. Some changes are easy to see. For example, we no longer produce paper marketing collateral; we produce all of our marketing collateral in HTML or PDF form for delivery via our website and email. We also now market to the world via our website and the Internet, not just to our ‘local’ area.

Another major area of change has been the interface between the customer and the vendor. Many companies today no longer provide a human-face interface. Most big companies and government agencies no longer maintain a shopfront; they require you to deal with them via a website. Some don’t even allow a phone call or email; your only contact is via a web form.

Sometimes the website interface works but mostly it is a bit hit and miss and a very frustrating experience as the website fails or doesn’t offer the option you need. My pet hate is being forced to fill in a web form and then never hearing back from the vendor. Support is often non-existent or very expensive. From my viewpoint, a major failing of the modern paradigm is that I more often than not cannot get the information I need to evaluate a product from the website. This is when I try to find a way to ask them to please have a sales person contact me as I need to know more about their product or service.

I look forward to a sales person contacting me because I know what I want and I know what questions I need answers to. However, the sad truth is that I am rarely contacted by a sales person (and I refuse to speak to anyone from an Indian call centre because I have no wish to waste my time). However, experience with my customers and prospects tells me that not everyone is as enamoured with sales people as I am. In fact, many of the people I have contact with are very nervous of sales people, some are even afraid of them.

Unfortunately for me, we aren’t in a business where we can sell our products and services via a webpage and cart checkout. We need to understand the customer’s business needs before we can provide a solution so we need to employ high quality sales people who are business savvy and really understand business processes. It is not until I know enough to be able to restate the customer’s requirement in detail that I am in a position to make a sale. Conversely, the customer isn’t going to buy anything from me until he/she is absolutely sure I understand the problem and can articulate the solution.

So, in my industry I rely on a human interface and that usually means a sales person. But, do I really need a sales person and do my customers and prospective customers really want to speak to a sales person? Is there a more modern alternative? Please trust me when I say I have pondered this question many, many times.

Those in my business (selling information management solutions) will know how hard it is to find a good sales person and how hard it is to keep them. The good ones are less than ten-percent of the available pool and even after you hire them they are still besieged by offers from recruiters. Finding and retaining good sales people is in my opinion the biggest problem facing all the companies in our industry. They are also the most expensive of human resources and after paying a recruitment fee and a big salary you are then faced with the 80:20 rule; that is, 20% of the sales force produces 80% of your revenues.

Believe me, if I could find a way to meet my sales targets without expensive and difficult to manage sales people I would. However, as our solutions are all about adapting our technology to the customer’s often very complex business processes this is not a solution that can be sold via a website or automated questionnaire; it requires a great deal of skill and experience.

So for now dear customer, please deal with my sales person; he or she is your best chance of solving that vexing problem that is costing your organization money and productivity. All you really need to do is be very clear about what you want and very focussed on the questions you want answered. There is nothing to be afraid of because if you do your homework you will quickly be able to differentiate the good sales person from the bad sales person and then take the appropriate action. I never deal with a bad sales person and nor should you. I also really enjoy dealing with a professional sales person who knows his/her business and knows how to research and qualify my needs.

A good sales person uses my time wisely and saves me money. A bad sales person doesn’t get the chance to waste my time. This should be your approach too; be happy and willing to deal with a sales person but only if he/she is a professional and can add value to your business.

Sales people call this the value proposition. More explicitly; if the sales person is not able to articulate a value proposition to the customer that resonates with the customer then he/she shouldn’t be there. Look for the value proposition; if it isn’t apparent, close the meeting. Make each and every sales person understand, if they aren’t able to articulate a value proposition for your business then there is no point in continuing the conversation.

Dealing with a sales person isn’t difficult; it is all up to you to know what you want (the value proposition) and what questions to ask. Do your preparation and you will never fear a sales person again.

 

Why is the Web-Client a much better solution for applications?

by Frank 17. June 2012 06:00

When we see terms like Web-Client or Thin-Client it means an application that runs in a browser like IE, Firefox or Chrome. The alternative is a Fat-Client usually meaning an application that runs within Windows (XP, Vista, Windows 7) on your desktop or notebook.

Neither the Web-Client nor the Fat-Client is a new concept, both having been around for many years, but both have changed and improved over time as they have employed new technologies. Most Fat-Client applications today for instance are based on the 2008 or 2010 Microsoft .NET platform and require the .NET Framework to be installed on the workstation or notebook. Most Web-Client applications today utilize advanced multi-tier techniques like Ajax plus more advanced toolsets from development systems like Visual Studio 2010 and provide a far better user interface and experience than their ancestors did.

In a nutshell, in the old days say fifteen years ago, Web-Clients were clunky, two-tier and had terrible user interfaces, nowhere near as good as their Fat-Client counterparts. Today, using the advanced development tools available, it is possible to build a Web-Client that looks and works little different from a Fat-Client. Much better development tools have made it much easier for programmers to build nicer looking, more functional and easier to use Web-Client user interfaces for applications.

It still isn’t all roses because not all browsers are equal and different browsers (e.g., IE versus Safari) and different versions of browsers (e.g., IE 6, 7, 8 and 9) force the programmer to write extra code to handle the differences. The advent of HTML5 will soon introduce another layer of differences and difficulty as vendors deploy different or non-standard versions of the HTML5 standard. However, this has been the case for as long as I can remember (there have always been differences in the HTML deployed by different vendors) and we programmers are used to it by now.

It used to be that, because of the limited development tools available to code Web-Client interfaces, the typical Web-Client had far less functionality than its Fat-Client equivalent. Whereas it is still easier to implement complex functionality in a .NET Fat-Client than a Web-Client, it is possible to provide equivalent functionality in a Web-Client; it just needs smarter programmers, more work and a few extra tools.

So if your application vendor offers you the choice of a Fat or Web user interface, which should you choose and why?

You first need to ask a couple of very important questions:

·         Does the Web-Client have equivalent functionality to the Fat-Client? and

·         Is the Web-Client operating system and browser independent?

Let’s address the second question first because it is the most important. You can, for example, write a Web-Client user interface that is not operating system independent and that in fact is ‘locked’ into a particular operating system like Windows and a particular browser like IE8. Most Silverlight Web-Client applications for instance require that the .NET Framework be installed on the local workstation or notebook. As the .NET Framework only works on Microsoft Windows systems it means you can’t run your Web-Client on other systems such as Linux, Mac OS, iOS or Android.

It also means that your IT department has to install and maintain software on each and every workstation or notebook that needs to run the application. This is the big problem because the cost of installing and maintaining software on each and every desktop and notebook is enormous.

Ideally, the Web-Client will be operating system independent and will support most of the latest versions of the most popular browsers. Expecting the Web-Client to support old versions of browsers is an unreasonable expectation.

If the Web-Client is operating system independent and has the same functionality as the Fat-Client then your decision is a foregone conclusion; go with the Web-Client and save on the installation and ongoing maintenance costs that would apply to the Fat-Client but not to the Web-Client.

If the Web-Client has a subset of the functionality of the Fat-Client you then need to compare the functionality offered to the needs of different classes of your users. It may not suit your systems administrator but will it suit inquiry users who just need to be able to browse, search, view and print? Will it also suit the middle-class of users who need to be able to add, modify and delete records but not perform administrative tasks like designing reports and building workflow templates?

It is important to have as many of your users as possible using the Web-Client because not only will this approach reduce your IT costs it will provide extra benefits for users who travel and operate from remote and poorly serviced locations not to mention those who work from home. After all, all that is needed is a computer (or tablet or smart-phone) and a secure Internet connection.  

Obviously, as a Web-Client runs from a Web Server your IT people need to ensure that it is secure and for example, operates as a secure and encrypted HTTPS website rather than as an insecure HTTP website. All traffic from public sites needs to be encrypted both ways as a minimum security requirement.

The other major benefit of the Web-Client is that it protects you from differences in operating systems, e.g., Windows XP versus Windows 7 or even Windows 8. A Web-Client runs in the browser, not in Windows so it is much less affected by fundamental changes in an operating system than a Fat-Client application which has to be re-compiled and re-certified against every change. Importantly, you are not locked in to a particular operating system or version of an operating system.

I expect most application vendors to be moving their customers to a Web-Client end user platform; we certainly will be and are investing large amounts of dollars in our RecFind 6 Web-Client interface. There are enormous cost and convenience benefits to our customers in moving from our Fat-Client to our Web-Client user interface and we will be doing everything in our power to encourage this move.

 

Using Terminal Digits to minimize “Squishing”

by Frank 13. May 2012 06:00

Have you ever had to remove files from shelving or cabinets and reallocate them to other spaces because a drawer or shelf is packed tight? Then had to do it again and again?

One of my favourite records managers used to call this the “Squishing” problem.

The squishing problem is inevitable if you start to load files from the beginning of any physical filing system, be it shelving or cabinets, and unload files from random locations as the retention schedule dictates. If you create and file parts (a new folder called part 2, part 3, etc., when the original file folder is full) then the problem is exacerbated. You may well spend a large part of your working life shuffling file folders from location to location; a frustrating, worthless, thankless task. You also get to inhale a lot of toxic paper dust and mites, which is not a good thing.

You may not be aware of it but there is a very simple algorithm you can utilize to make sure the squishing problem never happens to you. It is usually referred to as the ‘Terminal Digit’ file numbering system but you may call it whatever you like. The name isn’t important but the operation is.

Importantly, you don’t need to change your file numbering system other than by adding on additional numbers to the end. These additional numbers are the terminal digits.

The number of terminal digits you need depends upon how many file folders you have to manage. Here is a simple guideline:

·         One Terminal Digit (0 to 9) = one thousand files

·         Two Terminal Digits (00 to 99) = ten thousand Files

·         Three Terminal Digits (000 to 999) = greater than ten thousand files

Obviously, you also have to have the filing space and appropriate facilities available (e.g., boxes, bays, etc.,) to hold the required number of files for each terminal.

It is called the Terminal Digit system because you first have to separate your available filing space into a number of regular ‘terminals’. Each terminal is identified by a number, e.g., 0, 1, 2, 09, 23, 112, 999, etc.

The new terminal digit is additional and separate from your normal file number. It determines which terminal a file will be stored in. Let’s say your normal file number is of the format YYYY/SSSSSS. That is, the current year plus an automatically incrementing auto number like 2012/000189 then 2012/000190, etc. If we use two terminal digits and divide your available filing space into one hundred terminals (think of it as 100 equally sized filing slots or bays numbered 00 to 99) then your new file number format is YYYY/SSSSSS-99. The two generated file numbers above may now look like 2012/000189-00 and 2012/000190-01.

File folder 2012/000189-00 is filed in terminal number 00 and 2012/000190-01 is filed in terminal number 01. In a nutshell, what we are doing is distributing files evenly across all available filing space. We are not starting at terminal 00 and filling it up and then moving on to terminal 01, then terminal 02 when 01 is full, etc. Finding files is even easier because the first part of the file number you look at is the terminal digit. If a file number ends in 89 it will be in terminal 89, in file number order.
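For the programmers among you, here is a minimal sketch of that allocation, assuming two terminal digits (one hundred terminals, 00 to 99) appended to a YYYY/SSSSSS file number. In a real records management system the round-robin counter would of course be persisted in the database rather than held in memory.

```python
# A minimal sketch of terminal digit allocation: each new file folder is sent
# to the next terminal in turn, so folders spread evenly across all available
# filing space instead of filling shelf by shelf. Illustrative only.
import itertools

_terminal_cycle = itertools.cycle(range(100))   # 00, 01, ... 99, 00, 01, ...

def new_file_number(year: int, sequence: int) -> str:
    """Build a file number of the form YYYY/SSSSSS-TT."""
    terminal = next(_terminal_cycle)
    return f"{year}/{sequence:06d}-{terminal:02d}"

# Matches the example above: consecutive registrations go to consecutive terminals.
print(new_file_number(2012, 189))   # 2012/000189-00
print(new_file_number(2012, 190))   # 2012/000190-01
```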

The other good news is that when we unload files from the shelves, say at end of life or at the point in the lifecycle when they need to be sent offsite, we will also unload files evenly across all available filing space. If the terminals are actually big enough and if you have calculated everything correctly you should never again suffer from the ‘squishing’ problem and you should never again have to ingest paper dust and mites when tediously shuffling files from location to location.

Obviously, there is a little more to this than sticking a couple of digits on the end of your file number. I assume you are using a computerised records management system so changes have to be made or configured to correctly calculate the now extended file number (including the new terminal digit) and your colour file labels will need to be changed to show the terminal digit in a prominent position.

There is also the question of what to do with your existing squished file store. Ideally you would start from scratch with your new numbering systems and terminals and wait for the old system to disappear as the files age and disappear offsite to Grace or Iron Mountain. That probably won’t be possible so you will have to make decisions based on available resources and budget and come up with the best compromise.

I can’t prove it but I suspect that the terminal digit system has been around since people began filing stuff. It is an elegantly simple solution to an annoying and frustrating problem and involves nothing more complicated than simple arithmetic.

The surprise is that so few organizations actually use it. In twenty-five plus years in this business I don’t think I have seen it in use at more than one to two-percent of the customers I have visited. I have talked about it and recommended it often but the solution seems to end up in the too-hard basket; a shame really, especially for the records management staff charged with the constant shuffling of paper files.

It may be that you have a better solution but just in case you don’t, please humour me and have another look at the terminal digit filing solution. It may just save you an enormous amount of wasted time and make your long-suffering records staff a lot happier and a lot healthier.

 
