The Enterprise Content Management (ECM) challenges of managing ‘Records-in-Place’

by Frank 18. February 2020 01:01

 

The Challenge

An interesting challenge for Records Managers, Knowledge Managers and CIOs is the newer ‘Records-In-Place’ paradigm: being asked to manage all content without a single central repository. That is, to be responsible for all content across a myriad of locations controlled by a myriad of applications and a myriad of departments/organizations and people.

We all realize that Enterprise Content Management, or Content Services as it is now called by Gartner, is a moving target, constantly evolving with new challenges and new paradigms.

For example:  

  • How do we filter out only relevant information from social media?
  • How do we avoid capturing personal data and being culpable under privacy laws?
  • How do we capture all emails containing sexism, racism and bullying without being guilty of an invasion of privacy of the individual?
  • How do we meet all of our compliance obligations when our staff are spread across multiple states/counties/provinces and multiple countries with different legislation and compliance requirements?

All weighty challenges for the modern Records Manager, Knowledge Manager or CIO. Now we have a new challenge: how to manage multiple silos of information without a central repository.

Multi-Repository (multi-Silo) Systems

In multiple-repository systems we find multiple document stores or silos: local files, network file shares, local databases, multiple file servers, multiple SharePoint instances and multiple Cloud repositories like Dropbox, Box, iCloud, Google Cloud Storage and other hosted document storage. The CIO may proudly claim to manage multiple information silos, but what he or she really has is a laissez faire document management ecosystem that may well be centrally monitored (hopefully) but is most certainly not centrally managed.

In the multiple silo model, the documents in our multiple locations are ‘managed’ by multiple people and multiple independent applications (e.g., SharePoint, Google Docs, Office 365, etc.). We may have implemented another layer of software above all these diverse applications trying to keep up with what is happening, but if I am just ‘watching’ then I don’t have an inviolate copy and I don’t have any control over what happens to the document. I am unable to enforce any standards. There is no ‘standard’ central control over versioning or retention and no control over the document life cycle or chain of evidence.

For example, you wouldn’t know if the document had since been moved to a different location that you are not monitoring. You wouldn’t know if it had been deleted. You wouldn’t know its relationship to other documents and processes in other silos. You wouldn’t know its context in your enterprise and therefore you wouldn’t know how relevant this document was. The important distinction is that under the multiple silo model you are ‘watching’, not managing; multiple other pieces of software are managing the life cycles and dispositions of the documents independently.

All you really know is that at a certain point in time a document existed and what its properties were at that time (e.g., historical ‘natural’ Metadata such as original filename, author, date created, etc.). However, you have no contextual Metadata, no transactional Metadata, no common indexing and no common Business Classification System. In this case, you don’t have a document management system, you have a laissez faire document management ecosystem, an assortment of independently ‘managed’ information silos. Most importantly, you are not able to link documents to business processes that transcend organizational structures and silos.

What are the issues?

Sure, SharePoint and Cloud silos make collaboration easier but at what cost? What can’t we do with this multi-silo ecosystem? Why doesn’t this solution meet the best-practice objectives of an enterprise Document Management System? What are the major areas where it falls short? How does the proliferation of multiple silos and content repositories affect us? What are our risks? Here is my assessment of the major shortfalls of this ‘Records-In-Place’ paradigm.

We are unable to: 

  1. extract the critical insights that enterprise information should provide
  2. define all the relationships that link documents to enterprise business processes
  3. find the right information at the right time
  4. provide a single access point for all content
  5. implement an effective, consistent enterprise-wide document security system
  6. effectively protect against natural or man-made disasters
  7. produce evidence-standard documents
  8. minimize document handling costs
  9. guarantee the integrity of a document
  10. guarantee that a document is in fact the most recent version
  11. guarantee that a document is not an older copy
  12. minimize duplicate and redundant information
  13. meet critical compliance targets like the Sarbanes-Oxley Act (SOX) and HIPAA
  14. create secure, searchable archives for digital content
  15. effectively secure all documents against loss
  16. implement common enterprise version control
  17. facilitate enterprise collaboration
  18. improve timeliness
  19. manage enterprise document security and control
  20. manage smaller and more reliable backups
  21. achieve the lowest possible document management and archiving costs
  22. deliver the best possible knowledge management access and search
  23. guarantee consistent content
  24. optimize management and executive time
  25. standardize the types of documents and other content that can be created within an organization
  26. define a common template to use for each type of document
  27. standardize the Metadata required for each type of document
  28. standardize where to store a document at each stage of its life cycle
  29. control access to a document at each stage of its life cycle
  30. move documents within the organization as team members contribute to the documents' creation, review, approval, publication, and disposition
  31. implement a common set of policies that apply to documents so that document-related actions are audited, documents are retained or disposed of properly, and content that is important to the organization is protected
  32. manage when and if a document has to be converted from one format to another as it moves through the stages of its life cycle
  33. guarantee that all documents are treated as corporate records, that common retention policies are applied determining which documents must be retained according to legal requirements and corporate guidelines
  34. guarantee enterprise-wide Regulatory compliance
  35. produce an enterprise-wide audit trail
  36. share information across departmental and/or silo boundaries
  37. centrally manage the security access to documents/information across different areas of the organization
  38. consistently classify documents as each repository may be used by a different department and be classified differently  
  39. identify duplicates based on document name
  40. easily find things based on metadata, as it wouldn’t be common across repositories
  41. control access via AD single sign on
  42. access all enterprise documents using a single license
  43. centrally audit access and changes to metadata
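Item 39 in the list above deserves a note: matching duplicates on document name alone is the weakest possible key. A minimal Python sketch across two hypothetical silo roots (the paths are invented) shows both the mechanics and the limitation; renamed copies are missed, and unrelated files that happen to share a name are false-matched:

```python
from collections import defaultdict
from pathlib import Path

def duplicates_by_name(roots):
    """Group files across silo roots by filename alone.
    Renamed copies are missed; unrelated files that happen
    to share a name are false-matched."""
    by_name = defaultdict(list)
    for root in roots:
        if root.exists():
            for path in root.rglob("*"):
                if path.is_file():
                    by_name[path.name].append(path)
    return {name: paths for name, paths in by_name.items() if len(paths) > 1}

# Hypothetical silo roots; substitute your own file shares.
silos = [Path("/mnt/silo-a"), Path("/mnt/silo-b")]
for name, copies in duplicates_by_name(silos).items():
    print(name, "->", len(copies), "copies")
```

A content hash would be a stronger key, but without common Metadata across repositories even that cannot tell you which copy is the current version.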

What are your risks?  

Your risks are huge!

 

Why are your staff still manually capturing and classifying electronic documents and emails?

by Frank 15. June 2017 06:00

For many years we have promoted the totally automatic paradigm for low cost, high productivity content management.

We haven’t just articulated this cost-effective approach, we have also invested in products to help our customers not just meet compliance targets but also become more efficient while doing so.

Specifically, we have invented and produced two products that totally automate the content management process for electronic documents and emails. These two products automate the capture, classification and work processes required for electronic documents and emails.

These two products sit on top of a super-fast, scalable and secure content management database with all the functionality required to manage your rich content. Find any eDoc in seconds, produce any report, audit every transaction.

These two products are GEM and RecCapture, innovations from 10 years ago that still lead the field today after being comprehensively updated and redeveloped over the years. The content management database is RecFind 6. All products in the RecFind 6 Product Suite are totally compatible with all the latest Microsoft software including Office 365, Windows 10, Windows Server 2016, MS SQL Server 2016 and SharePoint 2016.

Better still, these are low cost products available under a number of licensing options including installed onsite on your server, hosted, Perpetual License, Subscription License and Annual License.

If you would like further information, a demonstration, webinar, meeting, online presentation or quotation please contact us at your convenience at marketing@knowledgeonecorp.com

We look forward to being of service.

How to clean up your shared drives, Frank’s approach

by Frank 22. August 2014 06:00

In my time in this business (enterprise content management, records management, document management, etc.) I have been asked to help with a ‘shared drive problem’ more times than I can remember. This particular issue is analogous to the paperless office problem. Thirty years ago, when I started my company, I naively thought that both problems would be long gone by now, but they are not.

I still get requests for purely physical records management solutions and I still get requests to assist customers in sorting out their shared drives problems.

The tools and procedures to solve both problems have been around for a long time but for whatever reason (I suspect lack of management focus) the problems still persist and could be described as systemic across most industry segments.

Yes, I know that you can implement an electronic document and records management system (we have one called RecFind 6) and take away the need for shared drives and physical records management systems completely but most organizations don’t and most organizations still struggle with shared drives and physical records. This post addresses the reality.

Unfortunately, the most important ingredient in any solution is ‘ownership’ and that is as hard to find as it ever was. Someone with authority, or someone who is prepared to assume authority, needs to take ownership of the problem in a benevolent-dictator way and just steam-roll a solution through the enterprise. It isn’t solvable by committees and it requires a committed, driven person to make it happen. These kinds of people are in short supply, so if you don’t have one, bring one in.

In a nutshell there are three basic problems apart from ownership of the problem.

  1. How to delete all redundant information;
  2. How to structure the ‘new’ shared drives; and
  3. How to make the new system work to most people’s satisfaction.

Deleting redundant Information

Rule number one is don’t ever ask staff to delete the information they regard as redundant. It will never happen. Instead, tell staff that you will delete all documents in your shared drives with a created or last-updated date older than a nominated date (say, one year in the past) unless they tell you specifically which ‘older’ documents they need to retain. Just saying “all of them” is not an acceptable response. Give staff advance notice of a month and then delete everything that has not been nominated as important enough to retain. Of course, take a backup of everything before you delete, just in case. This is tough love, not stupidity.
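The sweep itself is a few lines of script. A minimal Python sketch, assuming a hypothetical share root and retain list (both are made up; in practice the retain list would come from the staff nominations collected during the notice month):

```python
from datetime import datetime, timedelta
from pathlib import Path

CUTOFF = datetime.now() - timedelta(days=365)  # the nominated date, one year back

# Documents staff have specifically nominated for retention
# (hypothetical path; in practice read from the collected nominations).
retained = {Path("/mnt/shared/finance/contracts/master-agreement.docx")}

def candidates_for_deletion(root: Path, cutoff=None):
    """Yield files last modified before the cutoff that nobody
    has asked to keep. Report first, back up, then delete."""
    cutoff = cutoff or CUTOFF
    for path in root.rglob("*"):
        if path.is_file() and path not in retained:
            modified = datetime.fromtimestamp(path.stat().st_mtime)
            if modified < cutoff:
                yield path

SHARE_ROOT = Path("/mnt/shared")  # hypothetical share root
if SHARE_ROOT.exists():
    for path in candidates_for_deletion(SHARE_ROOT):
        print(path)  # review this list (and take a backup) before deleting
```

Note that it only reports candidates; the actual delete should happen after the backup, never before.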

Structuring the new shared drives

If your records manager insists on using your already overly complex, hierarchical corporate classification scheme or taxonomy as the model for the new shared drive structure, politely ask them to look for another job. Do you want this to work or not?

Records managers and archivists and librarians (and scientists) understand and love complex classification systems. However, end users don’t understand them, don’t like them and won’t use them. End users have no wish to become part-time records managers, they have their own work to do thank you.

By all means make the new structure a subset of the classification system, major headings only and no more than two levels if possible. If it takes longer than a few seconds to decide where to save something or to find something then it is too complex. If three people save the same document in three different places then it is too complex. If a senior manager can’t find something instantly then it is too complex. The staff aren’t to blame, you are.
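The ‘no more than two levels’ rule is easy to police with a small script rather than with memos. A sketch in Python (the share root is hypothetical):

```python
from pathlib import Path

MAX_DEPTH = 2  # major headings only, no more than two levels

def too_deep(root: Path, max_depth: int = MAX_DEPTH):
    """Return folders nested deeper than the agreed limit."""
    offenders = []
    for folder in root.rglob("*"):
        if folder.is_dir():
            depth = len(folder.relative_to(root).parts)
            if depth > max_depth:
                offenders.append(folder)
    return offenders

SHARE_ROOT = Path("/mnt/shared")  # hypothetical share root
if SHARE_ROOT.exists():
    for folder in too_deep(SHARE_ROOT):
        print("too deep:", folder)
```

Run it periodically; any folder it flags is a sign the structure is drifting back toward the taxonomy.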

I have written about this issue previously and you can reference a white paper at this link, “Do you really need a Taxonomy?”

The shared drives aren’t where we classify documents; they are where we make it as easy and as fast as possible to save, retrieve and work on documents; no more, no less. Proper classification (if I can use that term) happens later, when you use intelligent software to automatically capture, analyse and store documents in your document management system.

Please note, shared drives are not a document management system and a document management system should never just be a copy of your shared drives. They have different jobs to do.

Making the new system work

Let’s fall back on one of the oldest acronyms in business, KISS, “Keep It Simple Stupid!” Simple is good and elegant, complex is bad and unfathomable.

Testing is a good example of where the KISS principle must be applied. Asking all staff to participate in the testing process may be diplomatic but it is also suicidal. You need to select your testers. You need to pick a small number of smart people from all levels of your organization. Don’t ask for volunteers, you will get the wrong people applying. Do you want participants who are committed to the system working, or those who are committed to it failing? Do you want this to succeed or not?

If I am pressed for time I use what I call the straight-line method. Imagine all staff in a straight line from the most junior to the most senior. Select from both ends, the most junior and the most senior. Chances are that if the system works for this subset, it will also work for all the staff in between.
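The straight-line method is trivial to express in code. A sketch in Python, with an invented staff list sorted from most junior to most senior:

```python
# Hypothetical staff list, sorted from most junior to most senior.
staff = ["graduate clerk", "records officer", "team leader",
         "department manager", "general manager", "CEO"]

def straight_line_testers(line, per_end=2):
    """Pick testers from both ends of the seniority line: the most
    junior and the most senior. If the system works for them, it
    will probably work for everyone in between."""
    if len(line) <= 2 * per_end:
        return list(line)
    return line[:per_end] + line[-per_end:]

print(straight_line_testers(staff))
# -> ['graduate clerk', 'records officer', 'general manager', 'CEO']
```

The point of the selection, of course, is the spread of seniority, not the code.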

Make it clear to all that the shared drives are not your document management system. The shared drives are there for ease of access and to work on documents. The document management system has business rules to ensure that you have inviolate copies of important documents plus all relevant contextual information. The document management system is where you apply business rules and workflow. The document management system is all about business process management and compliance. The shared drives and the document management system are related and integrated but they have different jobs to do.

We have shared drives so staff don’t work on documents on ‘private’ drives, inaccessible and invisible to others. We provide a shared drive resource so staff can collaborate and share information and easily work on documents. We have shared drives so that when someone leaves we still have all their documents and work-in-process.

Please do all the complex processes required in your document management system using intelligent software, automate as much as possible. Productivity gains come about when you take work off staff, not when you load them up with more work. Give your staff as much time as possible so they can use their expertise to do the core job they were hired for.

If you don’t force extra work on your staff and if you make it as easy and as fast as possible to use the shared drives then your system will work. Do the opposite and I guarantee it will not work.

Records Management in the 21st century; you have computers now, do it differently

by Frank 1. June 2013 06:32

I own and run a computer software company called the Knowledgeone Corporation and we have specialised in what is now known as enterprise content management software since 1984 when we released our first product DocFind. We are now into the 8th iteration of our core and iconic product RecFind and have sold and installed thousands of RecFind sites where we manage corporate records and electronic documents.

I have personally worked with hundreds of customers to ensure that we understand and meet their requirements, and I have also designed and specified every product we have delivered over the last 29 years. So while I have never been a practicing records manager, I do know a great deal about records and document management and the vagaries of the practice all around the world.

My major lament is that many records managers today still want to run their ‘business’ in exactly the same way it was run 30 or 50 or even a hundred years ago. That is, as a physical model even when using computers and automated solutions like our product RecFind 6. This means we still see overly complicated classification systems and overcomplicated file numbering systems and overcomplicated manual processes for the capture and classification of paper, document images, electronic documents and emails.

It is a mindset that is locked in the past and can’t see beyond the confines of the file room.

I also still meet records managers that believe each and every employee has a responsibility to ‘become’ a junior records manager and both fully comprehend and religiously follow all of the old-fashioned and hopelessly overcomplicated and time-consuming processes laid out for the orderly capture of corporate documents.

I have news for all those locked-in-the-past records managers. Your approach hasn’t worked in the last 30 years and it certainly will not work in the future.

Smart people don’t buy sophisticated computer hardware and application software and then try to replicate the physical model for little or no benefit. Smart people look at what a computer system can do as opposed to 20,000 linear feet of filing shelves or 40 Compactuses and 30 boxes of filing cards and immediately realize that they have the power to do everything differently: faster, more efficiently and infinitely smarter. They also realize that there is no need to overburden already busy end users by forcing them to become very bad and very inconsistent junior records managers. End users are not hired to be records managers; they are hired to be engineers, sales people, accountants, PAs, etc., and most already have 8 hours of work a day without you imposing more on them.

There is always a better way and the best way is to roll out a records and document and email management system that does not require your end users to become very bad and inconsistent junior records managers. This way it may even have a chance of actually working.

Please throw that old physical model away. It has never worked well when applied to computerised records, document and email management and it never will. Remember that famous adage, “The definition of insanity is to keep doing the same thing and to expect the results to be different”?

I guarantee two things:

  1. Your software vendor’s consultant is more than happy to offer advice and guidance; and
  2. He/she has probably worked in significantly more records management environments than you have and has a much broader range of experience than you do.

It doesn’t hurt to ask for advice and it doesn’t hurt to listen.

Do you really need all those boxes of records in offsite storage?

by Frank 11. November 2012 06:39

Is it jobs or useless paper records?

It is my belief that all over the western world companies and government agencies are wasting enormous amounts of money maintaining boxes of paper on the dusty but lucrative shelves of offsite storage companies like Grace Records Management, Iron Mountain and Crown Records Management. In total, it must be hundreds of millions (I know of one Australian company that spends a million dollars a year on offsite storage at multiple offsite repositories and doesn’t even know what its holdings are) or even billions of dollars a year; most of it wasted.

It is almost enough for me to dive into debt to build an offsite storage facility and then buy a few vans and shredders. I say almost because I am not a hypocrite and I wouldn’t be able to sell a service to my customers I didn’t believe in. For the life of me, I cannot understand why senior management delegates this level of expenditure to junior or mid-level managers when it really should be scrutinized at board level like every other significant cost.

Even the advent of the Global Financial Crisis (GFC) beginning in 2008 doesn’t seem to have woken senior management or board members up to this area of massive waste. Instead, big corporations and government are ‘saving money’ by laying off staff and outsourcing jobs to third-world and developing countries. Where is the sense in that when there are easier, less disruptive and more ‘humane’ savings to be made by simply reducing the money being paid to store useless paper records that will never be referenced again? How would you feel if management laid you off because they thought it was more important to keep paying for boxes of old paper they will never use again?

Is it really only me that sees the unfairness and absurdity in this archaic paradigm? Why is the huge cost of the offsite storage of useless paper often overlooked when management is fighting to find cost savings? Why are people’s livelihoods sacrificed in deference to the need to maintain old, never-to-be-referenced-again, useless paper? Is it just because senior management is too busy with more important stuff like negotiating their next executive pay increase?

If you talk to the records manager you will be told that all that paper has to be maintained whatever the cost because of the Retention Schedule. In most cases, the Retention Schedule will be mentioned in the same way one talks about the Bible. That is, it is holy and sacrosanct and anyone who dares question it will be charged with heresy and subjected to torture and extreme deprivation in a rat infested, mouldy, dark and damp cell in the basement.

But, dig deeper and you will discover that the Retention Schedule is way too complex for the organization. You will also discover that no one really understands or can explain all the variations and that the application of it is at best, haphazard and irregular. This is when you will also discover that no one in records can actually justify why a huge percentage of those old, dusty and now irrelevant paper records are still costing you real hard cash each and every month. More importantly, they may have also cost you some of your most trusted and most valuable employees.

Isn’t it time someone senior actually looked at the money you are spending to manage mostly paper rubbish in very expensive containers?

Are you also confused by the term Enterprise Content Management?

by Frank 16. September 2012 06:00

I may be wrong but I think it was AIIM that first coined the phrase Enterprise Content Management to describe both our industry and our application solutions.

Whereas the term isn’t as nebulous as Knowledge Management it is nevertheless about as useful when trying to understand what organizations in this space actually do. At its simplest level it is a collective term for a number of related business applications like records management, document management, imaging, workflow, business process management, email management and archiving, digital asset management, web site content management, etc.

To simple people like me the more appropriate term or label would be Information Management, but as I have already covered this in a previous Blog I won’t belabor the point in this one.

When trying to define what enterprise content management actually means or stands for we can discard the words ‘enterprise’ and ‘management’ as superfluous to our needs and just concentrate on the key word ‘content’. That is, we are talking about systems that in some way create and manage content.

So, what exactly is meant by the term ‘content’?

In the early days of content management discussions we classified content into two broad categories, structured and unstructured. Basically, structured content had named sections or labels and unstructured content did not. Generalising even further we can say that an email is an example of structured content because it has commonly named, standardised and accessible sections or labels like ‘Sender’, ‘Recipient’, ‘Subject’ etc., that we can interrogate and rely on to carry a particular class or type of information. The same general approach would regard a Word document as unstructured because the content of a Word document does not have commonly named and standardised sections or labels. Basically a Word document is an irregular collection of characters that you have to parse and examine to determine content.

Like Newtonian physics, the above generalisations do not apply to everything and can be argued until the cows come home. In truth, every document has an accessible structure of some kind. For example, a Word document has an author, a size, a date written, etc. It is just that it is far easier to find out who the recipient of an email was than the recipient of a Word document. This is because there is a common and standard ‘Tag’ that tells us who the recipient is of an email and there is no such common and standard tag for a Word document.

In our business we call ‘information about information’ (e.g., the recipient and date fields on an email) Metadata. If an object has recognizable Metadata then it is far easier to process than an object without recognizable Metadata. We may then say that adding Metadata to an object is the same as adding structure.
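The email example is easy to demonstrate with Python’s standard `email` module (the message itself is invented): the headers are recognizable Metadata we can interrogate by name, while the body is just characters we would have to parse.

```python
from email import message_from_string

# A made-up email; its 'structure' is the set of commonly named headers.
raw = """\
From: frank@example.com
To: records@example.com
Subject: Offsite storage review
Date: Sun, 16 Sep 2012 06:00:00 +1000

Please review the retention schedule before Friday.
"""

msg = message_from_string(raw)

# Structured content: standardised, accessible sections we can rely on.
metadata = {
    "Sender": msg["From"],
    "Recipient": msg["To"],
    "Subject": msg["Subject"],
}
print(metadata)

# The body, by contrast, is unstructured: just characters to examine.
print(msg.get_payload())
```

There is no equivalent one-line lookup for ‘the recipient of a Word document’, which is exactly the distinction being made here.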

Adding structure is what we do when we create a Word document using a template or when we add tags to a Word document. We are normalizing the standard information we require in our business processes so the objects we deal with have the structure we require to easily and accurately identify and process them.

This is of course one of the long-standing problems in our industry, we spend far too much time and money trying to parse and interpret unstructured objects when we should be going back to the coal face and adding structure when the object is first created. This is of course relatively easy to do if we are creating the objects (e.g., a Word document) but not easy to achieve if we are receiving documents from foreign sources like our customers, our suppliers or the government. Unless you are the eight-hundred pound gorilla (like Walmart) it is very difficult to force your partners to add the structure you require to make processing as fast and as easy and as accurate as possible.

There have been attempts in the past to come up with common ‘standards’ that would have regulated document structure but none have been successful. The last one was when XML was the bright new kid on the block and the XML industry rushed headlong into defining XML standards for every conceivable industry to facilitate common structures and to make data transfer between different organizations as easy and as standard as possible. The various XML standardisation projects sucked up millions or even billions of dollars but did not produce the desired results; we are still spending billions of dollars each year parsing unstructured documents trying to determine content.

So, back to the original question, what exactly is Enterprise Content Management? The simple answer is that it is the business or process of extracting useful information from objects such as emails and PDFs and Word documents and then using that information in a business process. It is all about the process of capturing Metadata and content in the most accurate and expeditious manner possible so we can automate business processes as much as possible.

If done properly, it makes your job more pleasant and saves your organization money and it makes your customers and suppliers happier. As such it sounds a lot like motherhood (who is going to argue against it?) but it certainly isn’t like manna from heaven. There is always a cost and it is usually significant. As always, you reap what you sow and effort and cost produces rewards.

Is content management something you should consider? The answer is definitely yes with one proviso; please make sure that the benefits are greater than the cost.

 

Could you manage all of your records with a mobile device?

by Frank 2. September 2012 06:00

I run a software company and I design and build an enterprise strength content management system called RecFind 6 which among other things, handles all the needs of physical records management.

This is fine if I have a big corporate or government customer because the cost is appropriate to the scale of the task at hand. However, it isn’t fine when we receive lots of inquiries from much smaller organizations like small law firms that need a records management solution but only have a very small budget.

A very recent inquiry from a small but successful engineering company was also a problem because they didn’t have any IT infrastructure. They had no servers and used Google email. However, they still had a physical records management problem as well as an electronic document management problem, and our solution was way outside their budget.

Like any businessman I don’t like to see business walk away especially after we have spent valuable consultancy time helping the customer to understand the problem and define the need.

We have had a lot of similar inquiries lately and it has started me thinking about the need for a new type of product for small business, one that doesn’t require the overhead and expense of an enterprise-grade solution. It should also be one that doesn’t require in-house servers and a high overhead and maintenance cost.

Given our recent experience building a couple of iOS (for the iPhone and iPad) and Android (for any Android phone or tablet) apps I am of the opinion that any low cost but technically clever and easy-to-use solution should be based around a mobile device like a smart phone or tablet.

The lack of an in-house server wouldn’t be a problem because we would host the solution servers at a data centre in each country we operate in. Programming it wouldn’t be a problem because that is what we do and we already have a web services API as the foundation.

The only challenge I see is the need to get really creative about the functionality and the user interface. There is no way I can implement all the advanced functionality of the full RecFind 6 product on a mobile device and there is no way I can re-use the user interface from either the RecFind 6 smart-client or web-client. Even scaled down the user interface would be unsuitable for a mobile device; it needs a complete redesign. It isn’t just a matter of adapting to different form factors (screen sizes), it is about using the mobile device in the most appropriate way. It is about designing a product that leverages off the unique capabilities of a mobile device, not trying to force fit an application designed for Windows.

The good news is that there is some amazing technology now available for mobile devices that could easily be put to use for commercial business purposes even though a lot of it was designed for lightweight applications and games. Three examples of very clever new software for mobile devices are Gimbal Context Aware, Titanium Mobile SDK and Vuforia Augmented Reality. But these three development products are just the tip of the iceberg; there is literally a plethora of clever development tools and new products both in the market and coming to market in the near future.

As a developer, right now the Android platform looks to be my target. This is mainly because of the amount of software being developed for Android and because of the open nature of Android. It allows me to do far more than Apple allows me to do on its sandboxed iOS operating system.

Android also makes it far easier for me to distribute and support my solutions. I love iOS but Apple is just a little too anal and controlling to suit my needs. For example, I require free access to the file system and Apple doesn’t allow that. Nor does it give me the freedom I need to be able to attach devices my customers will need; no standard USB port is a huge pain for application developers.

I am sorry that I don’t have a solution for my smaller customers yet but I have made the decision to do the research and build some prototypes. RecFind 6 will be the back-end residing on a hosted server (in the ‘Cloud’) because it has a superset of the functionality required for my new mobile app. It is also the perfect development environment because the RecFind 6 Web Services SDK makes it easy for me to build apps for any mobile operating system.

So, I already have the backend functionality, the industrial-strength and scalable relational database and the Web Services API plus expertise in Android development using Eclipse and Java. Now all I have to do to produce my innovative new mobile app is find the most appropriate software and development platforms and then get creative.

It is the getting creative bit that is the real challenge. Wish me luck and watch this space.

 

Is Information Management now back in focus?

by Frank 12. August 2012 06:00

When we were all learning about what used to be called Data Processing we also learned about the hierarchy or transformation of information. That is, “data to information to knowledge to wisdom.”

Unfortunately, as information management is part of what we call the Information Technology industry (IT) we as a group are never satisfied with simple self-explanatory terms. Because of this age-old flaw we continue to invent and hype new terms like Knowledge Management and Enterprise Content Management most of which are so vague and ill-defined as to be virtually meaningless but nevertheless, provide great scope for marketing hype and consultants’ income.

Because of the ongoing creation of new terminology and the accompanying acronyms we have managed to confuse almost everyone. Personally I have always favoured the term ‘information management’ because it tells it like it is and it needs little further explanation. In the parlance of the common man it is an “old un, but a good un.”

The thing I most disliked about the muddy knowledge management term was the claim that computers and software could produce knowledge. That may well come in the age of cyborgs and true artificial intelligence but I haven’t seen it yet. At best, computers and software produce information which human beings can convert to knowledge via a unique human cognitive process.

I am fortunate in that I have been designing and programming information management solutions for a very long time so I have witnessed first-hand the enormous improvements in technology and tools that have occurred over time. Basically this means I am able to design and build an infinitely better information management solution today than I could have twenty-nine years ago when I started this business. For example, the current product RecFind 6 is a much better, more flexible, more feature-rich and more scalable product than the previous K1 product and it in turn was an infinitely better product than the previous one called RecFind 5.

One of the main factors in these products being better than their predecessors is that each time we started afresh with the latest technology; we didn’t build on the old product, we discarded it completely and started anew. As a general rule of thumb I believe that software developers need to do this on around a five-year cycle. Going past the five-year life cycle inevitably means you end up compromising the design because of the need to support old technology. You are carrying ‘baggage’ and it is akin to trying to run a marathon with a hundred pound (45 Kg) backpack.

I recently re-read an old 1995 white paper I wrote on the future of information management software which I titled “Document Management, Records Management, Image Management, Workflow Management...What? – The I.D.E.A”. I realised after reading this old paper that it is only now that I am getting close to achieving the lofty ambitions espoused in it. It is only now that I have access to the technology required to achieve my design ambitions. In fact I now believe that despite its 1995 heritage this is a paper every aspiring information management solution creator should reference because we are all still trying to achieve the ideal ‘It Does Everything Application’ (but remember that it was my I.D.E.A. first).

Of course, if you are involved in software development then you realise that your job is never done. There are always new features to add, there are always new releases of products like Windows and SQL Server to test and certify against, and there are always new development tools and standards like Visual Studio and HTML5 to learn and start using.

You also realise that software development is probably the dumbest business in the world to be part of with the exception of drug development, the only other business I can think of which has a longer timeframe between beginning R&D and earning a dollar. We typically spend millions of dollars and two to three years to bring a brand new product to market. Luckily, we still have the existing product to sell and fund the R&D. Start-ups however, don’t have this option and must rely on mortgaging the house or generous friends and relatives or venture capital companies to fund the initial development cycle.

Whatever the source of funding, from my experience it takes a brave man or woman to enter into a process where the first few years are all cost and no revenue. You have to believe in your vision, your dream and you have to be prepared for hard times and compromises and failed partnerships. Software development is not for the faint hearted.

When I wrote that white paper on the I.D.E.A. (the It Does Everything Application or, my ‘idea’ or vision at that time) I really thought that I was going to build it in the next few years; I didn’t think it would take another fifteen. Of course, I am now working on the next release of RecFind so it is actually more than fifteen years.

Happily, I now market RecFind 6 as an information management solution because information management is definitely back in vogue. Hopefully, everyone understands what it means. If they don’t, I guess that I will just have to write more white papers and Blogs.

Moving your Records Management application to the Cloud; why would you do it?

by Frank 20. May 2012 06:00

We have all heard and read a lot about the Cloud and why we should all be moving that way. I wrote a little about this in a previous post. However, when we look at specific applications like records management we need to think about the human interaction and how that may be affected if we change from an in-house system to a hosted system. That is, how will the move affect your end-users and records management administrator? Ideally, it will make their job easier and take away some pain. If it makes their job harder and adds pain then you should not be doing it even if it saves you money.

We also need to think about the services we may need when we move to the Cloud. That is, will we need new services we don’t have now and will the Cloud vendor offer to perform services, like application maintenance, we currently do in-house?

In general, normal end-user functions should work the same whether we are running off an internal system or a Cloud-based one. This of course will depend upon the functionality of your records management software. Hopefully, there will be no difference to either the functionality or the user interface when you move to the Cloud. For the sake of this post let’s assume that there is a version of your records management system that can run either internally or in the Cloud and that the normal end-user interface is identical, or so close to identical that it doesn’t matter. If the end-user interface is massively different then you face extra cost and disruption because of the need to convert and retrain your users, and this would be a reason not to move to the Cloud unless you were planning to change vendors and convert anyway.

Now we need to look at administrator functions, those tasks usually performed by the records management administrator or IT specialist to configure and manage the application.  Either the records management administrator can perform the same tasks using the Cloud version or you need to ask the Cloud vendor to perform some services for you. This will be at a cost so make sure you know what it is beforehand.  There are some administrator functions you will probably be glad to outsource to the Cloud vendor such as maintaining the server and SQL Server and taking and verifying backups.

I would assume that the decision to move a records management application to the Cloud would and should involve the application owner and IT management. The application owner has to be satisfied that the end-user experience will be better or at least equal to that of the in-house installation and IT management needs to be sure that the integrity and security of the Cloud application will at the very least be equal to that of the in-house installation. And finally, the application owner, the records manager, needs to be satisfied that the IT support from the vendor of the Cloud system will be equal to or better than the IT support being received from the in-house or currently out-sourced IT provider.

There is no point in moving to the Cloud if the end-user or administrator experience will deteriorate just as there is no point in moving to the Cloud if the level of IT support falls.

Once you have made the decision to move your records management application to the Cloud you need to plan the cutover in a way that causes minimal disruption to your operation. Ideally, your staff will finish work on the in-house application on Friday evening and begin working on the Cloud version the next Monday morning. You can’t afford to have everyone down for days or weeks while IT specialists struggle to make everything work to your satisfaction. This means you need to test the Cloud system extensively before going live in production. In this business, little or no testing equals little or no success and a great deal of pain and frustration.

If it was me, I would make sure that the move to the Cloud meant improvements in all facets of the operation. I would want to make sure that the Cloud vendor took on the less pleasant, time-consuming and technical tasks like managing and configuring the required IT infrastructure. I would also want them to take on the more bothersome, awkward and technically difficult application administration tasks. Basically, I would want to get rid of all the pain and just enjoy the benefits.

You should plan to ‘outsource’ all the pain to make your life and the life of your staff easier and more pleasant and in doing so, make everyone more productive. It is like paying an expert to do your tax return and getting a bigger refund. The Cloud solution must be presented as a value proposition. It should take away all the non-core activities that suck up your valuable time and allow you and your staff more time to do the core activities in a better and more efficient way; it should allow you to become more productive.

I am a great believer in the Cloud as a means of improving productivity, lowering costs and improving data integrity and security. It is all doable given available facilities and technology but in the end, it is up to you and your negotiations with the Cloud provider.  Stand firm and insist that the end result has to be a better solution in every way; compromise should not be part of the agreement.

Using Terminal Digits to minimize “Squishing”

by Frank 13. May 2012 06:00

Have you ever had to remove files from shelving or cabinets and reallocate them to other spaces because a drawer or shelf is packed tight? Then had to do it again and again?

One of my favourite records managers used to call this the “Squishing” problem.

The squishing problem is inevitable if you start to load files from the beginning of any physical filing system, be it shelving or cabinets, and unload files from random locations as the retention schedule dictates. If you create and file parts (a new folder called part 2, part 3, etc., when the original file folder is full) then the problem is exacerbated. You may well spend a large part of your working life shuffling file folders from location to location; a frustrating, worthless and thankless task. You also get to inhale a lot of toxic paper dust and mites, which is not a good thing.

You may not be aware of it but there is a very simple algorithm you can utilize to make sure the squishing problem never happens to you. It is usually referred to as the ‘Terminal Digit’ file numbering system but you may call it whatever you like. The name isn’t important but the operation is.

Importantly, you don’t need to change your file numbering system other than by appending extra digits to the end. These extra digits are the terminal digits.

The number of terminal digits you need depends upon how many file folders you have to manage. Here is a simple guideline:

·         One Terminal Digit (0 to 9) = one thousand files

·         Two Terminal Digits (00 to 99) = ten thousand files

·         Three Terminal Digits (000 to 999) = greater than ten thousand files
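The sizing guideline above is simple arithmetic. As a hypothetical illustration (the function name is mine, not from any records management product), it might be expressed as:

```python
def terminal_digits_needed(total_files):
    """Rough sizing per the guideline above: one terminal digit for up to
    one thousand files, two for up to ten thousand, three beyond that."""
    if total_files <= 1_000:
        return 1
    if total_files <= 10_000:
        return 2
    return 3
```

So a store of around five thousand files would call for two terminal digits, i.e. one hundred terminals.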

Obviously, you also have to have the filing space and appropriate facilities available (e.g., boxes, bays, etc.) to hold the required number of files for each terminal.

It is called the Terminal Digit system because you first have to separate your available filing space into a number of regular ‘terminals’. Each terminal is identified by a number, e.g., 0, 1, 2, 09, 23, 112, 999, etc.

The new terminal digit is additional and separate from your normal file number. It determines which terminal a file will be stored in. Let’s say your normal file number is of the format YYYY/SSSSSS. That is, the current year plus an automatically incrementing auto number like 2012/000189 then 2012/000190, etc. If we use two terminal digits and divide your available filing space into one hundred terminals (think of it as 100 equally sized filing slots or bays numbered 00 to 99) then your new file number format is YYYY/SSSSSS-99. The two generated file numbers above may now look like 2012/000189-00 and 2012/000190-01.

File folder 2012/000189-00 is filed in terminal number 00 and 2012/000190-01 is filed in terminal number 01. In a nutshell, what we are doing is distributing files evenly across all available filing space. We are not starting at terminal 00 and filling it up and then moving on to terminal 01, then terminal 02 when 01 is full, etc. Finding files is even easier because the first part of the file number you look at is the terminal digit. If a file number ends in 89 it will be in terminal 89, in file number order.
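The assignment and lookup described above can be sketched in a few lines; this is a hypothetical illustration only (the class and function names are mine, not part of any records management product), assuming terminals are handed out round-robin as in the 2012/000189-00, 2012/000190-01 example:

```python
class TerminalDigitNumberer:
    """Assigns file numbers of the form YYYY/SSSSSS-TT, where TT is a
    two-digit terminal assigned round-robin across all terminals."""

    def __init__(self, terminals=100):
        self.terminals = terminals   # e.g. 100 equally sized bays, 00 to 99
        self.sequence = 0            # auto-incrementing file number
        self.next_terminal = 0       # round-robin terminal counter

    def new_file_number(self, year):
        self.sequence += 1
        terminal = self.next_terminal
        self.next_terminal = (self.next_terminal + 1) % self.terminals
        return f"{year}/{self.sequence:06d}-{terminal:02d}"


def terminal_of(file_number):
    """Finding a file is easy: the terminal is just the trailing digits."""
    return file_number.rsplit("-", 1)[1]
```

Because each new file goes to the next terminal in sequence, files accumulate (and later depart) evenly across all terminals rather than packing the first shelf tight.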

The other good news is that when we unload files from the shelves, say at end of life or at the point in the lifecycle when they need to be sent offsite, we will also unload files evenly across all available filing space. If the terminals are actually big enough and if you have calculated everything correctly, you should never again suffer from the ‘squishing’ problem and you should never again have to ingest paper dust and mites while tediously shuffling files from location to location.

Obviously, there is a little more to this than sticking a couple of digits on the end of your file number. I assume you are using a computerised records management system so changes have to be made or configured to correctly calculate the now extended file number (including the new terminal digit) and your colour file labels will need to be changed to show the terminal digit in a prominent position.

There is also the question of what to do with your existing squished file store. Ideally you would start from scratch with your new numbering systems and terminals and wait for the old system to disappear as the files age and disappear offsite to Grace or Iron Mountain. That probably won’t be possible so you will have to make decisions based on available resources and budget and come up with the best compromise.

I can’t prove it but I suspect that the terminal digit system has been around since people began filing stuff. It is an elegantly simple solution to an annoying and frustrating problem and involves nothing more complicated than simple arithmetic.

The surprise is that so few organizations actually use it. In twenty-five plus years in this business I don’t think I have seen it in use at more than one or two percent of the customers I have visited. I have talked about it and recommended it often but the solution seems to end up in the too-hard basket; a shame really, especially for the records management staff charged with the constant shuffling of paper files.

It may be that you have a better solution but just in case you don’t, please humour me and have another look at the terminal digit filing solution. It may just save you an enormous amount of wasted time and make your long-suffering records staff a lot happier and a lot healthier.
