Using barcodes to raise productivity and lower costs in Records Management processes

by Frank 6. August 2014 06:00

Did you know that in the spring of 1969 the first true bar code systems were installed? One went into a General Motors plant in Pontiac, Michigan, where it was used to monitor the production and distribution of automobile axle units. The other went into a distribution facility run by General Trading Company in Carlsbad, New Jersey, to help direct shipments to the proper loading-bay doors.

Did you also know that the very first product to be sold with a barcode and scanner was a single packet of chewing gum at a Marsh supermarket in Troy, Ohio on June 26, 1974?

Both these interesting facts came from an excellent article on the history of barcodes by Tony Seideman. Please see this link.

The overall advantages and benefits of barcodes are well known: speed, accuracy, ease of implementation and cost-effectiveness.

In a nutshell, barcodes are cheap to produce, easy to implement and easy to read. They are infinitely better than a human keying in information. Barcodes are reliable and they just work.

Modern supermarkets simply couldn’t function without barcodes on products and barcode readers at checkouts.

Most well-run records management facilities also use barcodes to great advantage to track file-folders and boxes, run audits and speed up the entering of information. Most offsite records storage facilities use barcodes to track boxes on shelves. It is what we call a “no brainer.”

However, despite the obvious benefits, especially the cost benefits, many organizations today still manage physical assets bereft of barcodes. You may well ask “why?” and so do I. Given the low cost of both barcodes and barcode readers and the well-proven technology, I honestly can’t think of any reason for not using barcoding technology to manage physical assets like file-folders and archive boxes. It just doesn’t make any sense whatsoever to me. It is analogous to running ten miles to deliver a message rather than just phoning or texting. How many messages a day can you deliver by running and how many can you deliver by phoning or texting?

Why ask staff to write down file-folder numbers or enter them on a keyboard when you can ‘wand’ or 'scan' them much more accurately and infinitely faster using a barcode reader? Why put up with processing 20 file movements a day by hand when you can easily process 200 a day using a barcode reader?

If you have 30 file-folders on your desk that you have to process why would you do it manually by keying in each file number (and making mistakes) over 30 minutes when you could process the same number of file-folders in 30 seconds using a fixed barcode reader (and not making any keying mistakes)?

When you have 500 file-folders to add to archive boxes provided by your offsite storage provider why would you take hours to do it laboriously with lists and the keyboard when you could do it in minutes using a barcode reader? Simply use your portable barcode reader to read the box barcode then read each file-folder barcode number as you add it to the box and then read the box number again when finished to complete the transaction. What could be faster or simpler?
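To make that concrete, here is a minimal sketch of how the scan stream from such a box-packing session might be processed. It is written in Python purely for illustration; the “BOX” prefix convention is an invented assumption and your records management package will do all of this for you.

    # Minimal sketch: group scanned file-folder barcodes under the open box barcode.
    # Assumes box barcodes start with "BOX"; everything else is a file-folder barcode.
    def pack_boxes(scans):
        contents = {}      # box barcode -> list of file-folder barcodes
        open_box = None
        for code in scans:
            if code.startswith("BOX"):
                # Scanning the same box a second time closes the transaction.
                open_box = None if code == open_box else code
                contents.setdefault(code, [])
            elif open_box:
                contents[open_box].append(code)
            else:
                raise ValueError(f"Folder {code} scanned with no box open")
        return contents

    # Open the box, add three file-folders, close the box.
    print(pack_boxes(["BOX1000049", "1000101", "1000102", "1000103", "BOX1000049"]))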

So, what do you need to convert your slow and error-prone manual-entry records management processes to fast and accurate barcode-enabled processes?

  1. A records management software package that supports barcodes (I don’t know of any modern RM system that doesn’t)
  2. *A supply of pre-printed barcodes (or you can print them out of your records management software package)
  3. Some fixed or wedge barcode readers (expect to pay $150 to $250 each)
  4. One or more portable barcode readers (expect to pay $1,000 to $2,000 including cables, battery chargers, etc.)

*A word on barcode labels. It pays to make them as durable as possible. This usually means laminating them as un-laminated barcodes produced on a laser printer tend to have a short life expectancy. The easiest way to obtain high quality, laminated barcode labels is to order them from a specialist print house. This way you can specify exactly what you need in terms of format and size and be assured of a long life and reliability. Nothing frustrates more than a worn barcode that doesn’t read properly.

Of course someone has to stick the barcode label on the file-folders and then tell the computer system (i.e., file-folder number AB/2003/00067 is now barcode number 1000049). You have a choice of how to do this. If you don’t have too many file-folders you can bite the bullet and add them all as a special project. Or, you can decide just to add them to every new file-folder created and to add barcodes to existing file-folders when they cross your desk. It is your decision based on volume and resources. However, you need to invest the effort to reap the benefits.
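A minimal sketch of that registration step, assuming a simple table that maps barcode numbers to file-folder numbers (the table and column names are invented for illustration; a real RM package provides this out of the box):

    import sqlite3

    # Minimal sketch: record that a barcode label is now attached to a file-folder.
    db = sqlite3.connect("rm_demo.db")
    db.execute("CREATE TABLE IF NOT EXISTS folder_barcode "
               "(barcode TEXT PRIMARY KEY, folder_number TEXT NOT NULL)")

    def register_barcode(barcode, folder_number):
        # Link a scanned barcode to an existing file-folder number.
        db.execute("INSERT INTO folder_barcode (barcode, folder_number) VALUES (?, ?)",
                   (barcode, folder_number))
        db.commit()

    # e.g., file-folder number AB/2003/00067 is now barcode number 1000049
    register_barcode("1000049", "AB/2003/00067")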

Then if you really want to benefit you will assign a different class of barcode to ‘locations’. That is, offices, shelves, rooms, etc., and even people. This is so you can do an audit on a regular basis using your portable barcode reader. Wouldn’t it be nice to know where everything is and even, where some things aren’t?

Finally, assign yet another set of barcodes to your archive boxes so it is as easy and as fast as possible to move file-folders into and out of archive boxes.

The above describes just the simplest application of barcodes but even so, the benefits and cost savings are significant. The more creative of you will come up with many more ways to make barcodes pay big dividends. We have one customer, for example, that automatically allocates barcodes to emails in Outlook to make them easier to monitor and track both electronically and physically. See this link.

Barcodes are simple to use, low cost and well-proven, ‘risk-free’ technology. The effective use of barcodes and barcode readers can remove drudgery, lower costs and massively improve productivity.

If you aren’t using barcodes your boss should be asking you “why not?”

Document Imaging, Forms Processing & Workflow – A Guide

by Frank 28. July 2014 06:00

Document imaging (scanning) has been a part of most business processing since the early 1980s. We, for example, produced our first document-imaging-enabled version of RecFind in 1987. So it isn’t new technology; it is now low-risk, tried and proven technology.

Even in this age of electronic documents most of us still receive and have to read, analyse and process mountains of paper.

I don’t know of any organization that doesn’t use some form of document imaging to help process paper documents. Conversely, I know of very few organizations that take full advantage of document imaging to gain maximum value from it.

For example, just scanning a document as a TIFF file and then storing it on a hard drive somewhere is almost a waste of time. Sure, you can then get rid of the original paper (but most don’t) but you have added very little value to your business.

Similarly, capturing a paper document without contextual information (Metadata) is not smart because you have the document but none of the important transactional information. Even converting a TIFF document to a PDF isn’t smart unless you first OCR (Optical Character Recognition) it to release the important text ‘hidden’ in the TIFF file.
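As an illustration of that OCR step, here is a minimal sketch that converts a scanned TIFF into a text-searchable PDF. It assumes the open-source pytesseract and Pillow packages; the point stands whichever OCR engine your imaging software actually uses.

    from PIL import Image
    import pytesseract

    # Minimal sketch: OCR a scanned TIFF and write a text-searchable PDF,
    # so the words 'hidden' in the image become indexable.
    def tiff_to_searchable_pdf(tiff_path, pdf_path):
        image = Image.open(tiff_path)
        pdf_bytes = pytesseract.image_to_pdf_or_hocr(image, extension="pdf")
        with open(pdf_path, "wb") as f:
            f.write(pdf_bytes)

    tiff_to_searchable_pdf("invoice_0001.tif", "invoice_0001.pdf")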

I would go even further and say that if you are not taking the opportunity to ‘read’ and ‘capture’ key information from the scanned document during the scanning process (Forms Processing) then you aren’t adding anywhere near as much value as you could.

And finally, if you aren’t automatically initiating workflow as the document is stored in your database then you are criminally missing an opportunity to automate and speed up your internal business processes.

To give it a rating scale, just scanning and storing TIFF files is a 2 out of 10. If this is your score you should be ashamed to be taking a pay packet. If you are scanning, capturing contextual data, OCRing, Forms Processing, storing as a text-searchable PDF and initiating workflow then you get a 10 out of 10 and you should be asking your boss for a substantial raise and a promotion.

How do you rate on a scale of 0 to 10? How satisfied is your boss with your work? Are you in line for a raise and a promotion?

Back in the 1980s the technology was high-risk, expensive and proprietary and few organizations could afford the substantial investment required to scan and process information with workflow.

Today the technology is low cost and ubiquitous. There is no excuse for not taking full advantage of document imaging functionality.

So, where do you start?

As always, you should begin with a paper-flow analysis. Someone needs to do an inventory of all the paper you receive and produce and then document the business processes it becomes part of.

For every piece of paper you produce you should be asking “why?” Why are you producing paper when you could be producing an electronic document or an electronic form?

In addition, why are you producing multiple copies? Why are you filing multiple copies? What do your staff actually do with the paper? What happens to the paper when it has been processed? Why is it sitting in boxes in expensive off-site storage? Why are you paying to rent space for that paper month after month after month? Is there anything stored there that could cause you pain in any future legal action?

And most importantly, what paper can you dispose of?

For the paper you receive you need to work out what is essential and what can be discarded. You should also talk to your customers, partners and suppliers and investigate if paper can be replaced by electronic documents or electronic forms. Weed out the non-essential and replace whatever you can with electronic documents and electronic forms. For example, provide your customers, partners and suppliers with Adobe electronic forms to complete, sign and return or provide electronic forms on your website for them to complete and submit.

Paper is the enemy, don’t let it win!

Once you have culled all the paper you can, you then need to work out how to process the remaining paper in the most efficient and effective manner possible and that always ends up as a Business Process Management (BPM) exercise. The objectives are speed, accuracy, productivity and automation.

Don’t do anything manually if you can possibly automate it. This isn’t 30 years ago when staff were relatively cheap and computers were very expensive. This is now when staff are very expensive and computers are very cheap (or should I say low-cost?).

If you have to process paper the only time it should be handled is when it is taken from the envelope and fed into a document scanner. After that, everything should be automated and electronic. Yes, your records management department will dutifully want to file paper in file folders and archive boxes but even that may not be necessary.  Don’t accept the mystical term ‘compliance’ as a reason for storing paper until you really do understand the compliance legislation that applies to your business. In most cases, electronic copies, given certain safeguards, are acceptable.

I am willing to bet that your records manager will be operating off a retention schedule that is old, out-of-date, modified from another schedule, copied, modified again and ‘made-to-fit’ your needs. It won’t be his/her fault because I can probably guarantee that no budget was allocated to update the retention schedule on an ongoing basis. I am also willing to bet that no one has a copy of all of the current compliance rules that apply to your business.

In my experience, ninety-percent plus of the retention schedules in use are old, out-of-date and inappropriate for the business processes they are being applied to. Most are also way too complicated and crying out for simplification. Bad retention schedules (and bad retention practices – are you really destroying everything as soon as you are allowed?) are the main reason you are wasting thousands or millions of dollars a year on redundant offsite storage.

Do your research and save a fortune! Yes, records are very important and do deserve your attention because if they don’t get your attention you will waste a lot of money and sooner or later you will be penalised for holding information you could have legally destroyed a long time ago. A good records practice is an essential part of any corporate risk management regime. Ignore this advice at your peril.

Obviously, processing records efficiently requires software. You need a software package that can:

  1. Scan, OCR and Forms Process paper documents.
  2. Capture and store scanned images and associated Metadata plus any other kind of electronic document.
  3. Define and execute workflow.
  4. Provide search and inquiry capabilities.
  5. Provide reporting capabilities.
  6. Audit all transactions.

The above is obviously a ‘short-list’ of the functionality required but you get the idea. There must be at least several hundred proven software packages in the world that have the functionality required. Look under the categories of:

  1. Enterprise Content Management (ECM, ECMS)
  2. Records Management (RM, RMS)
  3. Records and Document Management
  4. Document Management (DM, DMS)
  5. Electronic Document and Records Management (EDRMS)
  6. Business Process Management (BPM)

You need to define your business processing requirements beginning with the paper flow analysis mentioned earlier. Then convert your business processing requirements into workflows in your software package. Design any electronic forms required and where possible, re-design input paper forms to facilitate forms processing. Draw up procedures, train your staff and then test and go live.
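To make “convert your business processing requirements into workflows” a little more concrete, here is a minimal sketch of one inbound-invoice process expressed as data. The step names, roles and time limits are invented assumptions; every BPM or RM package has its own workflow designer for this.

    # Minimal sketch: a business process as an ordered list of workflow steps,
    # each with a responsible role and a time limit in days (illustrative values).
    invoice_workflow = [
        {"step": "Scan and OCR invoice",      "role": "Mailroom",         "days": 1},
        {"step": "Capture supplier metadata", "role": "Accounts Payable", "days": 1},
        {"step": "Approve for payment",       "role": "Manager",          "days": 3},
        {"step": "Schedule payment",          "role": "Accounts Payable", "days": 2},
    ]

    def overdue_steps(workflow, days_elapsed):
        # Return the steps whose cumulative time limit has already passed.
        total, late = 0, []
        for step in workflow:
            total += step["days"]
            if days_elapsed > total:
                late.append(step["step"])
        return late

    print(overdue_steps(invoice_workflow, days_elapsed=4))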

The description above is obviously a little short on detail but I am not writing a “how-to” textbook, just a simple guide. If you don’t have the necessary expertise then hire a suitably qualified and experienced consultant (someone who has done it before many times) and get productive.

Or, you can just put it off again and hope that you don’t get caught.

 

A simple guide to using shared drives to capture & classify electronic documents and emails

by Frank 18. July 2014 06:00

I have written previously about ways to solve the shared drives problem (click here) and I have written numerous articles (and a book) about ways to manage emails and electronic/digital records. However, we still receive multiple requests from customers and prospective customers about the best, and simplest, way to effectively manage these problems.

The biggest stumbling block and impediment to progress in most cases is the issue of a suitable taxonomy or classification system. Time and time again I see people putting off the solution while they spend years and tens of thousands or hundreds of thousands of dollars grappling with the construction of a suitable taxonomy. I have written about this topic previously as well and if you want my recommendations please click on this link.

If you really want the simplest, easiest to understand, easiest to use and lowest cost way to solve all of the above problems then please forget about spending the next twelve to eighteen months grappling with the nuances of your classification system. It isn’t necessary.

What you need instead is a natural classification structure that reflects your business processes. Please give your long-suffering end users something they will instantly recognize and can easily work with because it is familiar from their day to day work. Give them something to work with that doesn’t require them to become amateur records managers battling to decipher a complex, hierarchical classification system that requires an intricate knowledge of classification theory to interpret correctly. Give them something that makes it as easy as possible to file everything in the right place first time with absolutely minimal effort. Give them something that makes it as easy as possible to find something.

What I am proposing isn’t a hundred-percent solution and it won’t suit every organization but I guarantee that it will turn chaos into order in any organization that implements it. You may well see it as an eighty-five-percent solution but that is a hell of a lot better than no solution. It is also easy and fast to implement and relatively low cost (you will need some form of RM software).

First up you need to make decisions about what kind of business you are.  Notice that I said “what kind of business you are” not “what kind of records you manage” or “how your business is structured”.  Most importantly, strongly resist the temptation to base your classification structure on your existing business structure or organization’s departments/agencies and instead base it on your most common business processes. Please refer to the following extract from:

Overview of Classification Tools for Records Management by the National Archives of Australia, ISBN 0 642 34499 X (an excellent reference document if you need to understand classification systems).

“Classifying records and business information by functions and activities moves away from traditional classification based on organisational structure or subject. Functions and activities provide a more stable framework for classification than organisational structures that are often subject to change through amalgamation, devolution and decentralisation. The structure of an organisation may change many times, but the functions an organisation carries out usually remain much the same over time.”

I would also strongly resist the temptation to build your classification structure on content; it is way too difficult. Instead, as I have said above, base it on your common business processes.

When I say classification structure I mean the way you name and organize folders in your shared drives. I can’t give you a generic solution because I am not that clever; I don’t know enough about your business. I can however, give you an example.

Please also remember that for the most part, we are dealing with unstructured source information: Word, Excel, PowerPoint, emails, etc. Emails are a little easier to deal with because they have a limited but common structure, e.g., Date Received, Sender, Recipient, CC and Subject. With other electronic documents we have far less information and are usually limited to Author (not reliable), Date Created, Date Modified and Filename. Ergo, as I said earlier, trying to base a classification system on the content of unstructured documents is both difficult and inexact. It is certainly doable but you will have to spend a lot more money on consulting and sophisticated software to achieve your ends.

In my simple example of my simple system I am going to assume that your business is customer (or client) centric, i.e., as opposed to being case-centric or project-centric, etc. The top level of your classification structure therefore will be the client name and/or number. To make it as simple as possible I am going to propose only two levels. The second level represents your most common business processes, that is, what you do with each customer. So for example, I have:

Customer Name
     Correspondence
     Contracts
     Quotes & Proposals
     Orders
     Incidents

I am also not going to differentiate between emails and other types of electronic documents; I am going to treat them all the same.

Now how does this simple system work?

  1. Staff producing electronic documents don’t have their ‘own’ shared drive; all staff use the common classification structure. This is very important: let one or more people be exceptions and you no longer have a system you can rely on to meet your needs for reliable retrieval and any compliance legislation you are subject to.
  2. Staff drag and drop or ‘save-as’ emails from their email client to the correct sub-folder.
  3. Similarly, staff save (or drag and drop) electronic documents into the correct sub-folder. You can control access if required by applying security to electronic documents.
  4. You purchase or build a document repository (based on any common database such as SQL Server, MySQL, etc.) and within this repository you replicate the folder structure of your shared drives with logical folders and subfolders.
  5. You purchase or build a tool that constantly monitors the shared drives (e.g., using .NET FileSystemWatcher technology; see the sketch after this list) and that instantly captures a copy of any new or modified document (you do need to configure your repository to automatically version modified documents). You may also decide to automatically delete the original source document after it has been captured.
  6. You build or purchase a records and document management software package that allows you to index, search and report on all the information in your repository.
  7. You train your staff in how to save and search for information (shouldn’t take more than a half to one day) and then you go live.
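A minimal sketch of the folder-watching idea in point 5, using the Python watchdog package instead of .NET purely for illustration; any equivalent file-system watcher will do, and the drive and repository paths shown are assumptions.

    import shutil, time
    from pathlib import Path
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    SHARED_DRIVE = Path(r"S:\Customers")    # watched shared drive (assumed path)
    REPOSITORY = Path(r"D:\Repository")     # capture target (assumed path)

    class CaptureHandler(FileSystemEventHandler):
        # Copy any new or modified document into the repository,
        # preserving the Customer/Process sub-folder structure.
        def on_created(self, event):
            self._capture(event)

        def on_modified(self, event):
            self._capture(event)

        def _capture(self, event):
            if event.is_directory:
                return
            source = Path(event.src_path)
            target = REPOSITORY / source.relative_to(SHARED_DRIVE)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source, target)  # a real tool would also version and index here

    observer = Observer()
    observer.schedule(CaptureHandler(), str(SHARED_DRIVE), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()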

I would also recommend applying a retention schedule based on sub folder (e.g., contracts) and date created and have the records management system automatically apply it to manage the lifecycle of captured documents. There is no sense in retaining information longer than you have to; it is also a dangerous practice.
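To illustrate, a minimal sketch that derives a destruction date from the sub-folder name and the creation date. The retention periods shown are invented examples, not advice; use your own approved retention schedule.

    from datetime import date

    # Illustrative retention periods in years, keyed by sub-folder name.
    RETENTION_YEARS = {
        "Correspondence": 3,
        "Contracts": 7,
        "Quotes & Proposals": 2,
        "Orders": 7,
        "Incidents": 10,
    }

    def disposal_date(sub_folder, created):
        # Earliest date the captured document may be destroyed.
        years = RETENTION_YEARS[sub_folder]
        return created.replace(year=created.year + years)

    print(disposal_date("Contracts", date(2014, 7, 18)))  # -> 2021-07-18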

Please note that the above is just an example and a very simple one at that. You need to determine the most appropriate folder structure for your organization.

WARNING

Do not let the folder structure become overly complex and unwieldy. If you do, it won’t work and you will end up with lots of stuff either not captured or captured to the wrong place. The basic rules are that if it takes more than a few seconds to decide where to file something then it is too complex, and that any structure more than three levels deep is too complex.

And finally, this isn’t just theory; it is something we do in our organization and something many of our customers do. If you would like to read more on this approach there are some white papers and more explanations at this link. Alternatively, you can contact us and ask questions at this link.

Good luck.

 

Are you still struggling with physical records management, with paper?

by Frank 16. July 2014 00:01

 


We produced our first computerised records management system in 1984 (when our company was called GMB) and it was called DocFind. It was marketed by the Burroughs Corporation initially to about 100 clients and then we started marketing DocFind directly and sold it to about another 2,000 clients.

Every one of those clients wanted DocFind just to manage physical records: paper, file folders and archive boxes. There was little or no demand for document imaging and workflow and the term electronic document management had yet to be invented. Office automation was in its infancy. We, for example, wrote our letters on an Apple IIe using a word processor called WordStar running under CP/M.

In 1986 we released RecFind, a major remake of the DocFind product. This product was initially marketed by ourselves and NEC and it too focussed just on managing physical records.

However, even in 1986 we knew we had a bigger job to do with the general acceptance of document scanners and workflow, so we added imaging and workflow to our product and started trying to convince our customers and prospective customers to reduce the size of their paper mountain and even to start planning for a ‘Paperless Office’.

In the late 1980s and early 1990s I delivered numerous papers extolling the value of the paperless office and worked hard to convince my customers to make the move to Electronic Document and Records Management (EDRMS).

In the mid-1990s the industry discovered ‘Knowledge Management’ (KMS) and industry consultants lost interest in EDRMS and instead heavily promoted the virtues and benefits of KMS, whatever it was. Maybe this was the time organizations lost interest in eradicating paper as senior IT staff and consultants moved on to more interesting projects like KMS.

In 1995 I delivered my first paper on a totally integrated information management system or what I called at the time the ‘It Does Everything Application’ (IDEA). In 1995 I truly thought the age of physical records management was almost over and that the western world at least would move to fully-automated, paperless processes.

How wrong I was 19 years ago.

Today, despite the advanced functionality of our RecFind 6 Product Suite, almost all of my customers still manage physical records with RecFind 6. At least half of the inquiries that come in via our website are for systems to manage physical records.

There is more paper in the world today than there has ever been and organizations all over the world still struggle with managing paper, vast amounts of paper.

Luckily for us, we never succumbed to the temptation to remove the paper handling features from our products. Instead, we added to them with each subsequent release and redesign/rewrite of RecFind. We had to provide upwards compatibility for our clients as they still managed mountains of paper both onsite and offsite.

Being a little older and wiser now I am never again going to predict the paperless office. I will provide advanced physical records management functionality for my clients as long as they require it.

I haven’t given up the fight but my job is to address the real needs of my customers, and they tell me and keep telling me that they need to manage paper, mainly file folders full of paper and archive boxes full of file folders. They need to manage paper onsite in shelving and offsite in warehouses with millions of boxes, and we do it all.

We manage paper from creation to destruction and throughout the whole lifecycle. We apply retention schedules and classification systems and we track anything and everything with barcodes and barcode readers. We have enhanced our products to cater for every need and we are now probably responsible for millions of tonnes of paper all over the world.

I still hope for a paperless world but I very much doubt that I am going to see it in my lifetime.

So, if you are still struggling with how to best manage all your physical records please don’t despair, you are most certainly not alone! 

  

What is the future of RecFind? - The Product Road Map

by Frank 19. May 2014 06:00

First a little history. We began in 1984 with our first document management application called DocFind, marketed by the then Burroughs Corporation (now called Unisys). In June 1986 we sold the first version of RecFind, a fully-featured electronic records management system and a vast improvement on the DocFind product. Then we progressively added document imaging, then electronic document management and workflow, and then with RecFind 6 a brand new paradigm and an amalgam of all previous functionality: an information management system able to run multiple applications concurrently with a complete set of enterprise content management functionality. RecFind 6 is the eighth completely new iteration of the iconic RecFind brand.

RecFind 6 was and is unique in our industry because it was designed to be what was previously called a Rapid Application Development system (RAD) but unlike previous examples, we provided the high level toolset so new applications could be inexpensively ‘configured’ (by using the DRM) not expensively programmed and new application tables and fields easily populated using Xchange. It immediately provided every customer with the ability to change almost anything they needed changed without needing to deal with the vendor (us).  Each customer had the same tools we used to configure multiple applications within a single copy of RecFind 6. RecFind 6 was the first ECM product to truly empower the customer and to release them from the expensive and time consuming process of having to negotiate with the vendor to “make changes and get things done.”

In essence, the future of the RecFind brand can be summarised as more of the same but as an even easier to use and more powerful product. Architecturally, we are moving away from the fat-client model (in our case based on the .NET smart-client paradigm) to the zero-footprint, thin-client model to reduce installation and maintenance costs and to support far more operating system platforms than just Microsoft Windows. The new version 2.6 web-client for instance happily runs on my iPad within the Safari browser and provides me with all the information I need on my customers when I travel or work from home (we use RecFind 6 as our Customer Relationship Management system or CRM). I no longer need a PC at home and nor do I need to carry a heavy laptop through airports.

One of my goals for the remainder of 2014 and for 2015 is to convince my customer base to move to the RecFind 6 web-client from the standard .NET smart-client. This is because the web-client provides tangible, measurable cost benefits and will be the basis for a host of new features as we gradually deprecate the .NET smart-client and expand the functionality of the web-client. We do not believe there is a future for the fat/smart-client paradigm; it has seen its day. Customers are rightfully demanding a zero footprint and the support of an extensive range of operating environments and devices including mobile devices such as smartphones and tablets. Our web-client provides the functionality, mobile device support and convenience they are demanding.

Of course the back-end of the product, the image and data repository, also comes in for major upgrades and improvements. We are sticking with MS SQL Server as our database but will incorporate a host of new features and improvements to better facilitate the handling of ‘big data’. We will continue to research and make improvements to the way we capture, store and retrieve data and, because our customers’ databases are now so large (measured in hundreds of gigabytes), we are making it easier and faster to both back up and audit the repository. The objectives as always are scalability, speed, security and robustness.

We are also adding new functionality to allow the customer to bypass our standard user interface (e.g., the .NET smart-client or web-client) and create their own user interface or presentation layer. The objective is to make it as easy as possible for the customer to create tailored interfaces for each operating unit within their organization. A simple way to think of this functionality is to imagine a single high level tool that lets you quickly and easily create your own screens and dashboards and program to our SDK.

On the add-in product front we will continue to invest in our add-in products such as the Button, the MINI API, the SDK, GEM, RecCapture, the High Speed Scanning Module and the SharePoint Integration Module. Even though the base product RecFind 6 has a full complement of enterprise content management functionality these add-on products provide options requested by our customers. They are generally a way to do things faster and more automatically.

We will continue to provide two approaches for document management; the end-user paradigm (RecFind 6 plus the Button) and the fully automatic capture and classification paradigm (RecFind 6 plus GEM and RecCapture). As has been the case, we also fully expect a lot of our customers to combine both paradigms in a hybrid solution.

The major architectural change is away from the .NET smart-client (fat-client) paradigm to the browser-based thin-client or web-client paradigm. We see this as the future for all application software, unconstrained by the strictures of proprietary operating systems like Microsoft Windows.

As always, our approach, our credo, is that we do all the hard work so you don’t have to. We provide the feature rich, scalable and robust image and data repository and we also provide all of the high level tools so you can configure your applications that access our repository. We also continue to invest in supporting and enhancing all of our products making sure that they have the feature set you require and run in the operating environments you require them to. We invest in the ongoing development of our products to protect your investment in our products. This is our responsibility and our contribution to our ongoing partnership.

 

The Post-Microsoft World

by Frank 15. January 2014 06:00

Sometimes companies get to believe their own myth and end up going down a path that is different to the path taken by their users. This creates a major disconnect between the company and its users, which the users recognise immediately but the company doesn’t. The users usually then perceive the company as arrogant and out-of-touch and as a company that has stopped listening to its customers.

Sometimes the company fails with a particular product line (e.g., HP and its first tablets) and sometimes it fails altogether. Recent examples of companies that completely misread the market are the aforementioned HP, Kodak and Blackberry. It was also only a few years ago that IBM almost came to the same crossroads but it managed to stem the collapse.

I guess the lesson is that it doesn’t matter how old the company is or how big or how respected, it can still get it wrong and it can still fail. This is probably truer today than it has ever been because trends and fads and favourites change so rapidly compared to yesteryear. For example, for how long will Twitter and Google rule the roost? I am positive that the next Google and Twitter are already in production and gearing up for the conquest. Does anyone not think that Google is arrogant; dictating what users want, not asking?

However, the company I fear is in more danger than most of becoming suddenly irrelevant is Microsoft. To my mind, Microsoft has pursued a path of change for change’s sake (and to hell with what the customers want) for too many years and I see it today as a giant room full of programmers and marketing people with no one minding the shop and no one steering the ship.

It makes most of its money from Windows and Office and yet these are two of the most disliked pieces of software on the planet. How many people actually love Windows 8 and Office 2013? Does anyone at Microsoft actually know this? They could ask me or anyone out of millions of users but they don’t and won’t. Like HP and Kodak and Blackberry they will internalise all marketing discussions and push through users’ complaints doggedly pursuing their own wayward path to the cliff top.

Windows ME and Windows Vista should have been big red flags but obviously they weren’t because we now have Windows 8 and Metro soon to be replaced by Windows 9. Remember the old expression, “Those who don’t learn from history are bound to repeat history.”

In the past Microsoft has got off almost scot-free because there was no real competitor waiting in the wings. Even today, there may not be a single competitor waiting to replace Microsoft but there are competitors such as Apple PCs and phones and tablets, Android PCs, phones and tablets, Linux PCs and servers, Chromebooks and the like. There is also Windows 7 (Vista fixed) and Office 2003 and Office 2010 to tide people over until a really strong challenger emerges. You do not have to buy Windows 8 and you do not have to buy Office 2013; there are alternatives.

Even the major fall-off in PC sales over the last couple of years doesn’t seem to have been taken seriously by Microsoft.

There are a lot of factors pushing Microsoft towards the edge of the cliff and all that is needed is a really strong ‘alternative’ (to Windows and less so, Office) or an acceleration of the trend away from Windows PCs to push Microsoft over the cliff. When the end comes, it will be fast, like the next ice age.

When it happens senior management at Microsoft will say to investors and soon-to-be-redundant staff, “We didn’t see this coming” and the rest of us ordinary consumers will just smile knowingly and shake our heads, “Why didn’t you talk to us?”

The post Microsoft era will be one of much, much simpler operating systems (e.g., iOS), much more stable operating systems, much simpler office products and corporate application software that runs in a browser on most devices and under most operating systems (e.g., iOS and Android and Linux).

We won’t need Windows and without Windows, we won’t be forced to use Microsoft Office.

The most important factor contributing to Microsoft’s downfall will be software vendors like us moving away from developing for Windows and into developing for browsers.  This is happening now and the pace is quickening. I predict that by the end of 2015 almost any application software you or any company needs will be available running in a browser. You will not need Windows.

By my reckoning, Microsoft needs to change direction and have a new and popular paradigm in place by the end of 2014 or it doesn’t have a future as the desktop king. Let’s see if I am right; we don’t have long to wait.

Frank McKenna is the CEO of the Knowledgeone Corporation, a long-time Microsoft ISV and the producer of the RecFind 6 product suite.

Technology Trends for 2014 – A developer’s perspective

by Frank 7. January 2014 06:00

I run a software company called the Knowledgeone Corporation and we produce enterprise content management software for government and business. Because it takes so long to design, build and test a new product or even a new version, we have to try and predict where the market will be in one or two years and then try to make sure our product RecFind 6 ‘fits-in’ with future requirements.

Years ago it was much easier because we were sure Windows would be the dominant factor and mostly we had to worry about compatibility with the next version of Windows and Microsoft Office. Apple however, changed the game with first the iPhone and then the iPad.

We now need to be aware of a much wider range of devices and operating systems; smart phones and tablets in particular. Three years ago we decided to design in compatibility for iOS and Android and we also decided to ignore Blackberry; so far, a wise move.

However, the prediction business is getting harder because the game is changing faster and probably faster than we can change our software (a major application).

I was just reading about CES 2014 on ZDNet and the major technologies previewed and displayed there. Most are carry overs from 2013 and I haven’t noted anything really new but even so, the question is which of these major trends will become major players during 2014 and 2015 (our design, develop and test window for the next major release of RecFind 6)?

  1. Wearables
  2. The Internet of Things
  3. Contextual Computing (or Predictive Computing)
  4. Consumerization of business tech
  5. 3D printing
  6. Big Data
  7. The Cloud

Larry Dignan, Editor in Chief of ZDNet, wrote an excellent summary of things to think about for 2014; see this link.

Larry sees China and emerging Chinese companies as major players outside of China in 2014 but I think the Europeans and Americans will resist until well into 2015 or later. Coming on the heels of the Global Financial Crisis of 2008 their governments won’t take kindly to having their local high tech industries swamped by Chinese giants. He also talks about the fate of Windows 8 and the direction of the PC market and this is our major concern.

The PC market has been shrinking and even though Microsoft is still the major player by far, a lot depends upon the acceptance of Windows 8 as the default operating system. Personally I saw the Windows 8 Metro interface as clumsy and as change for change’s sake.

I really don’t understand Microsoft’s agenda. Why try to force a major change like this on consumers and businesses just when everyone is happy with Windows 7 and we have all almost forgotten Vista? Windows 8 isn’t an improvement over Windows 7 just as Office 2013 isn’t an improvement over Office 2010. Both are just different and, in my opinion, less intuitive and more difficult to use.

Try as I might, I cannot see any benefits to anyone in moving from Windows 7 to Windows 8 and in moving from Office 2010 to Office 2013. The only organization benefiting would be Microsoft and at the cost of big disruptions to its loyal customers.

Surely this isn’t a wise thing to do in an era of falling PC sales? Why exacerbate the problem?

Smart phones and tablets are real and growing in importance. Android and iOS are the two most important ‘new’ operating systems to support and most importantly for us, browsers are the application carriers of the future. No software vendor has the resources to support all the manifestations of Windows, Linux, Android, iOS, etc., in ‘native’ form but all operating systems support browsers. Browsers have become what Windows was ten years ago. That is, a way to reach most of the market with a single set of source code.

We lived through the early days of DOS, UNIX, Windows and the AS/400 and at one time had about fifteen different sets of source code for RecFind. No vendor wants to go back to those bad old days. When the world settled on Windows it meant that most of us could massively simplify our development regime and revert to a single set of source code to reach ninety-percent of the market. In the early days, Windows was our entry point to the world. Today it is browsers.

Of course not all browsers are equal and there is extra work to do to support different operating systems, especially sand-boxed ones like iOS, but we are still running ninety-five percent common source and five percent variations so it is eminently manageable.

Does Microsoft realize that many developers like us now target browsers as our main application carriers and not Windows? Does it also realize that the Windows 8 Metro interface was the catalyst that pushed many more developers along this same path?

Let’s hope that the new CEO of Microsoft cares more about his customers than the previous one did. If not, 2014 won’t just be the post-PC era, it will also be the beginning of the post-Microsoft era.

Is this Microsoft’s worst mistake ever?

by Frank 30. November 2013 06:00

I run a software company called the Knowledgeone Corporation that has been developing application solutions for the Microsoft Windows platform since the very first release of Windows. As always, our latest product offering, RecFind 6 version 2.6, has to be tested and certified against the latest release of Windows. In this case that means Windows 8.1.

Like most organizations, we waited for the Windows 8.1 release before upgrading our workstations from Windows 7. The only exceptions were our developers’ workstations because we bought them new PCs with Windows 8 pre-installed.

We are now testing the final builds of RecFind 6 version 2.6 and have found a major problem. The problem is that Microsoft in its infinite wisdom has decided that you can’t install Windows 8.1 over a Windows 7 system and retain your already installed applications.

The only solution is to install Windows 8 first and then upgrade Windows 8 to Windows 8.1. However, if you are running Windows 7 Enterprise this won’t work either and you will be told that you will have to reinstall all of your applications.

I am struggling to understand Microsoft’s logic.

Surely Microsoft wants all its customers to upgrade to Windows 8.1? If so, why has it ‘engineered’ the Windows 8.1 upgrade so customers will be discouraged from using it? Does anyone at Microsoft understand how much work and pain is involved in re-installing all your applications?

No, I am not kidding. If you have a PC or many PCs with Windows 7 installed you are going to have to install Windows 8 first in order to maintain all of your currently installed applications. Then, after spending many hours installing Windows 8 (it is not a trivial process) spend more precious time installing Windows 8.1. Microsoft has ensured that you cannot go direct from Windows 7 to Windows 8.1.

Of course, if you are unlucky, you could be living in a country where Microsoft has blocked the downloading of Windows 8, like Australia. Now you are between a rock and a hard place. Microsoft won’t let you install Windows 8 and if you install Windows 8.1 you face days or weeks of frustrating effort trying to re-install all of your existing applications.

 

Here are some quotes from Microsoft:

“You can decide what you want to keep on your PC. You won't be able to keep programs and settings when you upgrade. Be sure to locate your original program installation discs or purchase confirmation emails if you bought programs online. You'll need these to reinstall your programs after you upgrade to Windows 8.1—this includes, for example, Microsoft Office, Apache OpenOffice, and Adobe programs. It's also a good idea to back up your files at this time, too.”

“If you’re running Windows 7, Windows Vista, or Windows XP, all of your apps will need to be reinstalled using the original installation discs, or purchase confirmation emails if you bought the apps online.”

If the management at Microsoft wanted to ensure the failure of Windows 8.1 they couldn’t have come up with a better plan than the one they have implemented. By making Windows 8.1 so difficult to install they have ensured that its customers will stick with the tried and proven Windows 7 for as long as possible.

Can anyone at Microsoft explain why they thought this was a good idea?

Do you really need a Taxonomy/Classification Scheme with a Records Management System?

by Frank 26. October 2013 06:00

Background

Classification schemes are a way to group or order data; the objective being to group ‘like’ objects together. Classification schemes have been in use for tens of thousands of years, probably beginning when man first realized that there were different types of animals and plants.

We use classification schemes both to make things easier to find and to add value to a group of objects. By adding value I mean that a classification (describing a group) may provide more information about the members of that group than is obvious from an analysis of an individual member; this could be referred to as semantics.

Classification schemes are used in all walks of life, for example; in business, in science, in academia and in politics. Are you a liberal or a conservative? Is it a mammal? If it is, is it a marsupial or a monotreme or a placental mammal? This last example illustrates the usual hierarchical arrangement of classification schemes.

In business, we have long used classification schemes to order business documents, that is, records of business transactions. We are all familiar with file folders and filing cabinets; these things are tools of a classification scheme. They make implementing a classification scheme easier as do numbering systems, colors, barcodes and Lektrievers.

With the first commercial availability of mainframe computers in the early 1960s came our first attempts to computerize filing systems. It was also in the 1960s that we saw the first text indexing systems and the first sophisticated search algorithms.

The advent of text indexing and search algorithms allowed us to do a much better job of classifying data but more importantly, they allowed us to do a much better job of finding data.

Let’s not get into a debate about terminology and acronyms

Our industry (information management to use an all-encompassing term) is often its own worst enemy. It creates terms and acronyms at will with both confusing and overlapping definitions. Then it wonders why normal end-users exhibit first bewilderment and then disinterest. Let’s look at a few examples, e.g., RIMS, RMS, DMS, EDRMS, IAMS, CMS, ECM and KMS.

Do you realize that the process of records management is part of each of the preceding acronyms?

For my part I will stick with my old friend the world records management standard, ISO 15489. It tells us that records are evidence of a business transaction and that records are in any form including paper, electronic documents and emails (I know emails are electronic documents but the world generally differentiates them because emails are ‘different’).

So as far as I am concerned the term Records Management System or RMS includes everything we do and is easily recognized and understood so this is the term and acronym I will use in this paper.

Browsing versus searching

Classification systems are very good at making it easier for us to find information by browsing but not very helpful when we are searching.

Most classification systems require you to first ‘browse’ before finding the exact information you want; you usually have to examine multiple objects before you find the one you want. But this is what classification systems are very good at; because they organize data in a logical (to a human being) way, we usually know where to begin looking. This is why a classification scheme works so well with a manual filing system (multiple cabinets or multiple shelves of file folders).

Classification schemes are great for physical data and, I would say, absolutely necessary for physical data; how else would you organize fifty-thousand file folders (containing seven and a half million pages) in a huge filing room with hundreds of shelves?

However, with computers I don’t need to browse through multiple objects to find the one I want. By using techniques more appropriate to the computer than the filing room, I can search for and find exactly what I want almost instantly. I do not need to leaf through the file folder, I can go directly to the page or directly to the word. I can use the power of the computer.

The following statement will be probably seen as heresy by most practicing records managers but we actually don’t need a classification system (Taxonomy) when computerizing records. We just need a way to index and then search for information.

We need to organize our data so an ordinary end-user can easily find what they need without having to be a trained, professional records manager.

Indexing versus classifying

Now I know my interpretation of these two terms will not thrill everyone but the differentiation is an important part of my hypothesis.

Let’s start by looking at two kinds of books, a reference book and a work of fiction. Both have tables of content (a classification system usually called a TOC) but only one (the reference book) has an index (usually).

The TOC for the reference book is both useful and often used. The TOC for the work of fiction is both not useful and rarely used (readers rarely need more than a bookmark).

The TOC for the reference book is a way to organize information into a logical form, grouping ‘like’ information together in chapters and sections. A TOC for the work of fiction is just a list of chapters; it serves little or no purpose for the typical ‘end-user’, the reader.

All the reader of a fiction book really needs is two things; a bookmark and a ‘memory’ of the author, title, cover combination so he/she doesn’t accidentally buy it again at the airport bookshop before that dreaded long and boring flight.

The reader of the reference book actually needs both the TOC and the index for browsing (the TOC) and searching (the index).

A work of fiction doesn’t usually have nor need an index because the end-user doesn’t require it. A reference book usually has an index and it is often used to go direct to a page (or pages) and locate something very specific.

Drawing parallels with our broader topic, some information needs both a classification system and an index, some information needs just an index and some doesn’t require either (e.g., works of fiction).

Generally speaking, scientific collections require a classification system (a scientific taxonomy); for example, the study of plant species and the study of animal species (e.g., using a phylogenetic classification system). Scientists simply could not communicate with each other without having a detailed and exact classification system in place. But, most end-users are not scientists; they are just people trying to find the best place to store something and want to find it again with the least amount of effort and pain.

My contention is that we can solve all ‘content management’ and records management needs with a solution based on the application of a sensible, simple and self-evident (read that as easy to use or human-oriented) indexing system plus the required searching capabilities (i.e., covering both Metadata and full text). There is a better way.

What indexing system?

Whenever I consult with customers who are contemplating the capture and organization of data (hopefully into information) I always give the same advice. That is, “When you are thinking about how to index data first think about how you will find it later.” Ask this key question of your end-users, “When you are about to search for information what do you usually know about it?” For example:

  • Do you know the last name?
  • Do you know the first name?
  • Do you know the date of birth?
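A minimal sketch of that principle: index exactly what end-users say they know at search time, and let them search on any combination of it. The records and field names below are invented for illustration.

    # Minimal sketch: search on whatever the end-user actually knows.
    people_files = [
        {"last": "Citizen", "first": "John", "dob": "1975-03-02", "file": "AB/2003/00067"},
        {"last": "Smith",   "first": "Jane", "dob": "1980-05-17", "file": "AB/2004/00112"},
    ]

    def find(records, **known):
        return [r for r in records
                if all(str(r.get(k, "")).lower() == str(v).lower() for k, v in known.items())]

    print(find(people_files, last="smith"))                    # knows only the last name
    print(find(people_files, last="smith", dob="1980-05-17"))  # knows last name and date of birth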

A good indexing scheme reflects real life usage of the system; it reflects how ordinary humans work and ‘see’ information. Put simply, it indexes the information people will later need to search on. It indexes the information people understand and are comfortable with because it is self-evident.

Indexing Emails

An email is usually described as an unstructured document (the same way a Word or Excel document is described as being ‘unstructured’) but in fact it does have structure. Even better, everyone is familiar with an email’s structure so we have very little to teach end-users; that is, we have a simple and self-evident ‘natural’ set of Metadata items to index.

  1. Date of email
  2. Sender
  3. Recipient
  4. CC
  5. BCC
  6. Subject
  7. Text of the body of the email
  8. Text of any attachments
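As a simple illustration, those eight elements map directly onto an index record plus a text search; a minimal sketch follows (the field names are assumptions, not a prescribed schema):

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class EmailRecord:
        # The eight 'natural' metadata elements of an email, ready to index.
        date: datetime
        sender: str
        recipients: list
        cc: list = field(default_factory=list)
        bcc: list = field(default_factory=list)
        subject: str = ""
        body_text: str = ""
        attachment_text: str = ""

    def matches(record, term):
        # Naive full-text test across subject, body and attachment text.
        term = term.lower()
        return any(term in text.lower()
                   for text in (record.subject, record.body_text, record.attachment_text))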

For any normal end-user trying to find an email this is how they would envision an appropriate search. They wouldn’t care that the email has been classified six levels deep using the world’s most sophisticated Business Classification Scheme (BCS).

Understanding what end-users typically ‘know’ before they do a search determines what elements you have to index. This is the key to implementing a successful indexing system.

The above 8 elements of an email are self-evident insomuch as, “Of course I need to be able to search on the sender or recipient or subject….”

Indexing Electronic Documents

Now let’s look at ordinary electronic documents (i.e., not emails) because they are much less structured. We all know there are ways to add a common structure using features of MS Office like the information dialog box (asking for keywords, etc.) and templates and smart tags, but these things are rarely and inconsistently used.

With shared drives we usually find some form of ‘evolved’ classification system because managing electronic documents in shared drives is akin to managing millions of pieces of paper in tens of thousands of file folders in hundreds of filing cabinets. Unfortunately, the good intentions and purity of design of the original architects of the shared drives folder/sub folder naming conventions (a classification system) are soon corrupted as users make uncoordinated changes and the structure soon becomes unwieldy and incomprehensible.

In my opinion shared drives are OK for the creation of documents (i.e., a work area) but not OK for the management of documents. In fact I would say shared drives are absolutely hopeless for the management of documents as history and practice will attest.

Once again we need an appropriate indexing system and once again we need to ask, “What do people know at the time of the search?” For example:

  1. Original filename
  2. Original path/filename
  3. Type/suffix – e.g., .DOC, .XLS, .PDF, etc
  4. Author
  5. *Subject

Metadata and the Dublin Core

Let me quote from the Dublin Core website:

http://dublincore.org/

“The Dublin Core Metadata Element Set is a vocabulary of fifteen properties for use in resource description. The name "Dublin" is due to its origin at a 1995 invitational workshop in Dublin, Ohio; "core" because its elements are broad and generic, usable for describing a wide range of resources.”

To quote Wikipedia:

http://en.wikipedia.org/wiki/Dublin_Core

“It provides a simple and standardized set of conventions for describing things online in ways that make them easier to find. Dublin Core is widely used to describe digital materials such as video, sound, image, text, and composite media like web pages.”

The Simple Dublin Core Metadata Element Set (DCMES) consists of 15 elements.

  1. Title
  2. Creator
  3. Subject
  4. Description
  5. Publisher
  6. Contributor
  7. Date
  8. Type
  9. Format
  10. Identifier
  11. Source
  12. Language
  13. Relation
  14. Coverage
  15. Rights

To my mind the Dublin Core is an excellent set of elements for describing almost any ‘record’ because it is simple and appropriate to both computers and ‘normal’ end-users. As a professional, I like the elegance of the Dublin Core.

I also like the basic principle because it fits in with my hypothesis. That is, there is a better way to store, index and find records than a complex and unwieldy Taxonomy.
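To make the Dublin Core concrete, here is what a description of a single document might look like using the 15 DCMES elements, expressed as a simple mapping; every value below is invented purely for illustration.

```python
# An illustrative (entirely invented) Dublin Core description of one document,
# keyed by the 15 DCMES element names.
dublin_core_record = {
    "Title": "Business Plan for 2010",
    "Creator": "f.mckenna@k1corp.com",
    "Subject": "business planning; budgets",
    "Description": "Annual business plan covering objectives, budgets and staffing.",
    "Publisher": "Knowledgeone Corporation",
    "Contributor": "Finance team",
    "Date": "2009-11-30",
    "Type": "Text",
    "Format": "application/pdf",
    "Identifier": "DOC-2009-0457",
    "Source": "Shared drive, Finance folder",
    "Language": "en",
    "Relation": "Business Plan for 2009",
    "Coverage": "2010 calendar year",
    "Rights": "Internal use only",
}
```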

The Full Solution?

  • We need an application that stores documents of all types, i.e., all types of content.
  • We need an application that indexes both Metadata and full text.
  • We need an application with a customer configurable Metadata model.
  • We need an application that allows you to search on both Metadata and full text in a single search.
  • We need a search that combines Boolean and numeric operators, e.g., AND, OR, NOT, =, <, >, etc. (a toy illustration of such a combined search follows this list).
  • We need a ‘standard’ Metadata definition (a Class, if you will) comprising a simple set of data elements (no more than 20, in my estimation) that covers all of the elements necessary to index all of the types of documents (including file folders and paper) that you manage.
  • We need an application that includes all types of data capture, e.g., from the file system, from the native application, from a scanner, etc.
  • We need an application with a comprehensive security system.
  • We need an application with all reporting options, e.g., both standard reports and ad hoc reports.
  • We need an application with a configurable audit trail.
  • We need an application with comprehensive import and export capabilities.
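To illustrate what a single search across both Metadata and full text means in practice, here is a toy sketch over a small in-memory collection. It is not any vendor's query language; it simply shows Boolean AND logic, an equality test and a date comparison applied alongside a full-text condition, with all record values invented for the example.

```python
# A toy single search combining Metadata conditions (=, >) with a full-text
# term over a small invented collection; all criteria are ANDed together.
from datetime import date

records = [
    {"title": "Business Plan for 2010", "author": "f.mckenna@k1corp.com",
     "dated": date(2009, 11, 30), "type": "submission",
     "content": "objectives budgets staffing for the 2010 financial year"},
    {"title": "Customer complaint 0457", "author": "jane@example.com",
     "dated": date(2010, 3, 2), "type": "complaint",
     "content": "complaint about late delivery of archive boxes"},
]

def search(items, *, text=None, author=None, doc_type=None, dated_after=None):
    """Return items matching ALL supplied criteria (an implicit Boolean AND)."""
    hits = []
    for r in items:
        if text and text.lower() not in r["content"].lower():
            continue                                  # full-text condition
        if author and r["author"] != author:
            continue                                  # Metadata equality (=)
        if doc_type and r["type"] != doc_type:
            continue
        if dated_after and not r["dated"] > dated_after:
            continue                                  # date comparison (>)
        hits.append(r)
    return hits

# "type = complaint AND dated > 2010-01-01 AND content CONTAINS 'archive'"
print(search(records, text="archive", doc_type="complaint",
             dated_after=date(2010, 1, 1)))
```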

 

The standard Metadata definition (Master Metadata Class)

I have come up with a limited set of elements that I believe can be used to index and find any type of record, paper or electronic. I have borrowed heavily from the Dublin Core because it makes good sense to do so; there is no need to reinvent the wheel.

  1. Title: A name given to the record; typically the name by which the record is formally known. Text, e.g., “Business Plan for 2010”.
  2. Author(s): The sender or author, e.g., Mark Twain or f.mckenna@k1corp.com.
  3. Dated: The original date of the document or its published date.
  4. Date Received: The date received by the recipient or the recipient’s organization, whichever is earlier.
  5. Original Name: The original filename or path\filename for electronic documents, e.g., C:\franks stuff\sample.xls.
  6. Primary Identifier: An unambiguous reference to the record within a given context, e.g., the file number.
  7. Secondary Identifier: An unambiguous reference to the record within a given secondary context, e.g., the case number, contract number or employee number.
  8. Barcode: The barcode number or RFID tag.
  9. Subject: The topic of the record. Typically, the subject will be represented using keywords or key phrases. Recommended best practice is to use a controlled vocabulary.
  10. Description: An account of the record. A description may include, but is not limited to, an abstract, a table of contents, a graphical representation, or a free-text account of the record.
  11. Content: Words or phrases from the text content of the main document and attached documents.
  12. Contents: A description of the contents if the document is a container, e.g., an archive box.
  13. Recipient(s): Addressed to, sent to, etc. People or organizations.
  14. CC recipient(s): CC and BCC recipients.
  15. Publisher: An entity responsible for making the record available, i.e., the company or organization that either published the document or employs the author.
  16. Type: The nature or genre of the record, usually from a controlled list, e.g., complaint, quotation, submission, application, etc.
  17. Format: The file format, physical medium, or dimensions of the record, e.g., Word, Excel, PDF.
  18. Language: e.g., English, French, Spanish.
  19. Retention: The retention code determining the record’s lifecycle.
  20. Security: Access rights, security code, etc.

My contention is that by using an ‘index set’ like the above 20 Metadata elements you can index, manage and retrieve any ‘record’ regardless of form and content.
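To show what such an index set might look like in practice, here is a minimal sketch of the 20 elements above expressed as a single class, with an email and an archive box both described by the same structure. The field names, retention codes and example values are my own assumptions, included for illustration only.

```python
# A sketch of the Master Metadata Class: one structure whose fields mirror
# the 20 elements above and which can describe any type of record.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class MasterRecord:
    title: str = ""
    authors: List[str] = field(default_factory=list)
    dated: Optional[date] = None
    date_received: Optional[date] = None
    original_name: str = ""
    primary_identifier: str = ""
    secondary_identifier: str = ""
    barcode: str = ""
    subject: str = ""
    description: str = ""
    content: str = ""
    contents: str = ""
    recipients: List[str] = field(default_factory=list)
    cc_recipients: List[str] = field(default_factory=list)
    publisher: str = ""
    type: str = ""
    format: str = ""
    language: str = "English"
    retention: str = ""
    security: str = ""

# The same class describes very different 'records':
an_email = MasterRecord(title="Re: Business Plan for 2010",
                        authors=["f.mckenna@k1corp.com"],
                        recipients=["board@example.com"],
                        dated=date(2009, 11, 30),
                        content="Please find the plan attached...",
                        format="Email", retention="R05", security="Internal")

an_archive_box = MasterRecord(title="Finance correspondence 2005-2009",
                              barcode="BX000123",
                              contents="File folders FIN-2005-001 to FIN-2009-044",
                              format="Archive box", retention="R10",
                              security="Records staff only")
```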

What about all the standards ‘out there’?

There is a plethora of local, state, federal, industry and international standards pertaining to the management of records. Examples include DoD 5015, MoReq2, Dublin Core, ISO 15489 and VERS, plus literally thousands of standards for Metadata.

The problem with most of these standards is that they are extraordinarily difficult to read and understand (even the Dublin Core documentation can be heavy going). I would draw a parallel back to the times when the Bible was in Latin but Christians were supposed to order their lives by its teachings; the problem was that only about 0.025% of Christians spoke Latin. Ergo, how do you order your life by a book you can’t read?

My assertion is that most records managers do not fully understand the standards they are charged with enforcing.

The problem isn’t with the records managers; it is with the people who write the standards. The standards are not written for records managers; they are written for academics and technical people (i.e., systems engineers who are experts in XML). Just like the Latin Bible, they are not written in the language of the intended user.

And even when you do think you have a grasp of the fundamentals, there are always multiple points whose exact meaning has to be clarified with the standards authority.

What about Retention/Disposal schedules?

This should probably be the subject of another paper because retention schedules have also become way too complex, unwieldy and difficult to understand and apply.

The question will be, “How can I do away with my classification system when my retention codes are linked to it?”

I have looked at hundreds of retention schedules and every single one has been way too complicated for the organization trying to use it. Another problem is that very few of the authorities that compile retention schedules do so with computers in mind. This means that we end up with lots of very vague conditional statements that are almost impossible to computerize.

Most retention schedules are written for archivists to read, not for computers to process. This is the heritage of retention schedules; they assumed an appraisal process by a trained and expert archivist.

The Continuum model, the ‘Whole of Life’ model and the File Plan model all assume we will allocate a retention code at the time the record is created, not during a later appraisal process. This makes much more sense and allows us to better manage the record throughout its lifecycle. However, many such schemes also linked the retention code to a classification term or embedded the retention codes within the classification system. This, of course, made the classification system even more complex and difficult to understand and apply.

To my mind no organization needs more than ten retention codes (shortest period, longest period and eight in between) and three life cycles (e.g., active, inactive, destroyed). This is probably also heresy to much of the records management profession, but I would ask them to consider the proposition that something entirely appropriate to the manual world is not necessarily appropriate to the computerized world. There is an easier and simpler way to manage retention, and there is no need to embed retention codes in the classification system, just as there is no need for a classification system in any modern, computerized records management system.
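As a small illustration of how simple this could be, here is a sketch of a ten-code retention table and the three life cycles; every code, period and threshold below is invented for the example, not taken from any published schedule.

```python
# An invented ten-code retention table (periods in years) and the three
# suggested life cycles: active, inactive, destroyed.
from datetime import date

RETENTION_YEARS = {  # R01 = shortest period ... R10 = longest period
    "R01": 1, "R02": 2, "R03": 3, "R04": 5, "R05": 7,
    "R06": 10, "R07": 15, "R08": 25, "R09": 50, "R10": 100,
}
YEARS_ACTIVE = 2  # assumed period before a record becomes 'inactive'

def lifecycle_state(retention_code: str, dated: date, today: date) -> str:
    """Work out where a record sits in its life cycle from one retention code."""
    age_years = (today - dated).days / 365.25
    if age_years >= RETENTION_YEARS[retention_code]:
        return "destroyed"  # i.e., due for disposal or transfer
    return "inactive" if age_years >= YEARS_ACTIVE else "active"

print(lifecycle_state("R05", date(2010, 6, 30), date(2014, 8, 18)))  # -> inactive
```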

What about File Folders and Archive Boxes?

This is the classic stumbling block. This is when the records manager tells you that all the standards require you to use the same taxonomy for emails and electronic documents that he/she uses for traditional file folders and archive boxes.

You need to explain that classification from the manual, paper-handling world is inappropriate to the computerized world; that it is an anachronism. You need to explain that all it will add is complexity, massive cost, confusion and a seriously negative attitude among end-users. You should say it is time to discard techniques and tools from the eighteenth century and adopt techniques from the twenty-first century. You should say you have a much better way. Then you should probably duck and run. Failing all else, blame me and give them my email address.

What is happening with the Tablet market?

by Frank 18. August 2013 06:00

I run a software company called Knowledgeone Corporation, and our main job is to provide the tools to capture, manage and find content. As such, we need to stay on top of the hardware and software systems our customers use so that we can constantly review and update our enterprise content management products, like RecFind 6, and keep them appropriate to the times and to the devices in use.

I have spoken in previous Blogs about tablets, form factors and what is needed for business, so, other than providing the following links, I won’t go over old ground.

Will the Microsoft Surface tablet unseat the iPad?

The PC is dead, or is it?

What will be the next big thing in IT?

Could you manage all of your records with a mobile device?

Why aren’t tablets the single solution yet?

The real impact of mobilization – How will it affect the way we work?

Mobile and the Web – The real future of applications?

Form factor – The real problem with mobile devices doing real work

Since my last Blog on the subject we have all seen Windows RT tablets come and go (there will be a big landfill of RT tablets somewhere), and we are now all watching the slow and painful demise of BlackBerry. In both cases we have to ask how big, super-clever companies like Microsoft and BlackBerry could get it so wrong. Given the number of well-educated and highly experienced marketing and product people they employ, it is inconceivable that they couldn’t work out what the average Joe in the street could have told them for free.

Then let’s also think about HP’s disastrous experiment with its TouchPad tablet (another e-waste landfill) and it becomes apparent that some of the largest, richest and best-credentialed companies in the world cannot forecast what will happen in the tablet market.

In my opinion the problem all along, apart from operating system selection (iOS or Android?), has been matching needs to form factor and processing power. For example, no one wants a 12-inch phone and no one wants to write and read large documents on a 3-inch screen. This is why most of us still carry around three devices instead of one: a phone, a tablet and a laptop. This is just plain silly; what is the point of a small form factor device if I have to supplement it with a large form factor device? Like most other users, I really just want to carry around one device, and I want it to have the capabilities and processing power for all the work I do.

It is for this reason that I believe the next big thing in the tablet market will be based on phones, not tablets. I envision slightly larger and much more powerful phones with universal connectors (are you listening, Apple?) and docking capability. I would also want such a device to support 4G at a minimum, and preferably 5G when it becomes available.

I want to be able to use it as a phone and when I get to my office I want to connect it to my keyboard, screen and network. I want to be able to connect it to a projector when visiting customers and prospects and I want a dynamically sizing desktop that knows when to automatically adjust the display to the form factor being viewed. That is, I want a different desktop for my screen at work than I want on the phone screen when travelling.

This brings up an interesting issue about the choice of operating system, as Windows runs on about 95% of all business PCs and servers. I had never previously thought about buying a Windows Phone (I had one a few years ago with Windows CE and it was awful), but my ideal device is going to have to run the Windows operating system to be really usable in my new one-device paradigm.

I wonder why Microsoft didn’t think of this.
