WELCOME TO THE VIRTUAL WORLD of Customer Complaints that major companies are now rapidly deploying...
There are numerous systems that can be ‘tweaked’ to provide the type of defence that the general public is now facing. All are actually designed to make the fact-finding mission more efficient by filtering the complaints by type and generating standard responses that can then be quickly customised by the operator to make the angry customer feel confident that their problem is being attended to. In addition, they provide management with the ability to track complaints, identify production weaknesses, and produce detailed statistical reports to aid the decision process.
In addition to a standard auto-response, some systems parse complaints for keywords, and use those triggers to generate a personalised message that need only be visually checked by the operator and, if needed, edited. Those keywords may also be used to present the operator with a number of options for resolving the issue (by retrieving data that has previously led to a successful outcome for the same problem).
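That keyword-trigger mechanism can be sketched in a few lines. This is a minimal illustration only - the keywords, templates and function name are my own assumptions, not any vendor's actual code:

```python
# Illustrative keyword-to-template mapping - not any vendor's real data.
TEMPLATES = {
    "refund":   "We are sorry about the charge. Our billing team will review your refund request.",
    "delivery": "We apologise for the delay. Your delivery is being traced as a priority.",
    "damaged":  "We are sorry your item arrived damaged. A replacement can be arranged.",
}
DEFAULT = "Thank you for contacting us. An operator will review your message shortly."

def draft_reply(complaint: str) -> str:
    """Parse the complaint for keywords and return a draft reply
    for the operator to visually check and, if needed, edit."""
    text = complaint.lower()
    for keyword, template in TEMPLATES.items():
        if keyword in text:
            return template
    return DEFAULT

print(draft_reply("My delivery is two weeks late!"))
```

Note that nothing here understands the complaint: the first matching keyword wins, and everything else in the message is ignored.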
Letter writing is time-intensive, and has long since been abandoned in both the sales and customer care departments of large firms; and there are three good reasons for that: Legal, Political Correctness, and the poor English Language Skills of overseas and immigrant staff.
Word Templates have been employed, ever since Microsoft introduced them, to reduce the time required to compose standard letters and ensure their spelling and grammar were correct; but it wasn’t long before management saw the advantages of employing that program’s mail-merge feature, together with a range of templates, to automatically generate a customised response to most enquiries.
IT-knowledgeable managers led the way utilising Microsoft Word; but the Cloud has become universal, along with its mother the Internet, and all large companies are now centralising their data there. Moreover, centralisation removes the necessity of employing numerous staff at different physical locations. You just need a small office, a single email address, and a few staff to ‘quality control’ the communication stream initiated by potential customers – and another email address, a smaller office, with even fewer staff, to deal with those pesky complainants in a similar manner.
Some of those systems linked to are, relatively speaking, retail state-of-the-art. They have been designed to ‘learn’ from the data they are given and provide that ‘intelligence’ to the operator by highlighting actions that have previously led to a successful outcome, while suppressing actions that have not. If you engage with those systems in real time, you will find that the computer ‘takes control’ of the conversation in such a way as to steer it towards a successful conclusion that occurred for other customers in the past.
Now, the latest fashion currently consuming the Main Stream Media and the business world is Artificial Intelligence (AI) – even though the IT sector uses the word ARTIFICIAL in that term quite deliberately. It is used, specifically, because no binary system can ever possibly exhibit intelligence (as we know it): it can only APPEAR to act in an intelligent way.
Binary code is just a sequence of individual electrical switches that can either be turned ON, or switched OFF. There is no other setting in-between. High-level programming languages adopted the terms TRUE and FALSE to make source code more human-readable; but TRUE means ON and FALSE means OFF. The terms bear no relationship to those used in normal human conversation.
There is no way that any binary computer can determine what is true, or what is false – because a binary system has no way of NOT DECIDING. A trinary system is required for that – a system that employs 3-way switches that can (like the Scottish judicial system) remain in NOT PROVEN mode until opting for a GUILTY or NOT GUILTY verdict.
(Programmers need a language, and a trinary chip that offers: TRUE, FALSE and WTF).
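A third state can at least be simulated in software (the chip underneath remains stubbornly binary). A minimal sketch of my own, with WTF standing in for Scotland's 'not proven':

```python
from enum import Enum

class Verdict(Enum):
    """Three-valued logic that a plain binary bool cannot represent."""
    TRUE = 1
    FALSE = 0
    WTF = -1  # 'not proven': keep waiting for more data

def decide(evidence_for: int, evidence_against: int, threshold: int = 3) -> Verdict:
    """Stay in WTF mode until the evidence clears a threshold either way."""
    if evidence_for - evidence_against >= threshold:
        return Verdict.TRUE
    if evidence_against - evidence_for >= threshold:
        return Verdict.FALSE
    return Verdict.WTF

print(decide(1, 1))  # Verdict.WTF - a bool would have been forced to pick
```

The threshold of 3 is an arbitrary assumption; the point is only that the function is allowed to return 'neither'.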
Such a machine, in theory, might then be programmed to exhibit some form of intelligence that we humans might interact with.
It is important to understand the distinction; because, when it comes to coding a complaints system, you need to establish what conditions denote a satisfied complaint – and what denotes a complaint that is unresolved. The latter is an easy question to answer: it is an original message that was time-stamped by the system and may or may not have other messages attached to it to form a thread.
For as long as the thread remains in the system: it can be classified as unresolved; but what event can be used to denote a successful outcome so that it can be removed?..
Ever get a ‘How did we do?’ email?
That is the ONLY way such systems should be designed to ensure that the human initiating a conversation can also turn it off (and indicate their satisfaction); but it is not practical in a business environment where the initiator might die, somehow not receive the system’s request, or fail to respond in due time. Moreover, what should that (binary, remember) system do when it does not obtain a response ‘in due time’? Does it default to ‘SUCCESS=TRUE;’ and remove the thread from the system? Or should it continue to wait, indefinitely, leaving ‘SUCCESS=FALSE;’ and continue to report that the complaint is unresolved (to management’s chagrin)?
You can, I hope, see the system designer’s dilemma; because human nature, when a complaint is traditionally pursued by letter, is simply to cash the enclosed cheque and walk away – without composing another to say ‘Thank you.’ Moreover, who the hell ever sent a Thank You note after being told that no recompense will be forthcoming?
For commercial reasons, some of those systems have been loosened to permit an administrator the power to declare a dispute RESOLVED. Others default to ‘SUCCESS=TRUE’ when, at the end of a certain time period, the complainant has not replied to a ‘How did we do?’.
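The designer's dilemma can be made concrete. A minimal sketch, with the grace period and status names invented purely for illustration:

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative grace period - real systems will choose their own.
FOLLOW_UP_TIMEOUT = timedelta(days=14)

def thread_status(survey_sent: Optional[datetime], survey_answered: bool,
                  now: datetime) -> str:
    """Sketch of the forced binary outcome: every thread MUST end up
    RESOLVED or UNRESOLVED - there is no 'still waiting' verdict."""
    if survey_answered:
        return "RESOLVED"    # the complainant actually told us so
    if survey_sent is not None and now - survey_sent > FOLLOW_UP_TIMEOUT:
        return "RESOLVED"    # silence is forcibly interpreted as success
    return "UNRESOLVED"

print(thread_status(datetime(2016, 4, 1), False, datetime(2016, 5, 1)))  # RESOLVED
```

Note the second branch: a complainant who simply gave up in despair is counted as a success.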
I can sense that my detailed explanations might be sending some of you to sleep; but now is the time to FLAMING WELL WAKE UP, and take your exam...
- Q1: If any of those system operators do nothing – other than reply to complainants with the templates suggested by the system – and the complainant simply gives up in despair: what outcome will the system eventually assign to that angry thread?
- Q2: When a similar complaint is received in the future, what action will the system consider as being a suitable action to take?
- Q3: Over time, what will the system’s likely strategy for dealing with any complaint be?
If you have already encountered a Customer Service Department employing one of those systems against you: you already feel my anger - and know my pain...
[Update: 29/04/2016 15:04] It seems all those systems currently share an Achilles’ heel. They can only read text - not text contained in images.
You know what to do.
Snipping Tool is your friend...
We Need To STOP This AI Stupidity

All public systems are, of course, vulnerable to attack when they provide a human interface; but when a system permits that interface to provide data that it then uses to reconfigure what was given to it by its programmers: literally anything can happen. You do not create an intelligent system - you create what is dumbass stupid.
You cannot solve the underlying problem by inserting more filters or adding more rules. You cannot create a trinary from a binary. All you will have created is another game. That is what makes this report about a Virtual Doctor employing NHS patient data so scary.
The reason for all this stupidity is the failure of educationalists to ensure that everyone understands what data is. It is just a stream of bits that the system's programmers have organised in order to access, modify, rearrange and present in different patterns. Anything can be represented as a stream of bits - provided you know how the designer of its representation organised them - and any stream can be interpreted in any way that a system's programmer dictates by assembling code to manipulate it.
Data is the primordial clay on the potter's wheel, and code is the potter's hands that craft it; but the clay is dumb and has no intrinsic value. All those web pages, all that information, all those PDFs and images that Google faithfully indexes and organises so that we can easily locate the information we seek - all are treated equally. Lies, ill-founded conspiracy theories, musings, rip-offs, scams, news reports, scientific papers, literary masterpieces - they are all there, in virtually every language; but you cannot ask Google what is true.
When Google returns its default search results: it can only bring your attention to what is trending...
Google is a binary dumbass - just like the device you are employing to read this post. All chips are binary dumbasses; because they are incapable of telling the difference between what is true and what is false.
They are not capable of judgement. They can only compute and compare.
If you wish, you can accept the judgement of the programmers when your computer provides you with a result; but why should you if they do not explain how they arrived at their displayed decision? Is the data they used to formulate the result up-to-date? Does it contain any inconsistencies? Have the programmers arrived at their conclusion using data that is not trustworthy? Have they employed rules that are no longer true? What have they chosen NOT to display by using filters?
How the hell can they arrive at a decision anyway? At best, all any computing device can do is report which option won a series of logical binary tests between competing sets of data - data that began with specific arithmetical weights (chosen and assigned by the programmer), weights that were then added to and subtracted from by the designer's algorithm as the system ran.
God bless algorithms, huh?
After all, who can beat a computer at chess? Now that's intelligence, isn't it?
'Yes,' I hear some of you say. 'They gave the computer details of all those masters' games; and it took all that data, analysed it, and developed a way of outwitting any human.'
Chess programs employ a recursive algorithm that plays out every possible move from the current position, to determine the one that will lead to the best outcome. There was no 'studying' of previous masters' games. All anyone needs to employ the algorithm is to define its parameters and provide it with some rules.
'But you still can't beat 'em, right?'
It is not a perfect algorithm. No AI algorithms are.
Computers compute. The only way they can be made to judge or discriminate is through mathematics; but the problem is maths itself.
When the computer is given the task of comparing different 'strategies' it can only determine what is best if it locates one that has been given a greater value than all the others. If the different strategies have an identical value - it can only make a 'random' choice.
Of course, YOU never use rand, do you, Mr Programmer? You sort those results and select the one that is top of the list. No random there then? No 'equal' results?
They are all there; all together; sharing the top locations in no discernible order - just like Google's results. When you pop the top of that list: you are more than likely popping a random choice...
THERE IS NO ORDER IN EQUALITY.
Check it out for yourself...
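Here is one way to check it, in Python. With equal scores, max() silently returns whichever winner it happens to meet first in iteration order; gathering the ties makes the forced random choice explicit:

```python
import random

scores = {"move_a": 10, "move_b": 10, "move_c": 10, "move_d": 7}

# max() silently returns the FIRST maximal key it meets - an arbitrary
# tie-break that merely hides the equality from you.
best = max(scores, key=scores.get)

# Making the tie explicit: gather every equal winner, then pick at random.
top = max(scores.values())
winners = [move for move, value in scores.items() if value == top]
choice = random.choice(winners)

print(winners)  # ['move_a', 'move_b', 'move_c'] - no order in equality
print(choice)   # any one of the three, decided by chance
```

Sorting changes nothing: three identical values still share the top of the list, and something other than their value decides which one you take.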
Even if you 'fine tune' your values by employing long floats instead of integers, you will still generate random decisions. All you will accomplish from introducing further 'sensitivity' is to introduce the danger of an overflow - and add to the number of equalities that the algorithm generates.
It is not the numbers: it is the numerical scale and the ratios (about which you can do nothing).
You can still beat the chess computer (every time, and at every level - if you exploit its random decisions). Remember: it has to make a choice - and 'random' is not really random anyway. It is just another algorithm, seeded by the current value of the chip's internal clock.
All binary programs must ensure that they do not enter a continuous loop and hang the system, so programmers have no choice but to insert defensive code; but when you have to force a computer to choose between two outcomes: you have a 50:50 chance of being WRONG.
That's as good as it gets. If you encounter four equal values: you only have a 25% chance of being right - and, if you encounter 10: you have a 90% chance of being WRONG.
Don't forget, we are only hypothesising about investigating one level in determining that best outcome here. Increase the levels and the probability of being wrong is magnified - horrendously. A 50% chance taken on, say, four levels, becomes 0.50^4 - just a 0.0625 chance of being right (or a 93.75% chance of being WRONG).
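The arithmetic is easy to verify:

```python
# Four levels, each forcing a 50:50 tie-break:
p_right = 0.50 ** 4
print(p_right)      # 0.0625
print(1 - p_right)  # 0.9375 - a 93.75% chance of being WRONG

# Ten equal values at a single level: one chance in ten of being right.
print(1 - 1 / 10)   # 0.9 - a 90% chance of being WRONG
```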
Those errors do not matter in gaming, because the parameters are clearly defined. There are only so many moves that can be made, at any one time, in any game - so the recursive algorithm detects when all possible end positions have been calculated - and only those final values are returned to be compared. If there are fifty, a hundred, or even a thousand equal 'routes' - it doesn't matter. Each one indicates a winning, or least-lost strategy for the computer.
It's a game. There are rules. No one gets hurt - and it doesn't really matter what criteria are used to evaluate the myriad of different positions. You just need to define what the winning position is - the checkmate - and investigate all the possible moves from, say, White's current position until you find it - whilst pruning those paths that lead to a successful outcome for Black. (The best paths are those that arrive at the winning position in the least number of moves - and those moves are equal to the number of levels the algorithm examines before locating it).
It's easy. It's a binary problem with a clearly definable WIN or LOSE to which computers are perfectly suited.
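A toy example of such a clearly definable WIN or LOSE: the 'takeaway' game (take one or two counters from a pile; whoever takes the last counter wins). This is not a chess engine - just a minimal sketch of the same play-out-every-move principle, with pile == 0 as the STOP condition:

```python
def can_win(pile: int) -> bool:
    """Return True if the player to move can force a win at 'takeaway'
    (take 1 or 2 counters; whoever takes the last counter wins).

    pile == 0 is the clearly definable STOP: the player to move has
    already lost, because the opponent took the last counter.
    """
    if pile == 0:
        return False
    # Recursively play out every legal move from the current position:
    # a move wins if it leaves the opponent in a losing position.
    return any(not can_win(pile - take) for take in (1, 2) if take <= pile)

print(can_win(4))  # True: take 1 and leave a losing pile of 3
print(can_win(3))  # False: every legal move hands the opponent the win
```

The recursion terminates because every move shrinks the pile towards the defined endpoint - precisely the property the Virtual Doctor's data set lacks.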
But do we humans make a random choice, whenever we are faced with two or more options that we judge lead to the same outcome?
No, we don't. We investigate further!
If we are examining a map to decide the best direction to take, we will use our intelligence to take the one that most suits our needs. We will reduce the number of possible routes by considering all those other factors that we first chose to ignore because we wished to arrive at a decision quickly.
Maybe we prefer the country route; maybe we'd like to take the motorway. Maybe there is somewhere, along or near one of the possible routes, which we would particularly like to visit.
We don't break out of the investigation loop: we enter another; seeking out further information about each option until we arrive at a clear conclusion. If it becomes apparent that there is no difference: then we might make a random choice - but only if our need to travel to that location is greater than our wish to be sure we are making the right decision! (Are we willing to bet that there are no other factors we should consider?).
It all depends.
If the decision you are trying to reach does not impact upon others, it doesn't really matter (other than to you) if your choice is random; but, when others are involved - when it may impact upon their lives (no matter how small) - no one other than a dumbass egotist would act upon any random decision reached.
Because the decision has not been calculated to be the correct one. It has been left to chance to decide.
Whenever you choose to base your decisions upon what binary suggests: you are always placing a bet.
Computers can't think. They can only play games.
That is Artificial Intelligence.
Remember: binary has to make a decision. Once it has started, it must be forced to stop. It cannot wait, in WTF mode, for more data to analyse. If it is not forced to make a random choice when paths are equal: it just enters a continuous loop.
Thing is: would you entrust your medical diagnosis to a dumbass binary that treats all data equally and often makes random decisions in order to complete its task?
Think about it...
Think about that NHS patient data...
Medical AI is just like that Customer Complaints system - but infinitely worse...
Do you make a 'thank you' appointment with your doctor when his prescriptions brought about your cure? Moreover, did his prescriptions actually cure you? Or did they just assist your natural defence mechanism to defeat the infection? And what about those other patients, with the same complaint, given the same prescription, who had to make another appointment because they were NOT getting better? Was it a misdiagnosis? Did the patient withhold pertinent information? Or did they just want another day off-work?
Garbage In, Garbage Out; but what makes the prospect of the Virtual Doctor most frightening is that it is not utilising a complete data set.
There is no demonstrable chess board from which to define the algorithm's parameters; and no limit to the number of objects that can be employed to give the algorithm its rules. There are also no clearly definable criteria (other than the patient is not dead) that can be used to qualify what a best outcome is, to have the algorithm stop searching and return its result.
All algorithms must have a clearly definable objective to use as a STOP command - otherwise they will hang the computer in a continuous loop, or, if they are recursive, overflow the stack and possibly corrupt any other task the computer is performing.
You see, no algorithm can work unless it can judge outcomes in terms of simple numerical values. It can only compare numbers. It is just a simple routine, running on a simple calculator. The only way it can judge between outcomes is to compare their assigned numerical values - and the only way it can provide a result is to stop what it is doing and report its 'conclusion'.
ON... OFF... ON... OFF.
It's a binary.
The computer does not care about the data that underlies the values it is given. It may represent a person; it may represent an anonymous patient history; it may represent a game piece; or any other component it is manipulating. But, if the computer is tasked to compare two objects (and it can only compare two at a time) - it can only compute the difference in their individual numerical values to provide a result.
In that chess game, even though all those successful conclusions result in different positions, with differing numbers of black and white pieces remaining on the board: those accurately computed endgame positions are NEVER COMPARED to each other. You cannot assign values to individual pieces, add those values together, and provide that total as a 'position' value to the algorithm - because it means nothing. The algorithm is binary: it just needs to know what is a good result and what is bad. It cannot make use of anything else. If you tell it a Queen is worth 40; a Bishop 30; a Knight 20; and a Rook 10: it does not mean anything to dumbass binary. When it encounters a position value of 50 it has no way of knowing if that represents a Queen and a Rook; a Bishop and a Knight; two Knights and a Rook - or any other combination.
Again, maths is the problem.
You might think that you could task the computer to locate those positions where White was valued at 300, and Black was valued at 100 - and you certainly could. If there were one or more such positions, the computer could certainly find them; but if there were not - it would hang or overflow the stack.
Defensively, you would need to determine all those paths that led to White having a value of 300; all those valuing Black at 100; and then compare them all to see if there was a common path that led to identical positions in which both values were present - if you had the time.
You see, binary is just a dumb male that can really only do one thing at a time - and the only reason binary often appears to be quicker than the human brain is because it locks onto the first solution and ignores all other possibilities.
AI is not just artificial, it is also very superficial.
Of course, if we wanted binary to provide full details of all those 'positions' that it had determined as being the best, we could. It is easy to tag on a report; but that would only present numerous, perhaps hundreds or even thousands, of different combinations for us to consider - and that would force us to think...
Just what are we trying to achieve by studying all these dumb patterns and pathways?
What is so fascinating about having a computer identify all possible combinations of the data it is given, and arbitrarily select one by chance? Moreover, what on earth can be gained by that examination (unless it represents some form of guidance map)?
What, on earth, has happened to SCIENCE?..
The idea that you can infer ANYTHING from a partial set of data, which is constantly evolving, is mind-boggling-dumbass-binary-stupid. Such a system can NEVER assist medical science or its evolution: because it seeks only to determine the patterns in the data given to it that occur most frequently. That is all binary can do. It does not matter how you dress it up, or what 'scientific' term you might assign to your system - it is just dumb binary code, manipulating a dumb binary machine.
Your doctor treats YOU. He analyses YOU. He gets to know YOU. He bases his decisions upon what he knows of SCIENCE. He chooses the drugs YOU are most suited to BEFORE he considers the results he has obtained from prescribing them to his other patients. He is actively managing YOUR health - and is not about to prescribe ANYTHING on the basis of hearsay. He doesn't want to know that penicillin 'cured' your next-door neighbour. What bearing has that on what ails you?..
Your doctor bases his decisions upon what he knows of a healthy body, and compares that with what he has been taught about disease. He does not assume you are ill, and then try to match your symptoms with those in a huge database to find the best fit (which will, more likely than not, be random).
Do I have to ask those same three questions again?..
Can you really not see to where that Virtual Doctor is leading?..
It is leading to a universal prescription, containing 'optimal' quantities of different drugs, that have been 'shown to provide a cure' for every ailment that the database contains.
A single pill, that the pharmaceutical companies will then sell us - removing the need for doctors to whom we can bring our complaints (and the necessity for conducting further expensive research).
It is the natural conclusion to which dumb binary will always be drawn as it is given more and more data. It will always settle upon the equal, and be forced to make a random choice.
Now, get this: scientific clinical trials make use of two groups, specifically chosen to be as alike as possible. One group is given the drug, and the other a placebo; but some trials conclude that the placebo was the more effective.
There is absolutely no science being conducted from all that NHS data. Drugs, placebos, good outcomes, bad outcomes - they are each given equal weight by dumbass binary - and, of course, there is absolutely no way of knowing if the patients took all the medication they were prescribed, took part of it, or just gave it to a neighbour who told them they had the same complaint. Furthermore, none of those records contain details of over-the-counter medication that the patient may have taken whilst seeking a cure.
That vast Virtual Doctor database will be just as easy for contributors to manipulate as Google's rankings are. When the pharmaceutical companies add their own, carefully prepared data, those patterns will emphasise similar existing patterns - and those latest patterns will, just like a carefully constructed Web Site, rise to the top. The result will not mean anything - it is just a more visible pattern; but it will be sold as conclusive evidence for whatever the pharmaceutical companies wish to claim.
'There you go, Joe. Look what a clever Web Designer I am! Your business hasn't even opened its doors yet; but, there you are, at the top of Google's search rankings!'
All data is interesting. It may provide clues when examined intelligently; but it sure as hell cannot be relied upon, nor can any conclusion be based upon it - just as my fingerprints at a crime scene do not prove that I committed the crime.
What Can Be Done?

Earlier, I used simple calculations to explain just how dangerous it is to blindly assume that any computer output is correct. Maths can only be trusted for as long as no part of the computation makes use of a random component.
Artificial Intelligence, I have shown, always employs random choices in order to break out of its loops; but that behaviour can also provide a quantifiable measure of just how much trust can be placed in its results.
All programmers need to do is ensure the truth is always told: by calculating the odds of their results being correct; and displaying that information along with the results that were calculated.
They just need to provide a 'health warning.'
It's an easy patch. You just need to modify the algorithm to ensure it counts the number of equal paths it is being forced to choose between; calculate the odds; and update a global CONFIDENCE float accordingly.
Similarly, when any kind of statistical analysis is employed, those confidence levels also need to be multiplied in.
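What such a 'health warning' patch might look like, as a minimal sketch (the function and the confidence bookkeeping are my own illustration of the idea, not anyone's shipped code):

```python
import random

def pick_with_confidence(options: dict, confidence: float = 1.0):
    """Pick the highest-valued option. Whenever equal values force a
    random choice, multiply the running confidence by the odds that
    the choice was the right one - the 'health warning'."""
    top = max(options.values())
    winners = [name for name, value in options.items() if value == top]
    if len(winners) > 1:
        confidence *= 1.0 / len(winners)
    return random.choice(winners), confidence

choice, conf = pick_with_confidence({"route_a": 5, "route_b": 5, "route_c": 2})
print(conf)  # 0.5 - only a 50% chance this was the right call
```

Chain the returned confidence through every tie-break the run encounters and you have the number the user should be shown alongside the result.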
The only thing is: you do all that work; iron out the bugs; run the first bench test - and then, when you finally see those results, you wonder WTF you ever bothered...
Gaming has fast become the sales pitch for what now passes as modern science touting its mathematical models; but none of those models has ever revealed the 'scientific equation' upon which it is based.
That, of course, is because there isn't one.
The fact is: all computer models employ NUMEROUS, different, scientific and statistical equations - haphazardly strung together by programmers in an effort to reproduce selective patterns from a mass of randomly collected samples by employing chance.
None of that data is actually analysed. It is just re-ordered into similar, manageable chunks that are then assigned different values to place them all upon a numeric scale (similar to a graph's single line), which an algorithm might then employ to detect similar patterns in which selected combinations of chunks are most visible upon that global scale.
Retrieving text from images?
You're right; but those applications are provided with a complete data set (the image) and the algorithm can examine every possible path. Not so with an incomplete data set, which, by definition, cannot be thoroughly examined.
It seems that, whenever IT takes a step forward, the dumbass 'AI' brigade incorporate the new method into their own applications for which it was never designed.
Humans think in three dimensions, not two; we live in a three-dimensional universe; the Earth is not flat.
Need I continue?..
We have all been conditioned by television to believe that any screen is a window upon the real world. Some of us can remember the days when there was only the BBC that provided movie-screen news, and drama. So there we sat, watching; believing the news; and being entertained by each new drama.
Believing the news.
That worked out SO well, didn't it?..
We have all been conditioned by that behaviour. We have been conditioned to accept whatever we are told; and we no longer teach our children to believe NOTHING - unless they can verify it for themselves.
We need to keep asking questions, when answers are denied to us; and when someone is a proven liar: we must never trust them again.
Above all, we need to stop this AI stupidity, and to stop it now; before innocent people suffer because we chose not to discriminate - and decided to treat TRUE and FALSE with equal contempt...