The Most Powerful AI Needs Human Judgement
Mike Moran

I grow weary of reading the simplistic headlines around the impact of AI. Some people say that AI will put many of us into a new leisure class that doesn't need to work. Others argue that AI will make us all unemployed. They are actually saying the same thing, so it is really just a personality test dividing optimists from pessimists. But no past technology has had that kind of impact, so why would this one be different? It probably isn't.
What is much more likely is that as machines do more, we humans will do something else. Something machines can’t do yet. That’s the way it has always been, so I think that is the way to bet.
What fuels my belief that this is true is that the most powerful AI we see today depends on human judgement. No, I don't mean the highly-paid data scientists and AI engineers who are all the rage these days. Sure, they are important, but I am talking about ordinary people doing ordinary jobs, using judgement that computers just don't have. This technique is called semi-supervised machine learning, or active learning.
Here is how it works. Supervised machine learning is what most AI applications use. They need human judgement, too, but they use it only at the beginning. They ask humans to tell the system the right answer to a question–for example, whether a tweet has positive or negative sentiment. You pile up enough tweets with human answers and use that to train the AI system. So far, so good. But that is where most systems stop.
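To make that training step concrete, here is a minimal sketch of a supervised sentiment classifier, written as a toy Naive Bayes model in plain Python. The labeled tweets, the `pos`/`neg` labels, and the function names are all made up for illustration; a real system would use far more data and a proper library.

```python
import math
from collections import Counter

def train(labeled_tweets):
    """Build a tiny Naive Bayes model from human-labeled (text, label) pairs."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    label_counts = Counter()
    for text, label in labeled_tweets:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(model, text):
    """Return the label with the higher log-probability for this text."""
    word_counts, label_counts = model
    total = sum(label_counts.values())
    vocab = set(word_counts["pos"]) | set(word_counts["neg"])
    scores = {}
    for label in ("pos", "neg"):
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical pile of tweets with human answers
labeled = [
    ("love this great product", "pos"),
    ("what a great day", "pos"),
    ("terrible awful service", "neg"),
    ("this is awful", "neg"),
]
model = train(labeled)
print(predict(model, "great service"))  # → pos
```

Once trained, the model answers on its own–which is exactly where, as the article notes, most systems stop.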
The most powerful systems keep getting better, using semi-supervised machine learning. The secret is something called the confidence score. Most AI systems can do more than just answer the question. Beyond telling you that they think this tweet is positive or that tweet is negative, they can tell you how confident they are in that opinion. So, the system might be 90% confident that this tweet is positive and just 60% confident that another tweet is negative. This provides some interesting possibilities for semi-supervision.
You can set up your system to handle automatically any tweet it classifies with over 70% confidence. If it is that sure of itself, let it provide the answer on its own. But if it is less than 70% confident, you can refer that tweet to a human being to check its answer. Is that tweet negative–the one with 60% confidence? Checking the answers the system isn't sure of is semi-supervision, and it has two benefits. The first is that the system is more likely to get the answers right if it can ask a human to check its work.
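That routing rule is simple enough to sketch directly. The snippet below assumes a classifier has already produced (text, label, confidence) triples–the tweets and scores here are invented–and splits them into an automatic queue and a human-review queue at the 70% threshold the article uses.

```python
def triage(scored_tweets, threshold=0.70):
    """Split classifier output into auto-handled and human-review queues.

    scored_tweets: list of (text, label, confidence) triples, where
    confidence is the model's certainty in its own prediction.
    """
    auto, review = [], []
    for text, label, confidence in scored_tweets:
        if confidence >= threshold:
            auto.append((text, label))      # sure enough: answer on its own
        else:
            review.append((text, label))    # not sure: ask a human to check
    return auto, review

# Hypothetical classifier output matching the article's examples
scored = [
    ("Loving the new release!", "positive", 0.90),
    ("Not sure how I feel about this", "negative", 0.60),
]
auto, review = triage(scored)
print(auto)    # → [('Loving the new release!', 'positive')]
print(review)  # → [('Not sure how I feel about this', 'negative')]
```

The threshold itself is a business decision: lower it and humans see less work but more mistakes slip through; raise it and quality improves at the cost of more human review.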
The second benefit is that each new human answer is new training data that the system can use to improve its model. By constantly asking for help with the answers it is least sure of, the system is improving itself as rapidly as possible. You can add more training data at any time to any machine learning system, but if your new training data is merely adding more examples of what the system is already doing well, it doesn’t cause any improvement. Only by adding new training data in the areas that the system is getting wrong does improvement happen.
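One round of that improvement loop can be sketched as follows. Everything here is a stand-in: the "model" just memorizes labels, and `ask_human` pretends a reviewer answered–the point is only to show the shape of the loop, where the least-confident predictions are sent to a human and the answers go back into the training set.

```python
def active_learning_round(labeled, unlabeled, train, predict_with_confidence,
                          ask_human, budget=2):
    """One round of semi-supervision: ask humans about the predictions the
    model trusts least, add those answers to the training set, retrain."""
    model = train(labeled)
    # Sort so the least-confident examples come first: they teach the most
    by_confidence = sorted(
        unlabeled, key=lambda text: predict_with_confidence(model, text)[1])
    for text in by_confidence[:budget]:
        labeled.append((text, ask_human(text)))  # human supplies the answer
        unlabeled.remove(text)
    return train(labeled)                        # improved model

# Hypothetical stand-ins so the sketch runs end to end
def train(labeled):
    return dict(labeled)                         # "model" memorizes labels

def predict_with_confidence(model, text):
    label = model.get(text, "positive")
    return label, (0.95 if text in model else 0.55)

def ask_human(text):
    return "negative"                            # pretend the human answered

labeled = [("great stuff", "positive")]
unlabeled = ["meh, could be better", "not great"]
model = active_learning_round(labeled, unlabeled, train,
                              predict_with_confidence, ask_human)
```

After the round, both low-confidence tweets carry human labels and the retrained model knows them–which is the article's point: new answers only help where the system was unsure.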
So, yes, machine learning is very important. But semi-supervised machine learning provides the most rapid way of continuously improving your machine learning application. If your team isn't using that approach, it might be time to ask why not.