One of the hottest areas today is Machine Learning (ML). It will do magic. Cure cancer (literally) and free humanity from drudgery. Which is so naive. Humans have an infinite capacity to make a mess of things. One day the machines will learn that too.
Recently I’ve been shown product demos claiming to use ML to solve aspects of rather intractable problems. Problems usually worked on by skilled architects or specialized experts in their fields. In both cases, the (different) presenters claimed that through ML, their product would, in one case, define and architect a system, and in the other, improve a business process.
I was intrigued and incredulous. I could see some automation, but solving the whole thing?
I did not believe it. But when something is hot and new, we often lack the language, confidence and assessment framework to refute or judge the claims being made. I politely asked a few questions but I felt a bit uncomfortable. I kind of stewed for a few days.
Finally, I arrived at a set of questions and a way of thinking that help me have a more detailed dialogue with people making claims based on ML. Here it is:
Schooling. Where did your ML algorithm go to school? Who were the teachers?
Curriculum. If the machine is to learn, it needs data. Lots of data. How much data was used to teach the machine? What was the quality and provenance of the data? Is it sufficient data for learning to happen? What biases are built into the data sets?
Graduation. What exams did the ML algorithms pass? What grades did they get?
Work Experience. Where and when did they do their internships? What projects and results have been produced?
This framework may seem humorous (I hope) but it’s also useful.
ML algorithms are only as good as their teachers. And for now, all the teachers are human.
ML requires a sufficiently large amount of data on which it can learn. This data is actually hard to get! In the two cases above, there are no data sets that can be bought or used, so it struck me that the ML algorithm would deliver trivial advice, heavily biased toward the one or two experiences encoded. Not enough data to learn anything. And the definition of "sufficiently large" will vary by problem.
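To make the "not enough data" point concrete, here is a minimal sketch in plain Python. It uses a made-up coin-flip "dataset" (no real ML library, just an estimate of a rate), but the lesson carries over: a conclusion drawn from a handful of examples can be wildly off, while a large sample converges on the truth.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_RATE = 0.7  # the "ground truth" the machine is trying to learn

def estimate_rate(n_examples):
    # "Teach" from n_examples observations of a process with a 70% success rate.
    observations = [1 if random.random() < TRUE_RATE else 0
                    for _ in range(n_examples)]
    return sum(observations) / n_examples

tiny = estimate_rate(3)       # the "one or two experiences encoded" case
large = estimate_rate(10000)  # a sufficiently large curriculum

# The tiny estimate can only be 0, 1/3, 2/3, or 1 -- it literally cannot
# express the true rate. The large estimate lands very close to 0.7.
print(f"from 3 examples:      {tiny:.2f}")
print(f"from 10,000 examples: {large:.2f}")
```

The same failure mode applies to the demos above: a model trained on one or two encoded experiences can only repeat variations of those experiences back at you.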
On graduation, the notion of exams to pass your class is the equivalent of Quality Assurance in software development. How do we know someone knows something? We test them.
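In software terms, the "exam" is evaluation on a held-out test set: questions the system never saw during training. A minimal sketch, where the model and the exam questions are both hypothetical stand-ins:

```python
# A hypothetical "exam": grade a model on held-out questions it was not trained on.
def exam(model, held_out):
    correct = sum(1 for question, answer in held_out if model(question) == answer)
    return correct / len(held_out)  # fraction answered correctly

# A toy "model" that classifies numbers as even or odd -- a stand-in
# for whatever system the vendor is demoing.
def toy_model(n):
    return "even" if n % 2 == 0 else "odd"

# Held-out question/answer pairs the model never saw.
held_out = [(2, "even"), (3, "odd"), (10, "even"), (7, "odd")]

grade = exam(toy_model, held_out)
print(f"grade: {grade:.0%}")  # prints "grade: 100%"
```

When a vendor can't say what their equivalent of `exam` is, or what grade the system got, that's the answer to the graduation question.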
On work experience, same thing. Nice that you studied and passed your exam. But work experience is how we know someone actually ‘knows’ how to do something.
In summary, these are the questions I will be asking about ML going forward.
Remember, my opinions only represent my opinions.