Teradata CTO: 'Black box' deep learning algorithms could limit enterprise use


Speaking at Teradata Universe in Nice, France, Teradata CTO Stephen Brobst explained how a lack of transparency around deep learning algorithms could limit enterprise use of the technology.


A wide range of businesses are already showing how machine learning can be applied to their operations.

But while the technology offers clear advantages in a variety of fields – automating processes and improving the effectiveness of existing systems – a lack of transparency could make it unsuitable for customer-facing systems.

According to Teradata CTO Stephen Brobst, the complexity of deep learning means that it can be extremely difficult to explain why decisions are made by an algorithm.

"One of the disadvantages of deep learning is that it is not very transparent," Brobst said, speaking at Teradata Universe in Nice today. "These multi-layer neural networks with their intermediate data representations are not understandable by the average human, or even the expert human, to be frank."

Brobst referred to a recent IDC report which predicted that, by the end of the decade, a lack of transparency and perceived bias in deep learning algorithms would lead to "backlash" against AI-based services. He said: "This could be a big problem particularly if you are using machine learning for something that has a consumer-facing impact."

Brobst gave the example of deep learning within the healthcare sector, where AI systems could be used to suggest a pathway of care for a patient. This is an ideal application, he said, because deep learning can draw on a wide variety of data to reach conclusions – such as genomic structure, X-ray images and family history.

"But then if [a system] makes the recommendation 'please treat the patient in this way', but you can't explain to the doctor how you got there, or why you should do that, doctors are going to feel very nervous about treating a patient kind of blindly," he said.

While doctors can tie back recommendations to specific research papers, for example, the workings of a neural network are not something that can be easily explained. "For those decisions we need complete transparency," Brobst said. "Deep learning in the current state we are [in] is actually not a good choice."

Another example is within the financial sector. Brobst explained: "Let's say that you are a financial services organisation and you are granting credit or not to an individual based on your deep learning algorithms. When a regulator comes in to say 'well, why did you give credit to this person and not that person', [to reply that] 'I don't know, my black box told me so' is not a very good answer. I need to be able to prove that I am not discriminating with inadvertent redlining."

Despite barriers in certain cases, deep learning will have many uses in large organisations, Brobst said.

One example he cited is a manufacturing firm that is improving the yield and quality of its products by processing data on the millions of variables that can influence these factors.

"Using this deep learning technique to identify the opportunities for improving their manufacturing processes is absolutely huge," he explained, adding that using traditional 'brute force' computation would not be suitable.

Fraud detection, demand prediction and failure prediction are other clear areas for the technology, as they are not customer-facing.

"Granting a credit or not, or the particular procedure you use as a doctor is very consumer facing," he said, adding that "for internal improvement, you are probably okay in most cases."

"So pick your applications carefully."
