Reading about Deep Neural Networks

I’m diving into tuning hyperparameters for multilayer perceptrons (MLPs) and deep neural networks for a zero-code machine learning application I’m working on. Today I came across a helpful post by Jason Brownlee that outlines the use cases for MLPs, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and hybrid neural networks. Since most of the data I anticipate my software package working with is tabular, MLPs seem like a good place to start.

Looking into the space of hyperparameters I might want to tune for an MLP, I found this interesting StackExchange discussion that outlines some rationale for only using one or two hidden layers. Apparently, very few problems benefit from more than two layers. In fact, very few problems benefit from more than one…
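As a quick sanity check of that advice, here’s a minimal sketch (assuming scikit-learn is available, and using the bundled iris dataset purely as a stand-in for tabular data) that compares a one-hidden-layer MLP against a two-hidden-layer one. The layer sizes are arbitrary choices for illustration, not recommendations:

```python
# Compare one vs. two hidden layers on a small tabular dataset.
# hidden_layer_sizes is the hyperparameter that controls depth and width.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for sizes in [(16,), (16, 16)]:  # one hidden layer, then two
    clf = MLPClassifier(
        hidden_layer_sizes=sizes, max_iter=2000, random_state=0
    )
    clf.fit(X_train, y_train)
    print(sizes, clf.score(X_test, y_test))
```

On a dataset this simple, the second layer typically adds little, which is consistent with the StackExchange rationale above.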
