# nupic.tensorflow.models

class `GSCSparseCNN(cnn_out_channels=(64, 64), cnn_percent_on=(0.095, 0.125), linear_units=1000, linear_percent_on=0.1, linear_weight_sparsity=0.4, boost_strength=1.5, boost_strength_factor=0.9, k_inference_factor=1.5, duty_cycle_period=1000, data_format=tensorflow.python.keras.backend.image_data_format, pre_trained=False, name=None, batch_norm=True, **kwargs)`

Bases: `tensorflow.keras.Sequential`

Sparse CNN model used to classify the Google Speech Commands dataset, as described in the *How Can We Be So Dense?* paper.

Parameters
• cnn_out_channels – Number of output channels for each CNN layer

• cnn_percent_on – Percent of units allowed to remain on in each convolution layer

• linear_units – Number of units in the linear layer

• linear_percent_on – Percent of units allowed to remain on in the linear layer

• linear_weight_sparsity – Percent of weights that are allowed to be non-zero in the linear layer

• k_inference_factor – During inference (`training=False`), percent_on in all sparse layers is increased by this factor

• boost_strength – Boost strength (0.0 implies no boosting)

• boost_strength_factor – Boost strength factor to use, in the range [0..1]

• duty_cycle_period – The period used to calculate duty cycles

• data_format – One of `channels_first` or `channels_last`, the ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape (batch, height, width, channels), while `channels_first` corresponds to inputs with shape (batch, channels, height, width). Similar to the data_format argument in `keras.layers.Conv2D`.

• pre_trained – Whether or not to create a pre-trained model

• name – Model name

• batch_norm – Whether or not to use batch normalization layers
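The percent_on, boost_strength, and duty_cycle_period parameters together describe a k-winners-take-all activation: only the top fraction of units in a layer stay on, and units with low duty cycles (activation frequencies) are boosted so activity spreads across the layer. A minimal NumPy sketch of the idea follows; the function name and the exact boost formula are illustrative, not the library's implementation:

```python
import numpy as np

def k_winners(x, percent_on, duty_cycles, boost_strength):
    """Keep the top-k units by boosted activation; zero out the rest."""
    k = max(1, int(round(percent_on * x.size)))
    # Units whose duty cycle trails the target get an exponential boost,
    # so rarely-active units can still win a slot and learn.
    boost = np.exp(boost_strength * (percent_on - duty_cycles))
    winners = np.argsort(x * boost)[-k:]
    out = np.zeros_like(x)
    out[winners] = x[winners]  # winners keep their unboosted values
    return out

x = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6, 0.05, 0.5])
duty = np.full(x.size, 0.1)  # running activation frequencies
y = k_winners(x, percent_on=0.3, duty_cycles=duty, boost_strength=1.5)
print(np.count_nonzero(y))  # 3 of 10 units remain on
```

During inference, percent_on would be multiplied by k_inference_factor before selecting winners, letting somewhat more units through than during training.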

class `GSCSuperSparseCNN(data_format=tensorflow.python.keras.backend.image_data_format, pre_trained=False, name=None, batch_norm=True)`

Super Sparse CNN model used to classify the Google Speech Commands dataset, as described in the *How Can We Be So Dense?* paper. This model is a sparser version of GSCSparseCNN.
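The weight-sparsity mechanism that this variant tightens (linear_weight_sparsity above) amounts to fixing a binary mask over the linear layer's weight matrix so that only the given fraction of entries can ever be non-zero. A plain-NumPy sketch of such a static mask, illustrative only and not the library's code:

```python
import numpy as np

def sparse_weight_mask(n_in, n_out, weight_sparsity, seed=0):
    """Binary mask with a `weight_sparsity` fraction of ones per output unit."""
    rng = np.random.default_rng(seed)
    k = int(round(weight_sparsity * n_in))  # non-zero inputs per output unit
    mask = np.zeros((n_in, n_out))
    for j in range(n_out):
        on = rng.choice(n_in, size=k, replace=False)
        mask[on, j] = 1.0
    return mask

mask = sparse_weight_mask(n_in=1600, n_out=1000, weight_sparsity=0.4)
# Re-applying the mask after every weight update keeps the zeros at zero:
weights = np.random.default_rng(1).standard_normal((1600, 1000)) * mask
print(mask.sum(axis=0)[0])  # 640.0 non-zero weights per unit
```

With weight_sparsity=0.4, each output unit connects to only 40% of its inputs; the super-sparse model uses lower values for both unit and weight sparsity than GSCSparseCNN.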