Energy Flow Networks (EFNs) and Particle Flow Networks (PFNs) are model architectures designed for learning from collider events as unordered, variable-length sets of particles. Both EFNs and PFNs are parameterized by a learnable per-particle function $\Phi$ and latent space function $F$.

An EFN takes the following form:

$$\text{EFN} = F\left(\sum_{i=1}^M z_i\, \Phi(\hat p_i)\right),$$

where $z_i$ is a measure of the energy of particle $i$, such as $z_i = p_{T,i}$, and $\hat p_i$ is a measure of the angular information of particle $i$, such as $\hat p_i = (y_i,\phi_i)$. Any infrared- and collinear-safe observable can be parameterized in this form.

A PFN takes the following form:

$$\text{PFN} = F\left(\sum_{i=1}^M \Phi(p_i)\right),$$

where $p_i$ is the information of particle $i$, such as its four-momentum, charge, or flavor. Any observable can be parameterized in this form. See the Deep Sets framework for additional discussion.
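The shared Deep Sets structure of both architectures can be sketched in plain NumPy. The `phi` and `F` below are hypothetical fixed stand-ins for the learned modules (the real $\Phi$ and $F$ are trained dense networks), chosen only to show the summation structure and the permutation invariance it guarantees:

```python
import numpy as np

# Toy stand-ins for the learned modules; illustration only, not the library code.
def phi(phat):
    y, ang = phat                       # e.g. (rapidity, azimuth)
    return np.array([y, ang, y * ang])  # three "latent observables"

def F(latent):
    return float(np.tanh(latent).sum())

def efn(zs, phats):
    # EFN: F applied to the energy-weighted sum of per-particle Phi values
    return F(sum(z * phi(p) for z, p in zip(zs, phats)))

def pfn(ps):
    # PFN: same structure, but without the energy weighting
    return F(sum(phi(p) for p in ps))

zs = [0.5, 0.3, 0.2]
phats = [(0.1, 0.2), (-0.4, 1.0), (0.3, -0.7)]

# The sum over particles makes the output independent of particle ordering
out_fwd = efn(zs, phats)
out_rev = efn(zs[::-1], phats[::-1])
```

Because the per-particle outputs are combined by an unordered sum, both models treat an event as a set: reordering the particles leaves `out_fwd` and `out_rev` equal.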

Since these architectures are not used by the core EnergyFlow code and require the external Keras and scikit-learn libraries, they are not imported by default but must be explicitly imported, e.g. `from energyflow.archs import *`.

EnergyFlow also contains several additional model architectures to make it easy to use common models that frequently appear at the intersection of particle physics and machine learning.

### EFN

Energy Flow Network (EFN) architecture.

```
energyflow.archs.EFN(*args, **kwargs)
```

See `ArchBase` for how to pass in hyperparameters.

**Required EFN Hyperparameters**

**input_dim** : *int* - The number of features for each particle.

**ppm_sizes** : {*tuple*, *list*} of *int* - The sizes of the dense layers in the per-particle frontend module $\Phi$. The last element will be the number of latent observables that the model defines.

**dense_sizes** : {*tuple*, *list*} of *int* - The sizes of the dense layers in the backend module $F$.

**Default EFN Hyperparameters**

**ppm_acts**=`'relu'` : {*tuple*, *list*} of *str* - Activation function(s) for the dense layers in the per-particle frontend module $\Phi$. A single string will apply the same activation to all layers. See the Keras activations docs for more detail.

**dense_acts**=`'relu'` : {*tuple*, *list*} of *str* - Activation function(s) for the dense layers in the backend module $F$. A single string will apply the same activation to all layers.

**ppm_k_inits**=`'he_uniform'` : {*tuple*, *list*} of *str* - Kernel initializers for the dense layers in the per-particle frontend module $\Phi$. A single string will apply the same initializer to all layers. See the Keras initializer docs for more detail.

**dense_k_inits**=`'he_uniform'` : {*tuple*, *list*} of *str* - Kernel initializers for the dense layers in the backend module $F$. A single string will apply the same initializer to all layers.

**latent_dropout**=`0` : *float* - Dropout rate for the summation layer that defines the value of the latent observables on the inputs. See the Keras Dropout layer for more detail.

**dense_dropouts**=`0` : {*tuple*, *list*} of *float* - Dropout rates for the dense layers in the backend module $F$. A single float will apply the same dropout rate to all dense layers.

**mask_val**=`0` : *float* - Particles with all features set equal to this value will be ignored. See the Keras Masking layer for more detail.
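To see concretely what `input_dim`, `ppm_sizes`, and `dense_sizes` control, here is a toy NumPy forward pass with random weights (a sketch of the layer shapes only, not the trained Keras model; `make_mlp` and `forward` are hypothetical helpers):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, sizes):
    # Random weight matrices for a ReLU MLP; stand-in for trained dense layers.
    dims = [in_dim] + list(sizes)
    return [rng.normal(size=(m, n)) for m, n in zip(dims[:-1], dims[1:])]

def forward(weights, x):
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # dense layer + ReLU
    return x

input_dim, ppm_sizes, dense_sizes = 3, (100, 100, 128), (100, 100)
phi_weights = make_mlp(input_dim, ppm_sizes)      # per-particle module Phi
F_weights = make_mlp(ppm_sizes[-1], dense_sizes)  # backend module F

particles = rng.normal(size=(30, input_dim))      # one event: 30 particles
latent = forward(phi_weights, particles).sum(axis=0)  # sum over particles
backend_out = forward(F_weights, latent)
```

Note how the last element of `ppm_sizes` (128 here) fixes the dimension of the latent summation layer, which in turn is the input dimension of $F$.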

#### eval_filters

```
eval_filters(patch, n=100, prune=True)
```

Evaluates the latent space filters of this model on a patch of the two-dimensional geometric input space.

**Arguments**

**patch** : {*tuple*, *list*} of *float* - Specifies the patch of the geometric input space to be evaluated. A list of length 4 is interpreted as `[xmin, ymin, xmax, ymax]`. Passing a single float `R` is equivalent to `[-R,-R,R,R]`.
**n** : {*tuple*, *list*} of *int* - The number of grid points on which to evaluate the filters. A list of length 2 is interpreted as `[nx, ny]`, where `nx` is the number of points along the x (or first) dimension and `ny` is the number of points along the y (or second) dimension.
**prune** : *bool* - Whether to remove filters that are all zero (which happens sometimes due to dying ReLUs).

**Returns**

- (*numpy.ndarray*, *numpy.ndarray*, *numpy.ndarray*) - Returns three arrays, `(X, Y, Z)`, where `X` and `Y` have shape `(nx, ny)` and are arrays of the values of the geometric inputs in the specified patch. `Z` has shape `(num_filters, nx, ny)` and is the value of the different filters at each point.
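The return convention can be mimicked with NumPy to clarify the shapes. The `toy_filter` and `eval_filters_sketch` below are hypothetical (the real filters are the components of the trained $\Phi$ evaluated on the grid), but they follow the documented `patch` and `n` conventions:

```python
import numpy as np

# Hypothetical filter; the real filters come from the trained Phi module.
def toy_filter(x, y):
    return np.exp(-(x**2 + y**2))

def eval_filters_sketch(R, n=100):
    # A single float R is treated as the patch [-R, -R, R, R], on an n x n grid.
    xs = np.linspace(-R, R, n)
    ys = np.linspace(-R, R, n)
    X, Y = np.meshgrid(xs, ys, indexing='ij')  # each has shape (nx, ny)
    Z = np.stack([toy_filter(X, Y)])           # shape (num_filters, nx, ny)
    return X, Y, Z

X, Y, Z = eval_filters_sketch(0.4, n=50)
```

`X`, `Y`, and `Z` can then be passed directly to e.g. `matplotlib.pyplot.contourf` to visualize each filter.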

### PFN

Particle Flow Network (PFN) architecture. Accepts the same hyperparameters as the `EFN`.

```
energyflow.archs.PFN(*args, **kwargs)
```

### CNN

Convolutional Neural Network architecture.

```
energyflow.archs.CNN(*args, **kwargs)
```

See `ArchBase` for how to pass in hyperparameters.

**Required CNN Hyperparameters**

**input_shape** : {*tuple*, *list*} of *int* - The shape of a single jet image. Assuming that `data_format` is set to `channels_first`, this is `(nb_chan,npix,npix)`.

**filter_sizes** : {*tuple*, *list*} of *int* - The size of the filters, which are taken to be square, in each convolutional layer of the network. The length of the list will be the number of convolutional layers in the network.

**num_filters** : {*tuple*, *list*} of *int* - The number of filters in each convolutional layer. The length of `num_filters` must match that of `filter_sizes`.

**Default CNN Hyperparameters**

**dense_sizes**=`None` : {*tuple*, *list*} of *int* - The sizes of the dense layer backend. A value of `None` is equivalent to an empty list.

**pool_sizes**=`None` : {*tuple*, *list*} of *int* - Size of the maxpooling filter, taken to be a square. A value of `None` will not use maxpooling.

**conv_acts**=`'relu'` : {*tuple*, *list*} of *str* - Activation function(s) for the conv layers. A single string will apply the same activation to all conv layers. See the Keras activations docs for more detail.

**dense_acts**=`'relu'` : {*tuple*, *list*} of *str* - Activation function(s) for the dense layers. A single string will apply the same activation to all dense layers.

**conv_k_inits**=`'he_uniform'` : {*tuple*, *list*} of *str* - Kernel initializers for the convolutional layers. A single string will apply the same initializer to all layers. See the Keras initializer docs for more detail.

**dense_k_inits**=`'he_uniform'` : {*tuple*, *list*} of *str* - Kernel initializers for the dense layers. A single string will apply the same initializer to all layers.

**conv_dropouts**=`0` : {*tuple*, *list*} of *float* - Dropout rates for the convolutional layers. A single float will apply the same dropout rate to all conv layers. See the Keras Dropout layer for more detail.

**num_spatial2d_dropout**=`0` : *int* - The number of convolutional layers, starting from the beginning of the model, for which to apply SpatialDropout2D instead of Dropout.

**dense_dropouts**=`0` : {*tuple*, *list*} of *float* - Dropout rates for the dense layers. A single float will apply the same dropout rate to all dense layers.

**paddings**=`'valid'` : {*tuple*, *list*} of *str* - Controls how the filters are convolved with the inputs. See the Keras Conv2D layer for more detail.

**data_format**=`'channels_first'` : {`'channels_first'`, `'channels_last'`} - Sets which axis is expected to contain the different channels.

### DNN

Dense Neural Network architecture.

```
energyflow.archs.DNN(*args, **kwargs)
```

See `ArchBase` for how to pass in hyperparameters.

**Required DNN Hyperparameters**

**input_dim** : *int* - The number of inputs to the model.

**dense_sizes** : {*tuple*, *list*} of *int* - The number of nodes in the dense layers of the model.

**Default DNN Hyperparameters**

**acts**=`'relu'` : {*tuple*, *list*} of *str* - Activation function(s) for the dense layers. A single string will apply the same activation to all layers. See the Keras activations docs for more detail.

**k_inits**=`'he_uniform'` : {*tuple*, *list*} of *str* - Kernel initializers for the dense layers. A single string will apply the same initializer to all layers. See the Keras initializer docs for more detail.

**dropouts**=`0` : {*tuple*, *list*} of *float* - Dropout rates for the dense layers. A single float will apply the same dropout rate to all layers. See the Keras Dropout layer for more detail.

**l2_regs**=`0` : {*tuple*, *list*} of *float* - $L_2$-regularization strength for both the weights and biases of the dense layers. A single float will apply the same $L_2$-regularization to all layers.

### LinearClassifier

Linear classifier that can be either Fisher's linear discriminant or logistic regression. Relies on the scikit-learn implementations of these classifiers.

```
energyflow.archs.LinearClassifier(*args, **kwargs)
```

See `ArchBase` for how to pass in hyperparameters.

**Default Hyperparameters**

**linclass_type**=`'lda'` : {`'lda'`, `'lr'`} - Controls which type of linear classifier is used. `'lda'` corresponds to `LinearDiscriminantAnalysis` and `'lr'` to `LogisticRegression`. If using `'lr'`, all arguments are passed on directly to the scikit-learn class.

**LDA Hyperparameters**

**solver**=`'svd'` : {`'svd'`, `'lsqr'`, `'eigen'`} - Which LDA solver to use.

**tol**=`1e-10` : *float* - Threshold used for rank estimation. Notably not a convergence parameter.
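Since `LinearClassifier` wraps the scikit-learn implementations, calling the underlying class directly with the defaults documented above shows roughly what happens under the hood (the toy two-Gaussian dataset here is purely illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# The documented LDA defaults, passed to the scikit-learn class
lda = LinearDiscriminantAnalysis(solver='svd', tol=1e-10)

# Illustrative two-class dataset: two Gaussian blobs in 4 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

lda.fit(X, y)          # Fisher's linear discriminant fit
preds = lda.predict(X) # integer class labels, one per sample
```

Note that, unlike the Keras-based architectures, the linear model takes integer class labels rather than one-hot encoded ones.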

### ArchBase

Base class for all architectures contained in EnergyFlow. The mechanism of specifying hyperparameters for all architectures is described here. Methods common to all architectures are documented here. Note that this class cannot be instantiated directly as it is an abstract base class.

```
energyflow.archs.archbase.ArchBase(*args, **kwargs)
```

Accepts arbitrary arguments. Positional arguments (if present) are dictionaries of hyperparameters, keyword arguments (if present) are hyperparameters directly. Keyword hyperparameters take precedence over positional hyperparameter dictionaries.

**Arguments**

***args** : arbitrary positional arguments - Each argument is a dictionary containing hyperparameter (name, value) pairs.

***kwargs** : arbitrary keyword arguments - Hyperparameters as keyword arguments. Takes precedence over the positional arguments.
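The precedence rule above can be sketched as follows. This is illustrative logic only, not the actual `ArchBase` implementation, and the merge order among multiple positional dictionaries (assumed left to right here) is an assumption:

```python
# Sketch of the documented precedence: positional hyperparameter dicts are
# merged (assumed left to right), then keyword arguments override them.
def resolve_hps(*args, **kwargs):
    hps = {}
    for d in args:      # positional hyperparameter dictionaries
        hps.update(d)
    hps.update(kwargs)  # keyword hyperparameters take precedence
    return hps

# The keyword lr=0.01 wins over the lr entry in the positional dict
hps = resolve_hps({'input_dim': 3, 'lr': 0.001}, lr=0.01)
```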

**Default NN Hyperparameters**

Common hyperparameters that apply to all architectures except for `LinearClassifier`.

**loss**=`'categorical_crossentropy'` : *str* - The loss function to use for the model. See the Keras loss function docs for available loss functions.

**lr**=`0.001` : *float* - The learning rate for the model.

**opt**=`Adam` : *Keras optimizer* - The optimizer used to train the model.

**output_dim**=`2` : *int* - The output dimension of the model.

**output_act**=`'softmax'` : *str* - Activation function to apply to the output.

**metrics**=`['accuracy']` : *list* of *str* - The Keras metrics to apply to the model.

**compile**=`True` : *bool* - Whether the model should be compiled or not.

**summary**=`True` : *bool* - Whether a summary should be printed or not.

#### fit

```
fit(X_train, Y_train, **kwargs)
```

Train the model by fitting the provided training dataset and labels. Transparently calls the `fit()` method of the underlying model.

**Arguments**

**X_train** : *numpy.ndarray* - The training dataset as an array of features for each sample.

**Y_train** : *numpy.ndarray* - The labels for the training dataset. May need to be one-hot encoded depending on the requirements of the underlying model (typically Keras models use one-hot encoding, whereas the linear model does not).

**kwargs** : *dict* - Keyword arguments passed on to the Keras method of the same name. See the Keras model docs for details on available parameters.

**Returns**

- Whatever the underlying model's `fit()` returns.
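The one-hot encoding expected by the Keras-based models (with the default `output_dim` of 2) can be produced with a few lines of NumPy. The `to_one_hot` helper here is hypothetical; Keras users often reach for `keras.utils.to_categorical` instead:

```python
import numpy as np

# Hypothetical helper: turn integer class labels into one-hot rows.
def to_one_hot(labels, num_classes=2):
    return np.eye(num_classes)[np.asarray(labels)]

Y_train = to_one_hot([0, 1, 1, 0])  # shape (4, 2)
```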

#### predict

```
predict(X_test, **kwargs)
```

Evaluate the model on a dataset.

**Arguments**

**X_test** : *numpy.ndarray* - The dataset to evaluate the model on.

**kwargs** : *dict* - Keyword arguments passed on to the Keras method of the same name. See the Keras model docs for details on available parameters.

**Returns**

*numpy.ndarray* - The value of the model on the input dataset.

##### model

```
model
```

The underlying model held by this architecture.