@@ -188,7 +188,8 @@ static auto train = bob::extension::FunctionDoc(
"To accomplish this, either prepare a list with all your class observations organized in 2D arrays or pass a 3D array in which the first dimension (depth) contains as many elements as classes you want to discriminate.\n\n"
".. note::\n\n"
" We set at most :py:meth:`output_size` eigen-values and vectors on the passed machine.\n"
" You can compress the machine output further using :py:meth:`Machine.resize` if necessary."
" You can compress the machine output further using :py:meth:`Machine.resize` if necessary.",
.add_parameter("X","[array_like(2D, floats)] or array_like(3D, floats)","The input data, separated to contain the training data per class in the first dimension")
...
...
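The per-class input layout described in this docstring (a list of 2D arrays, or a single 3D array whose first dimension indexes the classes) can be sketched with NumPy; the names below are illustrative only, not part of the bob API:

```python
import numpy as np

# Three classes, each a 2D float64 array: rows are observations (samples),
# columns are features. All classes must share the same number of columns.
class_a = np.random.randn(10, 4)
class_b = np.random.randn(12, 4)
class_c = np.random.randn(8, 4)
X_list = [class_a, class_b, class_c]

# Conformance check: identical number of columns across all classes.
n_features = {x.shape[1] for x in X_list}
assert len(n_features) == 1

# Equivalently, when every class has the same number of samples, a single
# 3D array works: the first dimension (depth) indexes the classes.
X_3d = np.stack([np.random.randn(10, 4) for _ in range(3)])
print(X_3d.shape)  # (3, 10, 4)
```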
@@ -282,7 +283,8 @@ static auto output_size = bob::extension::FunctionDoc(
"This method should be used to setup linear machines and input vectors prior to feeding them into this trainer.\n\n"
"The value of ``X`` should be a sequence over as many 2D 64-bit floating point number arrays as classes in the problem. "
"All arrays will be checked for conformance (identical number of columns). "
"To accomplish this, either prepare a list with all your class observations organized in 2D arrays or pass a 3D array in which the first dimension (depth) contains as many elements as classes you want to discriminate."
"To accomplish this, either prepare a list with all your class observations organized in 2D arrays or pass a 3D array in which the first dimension (depth) contains as many elements as classes you want to discriminate.",
true
)
.add_prototype("X","size")
.add_parameter("X","[array_like(2D, floats)] or array_like(3D, floats)","The input data, separated to contain the training data per class in the first dimension")
.add_parameter("negatives, positives","array_like(2D, float)","``negatives`` and ``positives`` should be arrays organized in such a way that every row corresponds to a new observation of the phenomena (i.e., a new sample) and every column corresponds to a different feature")
@@ -575,7 +575,8 @@ static auto forward = bob::extension::FunctionDoc(
"If one provides a 1D array, the ``output`` array, if provided, should also be 1D, matching the output size of this machine. "
"If one provides a 2D array, it is considered a set of vertically stacked 1D arrays (one input per row) and a 2D array is produced or expected in ``output``. "
"The ``output`` array in this case shall have the same number of rows as the ``input`` array and as many columns as the output size for this machine.\n\n"
".. note:: The :py:meth:`__call__` function is an alias for this method."
".. note:: The :py:meth:`__call__` function is an alias for this method.",
true
)
.add_prototype("input, [output]","output")
.add_parameter("input","array_like(1D or 2D, float)","The array that should be projected; must be compatible with :py:attr:`shape` [0]")
"Compares this LinearMachine with the ``other`` one to be approximately the same",
"The optional values ``r_epsilon`` and ``a_epsilon`` refer to the relative and absolute precision for the :py:attr:`weights`, :py:attr:`biases` and any other values internal to this machine."
"The optional values ``r_epsilon`` and ``a_epsilon`` refer to the relative and absolute precision for the :py:attr:`weights`, :py:attr:`biases` and any other values internal to this machine.",
"Trains a linear machine to perform the PCA (aka. KLT)"
"Trains a linear machine to perform the PCA (aka. KLT)",
"The resulting machine will have the same number of inputs as columns in ``X`` and :math:`K` eigen-vectors, where :math:`K=\\min{(S-1,F)}`, with :math:`S` being the number of rows in ``X`` (samples) and :math:`F` the number of columns (or features). "
"The vectors are arranged by decreasing eigen-value automatically -- there is no need to sort the results.\n\n"
"The user may provide or not an object of type :py:class:`Machine` that will be set by this method. "
"If provided, machine should have the correct number of inputs and outputs matching, respectively, the number of columns in the input data array ``X`` and the output of the method :py:meth:`output_size`.\n\n"
"The input data matrix ``X`` should correspond to a 64-bit floating point array organized in such a way that every row corresponds to a new observation of the phenomena (i.e., a new sample) and every column corresponds to a different feature.\n\n"
"This method returns a tuple consisting of the trained machine and a 1D 64-bit floating point array containing the eigen-values calculated while computing the KLT. "
"The eigen-value ordering matches that of eigen-vectors set in the machine."
"The eigen-value ordering matches that of eigen-vectors set in the machine.",
.add_parameter("X","array_like(2D, floats)","The input data to train on")
...
...
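The PCA/KLT behavior documented above (eigen-vectors sorted by decreasing eigen-value, with :math:`K=\min{(S-1,F)}` components) can be sketched in plain NumPy. The helper name below is illustrative, not bob's API:

```python
import numpy as np

def pca_train(X):
    """Eigen-decompose the covariance of X (rows = samples, cols = features).

    Returns a projection matrix (one eigen-vector per column, sorted by
    decreasing eigen-value) and the eigen-values themselves.
    """
    S, F = X.shape
    K = min(S - 1, F)                    # maximum rank of the covariance
    Xc = X - X.mean(axis=0)              # center the data
    cov = Xc.T @ Xc / (S - 1)
    vals, vecs = np.linalg.eigh(cov)     # ascending order for symmetric input
    order = np.argsort(vals)[::-1][:K]   # keep the K largest, descending
    return vecs[:, order], vals[order]

X = np.random.randn(20, 5)
W, eig = pca_train(X)
print(W.shape, eig.shape)  # (5, 5) (5,), since K = min(19, 5) = 5
```

No sorting of the results is needed afterwards: the eigen-values come out in decreasing order, matching the column order of the eigen-vectors.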
@@ -232,7 +233,8 @@ static auto output_size = bob::extension::FunctionDoc(
"Calculates the maximum possible rank for the covariance matrix of the given ``X``",
"Returns the maximum number of non-zero eigen values that can be generated by this trainer, given ``X``. "
"This number (K) depends on the size of X and is calculated as follows :math:`K=\\min{(S-1,F)}`, with :math:`S` being the number of rows in ``data`` (samples) and :math:`F` the number of columns (or features).\n\n"
"This method should be used to setup linear machines and input vectors prior to feeding them into the :py:meth:`train` function."
"This method should be used to setup linear machines and input vectors prior to feeding them into the :py:meth:`train` function.",
true
)
.add_prototype("X","size")
.add_parameter("X","array_like(2D, floats)","The input data that should be trained on")