Owl_dense_ndarray_generic
N-dimensional array module, including creation, manipulation, and various vectorised mathematical operations.
For the comparison of two complex numbers x and y, Owl uses the following conventions: 1) x and y are equal iff both their real and imaginary parts are equal; 2) x is less than y if the magnitude of x is less than the magnitude of y; in case both x and y have the same magnitude, x is less than y if the phase of x is less than the phase of y; 3) the less-or-equal, greater, and greater-or-equal relations are defined on top of the aforementioned conventions.
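These conventions can be sketched with OCaml's standard Complex module; this is a plain illustration of the ordering described above, not Owl's internal implementation.

.. code-block:: ocaml

  (* Owl's ordering convention for complex numbers, sketched with
     the standard library's Complex module. *)
  let complex_lt (x : Complex.t) (y : Complex.t) =
    let nx = Complex.norm x and ny = Complex.norm y in
    if nx <> ny then nx < ny               (* compare magnitudes first *)
    else Complex.arg x < Complex.arg y     (* tie-break on phase *)

  let complex_eq (x : Complex.t) (y : Complex.t) =
    x.Complex.re = y.Complex.re && x.Complex.im = y.Complex.im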
The generic module supports operations for the following Bigarray element types: Int8_signed, Int8_unsigned, Int16_signed, Int16_unsigned, Int32, Int64, Float32, Float64, Complex32, Complex64.
N-dimensional array type, i.e. Bigarray Genarray type.
Type of the ndarray, e.g., Bigarray.Float32, Bigarray.Complex64, etc.
empty Bigarray.Float64 [|3;4;5|] creates a three-dimensional array of type Bigarray.Float64. Each dimension has the following size: 3, 4, and 5. The elements in the array are not initialised; they can be any value. empty is faster than zeros for creating an ndarray.
The module only supports the following four types of ndarray: Bigarray.Float32, Bigarray.Float64, Bigarray.Complex32, and Bigarray.Complex64.
create Bigarray.Float64 [|3;4;5|] 2. creates a three-dimensional array of type Bigarray.Float64. Each dimension has the following size: 3, 4, and 5. The elements in the array are initialised to 2.
init Bigarray.Float64 d f creates an ndarray x of shape d, then uses f to initialise the elements in x. The input of f is the 1-dimensional index of the ndarray; you need to convert it explicitly if you need the N-dimensional index. The function ind can help you.
init_nd is almost the same as init but f receives the n-dimensional index as input. It is more convenient since you don't have to convert the index yourself, but this also means init_nd is slower than init.
zeros Bigarray.Complex32 [|3;4;5|] creates a three-dimensional array of type Bigarray.Complex32. Each dimension has the following size: 3, 4, and 5. The elements in the array are initialised to "zero". Depending on the kind, zero can be 0. or Complex.zero.
ones Bigarray.Complex32 [|3;4;5|] creates a three-dimensional array of type Bigarray.Complex32. Each dimension has the following size: 3, 4, and 5. The elements in the array are initialised to "one". Depending on the kind, one can be 1. or Complex.one.
uniform Bigarray.Float64 [|3;4;5|] creates a three-dimensional array of type Bigarray.Float64. Each dimension has the following size: 3, 4, and 5. The elements in the array follow a uniform distribution on [0, 1].
gaussian Float64 [|3;4;5|]
...
sequential Bigarray.Float64 [|3;4;5|] 2. creates a three-dimensional array of type Bigarray.Float64. Each dimension has the following size: 3, 4, and 5. The elements in the array are assigned sequential values. ?a specifies the starting value, with default value zero; whilst ?step specifies the step size, with default value one.
complex re im constructs a complex ndarray/matrix from re and im, which contain the real and imaginary parts of x respectively.
Note that both re and im can be complex but must have the same type. The real part of re becomes the real part of x, and the imaginary part of im becomes the imaginary part of x.
polar rho theta constructs a complex ndarray/matrix from polar coordinates rho and theta. rho contains the magnitudes and theta contains the phase angles. Note that the behaviour is undefined if rho has negative elements or theta has infinite elements.
unit_basis k n i returns a unit basis vector of length n with the i-th element set to 1.
val shape : ('a, 'b) t -> int array
shape x
returns the shape of ndarray x
.
val num_dims : ('a, 'b) t -> int
num_dims x
returns the number of dimensions of ndarray x
.
val nth_dim : ('a, 'b) t -> int -> int
nth_dim x
returns the size of the nth dimension of x
.
val numel : ('a, 'b) t -> int
numel x
returns the number of elements in x
.
val nnz : ('a, 'b) t -> int
nnz x
returns the number of non-zero elements in x
.
val density : ('a, 'b) t -> float
density x
returns the percentage of non-zero elements in x
.
val size_in_bytes : ('a, 'b) t -> int
size_in_bytes x
returns the size of x
in bytes in memory.
same_shape x y checks whether x and y have the same shape.
same_data x y checks whether x and y share the same underlying data in memory, namely whether both variables point to the same memory address. This is done by checking the Data pointer in the Bigarray structure.
This function is very useful for avoiding unnecessary copying between two ndarrays, especially if one has been reshaped or sliced.
kind x
returns the type of ndarray x
. It is one of the four possible values: Bigarray.Float32
, Bigarray.Float64
, Bigarray.Complex32
, and Bigarray.Complex64
.
val strides : ('a, 'b) t -> int array
strides x
calculates the strides of x
. E.g., if x
is of shape [|3;4;5|]
, the returned strides will be [|20;5;1|]
.
val slice_size : ('a, 'b) t -> int array
slice_size x calculates the slice size in each dimension. E.g., if x is of shape [|3;4;5|], the returned slice size will be [|60; 20; 5|].
val ind : ('a, 'b) t -> int -> int array
ind x i
converts x
's one-dimensional index i
to n-dimensional one.
val i1d : ('a, 'b) t -> int array -> int
i1d x i
converts x
's n-dimensional index i
to one-dimensional one.
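For a C-layout ndarray the two conversions are inverses of each other. A sketch of the round trip, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let x = N.zeros Bigarray.Float64 [|3;4;5|]

  (* Flat index 26 corresponds to [|1;1;1|], since the strides
     of x are [|20;5;1|] and 1*20 + 1*5 + 1 = 26. *)
  let nd = N.ind x 26          (* n-dimensional index of flat index 26 *)
  let flat = N.i1d x nd        (* back to the 1-dimensional index *)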
val get : ('a, 'b) t -> int array -> 'a
get x i
returns the value at i
in x
. E.g., get x [|0;2;1|]
returns the value at [|0;2;1|]
in x
.
val set : ('a, 'b) t -> int array -> 'a -> unit
set x i a
sets the value at i
to a
in x
.
val get_index : ('a, 'b) t -> int array array -> 'a array
get_index i x
returns an array of element values specified by the indices i
. The length of array i
equals the number of dimensions of x
. The arrays in i
must have the same length, and each represents the indices in that dimension.
E.g., [| [|1;2|]; [|3;4|] |]
returns the value of elements at position (1,3)
and (2,4)
respectively.
val set_index : ('a, 'b) t -> int array array -> 'a array -> unit
set_index i x a
sets the value of elements in x
according to the indices specified by i
. The length of array i
equals the number of dimensions of x
. The arrays in i
must have the same length, and each represents the indices in that dimension.
If the length of a
equals to the length of i
, then each element will be assigned by the value in the corresponding position in x
. If the length of a
equals to one, then all the elements will be assigned the same value.
val get_fancy : Owl_types.index list -> ('a, 'b) t -> ('a, 'b) t
get_fancy s x returns a copy of the slice in x. The slice is defined by s, which is a list of index definitions. E.g., for an ndarray x of shape [|2;2;3|], slice [0] x takes the slices of index (0,*,*), i.e., the elements at [|0;0;0|], [|0;0;1|], [|0;0;2|], etc. Also note that if the length of s is less than the number of dimensions of x, the slice function will extend the slice definition to the higher dimensions by assuming all the elements in the missing dimensions are taken.
Basically, the slice function offers very much the same semantics as numpy, i.e., the start:stop:step grammar, so if you know how to index and slice an ndarray in numpy, you should not find it difficult to use this function. Please just refer to the numpy documentation or my tutorial.
There are two differences between slice_left and slice: slice_left does not make a copy but simply moves the pointer; slice_left can only take a slice from the left-most axis, whereas slice is much more flexible and can work on an arbitrary axis, which need not start from the left-most side.
val set_fancy : Owl_types.index list -> ('a, 'b) t -> ('a, 'b) t -> unit
set_fancy axis x y
set the slice defined by axis
in x
according to the values in y
. y
must have the same shape as the one defined by axis
.
About the slice definition of axis
, please refer to get_fancy
function.
val get_fancy_ext : Owl_types.index array -> ('a, 'b) t -> ('a, 'b) t
This function is used for the extended indexing operator introduced in OCaml 4.10.0. The indexing and slicing syntax becomes much lighter.
val set_fancy_ext : Owl_types.index array -> ('a, 'b) t -> ('a, 'b) t -> unit
This function is used for the extended indexing operator introduced in OCaml 4.10.0. The indexing and slicing syntax becomes much lighter.
get_slice axis x aims to provide a simpler version of get_fancy. This function assumes that every list element in the passed-in int list list represents a range, i.e., the R constructor.
E.g., [[];[0;3];[0]]
is equivalent to [R []; R [0;3]; R [0]]
.
set_slice axis x y aims to provide a simpler version of set_fancy. This function assumes that every list element in the passed-in int list list represents a range, i.e., the R constructor.
E.g., [[];[0;3];[0]]
is equivalent to [R []; R [0;3]; R [0]]
.
get_slice_ext axis x is used for the extended indexing operator introduced in OCaml 4.10.0. The indexing and slicing syntax becomes much lighter. E.g., x.%{0;1;2}.
Similar to get_slice_ext axis x, this function is used for the extended indexing operator introduced in OCaml 4.10.0. The indexing and slicing syntax becomes much lighter.
Same as Bigarray.sub_left, please refer to the Bigarray documentation.
sub_ndarray parts x is similar to Bigarray.sub_left. It splits the passed-in ndarray x along axis 0 according to parts. The elements in parts do not need to be equal, but they must sum up to the dimension along axis zero.
The returned sub-ndarrays share the same memory as x. Because no copies are made, this function is much faster than using the split function to divide the lowest dimensionality of x.
Same as Bigarray.slice_left
, please refer to Bigarray documentation.
val reset : ('a, 'b) t -> unit
reset x
resets all the elements in x
to zero.
val fill : ('a, 'b) t -> 'a -> unit
fill x a
assigns the value a
to the elements in x
.
resize ~head x d resizes the ndarray x. If there are fewer elements in the new shape than in the old one, the new ndarray shares part of the memory with the old x. head indicates the alignment between the new and old data, either from the head or from the tail. Note the data is flattened before the operation.
If there are more elements in the new shape d, then new memory space will be allocated and the content of x will be copied to the new memory. The rest of the allocated space will be filled with zeros. The default value of head is true.
reshape x d transforms x into a new shape defined by d. Note the reshape function will not make a copy of x; the returned ndarray shares the same memory with the original x.
One shape dimension (only one) can be set to -1. In this case, the value is inferred from the length of the array and the remaining dimensions.
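A sketch of the -1 inference, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let x = N.sequential Bigarray.Float64 [|3;4;5|]   (* 60 elements *)

  (* The -1 dimension is inferred as 60 / (5 * 2) = 6,
     so y has shape [|5;6;2|] and shares memory with x. *)
  let y = N.reshape x [|5; -1; 2|]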
flatten x transforms x into a one-dimensional array without making a copy. Therefore the returned value shares the same memory space with the original x.
reverse x reverses the order of all elements in the flattened x and returns the result in a new ndarray. The original x remains intact.
flip ~axis x
flips a matrix/ndarray along axis
. By default axis = 0
. The result is returned in a new matrix/ndarray, so the original x
remains intact.
rotate x d rotates x clockwise by d degrees. d must be a multiple of 90, otherwise the function fails. If x is an n-dimensional array, the function rotates the plane formed by the first and second dimensions.
transpose ~axis x makes a copy of x, then transposes it according to ~axis. ~axis must be a valid permutation of the dimension indices of x. E.g., for a three-dimensional ndarray, it can be [|2;1;0|], [|0;2;1|], [|1;2;0|], etc.
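A sketch of a full axis reversal, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let x = N.zeros Bigarray.Float64 [|3;4;5|]

  (* Reverse the axis order: the result has shape [|5;4;3|]. *)
  let y = N.transpose ~axis:[|2;1;0|] x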
swap i j x
makes a copy of x
, then swaps the data on axis i
and j
.
tile x a tiles the data in x according to the repetition specified by a. This function provides exactly the same behaviour as numpy.tile; please refer to numpy's online documentation for details.
repeat x a repeats the elements of x according to the repetition specified by a. The i-th element of a specifies the number of times that the individual entries of the i-th dimension of x should be repeated.
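A sketch contrasting the two, assuming Owl is available; N abbreviates this module. tile copies whole blocks, while repeat duplicates individual entries.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let x = N.sequential Bigarray.Float64 [|2;3|]

  (* tile copies the whole block: the result has shape [|4;3|]. *)
  let t = N.tile x [|2;1|]

  (* repeat duplicates individual entries per dimension:
     the result has shape [|2;6|]. *)
  let r = N.repeat x [|1;2|]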
concat_vertical x y concatenates two ndarrays x and y vertically. This is just a convenient function for concatenating two ndarrays along their lowest dimension, i.e. 0.
The associated operator is @=, please refer to :doc:`owl_operator`.
concat_horizontal x y concatenates two ndarrays x and y horizontally. This is just a convenient function for concatenating two ndarrays along their highest dimension.
The associated operator is @||, please refer to :doc:`owl_operator`.
concat_vh is used to assemble small parts of matrices into a bigger one. E.g., in [| [|a; b; c|]; [|d; e; f|]; [|g; h; i|] |], wherein `a, b, c ... i` are matrices of different shapes, they will be concatenated into a big matrix as follows.
\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}
This is achieved by first concatenating along axis:1 for each element in the array, then concatenating along axis:0. The number of elements in each array need not be equal as long as the aggregated dimensions match. E.g., please check the following example.
.. code-block:: ocaml

  let a00 = Mat.sequential 2 3 in
  let a01 = Mat.sequential 2 2 in
  let a02 = Mat.sequential 2 1 in
  let a10 = Mat.sequential 3 3 in
  let a11 = Mat.sequential 3 3 in
  Mat.concat_vh [| [|a00; a01; a02|]; [|a10; a11|] |];;
concatenate ~axis:2 x concatenates an array of ndarrays along the third dimension. The ndarrays in x must have the same shape except in the dimension specified by axis. The default value of axis is 0, i.e., the lowest dimension of a matrix/ndarray.
stack ~axis x stacks an array of ndarrays along the axis dimension. For example, if x contains K ndarrays of shape [|2;3|], then stack ~axis:1 x will return an ndarray of shape [|2;K;3|]. The ndarrays in x must all have the same shape. The default value of axis is 0.
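A sketch of the shape bookkeeping, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let xs = Array.init 4 (fun _ -> N.zeros Bigarray.Float64 [|2;3|])

  (* Stacking K = 4 ndarrays of shape [|2;3|] along axis 1
     yields an ndarray of shape [|2;4;3|]. *)
  let y = N.stack ~axis:1 xs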
split ~axis parts x splits an ndarray x into parts along the specified axis. This function is the inverse operation of concatenate. The elements in parts must sum up to the dimension along the specified axis.
split_vh parts x splits a passed-in ndarray x along the first two dimensions, i.e. axis 0 and axis 1. This is the inverse operation of the concat_vh function, and it is very useful for dividing a big matrix into smaller (especially heterogeneous) parts.
For example, given a matrix x of shape [|8;10|], it is possible to split it in the following ways.
.. code-block:: ocaml

  Mat.split_vh [| [|(8,5);(8,5)|] |] x;;
  Mat.split_vh [| [|(4,5);(4,5)|]; [|(4,10)|] |] x;;
  Mat.split_vh [| [|(4,5);(4,5)|]; [|(4,5);(4,5)|] |] x;;
squeeze ~axis x
removes single-dimensional entries from the shape of x
.
expand x d reshapes x by increasing its rank from num_dims x to d. The opposite operation is squeeze x. The hi parameter specifies whether the expansion is along the high dimensions (by setting true), or along the low dimensions (by setting false). The default value is false.
pad ~v p x pads an ndarray x with a constant value v. The padding index p is a list of lists of 2 integers. The two integers denote the padding widths at the two edges of one dimension of x.
dropout ~rate:0.3 x drops out 30% of the elements in x; in other words, it sets their values to zero.
val top : ('a, 'b) t -> int -> int array array
top x n
returns the indices of n
greatest values of x
. The indices are arranged according to the corresponding element values, from the greatest one to the smallest one.
val bottom : ('a, 'b) t -> int -> int array array
bottom x n
returns the indices of n
smallest values of x
. The indices are arranged according to the corresponding element values, from the smallest one to the greatest one.
sort1 ~axis x
performs quicksort of the elements along the specified axis
in x
. A new copy is returned as result, the original x
remains intact.
sort x performs quicksort of the elements in x. A new copy is returned as the result; the original x remains intact. If you want to perform in-place sorting, please use `sort_` instead.
argsort x
returns the indices with which the elements in x
are sorted in increasing order. Note that the returned index ndarray has the same shape as that of x
, and the indices are 1D indices.
draw ~axis x n
draws n
samples from x
along the specified axis
, with replacement. axis
is set to zero by default. The return is a tuple of both samples and the indices of the selected samples.
mmap fd kind layout shared dims
...
val iteri : (int -> 'a -> unit) -> ('a, 'b) t -> unit
iteri f x applies function f to each element in x. Note that the 1-d index is passed to function f; you need to convert it to an n-d index yourself.
val iter : ('a -> unit) -> ('a, 'b) t -> unit
iter f x
is similar to iteri f x
, except the index is not passed to f
.
mapi f x
makes a copy of x
, then applies f
to each element in x
.
map f x
is similar to mapi f x
except the index is not passed.
foldi ~axis f a x folds (or reduces) the elements in x from the left along the specified axis using the passed-in function f. a is the initial element, and in f i acc b, acc is the accumulator and b is one of the elements in x along the same axis. Note that i is the 1-d index of b.
Similar to foldi
, except that the index of an element is not passed to f
.
scani ~axis f x scans x along the specified axis using the passed-in function f. f i acc a returns an updated acc which will be passed to the next call of f. This function can be used to implement accumulative operations such as the sum and prod functions. Note that i is the 1-d index of a in x.
Similar to scani
, except that the index of an element is not passed to f
.
val filteri : (int -> 'a -> bool) -> ('a, 'b) t -> int array
filteri f x uses f to select certain elements in x. An element is included if f returns true. The returned result is an array of 1-dimensional indices of the selected elements. To obtain the n-dimensional indices, you need to convert them manually with Owl's helper function.
val filter : ('a -> bool) -> ('a, 'b) t -> int array
Similar to filteri
, but the indices are not passed to f
.
Similar to iteri
but applies to two N-dimensional arrays x
and y
. Both x
and y
must have the same shape.
Similar to iter2i, except that the index is not passed to f.
map2i f x y applies f to the elements at the same position in both x and y. Note that the 1-d index is passed to function f.
map2 f x y
is similar to map2i f x y
except the index is not passed.
val iteri_nd : (int array -> 'a -> unit) -> ('a, 'b) t -> unit
Similar to iteri
but n-d indices are passed to the user function.
Similar to mapi
but n-d indices are passed to the user function.
Similar to foldi
but n-d indices are passed to the user function.
Similar to scani
but n-d indices are passed to the user function.
val filteri_nd : (int array -> 'a -> bool) -> ('a, 'b) t -> int array array
Similar to filteri
but n-d indices are returned.
Similar to iter2i
but n-d indices are passed to the user function.
Similar to map2i
but n-d indices are passed to the user function.
iteri_slice ~axis f x iterates over the slices along the specified axis in x and applies the function f. The 1-d index of the slice is passed in. By default, axis is 0. Setting axis to the highest dimension is not allowed, because in that case you can just use `iteri` to iterate all the elements in x, which is more efficient.
Note that the slice is obtained by slicing left (due to Owl's C-layout ndarray) a sub-array out of x. E.g., if x has shape [|3;4;5|], setting axis=0 will iterate three 4 x 5 matrices. The slice shares the same memory with x, so no copy is made.
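A sketch of iterating the left-most slices, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  let x = N.sequential Bigarray.Float64 [|3;4;5|]

  (* Visit the three 4 x 5 slices along axis 0; i is the slice index. *)
  let () =
    N.iteri_slice ~axis:0 (fun i s ->
      Printf.printf "slice %i has %i elements\n" i (N.numel s)
    ) x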
Similar to iteri_slice
but slice index is not passed in.
mapi_slice ~axis f x maps the slices along the specified axis in x and applies the function f. By default, axis is 0. The index of the slice is passed in.
Please refer to iteri_slice
for more details.
Similar to mapi_slice
but slice index is not passed in.
filteri_slice ~axis f x
filters the slices along the specified axis
in x
. The slices which satisfy the predicate f
are returned in an array.
Please refer to iteri_slice
for more details.
Similar to filteri_slice
but slice index is not passed in.
foldi_slice ~axis f a x folds (from the left) the slices along the specified axis in x using the function f, with a as the initial value.
Please refer to iteri_slice
for more details.
Similar to foldi_slice
but slice index is not passed in.
val exists : ('a -> bool) -> ('a, 'b) t -> bool
exists f x
checks all the elements in x
using f
. If at least one element satisfies f
then the function returns true
otherwise false
.
val not_exists : ('a -> bool) -> ('a, 'b) t -> bool
not_exists f x
checks all the elements in x
, the function returns true
only if all the elements fail to satisfy f : float -> bool
.
val for_all : ('a -> bool) -> ('a, 'b) t -> bool
for_all f x
checks all the elements in x
, the function returns true
if and only if all the elements pass the check of function f
.
val is_zero : ('a, 'b) t -> bool
is_zero x
returns true
if all the elements in x
are zeros.
val is_positive : ('a, 'b) t -> bool
is_positive x
returns true
if all the elements in x
are positive.
val is_negative : ('a, 'b) t -> bool
is_negative x
returns true
if all the elements in x
are negative.
val is_nonpositive : ('a, 'b) t -> bool
is_nonpositive x
returns true
if all the elements in x
are non-positive.
val is_nonnegative : ('a, 'b) t -> bool
is_nonnegative x
returns true
if all the elements in x
are non-negative.
val is_normal : ('a, 'b) t -> bool
is_normal x returns true if all the elements in x are normal float numbers, i.e., not NaN, not INF, not SUBNORMAL. Please refer to
https://www.gnu.org/software/libc/manual/html_node/Floating-Point-Classes.html https://www.gnu.org/software/libc/manual/html_node/Infinity-and-NaN.html#Infinity-and-NaN
val not_nan : ('a, 'b) t -> bool
not_nan x
returns false
if there is any NaN
element in x
. Otherwise, the function returns true
indicating all the numbers in x
are not NaN
.
val not_inf : ('a, 'b) t -> bool
not_inf x
returns false
if there is any positive or negative INF
element in x
. Otherwise, the function returns true
.
equal x y
returns true
if two matrices x
and y
are equal.
not_equal x y
returns true
if there is at least one element in x
is not equal to that in y
.
greater x y
returns true
if all the elements in x
are greater than the corresponding elements in y
.
less x y
returns true
if all the elements in x
are smaller than the corresponding elements in y
.
greater_equal x y
returns true
if all the elements in x
are not smaller than the corresponding elements in y
.
less_equal x y
returns true
if all the elements in x
are not greater than the corresponding elements in y
.
elt_equal x y
performs element-wise =
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a = b
.
The function supports broadcast operation.
elt_not_equal x y
performs element-wise !=
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a <> b
.
The function supports broadcast operation.
elt_less x y
performs element-wise <
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a < b
.
The function supports broadcast operation.
elt_greater x y
performs element-wise >
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a > b
.
The function supports broadcast operation.
elt_less_equal x y
performs element-wise <=
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a <= b
.
The function supports broadcast operation.
elt_greater_equal x y
performs element-wise >=
comparison of x
and y
. Assume that a
is from x
and b
is the corresponding element of a
from y
of the same position. The function returns another binary (0
and 1
) ndarray/matrix wherein 1
indicates a >= b
.
The function supports broadcast operation.
val equal_scalar : ('a, 'b) t -> 'a -> bool
equal_scalar x a
checks if all the elements in x
are equal to a
. The function returns true
iff for every element b
in x
, b = a
.
val not_equal_scalar : ('a, 'b) t -> 'a -> bool
not_equal_scalar x a
checks if all the elements in x
are not equal to a
. The function returns true
iff for every element b
in x
, b <> a
.
val less_scalar : ('a, 'b) t -> 'a -> bool
less_scalar x a
checks if all the elements in x
are less than a
. The function returns true
iff for every element b
in x
, b < a
.
val greater_scalar : ('a, 'b) t -> 'a -> bool
greater_scalar x a
checks if all the elements in x
are greater than a
. The function returns true
iff for every element b
in x
, b > a
.
val less_equal_scalar : ('a, 'b) t -> 'a -> bool
less_equal_scalar x a
checks if all the elements in x
are less or equal to a
. The function returns true
iff for every element b
in x
, b <= a
.
val greater_equal_scalar : ('a, 'b) t -> 'a -> bool
greater_equal_scalar x a
checks if all the elements in x
are greater or equal to a
. The function returns true
iff for every element b
in x
, b >= a
.
elt_equal_scalar x a
performs element-wise =
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a = b
, otherwise 0
.
elt_not_equal_scalar x a
performs element-wise !=
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a <> b
, otherwise 0
.
elt_less_scalar x a
performs element-wise <
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a < b
, otherwise 0
.
elt_greater_scalar x a
performs element-wise >
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a > b
, otherwise 0
.
elt_less_equal_scalar x a
performs element-wise <=
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a <= b
, otherwise 0
.
elt_greater_equal_scalar x a
performs element-wise >=
comparison of x
and a
. Assume that b
is one element from x.
The function returns another binary (0
and 1
) ndarray/matrix wherein 1
of the corresponding position indicates a >= b
, otherwise 0
.
approx_equal ~eps x y
returns true
if x
and y
are approximately equal, i.e., for any two elements a
from x
and b
from y
, we have abs (a - b) < eps
. For complex numbers, the eps
applies to both real and imaginary part.
Note: the threshold check is exclusive for passed in eps
, i.e., the threshold interval is (a-eps, a+eps)
.
val approx_equal_scalar : ?eps:float -> ('a, 'b) t -> 'a -> bool
approx_equal_scalar ~eps x a
returns true if all the elements in x
, i.e., abs (x - a) < eps
. For complex numbers, the eps
applies to both real and imaginary part.
Note: the threshold check is exclusive for the passed in eps
.
approx_elt_equal ~eps x y
compares the element-wise equality of x
and y
, then returns another binary (i.e., 0
and 1
) ndarray/matrix wherein 1
indicates that two corresponding elements a
from x
and b
from y
are considered as approximately equal, namely abs (a - b) < eps
.
approx_elt_equal_scalar ~eps x a
compares all the elements of x
to a scalar value a
, then returns another binary (i.e., 0
and 1
) ndarray/matrix wherein 1
indicates that the element b
from x
is considered as approximately equal to a
, namely abs (a - b) < eps
.
of_array k x d
takes an array x
and converts it into an ndarray of type k
and shape d
.
val to_array : ('a, 'b) t -> 'a array
to_array x
converts an ndarray x
to OCaml's array type. Note that the ndarray x
is flattened before conversion.
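A sketch of the round trip between OCaml arrays and ndarrays, assuming Owl is available; N abbreviates this module.

.. code-block:: ocaml

  module N = Owl.Dense.Ndarray.Generic

  (* Build a 2 x 3 ndarray from a flat OCaml array ... *)
  let x = N.of_array Bigarray.Float64 [|1.;2.;3.;4.;5.;6.|] [|2;3|]

  (* ... and flatten it back; the round trip preserves the order. *)
  let a = N.to_array x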
val print :
?max_row:int ->
?max_col:int ->
?header:bool ->
?fmt:('a -> string) ->
('a, 'b) t ->
unit
print x
prints all the elements in x
as well as their indices. max_row
and max_col
specify the maximum number of rows and columns to display. header
specifies whether or not to print out the headers. fmt
is the function to format every element into string.
val pp_dsnda : Stdlib.Format.formatter -> ('a, 'b) t -> unit
pp_dsnda x
prints x
in OCaml toplevel. If the ndarray is too long, pp_dsnda
only prints out parts of the ndarray.
val save : out:string -> ('a, 'b) t -> unit
save ~out x
serialises a ndarray x
to a file of name out
.
load k s
loads previously serialised ndarray from file s
into memory. It is necessary to specify the type of the ndarray with parameter k
.
val save_npy : out:string -> ('a, 'b) t -> unit
save_npy ~out x
saves the matrix x
into a npy file out
. This function is implemented using npy-ocaml https://github.com/LaurentMazare/npy-ocaml.
load_npy file loads a npy file into a matrix of type k. If the matrix in the file is not of type k, it fails with [file]: incorrect format. This function is implemented using npy-ocaml https://github.com/LaurentMazare/npy-ocaml.
val re_c2s :
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t ->
(float, Stdlib.Bigarray.float32_elt) t
re_c2s x
returns all the real components of x
in a new ndarray of same shape.
val re_z2d :
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t ->
(float, Stdlib.Bigarray.float64_elt) t
re_z2d x
returns all the real components of x
in a new ndarray of same shape.
val im_c2s :
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t ->
(float, Stdlib.Bigarray.float32_elt) t
im_c2s x
returns all the imaginary components of x
in a new ndarray of same shape.
val im_z2d :
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t ->
(float, Stdlib.Bigarray.float64_elt) t
im_z2d x
returns all the imaginary components of x
in a new ndarray of same shape.
sum ~axis x
sums the elements in x
along specified axis
.
val sum' : ('a, 'b) t -> 'a
sum' x
returns the sum of all elements in x
.
sum_reduce ~axis x
sums the elements in x
along multiple axes specified in the axis
array.
prod ~axis x
multiplies the elements in x
along specified axis
.
val prod' : ('a, 'b) t -> 'a
prod' x returns the product of all elements in x.
mean ~axis x
calculates the mean along specified axis
.
val mean' : ('a, 'b) t -> 'a
mean' x
calculates the mean of all the elements in x
.
median ~axis x
calculates the median along specified axis
of x
.
val median' : ('a, 'b) t -> 'a
median' x
calculates the median of a flattened version of x
.
var ~axis x
calculates the variance along specified axis
.
val var' : ('a, 'b) t -> 'a
var' x
calculates the variance of all the elements in x
.
std ~axis
calculates the standard deviation along specified axis
.
val std' : ('a, 'b) t -> 'a
std' x
calculates the standard deviation of all the elements in x
.
sem ~axis
calculates the standard error of mean along specified axis
.
val sem' : ('a, 'b) t -> 'a
sem' x
calculates the standard error of mean of all the elements in x
.
min x
returns the minimum of all elements in x
along specified axis
. If no axis is specified, x
will be flattened and the minimum of all the elements will be returned. For two complex numbers, the one with the smaller magnitude will be selected. If two magnitudes are the same, the one with the smaller phase will be selected.
val min' : ('a, 'b) t -> 'a
min' x
is similar to min
but returns the minimum of all elements in x
in scalar value.
max x
returns the maximum of all elements in x
along specified axis
. If no axis is specified, x
will be flattened and the maximum of all the elements will be returned. For two complex numbers, the one with the greater magnitude will be selected. If two magnitudes are the same, the one with the greater phase will be selected.
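A hedged illustration of the magnitude-then-phase convention for complex reductions, assuming Owl is installed:

```ocaml
(* Sketch: max' on a complex ndarray selects by magnitude first. *)
open Owl

let () =
  let open Complex in
  let x = Dense.Ndarray.Z.of_array
      [| { re = 3.; im = 4. };     (* magnitude 5 *)
         { re = 0.; im = 1. } |]   (* magnitude 1 *)
      [|2|]
  in
  let m = Dense.Ndarray.Z.max' x in
  Printf.printf "max' = %g%+gi\n" m.re m.im   (* 3+4i, the larger magnitude *)
```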
val max' : ('a, 'b) t -> 'a
max' x
is similar to max
but returns the maximum of all elements in x
in scalar value.
minmax ~axis x
returns (min_v, max_v)
, min_v
is the minimum value in x
while max_v
is the maximum.
val minmax' : ('a, 'b) t -> 'a * 'a
minmax' x
returns (min_v, max_v)
, min_v
is the minimum value in x
while max_v
is the maximum.
val min_i : ('a, 'b) t -> 'a * int array
min_i x
returns the minimum of all elements in x
as well as its index.
val max_i : ('a, 'b) t -> 'a * int array
max_i x
returns the maximum of all elements in x
as well as its index.
val minmax_i : ('a, 'b) t -> ('a * int array) * ('a * int array)
minmax_i x
returns ((min_v,min_i), (max_v,max_i))
where (min_v,min_i)
is the minimum value in x
along with its index while (max_v,max_i)
is the maximum value along with its index.
abs x
returns the absolute value of all elements in x
in a new ndarray.
val abs_c2s :
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t ->
(float, Stdlib.Bigarray.float32_elt) t
abs_c2s x
is similar to abs
but takes complex32
as input.
val abs_z2d :
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t ->
(float, Stdlib.Bigarray.float64_elt) t
abs_z2d x
is similar to abs
but takes complex64
as input.
abs2 x
returns the square of absolute value of all elements in x
in a new ndarray.
val abs2_c2s :
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t ->
(float, Stdlib.Bigarray.float32_elt) t
abs2_c2s x
is similar to abs2
but takes complex32
as input.
val abs2_z2d :
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t ->
(float, Stdlib.Bigarray.float64_elt) t
abs2_z2d x
is similar to abs2
but takes complex64
as input.
neg x
negates the elements in x
and returns the result in a new ndarray.
reci x
computes the reciprocal of every elements in x
and returns the result in a new ndarray.
reci_tol ~tol x
computes the reciprocal of every element in x
. Different from reci
, reci_tol
sets the elements whose absolute
value is smaller than tol
to zero. If tol
is not specified, the default Owl_utils.eps Float32
will be used. For complex numbers, refer to Owl's doc to see how to compare.
signum
computes the sign value (-1
for negative numbers, 0
(or -0
) for zero, 1
for positive numbers, nan
for nan
).
sqr x
computes the square of the elements in x
and returns the result in a new ndarray.
sqrt x
computes the square root of the elements in x
and returns the result in a new ndarray.
cbrt x
computes the cubic root of the elements in x
and returns the result in a new ndarray.
exp x
computes the exponential of the elements in x
and returns the result in a new ndarray.
exp2 x
computes the base-2 exponential of the elements in x
and returns the result in a new ndarray.
exp10 x
computes the base-10 exponential of the elements in x
and returns the result in a new ndarray.
expm1 x
computes exp x -. 1.
of the elements in x
and returns the result in a new ndarray.
log x
computes the logarithm of the elements in x
and returns the result in a new ndarray.
log10 x
computes the base-10 logarithm of the elements in x
and returns the result in a new ndarray.
log2 x
computes the base-2 logarithm of the elements in x
and returns the result in a new ndarray.
log1p x
computes log (1 + x)
of the elements in x
and returns the result in a new ndarray.
sin x
computes the sine of the elements in x
and returns the result in a new ndarray.
cos x
computes the cosine of the elements in x
and returns the result in a new ndarray.
tan x
computes the tangent of the elements in x
and returns the result in a new ndarray.
asin x
computes the arc sine of the elements in x
and returns the result in a new ndarray.
acos x
computes the arc cosine of the elements in x
and returns the result in a new ndarray.
atan x
computes the arc tangent of the elements in x
and returns the result in a new ndarray.
sinh x
computes the hyperbolic sine of the elements in x
and returns the result in a new ndarray.
cosh x
computes the hyperbolic cosine of the elements in x
and returns the result in a new ndarray.
tanh x
computes the hyperbolic tangent of the elements in x
and returns the result in a new ndarray.
asinh x
computes the hyperbolic arc sine of the elements in x
and returns the result in a new ndarray.
acosh x
computes the hyperbolic arc cosine of the elements in x
and returns the result in a new ndarray.
atanh x
computes the hyperbolic arc tangent of the elements in x
and returns the result in a new ndarray.
floor x
computes the floor of the elements in x
and returns the result in a new ndarray.
ceil x
computes the ceiling of the elements in x
and returns the result in a new ndarray.
round x
rounds the elements in x
and returns the result in a new ndarray.
trunc x
computes the truncation of the elements in x
and returns the result in a new ndarray.
fix x
rounds each element of x
to the nearest integer toward zero. For positive elements, the behavior is the same as floor
. For negative ones, the behavior is the same as ceil
.
modf x
performs modf
over all the elements in x
, the fractional part is saved in the first element of the returned tuple whereas the integer part is saved in the second element.
erf x
computes the error function of the elements in x
and returns the result in a new ndarray.
erfc x
computes the complementary error function of the elements in x
and returns the result in a new ndarray.
logistic x
computes the logistic function 1 / (1 + exp(-x))
of the elements in x
and returns the result in a new ndarray.
relu x
computes the rectified linear unit function max(x, 0)
of the elements in x
and returns the result in a new ndarray.
elu alpha x
computes the exponential linear unit function x >= 0. ? x : (alpha * (exp(x) - 1))
of the elements in x
and returns the result in a new ndarray.
leaky_relu alpha x
computes the leaky rectified linear unit function x >= 0. ? x : (alpha * x)
of the elements in x
and returns the result in a new ndarray.
softplus x
computes the softplus function log(1 + exp(x))
of the elements in x
and returns the result in a new ndarray.
softsign x
computes the softsign function x / (1 + abs(x))
of the elements in x
and returns the result in a new ndarray.
softmax x
computes the softmax function (exp x) / (sum (exp x))
of all the elements along the specified axis
in x
and returns the result in a new ndarray.
By default, axis = -1
, i.e. along the highest dimension.
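A small sketch of the default axis behaviour, assuming Owl is installed; softmax is applied along the highest dimension, so each row of a 2x2 ndarray sums to 1:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let x = Dense.Ndarray.D.of_array [|1.; 2.; 3.; 4.|] [|2; 2|] in
  let y = Dense.Ndarray.D.softmax x in
  Dense.Ndarray.D.print y;
  (* summing all elements gives 2., i.e. 1. per row *)
  Printf.printf "total = %g\n" (Dense.Ndarray.D.sum' y)
```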
sigmoid x
computes the sigmoid function 1 / (1 + exp (-x))
for each element in x
.
val log_sum_exp' : (float, 'a) t -> float
log_sum_exp' x
computes the logarithm of the sum of exponentials of all the elements in x
.
log_sum_exp ~axis x
computes the logarithm of the sum of exponentials of all the elements in x
along axis axis
.
l1norm x
calculates the l1-norm of x
along the specified axis.
val l1norm' : ('a, 'b) t -> 'a
l1norm' x
calculates the l1-norm of all the elements in x
.
l2norm x
calculates the l2-norm of x
along the specified axis.
val l2norm' : ('a, 'b) t -> 'a
l2norm' x
calculates the l2-norm of all the elements in x
.
l2norm_sqr x
calculates the squared l2-norm of x
along the specified axis.
val l2norm_sqr' : ('a, 'b) t -> 'a
l2norm_sqr' x
calculates the square of l2-norm (or l2norm, Euclidean norm) of all elements in x
. The function uses conjugate transpose in the product, hence it always returns a float number.
vecnorm ~axis ~p x
calculates the generalised vector p-norm along the specified axis
. The generalised p-norm is defined as below.
||v||_p = \Big[ \sum_{k=0}^{N-1} |v_k|^p \Big]^{1/p}
Parameters: * axis
is the axis for reduction. * p
is order of norm, default value is 2. * x
is the input ndarray.
Returns: * If p = infinity
, then returns ||v||_{\infty} = \max_i(|v(i)|)
. * If p = -infinity
, then returns ||v||_{-\infty} = \min_i(|v(i)|)
. * Otherwise returns generalised vector p-norm defined above.
val vecnorm' : ?p:float -> ('a, 'b) t -> 'a
vecnorm'
flattens the input into a 1-d vector first, then calculates the generalised p-norm in the same way as vecnorm
.
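A hedged sketch of a few p-norms of the vector [3; -4], assuming Owl is installed:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let v = Dense.Ndarray.D.of_array [|3.; -4.|] [|2|] in
  Printf.printf "p=2   : %g\n" (Dense.Ndarray.D.vecnorm' v);            (* 5 *)
  Printf.printf "p=1   : %g\n" (Dense.Ndarray.D.vecnorm' ~p:1. v);      (* 7 *)
  Printf.printf "p=inf : %g\n" (Dense.Ndarray.D.vecnorm' ~p:infinity v) (* 4 *)
```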
cumsum ~axis x
: performs cumulative sum of the elements along the given axis ~axis
. If ~axis
is None
, then the cumsum
is performed along the lowest dimension. The returned result however always remains the same shape.
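A minimal sketch, assuming Owl is installed; the cumulative sum keeps the same shape as the input:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let x = Dense.Ndarray.D.sequential ~a:1. [|4|] in   (* [1;2;3;4] *)
  Dense.Ndarray.D.print (Dense.Ndarray.D.cumsum x)    (* [1;3;6;10] *)
```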
cumprod ~axis x
: similar to cumsum
but performs cumulative product of the elements along the given ~axis
.
cummin ~axis x
: performs cumulative min
along axis
dimension.
cummax ~axis x
: performs cumulative max
along axis
dimension.
diff ~axis ~n x
calculates the n
-th difference of x
along the specified axis
.
Parameters: * axis
: axis to calculate the difference. The default value is the highest dimension. * n
: how many times to calculate the difference. The default value is 1.
Returns: * The difference ndarray y. Note that the shape of y
is 1 less than that of x
along the specified axis.
angle x
calculates the phase angle of all complex numbers in x
.
proj x
computes the projection onto the Riemann sphere of all elements in x
.
lgamma x
computes the loggamma of the elements in x
and returns the result in a new ndarray.
dawsn x
computes the Dawson function of the elements in x
and returns the result in a new ndarray.
i0 x
computes the modified Bessel function of order 0 of the elements in x
and returns the result in a new ndarray.
i0e x
computes the exponentially scaled modified Bessel function of order 0 of the elements in x
and returns the result in a new ndarray.
i1 x
computes the modified Bessel function of order 1 of the elements in x
and returns the result in a new ndarray.
i1e x
computes the exponentially scaled modified Bessel function of order 1 of the elements in x
and returns the result in a new ndarray.
iv v x
computes the modified Bessel function of x
of real order v
.
scalar_iv v x
computes the modified Bessel function of x
of real order v
.
iv_scalar v x
computes the modified Bessel function of x
of real order v
.
j0 x
computes the Bessel function of order 0 of the elements in x
and returns the result in a new ndarray.
j1 x
computes the Bessel function of order 1 of the elements in x
and returns the result in a new ndarray.
jv v x
computes the Bessel function of the first kind of x
of real order v
.
scalar_jv v x
computes the Bessel function of the first kind of x
of real order v
.
jv_scalar v x
computes the Bessel function of the first kind of x
of real order v
.
add x y
adds all the elements in x
and y
elementwise, and returns the result in a new ndarray.
General broadcast operation is automatically applied to add/sub/mul/div, etc. The function compares the dimensions element-wise from the highest to the lowest, with the following broadcast rules (same as NumPy): 1. they are equal; 2. either is 1.
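A hedged sketch of these broadcast rules, assuming Owl is installed; adding a [|3;1|] column to a [|1;2|] row broadcasts both operands to shape [|3;2|]:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let a = Dense.Ndarray.D.of_array [|1.; 2.; 3.|] [|3; 1|] in
  let b = Dense.Ndarray.D.of_array [|10.; 20.|] [|1; 2|] in
  let c = Dense.Ndarray.D.add a b in
  Array.iter (Printf.printf "%d ") (Dense.Ndarray.D.shape c)  (* 3 2 *)
```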
sub x y
subtracts all the elements in x
and y
elementwise, and returns the result in a new ndarray.
mul x y
multiplies all the elements in x
and y
elementwise, and returns the result in a new ndarray.
div x y
divides all the elements in x
and y
elementwise, and returns the result in a new ndarray.
add_scalar x a
adds a scalar value a
to each element in x
, and returns the result in a new ndarray.
sub_scalar x a
subtracts a scalar value a
from each element in x
, and returns the result in a new ndarray.
mul_scalar x a
multiplies each element in x
by a scalar value a
, and returns the result in a new ndarray.
div_scalar x a
divides each element in x
by a scalar value a
, and returns the result in a new ndarray.
scalar_add a x
adds a scalar value a
to each element in x
, and returns the result in a new ndarray.
scalar_sub a x
subtracts each element in x
from a scalar value a
, and returns the result in a new ndarray.
scalar_mul a x
multiplies each element in x
by a scalar value a
, and returns the result in a new ndarray.
scalar_div a x
divides a scalar value a
by each element in x
, and returns the result in a new ndarray.
pow x y
computes pow(a, b)
of all the elements in x
and y
elementwise, and returns the result in a new ndarray.
scalar_pow a x
computes the power value of a scalar value a
using the elements in a ndarray x
.
pow_scalar x a
computes each element in x
power to a
.
atan2 x y
computes atan2(a, b)
of all the elements in x
and y
elementwise, and returns the result in a new ndarray.
hypot x y
computes sqrt(x*x + y*y)
of all the elements in x
and y
elementwise, and returns the result in a new ndarray.
min2 x y
computes the minimum of all the elements in x
and y
elementwise, and returns the result in a new ndarray.
max2 x y
computes the maximum of all the elements in x
and y
elementwise, and returns the result in a new ndarray.
fmod_scalar x a
performs mod division between x
and scalar a
.
scalar_fmod x a
performs mod division between scalar a
and x
.
val ssqr' : ('a, 'b) t -> 'a -> 'a
ssqr x a
computes the sum of squared differences of all the elements in x
from constant a
. This function only computes the square of each element rather than the conjugate transpose as l2norm_sqr
does.
ssqr_diff x y
computes the sum of squared differences of every element in x
and its corresponding element in y
.
cross_entropy x y
calculates the cross entropy between x
and y
using base e
.
clip_by_value ~amin ~amax x
clips the elements in x
based on amin
and amax
. The elements smaller than amin
will be set to amin
, and the elements greater than amax
will be set to amax
.
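A minimal sketch, assuming Owl is installed, clipping values into the interval [0, 1]:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let x = Dense.Ndarray.D.of_array [|-2.; 0.5; 3.|] [|3|] in
  let y = Dense.Ndarray.D.clip_by_value ~amin:0. ~amax:1. x in
  Dense.Ndarray.D.print y   (* [0; 0.5; 1] *)
```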
clip_by_l2norm t x
clips the x
according to the threshold set by t
.
fma x y z
calculates the `fused multiply add`, i.e. (x * y) + z
.
contract1 index_pairs x
performs index contraction (a.k.a. tensor contraction) on x
. index_pairs
is an array of contracted indices.
Caveat: Not well tested yet, use with care! Also, consider using TTGT in the future for better performance.
contract2 index_pairs x y
performs index contraction (a.k.a. tensor contraction) on two ndarrays x
and y
. index_pairs
is an array of contracted indices, the first element is the index of x
, the second is that of y
.
Caveat: Not well tested yet, use with care! Also, consider using TTGT in the future for better performance.
cast kind x
casts x
of type ('c, 'd) t
to type ('a, 'b) t
specified by the passed-in kind
parameter. This function is a generalisation of the other casting functions such as cast_s2d
, cast_c2z
, etc.
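A hedged sketch of the generic cast, assuming Owl is installed; this performs the same conversion as the specialised helper cast_d2s:

```ocaml
(* Sketch, assuming Owl is installed. *)
open Owl

let () =
  let x = Dense.Ndarray.D.ones [|2; 2|] in                  (* float64 *)
  let y = Dense.Ndarray.Generic.cast Bigarray.Float32 x in  (* float32 *)
  ignore y
```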
cast_s2d x
casts x
from float32
to float64
.
cast_d2s x
casts x
from float64
to float32
.
val cast_c2z :
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t
cast_c2z x
casts x
from complex32
to complex64
.
val cast_z2c :
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t
cast_z2c x
casts x
from complex64
to complex32
.
val cast_s2c :
(float, Stdlib.Bigarray.float32_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t
cast_s2c x
casts x
from float32
to complex32
.
val cast_d2z :
(float, Stdlib.Bigarray.float64_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t
cast_d2z x
casts x
from float64
to complex64
.
val cast_s2z :
(float, Stdlib.Bigarray.float32_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex64_elt) t
cast_s2z x
casts x
from float32
to complex64
.
val cast_d2c :
(float, Stdlib.Bigarray.float64_elt) t ->
(Stdlib.Complex.t, Stdlib.Bigarray.complex32_elt) t
cast_d2c x
casts x
from float64
to complex32
.
val conv1d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
conv1d ?padding input kernel strides
applies a 1-dimensional convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the convolution.
val conv2d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
conv2d ?padding input kernel strides
applies a 2-dimensional convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the convolution.
val conv3d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
conv3d ?padding input kernel strides
applies a 3-dimensional convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the convolution.
val dilated_conv1d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
dilated_conv1d ?padding input kernel strides dilations
applies a 1-dimensional dilated convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension. Returns the result of the dilated convolution.
val dilated_conv2d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
dilated_conv2d ?padding input kernel strides dilations
applies a 2-dimensional dilated convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension. Returns the result of the dilated convolution.
val dilated_conv3d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
dilated_conv3d ?padding input kernel strides dilations
applies a 3-dimensional dilated convolution over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension. Returns the result of the dilated convolution.
val transpose_conv1d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
transpose_conv1d ?padding input kernel strides
applies a 1-dimensional transposed convolution (deconvolution) over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the transposed convolution.
val transpose_conv2d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
transpose_conv2d ?padding input kernel strides
applies a 2-dimensional transposed convolution (deconvolution) over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the transposed convolution.
val transpose_conv3d :
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t
transpose_conv3d ?padding input kernel strides
applies a 3-dimensional transposed convolution (deconvolution) over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
kernel
is the convolutional kernel.
strides
specify the stride length for each dimension. Returns the result of the transposed convolution.
val max_pool1d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
max_pool1d ?padding input pool_size strides
applies a 1-dimensional max pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the max pooling operation.
val max_pool2d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
max_pool2d ?padding input pool_size strides
applies a 2-dimensional max pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the max pooling operation.
val max_pool3d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
max_pool3d ?padding input pool_size strides
applies a 3-dimensional max pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the max pooling operation.
val avg_pool1d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
avg_pool1d ?padding input pool_size strides
applies a 1-dimensional average pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the average pooling operation.
val avg_pool2d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
avg_pool2d ?padding input pool_size strides
applies a 2-dimensional average pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the average pooling operation.
val avg_pool3d :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t
avg_pool3d ?padding input pool_size strides
applies a 3-dimensional average pooling operation over an input tensor.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns the result of the average pooling operation.
val max_pool2d_argmax :
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t * (int64, Stdlib.Bigarray.int64_elt) t
max_pool2d_argmax ?padding input pool_size strides
applies a 2-dimensional max pooling operation over an input tensor, returning both the pooled output and the indices of the maximum values.
padding
specifies the padding strategy to use ('valid' or 'same').
input
is the input tensor.
pool_size
specifies the size of the pooling window.
strides
specify the stride length for each dimension. Returns a tuple containing the pooled output and the indices of the maximum values.
upsampling2d input size
performs a 2-dimensional upsampling on the input tensor input
, scaling it according to the specified size
. Returns the upsampled tensor.
conv1d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the input tensor.
conv1d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the kernel.
conv2d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the input tensor.
conv2d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the kernel.
conv3d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the input tensor.
conv3d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional convolutional layer.
input
is the original input tensor.
kernel
is the convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the convolutional layer. Returns the gradient of the loss with respect to the kernel.
val dilated_conv1d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv1d_backward_input input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val dilated_conv1d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv1d_backward_kernel input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the kernel.
val dilated_conv2d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv2d_backward_input input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val dilated_conv2d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv2d_backward_kernel input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the kernel.
val dilated_conv3d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv3d_backward_input input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val dilated_conv3d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
dilated_conv3d_backward_kernel input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional dilated convolutional layer.
input
is the original input tensor.
kernel
is the dilated convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
dilations
specify the dilation factor for each dimension.
grad_output
is the gradient of the loss with respect to the output of the dilated convolutional layer. Returns the gradient of the loss with respect to the kernel.
val transpose_conv1d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv1d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional transposed convolutional layer.
input
is the original input tensor.
kernel
is the transposed convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val transpose_conv1d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv1d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional transposed convolutional layer.
input
is the original input tensor.
kernel
is the transposed convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the kernel.
val transpose_conv2d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv2d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional transposed convolutional layer.
input
is the original input tensor.
kernel
is the transposed convolutional kernel used during the forward pass.
strides
specify the stride length for each dimension.
grad_output
is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val transpose_conv2d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv2d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional transposed convolutional layer. input is the original input tensor, kernel is the transposed convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the kernel.
val transpose_conv3d_backward_input :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv3d_backward_input input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional transposed convolutional layer. input is the original input tensor, kernel is the transposed convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the input tensor.
val transpose_conv3d_backward_kernel :
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
('a, 'b) t
transpose_conv3d_backward_kernel input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional transposed convolutional layer. input is the original input tensor, kernel is the transposed convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer. Returns the gradient of the loss with respect to the kernel.
val max_pool1d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
max_pool1d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional max pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the max pooling layer. Returns the gradient of the loss with respect to the input tensor.
val max_pool2d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
max_pool2d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional max pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the max pooling layer. Returns the gradient of the loss with respect to the input tensor.
val max_pool3d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
max_pool3d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional max pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the max pooling layer. Returns the gradient of the loss with respect to the input tensor.
val avg_pool1d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
avg_pool1d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional average pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the average pooling layer. Returns the gradient of the loss with respect to the input tensor.
val avg_pool2d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
avg_pool2d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional average pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the average pooling layer. Returns the gradient of the loss with respect to the input tensor.
val avg_pool3d_backward :
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
('a, 'b) t
avg_pool3d_backward padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional average pooling layer. padding specifies the padding strategy used during the forward pass, input is the original input tensor, pool_size specifies the size of the pooling window, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the average pooling layer. Returns the gradient of the loss with respect to the input tensor.
upsampling2d_backward input size grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional upsampling layer. input is the original input tensor, size specifies the upsampling factors for each dimension, and grad_output is the gradient of the loss with respect to the output of the upsampling layer. Returns the gradient of the loss with respect to the input tensor.
The following functions are helper functions for some other functions in both Ndarray and Ndview modules. In general, you are not supposed to use these functions directly.
val print_element : ('a, 'b) kind -> 'a -> unit
print_element kind a
prints the value of a single element.
_check_transpose_axis a d
checks whether a is a legitimate transpose index.
one_hot idx depth
creates one-hot vectors according to the indices ndarray and the specified depth. If idx has rank N, then the result has rank N+1. More specifically, if idx is of shape [|a;b;c|], the result is of shape [|a;b;c;depth|].
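As a quick illustration, here is a minimal sketch of the shape rule above. It assumes Owl is installed, that `N` abbreviates this module, and that the argument order follows the call form documented here; check the installed interface before relying on it.

```ocaml
(* Sketch: one-hot encoding per the call form documented above. *)
module N = Owl.Dense.Ndarray.Generic

let () =
  (* idx has rank 1 and shape [|3|]; with depth = 4
     the result has rank 2 and shape [|3;4|] *)
  let idx = N.of_array Bigarray.Float32 [|0.; 2.; 3.|] [|3|] in
  let y = N.one_hot idx 4 in
  assert (N.shape y = [|3; 4|])
```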
sum_slices ~axis:2 x
sums the slices along the given axis; for x of shape [|2;3;4;5|], it returns an ndarray of shape [|4;5|]. Currently the operation is implemented with gemm; it is fast but consumes more memory.
slide ~axis ~window x
generates a new ndarray by sliding a window along the specified axis of x. E.g., if x has shape [|a;b;c|] and axis = 1, then [|a; number of windows; window; c|] is the shape of the returned ndarray.
Parameters: * axis is the axis for sliding; the default is -1, i.e. the highest dimension. * ofs is the starting position of the sliding window; the default is 0. * step is the step size; the default is 1. * window is the size of the sliding window.
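For instance, sliding a length-4 window over a 1-D array with the default ofs and step (a sketch, assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.sequential Bigarray.Float64 [|10|] in
  (* with ofs = 0 and step = 1, a window of 4 over 10 elements
     yields 10 - 4 + 1 = 7 windows, i.e. a result of shape [|7;4|] *)
  let y = N.slide ~window:4 x in
  Array.iter (Printf.printf "%d ") (N.shape y)
```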
val create_ : out:('a, 'b) t -> 'a -> unit
create_ ~out value
initializes the matrix out
in-place with the scalar value value
. This operation modifies the contents of out
.
val uniform_ : ?a:'a -> ?b:'a -> out:('a, 'b) t -> unit
uniform_ ?a ?b ~out
fills the matrix out in-place with random values drawn from a uniform distribution over the interval [a, b). If a and b are not provided, the default interval is [0, 1).
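A minimal sketch of filling a preallocated buffer in place (assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.empty Bigarray.Float64 [|3; 3|] in
  (* fill x in place with samples drawn uniformly from [2, 5) *)
  N.uniform_ ~a:2. ~b:5. ~out:x;
  assert (N.for_all (fun v -> v >= 2. && v < 5.) x)
```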
val gaussian_ : ?mu:'a -> ?sigma:'a -> out:('a, 'b) t -> unit
gaussian_ ?mu ?sigma ~out
fills the matrix out
in-place with random values drawn from a Gaussian distribution with mean mu
and standard deviation sigma
. If mu
is not provided, the default mean is 0. If sigma
is not provided, the default standard deviation is 1.
val poisson_ : mu:float -> out:('a, 'b) t -> unit
poisson_ ~mu ~out
fills the matrix out
in-place with random values drawn from a Poisson distribution with mean mu
.
val sequential_ : ?a:'a -> ?step:'a -> out:('a, 'b) t -> unit
sequential_ ?a ?step ~out
fills the matrix out
in-place with a sequence of values starting from a
with a step of step
. If a
is not provided, the sequence starts from 0. If step
is not provided, the step size is 1.
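For example, generating an arithmetic sequence into an existing array (a sketch, assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.empty Bigarray.Float64 [|5|] in
  (* x becomes 10., 12., 14., 16., 18. *)
  N.sequential_ ~a:10. ~step:2. ~out:x;
  assert (N.get x [|4|] = 18.)
```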
val bernoulli_ : ?p:float -> out:('a, 'b) t -> unit
bernoulli_ ?p ~out
fills the matrix out
in-place with random values drawn from a Bernoulli distribution with probability p
of being 1. If p
is not provided, the default probability is 0.5.
val zeros_ : out:('a, 'b) t -> unit
zeros_ ~out
fills the matrix out
in-place with zeros.
val ones_ : out:('a, 'b) t -> unit
ones_ ~out
fills the matrix out
in-place with ones.
one_hot_ ~out depth indices
fills the matrix out
in-place with one-hot encoded vectors according to the specified depth
and the indices
.
val sort_ : ('a, 'b) t -> unit
sort_ x
performs in-place quicksort on the elements in x
, sorting them in ascending order.
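A short sketch of the in-place sort (assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.of_array Bigarray.Float64 [|3.; 1.; 2.|] [|3|] in
  N.sort_ x;  (* x itself is rearranged into ascending order *)
  assert (N.get x [|0|] = 1. && N.get x [|2|] = 3.)
```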
val get_fancy_ : out:('a, 'b) t -> Owl_types.index list -> ('a, 'b) t -> unit
get_fancy_ ~out indices src
extracts elements from the source matrix src
according to the list of indices
and stores them in out
. This operation is performed in-place on out
.
val set_fancy_ :
out:('a, 'b) t ->
Owl_types.index list ->
('a, 'b) t ->
('a, 'b) t ->
unit
set_fancy_ ~out indices src
sets the elements in out
at the positions specified by indices
with the values from the source matrix src
. This operation is performed in-place on out
.
get_slice_ ~out slices src
extracts a slice from the source matrix src
according to the list of slices
and stores it in out
. This operation is performed in-place on out
.
set_slice_ ~out slices src
sets the slice in out
defined by slices
with the values from the source matrix src
. This operation is performed in-place on out
.
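The in-place slice functions let you reuse a destination buffer across calls. A sketch, assuming Owl is installed, `N` abbreviates this module, and get_slice_ accepts the same int list list slice definition as get_slice:

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.sequential Bigarray.Float64 [|4; 4|] in
  let out = N.empty Bigarray.Float64 [|2; 4|] in
  (* extract rows 1..2 of x into the preallocated out *)
  N.get_slice_ ~out [[1; 2]] x;
  assert (N.get out [|0; 0|] = N.get x [|1; 0|])
```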
copy_ ~out src
copies the data from the source matrix src
to the destination matrix out
. This operation is performed in-place on out
.
reshape_ ~out src
reshapes the source matrix src
and stores the result in out
. The total number of elements must remain the same. This operation is performed in-place on out
.
reverse_ ~out src
reverses the elements of the source matrix src
along each dimension and stores the result in out
. This operation is performed in-place on out
.
transpose_ ~out x
is similar to transpose x
but the output is written to out
.
repeat_ ~out x reps
is similar to repeat x reps
but the output is written to out
.
tile_ ~out x reps
is similar to tile x reps
but the output is written to out
.
pad_ ~out ?v p x
is similar to pad ?v p x
but the output is written to out
.
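These variants avoid allocating a fresh result; for example, transposing into a preallocated array (a sketch, assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.sequential Bigarray.Float64 [|2; 3|] in
  let out = N.empty Bigarray.Float64 [|3; 2|] in
  (* same result as transpose x, but written into out's buffer *)
  N.transpose_ ~out x;
  assert (N.get out [|2; 1|] = N.get x [|1; 2|])
```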
sum_ ~out ~axis x
computes the sum of elements along the specified axis of the array x and stores the result in out. out is the output array where the result will be stored; axis specifies the axis along which to compute the sum. This operation is performed in-place on out.
min_ ~out ~axis x
computes the minimum value along the specified axis of the array x and stores the result in out. out is the output array where the result will be stored; axis specifies the axis along which to compute the minimum value. This operation is performed in-place on out.
max_ ~out ~axis x
computes the maximum value along the specified axis of the array x and stores the result in out. out is the output array where the result will be stored; axis specifies the axis along which to compute the maximum value. This operation is performed in-place on out.
add_ x y
is similar to add
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
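A sketch of writing an element-wise sum into a caller-supplied buffer (assuming Owl is installed, `N` abbreviates this module, and the destination is passed as a labelled ~out argument):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.ones Bigarray.Float64 [|3|] in
  let y = N.create Bigarray.Float64 [|3|] 2. in
  let out = N.empty Bigarray.Float64 [|3|] in
  (* element-wise x + y, written into the preallocated out *)
  N.add_ ~out x y;
  assert (N.get out [|0|] = 3.)
```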
sub_ x y
is similar to sub
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
mul_ x y
is similar to mul
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
div_ x y
is similar to div
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
pow_ x y
is similar to pow
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
atan2_ x y
is similar to atan2
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
hypot_ x y
is similar to hypot
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
fmod_ x y
is similar to fmod
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
min2_ x y
is similar to min2
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
max2_ x y
is similar to max2
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
add_scalar_ x y
is similar to add_scalar
function but the output is written to x
.
sub_scalar_ x y
is similar to sub_scalar
function but the output is written to x
.
mul_scalar_ x y
is similar to mul_scalar
function but the output is written to x
.
div_scalar_ x y
is similar to div_scalar
function but the output is written to x
.
pow_scalar_ x y
is similar to pow_scalar
function but the output is written to x
.
atan2_scalar_ x y
is similar to atan2_scalar
function but the output is written to x
.
fmod_scalar_ x y
is similar to fmod_scalar
function but the output is written to x
.
scalar_add_ a x
is similar to scalar_add
function but the output is written to x
.
scalar_sub_ a x
is similar to scalar_sub
function but the output is written to x
.
scalar_mul_ a x
is similar to scalar_mul
function but the output is written to x
.
scalar_div_ a x
is similar to scalar_div
function but the output is written to x
.
scalar_pow_ a x
is similar to scalar_pow
function but the output is written to x
.
scalar_atan2_ a x
is similar to scalar_atan2
function but the output is written to x
.
scalar_fmod_ a x
is similar to scalar_fmod
function but the output is written to x
.
clip_by_value_ ?out ?amin ?amax x
clips the values of the array x to lie within the range [amin, amax] and stores the result in out. out is the optional output array where the result will be stored; if not provided, x is modified in-place. amin is the optional minimum value to clip to; if not provided, no minimum clipping is applied. amax is the optional maximum value to clip to; if not provided, no maximum clipping is applied. This operation is performed in-place.
clip_by_l2norm_ ?out l2norm x
clips the L2 norm of the array x to the specified value l2norm and stores the result in out. out is the optional output array where the result will be stored; if not provided, x is modified in-place. l2norm specifies the maximum L2 norm. This operation is performed in-place.
fma_ ~out x y z
is similar to fma x y z
function but the output is written to out
.
val dot_ :
?transa:bool ->
?transb:bool ->
?alpha:'a ->
?beta:'a ->
c:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
unit
Refer to :doc:`owl_dense_matrix_generic`
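The signature mirrors BLAS gemm. A sketch of the presumed semantics, c <- alpha * op(a) x op(b) + beta * c (assuming Owl is installed and `N` abbreviates this module; verify the parameter defaults against the matrix documentation referred to above):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let a = N.ones Bigarray.Float64 [|2; 3|] in
  let b = N.ones Bigarray.Float64 [|3; 4|] in
  let c = N.zeros Bigarray.Float64 [|2; 4|] in
  (* presumed gemm semantics: c <- 1. * (a x b) + 0. * c *)
  N.dot_ ~transa:false ~transb:false ~alpha:1. ~beta:0. ~c a b;
  assert (N.get c [|0; 0|] = 3.)
```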
conj_ x
is similar to conj
but output is written to x
reci_ x
is similar to reci
but output is written to x
signum_ x
is similar to signum
but output is written to x
sqrt_ x
is similar to sqrt
but output is written to x
cbrt_ x
is similar to cbrt
but output is written to x
exp_ x
is similar to exp
but output is written to x
exp2_ x
is similar to exp2
but output is written to x
expm1_ x
is similar to expm1
but output is written to x
log2_ x
is similar to log2
but output is written to x
log10_ x
is similar to log10
but output is written to x
log1p_ x
is similar to log1p
but output is written to x
asin_ x
is similar to asin
but output is written to x
acos_ x
is similar to acos
but output is written to x
atan_ x
is similar to atan
but output is written to x
sinh_ x
is similar to sinh
but output is written to x
cosh_ x
is similar to cosh
but output is written to x
tanh_ x
is similar to tanh
but output is written to x
asinh_ x
is similar to asinh
but output is written to x
acosh_ x
is similar to acosh
but output is written to x
atanh_ x
is similar to atanh
but output is written to x
floor_ x
is similar to floor
but output is written to x
ceil_ x
is similar to ceil
but output is written to x
round_ x
is similar to round
but output is written to x
trunc_ x
is similar to trunc
but output is written to x
erfc_ x
is similar to erfc
but output is written to x
relu_ x
is similar to relu
but output is written to x
softplus_ x
is similar to softplus
but output is written to x
softsign_ x
is similar to softsign
but output is written to x
sigmoid_ x
is similar to sigmoid
but output is written to x
softmax_ x
is similar to softmax
but output is written to x
cumsum_ x
is similar to cumsum
but output is written to x
cumprod_ x
is similar to cumprod
but output is written to x
cummin_ x
is similar to cummin
but output is written to x
cummax_ x
is similar to cummax
but output is written to x
dropout_ x
is similar to dropout
but output is written to x
elt_equal_ x y
is similar to elt_equal
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_not_equal_ x y
is similar to elt_not_equal
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_less_ x y
is similar to elt_less
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_greater_ x y
is similar to elt_greater
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_less_equal_ x y
is similar to elt_less_equal
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_greater_equal_ x y
is similar to elt_greater_equal
function but the output is written to out
. You need to make sure out
is big enough to hold the output result.
elt_equal_scalar_ x a
is similar to elt_equal_scalar
function but the output is written to x
.
elt_not_equal_scalar_ x a
is similar to elt_not_equal_scalar
function but the output is written to x
.
elt_less_scalar_ x a
is similar to elt_less_scalar
function but the output is written to x
.
elt_greater_scalar_ x a
is similar to elt_greater_scalar
function but the output is written to x
.
elt_less_equal_scalar_ x a
is similar to elt_less_equal_scalar
function but the output is written to x
.
elt_greater_equal_scalar_ x a
is similar to elt_greater_equal_scalar
function but the output is written to x
.
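A sketch of an in-place element-wise comparison against a scalar, which overwrites the operand with 0./1. flags as documented above (assuming Owl is installed and `N` abbreviates this module):

```ocaml
module N = Owl.Dense.Ndarray.Generic

let () =
  let x = N.of_array Bigarray.Float64 [|1.; 5.; 3.|] [|3|] in
  (* overwrite x with flags marking elements strictly less than 4 *)
  N.elt_less_scalar_ x 4.;
  assert (N.get x [|0|] = 1. && N.get x [|1|] = 0.)
```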
val conv1d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
conv1d_ ~out ?padding input kernel strides
applies a 1-dimensional convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val conv2d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
conv2d_ ~out ?padding input kernel strides
applies a 2-dimensional convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val conv3d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
conv3d_ ~out ?padding input kernel strides
applies a 3-dimensional convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val dilated_conv1d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
unit
dilated_conv1d_ ~out ?padding input kernel strides dilations
applies a 1-dimensional dilated convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, strides specify the stride length for each dimension, and dilations specify the dilation factor for each dimension. This operation is performed in-place on out.
val dilated_conv2d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
unit
dilated_conv2d_ ~out ?padding input kernel strides dilations
applies a 2-dimensional dilated convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, strides specify the stride length for each dimension, and dilations specify the dilation factor for each dimension. This operation is performed in-place on out.
val dilated_conv3d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
unit
dilated_conv3d_ ~out ?padding input kernel strides dilations
applies a 3-dimensional dilated convolution over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the convolutional kernel, strides specify the stride length for each dimension, and dilations specify the dilation factor for each dimension. This operation is performed in-place on out.
val transpose_conv1d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
transpose_conv1d_ ~out ?padding input kernel strides
applies a 1-dimensional transposed convolution (deconvolution) over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the transposed convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val transpose_conv2d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
transpose_conv2d_ ~out ?padding input kernel strides
applies a 2-dimensional transposed convolution (deconvolution) over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the transposed convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val transpose_conv3d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
unit
transpose_conv3d_ ~out ?padding input kernel strides
applies a 3-dimensional transposed convolution (deconvolution) over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, kernel is the transposed convolutional kernel, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val max_pool1d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
max_pool1d_ ~out ?padding input pool_size strides
applies a 1-dimensional max pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val max_pool2d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
max_pool2d_ ~out ?padding input pool_size strides
applies a 2-dimensional max pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val max_pool3d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
max_pool3d_ ~out ?padding input pool_size strides
applies a 3-dimensional max pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val avg_pool1d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
avg_pool1d_ ~out ?padding input pool_size strides
applies a 1-dimensional average pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val avg_pool2d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
avg_pool2d_ ~out ?padding input pool_size strides
applies a 2-dimensional average pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
val avg_pool3d_ :
out:('a, 'b) t ->
?padding:Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
unit
avg_pool3d_ ~out ?padding input pool_size strides
applies a 3-dimensional average pooling operation over an input tensor and stores the result in out. out is the output array where the result will be stored, padding specifies the padding strategy to use ('valid' or 'same'), input is the input tensor, pool_size specifies the size of the pooling window, and strides specify the stride length for each dimension. This operation is performed in-place on out.
upsampling2d_ ~out input size
performs a 2-dimensional upsampling on the input tensor input, scaling it according to the specified size, and stores the result in out. out is the output array where the result will be stored, input is the input tensor to be upsampled, and size specifies the upsampling factors for each dimension. This operation is performed in-place on out.
val conv1d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv1d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val conv1d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv1d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val conv2d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv2d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val conv2d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv2d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val conv3d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv3d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val conv3d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
conv3d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the convolutional kernel used during the forward pass, strides specify the stride length for each dimension, and grad_output is the gradient of the loss with respect to the output of the convolutional layer. This operation is performed in-place on out.
val dilated_conv1d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv1d_backward_input_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional dilated convolutional layer and stores it in out. out is the output array where the gradient will be stored, input is the original input tensor, kernel is the dilated convolutional kernel used during the forward pass, strides specify the stride length for each dimension, dilations specify the dilation factor for each dimension, and grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer. This operation is performed in-place on out.
val dilated_conv1d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv1d_backward_kernel_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional dilated convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the dilated convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
dilations specify the dilation factor for each dimension.
grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer.
This operation is performed in-place on out.

val dilated_conv2d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv2d_backward_input_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional dilated convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the dilated convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
dilations specify the dilation factor for each dimension.
grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer.
This operation is performed in-place on out.
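A minimal sketch of the call pattern, assuming Owl's NHWC-style layout and using the forward dilated_conv2d only to obtain the output shape (the shapes below are illustrative):

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  (* assumed layout: input  [|batch; rows; cols; in_chan|],
     kernel [|kr; kc; in_chan; out_chan|] *)
  let input  = N.ones Bigarray.Float32 [|1; 8; 8; 3|] in
  let kernel = N.ones Bigarray.Float32 [|3; 3; 3; 5|] in
  let strides   = [|1; 1|] in
  let dilations = [|2; 2|] in
  (* forward pass, used here only to obtain the output shape *)
  let output = N.dilated_conv2d input kernel strides dilations in
  let grad_output = N.ones Bigarray.Float32 (N.shape output) in
  let out = N.zeros Bigarray.Float32 (N.shape input) in
  N.dilated_conv2d_backward_input_ ~out input kernel strides dilations grad_output
```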
val dilated_conv2d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv2d_backward_kernel_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional dilated convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the dilated convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
dilations specify the dilation factor for each dimension.
grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer.
This operation is performed in-place on out.

val dilated_conv3d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv3d_backward_input_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional dilated convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the dilated convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
dilations specify the dilation factor for each dimension.
grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer.
This operation is performed in-place on out.

val dilated_conv3d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
dilated_conv3d_backward_kernel_ ~out input kernel strides dilations grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional dilated convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the dilated convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
dilations specify the dilation factor for each dimension.
grad_output is the gradient of the loss with respect to the output of the dilated convolutional layer.
This operation is performed in-place on out.

val transpose_conv1d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv1d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.

val transpose_conv1d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv1d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 1-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.

val transpose_conv2d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv2d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.
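A sketch of the call pattern; the tensor and kernel layouts in the comments are assumptions based on Owl's conventions, and the shapes are illustrative:

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  (* assumed layouts: input [|batch; rows; cols; in_chan|];
     the kernel layout [|kr; kc; in_chan; out_chan|] is also an assumption *)
  let input  = N.ones Bigarray.Float32 [|1; 4; 4; 3|] in
  let kernel = N.ones Bigarray.Float32 [|3; 3; 3; 8|] in
  let strides = [|2; 2|] in
  (* forward pass, used here only to obtain the output shape *)
  let output = N.transpose_conv2d input kernel strides in
  let grad_output = N.ones Bigarray.Float32 (N.shape output) in
  let out = N.zeros Bigarray.Float32 (N.shape input) in
  N.transpose_conv2d_backward_input_ ~out input kernel strides grad_output
```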
val transpose_conv2d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv2d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 2-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.

val transpose_conv3d_backward_input_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv3d_backward_input_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.

val transpose_conv3d_backward_kernel_ :
out:('a, 'b) t ->
('a, 'b) t ->
('a, 'b) t ->
int array ->
('a, 'b) t ->
unit
transpose_conv3d_backward_kernel_ ~out input kernel strides grad_output
computes the gradient of the loss with respect to the kernel of a 3-dimensional transposed convolutional layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
kernel is the transposed convolutional kernel used during the forward pass.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the transposed convolutional layer.
This operation is performed in-place on out.

val max_pool1d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
max_pool1d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional max pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the max pooling layer.
This operation is performed in-place on out.

val max_pool2d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
max_pool2d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional max pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the max pooling layer.
This operation is performed in-place on out.
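A sketch of the call pattern, assuming an NHWC-style input layout; the forward max_pool2d is used only to derive the output shape, and the concrete shapes are illustrative:

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  (* assumed layout: input [|batch; rows; cols; channels|] *)
  let input = N.ones Bigarray.Float32 [|1; 8; 8; 3|] in
  let pool_size = [|2; 2|] in
  let strides   = [|2; 2|] in
  let padding = Owl_types.VALID in
  (* forward pass, used here only to obtain the output shape *)
  let output = N.max_pool2d ~padding input pool_size strides in
  let grad_output = N.ones Bigarray.Float32 (N.shape output) in
  let out = N.zeros Bigarray.Float32 (N.shape input) in
  N.max_pool2d_backward_ ~out padding input pool_size strides grad_output
```

Note that padding must match the value used in the forward pass, since it determines which input positions contributed to each pooled maximum.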
val max_pool3d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
max_pool3d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional max pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the max pooling layer.
This operation is performed in-place on out.

val avg_pool1d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
avg_pool1d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 1-dimensional average pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the average pooling layer.
This operation is performed in-place on out.
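The 1-dimensional case follows the same pattern with a rank-3 input; the [|batch; width; channels|] layout is an assumption based on Owl's conventions, and the shapes are illustrative:

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  (* assumed layout: input [|batch; width; channels|] *)
  let input = N.ones Bigarray.Float32 [|1; 10; 4|] in
  let pool_size = [|2|] in
  let strides   = [|2|] in
  let padding = Owl_types.VALID in
  (* forward pass, used here only to obtain the output shape *)
  let output = N.avg_pool1d ~padding input pool_size strides in
  let grad_output = N.ones Bigarray.Float32 (N.shape output) in
  let out = N.zeros Bigarray.Float32 (N.shape input) in
  N.avg_pool1d_backward_ ~out padding input pool_size strides grad_output
```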
val avg_pool2d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
avg_pool2d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional average pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the average pooling layer.
This operation is performed in-place on out.

val avg_pool3d_backward_ :
out:('a, 'b) t ->
Owl_types.padding ->
('a, 'b) t ->
int array ->
int array ->
('a, 'b) t ->
unit
avg_pool3d_backward_ ~out padding input pool_size strides grad_output
computes the gradient of the loss with respect to the input tensor of a 3-dimensional average pooling layer and stores it in out.
out is the output array where the gradient will be stored.
padding specifies the padding strategy used during the forward pass.
input is the original input tensor.
pool_size specifies the size of the pooling window.
strides specify the stride length for each dimension.
grad_output is the gradient of the loss with respect to the output of the average pooling layer.
This operation is performed in-place on out.

upsampling2d_backward_ ~out input size grad_output
computes the gradient of the loss with respect to the input tensor of a 2-dimensional upsampling layer and stores it in out.
out is the output array where the gradient will be stored.
input is the original input tensor.
size specifies the upsampling factors for each dimension.
grad_output is the gradient of the loss with respect to the output of the upsampling layer.
This operation is performed in-place on out.
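A sketch of the call pattern, assuming an NHWC-style layout and that the forward upsampling2d takes the same size array; the shapes are illustrative:

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  (* assumed layout: input [|batch; rows; cols; channels|] *)
  let input = N.ones Bigarray.Float32 [|1; 4; 4; 3|] in
  let size = [|2; 2|] in
  (* forward pass, used here only to obtain the output shape *)
  let output = N.upsampling2d input size in
  let grad_output = N.ones Bigarray.Float32 (N.shape output) in
  let out = N.zeros Bigarray.Float32 (N.shape input) in
  N.upsampling2d_backward_ ~out input size grad_output
```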
fused_adagrad_ ?out ~rate ~eps grad
applies the Adagrad optimization algorithm to the gradients grad with a given learning rate rate and epsilon eps for numerical stability, storing the result in out.
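For example, omitting the optional out updates the gradient array in place; the rate and eps values here are purely illustrative:

```ocaml
module N = Owl_dense_ndarray_generic

let () =
  let grad = N.ones Bigarray.Float32 [|10; 10|] in
  (* no ~out provided, so grad is modified in place *)
  N.fused_adagrad_ ~rate:0.01 ~eps:1e-8 grad
```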
out is the optional output array where the updated parameters will be stored. If not provided, grad is modified in-place.
rate specifies the learning rate.
eps specifies the epsilon value for numerical stability.
This operation is performed in-place.

val area : int -> int -> int -> int -> area
Refer to :doc:`owl_dense_matrix_generic`
Refer to :doc:`owl_dense_matrix_generic`
val row_num : ('a, 'b) t -> int
Refer to :doc:`owl_dense_matrix_generic`
val col_num : ('a, 'b) t -> int
Refer to :doc:`owl_dense_matrix_generic`
val trace : ('a, 'b) t -> 'a
Refer to :doc:`owl_dense_matrix_generic`
val to_arrays : ('a, 'b) t -> 'a array array
Refer to :doc:`owl_dense_matrix_generic`
Refer to :doc:`owl_dense_matrix_generic`
Refer to :doc:`owl_dense_matrix_generic`
Refer to :doc:`owl_dense_matrix_generic`
val draw_rows2 :
?replacement:bool ->
('a, 'b) t ->
('a, 'b) t ->
int ->
('a, 'b) t * ('a, 'b) t * int array
Refer to :doc:`owl_dense_matrix_generic`
val draw_cols2 :
?replacement:bool ->
('a, 'b) t ->
('a, 'b) t ->
int ->
('a, 'b) t * ('a, 'b) t * int array
Refer to :doc:`owl_dense_matrix_generic`
Identity function to deal with the type conversion required by other functors.