function [W, bias, Y0] = dnn_mat2gb (W, bias, Y0)
%DNN_MAT2GB convert sparse deep neural network from MATLAB to GraphBLAS
%
% Usage:
%
%   [W, bias, Y0] = dnn_mat2gb (W, bias, Y0) ;
%
% This is mostly optional since GrB.* methods can take inputs that are either
% GrB objects or MATLAB sparse matrices.  The problem is converted to single
% precision, which gives the same result and is a bit faster.  MATLAB does
% not support sparse single-precision matrices, however, so the conversion
% from one format to the other must be done here, in GraphBLAS.
%
% The bias{i} matrix differs between the two methods, and must be modified
% here (or in dnn_matlab.m).  For dnn_matlab.m, bias{i} is a 1-by-n row
% vector.  For the GraphBLAS semiring, it is an n-by-n diagonal matrix.
% When comparing GrB.dnn and dnn_matlab.m, this code should not be
% considered extra work, since the problem could be generated in GraphBLAS
% format to begin with.  In that case, dnn_matlab.m would include the
% opposite conversion, from GraphBLAS format to MATLAB sparse matrices.
%
% In any case, the setup time is very low.
%
% See also GrB.dnn, dnn_matlab.
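%
% Example (a sketch; assumes W, bias, and Y0 start in the dnn_matlab.m
% format, with each W{k} and Y0 sparse double and each bias{k} a 1-by-n
% row vector):
%
%   [W, bias, Y0] = dnn_mat2gb (W, bias, Y0) ;
%   Y = GrB.dnn (W, bias, Y0) ;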

% SuiteSparse:GraphBLAS, Timothy A. Davis, (c) 2017-2021, All Rights Reserved.
% SPDX-License-Identifier: GPL-3.0-or-later

fmt = 'by row' ;
prec = 'single' ;

d = struct ('format', fmt) ;
n = size (Y0, 2) ;

% convert the input features and each layer's weights to single precision,
% stored by row
Y0 = GrB (Y0, prec, fmt) ;
for k = 1:length (W)
    W {k} = GrB (W {k}, prec, fmt) ;
    % expand the 1-by-n bias row vector into an n-by-n diagonal matrix
    bias {k} = GrB.build (1:n, 1:n, bias {k}, n, n, '+', prec, d) ;
end