## License and Citation
The code in this repo is being released under the GNU General Public License v3.0; please refer to the [LICENSE](./LICENSE) file in the repo for detailed legalese pertaining to the license. In particular, if you use any part of this code then you must cite both the original paper as well as this codebase as follows:

**Paper Citation:** M. Ghassemi, Z. Shakeri, A.D. Sarwate, and W.U. Bajwa, "Learning mixtures of separable dictionaries for tensor data: Analysis and algorithms," IEEE Trans. Signal Processing, vol. 68, pp. 33-48, 2020; doi: [10.1109/TSP.2019.2952046](https://doi.org/10.1109/TSP.2019.2952046).

**Codebase Citation:** J. Shenouda, M. Ghassemi, Z. Shakeri, A.D. Sarwate, and W.U. Bajwa, "Codebase---Learning mixtures of separable dictionaries for tensor data: Analysis and algorithms," GitHub Repository, 2020.

**Note:** The precise values of some parameters, such as the random seeds, that were originally used to generate the results in the paper have been lost. Nonetheless, all the results obtained from this codebase are consistent with the discussions and conclusions in the paper.
## External Dependencies
In order to reproduce our results for image denoising with the [SeDiL](https://doi.org/10.1109/CVPR.2013.63) algorithm, you will need the source code for SeDiL; however, we do not have permission to publicize that code. In the absence of that code, you can run the alternative function `LSRImageDenoising_noSeDiL.m`. Alternatively, you can contact us with proof of express permission from the original authors of the SeDiL algorithm, after which we can provide you with the codebase that includes SeDiL.
<a name="real_experiments"></a>
# Real-data Experiments
The `Real_Experiments` directory contains the code used to produce the results for the real image denoising experiments as described in the paper.
## Steps to reproduce the results
### Table II in the Paper: Performance of all Dictionary Learning Algorithms
To perform the image denoising experiments for Table II in the paper, we used a single function, `LSRImageDenoising.m`, for every image, passing in different parameters for each image. To speed up our computations, we ran the `LSRImageDenoising.m` function three times for each image and then concatenated the representation errors from the three `.mat` files the function returned, giving us results corresponding to a total of 25 Monte Carlo trials.

For example, to perform the image denoising experiments on the "House" image, we ran `LSRImageDenoising.m` three times, roughly as sketched below.
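The exact argument list is defined in `LSRImageDenoising.m` itself; the calls below are only a minimal sketch, assuming the function takes the image name and a run index (both hypothetical argument names).

```matlab
% Hypothetical sketch only: the actual parameters of LSRImageDenoising.m
% (image name, noise level, run index, etc.) are defined in the function itself.
LSRImageDenoising('House', 1);   % first batch of Monte Carlo trials
LSRImageDenoising('House', 2);   % second batch
LSRImageDenoising('House', 3);   % third batch; 25 trials in total across the three runs
```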
To reproduce Table III in the paper, we ran the `mushroomDenoisingTeFDiL.m` function three times. This produces three `.mat` files under the `Real_Experiments/Mushroom` directory. Once all three runs finished, we ran `getMushroomTeFDiLPSNR.m` to produce the PSNR values of TeFDiL at various ranks, corresponding to Table III in the paper.
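For reference, the PSNR of a denoised 8-bit image follows the standard definition; a minimal MATLAB sketch, with `clean` and `denoised` as hypothetical variable names (not the actual variables used in `getMushroomTeFDiLPSNR.m`), is:

```matlab
% Standard PSNR for 8-bit images; `clean` and `denoised` are hypothetical
% variables holding the ground-truth and denoised images (pixel values in 0-255).
mse = mean((double(clean(:)) - double(denoised(:))).^2);
psnr_dB = 10 * log10(255^2 / mse);
```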
## Runtime
On our servers, this job completed in three days for the House, Castle, and Mushroom images; however, for the Lena image, it took over five days for the job to finish.
<a name="online_experiments"></a>
# Online-learning Algorithm Experiments with the House Image
The `Online_Experiment` directory contains the code used to run the experiments for the online dictionary learning algorithms.
## Steps to reproduce the results
In order to reproduce Figure 3(b) in the paper, we ran the `HouseOnline.m` function twice: once with `Data/rand_state1` and again with `Data/rand_state2`. For example:

`HouseOnline('../Data/rand_state1')`

`HouseOnline('../Data/rand_state2')`

We split up the Monte Carlo trials over two jobs on our server, for a total of 30 Monte Carlo trials.

After you run the function twice (preferably at the same time as two jobs), it will save two new `.mat` files; copy those files to your local machine and run the `plotsOnline.m` script, which loads the two generated `.mat` files and concatenates them before plotting the result.
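Conceptually, that concatenation step amounts to something like the sketch below; the file and variable names are hypothetical placeholders, and `plotsOnline.m` should be consulted for the actual ones.

```matlab
% Hypothetical sketch of combining the two online runs before plotting;
% file and variable names are placeholders, not those used by plotsOnline.m.
run1 = load('house_online_run1.mat');   % saved by HouseOnline('../Data/rand_state1')
run2 = load('house_online_run2.mat');   % saved by HouseOnline('../Data/rand_state2')
errs = cat(1, run1.errs, run2.errs);    % stack the trials: 30 Monte Carlo trials in total
plot(mean(errs, 1));                    % average representation error across trials
xlabel('Iteration'); ylabel('Average representation error');
```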
## Runtime
It took about three days for our online experiments to finish running.
<a name="synthetic_experiments"></a>
# Synthetic-data Experiments
The code for the synthetic experiments can be found in the `Synthetic_Experiments` directory.
## Steps to reproduce the results
In order to reproduce Figure 3(a) in the paper, we ran the `synthetic_experiments.m` file, which saves a `.mat` file called `3D_synthetic_results_25MonteCarlo.mat` once the code has finished running. After the run finishes, copy the generated `.mat` file to your local machine and run the `plot_synthetic.m` script in MATLAB. This will produce a plot of the average test error for each algorithm.
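Assuming both files can be invoked without arguments from within the `Synthetic_Experiments` directory, the overall workflow is simply:

```matlab
% Run from the Synthetic_Experiments directory; this can take roughly three days
% and saves 3D_synthetic_results_25MonteCarlo.mat when it finishes.
synthetic_experiments;
% After copying the generated .mat file to a local machine with MATLAB:
plot_synthetic;   % plots the average test error for each algorithm
```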
## Runtime
This set of experiments also took about three days to finish running on our computing cluster.
<a name="contributors"></a>
# Contributors
The original algorithms and experiments were developed by the authors of the paper: