The terminal output before the gradient descent is a bit of a mess and always annoys me slightly :-)
```
fast_tsne data_path: data.dat
fast_tsne result_path: result.dat
fast_tsne nthreads: 8
Read the following parameters:
n 70000 by d 50 dataset, theta 0.500000,
perplexity 50.000000, no_dims 2, max_iter 1000,
stop_lying_iter 250, mom_switch_iter 250,
momentum 0.500000, final_momentum 0.800000,
learning_rate 10000.000000, K -1, sigma -1.000000, nbody_algo 2,
knn_algo 1, early_exag_coeff 12.000000,
no_momentum_during_exag 0, n_trees 50, search_k 7500,
start_late_exag_iter -1, late_exag_coeff -1.000000
nterms 3, interval_per_integer 1.000000, min_num_intervals 50, t-dist df 0.500000
Read the 70000 x 50 data matrix successfully. X[0,0] = 0.479432
Read the initialization successfully.
Will use momentum during exaggeration phase
Computing input similarities...
Using perplexity, so normalizing input data (to prevent numerical problems)
Using perplexity, not the manually set kernel width. K (number of nearest neighbors) and sigma (bandwidth) parameters are going to be ignored.
Using ANNOY for knn search, with parameters: n_trees 50 and search_k 7500
Going to allocate memory. N: 70000, K: 150, N*K = 10500000
Building Annoy tree...
Done building tree. Beginning nearest neighbor search...
parallel (8 threads):
[===========================================================>] 99% 10.518s
Symmetrizing...
Using the given initialization.
Exaggerating Ps by 12.000000
Input similarities computed (sparsity = 0.002962)!
Learning embedding...
Using FIt-SNE approximation.
Iteration 50 (50 iterations in 3.58 seconds), cost 6.430274
...
```
It would be nice to tidy this up and make it a bit more user-friendly.
The list of parameters is ugly; it could be formatted much better.
"Will use momentum during exaggeration phase" -- this is the default so maybe rather say nothing unless momentum is not used?
"Using perplexity, not the manually set kernel width. K (number of nearest neighbors) and sigma (bandwidth) parameters are going to be ignored." -- again the default, so maybe rather not print this
Would be nice to see the time spent on building the Annoy tree once it's built.
"parallel (8 threads):" -- this is printed above already, maybe don't need to repeat?
"Symmetrizing..." -- would definitely be nice to see the time spent, because this can take quite some time in some cases.
"Input similarities computed (sparsity = 0.002962)!" -- this should be printed after symmetrizing is done.
If no exaggeration is used, no need to print "exaggerating by 1"
In general line breaks, "...", etc. can be used more consistently.
etc.