---
layout: default
---
<script type="text/javascript">
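// Mark this page's entry in the site's left navigation as current (the nav link is assumed to have id "LNcfp" and to be styled via the "leftcurrent" id).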
document.getElementById('LNcfp').id='leftcurrent';
</script>
<div class="contents">
<h1>AISTATS 2017 Accepted Papers</h1>
<i>Any typos will be corrected in the final list of proceedings. This is a temporary list, sorted alphabetically by title. To report serious typos, please contact the publicity chair, Aaditya Ramdas, at aramdas [at] berkeley.edu.</i><br><br>
A Fast and Scalable Joint Estimator for Learning Multiple Related Sparse Gaussian Graphical Models <br>
Beilun Wang, Ji Gao, Yanjun Qi <br><br>
A Framework for Optimal Matching for Causal Inference <br>
Nathan Kallus <br><br>
A Learning Theory of Ranking Aggregation <br>
Anna Korba, Stéphan Clemençon, Eric Sibony <br><br>
A Lower Bound on the Partition Function of Attractive Graphical Models in the Continuous Case<br>
Nicholas Ruozzi<br><br>
A Maximum Matching Algorithm for Basis Selection in Spectral Learning<br>
Ariadna Quattoni, Xavier Carreras, Matthias Gallé<br><br>
A New Class of Private Chi-Square Hypothesis Tests <br>
Ryan Rogers, Daniel Kifer<br><br>
A Stochastic Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization <br>
Songtao Lu, Mingyi Hong, Zhengdao Wang <br><br>
A Sub-Quadratic Exact Medoid Algorithm <br>
James Newling, Francois Fleuret <br><br>
A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation <br>
Lingxiao Wang, Xiao Zhang, Quanquan Gu<br><br>
A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe <br>
Francesco Locatello, Rajiv Khanna, Michael Tschannen, Martin Jaggi<br><br>
Active Positive Semidefinite Matrix Completion: Algorithms, Theory and Applications <br>
Aniruddha Bhargava, Ravi Ganti, Rob Nowak<br><br>
Adaptive ADMM with Spectral Penalty Parameter Selection<br>
Zheng Xu, Mario Figueiredo, Tom Goldstein <br><br>
An Information-Theoretic Route from Generalization in Expectation to Generalization in Probability <br>
Ibrahim Alabdulmohsin<br><br>
Annular Augmentation Sampling<br>
Francois Fagan, Jalaj Bhandari, John Cunningham<br><br>
Anomaly Detection in Extreme Regions via Empirical MV-sets on the Sphere<br>
Albert Thomas, Stéphan Clemençon, Alexandre Gramfort, Anne Sabourin<br><br>
ASAGA: Asynchronous Parallel SAGA <br>
Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien<br><br>
Asymptotically exact inference in likelihood-free models<br>
Matthew Graham, Amos Storkey<br><br>
Attributing Hacks<br>
Ziqi Liu, Alex Smola, Kyle Soska, Yu-Xiang Wang, Qinghua Zheng <br><br>
Automated Inference with Adaptive Batches<br>
Soham De, Abhay Yadav, David Jacobs, Tom Goldstein <br><br>
Bayesian Hybrid Matrix Factorisation for Data Integration<br>
Thomas Brouwer, Pietro Lio<br><br>
Belief Propagation in Conditional RBMs for Structured Prediction<br>
Wei Ping, Alex Ihler <br><br>
Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers<br>
Meelis Kull, Telmo de Menezes e Silva Filho, Peter Flach<br><br>
Binary and Multi-Bit Coding for Stable Random Projections<br>
Ping Li <br><br>
Black-box Importance Sampling<br>
Qiang Liu, Jason Lee<br><br>
Clustering from Multiple Uncertain Experts<br>
Yale Chang, Junxiang Chen, Michael Cho, Peter Castaldi, Ed Silverman, Jennifer Dy <br><br>
Co-Occuring Directions Sketching for Approximate Matrix Multiply<br>
Youssef Mroueh, Etienne Marcheret, Vaibhava Goel<br><br>
Combinatorial Topic Models using Small-Variance Asymptotics<br>
Ke Jiang, Suvrit Sra, Brian Kulis<br><br>
Communication-efficient Distributed Sparse Linear Discriminant Analysis<br>
Lu Tian, Quanquan Gu <br><br>
Communication-Efficient Learning of Deep Networks from Decentralized Data<br>
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Aguera y Arcas<br><br>
Comparison Based Nearest Neighbor Search <br>
Siavash Haghiri, Ulrike von Luxburg, Debarghya Ghoshdastidar<br><br>
Complementary Sum Sampling for Likelihood Approximation in Large Scale Classification<br>
David Barber, Aleksandar Botev, Bowen Zheng<br><br>
Compressed Least Squares Regression revisited<br>
Martin Slawski<br><br>
Conditions beyond treewidth for tightness of higher-order LP relaxations<br>
Mark Rowland, Aldo Pacchiano, Adrian Weller<br><br>
Conjugate-Computation Variational Inference : Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models<br>
Mohammad Khan, Wu Lin<br><br>
Consistent and Efficient Nonparametric Different-Feature Selection<br>
Satoshi Hara, Takayuki Katsuki, Hiroki Yanagisawa, Takafumi Ono, Ryo Okamoto, Shigeki Takeuchi<br><br>
Contextual Bandits with Latent Confounders: An NMF Approach<br>
Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alex Dimakis, Sanjay Shakkottai<br><br>
Convergence rate of stochastic k-means<br>
Cheng Tang, Claire Monteleoni<br><br>
ConvNets with Smooth Adaptive Activation Functions for Regression<br>
Le Hou, Dimitris Samaras, Tahsin Kurc, Yi Gao, Joel Saltz<br><br>
CPSG-MCMC: Clustering-Based Preprocessing method for Stochastic Gradient MCMC<br>
Tianfan Fu, Zhihua Zhang<br><br>
Data Driven Resource Allocation for Distributed Learning<br>
Travis Dick, Venkata Krishna Pillutla, Mu Li, Colin White, Nina Balcan, Alex Smola<br><br>
Decentralized Collaborative Learning of Personalized Models over Networks<br>
Paul Vanhaesebrouck, Aurélien Bellet, Marc Tommasi<br><br>
Detecting Dependencies in High-Dimensional, Sparse Databases Using Probabilistic Programming and Non-parametric Bayes<br>
Feras Saad, Vikash Mansinghka<br><br>
Discovering and Exploiting Additive Structure for Bayesian Optimization<br>
Jacob Gardner, Chuan Guo, Kilian Weinberger, Roman Garnett, Roger Grosse<br><br>
Distance Covariance Analysis<br>
Benjamin Cowley, Joao Semedo, Amin Zandvakili, Adam Kohn, Matthew Smith, Byron Yu<br><br>
Distributed Sequential Sampling for Kernel Matrix Approximation<br>
Daniele Calandriello, Alessandro Lazaric, Michal Valko<br><br>
Distribution of Gaussian Process Arc Lengths<br>
Justin Bewsher, Alessandra Tosi, Michael Osborne, Stephen Roberts<br><br>
Diversity Leads to Generalization in Neural Networks<br>
Bo Xie, Yingyu Liang, Le Song<br><br>
DP-EM: Differentially Private Expectation Maximization<br>
Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling <br><br>
Dynamic Collaborative Filtering With Compound Poisson Factorization<br>
Ghassen Jerfel, Mehmet Basbug, Barbara Engelhardt <br><br>
Efficient Algorithm for Sparse Tensor-variate Gaussian Graphical Models via Gradient Descent<br>
Pan Xu, Quanquan Gu <br><br>
Efficient Multiclass Prediction on Graphs via Surrogate Losses<br>
Alexander Rakhlin, Karthik Sridharan<br><br>
Efficient Rank Aggregation via Lehmer Codes<br>
Pan Li, Arya Mazumdar, Olgica Milenkovic<br><br>
Encrypted accelerated least squares regression<br>
Pedro Esperanca, Louis Aslett, Chris Holmes<br><br>
Estimating Density Ridges by Direct Estimation of Density-Derivative-Ratios<br>
Hiroaki Sasaki, Takafumi Kanamori, Masashi Sugiyama<br><br>
Exploration--Exploitation in MDPs with Options<br>
Ronan Fruit, Alessandro Lazaric <br><br>
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets<br>
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter<br><br>
Fast Classification with Binary Prototypes<br>
Kai Zhong, Ruiqi Guo, Sanjiv Kumar, Bowei Yan, David Simcha, Inderjit Dhillon <br><br>
Fast column generation for atomic norm regularization.<br>
Marina Vinyes, Guillaume Obozinski <br><br>
Fast rates with high probability in exp-concave statistical learning<br>
Nishant Mehta<br><br>
Faster Coordinate Descent via Adaptive Importance Sampling<br>
Dmytro Perekrestenko, Volkan Cevher, Martin Jaggi<br><br>
Finite-sum Composition Optimization via Variance Reduced Gradient Descent<br>
Xiangru Lian, Ji Liu, Mengdi Wang <br><br>
Linking Micro Event History to Macro Prediction in Point Process Models<br>
Yichen Wang, Xiaojing Ye, Haomin Zhou, Hongyuan Zha, Le Song<br><br>
Frank-Wolfe Algorithms for Saddle Point Problems<br>
Gauthier Gidel, Simon Lacoste-Julien, Tony Jebara<br><br>
Frequency Domain Predictive Modelling with Aggregated Data<br>
Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo <br><br>
Generalization Error of Invariant Classifiers<br>
Jure Sokolic, Raja Giryes, Guillermo Sapiro, Miguel Rodrigues<br><br>
Generalized Pseudolikelihood Methods for Inverse Covariance Estimation<br>
Alnur Ali, Kshitij Khare, Sang-Yun Oh, Bala Rajaratnam <br><br>
Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot<br>
Prateek Jain, Chi Jin, Sham Kakade, Praneeth Netrapalli <br><br>
Gradient Boosting on Stochastic Data Streams<br>
Hanzhang Hu, Andrew Bagnell, Wen Sun, Martial Hebert, Arun Venkatraman<br><br>
Gray-box inference for structured Gaussian process models<br>
Pietro Galliani, Amir Dezfouli, Edwin Bonilla, Novi Quadrianto<br><br>
Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain<br>
Xiangru Huang, Ian En-Hsu Yen, Ruohan Zhang, Qixing Huang, Pradeep Ravikumar, Inderjit Dhillon<br><br>
Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains<br>
Andrew An Bian, Baharan Mirzasoleiman, Joachim Buhmann, Andreas Krause<br><br>
Hierarchically-partitioned Gaussian Process Approximation<br>
Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim<br><br>
High-dimensional Time Series Clustering via Cross-Predictability<br>
Dezhi Hong, Quanquan Gu, Kamin Whitehouse<br><br>
Hit-and-Run for Sampling and Planning in Non-Convex Spaces<br>
Yasin Abbasi-Yadkori, Alan Malek, Peter Bartlett, Victor Gabillon<br><br>
Horde of Bandits using Gaussian Markov Random Fields<br>
Sharan Vaswani, Mark Schmidt, Laks Lakshmanan<br><br>
Identifying groups of strongly correlated variables through Smoothed Ordered Weighted L_1-norms<br>
Raman Sankaran, Francis Bach, Chiranjib Bhattacharyya <br><br>
Improved Strongly Adaptive Online Learning using Coin Betting<br>
Kwang-Sung Jun, Rebecca Willett, Stephen Wright, Francesco Orabona<br><br>
Inference Compilation and Universal Probabilistic Programming<br>
Tuan Anh Le, Atilim Gunes Baydin, Frank Wood <br><br>
Information Projection and Approximate Inference for Structured Sparse Variables<br>
Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo <br><br>
Information-theoretic limits of Bayesian network structure learning<br>
Asish Ghoshal, Jean Honorio<br><br>
Initialization and Coordinate Optimization for Multi-way Matching<br>
Da Tang, Tony Jebara<br><br>
Label Filters for Large Scale Multilabel Classification<br>
Alexandru Niculescu-Mizil, Ehsan Abbasnejad <br><br>
Large-Scale Data-Dependent Kernel Approximation<br>
Alin Popa, Catalin Ionescu, Cristian Sminchisescu <br><br>
Learning Cost-Effective Treatment Regimes using Markov Decision Processes<br>
Himabindu Lakkaraju, Cynthia Rudin<br><br>
Learning from Conditional Distributions via Dual Kernel Embeddings<br>
Bo Dai, Niao He, Yunpeng Pan, Byron Boots, Le Song<br><br>
Learning Graphical Games from Behavioral Data: Sufficient and Necessary Conditions<br>
Asish Ghoshal, Jean Honorio<br><br>
Learning Nash Equilibrium for General-Sum Markov Games from Batch Data<br>
Julien Perolat, Florian Strub, Bilal Piot, Olivier Pietquin<br><br>
Learning Nonparametric Forest Graphical Models with Prior Information<br>
Yuancheng Zhu, Zhe Liu, Siqi Sun<br><br>
Learning Optimal Interventions<br>
Jonas Mueller, David Reshef, George Du, Tommi Jaakkola<br><br>
Learning Structured Weight Uncertainty in Bayesian Neural Networks<br>
Shengyang Sun, Changyou Chen, Lawrence Carin<br><br>
Learning the Network Structure of Heterogeneous Data via Pairwise Exponential Markov Random Fields <br>
Youngsuk Park, David Hallac, Stephen Boyd, Jure Leskovec<br><br>
Learning Theory for Conditional Risk Minimization<br>
Alexander Zimin, Christoph Lampert <br><br>
Learning Time Series Detection Models from Temporally Imprecise Labels<br>
Roy Adams, Ben Marlin<br><br>
Learning with feature feedback: from theory to practice <br>
Stefanos Poulis, Sanjoy Dasgupta<br><br>
Least-Squares Log-Density Gradient Clustering for Riemannian Manifolds<br>
Mina Ashizawa, Hiroaki Sasaki, Tomoya Sakai, Masashi Sugiyama<br><br>
Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method<br>
Lihua Lei, Michael Jordan<br><br>
Linear Convergence of Stochastic Frank Wolfe Variants<br>
Chaoxu Zhou, Donald Goldfarb, Garud Iyengar<br><br>
Linear Thompson Sampling Revisited<br>
Marc Abeille, Alessandro Lazaric <br><br>
Lipschitz Density-Ratios, Structured Data, and Data-driven Tuning<br>
Samory Kpotufe<br><br>
Local Group Invariant Representations via Orbit Embeddings<br>
Anant Raj, Abhishek Kumar, Youssef Mroueh, Tom Fletcher, Bernhard Schoelkopf<br><br>
Local Perturb-and-MAP for Structured Prediction<br>
Gedas Bertasius, Lorenzo Torresani, Jianbo Shi, Qiang Liu <br><br>
Localized Lasso for High-Dimensional Regression<br>
Makoto Yamada, Takeuchi Koh, Tomoharu Iwata, John Shawe-Taylor, Samuel Kaski <br><br>
Lower Bounds on Active Learning for Graphical Model Selection<br>
Jonathan Scarlett, Volkan Cevher<br><br>
Markov Chain Truncation for Doubly-Intractable Inference<br>
Colin Wei, Iain Murray<br><br>
Minimax Approach to Variable Fidelity Data Interpolation<br>
Alexey Zaytsev, Evgeny Burnaev <br><br>
Minimax density estimation for growing dimension<br>
Daniel McDonald<br><br>
Minimax Gaussian Classification & Clustering<br>
Tianyang Li, Xinyang Yi, Constantine Caramanis, Pradeep Ravikumar <br><br>
Minimax-optimal semi-supervised regression on unknown manifolds<br>
Amit Moscovich, Ariel Jaffe, Boaz Nadler <br><br>
Modal-set estimation with an application to clustering<br>
Heinrich Jiang, Samory Kpotufe<br><br>
Near-optimal Bayesian Active Learning with Correlated and Noisy Tests<br>
Yuxin Chen, Hamed Hassani, Andreas Krause<br><br>
Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection<br>
Lijie Chen, Jian Li, Mingda Qiao<br><br>
Non-Count Symmetries in Boolean & Multi-Valued Probabilistic Graphical Models<br>
Parag Singla, Ritesh Noothigattu, Ankit Anand, Mausam<br><br>
Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach<br>
Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay Sanghavi<br><br>
Nonlinear ICA of Temporally Dependent Stationary Sources<br>
Aapo Hyvarinen, Hiroshi Morioka<br><br>
On the Hyperprior Choice for the Global Shrinkage Parameter in the Horseshoe Prior<br>
Juho Piironen, Aki Vehtari <br><br>
On the Interpretability of Conditional Probability Estimates in the Agnostic Setting<br>
Yihan Gao, Aditya Parameswaran, Jian Peng<br><br>
On the learnability of fully-connected neural networks<br>
Yuchen Zhang, Jason Lee, Martin Wainwright, Michael Jordan<br><br>
On the Troll-Trust Model for Edge Sign Prediction in Social Networks<br>
Géraud Le Falher, Nicolo Cesa-Bianchi, Claudio Gentile, Fabio Vitale <br><br>
Online Learning with Partial Monitoring: Optimal Convergence Rates<br>
Joon Kwon, Vianney Perchet <br><br>
Online Nonnegative Matrix Factorization with General Divergences<br>
Renbo Zhao, Vincent Tan, Huan Xu<br><br>
Online Optimization of Smoothed Piecewise Constant Functions<br>
Vincent Cohen-Addad, Varun Kanade <br><br>
Optimal Recovery of Tensor Slices<br>
Andrew Li, Vivek Farias<br><br>
Optimistic Planning for the Stochastic Knapsack Problem<br>
Ciara Pike-Burke, Steffen Grunewalder<br><br>
Orthogonal Tensor Decompositions via Two-Mode Higher-Order SVD (HOSVD)<br>
Miaoyan Wang, Yun Song<br><br>
Performance Bounds for Graphical Record Linkage<br>
Rebecca C. Steorts, Matthew Barnes, Willie Neiswanger<br><br>
Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation<br>
Sohail Bahmani, Justin Romberg<br><br>
Poisson intensity estimation with reproducing kernels<br>
Seth Flaxman, Yee Whye Teh, Dino Sejdinovic<br><br>
Prediction Performance After Learning in Gaussian Process Regression<br>
Johan Wagberg, Dave Zachariah, Thomas Schon, Petre Stoica <br><br>
Quantifying the accuracy of approximate diffusions and Markov chains<br>
Jonathan Huggins, James Zou<br><br>
Random Consensus Robust PCA<br>
Daniel Pimentel-Alarcon, Robert Nowak<br><br>
Random projection design for scalable implicit smoothing of randomly observed stochastic processes <br>
Francois Belletti, Evan Sparks, Alexandre Bayen, Kurt Keutzer, Joseph Gonzalez<br><br>
Rank Aggregation and Prediction with Item Features<br>
Kai-Yang Chiang, Cho-Jui Hsieh, Inderjit Dhillon <br><br>
Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models <br>
Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda <br><br>
Recurrent Switching Linear Dynamical Systems<br>
Scott Linderman, Andrew Miller, David Blei, Ryan Adams, Liam Paninski, Matthew Johnson <br><br>
Regression Uncertainty on the Grassmannian <br>
Yi Hong, Xiao Yang, Roland Kwitt, Martin Styner, Marc Niethammer <br><br>
Regret Bounds for Lifelong Learning<br>
Pierre Alquier, Tien Mai, Massimiliano Pontil<br><br>
Regret Bounds for Transfer Learning in Bayesian Optimisation<br>
Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh<br><br>
Rejection Sampling Variational Inference <br>
Christian Naesseth, Francisco Ruiz, Scott Linderman, David Blei<br><br>
Relativistic Monte Carlo <br>
Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian Vollmer<br><br>
Removing Phase Transitions from Gibbs Measures<br>
Ian Fellows<br><br>
Robust and Efficient Computation of Eigenvectors in a Generalized Spectral Method for Constrained Clustering<br>
Chengming Jiang, Huiqing Xie, Zhaojun Bai<br><br>
Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness<br>
Ioan Gabriel Bucur, Tom Heskes, Tom Claassen<br><br>
Scalable Convex Multiple Sequence Alignment via Entropy-Regularized Dual Decomposition<br>
Jiong Zhang, Ian En-Hsu Yen, Pradeep Ravikumar, Inderjit Dhillon <br><br>
Scalable Greedy Support Selection via Weak Submodularity<br>
Rajiv Khanna, Ethan Elenberg, Joydeep Ghosh, Alex Dimakis<br><br>
Scalable Learning of Non-Decomposable Objectives<br>
Elad Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Ryan Rifkin, Gal Elidan<br><br>
Scalable variational inference for super resolution microscopy<br>
Ruoxi Sun, Evan Archer, Liam Paninski<br><br>
Scaling Submodular Maximization via Pruned Submodularity Graphs<br>
Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin<br><br>
Sequential Graph Matching with Sequential Monte Carlo<br>
Seong-Hwan Jun, Alexandre Bouchard-Cote, Samuel W.K. Wong<br><br>
Sequential Multiple Hypothesis Testing with Type I Error Control<br>
Alan Malek, Yinlam Chow, Mohammad Ghavamzadeh, Sumeet Katariya <br><br>
Signal-based Bayesian Seismic Monitoring<br>
David Moore, Stuart Russell<br><br>
Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data<br>
Jialei Wang, Jason Lee, Mehrdad Mahdavi, Mladen Kolar, Nati Srebro<br><br>
Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage<br>
Alp Yurtsever, Madeleine Udell, Joel Tropp, Volkan Cevher<br><br>
Sparse Accelerated Exponential Weights<br>
Pierre Gaillard, Olivier Wintenberger<br><br>
Sparse Randomized Partition Trees for Nearest Neighbor Search<br>
Kaushik Sinha, Omid Keivani<br><br>
Spatial Decompositions for Large Scale SVMs<br>
Philipp Thomann, Ingo Steinwart, Ingrid Blaschzyk, Mona Meister<br><br>
Spectral Methods for Correlated Topic Models<br>
Forough Arabshahi, Anima Anandkumar<br><br>
Stochastic Difference of Convex Algorithm and its Application to Training Deep Boltzmann Machines<br>
Atsushi Nitanda, Taiji Suzuki<br><br>
Stochastic Rank-1 Bandits<br>
Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Claire Vernade, Zheng Wen<br><br>
Structured adaptive and random spinners for fast machine learning computations<br>
Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Francois Fagan, Cedric Gouy-Pailler, Anne Morvan, Nourhan Sakr, Tamas Sarlos, Jamal Atif <br><br>
Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis<br>
Andrew Stevens, Yunchen Pu, Yannan Sun, Gregory Spell, Lawrence Carin <br><br>
The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear Bandits<br>
Tor Lattimore, Csaba Szepesvari<br><br>
Thompson Sampling for Linear-Quadratic Control Problems<br>
Marc Abeille, Alessandro Lazaric <br><br>
Tracking Objects with Higher Order Interactions via Delayed Column Generation<br>
Shaofei Wang, Steffen Wolf, Charless Fowlkes, Julian Yarkony<br><br>
Trading off Rewards and Errors in Multi-Armed Bandits<br>
Akram Erraqabi, Alessandro Lazaric, Michal Valko, Yun-En Liu, Emma Brunskill <br><br>
Training Fair Classifiers<br>
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna Gummadi<br><br>
Unsupervised Sequential Sensor Acquisition<br>
Manjesh Hanawal, Venkatesh Saligrama, Csaba Szepesvari<br><br>
Value-Aware Loss Function for Model-based Reinforcement Learning<br>
Amir-Massoud Farahmand, Andre Barreto, Daniel Nikovski <br><br>
<hr>
<h1>Reviewer Instructions for AISTATS 2017</h1>
Reviews must be entered electronically through the
<a href="https://cmt.research.microsoft.com/AISTATS2017/">CMT system for AISTATS 2017</a>.
<h3>Review Content</h3>
<p>
Each review should begin with a paragraph providing an overview of the paper and summarizing its main contributions. In particular, some thought should be given to how the paper fits with the aims and topics of the conference (not interpreted overly narrowly). The paragraph should also relate the ideas in the paper to previous work in the field.
</p>
<p>
The next section of the review should deal with major comments: issues that the reviewer sees as standing in the way of acceptance, issues that should be addressed prior to publication, or reasons for rejecting the paper.
</p>
<p>
The final section of the review should deal with any minor issues, such as typographical errors, spelling mistakes, or areas where presentation could be improved.
</p>
<p>
As was done last year, reviewers may request public or non-proprietary code/data as part of the initial reviews for the purpose of better judging the paper. The authors will then provide the code/data as part of the author response. This might be, for instance, to check whether the authors' methods work as claimed, or whether they correctly treat particular scenarios the authors did not consider in their initial submission. Note this request is NOT to be used to ask the authors to release their code after the paper has been published. Code/data should only be requested in the event that this is the deciding factor in paper acceptance. The request should be reasonable in light of the duration of the discussion period, which limits the time available for review. The SPC member in charge of the paper will confirm whether a code/data request is warranted and reasonable. Authors may only submit separate code and data at the invitation of a reviewer; otherwise, the usual restrictions on author response length apply. The conference chairs will enable the anonymous transfer of code and data to the relevant reviewers.
</p>
<h3>Evaluation Criteria</h3>
<p>
Contributions of AISTATS papers can be categorized into four areas: (a) algorithmic, (b) theoretical, (c) unifying, or (d) application.
</p>
<p>
Algorithmic contributions may make a particular approach feasible for the first time or may extend the applicability of an approach (for example, allowing it to be applied to very large data sets).
</p>
<p>
A theoretical contribution should provide a new result about a model or algorithm, for example a convergence proof, a consistency proof, or a performance guarantee.
</p>
<p>
A unifying contribution may bring together several apparently different ideas and show how they are related, providing new insights and directions for future research.
</p>
<p>
Finally, an application contribution should typically have aspects that present particular statistical challenges, which must be addressed in a novel way or through clever adaptation of existing techniques.
</p>
<p>
A paper may exhibit one or more of these contributions, each of which is important in advancing the state of the art in the field. Of course, at AISTATS we are also particularly keen to see work that relates machine learning and statistics, highlights novel connections between the fields, or even contrasts them.
</p>
<p>
One aspect of result presentation that is frequently neglected is a discussion of an algorithm's failure cases, often due to concern that reviewers will penalize authors who provide this information. We emphasize that describing failure cases as well as successes should be encouraged and rewarded in submissions.
</p>
<p>
When reviewing, bear in mind that one of the most important aspects of a successful conference paper is that it should be thought-provoking. Thought-provoking papers sometimes generate strong reactions on initial reading, which may be negative. However, if the paper genuinely represents a paradigm shift, it may take a little longer than a regular paper to come around to the authors' way of thinking. Keep an eye out for such papers; although they may take longer to review, if they do represent an important advance, the effort will be well worth it.
</p>
<p>
Finally, we would like to signal to newcomers to AISTATS (and to machine-learning conferences generally) that the review process is envisioned in exactly the same spirit as in a top quality journal like JRSS B, JASA, or Annals of Statistics. Accepted contributions are published in proceedings, and acceptance is competitive, so authors can rightly include these contributions in their publication list, on par with papers published in top quality journals. Further, AISTATS does not give the option to revise and resubmit: if a paper cannot be accepted with minor revisions (e.g., as proposed by the authors in their response to the reviews), it should be rejected.
</p>
<p>
Given the culture gap between the statistics and machine learning communities, we want to emphasize from the start the required level of quality and innovation. All deadlines are strict, as we cannot delay an already tight schedule.
</p>
<h3>Confidentiality and Double Blind Process</h3>
<p>
AISTATS 2017 is a double-blind reviewed conference. Whilst we expect authors to remove information that would obviously reveal their identity, we also trust reviewers not to take positive steps to try to uncover the authors' identity.
</p>
<p>
We are happy for authors to submit material that they have placed online as tech reports (such as on arXiv), or that they have submitted to existing workshops that do not produce published proceedings. This can clearly present a problem with regard to anonymization. Please do not seek out such reports online in an effort to deanonymize the authors.
</p>
<p>
The review process is double blind. Authors do not know reviewer identities, and this includes any authors on the senior program committee (i.e., the area chairs). However, area chairs do see reviewer identities. Also, during the discussion phase reviewer identities will be made available to other reviewers. In other words, whilst the authors will not know your identity, your co-reviewers will. This should help facilitate discussion of the papers.
</p>
<p>
If a reviewer requests code from the authors, this code should be anonymized (e.g., author names should be removed from the file headers). That said, we understand that it might be difficult to remove all traces of the authors from the files, and will exercise reasonable judgment if innocent mistakes are made.
</p>
<p>
The AISTATS reviewing process is confidential. By agreeing to review, you agree not to use ideas, results, code, or data from submitted papers in your own work, including research and grant proposals, unless that work has appeared in other publicly available formats, for example technical reports or other published work. You also agree not to distribute submitted papers, ideas, code, or data to anyone else. If you request code and accompanying data, you agree that it is provided for your sole use, and only for the purposes of assessing the submission. All code and data must be discarded once the review is complete, and may not be used in further research or transferred to third parties.
</p>
<h3>The CMT Reviewing System</h3>
<p>The first step in the review process is to enter conflicts of interest. Conflicts can be entered as domain names (e.g., cmu.edu) and also by marking specific authors with whom you have a conflict. Because reviewing is double blind, you may not be able to determine which submitted papers you have a conflict with, so it is important that you go through this list carefully and mark any conflicts. You should mark a conflict with anyone who is or ever was your student or mentor, is a current or recent colleague, or is a close collaborator. If in doubt, it is probably better to mark a conflict, in order to avoid the appearance of impropriety. Your own username should be automatically marked as a conflict, but sometimes the same person may have more than one account, in which case you should mark your other accounts as conflicts as well. If you do not mark a conflict with an author, it is assumed that you do not have one.
</p>
<p>
CMT also requests subject-area information, which will be used to assist the allocation of papers to reviewers. Please enter relevant keywords to help with this allocation.
</p>
<p>
You can revise your review multiple times before submission. Your formal invitation to review will come from the CMT system. The email address used in that invitation is your login; you can change your password via a password reset from the login screen.
</p>
<h3>Supplementary Material</h3>
<p>
Supplementary material is allowed by AISTATS 2017 and could include, for example, proofs, video, source code, or audio. As a reviewer you should feel free to make use of this supplementary material to help in your review, although doing so is at your discretion. The one exception is the letter of revision for papers previously submitted to NIPS: if such a letter is present in the supplementary material, we ask you to take it into consideration.
</p>
<h3>Simultaneous Submission</h3>
<p>
Simultaneous submission to other conference venues in the areas of machine learning and statistics is not permitted.
</p>
<p>
Simultaneous submission of significantly extended versions of the paper to journals is permitted, as long as the journal publication date is not before May 2017.
</p>
<br><br>