Day 1 (10/18)

| Time | Speaker | Affiliation | Talk Title |
|---|---|---|---|
| 9:00 AM | Session 1 | chair: David Gleich | |
| | Jingmei Qiu | University of Delaware | Low rank Tensor Approximations to Kinetic Models |
| | William Detmold | MIT | Tensor Computations in Lattice QCD |
| | Joyce Ho | Emory University | Federated tensor learning and its application for healthcare |
| | Ed Valeev | Virginia Tech | Automating symbolic manipulation and evaluation of data-sparse tensor algebra for quantum electronic structure |
| 10:30 AM | break | | |
| 11:00 AM | Session 2 | chair: Aditya Devarakonda | |
| | Changwan Hong | MIT | Compiler Support for Structured Data |
| | Jee Choi | University of Oregon | Linearized Tensor Format for Performance-Portable Sparse Tensor Computation |
| | Toluwanimi Odemuyiwa | University of California, Davis | Extending Einsums to Support Graph Analytics: A BFS Example |
| 12:00 PM | lunch | | |
| 1:30 PM | Session 3 | chair: Andrew Christlieb | |
| | Cory Hauck | Oak Ridge National Laboratory | A semi-implicit, low-rank DG method for a kinetic model of radiation emission and absorption |
| | Elizabeth Newman | Emory University | Optimal Matrix-Mimetic Tensor Algebras via Variable Projection |
| | Kejun Huang | University of Florida | HOQRI: Higher-order QR Iteration for Scalable Tucker Decomposition |
| | Huan He | University of Pennsylvania | Efficient Fine-tuning of pretrained machine learning models using Tensor Training |
| 3:00 PM | break | | |
| 3:15 PM | Session 4 | chair: Ramki Kannan | |
| | Nandeeka Nayak | University of Illinois Urbana-Champaign | TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators |
| | Saday Sadayappan | University of Utah | Can tensor factorization help us shrink language models? |
| | Scott Kovach | Stanford | Indexed Streams: A Formal Intermediate Representation for Fused Contraction Programs |
| | Arvind Saibaba | North Carolina State University | Tensor methods for parametric low-rank kernel approximations |
| 4:40 PM | poster session | | |
| 5:45 PM | end day 1 | | |

Day 2 (10/19)

| Time | Speaker | Affiliation | Talk Title |
|---|---|---|---|
| 8:30 AM | Session 5 | chair: Sara Pollock | |
| | Osman Malik | Lawrence Berkeley National Laboratory | Recent advances in sampling-based methods for tensor decomposition |
| | Carmeliza Navasca | University of Alabama at Birmingham | Sampling Methods for the Canonical Polyadic Decomposition |
| | Linjian Ma | University of Illinois at Urbana-Champaign | Efficient tensor network contraction algorithms |
| | Vivek Bharadwaj | UC Berkeley | Faster Implicit Leverage Sampling Algorithms for CP and Tensor-Train Decomposition |
| 10:00 AM | break | | |
| 10:30 AM | Session 6 | chair: Vishwas Rao | |
| | Alex Gittens | Rensselaer Polytechnic Institute | Faster Structured Tensor Decompositions via Sketching |
| | Teresa Ranadive | Laboratory for Physical Sciences | Distributed Large-Scale All-at-Once Count Tensor Decomposition |
| | Eric Phipps | Sandia National Laboratories | Streaming Generalized Canonical Polyadic Tensor Decompositions |
| | Akwum Onwunta | Lehigh University | Tensor Train Approach to PDE-Constrained Optimization under Uncertainty |
| 12:00 PM | lunch | | |
| 1:30 PM | Session 7 | chair: Piotr Luszczek | |
| | Matt Fishman | Flatiron Institute | Convenient development of general tensor network algorithms with ITensor |
| | Avery Laird | University of Toronto | Automatically Translating Sparse Codes |
| | Paul Kielstra | UC Berkeley | Tensor Butterfly Factorization (In Parallel!) |
| | Mit Kotak | Massachusetts Institute of Technology | Optimizing Equivariant Tensor Products — the Computational Bottleneck of Symmetry-Equivariant Neural Networks |
| 3:00 PM | end | | |