Alex Williams


I'm a postdoc in Statistics at Stanford University working with Scott Linderman's research group on developing neural data analysis methods. My work is currently supported by the Wu Tsai Neurosciences Institute and the NIH BRAIN Initiative (1F32MH122998-01).

In 2019, I received a PhD in Neuroscience from Stanford, supervised by Surya Ganguli and funded by the DOE Computational Science Graduate Fellowship Program. Before that, I spent one year at the Salk Institute with Terry Sejnowski and two years at Brandeis University with Eve Marder and Tim O'Leary. I received my undergraduate degree from Bowdoin College, advised by Patsy Dickinson.

Some Current Research Interests

  • Unsupervised learning methods for large-scale, multi-trial neural data
    A common paradigm in systems neuroscience is to simultaneously record the activity of many neurons over repeated experimental trials (e.g., multiple presentations of a sensory stimulus, or a repeated motor action). The resulting datasets can be very large, potentially containing thousands of neurons over thousands of trials. I'm interested in finding general-purpose statistical approaches for understanding datasets of this form.
    • Tensor Decomposition — In collaboration with experimental groups headed by Krishna Shenoy and Mark Schnitzer, we demonstrate how to extract functional neural subpopulations across learning and changing environments.
    • Time Warping — In collaboration with the Rinberg, Shenoy, and Ölveczky labs, we demonstrate how to discover precisely timed neural spike patterns in the presence of trial-to-trial jitter and other temporal misalignments.
    • Tutorial video — In this talk (given at MIT, 2017) I give a high-level overview of some of the research cited above.
  • Automated segmentation and detection of motifs in high-dimensional time series
    Trial-structured data (described above) are convenient for data analysis — in particular, if neural dynamics on different trials follow a similar (though not identical) trajectory, we can build a model that isolates this structure while discarding the remaining "noise" (for lack of a better term). A more challenging problem is extracting scientifically interpretable features from unstructured neural and behavioral data streams (e.g., during unconstrained natural behaviors). To address this, I have developed models that extract temporal motifs, or "pseudo-trials," from raw time series.
  • Modeling individual variability in biological and model networks
    Comparative analyses are a well-established approach in many fields of biology, including anatomy, ecology, and cellular physiology. Applying similar approaches in systems-level neuroscience is challenging because we often lack a principled mapping between neurons recorded in different subjects (except in important invertebrate model organisms). However, recent progress has shown that we can often extract population-level statistical features from large-scale neural recordings, features that replicate (at least qualitatively) across subjects (see, e.g., Churchland et al., 2012). I'm interested in developing statistical methods that formalize these observations and in using these new tools to perform larger-scale comparative analyses across cohorts containing hundreds of experimental subjects.
    • Universality & Individuality in Artificial Networks — I worked with Niru Maheswaranathan and David Sussillo to study these questions across large populations of trained recurrent neural networks (RNNs). On simple tasks that were amenable to low-dimensional visualization and interpretation, we found that different network architectures (e.g. vanilla RNNs vs. LSTMs) often showed qualitatively similar computational mechanisms (loosely, "universality"). However, we also found differences ("individuality") in the geometric structure of different solutions. We expect that we will need to wrestle with similar outcomes when these comparative analyses are brought to bear on biological data. Also see this talk from David Sussillo for more details.
    • Distance Metrics for Comparing Neural Representations — I have recently been interested in revisiting some of the foundational approaches to quantifying similarity across neurocomputational systems. I describe some of the high-level ideas in the video above, and I have shared a small code package on GitHub. A forthcoming manuscript will provide further details.
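To make the tensor decomposition idea above concrete, here is a minimal NumPy sketch (function names are my own for illustration, not from any published package) that fits a rank-R CP decomposition to a neurons × time × trials array by alternating least squares, yielding per-neuron, per-timepoint, and per-trial factors:

```python
import numpy as np

def khatri_rao(U, V):
    """Columnwise Kronecker (Khatri-Rao) product of U (m x R) and V (n x R)."""
    m, R = U.shape
    n, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

def cp_als(X, rank, n_iter=100, seed=0):
    """Rank-R CP decomposition of a 3-way array X (neurons x time x trials),
    fit by alternating least squares. Returns factor matrices A, B, C with
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # matricize X along each mode (C-order reshapes)
    X0 = X.reshape(I, J * K)                      # [i, j*K + k]
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # [j, i*K + k]
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # [k, i*J + j]
    for _ in range(n_iter):
        # each update is a linear least-squares solve with the others fixed
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In this setting the columns of A, B, and C would correspond to neuron loadings, within-trial temporal profiles, and across-trial amplitudes of each component.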
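The time-warping models cited above fit rich piecewise-linear warps; the toy sketch below (my own illustrative code, not the published model) shows only the simplest case, shift-only alignment, where each trial's rate trace is circularly shifted to best match the trial average:

```python
import numpy as np

def align_by_shift(trials, max_shift):
    """Shift-only alignment of `trials` (n_trials x n_timebins): for each
    trial, grid-search the integer circular shift that minimizes squared
    error against the trial average, then apply it."""
    template = trials.mean(axis=0)
    shifts = np.arange(-max_shift, max_shift + 1)
    aligned = np.empty_like(trials)
    for i, x in enumerate(trials):
        errs = [np.sum((np.roll(x, s) - template) ** 2) for s in shifts]
        aligned[i] = np.roll(x, shifts[np.argmin(errs)])
    return aligned
```

In practice one would alternate between re-estimating the template and the shifts; a single pass is enough to illustrate how jittered responses collapse onto a common timecourse.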
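One standard approach to the motif-extraction problem described above is convolutive NMF, which models a nonnegative units × time matrix as a sum of short spatiotemporal templates convolved with sparse occurrence times. The sketch below uses classic multiplicative updates and is my own simplified illustration, not the code accompanying the papers listed below:

```python
import numpy as np

def shift(M, l):
    """Shift the columns of M to the right by l (zero-padded; negative l shifts left)."""
    out = np.zeros_like(M)
    if l == 0:
        out[:] = M
    elif l > 0:
        out[:, l:] = M[:, :-l]
    else:
        out[:, :l] = M[:, -l:]
    return out

def conv_nmf(X, n_motifs, motif_len, n_iter=200, seed=0, eps=1e-9):
    """Convolutive NMF sketch: approximate nonnegative X (units x time) as
    sum_l W[l] @ shift(H, l), where W holds motif templates and H holds
    motif occurrence times. Fit with multiplicative updates."""
    rng = np.random.default_rng(seed)
    N, T = X.shape
    W = rng.random((motif_len, N, n_motifs))
    H = rng.random((n_motifs, T))
    def reconstruct(W, H):
        return sum(W[l] @ shift(H, l) for l in range(motif_len))
    for _ in range(n_iter):
        Xhat = reconstruct(W, H)
        # update occurrence times, pooling evidence across lags
        num = sum(W[l].T @ shift(X, -l) for l in range(motif_len))
        den = sum(W[l].T @ shift(Xhat, -l) for l in range(motif_len))
        H *= num / (den + eps)
        Xhat = reconstruct(W, H)
        # update each lag of the motif templates
        for l in range(motif_len):
            W[l] *= (X @ shift(H, l).T) / (Xhat @ shift(H, l).T + eps)
    return W, H
```

Each column of H then indicates when a motif occurs, and W[:, :, k] gives the k-th motif's activity pattern across units and time lags.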
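As one concrete instance of a distance metric on neural representations, the sketch below computes the orthogonal Procrustes distance between two response matrices, which is zero exactly when one representation is a rotation or reflection of the other. This is my own minimal illustration, not the interface of the code package mentioned above:

```python
import numpy as np

def procrustes_distance(X, Y):
    """Distance between response matrices X (conditions x neurons_1) and
    Y (conditions x neurons_2), invariant to orthogonal transformations
    of each population's neuron axes."""
    # center each condition set and normalize overall scale
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # min over orthogonal Q of ||X - Y Q||_F has a closed form via the
    # nuclear norm of X^T Y (the orthogonal Procrustes problem)
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))
```

Because the rotation is optimized away in closed form, this quantity can be computed between every pair of networks or animals in a large cohort, giving a distance matrix for downstream comparative analyses.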



Publications

Click here for my Google Scholar profile.
(* denotes co-first authors.)
  • Statistical neuroscience in the single trial limit
  • Williams AH, Linderman SW (2021). arXiv preprint. 2103.05075
  • Point process models for sequence detection in high-dimensional neural spike trains
  • Williams AH, Degleris A, Wang Y, Linderman SW (2020). Neural Information Processing Systems. Vancouver, Canada. (selected for oral presentation)
  • Dynamic and reversible remapping of network representations in an unchanging environment
  • Low IIC, Williams AH, Campbell MG, Linderman SW, Giocomo LM (2020). bioRxiv preprint.
  • Combining tensor decomposition and time warping models for multi-neuronal spike train analysis
  • Williams AH (2020). bioRxiv preprint.
  • Thalamic activity patterns unfolding over multiple time scales predict seizure onset in absence epilepsy
  • Sorokin JM, Williams AH, Ganguli S, Huguenard J (2020). bioRxiv preprint.
  • Discovering precise temporal patterns in large-scale neural recordings through robust and interpretable time warping
  • Williams AH, Poole B, Maheswaranathan N, Dhawale AK, Fisher T, Wilson CD, Brann DH, Trautmann E, Ryu S, Shusterman R, Rinberg D, Ölveczky BP, Shenoy KV, Ganguli S (2020). Neuron. 105(2):246-259.e8
  • Universality and individuality in neural dynamics across large populations of recurrent networks
  • Maheswaranathan N*, Williams AH*, Golub MD, Ganguli S, Sussillo D (2019). Neural Information Processing Systems. Vancouver, Canada. (selected for spotlight talk)
  • Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
  • Maheswaranathan N*, Williams AH*, Golub MD, Ganguli S, Sussillo D (2019). Neural Information Processing Systems. Vancouver, Canada
  • Fast convolutive nonnegative matrix factorization through coordinate and block coordinate updates
  • Degleris A, Antin B, Ganguli S, Williams AH (2019). arXiv preprint. 1907.00139
  • Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience
  • Mackevicius EL*, Bahle AH*, Williams AH, Gu S, Denissenko NI, Goldman MS, Fee MS (2019). eLife. 8:e38471
  • Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor components analysis
  • Williams AH, Kim TH, Wang F, Vyas S, Ryu SI, Shenoy KV, Schnitzer M, Kolda TG, Ganguli S (2018). Neuron. 98(6):1099–1115.e8
  • Dendritic trafficking faces physiologically critical speed-precision tradeoffs
  • Williams AH, O’Donnell C, Sejnowski T, O’Leary T (2016). eLife. 5:e20556
  • Distinct or shared actions of peptide family isoforms: II. Multiple pyrokinins exert similar effects in the lobster stomatogastric nervous system
  • Dickinson PS, Kurland SC, Qu X, Parker BO, Sreekrishnan A, Kwiatkowski MA, Williams AH, Ysasi AB, Christie AE (2015). J Exp Biol. 218:2905-17
  • Summary of the DREAM8 parameter estimation challenge: Toward parameter identification for whole-cell models
  • Karr JR, Williams AH, Zucker JD, Raue A, Steiert B, Timmer J, Kreutz C, DREAM8 Parameter Estimation Challenge Consortium, Wilkinson S, Allgood BA, Bot BM, Hoff BR, Kellen MR, Covert MW, Stolovitzky GA, Meyer P (2015). PLoS Comput Biol. 11(5):e1004096
  • Cell types, network homeostasis and pathological compensation from a biologically plausible ion channel expression model
  • O’Leary T, Williams AH, Franci A, Marder E (2014). Neuron. 82(4):809-21
  • Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations
  • Caplan JS, Williams AH, Marder E (2014). J Neurosci. 34(14):4963-75
  • The neuromuscular transform of the lobster cardiac system explains the opposing effects of a neuromodulator on muscle output
  • Williams AH, Calkins A, O’Leary T, Symonds R, Marder E, Dickinson PS (2013). J Neurosci. 33(42):16565-75
  • Correlations in ion channel expression emerge from homeostatic regulation mechanisms
  • O’Leary T, Williams AH, Caplan JS, Marder E (2013). Proc Natl Acad Sci USA. 110(28):E2645-54
  • Animal-to-animal variability in the phasing of the crustacean cardiac motor pattern: an experimental and computational analysis
  • Williams AH, Kwiatkowski MA, Mortimer AL, Marder E, Zeeman ML, Dickinson PS (2013). J Neurophysiol. 109:2451-65


Review Articles & Book Chapters
  • Neuromodulation in Small Networks
  • Williams AH, Hamood AW, Marder E (2015). Springer Encyclopedia of Computational Neuroscience.
  • Homeostatic Regulation of Neuronal Excitability
  • Williams AH, O’Leary T, Marder E (2015). Scholarpedia. 8(1):1656