Added keypoint-moseq slides and finishing touches (#19)
* WIP started making keypoint-moseq slides

* tweaked schedule

* updated menti code

* added quote from datta lab

* added more keypoint-moseq slides

* added a poll about pose estimation

* moved keypoint-moseq slides to separate qmd file
niksirbi authored Oct 3, 2024
1 parent 7e0bc95 commit 748a0a4
Showing 15 changed files with 144 additions and 24 deletions.
Binary file added img/all_trajectories.gif
Binary file added img/allocentric-poses.png
Binary file added img/depth-moseq-diagram.png
Binary file added img/depth-moseq-syllables.png
Binary file added img/depth-moseq.gif
Binary file added img/depth-vs-keypoint-moseq.png
Binary file added img/egocentric-alignment.png
Binary file added img/keypoint-errors.png
Binary file added img/keypoint-jitter.png
Binary file added img/keypoint-moseq-modeling.png
Binary file added img/keypoint-moseq-paper.png
Binary file added img/moseq-model-diagrams.png
34 changes: 11 additions & 23 deletions index.qmd
@@ -54,14 +54,18 @@ links:
dropbox: "https://tinyurl.com/behav-analysis-course-data"
menti: "https://www.menti.com/"
menti-link: "https://www.menti.com/aldg47maopsr"
menti-code: "`5804 2245`"
menti-code: "`5524 7145`"
papers:
neuro-needs-behav-title: "Neuroscience Needs Behavior: Correcting a Reductionist Bias"
neuro-needs-behav-doi: "https://www.sciencedirect.com/science/article/pii/S0896627316310406"
quant-behav-title: "Quantifying behavior to understand the brain"
quant-behav-doi: "https://www.nature.com/articles/s41593-020-00734-z"
open-source-title: "Open-source tools for behavioral video analysis: Setup, methods, and best practices"
open-source-doi: "https://elifesciences.org/articles/79305"
moseq-title: "Mapping Sub-Second Structure in Mouse Behavior"
moseq-doi: "https://doi.org/10.1016/j.neuron.2015.11.031"
keypoint-moseq-title: "Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics"
keypoint-moseq-doi: "https://doi.org/10.1038/s41592-024-02318-2"
---

# Introductions
@@ -114,10 +118,10 @@ Sofía Miñano

| Time | Topic | Goals |
|----|-------|--------|
| 14:00 - 15:30 | Practice: movement | Load pose tracks into Python, clean and visualise data, compute kinematics |
| 15:30 - 15:45 | Coffee break | |
| 15:45 - 16:30 | Theory: From behaviour to actions | Classifying behaviours, action segmentation |
| 16:30 - 17:30 | Practice: keypoint-moseq | Extract behavioural syllables |
| 14:00 - 15:45 | Practice: movement | Load pose tracks into Python, clean and visualise data, compute kinematics |
| 15:45 - 16:00 | Coffee break | |
| 16:00 - 16:30 | Theory: From behaviour to actions | Approaches to action segmentation |
| 16:30 - 17:30 | Demo: Keypoint-MoSeq | Extract behavioural syllables |

: {.striped}

Expand Down Expand Up @@ -149,24 +153,8 @@ If you don't have them, you can create them as follows:
# Coffee break ☕ {background-color="#1E1E1E"}

{{< include slides/from_behaviour_to_actions.qmd >}}

# Practice: keypoint-moseq {background-color="#03A062"}

## `keypoint-moseq` intro {.smaller}

Insert a brief description of the tool here.

## Time to play 🛝 with `keypoint-moseq`

::: {.incremental}
- Navigate to the same repository you cloned earlier `cd course-behavioural-analysis/notebooks`
- open the `EPM_syllables.ipynb` notebook
- select the environment `keypoint_moseq` as the kernel
:::

::: {.fragment}
We will go through the notebook step-by-step, together.
:::

{{< include slides/keypoint_moseq.qmd >}}

# Feedback

120 changes: 120 additions & 0 deletions slides/keypoint_moseq.qmd
@@ -0,0 +1,120 @@
# Demo: Keypoint-MoSeq {background-color="#03A062"}

## Motion Sequencing {.smaller}

::: {layout="[[1,1,2]]"}

![Depth video recordings](img/depth-moseq.gif){fig-align="center" style="text-align: center" height="225px"}

![AR-HMM](img/depth-moseq-diagram.png){fig-align="center" style="text-align: center" height="225px"}

![depth-MoSeq](img/depth-moseq-syllables.png){fig-align="center" style="text-align: center" height="225px"}

:::

::: {.incremental}
- Timescale is controlled by the `kappa` parameter
- Higher `kappa` → higher P(self-transition) → "stickier" states → longer syllables
:::
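The bullet above can be made concrete with a toy sticky transition matrix (a sketch only; the actual MoSeq models place `kappa` inside a hierarchical Dirichlet prior rather than on raw counts, and the function names below are hypothetical):

```python
import numpy as np

def sticky_transition_matrix(n_states, kappa, seed=0):
    """Toy sticky-HMM transition matrix: `kappa` is added to the
    diagonal pseudo-counts before row-normalisation."""
    rng = np.random.default_rng(seed)
    counts = rng.gamma(1.0, size=(n_states, n_states))
    counts[np.diag_indices(n_states)] += kappa
    return counts / counts.sum(axis=1, keepdims=True)

def expected_duration(T):
    """Expected dwell time in frames, assuming geometric state
    durations: 1 / (1 - P(self-transition))."""
    return 1.0 / (1.0 - np.diag(T))

for kappa in [0, 10, 100]:
    T = sticky_transition_matrix(5, kappa)
    print(f"kappa={kappa:>3}: mean syllable length "
          f"~ {expected_duration(T).mean():.1f} frames")
```

Increasing `kappa` raises every diagonal entry, so the expected syllable duration grows monotonically.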

::: aside
source: [{{< meta papers.moseq-title >}}]({{< meta papers.moseq-doi >}})
:::

## Keypoint-MoSeq {.smaller}

Can we apply MoSeq to keypoint data (predicted poses)?

![](img/depth-vs-keypoint-moseq.png){fig-align="center" height="350px"}

::: aside
source: [{{< meta papers.keypoint-moseq-title >}}]({{< meta papers.keypoint-moseq-doi >}})
:::

## Problems with keypoint data {.smaller}

::::: {.columns}

:::: {.column width="70%"}
![](img/keypoint-errors.png){width="600px"}

![](img/keypoint-jitter.png){width="610px"}
::::

:::: {.column width="30%"}

::: {.incremental}
- Keypoint noise leads to artifactual syllables
- We should somehow isolate true pose from noise
- But smoothing also blurs syllable boundaries
:::
::::

:::::
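The trade-off in the last two bullets can be illustrated with a toy signal (an illustration only, not part of the slides' pipeline): a naive moving average suppresses keypoint jitter but also spreads the abrupt change at a syllable boundary over several frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy keypoint coordinate: jumps at frame 50 (a "syllable boundary"),
# corrupted by tracking jitter
true = np.concatenate([np.zeros(50), np.ones(50)])
noisy = true + 0.2 * rng.normal(size=100)

# Naive denoising: 9-frame moving average
smooth = np.convolve(noisy, np.ones(9) / 9, mode="same")

def jitter(signal):
    """Residual noise far away from the jump."""
    return np.std(signal[10:40] - true[10:40])

def sharpness(signal):
    """Largest frame-to-frame change, i.e. how abrupt the boundary is."""
    return np.max(np.abs(np.diff(signal)))

print(f"jitter:    {jitter(noisy):.3f} -> {jitter(smooth):.3f}")
print(f"sharpness: {sharpness(noisy):.2f} -> {sharpness(smooth):.2f}")
```

Both numbers drop after smoothing: less jitter, but also a blurrier syllable boundary.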

::: aside
source: [{{< meta papers.keypoint-moseq-title >}}]({{< meta papers.keypoint-moseq-doi >}})
:::


## Solution: a more complex model {.smaller}

**Switching Linear Dynamical System (SLDS):** combines noise removal and action segmentation in a single probabilistic model

:::: {layout="[[1,1]]"}
![](img/moseq-model-diagrams.png)

::: {.r-stack}
![](img/allocentric-poses.png){.fragment}

![](img/egocentric-alignment.png){.fragment}

![](img/keypoint-moseq-modeling.png){.fragment}
:::

::::
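A minimal generative sketch of an SLDS (illustrative only; keypoint-MoSeq's actual model uses an autoregressive prior on pose trajectories and a richer keypoint observation model): a discrete syllable state follows a sticky Markov chain, a continuous latent pose follows state-specific linear dynamics, and keypoints are observed from the latent pose with noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n_states, dim, n_frames = 3, 2, 200

# Sticky transition matrix over discrete syllable states
trans = np.full((n_states, n_states), 0.02)
trans[np.diag_indices(n_states)] = 1 - 0.02 * (n_states - 1)

# State-specific linear dynamics: scaled random rotations (stable, since |A| < 1)
A = [0.95 * np.linalg.qr(rng.normal(size=(dim, dim)))[0] for _ in range(n_states)]

z = np.zeros(n_frames, dtype=int)   # discrete syllable sequence
x = np.zeros((n_frames, dim))       # continuous latent pose
y = np.zeros((n_frames, dim))       # observed (noisy) keypoint position
x[0] = rng.normal(size=dim)
for t in range(1, n_frames):
    z[t] = rng.choice(n_states, p=trans[z[t - 1]])          # syllable switching
    x[t] = A[z[t]] @ x[t - 1] + 0.1 * rng.normal(size=dim)  # latent pose dynamics
    y[t] = x[t] + 0.3 * rng.normal(size=dim)                # observation noise
```

Inference inverts this process: given only `y`, the model jointly recovers the denoised pose `x` and the syllable labels `z`, which is why noise removal and segmentation are not separate steps.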


## Keypoint-MoSeq drawbacks

::: {.incremental}
- probabilistic output
  - stochasticity of output syllables
  - must fit an ensemble of models and take a "consensus"
- limited to describing behaviour at a single time-scale
  - but can be adapted by tuning `kappa`
- may miss rare behaviours (not often seen in training data)
:::

## Let's look at some syllables {.smaller}

We've trained a keypoint-MoSeq model on 10 videos from the elevated plus maze (EPM) dataset.

```{.bash code-line-numbers="false"}
mouse-EPM/
├── derivatives
│   └── software-kptmoseq_n-10_project
└── rawdata
```

::: {.fragment}
![](img/all_trajectories.gif){fig-align="center" height="400px"}
:::

::: aside
The model was trained using the [EPM_train_keypoint_moseq.ipynb]({{< meta links.gh-repo >}}/blob/main/notebooks/EPM_train_keypoint_moseq.ipynb) notebook in the course's
[GitHub repository]({{< meta links.gh-repo >}}).
:::


## Time to play 🛝 with Keypoint-MoSeq

We will use the trained model to extract syllables from a new video.

::: {.fragment}
- Navigate to the notebooks folder of the repository you cloned earlier: `cd course-behavioural-analysis/notebooks`
- Open the `EPM_syllables.ipynb` notebook
- Select the `keypoint_moseq` environment as the kernel

We will go through the notebook step-by-step, together.
:::
14 changes: 13 additions & 1 deletion slides/quantifying_behaviour.qmd
@@ -32,7 +32,7 @@ source: [{{< meta papers.neuro-needs-behav-title >}}]({{< meta papers.neuro-need
{{< include slides/go_to_menti.qmd >}}


## Neuroscience needs behaviour {.smaller}
## Neuroscience needs behaviour 1/2 {.smaller}

> ...detailed examination of brain parts or their selective perturbation is not sufficient to understand how the brain generates behavior
@@ -46,6 +46,14 @@ source: [{{< meta papers.neuro-needs-behav-title >}}]({{< meta papers.neuro-need
source: [{{< meta papers.neuro-needs-behav-title >}}]({{< meta papers.neuro-needs-behav-doi >}})
:::

## Neuroscience needs behaviour 2/2 {.smaller .scrollable}

> The Datta lab **embraces the perspective of the ethologists: if we are to understand how the brain works, we need to think about the actual problems it evolved to solve**. Addressing this challenge means studying natural behavior — the kinds of behaviors generated by animals when they are free to act on their own internally-generated goals without physical or psychological restraint…really, the kinds of behaviors you see when you watch lions in the wild, mice in a home cage, or humans at the mall. Importantly, when one observes animals expressing spontaneous, self-generated behavior, it is clear that much of what they are doing is exploring the world — using movement to sense what is out there, and taking advantage of sensation to inform future movements. Answering the question — how does the brain give rise to natural behavior? — therefore requires understanding how sensory and motor systems are usefully intertwined to support cognition.

::: aside
source: [Datta Lab website](http://datta.hms.harvard.edu/research/overview/)
:::


## Quantifying behaviour: ethogram {.smaller}

@@ -189,6 +197,10 @@ Many others:
...
:::

## Poll: are you familiar with pose estimation? {.smaller}

{{< include slides/go_to_menti.qmd >}}

## Multi-animal part grouping {.smaller}

::: {.r-stack}
