Clear all CI checks on all platforms #402

Merged · 38 commits · Sep 20, 2024
Changes from 26 commits

Commits
9637fb4
Apply all safe ruff fixes
glopesdev Sep 6, 2024
97389bd
Black formatting
glopesdev Sep 6, 2024
e60b766
Move top-level linter settings to lint section
glopesdev Sep 6, 2024
d2a5104
Ignore missing docs in __init__ and magic methods
glopesdev Sep 6, 2024
743324a
Apply ruff recommendations to low-level API
glopesdev Sep 6, 2024
257d9cd
Ignore missing docs for module, package and tests
glopesdev Sep 6, 2024
e4fe028
Ignore missing docs for schema classes and streams
glopesdev Sep 6, 2024
0927a9a
Apply more ruff recommendations to low-level API
glopesdev Sep 6, 2024
be5d3c1
Update pre-commit-config
lochhh Apr 26, 2024
38aeeba
Remove black dependency
lochhh Apr 26, 2024
53e88c2
Temporarily disable ruff and pyright in pre-commit
lochhh Sep 12, 2024
6798e07
Auto-fix mixed lined endings and trailing whitespace
lochhh Sep 12, 2024
cc9c924
Ruff autofix
lochhh Sep 12, 2024
f225406
Fix D103 Missing docstring in public function
lochhh Sep 12, 2024
c7e74f7
Fix D415 First line should end with a period, question mark, or excla…
lochhh Sep 12, 2024
d9d0287
Ignore deprecated PT004
lochhh Sep 12, 2024
c866ab9
Fix D417 Missing argument description in the docstring
lochhh Sep 12, 2024
75af9b8
Ignore E741 check for `h, l, s` assignment
lochhh Sep 12, 2024
12abfcb
Use redundant import alias as suggested in F401
lochhh Sep 12, 2024
a5d88a2
Re-enable ruff in pre-commit
lochhh Sep 12, 2024
7fe4837
Re-enable pyright in pre-commit
lochhh Sep 12, 2024
b0714ab
Configure ruff to ignore .ipynb files
lochhh Sep 12, 2024
68e4344
Remove ruff `--config` in build_env_run_tests workflow
lochhh Sep 12, 2024
038f118
Merge pull request #409 from SainsburyWellcomeCentre/lint-format
glopesdev Sep 13, 2024
9c9a88b
Merge remote-tracking branch 'origin/main' into gl-ruff-check
glopesdev Sep 18, 2024
df20e9f
Apply remaining ruff recommendations
glopesdev Sep 18, 2024
6e64c83
Exclude venv folder from pyright checks
glopesdev Sep 19, 2024
8d0c03f
Remove obsolete and unused qc module
glopesdev Sep 19, 2024
97bc21c
Apply pyright recommendations
glopesdev Sep 19, 2024
6bacc43
Disable useLibraryCodeForTypes
glopesdev Sep 19, 2024
d1180a8
Remove unused function call
glopesdev Sep 19, 2024
23c440f
Ensure all roots are Path objects
glopesdev Sep 19, 2024
5dfd4a4
Exclude dj_pipeline tests from online CI
glopesdev Sep 19, 2024
f557c48
Exclude dj_pipeline tests from coverage report
glopesdev Sep 19, 2024
81bbfa1
Fix macOS wheel build for `datajoint` (Issue #249) (#406)
MilagrosMarin Sep 20, 2024
a678b8d
Run CI checks using pip env and pyproject.toml
glopesdev Sep 20, 2024
2107691
Run code checks and tests on all platforms
glopesdev Sep 20, 2024
1de5c25
Activate venv for later steps and remove all conda dependencies (#413)
lochhh Sep 20, 2024
2 changes: 1 addition & 1 deletion .github/workflows/build_env_run_tests.yml
@@ -65,7 +65,7 @@ jobs:
# Only run codebase checks and tests for ubuntu.
- name: ruff
if: matrix.os == 'ubuntu-latest'
- run: python -m ruff check --config ./pyproject.toml .
+ run: python -m ruff check .
- name: pyright
if: matrix.os == 'ubuntu-latest'
run: python -m pyright --level error --project ./pyproject.toml .
21 changes: 7 additions & 14 deletions .pre-commit-config.yaml
@@ -1,16 +1,12 @@
# For info on running pre-commit manually, see `pre-commit run --help`

default_language_version:
python: python3.11

files: "^(test|aeon)\/.*$"
repos:
- repo: meta
hooks:
- id: identity

- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.4.0
+ rev: v4.6.0
hooks:
- id: check-json
- id: check-yaml
@@ -25,20 +21,17 @@ repos:
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]

- repo: https://github.com/psf/black
rev: 23.7.0
hooks:
- id: black
args: [--check, --config, ./pyproject.toml]

- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.286
+ rev: v0.6.4
hooks:
# Run the linter with the `--fix` flag.
- id: ruff
- args: [--config, ./pyproject.toml]
+ args: [ --fix ]
# Run the formatter.
- id: ruff-format

- repo: https://github.com/RobertCraigie/pyright-python
- rev: v1.1.324
+ rev: v1.1.380
hooks:
- id: pyright
args: [--level, error, --project, ./pyproject.toml]
2 changes: 1 addition & 1 deletion aeon/README.md
@@ -1 +1 @@
- #
+ #
4 changes: 2 additions & 2 deletions aeon/__init__.py
@@ -9,5 +9,5 @@
finally:
del version, PackageNotFoundError

- # Set functions avaialable directly under the 'aeon' top-level namespace
- from aeon.io.api import load
+ # Set functions available directly under the 'aeon' top-level namespace
+ from aeon.io.api import load as load  # noqa: PLC0414
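A note on the idiom above: re-importing a name as itself (`load as load`) marks the import as an intentional re-export, so ruff's F401 (unused import) check accepts it, while the inline `noqa` silences PLC0414 (useless import alias). A minimal sketch of the `__all__`-based alternative, shown only for comparison and not what the PR uses:

```python
# Sketch of an equivalent re-export: declaring the name in __all__ also
# marks the import as part of the public API, so linters no longer
# report it as unused (F401).
from aeon.io.api import load

__all__ = ["load"]
```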
1 change: 0 additions & 1 deletion aeon/analysis/__init__.py
@@ -1 +0,0 @@
- #
26 changes: 5 additions & 21 deletions aeon/analysis/block_plotting.py
@@ -1,17 +1,7 @@
- import os
- import pathlib
from colorsys import hls_to_rgb, rgb_to_hls
- from contextlib import contextmanager
- from pathlib import Path

- import matplotlib.pyplot as plt
import numpy as np
- import pandas as pd
- import plotly
- import plotly.express as px
- import plotly.graph_objs as go
- import seaborn as sns
- from numpy.lib.stride_tricks import as_strided

"""Standardize subject colors, patch colors, and markers."""

@@ -35,27 +25,21 @@
"star",
]
patch_markers_symbols = ["●", "⧓", "■", "⧗", "♦", "✖", "×", "▲", "★"]
- patch_markers_dict = {
-     marker: symbol for marker, symbol in zip(patch_markers, patch_markers_symbols)
- }
+ patch_markers_dict = dict(zip(patch_markers, patch_markers_symbols, strict=False))
patch_markers_linestyles = ["solid", "dash", "dot", "dashdot", "longdashdot"]


def gen_hex_grad(hex_col, vals, min_l=0.3):
"""Generates an array of hex color values based on a gradient defined by unit-normalized values."""
# Convert hex to rgb to hls
- h, l, s = rgb_to_hls(
-     *[int(hex_col.lstrip("#")[i: i + 2], 16) / 255 for i in (0, 2, 4)]
- )
+ h, l, s = rgb_to_hls(*[int(hex_col.lstrip("#")[i : i + 2], 16) / 255 for i in (0, 2, 4)])  # noqa: E741
grad = np.empty(shape=(len(vals),), dtype="<U10") # init grad
for i, val in enumerate(vals):
- cur_l = (l * val) + (
-     min_l * (1 - val)
- ) # get cur lightness relative to `hex_col`
+ cur_l = (l * val) + (min_l * (1 - val))  # get cur lightness relative to `hex_col`
cur_l = max(min(cur_l, l), min_l) # set min, max bounds
cur_rgb_col = hls_to_rgb(h, cur_l, s) # convert to rgb
cur_hex_col = "#%02x%02x%02x" % tuple(
int(c * 255) for c in cur_rgb_col
cur_hex_col = "#{:02x}{:02x}{:02x}".format(
*tuple(int(c * 255) for c in cur_rgb_col)
) # convert to hex
grad[i] = cur_hex_col

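For context, a small usage sketch of `gen_hex_grad` as reformatted above (assumes `aeon.analysis.block_plotting` imports cleanly; the base colour is arbitrary):

```python
import numpy as np

from aeon.analysis.block_plotting import gen_hex_grad

# Five gradient steps: values near 0 are clamped toward the minimum
# lightness `min_l`, values near 1 approach the lightness of the base colour.
vals = np.linspace(0, 1, 5)
grad = gen_hex_grad("#1f77b4", vals)
print(grad)  # numpy array of five hex colour strings (dtype '<U10')
```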
18 changes: 11 additions & 7 deletions aeon/analysis/movies.py
@@ -8,8 +8,7 @@


def gridframes(frames, width, height, shape=None):
"""Arranges a set of frames into a grid layout with the specified
pixel dimensions and shape.
"""Arranges a set of frames into a grid layout with the specified pixel dimensions and shape.

:param list frames: A list of frames to include in the grid layout.
:param int width: The width of the output grid image, in pixels.
@@ -65,7 +64,7 @@ def groupframes(frames, n, fun):
i = i + 1


- def triggerclip(data, events, before=pd.Timedelta(0), after=pd.Timedelta(0)):
+ def triggerclip(data, events, before=None, after=None):
"""Split video data around the specified sequence of event timestamps.

:param DataFrame data:
@@ -76,10 +75,16 @@ def triggerclip(data, events, before=pd.Timedelta(0), after=pd.Timedelta(0)):
:return:
A pandas DataFrame containing the frames, clip and sequence numbers for each event timestamp.
"""
- if before is not pd.Timedelta:
+ if before is None:
+     before = pd.Timedelta(0)
+ elif before is not pd.Timedelta:
      before = pd.Timedelta(before)
- if after is not pd.Timedelta:
+
+ if after is None:
+     after = pd.Timedelta(0)
+ elif after is not pd.Timedelta:
      after = pd.Timedelta(after)

if events is not pd.Index:
events = events.index
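The change above swaps the `pd.Timedelta(0)` defaults for `None` sentinels resolved inside the function, so no pandas objects are constructed at import time. A minimal sketch of the coercion this implies; note the sketch uses an `isinstance` check, whereas the diff keeps the original `is not pd.Timedelta` identity comparison:

```python
import pandas as pd


def _as_timedelta(value):
    """Coerce a clip offset to a Timedelta, treating None as zero (sketch only)."""
    if value is None:
        return pd.Timedelta(0)
    if not isinstance(value, pd.Timedelta):
        # Accepts strings, integers (nanoseconds) or datetime.timedelta values.
        return pd.Timedelta(value)
    return value


print(_as_timedelta(None))  # Timedelta('0 days 00:00:00')
print(_as_timedelta("2s"))  # Timedelta('0 days 00:00:02')
```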

@@ -107,8 +112,7 @@ def collatemovie(clipdata, fun):


def gridmovie(clipdata, width, height, shape=None):
"""Collates a set of video clips into a grid movie with the specified pixel dimensions
and grid layout.
"""Collates a set of video clips into a grid movie with the specified pixel dimensions and grid layout.

:param DataFrame clipdata:
A pandas DataFrame where each row specifies video path, frame number, clip and sequence number.
17 changes: 11 additions & 6 deletions aeon/analysis/plotting.py
@@ -6,7 +6,7 @@
from matplotlib import colors
from matplotlib.collections import LineCollection

- from aeon.analysis.utils import *
+ from aeon.analysis.utils import rate, sessiontime


def heatmap(position, frequency, ax=None, **kwargs):
@@ -60,16 +60,17 @@ def rateplot(
ax=None,
**kwargs,
):
"""Plot the continuous event rate and raster of a discrete event sequence, given the specified
window size and sampling frequency.
"""Plot the continuous event rate and raster of a discrete event sequence.

The window size and sampling frequency can be specified.

:param Series events: The discrete sequence of events.
:param offset window: The time period of each window used to compute the rate.
:param DateOffset, Timedelta or str frequency: The sampling frequency for the continuous rate.
:param number, optional weight: A weight used to scale the continuous rate of each window.
:param datetime, optional start: The left bound of the time range for the continuous rate.
:param datetime, optional end: The right bound of the time range for the continuous rate.
- :param datetime, optional smooth: The size of the smoothing kernel applied to the continuous rate output.
+ :param datetime, optional smooth: The size of the smoothing kernel applied to the rate output.
:param DateOffset, Timedelta or str, optional smooth:
The size of the smoothing kernel applied to the continuous rate output.
:param bool, optional center: Specifies whether to center the convolution kernels.
@@ -108,8 +109,8 @@ def colorline(
x,
y,
z=None,
- cmap=plt.get_cmap("copper"),
- norm=plt.Normalize(0.0, 1.0),
+ cmap=None,
+ norm=None,
ax=None,
**kwargs,
):
@@ -128,6 +129,10 @@
ax = plt.gca()
if z is None:
z = np.linspace(0.0, 1.0, len(x))
+ if cmap is None:
+     cmap = plt.get_cmap("copper")
+ if norm is None:
+     norm = plt.Normalize(0.0, 1.0)
z = np.asarray(z)
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
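The `cmap`/`norm` change above applies the late-binding default pattern: a default like `plt.get_cmap("copper")` in the signature is evaluated once at import time and shared by every call. A self-contained sketch of the pattern (the function name is hypothetical):

```python
import matplotlib.pyplot as plt
import numpy as np


def shaded_colours(x, cmap=None, norm=None):
    """Sketch: resolve heavyweight defaults at call time, not import time."""
    if cmap is None:
        cmap = plt.get_cmap("copper")
    if norm is None:
        norm = plt.Normalize(0.0, 1.0)
    z = np.linspace(0.0, 1.0, len(x))
    return cmap(norm(z))  # one RGBA colour per point


colours = shaded_colours(np.arange(10))
```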
2 changes: 1 addition & 1 deletion aeon/analysis/readme.md
@@ -1 +1 @@
- #
+ #
22 changes: 13 additions & 9 deletions aeon/analysis/utils.py
@@ -3,8 +3,9 @@


def distancetravelled(angle, radius=4.0):
"""Calculates the total distance travelled on the wheel, by taking into account
its radius and the total number of turns in both directions across time.
"""Calculates the total distance travelled on the wheel.

Takes into account the wheel radius and the total number of turns in both directions across time.

:param Series angle: A series of magnetic encoder measurements.
:param float radius: The radius of the wheel, in metric units.
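A short usage sketch of `distancetravelled` with hypothetical data (the sketch assumes the encoder angle series is in radians):

```python
import numpy as np
import pandas as pd

from aeon.analysis.utils import distancetravelled

# Hypothetical encoder trace covering one full turn of the wheel.
angle = pd.Series(np.linspace(0, 2 * np.pi, 100))
distance = distancetravelled(angle, radius=4.0)
```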
@@ -22,10 +23,11 @@ def visits(data, onset="Enter", offset="Exit"):


def visits(data, onset="Enter", offset="Exit"):
"""Computes duration, onset and offset times from paired events. Allows for missing data
by trying to match event onset times with subsequent offset times. If the match fails,
event offset metadata is filled with NaN. Any additional metadata columns in the data
frame will be paired and included in the output.
"""Computes duration, onset and offset times from paired events.

Allows for missing data by trying to match event onset times with subsequent offset times.
If the match fails, event offset metadata is filled with NaN. Any additional metadata columns
in the data frame will be paired and included in the output.

:param DataFrame data: A pandas data frame containing visit onset and offset events.
:param str, optional onset: The label used to identify event onsets.
@@ -69,16 +71,17 @@ def visits(data, onset="Enter", offset="Exit"):


def rate(events, window, frequency, weight=1, start=None, end=None, smooth=None, center=False):
"""Computes the continuous event rate from a discrete event sequence, given the specified
window size and sampling frequency.
"""Computes the continuous event rate from a discrete event sequence.

The window size and sampling frequency can be specified.

:param Series events: The discrete sequence of events.
:param offset window: The time period of each window used to compute the rate.
:param DateOffset, Timedelta or str frequency: The sampling frequency for the continuous rate.
:param number, optional weight: A weight used to scale the continuous rate of each window.
:param datetime, optional start: The left bound of the time range for the continuous rate.
:param datetime, optional end: The right bound of the time range for the continuous rate.
- :param datetime, optional smooth: The size of the smoothing kernel applied to the continuous rate output.
+ :param datetime, optional smooth: The size of the smoothing kernel applied to the rate output.
:param DateOffset, Timedelta or str, optional smooth:
The size of the smoothing kernel applied to the continuous rate output.
:param bool, optional center: Specifies whether to center the convolution kernels.
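A usage sketch of `rate` with hypothetical event data; the call mirrors the parameter types documented above:

```python
import pandas as pd

from aeon.analysis.utils import rate

# Two hypothetical events, resampled to a continuous 1-second rate
# computed over a 10-second window.
events = pd.Series(
    1.0, index=pd.to_datetime(["2024-01-01 00:00:01", "2024-01-01 00:00:03"])
)
continuous = rate(events, window=pd.Timedelta("10s"), frequency="1s")
```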
@@ -98,6 +101,7 @@ def rate(events, window, frequency, weight=1, start=None, end=None, smooth=None,
def get_events_rates(
events, window_len_sec, frequency, unit_len_sec=60, start=None, end=None, smooth=None, center=False
):
"""Computes the event rate from a sequence of events over a specified window."""
# events is an array with the time (in seconds) of event occurence
# window_len_sec is the size of the window over which the event rate is estimated
# unit_len_sec is the length of one sample point
5 changes: 3 additions & 2 deletions aeon/dj_pipeline/__init__.py
@@ -31,9 +31,10 @@ def dict_to_uuid(key) -> uuid.UUID:


def fetch_stream(query, drop_pk=True):
"""
"""Fetches data from a Stream table based on a query and returns it as a DataFrame.

Provided a query containing data from a Stream table,
fetch and aggregate the data into one DataFrame indexed by "time"
fetch and aggregate the data into one DataFrame indexed by "time"
"""
df = (query & "sample_count > 0").fetch(format="frame").reset_index()
cols2explode = [
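A usage sketch of `fetch_stream`; the stream table name is hypothetical and a real call needs a configured DataJoint connection to the experiment database:

```python
from aeon.dj_pipeline import fetch_stream
from aeon.dj_pipeline import streams  # assumes the streams module is importable here

# Hypothetical stream table and restriction.
query = streams.SomeDeviceStream & {"experiment_name": "exp0"}
df = fetch_stream(query)  # DataFrame indexed by "time", primary-key columns dropped
```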
18 changes: 8 additions & 10 deletions aeon/dj_pipeline/acquisition.py
@@ -1,18 +1,18 @@
import datetime
+ import json
import pathlib
import re

import datajoint as dj
import numpy as np
import pandas as pd
- import json

- from aeon.io import api as io_api
- from aeon.schema import schemas as aeon_schemas
- from aeon.io import reader as io_reader
- from aeon.analysis import utils as analysis_utils

from aeon.dj_pipeline import get_schema_name, lab, subject
from aeon.dj_pipeline.utils import paths
+ from aeon.io import api as io_api
+ from aeon.io import reader as io_reader
+ from aeon.schema import schemas as aeon_schemas

logger = dj.logger
schema = dj.schema(get_schema_name("acquisition"))
@@ -181,7 +181,7 @@ class Epoch(dj.Manual):

@classmethod
def ingest_epochs(cls, experiment_name):
"""Ingest epochs for the specified "experiment_name" """
"""Ingest epochs for the specified ``experiment_name``."""
device_name = _ref_device_mapping.get(experiment_name, "CameraTop")

all_chunks, raw_data_dirs = _get_all_chunks(experiment_name, device_name)
@@ -475,7 +475,7 @@ class MessageLog(dj.Part):
-> master
---
sample_count: int # number of data points acquired from this stream for a given chunk
- timestamps: longblob # (datetime)
+ timestamps: longblob # (datetime)
priority: longblob
type: longblob
message: longblob
@@ -604,9 +604,7 @@ def _match_experiment_directory(experiment_name, path, directories):


def create_chunk_restriction(experiment_name, start_time, end_time):
"""
Create a time restriction string for the chunks between the specified "start" and "end" times
"""
"""Create a time restriction string for the chunks between the specified "start" and "end" times."""
start_restriction = f'"{start_time}" BETWEEN chunk_start AND chunk_end'
end_restriction = f'"{end_time}" BETWEEN chunk_start AND chunk_end'
start_query = Chunk & {"experiment_name": experiment_name} & start_restriction
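A usage sketch of `create_chunk_restriction`, combining its result with the `Chunk` table in the way the function's own body suggests (experiment name and times are hypothetical):

```python
from aeon.dj_pipeline import acquisition

# Hypothetical experiment name and time window.
restriction = acquisition.create_chunk_restriction(
    "exp0", "2024-01-01 08:00:00", "2024-01-01 12:00:00"
)
chunks = acquisition.Chunk & {"experiment_name": "exp0"} & restriction
```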