While working on #421 I noticed we are currently doing a lot of manipulation and reorganization of the data at the level of the pose reader, specifically running `apply` over the entire data frame to collapse columns into a single dictionary (see `aeon_mecha/aeon/io/reader.py`, lines 364 to 374 at 7812b4f).
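The pattern is roughly the following (a minimal sketch, not the exact code at that permalink; the column names and dictionary layout are assumptions):

```python
import pandas as pd

def collapse_parts(data: pd.DataFrame) -> pd.Series:
    """Reader-level coalescing: one Python call and one dict per row."""
    # Hypothetical columns; the real reader carries per-part coordinates
    # and likelihoods that get folded into a single dictionary column.
    return data.apply(
        lambda row: {
            "x": row["x"],
            "y": row["y"],
            "likelihood": row["part_likelihood"],
        },
        axis=1,  # row-wise apply: a new dict is allocated for every row
    )
```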
This unnecessarily slows down the parsing of the raw data (both through the use of `apply` and the allocation of many dictionaries), especially over long time intervals. Is this really necessary? I think the philosophy for readers should be, as much as possible, to simply load the raw data as-is. This code also scales poorly: performance will degrade the faster we run our cameras and the more possible identities we track.
We could add a flag similar to `downsample` for encoder data if we really wanted to preserve backwards compatibility, but I feel we should instead do this dictionary transformation post-hoc, in a utility function (sketched below).
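As a sketch of what that utility could look like (the function name, column names, and dictionary layout here are hypothetical, not part of the current API):

```python
import pandas as pd

def coalesce_pose(data: pd.DataFrame) -> pd.DataFrame:
    """Collapse per-coordinate columns into a single dict column, post-hoc.

    Meant to run on the already-cropped DataFrame returned by the reader,
    so the per-row work is paid only for the rows the caller asked for.
    """
    out = data[["identity", "part"]].copy()  # assumed passthrough columns
    # Build the dicts in a single pass over plain NumPy arrays instead of
    # a row-wise apply over the whole frame.
    out["position"] = [
        {"x": x, "y": y, "likelihood": p}
        for x, y, p in zip(
            data["x"].to_numpy(),
            data["y"].to_numpy(),
            data["part_likelihood"].to_numpy(),
        )
    ]
    return out
```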
Doing this post-hoc has the added benefit that, when reading in a time range, we only run the coalescing code over the final "cropped" data.
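Usage would then be a two-step read-then-coalesce, something like the following (assuming the existing `aeon.io.api.load` entry point and the hypothetical helper above):

```python
import aeon.io.api as api

# root, pose_reader, start, and end are defined by the caller elsewhere.
# Crop first, coalesce second: the dictionary construction only ever
# touches the rows inside the requested time range.
data = api.load(root, pose_reader, start=start, end=end)
data = coalesce_pose(data)
```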