
Update check install script #157

Open

C-Achard wants to merge 5 commits into main from cy/fix-install-script

Conversation

@C-Achard
Collaborator

This pull request refactors and enhances the benchmarking and installation check scripts for DLC-Live, with a primary focus on supporting both TensorFlow and PyTorch backends in the installation check. The changes improve code structure, add backend-agnostic testing, and include various code style and clarity improvements.

Major enhancements to installation check and backend support

  • Refactored dlclive/check_install/check_install.py to add support for testing both the TensorFlow and PyTorch backends, including backend detection, model export/download, and backend-specific benchmarking via the new run_pytorch_test and run_tensorflow_test functions. The script now iterates over the available backends and runs tests for each, with improved error handling and clearer output.
  • Added logic to ensure temporary directories and model folders are created as needed and cleaned up after testing, with improved handling of missing dependencies and clearer error messages.
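The backend detection and per-backend dispatch described above could be sketched roughly as follows. The helper names run_pytorch_test and run_tensorflow_test come from the PR description; the Engine enum, get_available_backends, and the exact signatures are assumptions for illustration:

```python
from enum import Enum


class Engine(Enum):
    PYTORCH = "pytorch"
    TENSORFLOW = "tensorflow"


def get_available_backends() -> list["Engine"]:
    """Return the backends whose packages can actually be imported."""
    available = []
    try:
        import torch  # noqa: F401
        available.append(Engine.PYTORCH)
    except ImportError:
        pass
    try:
        import tensorflow  # noqa: F401
        available.append(Engine.TENSORFLOW)
    except ImportError:
        pass
    return available


def run_pytorch_test(video_file: str, display: bool = False) -> None:
    """Placeholder; the real helper exports a .pt checkpoint and benchmarks it."""


def run_tensorflow_test(video_file: str, display: bool = False) -> None:
    """Placeholder; the real helper downloads the TF model and benchmarks it."""


def run_installation_checks(video_file: str, display: bool = False) -> list["Engine"]:
    """Run the matching test for every backend that is installed."""
    tests = {
        Engine.PYTORCH: run_pytorch_test,
        Engine.TENSORFLOW: run_tensorflow_test,
    }
    available = get_available_backends()
    if not available:
        raise ImportError("No backend found: install torch and/or tensorflow.")
    for backend in available:
        print(f"Testing {backend.value} backend...")
        tests[backend](video_file, display=display)
    return available
```

Iterating over a detected list rather than hard-coding one backend is what lets the same script serve TensorFlow-only, PyTorch-only, and dual installs.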

Code style, clarity, and maintainability improvements

  • Applied code formatting and line wrapping throughout dlclive/benchmark.py for better readability, including argument formatting, function signatures, and block structure.
  • Improved import organization and added TYPE_CHECKING and noqa comments to avoid unnecessary runtime imports and linter warnings.
  • Made minor function signature and docstring adjustments for consistency and clarity, including argument additions (e.g., single_animal in benchmark_videos).

These changes make the benchmarking and installation check scripts more robust, maintainable, and compatible with multiple deep learning backends.

Refactor check_install.py to support both PyTorch and TensorFlow backends and improve temporary file handling. Introduces TMP_DIR, MODELS_FOLDER, and separate run_pytorch_test/run_tensorflow_test helpers; PyTorch now exports and benchmarks an exported .pt checkpoint, TensorFlow model download logic is preserved. Replaces --nodisplay with --display, centralizes video download and assertions, tightens error handling for downloads, and ensures proper cleanup of temporary files. Also updates imports (urllib.error, export_modelzoo_model) and updates backend availability checks to require at least one backend.
Expose a single_animal flag on benchmark_videos (default False) and forward it to the underlying analysis call. This allows benchmarking to run in single-animal mode when using DeepLabCut-live exported models.
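The single_animal forwarding described in this commit might look like the sketch below; analyze_videos is a hypothetical stand-in for the underlying analysis call, and the signatures are assumptions:

```python
def analyze_videos(model_path: str, video: str, single_animal: bool = False) -> dict:
    """Placeholder for the underlying analysis call the flag is forwarded to."""
    return {"video": video, "single_animal": single_animal}


def benchmark_videos(
    model_path: str,
    videos: list[str],
    single_animal: bool = False,  # new flag, default False as in the PR
) -> list[dict]:
    """Benchmark each video, forwarding the single-animal mode flag."""
    return [
        analyze_videos(model_path, video, single_animal=single_animal)
        for video in videos
    ]
```

Defaulting to False keeps existing multi-animal callers unchanged while letting DeepLabCut-live exported models opt in.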
Update .github/workflows/testing.yml to run the Model Benchmark Test with --display instead of --nodisplay. This allows dlc-live-test to run tests that require a display (e.g., visual/benchmarking checks) in the CI workflow.
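The flag handling described across these commits (a --display opt-in plus a --nodisplay opt-out writing to the same destination, with display off by default) could be wired up roughly like this; the parser construction and help text are assumptions:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="DLC-Live installation check (flag handling sketch)"
    )
    # --display opts in to the live display; default off keeps headless runs working
    parser.add_argument("--display", action="store_true")
    # --nodisplay remains as an explicit opt-out, writing False to the same dest
    parser.add_argument("--nodisplay", dest="display", action="store_false")
    parser.set_defaults(display=False)
    return parser
```

set_defaults makes the default unambiguous even though two actions share the display destination.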
Contributor

Copilot AI left a comment


Pull request overview

This pull request refactors the DLC-Live installation check script to support both TensorFlow and PyTorch backends, while also applying code style improvements to the benchmarking module. The changes enable backend-agnostic testing by detecting available backends and running appropriate tests for each.

Changes:

  • Added PyTorch backend support to check_install.py with new run_pytorch_test() and run_tensorflow_test() functions
  • Changed argument parsing from --nodisplay to --display, reversing the default display behavior (breaking change)
  • Applied code formatting and import organization improvements throughout benchmark.py

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 10 comments.

Reviewed files:
  • dlclive/check_install/check_install.py — Refactored to support both TensorFlow and PyTorch backends with separate test functions, added a backend detection loop, and improved error handling
  • dlclive/benchmark.py — Applied code formatting improvements, reorganized imports with TYPE_CHECKING, added the single_animal parameter to benchmark_videos, and corrected spacing/line wrapping throughout
Comments suppressed due to low confidence (1)

dlclive/benchmark.py:45

  • This import of module os is redundant, as it was previously imported on line 16.
    import os


Refactor test startup and error handling for check_install and simplify a type-only import.

- dlclive/benchmark.py: replace the try/except tensorflow import under TYPE_CHECKING with a direct import (type ignored) to simplify typing logic.
- dlclive/check_install/check_install.py:
  - Defer importing export_modelzoo_model into run_pytorch_test to avoid importing heavy modules unless PyTorch test runs.
  - Move MODELS_FOLDER.mkdir to after temporary directory creation.
  - Add a --nodisplay flag and set default for --display to False so CLI can explicitly disable display.
  - Comment out resize parameters in test calls and remove an unnecessary model_dir.exists() assertion.
  - Wrap per-backend test runs in try/except, collect backend failures, allow other backends to continue, and raise an aggregated RuntimeError if all backend tests fail.

These changes improve robustness when some backends fail or are unavailable and reduce unnecessary imports during initial checks.

@sneakers-the-rat left a comment


looks good! some small optional notes.

Re: the formatting comment in the PR body, I see ruff in the dev dependencies, but we don't have a CI linter/formatter check action set up for this. I think it would probably be worth just setting up a set of ruff rules and a ruff CI action to do formatting all in one go and then not have to worry about it again in any further PRs.
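A minimal ruff CI job along the lines suggested here might look like the fragment below; the workflow name, triggers, and action versions are illustrative, not taken from the repo:

```yaml
name: lint
on: [push, pull_request]

jobs:
  ruff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ruff
      # check lint rules and formatting in one job
      - run: ruff check .
      - run: ruff format --check .
```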

tmp_dir = Path(dlclive.__file__).parent / "check_install" / "dlc-live-tmp"
tmp_dir.mkdir(mode=0o775, exist_ok=True)
if Engine.PYTORCH not in get_available_backends():
    raise NotImplementedError(

Suggested change
raise NotImplementedError(
raise ImportError(

super minor/optional thing - it is implemented, they just don't have the package installed

# download model from the DeepLabCut Model Zoo
def run_tensorflow_test(video_file: str, display: bool = False):
    if Engine.TENSORFLOW not in get_available_backends():
        raise NotImplementedError(

Suggested change
raise NotImplementedError(
raise ImportError(

    "checkpoint": MODELS_FOLDER / f"exported_quadruped_{TORCH_MODEL}.pt",
    "super_animal": "superanimal_quadruped",
}
TF_MODEL_DIR = TMP_DIR / "DLC_Dog_resnet_50_iteration-0_shuffle-0"

why not in the models folder?

SNAPSHOT_NAME = "snapshot-700000.pb"
TMP_DIR = Path(__file__).parent / "dlc-live-tmp"

MODELS_FOLDER = TMP_DIR / "test_models"

probably use _DIR or _FOLDER consistently? doesn't really matter tho

print("\nCreating temporary directory...\n")
tmp_dir = TMP_DIR
tmp_dir.mkdir(mode=0o775, exist_ok=True)
MODELS_FOLDER.mkdir(parents=True, exist_ok=True)

since this is only used in pytorch test, maybe make it there? or put all models in the model folder.

Comment on lines +125 to +129
video_file = str(tmp_dir / "dog_clip.avi")

# download dog test video from github:
# Use raw.githubusercontent.com for direct file access
if not Path(video_file).exists():

gets a little odd switching back and forth, maybe we could just make the download_file accept a path
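A Path-accepting download helper along the lines this comment suggests could look like the sketch below; the real download_file signature in the repo isn't shown in this thread, so this shape is an assumption:

```python
from pathlib import Path
from urllib.request import urlretrieve


def download_file(url: str, dest: "str | Path") -> Path:
    """Download `url` to `dest`, accepting either a str or a Path."""
    dest = Path(dest)  # normalize once, so callers never juggle both types
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():  # skip files that were already downloaded
        urlretrieve(url, dest)
    return dest
```

Normalizing to Path inside the helper would let the caller drop the str()/Path() round trips noted above.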

Comment on lines +175 to +181
try:
    if tmp_dir is not None and tmp_dir.exists():
        shutil.rmtree(tmp_dir)
except PermissionError:
    warnings.warn(
        f"Could not delete temporary directory {str(tmp_dir)} due to a permissions error, but otherwise dlc-live seems to be working fine!"
    )

if we move the start of the try to after the directories are created, this could be a little cleaner, e.g. not having to init tmp_dir to None.

Comment on lines +165 to +170

if not any_backend_succeeded and backend_failures:
    failure_messages = "; ".join(
        f"{b}: {exc}" for b, exc in backend_failures.items()
    )
    raise RuntimeError(f"All backend tests failed. Details: {failure_messages}")

the copilot suggestion is fine, but now it doesn't look like we get confirmation when the backends do succeed. like if both succeed we'd probably want the output to look something like

tensorflow [SUCCESS]
pytorch    [SUCCESS]

and then if they fail like

tensorflow [SUCCESS]
pytorch    [ERROR]
---
Pytorch error:
{blah blah blah}

doesn't have to be that, just saying we should probably also tell the user when things went right
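The per-backend summary described here could be produced with something like the following sketch; the result-collection shape (backend name mapped to an exception, or None on success) and the formatting are illustrative:

```python
def report_backend_results(results: "dict[str, Exception | None]") -> str:
    """Format a success/error line per backend, plus details for any failures."""
    width = max((len(name) for name in results), default=0)
    lines = [
        f"{name.ljust(width)} [{'ERROR' if exc else 'SUCCESS'}]"
        for name, exc in results.items()
    ]
    for name, exc in results.items():
        if exc is not None:
            lines += ["---", f"{name.capitalize()} error:", str(exc)]
    return "\n".join(lines)
```

Printing this summary after the backend loop would tell the user what went right as well as what failed.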
