Conversation

@MattsonCam (Member)

Computed UMAP from sampled JUMP data and visualized UMAPs labeled by different categories.

Copilot AI review requested due to automatic review settings January 23, 2026 22:04
@review-notebook-app

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks.


Copilot AI left a comment


Pull request overview

Adds a workflow to compute UMAP embeddings from sampled JUMP single-cell data and generate labeled UMAP visualizations (treatment type, anomaly score, control type).

Changes:

  • Pin umap-learn in the conda environment for compatibility with the repo’s scikit-learn version.
  • Add a notebook + nbconverted Python script to sample per-plate single cells and compute 2D UMAP coordinates.
  • Add an nbconverted R script to render and save UMAP figures colored by multiple metadata fields.

Reviewed changes

Copilot reviewed 4 out of 8 changed files in this pull request and generated 7 comments.

File — Description

  • environment.yml — Pins umap-learn to a specific version to support the new UMAP computation workflow.
  • 3.analyze_data/visualize_umaps/nbconverted/compute_plate_umaps.py — Implements plate sampling + UMAP computation and writes the parquet used for plotting (sketched below).
  • 3.analyze_data/visualize_umaps/compute_plate_umaps.ipynb — Notebook version of the UMAP sampling/computation workflow.
  • 3.analyze_data/visualize_umaps/nbconverted/visualize_plate_umaps.r — Generates and saves UMAP plots labeled by treatment type, anomaly score, and control type.
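
For orientation, here is a minimal sketch of the sampling-and-embedding workflow these files implement. It is not the notebook's actual code; the file paths, the Metadata_Plate column name, and the 500-cells-per-plate cap are assumptions for illustration.

# Minimal sketch only; paths, column names, and sample sizes are assumed.
import pandas as pd
import umap

scdf = pd.read_parquet("plate_profiles.parquet")  # hypothetical input path

# Sample a fixed number of cells per plate so UMAP stays tractable
sampled = (
    scdf.sample(frac=1, random_state=0)
    .groupby("Metadata_Plate")
    .head(500)
)

# Fit a 2D UMAP on the feature columns (everything not prefixed with Metadata_)
features = sampled.loc[:, ~sampled.columns.str.startswith("Metadata_")]
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)

# Attach coordinates and write the parquet consumed by the R plotting script
sampled = sampled.assign(umap_0=embedding[:, 0], umap_1=embedding[:, 1])
sampled.to_parquet("plate_umaps.parquet", index=False)  # hypothetical output path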


Comment on lines +2 to +4 (in visualize_plate_umaps.r)
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(arrow))
suppressPackageStartupMessages(library(stringr))

Copilot AI Jan 23, 2026


dplyr and stringr are imported but not used anywhere in this script. Removing unused package imports reduces required dependencies and speeds up startup.

Suggested change:
- suppressPackageStartupMessages(library(dplyr))
  suppressPackageStartupMessages(library(arrow))
- suppressPackageStartupMessages(library(stringr))

umapdf = umapdf.dropna(axis=1, how="any")

print("Shape of plate data after sampling:", umapdf.shape)
umapdf["Metadata_control_type"].unique()

Copilot AI Jan 23, 2026


This standalone umapdf["Metadata_control_type"].unique() expression has no effect in a script context (it only displays in a notebook). Consider removing it or converting it to an explicit print(...) if it’s meant as a diagnostic.

Suggested change:
- umapdf["Metadata_control_type"].unique()
+ print(umapdf["Metadata_control_type"].unique())

Comment on lines +26 to +27 (in environment.yml)
# pin to version compatible with scikit-learn 1.1.1
- conda-forge::umap-learn==0.5.3

Copilot AI Jan 23, 2026


The conda dependency pin uses == (pip-style). This repo’s conda environment files consistently use conda’s package=version syntax (e.g., python=3.9 in this file, r-base=4.2.2 in R_environment.yml). Please change this to conda-forge::umap-learn=0.5.3 to match conventions and avoid potential conda parsing issues.

Comment on lines +25 to +27 (in visualize_plate_umaps.r)
ggplot(umap_df, aes(x = umap_0, y = umap_1, color = Metadata_Treatment_Type)) +
geom_point(shape = 20, size = 2, alpha = 4) +
scale_color_manual(

Copilot AI Jan 23, 2026


alpha in ggplot2 is expected to be in the 0–1 range. Using alpha = 4 likely clamps to 1 (fully opaque) and makes the intent unclear. Please change this to a value between 0 and 1 (e.g., 0.4) for the treatment-type plot.

Comment on lines +62 to +64 (in visualize_plate_umaps.r)
ggplot(umap_df, aes(x = umap_0, y = umap_1, color = Metadata_control_type)) +
geom_point(shape = 20, size = 2, alpha = 4) +
scale_color_manual(

Copilot AI Jan 23, 2026


alpha in ggplot2 should be in the 0–1 range. alpha = 4 is likely unintended and removes transparency. Please use a value between 0 and 1 for the control-type plot as well.

Comment on lines +121 to +124
scdf = scdf.groupby(["Metadata_control_type"], group_keys=False).apply(
    lambda grp: grp.sample(n=min(250, len(grp)), random_state=0)
)


Copilot AI Jan 23, 2026


This groupby(...).apply(...) pattern is emitting a pandas deprecation warning (seen in the committed notebook output). To make this forward-compatible and keep logs clean, update the sampling to avoid GroupBy.apply’s changing behavior (e.g., use include_groups=False where available, or restructure sampling so it doesn’t rely on apply).

Suggested change:
- scdf = scdf.groupby(["Metadata_control_type"], group_keys=False).apply(
-     lambda grp: grp.sample(n=min(250, len(grp)), random_state=0)
- )
+ # Sample up to 250 rows per control type without using GroupBy.apply
+ group_sizes = scdf["Metadata_control_type"].value_counts()
+ large_groups = group_sizes[group_sizes > 250].index
+ small_groups = group_sizes[group_sizes <= 250].index
+ sampled_large = (
+     scdf[scdf["Metadata_control_type"].isin(large_groups)]
+     .groupby("Metadata_control_type", group_keys=False)
+     .sample(n=250, random_state=0)
+ )
+ small = scdf[scdf["Metadata_control_type"].isin(small_groups)]
+ scdf = pd.concat([sampled_large, small], axis=0)
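
A leaner alternative sketch, if uniform random sampling per group is all that is needed: shuffle the rows once, then keep the head of each group. This avoids GroupBy.apply entirely; the column name and the 250-row cap are taken from the snippet above.

# Shuffle, then keep at most 250 rows per control type (no GroupBy.apply)
scdf = (
    scdf.sample(frac=1, random_state=0)
    .groupby("Metadata_control_type")
    .head(250)
)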

Comment on lines +148 to +152
umapdf["umap_0"], umapdf["umap_1"] = (
umap_data[:, 0],
umap_data[:, 1],
)


Copilot AI Jan 23, 2026


Assigning umap_0/umap_1 this way is triggering a pandas PerformanceWarning about a highly-fragmented DataFrame (also visible in the notebook output). Consider defragmenting before assignment (e.g., umapdf = umapdf.copy()) and assigning both columns in a single operation to avoid fragmentation overhead.

Suggested change:
- umapdf["umap_0"], umapdf["umap_1"] = (
-     umap_data[:, 0],
-     umap_data[:, 1],
- )
+ # Defragment before assigning new columns and assign both at once
+ umapdf = umapdf.copy()
+ umapdf[["umap_0", "umap_1"]] = umap_data[:, :2]
