
Train a machine learning model on a collection#

Here, we iterate over the artifacts within a collection to train a machine learning model at scale.

import lamindb as ln
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "Qr1kIHvK506r"
ln.settings.transform.version = "1"
ln.track()
💡 notebook imports: lamindb==0.71.0 torch==2.3.0
💡 saved: Transform(uid='Qr1kIHvK506r5zKv', name='Train a machine learning model on a collection', key='scrna5', version='1', type='notebook', updated_at=2024-05-01 18:52:30 UTC, created_by_id=1)
💡 saved: Run(uid='5lEzrWeGJxBj5QWrEF6Y', transform_id=5, created_by_id=1)

Query our collection:

collection = ln.Collection.filter(
    name="My versioned scRNA-seq collection", version="2"
).one()
collection.describe()
Collection(uid='Z2NfJBgXfWpwl3rG9Kb1', name='My versioned scRNA-seq collection', version='2', hash='HNR3VFV60_yqRnUka11E', visibility=1, updated_at=2024-05-01 18:52:04 UTC)

Provenance:
  📎 transform: Transform(uid='ManDYgmftZ8C5zKv', name='Standardize and append a batch of data', key='scrna2', version='1', type='notebook')
  📎 run: Run(uid='bKXzQTJlj2eaUJ6VNVhE', started_at=2024-05-01 18:51:43 UTC, is_consecutive=True)
  📎 created_by: User(uid='DzTjkKse', handle='testuser1', name='Test User1')
  📎 input_of (core.Run): ['2024-05-01 18:52:16 UTC']
Features:
  var: FeatureSet(uid='T8aY3zwWcp6UgypbmFS2', n=36508, type='number', registry='bionty.Gene')
    'GPT', 'H1-7', 'POLD2', 'ZNF267', 'WDFY3-AS2', 'ARHGEF6', 'FANCD2OS', 'SNX9', 'SLC33A1', 'SPIN2A', 'TNFRSF11B', 'BCL11B', 'LINC00299', 'FAM166A', 'NEU3', 'SELENOO-AS1', 'GUCY1B1', 'PRSS54', 'MEOX2', 'PCAT1', ...
  obs: FeatureSet(uid='fBDpr3Aj5v5RJU80kIei', n=4, registry='core.Feature')
    🔗 donor (12, core.ULabel): 'D496', 'A29', 'A31', 'A36', '621B', 'D503', 'A37', 'A52', '637C', '640C', ...
    🔗 tissue (17, bionty.Tissue): 'transverse colon', 'blood', 'lamina propria', 'skeletal muscle tissue', 'thoracic lymph node', 'spleen', 'bone marrow', 'thymus', 'omentum', 'mesenteric lymph node', ...
    🔗 cell_type (40, bionty.CellType): 'T follicular helper cell', 'dendritic cell', 'animal cell', 'conventional dendritic cell', 'alpha-beta T cell', 'progenitor cell', 'CD16-negative, CD56-bright natural killer cell, human', 'plasma cell', 'lymphocyte', 'gamma-delta T cell', ...
    🔗 assay (3, bionty.ExperimentalFactor): '10x 3' v3', '10x 5' v2', '10x 5' v1'
Labels:
  📎 tissues (17, bionty.Tissue): 'transverse colon', 'blood', 'lamina propria', 'skeletal muscle tissue', 'thoracic lymph node', 'spleen', 'bone marrow', 'thymus', 'omentum', 'mesenteric lymph node', ...
  📎 cell_types (40, bionty.CellType): 'T follicular helper cell', 'dendritic cell', 'animal cell', 'conventional dendritic cell', 'alpha-beta T cell', 'progenitor cell', 'CD16-negative, CD56-bright natural killer cell, human', 'plasma cell', 'lymphocyte', 'gamma-delta T cell', ...
  📎 experimental_factors (3, bionty.ExperimentalFactor): '10x 3' v3', '10x 5' v2', '10x 5' v1'
  📎 ulabels (12, core.ULabel): 'D496', 'A29', 'A31', 'A36', '621B', 'D503', 'A37', 'A52', '637C', '640C', ...

Create a map-style dataset#

Let us create a map-style dataset using mapped(): a MappedCollection. This is what, for example, the PyTorch DataLoader expects as input.

Under the hood, it performs a virtual join of the features of the underlying AnnData objects and thus allows working with very large collections.

You can either perform a virtual inner join:

with collection.mapped(obs_keys=["cell_type"], join="inner") as dataset:
    print(len(dataset.var_joint))
749

Or a virtual outer join:

dataset = collection.mapped(obs_keys=["cell_type"], join="outer")
len(dataset.var_joint)
36508
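
The choice of join determines the feature dimension of every sample: with the outer join above, each cell's vector spans all 36508 joined genes. A quick check (our own illustration, not part of the tutorial's cells):

dataset[0]["X"].shape  # (36508,)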

This is compatible with a PyTorch DataLoader because it implements __getitem__ over a list of backed AnnData objects. The cell at index 5 in the collection can be accessed like:

dataset[5]
{'X': array([ 0.   ,  0.   ,  0.   , ...,  0.   ,  0.   , -0.456], dtype=float32),
 '_store_idx': 0,
 'cell_type': 39}

In the returned dict, 'X' is the cell's feature vector and '_store_idx' is the index of the backing AnnData object. The labels are encoded into integers:

dataset.encoders
{'cell_type': {'T follicular helper cell': 0,
  'dendritic cell': 1,
  'animal cell': 2,
  'conventional dendritic cell': 3,
  'alpha-beta T cell': 4,
  'progenitor cell': 5,
  'CD16-negative, CD56-bright natural killer cell, human': 6,
  'plasma cell': 7,
  'lymphocyte': 8,
  'gamma-delta T cell': 9,
  'effector memory CD8-positive, alpha-beta T cell, terminally differentiated': 10,
  'naive thymus-derived CD4-positive, alpha-beta T cell': 11,
  'CD38-positive naive B cell': 12,
  'naive B cell': 13,
  'memory B cell': 14,
  'regulatory T cell': 15,
  'classical monocyte': 16,
  'mucosal invariant T cell': 17,
  'CD4-positive helper T cell': 18,
  'alveolar macrophage': 19,
  'megakaryocyte': 20,
  'mast cell': 21,
  'naive thymus-derived CD8-positive, alpha-beta T cell': 22,
  'effector memory CD4-positive, alpha-beta T cell, terminally differentiated': 23,
  'CD4-positive, alpha-beta T cell': 24,
  'effector memory CD4-positive, alpha-beta T cell': 25,
  'B cell, CD19-positive': 26,
  'germinal center B cell': 27,
  'CD14-positive, CD16-negative classical monocyte': 28,
  'non-classical monocyte': 29,
  'dendritic cell, human': 30,
  'plasmacytoid dendritic cell': 31,
  'CD8-positive, alpha-beta memory T cell, CD45RO-positive': 32,
  'CD8-positive, CD25-positive, alpha-beta regulatory T cell': 33,
  'macrophage': 34,
  'plasmablast': 35,
  'CD16-positive, CD56-dim natural killer cell, human': 36,
  'group 3 innate lymphoid cell': 37,
  'CD8-positive, alpha-beta memory T cell': 38,
  'cytotoxic T cell': 39}}
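
If you later need to translate integer predictions back into label names, you can invert these mappings. A minimal sketch; the decoders dict below is our own helper, not part of the MappedCollection API:

# invert each label -> integer mapping into an integer -> label mapping
decoders = {
    obs_key: {code: label for label, code in mapping.items()}
    for obs_key, mapping in dataset.encoders.items()
}
decoders["cell_type"][39]  # 'cytotoxic T cell'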

Create a PyTorch DataLoader#

Let us use a weighted sampler:

from torch.utils.data import DataLoader, WeightedRandomSampler

# the label_key used for weighting doesn't have to be among the obs_keys passed at init
sampler = WeightedRandomSampler(
    weights=dataset.get_label_weights("cell_type"), num_samples=len(dataset)
)
dataloader = DataLoader(dataset, batch_size=128, sampler=sampler)

We can now iterate through the data loader:

for batch in dataloader:
    pass
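
To actually train on these batches, plug them into a standard PyTorch training loop. Below is a minimal sketch with a deliberately simple linear classifier over the joined gene space; the model and hyperparameters are illustrative and not part of this tutorial:

import torch

# one linear layer mapping the joined gene space to the 40 encoded cell types
model = torch.nn.Linear(len(dataset.var_joint), len(dataset.encoders["cell_type"]))
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for batch in dataloader:
    optimizer.zero_grad()
    # the default collate yields 'X' as a float32 tensor of shape (batch_size, n_genes)
    # and 'cell_type' as an int64 tensor of encoded labels
    logits = model(batch["X"])
    loss = loss_fn(logits, batch["cell_type"])
    loss.backward()
    optimizer.step()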

Close the connections in the MappedCollection:

dataset.close()
In practice, use a context manager, which closes the connections even if an exception interrupts the loop:
with collection.mapped(obs_keys=["cell_type"]) as dataset:
    sampler = WeightedRandomSampler(
        weights=dataset.get_label_weights("cell_type"), num_samples=len(dataset)
    )
    dataloader = DataLoader(dataset, batch_size=128, sampler=sampler)
    for batch in dataloader:
        pass