Tutorial: Features & labels#

In the previous tutorial (Tutorial: Files & datasets), we learned how to leverage basic metadata for files & datasets to access data (query, search, stage & load).

Here, we walk through annotating & validating data with features & labels to improve:

  1. Finding data: Which datasets measured expression of cell marker CD14? Which characterized cell line K562? Which datasets have a test & train split? Etc.

  2. Using data: Are there typos in feature names? Are there typos in sampled labels? Are units of features consistent? Etc.

What was LaminDB’s most basic inspiration?

The pydata family of objects is at the heart of most data science, ML & comp bio workflows: DataFrame, AnnData, pytorch.DataLoader, zarr.Array, pyarrow.Table, xarray.Dataset, …

And still, we couldn’t find a tool to link these objects to the context needed to analyze them!

Context relevant for analyses includes anything that’s needed to interpret & model data.

So, lamindb.File and lamindb.Dataset track:

  • data sources, data transformations, models, users & pipelines that performed transformations (provenance)

  • any entity of the domain in which data is generated and modeled (features & labels)

import lamindb as ln
import pandas as pd
💡 lamindb instance: testuser1/lamin-tutorial
ln.settings.verbosity = "hint"

Register metadata#

Features and labels are the primary ways of registering domain-knowledge related metadata in LaminDB.

Features represent measurement dimensions (e.g. organism) and labels represent measurement values (e.g. iris setosa, iris versicolor, iris virginica).
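To make the distinction concrete, here is a minimal lamindb-free sketch in plain pandas (hypothetical feature & label names): the feature is a column name, the labels are the values sampled in that column, and validation checks those values against the registered labels.

```python
import pandas as pd

# Hypothetical in-memory stand-ins: the feature is the measurement
# dimension, the registered labels are its admissible values
feature_name = "iris_organism_name"
registered_labels = {"setosa", "versicolor", "virginica"}

df = pd.DataFrame({feature_name: ["setosa", "versicolor", "virginca"]})  # note the typo

# Validation: which sampled values are not registered labels?
invalid = sorted(set(df[feature_name]) - registered_labels)
print(invalid)  # flags the typo "virginca"
```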

Register labels#

We study 3 organisms of the Iris plant: setosa, versicolor & virginica.

Let’s populate the universal (untyped) label registry (ULabel) for them:

labels = [ln.ULabel(name=name) for name in ["setosa", "versicolor", "virginica"]]
ln.save(labels)

labels
[ULabel(uid='I7m0eKZ2', name='setosa', updated_at=2023-12-08 11:33:46 UTC, created_by_id=1),
 ULabel(uid='QHudiekA', name='versicolor', updated_at=2023-12-08 11:33:46 UTC, created_by_id=1),
 ULabel(uid='9Gwr5bTc', name='virginica', updated_at=2023-12-08 11:33:46 UTC, created_by_id=1)]

Anticipating that we’ll have many different labels when working with more data, we’d like to express that all 3 labels are organism labels:

parent = ln.ULabel(name="is_organism")
parent.save()

for label in labels:
    label.parents.add(parent)

parent.view_parents(with_children=True)
[graph: parent label "is_organism" with its children setosa, versicolor & virginica]

ULabel enables you to manage an in-house ontology for all kinds of untyped labels.

If you’d like to leverage pre-built typed ontologies for basic biological entities in the same way, see: Manage biological registries.

In addition to organism, we’d like to track the studies that produced the data:

ln.ULabel(name="study0").save()
Why label a data batch by study?

We can then

  1. query all files linked to this study

  2. model the study as a confounder when analyzing similar data from a follow-up study, and concatenate data using the label as a feature in the joint data matrix
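The second point can be sketched without lamindb: carrying the study label as a column turns it into a feature of the joint data matrix (hypothetical batches & label names below).

```python
import pandas as pd

# Two hypothetical data batches from consecutive studies
batch0 = pd.DataFrame({"sepal_length": [0.051, 0.049]})
batch1 = pd.DataFrame({"sepal_length": [0.067, 0.063]})

# Materialize the study label as a column so it can be modeled
# as a confounder in the concatenated data matrix
batch0["study_name"] = "study0"
batch1["study_name"] = "study1"

joint = pd.concat([batch0, batch1], ignore_index=True)
```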

Register features#

For every set of studied labels (measured values), we typically also want an identifier for the corresponding measurement dimension: the feature.

When we integrate data batches, feature names will label columns that store data.

Let’s create and save two Feature records to identify measurements of the iris organism name and the study name:

ln.Feature(name="iris_organism_name", type="category").save()
ln.Feature(name="study_name", type="category").save()
# create a lookup object so that we can access features with auto-complete
features = ln.Feature.lookup()

Run an ML model#

Let’s now run an ML model that transforms the images into 4 high-level features.

def run_ml_model() -> pd.DataFrame:
    transform = ln.Transform(name="Petal & sepal regressor", type="pipeline")
    ln.track(transform)
    input_dataset = ln.Dataset.filter(name="Iris study 1").one()
    input_paths = [file.stage() for file in input_dataset.files.all()]
    # transform the data...
    output_dataset = ln.dev.datasets.df_iris_in_meter_study1()
    return output_dataset


df = run_ml_model()
💡 saved: Transform(uid='Pn3w7H1eYvnDGJ', name='Petal & sepal regressor', type='pipeline', updated_at=2023-12-08 11:33:48 UTC, created_by_id=1)
💡 saved: Run(uid='cuokWCDczjvY8Cw9pyta', run_at=2023-12-08 11:33:48 UTC, transform_id=2, created_by_id=1)
💡 adding dataset [1] as input for run 2, adding parent transform 1
💡 adding file [2] as input for run 2, adding parent transform 1
💡 adding file [3] as input for run 2, adding parent transform 1
💡 adding file [4] as input for run 2, adding parent transform 1
💡 adding file [5] as input for run 2, adding parent transform 1
💡 adding file [6] as input for run 2, adding parent transform 1

The output is a dataframe:

df.head()
sepal_length sepal_width petal_length petal_width iris_organism_name
0 0.051 0.035 0.014 0.002 setosa
1 0.049 0.030 0.014 0.002 setosa
2 0.047 0.032 0.013 0.002 setosa
3 0.046 0.031 0.015 0.002 setosa
4 0.050 0.036 0.014 0.002 setosa

And this is the ML pipeline that produced the dataframe:

ln.run_context.transform.view_parents()
[graph: parent transforms of the ML pipeline]

Register the output data#

Let’s first register the features of the transformed data:

new_features = ln.Feature.from_df(df)
ln.save(new_features)
How to track units of features?

Use the unit field of Feature. In the above example, you’d do:

for feature in new_features:
    if feature.type == "number":
        feature.unit = "m"  # SI unit: meter
        feature.save()
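A unit registry only pays off if it is used to keep measurements consistent. Here is a lamindb-free sketch of normalizing values to a common unit (the `feature_units` dict and conversion factors are hypothetical; in practice the unit would live on each Feature record's `unit` field).

```python
# Hypothetical per-feature unit registry
feature_units = {"sepal_length": "cm", "petal_length": "m"}

# Conversion factors to SI meters
to_meters = {"m": 1.0, "cm": 0.01, "mm": 0.001}

def normalize(value: float, feature: str) -> float:
    # Convert a measurement to meters using the feature's registered unit
    return value * to_meters[feature_units[feature]]
```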

We can now validate & register the dataframe in one line by creating a Dataset record:

dataset = ln.Dataset.from_df(
    df,
    name="Iris study 1 - transformed",
    description="Iris dataset after measuring sepal & petal metrics",
)

dataset.save()
5 terms (100.00%) are validated for name
💡 file will be copied to default storage upon `save()` with key `None` ('.lamindb/P26F2iLZW41PB0d1FJeH.parquet')
✅ saved 1 feature set for slot: 'columns'
✅ storing file 'P26F2iLZW41PB0d1FJeH' at '/home/runner/work/lamindb/lamindb/docs/lamin-tutorial/.lamindb/P26F2iLZW41PB0d1FJeH.parquet'

Feature sets#

Get an overview of linked features:

dataset.features
Features:
  columns: FeatureSet(uid='gtTAJ5HORe2iHAjODN8J', n=5, registry='core.Feature', hash='2Bh2jXo1NMtCjbWrBAXN', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
    🔗 iris_organism_name (0, core.ULabel): 
    sepal_length (number)
    sepal_width (number)
    petal_length (number)
    petal_width (number)

You’ll see that they’re always grouped in sets that correspond to records of FeatureSet.

Why does LaminDB model feature sets, not just features?
  1. Performance: Imagine you measure the same panel of 20k transcripts in 1M samples. By modeling the panel as a feature set, you’ll only need to store 1M instead of 1M x 20k = 20B links.

  2. Interpretation: Model protein panels, gene panels, etc.

  3. Data integration: Feature sets provide the currency that determines whether two datasets can be easily concatenated.

These reasons do not hold for label sets. Hence, LaminDB does not model label sets.
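The performance argument can be sketched with a toy content hash (lamindb's actual hashing scheme may differ; this is purely illustrative): the same panel of features always maps to the same feature-set identifier, so each sample links to one set rather than to every feature.

```python
import hashlib

def feature_set_id(feature_names: list[str]) -> str:
    # Identical panels map to one identifier, regardless of column order,
    # so 1M samples need 1M links to the set, not 1M x 20k links to features
    joined = ",".join(sorted(feature_names))
    return hashlib.md5(joined.encode()).hexdigest()[:20]

panel_a = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
panel_b = ["petal_width", "sepal_length", "petal_length", "sepal_width"]
assert feature_set_id(panel_a) == feature_set_id(panel_b)
```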

A slot provides a string key to access feature sets. It’s typically the accessor within the registered data object, here pd.DataFrame.columns.

Let’s use it to access all linked features:

dataset.features["columns"].df()
uid name type unit description registries synonyms updated_at created_by_id
id
1 m4XJh9K6GntL iris_organism_name category None None core.ULabel None 2023-12-08 11:33:48.455661+00:00 1
3 SqXgQXbvI6JY sepal_length number None None None None 2023-12-08 11:33:51.332873+00:00 1
4 6RQRyHuji58P sepal_width number None None None None 2023-12-08 11:33:51.332970+00:00 1
5 qncJoU9lrjxx petal_length number None None None None 2023-12-08 11:33:51.333031+00:00 1
6 lPv0AOi5h2Jj petal_width number None None None None 2023-12-08 11:33:51.333086+00:00 1

There is one categorical feature; let’s add the organism labels:

organism_labels = ln.ULabel.filter(parents__name="is_organism").all()
dataset.labels.add(organism_labels, feature=features.iris_organism_name)

Let’s now add study labels:

study_label = ln.ULabel.filter(name="study0").one()
dataset.labels.add(study_label, feature=features.study_name)
✅ linked new feature 'study_name' together with new feature set FeatureSet(uid='hkMkNGmap2LrRZDggStJ', n=1, registry='core.Feature', hash='B9bU6Irw5r6psCSY8Nbl', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)

In addition to the columns feature set, we now have an external feature set:

dataset.features
Features:
  columns: FeatureSet(uid='gtTAJ5HORe2iHAjODN8J', n=5, registry='core.Feature', hash='2Bh2jXo1NMtCjbWrBAXN', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
    🔗 iris_organism_name (3, core.ULabel): 'setosa', 'versicolor', 'virginica'
    sepal_length (number)
    sepal_width (number)
    petal_length (number)
    petal_width (number)
  external: FeatureSet(uid='hkMkNGmap2LrRZDggStJ', n=1, registry='core.Feature', hash='B9bU6Irw5r6psCSY8Nbl', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
    🔗 study_name (1, core.ULabel): 'study0'

This is the context for our file:

dataset.describe()
Dataset(uid='P26F2iLZW41PB0d1FJeH', name='Iris study 1 - transformed', description='Iris dataset after measuring sepal & petal metrics', hash='Ws3vdEbIGfvivB5u4BB0Cg', visibility=1, updated_at=2023-12-08 11:33:51 UTC)

Provenance:
  🧩 transform: Transform(uid='Pn3w7H1eYvnDGJ', name='Petal & sepal regressor', type='pipeline', updated_at=2023-12-08 11:33:48 UTC, created_by_id=1)
  👣 run: Run(uid='cuokWCDczjvY8Cw9pyta', run_at=2023-12-08 11:33:48 UTC, transform_id=2, created_by_id=1)
  📄 file: File(uid='P26F2iLZW41PB0d1FJeH', suffix='.parquet', accessor='DataFrame', description='See dataset P26F2iLZW41PB0d1FJeH', size=5347, hash='Ws3vdEbIGfvivB5u4BB0Cg', hash_type='md5', visibility=1, key_is_virtual=True, updated_at=2023-12-08 11:33:51 UTC, storage_id=1, transform_id=2, run_id=2, created_by_id=1)
  👤 created_by: User(uid='DzTjkKse', handle='testuser1', name='Test User1', updated_at=2023-12-08 11:33:39 UTC)
Features:
  columns: FeatureSet(uid='gtTAJ5HORe2iHAjODN8J', n=5, registry='core.Feature', hash='2Bh2jXo1NMtCjbWrBAXN', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
    🔗 iris_organism_name (3, core.ULabel): 'setosa', 'versicolor', 'virginica'
    sepal_length (number)
    sepal_width (number)
    petal_length (number)
    petal_width (number)
  external: FeatureSet(uid='hkMkNGmap2LrRZDggStJ', n=1, registry='core.Feature', hash='B9bU6Irw5r6psCSY8Nbl', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
    🔗 study_name (1, core.ULabel): 'study0'
Labels:
  🏷️ ulabels (4, core.ULabel): 'setosa', 'versicolor', 'virginica', 'study0'
dataset.file.view_flow()
[graph: data flow producing the file]

See the database content:

ln.view(registries=["Feature", "FeatureSet", "ULabel"])
Feature
uid name type unit description registries synonyms updated_at created_by_id
id
6 lPv0AOi5h2Jj petal_width number None None None None 2023-12-08 11:33:51.333086+00:00 1
5 qncJoU9lrjxx petal_length number None None None None 2023-12-08 11:33:51.333031+00:00 1
4 6RQRyHuji58P sepal_width number None None None None 2023-12-08 11:33:51.332970+00:00 1
3 SqXgQXbvI6JY sepal_length number None None None None 2023-12-08 11:33:51.332873+00:00 1
1 m4XJh9K6GntL iris_organism_name category None None core.ULabel None 2023-12-08 11:33:48.455661+00:00 1
2 8vA2zv5WptXb study_name category None None core.ULabel None 2023-12-08 11:33:48.430576+00:00 1
FeatureSet
uid name n type registry hash updated_at created_by_id
id
5 hkMkNGmap2LrRZDggStJ None 1 None core.Feature B9bU6Irw5r6psCSY8Nbl 2023-12-08 11:33:51.540053+00:00 1
4 gtTAJ5HORe2iHAjODN8J None 5 None core.Feature 2Bh2jXo1NMtCjbWrBAXN 2023-12-08 11:33:51.443776+00:00 1
2 tkuAjVupelkDbzyKDvZA None 2 None core.Feature DON93FV6BKzlfKu0_PPw 2023-12-08 11:33:48.716415+00:00 1
ULabel
uid name description reference reference_type updated_at created_by_id
id
5 VdhkNwnr study0 None None None 2023-12-08 11:33:47.056422+00:00 1
4 3Ff6j9XG is_organism None None None 2023-12-08 11:33:46.966285+00:00 1
3 9Gwr5bTc virginica None None None 2023-12-08 11:33:46.922946+00:00 1
2 QHudiekA versicolor None None None 2023-12-08 11:33:46.922896+00:00 1
1 I7m0eKZ2 setosa None None None 2023-12-08 11:33:46.922815+00:00 1

Manage follow-up data#

Assume that a couple of weeks later, we receive a new batch of data in a follow-up study 2.

Let’s track a new analysis:

ln.track()
💡 notebook imports: lamindb==0.63.4 pandas==1.5.3
💡 saved: Transform(uid='dMtrt8YMSdl6z8', name='Tutorial: Features & labels', short_name='tutorial2', version='0', type=notebook, updated_at=2023-12-08 11:33:52 UTC, created_by_id=1)
💡 saved: Run(uid='Hr7Q0Ar8EhpaPr4d3eUy', run_at=2023-12-08 11:33:52 UTC, transform_id=3, created_by_id=1)

Register a joint dataset#

Assume we already ran all preprocessing including the ML model.

We get a DataFrame and store it as a file:

df = ln.dev.datasets.df_iris_in_meter_study2()
ln.File.from_df(df, description="Iris study 2 - transformed").save()
💡 file will be copied to default storage upon `save()` with key `None` ('.lamindb/0JHCOUioNEvKGn1vRde8.parquet')
5 terms (100.00%) are validated for name
✅ loaded: FeatureSet(uid='gtTAJ5HORe2iHAjODN8J', n=5, registry='core.Feature', hash='2Bh2jXo1NMtCjbWrBAXN', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
✅ storing file '0JHCOUioNEvKGn1vRde8' at '/home/runner/work/lamindb/lamindb/docs/lamin-tutorial/.lamindb/0JHCOUioNEvKGn1vRde8.parquet'

Let’s retrieve both data batches as file records:

dataset1 = ln.Dataset.filter(name="Iris study 1 - transformed").one()

file1 = dataset1.file
file2 = ln.File.filter(description="Iris study 2 - transformed").one()

We can now store the joint dataset:

dataset = ln.Dataset([file1, file2], name="Iris flower study 1 & 2 - transformed")

dataset.save()
✅ loaded: FeatureSet(uid='gtTAJ5HORe2iHAjODN8J', n=5, registry='core.Feature', hash='2Bh2jXo1NMtCjbWrBAXN', updated_at=2023-12-08 11:33:51 UTC, created_by_id=1)
💡 adding file [7] as input for run 3, adding parent transform 2

Auto-concatenate data batches#

Because both data batches measured the same validated feature set, we can auto-concatenate the sharded dataset.

This means we can load it as if it were stored in a single file:

dataset.load().tail()
sepal_length sepal_width petal_length petal_width iris_organism_name
145 0.067 0.030 0.052 0.023 virginica
146 0.063 0.025 0.050 0.019 virginica
147 0.065 0.030 0.052 0.020 virginica
148 0.062 0.034 0.054 0.023 virginica
149 0.059 0.030 0.051 0.018 virginica
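Under the hood, auto-concatenation is only well-defined because both batches share the same validated feature set. A minimal sketch of that precondition in plain pandas (hypothetical batches):

```python
import pandas as pd

df1 = pd.DataFrame({"sepal_length": [0.051], "petal_length": [0.014]})
df2 = pd.DataFrame({"petal_length": [0.052], "sepal_length": [0.067]})

# Concatenation is safe only if both batches measured the same feature set
assert set(df1.columns) == set(df2.columns)

# Align column order before concatenating
joint = pd.concat([df1, df2[df1.columns]], ignore_index=True)
```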

We can also access & query the two underlying file objects:

dataset.files.list()
[File(uid='P26F2iLZW41PB0d1FJeH', suffix='.parquet', accessor='DataFrame', description='See dataset P26F2iLZW41PB0d1FJeH', size=5347, hash='Ws3vdEbIGfvivB5u4BB0Cg', hash_type='md5', visibility=1, key_is_virtual=True, updated_at=2023-12-08 11:33:51 UTC, storage_id=1, transform_id=2, run_id=2, created_by_id=1),
 File(uid='0JHCOUioNEvKGn1vRde8', suffix='.parquet', accessor='DataFrame', description='Iris study 2 - transformed', size=5397, hash='Idy3PWYICjY6F92WAc65oA', hash_type='md5', visibility=1, key_is_virtual=True, updated_at=2023-12-08 11:33:52 UTC, storage_id=1, transform_id=3, run_id=3, created_by_id=1)]

Or look at their data flow:

dataset.view_flow()
[graph: data flow of the joint dataset]

Or look at the database:

ln.view()
Dataset
uid name description version hash reference reference_type transform_id run_id file_id storage_id initial_version_id visibility updated_at created_by_id
id
3 lNEsthtmoxXrGk0K87MT Iris flower study 1 & 2 - transformed None None 9WdgbPvFqezjG278Icev None None 3 3 NaN None None 1 2023-12-08 11:33:52.672911+00:00 1
2 P26F2iLZW41PB0d1FJeH Iris study 1 - transformed Iris dataset after measuring sepal & petal met... None Ws3vdEbIGfvivB5u4BB0Cg None None 2 2 7.0 None None 1 2023-12-08 11:33:51.455658+00:00 1
1 RdyRvi0A9SEaNPu0xCUo Iris study 1 50 image files and metadata None qW6WbNWDV_xiHYqAhku7 None None 1 1 NaN None None 1 2023-12-08 11:33:44.178702+00:00 1
Feature
uid name type unit description registries synonyms updated_at created_by_id
id
6 lPv0AOi5h2Jj petal_width number None None None None 2023-12-08 11:33:51.333086+00:00 1
5 qncJoU9lrjxx petal_length number None None None None 2023-12-08 11:33:51.333031+00:00 1
4 6RQRyHuji58P sepal_width number None None None None 2023-12-08 11:33:51.332970+00:00 1
3 SqXgQXbvI6JY sepal_length number None None None None 2023-12-08 11:33:51.332873+00:00 1
1 m4XJh9K6GntL iris_organism_name category None None core.ULabel None 2023-12-08 11:33:48.455661+00:00 1
2 8vA2zv5WptXb study_name category None None core.ULabel None 2023-12-08 11:33:48.430576+00:00 1
FeatureSet
uid name n type registry hash updated_at created_by_id
id
5 hkMkNGmap2LrRZDggStJ None 1 None core.Feature B9bU6Irw5r6psCSY8Nbl 2023-12-08 11:33:51.540053+00:00 1
4 gtTAJ5HORe2iHAjODN8J None 5 None core.Feature 2Bh2jXo1NMtCjbWrBAXN 2023-12-08 11:33:51.443776+00:00 1
2 tkuAjVupelkDbzyKDvZA None 2 None core.Feature DON93FV6BKzlfKu0_PPw 2023-12-08 11:33:48.716415+00:00 1
File
uid storage_id key suffix accessor description version size hash hash_type transform_id run_id initial_version_id visibility key_is_virtual updated_at created_by_id
id
8 0JHCOUioNEvKGn1vRde8 1 None .parquet DataFrame Iris study 2 - transformed None 5397 Idy3PWYICjY6F92WAc65oA md5 3 3 None 1 True 2023-12-08 11:33:52.625203+00:00 1
7 P26F2iLZW41PB0d1FJeH 1 None .parquet DataFrame See dataset P26F2iLZW41PB0d1FJeH None 5347 Ws3vdEbIGfvivB5u4BB0Cg md5 2 2 None 1 True 2023-12-08 11:33:51.449569+00:00 1
6 tgQysVodCUvuKOg7Phei 2 iris_studies/study0_raw_images/iris-125b6645e0... .jpg None None None 21418 Bsko3tdvYxWq_JB5fdoIbw md5 1 1 None 1 False 2023-12-08 11:33:43.976175+00:00 1
5 6Oe5HVzBhPkdcrgZFkRZ 2 iris_studies/study0_raw_images/iris-0fec175448... .jpg None None None 10773 d3I43842Sd5PUMgFBrgjKA md5 1 1 None 1 False 2023-12-08 11:33:43.975732+00:00 1
4 60lIRFWQDufDlesngd5k 2 iris_studies/study0_raw_images/iris-0f133861ea... .jpg None None None 12201 1uP_ORc_dQpcuk3oKkIOLw md5 1 1 None 1 False 2023-12-08 11:33:43.975259+00:00 1
3 3hAS8WfPf2vL4iqpbQEZ 2 iris_studies/study0_raw_images/iris-0797945218... .jpg None None None 19842 v3G73F-8oISKexASY3RvUw md5 1 1 None 1 False 2023-12-08 11:33:43.974634+00:00 1
2 gaJ8bBncZ5JjQdn9gOF8 2 iris_studies/study0_raw_images/iris-0337d20a3b... .jpg None None None 14529 e0Gct8LodEyQzNwy1glOPA md5 1 1 None 1 False 2023-12-08 11:33:43.973763+00:00 1
Run
uid transform_id run_at created_by_id report_id is_consecutive reference reference_type
id
1 rcIFXvHFbjXlRwB6ClMX 1 2023-12-08 11:33:41.304417+00:00 1 None None None None
2 cuokWCDczjvY8Cw9pyta 2 2023-12-08 11:33:48.767178+00:00 1 None None None None
3 Hr7Q0Ar8EhpaPr4d3eUy 3 2023-12-08 11:33:52.210256+00:00 1 None None None None
Storage
uid root type region updated_at created_by_id
id
2 0dIOT7An s3://lamindb-dev-datasets s3 us-east-1 2023-12-08 11:33:43.365760+00:00 1
1 1ZqmkRA9 /home/runner/work/lamindb/lamindb/docs/lamin-t... local None 2023-12-08 11:33:39.369645+00:00 1
Transform
uid name short_name version type latest_report_id source_file_id reference reference_type initial_version_id updated_at created_by_id
id
3 dMtrt8YMSdl6z8 Tutorial: Features & labels tutorial2 0 notebook None None None None None 2023-12-08 11:33:52.206988+00:00 1
2 Pn3w7H1eYvnDGJ Petal & sepal regressor None None pipeline None None None None None 2023-12-08 11:33:48.763907+00:00 1
1 NJvdsWWbJlZSz8 Tutorial: Files & datasets tutorial 0 notebook None None None None None 2023-12-08 11:33:41.300410+00:00 1
ULabel
uid name description reference reference_type updated_at created_by_id
id
5 VdhkNwnr study0 None None None 2023-12-08 11:33:47.056422+00:00 1
4 3Ff6j9XG is_organism None None None 2023-12-08 11:33:46.966285+00:00 1
3 9Gwr5bTc virginica None None None 2023-12-08 11:33:46.922946+00:00 1
2 QHudiekA versicolor None None None 2023-12-08 11:33:46.922896+00:00 1
1 I7m0eKZ2 setosa None None None 2023-12-08 11:33:46.922815+00:00 1
User
uid handle name updated_at
id
1 DzTjkKse testuser1 Test User1 2023-12-08 11:33:39.365052+00:00

This is it! 😅

If you’re interested, please check out guides & use cases or make an issue on GitHub to discuss.

Appendix#

Manage metadata#

Hierarchical ontologies#

Say we want to express that study0 belongs to project1 and is a study; we can use .parents:

study_label = ln.ULabel.filter(name="study0").one()
project1 = ln.ULabel(name="project1")
project1.save()
is_study = ln.ULabel(name="is_study")
is_study.save()
study_label.parents.set([project1, is_study])
study_label.view_parents()
[graph: parents of the "study0" label]

For more info, see view_parents().

Avoid duplicates#

We already created a project1 label before, let’s see what happens if we try to create it again:

label = ln.ULabel(name="project1")

label.save()
✅ loaded ULabel record with exact same name: 'project1'

Instead of creating a new record, LaminDB loads and returns the existing record from the database.

If there is no exact match, LaminDB warns you about potential duplicates upon creating the record.

Say, we spell “project 1” with a white space:

ln.ULabel(name="project 1")
❗ record with similar name exist! did you mean to load it?
uid score
name
project1 SFqYw7TP 94.1
ULabel(uid='hM1n1COL', name='project 1', created_by_id=1)

To avoid inserting duplicates, LaminDB searches for similar existing records whenever you create a new one.

You can switch it off for performance gains via upon_create_search_names.
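The similarity search can be approximated with the standard library (lamindb's actual scoring algorithm may differ; the score of 94.1 shown above is its own, not this one):

```python
import difflib

# Hypothetical names already present in the registry
existing_names = ["project1", "study0", "is_organism"]
candidate = "project 1"

# Fuzzy-match the candidate against existing names to flag likely duplicates
matches = difflib.get_close_matches(candidate, existing_names, n=1, cutoff=0.8)
print(matches)  # → ['project1']
```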

Update & delete records#

label = ln.ULabel.filter(name="project1").first()

label
ULabel(uid='SFqYw7TP', name='project1', updated_at=2023-12-08 11:33:52 UTC, created_by_id=1)
label.name = "project1a"

label.save()

label
ULabel(uid='SFqYw7TP', name='project1a', updated_at=2023-12-08 11:33:53 UTC, created_by_id=1)
label.delete()
(2, {'lnschema_core.ULabel_parents': 1, 'lnschema_core.ULabel': 1})

Manage storage#

Change default storage#

The default storage location is:

ln.settings.storage  # your "working data directory"
PosixUPath('/home/runner/work/lamindb/lamindb/docs/lamin-tutorial')

You can change it by setting ln.settings.storage = "s3://my-bucket".

See all storage locations#

ln.Storage.filter().df()
uid root type region updated_at created_by_id
id
1 1ZqmkRA9 /home/runner/work/lamindb/lamindb/docs/lamin-t... local None 2023-12-08 11:33:39.369645+00:00 1
2 0dIOT7An s3://lamindb-dev-datasets s3 us-east-1 2023-12-08 11:33:43.365760+00:00 1

Set verbosity#

To reduce the number of logging messages, set verbosity:

ln.settings.verbosity = 3  # only show info, no hints
# clean up what we wrote in this notebook
!lamin delete --force lamin-tutorial
!rm -r lamin-tutorial
💡 deleting instance testuser1/lamin-tutorial
✅     deleted instance settings file: /home/runner/.lamin/instance--testuser1--lamin-tutorial.env
✅     instance cache deleted
✅     deleted '.lndb' sqlite file
❗     consider manually deleting your stored data: /home/runner/work/lamindb/lamindb/docs/lamin-tutorial