# Bird’s eye view
Here, you’ll trace file transformations back through notebooks, pipelines & app uploads in a research project based on Schmidt22.
It’s a mix of a guide & a demo use case.
Why should I care about data lineage?
Data lineage enables tracing the origin of biological insights, verifying experimental outcomes, meeting regulatory standards, and increasing the reproducibility & reliability of research.
While tracking data lineage is easier when it’s governed by deterministic pipelines, it becomes hard when it’s governed by interactive, human-driven analyses.
This is where LaminDB fills a gap in the tools space.
## Setup
We need an instance:
!lamin init --storage ./mydata
💡 creating schemas: core==0.45.3
🌱 saved: User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-08-16 21:59:21)
🌱 saved: Storage(id='IWuJyZVy', root='/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata', type='local', updated_at=2023-08-16 21:59:21, created_by_id='DzTjkKse')
✅ loaded instance: testuser1/mydata
💡 did not register local instance on hub (if you want, call `lamin register`)
Import lamindb:
import lamindb as ln
✅ loaded instance: testuser1/mydata (lamindb 0.50.5)
We’ll need toy data:
assert ln.setup.settings.user.handle == "testuser1"
bfx_run_output = ln.dev.datasets.dir_scrnaseq_cellranger(
"perturbseq", basedir=ln.settings.storage, output_only=False
)
ln.track(ln.Transform(name="Chromium 10x upload", type="pipeline"))
ln.File(bfx_run_output.parent / "fastq/perturbseq_R1_001.fastq.gz").save()
ln.File(bfx_run_output.parent / "fastq/perturbseq_R2_001.fastq.gz").save()
ln.setup.login("testuser2")
🌱 saved: Transform(id='o8CUeNo7nkprz8', name='Chromium 10x upload', stem_id='o8CUeNo7nkpr', version='0', type='pipeline', updated_at=2023-08-16 21:59:22, created_by_id='DzTjkKse')
🌱 saved: Run(id='7jOHdO6bziRTcnlUK7qX', run_at=2023-08-16 21:59:22, transform_id='o8CUeNo7nkprz8', created_by_id='DzTjkKse')
💡 file in storage '/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata' with key 'fastq/perturbseq_R1_001.fastq.gz'
💡 file in storage '/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata' with key 'fastq/perturbseq_R2_001.fastq.gz'
✅ logged in with email testuser2@lamin.ai and id bKeW4T6E
🔶 record with similar name exist! did you mean to load it?
| name | id | __ratio__ |
|---|---|---|
| Test User1 | DzTjkKse | 90.0 |
🌱 saved: User(id='bKeW4T6E', handle='testuser2', email='testuser2@lamin.ai', name='Test User2', updated_at=2023-08-16 21:59:23)
## Track a bioinformatics pipeline
When working with a pipeline, we register it before running it.
This only happens once and can be done by anyone on your team.
transform = ln.Transform(name="Cell Ranger", version="7.2.0", type="pipeline")
ln.User.filter().df()
| id | handle | email | name | updated_at |
|---|---|---|---|---|
| DzTjkKse | testuser1 | testuser1@lamin.ai | Test User1 | 2023-08-16 21:59:21 |
| bKeW4T6E | testuser2 | testuser2@lamin.ai | Test User2 | 2023-08-16 21:59:23 |
transform
Transform(id='YL2Qjuqj3p5PsM', name='Cell Ranger', stem_id='YL2Qjuqj3p5P', version='7.2.0', type='pipeline', created_by_id='bKeW4T6E')
ln.track(transform)
🌱 saved: Transform(id='YL2Qjuqj3p5PsM', name='Cell Ranger', stem_id='YL2Qjuqj3p5P', version='7.2.0', type='pipeline', updated_at=2023-08-16 21:59:23, created_by_id='bKeW4T6E')
🌱 saved: Run(id='QJqwvGBQVMXPvvbYsl2W', run_at=2023-08-16 21:59:23, transform_id='YL2Qjuqj3p5PsM', created_by_id='bKeW4T6E')
Now, let’s stage a few files from an instrument upload:
files = ln.File.filter(key__startswith="fastq/perturbseq").all()
filepaths = [file.stage() for file in files]
💡 adding file q3cJ3SSDwMFEnMjjlcB3 as input for run QJqwvGBQVMXPvvbYsl2W, adding parent transform o8CUeNo7nkprz8
💡 adding file CqvJ44vzSBWQVSdXIdcX as input for run QJqwvGBQVMXPvvbYsl2W, adding parent transform o8CUeNo7nkprz8
Assume we processed them and obtained 3 output files in a folder 'filtered_feature_bc_matrix':
output_files = ln.File.from_dir("./mydata/perturbseq/filtered_feature_bc_matrix/")
ln.save(output_files)
✅ created 3 files from directory using storage /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata and key = perturbseq/filtered_feature_bc_matrix/
🌱 storing file 'F4MS8ehlJiBGlFRljNPl' with key 'perturbseq/filtered_feature_bc_matrix/features.tsv.gz'
🌱 storing file 'qlhwSf74ngeLURGwQ9p6' with key 'perturbseq/filtered_feature_bc_matrix/matrix.mtx.gz'
🌱 storing file 'ZOL5CJlojxzMw2TEJGjA' with key 'perturbseq/filtered_feature_bc_matrix/barcodes.tsv.gz'
Let’s look at the data lineage at this stage:
output_files[0].view_lineage()
And let’s continue running the Cell Ranger pipeline in the background.
transform = ln.Transform(
name="Preprocess Cell Ranger outputs", version="2.0", type="pipeline"
)
ln.track(transform)
[f.stage() for f in output_files]
filepath = ln.dev.datasets.schmidt22_perturbseq(basedir=ln.settings.storage)
file = ln.File(filepath, description="perturbseq counts")
file.save()
🌱 saved: Transform(id='GlOTy7VbChAJ0b', name='Preprocess Cell Ranger outputs', stem_id='GlOTy7VbChAJ', version='2.0', type='pipeline', updated_at=2023-08-16 21:59:24, created_by_id='bKeW4T6E')
🌱 saved: Run(id='M0L3ec0jJjkt9O0u83Cx', run_at=2023-08-16 21:59:24, transform_id='GlOTy7VbChAJ0b', created_by_id='bKeW4T6E')
💡 adding file F4MS8ehlJiBGlFRljNPl as input for run M0L3ec0jJjkt9O0u83Cx, adding parent transform YL2Qjuqj3p5PsM
💡 adding file qlhwSf74ngeLURGwQ9p6 as input for run M0L3ec0jJjkt9O0u83Cx, adding parent transform YL2Qjuqj3p5PsM
💡 adding file ZOL5CJlojxzMw2TEJGjA as input for run M0L3ec0jJjkt9O0u83Cx, adding parent transform YL2Qjuqj3p5PsM
💡 file in storage '/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata' with key 'schmidt22_perturbseq.h5ad'
💡 file is AnnDataLike, consider using File.from_anndata() to link var_names and obs.columns as features
## Track app upload & analytics
The hidden cell below simulates additional analytic steps, including:

- uploading phenotypic screen data
- scRNA-seq analysis
- analyses of the integrated datasets
# app upload
ln.setup.login("testuser1")
transform = ln.Transform(name="Upload GWS CRISPRa result", type="app")
ln.track(transform)
# upload and analyze the GWS data
filepath = ln.dev.datasets.schmidt22_crispra_gws_IFNG(ln.settings.storage)
file = ln.File(filepath, description="Raw data of schmidt22 crispra GWS")
file.save()
ln.setup.login("testuser2")
transform = ln.Transform(name="GWS CRIPSRa analysis", type="notebook")
ln.track(transform)
file_wgs = ln.File.filter(key="schmidt22-crispra-gws-IFNG.csv").one()
df = file_wgs.load().set_index("id")
hits_df = df[df["pos|fdr"] < 0.01].copy()
file_hits = ln.File(hits_df, description="hits from schmidt22 crispra GWS")
file_hits.save()
✅ logged in with email testuser1@lamin.ai and id DzTjkKse
🌱 saved: Transform(id='6LHjNUWqkCa8z8', name='Upload GWS CRISPRa result', stem_id='6LHjNUWqkCa8', version='0', type='app', updated_at=2023-08-16 21:59:26, created_by_id='DzTjkKse')
🌱 saved: Run(id='misq3GBByruI8WTGqOvJ', run_at=2023-08-16 21:59:26, transform_id='6LHjNUWqkCa8z8', created_by_id='DzTjkKse')
💡 file in storage '/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata' with key 'schmidt22-crispra-gws-IFNG.csv'
✅ logged in with email testuser2@lamin.ai and id bKeW4T6E
🌱 saved: Transform(id='id9YimYt3pIRz8', name='GWS CRIPSRa analysis', stem_id='id9YimYt3pIR', version='0', type='notebook', updated_at=2023-08-16 21:59:28, created_by_id='bKeW4T6E')
🌱 saved: Run(id='dRLETvJddC64NtIPkIdq', run_at=2023-08-16 21:59:28, transform_id='id9YimYt3pIRz8', created_by_id='bKeW4T6E')
💡 adding file SOSzRwtqj9gnhMF7F0Fq as input for run dRLETvJddC64NtIPkIdq, adding parent transform 6LHjNUWqkCa8z8
💡 file will be copied to default storage upon `save()` with key 'VuUcmkNkygNaQAJcC59i.parquet'
💡 file is a dataframe, consider using File.from_df() to link column names as features
🌱 storing file 'VuUcmkNkygNaQAJcC59i' with key '.lamindb/VuUcmkNkygNaQAJcC59i.parquet'
Let’s see what the data lineage looks like now:
file = ln.File.filter(description="hits from schmidt22 crispra GWS").one()
file.view_lineage()
In the background, somebody integrated and analyzed the outputs of the app upload and the Cell Ranger pipeline:
# Let us add analytics on top of the cell ranger pipeline and the phenotypic screening
transform = ln.Transform(
name="Perform single cell analysis, integrating with CRISPRa screen",
type="notebook",
)
ln.track(transform)
file_ps = ln.File.filter(description__icontains="perturbseq").one()
adata = file_ps.load()
screen_hits = file_hits.load()
import scanpy as sc
sc.tl.score_genes(adata, adata.var_names.intersection(screen_hits.index).tolist())
filesuffix = "_fig1_score-wgs-hits.png"
sc.pl.umap(adata, color="score", show=False, save=filesuffix)
filepath = f"figures/umap{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
filesuffix = "fig2_score-wgs-hits-per-cluster.png"
sc.pl.matrixplot(
adata, groupby="cluster_name", var_names=["score"], show=False, save=filesuffix
)
filepath = f"figures/matrixplot_{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
🌱 saved: Transform(id='WcjRp2t7TEuyz8', name='Perform single cell analysis, integrating with CRISPRa screen', stem_id='WcjRp2t7TEuy', version='0', type='notebook', updated_at=2023-08-16 21:59:28, created_by_id='bKeW4T6E')
🌱 saved: Run(id='04IPweATnqEZroiklIYq', run_at=2023-08-16 21:59:28, transform_id='WcjRp2t7TEuyz8', created_by_id='bKeW4T6E')
💡 adding file r0qE8pTcEoF7KQubMSdP as input for run 04IPweATnqEZroiklIYq, adding parent transform GlOTy7VbChAJ0b
💡 adding file VuUcmkNkygNaQAJcC59i as input for run 04IPweATnqEZroiklIYq, adding parent transform id9YimYt3pIRz8
WARNING: saving figure to file figures/umap_fig1_score-wgs-hits.png
💡 file will be copied to default storage upon `save()` with key 'figures/umap_fig1_score-wgs-hits.png'
🌱 storing file 'HHgQhiBtpBL5axw1tQj2' with key 'figures/umap_fig1_score-wgs-hits.png'
WARNING: saving figure to file figures/matrixplot_fig2_score-wgs-hits-per-cluster.png
💡 file will be copied to default storage upon `save()` with key 'figures/matrixplot_fig2_score-wgs-hits-per-cluster.png'
🌱 storing file '9JJTJD1CfBfb4JVCl1W9' with key 'figures/matrixplot_fig2_score-wgs-hits-per-cluster.png'
The outcome is a few figures stored as image files. Let’s query one of them and look at the data lineage.
## Track notebooks
We’d now like to track the current Jupyter notebook to continue the work:
ln.track()
💡 notebook imports: lamindb==0.50.5 scanpy==1.9.3
🌱 saved: Transform(id='1LCd8kco9lZUz8', name='Bird's eye view', short_name='birds-eye', stem_id='1LCd8kco9lZU', version='0', type=notebook, updated_at=2023-08-16 21:59:30, created_by_id='bKeW4T6E')
🌱 saved: Run(id='iIlHVKcd9Ii3UrKomCV7', run_at=2023-08-16 21:59:30, transform_id='1LCd8kco9lZUz8', created_by_id='bKeW4T6E')
## Visualize data lineage
Let’s load one of the plots:
file = ln.File.filter(key__contains="figures/matrixplot").one()
file.stage()
💡 adding file 9JJTJD1CfBfb4JVCl1W9 as input for run iIlHVKcd9Ii3UrKomCV7, adding parent transform WcjRp2t7TEuyz8
PosixPath('/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata/figures/matrixplot_fig2_score-wgs-hits-per-cluster.png')
We see that the image file is tracked as an input of the current notebook. The input is highlighted, and the notebook follows at the bottom:
file.view_lineage()
Alternatively, we can look purely at the sequence of transforms and ignore the files:
transform = ln.Transform.search("Bird's eye view", return_queryset=True).first()
transform.parents.df()
| id | name | short_name | stem_id | version | type | reference | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|
| WcjRp2t7TEuyz8 | Perform single cell analysis, integrating with... | None | WcjRp2t7TEuy | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
transform.view_parents()
## Understand runs

We tracked pipeline and notebook runs through `run_context`, which stores a `Transform` and a `Run` record as a global context. `File` objects are the inputs and outputs of runs.
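The relationship between transforms, runs, and files can be pictured with a minimal toy model. This is a plain-Python illustration of the concepts only, not LaminDB’s actual implementation:

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Transform:
    """A process that transforms data, e.g., a pipeline or notebook."""
    name: str


@dataclass
class Run:
    """One execution of a transform, with its input and output files."""
    transform: Transform
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)


@dataclass
class File:
    """A data file; `run` points to the run that produced it."""
    key: str
    run: Run | None = None


# a pipeline run produces a file; a notebook run later consumes it
pipeline_run = Run(Transform("Cell Ranger"))
matrix = File("matrix.mtx.gz", run=pipeline_run)
pipeline_run.outputs.append(matrix)

notebook_run = Run(Transform("analysis notebook"))
notebook_run.inputs.append(matrix)

# lineage: walk from a file back to the transform that created it
assert matrix.run.transform.name == "Cell Ranger"
```

Walking these links in both directions is what `view_lineage()` and `view_parents()` visualize for the real records.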
What if I don’t want a global context?
Sometimes, we don’t want to create a global run context but manually pass a run when creating a file:
run = ln.Run(transform=transform)
ln.File(filepath, run=run)
When does a file appear as a run input?

When accessing a file via `stage()`, `load()`, or `backed()`, two things happen:

1. The current run gets added to `file.input_of`
2. The transform of that file gets added as a parent of the current transform
Can I disable tracking run inputs?

You can switch off auto-tracking of run inputs by setting `ln.settings.track_run_inputs = False`.

You can also track run inputs on a case-by-case basis via `is_run_input=True`, e.g., here:
file.load(is_run_input=True)
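The interplay between the global setting and the per-call argument can be sketched as follows. This toy function illustrates the assumed precedence (an explicit per-call argument overriding the global setting); it is not LaminDB source code:

```python
from typing import Optional


def should_track_as_input(
    track_run_inputs: bool, is_run_input: Optional[bool] = None
) -> bool:
    """Decide whether a file access is recorded as a run input.

    An explicit per-call `is_run_input` wins over the global
    `track_run_inputs` setting (assumed behavior).
    """
    if is_run_input is not None:
        return is_run_input
    return track_run_inputs


# global tracking off, but forced on for one call:
assert should_track_as_input(False, is_run_input=True) is True
# no per-call override: the global setting applies
assert should_track_as_input(False) is False
assert should_track_as_input(True) is True
```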
## Query by provenance
We can query or search for the notebook that created the file:
transform = ln.Transform.search("GWS CRIPSRa analysis", return_queryset=True).first()
And then find all the files created by that notebook:
ln.File.filter(transform=transform).df()
| id | storage_id | key | suffix | accessor | description | version | initial_version_id | size | hash | hash_type | transform_id | run_id | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VuUcmkNkygNaQAJcC59i | IWuJyZVy | None | .parquet | DataFrame | hits from schmidt22 crispra GWS | None | None | 18368 | yw5f-kMLJhaNhdEF-lhxOQ | md5 | id9YimYt3pIRz8 | dRLETvJddC64NtIPkIdq | 2023-08-16 21:59:28 | bKeW4T6E |
Which transform ingested a given file?
file = ln.File.filter().first()
file.transform
Transform(id='o8CUeNo7nkprz8', name='Chromium 10x upload', stem_id='o8CUeNo7nkpr', version='0', type='pipeline', updated_at=2023-08-16 21:59:22, created_by_id='DzTjkKse')
And which user?
file.created_by
User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-08-16 21:59:26)
Which transforms were created by a given user?
users = ln.User.lookup()
ln.Transform.filter(created_by=users.testuser2).df()
| id | name | short_name | stem_id | version | type | reference | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|
| YL2Qjuqj3p5PsM | Cell Ranger | None | YL2Qjuqj3p5P | 7.2.0 | pipeline | None | 2023-08-16 21:59:23 | bKeW4T6E |
| GlOTy7VbChAJ0b | Preprocess Cell Ranger outputs | None | GlOTy7VbChAJ | 2.0 | pipeline | None | 2023-08-16 21:59:25 | bKeW4T6E |
| id9YimYt3pIRz8 | GWS CRIPSRa analysis | None | id9YimYt3pIR | 0 | notebook | None | 2023-08-16 21:59:28 | bKeW4T6E |
| WcjRp2t7TEuyz8 | Perform single cell analysis, integrating with... | None | WcjRp2t7TEuy | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
| 1LCd8kco9lZUz8 | Bird's eye view | birds-eye | 1LCd8kco9lZU | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
Which notebooks were created by a given user?
ln.Transform.filter(created_by=users.testuser2, type="notebook").df()
| id | name | short_name | stem_id | version | type | reference | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|
| id9YimYt3pIRz8 | GWS CRIPSRa analysis | None | id9YimYt3pIR | 0 | notebook | None | 2023-08-16 21:59:28 | bKeW4T6E |
| WcjRp2t7TEuyz8 | Perform single cell analysis, integrating with... | None | WcjRp2t7TEuy | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
| 1LCd8kco9lZUz8 | Bird's eye view | birds-eye | 1LCd8kco9lZU | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
We can also view all recent additions to the entire database:
ln.view()
File
| id | storage_id | key | suffix | accessor | description | version | initial_version_id | size | hash | hash_type | transform_id | run_id | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 9JJTJD1CfBfb4JVCl1W9 | IWuJyZVy | figures/matrixplot_fig2_score-wgs-hits-per-clu... | .png | None | None | None | None | 28814 | JYIPcat0YWYVCX3RVd3mww | md5 | WcjRp2t7TEuyz8 | 04IPweATnqEZroiklIYq | 2023-08-16 21:59:30 | bKeW4T6E |
| HHgQhiBtpBL5axw1tQj2 | IWuJyZVy | figures/umap_fig1_score-wgs-hits.png | .png | None | None | None | None | 118999 | laQjVk4gh70YFzaUyzbUNg | md5 | WcjRp2t7TEuyz8 | 04IPweATnqEZroiklIYq | 2023-08-16 21:59:29 | bKeW4T6E |
| VuUcmkNkygNaQAJcC59i | IWuJyZVy | None | .parquet | DataFrame | hits from schmidt22 crispra GWS | None | None | 18368 | yw5f-kMLJhaNhdEF-lhxOQ | md5 | id9YimYt3pIRz8 | dRLETvJddC64NtIPkIdq | 2023-08-16 21:59:28 | bKeW4T6E |
| SOSzRwtqj9gnhMF7F0Fq | IWuJyZVy | schmidt22-crispra-gws-IFNG.csv | .csv | None | Raw data of schmidt22 crispra GWS | None | None | 1729685 | cUSH0oQ2w-WccO8_ViKRAQ | md5 | 6LHjNUWqkCa8z8 | misq3GBByruI8WTGqOvJ | 2023-08-16 21:59:26 | DzTjkKse |
| r0qE8pTcEoF7KQubMSdP | IWuJyZVy | schmidt22_perturbseq.h5ad | .h5ad | AnnData | perturbseq counts | None | None | 20659936 | la7EvqEUMDlug9-rpw-udA | md5 | GlOTy7VbChAJ0b | M0L3ec0jJjkt9O0u83Cx | 2023-08-16 21:59:25 | bKeW4T6E |
| ZOL5CJlojxzMw2TEJGjA | IWuJyZVy | perturbseq/filtered_feature_bc_matrix/barcodes... | .tsv.gz | None | None | None | None | 6 | 4c9GlDhpkQHxUfapmIfHZA | md5 | YL2Qjuqj3p5PsM | QJqwvGBQVMXPvvbYsl2W | 2023-08-16 21:59:23 | bKeW4T6E |
| qlhwSf74ngeLURGwQ9p6 | IWuJyZVy | perturbseq/filtered_feature_bc_matrix/matrix.m... | .mtx.gz | None | None | None | None | 6 | h5C1MAwIZs8TohL2IiTQLw | md5 | YL2Qjuqj3p5PsM | QJqwvGBQVMXPvvbYsl2W | 2023-08-16 21:59:23 | bKeW4T6E |
| F4MS8ehlJiBGlFRljNPl | IWuJyZVy | perturbseq/filtered_feature_bc_matrix/features... | .tsv.gz | None | None | None | None | 6 | fLIi4pGVmfs4RNdVj4ZQfw | md5 | YL2Qjuqj3p5PsM | QJqwvGBQVMXPvvbYsl2W | 2023-08-16 21:59:23 | bKeW4T6E |
| CqvJ44vzSBWQVSdXIdcX | IWuJyZVy | fastq/perturbseq_R2_001.fastq.gz | .fastq.gz | None | None | None | None | 6 | 8uh99YnFmsM1K32v2DZlwA | md5 | o8CUeNo7nkprz8 | 7jOHdO6bziRTcnlUK7qX | 2023-08-16 21:59:22 | DzTjkKse |
| q3cJ3SSDwMFEnMjjlcB3 | IWuJyZVy | fastq/perturbseq_R1_001.fastq.gz | .fastq.gz | None | None | None | None | 6 | Hob7PrSo6P4cTrzDAZlcpQ | md5 | o8CUeNo7nkprz8 | 7jOHdO6bziRTcnlUK7qX | 2023-08-16 21:59:22 | DzTjkKse |
Run
| id | transform_id | run_at | created_by_id | reference | reference_type |
|---|---|---|---|---|---|
| 7jOHdO6bziRTcnlUK7qX | o8CUeNo7nkprz8 | 2023-08-16 21:59:22 | DzTjkKse | None | None |
| QJqwvGBQVMXPvvbYsl2W | YL2Qjuqj3p5PsM | 2023-08-16 21:59:23 | bKeW4T6E | None | None |
| M0L3ec0jJjkt9O0u83Cx | GlOTy7VbChAJ0b | 2023-08-16 21:59:24 | bKeW4T6E | None | None |
| misq3GBByruI8WTGqOvJ | 6LHjNUWqkCa8z8 | 2023-08-16 21:59:26 | DzTjkKse | None | None |
| dRLETvJddC64NtIPkIdq | id9YimYt3pIRz8 | 2023-08-16 21:59:28 | bKeW4T6E | None | None |
| 04IPweATnqEZroiklIYq | WcjRp2t7TEuyz8 | 2023-08-16 21:59:28 | bKeW4T6E | None | None |
| iIlHVKcd9Ii3UrKomCV7 | 1LCd8kco9lZUz8 | 2023-08-16 21:59:30 | bKeW4T6E | None | None |
Storage
| id | root | type | region | updated_at | created_by_id |
|---|---|---|---|---|---|
| IWuJyZVy | /home/runner/work/lamin-usecases/lamin-usecase... | local | None | 2023-08-16 21:59:21 | DzTjkKse |
Transform
| id | name | short_name | stem_id | version | type | reference | updated_at | created_by_id |
|---|---|---|---|---|---|---|---|---|
| 1LCd8kco9lZUz8 | Bird's eye view | birds-eye | 1LCd8kco9lZU | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
| WcjRp2t7TEuyz8 | Perform single cell analysis, integrating with... | None | WcjRp2t7TEuy | 0 | notebook | None | 2023-08-16 21:59:30 | bKeW4T6E |
| id9YimYt3pIRz8 | GWS CRIPSRa analysis | None | id9YimYt3pIR | 0 | notebook | None | 2023-08-16 21:59:28 | bKeW4T6E |
| 6LHjNUWqkCa8z8 | Upload GWS CRISPRa result | None | 6LHjNUWqkCa8 | 0 | app | None | 2023-08-16 21:59:26 | DzTjkKse |
| GlOTy7VbChAJ0b | Preprocess Cell Ranger outputs | None | GlOTy7VbChAJ | 2.0 | pipeline | None | 2023-08-16 21:59:25 | bKeW4T6E |
| YL2Qjuqj3p5PsM | Cell Ranger | None | YL2Qjuqj3p5P | 7.2.0 | pipeline | None | 2023-08-16 21:59:23 | bKeW4T6E |
| o8CUeNo7nkprz8 | Chromium 10x upload | None | o8CUeNo7nkpr | 0 | pipeline | None | 2023-08-16 21:59:22 | DzTjkKse |
User
| id | handle | email | name | updated_at |
|---|---|---|---|---|
| bKeW4T6E | testuser2 | testuser2@lamin.ai | Test User2 | 2023-08-16 21:59:28 |
| DzTjkKse | testuser1 | testuser1@lamin.ai | Test User1 | 2023-08-16 21:59:26 |
!lamin login testuser1
!lamin delete --force mydata
!rm -r ./mydata
✅ logged in with email testuser1@lamin.ai and id DzTjkKse
💡 deleting instance testuser1/mydata
✅ deleted instance settings file: /home/runner/.lamin/instance--testuser1--mydata.env
✅ instance cache deleted
✅ deleted '.lndb' sqlite file
🔶 consider manually deleting your stored data: /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata